In February 2026, three essays published within a few weeks of each other addressed the same phenomenon: artificial intelligence had crossed a qualitative threshold. These were no longer merely tools assisting human cognitive work, but systems operating with increasing autonomy and architectural coherence and, above all, with a recursive ability to contribute to their own improvement.
Matt Shumer (2026) observes the acceleration from a technological perspective. Dario Amodei (2026), CEO of Anthropic, analyses the systemic implications of the expansion of artificial intelligence. Andrea Pignataro (2026), founder of ION Group, interprets the recent volatility of enterprise software markets as a sign of structural transformation.
Taken together, the three interventions do not represent divergent interpretations, but different levels of the same process: technological capacity, governance and economic architecture are evolving asynchronously.
The facts: from recursive capability to the centralisation of value
Shumer starts from an operational fact: frontier models are no longer limited to generating plausible outputs. They demonstrate structured reasoning, systemic consistency and sensitivity to context. The critical element is recursive capability: AI contributes to the design and optimisation of subsequent versions.
When a technology actively participates in its own improvement cycle, the speed of iteration increases and progress tends to compound exponentially. For businesses based on structured cognitive work, this implies a potentially faster erosion of traditional defensive barriers.
Amodei shifts the focus to another variable: governance capacity is not growing at the same pace as technical capacity. While artificial intelligence is scaling up in power and autonomy, institutional control mechanisms – regulation, safety standards, international coordination – operate on political and bureaucratic timelines. The risk is not only technical, but systemic: when cycles of technological reinforcement outpace those of institutional balance, the likelihood of instability increases.
Pignataro, on the other hand, observes economic architecture. Enterprise software is not just task automation, but coordination infrastructure: it codifies permissions, workflows, compliance and organisational grammar. The widespread integration of AI is gradually exposing this ‘institutional grammar’ to foundational models. Over time, sectoral knowledge tends to migrate to the infrastructure level, compressing the differentiation between operators and concentrating value in platforms.
The strategic context: three subsystems, one dynamic
From the perspective of the innovation ecosystem, the three contributions describe interconnected subsystems.
In the technological subsystem, recursive capacity fuels reinforcement cycles: greater capacity → greater research productivity → faster iteration → further capacity increase. This mechanism encourages adoption by businesses and increases competitive pressure.
In the economic subsystem, adoption increases the exposure of business processes to general models. The accumulation of structural knowledge in platforms gradually reduces sector specificity and promotes the concentration of capital upstream.
In the institutional subsystem, increased concentration and technological autonomy heighten the perception of systemic risk. The response is to strengthen governance measures, which, however, operate with significant time lags.
The tension arises from the asymmetry of speed: the dynamics of technological reinforcement and economic concentration move at digital speed; institutional balancing mechanisms move at political speed.
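This speed asymmetry can be sketched as a toy model: one quantity compounds each cycle (technological reinforcement feeding on itself) while the other grows by a fixed increment (institutional processes moving at a constant pace). The growth rates are hypothetical, chosen only to illustrate the compounding-versus-linear contrast, and are not drawn from any of the three essays.

```python
# Illustrative toy model (hypothetical rates, not empirical estimates):
# "tech" capacity compounds per cycle; "governance" capacity grows linearly.

def simulate(steps=10, tech_rate=0.5, gov_step=0.3):
    """Return (tech, governance) capacity trajectories over `steps` cycles.

    tech_rate: fraction of current capacity added each cycle
               (recursive improvement compounds on itself).
    gov_step:  fixed capacity added each cycle
               (institutional mechanisms move at a constant pace).
    """
    tech, gov = 1.0, 1.0
    tech_path, gov_path = [tech], [gov]
    for _ in range(steps):
        tech += tech_rate * tech  # compounding: growth feeds on prior growth
        gov += gov_step           # linear: a fixed increment per cycle
        tech_path.append(tech)
        gov_path.append(gov)
    return tech_path, gov_path

tech, gov = simulate()
gaps = [t - g for t, g in zip(tech, gov)]
# Even though both trajectories rise, the gap between them widens every cycle.
```

The point of the sketch is structural rather than numerical: raising `gov_step` delays the divergence but cannot close it, because only changing the functional form of the balancing response (making it, too, scale with capacity) keeps the two curves in step.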
Implications for businesses, investors and policy makers
For businesses, the integration of AI cannot be considered solely as a lever for productivity. Every decision to adopt it affects the long-term competitive structure. Growing exposure to foundational infrastructure requires reflection on technological dependence, data control and maintaining differentiation.
For investors, the key issue concerns the location of competitive advantage. If institutional coordination tends to migrate towards the platform level, the resilience of valuations will depend on the ability to maintain control over distinctive assets that cannot be easily absorbed by general models.
For policy makers, the key issue is timing. In systems characterised by rapid feedback cycles, delays in activating balancing mechanisms can produce overshoot before stabilisation occurs. Regulation must not only limit risk, but also scale in step with technical capability.
A structural threshold, not just a technological one
The three authors agree on one implicit point: 2026 represents a threshold that is not exclusively technical but systemic.
AI capabilities continue to grow. Adoption accelerates. Value tends to concentrate. Governance attempts to absorb the shock. The outcome will depend on the balance between reinforcement cycles and balancing cycles.
The question is no longer whether artificial intelligence can perform complex cognitive tasks. The question is whether institutional, economic and regulatory architectures designed for a human-speed context will be able to adapt to a cognitive infrastructure that evolves at machine speed.
The difference between a managed transition and a structural breakdown will depend on the ability to develop governance and coordination at a pace comparable to that of technology.
References
Amodei, D. (2026, January). The adolescence of technology: Confronting and overcoming the risks of powerful AI. https://www.darioamodei.com/essay/the-adolescence-of-technology
Pignataro, A. (2026, February 17). The wrong apocalypse. ION Analytics. https://ionanalytics.com/insights/mergermarket/the-wrong-apocalypse-op-ed/
Shumer, M. (2026, February 9). Something big is happening. https://shumer.dev/something-big-is-happening