I feel like I have recently been reading quite contradictory observations about the pace of AI development and adoption. We have the tech CEOs claiming we're on the verge of artificial general intelligence, but economists see slower AI adoption, or at least slower changes in productivity as a result. And yet we are all meeting individuals, friends and relatives, who experience quite profound shocks, a crisis even, in their work and personal experiences of AI.
There are, I suspect, some deeper conceptual problems which are masked by this framework of talking about AI adoption as a unified phenomenon: a single timeline of development.
Surely AI is developing in different contexts - in different strata of our economic culture, if you like - moving at different speeds in each and producing different kinds of shocks. There is no single take-off curve. I want to look at three of these braided timelines: corporate, systemic and personal.
Corporate time
Companies face a brutal temporal reality: the next few years will determine who controls the advantages in artificial intelligence and will set the scene for how those advantages compound.
Google's two decades of search queries create training datasets no competitor can replicate overnight. Tesla's million-vehicle fleet generates real-world driving data that traditional automakers, despite their engineering prowess, cannot match. Microsoft's integration of AI into Office tools locks millions of users into workflows that become stickier, harder to abandon, with each passing month of use, even if no one really thinks they represent best practice.
These advantages require enormous upfront investment but become self-reinforcing once established. Speed matters because first-mover advantages in AI may prove durable even when the underlying technology remains brittle.
In addition to any technical advantage, the narrative becomes an asset in itself. Markets and partners mobilize around believable futures; over-belief eventually corrects, but meanwhile, it fuels hiring, partnerships, and spectacular valuations.
That's a dangerous game, though. When narrative diverges too far from reality, you get market corrections, as we have already seen with some AI stocks. The challenge is that AI systems do work: they're just not as general or reliable as the hype suggests. This partial success makes it harder to calibrate expectations.
Still, the corporate timeline progresses in quarters and fiscal years, and AI vendors act as if new capabilities will arrive tomorrow, regardless of whether broader economic statistics reflect that transformation.
Systemic time
By the system, I mean our economic culture at large: institutions and governance, from public-sector bodies and regulators to courts and standards agencies; infrastructure and physical constraints, such as energy grids, broadband, data centers, GPUs, and supply chains; and economic structures and sectors, along with social and organizational practices, including workplace cultures.
All of these cultural factors together determine how tools spread across industries, geographies, and communities.
This systemic layer explains why new AI capabilities don't instantly translate into productivity growth. The economy's slowest-changing sectors (healthcare, education, government) hold that growth back by their sheer scale. More sophisticated AI doesn't automatically address those human and structural limitations.
This systemic rhythm operates on timescales of years, maybe even decades, but the friction is not merely obstruction; it is how a complex society absorbs a new general-purpose technology without tearing itself apart.
Personal time
If you have used AI intensively on a project in your specialism, you may well have had that sudden, uncomfortable realization that it may be better than you. For many of us, this is where the real action is.
I have seen AI synthesize hours of user research transcripts, not only returning results in minutes, but with some genuinely insightful findings that I could have missed. A Swedish friend, an economic historian with over 40 years of research experience, feels he may have very little to contribute from now on, except guiding an AI to the right questions.
This realization comes quickly and can change your practices - in research, coding or writing - almost overnight. The personal timeline operates in days and weeks: faster than an organization can change and almost instantly compared to the slow churn of economic impact.
The paradox of three timelines
There's something almost Aristotelian about this framework: psyche, praxis and polis. In classical thought, these three domains (or timelines, here) were meant to be integrated: individual character and behavior (psyche), practical action (praxis) and, at the social level, political participation (polis) were understood as mutually reinforcing.
The three AI adoption timelines I describe seem to be pulling them apart. Corporate practice operates independently of any democratic deliberation; individual psychological adaptation happens in isolation, alone with the screen; systemic change proceeds without any sense of direction from either personal values or credible political institutions.
So when we hear disagreements about the speed of AI's development, we are often hearing from observers who focus on different temporal scales: the different layers of our social economy.
If your personal work is displaced overnight, that's a revolutionary disruption; for executives operating on quarterly cycles and year-end strategy reviews, there is definite urgency; but economists studying GDP may see only gradual change.
And perhaps this is why so many feel there is an acute AI crisis. For some, AI systems challenge human capabilities, while for others that's all part of the excitement. But, for everyone, these disruptions are arriving in a context where the traditional sources of meaning (community, purpose, transcendence) have already been weakened. AI becomes both a symptom and an accelerant of a deeper fragmentation.
Timing itself becomes a form of power
So, who gets to decide the pace of AI adoption? Timing itself becomes a form of power: individual willingness to work with AI rather than retreat before it; a corporate ability to marshal commercial resources; the institutional capacity to adapt.
Rather than one curve of AI progress, rapid or gradual, we face at least three interconnected rhythms. Success may depend on understanding which temporal scale governs the decisions before you, then acting with urgency or patience as needed. And it would require humility about our predictions. We don't know which timeline will prove most influential, or how the layers will ultimately interact.
The AI transformation isn't just about technology; it's about learning to navigate uncertainty while preserving what we value. That demands not just technical innovation but wisdom in the classical sense: the ability to live well amid complexity and change.