In my last post, AI Fast and Slow, I explored how AI adoption moves along different timelines (corporate, systemic, and personal), each with its own rhythm, pressures, and shocks. Each timeline creates different experiences and expectations of what AI represents.
I don't feel any temptation to resolve those differences into a unified, defined view of what AI really is or how it is progressing. If my earlier post was about recognising multiple timelines, this one is about how to inhabit them, often simultaneously in our different roles, holding them in tension.
Partly, I'm concerned that the current discourse around AI is being shaped primarily by corporate interests and the political system rather than by our accumulated personal experience. Perhaps even more so, the discourse is shaped by the creators of AI and reflects their priorities. If those creators have a narrow understanding of intelligence - and I suspect they do - their technology and the discourse around that technology will reinforce that narrowness.
The temptation to fix intelligence
When facing rapid and uneven change, there is often a strong urge to settle the matter with a definition: a well-marked boundary around where supposedly true intelligence lies, human or machine. And there is a strong urge to own that definition, too.
Corporate strategists want a roadmap to advantage; educators want to define what constitutes real learning; technologists want to declare the threshold of Artificial General Intelligence; artists want to hold on to real art; the most humane long for real, rather than artificial, empathy.
The danger is not simply that we might define AI incorrectly or differently, but that these tidy definitions can each harden into dogma. They offer comfort by closing off the complex, unknown and emerging, but in doing so, they blind us to how intelligence, and our relationship to it, keeps evolving.
So, how can we hold our definitions of intelligence less tightly, resisting the urge to lock them in place, and remain open to being reshaped by what we do not yet fully understand?
Across all three timelines, we are in that same situation: we may be using, or have built, things we don’t know well enough to define or definitively respond to.
Uncertainty helps
Holding multiple views of AI, as both the technology and our use of it develop, is certainly not easy. It requires an uncomfortable (and uncomforting) humility of mind: recognising the uncertainty of what we know.
This is entirely natural with a rapidly innovating technology. Back in 2004, David Lane and Robert Maxfield of the Santa Fe Institute wrote about Ontological Uncertainty and Innovation.
Lane and Maxfield's research suggests that innovation involves three types of uncertainty, but most businesses focus on only one. Companies typically worry about truth uncertainty (will this claim prove right or wrong?), yet the deeper challenges lie in semantic uncertainty (not knowing what something means) and ontological uncertainty (not knowing what kinds of entities or opportunities exist in the first place). You can see the relevance to our different timelines of AI.
The key insight for business leaders, but also for personal users and even politicians, is that instead of trying to eliminate all uncertainty, successful innovators learn to work productively with it. Innovation often happens through relationships and partnerships where nobody fully understands what they're creating together.
This is the humility I referred to. But I want to distinguish uncertainty from mere scepticism or relativism. Humility in thinking comes not from doubting everything, but from remaining open: being actively receptive to what we don't know.
Lane and Maxfield describe companies that thrive as those that build scaffolding structures that help them navigate unknown territory while staying open to discovering entirely new categories of value.
So, I'm suggesting here that, rather than trying to resolve the multiple experiences of AI - its intelligence, understanding, empathy, or even consciousness - into a single, manageable definition, we may do better to live with these highly contrasting views.
Social intelligence
This matters because intelligence, human or artificial, emerges in a relationship. It’s shaped by interaction and shared context. So, the intelligence of AI may well be quite different in different settings.
Perhaps each timeline calls for its own tentative definition of intelligence. Even the same underlying system seems to manifest differently in different settings: when I interact with models in research contexts, for instance, compared with when someone uses them for creative writing.
In AI, there is a democratizing force at play: the tools enable people who might otherwise be excluded from cultural or economic life to participate in new ways. Even if the intelligence is instrumental, it creates space for more voices.
Still, technological and political narratives insist we pick a side on AI’s value, while institutional incentives reward quick wins and visible novelty over deep work. We live in an impatient culture that undervalues the slow processes of wisdom; the narratives of corporate and political power tend to flatten intelligence into utility.
We can resist that. Each timeline (personal, corporate, systemic) can provide different kinds of scaffolding for uncertainty. Personally, it might mean cultivating patience and curiosity in how we use AI in daily life. Corporately, it may involve resisting the pressure to define fixed roadmaps too early, instead allowing room for exploration and partnership. Systemically, it means creating policies and institutions that leave space for diverse experiences of intelligence rather than collapsing them into rigid regulations.
The scaffolding we build today, in classrooms, workplaces, and policy, should hold space for discoveries we cannot predict. Intelligence, after all, has always been a moving target.