There's a common theme I hear in discussions of AI these days: AI is just a tool.
Sometimes, this is a headline, sometimes a slide, but the message is the same and well-intentioned: let's not be afraid of AI; after all, it's just a tool.
But the assertion oversimplifies a much more complex reality: like many tools before it, from the stone axe to the printing press, AI will reshape us.
And no, AI is not just an agent
Yuval Noah Harari takes a different view, saying in an interview:
I think the most important thing to understand about AI is that AI is not a tool: it is an agent. It's the first technology in history that can make decisions and invent new ideas by itself ...
That's a reasonable observation, but by seeing AI as an agent, Harari still emphasizes its autonomy, framing it as something external to us: a separate entity with its own agency. I think he's missing the point.
The philosopher Martin Heidegger offers a more useful starting point. He often used the example of a very obvious tool: the hammer. When hammering a nail, you're not typically thinking about the hammer itself - its weight, shape, or feel in your hand. Your focus is on the nail and the task of driving it into the wood. The hammer, in a sense, becomes an extension of your arm. For Heidegger, this is Zuhandenheit: the tool is ready-to-hand, integrated into your activity in a way that feels natural and intuitive.
However, Heidegger also observes what happens when the tool malfunctions or breaks: it shifts into Vorhandenheit - presence-at-hand. In that moment, the tool enters conscious awareness as an object to be analyzed or fixed.
We can usefully think of AI this way. Like a hammer becoming an extension of the arm, AI can become an extension of our thinking and emotional processing. When AI functions seamlessly, we are barely aware of using it at all.
This is already happening for me. When I browse the web in macOS Sequoia and come across a detailed web page, I simply ask Safari Reader for a summary. I do so almost without thinking about it: in just a few weeks, the AI feature has become a natural part of my browsing, ready-to-hand. But when it doesn't work as expected, summarizing incoherently or not at all, it suddenly becomes present-at-hand, shifting from an extension of my mind to an object of concern.
This is a useful way of considering AI, I think. But still, I'm restless: I want to move beyond both Harari and Heidegger.
Where does the mind end?
In 1998, the philosophers Andy Clark and David Chalmers proposed that our cognitive processes, not just our actions, often extend beyond the boundaries of our brains, interacting with external tools and the environment. They start with the compelling question, "Where does the mind stop and the rest of the world begin?"
They say that sometimes the things around us - our phones, notebooks, or even the layout of a room - become part of how we think. It's not just that we use these things; they actually become part of our thinking process.
So, while Heidegger focuses on how we use tools, Clark and Chalmers suggest that sometimes, the world around us becomes part of our mind itself.
Consider an everyday example: when doing involved arithmetic, we often reach for pen and paper to write out the calculation. We know how to calculate, but beyond a certain complexity we can't do it without the cognitive extension of pen and paper.
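To make that concrete, take a multiplication such as 347 × 29. Most of us work it on paper as partial products:

$$
347 \times 29 \;=\; \underbrace{347 \times 20}_{6940} \;+\; \underbrace{347 \times 9}_{3123} \;=\; 10063
$$

The intermediate results, 6940 and 3123, are held on the page, not in the head; the paper is doing part of working memory's job.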
That's straightforward. But pen and paper afford another, even more powerful, extension to our cognition. When we write a note, whether it's a grocery list, a to-do list, or a detailed journal entry, we are extending our memory. In this process, the mind doesn't simply offload information to paper; it creates pointers in internal memory that link to those external notes. When we need to recall a piece of information, the mind doesn't just search within the brain; it remembers where to find the relevant note. This interplay between internal memory and external storage treats the notebook not as a tool but as a cognitive extension.
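If it helps to see the mechanism spelled out, here is a toy sketch in Python, with invented names, of the pointer-and-store interplay just described: internal memory keeps only a pointer to where a note lives, while the external store holds the content.

```python
# Toy model of the extended-memory interplay described above.
# The names and structure are illustrative, not a cognitive model.

# The world: notebooks, apps, sticky notes, holding the actual content.
external_store = {
    "kitchen notebook, p. 3": ["eggs", "coffee", "flour"],
    "phone reminders": ["renew passport", "call the dentist"],
}

# The head: lightweight pointers to *where* things are, not the things.
internal_memory = {
    "groceries": "kitchen notebook, p. 3",
    "errands": "phone reminders",
}

def recall(topic: str) -> list[str]:
    """Recall = follow the internal pointer, then read the external note."""
    location = internal_memory[topic]  # the mind remembers where to look
    return external_store[location]   # the world supplies the content

print(recall("groceries"))  # ['eggs', 'coffee', 'flour']
```

The point of the sketch is only that recall succeeds even though the content never lives in internal_memory; lose the pointer or the notebook and the "memory" fails, which is exactly why Clark and Chalmers count the notebook as part of the cognitive system.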
Cognition as Environmental Integration
If I have a favourite living philosopher, it would be Alva Noë, whose work constantly fascinates me. He emphasizes that cognitive abilities such as navigating and knowing are not solely internal resources. Instead, we rely heavily on skillful familiarity with, and integration into, the world.
When navigating a complex environment like New York City, we don't have a complete, memorized map in our heads. (London cabbies do for London, but that's a very different and extraordinary story.) We use external markers like street signs, landmarks, and maps to navigate. For Noë, knowing how to get around isn’t an internal process but a cognitive ability rooted in our environment. The world’s cues and markers become part of our thinking and knowledge.
So, when we say we know something, like the time of day, it's not always about possessing the information internally. Instead, it's about having quick, reliable access to it: a glance at a watch or a phone, each an external part of our cognitive system.
You probably use Google in a very similar way when browsing the web, engaging in what Peter Pirolli calls Information Foraging.
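Pirolli and Card formalize this with a patch model borrowed from optimal foraging theory. In its conventional form, the forager, whether hunting berries or search results, maximizes the overall rate of gain

$$
R = \frac{G}{T_B + T_W},
$$

where G is the total information gained, T_B the time spent moving between patches (sites, pages), and T_W the time spent within them. We leave a page when its marginal rate of gain drops below R, just as a forager abandons a patch that has stopped paying off.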
AI as an Epistemic Partner
When we think of AI merely as a tool, we imagine it as something separate from us, a machine we use to perform tasks. But if Clark and Chalmers are right, there is no strict boundary between what is inside the mind and what is outside it.
When we consider AI as an agent, as Harari does, we view it as an independent entity capable of action. That is one way of seeing it, but for me, not the compelling one.
AI can be more like the notebook or landmarks we use for navigation. AI can become part of our cognitive system. It interacts with us in ways that shape our thoughts, facilitate self-reflection, and even alter our understanding of the world.
As we grow more accustomed to AI, we will stop focusing on the AI itself and the ways it assists us; instead, we'll be directly engaged with the task or thought at hand, with AI not as a tool to be used but as an extension of the mind, blurring the line between what is internal and what is external.
AI is not just a tool. AI is not just an agent. AI is an extension of us.