When we talk about artificial intelligence "learning" or developing "expertise," we reach for human analogies. After all, we understand from our own experience how humans progress in skills. So, why not think of AI as developing in the same way?
This question isn't just academic. As AI systems take on more complex roles in healthcare, engineering, and creative work, understanding how they develop capabilities, and especially how this *differs* from human learning, will be critical to setting realistic expectations.
To explore this, I considered how a human skills model used in designing training and professional development courses might be applied to AI.
The result is this three-part series of posts.
This first post deals with a human skills framework as a model for AI.
The second part will consider the implications of this for AI regulation.
Finally, I will consider how thinking about AI development as skills development will impact our use of AI in the workplace when, as Jensen Huang suggested, IT becomes the HR department for AI agents.
Nursing an idea
In the 1980s, the brothers Stuart and Hubert Dreyfus offered a framework for understanding how humans develop expertise.
Studying everyone from chess players to airline pilots, they identified a consistent progression: learners move from rigidly following rules to developing nuanced, context-based judgment and finally to making rapid, intuitive decisions based on deep experience.
The Dreyfus model argues that true expertise depends on unconscious, intuitive skills that can’t be fully captured by explicit rules. Subsequently, the model has been applied in many contexts. One of the best-known and most persistent has been the model of Clinical Competence for nurses developed by Dr Patricia Benner, which lays out the following stages:
Novice: Nursing students in early training exhibit limited, inflexible clinical behavior. Unable to predict outcomes, they only recognize signs (e.g., mental status changes) after repeated exposure.
Advanced Beginner: New graduates recognize recurring clinical patterns but lack depth. They apply knowledge contextually but struggle with nuanced decision-making.
Competent: Nurses at this stage demonstrate organizational skills and partial mastery. They plan ahead and anticipate situations better than advanced beginners, but still lack the speed and adaptability of the proficient nurse.
Proficient: Nurses holistically assess situations, adapt plans based on experience, and anticipate typical outcomes.
Expert: Nurses intuitively grasp situations, prioritizing critical issues. They rely on deep experience rather than rules, using analysis only for unfamiliar or unexpected scenarios.
The pattern is simple enough: beginners follow task lists and rules; experts integrate tasks into a holistic view based on acquired experience. There is therefore a development from abstract concepts to concrete experiences.
Although this approach has its critics in human education (for example, it takes very little account of diverse learning styles), it's tempting to apply it to AI systems, especially as early models of automation and machine learning do indeed start off as rules-based systems. We might construct something like this (with a rough code sketch after the list to make the first two stages concrete).
Novice AI: Rule-Based Systems which follow strict, predefined rules and require clear, structured input. Older "Expert Systems" are at this stage.
Advanced Beginner AI: Statistical learning models which use pattern recognition but still rely heavily on predefined data sets. They can improve with exposure to more data but lack contextual awareness. Siri? Or perhaps recommendation engines from Netflix or Amazon.
Competent AI: Context-Aware Assistants which can generalize learning from past data and apply it in new situations. Examples might include some of the AI-powered customer service bots that adjust responses based on past interactions or AI-based fraud detection that adapts to new financial crime patterns.
Proficient AI: Adaptive, multi-modal models that can handle unstructured, ambiguous input and adjust responses based on context. These models may show reasoning abilities rather than just pattern matching. We're there already with GPT-4, Claude Sonnet or DeepSeek R1. However, more domain-specific examples are also emerging in the form of Copilots or application assistants.
Expert AI: Intuitive, self-improving AI or even AGI. We are told it is coming soon. This will no longer rely on predefined datasets but instead will learn continuously from real-time interactions and be able to operate in novel, unforeseen situations without retraining.
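To make the gap between the first two stages concrete, here is a minimal sketch in Python. It is purely illustrative: the triage rules, the thresholds and the handful of past cases are all invented for the example, and real systems at either stage would be far more elaborate. A Novice-stage system follows hand-written if-then rules and is brittle outside them; an Advanced-Beginner-stage system copies whatever pattern it can find in the data it has been given.

```python
# Purely illustrative: a toy contrast between a rule-based "novice" system
# and a data-driven "advanced beginner". Rules, thresholds and data are invented.

def novice_triage(temp_c: float, heart_rate: int) -> str:
    """Stage 1: fixed, hand-written rules. Consistent, but brittle outside
    the conditions the rule author anticipated."""
    if temp_c >= 39.0 and heart_rate >= 110:
        return "urgent"
    if temp_c >= 38.0:
        return "review"
    return "routine"


def advanced_beginner_triage(temp_c: float, heart_rate: int,
                             past_cases: list[tuple[float, int, str]]) -> str:
    """Stage 2: a crude nearest-neighbour 'model' that copies the label of
    the most similar past case. It picks up a pattern from data, but has no
    understanding beyond the examples it has seen."""
    def distance(case: tuple[float, int, str]) -> float:
        past_temp, past_hr, _ = case
        return abs(temp_c - past_temp) + abs(heart_rate - past_hr) / 20
    return min(past_cases, key=distance)[2]


if __name__ == "__main__":
    history = [
        (36.8, 72, "routine"),
        (38.2, 95, "review"),
        (39.5, 120, "urgent"),
    ]
    print(novice_triage(39.6, 118))                      # urgent
    print(advanced_beginner_triage(39.1, 112, history))  # urgent, copied from the nearest past case
```

The rule-based version cannot cope with a case its author never imagined; the nearest-neighbour version can respond to anything, but only by pattern-matching against the cases it has already been fed, which is precisely the point about data rather than experience made below.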
I could be tempted to leave it there. AI is indeed evolving from rigid, rule-based systems to more intuitive, context-aware assistants, moving from strict algorithms to flexible, contextual decision-making. The Dreyfus Model of Skill Acquisition, or Benner's Stages of Clinical Competence, perhaps provides a framework for understanding this evolution. AI may never get a corner office, but we can see how it might develop a “career” of its own.
But ...
Hubert Dreyfus, one of the originators of this model, was an early and vehement critic of AI. His 1965 RAND paper Alchemy and Artificial Intelligence and his 1972 book What Computers Can’t Do make an important point: much human learning and intelligence is subconscious and embodied, and therefore cannot be captured by the rules-based reasoning of AI.
There is still much debate about Dreyfus’ critique, but we can see that AI systems develop “skills” in a very different way from humans. The Benner model, for example, is not based on abstract learning but on the move from abstract to concrete experiences. The nurse learns by doing; there's a muscle memory in their behaviour, an embodied experience which is a critical component of their skill.
In contrast, AI does not practice skills the way humans do; even the most advanced AI models remain pattern-matching engines without real-world grounding.
For sure, in the early stages, AI behaves like a rule follower, operating with consistency but without flexibility; if conditions fall outside the scripted rules, it fails. But AI doesn’t learn from experience in the human sense; it learns from data. Data is the fuel for abstract AI skill acquisition.
So, while we often use human terms like “learning” or “expertise” for AI, there are fundamental differences between AI’s skill acquisition and a human’s.
An interesting facet of human learning is that it is linear, gradual and cumulative. We get better over time and, although our pace of learning and our ability to intuit new insights vary widely, the pattern of gradual, linear accumulation is in essence the same for all of us.
The development of AI is notably non-linear: an algorithm might show modest improvement with more data until a tipping point suddenly enables abilities that were unreachable before. We all saw this with large language models that showed emergent capabilities once they reached a certain scale, suddenly performing tasks (like coherent paragraph generation or code writing) that smaller models simply could not.
So AI development is marked by step changes: plateau periods and then rapid advances when a crucial threshold is crossed. This makes AI skill development unpredictable; a system might go from subpar to superhuman in a narrow task with a relatively small tweak or a bit more training, unlike a human who cannot jump from novice to master overnight.
Understanding these differences may help us set realistic expectations. Comparing AI to a human expert is tempting, but one must remember that an AI’s “expertise” is brittle. It may calculate faster or find hidden patterns, yet it will also make mistakes no human would because it lacks understanding, context, and embodied common sense.
A Staged Framework for AI Capability Evolution
Nevertheless, we can still describe AI’s growth in distinct stages, each characterized by greater adaptability, complexity and autonomy.
Rule-Based Automation – AI performs tasks with fixed if-then rules and no learning (e.g., basic scripts or expert systems).
Context-Aware Systems – AI incorporates some context or environmental input into decision-making, adjusting predefined rules based on situations (early chatbots that use user data or location).
Adaptive Learning AI – AI uses machine learning to improve over time, learning from new data and feedback to refine its performance (models that personalize recommendations or learn from user interactions).
Goal-Driven Autonomy – AI agents operate with high independence, setting and executing steps to achieve objectives without step-by-step human guidance (e.g., an AI that plans and acts to meet a complex goal).
Fully Autonomous AI – The most advanced AI is capable of self-learning in real time and handling unforeseen scenarios with minimal human oversight. (Mostly aspirational or limited to narrow domains today.)
Here's a chart of how this plays out, showing adaptability, autonomy, and complexity rising as AI advances, and emphasizing the increasing capability and sophistication of the system at each stage:
As we progress through these stages, the autonomy and adaptability of the AI increase, but so do the technical complexity and the need to manage limitations.
By Stage 4, AI systems achieve a degree of autonomous decision-making within a confined domain. Here, the AI can handle multi-step decision processes and act with minimal human intervention in routine cases. It’s the closest parallel to a human “expert” in a specific field. The system might use techniques like advanced planning algorithms, deep reinforcement learning with simulations, or large knowledge graphs to inform decisions. Importantly, the AI at this stage can make some contextual judgments: it selects actions not just by following a learned mapping but by evaluating the current situation against its learned experience and goals.
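As a rough sketch of the shape of Stage 4, here is a toy goal-driven loop in Python. Everything in it is invented for illustration: the "backlog" state, the two actions and the scoring heuristic stand in for the planning algorithms, reinforcement learning or knowledge graphs a real system would use. The point is only that the plan is not scripted in advance; the agent evaluates the current situation against its goal at every step and chooses its next action accordingly.

```python
# Purely illustrative: the shape of a Stage 4 goal-driven loop.
# The state, the actions and the scoring heuristic are invented for the example.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Action:
    name: str
    effect: Callable[[dict], dict]  # how applying the action changes the state
    cost: float


def goal_driven_agent(state: dict, goal: Callable[[dict], bool],
                      actions: list[Action], max_steps: int = 10) -> list[str]:
    """Repeatedly pick the action that best moves the state towards the goal,
    apply it, and re-evaluate. There is no fixed script; the plan emerges from
    judging the situation at each step."""
    plan: list[str] = []
    for _ in range(max_steps):
        if goal(state):
            break

        def progress(action: Action) -> float:
            # Toy heuristic: how much does this action shrink the backlog,
            # minus a small penalty for its cost?
            new_state = action.effect(dict(state))
            return (state["backlog"] - new_state["backlog"]) - 0.1 * action.cost

        best = max(actions, key=progress)
        state = best.effect(state)
        plan.append(best.name)
    return plan


if __name__ == "__main__":
    actions = [
        Action("assign_extra_staff", lambda s: {**s, "backlog": s["backlog"] - 5}, cost=3.0),
        Action("automate_triage", lambda s: {**s, "backlog": s["backlog"] - 2}, cost=1.0),
    ]
    plan = goal_driven_agent({"backlog": 12}, goal=lambda s: s["backlog"] <= 0, actions=actions)
    print(plan)  # ['assign_extra_staff', 'assign_extra_staff', 'assign_extra_staff']
```

Even in this toy form, you can see why autonomy, rather than internal complexity, is the property an observer notices: the system is choosing and executing actions on its own.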
When people express their doubts or fears about AI's behaviour, they are mostly concerned about autonomy: will the AI have a mind of its own, or its own interests and preferences? Complexity and adaptability are internal properties of AI models, often invisible to users and difficult to quantify in legislation. By contrast, autonomy dictates how much power an AI has over decisions and actions, making it a more practical target for regulation.
So, I expect we will find that AI regulations focus more on autonomy than on other characteristics. In the second post of this three-part series, I'll consider how practical this approach is and its potential unintended consequences.
A footnote on creativity
How did this three-part series on the developmental stages of AI and the potential for regulation come about? Well ...
A friend posted on Facebook some story about the buxom English actress Barbara Windsor, illustrated with a picture of her in a ridiculous nurse's uniform with a caption praising her performance in the film Carry On Nurse.
But I am cursed with a memory for trivia, so I knew that Ms Windsor wasn't in the early black-and-white Carry On Nurse but in the later Carry On Doctor, from which the still was taken.
At that point I realised (and confirmed on YouTube) that the film's title is a pun. To "carry on" in UK English means to make a fuss or a commotion; but also, when the doctors or the matron in the movie have dealt with a patient, they say something like "I am finished here. Carry on, nurse."
You can see where this is going. I was reminded of Dr Benner's Stages of Clinical Competence and, reading up about that model again, I wondered if something similar could be applied to the development of AI competence.
I guess this is called, at a stretch, creativity. It's certainly not something you are likely to see when DeepSeek or ChatGPT describe their reasoning processes. It's not reasonable at all.
All because of Barbara Windsor. Although, personally, I always liked the lovely Shirley Eaton, who was in Carry On Nurse.
Thought provoking, as usual. Thanks, Donald.
AI from peripheral vision
When we think about how learning happens, I am struck by the sense that there are powerful factors at play which do not pertain directly to the subject and object of such learning. For example, I think the questions of values, preferences, desires, incentives, penalties, evaluation, feedback, error correction, and so on are salient if we want to understand how human learning happens. Without addressing these ideas directly, we are left with the profound question "what pressure or forcing functions create an avoidance response to error and a reward-chasing response to expertise/knowledge/truth?" Or "how does one arbitrate between alternative choices that cannot be fully disambiguated -- how much certainty is the right amount for a particular decision?" "Why does it matter to me if my answer is right or my decision defensible?"
Much of the discussion of AI focuses on the mechanics of how a decision is chosen, but I am very interested in the question "why" -- that is, the way the experience of answering the question fits into a larger context of how the answerer is forging a path toward something it values, a path in which right answers and good knowledge are helpful to the answerer, and bad answers have consequences for it as well. This is a powerful factor for kids learning chess or adults learning to be nurses.
This is a sideward glance at the AI questions we are all discussing, but in a dark room sometimes indirect vision registers forms that direct vision cannot yet capture. If we say it does not matter for AI, we should abandon the pretense of analogizing between AI learning and human learning.