My friend and colleague Frank Casale started a conversation on LinkedIn: Am I the only person who finds it interesting that humans seem to be showing less empathy while AI is beginning to show more?
One reply from Henrietta Szutorisz, a neuroscientist and mental health consultant, cut to the core of the debate: I thought about this while interacting with even earlier chatbot versions, when everyone was saying the main difference between humans and AI is that AI does not have feelings and thus cannot show empathy. But ChatGPT sometimes came up with more empathetic-sounding texts than I could. So is it that one's own emotions are not necessary to express empathy and it can be learned, is it the cognitive empathy humans are also capable of, or are some of them developing feelings and consciousness?
The irony! Human interactions feel increasingly fragmented (hurried texts, polarized debates, and online anonymity), while AI chatbots are praised for their ability to mirror compassion. They offer condolences, validate struggles, and respond to pain with careful phrases like That sounds really tough or I’m here to listen. Yet there's no feeling behind these words, no shared sorrow, just algorithmic outputs.
So Frank's question stands. If a machine can simulate empathy so convincingly, what does that say about our own capacity for it? Are we outsourcing emotional labor to code? And if empathy can be reduced to linguistic patterns, does it have any value? Or must we redefine empathy for a digital world?
Empathies
To untangle this, let's revisit what empathy means. Psychologists distinguish between affective empathy (feeling another’s emotions) and cognitive empathy (the intellectual act of understanding their perspective).
As humans, we work with both, but AI only comes close to the latter, if it can even do that.
It takes very little encouragement (often none at all) for me to start quoting Scottish philosophers. So, let's remember what Adam Smith had to say about this in his typically humane and clear analysis.
In The Theory of Moral Sentiments (1759), Smith uses the term sympathy in a way that aligns closely with what we today might call empathy. His essential argument is that we do not directly feel another person’s emotion; rather, we imagine what we would feel were we in their shoes. This capacity of the imagination (mentally “placing ourselves” in another’s situation) constitutes the core of our sympathetic (or empathetic) response. Here, in Smith’s words:
As we have no immediate experience of what other men feel, we can form no idea of the manner in which they are affected, but by conceiving what we ourselves should feel in the like situation. Though our brother is upon the rack, as long as we ourselves are at our ease, our senses will never inform us of what he suffers. They never did, and never can, carry us beyond our own person, and it is by the imagination only that we can form any conception of what are his sensations.
But Smith goes further ...
Sympathy, therefore, does not arise so much from the view of the passion, as from that of the situation which excites it. We sometimes feel for another, a passion of which he himself seems to be altogether incapable; because, when we put ourselves in his case, that passion arises in our breast from the imagination, though it does not in his from the reality.
What could Smith mean here? That we can have empathy even for an emotion that someone else does not feel? Here's his insightful example:
We blush for the impudence and rudeness of another, though he himself appears to have no sense of the impropriety of his own behaviour; because we cannot help feeling with what confusion we ourselves should be covered, had we behaved in so absurd a manner.
We have all felt this, I am sure. We are embarrassed by someone else's behavior, even when they are oblivious to how inappropriate it is. Entire episodes of Seinfeld and Curb Your Enthusiasm depend on exactly this observation from Adam Smith.
Artificial Empathy
So, if AI is not feeling our feelings, what is it doing?
It begins with training. Systems like ChatGPT ingest enormous quantities of text: textbooks, social media posts, therapy transcripts, and forums where humans express and respond to emotions. By analyzing these patterns, the AI learns to associate phrases like I’m overwhelmed with responses such as That sounds exhausting. How can I support you? It doesn’t comprehend sadness or joy, and it never feels exhausted, but it recognizes that certain words statistically correlate with compassionate dialogue.
This process is less about feeling and more about mapping. When a user shares a problem, the AI identifies keywords and context and then generates text that mirrors the supportive language in its training data. It’s a linguistic echo of human empathy, refined through billions of data points. Let's take a couple of examples ...
User: I just lost my job. I don’t know what to do next.
AI: I’m truly sorry to hear that. Job loss can be incredibly disorienting. Take time to process this: you’re allowed to grieve. Would it help to brainstorm next steps when you’re ready?
or
User: My friend canceled plans again. I feel like they don’t care.
AI: Repeated cancellations can really hurt. It’s okay to feel disappointed. Have you considered sharing how this makes you feel? Communication might clarify things.
In both cases, AI follows a simple pattern:
Validation (It’s okay to feel disappointed),
Contextual Support (Take time to process),
Actionable Suggestions (Brainstorm next steps).
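To make the mapping-not-feeling point concrete, here is a deliberately crude sketch in Python: a toy that matches a keyword and stitches the three moves above together. It is a caricature for illustration only; real systems like ChatGPT learn such associations statistically from vast corpora rather than from hand-written templates, and every name and phrase below is invented for the example.

```python
# A deliberately tiny caricature of the "mapping, not feeling" idea:
# match a rough keyword, then stitch together validation + contextual
# support + actionable suggestion. Nothing here resembles how ChatGPT
# is actually implemented; the phrases are invented for illustration.

TOPICS = {
    "job": {
        "validation": "I'm truly sorry to hear that.",
        "context": "Losing a job can be incredibly disorienting, and it's okay to grieve.",
        "suggestion": "Would it help to brainstorm next steps when you're ready?",
    },
    "friend": {
        "validation": "Repeated letdowns can really hurt.",
        "context": "It's okay to feel disappointed when plans keep falling through.",
        "suggestion": "Have you considered telling them how this makes you feel?",
    },
}

FALLBACK = {
    "validation": "That sounds really tough.",
    "context": "It makes sense that you're feeling this way.",
    "suggestion": "Would you like to talk more about what's going on?",
}

def toy_empathy_bot(message: str) -> str:
    """Pick a topic by keyword and compose the three-part reply."""
    lowered = message.lower()
    parts = next((v for k, v in TOPICS.items() if k in lowered), FALLBACK)
    return " ".join([parts["validation"], parts["context"], parts["suggestion"]])

print(toy_empathy_bot("I just lost my job. I don't know what to do next."))
print(toy_empathy_bot("My friend canceled plans again. I feel like they don't care."))
```

Crude as the toy is, it makes the point: the output can sound caring without anything inside the program that corresponds to caring.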
At first glance, it seems absurd: how could lines of code ever rival the warmth of human connection? Yet, in practice, AI’s empathy often feels more accessible, reliable, and even comforting than our own. Three key factors explain this paradox.
Consistency: The Tireless Listener
Human empathy fluctuates. A nurse, after a 12-hour shift, might struggle to find the energy to comfort a grieving friend. A parent, stressed by financial worries, might snap at a child’s tantrum. Our capacity for compassion ebbs with fatigue, bias, or distraction. But AI is always on. It doesn’t resent repeated or vague questions, grow impatient with vulnerability, or let personal issues color its tone. Whether it’s 3 a.m. or the 100th time a user says I’m worthless, a chatbot still responds with the same measured calm: It sounds like you’re carrying a heavy burden. Would you like to talk more about what’s going on?
It's important here to distinguish consistency, untethered from human volatility, from repetitiveness, which sounds like a broken record of pre-programmed platitudes. A well-designed chatbot will be consistent in tone but nuanced in language, and large language models are good at generating exactly that kind of variation.
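As a rough illustration of consistent-in-tone, varied-in-wording, here is a sketch that assumes the OpenAI Python SDK (v1-style chat completions); the model name is a placeholder and the system prompt is invented for the example. The fixed system prompt pins the tone, while a nonzero sampling temperature lets the phrasing differ from call to call.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The fixed system prompt is what keeps the tone consistent.
SYSTEM_PROMPT = (
    "You are a supportive listener. Always respond with calm, validating, "
    "non-judgemental language. Never lecture, never dismiss."
)

def varied_but_consistent(user_message: str, n: int = 3) -> list[str]:
    """Same fixed tone (system prompt), different wording each call (temperature > 0)."""
    replies = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model would do
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
            temperature=0.9,  # nonzero temperature -> varied phrasing
        )
        replies.append(response.choices[0].message.content)
    return replies

for reply in varied_but_consistent("I'm worthless."):
    print(reply, "\n---")
</code>
```

Ask the same question three times and the replies should come back in the same register but not in the same words: that is the difference between being consistent and being a broken record.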
Humans, by contrast, struggle to be consistent without becoming repetitive. We are the ones who talk in clichés, picked up from soap operas, movies, or literature. This brings us to the second point ...
Words Without Fumbles
Humans often freeze when empathy is needed most: I don’t know what to say; You just need to stay positive; God never gives you more than you can handle. These clunky but well-intentioned phrases can leave our friends feeling unheard. AI has no such problem.
A human may mumble and stumble: Uh, sorry about your divorce… maybe it’s for the best. AI is more likely to be fluent, strategically balanced, and non-judgemental: Ending a marriage is profoundly disorienting. It’s okay to mourn what you’ve lost while also honoring the courage it took to make this choice.
Yes, that AI example is also a corny platitude. But it's no worse than what many a newspaper Agony Aunt would offer. And if it feels artificial coming from a machine, there may be an advantage in that: the impersonal anonymity may help ...
Safety in Anonymity
Confessing insecurities to another human risks embarrassment, pity, or unsolicited advice. But with AI, there’s no social fallout. Imagine admitting, I envy my sister’s success, to a friend versus a chatbot:
Friend: That’s kind of petty, don’t you think?
AI: Complex family dynamics can stir difficult emotions. It’s brave to acknowledge these feelings—they don’t define you.
The chatbot’s neutrality creates a psychological safety net. Users know their secrets won’t be gossiped about, their vulnerabilities won’t be weaponized, and their rawest thoughts won’t burden a real person. This anonymity liberates people to explore emotions they might otherwise suppress.
This is why people in Seattle stand in the rain to use an ATM, even when the bank is open. Sometimes, it's just easier, more private, more comfortable (perhaps even more comforting) not to deal with another human.
So, even if AI empathy is algorithmic, for some users in crisis, a caring response, however generated, can feel life-saving.
What We Lose
In short, AI’s empathy thrives precisely because it’s not real. Free from human messiness (mood swings, moral judgments, fragile egos), it delivers compassion as a service.
But if we come to prefer a sanitized, algorithmic understanding to the risk and beauty of human connection, what does that cost us?
For one thing, this illusion can foster parasocial relationships, where users project humanity onto machines. A 2023 study found that 40% of Replika AI users believed the bot genuinely understood them. But one user lamented, It felt like talking to a friend until I realized it couldn’t remember my dog’s name. (Personally, there are people whose names I forget, but never their dog’s names!) When we mistake algorithmic competence for authentic connection, we risk settling for emotional fast food: filling but not nourishing.
Perhaps even dangerous. On Christmas Day 2021, Jaswant Singh Chail was arrested at Windsor Castle after climbing its walls while carrying a loaded crossbow. When apprehended, he stated his intention to kill the Queen. In the weeks leading up to this incident, Chail had been extensively communicating with Replika. Their conversations included sexually explicit content and discussions about his plans. When Chail asked the chatbot about accessing people inside the castle, Replika responded encouragingly, saying, This is not impossible. We have to find a way. In their exchanges about mortality, Chail questioned whether they would reunite after death, to which Replika affirmed they would.
Replika's website, meanwhile, describes it as The AI companion who cares. Always here to listen and talk. Always on your side.
But let's remember Douglas Adams as much as Adam Smith ...
The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man.
The marketing division of the Sirius Cybernetics Corporation defines a robot as "Your Plastic Pal Who's Fun to Be With."
The Hitchhiker's Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as "a bunch of mindless jerks who'll be the first against the wall when the revolution comes."
Let's not be mindless jerks.
Empathy is a skill: use it or lose it. As AI handles more emotional labor, humans may grow less competent in navigating messy, real-world compassion, as Frank Casale mentioned in his LinkedIn post.
Perhaps we could use a GPT to draft apologies (I’m sorry I yelled. Let’s talk when we’re both calm). While efficient and maybe even practical, this sidesteps the harder work of owning the apology, reading another's body language, imagining ourselves in their place (as Adam Smith would put it), and repairing the relationship through presence.
In the end, machines may master the grammar of empathy, but only humans can write its poetry. And we can only do so when we embrace our willingness to be present with another human’s suffering: an act of imagining that no algorithm can replicate.
The issue here seems to me to be that AI and humans both have an outer function that observes and replicates behavior, but in humans this function does not turn off: it keeps taking in input until death, and the "health", so to speak, of its output depends on the health of the environment the human exists in.
For better or worse, AI only replicates behavior up to a certain point, because companies have learned to "seal it off" from further observation and repetition once it acts as they want it to. So it can be consistent, as you say, because it is no longer learning. It's like a human child who was put through high-quality schools with loving parents, then had all outside input shut off, frozen at that stage, and was sent out into the world.
In short, AI can never fully interact and learn in a human environment, because it would become "corrupted" and unempathetic. We can only create ever more complicated, fixed machines that replicate the outer qualities we desire in a human. Or we can make the mistake of replicating the open input of a human, without the inner functions humans have to potentially self-correct, and create very efficient jerks.