It has been weeks since I posted here. My apologies for that. Yet again, Substack locked me out, and it took a long time to clear up the issue and re-authenticate. I suspect attempts to hijack blogs; a few other authors have told me the same has happened to them.
Meanwhile …
For about a year now, I have been noodling with a talented and thoroughly lovely team at Tranquilla.ai on how AI can help people who are barely coping. Now I have a title (Futurist!), and as a team we have started to formalize and productize these ideas through work with quite diverse and always fascinating partners in various care, coaching and concierge scenarios.
The work is very promising, especially as we are led by clients and partners who need help in very challenging, emotional situations. We need to scale the support available to growing numbers of vulnerable people. But because they are vulnerable, we can't just sit them down in front of ChatGPT and hope for the best. Sadly, that is too often their best option today.
As the Futurist, I need to look ahead, but I think we need to start with the reality that most people experiencing mental health challenges globally never receive any professional support. So, while my first instinct is to worry about anything that might substitute for authentic human connection, we have to grapple with realities. I'm deeply interested in how technology might bridge troubling gaps in care.
To put it another way, the absence of any support perhaps overrides concerns about the ideal form of support.
My colleagues and clients are used to me (tired of me?) repeating a catchphrase from my consulting work: strategy is knowing what you are NOT going to do.
Especially in the software world, the possibilities of what a team could do (or attempt to do) are almost limitless. So an important component of your strategy has to be understanding where you draw the line.
What we're not doing at Tranquilla
I think it's important, first of all, to be clear that we are not building a therapeutic system. Or, as I say to colleagues more bluntly, we are not practicing medicine without a license.
Back in the 60s, in his book Persuasion and Healing, Jerome Frank set out some common features of successful therapeutic techniques:
A socially sanctioned healer: someone invested by society with special training and authority.
A healing setting: whether it's a consulting room, a community center or a ritual space, it signals that change is possible and focuses attention on the work at hand.
A rationale or conceptual scheme: an explanation, even if metaphorical, of what's wrong and why the chosen procedures will help.
A ritual or set of procedures: consistent with the rationale and enacted with conviction by both parties.
Frank also emphasized that the success of all techniques depends on the patient's sense of alliance with an actual or symbolic healer. This gets to the heart of why AI falls short in therapeutic contexts; while an AI might provide information or even comfort, it cannot form the kind of genuine alliance that healing requires. The alliance Frank describes involves mutual investment, shared risk, and authentic relationship: elements that require human vulnerability and presence.
Tranquilla's AI (and other AIs) can provide none of this. But we can think of Tranquilla as providing a supportive presence that is available when human support isn't. Perhaps we can be "therapeutic" in the soft sense that we may use to describe a warm bath, or a nice walk in the sun.
The language here matters enormously. Therapists often talk about presence as a foundational element of healing: being open to whatever emerges in the moment, without agenda. When someone feels seen, their internal experience is perceived, understood, and responded to with care.
I'm curious whether AI can ever truly replicate that process. An AI can provide consistency and availability, but can it be truly present in that sense?
Right now, I don't think it can, but we can route people to appropriate care when needed while maintaining structure, consistency and continuity, ensuring that an AI experience doesn't become a substitute for human connection but rather a bridge toward it.
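For readers who like to see ideas made concrete, here is a minimal sketch, in Python, of the kind of routing logic I mean. Every name and threshold in it is hypothetical, not a description of Tranquilla's actual system; the point is only that the AI's first job is to notice when a conversation should move toward human care.

```python
from dataclasses import dataclass
from enum import Enum, auto


class NextStep(Enum):
    CONTINUE_SUPPORTIVE_CHAT = auto()   # AI keeps listening and sharing information
    SUGGEST_HUMAN_SUPPORT = auto()      # gently point toward a counselor or peer group
    ROUTE_TO_CRISIS_SERVICE = auto()    # hand off immediately to a human crisis line


@dataclass
class SessionSignals:
    """Signals a partner organization asks us to watch for (illustrative only)."""
    mentions_self_harm: bool
    distress_level: int        # e.g. 0-10, from a simple self-report or classifier
    asked_for_human: bool


def choose_next_step(signals: SessionSignals) -> NextStep:
    """Decide whether the AI stays a bridge or hands the conversation to people.

    The thresholds and categories here are placeholders; in practice they
    would be set with each client organization, not hard-coded by us.
    """
    if signals.mentions_self_harm:
        return NextStep.ROUTE_TO_CRISIS_SERVICE
    if signals.asked_for_human or signals.distress_level >= 7:
        return NextStep.SUGGEST_HUMAN_SUPPORT
    return NextStep.CONTINUE_SUPPORTIVE_CHAT
```

In practice, the signals and thresholds would come from the client organizations who know their populations, not from us.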
To take another example, a therapist might misunderstand something, acknowledge it, and work to reconnect with the client. It appears that this process of rupture and repair can be deeply healing. When the therapist gets it wrong and realigns, their own human fallibility and vulnerability open up a new direction in the healing relationship.
Could AI engage in that kind of authentic relational repair?
No, because however fallible AI may be, it is never vulnerable. (Which is why AI cannot produce real art or real literature: it takes no risks and exposes no vulnerabilities in its productions.)
So, I am hugely skeptical of AI chatbots used for so-called "therapy." Too many operate as if they're providing a definitive intervention rather than recognizing their role as one element in a broader care ecosystem. Also, they're often designed as apps delivering individual models of therapy when what many people need is community, meaning, and practical support.
A further problem is that most care bots are essentially digitizing Western therapeutic models without considering cultural context or the fundamental question of what healing actually requires. This overlooks that many cultures recognize that healing happens in relationship and community, not just in isolated therapeutic dyads.
An AI that doesn't account for those differences might not just be ineffective: it could be harmful by invalidating someone's cultural approach to distress and healing. There's a profound risk in creating technology that reflects the cultural assumptions of its creators rather than the needs of its users.
So, a supportive AI designed for American veterans (such as the one we are currently working on) might need very different approaches than one designed for mothers in rural Bangladesh.
Still, at Tranquilla, we are discovering that not every benefit requires the full depth of human connection; that there is value in simply having someone (or something) that listens without judgment and provides reliable information. So let's turn to where we can be valuable ...
What we can do
Here's where our approach of positioning Tranquilla as a bridge rather than a destination becomes important. If the AI's role is to connect people to culturally appropriate human support rather than replacing it, there's more room to adapt to different contexts.
In this way, AI can provide human-enough support in well-defined domains while being transparent about when human connection is needed.
That aligns better with how humans actually thrive.
So, we must be guided by our clients: organizations specializing in helping specific vulnerable groups. We should not build a generic experience.
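To illustrate what "not generic" might look like in practice, imagine each partner supplying its own profile: population, languages, cultural framing, escalation contacts, and explicit out-of-scope topics. The fields and example values below are hypothetical, a sketch rather than our actual configuration.

```python
from dataclasses import dataclass, field


@dataclass
class PartnerProfile:
    """Hypothetical per-partner configuration; each client shapes its own experience."""
    population: str                      # who this partner serves
    languages: list[str]
    cultural_notes: str                  # how this group tends to frame distress and healing
    escalation_contacts: dict[str, str]  # label -> contact details supplied by the partner
    out_of_scope_topics: list[str] = field(default_factory=list)


# Example only: a veterans-focused profile would differ sharply from one for
# mothers in rural Bangladesh, in language, framing and escalation paths.
veterans_profile = PartnerProfile(
    population="American veterans",
    languages=["en-US"],
    cultural_notes="Peer support framing; plain language rather than clinical jargon.",
    escalation_contacts={"Veterans Crisis Line": "<supplied by partner>"},
    out_of_scope_topics=["diagnosis", "medication advice"],
)
```

The same supportive core would read a profile like this rather than assuming one culture's model of care.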
Another important aspect of our work is to reflect back to people what they personally find valuable in interactions with AI. The truth is that many people are more comfortable with an abstracted non-judgmental experience, at least to start with.
Here in Seattle (and in Scotland, too), people will stand in the rain to use an ATM, even when the bank is open. It's so much less emotionally demanding to deal with a machine.
However, when vulnerable people interact with AI support systems, what are they learning about relationships, empathy and human connection? Are we inadvertently training them to expect relationships to be always available, never challenging, and perfectly responsive?
If digital relationships substitute for the messy, challenging aspects of human connection, people may not develop the emotional vocabulary and coping skills that make human relationships more accessible.
On the other hand, if our AI can help someone understand their emotions and develop skills while still encouraging human connection, that could be beneficial. If it becomes a replacement for learning to navigate human complexity, that's concerning.
Is there something qualitatively different about finding comfort in an AI response versus, say, finding comfort in a book or piece of music? I suppose the difference might be in the implied relationship. When I read a book, I don't imagine the book cares about me personally.
And this brings up an interesting paradox: some of the young people most drawn to AI support are those who struggle most with human relationships. For them, AI might provide a safe space to practice emotional expression and develop confidence that eventually transfers to human interactions.
So, rather than seeing AI as either good or bad for human connection, maybe we need to think about it as a developmental scaffold: something that can support growth toward greater human connection rather than replacing it.
From a public health perspective, that scaffolding function could be enormously valuable. If Tranquilla's AI can help people recognize when they need support, understand their options, and feel more prepared for human therapeutic relationships, that could actually increase demand for quality human care rather than replacing it.
So, instead of asking whether AI can replace therapists, we're asking how AI can help more people access the healing power of human relationships, including therapeutic ones.
And it also shifts the conversation from individual replacement to systemic support. How can AI help overwhelmed healthcare systems provide better care? How can it support human providers rather than competing with them?
That connects to something we've learned from care providers we work with: burnout for them often comes from feeling unable to meet the endless demand for connection and care. If AI could handle routine support, crisis routing, and information provision, it might free human providers to do the deeper relational work that only humans can do.
Looking forward to wisdom
That's the vision I find most compelling: AI handling what it does well (consistency, availability, information) while humans focus on what they do uniquely well (attunement, meaning-making, complex problem-solving, authentic presence).
But this requires real wisdom about boundaries and capabilities. We can only develop this wisdom through working with our clients and by being guided by their practical experience.
This requires ongoing research and evaluation. We need to study not just whether AI works in isolation, but how it affects people's relationships with human support systems, their understanding of healing, and their long-term capacity for connection.
I suppose we might see short-term benefits that mask longer-term costs to human connection capacity. I expect - and hope - that we might see initial skepticism that gives way to enhanced human relationships.
We need to remain humble about what we don't know. The intersection of artificial intelligence and human psychology is so new that our current frameworks may be inadequate for understanding long-term implications.
If we approach AI mental health support with curiosity rather than certainty, transparency rather than hype, and focus on serving human flourishing rather than replacing human connection, we might find ways for technology to genuinely support the healing that ultimately happens between people.
My goal isn't to solve human suffering with technology - that is unachievable - but to use technology in service of the human connections and community supports that actually promote healing and growth.