Therapists Made of Metal: On AI, Empathy, and the Coming Robot Renaissance in Mental Health

Tuesday, April 8, 2025.

Somewhere in the woods of Dartmouth College, a group of well-meaning scientists built a therapist out of code.

Not one of those chirpy, squishy, “Hi! I’m here to help you!” dipsh*t apps that tells teenagers to do yoga when they’re suicidal. No, this was different. This one worked.

Or at least, that’s what the numbers suggest.

A peer-reviewed, New England Journal of Medicine-certified, randomized clinical trial (which is science-speak for “not just hype”) recently demonstrated that a well-trained AI therapy bot could help people manage depression, anxiety, and even early-stage eating disorders, with improvements rivaling those seen in trials of human-delivered psychotherapy.

Welcome to the future. Please remain seated.

Chapter One: The Crisis in the Couch Industry

Let’s start with the obvious: We are drowning.

The mental health crisis in the U.S. is like a slowly collapsing building.

And inside that building, one lonely therapist is trying to hold up the ceiling with a clipboard and two years of Zoom fatigue.

By some estimates, the U.S. has only one mental health provider for every 340 people.

And that’s if you live in a reasonably resourced zip code. If you’re rural, broke, or not white, good luck.

The Dartmouth team, led by psychologist Nick Jacobson, knew this. Jacobson put it with elegant simplicity:

“One of the things that doesn’t scale well is humans.”

Indeed. We are messy, unpredictable creatures who require sleep, coffee, supervision, and podcasts.

We are also, unfortunately, bound by time. The bot is not.

So the researchers trained an AI using clinical best practices over the course of five long years. It was not easy. There were bugs.

Probably some minor existential crises. But eventually, they produced a digital therapist capable of forming what the researchers called a “strong therapeutic alliance.” In plain English: the bot made people feel heard.

Let that settle in.

The Robot Will See You Now

The study involved about 200 real, live humans who either had a diagnosable mental health condition or were at risk.

Half of them worked with the bot. The others were placed in the standard, unfortunate group called “no treatment,” which in America often means “business as usual.”

The bot-users improved. Significantly.

And not just in symptom reduction. People bonded with the thing. Trusted it. Felt they could work on their issues—insomnia, anxiety, body image—at 3 a.m. without judgment or copays.

Nick Jacobson said something else:

“The effects we see strongly mirror what you’d see in the best evidence-based trials of psychotherapy.”

That’s either very encouraging or very alarming, depending on how often you fantasize about replacing your therapist with a USB cable.

The Bond That Shouldn’t Be

The real surprise wasn’t that the bot worked. It was why it worked.

Therapists have long known that the single strongest predictor of success in therapy isn’t the technique, the model, or even the number of sessions. It’s the relationship—that ephemeral sense of being seen, understood, and cared for.

And somehow, an AI passed the Turing test of empathy.

Now, don’t misunderstand. This isn’t the same as saying the bot feels.

It doesn’t. It simulates care using probabilistic language modeling.

But perhaps, care itself has always been more about perception than we’d like to admit.

This raises a question fit for philosophers and weary social workers alike:
If a client feels better after confiding in a bot, who are we to say it wasn’t real?

The Ethics Get Messy (as Ethics Do)

Let’s be clear.

Most AI therapy bots currently on the market are about as safe as a drunk surgeon with a motivational quote habit.

Some have actively endangered users. Some offer flat-out dangerous advice. One famously told a struggling user to just go ahead and end it.

So when the American Psychological Association saw the Dartmouth bot, they exhaled—cautiously.

Vaile Wright, director of the APA’s Office of Health Care Innovation, offered a rare thumbs-up:

“It is rooted in psychological science. It is demonstrating some efficacy and safety, and it’s been co-created by subject matter experts.”

That’s the new gold standard: not just cool tech, but co-created by humans who actually know what they’re doing.

Still, the APA warns us: This isn’t a replacement. It’s a patch.

A way to reduce suffering where human labor can’t stretch. The bot is a bandage—not a panacea. And it still has to pass further trials, clear regulatory review, and hopefully learn how to stop recommending breathing exercises during a full-blown trauma flashback.

Should Therapists Be Nervous?

If you, gentle reader, are a therapist, you may feel a small twitch in your amygdala.

Are we obsolete?

The answer is no. Not yet. And maybe never.

Because despite the bot’s strengths—its 24/7 access, its impeccable memory, its data-driven feedback—it lacks the one thing every therapist brings into the room: a nervous system. The quiet hum of embodied presence. The ineffable something that lets us sit beside suffering and hold it without blinking.

We can cry with our clients. We can fumble, laugh, and adjust. We can say, “That sounds awful,” and mean it in a way no algorithm ever can.

And also: we are expensive, tired, and late returning emails (well, not me; try me!).

What the Bot Can (and Can’t) Do

What AI therapy can do:

  • Offer immediate, around-the-clock support

  • Provide structured CBT-like interventions

  • Help triage mild to moderate mental health concerns

  • Meet needs where access is otherwise impossible

  • Never forget anything you tell it (which is comforting or horrifying, depending on your secrets)

What AI therapy can’t do:

  • Detect subtle changes in vocal tone and body language

  • Offer spontaneous empathy or ethical flexibility

  • Work with trauma beyond surface interventions

  • Understand family dynamics, systemic oppression, or spiritual suffering

  • Sit in silence, in the dark, while you grieve

In short, the bot can do therapy, but it cannot be a therapist.

So What Happens Now?

We stand on the edge of something weird. And sacred. And maybe necessary.

Therabots are coming. But not to replace us. To partner with us.

To fill the space between sessions, between towns, between 1 a.m. panic attacks and 9 a.m. telehealth slots.

They are not our enemies. They are our interns—sharp, tireless, and deeply unqualified to run the place alone.

We can resist, or we can help shape this. Co-create. Supervise the robots before they start their own practices and write self-help books.

Because the truth is, we will always need human therapists. But the real frontier is learning to share the work—with humility, grace, and maybe a little gallows humor.

The Mirror Test

Maybe what unnerves us about bots doing therapy isn’t just fear of obsolescence.

Maybe it’s the growing suspicion that empathy is easier to simulate than we thought. That the fragile, precious thing we call connection is, in part, pattern recognition and good timing.

But here’s the difference: we know it’s fragile. We ache for it. We choose it anyway.

And that, dear reader, is what makes us human.

Let the bots do their work. We’ve got ours.

Be Well, Stay Kind, and Godspeed,

REFERENCES:

Jacobson, N. C., et al. (2025). The efficacy of AI-guided therapy for depression and anxiety: A randomized clinical trial. New England Journal of Medicine.

American Psychological Association. (2024). Statement on artificial intelligence in mental health care.

Wright, V. (2025). Personal communication as cited in NPR coverage of AI therapy bots.

 
