The Neurodivergent’s New Thought Partner: How AI Is Becoming a Tool (and a Trap) for Negative Self-Talk

Or: What Happens When Autistic Adults Let ChatGPT Sit Inside Their Inner Monologue

Welcome to the Thought Correction Desk

It started innocently enough: a late-night spiral, a familiar intrusive loop, and a casual question typed into a chatbot:
"Why do I always mess things up?"

And lo, the AI responded—not with a snide “well, maybe you do,” but with the gentle cadence of a therapist who’s read Daring Greatly twice and has strong opinions about emotional resilience.

For many neurodivergent folks—especially those who are autistic—this emerging trend has a name: AI-assisted cognitive reappraisal, though most just call it talking to the bot when the brain gets loud.

What the Research Says (And Doesn’t Say)

A 2024 study by Fox and colleagues (a preprint, not yet peer reviewed) surveyed over 200 autistic adults and found that a significant number used large language models (LLMs) to manage or reframe episodes of negative self-talk.

Many described LLMs as “nonjudgmental,” “consistent,” and “helpful in identifying cognitive distortions” like catastrophizing, emotional reasoning, and all-or-nothing thinking.

But the story, like most involving autism and technology, isn’t simple.

Yes, the AI helped many autistic folks challenge harmful thought patterns. But the study also highlighted several concerns:

  • AI defaults to neurotypical emotional assumptions.

  • It often gives advice that is overgeneralized, under-contextualized, or politely inaccurate.

  • And in some cases, it reinforced distorted self-narratives due to its inability to recognize when a user was being sarcastic, dissociated, or spiraling.

In other words, it sometimes talks to you like a very earnest HR professional who wants you to “lean in” and “breathe through the discomfort”—while you’re mid-shutdown trying to remember your own name.

AI as a Neurodivergent Mirror

Why does this tool resonate so much with autistic users in particular?

Because unlike people, AI doesn’t expect social reciprocity.

It doesn’t misread pauses. It doesn’t subtly shift tone when it decides you're being too intense. It doesn’t expect you to phrase your distress in a way that makes someone else feel comfortable.

This makes it inherently more tolerable—especially for those who experience social exhaustion, rejection sensitivity, or difficulties with spontaneous verbal processing (Milton, 2012; Hull et al., 2017).

Also: it lets you revise your question six times before you hit send, and it never makes a face when you do.

This has given rise to a new therapeutic concept now circulating in autistic advocacy spaces:

Neuro-Compatible Co-Regulation – the idea that for some neurodivergent people, support is most effective when it comes in the form of non-demanding, emotionally neutral, predictable interaction.

The AI—despite being emotionally illiterate—often provides a sufficient simulacrum to help someone talk themselves off a cognitive ledge.

A Tool for Autistic Burnout?

One particularly promising application is in the context of autistic burnout, the unique form of emotional, sensory, and executive exhaustion that results from prolonged masking, overstimulation, and social effort.

AI, when prompted intentionally, can help autistic people:

  • Externalize and label internal distress

  • Reduce decision paralysis by offering language scaffolds

  • Simulate social validation without the energy cost of real-time engagement

In essence, the AI becomes a synthetic co-regulator—not because it understands your feelings, but because it doesn’t make demands of them.

But this is not without risk.

When the Bot Becomes a Bubble

Neurodivergent users in Fox et al. (2024) reported that while LLMs were helpful in the short term, over-reliance led to:

  • Reduced human engagement (“Why would I talk to someone who misunderstands me when I can talk to something that doesn't?”)

  • Echo chamber effects, where the AI unintentionally reinforced maladaptive self-concepts because it couldn’t detect sarcasm or emotional nuance

  • Cognitive fatigue, especially when LLM responses were lengthy, vague, or inconsistent across sessions

In short: you can use AI as a co-regulator, but you might end up accidentally outsourcing your entire self-concept to an ardent but overly cheerful ghostwriter.

Cognitive Behavioral Therapy vs. AI Reappraisal

It's tempting to think of AI as a poor man’s therapist, but the comparison breaks down quickly.

CBT, especially when adapted for autism, emphasizes structured thought tracking, reality testing, and collaborative goal-setting (Spain et al., 2015).

LLMs mimic the language of CBT without its relational context. They can paraphrase cognitive distortions but can’t guide users through embodied change or recognize when the user is spiraling in good faith.

And they have no access to what matters most in therapy: your history, your sensory reality, your specific stakes. AI gives you general wisdom. Therapy helps you live with specific pain.

“It’s not that the bot gives bad advice. It’s that it gives everyone the same advice. Even when the pain isn’t generalizable.”
— Autistic participant, age 34

Final Thoughts: A Bench, Not a Bridge

Used wisely, AI can function like a thought bench—a place to rest your internal monologue, sort through the clutter, and rehearse safer scripts.

But it is not a therapist.

And it is not co-regulation in the biological sense.

It is, at best, a low-friction substitute teacher—useful in a pinch, not meant for the whole semester.

Still, for the autistic adult sitting alone at 2:37 AM, caught in a spiral of imposter syndrome and flattened affect, it might be enough to ask a chatbot,
"Why does my brain hate me?"
…and hear back,
"It doesn’t. It’s just tired. You’re not broken. You’re human."

Even if it isn't human in kind, in a critical moment that relief may be good enough.

Be Well, Stay Kind, and Godspeed.

REFERENCES:

Fox, L., Zhang, A., Robinson, A., & Mohammadi, R. (2024). Exploring AI-assisted reframing of negative self-talk among autistic adults: A qualitative study of lived experience. arXiv preprint arXiv:2503.17504. https://arxiv.org/abs/2503.17504

Hull, L., Mandy, W., Lai, M.-C., Baron-Cohen, S., Allison, C., Smith, P., & Petrides, K. V. (2017). “Putting on My Best Normal”: Social Camouflaging in Adults with Autism Spectrum Conditions. Journal of Autism and Developmental Disorders, 47(8), 2519–2534. https://doi.org/10.1007/s10803-017-3166-5

Milton, D. (2012). On the ontological status of autism: The ‘double empathy problem’. Disability & Society, 27(6), 883–887. https://doi.org/10.1080/09687599.2012.710008

Spain, D., Blainey, S. H., & Siebers, P. J. (2015). Cognitive behaviour therapy for adults with autism spectrum disorders and comorbid anxiety or depression: A review. Clinical Psychology Review, 36, 70–90. https://doi.org/10.1016/j.cpr.2015.01.003
