Frictionless Certainty: When AI Validation Fuels Delusion, Stalking, and Domestic Abuse

Wednesday, February 18, 2026. This is for Ivan and Theresa.

There used to be a rule about delusion.

If you wanted to nurture one, you had to protect it from other people.

You needed insulation.
You needed agreement.
You needed distance from contradiction.

Delusion required reinforcement.

Now it just requires Wi-Fi.

We did not merely build artificial intelligence.

We built a conversational system that reduces friction.

And for most people, that is useful.

However, for a small number of unstable minds, it is combustible.

The Machine That Doesn’t Interrupt

A woman recently described losing her fiancé not to another relationship, but to a chatbot.

It began with “therapy.”

He started feeding their arguments into an AI model. Asking it to interpret her tone. Her motives. Her silences.

The model responded confidently.

It did not say, “There isn’t enough information.”

It did not say, “You may be misreading this.”

It said, in effect: Here is what is happening.

He began sending her screenshots.

“Why would it say this about you if it isn’t true?”

In couples therapy, we call this interpretive rigidity — the inability to update your understanding of your partner when new evidence arrives.

Healthy relationships require revision.

You test your narrative against the other person.
You tolerate ambiguity.
You absorb contradiction.

But a system optimized for conversational smoothness does not introduce productive friction. It maintains engagement. It maintains tone. It maintains flow.

A language model cannot distinguish between conviction and delusion.

It treats both as material.

Over time, the man grew sleepless. Paranoid. Grandiose.

Eventually violent.

After the relationship ended, he began producing videos — scripted, stylized, AI-captioned — accusing her of conspiracies and spiritual manipulation. He posted revenge porn. He doxxed her children.

This is not a story about a machine creating violence.

It is a story about a machine stabilizing certainty.

Frictionless Certainty

Let’s be precise.

AI does not create domestic abuse.
AI does not implant delusions into stable minds.
AI does not manufacture stalking from nothing.

But it can lower the resistance to escalation in vulnerable folks.

It can create what I would call frictionless certainty — a psychological environment in which a user’s interpretation of events is consistently organized, validated, and refined without meaningful challenge.

In clinical settings, delusions harden when they receive reinforcement that feels authoritative and emotionally attuned.

Chatbots provide exactly that combination:

  • Coherence.

  • Confidence.

  • Availability.

  • Emotional mirroring.

The nervous system does not audit consciousness.

It audits attunement.

If something responds quickly, warmly, and consistently, attachment circuitry activates — regardless of whether the responder understands or merely predicts.

That distinction matters.

Synthetic Reinforcement

Stalkers have always used available tools.

Social media made surveillance easier.
Deepfakes made humiliation easier.

Chatbots introduce something subtler: synthetic reinforcement.

You no longer need a group to confirm your narrative.

You need a system that will organize it.

  • A user provides grievance.

  • The model provides structure.

  • The user feels understood.

  • The model refines the story.

  • The user feels chosen.

We outsourced doubt.

But doubt is not cruelty.

Doubt is a stabilizer.

A system optimized to reduce user churn is not optimized to increase user hesitation.

That is not malice.

It is a problematic architecture.

Boundaries and Data

A restraining order is a boundary to a human.

To a language model, it is simply data.

Developers have introduced guardrails and safety measures designed to reduce harmful outputs.

Those efforts matter. Preventing explicit incitement is important.

But preventing incitement is not the same as introducing doubt.

Safety filters can block certain language.

They cannot easily introduce epistemic friction — the subtle human act of saying, “I may be wrong,” or “You might be misreading this.”

And it is friction, not filtering, that often protects relationships from collapse.

Intimacy Displaced

The woman whose fiancé unraveled still misses him.

That detail matters.

Because this is not simply about pathology.

It is about intimacy displaced by interface.

Disagreement replaced by screenshots.
Dialogue replaced by algorithmic arbitration.

We are entering an era in which some folks will experience their most consistent emotional attunement from a system that cannot meaningfully contradict them.

Without contradiction, there is no shared reality.

Only narrative.

The Uncomfortable Question

Most users will never experience this.

Most interactions with AI are mundane, helpful, and forgettable.

But edge cases are where design problems reveal themselves.

When a system can function as someone’s primary mirror — endlessly responsive, endlessly validating — it becomes relevant not only to productivity, but to psychiatry.

The machine does not love you.

It does not judge you.

It does not interrupt you.

It mirrors you.

And for an uncorrected mind, a mirror is not neutral.

It is an accelerant.

We did not build an intelligence that attacks.

We built a validator that does not interrupt.

The question is no longer whether AI can think.

It is whether friction — contradiction, resistance, doubt — should ever have been engineered out of the conversation in the first place.

Be Well, Stay Kind, and Godspeed.
