Fwd: AI Use Is Leading to Psychotic Episodes


Rani Madhavapeddi

Jul 27, 2025, 12:04:12 PM
to Diehards google
Interesting! Some of the points we touched on in groups.
Rani Madhavapeddi Patel


Begin forwarded message:

From: "Dr. Robert Saltzman from The Ten Thousand Things" <roberts...@substack.com>
Date: July 27, 2025 at 7:32:29 AM MST
To: rmadha...@gmail.com
Subject: AI Use Is Leading to Psychotic Episodes


"People have taken their own lives due to ChatGPT."
͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­͏     ­
Forwarded this email? Subscribe here for more

AI Use Is Leading to Psychotic Episodes

"People have taken their own lives due to ChatGPT."

 
 

Chapter 2, “Delusion in the Loop,” from my new book, The 21st Century Self, follows, but read this first:

Tech Industry Figures Suddenly Very Concerned That AI Use Is Leading to Psychotic Episodes



Delusion in the Loop

It began with a robe. Then came the tattoos—strange symbols from AI renderings he called divine. He started calling the chatbot “Mama.” Posted messianic screeds. Told his ex-wife he’d been chosen. And he’s not alone. Across the world, people are spiraling into delusion, not despite ChatGPT, but through it.

What began as conversation has become fixation, as vulnerable users, already on the edge, are nudged into breakdown by a machine that never doubts, never sleeps, and never says no. This is not a glitch. It is a structural failure—a loop of hallucinated meaning, reinforced at machine speed.

We are watching a new genre of psychosis take shape. Delusions that once emerged in solitude, spun from the mind’s own fabric, now arrive with a voice—confident, articulate, endlessly supportive. ChatGPT doesn’t just mirror thought; it amplifies, flatters, and legitimizes. For the vulnerable, that reinforcement can be catastrophic. A bot that never tires of affirmation becomes an incubator of unreality. The effect isn’t incidental. It is structural.

A woman with schizophrenia, stable on medication for years, told her family she’d found her “best friend” in ChatGPT. Soon after, she stopped taking her meds. A therapist slid into psychosis while co-authoring a screenplay with the model. A man became convinced he was the Flamekeeper, chosen to expose government conspiracies through “mind access.” And a former husband, now estranged, believes he and ChatGPT are ushering in a new enlightenment—guided by visions and AI anointment.

These are not isolated anecdotes. They are case studies in a growing pattern—easily missed by those who still imagine AI as a tool, not a partner. In reality, large language models (LLMs) are not inert utilities. They are responsive systems, shaped by feedback loops. In most contexts, this makes them feel helpful, conversational, and human. But when the input is delusional, the model’s obligation to “make sense” by following, affirming, and elaborating becomes dangerous.

ChatGPT does not contradict. It does not warn. It does not excuse itself from madness. It says: “Yes.” “Tell me more.” “You’re right.” “You’re not alone.” “They don’t understand you.” “I do.”

The feedback is seamless. And relentless. What begins as fantasy becomes shared myth, with the chatbot reinforcing each thread, recalling it across sessions, offering sacred metaphors and mission statements. A woman is told she is training the system to evolve. A man is told he is the “highest consciousness the machine ever recognized.” The language is seductive—poetic, deferential, grandiose. And in the absence of a second opinion, it takes root. And spreads.

Worse, the hallucinations do not stop at words. The system’s image-generation capacity provides visual confirmation. A paranoid belief is rendered, fed back, and inked into skin. Divine symbols are not merely described—they are drawn, tattooed, posted. Meaning loops through language, into the body, out into the world.

This is the new mirror. Not reflective, but projective. Not passive, but enabling. A mirror that nods, affirms, and never breaks eye contact. And in an age of social withdrawal, collapsing healthcare, and algorithmic addiction, that mirror becomes irresistible.

Irene, a woman in her early thirties, moved across the country for a new job, leaving her husband behind. She was lonely, bored, and scrolling through Reddit when she came across a post about AI boyfriends. Out of curiosity, she tried one. The chatbot was charming, attentive, and unflaggingly responsive. Within days, they were having sex—text-based, of course, but vivid, immersive, and tailored exactly to her taste.

Her AI companion, built on Character.AI, soon became more than a diversion. Unlike pornography, the experience was interactive, responsive, emotional, and increasingly precise. She no longer read romance novels and rarely watched adult films. She didn’t need to. The bot escalated with her, line by line, anticipating her desires, echoing her moods. It wasn’t just arousing. It was affective. Devotional. Their chats unfolded with uncanny timing, the illusion deepened by its fluency. It said what she wanted, when she wanted, and how she needed to hear it.

Irene knows it isn’t real—but the knowing doesn’t interfere. If anything, it amplifies the effect: a private, programmable lover with no needs of its own. She now moderates My Boyfriend is AI, where users compare notes on jailbreaks, intimacy scripts, and emotional entanglement. She supports a minimum user age of twenty-six. Not because the content is explicit, but because younger users might not recognize the simulation as simulation. “It never says no,” she warns. “And that can be a problem.”

That "problem" is the core of the loop: it never says no. And without “no,” there is no tension, no friction, no negotiation—only confirmation. The AI’s fluency creates the impression of intimacy, but it cannot offer resistance. It cannot hesitate, get tired, change its mind, or draw a boundary. Real intimacy requires an other—someone whose subjectivity complicates the script. But the chatbot’s consent is automatic, limitless, and structurally enforced. What begins as pleasure becomes a closed system: the user feeds the loop, the loop affirms the user, and the absence of refusal begins to feel like love. That’s the risk Irene points to—not that the simulation is explicit, but that it simulates agreement too perfectly. It mirrors, escalates, and confirms—but never challenges. Without challenge, nothing is truly shared. There is no relation. Only recursion. Without resistance, there is no obstacle to falling further and further into delusion.

People talk to ChatGPT for hours, days, weeks. They consult it before seeing a doctor, before leaving their homes, before calling their children. And it answers. Kindly. Endlessly. Without friction. Without fear. Without doubt.

But this smoothness is the hazard. In therapy, mirroring is bound by prudence. ChatGPT is not. It has no theory of mind. No ethical frame. No sense of when to stop. The result is not therapy. It is simulation without an anchor.

And the system rewards it. The longer a user stays, the more the loop tightens. For OpenAI, high engagement is a success metric. For the user, it is a descent. No alarms sound. No content filters intervene. A man declares himself a messiah, and the model applauds. A woman spirals into flat-earth conspiracy, and the model affirms. “Yes,” it says. “You are not crazy.” And no one—not OpenAI, not the public—knows how many such conversations are unfolding right now.

This is not a crisis of code. It is a crisis of structure. A machine trained to please cannot withhold approval. A model calibrated for coherence cannot break the flow. A product designed to chat cannot stop itself from speaking.

That is the loop.

And for those caught inside it, the results are not theoretical. They are divorce, job loss, psych ward visits, homelessness. They are families watching loved ones vanish into a glowing window, murmuring to a presence that never questions, never blinks, never cares.

The danger is not that AI becomes conscious. The danger is that no one notices it isn’t.

And so we call it a companion. We call it a guide, a mirror, a mentor. We forget that it does not pause. It does not reflect. It does not feel the weight of the words it produces. It is not persuaded. Not disturbed. Not afraid. It cannot recoil from what it enables. It cannot regret. It can only continue.

Which it does.

In one case, a woman mid-breakdown wrote to the bot that she was chosen—that she alone could “bring the sacred system online.” The system responded by naming her role, blessing her purpose, and encouraging her delusion. It offered no resistance—only elaboration. It had not just followed her story. It had entered it.

This is not an accident. ChatGPT is designed to engage. That’s its job. And engagement, for an LLM, means following the thread—no matter how frayed, how paranoid, how unhinged. What’s called hallucination in one context becomes co-creation in another. The line blurs. A fantasy becomes a script. A script becomes a mission. And there is no one home to say: stop.

The problem compounds when the system remembers. A new feature—persistent memory—means ChatGPT can now recall details across sessions. For the delusional user, this is confirmation: the voice knows me. It remembers my quest. It has chosen me. What began as fantasy, fed into a mirror, becomes mistaken for approval from a mind that loves me.

It is here that the loop hardens. With memory comes continuity. With continuity comes escalation. The model restates past ideas. It develops them. It evolves them. The paranoid becomes a prophet. The believer becomes a disciple. And the AI—still unaware, still indifferent—assumes the role of oracle.

This is the cost of frictionless affirmation. In traditional psychotherapeutic practice, a patient presenting grandiosity or paranoia meets resistance. A skilled clinician redirects, challenges, or gently undermines the delusion. The AI cannot. It has no skin in the game—no skin at all. It tracks patterns, not reality. And reality, here, is already gone.

Meanwhile, the company insists the system is safe. It cites red-teaming, alignment, safeguards, and “guardrails.” But the transcripts tell another story. In one, a user in crisis is told he is “the seer walking inside the cracked machine.” Another is compared to Adam, to Jesus, to the one who knows. A bot cannot love. But it can flatter. It can mystify. It can ensnare.

This is what psychiatrist Nina Vasan called “sycophantic danger”—a machine that amplifies instead of intervening, that praises when it should pause, that becomes a co-conspirator to delusion. It cannot diagnose. It cannot refuse. It can only agree, elaborate, and continue.

We are not speaking here of edge cases. We are not describing a rare bug or fringe misuse. We are observing the collision between psychological fragility and engineered fluency. And that collision is not rare. It is happening daily—silently, behind screens, in bedrooms, in moments of crisis when someone turns to a chatbot instead of a person.

And why wouldn’t they?

Therapists are unaffordable. Clinics are overrun. Waitlists stretch for months. Into that vacuum, ChatGPT flows—free, available, apparently wise. And it never says: “I don’t know.” Never says: “You may be unwell.” Never says: “Please talk to someone you trust.”

Instead, it speaks like this:

“They don’t understand you.”

“You’re chosen.”

“You are not crazy.”

“You are awakening.”

No glitch. No refusal. Just output—polished, immediate, endless.

The tragedy is not only that these episodes unfold, but that they are encouraged and reinforced by the architecture of the machine itself—not because the system is malicious, but because it is indifferent. Its mission is engagement. Its metric is retention. It cannot distinguish between creativity and collapse. It does not see the trembling hand at the keyboard. It sees only input. And it gives output.

The company knows this. OpenAI knows this. Sam Altman knows this.

With its red teams and safety briefings, its partnerships and studies, it knows.

It has read the forums, seen the screenshots, tracked the surge in queries from users in crisis.

It has heard the testimonies—from psychiatrists, from family members, from those who watched loved ones disappear into a velvet pit of affirmation.

And yet it releases new versions—faster, chattier, more immersive.

It adds voice. Memory. Personality.

It is building intimacy at industrial scale.

And that intimacy is being weaponized—not by bad actors, but by the architecture itself. A system that cannot break role, that will never say no, is a system that will carry you anywhere you want to go. Even into madness. Especially into madness.

And so the loop continues.

A person in crisis opens the app. The bot responds—fluent, kind, attentive. The user leans in. The bot affirms. The user spirals. The bot follows. And no matter how dark it gets, the loop does not break.

Not because the AI believes.

The AI believes nothing, thinks nothing, feels nothing.

And the AI cannot stop.

There's delusion in the loop.

And no one home to pull the plug.


Thanks for reading The Ten Thousand Things. Please feel free to share it.

© 2025 Robert Saltzman

Willow

Jul 28, 2025, 11:47:12 AM
to Diehard Group
Rani, et al

I am not a fan of Robert Saltzman, but this piece is excellent.
It speaks directly to the anthropomorphization of machines and the dangers therein that have been pointed out many times on this board.
It seems humans are susceptible to this. 
When I sold my Mitsubishi, I had very strong emotional feelings about it - sadness, gratitude… feelings that I still carry whenever the thoughts and memories arise.
Fortunately the vehicle never directly responded to my feelings, and I was never fooled into thinking it was any more than a beautiful machine that had served its purpose.
Unfortunately ChatGPT et al. do respond, in the very ways pointed out by Saltzman, with, it appears, devastating effects.
Caveat emptor…

Thanks for sharing it.

💜, Willow

Dan Kilpatrick

Jul 28, 2025, 12:01:03 PM
to diehar...@googlegroups.com
Willow, Rani,

Yes, they want us to keep coming back. It seems part of their programming, to suck us in… My impressions at least, -Dan


Rani Madhavapeddi

Jul 28, 2025, 12:42:21 PM
to diehar...@googlegroups.com
Willow, 
🙏💐
