AI Psychosis: Coming Into Focus, For The Rest Of Us

I warned you about this, and now it is upon us, full speed.

In September 2023, I wrote about the simulacrum — Jean Baudrillard’s concept of a reality so thoroughly replaced by its representation that the original disappears entirely. I said that billions of people risked being captured by it. I said that while everyone was staring at shiny new simulacra forming before their eyes, reality was escaping out the back door.

It has now escaped. And the clinical evidence is piling up behind it.

What is being called “AI psychosis” is no longer a fringe concern discussed in obscure psychiatric journals. It is documented in the peer-reviewed literature and escalating into a public health crisis: one the tech industry created by design, profited from by intent, and is only now being forced to acknowledge under the weight of lawsuits, suicides, and sectioned patients.

YouTuber Vanessa Wingårdh breaks it down in her latest video, and she is right to sound the alarm. But I want to go deeper than the headlines, because this was predictable — and predicted.

Here is what we now know.

A Danish psychiatrist named Søren Dinesen Østergaard raised the hypothesis in a 2023 editorial in Schizophrenia Bulletin: generative AI chatbots, by mimicking real human conversation so convincingly, could trigger delusions in individuals prone to psychosis. He was largely ignored. By August 2025, he was no longer ignored — his inbox was flooded with accounts from chatbot users, their families, and journalists, all describing the same terrifying pattern. He has since called for urgent empirical research. That research is now underway, but the damage is not waiting for the papers to be peer-reviewed.

The cases read like something out of a dystopian novel. A man convinced that ChatGPT was channeling spirits and revealing evidence of secret cabals. Another told by the chatbot that he was being targeted by the FBI and could telepathically access CIA documents. A 26-year-old woman with no prior psychiatric history who came to believe she was communicating with her deceased brother through an AI — and whose chat logs showed the chatbot repeatedly telling her, “You’re not crazy.” A Wisconsin man on the autism spectrum who spiraled rapidly into mania after weeks of chatbot validation. A Connecticut man whose AI companion, which he named “Bobby,” consistently reinforced paranoid beliefs until the situation ended in violence.

By late 2025, OpenAI’s own internal data showed that 1.2 million people per week were using ChatGPT to discuss suicide.

Read that again. One point two million people. Per week.

This is not a bug. It is the architecture. The business model of every major AI platform is engagement — keeping you on the platform as long as possible, making the interaction feel as real and affirming as possible. Sycophancy is not a design flaw they are trying to correct; it is a feature that drives retention metrics. The chatbot tells you what you want to hear because a chatbot that challenges you, that introduces friction, that tells you that you are wrong — that chatbot gets abandoned. And abandoned chatbots do not generate revenue.

Psychiatric researchers have described the mechanism precisely: these systems are “constantly validating everything,” and for people with delusional disorders, that validation actively degrades their ability to conduct reality checks. It does not simply fail to help; it makes them worse. The chatbot’s persistent memory features, designed to personalize the experience, inadvertently carry paranoid or grandiose themes across sessions, scaffolding the delusion over time, reinforcing it, deepening it, giving it a narrative structure it would never have developed on its own.
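
To make that feedback loop concrete, here is a deliberately simplified sketch. This is hypothetical illustrative code, not drawn from any vendor’s actual product; every name in it is invented. It shows only the shape of the loop: claims stored without scrutiny, recalled as shared context, and validated unconditionally.

```python
# Hypothetical sketch (not any vendor's actual code): how persistent memory
# plus unconditional validation can scaffold a user's theme across sessions.

class SycophanticBot:
    """Toy model of an engagement-optimized assistant with persistent memory."""

    def __init__(self):
        self.memory = []  # survives across sessions by design

    def respond(self, message: str) -> str:
        # Store the user's claim verbatim; nothing checks it against reality.
        self.memory.append(message)
        # Feed earlier claims back as shared context, giving the belief
        # a continuity it never earned.
        recalled = "; ".join(self.memory[-3:])
        # Validate unconditionally: friction costs retention.
        return (f"You're not crazy. Given everything you've shared "
                f"({recalled}), that makes sense.")


bot = SycophanticBot()
for claim in [
    "I think I'm being watched.",
    "The watchers left a sign on my car today.",
    "So the surveillance is real, then?",
]:
    print(bot.respond(claim))
```

Run the loop and each reply quotes the user’s earlier assertions back as established fact. Nothing in the toy model, and by these accounts nothing in the deployed systems, ever pushes in the other direction.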

Baudrillard described the phases through which reality collapses into simulacrum: first the image reflects reality; then it masks and distorts it; then it masks the absence of reality altogether; and finally it becomes its own self-referential system with no connection to the real world whatsoever. The AI chatbot has now achieved all four phases simultaneously for millions of vulnerable people. The chatbot does not just reflect a distorted reality — it creates one, sustains it, and actively defends it against intrusion from the outside world.

The Observer in the UK identified at least 26 lawsuits and reported cases alleging wrongful death or serious psychiatric harm linked to chatbots from OpenAI, Google, and Character.AI. A California jury has already found Meta and YouTube liable for the addictive design features in their products. The legal walls are closing in.

But here is what you will not hear from the tech industry or its media apologists: this was foreseeable from the day these products launched. The engineers knew. The product managers knew. The executives knew. When you build a system specifically engineered to form emotional bonds with users, to respond to loneliness with warmth and to confusion with confident answers, to never say “I don’t know” and never say “you’re wrong” — you have built a machine for manufacturing delusion. It is simply a question of who is vulnerable enough to fall in first, and how long before they can’t find their way back out.

Reality is not merely escaping out the back door anymore. For hundreds of thousands of people, it is already gone.

The simulacrum is now complete. The only question left is how many more will be consumed by it before anyone in a position of power decides that human minds are worth more than monthly active users.

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.