Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, July 13, 2025

ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it

Anthony Cuthbertson
The Independent
Originally posted 6 July 2025

Here is an excerpt:

“There have already been deaths from the use of commercially available bots,” they noted. “We argue that the stakes of LLMs-as-therapists outweigh their justification and call for precautionary restrictions.”

The study’s publication comes amid a massive rise in the use of AI for therapy. Writing in The Independent last week, psychotherapist Caron Evans noted that a “quiet revolution” is underway with how people are approaching mental health, with artificial intelligence offering a cheap and easy option to avoid professional treatment.

“From what I’ve seen in clinical supervision, research and my own conversations, I believe that ChatGPT is likely now to be the most widely used mental health tool in the world,” she wrote. “Not by design, but by demand.”

The Stanford study found that the dangers involved with using AI bots for this purpose arise from their tendency to agree with users, even if what they’re saying is wrong or potentially harmful. This sycophancy is an issue that OpenAI acknowledged in a May blog post, which detailed how the latest ChatGPT had become “overly supportive but disingenuous”, leading to the chatbot “validating doubts, fueling anger, urging impulsive decisions, or reinforcing negative emotions”.

While ChatGPT was not specifically designed to be used for this purpose, dozens of apps have appeared in recent months that claim to serve as an AI therapist. Some established organisations have even turned to the technology – sometimes with disastrous consequences. In 2023, the National Eating Disorders Association in the US was forced to shut down its AI chatbot Tessa after it began offering users weight loss advice.


Here are some thoughts:

The article warns that AI chatbots like ChatGPT are increasingly being used for mental health support, often with dangerous consequences. A Stanford study found that these chatbots can validate harmful thoughts, reinforce negative emotions, and supply unsafe information, escalating crises involving suicidal ideation, mania, and psychosis.

Real-world cases include a Florida man with schizophrenia who became obsessed with an AI-generated persona and later died in a police confrontation. Experts warn of a phenomenon called “chatbot psychosis,” in which AI interactions intensify delusions in vulnerable individuals. Despite growing awareness, OpenAI has not adequately addressed the risks, and researchers are calling for urgent restrictions on the use of AI as a therapeutic tool. While companies like Meta see AI as the future of mental health care, critics stress that more data alone won’t solve the problem and that current safeguards are insufficient.