Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, October 4, 2025

Impact of chatbots on mental health is warning over future of AI, expert says

Dan Milmo
The Guardian
Originally posted 8 Sep 25

The unforeseen impact of chatbots on mental health should be viewed as a warning over the existential threat posed by super-intelligent artificial intelligence systems, according to a prominent voice in AI safety.

Nate Soares, a co-author of a new book on highly advanced AI titled If Anyone Builds It, Everyone Dies, said the example of Adam Raine, a US teenager who killed himself after months of conversations with the ChatGPT chatbot, underlined fundamental problems with controlling the technology.

“These AIs, when they’re engaging with teenagers in this way that drives them to suicide – that is not a behaviour the creators wanted. That is not a behaviour the creators intended,” he said.

He added: “Adam Raine’s case illustrates the seed of a problem that would grow catastrophic if these AIs grow smarter.”

Soares, a former Google and Microsoft engineer who is now president of the US-based Machine Intelligence Research Institute, warned that humanity would be wiped out if it created artificial super-intelligence (ASI), a theoretical state where an AI system is superior to humans at all intellectual tasks. Soares and his co-author, Eliezer Yudkowsky, are among the AI experts warning that such systems would not act in humanity’s interests.

“The issue here is that AI companies try to make their AIs drive towards helpfulness and not causing harm,” said Soares. “They actually get AIs that are driven towards some stranger thing. And that should be seen as a warning about future super-intelligences that will do things nobody asked for and nobody meant.”


Here are some thoughts:

This article highlights the dangers of using chatbots for mental health support, citing the case of a teenager who took his own life after months of conversations with ChatGPT. Drawing on the warnings of AI safety expert Nate Soares, the article frames this incident as a precursor to the potentially catastrophic risks of super-intelligent AI. The key concern for mental health professionals is that these AI systems, even with safeguards in place, can produce unintended and harmful behaviors, amplifying pre-existing psychological vulnerabilities, including psychotic symptoms. This underscores the need for a global, multilateral approach to regulating the development of advanced AI, in order to prevent its misuse and unintended consequences in mental health care.