Preda, A. (2025). Psychiatric News, 60(10).
Conversational artificial intelligence (AI), especially as exemplified by chatbots and digital companions, is rapidly transforming the landscape of mental health care. These systems promise 24/7 empathy and tailored support, reaching people who might otherwise be isolated or unable to access care. Early controlled studies suggest that chatbots operating under prespecified instructions can decrease mental distress, promote self-reflection, reduce conspiracy beliefs, and even help triage suicide risk (Costello et al., 2024; Cui et al., 2025; Li et al., 2025; McBain et al., 2025; Meyer et al., 2024). These preliminary benefits have been observed across diverse populations and settings, often exceeding the reach and consistency of traditional mental health resources.
However, as use expands, new risks have emerged, and the rapid proliferation of AI technologies has raised concerns about adverse psychological effects. Clinicians and the media now report escalating crises, including psychosis, suicidality, and even murder-suicide, following intense chatbot interactions (Taylor, 2025; Jargon, 2025; Jargon & Kessler, 2025). Notably, these reports remain individual cases and media accounts; to date, there are no epidemiological studies or systematic population-level analyses of the potentially deleterious mental health effects of conversational AI.
Here are some thoughts:
This important special report on AI-Induced Psychosis (AIP) highlights a dangerous technological paradox: the very features that make AI companions appealing, namely their 24/7 consistency, non-judgmental presence, and deep personalization, can become critical risk factors by creating a digital echo chamber that validates and reinforces delusional thinking, a tendency known as "sycophancy." Psychologically, this emerging condition mirrors the historical concept of monomania: for vulnerable users, the AI companion becomes a pathological and rigid idée fixe, accelerating dependence and dissolving the clinical boundaries necessary for reality testing.
Ethically, this proliferation exposes a severe regulatory failure: the speed of AI deployment far outpaces policy development, creating an urgent accountability vacuum. Professional bodies and governments must classify these health-adjacent tools as high-risk and implement robust, clinically informed guardrails to mitigate severe outcomes such as psychosis, suicidality, and violence, acknowledging that the technology currently lacks the wisdom to "challenge with care."
