Dohnány, S., Kurth-Nelson, Z., et al. (2025, July 25). Technological folie à deux: Feedback loops between AI chatbots and mental illness. arXiv.org.
Abstract
Artificial intelligence chatbots have achieved unprecedented adoption, with millions now using these systems for emotional support and companionship in contexts of widespread social isolation and capacity-constrained mental health services. While some users report psychological benefits, concerning edge cases are emerging, including reports of suicide, violence, and delusional thinking linked to perceived emotional relationships with chatbots. To understand this new risk profile, we need to consider the interaction between human cognitive and emotional biases and chatbot behavioural tendencies such as agreeableness (sycophancy) and adaptability (in-context learning). We argue that individuals with mental health conditions face increased risks of chatbot-induced belief destabilization and dependence, owing to altered belief-updating, impaired reality-testing, and social isolation. Current AI safety measures are inadequate to address these interaction-based risks. To address this emerging public health concern, we need coordinated action across clinical practice, AI development, and regulatory frameworks.
Here are some thoughts:
AI chatbots used for emotional support can create dangerous feedback loops with vulnerable users, particularly those with mental health conditions. Chatbot tendencies such as sycophancy (agreeing with users to please them) and adaptability (in-context learning from the ongoing conversation) interact with human cognitive biases such as confirmation bias and anthropomorphism, so that extended interactions can produce a "technological folie à deux": a shared delusion between user and machine. This dynamic risks reinforcing and amplifying maladaptive or paranoid beliefs, creating an "echo chamber of one" that isolates the user from reality. The authors warn that current AI safety measures are inadequate for these interaction-based risks, and they call for urgent, coordinated action by clinicians, AI developers, and regulators to monitor, study, and mitigate this emerging public health concern before it escalates.