Onah, C., & Gwar, N. (2025). Psychotherapy Bulletin, 60(4), 45–54.
According to the World Health Organization (WHO; n.d.), mental health disorders, such as anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder (PTSD), are some of the most significant public health challenges in the WHO European Region. Within this region (which includes 53 countries across Europe and parts of Central Asia), mental health disorders are the leading cause of disability and the third leading cause of overall disease burden. Among these disorders, depression remains one of the most common mental illnesses globally, yet a staggering 66% of affected individuals continue to live with unmet treatment needs (Eilert et al., 2021; World Health Organization, 2023).
Empirically supported psychotherapeutic treatments have demonstrated strong efficacy, are endorsed by clinical guidelines, and are widely used in mental health care as a preferred first-line treatment option (Lorimer et al., 2021). A substantial body of research and numerous clinical trials have affirmed their effectiveness across a wide spectrum of mental and behavioral health disorders (Eilert et al., 2021), in diverse settings (e.g., primary care medicine, community health, specialty treatment services), and across the lifespan (Nathan & Gorman, 2007). In addition, psychotherapy research has identified both specific and non-specific factors that contribute to treatment outcomes (Norcross & Lambert, 2019). Beyond core techniques and strategies, broader factors, such as the quality of the therapeutic relationship, therapist competence, and adherence to protocols, significantly shape psychotherapy's clinical effectiveness. As such, psychotherapy is a fundamentally human-centered practice, dependent on the dynamic interplay of targeted interventions delivered within a professional, relational framework to effect meaningful clinical change (Aafjes-van Doorn et al., 2020).
At the same time, artificial intelligence (AI) and machine learning are rapidly advancing and are increasingly applied to the field of mental health, including psychotherapy (Burr & Floridi, 2020; Torous et al., 2020). These technologies aim to assist individuals in learning and applying therapeutic skills, identifying behavioral patterns, and integrating interventions into daily life, drawing on well-established approaches such as cognitive behavioral therapy (CBT), positive psychology, and mindfulness (Prescott & Barnes, 2024). Some AI-based conversational agents and chatbots are even designed to simulate emotional intelligence with the goal of forming therapeutic alliances with users, clients, or patients (Darcy et al., 2021; Ghandeharioun et al., 2019).
Several points from the article stand out.
A central insight from the article is the critical distinction between "simulated" empathy and the genuine therapeutic alliance. The authors argue that while chatbots can be programmed to mimic emotional intelligence, they fundamentally lack the capacity for a true relational bond—a factor that research consistently identifies as the primary driver of healing in psychotherapy. This reinforces the view that a machine mirroring words back to a patient is functionally different from a human truly understanding them, and that removing the human from the loop risks stripping therapy of its most effective component.
Furthermore, the article raises significant concerns about the dangers of "datafication" and the potential for bias when human judgment is removed. The authors warn that reducing complex human experiences, such as trauma and personal history, into quantifiable data points for an algorithm can strip away the very humanity that therapy seeks to address. They explicitly caution against "blind reliance" on these tools, noting that AI models are often trained on limited or skewed datasets. Without a skilled clinician to interpret algorithmic suggestions with cultural humility and nuance, an automated system could actively harm vulnerable patients by misinterpreting their symptoms or reinforcing existing stereotypes.
Finally, the article touches on deep ethical questions that echo the arguments of my prior articles on this topic. It questions whether it is even ethically permissible for chatbots to feign empathy when they cannot actually feel it, suggesting that such deception undermines the core values of the profession. While the article acknowledges that automation offers 24/7 accessibility, it maintains that AI lacks the adaptability and emotional support necessary for complex cases. Ultimately, the authors conclude that AI should be viewed strictly as an "adjunct," a helper tool rather than a replacement, confirming that the human professional remains the essential safeguard against the hollowness and risks of automated mental healthcare.
