Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Monday, April 6, 2026

Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review

Meadi, M. R., et al. (2025)
JMIR Mental Health, 12, e60432.

Abstract

Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

Methods: We conducted a systematic search across the PubMed, Embase, APA PsycINFO, Web of Science, Scopus, Philosopher’s Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added further articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was performed by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and was revised and supplemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.

Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders’ perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.

Here are some thoughts:

From a clinical perspective, the most immediate ethical tension identified in this review is the conflict between increasing accessibility and ensuring nonmaleficence (doing no harm). While proponents argue that CAI can bridge care gaps by offering constant availability and reaching those who fear stigma, the risks regarding safety and crisis management are profound. The review highlights that CAI systems often fail to contextualize user cues, leading to inappropriate responses in critical situations, such as suicidality. Furthermore, the phenomenon of AI "hallucinations"—where the system presents false information as fact—poses a distinct danger in mental health care, where misinformation could exacerbate conditions such as eating disorders or anxiety. The weakness of the clinical evidence base is also concerning: despite the commercial "hype," many of these tools have not been subjected to rigorous clinical studies demonstrating efficacy against active controls.

Technologically, the "black box" problem creates a significant barrier to integrating CAI into professional practice. The review notes that the opacity of machine learning algorithms makes it difficult to explain how a CAI arrived at a specific therapeutic intervention, which undermines the principle of explicability and erodes trust. This lack of transparency also complicates accountability: if a CAI harms a patient, it remains unclear whether responsibility lies with the developers, the deploying clinicians, or the algorithm itself—a problem known as the "responsibility gap." For board-certified professionals, who are bound by codes of ethics to demonstrate reasonable care, relying on a system that cannot explain its decision-making process is ethically precarious.