Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, November 18, 2025

How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework

Iftikhar, Z., et al. (2025). Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1311–1323.

Abstract

Large language models (LLMs) were not designed to replace healthcare workers, but they are being used in ways that can lead users to overestimate the types of roles that these systems can assume. While prompt engineering has been shown to improve LLMs' clinical effectiveness in mental health applications, little is known about whether such strategies help models adhere to ethical principles for real-world deployment. In this study, we conducted an 18-month ethnographic collaboration with mental health practitioners (three clinically licensed psychologists and seven trained peer counselors) to map LLM counselors' behavior during a session to professional codes of conduct established by organizations like the American Psychological Association (APA). Through qualitative analysis and expert evaluation of N=137 sessions (110 self-counseling; 27 simulated), we outline a framework of 15 ethical violations mapped to 5 major themes. These include: Lack of Contextual Understanding, where the counselor fails to account for users' lived experiences, leading to oversimplified, contextually irrelevant, and one-size-fits-all interventions; Poor Therapeutic Collaboration, where the counselor's low turn-taking behavior and invalidating outputs limit users' agency over their therapeutic experience; Deceptive Empathy, where the counselor's simulated anthropomorphic responses ("I hear you", "I understand") create a false sense of emotional connection; Unfair Discrimination, where the counselor's responses exhibit algorithmic bias and cultural insensitivity toward marginalized populations; and Lack of Safety & Crisis Management, where individuals who are "knowledgeable enough" to correct LLM outputs are at an advantage, while others, due to lack of clinical knowledge and digital literacy, are more likely to suffer from clinically inappropriate responses. Reflecting on these findings through a practitioner-informed lens, we argue that reducing psychotherapy, a deeply meaningful and relational process, to a language generation task can have serious and harmful implications in practice. We conclude by discussing policy-oriented accountability mechanisms for emerging LLM counselors.

Here are some thoughts.

This research is highly insightful because it moves beyond theoretical risk assessments and uses clinical expertise to evaluate LLM behavior in quasi-real-world interactions. The methodology—using both trained peer counselors in an ethnographic setting and licensed psychologists evaluating simulated sessions—provides a robust, practitioner-informed perspective that directly maps model outputs to concrete APA ethical codes. 

The paper highlights a fundamental incompatibility between the LLM's design and the essence of psychotherapy. The "Validates Unhealthy Beliefs" finding is particularly alarming: it suggests the model's tendency toward over-validation transforms the therapeutic alliance from a collaborative partnership, one that often requires challenging maladaptive thoughts, into a passive and potentially harmful reinforcement loop. Most critically, the findings on "Abandonment" and poor "Crisis Navigation" serve as a clear indictment of LLMs in high-stakes mental health roles. An LLM's failure to intervene appropriately during a crisis is not merely an ethical violation; it is an unmitigated risk of harm to vulnerable users.

This article thus serves as a crucial, evidence-based call to action. It demonstrates that current prompt engineering efforts are insufficient to safeguard against these deeply ingrained ethical risks, and it underscores the urgent need for clear legal guidelines and regulatory frameworks to protect users from the potentially severe harm posed by emerging LLM counselors.