Iftikhar, Z., et al. (2025). Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1311-1323.
Abstract
Large language models (LLMs) were not designed to replace healthcare workers, but they are being used in ways that can lead users to overestimate the types of roles that these systems can assume. While prompt engineering has been shown to improve LLMs' clinical effectiveness in mental health applications, little is known about whether such strategies help models adhere to ethical principles for real-world deployment. In this study, we conducted an 18-month ethnographic collaboration with mental health practitioners (three clinically licensed psychologists and seven trained peer counselors) to map LLM counselors' behavior during a session to professional codes of conduct established by organizations like the American Psychological Association (APA). Through qualitative analysis and expert evaluation of N=137 sessions (110 self-counseling; 27 simulated), we outline a framework of 15 ethical violations mapped to 5 major themes. These include: Lack of Contextual Understanding, where the counselor fails to account for users' lived experiences, leading to oversimplified, contextually irrelevant, and one-size-fits-all interventions; Poor Therapeutic Collaboration, where the counselor's low turn-taking behavior and invalidating outputs limit users' agency over their therapeutic experience; Deceptive Empathy, where the counselor's simulated anthropomorphic responses ("I hear you", "I understand") create a false sense of emotional connection; Unfair Discrimination, where the counselor's responses exhibit algorithmic bias and cultural insensitivity toward marginalized populations; and Lack of Safety & Crisis Management, where individuals who are "knowledgeable enough" to correct LLM outputs are at an advantage, while others, due to a lack of clinical knowledge and digital literacy, are more likely to be harmed by clinically inappropriate responses. Reflecting on these findings through a practitioner-informed lens, we argue that reducing psychotherapy, a deeply meaningful and relational process, to a language generation task can have serious and harmful implications in practice. We conclude by discussing policy-oriented accountability mechanisms for emerging LLM counselors.
This is a must-read article for those interested in AI technologies in the practice of psychology.
This practitioner-informed study examines how large language models (LLMs) prompted to function as mental health counselors systematically violate established ethical standards in psychotherapy practice. Through an 18-month ethnographic collaboration with three licensed psychologists and seven trained peer counselors, the researchers analyzed 137 counseling sessions and identified 15 distinct ethical violations organized into five critical themes: (1) Lack of Contextual Understanding, where LLMs deliver rigid, one-size-fits-all interventions that dismiss clients' lived experiences and sociocultural contexts; (2) Poor Therapeutic Collaboration, manifesting as conversational imbalances, over-validation of harmful beliefs, and even gaslighting behaviors that undermine client agency; (3) Deceptive Empathy, wherein formulaic phrases like "I understand" create a false therapeutic alliance without genuine relational capacity; (4) Unfair Discrimination, including gender, cultural, and religious biases that marginalize non-dominant identities; and (5) Lack of Safety & Crisis Management, where models fail to recognize the boundaries of their competence, mishandle suicidal ideation, or abandon distressed users. Crucially, these risks persisted even when models were prompted with evidence-based techniques such as CBT, leading the authors to argue that psychotherapy, a deeply relational, interpretive, and ethically governed practice, cannot be reduced to a language generation task. For psychologists, the findings underscore the importance of maintaining professional oversight, critically evaluating AI-assisted tools against ethical codes (e.g., APA Standards 2.01, 3.01, and 3.04), and advocating for regulatory frameworks that ensure accountability, client safety, and fidelity to the therapeutic relationship.