Resource Pages

Thursday, October 30, 2025

Regulating AI in Mental Health: Ethics of Care Perspective

Tavory, T. (2024).
JMIR Mental Health, 11, e58493.

Abstract

This article contends that the responsible artificial intelligence (AI) approach—which is the dominant ethics approach ruling most regulatory and ethical guidance—falls short because it overlooks the impact of AI on human relationships. Focusing only on responsible AI principles reinforces a narrow concept of accountability and responsibility of companies developing AI. This article proposes that applying the ethics of care approach to AI regulation can offer a more comprehensive regulatory and ethical framework that addresses AI’s impact on human relationships. This dual approach is essential for the effective regulation of AI in the domain of mental health care. The article delves into the emergence of the new “therapeutic” area facilitated by AI-based bots, which operate without a therapist. The article highlights the difficulties involved, mainly the absence of a defined duty of care toward users, and shows how implementing ethics of care can establish clear responsibilities for developers. It also sheds light on the potential for emotional manipulation and the risks involved. In conclusion, the article proposes a series of considerations grounded in the ethics of care for the developmental process of AI-powered therapeutic tools.

Here are some thoughts:

This article argues that current AI regulation in mental health, largely guided by the “responsible AI” framework, falls short because it prioritizes principles like autonomy, fairness, and transparency while neglecting the profound impact of AI on human relationships, emotions, and care. Drawing on the ethics of care (a feminist-informed moral perspective that emphasizes relationality, vulnerability, context, and responsibility), the author contends that developers of AI-based mental health tools, such as therapeutic chatbots, must be held to standards akin to those of human clinicians. The piece highlights risks such as emotional manipulation, abrupt termination of AI “support,” commercial exploitation of sensitive data, and the illusion of empathy, all of which can harm vulnerable users.

It calls for a dual regulatory approach: retaining responsible AI safeguards while integrating ethics-of-care principles such as attentiveness to user needs, competence in care delivery, responsiveness to feedback, and collaborative, inclusive design. The article proposes practical measures, including clinical validation, ethical review committees, heightened confidentiality standards, and built-in pathways to human support. It urges psychologists and regulators to ensure that AI enhances, rather than erodes, the relational core of mental health care.