Mussgnug, A. M. (2025). Philosophy & Technology, 38, Article 106.
Abstract
Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. As a result, AI applications can disregard existing norms, best practices, and regulations with often dire ethical and social consequences. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models and AI agents, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.
Here are some thoughts:
This article is important to psychologists because it highlights how AI systems, particularly in mental health care, often disregard long-established ethical norms and professional standards. Drawing on the concept of contextual integrity, it underscores that ethical practices in domains such as psychology (confidentiality, informed consent, diagnostic best practices) evolved gradually to protect patients and ensure responsible care. AI systems, especially mental health chatbots and diagnostic tools, frequently fail to uphold these standards, leading to privacy breaches, misdiagnoses, and the erosion of patient trust.
The article warns that AI ethics efforts sometimes treat AI as new moral territory, detached from existing professional contexts, a framing that can seem to legitimize the disregard of established norms. For psychologists, this raises critical concerns about how AI is integrated into clinical practice, the potential for AI to distort public understanding of mental health, and the need for an integrative approach to AI ethics, one that prioritizes the responsible incorporation of AI within existing ethical frameworks rather than treating AI as an isolated ethical domain. Psychologists must therefore be actively involved in shaping AI ethics to ensure that technological advances support, rather than undermine, the core values and responsibilities of psychological practice.