Munn, L. (2022). The uselessness of AI ethics. AI and Ethics, 3(3), 869–877.
Abstract
As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
Commentary
This paper is important for several reasons. First, it critically examines how artificial intelligence, now embedded in domains such as healthcare, education, law enforcement, and social services, can perpetuate racial, gendered, and socioeconomic biases under the guise of neutrality and objectivity. These systems can influence or even determine outcomes in mental health diagnostics, hiring, criminal justice risk assessment, and educational tracking, all of which carry profound psychological implications for individuals and communities. Psychologists, particularly those working in clinical, organizational, or forensic settings, need to understand how these technologies shape behavior, identity, and access to resources.
Second, the article highlights how ethical principles guiding AI development are often vague, inconsistently applied, and disconnected from real-world impacts. This raises concerns about the psychological effects of deploying systems that claim to promote fairness or well-being but may actually deepen inequalities or erode trust in institutions. For psychologists involved in policy-making or advocacy, this underscores the need to push for more robust, evidence-based frameworks that consider human behavior, cultural context, and systemic oppression.
Finally, the piece calls attention to the broader sociopolitical systems in which AI operates, urging a shift from abstract ethical statements to concrete actions that address structural inequities. This aligns with growing interest in community psychology and critical approaches that emphasize social justice and the importance of centering marginalized voices. Ultimately, understanding the limitations and risks of current AI ethics frameworks allows psychologists to better advocate for humane, equitable, and psychologically informed technological practices.