Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, March 14, 2024

A way forward for responsibility in the age of AI

Gogoshin, D.L.
Inquiry (2024)

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions – one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to these practices? The debate concerning ‘machine morality’ often hinges on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and its associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that such agents neither introduce new problems for the moral responsibility system nor threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


Here is my summary:

While AI may never possess true moral agency, it is still crucial to consider how the development and use of AI can be made responsible. The author challenges the assumption that AI's lack of moral responsibility inherently creates problems for our system of moral responsibility. Instead, they focus on the "goods" this system provides, namely the benefits attached to fitting blame and praise, and argue that these can be upheld even in the presence of artificial agents. To secure this, the author proposes several steps, including:
  1. Shifting the focus from AI's moral agency to the agency of those who design, build, and use it. This means holding these individuals accountable for the societal impacts of AI.
  2. Developing clear ethical guidelines for AI development and use. These guidelines should be comprehensive, addressing issues like fairness, transparency, and accountability.
  3. Creating robust oversight mechanisms. This could involve independent bodies that monitor the development and use of AI and have the power to intervene when necessary.
  4. Promoting public understanding of AI. This will help people make informed decisions about how AI is used in their lives and hold developers and users accountable.