Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023). Moral uncertainty and our relationships with unknown minds. Cambridge Quarterly of Healthcare Ethics, 32(4), 482–495. doi:10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. He frames this as an ethical-epistemic challenge: since we cannot settle the underlying moral questions directly, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions.
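To make the idea of a meta-moral decision rule concrete: the best-known such rule in the moral uncertainty literature is "maximize expected choice-worthiness" (MEC). As a rough sketch (the gloss and notation here are mine, not Danaher's):

$$\mathrm{EC}(a) = \sum_{i} P(T_i)\,\mathrm{CW}(a \mid T_i)$$

where $P(T_i)$ is one's credence in moral theory $T_i$ and $\mathrm{CW}(a \mid T_i)$ is how choice-worthy action $a$ is if $T_i$ is true; the rule then recommends the action with the highest expected choice-worthiness across all the theories one takes seriously.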

One argument Danaher examines in detail is the "risk asymmetry argument," which claims that the risks of accepting or rejecting certain moral claims may be sufficiently asymmetrical to warrant favoring one practical resolution of the uncertainty over another. Applied to human-AI relationships, the question is which error would be worse: wrongly withholding moral consideration from entities that merit it, or wrongly extending it to entities that do not. Danaher's answer is that everything turns on a proper, empirically grounded assessment of the risks on each side, and that the balance may fall differently for the two questions at issue: skepticism is defensible on basic moral status, while greater openness is defensible on the possibility of meaningful relationships.
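A toy calculation (my illustration, not Danaher's) shows how the asymmetry is supposed to work. Let $p$ be our credence that a given AI system has moral standing, $C_{\text{wrong}}$ the moral cost of mistreating an entity that really does have standing, and $C_{\text{waste}}$ the cost of needlessly extending moral consideration to a mindless system. Caution is favored whenever

$$p \cdot C_{\text{wrong}} > (1 - p) \cdot C_{\text{waste}},$$

so if $C_{\text{wrong}}$ vastly exceeds $C_{\text{waste}}$, even a small credence $p$ tips the balance toward caution. But the inequality can run the other way: if over-attribution carries serious costs of its own, the asymmetry flips, which is why the argument only works given a proper, empirically grounded assessment of the risks on each side.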

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.