Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, September 9, 2018

People Are Averse to Machines Making Moral Decisions

Yochanan E. Bigman and Kurt Gray
In press, Cognition

Abstract

Do people want autonomous machines making moral decisions? Nine studies suggest that the
answer is ‘no’—in part because machines lack a complete mind. Studies 1-6 find that people
are averse to machines making morally relevant driving, legal, medical, and military decisions,
and that this aversion is mediated by the perception that machines can neither fully think nor
feel. Studies 5-6 find that this aversion exists even when moral decisions have positive outcomes.
Studies 7-9 briefly investigate three potential routes to increasing the acceptability of machine
moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’
perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9).
Although some of these routes show promise, the aversion to machine moral decision-making is
difficult to eliminate. This aversion may prove challenging for the integration of autonomous
technology in moral domains including medicine, the law, the military, and self-driving vehicles.

The research is here.