Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, March 25, 2015

Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents

By B.F. Malle, M. Scheutz, T. Arnold, J. Voiklis, and C. Cusimano
HRI '15, March 2-5, 2015

Abstract

Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.

The entire article is here.