Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, February 14, 2019

Can artificial intelligences be moral agents?

Bartosz Brożek and Bartosz Janik
New Ideas in Psychology
Available online 8 January 2019


The paper addresses the question of whether artificial intelligences can be moral agents. We begin by observing that philosophical accounts of moral agency, in particular Kantianism and utilitarianism, are very abstract theoretical constructions: no human being can ever be a Kantian or a utilitarian moral agent. Ironically, it is easier for a machine to approximate this idealised type of agency than it is for Homo sapiens. We then proceed to outline the structure of human moral practices. Against this background, we identify two conditions of moral agency: internal and external. We argue further that the existing AI architectures are unable to meet these two conditions. In consequence, machines - at least at the current stage of their development - cannot be considered moral agents.

Here is the conclusion:

The second failure of the artificial agents - to meet the internal condition of moral agency - is connected to the fact that their behaviour is not emotion-driven. This makes it impossible for them to take full part in moral practices. A Kantian or a Benthamian machine, acting on a set of abstract rules, would simply be ill-suited to the complex, culture-dependent, and intuition-based practices of any particular community. Finally, both failures are connected: the more human-like machines become, i.e., the more capable they are of fully participating in moral practices, the more likely it is that they will also be recognised as moral agents.
