Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, December 31, 2015

How do you teach a machine to be moral?

By Francesca Rossi
The Washington Post
Originally published November 5, 2015

Here is an excerpt:

For this cooperation to work safely and beneficially for both humans and machines, artificial agents should follow moral values and ethical principles (appropriate to where they will act), as well as safety constraints. When directed to achieve a set of goals, agents should ensure that their actions do not violate these principles and values, whether overtly or through negligence in performing risky actions.

It would be easier for humans to accept and trust machines that behave as ethically as we do, and these principles would make it easier for artificial agents to determine their actions and explain their behavior in terms understandable by humans. Moreover, if machines and humans needed to make decisions together, shared moral values and ethical principles would facilitate consensus and compromise. Imagine a room full of physicians trying to decide on the best treatment for a patient with a difficult case. Now add an artificial agent that has read everything that has been written about the patient's disease and similar cases, and thus can help the physicians compare the options and make a much more informed choice. To be trustworthy, the agent should care about the same values as the physicians: curing the disease should not come at the detriment of the patient's well-being.

The entire article is here.