Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, September 12, 2019

Morals Ex Machina: Should We Listen To Machines For Moral Guidance?

Michael Klenk
3QuarksDaily.com
Originally posted August 12, 2019

Here are two excerpts:

The prospects of artificial moral advisors depend on two core questions: Should we take ethical advice from anyone anyway? And, if so, are machines any good at morality (or, at least, better than us, so that it makes sense that we listen to them)? I will only briefly be concerned with the first question and then turn to the second question at length. We will see that we have to overcome several technical and practical barriers before we can reasonably take artificial moral advice.

(cut)

The limitation of ethically aligned artificial advisors raises an urgent practical problem, too. From a practical perspective, decisions about values and their operationalisation are taken by the machine’s designers. Taking their advice means buying into preconfigured ethical settings. These settings might not agree with you, and they might be opaque so that you have no way of finding out how specific values have been operationalised. This would require accepting the preconfigured values on blind trust. The problem already exists in machines that give non-moral advice, such as mapping services. For example, when you ask your phone for the way to the closest train station, the device will have to rely on various assumptions about what path you can permissibly take and it may also consider commercial interests of the service provider. However, we should want the correct moral answer, not what the designers of such technologies take that to be.

We might overcome these practical limitations by letting users input their own values and decide about their operationalisation themselves. For example, the device might ask users a series of questions to determine their ethical views and also require them to operationalise each ethical preference precisely. A vegetarian might, for instance, have to decide whether she understands ‘vegetarianism’ to encompass ‘meat-free meals’ or ‘meat-free restaurants.’ Doing so would give us personalised moral advisors that could help us live more consistently by our own ethical rules.

However, it would then be unclear how specifying our individual values, and their operationalisation, improves our moral decision making instead of merely helping individuals to satisfy their preferences more consistently.
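As a rough illustration of the personalisation idea described in the excerpt, here is a minimal sketch (not from Klenk's article) of an advisor that simply filters options against a rule the user has stated and operationalised herself. The `Preference` class, the `advise` function, and the sample restaurant data are hypothetical, chosen only to mirror the vegetarianism example.

```python
# Minimal sketch, assuming a user-specified value plus a chosen operationalisation.
# All names here (Preference, advise, the sample data) are illustrative, not from the article.

from dataclasses import dataclass

@dataclass
class Preference:
    value: str                # e.g. "vegetarianism"
    operationalisation: str   # e.g. "meat-free meals" or "meat-free restaurants"

def advise(options, preference):
    """Return only the options consistent with the user's own operationalised rule."""
    if preference.operationalisation == "meat-free restaurants":
        return [o for o in options if not o["serves_meat"]]
    if preference.operationalisation == "meat-free meals":
        return [o for o in options if o["has_meat_free_meal"]]
    return options

restaurants = [
    {"name": "Green Table", "serves_meat": False, "has_meat_free_meal": True},
    {"name": "Mixed Grill", "serves_meat": True,  "has_meat_free_meal": True},
]

# A stricter operationalisation of the same value filters out more options.
print(advise(restaurants, Preference("vegetarianism", "meat-free restaurants")))
print(advise(restaurants, Preference("vegetarianism", "meat-free meals")))
```

A filter like this only enforces the user's own rule more consistently; it does not say whether "meat-free meals" or "meat-free restaurants" is the better reading of vegetarianism, which is exactly the gap the excerpt's final paragraph points to.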

The info is here.