Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Friday, November 22, 2019

Artificial Intelligence as a Socratic Assistant for Moral Enhancement

Lara, F. & Deckers, J.
Neuroethics (2019).
https://doi.org/10.1007/s12152-019-09401-y

Abstract

The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and deeply controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable for achieving this goal. Whilst several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue is superior to those other proposals.

(cut)

Here is a portion of the Conclusion:

Given our incomplete current knowledge of the biological determinants of moral behaviour and of the use of biotechnology to safely influence such determinants, it is reckless to defend moral bioenhancement, even if it were voluntary. However, the age-old human desire to be morally better must be taken very seriously in a globalised world where local decisions can have far-reaching consequences and where moral corruption threatens the survival of us all. This situation forces us to seek the satisfaction of that desire by means of other technologies. AI could, in principle, be a good option. Since it does not intervene directly in our biology, it can, in principle, be less dangerous and controversial.

However, we argued that it also carries risks. For the exhaustive project, these include the capitulation of human decision-making to machines that we may not understand, and the negation of what makes us ethical human beings. We argued also that even some auxiliary projects that do not promote the surrendering of human decision-making, for example systems that foster decision-making on the basis of moral agents’ own values, may jeopardise the development of our moral capacities if they focus too much on outcomes. Such systems provide insufficient opportunities for individuals to be critical of their values and of the processes by which outcomes are produced, which are essential factors for personal moral progress and for rapprochement between different individuals’ positions.
