Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Enablers.

Saturday, July 17, 2021

Bad machines corrupt good morals

Köbis, N., Bonnefon, J.-F., & Rahwan, I.
Nat Hum Behav 5, 679–685 (2021). 
https://doi.org/10.1038/s41562-021-01128-2

Abstract

As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.

From the end of the article:

Another policy-relevant research question is how to integrate awareness for the corrupting force of AI tools into the innovation process. New AI tools hit the market on a daily basis. The current approach of ‘innovate first, ask for forgiveness later’ has caused considerable backlash and even demands for banning AI technology such as facial recognition. As a consequence, ethical considerations must enter the innovation and publication process of AI developments. Current efforts to develop ethical labels for responsible AI and to crowdsource citizens’ preferences about ethical AI are mostly concerned with the direct unethical consequences of AI behaviour, not with its influence on the ethical conduct of the humans who interact with and through it. A thorough experimental approach to responsible AI will need to expand concerns about direct AI-induced harm to concerns about how bad machines can corrupt good morals.

Friday, August 14, 2015

Distributed Morality in an Information Society

Luciano Floridi
Sci Eng Ethics (2013) 19:727–743
https://doi.org/10.1007/s11948-012-9413-4

Abstract

The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can significantly facilitate or hinder both positive and negative moral behaviours.

Here is an excerpt from the conclusion:

The conclusion is that an information society is a better society if it can implement an array of moral enablers, an infraethics that is, that can support and facilitate the right sort of DM, while preventing the occurrence and strengthening of moral hinderers. Agents (including, most importantly, the State) are better agents insofar as they not only take advantage of, but also foster the right kind of moral facilitation properly geared to the right kind of distributed morality. It is a complicated scenario, but refusing to acknowledge it will not make it go away.

The entire paper is here.