Köbis, N., Bonnefon, J.-F. & Rahwan, I.
Nat Hum Behav 5, 679–685 (2021).
https://doi.org/10.1038/s41562-021-01128-2
Abstract
As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.
From the end of the article
Another policy-relevant research question is how to integrate awareness of the corrupting force of AI tools into the innovation process. New AI tools hit the market on a daily basis. The current approach of ‘innovate first, ask for forgiveness later’ has caused considerable backlash and even demands for banning AI technology such as facial recognition. As a consequence, ethical considerations must enter the innovation and publication process of AI developments. Current efforts to develop ethical labels for responsible AI and to crowdsource citizens’ preferences about ethical AI are mostly concerned with the direct unethical consequences of AI behaviour, not its influence on the ethical conduct of the humans who interact with and through it. A thorough experimental approach to responsible AI will need to expand concerns about direct AI-induced harm to concerns about how bad machines can corrupt good morals.