Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, November 17, 2023

Humans feel too special for machines to score their morals

Purcell, Z. A., & Bonnefon, J.-F. (2023).
PNAS Nexus, 2(6).

Abstract

Artificial intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems—enabling people and organizations to form judgments of others at scale. However, it also poses significant ethical challenges and is, subsequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people have for AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.

Significance Statement

The potential use of artificial intelligence (AI) to create sophisticated social and moral scoring systems poses significant ethical challenges. To inform the regulation of this technology, it is critical that we understand the attraction or resistance that people have for AI moral scoring. This project develops that understanding across four empirical studies—demonstrating that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.

The full citation for the research is above.

My summary:

Here is another example of "myside bias," in which humans base decisions on their own perceived uniqueness or a better-than-average belief. This research study investigated whether people would accept AI moral scoring systems. The study found that people are unlikely to accept such systems, in large part because they feel too special for machines to score their personal morals.

Specifically, the results showed that people were more likely to accept AI moral scoring systems if they believed the systems were accurate. However, even people who believed the systems were accurate were less likely to accept them if they saw themselves as morally unique.

The study's authors suggest that these findings may stem from people's strong need to feel unique and special, and that people may therefore be hesitant to trust AI systems to accurately assess their moral character.

Key findings:
  • People are unlikely to accept AI moral scoring systems, in large part because they feel too special for machines to score their personal morals.
  • Willingness to accept AI moral scoring is shaped by two factors: the perceived accuracy of the system and the belief that one's own morality is peculiar.
  • Even people who believe the systems are accurate are less likely to accept them if they see themselves as morally unique.