Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, November 28, 2024

Effects of Personalization on Credit and Blame for AI-Generated Content: Evidence from Four Countries

Earp, B. D., et al. (2024, July 15).
Annals of the New York Academy of Sciences (in press).

Abstract

Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility—specifically, the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced (i.e., without AI assistance) by the same human? We conducted pre-registered experiments with representative samples (N = 1,802) from four countries (US, UK, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, and without AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.


Here are some thoughts:

The studies indicate that artificial intelligence and machine learning models often fail to accurately reproduce human judgments about rule violations and other normative decisions. This discrepancy arises primarily from how data is collected and labeled for training these models. Descriptive labeling, which asks annotators only whether a factual feature is present, tends to produce models that judge more harshly than models trained on normative labels, where annotators are explicitly asked whether a rule was violated. This pattern aligns with research on human decision-making, which suggests that people tend to be more lenient when making normative judgments than when making descriptive assessments.
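To make the labeling effect concrete, here is a minimal, synthetic sketch (not the studies' actual code or data): the same items are labeled two ways, "descriptively" at a lower severity threshold and "normatively" at a higher one, and a classifier is trained on each label set. The thresholds, features, and model choice are illustrative assumptions only; the point is that the descriptively trained model flags a larger share of items.

```python
# Toy illustration of descriptive vs. normative labeling (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                       # item features
score = X @ np.array([1.0, 0.8, 0.5, 0.2, 0.1])   # latent "severity" of each item

# Descriptive labels: a factual feature is present above a low threshold (stricter).
y_descriptive = (score > 0.0).astype(int)
# Normative labels: annotators call it a rule violation only above a higher threshold.
y_normative = (score > 1.0).astype(int)

X_train, X_test, d_train, _, n_train, _ = train_test_split(
    X, y_descriptive, y_normative, test_size=0.3, random_state=0
)

clf_descriptive = LogisticRegression().fit(X_train, d_train)
clf_normative = LogisticRegression().fit(X_train, n_train)

# The descriptively trained model flags more items, mirroring the harsher
# judgments described above.
print("flag rate (descriptive labels):", clf_descriptive.predict(X_test).mean())
print("flag rate (normative labels):  ", clf_normative.predict(X_test).mean())
```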

The asymmetry in judgments between AI models and humans has significant implications for various fields, including recruitment, content moderation, and criminal justice. For instance, AI models trained on descriptively labeled data may make stricter judgments about rule violations or candidate suitability than human decision-makers would. This could lead to unfair outcomes in hiring processes, social media content moderation, or even criminal sentencing.

These findings relate to broader research on cognitive biases and decision-making heuristics in humans. Just as humans exhibit biases in their judgments, AI models can inherit and even amplify these biases through the data they are trained on and the algorithms they use. The challenge lies in developing AI systems that can more accurately replicate human normative judgments while avoiding the pitfalls of human cognitive biases.

Furthermore, the research highlights the importance of transparency in data collection and model training processes. Understanding how data is gathered and labeled is crucial for predicting and mitigating potential biases in AI-driven decision-making systems. This aligns with calls for explainable AI and ethical AI development in various fields.

In conclusion, these studies underscore the complex relationship between human judgment and AI decision-making, and the asymmetry between descriptive and normative judgments. They emphasize the need for careful consideration of data collection methods, model training processes, and the potential impacts of AI deployment across domains. Future research could focus on developing methods to better align AI judgments with human normative decisions while maintaining the benefits of AI's data processing capabilities.