Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Personalization.

Thursday, August 21, 2025

On the conversational persuasiveness of GPT-4

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025).
Nature Human Behaviour.

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite being crucial for assessing misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.
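
A quick note on how the two headline numbers fit together (my own back-of-envelope reading, assuming even odds as the comparison baseline; the abstract does not spell this out): if personalized GPT-4 prevailed in 64.4% of the non-tied debate pairs, the corresponding odds are 0.644 / (1 − 0.644) ≈ 0.644 / 0.356 ≈ 1.81, that is, roughly 81% higher than even odds, which lines up with the reported +81.2% relative increase in the odds of higher post-debate agreement.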

Here are some thoughts:

This study is highly relevant to psychologists because it raises pressing ethical concerns and offers important implications for clinical and applied settings. Ethically, the research demonstrates that GPT-4 can use even minimal demographic data—such as age, gender, or political affiliation—to personalize persuasive arguments more effectively than human counterparts. This ability to microtarget individuals poses serious risks of manipulation, particularly when users may not be aware of how their personal information is being used. 

For psychologists concerned with informed consent, autonomy, and the responsible use of technology, these findings underscore the need for robust ethical guidelines governing AI-driven communication. 

Importantly, the study has significant relevance for clinical, counseling, and health psychologists. As AI becomes more integrated into mental health apps, health messaging, and therapeutic tools, understanding how machines influence human attitudes and behavior becomes essential. This research suggests that AI could potentially support therapeutic goals—but also has the capacity to undermine trust, reinforce bias, or sway vulnerable individuals in unintended ways.

Thursday, November 28, 2024

Effects of Personalization on Credit and Blame for AI-Generated Content: Evidence from Four Countries

Earp, B. D., et al. (2024, July 15).
Annals of the New York Academy of Sciences
(in press).

Abstract

Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility—specifically, the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced (i.e., without AI assistance) by the same human? We conducted pre-registered experiments with representative samples (N = 1,802) from four countries (US, UK, China, and Singapore). We investigated laypeople’s attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, and without AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.


Here are some thoughts:

The studies indicate that artificial intelligence and machine learning models often fail to accurately reproduce human judgments about rule violations and other normative decisions. This discrepancy arises primarily due to the way data is collected and labeled for training these models. Descriptive labeling, which focuses on identifying factual features, tends to result in harsher judgments compared to normative labeling, where humans are explicitly asked about rule violations. This finding aligns with research on human decision-making, which suggests that people tend to be more lenient when making normative judgments compared to descriptive assessments.

The asymmetry in judgments between AI models and humans has significant implications for various fields, including recruitment, content moderation, and criminal justice. For instance, AI models trained on descriptively labeled data may make stricter judgments about rule violations or candidate suitability than human decision-makers would. This could lead to unfair outcomes in hiring processes, social media content moderation, or even criminal sentencing.

These findings relate to broader research on cognitive biases and decision-making heuristics in humans. Just as humans exhibit biases in their judgments, AI models can inherit and even amplify these biases through the data they are trained on and the algorithms they use. The challenge lies in developing AI systems that can more accurately replicate human normative judgments while avoiding the pitfalls of human cognitive biases.

Furthermore, the research highlights the importance of transparency in data collection and model training processes. Understanding how data is gathered and labeled is crucial for predicting and mitigating potential biases in AI-driven decision-making systems. This aligns with calls for explainable AI and ethical AI development in various fields.

In conclusion, these studies underscore the complex relationship between human judgment, AI decision-making, and the asymmetry between descriptive and normative labeling. They emphasize the need for careful consideration of data collection methods, model training processes, and the potential impacts of AI deployment in various domains. Future research could focus on developing methods to better align AI judgments with human normative decisions while maintaining the benefits of AI's data processing capabilities.

Tuesday, May 28, 2019

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google Search have started to use personalization algorithms in order to deal with the growing amount of data online, often with the aim of reducing “information overload”. The user’s interactions with the system are recorded under a single identity, and information is then personalized for the user on the basis of that identity. However, as we argue, such filters often ignore the context of information and are never value neutral. These algorithms operate without the control or knowledge of the user, leading to a “filter bubble”. In this paper we use the Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. Building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.
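
To make the mechanism the abstract describes a bit more concrete, here is a minimal illustrative sketch (mine, not the authors’; all names such as UserProfile and rank_items are hypothetical) of a personalization filter: interactions are logged under a single user identity and then used to rank content, which is how a feed can gradually narrow into a “filter bubble”.

    from collections import Counter

    class UserProfile:
        """Records a user's interactions under a single persistent identity."""
        def __init__(self, user_id):
            self.user_id = user_id
            self.topic_counts = Counter()

        def record_click(self, topic):
            # Every interaction is folded into the same identity.
            self.topic_counts[topic] += 1

        def affinity(self, topic):
            # Share of past clicks matching this topic (0 if there is no history).
            total = sum(self.topic_counts.values())
            return self.topic_counts[topic] / total if total else 0.0

    def rank_items(profile, items):
        # Order candidate items by inferred topic affinity; items outside the
        # user's past interests sink to the bottom, so repeated use narrows
        # what the user sees.
        return sorted(items, key=lambda item: profile.affinity(item["topic"]), reverse=True)

    profile = UserProfile("user-42")
    for _ in range(5):
        profile.record_click("politics-left")
    profile.record_click("sports")

    feed = rank_items(profile, [
        {"title": "Opposing op-ed", "topic": "politics-right"},
        {"title": "Match recap", "topic": "sports"},
        {"title": "Allied op-ed", "topic": "politics-left"},
    ])
    print([item["title"] for item in feed])  # the like-minded op-ed ranks first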

A copy of the paper is here.