Resource Pages

Monday, June 16, 2025

The impact of AI errors in a human-in-the-loop process

Agudo, U., Liberal, K. G., et al. (2024). The impact of AI errors in a human-in-the-loop process. Cognitive Research: Principles and Implications, 9(1).

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time at which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.

Here are some thoughts:

This study examines the impact of AI errors in human-in-the-loop processes, in which humans and AI systems collaborate on decisions. The research specifically investigates how the timing of AI support influences human judgment and decision accuracy. The findings indicate that incorrect algorithmic support degrades human judgment, particularly when it is provided before the person forms their own judgment, leading to decreased accuracy. This work highlights the complexities of human-computer interaction in automated decision-making and underscores the need for a deeper understanding of how AI support systems can be integrated to minimize errors and biases.

This is important for psychologists because it sheds light on the cognitive biases and decision-making processes at work when humans interact with AI systems, an increasingly relevant area of study in the field. Understanding these interactions can help psychologists develop interventions and strategies to mitigate negative effects such as automation bias, and to improve the design of human-computer interfaces so that decision accuracy is optimized and errors are reduced across sectors including public service, healthcare, and justice.