Resource Pages

Thursday, June 19, 2025

Large Language Model (LLM) Algorithms in Reshaping Decision-Making and Cognitive Biases in the AI-Leading World: An Experimental Study.

Khatoon, H., Khan, M. L., & Irshad, A. (2025, January 22). PsyArXiv

Abstract

The rise of artificial intelligence (AI) has accelerated decision-making, since AI algorithmic recommendations may help reduce human limitations while increasing decision accuracy and efficiency. Large language model (LLM) algorithms are designed to enhance human decision-making competencies and remove possible cognitive biases. However, these algorithms can themselves be biased and lead to poor decision-making. Building on existing LLMs (i.e., ChatGPT and Perplexity.ai), this study examines whether users who receive AI assistance during task-based decision-making show greater decision-making competence than peers who rely on their own cognitive processes. Using domain-independent LLMs, incentives, and scenario-based decision tasks, we find that the advice these AIs offered in decisive situations was biased and incorrect, resulting in poor decision outcomes. Using publicly accessible LLMs in crucial situations may therefore produce both ineffective outcomes for the advisee and inadvertent consequences for third parties. The findings highlight the need for ethical AI algorithms and for accurately calibrated trust in order to deploy these systems effectively. This raises concerns regarding the use of AI assistance in decision-making without careful oversight.

Here are some thoughts:

This research is important to psychologists because it examines how collaboration with large language models (LLMs) like ChatGPT affects human decision-making, particularly in relation to cognitive biases. By using a modified Adult Decision-Making Competence battery, the study offers empirical data on whether AI assistance improves or impairs judgment. It highlights the psychological dynamics of trust in AI, the risk of overreliance, and the ethical implications of using AI in decisions that impact others. These findings are especially relevant for psychologists interested in cognitive bias, human-technology interaction, and the integration of AI into clinical, organizational, and educational settings.