Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, May 19, 2024

AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy

P. Schoenegger, P. S. Park, E. Karger, P. E. Tetlock


Large language models (LLMs) show impressive capabilities, matching and sometimes exceeding human performance in many domains. This study explores the potential of LLMs to augment judgement in forecasting tasks. We evaluated the impact on forecasting accuracy of two GPT-4-Turbo assistants: one designed to provide high-quality advice ('superforecasting'), and the other designed to be overconfident and base-rate-neglecting. Participants (N = 991) had the option to consult their assigned LLM assistant throughout the study, in contrast to a control group that used a less advanced model (DaVinci-003) without direct forecasting support. Our preregistered analyses reveal that LLM augmentation significantly enhances forecasting accuracy by 23% across both types of assistants, compared to the control group. This improvement occurs despite the superforecasting assistant's higher accuracy in predictions, indicating the augmentation's benefit is not solely due to model prediction accuracy. Exploratory analyses showed a pronounced effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 43%, compared with 28% for the biased assistant. We further examine whether LLM augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our findings do not consistently support these hypotheses. Our results suggest that access to an LLM assistant, even a biased one, can be a helpful decision aid in cognitively demanding tasks where the answer is not known at the time of interaction.

This paper investigates the use of large language models (LLMs) such as GPT-4 as augmentation tools to improve human forecasting accuracy on questions about future events. The key findings from the authors' preregistered study with 991 participants are:
  1. LLM augmentation, both with a "superforecasting" prompt and a biased prompt, significantly improved individual forecasting accuracy by around 23% compared to a control group using a simpler language model without direct forecasting support.
  2. There was no statistically significant difference in accuracy between the superforecasting and biased LLM augmentation conditions, despite the superforecasting model providing more accurate solo forecasts initially.
  3. The effect of LLM augmentation did not differ significantly between high- and low-skilled forecasters.
  4. Results on whether LLM augmentation improved or degraded aggregate forecast accuracy were mixed across preregistered and exploratory analyses.
  5. LLM augmentation did not have a significantly different effect on easier versus harder forecasting questions in preregistered analyses.
The paper argues that LLM augmentation can serve as a decision aid that improves human forecasting on novel questions, even when the LLM performs poorly at the task on its own. However, the mechanisms behind these improvements require further study.