Lane, J. N., Boussioux, L., et al. (2025)
Working Paper: Harvard Business Review
Abstract
Do AI-generated narrative explanations enhance human oversight or diminish it? We investigate this question through a field experiment with 228 evaluators screening 48 early-stage innovations under three conditions: human-only, black-box AI recommendations without explanations, and narrative AI with explanatory rationales. Across 3,002 screening decisions, we uncover a human-AI oversight paradox: under the high cognitive load of rapid innovation screening, AI-generated explanations increase reliance on AI recommendations rather than strengthening human judgment, potentially reducing meaningful human oversight. Screeners assisted by AI were 19 percentage points more likely to align with AI recommendations, an effect that was strongest when the AI advised rejection. Considering in-depth expert evaluations of the solutions, we find that while both AI conditions outperformed human-only screening, narrative AI showed no quality improvements over black-box recommendations despite higher compliance rates and may actually increase rejection of high-potential solutions. These findings reveal a fundamental tension: AI assistance improves overall screening efficiency and quality, but narrative persuasiveness may inadvertently filter out transformative innovations that deviate from standard evaluation frameworks.
Here are some thoughts:
This paper matters to psychologists because it examines the dynamics of human-AI collaboration, specifically how AI-generated narratives shape decision-making under high cognitive load. By probing the psychological mechanisms behind algorithm aversion and algorithm appreciation, the study extends theories of bounded rationality, showing how evaluators fall back on mental shortcuts when facing complex judgments. The findings indicate that while AI narratives increase alignment with recommendations, they paradoxically produce cognitive substitution rather than complementarity: evaluators scrutinize the underlying information less, not more. This has significant implications for understanding how people make decisions in uncertain, cognitively demanding environments, particularly when screening early-stage innovations.
The paper also illuminates the psychological functions of narratives beyond their informational value, showing how persuasiveness and coherence shape trust and decision-making. Psychologists can draw on this research to understand how individuals use narratives to justify decisions, diffuse accountability, and reduce cognitive burden. Its treatment of phenomena such as the "illusion of explanatory depth" and the removal of beneficial cognitive friction deepens our understanding of how people interact with AI systems, particularly in contexts requiring subjective judgment and creativity. The work also raises critical questions about responsibility attribution, trust, and the psychological safety of deferring to AI recommendations, making it highly relevant to the study of human behavior in increasingly automated environments. Overall, the paper contributes empirical evidence that can inform psychological theories of decision-making, heuristics, and technology adoption.