Resource Pages

Wednesday, October 1, 2025

Theory Is All You Need: AI, Human Cognition, and Causal Reasoning

Felin, T., & Holweg, M. (2024).
SSRN Electronic Journal.

Abstract

Scholars argue that artificial intelligence (AI) can generate genuine novelty and new knowledge and, in turn, that AI and computational models of cognition will replace human decision making under uncertainty. We disagree. We argue that AI’s data-based prediction is different from human theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input–output devices, using large language models as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than AI’s emphasis on information processing and data-based prediction. AI uses a probability-based approach to knowledge and is largely backward-looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty. We introduce the idea of data–belief asymmetries to highlight the difference between AI and human cognition, using the example of heavier-than-air flight to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to intervene in the world and to engage in directed experimentation to generate new data. Throughout the article, we discuss the implications of our argument for understanding the origins of novelty, new knowledge, and decision making under uncertainty.

Here are some thoughts:

This paper challenges the dominant view that artificial intelligence (AI), particularly large language models (LLMs), mirrors or will soon surpass human cognition. The authors argue against the widespread computational metaphor of the mind, which treats human thinking as data-driven, predictive information processing akin to AI. Instead, they emphasize that human cognition is fundamentally theory-driven and rooted in causal reasoning, experimentation, and the generation of novel, heterogeneous beliefs—often in defiance of existing data or consensus. Drawing on historical examples like the Wright brothers, who succeeded despite prevailing scientific skepticism, the paper illustrates how human progress often stems from seemingly delusional ideas that later prove correct. Unlike AI systems that rely on statistical pattern recognition and next-word prediction over vast datasets, humans engage in counterfactual thinking, intentional intervention, and theory-building, enabling genuine innovation and scientific discovery. The authors caution against over-reliance on prediction-based AI in decision-making, especially under uncertainty, and advocate a "theory-based view" of cognition that prioritizes causal understanding over mere correlation. In essence, they contend that while AI excels at extrapolating from the past, only human theory-making can generate genuinely new knowledge.