Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, December 3, 2021

A rational reinterpretation of dual-process theories

S. Milli, F. Lieder, & T. L. Griffiths
Cognition
Volume 217, December 2021, 104881

Abstract

Highly influential “dual-process” accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding which system should be used when. Furthermore, we find that there is a plausible range of conditions under which it is optimal to be equipped with a fast system that performs no deliberation (“System 1”) and a slow system that achieves a higher expected accuracy through deliberation (“System 2”). Our findings thereby suggest a rational reinterpretation of dual-process theories.
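
To make the tradeoff in the abstract concrete, here is a small toy simulation (my own illustrative sketch, not the authors' model): expected accuracy is assumed to rise with the number of systems, with diminishing returns that depend on how variable the environment is, while the cost of selecting among systems is assumed to grow with their number. Every functional form and constant below is a hypothetical assumption chosen for illustration.

# A minimal toy sketch (not the paper's actual model) of the tradeoff the
# abstract describes: more cognitive systems let the agent match a wider
# range of situations (higher expected accuracy), but choosing among them
# takes longer (higher selection cost). All functional forms and constants
# here are illustrative assumptions, not values from Milli et al. (2021).

import numpy as np

def expected_accuracy(k, env_variability):
    # Assumed diminishing returns: each extra system covers less new ground.
    # Higher environmental variability makes additional systems more useful.
    return 1.0 - np.exp(-k / env_variability)

def selection_cost(k, cost_per_option=0.05):
    # Assumed cost of deciding which system to use, growing with the number
    # of options (e.g., time spent on metareasoning or arbitration).
    return cost_per_option * np.log2(k)

def net_utility(k, env_variability):
    return expected_accuracy(k, env_variability) - selection_cost(k)

for variability in (1.0, 3.0, 10.0):
    ks = np.arange(1, 21)
    utilities = [net_utility(k, variability) for k in ks]
    best_k = ks[int(np.argmax(utilities))]
    print(f"env variability {variability:>4}: best number of systems = {best_k}")

Under these assumptions, the utility-maximizing number of systems grows with environmental variability and shrinks as selection becomes costlier, which is the qualitative pattern the abstract describes.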

From the General Discussion

While we have formulated the function of selecting between multiple cognitive systems as metareasoning, this does not mean that the mechanisms through which this function is realized have to involve any form of reasoning. Rather, our analysis holds for any selection or arbitration mechanism for which having more cognitive systems incurs a higher cognitive cost. This also applies to model-free mechanisms that choose decision systems based on learned associations: the more actions there are, the longer it takes model-free reinforcement learning to converge to a good solution, and the suboptimal choices made during the learning phase can be costly.
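
The point about model-free arbitration can be illustrated with a simple multi-armed-bandit sketch (again my own assumption-laden example, not code from the paper): treating each decision system as an arm, an epsilon-greedy learner accumulates more regret during learning as the number of systems grows, because exploration is spread over more options. The accuracies, trial counts, and learning rule below are all hypothetical.

# A small illustrative simulation of the point above: if a model-free learner
# picks among decision systems like arms of a bandit, adding more systems
# slows convergence and increases the cost of suboptimal choices while learning.

import random

def run_bandit(n_systems, n_trials=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Hypothetical true accuracies of each decision system; the best arm is
    # always 0.9, so only the number of distractor systems changes.
    true_accuracy = [0.5 + 0.4 * i / max(n_systems - 1, 1) for i in range(n_systems)]
    estimates = [0.0] * n_systems
    counts = [0] * n_systems
    regret = 0.0
    best = max(true_accuracy)
    for _ in range(n_trials):
        if rng.random() < epsilon:
            choice = rng.randrange(n_systems)          # explore a random system
        else:
            choice = estimates.index(max(estimates))   # exploit the current best estimate
        reward = 1.0 if rng.random() < true_accuracy[choice] else 0.0
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        regret += best - true_accuracy[choice]
    return regret

for n in (2, 5, 20):
    print(f"{n:>2} systems: cumulative regret during learning ≈ {run_bandit(n):.1f}")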

The emerging connection between normative modeling and dual-process theories is remarkable because the findings from these approaches are often invoked to support opposite views on human (ir)rationality (Stanovich, 2011). In this debate, some authors (Ariely, 2009; Marcus, 2009) have interpreted the existence of a fast, error-prone cognitive system whose heuristics violate the rules of logic, probability theory, and expected utility theory as a sign of human irrationality. By contrast, our analysis suggests that having a fast but fallible cognitive system in addition to a slow but accurate system might be the best possible solution. This implies that the variability, fallibility, and inconsistency of human judgment that result from people's switching between System 1 and System 2 should not be interpreted as evidence for human irrationality, because they might instead reflect the rational use of limited cognitive resources.