Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Deliberation. Show all posts

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Friday, March 31, 2023

Do conspiracy theorists think too much or too little?

N.M. Brashier
Current Opinion in Psychology
Volume 49, February 2023, 101504

Abstract

Conspiracy theories explain distressing events as malevolent actions by powerful groups. Why do people believe in secret plots when other explanations are more probable? On the one hand, conspiracy theorists seem to disregard accuracy; they tend to endorse mutually incompatible conspiracies, think intuitively, use heuristics, and hold other irrational beliefs. But by definition, conspiracy theorists reject the mainstream explanation for an event, often in favor of a more complex account. They exhibit a general distrust of others and expend considerable effort to find ‘evidence’ supporting their beliefs. In searching for answers, conspiracy theorists likely expose themselves to misleading information online and overestimate their own knowledge. Understanding when elaboration and cognitive effort might backfire is crucial, as conspiracy beliefs lead to political disengagement, environmental inaction, prejudice, and support for violence.

Implications

People who are drawn to conspiracy theories exhibit other stable traits – like lower cognitive ability, intuitive thinking, and proneness to cognitive biases – that suggest they are ‘lazy thinkers.’ On the other hand, conspiracy theorists also exhibit extreme levels of skepticism and expend energy justifying their beliefs; this effortful processing can ironically reinforce conspiracy beliefs. Thus, people carelessly fall down rabbit holes at some points (e.g., when reading repetitive conspiratorial claims) and methodically climb down at others (e.g., when initiating searches online). Conspiracy theories undermine elections, threaten the environment, and harm human health, so it is vitally important that interventions aimed at increasing evaluation and reducing these beliefs do not inadvertently backfire.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).
https://doi.org/10.1038/s41598-022-26209-8

Abstract

Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.

Discussion

In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.
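The two signal-detection quantities the authors separate can be made concrete with a small sketch. Assuming standard equal-variance SDT with a log-linear correction for extreme rates (the counts below are invented for illustration, not the study's data), discrimination ability is d′ and response bias is the criterion c:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute discrimination ability (d') and response bias (criterion c)
    from a veracity-judgement confusion matrix.

    Here a "hit" is a true headline judged true and a "false alarm" is a
    false headline judged true. The log-linear correction (+0.5 / +1)
    avoids infinite z-scores when a rate is exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)             # discrimination ability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive c = conservative
    return d_prime, criterion

# A hypothetical participant who says "false" more often than "true"
d, c = sdt_measures(hits=12, misses=8, false_alarms=4, correct_rejections=16)
print(round(d, 2), round(c, 2))
```

On this parameterization, a positive criterion corresponds to the conservative "judge it false" tendency the authors report; time pressure lowering d′ while leaving c unchanged would appear as a smaller gap between the two z-scores around an unchanged midpoint.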

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off, and indicate that fast-paced news consumption on social media is likely leading to people misjudging the veracity of not only false news, as seen in the study by Bago and colleagues, but also true news. Like in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect in the context of news items that have been subject to interventions such as debunking.

Our results for the response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., avoiding accepting false news as true more than rejecting true news as false) and/or an overall expectation that false news are more prevalent than true news in our experiment. Note that the ratio of true versus false news we used (1:1) is different from the real world, which typically is thought to contain a much smaller fraction of false news. A more ecologically valid experiment with a more representative sample could yield a different response bias. It will, thus, be important for future studies to assess whether participants hold such a bias in the real world, are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). 
Behavioral and Brain Sciences, 1-68. 
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena—from logical reasoning biases, over prosocial behavior, to moral decision-making. The present paper clarifies that despite the popularity, critical assumptions are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch feature. The exclusivity feature refers to the tendency to conceive intuition and deliberation as generating unique responses such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner can decide to shift between more intuitive and deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic—precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years dual process models of thinking have moved to the center stage in research on human reasoning. These models have been instrumental for the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argued against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement. 
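As a rough caricature only (the response labels and threshold below are invented for illustration, not parameters from De Neys's article), the non-exclusive working model can be sketched in a few lines: several intuitions compete within System 1, and System 2 is engaged only when no single intuition clearly dominates.

```python
def system2_engaged(intuition_strengths, threshold=0.3):
    """Toy sketch of a non-exclusive switch mechanism: deliberation is
    triggered by conflict between competing System 1 intuitions, i.e.,
    when the gap between the two strongest activations is small.

    `intuition_strengths` maps candidate responses to activation
    strengths in [0, 1]; names and threshold are illustrative only.
    """
    ranked = sorted(intuition_strengths.values(), reverse=True)
    if len(ranked) < 2:
        return False  # an uncontested intuition wins without deliberation
    # A small gap between the two strongest intuitions = high conflict,
    # which on this account is what engages System 2.
    return (ranked[0] - ranked[1]) < threshold

# A dominant intuition: respond intuitively, no deliberation.
print(system2_engaged({"heuristic": 0.9, "logical": 0.2}))   # → False
# Closely matched intuitions: conflict detected, deliberation engaged.
print(system2_engaged({"heuristic": 0.6, "logical": 0.55}))  # → True
```

The point of the sketch is that both responses are available intuitively; what varies is only their relative activation, which is exactly the move that avoids the exclusivity assumption.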

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork to build future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate— fast and slow—thinking. It is in this sense that I hope that the present paper can help to sketch the building blocks of a more judicious dual process future. 

Saturday, April 9, 2022

Deciding to be authentic: Intuition is favored over deliberation when authenticity matters

K. Oktar & T. Lombrozo
Cognition
Volume 223, June 2022, 105021

Abstract

Deliberative analysis enables us to weigh features, simulate futures, and arrive at good, tractable decisions. So why do we so often eschew deliberation, and instead rely on more intuitive, gut responses? We propose that intuition might be prescribed for some decisions because people's folk theory of decision-making accords a special role to authenticity, which is associated with intuitive choice. Five pre-registered experiments find evidence in favor of this claim. In Experiment 1 (N = 654), we show that participants prescribe intuition and deliberation as a basis for decisions differentially across domains, and that these prescriptions predict reported choice. In Experiment 2 (N = 555), we find that choosing intuitively vs. deliberately leads to different inferences concerning the decision-maker's commitment and authenticity—with only inferences about the decision-maker's authenticity showing variation across domains that matches that observed for the prescription of intuition in Experiment 1. In Experiment 3 (N = 631), we replicate our prior results and rule out plausible confounds. Finally, in Experiment 4 (N = 177) and Experiment 5 (N = 526), we find that an experimental manipulation of the importance of authenticity affects the prescribed role for intuition as well as the endorsement of expert human or algorithmic advice. These effects hold beyond previously recognized influences on intuitive vs. deliberative choice, such as computational costs, presumed reliability, objectivity, complexity, and expertise.

From the Discussion section

Our theory and results are broadly consistent with prior work on cross-domain variation in processing preferences (e.g., Inbar et al., 2010), as well as work showing that people draw social inferences from intuitive decisions (e.g., Tetlock, 2003). However, we bridge and extend these literatures by relating inferences made on the basis of an individual's decision to cross-domain variation in the prescribed roles of intuition and deliberation. Importantly, our work is unique in showing that neither judgments about how decisions ought to be made, nor inferences from decisions, are fully reducible to considerations of differential processing costs or the reliability of a given process for the case at hand. Our stimuli—unlike those used in prior work (e.g., Inbar et al., 2010; Pachur & Spaar, 2015)—involved deliberation costs that had already been incurred at the time of decision, yet participants nevertheless displayed substantial and systematic cross-domain variation in their inferences, processing judgments, and eventual decisions. Most dramatically, our matched-information scenarios in Experiment 3 ensured that effects were driven by decision basis alone. In addition to excluding the computational costs of deliberation and matching the decision to deliberate, these scenarios also matched the evidence available concerning the quality of each choice. Nonetheless, decisions that were based on intuition vs. deliberation were judged differently along a number of dimensions, including their authenticity.

Tuesday, December 7, 2021

Memory and decision making interact to shape the value of unchosen options

Biderman, N., Shohamy, D.
Nat Commun 12, 4648 (2021). 
https://doi.org/10.1038/s41467-021-24907-x

Abstract

The goal of deliberation is to separate between options so that we can commit to one and leave the other behind. However, deliberation can, paradoxically, also form an association in memory between the chosen and unchosen options. Here, we consider this possibility and examine its consequences for how outcomes affect not only the value of the options we chose, but also, by association, the value of options we did not choose. In five experiments (total n = 612), including a preregistered experiment (n = 235), we found that the value assigned to unchosen options is inversely related to their chosen counterparts. Moreover, this inverse relationship was associated with participants’ memory of the pairs they chose between. Our findings suggest that deciding between options does not end the competition between them. Deliberation binds choice options together in memory such that the learned value of one can affect the inferred value of the other.

From the Discussion

We found that stronger memory for the deliberated options is related to a stronger discrepancy between the value assigned to the chosen and unchosen options. This result suggests that choosing between options leaves a memory trace. By definition, deliberation is meant to tease apart the value of competing options in the service of making the decision; our findings suggest that deliberation and choice also bind pairs of choice options in memory. Consequently, unchosen options do not vanish from memory after a decision is made, but rather they continue to linger through their link to the chosen options.

We show that participants use the association between choice options to infer the value of unchosen options. This finding complements and extends previous studies reporting transfer of value between associated items in the same direction, which allows agents to generalize reward value across associated exemplars. For example, in the sensory preconditioning task, pairs of neutral items are associated by virtue of appearing in temporal proximity. Subsequently, just one item gains feedback—it is either rewarded or not. When probed to choose between items that did not receive feedback, participants tend to select those previously paired with rewarded items. In contrast, our participants tended to avoid the items whose counterpart was previously rewarded. Put in learning terms, when the chosen option proved to be successful, participants’ choices in our task reflected avoidance of, rather than approach to, the unchosen option. One important difference between our task and the sensory preconditioning task is the manner in which the association is formed. In both tasks a pair of items appears in close temporal proximity, yet in our task participants are also asked to decide between these items and the act of deliberation seems to result in an inverse association between the deliberated options.



Friday, October 8, 2021

Can induced reflection affect moral decision-making?

Daniel Spears, et al. (2021) 
Philosophical Psychology, 34:1, 28-46, 
DOI: 10.1080/09515089.2020.1861234

Abstract

Evidence about whether reflective thinking may be induced and whether it affects utilitarian choices is inconclusive. Research suggests that answering items correctly in the Cognitive Reflection Test (CRT) before responding to dilemmas may lead to more utilitarian decisions. However, it is unclear to what extent this effect is driven by the inhibition of intuitive wrong responses (reflection) versus the requirement to engage in deliberative processing. To clarify this issue, participants completed either the CRT or the Berlin Numeracy Test (BNT) – which does not require reflection – before responding to moral dilemmas. To distinguish between the potential effect of participants’ previous reflective traits and that of performing a task that can increase reflectivity, we manipulated whether participants received feedback for incorrect items. Findings revealed that both CRT and BNT scores predicted utilitarian decisions when feedback was not provided. Additionally, feedback enhanced performance for both tasks, although it only increased utilitarian decisions when it was linked to the BNT. Taken together, these results suggest that performance in a numeric task that requires deliberative thinking may predict utilitarian responses to moral dilemmas. The finding that feedback increased utilitarian decisions only in the case of BNT casts doubt upon the reflective-utilitarian link.

From the General Discussion

Our data, however, did not fully support these predictions. Although feedback resulted in more utilitarian responses to moral dilemmas, this effect was mostly attributable to feedback on the BNT. The effect was not attributable to differences in baseline task performance. Additionally, both CRT and BNT scores predicted utilitarian responses when feedback was not provided. That performance in the CRT predicts utilitarian decisions is in agreement with a previous study linking cognitive reflection to utilitarian choice (Paxton et al., 2012; but see Sirota, Kostovicova, Juanchich, & Dewberry, preprint, for the absence of an effect when using a verbal CRT without a numeric component).

Tuesday, March 30, 2021

On Dual- and Single-Process Models of Thinking

De Neys, W.
Perspectives on Psychological Science. 
February 2021. 
doi:10.1177/1745691620964172

Abstract

Popular dual-process models of thinking have long conceived intuition and deliberation as two qualitatively different processes. Single-process-model proponents claim that the difference is a matter of degree and not of kind. Psychologists have been debating the dual-process/single-process question for at least 30 years. In the present article, I argue that it is time to leave the debate behind. I present a critical evaluation of the key arguments and critiques and show that—contra both dual- and single-model proponents—there is currently no good evidence that allows one to decide the debate. Moreover, I clarify that even if the debate were to be solved, it would be irrelevant for psychologists because it does not advance the understanding of the processing mechanisms underlying human thinking.

Time to Move On

The dual- versus single-process-model debate has not been resolved; it is questionable whether it can be resolved; and even if it were resolved, it would not inform our theory development about the critical processing mechanisms underlying human thinking. This implies that the debate is irrelevant for the empirical study of thinking. In a sense, the choice between a single- and a dual-process model boils down—quite literally—to a choice between two different religions. Scholars can (and may) have different personal beliefs and preferences as to which model serves their conceptualizing and communicative goals best. What they cannot do, however, is claim that there are good empirical or theoretical scientific arguments to favor one over the other.

I do not contest that the single vs dual process model debate might have been useful in the past. For example, the relentless critique of single process proponents helped to discard the erroneous perfect feature alignment view. Likewise, the work of Evans and Stanovich in trying to pinpoint defining features was helpful to start sketching the descriptive building blocks of the mental simulation and cognitive decoupling process. Hence, I do believe that the debate has had some positive by-products. 

Thursday, February 8, 2018

How can groups make good decisions? Deliberation & Diversity

Mariano Sigman and Dan Ariely
TED Talk
Originally recorded April 2017

We all know that when we make decisions in groups, they don't always go right -- and sometimes they go very wrong. How can groups make good decisions? With his colleague Dan Ariely, neuroscientist Mariano Sigman has been inquiring into how we interact to reach decisions by performing experiments with live crowds around the world. In this fun, fact-filled explainer, he shares some intriguing results -- as well as some implications for how it might impact our political system. In a time when people seem to be more polarized than ever, Sigman says, better understanding how groups interact and reach conclusions might spark interesting new ways to construct a healthier democracy.

Tuesday, October 10, 2017

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is behind a paywall.

You can contact the lead investigator for a personal copy.

Monday, July 18, 2016

Cooperation, Fast and Slow: Meta-Analytic Evidence for a Theory of Social Heuristics and Self-Interested Deliberation

David G. Rand
(In press).
Psychological Science.

Abstract

Does cooperating require the inhibition of selfish urges? Or does “rational” self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games (total N = 17,647; no indication of publication bias using Egger’s test, Begg’s test, or p-curve). My meta-analysis was guided by the Social Heuristics Hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is never in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted relative to deliberation, but no significant difference in strategic cooperation between intuitive and deliberative conditions.
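For readers unfamiliar with the pooling step behind numbers like these, here is a minimal sketch of DerSimonian–Laird random-effects meta-analysis, the standard approach in this literature. The study-level effects and variances below are invented for illustration; this is not Rand's data or code.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes with the DerSimonian-Laird
    random-effects estimator: estimate between-study variance (tau^2)
    from Cochran's Q, then reweight each study by 1/(v_i + tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and the tau^2 estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights and pooled estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se

# Hypothetical per-study effects (e.g., intuition-minus-deliberation
# differences in proportion cooperating) with their sampling variances.
effect, se = dersimonian_laird([0.20, 0.10, 0.25, 0.15],
                               [0.01, 0.02, 0.015, 0.01])
```

When the studies are homogeneous (Q ≤ df), tau² is truncated to zero and the estimate reduces to the fixed-effect inverse-variance average.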


Monday, February 15, 2016

When Deliberation Isn’t Smart

By Adam Bear and David Rand
Evonomics
Originally published January 25, 2016

Cooperation is essential for successful organizations. But cooperating often requires people to put others’ welfare ahead of their own. In this post, we discuss recent research on cooperation that applies the “Thinking, fast and slow” logic of intuition versus deliberation. We explain why people sometimes (but not always) cooperate in situations where it’s not in their self-interest to do so, and show how properly designed policies can build “habits of virtue” that create a culture of cooperation. TL;DR summary: intuition favors behaviors that are typically optimal, so institutions that make cooperation typically advantageous lead people to adopt cooperation as their intuitive default; this default then “spills over” into settings where it’s not actually individually advantageous to cooperate.

Life is full of opportunities to make personal sacrifices on behalf of others, and we often rise to the occasion. We do favors for co-workers and friends, give money to charity, donate blood, and engage in a host of other cooperative endeavors. Sometimes, these nice deeds are reciprocated (like when we help out a friend, and she helps us with something in return). Other times, however, we pay a cost and get little in return (like when we give money to a homeless person whom we’ll never encounter again).

Although you might not realize it, nowhere is the importance of cooperation more apparent than in the workplace. If your boss is watching you, you’d probably be wise to be a team player and cooperate with your co-workers, since doing so will enhance your reputation and might even get you a promotion down the road. In other instances, though, you might get no recognition for, say, helping out a fellow employee who needs assistance meeting a deadline, or who calls out sick.


Monday, February 23, 2015

On making the right choice: A meta-analysis and large-scale replication attempt of the unconscious thought advantage

M. R. Nieuwenstein, T. Wierenga, R. D. Morey, J. M. Wichers, T. N. Blom, E.-J. Wagenmakers, and H. van Rijn
Judgment and Decision Making, Vol. 10, No. 1, January 2015, pp. 1-17

Abstract

Are difficult decisions best made after a momentary diversion of thought? Previous research addressing this important question has yielded dozens of experiments in which participants were asked to choose the best of several options (e.g., cars or apartments) either after conscious deliberation, or after a momentary diversion of thought induced by an unrelated task. The results of these studies were mixed. Some found that participants who had first performed the unrelated task were more likely to choose the best option, whereas others found no evidence for this so-called unconscious thought advantage (UTA). The current study examined two accounts of this inconsistency in previous findings. According to the reliability account, the UTA does not exist and previous reports of this effect concern nothing but spurious effects obtained with an unreliable paradigm. In contrast, the moderator account proposes that the UTA is a real effect that occurs only when certain conditions are met in the choice task. To test these accounts, we conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the UTA. Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
