Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Motivated Reasoning.

Sunday, November 5, 2023

Is Applied Ethics Morally Problematic?

Franz, D.J.
J Acad Ethics 20, 359–374 (2022).
https://doi.org/10.1007/s10805-021-09417-1

Abstract

This paper argues that applied ethics can itself be morally problematic. As illustrated by the case of Peter Singer’s criticism of social practice, morally loaded communication by applied ethicists can lead to protests, backlashes, and aggression. By reviewing the psychological literature on self-image, collective identity, and motivated reasoning, three categories of morally problematic consequences of ethical criticism by applied ethicists are identified: serious psychological discomfort, moral backfiring, and hostile conflict. The most worrisome is moral backfiring: psychological research suggests that ethical criticism of people’s central moral convictions can reinforce exactly those attitudes. Therefore, applied ethicists can unintentionally contribute to a consolidation of precisely those social circumstances that they condemn as unethical. Furthermore, I argue that the normative concerns raised in this paper do not depend on a commitment to one specific paradigm in moral philosophy. Utilitarianism, Aristotelian virtue ethics, and Rawlsian contractarianism all provide sound reasons to take morally problematic consequences of ethical criticism seriously. Only the case of deontological ethics is less clear-cut. Finally, I point out that the issues raised in this paper provide an excellent opportunity for further interdisciplinary collaboration between applied ethics and the social sciences. I also propose strategies for communicating ethics effectively.


Here is my summary:

First, ethical criticism can cause serious psychological discomfort. People often have strong emotional attachments to their moral convictions, and being told that their beliefs are wrong can be very upsetting. In some cases, ethical criticism can even lead to anxiety, depression, and other mental health problems.

Second, ethical criticism can lead to moral backfiring. This is when people respond to ethical criticism by doubling down on their existing beliefs. Moral backfiring is thought to be caused by a number of factors, including motivated reasoning and the need to maintain a positive self-image.

Third, ethical criticism can lead to hostile conflict. When people feel threatened by ethical criticism, they may become defensive and aggressive. This can lead to heated arguments, social isolation, and even violence.

Franz argues that these negative consequences are not just hypothetical. He points to a number of real-world examples, such as the backlash against Peter Singer's arguments for vegetarianism.

The author concludes by arguing that applied ethicists should be aware of the ethical dimension of their own work. They should be mindful of the potential for their work to cause harm, and they should take steps to mitigate these risks. For example, applied ethicists should be careful to avoid making personal attacks on those who disagree with them. They should also be willing to engage in respectful dialogue with those who have different moral views.

Thursday, April 6, 2023

People recognize and condone their own morally motivated reasoning

Cusimano, C., & Lombrozo, T. (2023).
Cognition, 234, 105379.

Abstract

People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an “illusion of objectivity.” We identify an important domain of life in which people harbor little illusion about their biases – when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2–4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People's tendency to be aware and proud of their biases provides both new opportunities, and new challenges, for resolving ideological conflict and improving reasoning.

Highlights

• Dominant theories assume people form beliefs only under an illusion of objectivity.

• We document a boundary condition on this illusion: morally desirable biases.

• People endorse beliefs they regard as evidentially weak but morally desirable.

• People realize when they have just engaged in morally motivated reasoning.

• Accurate self-attributions of moral bias fully attenuate the ‘bias blind spot’.

From the General discussion

Our beliefs about our beliefs – including whether they are biased or justified – play a crucial role in guiding inquiry, shaping belief revision, and navigating disagreement. One line of research suggests that these judgments are almost universally characterized by an illusion of objectivity such that people consciously reason with the goal of being objective and basing their beliefs on evidence, and because of this, people nearly always assume that their current beliefs meet those standards. Another line of work suggests that people sometimes think that values legitimately bear on whether someone is justified to hold a belief (Cusimano & Lombrozo, 2021b). These findings raise the possibility, consistent with some prior theoretical proposals (Cusimano & Lombrozo, 2021a; Tetlock, 2002), that people will knowingly violate norms of impartiality, or knowingly maintain beliefs that lack evidential support, when doing so advances what they consider to be morally laudable goals. Two predictions follow. First, people should evaluate their beliefs in part based on their perceived moral value. And second, in situations in which people engage in morally motivated reasoning, they should recognize that they have done so and should evaluate their morally motivated reasoning as appropriate. We document support for these predictions across four studies (Table 1).

Conclusion

A great deal of work has assumed that people treat objectivity and evidence-based reasoning as cardinal norms governing their belief formation. This assumption has grown increasingly tenuous in light of recent work highlighting the importance of moral concerns in almost all facets of life. Consistent with this recent work, we find evidence that people’s evaluations of the moral quality of a proposition predict their subjective confidence that it is true, their likelihood of claiming that they believe it and know it, and the extent to which they take their belief to be justified. Moreover, people exhibit metacognitive awareness of this fact and approve of morality’s influence on their reasoning. People often want to be right, but they also want to be good – and they know it.

Sunday, October 16, 2022

A framework for understanding reasoning errors: From fake news to climate change and beyond

Pennycook, G. (2022, August 31).
https://doi.org/10.31234/osf.io/j3w7d

Abstract

Humans have the capacity, but perhaps not always the willingness, for great intelligence. From global warming to the spread of misinformation and beyond, our species is facing several major challenges that are the result of the limits of our own reasoning and decision-making. So, why are we so prone to errors during reasoning? In this chapter, I will outline a framework for understanding reasoning errors that is based on a three-stage dual-process model of analytic engagement (intuition, metacognition, and reason). The model has two key implications: 1) That a mere lack of deliberation and analytic thinking is a primary source of errors and 2) That when deliberation is activated, it generally reduces errors (via questioning intuitions and integrating new information) more than it increases them (via rationalization and motivated reasoning). In support of these claims, I review research showing the extensive predictive validity of measures that index individual differences in analytic cognitive style – even beyond explicit errors per se. In particular, analytic thinking is not only predictive of skepticism about a wide range of epistemically suspect beliefs (paranormal, conspiratorial, COVID-19 misperceptions, pseudoscience and alternative medicines) and decreased susceptibility to bullshit, fake news, and misinformation, but also of important differences in people’s moral judgments and values as well as their religious beliefs (and disbeliefs). Furthermore, in some (but not all) cases, there is evidence from experimental paradigms that supports a causal role of analytic thinking in determining judgments, beliefs, and behaviors. The findings reviewed here provide some reason for optimism for the future: It may be possible to foster analytic thinking and therefore improve the quality of our decisions.

Evaluating the evidence: Does reason matter?

Thus far, I have prioritized explaining the various alternative frameworks. I will now turn to an in-depth review of some of the key evidence that helps adjudicate between these accounts. I will organize this review around two key implications that emerge from the framework that I have proposed.

First, the primary difference between the three-stage model (and related dual-process models) and the social-intuitionist models (and related intuitionist models) is that the former argues that people should be able to overcome intuitive errors using deliberation whereas the latter argues that reason is generally infirm and therefore that intuitive errors will simply dominate. Thus, the reviewed research will investigate the apparent role of deliberation in driving people’s choices, beliefs, and behaviors.

Second, the primary difference between the three-stage model (and related dual-process models) and the identity-protective cognition model is that the latter argues that deliberation facilitates biased information processing whereas the former argues that deliberation generally facilitates accuracy. Thus, the reviewed research will also focus on whether deliberation is linked with inaccuracy in politically-charged or identity-relevant contexts.

Saturday, October 1, 2022

COVID-19 and Politically Motivated Reasoning

Maguire, A., Persson, E., Västfjäll, D., & Tinghög, G. (2022). Medical Decision Making.
https://doi.org/10.1177/0272989X221118078

Abstract

Background
During the COVID-19 pandemic, the world witnessed a partisan segregation of beliefs toward the global health crisis and its management. Politically motivated reasoning, the tendency to interpret information in accordance with individual motives to protect valued beliefs rather than objectively considering the facts, could represent a key process involved in the polarization of attitudes. The objective of this study was to explore politically motivated reasoning when participants assess information regarding COVID-19.

Design
We carried out a preregistered online experiment using a diverse sample (N = 1500) from the United States. Both Republicans and Democrats assessed the same COVID-19–related information about the health effects of lockdowns, social distancing, vaccination, hydroxychloroquine, and wearing face masks.

Results
At odds with our prestated hypothesis, we found no evidence in line with politically motivated reasoning when interpreting numerical information about COVID-19. Moreover, we found no evidence supporting the idea that numeric ability or cognitive sophistication bolster politically motivated reasoning in the case of COVID-19. Instead, our findings suggest that participants base their assessment on prior beliefs of the matter.

Conclusions
Our findings suggest that politically polarized attitudes toward COVID-19 are more likely to be driven by lack of reasoning than politically motivated reasoning—a finding that opens potential avenues for combating political polarization about important health care topics.

Highlights
  • Participants assessed numerical information regarding the effect of different COVID-19 policies.
  • We found no evidence in line with politically motivated reasoning when interpreting numerical information about COVID-19.
  • Participants tend to base their assessment of COVID-19–related facts on prior beliefs of the matter.
  • Politically polarized attitudes toward COVID-19 are more a result of lack of thinking than partisanship.

Saturday, January 15, 2022

What Dilemma? Moral Evaluation Shapes Factual Belief

Liu, B., & Ditto, P.
Social Psychological and Personality Science. 2013;4(3):316-323. doi:10.1177/1948550612456045

Abstract

Moral dilemmas—like the “trolley problem” or real-world examples like capital punishment—result from a conflict between consequentialist and deontological intuitions (i.e., whether ends justify means). The authors contend that people often resolve such moral conflict by aligning factual beliefs about consequences of acts with evaluations of the act’s inherent morality (i.e., morality independent of its consequences). In both artificial (Study 1) and real-world (Study 2) dilemmas, the more an act was deemed inherently immoral, the more it was seen as unlikely to produce beneficial consequences and likely to involve harmful costs. Coherence between moral evaluations and factual beliefs increased with greater moral conviction, self-proclaimed topical knowledge, and political conservatism (Study 2). Reading essays about the inherent morality or immorality of capital punishment (Study 3) changed beliefs about its costs and benefits, even though no information about consequences was supplied. Implications for moral reasoning and political conflict are discussed.

From the General Discussion

While individuals can and do appeal to principle in some cases to support their moral positions, we argue that this is a difficult stance psychologically because it conflicts with well-rehearsed economic intuitions urging that the most rational course of action is the one that produces the most favorable cost–benefit ratio. Our research suggests that people resolve such dilemmas by bringing cost–benefit beliefs into line with moral evaluations, such that the right course of action morally becomes the right course of action practically as well. Study 3 provides experimental confirmation of a pattern implied by both our own and others’ correlational research (e.g., Kahan, 2010): People shape their descriptive understanding of the world to fit their prescriptive understanding of it.

Wednesday, January 12, 2022

Hidden wisdom or pseudo-profound bullshit? The effect of speaker admirability

Kara-Yakoubian et al. (2021, October 28).
https://doi.org/10.31234/osf.io/tpnkw

Abstract

How do people reason in response to ambiguous messages shared by admirable individuals? Using behavioral markers and self-report questionnaires, in two experiments (N = 571) we examined the influence of speakers’ admirability on meaning-seeking and wise reasoning in response to pseudo-profound bullshit. In both studies, statements that sounded superficially impressive but lacked intent to communicate meaning generated meaning-seeking, but only when delivered by high admirability speakers (e.g., the Dalai Lama) as compared to low admirability speakers (e.g., Kim Kardashian). The effect of speakers’ admirability on meaning-seeking was unique to pseudo-profound bullshit statements and was absent for mundane (Study 1) and motivational (Study 2) statements. In Study 2, participants also engaged in wiser reasoning for pseudo-profound bullshit (vs. motivational) statements and did more so when speakers were high in admirability. These effects occurred independently of the amount of time spent on statements or the complexity of participants’ reflections. It appears that pseudo-profound bullshit can promote epistemic reflection and certain aspects of wisdom, when associated with an admirable speaker.

From the General Discussion

Pseudo-profound language represents a type of misinformation (Čavojová et al., 2019b; Littrell et al., 2021; Pennycook & Rand, 2019a) where ambiguity reigns. Our findings suggest that source admirability could play an important role in the cognitive processing of ambiguous misinformation, including fake news (Pennycook & Rand, 2020) and euphemistic language (Walker et al., 2021). For instance, in the case of fake news, people may be more inclined to engage in epistemic reflection if the source of an article is highly admirable. However, we also observed that statements from high (vs. low) admirability sources were judged as more profound and were better liked. Extended to misinformation, a combination of greater perceived profundity, liking, and acquired meaning could potentially facilitate the sharing of ambiguous fake news content throughout social networks. Increased reflective thinking (as measured by the CRT) has also been linked to greater discernment on social media, with individuals who score higher on the CRT being less likely to believe fake news stories and share this type of content (Mosleh et al., 2021; Pennycook & Rand, 2019a). Perhaps, people might engage in more epistemic reflection if the source of an article is highly admirable, which may in turn predict a decrease in the sharing behaviour of fake news. Similarly, people may be more inclined to engage in epistemic reflection for euphemistic language, such as the term “enhanced interrogation” used in replacement of “torture,” and conclude that this type of language means something other than what it refers to, if used by a more admirable (compared to a less admirable) individual.

Thursday, September 2, 2021

Reconciling scientific and commonsense values to improve reasoning

C. Cusimano & T. Lombrozo
Trends in Cognitive Sciences
Available online July 2021

Abstract

Scientific reasoning is characterized by commitments to evidence and objectivity. New research suggests that under some conditions, people are prone to reject these commitments, and instead sanction motivated reasoning and bias. Moreover, people’s tendency to devalue scientific reasoning likely explains the emergence and persistence of many biased beliefs. However, recent work in epistemology has identified ways in which bias might be legitimately incorporated into belief formation. Researchers can leverage these insights to evaluate when commonsense affirmation of bias is justified and when it is unjustified and therefore a good target for intervention.

Highlights
  • People espouse a ‘lay ethics of belief’ that defines standards for how beliefs should be evaluated and formed.
  • People vary in the extent to which they endorse scientific norms of reasoning, such as evidentialism and impartiality, in their own norms of belief. In some cases, people sanction motivated or biased thinking.
  • Variation in endorsement of scientific norms predicts belief accuracy, suggesting that interventions that target norms could lead to more accurate beliefs.
  • Normative theories in epistemology vary in whether, and how, they regard reasoning and belief formation as legitimately impacted by moral or pragmatic considerations.
  • Psychologists can leverage knowledge of people’s lay ethics of belief, and normative arguments about when and whether bias is appropriate, to develop interventions to improve reasoning that are both ethical and effective.

Concluding remarks

It is no secret that humans are biased reasoners. Recent work suggests that these departures from scientific reasoning are not simply the result of unconscious bias, but are also a consequence of endorsing norms for belief that place personal, moral, or social good above truth.  The link between devaluing the ‘scientific ethos’ and holding biased beliefs suggests that, in some cases, interventions on the perceived value of scientific reasoning could lead to better reasoning and to better outcomes. In this spirit, we have offered a strategy for value debiasing.

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: That lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally-beneficial optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 

Abstract

We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts that had recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Saturday, January 16, 2021

Why Facts Are Not Enough: Understanding and Managing the Motivated Rejection of Science

Hornsey MJ. 
Current Directions in Psychological Science
2020;29(6):583-591. 

Abstract

Efforts to change the attitudes of creationists, antivaccination advocates, and climate skeptics by simply providing evidence have had limited success. Motivated reasoning helps make sense of this communication challenge: If people are motivated to hold a scientifically unorthodox belief, they selectively interpret evidence to reinforce their preferred position. In the current article, I summarize research on six psychological roots from which science-skeptical attitudes grow: (a) ideologies, (b) vested interests, (c) conspiracist worldviews, (d) fears and phobias, (e) personal-identity expression, and (f) social-identity needs. The case is made that effective science communication relies on understanding and attending to these underlying motivations.

(cut)

Conclusion

This article outlines six reasons people are motivated to hold views that are inconsistent with the scientific consensus. This perspective helps explain why education and explication of data sometimes have limited impact on science skeptics, but I am not arguing that education and facts are pointless. Quite the opposite: The provision of clear, objective information is the first and best line of defense against misinformation, mythmaking, and ignorance. However, for polarizing scientific issues—for example, climate change, vaccination, evolution, and in-vitro meat—it is clear that facts alone will not do the job. Successful communication around these issues will require sensitive understandings of the psychological motivations people have for rejecting science and the flexibility to devise communication frames that align with or circumvent these motivations.

Friday, January 8, 2021

Bias in science: natural and social

Joshua May
Synthese 

Abstract 

Moral, social, political, and other “nonepistemic” values can lead to bias in science, from prioritizing certain topics over others to the rationalization of questionable research practices. Such values might seem particularly common or powerful in the social sciences, given their subject matter. However, I argue first that the well documented phenomenon of motivated reasoning provides a useful framework for understanding when values guide scientific inquiry (in pernicious or productive ways). Second, this analysis reveals a parity thesis: values influence the social and natural sciences about equally, particularly because both are so prominently affected by desires for social credit and status, including recognition and career advancement. Ultimately, bias in natural and social science is both natural and social—that is, a part of human nature and considerably motivated by a concern for social status (and its maintenance). Whether the pervasive influence of values is inimical to the sciences is a separate question.

Conclusion 

We have seen how many of the putative biases that affect science can be explained and illuminated in terms of motivated reasoning, which yields a general understanding of how a researcher’s goals and values can influence scientific practice (whether positively or negatively). This general account helps to show that it is unwarranted to assume that such influences are significantly more prominent in the social sciences. The defense of this parity claim relies primarily on two key points. First, the natural sciences are also susceptible to the same values found in social science, particularly given that findings in many fields have social or political implications. Second, the ideological motivations that might seem to arise only in social science are minor compared to others. In particular, one’s reasoning is more often motivated by a desire to gain social credit (e.g. recognition among peers) than a desire to promote a moral or political ideology. Although there may be discernible differences in the quality of research across scientific domains, all are influenced by researchers’ values, as manifested in their motivations.

Wednesday, August 26, 2020

Morality justifies motivated reasoning in the folk ethics of belief

Cusimano, C., & Lombrozo, T. (2020, July 20).
https://doi.org/10.31234/osf.io/7r5yb

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one’s friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral evaluations affect how people prescribe beliefs to others. First, the moral value of a belief affects the evidential threshold required to believe, such that morally good beliefs demand less evidence than morally bad beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and so sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the Discussion

Additionally, participants reported that moral concerns affected the standards of evidence that apply to belief, such that morally desirable beliefs require less evidence than morally undesirable beliefs. In Study 1, participants reported that, relative to an impartial observer with the same information, someone with a moral reason to be optimistic had a wider range of beliefs that could be considered “consistent with” and “based on” the evidence. Critically, however, the broader range of beliefs that were consistent with the same evidence included only beliefs that were more morally desirable; morally undesirable beliefs were not more consistent with the evidence. In Studies 2 and 3, participants agreed more strongly that someone who had a moral reason to adopt a desirable belief had sufficient evidence to do so compared to someone who lacked a moral reason, even though both formed the same belief on the basis of the same evidence. Likewise, on average, participants judged that someone who adopted the morally undesirable belief had insufficient evidence for doing so relative to someone who lacked a moral reason (again, even though both formed the same belief on the basis of the same evidence). Finally, in Study 2 (though not in Study 3), these judgments replicated using an indirect measure of evidentiary quality, namely, attributions of knowledge. In sum, these findings document that one reason people may prescribe a motivated belief to someone is that morality changes how much evidence they consider to be required to hold the belief in an evidentially sound way.

Editor's Note: Huge implications for psychotherapy.

Thursday, August 20, 2020

Morality justifies motivated reasoning

Corey Cusimano and Tania Lombrozo
Paper found online

Abstract

A great deal of work argues that people demand impartial, evidence-based reasoning from others. However, recent findings show that moral values occupy a cardinal position in people’s evaluation of others, raising the possibility that people sometimes prescribe morally-good but evidentially-poor beliefs. We report two studies investigating how people evaluate beliefs when these two ideals conflict and find that people regularly endorse motivated reasoning when it can be morally justified. Furthermore, we document two ways that moral considerations result in prescribed motivated reasoning. First, morality can provide an alternative justification for belief, leading people to prescribe evidentially unsupported beliefs to others. And, second, morality can affect how people evaluate the way evidence is weighed by lowering or raising the threshold of required evidence for morally good and bad beliefs, respectively. These results illuminate longstanding questions about the nature of motivated reasoning and the social regulation of belief.

From the General Discussion

These results can potentially explain the presence and persistence of certain motivated beliefs. In particular, morally motivated beliefs could persist in part because people do not demand that they or others reason accurately or acquire equal evidence for their beliefs (Metz, Weisberg, & Weisberg, 2018). These findings also invite a reinterpretation of some classic biases, which are generally interpreted as unintentional errors (Kunda, 1990). We suggest instead that some apparent errors reflect convictions that one ought to be biased or discount evidence. Future work investigating biased belief formation should incorporate the perceived moral value of the belief.

The pdf can be found here.

Friday, August 14, 2020

Four Ways to Avoid the Pitfalls of Motivated Moral Reasoning

Notre Dame
Deloitte Center for Ethical Leadership

Here is an excerpt:

Four Ways to Control Motivated Reasoning

Motivated reasoning happens all the time, and we can never fully eradicate it. But we can recognize it and guard against its worst effects. Use these guidelines as a way to help quiet your inner lawyer and access your inner judge.

Use the "Front Page" Test

Studies have shown that when we expect our decisions to be made public we are more circumspect. Ask yourself, "Would I be comfortable having this choice published on the front page of a local newspaper?" Doing so provides an opportunity to step back from the conditions that may induce motivated reasoning and engage in more critical thinking.

Don’t Go It Alone

While it is difficult to notice motivated reasoning in ourselves, we can much more easily recognize it in others. Surround yourself with the voices of those you trust, and make sure you’re prepared to listen and acknowledge your limitations. You can even make it someone's job to voice dissent. If you're surrounded only by "yes men" it can be all too easy for motivated reasoning to take over.

Avoid Ambiguity

Motivated reasoning becomes more likely when the rules are fuzzy or vague. Rely on accepted standards and definitions of ethical behavior, and make sure that your principles are clear enough for employees to understand what they mean in practice. In ethics training, make sure to use scenarios and stories to show what ethical behavior looks like. If your values or principles are too general, they may provide convenient justifications for unethical behavior instead of guarding against it.

Stay Humble

Beyond feedback about intelligence, research suggests that we’re touchy about receiving any feedback we don’t agree with. One study found that when participants received negative feedback about their leadership qualities they were likely to use racial stereotypes to dismiss the person giving the feedback. The next time you receive feedback you’d rather ignore, slow down, pay attention, and consider how the feedback could help you grow as a person and as a leader.

Wednesday, July 31, 2019

The “Fake News” Effect: An Experiment on Motivated Reasoning and Trust in News

Michael Thaler
Harvard University
Originally published May 28, 2019

Abstract

When people receive information about controversial issues such as immigration policies, upward mobility, and racial discrimination, the information often evokes both what they currently believe and what they are motivated to believe. This paper theoretically and experimentally explores the importance of this latter channel in inference: motivated reasoning. In the theory of motivated reasoning this paper develops, people misupdate from information by treating their motivated beliefs as an extra signal. To test the theory, I create a new experimental design in which people make inferences about the veracity of news sources. This design is unique in that it separately identifies motivated reasoning from Bayesian updating and confirmation bias, and doesn’t require elicitation of people’s entire belief distribution. It is also very portable: In a large online experiment, I find the first identifying evidence for politically driven motivated reasoning on eight different economic and social issues. Motivated reasoning leads people to become more polarized, less accurate, and more overconfident in their beliefs about these issues.
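To make the "extra signal" idea concrete, here is a minimal Python sketch contrasting a standard Bayesian update with a motivated update that also conditions on a phantom signal pointing toward the preferred conclusion. The function names, numbers, and weighting scheme are illustrative assumptions, not Thaler's actual formalization.

```python
# Illustrative sketch only: a Bayesian agent updates a prior with one observed
# signal; a "motivated" agent additionally updates on a phantom signal that
# favors the conclusion it would prefer to hold. All values are made up.

def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior P(claim is true | signal), given the prior and the
    probability of observing this signal under each state of the world."""
    numerator = prior * p_signal_if_true
    return numerator / (numerator + (1 - prior) * p_signal_if_false)

def motivated_update(prior, p_signal_if_true, p_signal_if_false, motive_strength):
    """Update on the real signal, then on a phantom signal favoring the
    preferred (here: 'true') conclusion; motive_strength = 0.5 means no motive."""
    posterior = bayes_update(prior, p_signal_if_true, p_signal_if_false)
    return bayes_update(posterior, motive_strength, 1 - motive_strength)

if __name__ == "__main__":
    prior = 0.5  # initially undecided about some contested claim
    # Evidence that mildly disfavors the preferred conclusion:
    unbiased = bayes_update(prior, 0.4, 0.6)
    motivated = motivated_update(prior, 0.4, 0.6, motive_strength=0.7)
    print(f"Bayesian posterior:  {unbiased:.2f}")   # 0.40, tracks the evidence
    print(f"Motivated posterior: {motivated:.2f}")  # 0.61, pulled toward the motive
```

Repeating the motivated update over a stream of signals drifts the agent toward overconfidence in the preferred direction, which is the polarization-and-overconfidence pattern the abstract describes.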

From the Conclusion:

One interpretation of this paper is unambiguously bleak: People of all demographics similarly motivatedly reason, do so on essentially every topic they are asked about, and make particularly biased inferences on issues they find important. However, there is an alternative interpretation: This experiment takes a step towards better understanding motivated reasoning, and makes it easier for future work to attenuate the bias. Using this experimental design, we can identify and estimate the magnitude of the bias; future projects that use interventions to attempt to mitigate motivated reasoning can use this estimated magnitude as an outcome variable. Since the bias does decrease utility in at least some settings, people may have demand for such interventions.

The research is here.

Monday, June 24, 2019

Motivated free will belief: The theory, new (preregistered) studies, and three meta-analyses

Clark, C. J., Winegard, B. M., & Shariff, A. F. (2019).
Manuscript submitted for publication.

Abstract

Do desires to punish lead people to attribute more free will to individual actors (motivated free will attributions) and to stronger beliefs in human free will (motivated free will beliefs) as suggested by prior research? Results of 14 new (7 preregistered) studies (n=4,014) demonstrated consistent support for both of these. These findings consistently replicated in studies (k=8) in which behaviors meant to elicit desires to punish were rated as equally or less counternormative than behaviors in control conditions. Thus, greater perceived counternormativity cannot account for these effects. Additionally, three meta-analyses of the existing data (including eight vignette types and eight free will judgment types) found support for motivated free will attributions (k=22; n=7,619; r=.25, p<.001) and beliefs (k=27; n=8,100; r=.13, p<.001), which remained robust after removing all potential moral responsibility confounds (k=26; n=7,953; r=.12, p<.001). The size of these effects varied by vignette type and free will belief measurement. For example, presenting the FAD+ free will belief subscale mixed among three other subscales (as in Monroe and Ysidron’s [2019] failed replications) produced a smaller average effect size (r=.04) than shorter and more immediate measures (rs=.09-.28). Also, studies with neutral control conditions produced larger effects (Attributions: r=.30; Beliefs: rs=.14-.16) than those with control conditions involving bad actions (Attributions: r=.05; Beliefs: rs=.04-.06). Removing these two kinds of studies from the meta-analyses produced larger average effect sizes (Attributions: r=.28; Beliefs: rs=.17-.18). We discuss the relevance of these findings for past and future research and the significance of these findings for human responsibility.
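For context on the meta-analytic figures above (e.g., k=22, n=7,619, r=.25), a pooled correlation is typically obtained by converting each study's r to Fisher's z, averaging with weights, and transforming back. The sketch below is a generic sample-size-weighted version of that recipe with invented study values; it is not necessarily the exact meta-analytic model Clark and colleagues used.

```python
# Generic Fisher-z pooling of study correlations (illustrative only; the study
# values are invented and the n-3 weighting is the textbook scheme, not
# necessarily the authors' exact model).
import math

def pool_correlations(studies):
    """studies: list of (r, n) pairs; returns the weighted pooled correlation."""
    z_sum, weight_sum = 0.0, 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher r-to-z transform
        w = n - 3           # inverse-variance weight for z
        z_sum += w * z
        weight_sum += w
    return math.tanh(z_sum / weight_sum)  # back-transform the weighted mean z

if __name__ == "__main__":
    made_up_studies = [(0.30, 400), (0.22, 250), (0.18, 600), (0.35, 150)]
    print(f"Pooled r = {pool_correlations(made_up_studies):.2f}")  # about 0.24
```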

From the Discussion Section:

We suspect that motivated free will beliefs have become more common as society has become more humane and more concerned about proportionate punishment. Many people now assiduously reflect upon their own society’s punitive practices and separate those who deserve to be punished from those who are incapable of being fully responsible for their actions. Free will is crucial here because it is often considered a prerequisite for moral responsibility (Nichols & Knobe, 2007; Sarkissian et al., 2010; Shariff et al., 2014). Therefore, when one is motivated to punish another person, one is also motivated to inflate free will beliefs and free will attributions to specific perpetrators as a way to justify punishing the person.

A preprint can be downloaded here.

Sunday, June 23, 2019

On the belief that beliefs should change according to evidence: Implications for conspiratorial, moral, paranormal, political, religious, and science beliefs

Gordon Pennycook, James Allan Cheyne, Derek Koehler, & Jonathan Fugelsang
PsyArXiv Preprints - Last edited on May 24, 2019

Abstract

Does one’s stance toward evidence evaluation and belief revision have relevance for actual beliefs? We investigate the role of having an actively open-minded thinking style about evidence (AOT-E) on a wide range of beliefs, values, and opinions. Participants indicated the extent to which they think beliefs (Study 1) or opinions (Studies 2 and 3) ought to change according to evidence on an 8-item scale. Across three studies with 1,692 participants from two different sources (Mechanical Turk and Lucid for Academics), we find that our short AOT-E scale correlates negatively with beliefs about topics ranging from extrasensory perception, to respect for tradition, to abortion, to God; and positively with topics ranging from anthropogenic global warming to support for free speech on college campuses. More broadly, the belief that beliefs should change according to evidence was robustly associated with political liberalism, the rejection of traditional moral values, the acceptance of science, and skepticism about religious, paranormal, and conspiratorial claims. However, we also find that AOT-E is much more strongly predictive for political liberals (Democrats) than conservatives (Republicans). We conclude that socio-cognitive theories of belief (both specific and general) should take into account people’s beliefs about when and how beliefs should change – that is, meta-beliefs – but that further work is required to understand how meta-beliefs about evidence interact with political ideology.

Conclusion

Our 8-item actively open-minded thinking about evidence (AOT-E) scale was strongly predictive of a wide range of beliefs, values, and opinions. People who reported believing that beliefs and opinions should change according to evidence were less likely to be religious, less likely to hold paranormal and conspiratorial beliefs, more likely to believe in a variety of scientific claims, and were more politically liberal (in terms of overall ideology, partisan affiliation, moral values, and a variety of specific political opinions). Moreover, the effect sizes for these correlations were often large or very large, based on established norms (Funder & Ozer, 2019; Gignac & Szodorai, 2016). The size and diversity of AOT-E correlates strongly support one major, if broad, conclusion: Socio-cognitive theories of belief (both specific and general) should take into account what people believe about when and how beliefs and opinions should change (i.e., meta-beliefs). That is, we should not assume that evidence is equally important for everyone. However, future work is required to more clearly delineate why AOT-E is more predictive for political liberals than conservatives.

A preprint can be downloaded here.

Friday, June 7, 2019

Trading morality for a good economy

Michael Gerson
www.dailyherald.com
Originally posted May 28, 2019

Here is an excerpt:

Bennett went on to talk about how capitalism itself depends on good private character; how our system of government requires leaders of integrity; how failings of character can't be neatly compartmentalized. "A president whose character manifests itself in patterns of reckless personal conduct, deceit, abuse of power and contempt for the rule of law," he wrote, "cannot be a good president."

Above all, Bennett argued that the cultivation of character depends on the principled conduct of those in positions of public trust. "During moments of crisis," he wrote, "of unfolding scandal, people watch closely. They learn from what they see. And they often embrace a prevailing attitude and ethos, and employ what seems to work for others. So it matters if the legacy of the president is that the ends justify the means; that rules do not apply across the board; that lawlessness can be excused. It matters, too, if we demean the presidency by lowering our standards of expectations for the office and by redefining moral authority down. It matters if truth becomes incidental, and public office is used to cover up misdeeds. And it matters if we treat a president as if he were a king, above the law."

All this was written while Bill Clinton was president. And Bennett himself now seems reluctant to apply these rules "across the board" to a Republican president. This is not unusual. It is the political norm to ignore the poor character of politicians we agree with. But this does nothing to discredit Bennett's argument.

If you are a sexual harasser who wants to escape consequences, or a businessperson who habitually plays close to ethical lines, your hour has come. If you dream of having a porn-star mistress, or hope to game the tax system for your benefit, you have found your man and your moment. For all that is bent and sleazy, for all that is dishonest and dodgy, these are the golden days.

The info is here.

Thursday, December 6, 2018

Partisanship, Political Knowledge, and the Dunning‐Kruger Effect

Ian G. Anson
Political Psychology
First published: 02 April 2018
https://doi.org/10.1111/pops.12490

Abstract

A widely cited finding in social psychology holds that individuals with low levels of competence will judge themselves to be higher achieving than they really are. In the present study, I examine how the so‐called “Dunning‐Kruger effect” conditions citizens' perceptions of political knowledgeability. While low performers on a political knowledge task are expected to engage in overconfident self‐placement and self‐assessment when reflecting on their performance, I also expect the increased salience of partisan identities to exacerbate this phenomenon due to the effects of directional motivated reasoning. Survey experimental results confirm the Dunning‐Kruger effect in the realm of political knowledge. They also show that individuals with moderately low political expertise rate themselves as increasingly politically knowledgeable when partisan identities are made salient. This below‐average group is also likely to rely on partisan source cues to evaluate the political knowledge of peers. In a concluding section, I comment on the meaning of these findings for contemporary debates about rational ignorance, motivated reasoning, and political polarization.

Friday, September 1, 2017

Political differences in free will belief are driven by differences in moralization

Clark, C. J., Everett, J. A. C., Luguri, J. B., Earp, B. D., Ditto, P., & Shariff, A.
PsyArXiv. (2017, August 1).

Abstract

Five studies tested whether political conservatives’ stronger free will beliefs are driven by their broader view of morality, and thus a broader motivation to assign responsibility. On an individual difference level, Study 1 found that political conservatives’ higher moral wrongness judgments accounted for their higher belief in free will. In Study 2, conservatives ascribed more free will for negative events than liberals, while no differences emerged for positive events. For actions ideologically equivalent in perceived moral wrongness, free will judgments also did not differ (Study 3), and actions that liberals perceived as more wrong, liberals judged as more free (Study 4). Finally, higher wrongness judgments mediated the effect of conservatism on free will beliefs (Study 5). Higher free will beliefs among conservatives may be explained by conservatives’ tendency to moralize, which strengthens motivation to justify blame with stronger belief in free will and personal accountability.

The preprint research article is here.