Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, September 4, 2022

Reducing Explicit Blatant Dehumanization by Correcting Exaggerated Meta-Perceptions

Landry, A. P., Schooler, J. W., Willer, R., & Seli, P. (2022).
Social Psychological and Personality Science.

Abstract

If explicitly, blatantly dehumanizing a group of people—overtly characterizing them as less than human—facilitates harming them, then reversing this process is paramount. Addressing dehumanization among American political partisans appears especially crucial, given that it has been linked to their anti-democratic hostility. Perhaps because of its overt nature, partisans recognize—and greatly exaggerate—the extent to which out-partisans explicitly, blatantly dehumanize them. Past research has found that when people perceive they are dehumanized by an outgroup (i.e., meta-dehumanization), they respond with reciprocal dehumanization. Therefore, we reasoned that partisans’ dehumanization could be reduced by correcting their exaggerated meta-dehumanization. Indeed, across three preregistered studies (N = 4,154), an intervention correcting American partisans’ exaggerated meta-dehumanization reduced their own dehumanization of out-partisans. This decreased dehumanization persisted at a 1-week follow-up and predicted downstream reductions in partisans’ anti-democratic hostility, suggesting that correcting exaggerated meta-dehumanization can durably mitigate the dark specter of dehumanization.

Discussion

Explicit blatant dehumanization continues to mar contemporary intergroup relations (Kteily & Landry, 2022). For instance, a troubling number of American partisans explicitly, blatantly dehumanize one another, which has been linked to their anti-democratic hostility (e.g., Moore-Berg et al., 2020). We sought to reduce partisan dehumanization by integrating research demonstrating that (a) individuals who think an outgroup dehumanizes their own group (i.e., meta-dehumanization) respond with reciprocal dehumanization (Kteily et al., 2016; Landry, Ihm & Schooler, 2022) and (b) individuals attribute overly negative attitudes to outgroup members (Lees & Cikara, 2021). We developed an intervention informing American partisans of their tendency to overestimate how much they are dehumanized by out-partisans (Landry, Ihm, Kwit & Schooler, 2021; Moore-Berg et al., 2020). This reduced partisans’ own dehumanization of out-partisans across three studies, an effect that persisted at a 1-week follow-up.

Correcting partisans’ meta-dehumanization also produced modest—yet reliable—reductions in their anti-democratic hostility. This is notable given recent work finding that interventions reducing negative affect do not influence anti-democratic attitudes (Broockman et al., 2020; Voelkel et al., 2021). Perhaps our dehumanization-focused intervention reduced anti-democratic attitudes when affect-focused interventions did not because dehumanization is more strongly linked to anti-democratic attitudes. Indeed, we observed particularly strong indirect effects of the intervention on reduced anti-democratic spite through dehumanization (average β_indirect = −.23, compared to an average β_indirect = −.03 for negative affect; see also Landry, Ihm & Schooler, 2022). Although experimental tests of mediation are needed to confirm this cross-sectional indirect effect, future work attempting to bolster support for democratic norms should consider the promise of targeting dehumanization.

Sunday, August 14, 2022

Political conspiracy theories as tools for mobilization and signaling

Marie, A., & Petersen, M. B. (2022).
Current Opinion in Psychology, 101440

Abstract

Political conspiracist communities emerge and bind around hard-to-falsify narratives about political opponents or elites convening to secretly exploit the public in contexts of perceived political conflict. While the narratives appear descriptive, we propose that their content as well as the cognitive systems regulating their endorsement and dissemination may have co-evolved, at least in part, to reach coalitional goals: To drive allies’ attention to the social threat to increase their commitment and coordination for collective action, and to signal devotion to gain within-group status. Those evolutionary social functions may be best fulfilled if individuals endorse the conspiratorial narrative sincerely.

Highlights

•  Political conspiracist groups unite around clear-cut and hard-to-falsify narratives about political opponents or elites secretly organizing to deceive and exploit the public.

•  Such social threat-based narratives and the cognitive systems that regulate them may have co-evolved, at least in part, to serve social rather than epistemic functions: facilitating ingroup recruitment, coordination, and signaling for cooperative benefits.

•  While social in nature, those adaptive functions may be best fulfilled if group leaders and members endorse conspiratorial narratives sincerely.

Conclusions

Political conspiracy theories are cognitively attractive, hard-to-falsify narratives about the secret misdeeds of political opponents and elites. While descriptive in appearance, endorsement and expression of those narratives may be regulated, at least partly, by cognitive systems pursuing social goals: to attract the attention of allies toward a social threat in order to enhance commitment and coordination for joint action (particularly in conflict), and to signal devotion to gain within-group status.

Rather than constituting a special category of cultural beliefs, we see political conspiracy theories as part of a wider family of abstract ideological narratives denouncing how an evil, villains, or oppressive system—more or less real and clearly delineated—exploits a virtuous victim group. This family also comprises anti-capitalistic vs. anti-communist or religious propaganda, white supremacist vs. anti-racist discourses, etc. Future research should explore the content properties that make those threat-based narratives compelling; the balance among their hypothetical social functions as signaling, commitment, and coordination enhancers; and the factors moderating their spread (such as intellectual humility and beliefs that the outgroup does not hate the ingroup).

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D. A., & Helgason, B. A. (2022).
Current Opinion in Psychology, 47, 101375.

Abstract

Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.

Conclusion

In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.