Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Partisanship.

Tuesday, March 21, 2023

Mitigating welfare-related prejudice and partisanship among U.S. conservatives with moral reframing of a universal basic income policy

Thomas, C. C., Walton, G. M., et al.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104424

Abstract

Inequality and deep poverty have risen sharply in the US since the 1990s. Simultaneously, cash-based welfare policies have frayed, support for public assistance has fallen on the political right, and prejudice against recipients of welfare has remained high. Yet, in recent years Universal Basic Income (UBI), a policy proposing to give all citizens cash sufficient to meet basic needs with no strings attached, has gained traction. We hypothesized that UBI can mitigate the partisanship and prejudice that define the existing welfare paradigm in the US but that this potential depends critically on the narratives attached to it. Indeed, across three online experiments with US adults (total N = 1888), we found that communicating the novel policy features of UBI alone was not sufficient to achieve bipartisan support for UBI or overcome negative stereotyping of its recipients. However, when UBI was described as advancing the more conservative value of financial freedom, conservatives perceived the policy to be more aligned with their values and were less opposed to the policy (meta-analytic effect on policy support: d = 0.36 [95% CI: 0.27 to 0.46]). Extending the literatures on moral reframing and cultural match, we further find that this values-aligned policy narrative mitigated prejudice among conservatives, reducing negative welfare-related stereotyping of policy recipients (meta-analytic effect d = −0.27 [95% CI: −0.38 to −0.16]), while increasing affiliation with them. Together, these findings point to moral reframing as a promising means by which institutional narratives can be used to bridge partisan divides and reduce prejudice.

Highlights

• Policies like Universal Basic Income (UBI) propose to mitigate poverty and inequality by giving all citizens cash

• A UBI policy narrative based in freedom most increased policy support and reduced prejudice among conservatives

• This narrative also achieved the highest perceived moral fit, or alignment with one’s values, among conservatives

• Moral reframing of policy communications may be an effective institutional lever for mitigating partisanship and prejudice
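The meta-analytic effects reported in the abstract above are standardized mean differences (Cohen's d). As a rough illustration only, using hypothetical group summaries rather than the authors' data, here is a minimal Python sketch of how a single study's d and an approximate 95% confidence interval are computed:

import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical numbers: mean policy support under a freedom framing vs. a
# standard framing among conservative respondents (not the study's data).
d, ci = cohens_d(m1=4.1, s1=1.5, n1=300, m2=3.6, s2=1.5, n2=300)
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")

A meta-analytic estimate such as the d = 0.36 above is then, roughly, an inverse-variance-weighted average of per-study estimates like this one.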

(cut)

General discussion

Three experiments revealed that a values-based narrative of UBI, one grounded in the conservative value of economic freedom, can advance bipartisanship in support for UBI and simultaneously mitigate welfare-related prejudice among U.S. conservatives. While policy reforms often focus on changes to objective policy features, these studies suggest that the narratives attached to such features will meaningfully influence public attitudes towards both the policy and its recipients. In other words, the potential of policies like UBI to advance goals such as inequality reduction and prejudice mitigation may be limited if they fail to attend to the narratives that accompany them.

Here, we demonstrate the potential for policy narratives that elevate the moral foundations of those most opposed to the policy, U.S. conservatives in this case. Why might this narrative approach succeed? At a higher-order level, our findings suggest that inclusion begets inclusion: when conservatives felt that the policy recognized and reflected their own values, they were more likely to support the policy and express inclusive attitudes toward its recipients.

Thursday, June 16, 2022

Record-High 50% of Americans Rate U.S. Moral Values as 'Poor'

Megan Brenan & Nicole Willcoxon
www.gallup.com
Originally posted June 15, 2022

Story Highlights
  • 50% say state of moral values is "poor"; 37% "only fair"
  • 78% think moral values in the U.S. are getting worse
  • "Consideration of others" cited as top problem with state of moral values
A record-high 50% of Americans rate the overall state of moral values in the U.S. as "poor," and another 37% say it is "only fair." Just 1% think the state of moral values is "excellent" and 12% "good."

Although negative views of the nation's moral values have been the norm throughout Gallup's 20-year trend, the current poor rating is the highest on record by one percentage point.

These findings, from Gallup's May 2-22 Values and Beliefs poll, are generally in line with perceptions since 2017 except for a slight improvement in views in 2020 when Donald Trump was running for reelection. On average since 2002, 43% of U.S. adults have rated moral values in the U.S. as poor, 38% as fair and 18% as excellent or good.

Republicans' increasingly negative assessment of the state of moral values is largely responsible for the record-high overall poor rating. At 72%, Republicans' poor rating of moral values is at its highest point since the inception of the trend and up sharply since Trump left office.

At the same time, 36% of Democrats say the state of moral values is poor, while a 48% plurality rate it as only fair and 15% as excellent or good. Independents' view of the current state of moral values is relatively stable and closer to Democrats' than Republicans' rating, with 44% saying it is poor, 40% only fair and 16% excellent or good.

Outlook for State of Moral Values Is Equally Bleak

Not only are Americans feeling grim about the current state of moral values in the nation, but they are also mostly pessimistic about the future on the subject, as 78% say morals are getting worse and just 18% getting better. The latest percentage saying moral values are getting worse is roughly in line with the average of 74% since 2002, but it is well above the past two years' 67% and 68% readings.

Saturday, October 16, 2021

Social identity shapes antecedents and functional outcomes of moral emotion expression in online networks

Brady, W. J., & Van Bavel, J. J. 
(2021, April 2). 

Abstract

As social interactions increasingly occur through social media platforms, intergroup affective phenomena such as “outrage firestorms” and “cancel culture” have emerged with notable consequences for society. In this research, we examine how social identity shapes the antecedents and functional outcomes of moral emotion expression online. Across four pre-registered experiments (N = 1,712), we find robust evidence that the inclusion of moral-emotional expressions in political messages has a causal influence on intentions to share the messages on social media. We find that individual differences in the strength of partisan identification are a consistent predictor of sharing messages with moral-emotional expressions, but little evidence that brief manipulations of identity salience increased sharing. Negative moral emotion expression in social media messages also causes the message author to be perceived as more strongly identified among their partisan ingroup, but less open-minded and less worthy of conversation to outgroup members. These experiments highlight the role of social identity in affective phenomena in the digital age, and showcase how moral emotion expressions in online networks can serve ingroup reputation functions while at the same time hindering discourse between political groups.

Conclusion

In the context of contentious political conversations online, moral-emotional language causes political partisans to share messages more often, an effect that was strongest among strong group identifiers. Expressing negative moral-emotional language in social media messages makes the message author appear more strongly identified with their group, but also makes outgroup members think the author is less open-minded and less worthy of conversation. This work sheds light on the antecedents and functional outcomes of moral-emotion expression in the digital age, which is becoming increasingly important to study as intergroup affective phenomena such as viral outrage and affective polarization are reaching historic levels.

Wednesday, November 4, 2020

The psychology and neuroscience of partisanship

Harris, E. A., Pärnamets, P., et al.
psyarxiv.com

Abstract

Why have citizens become increasingly polarized? The answer is that there is increasing identification with political parties—a process known as partisanship (Mason, 2018). This chapter will focus on the role that social identity plays in contemporary politics (Greene, 2002). These party identities influence political preferences, such that partisans are more likely to agree with policies that were endorsed by their political party, regardless of the policy content, and, in some cases, their own ideological beliefs (Cohen, 2003; Samuels & Zucco Jr, 2014). There are many social and structural factors that are related to partisanship, including polarization (Lupu, 2015), intergroup threat (e.g., Craig & Richeson, 2014), and media exposure (Tucker et al., 2018; Barberá, 2015). Our chapter will focus on the psychology and neuroscience of partisanship within these broader socio-political contexts. This will help reveal the roots of partisanship across political contexts.

Conclusion

A burgeoning literature suggests that partisanship is a form of social identity with interesting and wide-reaching implications for our brains and behavior. In some ways, the effects of partisanship mirror those of other forms of group identity, both behaviorally and in the brain. However, partisanship also has interesting biological antecedents and effects in political domains such as belief in fake news and conspiracy theories, as well as voting behavior. As political polarization rises in many nations across the world, partisanship will become an increasingly divisive and influential form of social identity in those countries, thus highlighting the urgency to understand its psychological and neural underpinnings.

Wednesday, May 27, 2020

Trust in Medical Scientists Has Grown in U.S.

C. Funk, B. Kennedy, & C. Johnson
Pew Research Center
Originally published May 21, 2020

Americans’ confidence in medical scientists has grown since the coronavirus outbreak first began to upend life in the United States, as have perceptions that medical doctors hold very high ethical standards. And in their own estimation, most U.S. adults think the outbreak raises the importance of scientific developments.

Scientists have played a prominent role in advising government leaders and informing the public about the course of the pandemic, with doctors such as Anthony Fauci and Deborah Birx, among others, appearing at press conferences alongside President Donald Trump and other government officials.

But there are growing partisan divisions over the risk the novel coronavirus poses to public health, as well as public confidence in the scientific and medical community and the role such experts are playing in public policy.

Still, most Americans believe social distancing measures are helping at least some to slow the spread of the coronavirus disease, known as COVID-19. People see a mix of reasons behind new cases of infection, including limited testing, people not following social distancing measures and the nature of the disease itself.

These are among the key findings from a new national survey by Pew Research Center, conducted April 29 to May 5 among 10,957 U.S. adults, and a new analysis of a national survey conducted April 20 to 26 among 10,139 U.S. adults, both using the Center’s American Trends Panel.

Public confidence in medical scientists to act in the best interests of the public has gone up from 35% with a great deal of confidence before the outbreak to 43% in the Center’s April survey. Similarly, there is a modest uptick in public confidence in scientists, from 35% in 2019 to 39% today. (A random half of survey respondents rated their confidence in one of the two groups.)

The info is here.

Monday, February 4, 2019

(Ideo)Logical Reasoning: Ideology Impairs Sound Reasoning

Anup Gampa, Sean Wojcik, Matt Motyl, Brian Nosek, & Pete Ditto
PsycArXiv
Originally posted January 15, 2019
 
Abstract

Beliefs shape how people interpret information and may impair how people engage in logical reasoning. In 3 studies, we show how ideological beliefs impair people's ability to: (1) recognize logical validity in arguments that oppose their political beliefs, and (2) recognize the lack of logical validity in arguments that support their political beliefs. We observed belief bias effects among liberals and conservatives who evaluated the logical soundness of classically structured logical syllogisms supporting liberal or conservative beliefs. Both liberals and conservatives frequently evaluated the logical structure of entire arguments based on the believability of arguments’ conclusions, leading to predictable patterns of logical errors. As a result, liberals were better at identifying flawed arguments supporting conservative beliefs and conservatives were better at identifying flawed arguments supporting liberal beliefs. These findings illuminate one key mechanism for how political beliefs distort people’s abilities to reason about political topics soundly.

The research is here.
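To make the belief-bias design concrete, here is a minimal Python sketch with hypothetical items (not the authors' materials): the logical validity of each syllogism is crossed with the ideological lean of its conclusion, and accuracy is simply whether a "valid" judgment matches actual validity.

# Each item: (premises, conclusion, actually_valid, conclusion_leans)
items = [
    (("All policies that reduce crime are good policies",
      "Some gun-control laws reduce crime"),
     "Some gun-control laws are good policies", True, "liberal"),
    (("All policies that reduce crime are good policies",
      "Some tax cuts are good policies"),
     "Some tax cuts reduce crime", False, "conservative"),
]

def accuracy(judgments, items):
    # Proportion of items where the "valid" judgment matches actual validity
    return sum(j == item[2] for j, item in zip(judgments, items)) / len(items)

# A respondent who accepts any conclusion they find believable as "valid"
# misses the invalid-but-believable item -- the belief-bias error.
print(accuracy([True, True], items))  # 0.5

The study's prediction, in these terms, is that partisans make that second kind of error more often when an invalid argument's conclusion matches their own politics.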

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Monday, July 23, 2018

Assessing the contextual stability of moral foundations: Evidence from a survey experiment

David Ciuk
Research and Politics
First Published June 20, 2018

Abstract

Moral foundations theory (MFT) claims that individuals use their intuitions on five “virtues” as guidelines for moral judgment, and recent research makes the case that these intuitions cause people to adopt important political attitudes, including partisanship and ideology. New work in political science, however, demonstrates not only that the causal effect of moral foundations on these political predispositions is weaker than once thought, but it also opens the door to the possibility that causality runs in the opposite direction—from political predispositions to moral foundations. In this manuscript, I build on this new work and test the extent to which partisan and ideological considerations cause individuals’ moral foundations to shift in predictable ways. The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong. I conclude that small shifts in political context do cause MFT measures to move, and, to close, I discuss the need for continued theoretical development in MFT as well as an increased attention to measurement.

The research is here.

Tuesday, March 20, 2018

Why Partisanship Is Such a Worthy Foe of Objective Truth

Charlotte Hu
Discover Magazine
Originally published February 20, 2018

Here is an excerpt:

Take, for example, an experiment that demonstrated party affiliation affected people’s perception of a protest video. When participants felt the video depicted liberally minded protesters, Republicans were more in favor of police intervention than Democrats. The opposite emerged when participants thought the video showed a conservative protest. The visual information was identical, but people drew vastly different conclusions that were shaded by their political group affiliation.

“People are more likely to behave in and experience emotions in ways that are congruent with the activated social identity,” says Van Bavel. In other words, people will go along with the group, even if the ideas oppose their own ideologies—belonging may have more value than facts.

The situation is exacerbated by social media, which creates echo chambers on both the left and the right. In these concentric social networks, the same news articles are circulated, validating the beliefs of the group and strengthening their identity association with the group.

The article is here.