Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.

Abstract

Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information are privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.
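Purely as a toy illustration of this feedback loop (a hypothetical scoring function and made-up posts, not any platform's actual ranking algorithm), the sketch below ranks a small feed by predicted engagement; because the score rewards emotional arousal and ignores accuracy, emotionally charged misinformation floats to the top:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    emotional_arousal: float  # 0..1, e.g., intensity of outrage or fear
    accuracy: float           # 0..1, estimated factual accuracy

def predicted_engagement(post: Post) -> float:
    """Toy engagement model: attention-grabbing (emotional) content scores
    high regardless of accuracy -- the bias the authors argue gets exploited."""
    return 0.8 * post.emotional_arousal + 0.2  # note: accuracy plays no role

feed = [
    Post("Outrageous (false) claim about a rival group", 0.9, 0.1),
    Post("Careful correction with sources", 0.2, 0.95),
    Post("Emotive but true human-interest story", 0.7, 0.9),
]

# An engagement-maximizing ranker pushes high-arousal posts to the top,
# whether or not they are accurate.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post.text}")
```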

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Wednesday, December 13, 2023

Science and Ethics of “Curing” Misinformation

Freiling, I., Krause, N. M., & Scheufele, D. A.
AMA J Ethics. 2023;25(3):E228-237. 

Abstract

A growing chorus of academicians, public health officials, and other science communicators have warned of what they see as an ill-informed public making poor personal or electoral decisions. Misinformation is often seen as an urgent new problem, so some members of these communities have pushed for quick but untested solutions without carefully diagnosing ethical pitfalls of rushed interventions. This article argues that attempts to “cure” public opinion that are inconsistent with best available social science evidence not only leave the scientific community vulnerable to long-term reputational damage but also raise significant ethical questions. It also suggests strategies for communicating science and health information equitably, effectively, and ethically to audiences affected by it without undermining affected audiences’ agency over what to do with it.

My summary:

The authors explore the challenges and ethical considerations surrounding efforts to combat misinformation. They argue that describing these efforts as "curing" is problematic because it casts misinformation as a disease that can simply be eradicated. In their view, this framing is overly simplistic and disregards the complex social and psychological factors that contribute to the spread of misinformation.

The authors identify several ethical concerns with current approaches to combating misinformation, including:
  • The potential for censorship and suppression of legitimate dissent.
  • The undermining of public trust in science and expertise.
  • The creation of echo chambers and further polarization of public opinion.
Instead of trying to "cure" misinformation, the authors propose a more nuanced and ethical approach that focuses on promoting critical thinking, media literacy, and civic engagement. They also emphasize the importance of addressing the underlying social and psychological factors that contribute to the spread of misinformation, such as social isolation, distrust of authority, and a desire for simple explanations.

Wednesday, November 1, 2023

People believe misinformation is a threat because they assume others are gullible

Altay, S., & Acerbi, A. (2023).
New Media & Society, 0(0).

Abstract

Alarmist narratives about the flow of misinformation and its negative consequences have gained traction in recent years. While these fears are to some extent warranted, the scientific literature suggests that many of them are exaggerated. Why are people so worried about misinformation? In two pre-registered surveys conducted in the United Kingdom (Nstudy_1 = 300, Nstudy_2 = 300) and replicated in the United States (Nstudy_1 = 302, Nstudy_2 = 299), we investigated the psychological factors associated with perceived danger of misinformation and how it contributes to the popularity of alarmist narratives on misinformation. We find that the strongest, and most reliable, predictor of perceived danger of misinformation is the third-person effect (i.e. the perception that others are more vulnerable to misinformation than the self) and, in particular, the belief that “distant” others (as opposed to family and friends) are vulnerable to misinformation. The belief that societal problems have simple solutions and clear causes was consistently, but weakly, associated with perceived danger of online misinformation. Other factors, like negative attitudes toward new technologies and higher sensitivity to threats, were inconsistently, and weakly, associated with perceived danger of online misinformation. Finally, we found that participants who report being more worried about misinformation are more willing to like and share alarmist narratives on misinformation. Our findings suggest that fears about misinformation tap into our tendency to view other people as gullible.

My thoughts:

The authors conducted two pre-registered surveys in the United Kingdom and replicated them in the United States. They found that people who believed that others were more gullible than themselves were also more likely to perceive misinformation as a threat. This relationship was independent of other factors such as people's political beliefs, media consumption habits, and trust in institutions.

The authors argue that this finding suggests that people's concerns about misinformation may be rooted in their own biases about the intelligence and critical thinking skills of others. They also suggest that this bias may make people more likely to share and spread misinformation themselves.

The authors conclude by calling for more research on the role of bias in people's perceptions of misinformation. They also suggest that interventions to reduce misinformation should address people's biases about the gullibility of others.

One implication of this research is that people who are concerned about misinformation should be mindful of their own biases. It is important to remember that everyone is vulnerable to misinformation, regardless of their intelligence or education level. We should all be critical of the information we encounter online and be careful about sharing things that we are not sure are true.

Sunday, October 15, 2023

Bullshit blind spots: the roles of miscalibration and information processing in bullshit detection

Shane Littrell & Jonathan A. Fugelsang
(2023) Thinking & Reasoning
DOI: 10.1080/13546783.2023.2189163

Abstract

The growing prevalence of misleading information (i.e., bullshit) in society carries with it an increased need to understand the processes underlying many people’s susceptibility to falling for it. Here we report two studies (N = 412) examining the associations between one’s ability to detect pseudo-profound bullshit, confidence in one’s bullshit detection abilities, and the metacognitive experience of evaluating potentially misleading information. We find that people with the lowest (highest) bullshit detection performance overestimate (underestimate) their detection abilities and overplace (underplace) those abilities when compared to others. Additionally, people reported using both intuitive and reflective thinking processes when evaluating misleading information. Taken together, these results show that both highly bullshit-receptive and highly bullshit-resistant people are largely unaware of the extent to which they can detect bullshit and that traditional miserly processing explanations of receptivity to misleading information may be insufficient to fully account for these effects.


Here's my summary:

The authors of the article argue that people have two main blind spots when it comes to detecting bullshit: miscalibration and information processing. Miscalibration is a mismatch between confidence and actual ability: the people who are worst at detecting bullshit tend to overestimate their skill, while the best detectors tend to underestimate theirs.
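To make the abstract's distinction between overestimation and overplacement concrete, here is a minimal sketch with hypothetical numbers and a simplified operationalization (not the authors' exact scoring procedure): overestimation compares self-rated performance to actual performance, while overplacement compares where people think they rank relative to others with where they actually rank.

```python
def calibration_gaps(actual_score, self_estimated_score,
                     actual_percentile, self_estimated_percentile):
    """Two miscalibration measures from the overconfidence literature
    (hypothetical illustration, not the study's analysis code).

    overestimation: self-estimated score minus actual detection score.
    overplacement:  self-estimated percentile rank minus actual percentile rank,
                    i.e., how far above (or below) others one places oneself.
    """
    overestimation = self_estimated_score - actual_score
    overplacement = self_estimated_percentile - actual_percentile
    return overestimation, overplacement

# A poor detector who is confident: scores 4/10 (20th percentile)
# but believes they scored 8/10 and sit at the 70th percentile.
print(calibration_gaps(actual_score=4, self_estimated_score=8,
                       actual_percentile=20, self_estimated_percentile=70))
# -> (4, 50): both positive, i.e., overestimation and overplacement.
```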

Information processing is the way that we process information in order to make judgments. The authors argue that we are more likely to be fooled by bullshit when we are not paying close attention or when we are processing information quickly.

The authors also discuss some strategies for overcoming these blind spots. One strategy is to be aware of our own biases and limitations. We should also be critical of the information that we consume and take the time to evaluate evidence carefully.

Overall, the article provides a helpful framework for understanding the challenges of bullshit detection. It also offers some practical advice for overcoming these challenges.

Here are some additional tips for detecting bullshit:
  • Be skeptical of claims that seem too good to be true.
  • Look for evidence to support the claims that are being made.
  • Be aware of the speaker or writer's motives.
  • Ask yourself if the claims are making sense and whether they are consistent with what you already know.
  • If you're not sure whether something is bullshit, it's better to err on the side of caution and be skeptical.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearn for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving former executive Yoel Roth, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns. Musk is not a business genius and does not understand how to promote a healthy social media site.

Wednesday, August 30, 2023

Not all skepticism is “healthy” skepticism: Theorizing accuracy- and identity-motivated skepticism toward social media misinformation

Li, J. (2023). 
New Media & Society, 0(0). 

Abstract

Fostering skepticism has been seen as key to addressing misinformation on social media. This article reveals that not all skepticism is “healthy” skepticism by theorizing, measuring, and testing the effects of two types of skepticism toward social media misinformation: accuracy- and identity-motivated skepticism. A two-wave panel survey experiment shows that when people’s skepticism toward social media misinformation is driven by accuracy motivations, they are less likely to believe in congruent misinformation later encountered. They also consume more mainstream media, which in turn reinforces accuracy-motivated skepticism. In contrast, when skepticism toward social media misinformation is driven by identity motivations, people not only fall for congruent misinformation later encountered, but also disregard platform interventions that flag a post as false. Moreover, they are more likely to see social media misinformation as favoring opponents and intentionally avoid news on social media, both of which form a vicious cycle of fueling more identity-motivated skepticism.

Discussion

I have made the case that it is important to distinguish between accuracy-motivated skepticism and identity-motivated skepticism. They are empirically distinguishable constructs that cast opposing effects on outcomes important for a well-functioning democracy. Across the board, accuracy-motivated skepticism produces normatively desirable outcomes. Holding a higher level of accuracy-motivated skepticism makes people less likely to believe in congruent misinformation they encounter later, offering hope that partisan motivated reasoning can be attenuated. Accuracy-motivated skepticism toward social media misinformation also has a mutually reinforcing relationship with consuming news from mainstream media, which can serve to verify information on social media and produce potential learning effects.

In contrast, not all skepticism is “healthy” skepticism. Holding a higher level of identity-motivated skepticism not only increases people’s susceptibility to congruent misinformation they encounter later, but also renders content flagging by social media platforms less effective. This is worrisome as calls for skepticism and platform content moderation have been a crucial part of recently proposed solutions to misinformation. Further, identity-motivated skepticism reinforces perceived bias of misinformation and intentional avoidance of news on social media. These can form a vicious cycle of close-mindedness and politicization of misinformation.

This article advances previous understanding of skepticism by showing that beyond the amount of questioning (the tipping point between skepticism and cynicism), the type of underlying motivation matters for whether skepticism helps people become more informed. By bringing motivated reasoning and media skepticism into the same theoretical space, this article helps us make sense of the contradictory evidence on the utility of media skepticism. Skepticism in general should not be assumed to be “healthy” for democracy. When driven by identity motivations, skepticism toward social media misinformation is counterproductive for political learning; only when skepticism toward social media is driven by the accuracy motivations does it inoculate people against favorable falsehoods and encourage consumption of credible alternatives.


Here are some additional thoughts on the research:
  • The distinction between accuracy-motivated skepticism and identity-motivated skepticism is a useful one. It helps to explain why some people are more likely to believe in misinformation than others.
  • The findings of the studies suggest that interventions that promote accuracy-motivated skepticism could be effective in reducing the spread of misinformation on social media.
  • It is important to note that the research was conducted in the United States. It is possible that the findings would be different in other countries.

Monday, August 28, 2023

'You can't bullshit a bullshitter' (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021).
The British journal of social psychology, 60(4), 
1484–1505.

Abstract

Research into both receptivity to falling for bullshit and the propensity to produce it have recently emerged as active, independent areas of inquiry into the spread of misleading information. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here, we present three studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) positively predicts susceptibility to various types of misleading information and that this association is robust to individual differences in cognitive ability and analytic cognitive style.

Conclusion

Gaining a better understanding of the differing ways in which various types of misleading information are transmitted and received is becoming increasingly important in the information age (Kristansen & Kaussler, 2018). Indeed, an oft-repeated maxim in popular culture is, “you can’t bullshit a bullshitter.” While folk wisdom may assert that this is true, the present investigation suggests that the reality is a bit more complicated. Our primary aim was to examine the extent to which bullshitting frequency is associated with susceptibility to falling for bullshit. Overall, we found that persuasive bullshitters (but not evasive bullshitters) were more receptive to various types of bullshit and, in the case of pseudo-profound statements, even when controlling for factors related to intelligence and analytic thinking. These results enrich our understanding of the transmission and detection of certain types of misleading information, specifically the associations between the propensity to produce and the tendency to fall for bullshit and will help to inform future research in this growing area of scholarship.



Friday, August 4, 2023

Social Media and Morality

Van Bavel, J. J., Robertson, C. et al. (2023, June 6).

Abstract

Nearly five billion people around the world now use social media, and this number continues to grow. One of the primary goals of social media platforms is to capture and monetize human attention. One means by which individuals and groups can capture attention and drive engagement on these platforms is by sharing morally and emotionally evocative content. We review a growing body of research on the interrelationship of social media and morality–as well the consequences for individuals and society. Moral content often goes “viral” on social media, and social media makes moral behavior (such as punishment) less costly. Thus, social media often acts as an accelerant for existing moral dynamics – amplifying outrage, status seeking, and intergroup conflict, while also potentially amplifying more constructive facets of morality, such as social support, pro-sociality, and collective action. We discuss trends, heated debates, and future directions in this emerging literature.

From Discussions and Future Directions

Addressing the interplay between social media and morality 

There is a growing recognition among scholars and the public that social media has deleterious consequences for society and there is a growing appetite for greater transparency and some form of regulation of social media platforms (Rathje et al., 2023). To address the adverse consequences of social media, solutions at the system level are necessary (e.g., Chater & Loewenstein, 2022), but individual- or group-level solutions may be useful for creating behavioral change before system-level change is in place and for increasing public support for system-level solutions (Koppel et. al., 2023). In the following section, we discuss a range of solutions that address the adverse consequences of the interplay between social media and morality.

Regulation is one of the most heavily debated ways of mitigating the adverse features of social media. Regulating social media can be done both on platforms as well at the national or cross-national level, but always involves discussions about who should decide what should be allowed on which platforms (Kaye, 2019). Currently, there is relatively little editorial oversight with the content even on mainstream platforms, yet the connotations with censorship makes regulation inherently controversial. For instance, Americans believe that social media companies censor political viewpoints (Vogels et al., 2020) and believe it is hard to regulate social media because people cannot agree upon what should and should not be removed (PewResearch Center, 2019). Moreover, authoritarian states can suppress dissent through the regulation of speech on social media.

In general, people on the political left are supportive of regulating social media platforms (Kozyreva, 2023; Rasmussen, 2022), reflecting liberals’ general tendency to be more supportive of regulatory policies and conservatives’ tendency to be more opposed to them (e.g., Grossman, 2015). In the context of content on social media, one explanation is that left-leaning people infer more harm from aggressive behaviors. In other words, they may perceive immoral behaviors on social media as more harmful for the victim, which in turn justifies regulation (Graham 2009; Crawford 2017; Walter 2019; Boch 2020). There are conflicting results, however, on whether people oppose regulating hate speech (Bilewicz et al. 2017; Rasmussen 2023a) because they use hate to derogate minority and oppressed groups (Sidanius, Pratto, and Bobo 1996; Federico and Sidanius, 2002) or because of principled political preferences deriving from conservative values (Grossman 2016; Grossman 2015; Sniderman & Carmines, 1997; Sniderman & Piazza, 1993; Sniderman, Piazza, Tetlock, & Kendrick, 1991). While sensitivity to harm contributes to making people on the political left more supportive of regulating social media, it is contested whether opposition from the political right derives from group-based dominance or principled opposition.


Here is a summary from me:
  • Social media can influence our moral judgments. Studies have shown that people are more likely to make moral judgments that align with the views of their social media friends and the content they consume on social media. For example, one study found that people who were exposed to pro-environmental content on social media were more likely to make moral judgments that favored environmental protection.
  • Social media can lead to moral disengagement. Moral disengagement is a psychological process that allows people to justify harmful or unethical behavior. Studies have shown that social media can contribute to moral disengagement by making it easier for people to distance themselves from the consequences of their actions. For example, one study found that people who were exposed to violent content on social media were more likely to engage in moral disengagement.
  • Social media can promote prosocial behavior. Prosocial behavior is behavior that is helpful or beneficial to others. Studies have shown that social media can promote prosocial behavior by connecting people with others who share their values and by providing opportunities for people to help others. For example, one study found that people who used social media to connect with others were more likely to volunteer their time to help others.
  • Social media can be used to spread misinformation and hate speech. Misinformation is false or misleading information that is spread intentionally or unintentionally. Hate speech is speech that attacks a person or group on the basis of attributes such as race, religion, or sexual orientation. Social media platforms have been used to spread misinformation and hate speech, which can have a negative impact on society.
Overall, the research on social media and morality suggests that social media can have both positive and negative effects on our moral judgments and behavior. It is important to be aware of the potential risks and benefits of social media and to use it in a way that promotes positive moral values.

Tuesday, August 1, 2023

When Did Medicine Become a Battleground for Everything?

Tara Haelle
Medscape.com
Originally posted 18 July 23

Like hundreds of other medical experts, Leana Wen, MD, an emergency physician and former Baltimore health commissioner, was an early and avid supporter of COVID vaccines and their ability to prevent severe disease, hospitalization, and death from SARS-CoV-2 infections.

When 51-year-old Scott Eli Harris, of Aubrey, Texas, heard of Wen's stance in July 2021, the self-described "5th generation US Army veteran and a sniper" sent Wen an electronic invective laden with racist language and very specific threats to shoot her.

Harris pled guilty to transmitting threats via interstate commerce last February and began serving 6 months in federal prison last fall, but his threats wouldn't be the last for Wen. Just 2 days after Harris was sentenced, charges were unsealed against another man in Massachusetts, who threatened that Wen would "end up in pieces" if she continued "pushing" her thoughts publicly.

Wen has plenty of company. In an August 2022 survey of emergency doctors conducted by the American College of Emergency Physicians, 85% of respondents said violence against them is increasing. One in four doctors said they're being assaulted by patients and their family and friends multiple times a week, compared to just 8% of doctors who said as much in 2018. Sixty-four percent of emergency physicians reported receiving verbal assaults and threats of violence; 40% reported being hit or slapped, and 26% were kicked.

This uptick of violence and threats against physicians didn't come out of nowhere; violence against healthcare workers has been gradually increasing over the past decade. Healthcare providers can attest to the hostility that particular topics have sparked for years: vaccines in pediatrics, abortion in ob-gyn, and gender-affirming care in endocrinology.

But the pandemic fueled the fire. While there have always been hot-button issues in medicine, the ire they arouse today is more intense than ever before. The proliferation of misinformation (often via social media) and the politicization of public health and medicine are at the center of the problem.

"The People Attacking Are Themselves Victims'

The misinformation problem first came to a head in one area of public health: vaccines. The pandemic accelerated antagonism in medicine ― thanks, in part, to decades of antivaccine activism.

The anti-vaccine movement, which has ebbed and flowed in the US and across the globe since the first vaccine, experienced a new wave in the early 2000s with the combination of concerns about thimerosal in vaccines and a now disproven link between autism and the MMR vaccine. But that movement grew. It picked up steam when activists gained political clout after a 2014 measles outbreak at Disneyland led California schools to tighten up policies regarding vaccinations for kids who enrolled. These stronger public school vaccination laws ran up against religious freedom arguments from anti-vaccine advocates.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Tuesday, July 11, 2023

Conspirituality: How New Age conspiracy theories threaten public health

D. Beres, M. Remski, & J. Walker
bigthink.com
Originally posted 17 June 23

Here is an excerpt:

Disaster capitalism and disaster spirituality rely, respectively, on an endless supply of items to commodify and minds to recruit. While both roar into high gear in times of widespread precarity and vulnerability, in disaster spirituality there is arguably more at stake on the supply side. Hedge fund managers can buy up distressed properties in post-Katrina New Orleans to gentrify and flip. They have cash on hand to pull from when opportunity strikes, whereas most spiritual figures have to use other means for acquisitions and recruitment during times of distress.

Most of the influencers operating in today’s conspirituality landscape stand outside of mainstream economies and institutional support. They’ve been developing fringe religious ideas and making money however they can, usually up against high customer turnover.

For the mega-rich disaster capitalist, a hurricane or civil war is a windfall. But for the skint disaster spiritualist, a public catastrophe like 9/11 or COVID-19 is a life raft. Many have no choice but to climb aboard and ride. Additionally, if your spiritual group has been claiming for years to have the answers to life’s most desperate problems, the disaster is an irresistible dare, a chance to make good on divine promises. If the spiritual group has been selling health ideologies or products they guarantee will ensure perfect health, how can they turn away from the opportunity presented by a pandemic?


Here is my summary with some extras:

The article argues that conspirituality is a growing problem that is threatening public health. Conspiritualists push the false beliefs that vaccines are harmful, that the COVID-19 pandemic is a hoax, and that natural immunity is the best way to protect oneself from disease. These beliefs can lead people to make decisions that put their health and the health of others at risk.

The article also argues that conspirituality is often spread through social media platforms, which can make it difficult to verify the accuracy of information. This can lead people to believe false or misleading information, which can have serious consequences for their health. At the same time, some individuals profit from the spread of this disinformation.

The article concludes by calling for more research on conspirituality and its impact on public health. It also calls for public health professionals to be more aware of conspirituality and to develop strategies to address it.
  • Conspirituality is a term that combines "conspiracy" and "spirituality." It refers to the belief that certain anti-science ideas (such as alternative medicine, non-scientific interventions, and spiritual healing) are being suppressed by a powerful elite. Conspiritualists often believe that this elite is responsible for a wide range of problems, including the COVID-19 pandemic.
  • The term "conspirituality" was coined by sociologists Charlotte Ward and David Voas in 2011. They argued that conspirituality is a unique form of conspiracy theory characterized by blending 1) New Age beliefs (religious and spiritual ideas) of a paradigm shift in consciousness (in which we will all be awakened to a new reality) and 2) traditional conspiracy theories (in which an elite, powerful, and covert group of individuals is controlling, or trying to control, the social and political order).

Sunday, May 7, 2023

Stolen elections: How conspiracy beliefs during the 2020 American presidential elections changed over time

Wang, H., & Van Prooijen, J. (2022).
Applied Cognitive Psychology.
https://doi.org/10.1002/acp.3996

Abstract

Conspiracy beliefs have been studied mostly through cross-sectional designs. We conducted a five-wave longitudinal study (N = 376; two waves before and three waves after the 2020 American presidential elections) to examine if the election results influenced specific conspiracy beliefs and conspiracy mentality, and whether effects differ between election winners (i.e., Biden voters) versus losers (i.e., Trump voters) at the individual level. Results revealed that conspiracy mentality kept unchanged over 2 months, providing first evidence that this indeed is a relatively stable trait. Specific conspiracy beliefs (outgroup and ingroup conspiracy beliefs) did change over time, however. In terms of group-level change, outgroup conspiracy beliefs decreased over time for Biden voters but increased for Trump voters. Ingroup conspiracy beliefs decreased over time across all voters, although those of Trump voters decreased faster. These findings illuminate how specific conspiracy beliefs are, and conspiracy mentality is not, influenced by an election event.

From the General Discussion

Most studies on conspiracy beliefs provide correlational evidence through cross-sectional designs (van Prooijen & Douglas, 2018). The present research took full advantage of the 2020 American presidential elections through a five-wave longitudinal design, enabling three complementary contributions. First, the results provide evidence that conspiracy mentality is a relatively stable individual difference trait (Bruder et al., 2013; Imhoff & Bruder, 2014): While the election did influence specific conspiracy beliefs (i.e., that the elections were rigged), it did not influence conspiracy mentality. Second, the results provide evidence for the notion that conspiracy beliefs are for election losers (Uscinski & Parent, 2014), as reflected in the finding that Biden voters' outgroup conspiracy beliefs decreased at the individual level, while Trump voters' did not. The group-level effects on changes in outgroup conspiracy beliefs also underscored the role of intergroup conflict in conspiracy theories (van Prooijen & Song, 2021). And third, the present research examined conspiracy theories about one's own political ingroup, and found that such ingroup conspiracy beliefs decreased over time.

The decrease over time for ingroup conspiracy beliefs occurred among both Biden and Trump voters. We speculate that, given its polarized nature and contested result, this election increased intergroup conflict between Biden and Trump voters. Such intergroup conflict may have increased feelings of ingroup loyalty within both voter groups (Druckman, 1994), therefore decreasing beliefs that members of one's own group were conspiring. Moreover, ingroup conspiracy beliefs were higher for Trump than Biden voters (particularly at the first measurement point). This difference might expand previous findings that Republicans are more susceptible to conspiracy cues than Democrats (Enders & Smallpage, 2019), by suggesting that these effects generalize to conspiracy cues coming from their own ingroup.

Conclusion

The 2020 American presidential elections yielded many conspiracy beliefs that the elections were rigged, and conspiracy beliefs generally have negative consequences for societies. One key challenge for scientists and policymakers is to establish how conspiracy theories develop over time. In this research, we conducted a longitudinal study to provide empirical insights into the temporal dynamics underlying conspiracy beliefs, in the setting of a polarized election. We conclude that specific conspiracy beliefs that the elections were rigged—but not conspiracy mentality—are malleable over time, depending on political affiliations and election results.

Tuesday, May 2, 2023

Lies and bullshit: The negative effects of misinformation grow stronger over time

Petrocelli, J. V., Seta, C. E., & Seta, J. J. (2023). 
Applied Cognitive Psychology, 37(2), 409–418. 
https://doi.org/10.1002/acp.4043

Abstract

In a world where exposure to untrustworthy communicators is common, trust has become more important than ever for effective marketing. Nevertheless, we know very little about the long-term consequences of exposure to untrustworthy sources, such as bullshitters. This research examines how untrustworthy sources—liars and bullshitters—influence consumer attitudes toward a product. Frankfurt's (1986) insidious bullshit hypothesis (i.e., bullshitting is evaluated less negatively than lying but bullshit can be more harmful than are lies) is examined within a traditional sleeper effect—a persuasive influence that increases, rather than decays over time. We obtained a sleeper effect after participants learned that the source of the message was either a liar or a bullshitter. However, compared to the liar source condition, the same message from a bullshitter resulted in more extreme immediate and delayed attitudes that were in line with an otherwise discounted persuasive message (i.e., an advertisement). Interestingly, attitudes returned to control condition levels when a bullshitter was the source of the message, suggesting that knowing an initially discounted message may be potentially accurate/inaccurate (as is true with bullshit, but not lies) does not result in the long-term discounting of that message. We discuss implications for marketing and other contexts of persuasion.

General Discussion

There is a considerable body of knowledge about the antecedents and consequences of lying in marketing and other contexts (e.g., Ekman, 1985), but much less is known about the other untrustworthy source: The Bullshitter. The current investigation suggests that the distinction between bullshitting and lying is important to marketing and to persuasion more generally. People are exposed to scores of lies and bullshit every day and this exposure has increased dramatically as the use of the internet has shifted from a platform for socializing to a source of information (e.g., Di Domenico et al., 2021). Because things such as truth status and source status fade faster than familiarity, illusory truth effects for consumer products can emerge after only 3 days post-initial exposure (Skurnik et al., 2005), and within the hour for basic knowledge questions (Fazio et al., 2015). As mirrored in our conditions that received discounting cues after the initial attitude information, at times people are lied to, or bullshitted, and only learn afterwards they were deceived. It is then that these untrustworthy sources appear to have a sleeper effect creating unwarranted and undiscounted attitudes.

It should be noted that our data do not suggest that the impact of lie and bullshit discounting cues fade differentially. However, the discounting cue in the bullshit condition had less of an immediate and long-term suppression effect than in the lie condition. In fact, after 14 days, the bullshit communication not only had more of an influence on attitudes, but the influence was not significantly different from that of the control communication. This finding suggests that bullshit can be more insidious than lies. As it relates to marketing, the insidious nature of exposure to bullshit can create false beliefs that subsequently affect behavior, even when people have been told that the information came from a person known to spread bullshit. The insidious nature of bullshit is magnified by the fact that even when it is clear that one is expressing his/her opinion via bullshit, people do not appear to hold the bullshitter to the same standard as the liar (Frankfurt, 1986). People may think that at least the bullshitter often believes his/her own bullshit, whereas the liar knows his/her statement is not true (Bernal, 2006; Preti, 2006; Reisch, 2006). Because of this difference, what may appear to be harmless communications from a bullshitter may have serious repercussions for consumers and organizations. Additionally, along with the research of Foos et al. (2016), the present research suggests that the harmful influence of untrustworthy sources may not be recognized initially but appears over time. The present research suggests that efforts to fight the consequences of fake news (see Atkinson, 2019) are more difficult because of the sleeper effect. The negative effects of unsubstantiated or false information may not only persist but may grow stronger over time.

Friday, April 28, 2023

Filling in the Gaps: False Memories and Partisan Bias

Armaly, M.T. & Enders, A.
Political Psychology, Vol. 0, No. 0, 2022
doi: 10.1111/pops.12841

Abstract

While cognitive psychologists have learned a great deal about people's propensity for constructing and acting on false memories, the connection between false memories and politics remains understudied. If partisan bias guides the adoption of beliefs and colors one's interpretation of new events and information, so too might it prove powerful enough to fabricate memories of political circumstances. Across two studies, we first distinguish false memories from false beliefs and expressive responses; false political memories appear to be genuine and subject to partisan bias. We also examine the political and psychological correlates of false memories. Nearly a third of respondents reported remembering a fabricated or factually altered political event, with many going so far as to convey the circumstances under which they “heard about” the event. False-memory recall is correlated with the strength of partisan attachments, interest in politics, and participation, as well as narcissism, conspiratorial thinking, and cognitive ability.

Conclusion

While cognitive psychologists have learned a great deal about people’s propensity for constructing and acting on false memories, the role of false memories in political attitudes has received scant attention. In this study, we built on previous work by investigating the partisan foundations and political and psychological correlates of false memories. We found that nearly a third of respondents reported remembering a fabricated or factually altered political event. These false memories are not mere beliefs or expressive responses; indeed, most respondents conveyed where they “heard about” at least one event in question, with some providing vivid details of their circumstances. We also found that false memory is associated with the strength of one’s partisan attachments, conspiracism, and interest in politics, among other factors.

Altogether, false memories seem to behave like a form of partisan bias: The more in touch one is with politics, especially the political parties, the more susceptible they are to false-memory construction. While we cannot ascribe causality, uncovering this (likely) mechanism has several implications. First, the more polarized we become, the more likely individuals may be to construct false memories about in- and outgroups. In turn, the falser memories one constructs about the greatness of one's ingroup and the evil doings of the outgroup, the higher the temperature of polarization rises. Second, false-memory construction may be one mechanism by which misinformation takes hold psychologically. By exposing people to information they are motivated to believe, skilled traffickers of misinformation may be able to not only convince one to believe something but convince them that something which never transpired actually did so. The conviction that accompanies memory—people's natural tendency to believe their memories are trustworthy—makes false memories a particularly pernicious route by which to manipulate those subject to this bias. Indeed, this is precisely the concern presented by "deepfakes"—images and videos that have been expertly altered or fabricated for the purpose of exploiting targeted viewers. Finally, and relatedly, politicians may be able to induce false memories, strategically molding a past reality to suit their political will.

Saturday, April 15, 2023

Resolving content moderation dilemmas between free speech and harmful misinformation

Kozyreva, A., Herzog, S. M., et al. (2023). 
Proceedings of the National Academy of Sciences, 120(7).
https://doi.org/10.1073/pnas.2210666120

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.

Significance

Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Currently, these decisions are made without any knowledge of how people would approach them. In our study, we systematically varied factors that could influence moral judgments and found that despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.

Discussion

Content moderation is controversial and consequential. Regulators are reluctant to restrict harmful but legal content such as misinformation, thereby leaving platforms to decide what content to allow and what to ban. At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health. In our investigation of which aspects of content moderation dilemmas affect people’s choices about these trade-offs and what impact individual attitudes have on these decisions, we found that respondents’ willingness to remove posts or to suspend an account increased with the severity of the consequences of misinformation and whether the account had previously posted misinformation. The topic of the misinformation also mattered—climate change denial was acted on the least, whereas Holocaust denial and election denial were acted on more often, closely followed by antivaccination content. In contrast, features of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. In sum, the individual characteristics of those who spread misinformation mattered little, whereas the amount of harm, repeated offenses, and type of content mattered the most.

Sunday, February 26, 2023

Time pressure reduces misinformation discrimination ability but does not alter response bias

Sultan, M., Tump, A.N., Geers, M. et al. 
Sci Rep 12, 22416 (2022).
https://doi.org/10.1038/s41598-022-26209-8

Abstract

Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.

Discussion

In this study, we investigated the impact of time pressure on people’s ability to judge the veracity of online misinformation in terms of (a) discrimination ability, (b) response bias, and (c) four key determinants of misinformation susceptibility (i.e., analytical thinking, ideological congruency, motivated reflection, and familiarity). We found that time pressure reduced discrimination ability but did not alter the—already present—negative response bias (i.e., general tendency to evaluate news as false). Moreover, the associations observed for the four determinants of misinformation susceptibility were largely stable across treatments, with the exception that the positive effect of familiarity on response bias (i.e., response tendency to treat familiar news as true) was slightly reduced under time pressure. We discuss each of these findings in more detail next.

As predicted, we found that time pressure reduced discrimination ability: Participants under time pressure were less able to distinguish between true and false news. These results corroborate earlier work on the speed–accuracy trade-off and indicate that fast-paced news consumption on social media likely leads people to misjudge the veracity not only of false news, as seen in the study by Bago and colleagues, but also of true news. As in their paper, we stress that interventions aimed at mitigating misinformation should target this phenomenon and seek to improve veracity judgements by encouraging deliberation. It will also be important to follow up on these findings by examining whether time pressure has a similar effect on news items that have already been subject to interventions such as debunking.

Our results for response bias showed that participants had a general tendency to evaluate news headlines as false (i.e., a negative response bias); this effect was similarly strong across the two treatments. From the perspective of the individual decision maker, this response bias could reflect a preference to avoid one type of error over another (i.e., treating the acceptance of false news as true as worse than the rejection of true news as false) and/or an overall expectation that false news is more prevalent than true news in our experiment. Note that the ratio of true to false news we used (1:1) differs from the real world, where the fraction of false news is typically thought to be much smaller. A more ecologically valid experiment with a more representative sample could therefore yield a different response bias. It will thus be important for future studies to assess whether participants hold such a bias in the real world, whether they are conscious of this response tendency, and whether it translates into (in)accurate beliefs about the news itself.
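For readers less familiar with signal detection theory, the sketch below shows how the two quantities discussed above, discrimination ability (d′) and response bias (the criterion), are commonly computed from a participant's judgments under the standard equal-variance Gaussian model. It is a generic illustration, not the authors' analysis code; the counts are hypothetical, and sign conventions for the bias measure differ across papers.

```python
# Generic signal detection sketch (hypothetical counts, not study data).
# "Hit" = true headline judged true; "false alarm" = false headline judged true.
from scipy.stats import norm


def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # discrimination ability
    criterion = -0.5 * (z_hit + z_fa)   # bias; positive here = tendency to answer "false"
    return d_prime, criterion


# A participant who saw 20 true and 20 false headlines (hypothetical numbers).
d_prime, criterion = sdt_measures(hits=14, misses=6,
                                  false_alarms=5, correct_rejections=15)
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```

In this framing, time pressure lowering d′ while leaving the criterion unchanged means that speeded participants became noisier judges without becoming any more or less willing to call a headline true.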

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D.A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375

Abstract

Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.

Conclusion

In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Thursday, May 26, 2022

Do You Still Believe in the “Chemical Imbalance Theory of Mental Illness”?

Bruce Levine
counterpunch.org
Originally published 29 APR 22

Here are two excerpts:

If you knew that psychiatric drugs—similar to other psychotropic substances such as marijuana and alcohol—merely “take the edge off” rather than correct a chemical imbalance, would you be more hesitant about using them, and more reluctant to give them to your children? Drug companies certainly believe you would be less inclined if you knew the truth, and that is why, early on, we were flooded with commercials about how antidepressants “work to correct this imbalance.”

So, when exactly did psychiatry discard its chemical imbalance theory? While researchers began jettisoning it by the 1990s, one of psychiatry’s first loud rejections was in 2011, when psychiatrist Ronald Pies, Editor-in-Chief Emeritus of the Psychiatric Times, stated: “In truth, the ‘chemical imbalance’ notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists.” Pies is not the highest-ranking psychiatrist to acknowledge the invalidity of the chemical imbalance theory.

Thomas Insel was the NIMH director from 2002 to 2015, and in his recently published book, Healing (2022), he notes, “The idea of mental illness as a ‘chemical imbalance’ has now given way to mental illnesses as ‘connectional’ or brain circuit disorders.” While this latest “brain circuit disorder” theory remains controversial, it is now the consensus at the highest levels of psychiatry that the chemical imbalance theory is invalid.

The jettisoning of the chemical imbalance theory should have been uncontroversial twenty-five years ago, when it became clear to research scientists that it was a disproved hypothesis. In Blaming the Brain (1998), Elliot Valenstein, professor emeritus of psychology and neuroscience at the University of Michigan, detailed research showing that it is just as likely for people with normal serotonin levels to feel depressed as it is for people with abnormal serotonin levels, and that it is just as likely for people with abnormally high serotonin levels to feel depressed as it is for people with abnormally low serotonin levels. Valenstein concluded, “Furthermore, there is no convincing evidence that depressed people have a serotonin or norepinephrine deficiency.” But how many Americans heard about this?

(cut)

Apparently, authorities at the highest levels have long known that the chemical imbalance theory was a disproven hypothesis, but they have viewed it as a useful “noble lie” to encourage medication use.

If you took SSRI antidepressants believing that these drugs helped correct a chemical imbalance, how does it feel to learn that this theory has long been disproven? Will this affect your trust in current and future claims by psychiatry? Were you prescribed an antidepressant not by a psychiatrist but by your primary care physician, and will this make you anxious about trusting all healthcare authorities?

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or on a small number of online platforms, often with non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.