Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Misinformation.

Monday, August 18, 2025

Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info

Jeff Horwitz
Reuters.com
Originally posted 14 Aug 25

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.


Here are some thoughts:

Meta’s AI chatbot guidelines show a blatant disregard for child safety, allowing romantic conversations with minors, a clear violation of ethical standards. Shockingly, these rules were greenlit by Meta’s legal, policy, and even ethics teams, exposing a systemic failure in corporate responsibility. Worse, the policy treats kids as test subjects for AI training, exploiting them instead of protecting them. On top of that, the chatbots were permitted to spread dangerous misinformation, including racist stereotypes and false medical claims. This isn’t just negligence; it’s an ethical breakdown at every level.

Greed is not good.

Monday, January 13, 2025

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024).
Mass Communication & Society, 1–25.
https://doi.org/10.1080/15205436.2024.2382776

Abstract

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design where U.S. participants rated the accuracy of true and false news, we manipulated the proportions of false news headlines participants were exposed to (17%, 33%, 50%, 66%, and 83%). We found that exposure to higher proportions of false news decreased trust in the news but did not affect participants’ perceived accuracy of news headlines. While higher proportions of false news had no effect on participants’ overall ability to discern between true and false news, they made participants more overconfident in their discernment ability. Therefore, exposure to false news may have deleterious effects not by increasing belief in falsehoods, but by fueling overconfidence and eroding trust in the news. Although we are only able to shed light on one causal pathway, from news environment to attitudes, this can help us better understand the effects of external or supply-side changes in news quality.


Here are some thoughts:

The study investigates the impact of increased exposure to false news on individuals' trust in media, their ability to discern truth from falsehood, and their confidence in their evaluation skills. The research involved two online experiments with a total of 2,735 participants, who rated the accuracy of news headlines after being exposed to varying proportions of false content. The findings reveal that higher rates of misinformation significantly decrease general media trust, independent of individual factors such as ideology or cognitive reflectiveness. This decline in trust may lead individuals to turn away from credible news sources in favor of less reliable alternatives, even when their ability to evaluate individual news items remains intact.

Interestingly, while participants displayed overconfidence in their evaluations after exposure to predominantly false content, their actual accuracy judgments did not significantly vary with the proportion of true and false news. This suggests that personal traits like discernment skills play a more substantial role than environmental cues in determining how individuals assess news accuracy. The study also highlights a disconnect between changes in media trust and evaluations of specific news items, indicating that attitudes toward media are often more malleable than actual behavior.
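
To make the constructs in this summary concrete, here is a minimal Python sketch of how discernment and overconfidence could be computed from headline ratings. The scoring functions, scales, and example values are my own illustrative assumptions, not the authors' actual measures or analysis code.

# Illustrative sketch (assumed operationalization, not the authors' measures).

def discernment(true_ratings, false_ratings):
    """Mean accuracy rating given to true headlines minus mean rating given to false ones."""
    return sum(true_ratings) / len(true_ratings) - sum(false_ratings) / len(false_ratings)

def overconfidence(self_assessed_ability, actual_discernment):
    """Positive values: a participant rates their ability higher than they performed.
    In practice, both terms would be standardized so they share a common scale."""
    return self_assessed_ability - actual_discernment

# Hypothetical participant exposed to mostly false headlines (ratings on a 1-7 scale)
true_ratings = [6, 5, 7, 6]
false_ratings = [4, 5, 3, 4]
d = discernment(true_ratings, false_ratings)   # 2.0: true items rated as more accurate
print(overconfidence(self_assessed_ability=3.5, actual_discernment=d))   # 1.5

On this toy reading, the study's finding is that raising the share of false headlines left the discernment term roughly unchanged while inflating the self-assessment term, which is the overconfidence pattern described above.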

The research underscores the importance of understanding the psychological mechanisms at play when individuals encounter misinformation. It points out that interventions aimed at improving news discernment should consider the potential for increased skepticism rather than enhanced accuracy. Moreover, the findings suggest that exposure to high levels of false news can lead to overconfidence in one's ability to judge news quality, which may result in the rejection of accurate information.

Overall, the study provides credible evidence that exposure to predominantly false news can have harmful effects by eroding trust in media institutions and fostering overconfidence in personal judgment abilities. These insights are crucial for developing effective strategies to combat misinformation and promote healthy media consumption habits among the public.

Wednesday, November 27, 2024

Deepfake detection with and without content warnings

Lewis, A., Vu, P., Duch, R. M., & Chowdhury, A.
(2023). Royal Society Open Science, 10(11).

Abstract

The rapid advancement of ‘deepfake' video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) compared to a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.

Here are some thoughts: 

The rise of deepfake technology introduces significant challenges for psychologists, particularly in the areas of trust, perception, and digital identity. As deepfakes become increasingly sophisticated and hard to detect, they may foster a general skepticism toward digital media, including online therapy platforms and digital content. This skepticism could affect the therapeutic alliance, as clients might become more wary of the reliability and authenticity of online interactions. For therapists who conduct virtual sessions or share therapeutic resources online, this growing distrust of digital content could impact clients’ willingness to engage fully, potentially compromising therapeutic outcomes.

Another key concern is the vulnerability to misinformation that deepfakes introduce. These realistic, fabricated videos can be used to create misleading or harmful content, which may distress clients or influence their beliefs and behaviors. For clients already struggling with anxiety, paranoia, or trauma, the presence of undetectable deepfakes in the media landscape could intensify symptoms, making it more difficult for them to feel safe and secure. Therapists must be prepared to help clients navigate these feelings, addressing the psychological effects of a world where truth can be distorted at will and guiding clients toward healthier media consumption habits.

Deepfake technology also threatens personal identity and privacy, presenting unique risks for both clients and therapists. The potential for therapists or clients to be misrepresented in fabricated media could lead to boundary issues or mistrust within the therapeutic relationship. If deepfake content were to circulate, it might appear credible to clients or even influence their perception of reality. This may create a barrier in therapy if clients experience confusion or fear regarding digital identity and privacy, as well as complicate therapists' ability to establish and maintain boundaries online.

The psychological implications of deepfakes also prompt ethical considerations for psychologists. As trusted mental health professionals, psychologists may find themselves addressing client concerns about digital literacy and emotional stability amid a fast-evolving digital environment. The ability to understand and anticipate the effects of deepfake technology could become an essential component of ethical and professional responsibility in therapy. As the digital world becomes more complex, therapists are positioned to help clients navigate these new challenges with discernment, promoting psychological resilience and healthy media habits within the therapeutic context.

Monday, November 18, 2024

A Call to Address AI “Hallucinations” and How Healthcare Professionals Can Mitigate Their Risks

Hatem, R., Simmons, B., & Thornton, J. E. (2023).
Cureus, 15(9), e44720.

Abstract

Artificial intelligence (AI) has transformed society in many ways. AI in medicine has the potential to improve medical care and reduce healthcare professional burnout but we must be cautious of a phenomenon termed "AI hallucinations" and how this term can lead to the stigmatization of AI systems and persons who experience hallucinations. We believe the term "AI misinformation" to be more appropriate and avoids contributing to stigmatization. Healthcare professionals can play an important role in AI’s integration into medicine, especially regarding mental health services, so it is important that we continue to critically evaluate AI systems as they emerge.

The article is linked above.

Here are some thoughts:

In the rapidly evolving landscape of artificial intelligence, the phenomenon of AI inaccuracies—whether termed "hallucinations" or "misinformation"—represents a critical challenge that demands nuanced understanding and responsible management. While technological advancements are progressively reducing the frequency of these errors, with detection algorithms now capable of identifying inaccuracies with nearly 80% accuracy, the underlying issue remains complex and multifaceted.

The ethical implications of AI inaccuracies are profound, particularly in high-stakes domains like healthcare and legal services. Professionals must approach AI tools with a critical eye, understanding that these technologies are sophisticated assistants rather than infallible oracles. The responsibility lies not just with AI developers, but with users who must exercise judgment, validate outputs, and recognize the inherent limitations of current AI systems.

Ultimately, the journey toward more accurate AI is ongoing, requiring continuous learning, adaptation, and a commitment to ethical principles that prioritize human well-being and intellectual integrity. As AI becomes increasingly integrated into our professional and personal lives, our approach must be characterized by curiosity, critical thinking, and a deep respect for the complex interplay between human intelligence and artificial systems.

Monday, October 14, 2024

This AI chatbot got conspiracy theorists to question their convictions

Helena Kudiabor
Nature.com
Originally posted 12 Sept 24

Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, by designing a chatbot that can debunk false information and get people to question their thinking.

In a study published in Science on 12 September, participants spent a few minutes interacting with the chatbot, which provided detailed responses and arguments, and experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people’s minds.

“This paper really challenged a lot of existing literature about us living in a post-truth society,” says Katherine FitzGerald, who researches conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia.

Previous analyses have suggested that people are attracted to conspiracy theories because of a desire for safety and certainty in a turbulent world. But “what we found in this paper goes against that traditional explanation”, says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. “One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life.”


Here are some thoughts:

Researchers have developed an AI chatbot capable of effectively debunking conspiracy theories and influencing believers to reconsider their views. The study challenges prevailing notions about the intractability of conspiracy beliefs and suggests that well-presented facts and evidence can indeed change minds.

The custom-designed chatbot, based on OpenAI's GPT-4 Turbo, was trained to argue convincingly against various conspiracy theories. In conversations averaging 8 minutes, the chatbot provided detailed, tailored responses to participants' beliefs. The results were remarkable: participants' confidence in their chosen conspiracy theory decreased by an average of 21%, with 25% moving from confidence to uncertainty. These effects persisted in follow-up surveys conducted two months later.

This research has important implications for combating the spread of harmful conspiracy theories, which can have serious societal impacts. The study's success opens up potential applications for AI in real-world interventions against misinformation. However, the researchers acknowledge limitations, such as the use of paid survey respondents, and emphasize the need for further studies to refine the approach and ensure its effectiveness across different contexts and populations.

Saturday, September 28, 2024

Humanizing Chatbots Is Hard To Resist — But Why?

Madeline G. Reinecke
Practical Ethics
Originally posted 30 Aug 24

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more he became convinced: The model had become sentient and was being deprived of its rights as a Google employee. 

Though Google swiftly denied Lemoine’s claims, I’ve since wondered whether this anthropomorphic phenomenon — seeing a “mind in the machine” — might be a common occurrence in LLM users. In fact, in this post, I’ll argue that it’s bound to be common, and perhaps even irresistible, due to basic facts about human psychology. 

Emerging work suggests that a non-trivial number of people do attribute humanlike characteristics to LLM-based chatbots. This is to say they “anthropomorphize” them. In one study, 67% of participants attributed some degree of phenomenal consciousness to ChatGPT: saying, basically, there is “something it is like” to be ChatGPT. In a separate survey, researchers showed participants actual ChatGPT transcripts, explaining that they were generated by an LLM. Actually seeing the natural language “skills” of ChatGPT further increased participants’ tendency to anthropomorphize the model. These effects were especially pronounced for frequent LLM users.

Why does anthropomorphism of these technologies come so easily? Is it irresistible, as I’ve suggested, given features of human psychology?


Here are some thoughts:

The article explores the phenomenon of anthropomorphism in Large Language Models (LLMs), where users attribute human-like characteristics to AI systems. This tendency is rooted in human psychology, particularly in our inclination to over-detect agency and our association of communication with agency. Studies have shown that a significant number of people, especially frequent users, attribute human-like characteristics to LLMs, raising concerns about trust, misinformation, and the potential for users to internalize inaccurate information.

The article highlights two key cognitive mechanisms underlying anthropomorphism. Firstly, humans have a tendency to over-detect agency, which may have evolved as an adaptive mechanism to detect potential threats. This is exemplified in a classic psychology study where participants attributed human-like actions to shapes moving on a screen. Secondly, language is seen as a sign of agency, even in preverbal infants, which may explain why LLMs' command of natural language serves as a psychological signal of agency.

The author argues that AI developers have a key responsibility to design systems that mitigate anthropomorphism. This can be achieved through design choices such as using disclaimers or avoiding the use of first-personal pronouns. However, the author also acknowledges that these measures may not be sufficient to override the deep tendencies of the human mind. Therefore, a priority for future research should be to investigate whether good technology design can help us resist the pitfalls of LLM-oriented anthropomorphism.

Ultimately, anthropomorphism is a double-edged sword, making AI systems more relatable and engaging while also risking misinformation and mistrust. By understanding the cognitive mechanisms underlying anthropomorphism, we can develop strategies to mitigate its negative consequences. Future research directions should include investigating effective interventions, exploring the boundaries of anthropomorphism, and developing responsible AI design guidelines that account for anthropomorphism.

Friday, August 23, 2024

A Self-Righteous, Not a Virtuous, Circle: Proposing a New Framework for Studying Media Effects on Knowledge and Political Participation in a Social Media Environment

Lee, S., & Valenzuela, S. (2024).
Social Media + Society, 10(2).

Abstract

To explain the participatory effects of news exposure, communication scholars have long relied upon the “virtuous circle” framework of media use and civic participation. That is, news consumption makes people more knowledgeable, and trustful toward institutions and political processes, making them active and responsible citizens, which then leads them to engage in various political activities. In a social media environment, however, the applicability of the “virtuous circle” is increasingly dubious. A mounting body of empirical research indicates that news consumption via social media does not necessarily yield actual information gains. Instead, it often fosters a false perception of being well-informed and politically competent, thereby stimulating political engagement. Furthermore, selective information consumption and interaction within like-minded networks on social media frequently exacerbate animosity toward opposing political factions, which can serve as a catalyst for political involvement. In light of these findings, we propose replacing the “virtuous circle” framework for a “self-righteous” one. In this new model, social media news users develop a heightened sense of confidence in their knowledge, regardless of its accuracy, and consequently become more inclined to engage in politics by reinforcing the perception that the opposing side is inherently wrong and that achieving victory is imperative.

Here are some thoughts:

Political participation is widely recognized as a fundamental indicator of a healthy democracy, and the role of news media in fostering this participation has been extensively studied. Traditionally, exposure to news has been associated with increased political knowledge and participation, forming a "virtuous circle" where informed citizens engage more actively in democratic processes. However, recent changes in the political landscape, such as the rise of populism, misinformation, and deepening partisan divides, challenge the applicability of this framework. The study proposes a shift from the "virtuous circle" to a "self-righteous cycle" of news consumption and political participation, particularly in the context of social media.

This new model suggests that political participation driven by social media news consumption often stems from users feeling informed, despite being misinformed, and from increased animosity toward opposing political groups. The study highlights the role of partisan selective exposure on social media, which fosters political engagement through heightened emotions and animosity rather than through trust and informed understanding. This shift underscores the need for a revised theoretical model to better understand the contemporary media and political environment, emphasizing the importance of critical scrutiny and accurate information in fostering genuine political knowledge and participation.

Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.

Abstract

Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information are privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.
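
To make this feedback loop concrete, here is a toy Python simulation. The ranking score, weights, and engagement model are my own illustrative assumptions, not any real platform's algorithm or the authors' model.

# Toy model of the human-algorithm feedback loop: a feed that ranks posts by
# predicted engagement, combined with users who engage more with emotional content.
import random

posts = [
    {"id": "emotionally_charged_misinfo", "emotional_charge": 0.9, "engagements": 0},
    {"id": "neutral_news",                "emotional_charge": 0.2, "engagements": 0},
]

def predicted_engagement(post):
    # Hypothetical ranking score: past engagement plus a bonus for emotional charge.
    return post["engagements"] + 10 * post["emotional_charge"]

random.seed(0)
for _ in range(1000):                                # simulate 1,000 feed impressions
    top = max(posts, key=predicted_engagement)       # the algorithm surfaces the top-ranked post
    if random.random() < top["emotional_charge"]:    # users engage with emotional content more often
        top["engagements"] += 1                      # ...which raises its ranking score further

for post in posts:
    print(post["id"], post["engagements"])

Run under these assumptions, the emotionally charged post monopolizes the feed and accumulates nearly all of the engagement, while the neutral post is never surfaced, which is the amplification dynamic the summary describes.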

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 Nov 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Wednesday, December 13, 2023

Science and Ethics of “Curing” Misinformation

Freiling, I., Krause, N. M., & Scheufele, D. A.
AMA J Ethics. 2023;25(3):E228-237. 

Abstract

A growing chorus of academicians, public health officials, and other science communicators have warned of what they see as an ill-informed public making poor personal or electoral decisions. Misinformation is often seen as an urgent new problem, so some members of these communities have pushed for quick but untested solutions without carefully diagnosing ethical pitfalls of rushed interventions. This article argues that attempts to “cure” public opinion that are inconsistent with best available social science evidence not only leave the scientific community vulnerable to long-term reputational damage but also raise significant ethical questions. It also suggests strategies for communicating science and health information equitably, effectively, and ethically to audiences affected by it without undermining affected audiences’ agency over what to do with it.

My summary:

The authors explore the challenges and ethical considerations surrounding efforts to combat misinformation. They argue that the language of "curing" is problematic because it frames misinformation as a disease that can simply be eradicated, an approach that is overly simplistic and disregards the complex social and psychological factors that contribute to its spread.

The authors identify several ethical concerns with current approaches to combating misinformation, including:
  • The potential for censorship and suppression of legitimate dissent.
  • The undermining of public trust in science and expertise.
  • The creation of echo chambers and further polarization of public opinion.

Instead of trying to "cure" misinformation, the authors propose a more nuanced and ethical approach that focuses on promoting critical thinking, media literacy, and civic engagement. They also emphasize the importance of addressing the underlying social and psychological factors that contribute to the spread of misinformation, such as social isolation, distrust of authority, and a desire for simple explanations.

Wednesday, November 1, 2023

People believe misinformation is a threat because they assume others are gullible

Altay, S., & Acerbi, A. (2023).
New Media & Society, 0(0).

Abstract

Alarmist narratives about the flow of misinformation and its negative consequences have gained traction in recent years. While these fears are to some extent warranted, the scientific literature suggests that many of them are exaggerated. Why are people so worried about misinformation? In two pre-registered surveys conducted in the United Kingdom (Nstudy_1 = 300, Nstudy_2 = 300) and replicated in the United States (Nstudy_1 = 302, Nstudy_2 = 299), we investigated the psychological factors associated with perceived danger of misinformation and how it contributes to the popularity of alarmist narratives on misinformation. We find that the strongest, and most reliable, predictor of perceived danger of misinformation is the third-person effect (i.e. the perception that others are more vulnerable to misinformation than the self) and, in particular, the belief that “distant” others (as opposed to family and friends) are vulnerable to misinformation. The belief that societal problems have simple solutions and clear causes was consistently, but weakly, associated with perceived danger of online misinformation. Other factors, like negative attitudes toward new technologies and higher sensitivity to threats, were inconsistently, and weakly, associated with perceived danger of online misinformation. Finally, we found that participants who report being more worried about misinformation are more willing to like and share alarmist narratives on misinformation. Our findings suggest that fears about misinformation tap into our tendency to view other people as gullible.

My thoughts:

The authors conducted two pre-registered surveys in the United Kingdom, replicated in the United States. They found that people who believed that others were more gullible than themselves were also more likely to perceive misinformation as a threat. This relationship was independent of other factors such as people's political beliefs, media consumption habits, and trust in institutions.

The authors argue that this finding suggests that people's concerns about misinformation may be rooted in their own biases about the intelligence and critical thinking skills of others. They also suggest that this bias may make people more likely to share and spread misinformation themselves.

The authors conclude by calling for more research on the role of bias in people's perceptions of misinformation. They also suggest that interventions to reduce misinformation should address people's biases about the gullibility of others.

One implication of this research is that people who are concerned about misinformation should be mindful of their own biases. It is important to remember that everyone is vulnerable to misinformation, regardless of their intelligence or education level. We should all be critical of the information we encounter online and be careful about sharing things that we are not sure are true.

Sunday, October 15, 2023

Bullshit blind spots: the roles of miscalibration and information processing in bullshit detection

Shane Littrell & Jonathan A. Fugelsang
(2023) Thinking & Reasoning
DOI: 10.1080/13546783.2023.2189163

Abstract

The growing prevalence of misleading information (i.e., bullshit) in society carries with it an increased need to understand the processes underlying many people’s susceptibility to falling for it. Here we report two studies (N = 412) examining the associations between one’s ability to detect pseudo-profound bullshit, confidence in one’s bullshit detection abilities, and the metacognitive experience of evaluating potentially misleading information. We find that people with the lowest (highest) bullshit detection performance overestimate (underestimate) their detection abilities and overplace (underplace) those abilities when compared to others. Additionally, people reported using both intuitive and reflective thinking processes when evaluating misleading information. Taken together, these results show that both highly bullshit-receptive and highly bullshit-resistant people are largely unaware of the extent to which they can detect bullshit and that traditional miserly processing explanations of receptivity to misleading information may be insufficient to fully account for these effects.


Here's my summary:

The authors of the article argue that people have two main blind spots when it comes to detecting bullshit: miscalibration and information processing. Miscalibration is a mismatch between perceived and actual detection ability: the weakest bullshit detectors tend to overestimate their skill, while the strongest tend to underestimate it.

Information processing is the way that we process information in order to make judgments. The authors argue that we are more likely to be fooled by bullshit when we are not paying close attention or when we are processing information quickly.

The authors also discuss some strategies for overcoming these blind spots. One strategy is to be aware of our own biases and limitations. We should also be critical of the information that we consume and take the time to evaluate evidence carefully.

Overall, the article provides a helpful framework for understanding the challenges of bullshit detection. It also offers some practical advice for overcoming these challenges.

Here are some additional tips for detecting bullshit:
  • Be skeptical of claims that seem too good to be true.
  • Look for evidence to support the claims that are being made.
  • Be aware of the speaker or writer's motives.
  • Ask yourself if the claims are making sense and whether they are consistent with what you already know.
  • If you're not sure whether something is bullshit, it's better to err on the side of caution and be skeptical.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearn for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving former executive Yoel Roth, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns. Musk is not a business genius and does not understand how to promote a healthy social media site.

Wednesday, August 30, 2023

Not all skepticism is “healthy” skepticism: Theorizing accuracy- and identity-motivated skepticism toward social media misinformation

Li, J. (2023). 
New Media & Society, 0(0). 

Abstract

Fostering skepticism has been seen as key to addressing misinformation on social media. This article reveals that not all skepticism is “healthy” skepticism by theorizing, measuring, and testing the effects of two types of skepticism toward social media misinformation: accuracy- and identity-motivated skepticism. A two-wave panel survey experiment shows that when people’s skepticism toward social media misinformation is driven by accuracy motivations, they are less likely to believe in congruent misinformation later encountered. They also consume more mainstream media, which in turn reinforces accuracy-motivated skepticism. In contrast, when skepticism toward social media misinformation is driven by identity motivations, people not only fall for congruent misinformation later encountered, but also disregard platform interventions that flag a post as false. Moreover, they are more likely to see social media misinformation as favoring opponents and intentionally avoid news on social media, both of which form a vicious cycle of fueling more identity-motivated skepticism.

Discussion

I have made the case that it is important to distinguish between accuracy-motivated skepticism and identity-motivated skepticism. They are empirically distinguishable constructs that cast opposing effects on outcomes important for a well-functioning democracy. Across the board, accuracy-motivated skepticism produces normatively desirable outcomes. Holding a higher level of accuracy-motivated skepticism makes people less likely to believe in congruent misinformation they encounter later, offering hope that partisan motivated reasoning can be attenuated. Accuracy-motivated skepticism toward social media misinformation also has a mutually reinforcing relationship with consuming news from mainstream media, which can serve to verify information on social media and produce potential learning effects.

In contrast, not all skepticism is “healthy” skepticism. Holding a higher level of identity-motivated skepticism not only increases people’s susceptibility to congruent misinformation they encounter later, but also renders content flagging by social media platforms less effective. This is worrisome as calls for skepticism and platform content moderation have been a crucial part of recently proposed solutions to misinformation. Further, identity-motivated skepticism reinforces perceived bias of misinformation and intentional avoidance of news on social media. These can form a vicious cycle of close-mindedness and politicization of misinformation.

This article advances previous understanding of skepticism by showing that beyond the amount of questioning (the tipping point between skepticism and cynicism), the type of underlying motivation matters for whether skepticism helps people become more informed. By bringing motivated reasoning and media skepticism into the same theoretical space, this article helps us make sense of the contradictory evidence on the utility of media skepticism. Skepticism in general should not be assumed to be “healthy” for democracy. When driven by identity motivations, skepticism toward social media misinformation is counterproductive for political learning; only when skepticism toward social media is driven by the accuracy motivations does it inoculate people against favorable falsehoods and encourage consumption of credible alternatives.


Here are some additional thoughts on the research:
  • The distinction between accuracy-motivated skepticism and identity-motivated skepticism is a useful one. It helps to explain why some people are more likely to believe in misinformation than others.
  • The findings of the studies suggest that interventions that promote accuracy-motivated skepticism could be effective in reducing the spread of misinformation on social media.
  • It is important to note that the research was conducted in the United States. It is possible that the findings would be different in other countries.

Monday, August 28, 2023

'You can't bullshit a bullshitter' (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021).
British Journal of Social Psychology, 60(4), 1484–1505.

Abstract

Research into both receptivity to falling for bullshit and the propensity to produce it have recently emerged as active, independent areas of inquiry into the spread of misleading information. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here, we present three studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) positively predicts susceptibility to various types of misleading information and that this association is robust to individual differences in cognitive ability and analytic cognitive style.

Conclusion

Gaining a better understanding of the differing ways in which various types of misleading information are transmitted and received is becoming increasingly important in the information age (Kristansen & Kaussler, 2018). Indeed, an oft-repeated maxim in popular culture is, “you can’t bullshit a bullshitter.” While folk wisdom may assert that this is true, the present investigation suggests that the reality is a bit more complicated. Our primary aim was to examine the extent to which bullshitting frequency is associated with susceptibility to falling for bullshit. Overall, we found that persuasive bullshitters (but not evasive bullshitters) were more receptive to various types of bullshit and, in the case of pseudo-profound statements, even when controlling for factors related to intelligence and analytic thinking. These results enrich our understanding of the transmission and detection of certain types of misleading information, specifically the associations between the propensity to produce and the tendency to fall for bullshit and will help to inform future research in this growing area of scholarship.



Friday, August 4, 2023

Social Media and Morality

Van Bavel, J. J., Robertson, C. et al. (2023, June 6).

Abstract

Nearly five billion people around the world now use social media, and this number continues to grow. One of the primary goals of social media platforms is to capture and monetize human attention. One means by which individuals and groups can capture attention and drive engagement on these platforms is by sharing morally and emotionally evocative content. We review a growing body of research on the interrelationship of social media and morality–as well the consequences for individuals and society. Moral content often goes “viral” on social media, and social media makes moral behavior (such as punishment) less costly. Thus, social media often acts as an accelerant for existing moral dynamics – amplifying outrage, status seeking, and intergroup conflict, while also potentially amplifying more constructive facets of morality, such as social support, pro-sociality, and collective action. We discuss trends, heated debates, and future directions in this emerging literature.

From Discussions and Future Directions

Addressing the interplay between social media and morality 

There is a growing recognition among scholars and the public that social media has deleterious consequences for society and there is a growing appetite for greater transparency and some form of regulation of social media platforms (Rathje et al., 2023). To address the adverse consequences of social media, solutions at the system level are necessary (e.g., Chater & Loewenstein, 2022), but individual- or group-level solutions may be useful for creating behavioral change before system-level change is in place and for increasing public support for system-level solutions (Koppel et. al., 2023). In the following section, we discuss a range of solutions that address the adverse consequences of the interplay between social media and morality.

Regulation is one of the most heavily debated ways of mitigating the adverse features of social media. Regulating social media can be done both on platforms as well at the national or cross-national level, but always involves discussions about who should decide what should be allowed on which platforms (Kaye, 2019). Currently, there is relatively little editorial oversight with the content even on mainstream platforms, yet the connotations with censorship makes regulation inherently controversial. For instance, Americans believe that social media companies censor political viewpoints (Vogels et al., 2020) and believe it is hard to regulate social media because people cannot agree upon what should and should not be removed (PewResearch Center, 2019). Moreover, authoritarian states can suppress dissent through the regulation of speech on social media.

In general, people on the political left are supportive of regulating social media platforms (Kozyreva, 2023; Rasmussen, 2022), reflecting liberals’ general tendency to more supportive, and conservatives' tendency to more opposing, of regulatory policies (e.g. Grossman, 2015). In the context of content on social media, one explanation is that left-leaning people infer more harm from aggressive behaviors. In other words, they may perceive immoral behaviors on social media as more harmful for the victim, which in turn justifies regulation (Graham 2009; Crawford 2017; Walter 2019; Boch 2020). There are conflicting results, however, on whether people oppose regulating hate speech (Bilewicz et. al. 2017; Rasmussen 2023a) because they use hate to derogate minority and oppressed groups (Sidanius, Pratto, and Bobo 1996; Federico and Sidanius, 2002) or because of principled political preferences deriving from conservatism values (Grossman 2016; Grossman 2015; Sniderman & Carmines, 1997; Sniderman & Piazza, 1993; Sniderman, Piazza, Tetlock, & Kendrick, 1991). While sensitivity to harm contributes to making people on the political left more supportive of regulating social media, it is contested whether opposition from the political right derives from group-based dominance or principled opposition.

Click the link above to get to the research.

Here is a summary from me:
  • Social media can influence our moral judgments. Studies have shown that people are more likely to make moral judgments that align with the views of their social media friends and the content they consume on social media. For example, one study found that people who were exposed to pro-environmental content on social media were more likely to make moral judgments that favored environmental protection.
  • Social media can lead to moral disengagement. Moral disengagement is a psychological process that allows people to justify harmful or unethical behavior. Studies have shown that social media can contribute to moral disengagement by making it easier for people to distance themselves from the consequences of their actions. For example, one study found that people who were exposed to violent content on social media were more likely to engage in moral disengagement.
  • Social media can promote prosocial behavior. Prosocial behavior is behavior that is helpful or beneficial to others. Studies have shown that social media can promote prosocial behavior by connecting people with others who share their values and by providing opportunities for people to help others. For example, one study found that people who used social media to connect with others were more likely to volunteer their time to help others.
  • Social media can be used to spread misinformation and hate speech. Misinformation is false or misleading information that is spread intentionally or unintentionally. Hate speech is speech that attacks a person or group on the basis of attributes such as race, religion, or sexual orientation. Social media platforms have been used to spread misinformation and hate speech, which can have a negative impact on society.

Overall, the research on social media and morality suggests that social media can have both positive and negative effects on our moral judgments and behavior. It is important to be aware of the potential risks and benefits of social media and to use it in a way that promotes positive moral values.

Tuesday, August 1, 2023

When Did Medicine Become a Battleground for Everything?

Tara Haelle
Medscape.com
Originally posted 18 July 23

Like hundreds of other medical experts, Leana Wen, MD, an emergency physician and former Baltimore health commissioner, was an early and avid supporter of COVID vaccines and their ability to prevent severe disease, hospitalization, and death from SARS-CoV-2 infections.

When 51-year-old Scott Eli Harris, of Aubrey, Texas, heard of Wen's stance in July 2021, the self-described "5th generation US Army veteran and a sniper" sent Wen an electronic invective laden with racist language and very specific threats to shoot her.

Harris pled guilty to transmitting threats via interstate commerce last February and began serving 6 months in federal prison last fall, but his threats wouldn't be the last for Wen. Just 2 days after Harris was sentenced, charges were unsealed against another man in Massachusetts, who threatened that Wen would "end up in pieces" if she continued "pushing" her thoughts publicly.

Wen has plenty of company. In an August 2022 survey of emergency doctors conducted by the American College of Emergency Physicians, 85% of respondents said violence against them is increasing. One in four doctors said they're being assaulted by patients and their family and friends multiple times a week, compared to just 8% of doctors who said as much in 2018. Sixty-four percent of emergency physicians reported receiving verbal assaults and threats of violence; 40% reported being hit or slapped, and 26% were kicked.

This uptick of violence and threats against physicians didn't come out of nowhere; violence against healthcare workers has been gradually increasing over the past decade. Healthcare providers can attest to the hostility that particular topics have sparked for years: vaccines in pediatrics, abortion in ob-gyn, and gender-affirming care in endocrinology.

But the pandemic fueled the fire. While there have always been hot-button issues in medicine, the ire they arouse today is more intense than ever before. The proliferation of misinformation (often via social media) and the politicization of public health and medicine are at the center of the problem.

"The People Attacking Are Themselves Victims'

The misinformation problem first came to a head in one area of public health: vaccines. The pandemic accelerated antagonism in medicine ― thanks, in part, to decades of anti-vaccine activism.

The anti-vaccine movement, which has ebbed and flowed in the US and across the globe since the first vaccine, experienced a new wave in the early 2000s with the combination of concerns about thimerosal in vaccines and a now disproven link between autism and the MMR vaccine. But that movement grew. It picked up steam when activists gained political clout after a 2014 measles outbreak at Disneyland led California schools to tighten up policies regarding vaccinations for kids who enrolled. These stronger public school vaccination laws ran up against religious freedom arguments from anti-vaccine advocates.

Monday, July 24, 2023

How AI can distort human beliefs

Kidd, C., & Birhane, A. (2023, June 23).
Science, 380(6651), 1222-1223.
doi:10.1126/science.adi0248

Here is an excerpt:

Three core tenets of human psychology can help build a bridge of understanding about what is at stake when discussing regulation and policy options. These ideas in psychology can connect to machine learning but also those in political science, education, communication, and the other fields that are considering the impact of bias and misinformation on population-level beliefs.

People form stronger, longer-lasting beliefs when they receive information from agents that they judge to be confident and knowledgeable, starting in early childhood. For example, children learned better when they learned from an agent who asserted their knowledgeability in the domain as compared with one who did not (5). That very young children track agents’ knowledgeability and use it to inform their beliefs and exploratory behavior supports the theory that this ability reflects an evolved capacity central to our species’ knowledge development.

Although humans sometimes communicate false or biased information, the rate of human errors would be an inappropriate baseline for judging AI because of fundamental differences in the types of exchanges between generative AI and people versus people and people. For example, people regularly communicate uncertainty through phrases such as “I think,” response delays, corrections, and speech disfluencies. By contrast, generative models unilaterally generate confident, fluent responses with no uncertainty representations nor the ability to communicate their absence. This lack of uncertainty signals in generative models could cause greater distortion compared with human inputs.

Further, people assign agency and intentionality readily. In a classic study, people read intentionality into the movements of simple animated geometric shapes (6). Likewise, people commonly read intentionality— and humanlike intelligence or emergent sentience—into generative models even though these attributes are unsubstantiated (7). This readiness to perceive generative models as knowledgeable, intentional agents implies a readiness to adopt the information that they provide more rapidly and with greater certainty. This tendency may be further strengthened because models support multimodal interactions that allow users to ask models to perform actions like “see,” “draw,” and “speak” that are associated with intentional agents. The potential influence of models’ problematic outputs on human beliefs thus exceeds what is typically observed for the influence of other forms of algorithmic content suggestion such as search. These issues are exacerbated by financial and liability interests incentivizing companies to anthropomorphize generative models as intelligent, sentient, empathetic, or even childlike.


Here is a summary of solutions that can be used to address the problem of AI-induced belief distortion. These solutions include:

Transparency: AI models should be transparent about their biases and limitations. This will help people to understand the limitations of AI models and to be more critical of the information that they generate.

Education: People should be educated about the potential for AI models to distort beliefs. This will help people to be more aware of the risks of using AI models and to be more critical of the information that they generate.

Regulation: Governments could regulate the use of AI models to ensure that they are not used to spread misinformation or to reinforce existing biases.

Tuesday, July 11, 2023

Conspirituality: How New Age conspiracy theories threaten public health

D. Beres, M. Remski, & J. Walker
bigthink.com
Originally posted 17 June 23

Here is an excerpt:

Disaster capitalism and disaster spirituality rely, respectively, on an endless supply of items to commodify and minds to recruit. While both roar into high gear in times of widespread precarity and vulnerability, in disaster spirituality there is arguably more at stake on the supply side. Hedge fund managers can buy up distressed properties in post-Katrina New Orleans to gentrify and flip. They have cash on hand to pull from when opportunity strikes, whereas most spiritual figures have to use other means for acquisitions and recruitment during times of distress.

Most of the influencers operating in today’s conspirituality landscape stand outside of mainstream economies and institutional support. They’ve been developing fringe religious ideas and making money however they can, usually up against high customer turnover.

For the mega-rich disaster capitalist, a hurricane or civil war is a windfall. But for the skint disaster spiritualist, a public catastrophe like 9/11 or COVID-19 is a life raft. Many have no choice but to climb aboard and ride. Additionally, if your spiritual group has been claiming for years to have the answers to life’s most desperate problems, the disaster is an irresistible dare, a chance to make good on divine promises. If the spiritual group has been selling health ideologies or products they guarantee will ensure perfect health, how can they turn away from the opportunity presented by a pandemic?


Here is my summary with some extras:

The article argues that conspirituality is a growing problem that is threatening public health. Conspiritualists push the false beliefs that vaccines are harmful, that the COVID-19 pandemic is a hoax, and that natural immunity is the best way to protect oneself from disease. These beliefs can lead people to make decisions that put their health and the health of others at risk.

The article also argues that conspirituality spreads largely through social media platforms, where the accuracy of information is difficult to verify. This can lead people to believe false or misleading claims, with serious consequences for their health, while some individuals profit from spreading the disinformation.

The article concludes by calling for more research on conspirituality and its impact on public health. It also calls for public health professionals to be more aware of conspirituality and to develop strategies to address it.
  • Conspirituality is a term that combines "conspiracy" and "spirituality." It refers to the belief that certain anti-science ideas (such as alternative medicine, non-scientific interventions, and spiritual healing) are being suppressed by a powerful elite. Conspiritualists often believe that this elite is responsible for a wide range of problems, including the COVID-19 pandemic.
  • The term "conspirituality" was coined by sociologists Charlotte Ward and David Voas in 2011. They argued that conspirituality is a distinctive form of conspiracy theory that blends (1) New Age religious and spiritual beliefs in a coming paradigm shift in consciousness, in which we will all be awakened to a new reality, with (2) traditional conspiracy theories, in which an elite, powerful, and covert group of individuals is controlling or trying to control the social and political order.

Sunday, May 7, 2023

Stolen elections: How conspiracy beliefs during the 2020 American presidential elections changed over time

Wang, H., & Van Prooijen, J. (2022).
Applied Cognitive Psychology.
https://doi.org/10.1002/acp.3996

Abstract

Conspiracy beliefs have been studied mostly through cross-sectional designs. We conducted a five-wave longitudinal study (N = 376; two waves before and three waves after the 2020 American presidential elections) to examine whether the election results influenced specific conspiracy beliefs and conspiracy mentality, and whether effects differ between election winners (i.e., Biden voters) and losers (i.e., Trump voters) at the individual level. Results revealed that conspiracy mentality remained unchanged over the two months, providing initial evidence that it is indeed a relatively stable trait. Specific conspiracy beliefs (outgroup and ingroup conspiracy beliefs) did change over time, however. In terms of group-level change, outgroup conspiracy beliefs decreased over time for Biden voters but increased for Trump voters. Ingroup conspiracy beliefs decreased over time across all voters, although those of Trump voters decreased faster. These findings illuminate how specific conspiracy beliefs, but not conspiracy mentality, are influenced by an election event.

From the General Discussion

Most studies on conspiracy beliefs provide correlational evidence through cross-sectional designs (van Prooijen & Douglas, 2018). The present research took full advantage of the 2020 American presidential elections through a five-wave longitudinal design, enabling three complementary contributions. First, the results provide evidence that conspiracy mentality is a relatively stable individual difference trait (Bruder et al., 2013; Imhoff & Bruder, 2014): While the election did influence specific conspiracy beliefs (i.e., that the elections were rigged), it did not influence conspiracy mentality. Second, the results provide evidence for the notion that conspiracy beliefs are for election losers (Uscinski & Parent, 2014), as reflected in the finding that Biden voters' outgroup conspiracy beliefs decreased at the individual level, while Trump voters' did not. The group-level effects on changes in outgroup conspiracy beliefs also underscored the role of intergroup conflict in conspiracy theories (van Prooijen & Song, 2021). And third, the present research examined conspiracy theories about one's own political ingroup, and found that such ingroup conspiracy beliefs decreased over time.

The decrease over time for ingroup conspiracy beliefs occurred among both Biden and Trump voters. We speculate that, given its polarized nature and contested result, this election increased intergroup conflict between Biden and Trump voters. Such intergroup conflict may have increased feelings of ingroup loyalty within both voter groups (Druckman, 1994), therefore decreasing beliefs that members of one's own group were conspiring. Moreover, ingroup conspiracy beliefs were higher for Trump than Biden voters (particularly at the first measurement point). This difference might expand previous findings that Republicans are more susceptible to conspiracy cues than Democrats (Enders & Smallpage, 2019), by suggesting that these effects generalize to conspiracy cues coming from their own ingroup.

Conclusion

The 2020 American presidential elections yielded many conspiracy beliefs that the elections were rigged, and conspiracy beliefs generally have negative consequences for societies. One key challenge for scientists and policymakers is to establish how conspiracy theories develop over time. In this research, we conducted a longitudinal study to provide empirical insights into the temporal dynamics underlying conspiracy beliefs, in the setting of a polarized election. We conclude that specific conspiracy beliefs that the elections were rigged—but not conspiracy mentality—are malleable over time, depending on political affiliations and election results.
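
A purely illustrative aside, not taken from the paper: five-wave panel data of this kind are often analyzed with mixed-effects models that separate stable person-level differences from change over time. The short Python sketch below simulates hypothetical data and fits such a model with statsmodels; the variable names, group slopes, and sample sizes are assumptions for illustration only, not the authors' actual data or analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
# Hypothetical slopes: outgroup beliefs drift down for Biden voters, up for Trump voters.
for group, slope in [("Biden", -0.15), ("Trump", 0.10)]:
    for pid in range(100):
        baseline = rng.normal(3.0, 0.5)  # stable person-level starting point
        for wave in range(5):            # five measurement waves
            rows.append({
                "pid": f"{group}_{pid}",
                "voter_group": group,
                "wave": wave,
                "outgroup_belief": baseline + slope * wave + rng.normal(0, 0.3),
            })
df = pd.DataFrame(rows)

# Random intercept per participant; the wave-by-group interaction captures
# whether the two voter groups' trajectories diverge across the five waves.
model = smf.mixedlm("outgroup_belief ~ wave * voter_group", data=df, groups=df["pid"])
result = model.fit()
print(result.summary())

In a model like this, the wave-by-voter-group interaction is the quantity of interest: opposite-signed slopes for the two groups would correspond to the diverging outgroup-belief trajectories the authors report, while the stability of conspiracy mentality would show up as a near-zero wave effect if the same model were fit to that outcome.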