Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, October 14, 2024

This AI chatbot got conspiracy theorists to question their convictions

Helena Kudiabor
Nature.com
Originally posted 12 September 2024

Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, by designing a chatbot that can debunk false information and get people to question their thinking.

In a study published in Science on 12 September, participants spent a few minutes interacting with the chatbot, which provided detailed responses and arguments, and experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people’s minds.

“This paper really challenged a lot of existing literature about us living in a post-truth society,” says Katherine FitzGerald, who researches conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia.

Previous analyses have suggested that people are attracted to conspiracy theories because of a desire for safety and certainty in a turbulent world. But “what we found in this paper goes against that traditional explanation”, says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. “One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life.”


Here are some thoughts:

Researchers have developed an AI chatbot capable of effectively debunking conspiracy theories and influencing believers to reconsider their views. The study challenges prevailing notions about the intractability of conspiracy beliefs and suggests that well-presented facts and evidence can indeed change minds.

The custom-designed chatbot, based on OpenAI's GPT-4 Turbo, was trained to argue convincingly against various conspiracy theories. In conversations averaging 8 minutes, the chatbot provided detailed, tailored responses to participants' beliefs. The results were remarkable: participants' confidence in their chosen conspiracy theory decreased by an average of 21%, with 25% moving from confidence to uncertainty. These effects persisted in follow-up surveys conducted two months later.

This research has important implications for combating the spread of harmful conspiracy theories, which can have serious societal impacts. The study's success opens up potential applications for AI in real-world interventions against misinformation. However, the researchers acknowledge limitations, such as the use of paid survey respondents, and emphasize the need for further studies to refine the approach and ensure its effectiveness across different contexts and populations.

Sunday, March 3, 2019

When and why people think beliefs are “debunked” by scientific explanations for their origins

Dillon Plunkett, Lara Buchak, and Tania Lombrozo

Abstract

How do scientific explanations for beliefs affect people’s confidence in those beliefs? For example, do people think neuroscientific explanations for religious belief support or challenge belief in God? In five experiments, we find that the effects of scientific explanations for belief depend on whether the explanations imply normal or abnormal functioning (e.g., if a neural mechanism is doing what it evolved to do). Experiments 1 and 2 find that people think brain-based explanations for religious, moral, and scientific beliefs corroborate those beliefs when the explanations invoke a normally functioning mechanism, but not an abnormally functioning mechanism. Experiment 3 demonstrates comparable effects for other kinds of scientific explanations (e.g., genetic explanations). Experiment 4 confirms that these effects derive from (im)proper functioning, not statistical (in)frequency. Experiment 5 suggests that these effects interact with people’s prior beliefs to produce motivated judgments: people are more skeptical of scientific explanations for their own beliefs if the explanations appeal to abnormal functioning, but they are less skeptical of scientific explanations of opposing beliefs if the explanations appeal to abnormal functioning. These findings suggest that people treat “normality” as a proxy for epistemic reliability and reveal that folk epistemic commitments shape attitudes toward scientific explanations.

The research is here.

Wednesday, August 27, 2014

Process Debunking and Ethics

By Shaun Nichols
Ethics, Vol. 124, No. 4 (July 2014), pp. 727-749
Published by: The University of Chicago Press
Article DOI: 10.1086/675877

The rise of empirical moral psychology has been accompanied by the return of debunking arguments in ethics. This is no surprise, since debunking arguments often depend on empirical premises about the beliefs under consideration. As we learn more about our moral psychology, we put ourselves in a position to develop more empirically informed debunking arguments.

In this essay, I will start by distinguishing different forms of debunking arguments, and I will adopt a particular, psychologically oriented, approach to debunking. On the type of debunking argument that I will promote, one attempts to undercut the justificatory status of a person’s belief by showing that the belief was formed by an epistemically defective psychological process. There are natural ways to develop such debunking arguments in metaethics, I’ll contend; but in normative ethics, debunking arguments face greater obstacles.

The entire article is here, behind a paywall. Hopefully, your university library can obtain it for you.