Helena Kudiabor
Nature.com
Originally posted 12 Sept 24
Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, by designing a chatbot that can debunk false information and get people to question their thinking.
In a study published in Science on 12 September, participants who spent a few minutes interacting with the chatbot, which provided detailed, tailored responses and arguments, experienced a shift in thinking that lasted for months. This result suggests that facts and evidence really can change people’s minds.
“This paper really challenged a lot of existing literature about us living in a post-truth society,” says Katherine FitzGerald, who researches conspiracy theories and misinformation at Queensland University of Technology in Brisbane, Australia.
Previous analyses have suggested that people are attracted to conspiracy theories because of a desire for safety and certainty in a turbulent world. But “what we found in this paper goes against that traditional explanation”, says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. “One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life.”
Here are some thoughts:
Researchers have developed an AI chatbot capable of effectively debunking conspiracy theories and persuading believers to reconsider their views. The study challenges prevailing notions about the intractability of conspiracy beliefs and suggests that well-presented facts and evidence can indeed change minds.
The custom-designed chatbot, based on OpenAI's GPT-4 Turbo, was trained to argue convincingly against various conspiracy theories. In conversations averaging 8 minutes, the chatbot provided detailed, tailored responses to participants' beliefs. The results were remarkable: participants' confidence in their chosen conspiracy theory decreased by an average of 21%, with 25% moving from confidence to uncertainty. These effects persisted in follow-up surveys conducted two months later.
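For readers curious how such an intervention might be wired up in practice, here is a minimal sketch using the OpenAI Python SDK. This is not the authors' implementation: the system prompt wording, the model name string, and the turn-by-turn conversation flow are all assumptions made purely for illustration.

```python
# Minimal sketch of a debunking chatbot loop using the OpenAI Python SDK (v1.x).
# NOT the study's code: the system prompt, model name, and conversation flow
# below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a persuasive but scrupulously factual assistant. The user will "
    "describe a conspiracy theory they believe and the reasons they find it "
    "convincing. Reply with specific, verifiable evidence that addresses their "
    "particular reasons, politely and without ridicule."
)

def debunking_chat(initial_belief: str, turns: int = 3) -> None:
    """Run a short back-and-forth in which the model argues against the stated belief."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": initial_belief},
    ]
    for _ in range(turns):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed model identifier; the study used GPT-4 Turbo
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(f"\nCHATBOT: {answer}\n")
        messages.append({"role": "assistant", "content": answer})
        follow_up = input("YOU: ")  # participant's rebuttal or follow-up question
        messages.append({"role": "user", "content": follow_up})

if __name__ == "__main__":
    debunking_chat("I believe the Moon landings were staged because ...")
```

The tailoring described in the study comes from feeding the participant's own stated belief and reasons into the conversation, so the model's counter-evidence targets those specific claims rather than the conspiracy theory in the abstract.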
This research has important implications for combating the spread of harmful conspiracy theories, which can have serious societal impacts. The study's success opens up potential applications for AI in real-world interventions against misinformation. However, the researchers acknowledge limitations, such as the use of paid survey respondents, and emphasize the need for further studies to refine the approach and ensure its effectiveness across different contexts and populations.