Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Confirmation Bias.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current Opinion in Psychology, 54, 101696.

Abstract

The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefit of listening for mitigating social division has become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can defuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which does little to help spread the word. This information can be very useful in couples and family therapy. Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.
By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Saturday, August 26, 2023

Can Confirmation Bias Improve Group Learning?

Gabriel, N. and O'Connor, C. (2022)
[Preprint]

Abstract

Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. How does the presence of confirmation bias influence learning and the development of consensus within a group? In this paper, we use network models to study this question. We find, perhaps surprisingly, that moderate confirmation bias often improves group learning. This is because confirmation bias leads the group to entertain a wider variety of theories for a longer time, and prevents them from prematurely settling on a suboptimal theory. There is a downside, however, which is that a stronger form of confirmation bias can cause persistent polarization, and hurt the knowledge producing capacity of the community. We discuss implications of these results for epistemic communities, including scientific ones.

Conclusion

We find that confirmation bias, in a more moderate form, improves the epistemic performance of agents in a networked community. This is perhaps surprising given that previous work mostly emphasizes the epistemic harms of confirmation bias. By decreasing the chances that a group pre-emptively settles on a promising theory or option, confirmation bias can improve the likelihood that the group chooses optimal options in the long run. In this, it can play a similar role to decreased network connectivity or stubbornness (Zollman, 2007, 2010; Wu, 2021). The downside is that more robust confirmation bias, where agents entirely ignore data that is too disconsonant with their current beliefs, can lead to polarization, and harm the epistemic success of a community. Our modeling results thus provide potential support for the arguments of Mercier & Sperber (2017) regarding the benefits of confirmation bias to a group, but also a caution. Too much confirmation bias does not provide such benefits.

There are several ongoing discussions in philosophy and the social sciences where these results are relevant. Mayo-Wilson et al. (2011) use network models to argue for the independence thesis—that rationality of individual agents and rationality of the groups they form sometimes come apart. I.e., individually rational agents may form groups which are not ideally rational, and rational groups may sometimes consist of individually irrational agents. Our results lend support to this claim. While there is a great deal of evidence suggesting that confirmation bias is not ideal for individual reasoners, our results suggest that it can nonetheless improve group reasoning under the right conditions.


The authors conclude that confirmation bias can have both positive and negative effects on group learning. The key is to find a moderate level of confirmation bias that allows the group to explore a variety of theories without becoming too polarized.

Here are some of the key findings of the paper:
  • Moderate confirmation bias can improve group learning by preventing the group from prematurely settling on a suboptimal theory.
  • Too much confirmation bias can lead to polarization and a decrease in the group's ability to learn.
  • The key to effective group learning is to find a moderate level of confirmation bias.
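For readers who want a concrete feel for how models of this kind work, here is a minimal sketch of a Zollman-style network bandit simulation with a crude confirmation-bias filter. To be clear, this is my own illustration, not the authors' actual model: the ring network, the Beta updating, the threshold rule for ignoring disconsonant data, and every parameter value are assumptions chosen only to make the trade-off easy to play with.

# A minimal sketch of a Zollman-style network bandit model with a crude
# confirmation-bias filter. Illustration only: the network, the updating rule,
# the threshold filter, and all parameter values are assumptions, not the
# authors' actual model.
import random

N_AGENTS = 10              # agents arranged on a ring (low-connectivity network)
N_TRIALS = 10              # experiments each agent runs per round
P_OLD, P_NEW = 0.50, 0.55  # success rates of the familiar vs. the novel theory
ROUNDS = 2000

def credence(belief):
    """Mean of a Beta(a, b) belief about the novel theory's success rate."""
    a, b = belief
    return a / (a + b)

def run(bias_threshold):
    """bias_threshold: data whose observed success rate differs from an agent's
    current credence by more than this amount is ignored by that agent.
    1.0 = no confirmation bias; values near 0 = very strong bias."""
    beliefs = [[random.uniform(1, 4), random.uniform(1, 4)] for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        creds = [credence(b) for b in beliefs]
        # agents who currently favor the novel theory test it and share results
        results = []
        for i in range(N_AGENTS):
            if creds[i] > P_OLD:
                successes = sum(random.random() < P_NEW for _ in range(N_TRIALS))
                results.append((i, successes))
        # each agent updates on results from itself and its two ring neighbors,
        # unless the data are too disconsonant with its current belief
        for i in range(N_AGENTS):
            neighbors = {i, (i - 1) % N_AGENTS, (i + 1) % N_AGENTS}
            for j, successes in results:
                if j not in neighbors:
                    continue
                if abs(creds[i] - successes / N_TRIALS) > bias_threshold:
                    continue  # confirmation bias: disregard disconsonant data
                beliefs[i][0] += successes
                beliefs[i][1] += N_TRIALS - successes
    # fraction of agents who end up endorsing the objectively better novel theory
    return sum(credence(b) > P_OLD for b in beliefs) / N_AGENTS

if __name__ == "__main__":
    for threshold in (1.0, 0.3, 0.05):  # none, moderate, strong confirmation bias
        print(f"bias_threshold={threshold}: share endorsing better theory = {run(threshold):.2f}")

Sweeping bias_threshold from 1.0 (no filtering) toward 0 is one way to explore the trade-off described above: mild filtering can keep parts of the network experimenting with the novel theory long enough to learn that it is better, while aggressive filtering leaves agents stuck near their starting beliefs, loosely mirroring the polarization worry.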

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Conclusion

There have been many prior attempts of synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments or implicitly made similar assumptions to the ones outlined here and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Sunday, November 1, 2020

Believing in Overcoming Cognitive Biases

T. S. Doherty & A. E. Carroll
AMA J Ethics. 2020;22(9):E773-778. 
doi: 10.1001/amajethics.2020.773.

Abstract

Like all humans, health professionals are subject to cognitive biases that can render diagnoses and treatment decisions vulnerable to error. Learning effective debiasing strategies and cultivating awareness of confirmation, anchoring, and outcomes biases and the affect heuristic, among others, and their effects on clinical decision making should be prioritized in all stages of education.

Here is an excerpt:

The practice of reflection reinforces behaviors that reduce bias in complex situations. A 2016 systematic review of cognitive intervention studies found that guided reflection interventions were associated with the most consistent success in improving diagnostic reasoning. A guided reflection intervention involves searching for and being open to alternative diagnoses and willingness to engage in thoughtful and effortful reasoning and reflection on one’s own conclusions, all with supportive feedback or challenge from a mentor.

The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes. These strategies involve conscious consideration of alternative diagnoses other than those that come intuitively. One example involves reading radiographs in the emergency department. According to studies, a common pitfall among inexperienced clinicians in such a situation is to call off the search once a positive finding has been noticed, which often leads to other abnormalities (eg, second fractures) being overlooked. Thus, the forcing strategy in this situation would be to continue a search even after an initial fracture has been detected.

Sunday, August 25, 2019

Chances are, you’re not as open-minded as you think

David Epstein
The Washington Post
Originally published July 20, 2019

Here is an excerpt:

The lesson is clear enough: Most of us are probably not as open-minded as we think. That is unfortunate and something we can change. A hallmark of teams that make good predictions about the world around them is something psychologists call “active open mindedness.” People who exhibit this trait do something, alone or together, as a matter of routine that rarely occurs to most of us: They imagine their own views as hypotheses in need of testing.

They aim not to bring people around to their perspective but to encourage others to help them disprove what they already believe. This is not instinctive behavior. Most of us, armed with a Web browser, do not start most days by searching for why we are wrong.

As our divisive politics daily feed our tilt toward confirmation bias, it is worth asking if this instinct to think we know enough is hardening into a habit of poor judgment. Consider that, in a study during the run-up to the Brexit vote, a small majority of both Remainers and Brexiters could correctly interpret made-up statistics about the efficacy of a rash-curing skin cream. But when the same voters were given similarly false data presented as if it indicated that immigration either increased or decreased crime, hordes of Brits suddenly became innumerate and misinterpreted statistics that disagreed with their beliefs.

The info is here.

Thursday, May 23, 2019

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Saturday, September 22, 2018

The Business Case for Curiosity

Francesca Gino
Harvard Business Review
Originally published in the September-October issue

Here are two excerpts:

The Benefits of Curiosity

New research reveals a wide range of benefits for organizations, leaders, and employees.

Fewer decision-making errors.

In my research I found that when our curiosity is triggered, we are less likely to fall prey to confirmation bias (looking for information that supports our beliefs rather than for evidence suggesting we are wrong) and to stereotyping people (making broad judgments, such as that women or minorities don’t make good leaders). Curiosity has these positive effects because it leads us to generate alternatives.

(cut)

It’s natural to concentrate on results, especially in the face of tough challenges. But focusing on learning is generally more beneficial to us and our organizations, as some landmark studies show. For example, when U.S. Air Force personnel were given a demanding goal for the number of planes to be landed in a set time frame, their performance decreased. Similarly, in a study led by Southern Methodist University’s Don VandeWalle, sales professionals who were naturally focused on performance goals, such as meeting their targets and being seen by colleagues as good at their jobs, did worse during a promotion of a product (a piece of medical equipment priced at about $5,400) than reps who were naturally focused on learning goals, such as exploring how to be a better salesperson. That cost them, because the company awarded a bonus of $300 for each unit sold.

A body of research demonstrates that framing work around learning goals (developing competence, acquiring skills, mastering new situations, and so on) rather than performance goals (hitting targets, proving our competence, impressing others) boosts motivation. And when motivated by learning goals, we acquire more-diverse skills, do better at work, get higher grades in college, do better on problem-solving tasks, and receive higher ratings after training. Unfortunately, organizations often prioritize performance goals.

The information is here.

Monday, July 30, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Giovanni Luca Ciampaglia & Filippo Menczer
Scientific American
Originally published June 21, 2018

Here is an excerpt:

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.

The information is here.

Friday, March 16, 2018

How Russia Hacked the American Mind

Maya Kosoff
Vanity Fair
Originally posted February 19, 2018

Here is an excerpt:

Social media certainly facilitated the Russian campaign. As part of Facebook’s charm offensive, Zuckerberg has since offered tangible fixes, including a plan to verify election advertisements and an effort to emphasize friends, family, and Groups. But Americans’ lack of news literacy transcends Facebook, and was created in part by the Internet itself. As news has shifted from print and television outlets to digital versions of those same outlets to information shared on social-media platforms (still the primary source of news for an overwhelming majority of Americans), audiences failed to keep pace; they never learned to vet the news they consume online.

It’s also a problem we’ve created ourselves. As we’ve become increasingly polarized, news outlets have correspondingly adjusted to cater to our tastes, resulting in a media landscape that’s split into separate, non-overlapping universes of conflicting facts—a world in which Fox News and CNN spout theories about the school shooting in Parkland, Florida, that are diametrically opposed. It was this atmosphere that made the U.S. fertile ground for foreign manipulation. As political scientists Jay J. Van Bavel and Andrea Pereira noted in a recent paper, “Partisanship can even alter memory, implicit evaluation, and even perceptual judgment,” fueling a “human attraction to fake and untrustworthy news” that “poses a serious problem for healthy democratic functioning.”

The article is here.

Friday, December 15, 2017

The Vortex

Oliver Burkeman
The Guardian
Originally posted November 30, 2017

Here is an excerpt:

I realise you don’t need me to tell you that something has gone badly wrong with how we discuss controversial topics online. Fake news is rampant; facts don’t seem to change the minds of those in thrall to falsehood; confirmation bias drives people to seek out only the information that bolsters their views, while dismissing whatever challenges them. (In the final three months of the 2016 presidential election campaign, according to one analysis by Buzzfeed, the top 20 fake stories were shared more online than the top 20 real ones: to a terrifying extent, news is now more fake than not.) Yet, to be honest, I’d always assumed that the problem rested solely on the shoulders of other, stupider, nastier people. If you’re not the kind of person who makes death threats, or uses misogynistic slurs, or thinks Hillary Clinton’s campaign manager ran a child sex ring from a Washington pizzeria – if you’re a basically decent and undeluded sort, in other words – it’s easy to assume you’re doing nothing wrong.

But this, I am reluctantly beginning to understand, is self-flattery. One important feature of being trapped in the Vortex, it turns out, is the way it looks like everyone else is trapped in the Vortex, enslaved by their anger and delusions, obsessed with point-scoring and insult-hurling instead of with establishing the facts – whereas you’re just speaking truth to power. Yet in reality, when it comes to the divisive, depressing, energy-sapping nightmare that is modern online political debate, it’s like the old line about road congestion: you’re not “stuck in traffic”. You are the traffic.

The article is here.

Tuesday, December 5, 2017

Liberals and conservatives are similarly motivated to avoid exposure to one another's opinions

Jeremy A. Frimer, Linda J. Skitka, Matt Motyl
Journal of Experimental Social Psychology
Volume 72, September 2017, Pages 1-12

Abstract

Ideologically committed people are similarly motivated to avoid ideologically crosscutting information. Although some previous research has found that political conservatives may be more prone to selective exposure than liberals are, we find similar selective exposure motives on the political left and right across a variety of issues. The majority of people on both sides of the same-sex marriage debate willingly gave up a chance to win money to avoid hearing from the other side (Study 1). When thinking back to the 2012 U.S. Presidential election (Study 2), ahead to upcoming elections in the U.S. and Canada (Study 3), and about a range of other Culture War issues (Study 4), liberals and conservatives reported similar aversion toward learning about the views of their ideological opponents. Their lack of interest was not due to already being informed about the other side or attributable to election fatigue. Rather, people on both sides indicated that they anticipated that hearing from the other side would induce cognitive dissonance (e.g., require effort, cause frustration) and undermine a sense of shared reality with the person expressing disparate views (e.g., damage the relationship; Study 5). A high-powered meta-analysis of our data sets (N = 2417) did not detect a difference in the intensity of liberals' (d = 0.63) and conservatives' (d = 0.58) desires to remain in their respective ideological bubbles.

The research is here.

Tuesday, October 3, 2017

Facts Don’t Change People’s Minds. Here’s What Does

Ozan Varol
Helio
Originally posted September 6, 2017

Here is an excerpt:

The mind doesn’t follow the facts. Facts, as John Adams put it, are stubborn things, but our minds are even more stubborn. Doubt isn’t always resolved in the face of facts for even the most enlightened among us, however credible and convincing those facts might be.

As a result of the well-documented confirmation bias, we tend to undervalue evidence that contradicts our beliefs and overvalue evidence that confirms them. We filter out inconvenient truths and arguments on the opposing side. As a result, our opinions solidify, and it becomes increasingly harder to disrupt established patterns of thinking.

We believe in alternative facts if they support our pre-existing beliefs. Aggressively mediocre corporate executives remain in office because we interpret the evidence to confirm the accuracy of our initial hiring decision. Doctors continue to preach the ills of dietary fat despite emerging research to the contrary.

If you have any doubts about the power of the confirmation bias, think back to the last time you Googled a question. Did you meticulously read each link to get a broad objective picture? Or did you simply skim through the links looking for the page that confirms what you already believed was true? And let’s face it, you’ll always find that page, especially if you’re willing to click through to Page 12 on the Google search results.

The article is here.

Tuesday, August 1, 2017

Morality isn’t a compass — it’s a calculator

DB Krupp
The Conversation
Originally published July 9, 2017

Here is the conclusion:

Unfortunately, the beliefs that straddle moral fault lines are largely impervious to empirical critique. We simply embrace the evidence that supports our cause and deny the evidence that doesn’t. If strategic thinking motivates belief, and belief motivates reason, then we may be wasting our time trying to persuade the opposition to change their minds.

Instead, we should strive to change the costs and benefits that provoke discord in the first place. Many disagreements are the result of worlds colliding — people with different backgrounds making different assessments of the same situation. By closing the gap between their experiences and by lowering the stakes, we can bring them closer to consensus. This may mean reducing inequality, improving access to health care or increasing contact between unfamiliar groups.

We have little reason to see ourselves as unbiased sources of moral righteousness, but we probably will anyway. The least we can do is minimize that bias a bit.

The article is here.

Wednesday, June 14, 2017

You’re Not Going to Change Your Mind

Ben Tappin, Leslie Van Der Leer and Ryan McKay
The New York Times
Originally published May 28, 2017

A troubling feature of political disagreement in the United States today is that many issues on which liberals and conservatives hold divergent views are questions not of value but of fact. Is human activity responsible for global warming? Do guns make society safer? Is immigration harmful to the economy?

Though undoubtedly complicated, these questions turn on empirical evidence. As new information emerges, we ought to move, however fitfully, toward consensus.

But we don’t. Unfortunately, people do not always revise their beliefs in light of new information. On the contrary, they often stubbornly maintain their views. Certain disagreements stay entrenched and polarized.

Why? A common explanation is confirmation bias. This is the psychological tendency to favor information that confirms our beliefs and to disfavor information that counters them — a tendency manifested in the echo chambers and “filter bubbles” of the online world.

If this explanation is right, then there is a relatively straightforward solution to political polarization: We need to consciously expose ourselves to evidence that challenges our beliefs to compensate for our inclination to discount it.

But what if confirmation bias isn’t the only culprit?

The article is here.

Tuesday, June 13, 2017

Why It’s So Hard to Admit You’re Wrong

Kristin Wong
The New York Times
Originally published May 22, 2017

Here are two excerpts:

Mistakes can be hard to digest, so sometimes we double down rather than face them. Our confirmation bias kicks in, causing us to seek out evidence to prove what we already believe. The car you cut off has a small dent in its bumper, which obviously means that it is the other driver’s fault.

Psychologists call this cognitive dissonance — the stress we experience when we hold two contradictory thoughts, beliefs, opinions or attitudes.

(cut)

“Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true,” said Carol Tavris, a co-author of the book “Mistakes Were Made (But Not by Me).”

She added that cognitive dissonance threatened our sense of self.

“To reduce dissonance, we have to modify the self-concept or accept the evidence,” Ms. Tavris said. “Guess which route people prefer?”

Or maybe you cope by justifying your mistake. The psychologist Leon Festinger suggested the theory of cognitive dissonance in the 1950s when he studied a small religious group that believed a flying saucer would rescue its members from an apocalypse on Dec. 20, 1954. Publishing his findings in the book “When Prophecy Fails,” he wrote that the group doubled down on its belief and said God had simply decided to spare the members, coping with their own cognitive dissonance by clinging to a justification.

“Dissonance is uncomfortable and we are motivated to reduce it,” Ms. Tavris said.

When we apologize for being wrong, we have to accept this dissonance, and that is unpleasant. On the other hand, research has shown that it can feel good to stick to our guns.

Monday, March 13, 2017

Why Facts Don't Change Our Minds

Elizabeth Kolbert
The New Yorker
Originally published February 27, 2017

Here is an excerpt:

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The article is here.

Thursday, February 9, 2017

Financial ties between researchers and drug industry linked to positive trial results

British Medical Journal
Press Release
Originally released January 17, 2017

Here is an excerpt:

More than half (58%) of principal investigators had financial ties to the drug industry - including travel expenses, honorariums, payment for advisory work, or stock ownership.

The results show that trials authored by principal investigators with financial ties to drug manufacturers were more likely than other trials to report favourable results.

Even after accounting for factors that may have affected the results, such as funding source and sample size, financial ties were still significantly associated with positive study outcomes.

The authors point to possible mechanisms linking industry funding, financial ties, and trial results such as bias by selective outcome reporting, lack of publication, and inappropriate analyses.

The press release is here.

Tuesday, July 5, 2016

How scientists fool themselves – and how they can stop

Regina Nuzzo
Nature 526, 182–185 (08 October 2015)
doi:10.1038/526182a

Here is an excerpt:

This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today's environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept 'reasonable' outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies.

The article is here.

Editor's note: These biases also apply to clinicians who use research or their own theories about how and why psychotherapy works.

Thursday, April 21, 2016

The Science of Choosing Wisely — Overcoming the Therapeutic Illusion

David Casarett
New England Journal of Medicine 2016; 374:1203-1205
March 31, 2016
DOI: 10.1056/NEJMp1516803

Here are two excerpts:

The success of such efforts, however, may be limited by the tendency of human beings to overestimate the effects of their actions. Psychologists call this phenomenon, which is based on our tendency to infer causality where none exists, the “illusion of control.” In medicine, it may be called the “therapeutic illusion” (a label first applied in 1978 to “the unjustified enthusiasm for treatment on the part of both patients and doctors”). When physicians believe that their actions or tools are more effective than they actually are, the results can be unnecessary and costly care. Therefore, I think that efforts to promote more rational decision making will need to address this illusion directly.

(cut)

The outcome of virtually all medical decisions is at least partly outside the physician’s control, and random chance can encourage physicians to embrace mistaken beliefs about causality. For instance, joint lavage is overused for relief of osteoarthritis-related knee pain, despite a recommendation against it from the American Academy of Orthopaedic Surgeons. Knee pain tends to wax and wane, so many patients report improvement in symptoms after lavage, and it’s natural to conclude that the intervention was effective.

The article is here.

Thursday, June 25, 2015

Why do humans reason? Arguments for an argumentative theory

By Hugo Mercier and Dan Sperber
Behavioral and Brain Sciences (2011) 34, 57–111
doi:10.1017/S0140525X10000968

Abstract:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing, but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

The entire article is here.