Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cognitive Biases.

Tuesday, October 19, 2021

Why Empathy Is Not a Reliable Source of Information in Moral Decision Making

Decety, J. (2021).
Current Directions in Psychological Science. 
https://doi.org/10.1177/09637214211031943

Abstract

Although empathy drives prosocial behaviors, it is not always a reliable source of information in moral decision making. In this essay, I integrate evolutionary theory, behavioral economics, psychology, and social neuroscience to demonstrate why and how empathy is unconsciously and rapidly modulated by various social signals and situational factors. This theoretical framework explains why decision making that relies solely on empathy is not ideal and can, at times, erode ethical values. This perspective has social and societal implications and can be used to reduce cognitive biases and guide moral decisions.

From the Conclusion

Empathy can encourage overvaluing some people and ignoring others, and privileging one over many. Reasoning is therefore essential to filter and evaluate emotional responses that guide moral decisions. Understanding the ultimate causes and proximate mechanisms of empathy allows characterization of the kinds of signals that are prioritized and identification of situational factors that exacerbate empathic failure. Together, this knowledge is useful at a theoretical level, and additionally provides practical information about how to reframe situations to activate alternative evolved systems in ways that promote normative moral conduct compatible with current societal aspirations. This conceptual framework advances current understanding of the role of empathy in moral decision making and may inform efforts to correct personal biases. Becoming aware of one’s biases is not the most effective way to manage and mitigate them, but empathy is not something that can be ignored. It has an adaptive biological function, after all.

Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Friedan
MedPageToday.com
Originally posted May 26, 2021

Here are two excerpts:

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.

(cut)

Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Sunday, November 1, 2020

Believing in Overcoming Cognitive Biases

T. S. Doherty & A. E. Carroll
AMA J Ethics. 2020;22(9):E773-778. 
doi: 10.1001/amajethics.2020.773.

Abstract

Like all humans, health professionals are subject to cognitive biases that can render diagnoses and treatment decisions vulnerable to error. Learning effective debiasing strategies and cultivating awareness of confirmation, anchoring, and outcomes biases and the affect heuristic, among others, and their effects on clinical decision making should be prioritized in all stages of education.

Here is an excerpt:

The practice of reflection reinforces behaviors that reduce bias in complex situations. A 2016 systematic review of cognitive intervention studies found that guided reflection interventions were associated with the most consistent success in improving diagnostic reasoning. A guided reflection intervention involves searching for and being open to alternative diagnoses and willingness to engage in thoughtful and effortful reasoning and reflection on one’s own conclusions, all with supportive feedback or challenge from a mentor.

The same review suggests that cognitive forcing strategies may also have some success in improving diagnostic outcomes. These strategies involve conscious consideration of alternative diagnoses other than those that come intuitively. One example involves reading radiographs in the emergency department. According to studies, a common pitfall among inexperienced clinicians in such a situation is to call off the search once a positive finding has been noticed, which often leads to other abnormalities (eg, second fractures) being overlooked. Thus, the forcing strategy in this situation would be to continue a search even after an initial fracture has been detected.

Sunday, October 25, 2020

The objectivity illusion and voter polarization in the 2016 presidential election

M. C. Schwalbe, G. L. Cohen, L. D. Ross
PNAS, September 2020, 117(35), 21218-21229.

Abstract

Two studies conducted during the 2016 presidential campaign examined the dynamics of the objectivity illusion, the belief that the views of “my side” are objective while the views of the opposing side are the product of bias. In the first, a three-stage longitudinal study spanning the presidential debates, supporters of the two candidates exhibited a large and generally symmetrical tendency to rate supporters of the candidate they personally favored as more influenced by appropriate (i.e., “normative”) considerations, and less influenced by various sources of bias than supporters of the opposing candidate. This study broke new ground by demonstrating that the degree to which partisans displayed the objectivity illusion predicted subsequent bias in their perception of debate performance and polarization in their political attitudes over time, as well as closed-mindedness and antipathy toward political adversaries. These associations, furthermore, remained significant even after controlling for baseline levels of partisanship. A second study conducted 2 d before the election showed similar perceptions of objectivity versus bias in ratings of blog authors favoring the candidate participants personally supported or opposed. These ratings were again associated with polarization and, additionally, with the willingness to characterize supporters of the opposing candidate as evil and likely to commit acts of terrorism. At a time of particular political division and distrust in America, these findings point to the exacerbating role played by the illusion of objectivity.

Significance

Political polarization increasingly threatens democratic institutions. The belief that “my side” sees the world objectively while the “other side” sees it through the lens of its biases contributes to this political polarization and accompanying animus and distrust. This conviction, known as the “objectivity illusion,” was strong and persistent among Trump and Clinton supporters in the weeks before the 2016 presidential election. We show that the objectivity illusion predicts subsequent bias and polarization, including heightened partisanship over the presidential debates. A follow-up study showed that both groups impugned the objectivity of a putative blog author supporting the opposition candidate and saw supporters of that opposing candidate as evil.

Monday, March 23, 2020

Changes in risk perception and protective behavior during the first week of the COVID-19 pandemic in the United States

T. Wise, T. Zbozinek, et al.
PsyArXiv
Originally posted March 19, 2020

Abstract

By mid-March 2020, the COVID-19 pandemic had spread to over 100 countries and all 50 states in the US. Government efforts to minimize the spread of disease emphasized behavioral interventions, including raising awareness of the disease and encouraging protective behaviors such as social distancing and hand washing, and seeking medical attention if experiencing symptoms. However, it is unclear to what extent individuals are aware of the risks associated with the disease and how they are altering their behavior, factors which could influence the spread of the virus to vulnerable populations. We characterized risk perception and engagement in preventative measures in 1,591 United States-based individuals over the first week of the pandemic (March 11th-16th, 2020) and examined the extent to which protective behaviors are predicted by individuals’ perception of risk. Over 5 days, subjects demonstrated growing awareness of the risk posed by the virus, and largely reported engaging in protective behaviors with increasing frequency. However, they underestimated their personal risk of infection relative to the average person in the country. We found that engagement in social distancing and hand washing was most strongly predicted by the perceived likelihood of personally being infected, rather than the likelihood of transmission or the severity of potential transmitted infections. However, substantial variability emerged among individuals, and using data-driven methods we found a subgroup of subjects who are largely disengaged, unaware, and not practicing protective behaviors. Our results have implications for our understanding of how risk perception and protective behaviors can facilitate early interventions during large-scale pandemics.

From the Discussion:

One explanation for our results is the optimism bias. This bias is associated with the belief that we are less likely to acquire a disease than others, and it has been shown across a variety of diseases, including lung cancer. Indeed, those who show the optimism bias are less likely to be vaccinated against disease. Recent evidence suggests that this may also be the case for COVID-19 and could result in a failure to engage in behaviors that prevent the spread of this highly contagious disease. Our results extend these findings by showing that behavior changes over the first week of the COVID-19 pandemic such that as individuals perceive an increase in personal risk they increasingly engage in risk-prevention behaviors. Notably, we observed rapid increases in risk perception over a 5-day period, indicating that public health messages spread through government and the media can be effective in raising awareness of the risk.

The research is here.

Saturday, October 19, 2019

Forensic Clinicians’ Understanding of Bias

Tess Neal, Nina MacLean, Robert D. Morgan,
and Daniel C. Murrie
Psychology, Public Policy, and Law, 
September 16, 2019, no pagination specified

Abstract:

Bias, or systematic influences that create errors in judgment, can affect psychological evaluations in ways that lead to erroneous diagnoses and opinions. Although these errors can have especially serious consequences in the criminal justice system, little research has addressed forensic psychologists’ awareness of well-known cognitive biases and debiasing strategies. We conducted a national survey with a sample of 120 randomly-selected licensed psychologists with forensic interests to examine a) their familiarity with and understanding of cognitive biases, b) their self-reported strategies to mitigate bias, and c) the relation of a and b to psychologists’ cognitive reflection abilities. Most psychologists reported familiarity with well-known biases and distinguished these from sham biases, and reported using research-identified strategies but not fictional/sham strategies. However, some psychologists reported little familiarity with actual biases, endorsed sham biases as real, failed to recognize effective bias mitigation strategies, and endorsed ineffective bias mitigation strategies. Furthermore, nearly everyone endorsed introspection (a strategy known to be ineffective) as an effective bias mitigation strategy. Cognitive reflection abilities were systematically related to error, such that stronger cognitive reflection was associated with less endorsement of sham biases.

Here is the conclusion:

These findings (along with Neal & Brodsky’s, 2016) suggest that forensic clinicians are in need of additional training not only to recognize biases but perhaps to begin to effectively mitigate harm from biases. For example, in predoctoral training (e.g., internship) and postdoctoral training (e.g., fellowship), didactic instruction could address bias, how to recognize it, and strategies for minimizing it. Additionally, supervisors could address identifying and reducing bias as a regular part of supervision (e.g., by including this as part of case conceptualization). However, further research is needed to determine the types of training and workflow strategies that best reduce bias. Future studies should focus on experimentally examining the presence of biases and ways to mitigate their effects in forensic evaluations.

The research is here.

Tuesday, September 17, 2019

Aiming For Moral Mediocrity

Eric Schwitzgebel
Res Philosophica, Vol 96 (3), July 2019.
DOI: 10.11612/resphil.1806

Abstract

Most people aim to be about as morally good as their peers—not especially better, not especially worse. We do not aim to be good, or non-bad, or to act permissibly rather than impermissibly, by fixed moral standards. Rather, we notice the typical behavior of our peers, then calibrate toward so-so. This is a somewhat bad way to be, but it’s not a terribly bad way to be. We are somewhat morally criticizable for having low moral ambitions. Typical arguments defending the moral acceptability of low moral ambitions—the So-What-If-I’m-Not-a-Saint Excuse, the Fairness Objection, the Happy Coincidence Defense, and the claim that you’re already in The-Most-You-Can-Do Sweet Spot—do not survive critical scrutiny.

Conclusion

Most of us do not aim to be morally good by absolute standards. Instead we aim to be about as morally good as our peers. Our peers are somewhat morally criticizable—not morally horrible, but morally mediocre. If we aim to approximately match their mediocrity, we are somewhat morally criticizable for having such low personal moral ambitions. It’s tempting to try to rationalize one’s mediocrity away by admitting merely that one is not a saint, or by appealing to the Fairness Objection or the Happy Coincidence Defense, or by flattering oneself that one is already in The-Most-You-Can-Do Sweet Spot—but these self-serving excuses don’t survive scrutiny.

Consider where you truly aim. Maybe moral goodness isn’t so important to you, as long as you’re not among the worst. If so, own your mediocrity.  Accept the moral criticism you deserve for your low moral ambitions, or change them.

Thursday, August 22, 2019

Repetition increases perceived truth equally for plausible and implausible statements

Lisa Fazio, David Rand, & Gordon Pennycook
PsyArXiv
Originally created February 28, 2019

Abstract

Repetition increases the likelihood that a statement will be judged as true. This illusory truth effect is well-established; however, it has been argued that repetition will not affect belief in unambiguous statements. When individuals are faced with obviously true or false statements, repetition should have no impact. We report a simulation study and a preregistered experiment that investigate this idea. Contrary to many intuitions, our results suggest that belief in all statements is increased by repetition. The observed illusory truth effect is largest for ambiguous items, but this can be explained by the psychometric properties of the task, rather than an underlying psychological mechanism that blocks the impact of repetition for implausible items. Our results indicate that the illusory truth effect is highly robust and occurs across all levels of plausibility. Therefore, even highly implausible statements will become more plausible with enough repetition.

The research is here.

The conclusion:

In conclusion, our findings are consistent with the hypothesis that repetition increases belief in all statements equally, regardless of their plausibility. However, there is an important difference between this internal mechanism (equal increase across plausibility) and the observable effect. The observable effect of repetition on truth ratings is greatest for items near the midpoint of perceived truth, and small or nonexistent for items at the extremes. While repetition effects are difficult to observe for very high and very low levels of perceived truth, our results suggest that repetition increases participants’ internal representation of truth equally for all statements. These findings have large implications for daily life where people are often repeatedly exposed to both plausible and implausible falsehoods. Even implausible falsehoods may slowly become more plausible with repetition.
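
The distinction between an equal internal boost and an unequal observable effect can be illustrated with a minimal simulation sketch. This is not the authors' actual simulation code; the logistic response model, the 1-6 rating scale, and the size of the repetition boost are all illustrative assumptions. The idea is simply that when an unbounded latent truth value is squashed onto a bounded scale, the same latent increase produces a large observed change near the midpoint and almost none at the extremes.

```python
import numpy as np

def rating(latent, low=1.0, high=6.0):
    """Squash an unbounded latent truth value onto a bounded rating scale.
    The logistic response model and the 1-6 scale are assumptions for illustration."""
    return low + (high - low) / (1.0 + np.exp(-latent))

# Latent plausibility of statements, from highly implausible to highly plausible.
latent = np.linspace(-4.0, 4.0, 9)
boost = 0.5  # assumed constant latent increase produced by one repetition

before = rating(latent)
after = rating(latent + boost)

# The identical latent boost yields the largest observed change near the scale
# midpoint and almost none at the endpoints (floor/ceiling compression).
for lat, b, a in zip(latent, before, after):
    print(f"latent {lat:+.1f}: rating {b:.2f} -> {a:.2f} (observed effect {a - b:+.3f})")
```

Under these assumptions, the printed effects peak for statements near the middle of the scale and shrink toward the endpoints, which is the pattern the authors attribute to the psychometric properties of the task rather than to any mechanism that blocks repetition effects for implausible items.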

Thursday, February 1, 2018

How to Counter the Circus of Pseudoscience

Lisa Pryor
The New York Times
Originally published January 5, 2018

Here are two excerpts:

In the face of such doubt, it is not surprising that some individuals, even those who are intelligent and well educated, are swept away by the breezy confidence of health gurus, who are full of passionate intensity while the qualified lack all conviction, to borrow from Yeats.

It is a cognitive bias known in psychology as the Dunning-Kruger Effect. In short, the less you know, the less able you are to recognize how little you know, so the less likely you are to recognize your errors and shortcomings. For the highly skilled, like trained scientists, the opposite is true: The more you know, the more likely you are to see how little you know. This is truly a cognitive bias for our time.

(cut)

Engaging is difficult when the alternative-health proponents are on such a different astral plane that it is a challenge even to find common language for a conversation, especially when they promote spurious concepts such as “pyrrole disease,” which they can speak about in great, false detail, drawing the well-informed physician, dietitian or scientist into a vortex of personal anecdote and ancient wisdom, with quips about big pharma thrown in for good measure.

The information is here.

Sunday, September 24, 2017

The Bush Torture Scandal Isn’t Over

Daniel Engber
Slate.com
Originally published September 5, 2017

In June, a little-known academic journal called Teaching of Psychology published an article about the American Psychological Association’s role in the U.S. government’s war on terror and the interrogation of military detainees. Mitchell Handelsman’s seven-page paper, called “A Teachable Ethics Scandal,” suggested that the seemingly cozy relationship between APA officials and the Department of Defense might be used to illustrate numerous psychological concepts for students, including obedience, groupthink, terror management theory, group influence, and motivation.

By mid-July, Teaching of Psychology had taken steps to retract the paper. The thinking that went into that decision reveals a disturbing under-covered coda to a scandal that, for a time, was front-page news. In July 2015, then–APA President Nadine Kaslow apologized for the organization’s involvement in Bush-era enhanced interrogations. “This bleak chapter in our history,” she said, speaking for a group with more than 100,000 members and a nine-figure budget, “occurred over a period of years and will not be resolved in a matter of months.” Two years later, the APA’s attempt to turn the page has devolved into a vicious internecine battle in which former association presidents have taken aim at one another. At issue is the question of who (if anyone) should be blamed for giving the Bush administration what’s been called a “green light” to torture detainees—and when the APA will ever truly get past this scandal.

The article is here.

Friday, April 28, 2017

How rational is our rationality?

Interview by Richard Marshall
3 AM Magazine
Originally posted March 18, 2017

Here is an excerpt:

As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones. One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth. The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.

The interview is here.

Monday, March 13, 2017

Why Facts Don't Change Our Minds

Elizabeth Kolbert
The New Yorker
Originally published February 27, 2017

Here is an excerpt:

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The article is here.