Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, June 17, 2021

Biased Benevolence: The Perceived Morality of Effective Altruism Across Social Distance

Law, K. F., Campbell, D., & Gaesser, B. 
(2019, July 11). 
https://doi.org/10.31234/osf.io/qzx67

Abstract

Is altruism always morally good, or is the morality of altruism fundamentally shaped by the social opportunity costs that often accompany helping decisions? Across five studies, we reveal that, although helping both socially closer and socially distant others is generally perceived favorably (Study 1), in cases of realistic tradeoffs in social distance for gains in welfare, where helping socially distant others necessitates not helping socially closer others with the same resources, helping is deemed less morally acceptable (Studies 2-5). Making helping decisions at a cost to socially closer others also negatively affects judgments of relationship quality (Study 3) and, in turn, decreases cooperative behavior with the helper (Study 4). Ruling out an alternative explanation of physical distance accounting for the effects in Studies 1-4, social distance continued to impact moral acceptability when physical distance across social targets was matched (Study 5). These findings reveal that attempts to decrease biases in helping may have previously unconsidered consequences for moral judgments, relationships, and cooperation.

General Discussion

When judging the morality of altruistic tradeoffs in social distance for gains in welfare advocated by the philosophy and social movement of effective altruism, we find that the perceived morality of altruism is graded by social distance. People consistently view socially distant altruism as increasingly less morally acceptable as the person not receiving help becomes socially closer to the agent helping. This suggests that whereas altruism is generally evaluated as morally praiseworthy, the moral calculus of altruism flexibly shifts according to the social distance between the person offering aid and the people in need. Such findings highlight the empirical value and theoretical importance of investigating moral judgments situated in real-world social contexts.

Wednesday, June 16, 2021

Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning

Pennycook, G. & Rand, D. G.
Cognition, Volume 188, July 2019, Pages 39-50

Abstract

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

Highlights

• Participants rated perceived accuracy of fake and real news headlines.

• Analytic thinking was associated with ability to discern between fake and real.

• We found no evidence that analytic thinking exacerbates motivated reasoning.

• Falling for fake news is more a result of a lack of thinking than partisanship.

Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Friedan
MedPageToday.com
Originally posted 26 May 21

Here are two excerpts:

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.

(cut)

Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Monday, June 14, 2021

Bias Is a Big Problem. But So Is ‘Noise.’

Daniel Kahneman, Olivier Sibony, & Cass R. Sunstein
The New York Times
Originally posted 15 May 21

Here is an excerpt:

There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). 

Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. 

We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). 
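The statistical logic behind this advice can be illustrated with a short simulation. This sketch is not from the article; it is a toy model (all values and names are illustrative) showing why averaging independent judgments reduces noise: the spread of the averaged judgment around the true value shrinks roughly in proportion to one over the square root of the number of judges.

```python
import random
import statistics

def judgment(true_value=10.0, noise_sd=2.0):
    """One judge's assessment: the true value plus independent random noise."""
    return random.gauss(true_value, noise_sd)

def averaged_judgment(n_judges, true_value=10.0, noise_sd=2.0):
    """Average n independent judgments, as in forecast aggregation."""
    return statistics.mean(judgment(true_value, noise_sd) for _ in range(n_judges))

random.seed(0)
trials = 5000
solo = [averaged_judgment(1) for _ in range(trials)]    # single judges
panel = [averaged_judgment(16) for _ in range(trials)]  # panels of 16

# Noise (spread around the true value) shrinks roughly by 1/sqrt(n):
# with 16 judges, the standard deviation falls to about a quarter.
print(statistics.stdev(solo))   # roughly 2.0
print(statistics.stdev(panel))  # roughly 0.5
```

Note that averaging reduces only noise, not bias: if every judge shares the same systematic error, the panel average inherits it.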

Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. 

As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. 

Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. 

Organizations and institutions, public and private, will make better decisions if they take noise seriously.

Sunday, June 13, 2021

Philosophy in Science: Can philosophers of science permeate through science and produce scientific knowledge?

Pradeu, T., et al. (2021)
Preprint
British Journal for the Philosophy of Science

Abstract

Most philosophers of science do philosophy ‘on’ science. By contrast, others do philosophy ‘in’ science (‘PinS’), i.e., they use philosophical tools to address scientific problems and to provide scientifically useful proposals. Here, we consider the evidence in favour of a trend of this nature. We proceed in two stages. First, we identify relevant authors and articles empirically with bibliometric tools, given that PinS would be likely to infiltrate science and thus to be published in scientific journals (‘intervention’), cited in scientific journals (‘visibility’) and sometimes recognized as a scientific result by scientists (‘contribution’). We show that many central figures in philosophy of science have been involved in PinS, and that some philosophers have even ‘specialized’ in this practice. Second, we propose a conceptual definition of PinS as a process involving three conditions (raising a scientific problem, using philosophical tools to address it, and making a scientific proposal), and we ask whether the articles identified at the first stage fulfil all these conditions. We show that PinS is a distinctive, quantitatively substantial trend within philosophy of science, demonstrating the existence of a methodological continuity from science to philosophy of science.

From the Conclusion

A crucial and long-standing question for philosophers of science is how philosophy of science relates to science, including, in particular, its possible impact on science. Various important ways in which philosophy of science can have an impact on science have been documented in the past, from the influence of Mach, Poincaré and Schopenhauer on the development of the theory of relativity (Rovelli [2018]) to Popper’s long-recognized influence on scientists, such as Eccles and Medawar, and some recent reflections on how best to organize science institutionally (e.g. Leonelli [2017]). Here, we identify and describe an approach that we propose to call ‘PinS’, which adds another, in our view essential, layer to this picture.

By combining quantitative and qualitative tools, we demonstrate the existence of a corpus of articles by philosophers of science, published either in philosophy of science journals or in scientific journals, raising scientific problems and aiming to contribute to their resolution via the use of philosophical tools. PinS constitutes a subdomain of philosophy of science, which has a long history, with canonical texts and authors, but, to our knowledge, this is the first time this domain has been delineated and analysed.

Saturday, June 12, 2021

Science Doesn't Work That Way

Gregory E. Kaebnick
Boston Review
Originally published 30 April 21

Here is an excerpt:

The way to square this circle is to acknowledge that what objectivity science is able to deliver derives not from individual scientists but from the social institutions and practices that structure their work. The philosopher of science Karl Popper expressed this idea clearly in his 1945 book The Open Society and Its Enemies. “There is no doubt that we are all suffering under our own system of prejudices,” he acknowledged—“and scientists are no exception to this rule.” But this is no threat to objectivity, he argued—not because scientists manage to liberate themselves from their prejudices, but rather because objectivity is “closely bound up with the social aspect of scientific method.” In particular, “science and scientific objectivity do not (and cannot) result from the attempts of an individual scientist to be ‘objective,’ but from the friendly-hostile co-operation of many scientists.” Thus Robinson Crusoe cannot be a scientist, “For there is nobody but himself to check his results.”

More recently, philosophers and historians of science such as Helen Longino, Miriam Solomon, and Naomi Oreskes have developed detailed arguments along similar lines, showing how the integrity and objectivity of scientific knowledge depend crucially on social practices. Science even sometimes advances not in spite but because of scientists’ filters and biases—whether a tendency to focus single-mindedly on a particular set of data, a desire to beat somebody else to an announcement, a contrarian streak, or an overweening self-confidence. Any vision of science that makes it depend on complete disinterestedness is doomed to make science impossible. Instead, we must develop a more widespread appreciation of the way science depends on protocols and norms that scientists have collectively developed for testing, refining, and disseminating scientific knowledge. A scientist must be able to show that research has met investigative standards, that it has been exposed to criticism, and that criticisms can be met with arguments.

The implication is that science works not so much because scientists have a special ability to filter out their biases or to access the world as it really is, but instead because they are adhering to a social process that structures their work—constraining and channeling their predispositions and predilections, their moments of eureka, their large yet inevitably limited understanding, their egos and their jealousies. These practices and protocols, these norms and standards, do not guarantee mistakes are never made. But nothing can make that guarantee. The rules of the game are themselves open to scrutiny and revision in light of argument, and that is the best we can hope for.

This way of understanding science fares better than the exalted view, which makes scientific knowledge impossible. Like all human endeavors, science is fallible, but still it warrants belief—according to how well it adheres to rules we have developed for it. What makes for objectivity and expertise is not, or not merely, the simple alignment between what one claims and how the world is, but a commitment to a process that is accepted as producing empirical adequacy.

Friday, June 11, 2021

Record-High 47% in U.S. Think Abortion Is Morally Acceptable

Megan Brenan
Gallup.com
Originally posted 9 June 21

Americans are sharply divided in their abortion views, including on its morality, with an equal split between those who believe it is morally acceptable and those who say it is morally wrong. The 47% who say it is acceptable is, by two percentage points, the highest Gallup has recorded in two decades of measurement. Just one point separates them from the 46% who think abortion is wrong from a moral perspective.

Since 2001, the gap between these readings has varied from zero to 20 points. The latest gap, based on a May 3-18 Gallup poll, is slightly smaller than last year's, when 47% thought abortion was morally wrong and 44% said it was morally acceptable. Americans have been typically more inclined to say abortion is morally wrong than morally acceptable, though the gap has narrowed in recent years. The average gap has been five points since 2013 (43% morally acceptable and 48% morally wrong), compared with 11 points between 2001 and 2012 (39% and 50%, respectively).

Democrats and political independents have become more likely to say abortion is morally acceptable. Sixty-four percent of Democrats, 51% of independents and 26% of Republicans currently hold this view.

Causal judgments about atypical actions are influenced by agents' epistemic states

Kirfel, L. & Lagnado, D.
Cognition, Volume 212, July 2021.

Abstract

A prominent finding in causal cognition research is people’s tendency to attribute increased causality to atypical actions. If two agents jointly cause an outcome (conjunctive causation), but differ in how frequently they have performed the causal action before, people judge the atypically acting agent to have caused the outcome to a greater extent. In this paper, we argue that it is the epistemic state of an abnormally acting agent, rather than the abnormality of their action, that is driving people's causal judgments. Given the predictability of the normally acting agent's behaviour, the abnormal agent is in a better position to foresee the consequences of their action. We put this hypothesis to test in four experiments. In Experiment 1, we show that people judge the atypical agent as more causal than the normally acting agent, but also judge the atypical agent to have an epistemic advantage. In Experiment 2, we find that people do not judge a causal difference if no epistemic advantage for the abnormal agent arises. In Experiment 3, we replicate these findings in a scenario in which the abnormal agent's epistemic advantage generalises to a novel context. In Experiment 4, we extend these findings to mental states more broadly construed and develop a Bayesian network model that predicts the degree of outcome-oriented mental states based on action normality and epistemic states. We find that people infer mental states like desire and intention to a greater extent from abnormal behaviour when this behaviour is accompanied by an epistemic advantage. We discuss these results in light of current theories and research on people's preference for abnormal causes.

From the Conclusion

In this paper, we have argued that the typicality of actions changes how much an agent can foresee the consequences of their action. We have shown that it is this very epistemic asymmetry between a normally and abnormally acting agent that influences people’s causal judgments. Both the employee and the party organiser acted abnormally, but when acting, they also could have foreseen the event of a dust explosion to a greater extent. While further research is needed on this topic, the connection between action typicality and epistemic states brings us one step closer to understanding the enigmatic role of normality in causal cognition.

Thursday, June 10, 2021

Moral Extremism

Spencer Case
Wuhan University
Penultimate draft, Journal of Applied Philosophy

Abstract

The word ‘extremist’ is often used pejoratively, but it’s not clear what, if anything, is wrong with extremism. My project is to give an account of moral extremism as a vice. It consists roughly in having moral convictions so intense that they cause a sort of moral tunnel vision, pushing salient competing considerations out of mind. We should be interested in moral extremism for several reasons: it’s consequential, it’s insidious – we don’t expect immorality to arise from excessive devotion to morality – and it’s yet to attract much philosophical attention. I give several examples of moral extremism from history and explore their social-political implications. I also consider how we should evaluate people who miss the mark, being either too extreme in the service of a good cause or inconsistent with their righteous convictions. I compare John Brown and John Quincy Adams, who fell on either side of this spectrum, as examples.

Conclusion

Accusations of extremism are often thrown around to discredit unpopular positions. It seems fair for the person accused of being an extremist to ask: “Who cares if I’m an extremist, or if the position I’m defending is extreme, if I’m right?” I began with quotes from three reformers who took this line of reply. I’ve argued, however, that we should worry about extremism in the service of good causes. Extremism on my account is a vice. What it consists in, roughly, is an intense moral conviction that prevents the agent from perceiving, or acting on, competing moral considerations when these are important. I’ve argued that this vice has had baleful consequences throughout history. The discussion of John Brown and John Quincy Adams introduced a wrinkle: perhaps in rare circumstances, extremists can also confer certain benefits on a society. A general lesson from this discussion is that we must occasionally look at our own moral convictions, especially the ones that generate the strongest emotions, with a degree of suspicion. Passion for some righteous cause doesn’t necessarily indicate that we are morally on the right track. Evil can be insidious, and even our strongest moral convictions can morally mislead.