Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Frieden
MedPageToday.com
Originally posted 26 May 21

Here are two excerpts:

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.

(cut)

Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Monday, June 14, 2021

Bias Is a Big Problem. But So Is ‘Noise.’

Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
The New York Times
Originally posted 15 May 21

Here is an excerpt:

There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.
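The statistical logic behind averaging deserves a concrete illustration: independent errors partly cancel, so the noise in an average of n judgments falls by roughly a factor of the square root of n. Below is a minimal Python simulation of that logic; the numbers are invented for illustration and are not drawn from the op-ed or the authors' book.

```python
import random
import statistics

# Toy model of noisy judgment: each judge's estimate is the true value plus
# independent random error. All values here are illustrative assumptions.
random.seed(42)

TRUE_VALUE = 100.0   # hypothetical quantity being judged
JUDGE_SD = 20.0      # assumed noise (std. dev.) of a single judgment
PANEL_SIZE = 9       # number of independent judges whose estimates we average
TRIALS = 10_000

def judgment() -> float:
    """One judge's noisy estimate of the true value."""
    return random.gauss(TRUE_VALUE, JUDGE_SD)

single_errors = [abs(judgment() - TRUE_VALUE) for _ in range(TRIALS)]
panel_errors = [
    abs(statistics.mean(judgment() for _ in range(PANEL_SIZE)) - TRUE_VALUE)
    for _ in range(TRIALS)
]

print(f"mean error, single judge:    {statistics.mean(single_errors):.2f}")
print(f"mean error, 9-judge average: {statistics.mean(panel_errors):.2f}")
# Because the errors are independent, the panel's noise shrinks by about
# sqrt(PANEL_SIZE), so the 9-judge average shows roughly one third the
# error of a single judge.
```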

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. Organizations and institutions, public and private, will make better decisions if they take noise seriously.

Sunday, June 13, 2021

Philosophy in Science: Can philosophers of science permeate through science and produce scientific knowledge?

Pradeu, T., et al. (2021)
Preprint
British Journal for the Philosophy of Science

Abstract

Most philosophers of science do philosophy ‘on’ science. By contrast, others do philosophy ‘in’ science (‘PinS’), i.e., they use philosophical tools to address scientific problems and to provide scientifically useful proposals. Here, we consider the evidence in favour of a trend of this nature. We proceed in two stages. First, we identify relevant authors and articles empirically with bibliometric tools, given that PinS would be likely to infiltrate science and thus to be published in scientific journals (‘intervention’), cited in scientific journals (‘visibility’) and sometimes recognized as a scientific result by scientists (‘contribution’). We show that many central figures in philosophy of science have been involved in PinS, and that some philosophers have even ‘specialized’ in this practice. Second, we propose a conceptual definition of PinS as a process involving three conditions (raising a scientific problem, using philosophical tools to address it, and making a scientific proposal), and we ask whether the articles identified at the first stage fulfil all these conditions. We show that PinS is a distinctive, quantitatively substantial trend within philosophy of science, demonstrating the existence of a methodological continuity from science to philosophy of science.

From the Conclusion

A crucial and long-standing question for philosophers of science is how philosophy of science relates to science, including, in particular, its possible impact on science. Various important ways in which philosophy of science can have an impact on science have been documented in the past, from the influence of Mach, Poincaré and Schopenhauer on the development of the theory of relativity (Rovelli [2018]) to Popper’s long-recognized influence on scientists, such as Eccles and Medawar, and some recent reflections on how best to organize science institutionally (e.g. Leonelli [2017]). Here, we identify and describe an approach that we propose to call ‘PinS’, which adds another, in our view essential, layer to this picture.

By combining quantitative and qualitative tools, we demonstrate the existence of a corpus of articles by philosophers of science, either published in philosophy of science journals or in scientific journals, raising scientific problems and aiming to contribute to their resolution via the use of philosophical tools. PinS constitutes a subdomain of philosophy of science, which has a long history, with canonical texts and authors, but, to our knowledge, this is the first time this domain is delineated and analysed.

Saturday, June 12, 2021

Science Doesn't Work That Way

Gregory E. Kaebnick
Boston Review
Originally published 30 April 21

Here is an excerpt:

The way to square this circle is to acknowledge that what objectivity science is able to deliver derives not from individual scientists but from the social institutions and practices that structure their work. The philosopher of science Karl Popper expressed this idea clearly in his 1945 book The Open Society and Its Enemies. “There is no doubt that we are all suffering under our own system of prejudices,” he acknowledged—“and scientists are no exception to this rule.” But this is no threat to objectivity, he argued—not because scientists manage to liberate themselves from their prejudices, but rather because objectivity is “closely bound up with the social aspect of scientific method.” In particular, “science and scientific objectivity do not (and cannot) result from the attempts of an individual scientist to be ‘objective,’ but from the friendly-hostile co-operation of many scientists.” Thus Robinson Crusoe cannot be a scientist, “For there is nobody but himself to check his results.”

More recently, philosophers and historians of science such as Helen Longino, Miriam Solomon, and Naomi Oreskes have developed detailed arguments along similar lines, showing how the integrity and objectivity of scientific knowledge depend crucially on social practices. Science even sometimes advances not in spite of but because of scientists’ filters and biases—whether a tendency to focus single-mindedly on a particular set of data, a desire to beat somebody else to an announcement, a contrarian streak, or an overweening self-confidence. Any vision of science that makes it depend on complete disinterestedness is doomed to make science impossible. Instead, we must develop a more widespread appreciation of the way science depends on protocols and norms that scientists have collectively developed for testing, refining, and disseminating scientific knowledge. A scientist must be able to show that research has met investigative standards, that it has been exposed to criticism, and that criticisms can be met with arguments.

The implication is that science works not so much because scientists have a special ability to filter out their biases or to access the world as it really is, but instead because they are adhering to a social process that structures their work—constraining and channeling their predispositions and predilections, their moments of eureka, their large yet inevitably limited understanding, their egos and their jealousies. These practices and protocols, these norms and standards, do not guarantee mistakes are never made. But nothing can make that guarantee. The rules of the game are themselves open to scrutiny and revision in light of argument, and that is the best we can hope for.

This way of understanding science fares better than the exalted view, which makes scientific knowledge impossible. Like all human endeavors, science is fallible, but still it warrants belief—according to how well it adheres to rules we have developed for it. What makes for objectivity and expertise is not, or not merely, the simple alignment between what one claims and how the world is, but a commitment to a process that is accepted as producing empirical adequacy.

Friday, June 11, 2021

Record-High 47% in U.S. Think Abortion Is Morally Acceptable

Megan Brenan
Gallup.com
Originally posted 9 June 21

Americans are sharply divided in their abortion views, including on its morality, with a nearly even split between those who believe it is morally acceptable and those who say it is morally wrong. The 47% who say it is acceptable is, by two percentage points, the highest Gallup has recorded in two decades of measurement. Just one point separates them from the 46% who think abortion is wrong from a moral perspective.

Since 2001, the gap between these readings has varied from zero to 20 points. The latest gap, based on a May 3-18 Gallup poll, is slightly smaller than last year's, when 47% thought abortion was morally wrong and 44% said it was morally acceptable. Americans have been typically more inclined to say abortion is morally wrong than morally acceptable, though the gap has narrowed in recent years. The average gap has been five points since 2013 (43% morally acceptable and 48% morally wrong), compared with 11 points between 2001 and 2012 (39% and 50%, respectively).

Democrats and political independents have become more likely to say abortion is morally acceptable. Sixty-four percent of Democrats, 51% of independents and 26% of Republicans currently hold this view.

Causal judgments about atypical actions are influenced by agents' epistemic states

Kirfel, L. & Lagnado, D.
Cognition, Volume 212, July 2021.

Abstract

A prominent finding in causal cognition research is people’s tendency to attribute increased causality to atypical actions. If two agents jointly cause an outcome (conjunctive causation), but differ in how frequently they have performed the causal action before, people judge the atypically acting agent to have caused the outcome to a greater extent. In this paper, we argue that it is the epistemic state of an abnormally acting agent, rather than the abnormality of their action, that is driving people's causal judgments. Given the predictability of the normally acting agent's behaviour, the abnormal agent is in a better position to foresee the consequences of their action. We put this hypothesis to test in four experiments. In Experiment 1, we show that people judge the atypical agent as more causal than the normally acting agent, but also judge the atypical agent to have an epistemic advantage. In Experiment 2, we find that people do not judge a causal difference if no epistemic advantage for the abnormal agent arises. In Experiment 3, we replicate these findings in a scenario in which the abnormal agent's epistemic advantage generalises to a novel context. In Experiment 4, we extend these findings to mental states more broadly construed and develop a Bayesian network model that predicts the degree of outcome-oriented mental states based on action normality and epistemic states. We find that people infer mental states like desire and intention to a greater extent from abnormal behaviour when this behaviour is accompanied by an epistemic advantage. We discuss these results in light of current theories and research on people's preference for abnormal causes.
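The Bayesian network the authors describe can be caricatured as a simple chain: action normality shifts the probability of foresight (the epistemic state), which in turn shifts how strongly an intention is inferred. The sketch below is a toy illustration with invented probabilities, not the paper's fitted model or parameter values.

```python
# Toy chain: action normality -> P(foresight) -> P(intention).
# All probabilities are invented for illustration; this is not the
# authors' model.

# Assumed P(foresight | action): the abnormally acting agent is in a better
# position to foresee the consequences of their action.
p_foresight = {"normal": 0.3, "abnormal": 0.8}

# Assumed P(intention | foresight): foresight makes intention more credible.
p_intention_given_foresight = {True: 0.7, False: 0.1}

def p_intention(action: str) -> float:
    """Marginal P(intention | action), summing over the foresight variable."""
    pf = p_foresight[action]
    return (pf * p_intention_given_foresight[True]
            + (1 - pf) * p_intention_given_foresight[False])

for action in ("normal", "abnormal"):
    print(f"{action:8s} action -> P(intention) = {p_intention(action):.2f}")
# normal   action -> P(intention) = 0.28
# abnormal action -> P(intention) = 0.58
# The abnormal agent attracts the stronger mental-state inference only
# because abnormality raises foresight; give both agents equal foresight
# and the difference disappears, mirroring the Experiment 2 finding.
```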

From the Conclusion

In this paper, we have argued that the typicality of actions changes how much an agent can foresee the consequences of their action. We have shown that it is this very epistemic asymmetry between a normally and abnormally acting agent that influences people’s causal judgments. Both employee and party organiser acted abnormally, but when acting, they also could have foreseen the event of a dust explosion to a greater extent. While further research is needed on this topic, the connection between action typicality and epistemic states brings us one step closer to understanding the enigmatic role of normality in causal cognition.

Thursday, June 10, 2021

Moral Extremism

Spencer Case
Wuhan University
Penultimate draft for the Journal of Applied Philosophy

Abstract

The word ‘extremist’ is often used pejoratively, but it’s not clear what, if anything, is wrong with extremism. My project is to give an account of moral extremism as a vice. It consists roughly in having moral convictions so intense that they cause a sort of moral tunnel vision, pushing salient competing considerations out of mind. We should be interested in moral extremism for several reasons: it’s consequential, it’s insidious – we don’t expect immorality to arise from excessive devotion to morality – and it’s yet to attract much philosophical attention. I give several examples of moral extremism from history and explore their social-political implications. I also consider how we should evaluate people who miss the mark, being either too extreme in the service of a good cause or inconsistent with their righteous convictions. I compare John Brown and John Quincy Adams, who fell on either side of this spectrum, as examples.

Conclusion

Accusations of extremism are often thrown around to discredit unpopular positions. It seems fair for the person accused of being an extremist to ask: “Who cares if I’m an extremist, or if the position I’m defending is extreme, if I’m right?” I began with quotes from three reformers who took this line of reply. I’ve argued, however, that we should worry about extremism in the service of good causes. Extremism on my account is a vice. What it consists in, roughly, is an intense moral conviction that prevents the agent from perceiving, or acting on, competing moral considerations when these are important. I’ve argued that this vice has had baleful consequences throughout history. The discussion of John Brown and John Quincy Adams introduced a wrinkle: perhaps in rare circumstances, extremists can also confer certain benefits on a society. A general lesson from this discussion is that we must occasionally look at our own moral convictions, especially the ones that generate the strongest emotions, with a degree of suspicion. Passion for some righteous cause doesn’t necessarily indicate that we are morally on the right track. Evil can be insidious, and even our strongest moral convictions can morally mislead.

Wednesday, June 9, 2021

Towards a computational theory of social groups: A finite set of cognitive primitives for representing any and all social groups in the context of conflict

Pietraszewski, D. (2021). 
Behavioral and Brain Sciences, 1-62. 
doi:10.1017/S0140525X21000583

Abstract

We don't yet have adequate theories of what the human mind is representing when it represents a social group. Worse still, many people think we do. This mistaken belief is a consequence of the state of play: Until now, researchers have relied on their own intuitions to link up the concept social group on the one hand, and the results of particular studies or models on the other. While necessary, this reliance on intuition has been purchased at considerable cost. When looked at soberly, existing theories of social groups are either (i) literal, but not remotely adequate (such as models built atop economic games), or (ii) simply metaphorical (typically a subsumption or containment metaphor). Intuition is filling in the gaps of an explicit theory. This paper presents a computational theory of what, literally, a group representation is in the context of conflict: it is the assignment of agents to specific roles within a small number of triadic interaction types. This “mental definition” of a group paves the way for a computational theory of social groups—in that it provides a theory of what exactly the information-processing problem of representing and reasoning about a group is. For psychologists, this paper offers a different way to conceptualize and study groups, and suggests that a non-tautological definition of a social group is possible. For cognitive scientists, this paper provides a computational benchmark against which natural and artificial intelligences can be held.
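To give a rough sense of what such a representation could look like in code, here is a toy sketch, my own illustration rather than Pietraszewski's formalism: conflict events carry an interaction type drawn from a small, finite set, each with three role slots, and "same group" is read off from which agents fill which roles.

```python
from dataclasses import dataclass

# Toy rendering of the idea that a group representation is an assignment of
# agents to roles within triadic interaction types. The type name and role
# labels below are invented placeholders, not the paper's primitives.

@dataclass(frozen=True)
class TriadicEvent:
    """One conflict interaction with three role slots."""
    interaction_type: str  # drawn from a small, finite set of types (assumed)
    aggressor: str
    target: str
    third_party: str       # an agent who joins the interaction on one side

def same_side_pairs(events: list[TriadicEvent]) -> set[frozenset[str]]:
    """Infer 'same group' pairs from third parties who defend a target."""
    pairs: set[frozenset[str]] = set()
    for e in events:
        if e.interaction_type == "third_party_defends_target":
            pairs.add(frozenset({e.target, e.third_party}))
    return pairs

events = [
    TriadicEvent("third_party_defends_target", "ann", "bob", "cal"),
    TriadicEvent("third_party_defends_target", "dee", "cal", "bob"),
]
print(same_side_pairs(events))  # {frozenset({'bob', 'cal'})}
```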

Summary and Conclusion

Despite an enormous literature on groups and group dynamics, little attention has been paid to explicit computational theories of how the mind represents and reasons about groups. The goal of this paper has been, in a conceptual, non-technical manner, to propose a simple but non-trivial framework for starting to ask questions about the nature of the underlying representations that make the phenomenon of social groups possible—all described at the level of information processing. This computational theory, when combined with many more such theories—and followed by extensive task analyses and empirical investigations—will eventually contribute to a full accounting of the information-processing required to represent, reason about, and act in accordance with group representations.

Tuesday, June 8, 2021

Action and inaction in moral judgments and decisions: Meta-analysis of omission-commission asymmetries

Jamison, J., Yay, T., & Feldman, G.
Journal of Experimental Social Psychology
Volume 89, July 2020, 103977

Abstract

Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = −0.26 to −0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret.
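For readers unfamiliar with the d values in the abstract: Cohen's d expresses a between-group difference in pooled-standard-deviation units, so d = 0.45 means the commission and omission conditions differ by a bit under half a standard deviation. Here is a minimal sketch of the computation, using invented ratings rather than the study's data.

```python
import statistics

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Hypothetical immorality ratings on a 1-7 scale (invented for illustration):
commission = [6, 5, 7, 4, 6, 5, 6, 5]
omission = [5, 6, 4, 5, 6, 4, 5, 5]
print(f"Cohen's d = {cohens_d(commission, omission):.2f}")
# -> Cohen's d = 0.59: the groups differ by roughly half a pooled standard
# deviation, an effect comparable in size to those reported above.
```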

From the Discussion

Regret: Deviation from the action-effect 

The classic action-effect (Kahneman & Tversky, 1982) findings were that actions leading to a negative outcome are regretted more than inactions leading to the same negative outcomes. We added a regret measure to examine whether the action-effect findings would extend to situations of morality involving intended harmful behavior. Our findings were opposite to the expected action-effect omission-commission asymmetry, with participants rating omissions as more regretted than commissions (d = 0.18 to 0.26).

One explanation for this surprising finding may be an intermingling of the perception of an actor's regret for their behavior with their regret for the outcome. In typical action-effect scenarios, actors behave in a way that is morally neutral but are faced with an outcome that deviates from expectations, such as losing money on an investment. In this study's omission bias scenarios, the actors behaved immorally to harm others for personal or interpersonal gain, and then are faced with an outcome that deviates from expectation. We hypothesized that participants would perceive actors as being more regretful for taking action that would immorally harm another person rather than allowing that harm through inaction. Yet it is plausible that participants were focused on the regret that actors would feel for not taking more direct action towards their goal of personal or interpersonal gain.

Another possible explanation for the regret finding is the side-taking hypothesis (DeScioli, 2016; DeScioli & Kurzban, 2013). This states that group members side against a wrongdoer whose action is perceived as morally wrong, in part by attributing a lack of remorse or regret to them. The negative relationship observed between the positive characteristic of regret and the negative characteristics of immorality, causality, and intentionality supports this explanation. Future research may be able to explore the true mechanisms of regret in such scenarios.