Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Error.

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England Journal of Medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concerns can lead to overly conservative decisions, including reluctance to try new things. Yet, older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop" since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI).

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Saturday, May 7, 2022

Mathematical model offers clear-cut answers to how morals will change over time

The Institute for Futures Studies
Phys.org
Originally posted April 13, 2022

Researchers at the Institute for Futures Studies in Stockholm, Sweden, have created a mathematical model to predict changes in moral opinion. It predicts that values about corporal punishment of children, abortion rights, and how parental leave should be shared between parents will all move in liberal directions in the U.S. Results from a first test of the model, using data from large opinion surveys continuously conducted in the U.S., are promising.

Corporal punishment of children, such as spanking or paddling, is still widely accepted in the U.S. But public opinion is changing rapidly, and in the United States and elsewhere around the world, this norm will soon become a marginal position. The right to abortion is currently being threatened through a series of court cases—but though change is slow, the view of abortion as a right will eventually come to dominate. A majority of Americans today reject the claim that parental leave should be equally shared between parents, but within 15 years, public opinion will flip, and a majority will support an equal division.

"Almost all moral issues are moving in the liberal direction. Our model is based on large opinion surveys continuously conducted in the U.S., but our method for analyzing the dynamics of moral arguments to predict changing public opinion on moral issues can be applied anywhere," says social norm researcher Pontus Strimling, a research leader at the Institute for Futures Studies, who together with mathematician Kimmo Eriksson and statistician Irina Vartanova conducted the study that will be published in the journal Royal Society Open Science on Wednesday, April 13th.


From the Discussion

Overall, this study shows that moral opinion change can to some extent be predicted, even under unusually volatile circumstances. Note that the prediction method used in this paper is quite rudimentary. Specifically, the method is based only on a very simple survey measure of each opinion's argument advantage and the use of historical opinion data to calibrate a parameter for converting such measures to predicted change rates. Given that the direction is predicted entirely from surveys about argument advantage, it is remarkable that the direction was correctly predicted in two-thirds of the cases (three-quarters if the issues related to singular events were excluded). Even so, the method can probably be improved.
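To make the method description above concrete, here is a minimal Python sketch, assuming the simplest possible reading: an opinion's argument advantage (a survey measure) is converted into an annual change rate through a single parameter calibrated on historical opinion data. The function names, the linear form, and every number are hypothetical illustrations, not the authors' code or data.

```python
# Hypothetical sketch of the prediction logic summarized above. The linear
# conversion, names, and numbers are assumptions, not taken from the paper.

import numpy as np

def calibrate_rate_parameter(argument_advantage, observed_annual_change):
    """Fit one scalar k so that predicted change ~= k * argument advantage,
    using least squares through the origin on historical opinion data."""
    x = np.asarray(argument_advantage, dtype=float)
    y = np.asarray(observed_annual_change, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))

def predict_support(current_support, argument_advantage, k, years):
    """Project the share supporting a position forward, clipped to [0, 1]."""
    return float(np.clip(current_support + k * argument_advantage * years, 0.0, 1.0))

# Toy calibration: three issues with invented argument advantages and
# observed annual opinion changes.
k = calibrate_rate_parameter([0.20, 0.10, -0.15], [0.040, 0.020, -0.030])

# A position with 40% support and a modest argument advantage is projected
# to cross majority support within about 15 years.
print(predict_support(0.40, 0.05, k, years=15))  # ~0.55
```

On this toy calibration the projection crosses 50% within roughly the 15-year horizon mentioned for the parental-leave question above; the real model's functional form and calibration may of course differ.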

Predicting how U.S. public opinion on moral issues will change from 2018 to 2020 and beyond, Royal Society Open Science (2022).

Monday, March 28, 2022

Do people understand determinism? The tracking problem for measuring free will beliefs

Murray, S., Dykhuis, E., & Nadelhoffer, T.
(2022, February 8). 
https://doi.org/10.31234/osf.io/kyza7

Abstract

Experimental work on free will typically relies on using deterministic stimuli to elicit judgments of free will. We call this the Vignette-Judgment model. In this paper, we outline a problem with research based on this model. It seems that people either fail to respond to the deterministic aspects of vignettes when making judgments or that their understanding of determinism differs from researcher expectations. We provide some empirical evidence for a key assumption of the problem. In the end, we argue that people seem to lack facility with the concept of determinism, which calls into question the validity of experimental work operating under the Vignette-Judgment model. We also argue that alternative experimental paradigms are unlikely to elicit judgments that are philosophically relevant to questions about the metaphysics of free will.

Error and judgment

Our results show that people make several errors about deterministic stimuli used to elicit judgments about free will and responsibility. Many participants seem to conflate determinism with different constructs (bypassing or fatalism) or mistakenly interpret the implications of deterministic constraints on agents (intrusion).

Measures of item invariance suggest that participants were not responding differently to error measures across different vignettes. Hence, responses to error measures cannot be explained exclusively in terms of differences in vignettes, but rather seem to reflect participants’ mistaken judgments about determinism. Further, these mistakes are associated with significant differences in judgments about free will. Some of the patterns are predictable: participants who conflate determinism with bypassing attribute less free will to individuals in deterministic scenarios, while participants who import intrusion into deterministic scenarios attribute greater free will. This makes sense. As participants perceive mental states to be less causally efficacious or individuals as less ultimately in control of their decisions, free will is diminished. However, as people perceive more indeterminism, free will is amplified.

Additionally, we found that errors of intrusion are stronger than errors of bypassing or fatalism. Because bypassing errors are associated with diminished judgments of free will and intrusion errors are associated with amplified judgments, then, if all three errors were equal in strength, we would expect a linear relationship between different errors: individuals who make bypassing errors would have the lowest average judgments, individuals who make intrusion errors would have the highest average judgments, and people who make both errors would be in the middle (as both errors would cancel each other out). We did not observe this relationship. Instead, participants who make intrusion errors are statistically indistinguishable from each other, no matter what other kinds of errors they make.
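To see why that matters, here is a toy illustration with invented numbers, assuming an additive model in which bypassing and intrusion errors shift free-will ratings by equal and opposite amounts; none of these values come from the study.

```python
# Toy additive model with invented numbers: if the two error types had equal
# and opposite effects, the "both errors" group should fall midway between
# the bypassing-only and intrusion-only groups.

baseline = 4.0           # hypothetical mean free-will rating with no errors
bypassing_effect = -1.0  # conflating determinism with bypassing lowers ratings
intrusion_effect = +1.0  # importing indeterminism raises ratings

expected_means = {
    "bypassing only": baseline + bypassing_effect,                  # lowest
    "both errors": baseline + bypassing_effect + intrusion_effect,  # middle
    "intrusion only": baseline + intrusion_effect,                  # highest
}
print(expected_means)
# {'bypassing only': 3.0, 'both errors': 4.0, 'intrusion only': 5.0}
```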

Errors of intrusion thus seem to trump the others in the process of forming judgments of free will. The errors people make, then, are not incidentally related to their judgments. Instead, there are significant associations between people's inferential errors about determinism and how they attribute free will and responsibility. This evidence supports our claim that people make several errors about the nature and implications of determinism.

Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science, 27 Oct 2017:
Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 

Sunday, February 24, 2019

Biased algorithms: here’s a more radical approach to creating fairness

Tom Douglas
theconversation.com
Originally posted January 21, 2019

Here is an excerpt:

What’s fair?

AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair – fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?

There is usually a tension between procedural fairness and accuracy – attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm's overall error rate. Take the COMPAS algorithm for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tended to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend or not to re-offend.
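The trade-off described here can be made concrete with a short, generic sketch that compares per-group false positive rates (people wrongly flagged as likely to re-offend) against overall accuracy. The data, group labels, and function names below are invented for illustration and are unrelated to the actual COMPAS system.

```python
# Generic illustration of the procedural-fairness check described above:
# compare false positive rates across groups alongside overall accuracy.
# All data, labels, and names are invented; this is not the COMPAS model.

import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual non-reoffenders wrongly predicted to re-offend."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    negatives = (y_true == 0)
    return float(np.mean(y_pred[negatives] == 1)) if negatives.any() else 0.0

def fairness_report(y_true, y_pred, group):
    """Overall accuracy plus a false positive rate for each group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {"accuracy": float(np.mean(y_true == y_pred))}
    for g in np.unique(group):
        mask = (group == g)
        report[f"fpr_{g}"] = false_positive_rate(y_true[mask], y_pred[mask])
    return report

# Toy data: two equal-sized groups with identical outcomes but unequal errors.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]   # 1 = actually re-offended
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = predicted to re-offend
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(fairness_report(y_true, y_pred, group))
# {'accuracy': 0.75, 'fpr_a': 0.5, 'fpr_b': 0.0}
```

Equalising the two groups' false positive rates generally means changing predictions for one of them, which is where the loss in overall accuracy described above comes from.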

We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have. For example, instead of trying to ensure that crime prediction errors affect different racial groups equally – a goal that may in any case be unattainable – we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.

The info is here.

Tuesday, June 13, 2017

Why It’s So Hard to Admit You’re Wrong

Kristin Wong
The New York Times
Originally published May 22, 2017

Here are two excerpts:

Mistakes can be hard to digest, so sometimes we double down rather than face them. Our confirmation bias kicks in, causing us to seek out evidence to prove what we already believe. The car you cut off has a small dent in its bumper, which obviously means that it is the other driver’s fault.

Psychologists call this cognitive dissonance — the stress we experience when we hold two contradictory thoughts, beliefs, opinions or attitudes.

(cut)

“Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true,” said Carol Tavris, a co-author of the book “Mistakes Were Made (But Not by Me).”

She added that cognitive dissonance threatened our sense of self.

“To reduce dissonance, we have to modify the self-concept or accept the evidence,” Ms. Tavris said. “Guess which route people prefer?”

Or maybe you cope by justifying your mistake. The psychologist Leon Festinger suggested the theory of cognitive dissonance in the 1950s when he studied a small religious group that believed a flying saucer would rescue its members from an apocalypse on Dec. 20, 1954. Publishing his findings in the book “When Prophecy Fails,” he wrote that the group doubled down on its belief and said God had simply decided to spare the members, coping with their own cognitive dissonance by clinging to a justification.

“Dissonance is uncomfortable and we are motivated to reduce it,” Ms. Tavris said.

When we apologize for being wrong, we have to accept this dissonance, and that is unpleasant. On the other hand, research has shown that it can feel good to stick to our guns.

Monday, March 27, 2017

Healthcare Data Breaches Up 40% Since 2015

Alexandria Wilson Pecci
MedPage Today
Originally posted February 26, 2017

Here is an excerpt:

Broken down by industry, hacking was the most common data breach source for the healthcare sector, according to data provided to HealthLeaders Media by the Identity Theft Resource Center. Physical theft was the biggest breach category for healthcare in 2015 and 2014.

Insider theft and employee error/negligence tied for the second most common data breach sources in 2016 in the health industry. In addition, insider theft was a bigger problem in the healthcare sector than in other industries, and has been for the past five years.

Insider theft is alleged to have been at play in the Jackson Health System incident. Former employee Evelina Sophia Reid was charged in a fourteen-count indictment with conspiracy to commit access device fraud, possessing fifteen or more unauthorized access devices, aggravated identity theft, and computer fraud, the Department of Justice said. Prosecutors say that her co-conspirators used the stolen information to file fraudulent tax returns in the patients' names.

The article is here.

Tuesday, March 14, 2017

“I placed too much faith in underpowered studies:” Nobel Prize winner admits mistakes

Retraction Watch
Originally posted February 21, 2017

Although it’s the right thing to do, it’s never easy to admit error — particularly when you’re an extremely high-profile scientist whose work is being dissected publicly. So while it’s not a retraction, we thought this was worth noting: A Nobel Prize-winning researcher has admitted on a blog that he relied on weak studies in a chapter of his bestselling book.

The blog — by Ulrich Schimmack, Moritz Heene, and Kamini Kesavan — critiqued the citations included in a book by Daniel Kahneman, a psychologist whose research has illuminated our understanding of how humans form judgments and make decisions and earned him half of the 2002 Nobel Prize in Economics.

The article is here.