Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Medical Ethics.

Wednesday, February 8, 2023

AI in the hands of imperfect users

Kostick-Quenet, K.M., Gerke, S. 
npj Digit. Med. 5, 197 (2022). 
https://doi.org/10.1038/s41746-022-00737-z

Abstract

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has fallen to addressing potential bias among AI/ML’s human users or factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users towards more critical and reflective decision making using AI/ML.

(cut)

Impacts of uncertainty and urgency on decision quality

Trust plays a particularly critical role when decisions are made in contexts of uncertainty. Uncertainty, of course, is a central feature of most clinical decision making, particularly for conditions (e.g., COVID-19) or treatments (e.g., deep brain stimulation or gene therapies) that lack a long history of observed outcomes. As Wang and Busemeyer (2021) describe, "uncertain" choice situations can be distinguished from "risky" ones in that risky decisions have a range of outcomes with known odds or probabilities. If you flip a coin, there is a known 50% chance it lands on heads. Betting on heads is therefore risky, but the risk is quantifiable: specifically, a 50% chance of losing. Uncertain decision-making scenarios, on the other hand, have no well-known or agreed-upon outcome probabilities. This makes uncertain decision contexts risky as well, but the risks are not known well enough to permit fully rational decision making. In information-scarce contexts, critical decisions are by necessity made using imperfect reasoning or "gap-filling heuristics" that can lead to several predictable cognitive biases. Individuals might defer to an authority figure (messenger bias, authority bias); they may look to see what others are doing ("bandwagon" and social norm effects); or they may make affective forecasting errors, projecting current emotional states onto their future selves. The perceived or actual urgency of clinical decisions can introduce further biases, such as ambiguity aversion (a preference for known over unknown risks), deferral to the status quo or default, and loss aversion (weighing losses more heavily than gains of the same magnitude). These biases are intended to mitigate the risks of the unknown when fast decisions must be made, but they do not always bring us closer to the "best" course of action that would be chosen if all possible information were available.
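The risk-versus-uncertainty distinction the authors borrow from Wang and Busemeyer can be sketched numerically. This is a minimal illustration, not from the paper; the payoffs and probability ranges are hypothetical. Under risk, the expected value of a choice is a single well-defined number; under uncertainty, the decision maker can at best bound it over a range of plausible probabilities.

```python
# Risky choice: outcome probabilities are known exactly.
def expected_value(p_win, win, lose):
    """Expected payoff of a bet with a known win probability."""
    return p_win * win + (1 - p_win) * lose

# Betting $10 on a fair coin: a known 50% chance of losing.
risky_ev = expected_value(0.5, 10, -10)

# Uncertain choice: the win probability itself is unknown.
# All we can do is compute expected values over plausible probabilities.
plausible_p = [0.2, 0.35, 0.5, 0.65, 0.8]  # hypothetical range
uncertain_evs = [expected_value(p, 10, -10) for p in plausible_p]

print(risky_ev)                                 # a single, well-defined number
print(min(uncertain_evs), max(uncertain_evs))   # only a range is available
```

The spread between the minimum and maximum expected values is exactly the gap that "gap-filling heuristics" step in to close when a clinician must act anyway.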

(cut)

Conclusion

We echo others’ calls that before AI tools are “released into the wild,” we must better understand their outcomes and impacts in the hands of imperfect human actors by testing at least some of them according to a risk-based approach in clinical trials that reflect their intended use settings. We advance this proposal by drawing attention to the need to empirically identify and test how specific user biases and decision contexts shape how AI tools are used in practice and influence patient outcomes. We propose that value sensitive design (VSD) can be used to strategize human-machine interfaces in ways that encourage critical reflection, mitigate bias, and reduce overreliance on AI systems in clinical decision making. We believe this approach can help to reduce some of the burdens on physicians to figure out on their own (with only basic training or knowledge about AI) the optimal role of AI tools in decision making by embedding a degree of bias mitigation directly into AI systems and interfaces.

Wednesday, February 1, 2023

Ethics Consult: Keep Patient Alive Due to Spiritual Beliefs?

Jacob M. Appel
MedPageToday
Originally posted 28 Jan 23

Welcome to Ethics Consult -- an opportunity to discuss, debate (respectfully), and learn together. We select an ethical dilemma from a true, but anonymized, patient care case. You vote on your decision in the case and, next week, we'll reveal how you all made the call. Bioethicist Jacob M. Appel, MD, JD, will also weigh in with an ethical framework to help you learn and prepare.

The following case is adapted from Appel's 2019 book, Who Says You're Dead? Medical & Ethical Dilemmas for the Curious & Concerned.

Alexander is a 49-year-old man who comes to a prominent teaching hospital for a heart transplant. While awaiting the transplant, he is placed on a machine called a BIVAD, or biventricular assist device -- basically, an artificial heart the size of a small refrigerator to tide him over until a donor heart becomes available. While awaiting a heart, he suffers a severe stroke.

The doctors tell his wife, Katie, that no patient who has suffered such a severe stroke has ever regained consciousness and that Alexander is no longer a candidate for transplant. They would like to turn off the BIVAD and allow nature to take its course.

Not lost on these doctors is that Alexander occupies a desperately needed ICU bed, which could benefit other patients, and that his care costs the healthcare system upwards of $10,000 a day. The doctors are also aware that Alexander could survive for years on the BIVAD and the other machines that are now helping to keep him alive: a ventilator and a dialysis machine.

Katie refuses to yield to the request. "I realize he has no chance of recovery," she says. "But Alexander believed deeply in reincarnation. What mattered most to him was that he die at the right moment -- so that his soul could return to Earth in the body for which it was destined. To him, that would have meant keeping him on the machines until all brain function ceases, even if it means decades. I feel obligated to honor those wishes."

Monday, June 13, 2022

San Diego doctor who smuggled hydroxychloroquine into US, sold medication as a COVID-19 cure sentenced

Hope Sloop
KSWB-TV San Diego
Originally posted 29 MAY 22

A San Diego doctor was sentenced Friday to 30 days of custody and one year of house arrest for attempting to smuggle hydroxychloroquine into the U.S. and sell COVID-19 "treatment kits" at the beginning of the pandemic.  

According to officials with the U.S. Department of Justice, Jennings Ryan Staley attempted to sell what he described as a "medical cure" for the coronavirus, which was really hydroxychloroquine powder that the physician had imported from China by mislabeling the shipping container as "yam extract." Staley had attempted to replicate this process with another supplier as well, but that importer told the San Diego doctor that they "must do it legally." 

Following the arrival of his shipment of the hydroxychloroquine powder, Staley solicited investors to help fund his operation to sell the filled capsules as a "medical cure" for COVID-19. The SoCal doctor told potential investors that he could triple their money within 90 days.  

Staley also admitted in his plea agreement that he had written false prescriptions for hydroxychloroquine, using an associate's name and personal details without the employee's consent or knowledge.  

During an undercover operation, an agent purchased six of Staley's "treatment kits" for $4,000 and, during a recorded phone call, the doctor bragged about the efficacy of the kits and said, "I got the last tank of . . . hydroxychloroquine, smuggled out of China."  

Tuesday, September 1, 2020

Systemic racism and U.S. health care

J. Feagin & Z. Bennefield
Social Science & Medicine
Volume 103, February 2014, Pages 7-14

Abstract

This article draws upon a major social science theoretical approach–systemic racism theory–to assess decades of empirical research on racial dimensions of U.S. health care and public health institutions. From the 1600s, the oppression of Americans of color has been systemic and rationalized using a white racial framing–with its constituent racist stereotypes, ideologies, images, narratives, and emotions. We review historical literature on racially exploitative medical and public health practices that helped generate and sustain this racial framing and related structural discrimination targeting Americans of color. We examine contemporary research on racial differentials in medical practices, white clinicians' racial framing, and views of patients and physicians of color to demonstrate the continuing reality of systemic racism throughout health care and public health institutions. We conclude from research that institutionalized white socioeconomic resources, discrimination, and racialized framing from centuries of slavery, segregation, and contemporary white oppression severely limit and restrict access of many Americans of color to adequate socioeconomic resources–and to adequate health care and health outcomes. Dealing justly with continuing racial “disparities” in health and health care requires a conceptual paradigm that realistically assesses U.S. society's white-racist roots and contemporary racist realities. We conclude briefly with examples of successful public policies that have brought structural changes in racial and class differentials in health care and public health in the U.S. and other countries.

Highlights

• A full-fledged theory of structural (systemic) racism for interpreting health care data.

• A full-fledged developed theory of structural (systemic) racism for interpreting public health data.

• Focus on powerful white decision makers central to health-related institutions.

• Importance of listening to patients and physicians of color on health issues.

• Implications of systemic racism theory and data for public policies regarding medical care and public health.

The info is here.

Sunday, March 10, 2019

Rethinking Medical Ethics

Insights Team
Forbes.com
Originally posted February 11, 2019

Here is an excerpt:

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges.

Avoiding Bias

In 2017, the data analytics team at University of Chicago Medicine (UCM) used AI to predict how long a patient might stay in the hospital. The goal was to identify patients who could be released early, freeing up hospital resources and providing relief for the patient. A case manager would then be assigned to help sort out insurance, make sure the patient had a ride home, and otherwise smooth the way for early discharge.

In testing the system, the team found that the most accurate predictor of a patient’s length of stay was his or her ZIP code. This immediately raised red flags for the team: ZIP codes, they knew, were strongly correlated with a patient’s race and socioeconomic status. Relying on them would disproportionately affect African-Americans from Chicago’s poorest neighborhoods, who tended to stay in the hospital longer. The team decided that using the algorithm to assign case managers would be biased and unethical.
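The proxy problem the UCM team encountered can be illustrated with a short sketch. The data here are simulated and hypothetical, not UCM's actual system: when a feature like ZIP code encodes social circumstances that also drive the outcome, a simple group-comparison audit can surface the bias before the model ships.

```python
import random

random.seed(0)

# Hypothetical cohort: ZIP code group correlates with unmeasured social
# factors, which in turn correlate with length of stay.
patients = []
for _ in range(1000):
    zip_group = random.choice(["A", "B"])  # stand-in for a ZIP code
    # In this toy data, group B has longer average stays for social reasons.
    stay = random.gauss(5.0 if zip_group == "A" else 8.0, 1.0)
    patients.append((zip_group, stay))

def mean_stay(group):
    """Average length of stay (days) for one ZIP group."""
    stays = [s for z, s in patients if z == group]
    return sum(stays) / len(stays)

gap = mean_stay("B") - mean_stay("A")
# A large gap flags ZIP code as a proxy worth auditing before deployment.
print(f"mean stay gap between ZIP groups: {gap:.1f} days")
```

In practice, teams audit each candidate feature against protected attributes and either drop the proxies or constrain how the model uses them; the check above is the simplest version of that audit, and it is essentially what raised the red flag for the UCM team.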

The info is here.

Monday, October 9, 2017

Would We Even Know Moral Bioenhancement If We Saw It?

Wiseman H.
Camb Q Healthc Ethics. 2017;26(3):398-410.

Abstract

The term "moral bioenhancement" conceals a diverse plurality encompassing much potential, some elements of which are desirable, some of which are disturbing, and some of which are simply bland. This article invites readers to take a better differentiated approach to discriminating between elements of the debate rather than talking of moral bioenhancement "per se," or coming to any global value judgments about the idea as an abstract whole (no such whole exists). Readers are then invited to consider the benefits and distortions that come from the usual dichotomies framing the various debates, concluding with an additional distinction for further clarifying this discourse qua explicit/implicit moral bioenhancement.

The article is here, behind a paywall.

Email the author directly for a personal copy.

Wednesday, September 6, 2017

The Nuremberg Code 70 Years Later

Jonathan D. Moreno, Ulf Schmidt, and Steve Joffe
JAMA. Published online August 17, 2017.

Seventy years ago, on August 20, 1947, the American military tribunal in Nuremberg, Germany, delivered its verdict in the trial of 23 doctors and bureaucrats accused of war crimes and crimes against humanity for their roles in cruel and often lethal concentration camp medical experiments. As part of its judgment, the court articulated a 10-point set of rules for the conduct of human experiments that has come to be known as the Nuremberg Code. Among other requirements, the code called for the “voluntary consent” of the human research subject, an assessment of risks and benefits, and assurances of competent investigators. These concepts have become an important reference point for the ethical conduct of medical research. Yet, there has in the past been considerable debate among scholars about the code’s authorship, scope, and legal standing in both civilian and military science. Nonetheless, the Nuremberg Code has undoubtedly been a milestone in the history of biomedical research ethics.

Writings on medical ethics, laws, and regulations in a number of jurisdictions and countries, including a detailed and sophisticated set of guidelines from the Reich Ministry of the Interior in 1931, set the stage for the code. The same focus on voluntariness and risk that characterizes the code also suffuses these guidelines. What distinguishes the code is its context. As lead prosecutor Telford Taylor emphasized, although the Doctors’ Trial was at its heart a murder trial, it clearly implicated the ethical practices of medical experimenters and, by extension, the medical profession’s relationship to the state understood as an organized community living under a particular political structure. The embrace of Nazi ideology by German physicians, and the subsequent participation of some of their most distinguished leaders in the camp experiments, demonstrates the importance of professional independence from and resistance to the ideological and geopolitical ambitions of the authoritarian state.

The article is here.

Saturday, December 17, 2016

Free Will and Autonomous Medical Decision Making

Matthew A. Butkus
Neuroethics 3 (1): 75–119.

Abstract

Modern medical ethics makes a series of assumptions about how patients and their care providers make decisions about forgoing treatment. These assumptions are based on a model of thought and cognition that does not reflect actual cognition—it has substituted an ideal moral agent for a practical one. Instead of a purely rational moral agent, current psychology and neuroscience have shown that decision-making reflects a number of different factors that must be considered when conceptualizing autonomy. Multiple classical and contemporary discussions of autonomy and decision-making are considered and synthesized into a model of cognitive autonomy. Four categories of autonomy criteria are proposed to reflect current research in cognitive psychology and common clinical issues.

The article is here.

Sunday, August 21, 2016

Professing the Values of Medicine: The Modernized AMA Code of Medical Ethics

Brotherton S, Kao A, Crigger BJ.
JAMA. Published online July 14, 2016.
doi:10.1001/jama.2016.9752

The word profession is derived from the Latin word that means “to declare openly.” On June 13, 2016, the first comprehensive update of the AMA Code of Medical Ethics in more than 50 years was adopted at the annual meeting of the American Medical Association (AMA). By so doing, physician delegates attending the meeting, who represent every state and nearly every specialty, publicly professed to uphold the values that are the underpinning of the ethical practice of medicine in service to patients and the public.

The AMA Code was created in 1847 as a national code of ethics for physicians, the first of its kind for any profession anywhere in the world. Since its inception, the AMA Code has been a living document that has evolved and expanded as medicine and its social environment have changed. By the time the AMA Council on Ethical and Judicial Affairs embarked on a systematic review of the AMA Code in 2008, it had come to encompass 220 separate opinions offering ethics guidance for physicians on topics ranging from abortion to xenotransplantation. The AMA Code, over the years, became more fragmented and unwieldy. Opinions on individual topics were difficult to find; lacked a common narrative structure, which meant the underlying value motivating the guidance was not readily apparent; and were not always consistent in the guidance they offered or the language they used.

The article is here.

Wednesday, August 10, 2016

Fool Me Twice, Shame on You; Fool Me Three Times, I’m a Medical Board

by David Epstein
ProPublica
Originally published July 15, 2016

Here is an excerpt:

The Journal-Constitution analyzed public records from every single state. The low-bar-good-news is that “the vast majority of the nation’s 900,000 licensed physicians don’t sexually abuse patients.” Hurrah. The bad news is that the AJC couldn’t determine the extent of the problem due to reporting practices that give as much information as a teenager asked about his day at school. Except minus “fine.”

What else?

Some cases were truly egregious, particularly when “hospitals … fail to report sexual misconduct to regulators, despite laws in most states requiring them to do so.” For example: the AJC reported that one doctor was fired by three Tennessee hospitals (twice for sexual misconduct), but incurred no medical board actions.

The information is here.

Monday, July 13, 2015

How national security gave birth to bioethics

By Jonathan D. Moreno
The Conversation
Originally posted June 8, 2015

Here is an excerpt:

Ironically, while the experiments in Guatemala were going on in the late 1940s, three American judges were hearing the arguments in a war crimes trial in Germany. Twenty-three Nazi doctors and bureaucrats were accused of horrific experiments on people in concentration camps.

The judges decided they needed to make the rules around human experiments clear, so as part of their decision they wrote what has come to be known as the Nuremberg Code. The code states that “the voluntary consent of the human subject is absolutely essential.”

The Guatemala experiments clearly violated that code. President Obama’s commission found that the US public health officials knew what they were doing was unethical, so they kept it quiet. Years later, one of those doctors had a key role in the infamous syphilis experiments in Tuskegee, Alabama that studied the progression of untreated syphilis. None of the 600 men enrolled in the experiments was told if he had syphilis or not. No one with the disease was offered penicillin, the treatment of choice for syphilis. The 40-year experiment finally ended in 1972.

The entire article is here.

Monday, February 16, 2015

A Little Girl Died Because Canada Chose Cultural Sensitivity Over Western Medicine

By Jerry Coyne
The New Republic
Originally published

On Monday, Makayla Sault, an 11-year-old from Ontario and member of the Mississauga tribe of the New Credit First Nation, died from acute lymphoblastic leukemia after suffering a stroke the previous day. This would normally not be big news in Canada or the U.S.—except for the fact that Makayla's death was probably preventable and thus unnecessary.

Makayla died not only from leukemia, but from faith—the faith of her parents, who are pastors. They not only inculcated her with Christianity, but, on religious grounds, removed her from chemotherapy to put her in a dubious institute of “alternative medicine” in Florida.

The entire article is here.

Wednesday, January 21, 2015

Laws that Conflict with the Ethics of Medicine: What Should Doctors Do?

By Dena S. Davis and Eric Kodish
Hastings Center Report 44, no. 6 (2014): 11-14.
DOI: 10.1002/hast.382

Here is an excerpt:

Medical ethics has always asked doctors to put their patients first, even at some risk to themselves. “Medicine is, at its center, a moral enterprise grounded in a covenant of trust,” writes Christine Cassell. “This covenant obliges physicians to be competent and to use their competence in the patient's best interests. Physicians, therefore, are both intellectually and morally obliged to act as advocates for the sick wherever their welfare is threatened and for their health at all times.”[19] Physicians are expected to care for patients with infectious diseases, even at risk of their own health. Physicians are expected to do some pro bono work, to take on some patients who are not financial assets, and so on. Physicians should be advocates for the health of all people, above and beyond even their own patients. The AAP is “dedicated to the health of all children.”[20] The imperative to act on this ethical norm clearly suggests that physicians should challenge these types of laws. On rare occasions, individual doctors may be ethically justified in disobeying or breaking the law.

The entire article is here.