Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 Feb 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, however tempting it may be. It highlights a situation in which someone frustrated by a long wait for surgery considers exaggerating their symptoms to the chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England Journal of Medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concern can lead to overly conservative decisions, including reluctance to try new things. Yet, older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop" since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI).

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Friday, February 16, 2024

Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms

B. Kuehn
JAMA
Published Online: January 17, 2024.
doi:10.1001/jama.2023.25530

Here is an excerpt:

The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between Black and White people to justify slavery, the CMSS report notes. For example, the third US president, Thomas Jefferson, claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. Louisiana physician Samuel Cartwright, MD, subsequently rationalized hard labor as a way for slaves to fortify their lungs. Over time, the report explains, the medical literature echoed some of those ideas, which have been used in ways that cause harm.

“It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” Burstin said.

Renewed recognition of these harmful legacies and growing evidence of the potential harm caused by structural racism, bias, and discrimination in medicine have led to reconsideration of the use of race in clinical algorithms. The reckoning with racial injustice sparked by the May 2020 murder of George Floyd helped accelerate this work. A few weeks after Floyd’s death, an editorial in the New England Journal of Medicine recommended reconsidering race in 13 clinical algorithms, echoing a growing chorus of medical students and physicians arguing for change.

Congress also got involved. As a Robert Wood Johnson Foundation Health Policy Fellow, Michelle Morse, MD, MPH, raised concerns about the use of race in clinical algorithms to US Rep Richard Neal (D, MA), then chairman of the House Ways and Means Committee. Neal in September 2020 sent letters to several medical societies asking them to assess racial bias and a year later he and his colleagues issued a report on the misuse of race in clinical decision-making tools.

“We need to have more humility in medicine about the ways in which our history as a discipline has actually held back health equity and racial justice,” Morse said in an interview. “The issue of racism and clinical algorithms is one really tangible example of that.”


My summary: There's increasing worry that using race in clinical algorithms can be harmful and perpetuate racial disparities in healthcare. This concern stems from a recognition of the historical harms of racism in medicine and growing evidence of bias in algorithms.

A review commissioned by the Agency for Healthcare Research and Quality (AHRQ) found that using race in algorithms can exacerbate health disparities and reinforce the false idea that race is a biological factor.

Several medical organizations and experts have called for reevaluating the use of race in clinical algorithms. Some argue that race should be removed altogether, while others advocate for using it only in specific cases where it can be clearly shown to improve outcomes without causing harm.

Wednesday, January 24, 2024

Salve Lucrum: The Existential Threat of Greed in US Health Care

Berwick DM.
JAMA. 2023;329(8):629–630.
doi:10.1001/jama.2023.0846

Here is an excerpt:

Particularly costly has been profiteering among insurance companies participating in the Medicare Advantage (MA) program. Originally intended to give Medicare beneficiaries the choice of access to well-managed care at lower cost, MA has mushroomed into a massive program, now about to cover more than 50% of all Medicare beneficiaries and costing far more per beneficiary than traditional Medicare ever has. By gaming Medicare risk codes and the ways in which comparative “benchmarks” are set for expected costs, MA plans have become by far the most profitable branches of large insurance companies. According to some health services research, MA will cost Medicare over $600 billion more in the next 8 years than would have been the case if the same enrollees had remained in traditional Medicare. Opinions differ about whether MA enrollees experience better care and outcomes than those in traditional Medicare, but the weight of evidence is that they do not.

Hospital pricing games are also widespread. Hospitals claim large operating losses, especially in the COVID pandemic period, but large systems sit on balance sheets with tens of billions of dollars in the bank or invested. Hospital prices for the top 37 infused cancer drugs averaged 86.2% higher per unit than in physician offices. A patient was billed $73 800 at the University of Chicago for 2 injections of Lupron depot, a treatment for prostate cancer, a drug available in the UK for $260 a dose. To drive up their own revenues, many hospitals serving wealthy populations take advantage of a federal subsidy program originally intended to reduce drug costs for people with low income.

Recent New York Times investigations have reported on nonprofit hospitals’ reducing and closing services in poor areas while opening new ones in wealthy suburbs and on their use of collection agencies for pursuing payment from patients with low income. The Massachusetts Health Policy Commission reported in 2022 that hospital prices and revenues increased during a decade at almost 4 times the rate of inflation.

Windfall profits also appear in salaries and benefits for many health care executives. Of the 10 highest paid among all corporate executives in the US in 2020, 3 were from Oak Street Health, and salary and benefits included, reportedly, $568 million for the chief executive officer (CEO). Executives in large hospital systems commonly have salaries and benefits of several million dollars a year. Some academic medical centers’ boards allow their CEO to serve for 6-figure stipends and multimillion-dollar stock options on outside company boards, including ones that supply products and services to the medical center.


My summary and warnings are here:

Greed is not good, especially in healthcare. This article outlines the concerning issue of greed pervading the US healthcare system. It argues that prioritizing profit over patient well-being has become widespread, impacting everything from drug companies to hospitals. The author contends that this greed is detrimental to both patients and the healthcare system as a whole. To address this, the article proposes solutions like fostering greater transparency and accountability, along with reevaluating how healthcare is financed.

Saturday, January 20, 2024

Private equity is buying up health care, but the real problem is why doctors are selling

Yashaswini Singh & Christopher Whaley
The Hill
Originally published 21 Dec 23

Here is an excerpt:

But amid warnings that private equity is taking over health care and portrayals of financiers as greedy villains, we’re ignoring the reality that no one is coercing individual physicians to sell. Many doctors are eager to hand off their practices, and not just for the payday. Running a private practice has become increasingly unsustainable, and alternative employment options, such as working for hospitals, are often unappealing. That leaves private equity as an attractive third path.

There are plenty of short-term steps that regulators should take to keep private equity firms in check. But the bigger problem we must address is why so many doctors feel the need to sell. The real solution to private equity in health care is to boost competition and address the pressures physicians are facing.

Consolidation in health care isn’t new. For decades, physician practices have been swallowed up by hospital systems. According to a study by the Physicians Advocacy Institute, nearly 75 percent of physicians now work for a hospital or corporate owner. While hospitals continue to drive consolidation, private equity is ramping up its spending and market share. One recent report found that private equity now owns more than 30 percent of practices in nearly one-third of metropolitan areas.

Years of study suggest that consolidation drives up health care costs without improving quality of care, and our research shows that private equity is no different. To deliver a high return to investors, private equity firms inflate charges and cut costs. One of our studies found that a few years after private equity invested in a practice, charges per patient were 50% higher than before. Practices also experience high turnover of physicians and increased hiring of non-physician staff.

How we got here has more to do with broader problems in health care than with private equity itself.


Here is my summary, which is really a warning:

The article dives into the concerning trend of private equity firms acquiring healthcare practices. It argues that while the trend itself is troubling, the bigger issue lies in understanding why doctors are willing to sell their practices in the first place.

The author highlights the immense financial burden doctors shoulder while running their own practices. Between rising costs and stagnant insurance reimbursements, it's becoming increasingly difficult for them to stay afloat. This, the article argues, is what's pushing them towards private equity firms, who offer immediate financial relief but often come with their own set of downsides for patients, like higher costs and reduced quality of care.

Therefore, instead of solely focusing on restricting private equity involvement, the article suggests we address the root cause: the financial woes of independent doctors. This could involve solutions like increased Medicare payments, tax breaks for independent practices, and alleviating the administrative burden doctors face. Only then can we ensure a sustainable healthcare system that prioritizes patient well-being.

Thursday, January 18, 2024

Biden administration rescinds much of Trump ‘conscience’ rule for health workers

Nathan Weixel
The Hill
Originally published 9 Jan 24

The Biden administration will largely undo a Trump-era rule that boosted the rights of medical workers to refuse to perform abortions or other services that conflicted with their religious or moral beliefs.

The final rule released Tuesday partially rescinds the Trump administration’s 2019 policy that would have stripped federal funding from health facilities that required workers to provide any service they objected to, such as abortions, contraception, gender-affirming care and sterilization.

The health care conscience protection statutes represent Congress’s attempt to strike a balance between maintaining access to health care and honoring religious beliefs and moral convictions, the Department of Health and Human Services said in the rule.

“Some doctors, nurses, and hospitals, for example, object for religious or moral reasons to providing or referring for abortions or assisted suicide, among other procedures. Respecting such objections honors liberty and human dignity,” the department said.

But at the same time, Health and Human Services said “patients also have rights and health needs, sometimes urgent ones. The Department will continue to respect the balance Congress struck, work to ensure individuals understand their conscience rights, and enforce the law.”


Summary from Healthcare Dive

The HHS Office of Civil Rights has again updated guidance on providers’ conscience rights. The latest iteration, announced on Tuesday, aims to strike a balance between honoring providers’ religious and moral beliefs and ensuring access to healthcare, according to the agency.

President George W. Bush created conscience rules in 2008, which codify the rights of healthcare workers to refuse to perform medical services that conflict with their religious or moral beliefs. Since then, subsequent administrations have rewritten the rules, with Democrats limiting the scope and Republicans expanding conscience protections. 

The most recent revision largely undoes a 2019 Trump-era policy — which never took effect — that sought to expand the rights of healthcare workers broadly to refuse to perform medical services, such as abortions, on religious or moral grounds.

Wednesday, January 17, 2024

Trump Is Coming for Obamacare Again

Ronald Brownstein
The Atlantic
Originally posted 10 Jan 24

Donald Trump’s renewed pledge on social media and in campaign rallies to repeal and replace the Affordable Care Act has put him on a collision course with a widening circle of Republican constituencies directly benefiting from the law.

In 2017, when Trump and congressional Republicans tried and failed to repeal the ACA, also known as Obamacare, they faced the core contradiction that many of the law’s principal beneficiaries were people and institutions that favored the GOP. That list included lower-middle-income workers without college degrees, older adults in the final years before retirement, and rural communities.


Here's the gist:
  • Trump's stance: He believes Obamacare is a "catastrophe" and wants to replace it with "MUCH BETTER HEALTHCARE."
  • Challenges: Repealing Obamacare is likely an uphill battle. Its popularity has increased, and even some Republicans benefit from the law.
  • Potential consequences: If Trump succeeds, millions of Americans could lose their health insurance, while others face higher premiums.
  • Political implications: Trump's renewed focus on Obamacare could energize his base but alienate moderate voters.

Tuesday, January 16, 2024

Criminal Justice Reform Is Health Care Reform

Haber LA, Boudin C, Williams BA.
JAMA.
Published online December 14, 2023.

Here is an excerpt:

Health Care While Incarcerated

Federal law mandates provision of health care for incarcerated persons. In 1976, the US Supreme Court ruled in Estelle v Gamble that “deliberate indifference to serious medical needs of prisoners constitutes the ‘unnecessary and wanton infliction of pain,’” prohibited under the Eighth Amendment. Subsequent cases established that incarcerated individuals must receive access to medical care, enactment of ordered care, and treatment without bias to their incarcerated status.

Such court decisions establish rights and responsibilities, but do not fund or oversee health care delivery. Community health care oversight, such as the Joint Commission, does not apply to prison health care. When access to quality care is inadequate, incarcerated patients must resort to lawsuits to advocate for change—a right curtailed by the Prison Litigation Reform Act of 1996, which limited prisoners’ ability to file suit in federal court.

Despite Eighth Amendment guarantees, simply entering the criminal-legal system carries profound personal health risks: violent living conditions result in traumatic injuries, housing in congregate settings predisposes to the spread of infectious diseases, and exceptions to physical comfort, health privacy, and informed decision-making occur during medical care delivery. These factors compound existing health disparities commonly found in the incarcerated population.

The First Step Act

Signed under then-president Trump, the First Step Act of 2018 (FSA) was a bipartisan criminal justice reform bill designed to reduce the federal prison population while also protecting public safety. The legislation aimed to decrease entry into prison, provide rehabilitation during incarceration, improve protections for medically vulnerable individuals, and expedite release.

To achieve these goals, the FSA included prospective and retroactive sentencing reforms, most notably expanded relief from mandatory minimum sentences for drug distribution offenses that disproportionately affect Black individuals in the US. The FSA additionally called for the use of evidence-based tools, such as the Prisoner Assessment Tool Targeting Estimated Risk and Needs, to facilitate release decisions.

The legislation also addressed medical scenarios commonly encountered by professionals providing care to incarcerated persons, including prohibitions on shackling pregnant patients, deescalation training for correctional officers when encountering people with psychiatric illness or cognitive deficits, easing access to compassionate release for those with advanced age or life-limiting illness, and mandatory reporting on the use of medication-assisted treatment for opioid use disorder. With opioid overdose being the leading cause of postrelease mortality, the latter requirement has been particularly important for those transitioning out of correctional settings.

During the recent COVID-19 pandemic, FSA amendments expanding incarcerated individuals’ access to the courts led to a marked increase in successful petitions for early release from prison. Decarcerating those individuals most medically at risk during the public health crisis reduced the spread of viral illness associated with prison overcrowding, protecting both incarcerated individuals and those working in carceral settings.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 Nov 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Monday, December 11, 2023

Many Americans receive too much health care. That may finally be changing

Elsa Pearson Sites
StatNews.com
Originally published 8 Nov 23

The opioid crisis rocked America, bringing addiction and overdose into the spotlight. But it also highlighted the overtreatment of pain: Medical and dental providers alike overprescribed opioids after procedures and for chronic conditions. Out of that overtreatment came an epidemic.

In American health care, overtreatment is common. Recently though, there has been a subtle shift in the opposite direction. It’s possible that “less is more” is catching on.

For many Americans, it can be challenging to even access care: Treatment is expensive, insurance is confusing, and there aren’t enough providers. But ironically, we often use too much care, too.

Now, some providers are asking what the line between necessary and unnecessary really is. The results are encouraging, suggesting that, in some cases, it may be possible to achieve the same health outcomes with less treatment — and fewer side effects, too.

This shift is particularly noticeable in cancer care.


Here is my take:

The article delves into the pervasive issue of overtreatment and overdiagnosis in the healthcare system. It highlights the unintended consequences of modern medical practices, where patients are often subjected to unnecessary tests, procedures, and treatments that may not necessarily improve their health outcomes. The article emphasizes how overtreatment can lead to adverse effects, both physically and financially, for patients, while overdiagnosis can result in the unnecessary burden of managing conditions that may never cause harm. The piece discusses the challenges in striking a balance between providing thorough medical care and avoiding unnecessary interventions, urging a shift toward a more patient-centered and evidence-based approach to reduce harm and improve the overall quality of healthcare.

The author suggests that addressing the issue of overtreatment and overdiagnosis requires a comprehensive reevaluation of medical practices, incorporating shared decision-making between healthcare providers and patients. The article underscores the importance of fostering a healthcare culture that prioritizes the avoidance of unnecessary interventions and aligns treatments with patients' preferences and values. By acknowledging and addressing the challenges associated with overmedicalization, the article advocates for a more thoughtful and personalized approach to healthcare delivery that considers the potential harm of unnecessary treatments and strives to enhance the overall well-being of patients.

Sunday, December 3, 2023

ChatGPT one year on: who is using it, how and why?

Ghassemi, M., Birhane, A., et al.
Nature 624, 39-41 (2023)
doi: https://doi.org/10.1038/d41586-023-03798-6

Here is an excerpt:

More pressingly, text and image generation are prone to societal biases that cannot be easily fixed. In health care, this was illustrated by Tessa, a rule-based chatbot designed to help people with eating disorders, run by a US non-profit organization. After it was augmented with generative AI, the now-suspended bot gave detrimental advice. In some US hospitals, generative models are being used to manage and generate portions of electronic medical records. However, the large language models (LLMs) that underpin these systems are not giving medical advice and so do not require clearance by the US Food and Drug Administration. This means that it’s effectively up to the hospitals to ensure that LLM use is fair and accurate. This is a huge concern.

The use of generative AI tools, in general and in health settings, needs more research with an eye towards social responsibility rather than efficiency or profit. The tools are flexible and powerful enough to make billing and messaging faster — but a naive deployment will entrench existing equity issues in these areas. Chatbots have been found, for example, to recommend different treatments depending on a patient’s gender, race and ethnicity and socioeconomic status (see J. Kim et al. JAMA Netw. Open 6, e2338050; 2023).

Ultimately, it is important to recognize that generative models echo and extend the data they have been trained on. Making generative AI work to improve health equity, for instance by using empathy training or suggesting edits that decrease biases, is especially important given how susceptible humans are to convincing, and human-like, generated texts. Rather than taking the health-care system we have now and simply speeding it up — with the risk of exacerbating inequalities and throwing in hallucinations — AI needs to target improvement and transformation.


Here is my summary:

The article on ChatGPT's one-year anniversary presents a comprehensive analysis of its usage, exploring the diverse user base, applications, and underlying motivations driving its adoption. It reveals that ChatGPT has found traction across a wide spectrum of users, including writers, developers, students, professionals, and hobbyists. This broad appeal can be attributed to its adaptability in assisting with a myriad of tasks, from generating creative content to aiding in coding challenges and providing language translation support.

The analysis further dissects how users interact with ChatGPT, showcasing distinct patterns of utilization. Some users leverage it for brainstorming ideas, drafting content, or generating creative writing, while others turn to it for programming assistance, using it as a virtual coding companion. Additionally, the article explores the strategies users employ to enhance the model's output, such as providing more context or breaking down queries into smaller parts. Issues remain, however, with biases, inaccurate information, and inappropriate uses.

Friday, November 24, 2023

UnitedHealth faces class action lawsuit over algorithmic care denials in Medicare Advantage plans

Casey Ross and Bob Herman
StatNews.com
Originally posted 14 Nov 23

A class action lawsuit was filed Tuesday against UnitedHealth Group and a subsidiary alleging that they are illegally using an algorithm to deny rehabilitation care to seriously ill patients, even though the companies know the algorithm has a high error rate.

The class action suit, filed on behalf of deceased patients who had a UnitedHealthcare Medicare Advantage plan and their families by the California-based Clarkson Law Firm, follows the publication of a STAT investigation Tuesday. The investigation, cited by the lawsuit, found UnitedHealth pressured medical employees to follow an algorithm, which predicts a patient’s length of stay, to issue payment denials to people with Medicare Advantage plans. Internal documents revealed that managers within the company set a goal for clinical employees to keep patients’ rehab stays within 1% of the days projected by the algorithm.

The lawsuit, filed in the U.S. District Court of Minnesota, accuses UnitedHealth and its subsidiary, NaviHealth, of using the computer algorithm to “systematically deny claims” of Medicare beneficiaries struggling to recover from debilitating illnesses in nursing homes. The suit also cites STAT’s previous reporting on the issue.

“The fraudulent scheme affords defendants a clear financial windfall in the form of policy premiums without having to pay for promised care,” the complaint alleges. “The elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary care, all because an [artificial intelligence] model ‘disagrees’ with their real live doctors’ recommendations.”


Here are some of my concerns:

The use of algorithms in healthcare decision-making has raised several ethical concerns. Some critics argue that algorithms can be biased and discriminatory, and that they can lead to decisions that are not in the best interests of patients. Others argue that algorithms can lack transparency, making it difficult for patients to understand how decisions are being made.

The lawsuit against UnitedHealth raises specific concerns of its own. First, the plaintiffs allege that UnitedHealth's algorithm is based on inaccurate and incomplete data. This raises the concern that the algorithm may be making decisions that are not grounded in sound medical evidence. Second, the plaintiffs allege that UnitedHealth has failed to adequately train its employees on how to use the algorithm. This raises the concern that employees may be making decisions that are not in the best interests of patients, either because they do not understand how the algorithm works or because they are pressured to deny claims.

The lawsuit also raises the general question of whether algorithms should be used to make healthcare decisions. Some argue that algorithms can be used to make more efficient and objective decisions than humans can. Others argue that algorithms are not capable of making complex medical decisions that require an understanding of the individual patient's circumstances.

The use of algorithms in healthcare is a complex issue with no easy answers. It is important to carefully consider the potential benefits and risks of using algorithms before implementing them in healthcare settings.
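To make the mechanism described in the reporting concrete, here is a minimal, purely hypothetical sketch of how a rigid length-of-stay target can drive coverage cutoffs. This is not UnitedHealth's or NaviHealth's actual code; the function name, signature, and the 1% tolerance figure are assumptions drawn only from the "within 1% of the days projected" goal STAT reported.

```python
# Hypothetical illustration only -- not the actual NaviHealth algorithm.
# It shows why a hard threshold tied to a model's prediction overrides
# clinical judgment: any patient who recovers more slowly than predicted
# is flagged, regardless of what treating clinicians recommend.

def flag_for_denial(actual_days: int, predicted_days: float,
                    tolerance: float = 0.01) -> bool:
    """Flag a rehab stay once it exceeds the model's predicted length
    of stay by more than the allowed tolerance (the reported 1% goal)."""
    return actual_days > predicted_days * (1 + tolerance)

# A patient whose recovery takes 20 days against a 14-day prediction
# is flagged; one who matches the prediction is not.
print(flag_for_denial(actual_days=20, predicted_days=14))  # True
print(flag_for_denial(actual_days=14, predicted_days=14))  # False
```

The sketch also makes the error-rate concern visible: if the prediction itself is wrong for a given patient, the flag fires anyway, because nothing in the rule consults the patient's actual clinical circumstances.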

Saturday, September 30, 2023

Toward a Social Bioethics Through Interpretivism: A Framework for Healthcare Ethics.

Dougherty, R., & Fins, J. (2023).
Cambridge Quarterly of Healthcare Ethics, 1-11.

Abstract

Recent global events demonstrate that analytical frameworks to aid professionals in healthcare ethics must consider the pervasive role of social structures in the emergence of bioethical issues. To address this, the authors propose a new sociologically informed approach to healthcare ethics that they term “social bioethics.” Their approach is animated by the interpretive social sciences to highlight how social structures operate vis-à-vis the everyday practices and moral reasoning of individuals, a phenomenon known as social discourse. As an exemplar, the authors use social bioethics to reframe common ethical issues in psychiatric services and discuss potential implications. Lastly, the authors discuss how social bioethics illuminates the ways healthcare ethics consultants in both policy and clinical decision-making participate in and shape broader social, political, and economic systems, which then cyclically informs the design and delivery of healthcare.

My summary: 

The authors argue that traditional bioethical frameworks, which focus on individual rights and responsibilities, are not sufficient to address the complex ethical issues that arise in healthcare. They argue that social bioethics can help us to better understand how social structures, such as race, class, gender, and sexual orientation, shape the experiences of patients and healthcare providers, and how these experiences can influence ethical decision-making.

The authors use the example of psychiatric services to illustrate how social bioethics can be used to reframe common ethical issues. They argue that the way we think about mental illness is shaped by social and cultural factors, such as our understanding of what it means to be "normal" and "healthy." These factors can influence how we diagnose, treat, and care for people with mental illness.

The authors also argue that social bioethics can help us to understand the role of healthcare ethics consultants in shaping broader social, political, and economic systems. They argue that these consultants participate in a process of "social discourse," in which they help to define the terms of the debate about ethical issues in healthcare. This discourse can then have a cyclical effect on the design and delivery of healthcare.

Here are some of the key concepts of social bioethics:
  • Social structures: The systems of power and inequality that shape our society.
  • Social discourse: The process of communication and negotiation through which we define and understand social issues.
  • Healthcare ethics consultants: Professionals who help to resolve ethical dilemmas in healthcare.
  • Social justice: The fair and equitable distribution of resources and opportunities.

Friday, July 7, 2023

The Dobbs Decision — Exacerbating U.S. Health Inequity

Harvey, S. M., et al. (2023).
New England Journal of Medicine, 
388(16), 1444–1447. 

Here is an excerpt:

In 2019, half of U.S. women living below the FPL (federal poverty level) were insured by Medicaid. Medicaid coverage rates were higher in certain groups, including women who described their health as fair or poor, women from marginalized racial or ethnic groups, and single mothers. Approximately two thirds of adult women enrolled in Medicaid are in their reproductive years and are potentially at risk for an unintended pregnancy. For many low-income people, however, federal and state funding restrictions created substantial financial and other barriers to accessing abortion services even before Dobbs. Notably, the Hyde Amendment greatly disadvantaged low-income people by blocking use of federal Medicaid funds for abortion services except in cases of rape or incest or to save the pregnant person’s life. In 32 states, Medicaid programs adhere to the strict guidelines of the Hyde Amendment, making it difficult for low-income people to access abortion services in these states.

Before the fall of Roe, Medicaid coverage could determine whether women in some states did or did not receive abortion services. Since the implementation of the post-Dobbs abortion bans, abortion care is even more restricted in entire regions of the country. Access to abortion services under Medicaid will continue to vary by place of residence and depend on the confluence of restrictions or bans on abortion care and Medicaid policies currently in effect within each state. In the new landscape (see map), obtaining abortion services has become even more challenging for low-income women in most of the country, despite the fact that most states have expanded Medicaid coverage.

After Dobbs, complete or partial bans on abortion went into effect in more than a dozen states, forcing people in those states to travel to other states to access abortion care. More than a third of women of reproductive age now live more than an hour from an abortion facility and will probably face additional barriers, including costs for travel and child care and the need to take time off from work. Regrettably, people who already had poorer-than-average access pre-Dobbs face even greater health burdens and risks. For example, members of marginalized racial and ethnic groups that face disproportionate burdens of pregnancy-related mortality are more likely than other groups to have to travel longer distances to get an abortion post-Dobbs.

As a result of the overturning of Roe, a substantial proportion of people who want abortion services will not have access to them and will end up carrying their pregnancies to term. For decades, research has demonstrated that abortion bans most severely affect low-income women and marginalized racial and ethnic groups that already struggle with barriers to accessing health care, including abortion. The economic, educational, and physical and mental health consequences of being denied a wanted abortion have been thoroughly documented in the landmark Turnaway Study. Thanks to nearly 50 years of legal abortion practice, we now have a robust body of research on the safety and efficacy of abortion and the impact of abortion restrictions on people’s socioeconomic circumstances, health, and well-being.

Innovative strategies, such as telemedicine for medication abortion services, can improve access to abortion care. Self-managed, at-home medication abortions are safe, effective, and acceptable to many patients. In states where abortions are legal that are bordered by states where abortions are banned, telemedicine could mean the difference between patients being able to simply drive across the state line in order to be physically in the state providing care, and having to drive to a clinic that could be hundreds of miles away. In addition, Planned Parenthood affiliates have plans to launch mobile services and to open clinics along state borders where abortion is illegal in one state but legal in the other.

Wednesday, May 17, 2023

In Search of an Ethical Constraint on Hospital Revenue

Lauren Taylor
The Hastings Center
Originally published 14 APR 23

Here are two excerpts:

A physician whistleblower came forward alleging that Detroit Medical Center, owned by for-profit Tenet Healthcare, refused to halt elective procedures in early days of the pandemic, even after dozens of patients and staff were exposed to a COVID-positive patient undergoing an organ transplant. According to the physician, Tenet persisted on account of the margin it stood to generate. “Continuing to do this [was] truly a crime against patients,” recalled Dr. Shakir Hussein, who was fired shortly thereafter.

Earlier in 2022, nonprofit Bon Secours health system was investigated for its strategic downsizing of a community hospital in Richmond, Va., which left a predominantly Black community lacking access to standard medical services such as MRIs and maternity care. Still, the hospital managed to turn a $100 million margin, which buoyed the system’s $1 billion net revenue in 2021. “Bon Secours was basically laundering money through this poor hospital to its wealthy outposts,” said one emergency department physician who had worked at Richmond Community Hospital. “It was all about profits.”  

The academic literature further substantiates concerns about hospital margin maximization. One paper examining the use of municipal, tax-exempt debt among nonprofit hospitals found evidence of arbitrage behavior, where hospitals issued debt not to invest in new capital (the stated purpose of most municipal debt issuances) but to invest the proceeds of the issuance in securities and other endowment accounts. A more recent paper, focused on private equity-owned hospitals, found that facilities acquired by private equity were more likely to “add specific, profitable hospital-based services and less likely to add or continue those with unreliable revenue streams.” These and other findings led Donald Berwick to write that greed poses an existential threat to U.S. health care.

None of the hospital actions described above are necessarily illegal but they certainly bring long-lurking issues within bioethics to the fore. Recognizing that hospitals are resource-dependent organizations, what normative, ethical responsibilities–or constraints–do they face with regard to revenue-generation? A review of the health services and bioethics literature to date turns up three general answers to this question, all of which are unsatisfactory.

(cut)

In sum, we cannot rely on laws alone to provide an effective check on hospital revenue generation due to the law’s inevitably limited scope. We therefore must identify an internalized ethic to guide hospital revenue generation. The concept of an organizational mission is a weak check on nonprofit hospitals and virtually meaningless among for-profit hospitals, and reliance on professionalism is incongruous with the empirical data about who has final decision-making authority over hospitals today. We need a new way to conceptualize hospital responsibilities.

Two critiques of this idea merit confrontation. The first is that there is no urgent need for an internalized constraint on revenue generation because more than half of hospitals are currently operating in the red; seeking to curb their revenue further is counterproductive. But the fact that a proportion of this sector is in the red does not undercut the egregiousness of the hospital actions described earlier. Moreover, if hospitals are running a deficit in part because they choose not to undertake unethical action to generate revenue, then any rule saying they can’t undertake unethical actions to generate revenue won’t apply to them. The second critique is that the current revenues that hospitals generate are legitimate because they bolster institutional “rainy day funds” of sorts, which can be deployed to help people and communities in need at a future date. But with a declining national life expectancy, a Black maternal mortality rate hovering at roughly that of Tajikistan, and medical debt the leading cause of personal bankruptcy in the U.S. – it is already raining. Increasing reserves, by any means, can no longer be defended with this logic.

Tuesday, May 16, 2023

Approaches to Muslim Biomedical Ethics: A Classification and Critique

Dabbagh, H., Mirdamadi, S.Y. & Ajani, R.R.
Bioethical Inquiry (2023).

Abstract

This paper provides a perspective on where contemporary Muslim responses to biomedical-ethical issues stand to date. There are several ways in which Muslim responses to biomedical ethics can and have been studied in academia. The responses are commonly divided along denominational lines or under the schools of jurisprudence. All such efforts classify the responses along the lines of communities of interpretation rather than the methods of interpretation. This research is interested in the latter. Thus, our criterion for classification is the underlying methodology behind the responses. The proposed classification divides Muslim biomedical-ethical reasoning into three methodological categories: 1) textual, 2) contextual, and 3) para-textual.

Conclusion

There is widespread recognition among Muslim scholars dealing with biomedical ethical issues that context plays an essential role in forming ethical principles and judgements. The context-sensitive approaches in Muslim biomedical ethics respond to the requirements of modern biomedical issues by recognizing the contexts in which scriptural text has been formed and developed through the course of Muslim intellectual history. This paves the way for bringing in different context-sensitive interpretations of the sacred texts through different reasoning tools and methods, whether they are rooted in the uṣūl al-fiqh tradition for the contextualists, or in moral philosophy for the para-textualists. For the textualists, reasoning outside of the textual boundaries is not acceptable. While contextualists tend to believe that contextual considerations make sense only in light of Sharīʿa law and should not be understood independently of Sharīʿa law, para-textualists believe that moral perceptions and contextual considerations are valid irrespective of Sharīʿa law, insofar as they do not neglect the moral vision of the scriptures. The common ground between the majority of the textualists and the contextualists lies in giving primacy to the Sharīʿa law. Moral requirements for both the textualists and the contextualists are only determined by Sharīʿa commandments, and Sharīʿa commandments are the only basis on which to decide what is morally permissible or impermissible in biomedical ethical issues. This is an Ashʿarī-inspired approach to biomedical ethics with respect to human moral reasoning (Sachedina 2005; Aramesh 2020; Reinhart 2004; Moosa 2004; Moosapour et al. 2018).

Para-textualists, on the other hand, do not deny the relevance of Sharīʿa, but treat the reasoning embedded in Sharīʿa as being on a par with moral reasoning in general. Thus, if there are contending strands of moral reasoning on a particular biomedical ethical issue, Sharīʿa-based reasoning will need to compete with other moral reasoning on the issue. If the aḥkām (religious judgements) are deemed to be reasonably sound, then for para-textualists there are no grounds for not accepting them. Although using and referring to Sharīʿa might work in many cases, it is not the case that Sharīʿa is enough in every case to judge on moral issues. For instance, morally speaking, it is not enough to refer to Sharīʿa when someone is choosing or refusing euthanasia or abortion. For para-textualists what matters most is how Sharīʿa morally reasons about the permissibility or impermissibility of an action. If it is morally justified to euthanize or abort, we are rationally (and morally) bound to accept it, and if it is not morally justified, we will then either have to leave our judgement about choosing or refusing euthanasia or abortion or find another context-sensitive interpretation to rationalize the relevant commandment derived from Sharīʿa. Thus, the departure point for the para-textualist approach is moral reasoning, whether it is found in moral philosophy, Muslim jurisprudence, or elsewhere (Soroush 2009; Shahrur 1990, 2009; Hallaq 1997; An-Na’im 2008). Para-textualist methodology tries to remain open to the possibility of morally criticizing religious judgements (aḥkām), while remaining true to the moral vision of the scriptures. This is a Muʿtazilī-inspired approach to biomedical ethics (Hourani 1976; Vasalou 2008; Sheikh 2019; Farahat 2019; Reinhart 1995; Al-Bar and Chamsi-Pasha 2015; Hallaq 2014).

Saturday, May 13, 2023

Doctors are drowning in paperwork. Some companies claim AI can help

Geoff Brumfiel
NPR.org - Health Shots
Originally posted 5 APR 23

Here are two excerpts:

But Paul kept getting pinged from younger doctors and medical students. They were using ChatGPT, and saying it was pretty good at answering clinical questions. Then the users of his software started asking about it.

In general, doctors should not be using ChatGPT by itself to practice medicine, warns Marc Succi, a doctor at Massachusetts General Hospital who has conducted evaluations of how the chatbot performs at diagnosing patients. When presented with hypothetical cases, he says, ChatGPT could produce a correct diagnosis accurately at close to the level of a third- or fourth-year medical student. Still, he adds, the program can also hallucinate findings and fabricate sources.

"I would express considerable caution using this in a clinical scenario for any reason, at the current stage," he says.

But Paul believed the underlying technology can be turned into a powerful engine for medicine. Paul and his colleagues have created a program called "Glass AI" based off of ChatGPT. A doctor tells the Glass AI chatbot about a patient, and it can suggest a list of possible diagnoses and a treatment plan. Rather than working from the raw ChatGPT information base, the Glass AI system uses a virtual medical textbook written by humans as its main source of facts – something Paul says makes the system safer and more reliable.

(cut)

Nabla, which he co-founded, is now testing a system that can, in real time, listen to a conversation between a doctor and a patient and provide a summary of what the two said to one another. Doctors inform their patients that the system is being used in advance, and as a privacy measure, it doesn't actually record the conversation.

"It shows a report, and then the doctor will validate with one click, and 99% of the time it's right and it works," he says.

The summary can be uploaded to a hospital records system, saving the doctor valuable time.

Other companies are pursuing a similar approach. In late March, Nuance Communications, a subsidiary of Microsoft, announced that it would be rolling out its own AI service designed to streamline note-taking using the latest version of ChatGPT, GPT-4. The company says it will showcase its software later this month.

Wednesday, May 10, 2023

Foundation Models are exciting, but they should not disrupt the foundations of caring

Morley, Jessica and Floridi, Luciano
(April 20, 2023).

Abstract

The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will ever be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in underlying ‘training’ data and prone to ‘hallucinating’ false information rather than facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be mindful of hyperbolic claims regarding the transformative power of LLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners and risks a ‘costly’ new AI winter. A balanced discussion recognising the potential benefits and limitations can help avoid this outcome.

Conclusion

The technical feats achieved by foundation models in the last five years, and especially in the last six months, are undeniably impressive. Also undeniable is the fact that most healthcare systems across the world are under considerable strain. It is right, therefore, to recognise and invest in the potentially transformative power of models such as Med-PaLM and ChatGPT – healthcare systems will almost certainly benefit. However, overstating their current potential does a disservice to the complexity of healthcare and the skills required of healthcare practitioners. Not only does this ‘hype’ risk direct patient and societal harm, but it also risks re-creating the conditions of previous AI winters when investors and enthusiasts became discouraged by technological developments that over-promised and under-delivered. This could be the most harmful outcome of all, resulting in significant opportunity costs and missed chances to transform healthcare and benefit patients in smaller, but more positively impactful, ways. A balanced approach recognising the potential benefits and limitations can help avoid this outcome.

Saturday, March 25, 2023

A Christian Health Nonprofit Saddled Thousands With Debt as It Built a Family Empire Including a Pot Farm, a Bank and an Airline

Ryan Gabrielson & J. David McSwane
ProPublica.org
Originally published 25 FEB 23

Here is an excerpt:

Four years after its launch in 2014, the ministry enrolled members in almost every state and collected $300 million in annual revenue. Liberty used the money to pay at least $140 million to businesses owned and operated by Beers family members and friends over a seven-year period, the investigation found. The family then funneled the money through a network of shell companies to buy a private airline in Ohio, more than $20 million in real estate holdings and scores of other businesses, including a winery in Oregon that they turned into a marijuana farm. The family calls this collection of enterprises “the conglomerate.”

Beers has disguised his involvement in Liberty. He has never been listed as a Liberty executive or board member, and none of the family’s 50-plus companies or assets are in his name, records show.

From the family’s 700-acre ranch north of Canton, however, Beers acts as the shadow lord of a financial empire. It was built from money that people paid to Liberty, Beers’ top lieutenant confirmed to ProPublica. He plays in high-stakes poker tournaments around the country, travels to the Caribbean and leads big-game hunts at a vast hunting property in Canada, which the family partly owns. He is a man, said one former Liberty executive, with all the “trappings of large money coming his way.”

Despite abundant evidence of fraud, much of it detailed in court records and law enforcement files obtained by ProPublica, members of the Beers family have flourished in the health care industry and have never been prevented from running a nonprofit. Instead, the family’s long and lucrative history illustrates how health care sharing ministries thrive in a regulatory no man’s land where state insurance commissioners are barred from investigating, federal agencies turn a blind eye and law enforcement settles for paltry civil settlements.

The Ohio attorney general has twice investigated Beers for activities that financial crimes investigators said were probable felonies. Instead, the office settled for civil fines, most recently in 2021. It also required Liberty to sever its ties to some Beers family members.

The IRS has pursued individual family members for underreporting their income and failing to pay million-dollar tax bills. But there’s no indication that the IRS has investigated how several members of one family amassed such substantial wealth in just seven years by running a Christian nonprofit.

The agencies’ failure to move decisively against the Beers family has left Liberty members struggling with millions of dollars in medical debt. Many have joined a class-action lawsuit accusing the nonprofit of fraud.

After years of complaints, health care sharing ministries are now attracting more scrutiny. Sharity Ministries, once among the largest organizations in the industry, filed for bankruptcy and then dissolved in 2021 as regulators in multiple states investigated its failure to pay members’ bills. In January, the Justice Department seized the assets of a small Missouri-based ministry, Medical Cost Sharing Inc., and those of its founders, accusing them of fraud and self-enrichment. The founders have denied the government’s allegations.

Sunday, March 5, 2023

Four Recommendations for Ethical AI in Healthcare

Lindsey Jarrett
Center for Practical Bioethics

For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.

If your experiences in life are not dependent on the use of technology (especially if your work still allows you to disconnect from the virtual world), it may feel like technology is working at a decent pace. However, many of us require some sort of technology to work, to communicate with others, to develop relationships, and to disseminate ideas into the world. Further, we also increasingly need technology to help us make decisions. These decisions vary in complexity from auto-correcting our messages to connecting to someone on a dating app, and without access to a piece of technology, it is increasingly challenging to rely on anything but technology.

Is the use of technology for decision making a problem in and of itself due to its entrenched use across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are widely discussed in the areas of criminal justice, finance, security, and hiring practices, and conversations are developing in other sectors as issues of inequity, injustice and power differentials begin to emerge.

Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people start to unpack the various dilemmas that exist within the intersection of technology and healthcare. Scholars have engaged in theoretical rhetoric to examine ethical implications, researchers have worked to evaluate the decision-making processes of data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of regulation that is looming over their hospital systems.

However, recommendations tend to focus exclusively on those involved with algorithm creation and offer little support to other stakeholders across the healthcare industry. While this guidance turns into practice across data science teams building algorithms, especially those building machine learning based tools, the Ethical AI Initiative sees opportunities to examine decisions that are made regarding these tools before they get to a data scientist’s queue and after they are ready for production. These opportunities are where systemic change can occur, and without that level of change, we will continue to build products to put on the shelf and more products to fill the shelf when those fail.

Healthcare is not unique in facing these types of challenges, and I will outline a few recommendations on how an adapted, augmented system of healthcare technology can operate, as the industry prepares for more forceful regulation of the use of machine learning-based tools in healthcare practice.