Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Clinical Decision-making. Show all posts

Wednesday, April 9, 2025

How AI can distort clinical decision-making to prioritize profits over patients

Katie Palmer
STATnews.com
Originally posted March 3, 2025

More than a decade ago, Ken Mandl was on a call with a pharmaceutical company and the leader of a social network for people with diabetes. The drug maker was hoping to use the platform to encourage its members to get a certain lab test.

The test could determine a patient's need for a helpful drug. But in that moment, said Mandl, director of the computational health informatics program at Boston Children's Hospital, "I could see this focus on a biomarker as a way to increase sales of the product." To describe the phenomenon, he coined the term "biomarkup": the way commercial interests can influence the creation, adoption, and interpretation of seemingly objective measures of medical status.

These days, Mandl has been thinking about how the next generation of quantified outputs in health could be gamed: artificial intelligence tools.

"It is easy to imagine a new generation of AI-based revenue cycle management model tools that achieve higher reimbursements by nudging clinicians toward more lucrative care pathways," Mandl wrote in a recent perspective in NEJM AI. "AI-based decision support interventions are vulnerable across their entire development life cycle and could be manipulated to favor specific products or services."


Here are some thoughts:

Dr. Ken Mandl raises a critical concern about the potential for "biomarkup" in the age of artificial intelligence within healthcare. This concept, initially describing how commercial interests can manipulate seemingly objective medical measures, now extends to AI tools. Mandl warns that AI-driven systems, designed for tasks like revenue cycle management or clinical decision support, could be subtly manipulated to prioritize financial gain over patient well-being. This manipulation might involve nudging clinicians towards more lucrative care pathways or tuning algorithms to generate more referrals, particularly in fee-for-service models. The issue is exacerbated in direct-to-consumer healthcare, where profit motives may be even stronger and regulatory oversight potentially weaker. The ease with which financial outcomes can be measured, compared to patient outcomes, further compounds the problem, creating a risk of AI implementation being driven primarily by return on investment. Mandl emphasizes the urgent need for transparency in AI decision frameworks, ethical development practices, and careful regulatory oversight to safeguard patient interests and ensure that AI serves its intended purpose of improving healthcare, not just increasing profits.

Thursday, March 27, 2025

How Moral Case Deliberation Supports Good Clinical Decision Making

Inguaggiato, G., et al. (2019).
AMA Journal of Ethics, 21(10),
E913-E919.

Abstract

In clinical decision making, facts are presented and discussed, preferably in the context of both evidence-based medicine and patients’ values. Because clinicians’ values also have a role in determining the best courses of action, we argue that reflecting on both patients’ and professionals’ values fosters good clinical decision making, particularly in situations of moral uncertainty. Moral case deliberation, a form of clinical ethics support, can help elucidate stakeholders’ values and how they influence interpretation of facts. This article demonstrates how this approach can help clarify values and contribute to good clinical decision making through a case example.

Here are some thoughts:

This article discusses how moral case deliberation (MCD) supports good clinical decision-making. It argues that while evidence-based medicine and patient values are crucial, clinicians' values also play a significant role, especially in morally uncertain situations. MCD, a form of clinical ethics support, helps clarify the values of all stakeholders and how these values influence the interpretation of facts. The article explains how MCD differs from shared decision-making, emphasizing its focus on ethical dilemmas and understanding moral uncertainty among caregivers rather than reaching a shared decision with the patient. Through dialogue and a structured approach, MCD facilitates a deeper understanding of the situation, leading to better-informed and morally sensitive clinical decisions. The article uses a case study from a neonatal intensive care unit to illustrate how MCD can help resolve disagreements and uncertainties by exploring the different values held by nurses and physicians.

Thursday, February 28, 2019

Should Watson Be Consulted for a Second Opinion?

David Luxton
AMA J Ethics. 2019;21(2):E131-137.
doi: 10.1001/amajethics.2019.131.

Abstract

This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develop treatment recommendations is also offered.

Here is an excerpt:

Understanding Watson’s Limitations

There are precautions that should be taken into consideration before consulting Watson. First, it’s important for physicians such as Dr O to understand the technical challenges of accessing quality data that the system needs to analyze in order to derive recommendations. Idiosyncrasies in patient health care record systems are one culprit, causing missing or incomplete data. If some of the data that is available to Watson is inaccurate, then it could result in diagnosis and treatment recommendations that are flawed or at least inconsistent. An advantage of using a system such as Watson, however, is that it might be able to identify inconsistencies (such as those caused by human input error) that a human might otherwise overlook. Indeed, a primary benefit of systems such as Watson is that they can discover patterns that not even human experts might be aware of, and they can do so in an automated way. This automation has the potential to reduce uncertainty and improve patient outcomes.

Friday, November 2, 2018

Companies Tout Psychiatric Pharmacogenomic Testing, But Is It Ready for a Store Near You?

Jennifer Abbasi
JAMA Network
Originally posted October 3, 2018

Here is an excerpt:

According to Dan Dowd, PharmD, vice president of medical affairs at Genomind, pharmacists in participating stores can inform customers about the Genecept Assay if they notice a history of psychotropic drug switching or drug-related adverse effects. If the test is administered, a physician’s order is required for the company’s laboratory to process it.

“This certainly is a recipe for selling a whole lot more tests,” Potash said of the approach, adding that patients often feel “desperate” to find a successful treatment. “What percentage of the time selling these tests will result in better patient outcomes remains to be seen.”

Biernacka also had reservations about the in-store model. “Generally, it could be helpful for a pharmacist to tell a patient or their provider that perhaps the patient could benefit from pharmacogenetic testing,” she said. “[B]ut until the tests are more thoroughly assessed, the decision to pursue such an option (and with which test) should be left more to the treating clinician and patient.”

Some physicians said they’ve found pharmacogenomic testing to be useful. Aron Fast, MD, a family physician in Hesston, Kansas, uses GeneSight for patients with depression or anxiety who haven’t improved after trying 2 or 3 antidepressants. Each time, he said, his patients were less depressed or anxious after switching to a new drug based on their genotyping results.

Part of their improvements may stem from expecting the test to help, he acknowledged. The testing “raises confidence in the medication to be prescribed,” Müller explained, which might contribute to a placebo effect. However, Müller emphasized that the placebo effect alone is unlikely to explain lasting improvements in patients with moderate to severe depression. In his psychiatric consulting practice, pharmacogenomic-guided drug changes have led to improvements in patients “sometimes even up to the point where they’re completely remitted,” he said.

The info is here.

Friday, October 19, 2018

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete timely/thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.

Monday, August 13, 2018

This AI Just Beat Human Doctors On A Clinical Exam

Parmy Olson
Forbes.com
Originally posted June 28, 2018

Here is an excerpt:

Now Parsa is bringing his software service and virtual doctor network to insurers in the U.S. His pitch is that the smarter and more “reassuring” his AI-powered chatbot gets, the more likely patients across the Atlantic are to resolve their issues with software alone.

It’s a model that could save providers millions, potentially, but Parsa has yet to secure a big-name American customer.

“The American market is much more tuned to the economics of healthcare,” he said from his office. “We’re talking to everyone: insurers, employers, health systems. They have massive gaps in delivery of the care.”

“We will set up physical and virtual clinics, and AI services in the United States,” he said, adding that Babylon would be operational with U.S. clinics in 2019, starting state by state. “For a fixed fee, we take total responsibility for the cost of primary care.”

Parsa isn’t shy about his transatlantic ambitions: “I think the U.S. will be our biggest market shortly,” he adds.

The info is here.

Tuesday, March 20, 2018

The Psychology of Clinical Decision Making: Implications for Medication Use

Jerry Avorn
February 22, 2018
N Engl J Med 2018; 378:689-691

Here is an excerpt:

The key problem is medicine’s ongoing assumption that clinicians and patients are, in general, rational decision makers. In reality, we are all influenced by seemingly irrational preferences in making choices about reward, risk, time, and trade-offs that are quite different from what would be predicted by bloodless, if precise, quantitative calculations. Although we physicians sometimes resist the syllogism, if all humans are prone to irrational decision making, and all clinicians are human, then these insights must have important implications for patient care and health policy. There have been some isolated innovative applications of that understanding in medicine, but despite a growing number of publications about the psychology of decision making, most medical care — at the bedside and the systems level — is still based on a “rational actor” understanding of how we make decisions.

The choices we make about prescription drugs provide one example of how much further medicine could go in taking advantage of a more nuanced understanding of decision making under conditions of uncertainty — a description that could define the profession itself. We persist in assuming that clinicians can obtain comprehensive information about the comparative worth (clinical as well as economic) of alternative drug choices for a given condition, assimilate and evaluate all the findings, and synthesize them to make the best drug choices for our patients. Leaving aside the access problem — the necessary comparative effectiveness research often doesn’t exist — actual drug-utilization data make it clear that real-world prescribing choices are in fact based heavily on various “irrational” biases, many of which have been described by behavioral economists and other decision theorists.

The article is here.

Tuesday, January 9, 2018

Dangers of neglecting non-financial conflicts of interest in health and medicine

Wiersma M, Kerridge I, Lipworth W.
Journal of Medical Ethics 
Published Online First: 24 November 2017.
doi: 10.1136/medethics-2017-104530

Abstract

Non-financial interests, and the conflicts of interest that may result from them, are frequently overlooked in biomedicine. This is partly due to the complex and varied nature of these interests, and the limited evidence available regarding their prevalence and impact on biomedical research and clinical practice. We suggest that there are no meaningful conceptual distinctions, and few practical differences, between financial and non-financial conflicts of interest, and accordingly, that both require careful consideration. Further, a better understanding of the complexities of non-financial conflicts of interest, and their entanglement with financial conflicts of interest, may assist in the development of a more sophisticated approach to all forms of conflicts of interest.

The article is here.

Friday, October 20, 2017

A virtue ethics approach to moral dilemmas in medicine

P Gardiner
J Med Ethics. 2003 Oct; 29(5): 297–302.

Abstract

Most moral dilemmas in medicine are analysed using the four principles with some consideration of consequentialism but these frameworks have limitations. It is not always clear how to judge which consequences are best. When principles conflict it is not always easy to decide which should dominate. They also do not take account of the importance of the emotional element of human experience. Virtue ethics is a framework that focuses on the character of the moral agent rather than the rightness of an action. In considering the relationships, emotional sensitivities, and motivations that are unique to human society it provides a fuller ethical analysis and encourages more flexible and creative solutions than principlism or consequentialism alone. Two different moral dilemmas are analysed using virtue ethics in order to illustrate how it can enhance our approach to ethics in medicine.

A pdf download of the article can be found here.

Note from John: This article is interesting for a myriad of reasons. For me, we ethics educators have come a long way in 14 years.

Friday, July 1, 2016

Predicting Suicide is not Reliable, according to recent study

Matthew Large, M. Kaneson, N. Myles, H. Myles, P. Gunaratne, C. Ryan
PLOS One
Published: June 10, 2016
http://dx.doi.org/10.1371/journal.pone.0156322

Discussion

The pooled estimate from a large and representative body of research conducted over 40 years suggests a statistically strong association between high-risk strata and completed suicide. However the meta-analysis of the sensitivity of suicide risk categorization found that about half of all suicides are likely to occur in lower-risk groups and the meta-analysis of PPV suggests that 95% of high-risk patients will not suicide. Importantly, the pooled odds ratio (and the estimates of the sensitivity and PPV) and any assessment of the overall strength of risk assessment should be interpreted very cautiously in the context of several limitations documented below.
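The finding above is an instance of the base-rate problem: when an outcome is rare, even a reasonably sensitive and specific risk categorization yields a low positive predictive value. The sketch below illustrates the arithmetic with Bayes' rule; the sensitivity, specificity, and prevalence figures are assumed for illustration only and are not taken from the Large et al. meta-analysis.

```python
# Illustrative only: why PPV collapses for rare outcomes such as suicide.
# All input figures below are assumed for the sketch, not drawn from the paper.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A hypothetical risk tool with 50% sensitivity and 90% specificity,
# applied to a population in which the outcome occurs in 0.5% of patients:
print(round(ppv(0.5, 0.9, 0.005), 3))  # → 0.025
```

With these assumed numbers, only about 2.5% of "high-risk" patients would actually experience the outcome, which is the same order of magnitude as the paper's conclusion that roughly 95% of high-risk patients will not die by suicide.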

With respect to our first hypothesis, the statistical estimates of between-study heterogeneity and the distribution of the outlying, quartile, and median effect size values suggest that the statistical strength of suicide risk assessment cannot be considered consistent between studies, potentially limiting the generalizability of the pooled estimate.

With respect to our second hypothesis we found no evidence that the statistical strength of suicide risk assessment has improved over time.

The research is here.

Friday, June 17, 2016

Vignette 34: A Dreadful Voicemail

Dr. Vanessa Ives works in a solo private practice. She has been working with Mr. Dorian Gray for several months for signs and symptoms of depression. Mr. Gray presents in some sessions as emotionally intense and high-strung. Dr. Ives has considered the possibility that Mr. Gray suffers from some type of cyclic mood disorder.

As part of treatment, Mr. Gray admitted to experiencing anger management problems, to the point where he described physically intimidating his wife and pushing her down. They worked on anger management skills. Mr. Gray reported progress in this area.

Dr. Ives receives a phone message from Mr. Gray’s wife. In the voicemail, Mrs. Gray reports that Mr. Gray has become more physically intimidating and has started pushing her around. The voicemail indicated he has not caused her any significant harm. She requested a session with Dr. Ives to explain what is happening between them. Dr. Ives has only met Mrs. Gray informally, while she sat in the waiting room before and after several sessions.

Dr. Ives wants to be helpful, but she is struggling with whether she should even return Mrs. Gray’s phone call.  Dr. Ives has a personal history of being involved in a physically abusive relationship herself and is concerned about both the clinical and ethical issues involved regarding calling Mrs. Gray back.

Feeling uncomfortable about what is happening with this patient and his wife, Dr. Ives calls you for a professional consultation.  She wants to make an appointment to talk with you candidly about her history as well as the dynamics of the current case.

What are the ethical issues involved in this case?

What are the pertinent clinical issues in this case?

How would you help Dr. Ives deal with her emotions related to this situation, given how her history relates to this patient and his wife?

Would you recommend Dr. Ives return the call or not?

What are some possible options should Dr. Ives return the phone call?

How much transparency would you suggest to Dr. Ives with Mr. Gray about the phone message?

Thursday, June 4, 2015

Understanding Bias — The Case for Careful Study

Lisa Rosenbaum
N Engl J Med 2015; 372:1959-1963
May 14, 2015

Here is an excerpt:

Whether our judgments are motivated by fatigue, hunger, institutional norms, the diagnosis of the last patient we saw, or a memory of a patient who died, we are all biased in countless subtle ways. Teasing out the relative effects of any of these other biases is nearly impossible. You can’t exactly randomly assign some physicians to being motivated by the pursuit of tenure, others by ideology, others by the possibility of future stock returns, and others by just wanting to be really good doctors. The difficulty of measuring these other motivations, however, creates the problem that plagues many quality-improvement efforts: we go after only what we can count. It is easy to count the dollars industry pays doctors, but this ease of measurement obscures two key questions: Does the money introduce a bias that undermines scientific integrity? And by focusing on these pecuniary biases, are we overlooking others that are equally powerful?

The entire article is here.

Saturday, February 21, 2015

Clinical supervision of psychotherapy: essential ethics issues for supervisors and supervisees

By Jeffrey E. Barnett and Corey H. Molzon
J Clin Psychol 2014 Nov 14;70(11):1051-61. Epub 2014 Sep 14.

Abstract

Clinical supervision is an essential aspect of every mental health professional's training. The importance of ensuring that supervision is provided competently, ethically, and legally is explained. The elements of the ethical practice of supervision are described and explained. Specific issues addressed include informed consent and the supervision contract, supervisor and supervisee competence, attention to issues of diversity and multicultural competence, boundaries and multiple relationships in the supervision relationship, documentation and record keeping by both supervisor and supervisee, evaluation and feedback, self-care and the ongoing promotion of wellness, emergency coverage, and the ending of the supervision relationship. Additionally, the roles of clinical supervisor as mentor, professional role model, and gatekeeper for the profession are discussed. Specific recommendations are provided for ethically and effectively conducting the supervision relationship and for addressing commonly arising dilemmas that supervisors and supervisees may confront.

The entire article is here.