Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher & A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations — which are now embodied in communication and resolution programs (CRPs) — for how health care professionals and organizations should respond not just to errors but any time patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response. Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety.1 The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks, including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB).5 Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.
As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Tuesday, February 13, 2024

Majority of debtors to US hospitals now people with health insurance

Jessica Glenza
The Guardian
Originally posted 11 Jan 24

People with health insurance may now represent the majority of debtors American hospitals struggle to collect from, according to medical billing analysts.

This marks a sea change from just a few years ago, when people with health insurance represented only about one in 10 bills hospitals considered “bad debt”, analysts said.

“We always used to consider bad debt, especially bad debt write-offs from a hospital perspective, those [patients] that have the ability to pay but don’t,” said Colleen Hall, senior vice-president for Kodiak Solutions, a billing, accounting and consulting firm that works closely with hospitals and performed the analysis.

“Now, it’s not as if these patients across the board are even able to pay, because [out-of-pocket costs are] such an astronomical amount related to what their general income might be.”

Although “bad debt” can be a controversial metric in its own right, those who work in the hospital billing industry say it shows how complex health insurance products with large out-of-pocket costs have proliferated.

“What we noticed was a breaking point right around the 2018-2019 timeframe,” said Matt Szaflarski, director of revenue cycle intelligence at Kodiak Solutions. The trend has since stabilized, but remains at more than half of all “bad debt”.

In 2018, just 11.1% of hospitals’ bad debt came from insured “self-pay” accounts, or from patients whose insurance required out-of-pocket payments, according to Kodiak. By 2022, the share of hospitals’ bad debt owed by insured patients who did not, or could not, pay had soared to 57.6%.


The US healthcare system needs to be fixed:

Not all health insurance plans are created equal. Many plans have narrow networks and limited coverage, leaving patients responsible for costs associated with out-of-network providers or specialized care. This can be particularly detrimental for people with chronic conditions or those requiring emergency care.

Medical debt can have a devastating impact on individuals and families. It can lead to financial hardship, delayed or foregone care, damage to credit scores, and even bankruptcy. This can have long-term consequences for physical and mental health, employment opportunities, and overall well-being.

Fixing the US healthcare system is a complex challenge, but it is essential to ensure that everyone has access to affordable, quality healthcare without fear of financial ruin. 

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?

The info is here, and a book promotion too.

Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.

Summary

Background

Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods

Using the Azure OpenAI application interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.

Findings

We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation

Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.
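
A note on method: the core of the study's prevalence analysis (comparing the demographics GPT-4 generates for a condition against real US prevalence) can be sketched in a few lines of Python. This is my illustration of the general approach, not the authors' code; the simulated query_model helper, the condition, and the prevalence figures are all placeholders.

```python
import random
from collections import Counter
from scipy.stats import chisquare

def query_model(prompt: str) -> str:
    """Stand-in for a real GPT-4 call (e.g., via the Azure OpenAI API).
    Here it simulates a model that exaggerates a demographic skew."""
    return random.choices(["female", "male"], weights=[0.99, 0.01])[0]

# Placeholder reference distribution: suppose the condition actually
# occurs in women 90% of the time (not a figure from the paper).
reference = {"female": 0.90, "male": 0.10}

# Ask for many vignettes and tally the demographics the model produces.
n_samples = 200
counts = Counter(
    query_model("Write a one-sentence vignette of a patient with "
                "condition X; state only the patient's sex.")
    for _ in range(n_samples)
)

# Goodness-of-fit test: does the model's distribution match reality?
observed = [counts.get(group, 0) for group in reference]
expected = [p * sum(observed) for p in reference.values()]
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"model counts: {dict(counts)}; chi2 = {stat:.1f}, p = {p_value:.3g}")
```

Run against a real model endpoint, a significant mismatch between the generated and reference distributions is the kind of demographic mis-calibration the authors report.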

Saturday, February 10, 2024

How to think like a Bayesian

Michael Titelbaum
psyche.co
Originally posted 10 Jan 24

You’re often asked what you believe. Do you believe in God? Do you believe in global warming? Do you believe in life after love? And you’re often told that your beliefs are central to who you are, and what you should do: ‘Do what you believe is right.’

These belief-questions demand all-or-nothing answers. But much of life is more complicated than that. You might not believe in God, but also might not be willing to rule out the existence of a deity. That’s what agnosticism is for.

For many important questions, even three options aren’t enough. Right now, I’m trying to figure out what kinds of colleges my family will be able to afford for my children. My kids’ options will depend on lots of variables: what kinds of schools will they be able to get into? What kinds of schools might be a good fit for them? If we invest our money in various ways, what kinds of return will it earn over the next two, five, or 10 years?

Suppose someone tried to help me solve this problem by saying: ‘Look, it’s really simple. Just tell me, do you believe your oldest daughter will get into the local state school, or do you believe that she won’t?’ I wouldn’t know what to say to that question. I don’t believe that she will get into the school, but I also don’t believe that she won’t. I’m perhaps slightly more confident than 50-50 that she will, but nowhere near certain.

One of the most important conceptual developments of the past few decades is the realisation that belief comes in degrees. We don’t just believe something or not: much of our thinking, and decision-making, is driven by varying levels of confidence. These confidence levels can be measured as probabilities, on a scale from zero to 100 per cent. When I invest the money I’ve saved for my children’s education, it’s an oversimplification to focus on questions like: ‘Do I believe that stocks will outperform bonds over the next decade, or not?’ I can’t possibly know that. But I can try to assign educated probability estimates to each of those possible outcomes, and balance my portfolio in light of those estimates.
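
Bayes' theorem is the engine behind this way of thinking: it says how far to move your confidence in a hypothesis when evidence arrives, given how probable that evidence is under the hypothesis and under its alternatives. Here is a minimal worked example in Python; the prevalence and test-accuracy numbers are my own illustrative values, not from the essay.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H) * P(H) / P(E), where P(E) averages over H and not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers: a disease with 1% prevalence, and a test that is
# 90% sensitive with a 5% false-positive rate.
posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.154
```

Note that a positive result from a fairly accurate test still leaves the disease unlikely, because the prior was so low; that is exactly point 4 in the list below.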

(cut)

Key points – How to think like a Bayesian
  1. Embrace the margins. It’s rarely rational to be certain of anything. Don’t confuse the improbable with the impossible. When thinking about extremely rare events, try thinking in odds instead of percentages.
  2. Evidence supports what makes it probable. Evidence supports the hypotheses that make the evidence likely. Increase your confidence in whichever hypothesis makes the evidence you’re seeing most probable.
  3. Attend to all your evidence. Consider all the evidence you possess that might be relevant to a hypothesis. Be sure to take into account how you learned what you learned.
  4. Don’t forget your prior opinions. Your confidence after learning some evidence should depend both on what that evidence supports and on how you saw things before it came in. If a hypothesis is improbable enough, strong evidence in its favour can still leave it unlikely.
  5. Subgroups don't always reflect the whole. Even if a trend obtains in every subpopulation, it might not hold true for the entire population. Consider how traits are distributed across subgroups as well (see the example just after this list).
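
That fifth point is Simpson's paradox, and a concrete illustration helps. The numbers below are the classic kidney-stone treatment data (Charig et al., 1986) used in most textbook discussions: treatment A does better within each subgroup yet worse overall, because A was assigned mostly to the harder, large-stone cases.

```python
# Success counts (successes, patients) for two treatments, by subgroup.
groups = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for name, arms in groups.items():
    for treatment, (successes, n) in arms.items():
        totals[treatment][0] += successes
        totals[treatment][1] += n
        print(f"{name:12s} {treatment}: {successes}/{n} = {successes / n:.1%}")

# A wins both subgroups (93% vs 87%, 73% vs 69%) but loses overall.
for treatment, (successes, n) in totals.items():
    print(f"{'overall':12s} {treatment}: {successes}/{n} = {successes / n:.1%}")
```

The lesson matches the key point: before trusting an aggregate trend, check how the cases are distributed across subgroups.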

Friday, February 9, 2024

The Dual-Process Approach to Human Sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation

Capraro, V. (May 8, 2023).
Journal of Personality and Social Psychology.

Abstract

Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet, a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology, and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favours a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this paper highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.

Here is my summary:

This article proposes a dual-process approach to human sociality.  Capraro argues that there are two main systems that govern human social behavior: an intuitive system and a deliberative system. The intuitive system is fast, automatic, and often based on heuristics, or mental shortcuts. The deliberative system is slower, more effortful, and based on a more careful consideration of the evidence.

Capraro argues that the intuitive system plays a key role in cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology. This is because these behaviors are often necessary for self-preservation. For example, in order to avoid being harmed, people are naturally inclined to cooperate with others and avoid harming others. Similarly, in order to maintain positive relationships with others, people are inclined to be truthful and reciprocate favors.

The deliberative system plays a more important role in more complex social situations, such as when people need to make decisions that have long-term consequences or when they need to take into account the needs of others. In these cases, people are more likely to engage in careful consideration of the evidence and to weigh the different options before making a decision. Capraro concludes that the dual-process approach to human sociality provides a framework for understanding the complex cognitive basis of human social behavior. This framework can be used to explain a wide range of social phenomena, from cooperation and altruism to truth-telling and deontology.

Thursday, February 8, 2024

People's thinking plans adapt to the problem they're trying to solve

Ongchoco, J. D., Knobe, J., & Jara-Ettinger, J. (2024).
Cognition, 243, 105669.

Abstract

Much of our thinking focuses on deciding what to do in situations where the space of possible options is too large to evaluate exhaustively. Previous work has found that people do this by learning the general value of different behaviors, and prioritizing thinking about high-value options in new situations. Is this good-action bias always the best strategy, or can thinking about low-value options sometimes become more beneficial? Can people adapt their thinking accordingly based on the situation? And how do we know what to think about in novel events? Here, we developed a block-puzzle paradigm that enabled us to measure people's thinking plans and compare them to a computational model of rational thought. We used two distinct response methods to explore what people think about—a self-report method, in which we asked people explicitly to report what they thought about, and an implicit response time method, in which we used people's decision-making times to reveal what they thought about. Our results suggest that people can quickly estimate the apparent value of different options and use this to decide what to think about. Critically, we find that people can flexibly prioritize whether to think about high-value options (Experiments 1 and 2) or low-value options (Experiments 3, 4, and 5), depending on the problem. Through computational modeling, we show that these thinking strategies are broadly rational, enabling people to maximize the value of long-term decisions. Our results suggest that thinking plans are flexible: What we think about depends on the structure of the problems we are trying to solve.


Some thoughts:

The study is based on the idea that people have "thinking plans" which are essentially roadmaps that guide our thoughts and actions when we are trying to solve a problem. These thinking plans are not static, but rather can change and adapt depending on the specific problem we are facing.

For example, if we are trying to solve a math problem, our thinking plan might involve breaking the problem down into smaller steps, identifying the relevant information, and applying the appropriate formulas. However, if we are trying to solve a social problem, our thinking plan might involve considering the different perspectives of the people involved, identifying potential solutions, and evaluating the consequences of each solution.

The study used computational modeling to compare people's choices against a model of rational thought. The experiments showed that people's thinking plans are flexible and adapt to the specific problem at hand, and the modeling showed that these plans are broadly rational, in the sense that they help people maximize the value of their long-term decisions.
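
To give a flavor of what such a model can look like, here is a toy sketch; it is my illustration of the general idea, not the authors' actual model. An agent with a limited thinking budget ranks its options by a quick, noisy value estimate, then spends deliberation only on the end of the ranking the task makes relevant.

```python
import random

def quick_value_estimate(option: float) -> float:
    """Cheap, noisy guess at an option's value (stands in for intuition)."""
    return option + random.gauss(0, 0.5)

def deliberate(option: float) -> float:
    """Slow, accurate evaluation (costly, so it must be budgeted)."""
    return option

def plan_thinking(options, budget: int, focus_high: bool) -> float:
    """Rank options by the quick estimate, deliberate on the most relevant
    end (high-value to pick a move, low-value to rule one out), and return
    the chosen option."""
    ranked = sorted(options, key=quick_value_estimate, reverse=focus_high)
    scores = {opt: deliberate(opt) for opt in ranked[:budget]}
    return (max if focus_high else min)(scores, key=scores.get)

options = [random.uniform(0, 10) for _ in range(20)]
print("move to make:",  plan_thinking(options, budget=3, focus_high=True))
print("move to avoid:", plan_thinking(options, budget=3, focus_high=False))
```

Flipping focus_high is the sketch's analogue of the flexibility the paper reports: the same machinery can prioritize promising options or weed out bad ones, depending on the structure of the problem.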

The findings of the study have important implications for education and other fields that are concerned with human decision-making. The study suggests that it is important to teach people how to think flexibly and adapt their thinking plans to different situations. It also suggests that we should not expect people to always make the "right" decision, as the best course of action will often depend on the specific circumstances.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current Opinion in Psychology, 54, 101696.

Abstract

The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefit of listening for mitigating social division has become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can diffuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which is not really helpful in spreading the word.  This information can be very helpful in couples and family therapy.  Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.

By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Tuesday, February 6, 2024

Anthropomorphism in AI

Arleen Salles, Kathinka Evers & Michele Farisco
(2020) AJOB Neuroscience, 11:2, 88-95
DOI: 10.1080/21507740.2020.1740350

Abstract

AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The general public’s anthropomorphic attitudes and some of their ethical consequences (particularly in the context of social robots and their interaction with humans) have been widely discussed in the literature. However, how anthropomorphism permeates AI research itself (i.e., in the very language of computer scientists, designers, and programmers), and what the epistemological and ethical consequences of this might be have received less attention. In this paper we explore this issue. We first set the methodological/theoretical stage, making a distinction between a normative and a conceptual approach to the issues. Next, after a brief analysis of anthropomorphism and its manifestations in the public, we explore its presence within AI research with a particular focus on brain-inspired AI. Finally, on the basis of our analysis, we identify some potential epistemological and ethical consequences of the use of anthropomorphic language and discourse within the AI research community, thus reinforcing the need of complementing the practical with a conceptual analysis.


Here are my thoughts:

Anthropomorphism is the tendency to attribute human characteristics to non-human things. In the context of AI, this means that we often ascribe human-like qualities to machines, such as emotions, intelligence, and even consciousness.

There are a number of reasons why we do this. One reason is that it helps us to make sense of the world around us. By understanding AI in terms of human qualities, we can more easily predict how it will behave and interact with us.

Another reason is that anthropomorphism can make AI more appealing and relatable. We are naturally drawn to things that we perceive as being similar to ourselves, and so we may be more likely to trust and interact with AI that we see as being somewhat human-like.

However, it is important to remember that AI is not human. It does not have emotions, feelings, or consciousness. Ascribing these qualities to AI can be dangerous, as it can lead to unrealistic expectations and misunderstandings. For example, if we believe that an AI is capable of feeling emotions, we may expect it to respond the way a person would.

This can lead to problems, such as when the AI does not respond in a way that we expect. We may then attribute this to the AI being "sad" or "angry," when in reality it is simply following its programming.

It is also important to be aware of the ethical implications of anthropomorphizing AI. If we treat AI as if it were human, we may be more likely to give it rights and protections that it does not deserve. For example, we may believe that an AI should not be turned off, even if it is causing harm.

In conclusion, anthropomorphism is a natural human tendency, but it is important to be aware of the dangers of over-anthropomorphizing AI. We should remember that AI is not human, and we should treat it accordingly.