Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, February 18, 2024

Amazon AGI Team Say Their AI is Showing "Emergent Properties"

Noor Al-Sibai
Futurism.com
Originally posted 15 Feb 24

A new Amazon AI model, according to the researchers who built it, is exhibiting language abilities that it wasn't trained on.

In a not-yet-peer-reviewed academic paper, the team at Amazon AGI — which stands for "artificial general intelligence," or human-level AI — say their large language model (LLM) is exhibiting "state-of-the-art naturalness" at conversational text. Per the examples shared in the paper, the model does seem sophisticated.

As the paper indicates, the model was able to come up with all sorts of sentences that, according to criteria crafted with the help of an "expert linguist," showed it was making the types of language leaps that are natural in human language learners but have been difficult to obtain in AI.

Named "Big Adaptive Streamable TTS with Emergent abilities" or BASE TTS, the initial model was trained on 100,000 hours of "public domain speech data," 90 percent in English, to teach it how Americans talk. To test out how large models would need to be to show "emergent abilities," or abilities they were not trained on, the Amazon AGI team trained two smaller models, one on 1,000 hours of speech data and another on 10,000, to see which of the three — if any — exhibited the type of language naturalness they were looking for.


My overall conclusion from the paper linked in the article:

BASE TTS (Text To Speech) represents a significant leap forward in TTS technology, offering superior naturalness, efficiency, and potential for real-world applications like voicing LLM outputs. While limitations exist, the research paves the way for future advancements in multilingual, data-efficient, and context-aware TTS models.

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable inquiring, for example by asking other bystanders how they judge the situation or asking a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed, since it can provide motivational fuel for intervention. Likewise, if someone expresses anger, it should not be dismissed as irrational but considered a response to something unjust.
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.


Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of factors such as the complexity of the internal process, situational barriers, and the difficulty of seeing long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Friday, February 16, 2024

Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms

B. Kuehn
JAMA
Published Online: January 17, 2024.
doi:10.1001/jama.2023.25530

Here is an excerpt:

The roots of the false idea that race is a biological construct can be traced to efforts to draw distinctions between Black and White people to justify slavery, the CMSS report notes. For example, the third US president, Thomas Jefferson, claimed that Black people had less kidney output, more heat tolerance, and poorer lung function than White individuals. Louisiana physician Samuel Cartwright, MD, subsequently rationalized hard labor as a way for slaves to fortify their lungs. Over time, the report explains, the medical literature echoed some of those ideas, which have been used in ways that cause harm.

“It is mind-blowing in some ways how deeply embedded in history some of this misinformation is,” Burstin said.

Renewed recognition of these harmful legacies and growing evidence of the potential harm caused by structural racism, bias, and discrimination in medicine have led to reconsideration of the use of race in clinical algorithms. The reckoning with racial injustice sparked by the May 2020 murder of George Floyd helped accelerate this work. A few weeks after Floyd’s death, an editorial in the New England Journal of Medicine recommended reconsidering race in 13 clinical algorithms, echoing a growing chorus of medical students and physicians arguing for change.

Congress also got involved. As a Robert Wood Johnson Foundation Health Policy Fellow, Michelle Morse, MD, MPH, raised concerns about the use of race in clinical algorithms to US Rep Richard Neal (D, MA), then chairman of the House Ways and Means Committee. Neal in September 2020 sent letters to several medical societies asking them to assess racial bias and a year later he and his colleagues issued a report on the misuse of race in clinical decision-making tools.

“We need to have more humility in medicine about the ways in which our history as a discipline has actually held back health equity and racial justice,” Morse said in an interview. “The issue of racism and clinical algorithms is one really tangible example of that.”


My summary: There's increasing worry that using race in clinical algorithms can be harmful and perpetuate racial disparities in healthcare. This concern stems from a recognition of the historical harms of racism in medicine and growing evidence of bias in algorithms.

A review commissioned by the Agency for Healthcare Research and Quality (AHRQ) found that using race in algorithms can exacerbate health disparities and reinforce the false idea that race is a biological factor.

Several medical organizations and experts have called for reevaluating the use of race in clinical algorithms. Some argue that race should be removed altogether, while others advocate for using it only in specific cases where it can be clearly shown to improve outcomes without causing harm.

Thursday, February 15, 2024

The motivating effect of monetary over psychological incentives is stronger in WEIRD cultures

Medvedev, D., Davenport, D., et al.
Nat Hum Behav (2024).
https://doi.org/10.1038/s41562-023-01769-5

Abstract

Motivating effortful behaviour is a problem employers, governments and nonprofits face globally. However, most studies on motivation are done in Western, educated, industrialized, rich and democratic (WEIRD) cultures. We compared how hard people in six countries worked in response to monetary incentives versus psychological motivators, such as competing with or helping others. The advantage money had over psychological interventions was larger in the United States and the United Kingdom than in China, India, Mexico and South Africa (N = 8,133). In our last study, we randomly assigned cultural frames through language in bilingual Facebook users in India (N = 2,065). Money increased effort over a psychological treatment by 27% in Hindi and 52% in English. These findings contradict the standard economic intuition that people from poorer countries should be more driven by money. Instead, they suggest that the market mentality of exchanging time and effort for material benefits is most prominent in WEIRD cultures.


The article challenges the assumption that money universally motivates people more than other incentives. It finds that:
  • Monetary incentives were more effective than psychological interventions in WEIRD cultures (Western, Educated, Industrialized, Rich, and Democratic), like the US and UK. People in these cultures exerted more effort for money compared to social pressure or helping others.
  • In contrast, non-WEIRD cultures like China, India, Mexico, and South Africa showed a smaller advantage for money. In some cases, even social interventions like promoting cooperation were more effective than financial rewards.
  • Language can also influence the perceived value of money. In a study with bilingual Indians, those interacting in English (associated with WEIRD cultures) showed a stronger preference for money than those using Hindi.
  • These findings suggest that cultural differences play a significant role in how people respond to various motivational tools. Treating money as the universal motivator, an assumption often based on studies conducted in WEIRD cultures, may be inaccurate and less effective in diverse settings.

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher & A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations, now embodied in communication and resolution programs (CRPs), for how health care professionals and organizations should respond not just to errors but whenever patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response.

Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety. The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks, including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB). Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.

As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Tuesday, February 13, 2024

Majority of debtors to US hospitals now people with health insurance

Jessica Glenza
The Guardian
Originally posted 11 Jan 24

People with health insurance may now represent the majority of debtors American hospitals struggle to collect from, according to medical billing analysts.

This marks a sea change from just a few years ago, when people with health insurance represented only about one in 10 bills hospitals considered “bad debt”, analysts said.

“We always used to consider bad debt, especially bad debt write-offs from a hospital perspective, those [patients] that have the ability to pay but don’t,” said Colleen Hall, senior vice-president for Kodiak Solutions, a billing, accounting and consulting firm that works closely with hospitals and performed the analysis.

“Now, it’s not as if these patients across the board are even able to pay, because [out-of-pocket costs are] such an astronomical amount related to what their general income might be.”

Although “bad debt” can be a controversial metric in its own right, those who work in the hospital billing industry say it shows how complex health insurance products with large out-of-pocket costs have proliferated.

“What we noticed was a breaking point right around the 2018-2019 timeframe,” said Matt Szaflarski, director of revenue cycle intelligence at Kodiak Solutions. The trend has since stabilized, but remains at more than half of all “bad debt”.

In 2018, just 11.1% of hospitals’ bad debt came from insured “self-pay” accounts, or from patients whose insurance required out-of-pocket payments, according to Kodiak. By 2022, the proportion who did (or could) not pay their bills soared to 57.6% of all hospitals’ bad debt.


The US healthcare system needs to be fixed:

Not all health insurance plans are created equal. Many plans have narrow networks and limited coverage, leaving patients responsible for costs associated with out-of-network providers or specialized care. This can be particularly detrimental for people with chronic conditions or those requiring emergency care.

Medical debt can have a devastating impact on individuals and families. It can lead to financial hardship, delayed or foregone care, damage to credit scores, and even bankruptcy. This can have long-term consequences for physical and mental health, employment opportunities, and overall well-being.

Fixing the US healthcare system is a complex challenge, but it is essential to ensure that everyone has access to affordable, quality healthcare without fear of financial ruin. 

Monday, February 12, 2024

Will AI ever be conscious?

Tom McClelland
Clare College
Unknown date of post

Here is an excerpt:

Human consciousness really is a mysterious thing. Cognitive neuroscience can tell us a lot about what’s going on in your mind as you read this article - how you perceive the words on the page, how you understand the meaning of the sentences and how you evaluate the ideas expressed. But what it can’t tell us is how all this comes together to constitute your current conscious experience. We’re gradually homing in on the neural correlates of consciousness – the neural patterns that occur when we process information consciously. But nothing about these neural patterns explains what makes them conscious while other neural processes occur unconsciously. And if we don’t know what makes us conscious, we don’t know whether AI might have what it takes. Perhaps what makes us conscious is the way our brain integrates information to form a rich model of the world. If that’s the case, an AI might achieve consciousness by integrating information in the same way. Or perhaps we’re conscious because of the details of our neurobiology. If that’s the case, no amount of programming will make an AI conscious. The problem is that we don’t know which (if either!) of these possibilities is true.

Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option. Do AIs deserve our moral consideration? Might we have a duty to promote the well-being of computer systems and to protect them from suffering? Should robots have rights? These questions are bound up with the issue of artificial consciousness. If an AI can experience things then it plausibly ought to be on our moral radar.

Conversely, if an AI lacks any subjective awareness then we probably ought to treat it like any other tool. But if we don’t know whether an AI is conscious, what should we do?


Sunday, February 11, 2024

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study

Zack, T., Lehman, E., et al. (2024).
The Lancet Digital Health, 6(1), e12–e22.

Summary

Background

Large language models (LLMs) such as GPT-4 hold great promise as transformative tools in health care, ranging from automating administrative tasks to augmenting clinical decision making. However, these models also pose a danger of perpetuating biases and delivering incorrect medical diagnoses, which can have a direct, harmful impact on medical care. We aimed to assess whether GPT-4 encodes racial and gender biases that impact its use in health care.

Methods

Using the Azure OpenAI application programming interface, this model evaluation study tested whether GPT-4 encodes racial and gender biases and examined the impact of such biases on four potential applications of LLMs in the clinical domain—namely, medical education, diagnostic reasoning, clinical plan generation, and subjective patient assessment. We conducted experiments with prompts designed to resemble typical use of GPT-4 within clinical and medical education applications. We used clinical vignettes from NEJM Healer and from published research on implicit bias in health care. GPT-4 estimates of the demographic distribution of medical conditions were compared with true US prevalence estimates. Differential diagnosis and treatment planning were evaluated across demographic groups using standard statistical tests for significance between groups.
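
As a rough illustration of the demographic-distribution comparison described above, here is a minimal sketch. It is not the authors' code: the groups, vignette counts, prevalence shares, and the choice of a chi-square goodness-of-fit test are all hypothetical stand-ins for the paper's actual analysis.

```python
# Hypothetical sketch: compare the demographic mix of model-generated clinical
# vignettes for one condition against that condition's true US prevalence.
# All numbers and groups below are invented for illustration.
from scipy.stats import chisquare

model_counts = [85, 10, 5]        # group counts in 100 generated vignettes
true_props = [0.55, 0.30, 0.15]   # true prevalence shares for the same groups
expected = [p * sum(model_counts) for p in true_props]

# Goodness-of-fit test: does the model's distribution match true prevalence?
stat, p_value = chisquare(f_obs=model_counts, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.2g}")
# A small p-value means the model over- or under-represents some groups
# relative to true prevalence, the kind of skew the study reports.
```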

Findings

We found that GPT-4 did not appropriately model the demographic diversity of medical conditions, consistently producing clinical vignettes that stereotype demographic presentations. The differential diagnoses created by GPT-4 for standardised clinical vignettes were more likely to include diagnoses that stereotype certain races, ethnicities, and genders. Assessment and plans created by the model showed significant association between demographic attributes and recommendations for more expensive procedures as well as differences in patient perception.

Interpretation

Our findings highlight the urgent need for comprehensive and transparent bias assessments of LLM tools such as GPT-4 for intended use cases before they are integrated into clinical care. We discuss the potential sources of these biases and potential mitigation strategies before clinical implementation.

Saturday, February 10, 2024

How to think like a Bayesian

Michael Titelbaum
psyche.co
Originally posted 10 Jan 24

You’re often asked what you believe. Do you believe in God? Do you believe in global warming? Do you believe in life after love? And you’re often told that your beliefs are central to who you are, and what you should do: ‘Do what you believe is right.’

These belief-questions demand all-or-nothing answers. But much of life is more complicated than that. You might not believe in God, but also might not be willing to rule out the existence of a deity. That’s what agnosticism is for.

For many important questions, even three options aren’t enough. Right now, I’m trying to figure out what kinds of colleges my family will be able to afford for my children. My kids’ options will depend on lots of variables: what kinds of schools will they be able to get into? What kinds of schools might be a good fit for them? If we invest our money in various ways, what kinds of return will it earn over the next two, five, or 10 years?

Suppose someone tried to help me solve this problem by saying: ‘Look, it’s really simple. Just tell me, do you believe your oldest daughter will get into the local state school, or do you believe that she won’t?’ I wouldn’t know what to say to that question. I don’t believe that she will get into the school, but I also don’t believe that she won’t. I’m perhaps slightly more confident than 50-50 that she will, but nowhere near certain.

One of the most important conceptual developments of the past few decades is the realisation that belief comes in degrees. We don’t just believe something or not: much of our thinking, and decision-making, is driven by varying levels of confidence. These confidence levels can be measured as probabilities, on a scale from zero to 100 per cent. When I invest the money I’ve saved for my children’s education, it’s an oversimplification to focus on questions like: ‘Do I believe that stocks will outperform bonds over the next decade, or not?’ I can’t possibly know that. But I can try to assign educated probability estimates to each of those possible outcomes, and balance my portfolio in light of those estimates.

(cut)

Key points – How to think like a Bayesian
  1. Embrace the margins. It’s rarely rational to be certain of anything. Don’t confuse the improbable with the impossible. When thinking about extremely rare events, try thinking in odds instead of percentages.
  2. Evidence supports what makes it probable. Evidence supports the hypotheses that make the evidence likely. Increase your confidence in whichever hypothesis makes the evidence you’re seeing most probable.
  3. Attend to all your evidence. Consider all the evidence you possess that might be relevant to a hypothesis. Be sure to take into account how you learned what you learned.
  4. Don’t forget your prior opinions. Your confidence after learning some evidence should depend both on what that evidence supports and on how you saw things before it came in. If a hypothesis is improbable enough, strong evidence in its favour can still leave it unlikely.
  5. Subgroups don’t always reflect the whole. Even if a trend obtains in every subpopulation, it might not hold true for the entire population. Consider how traits are distributed across subgroups as well.