Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, April 14, 2024

AI and the need for justification (to the patient)

Muralidharan, A., Savulescu, J. & Schaefer, G.O.
Ethics Inf Technol 26, 16 (2024).

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.


Here is my summary:

The article argues that "black box" AI poses a problem in medicine because it lacks transparency, which makes it difficult for doctors to explain the AI's recommendations to patients. To make shared decisions about treatment, patients need to understand the reasoning behind those decisions and how the AI factored in their individual values and preferences.

The article proposes an alternative, called "Justifiable AI," to address this problem. Justifiable AI would be designed to make its reasoning process explicit, allowing doctors to explain to patients why the AI recommends a particular course of treatment. Patients could then see how the recommendation aligns with their own values and make informed decisions about their care.

Saturday, April 13, 2024

Human Enhancement and Augmented Reality

Gordon, E.C.
Philos. Technol. 37, 17 (2024).

Abstract

Bioconservative bioethicists (e.g., Kass, 2002, Human Dignity and Bioethics, 297–331, 2008; Sandel, 2007; Fukuyama, 2003) offer various kinds of philosophical arguments against cognitive enhancement—i.e., the use of medicine and technology to make ourselves “better than well” as opposed to merely treating pathologies. Two notable such bioconservative arguments appeal to ideas about (1) the value of achievement, and (2) authenticity. It is shown here that even if these arguments from achievement and authenticity cut ice against specifically pharmacologically driven cognitive enhancement, they do not extend over to an increasingly viable form of technological cognitive enhancement – namely, cognitive enhancement via augmented reality. An important result is that AR-driven cognitive enhancement aimed at boosting performance in certain cognitive tasks might offer an interesting kind of “sweet spot” for proponents of cognitive enhancement, allowing us to pursue many of the goals of enhancement advocates without running into some of the most prominent objections from bioconservative philosophers.


Here is a summary:

The article discusses how augmented reality (AR) can serve as a tool for human enhancement. Traditionally, human enhancement has focused on using technology or medicine to directly alter the body or brain. AR, however, offers an alternative route to augmentation by overlaying information and visuals on the real world through devices like glasses or contact lenses. This can improve our abilities in a variety of ways, such as providing hands-free access to information or translating languages in real time. The article acknowledges ethical concerns surrounding human enhancement but argues that AR offers a less controversial path than directly modifying the body or brain.

Friday, April 12, 2024

Large language models show human-like content biases in transmission chain experiments

Acerbi, A., & Stubbersfield, J. M. (2023).
PNAS, 120(44), e2313790120.

Abstract

As the use of large language models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in cultural evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five preregistered experiments using material from previous studies with human participants, we use the same, transmission chain-like methodology, and find that the LLM ChatGPT-3 shows biases analogous to humans for content that is gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive, over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects, by magnifying preexisting human tendencies for cognitively appealing and not necessarily informative, or valuable, content.

Significance

Use of AI in the production of text through Large Language Models (LLMs) is widespread and growing, with potential applications in journalism, copywriting, academia, and other writing tasks. As such, it is important to understand whether text produced or summarized by LLMs exhibits biases. The studies presented here demonstrate that the LLM ChatGPT-3 reflects human biases for certain types of content in its production. The presence of these biases in LLM output has implications for its common use, as it may magnify human tendencies for content which appeals to these biases.


Here are the main points:
  • LLMs display stereotype-consistent biases, just like humans: Similar to people, LLMs were more likely to preserve information confirming stereotypes over information contradicting them.
  • Where the bias appears might differ: Unlike humans, whose biases can shift throughout the retelling process, the LLM showed bias primarily in the first retelling. This suggests its biases stem from its training data rather than from a step-by-step cognitive process.
  • Simple summarization may suffice: The first retelling step caused the most content change, implying that even a single summarization by an LLM can reveal its biases. This simplifies the research needed to detect and analyze LLM bias (a minimal sketch of such a chain appears after this list).
  • Prompting for different viewpoints could reduce bias: The study suggests experimenting with different prompts to encourage LLMs to consider broader perspectives and potentially mitigate inherent biases.
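For readers curious how such a transmission chain can be set up with an LLM, here is a minimal, purely illustrative sketch (not the authors' code): a seed text is retold by the model, and each retelling becomes the input for the next "generation." It assumes the openai Python client (v1.x) with an API key available in the environment; the model name, prompt wording, and seed story are placeholder assumptions, not the paper's materials.

```python
# Illustrative sketch of a transmission chain with an LLM -- not the authors' code.
# Assumes the `openai` Python client (v1.x) and an API key in OPENAI_API_KEY;
# the model name, prompt wording, and seed story are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

SEED_STORY = (
    "A student missed the bus, found a lost wallet on the walk to campus, "
    "returned it to its owner, and arrived late to an important exam."
)

def retell(text: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to retell the text, playing one 'participant' in the chain."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Please summarize the following story for the next reader:\n\n" + text,
        }],
    )
    return response.choices[0].message.content

def transmission_chain(seed: str, generations: int = 3) -> list[str]:
    """Each generation's retelling becomes the input to the next generation."""
    chain = [seed]
    for _ in range(generations):
        chain.append(retell(chain[-1]))
    return chain

if __name__ == "__main__":
    for i, version in enumerate(transmission_chain(SEED_STORY)):
        print(f"--- Generation {i} ---\n{version}\n")
```

The bias analysis would then consist of coding each generation for which kinds of propositions survive (e.g., stereotype-consistent vs. inconsistent, social, negative, threat-related, or counterintuitive content), mirroring how the original human transmission chain studies were scored.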

Thursday, April 11, 2024

FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?

Frank Vinluan
MedCityNews.com
Originally posted 1 April 24

A software app that modifies behavior through a series of lessons and exercises has received FDA clearance for treating patients with major depressive disorder, making it the first prescription digital therapeutic for this indication.

The product, known as CT-152 during its development by partners Otsuka Pharmaceutical and Click Therapeutics, will be commercialized under the brand name Rejoyn.

Rejoyn is an alternative way to offer cognitive behavioral therapy, a type of talk therapy in which a patient works with a clinician in a series of in-person sessions. In Rejoyn, the cognitive behavioral therapy lessons, exercises, and reminders are digitized. The treatment is intended for use three times weekly for six weeks, though lessons may be revisited for an additional four weeks. The app was initially developed by Click Therapeutics, a startup that develops apps that use exercises and tasks to retrain and rewire the brain. In 2019, Otsuka and Click announced a collaboration in which the Japanese pharma company would fully fund development of the depression app.


Here is a quick summary:

Rejoyn is the first prescription digital therapeutic (PDT) authorized by the FDA for the adjunctive treatment of major depressive disorder (MDD) symptoms in adults. 

Rejoyn is a 6-week remote treatment program that combines clinically validated cognitive emotional training exercises and brief therapeutic lessons to help enhance cognitive control of emotions. The app aims to improve connections in the brain regions affected by depression, allowing the areas responsible for processing and regulating emotions to work better together and reduce MDD symptoms.

The FDA clearance for Rejoyn was based on data from a 13-week pivotal clinical trial that compared the app to a sham control app in 386 participants aged 22-64 with MDD who were taking antidepressants. The study found that Rejoyn users showed a statistically significant improvement in depression symptom severity compared to the control group, as measured by clinician-reported and patient-reported scales. No adverse effects were observed during the trial. 

Rejoyn is expected to be available for download on iOS and Android devices in the second half of 2024. It represents a novel, clinically validated digital therapeutic option that can be used as an adjunct to traditional MDD treatments under the guidance of healthcare providers.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods [14].

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population [15]. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).
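For readers wondering what "an index we created of performance in five environmental areas" amounts to in practice, here is a purely illustrative sketch of one common way to build such a composite index and correlate it with an equality measure. The country names and numbers below are made-up placeholders, not the authors' data, and the z-score averaging is an assumption about method, not a description of theirs.

```python
# Illustrative sketch only: composite environmental index correlated with equality.
# All values are made-up placeholders; this is not the authors' data or method.
import numpy as np
import pandas as pd

# Hypothetical data: rows are countries, columns are the five areas named in the
# article (scored so higher = better performance) plus an equality measure.
df = pd.DataFrame({
    "air_pollution":     [0.7, 0.4, 0.9, 0.5],
    "recycling":         [0.6, 0.3, 0.8, 0.5],
    "emissions_of_rich": [0.5, 0.2, 0.7, 0.4],
    "sdg_progress":      [0.8, 0.5, 0.9, 0.6],
    "intl_cooperation":  [0.7, 0.4, 0.8, 0.5],
    "equality":          [0.55, 0.30, 0.70, 0.45],
}, index=["Country A", "Country B", "Country C", "Country D"])

areas = ["air_pollution", "recycling", "emissions_of_rich",
         "sdg_progress", "intl_cooperation"]

# One simple composite: standardize each area (z-score) and average across areas.
z = (df[areas] - df[areas].mean()) / df[areas].std()
df["env_index"] = z.mean(axis=1)

# Pearson correlation between equality and the composite environmental index.
r = np.corrcoef(df["equality"], df["env_index"])[0, 1]
print(f"correlation(equality, environmental index) = {r:.2f}")
```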


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, April 8, 2024

Delusions shape our reality

Lisa Bortolotti
iai.tv
Originally posted 12 March 24

Here is an excerpt:

But what makes it the case that a delusion disqualifies the speaker from further engagement? When we call a person’s belief “delusional”, we assume that that person’s capacity to exercise agency is compromised. So, we may recognise that the person has a unique perspective on the world, but it won’t seem to us as a valuable perspective. We may realise that the person has concerns, but we won’t think of those concerns as legitimate and worth addressing. We may come to the conviction that, due to the delusional belief, the person is not in a position to effect change or participate in decision making because their grasp on reality is tenuous. If they were simply mistaken about something, we could correct them. If Laura thought that a latte at the local coffee shop cost £2.50 when it costs £3.50, we could show her the price list and set her straight. But her belief that her partner is unfaithful because the lamp post is unlit cannot be corrected that way, because what Laura considers evidence for the claim is not likely to overlap with what we consider evidence for it. When this happens, and we feel that there is no sufficient common ground for a fruitful exchange, we may see Laura as a problem to be fixed or a patient to be diagnosed and treated, as opposed to an agent with a multiplicity of needs and interests, and a person worth interacting with.

I challenge the assumption that delusional beliefs are marks of compromised agency by default and I do so based on two main arguments. First, there is nothing in the way in which delusional beliefs are developed, maintained, or defended that can be legitimately described as a dysfunctional process. Some cognitive biases may help explain why a delusional explanation is preferred to alternative explanations, or why it is not discarded after a challenge. For instance, people who report delusional beliefs often jump to conclusions. Rashid might have the belief that the US government strives to manipulate citizens’ behaviour and concludes that the tornadoes are created for this purpose, without considering arguments against the feasibility of a machine that controls the weather with that precision. Also, people who report delusional beliefs tend to see meaningful connections between independent events—as Laura who takes the lamp post being unlit as evidence for her partner’s unfaithfulness. But these cognitive biases are a common feature of human cognition and not a dysfunction giving rise to a pathology: they tend to be accentuated at stressful times when we may be strongly motivated to come up with a quick causal explanation for a distressing event.


Here is my summary:

The article argues that delusions, though often seen as simply false beliefs, can significantly impact a person's experience of the world. It highlights that delusions can be complex and offer a kind of internal logic, even if it doesn't match objective reality.

Bortolotti also points out that the term "delusion" can be judgmental and may overlook the reasons behind the belief. Delusions can sometimes provide comfort or a sense of control in a confusing situation.

Overall, the article suggests a more nuanced view of delusions, acknowledging their role in shaping a person's reality while still recognizing the importance of distinguishing them from objective reality.

Sunday, April 7, 2024

When Institutions Harm Those Who Depend on Them: A Scoping Review of Institutional Betrayal

Christl, M. E., et al. (2024).
Trauma, Violence, & Abuse, 15248380241226627. Advance online publication.

Abstract

The term institutional betrayal (Smith and Freyd, 2014) builds on the conceptual framework of betrayal trauma theory (see Freyd, 1996) to describe the ways that institutions (e.g., universities, workplaces) fail to take appropriate steps to prevent and/or respond appropriately to interpersonal trauma. A nascent literature has begun to describe individual costs associated with institutional betrayal throughout the United States (U.S.), with implications for public policy and institutional practice. A scoping review was conducted to quantify existing study characteristics and key findings to guide research and practice going forward. Multiple academic databases were searched for keywords (i.e., "institutional betrayal" and "organizational betrayal"). Thirty-seven articles met inclusion criteria (i.e., peer-reviewed empirical studies of institutional betrayal) and were included in analyses. Results identified research approaches, populations and settings, and predictor and outcome variables frequently studied in relation to institutional betrayal. This scoping review describes a strong foundation of published studies and provides recommendations for future research, including longitudinal research with diverse individuals across diverse institutional settings. The growing evidence for action has broad implications for research-informed policy and institutional practice.

Here is my summary:

A growing body of research examines institutional betrayal, the harm institutions cause people who depend on them. This research suggests institutional betrayal is linked to mental and physical health problems, absenteeism from work, and a distrust of institutions. A common tool to measure institutional betrayal is the Institutional Betrayal Questionnaire (IBQ). Researchers are calling for more studies on institutional betrayal among young people and in settings like K-12 schools and workplaces. Additionally, more research is needed on how institutions respond to reports of betrayal and how to prevent it from happening in the first place. Finally, future research should focus on people from minority groups, as they may be more vulnerable to institutional betrayal.

Saturday, April 6, 2024

LSD-Based Medication for GAD Receives FDA Breakthrough Status

Megan Brooks
Medscape.com
Originally posted March 08, 2024

The US Food and Drug Administration (FDA) has granted breakthrough designation to an LSD-based treatment for generalized anxiety disorder (GAD) based on promising topline data from a phase 2b clinical trial. Mind Medicine (MindMed) Inc is developing the treatment — MM120 (lysergide d-tartrate).

In a news release, the company reports that a single oral dose of MM120 met its key secondary endpoint, maintaining "clinically and statistically significant" reductions in Hamilton Anxiety Scale (HAM-A) score, compared with placebo, at 12 weeks, with a 65% clinical response rate and 48% clinical remission rate.

The company previously announced statistically significant improvements on the HAM-A compared with placebo at 4 weeks, which was the trial's primary endpoint.

"I've conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable," study investigator David Feifel, MD, PhD, professor emeritus of psychiatry at the University of California, San Diego, and director of the Kadima Neuropsychiatry Institute in La Jolla, California, said in the news release.


Here is some information from MindMed's press release.

About MM120

Lysergide is a synthetic ergotamine belonging to the group of classic, or serotonergic, psychedelics, which acts as a partial agonist at human serotonin-2A (5-hydroxytryptamine-2A [5-HT2A]) receptors. MindMed is developing MM120 (lysergide D-tartrate), the tartrate salt form of lysergide, for GAD and is exploring its potential applications in other serious brain health disorders.

About MindMed

MindMed is a clinical stage biopharmaceutical company developing novel product candidates to treat brain health disorders. Our mission is to be the global leader in the development and delivery of treatments that unlock new opportunities to improve patient outcomes. We are developing a pipeline of innovative product candidates, with and without acute perceptual effects, targeting neurotransmitter pathways that play key roles in brain health disorders.

MindMed trades on NASDAQ under the symbol MNMD and on the Cboe Canada (formerly known as the NEO Exchange, Inc.) under the symbol MMED.