Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, April 16, 2024

As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits

Eric Lipton
The New York Times
Originally posted 21 Nov 23

Here is an excerpt:

Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.

The intense jamming of radio communications and GPS in Ukraine has only accelerated the shift, as autonomous drones can often keep operating even when communications are cut off.

“This isn’t the plot of a dystopian novel, but a looming reality,” Gaston Browne, the prime minister of Antigua and Barbuda, told officials at a recent U.N. meeting.

Pentagon officials have made it clear that they are preparing to deploy autonomous weapons in a big way.

Deputy Defense Secretary Kathleen Hicks announced this summer that the U.S. military would “field attritable, autonomous systems at scale of multiple thousands” in the coming two years, saying that the push to compete with China’s own investment in advanced weapons necessitated that the United States “leverage platforms that are small, smart, cheap and many.”

The concept of an autonomous weapon is not entirely new. Land mines — which detonate automatically — have been used since the Civil War. The United States has missile systems that rely on radar sensors to autonomously lock on to and hit targets.

What is changing is the introduction of artificial intelligence that could give weapons systems the capability to make decisions themselves after taking in and processing information.


Here is a summary:

This article discusses the debate at the UN regarding Lethal Autonomous Weapons (LAWs) - essentially autonomous drones with AI that can select and attack targets without human intervention. There are concerns that this technology could lead to unintended casualties, make wars more likely, and remove the human element from the decision to take a life.
  • Many countries are worried about the development and deployment of LAWs.
  • Austria and other countries are proposing a total ban on LAWs or at least strict regulations requiring human control and limitations on how they can be used.
  • The US, Russia, and China are opposed to a ban and argue that LAWs could potentially reduce civilian casualties in wars.
  • The US prefers non-binding guidelines over new international laws.
  • The UN is currently deadlocked on the issue with no clear path forward for creating regulations.

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage of qualified human therapists, makes the proposal of chatbot therapists a timely and perhaps viable alternative. However, we need to be cautious about hasty implementations of such an alternative. Recent news has reported grave incidents involving chatbot-human interactions. For example, Walker (2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potentially maleficent effect of chatbots on fragile individuals. Thus, to be ready to safely deploy such technology in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Sunday, April 14, 2024

AI and the need for justification (to the patient)

Muralidharan, A., Savulescu, J. & Schaefer, G.O.
Ethics Inf Technol 26, 16 (2024).

Abstract

This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided for the decision makes it difficult for patients to ascertain whether there is adequate fit between the decision and the patient’s values. This paper argues that achieving algorithmic transparency does not help patients bridge the gap between their medical decisions and values. We introduce a hypothetical model we call Justifiable AI to illustrate this argument. Justifiable AI aims at modelling normative and evaluative considerations in an explicit way so as to provide a stepping stone for patient and physician to jointly decide on a course of treatment. If our argument succeeds, we should prefer these justifiable models over alternatives if the former are available and aim to develop said models if not.


Here is my summary:

The article argues that a certain type of AI technology, known as "black box" AI, poses a problem in medicine because it lacks transparency.  This lack of transparency makes it difficult for doctors to explain the AI's recommendations to patients.  In order to make shared decisions about treatment, patients need to understand the reasoning behind those decisions, and how the AI factored in their individual values and preferences.

The article proposes an alternative type of AI, called "Justifiable AI" which would address this problem. Justifiable AI would be designed to make its reasoning process clear, allowing doctors to explain to patients why the AI is recommending a particular course of treatment. This would allow patients to see how the AI's recommendation aligns with their own values, and make informed decisions about their care.
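To make the contrast concrete, here is a minimal, purely illustrative sketch of what "justifiable" reasoning might look like in code: the patient's values are represented explicitly as weights, and every recommendation comes with a readable rationale showing how it fits those values. The data structures, weights, and names below are invented for illustration only; the paper presents Justifiable AI as a hypothetical model and does not specify an implementation.

```python
# Illustrative sketch only. The value weights, treatments, and predicted
# outcomes are invented; the paper's Justifiable AI is a hypothetical model,
# not this code.

from dataclasses import dataclass


@dataclass
class Treatment:
    name: str
    # Predicted outcomes on dimensions the patient may care about (0-1 scale).
    outcomes: dict[str, float]


def justify(treatment: Treatment, patient_values: dict[str, float]) -> tuple[float, list[str]]:
    """Score a treatment against the patient's stated value weights and
    return both the score and a human-readable rationale."""
    score = 0.0
    rationale = []
    for value, weight in patient_values.items():
        predicted = treatment.outcomes.get(value, 0.0)
        contribution = weight * predicted
        score += contribution
        rationale.append(f"{value}: weight {weight:.1f} x predicted {predicted:.1f} = {contribution:.2f}")
    return score, rationale


if __name__ == "__main__":
    patient_values = {"symptom relief": 0.5, "avoiding side effects": 0.3, "staying at home": 0.2}
    options = [
        Treatment("medication A", {"symptom relief": 0.8, "avoiding side effects": 0.4, "staying at home": 0.9}),
        Treatment("inpatient programme", {"symptom relief": 0.9, "avoiding side effects": 0.7, "staying at home": 0.1}),
    ]
    for t in options:
        total, reasons = justify(t, patient_values)
        print(t.name, round(total, 2))
        for line in reasons:
            print("  ", line)
```

The point of the sketch is simply that when values and trade-offs are modelled explicitly, the "why" behind a recommendation can be laid out for patient and physician to discuss, which is precisely what a black-box model withholds.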

Saturday, April 13, 2024

Human Enhancement and Augmented Reality

Gordon, E.C.
Philos. Technol. 37, 17 (2024).

Abstract

Bioconservative bioethicists (e.g., Kass, 2002, Human Dignity and Bioethics, 297–331, 2008; Sandel, 2007; Fukuyama, 2003) offer various kinds of philosophical arguments against cognitive enhancement—i.e., the use of medicine and technology to make ourselves “better than well” as opposed to merely treating pathologies. Two notable such bioconservative arguments appeal to ideas about (1) the value of achievement, and (2) authenticity. It is shown here that even if these arguments from achievement and authenticity cut ice against specifically pharmacologically driven cognitive enhancement, they do not extend over to an increasingly viable form of technological cognitive enhancement – namely, cognitive enhancement via augmented reality. An important result is that AR-driven cognitive enhancement aimed at boosting performance in certain cognitive tasks might offer an interesting kind of “sweet spot” for proponents of cognitive enhancement, allowing us to pursue many of the goals of enhancement advocates without running into some of the most prominent objections from bioconservative philosophers.


Here is a summary:

The article discusses how Augmented Reality (AR) can be a tool for human enhancement. Traditionally, human enhancement focused on using technology or medicine to directly alter the body or brain. AR, however, offers an alternative method for augmentation by overlaying information and visuals on the real world through devices like glasses or contact lenses. This can improve our abilities in a variety of ways, such as providing hands-free access to information or translating languages in real-time. The article also acknowledges ethical concerns surrounding human enhancement, but argues that AR offers a less controversial path compared to directly modifying the body or brain.

Friday, April 12, 2024

Large language models show human-like content biases in transmission chain experiments

Acerbi, A., & Stubbersfield, J. M. (2023).
PNAS, 120(44), e2313790120.

Abstract

As the use of large language models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in cultural evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five preregistered experiments using material from previous studies with human participants, we use the same, transmission chain-like methodology, and find that the LLM ChatGPT-3 shows biases analogous to humans for content that is gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive, over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects, by magnifying preexisting human tendencies for cognitively appealing and not necessarily informative, or valuable, content.

Significance

Use of AI in the production of text through Large Language Models (LLMs) is widespread and growing, with potential applications in journalism, copywriting, academia, and other writing tasks. As such, it is important to understand whether text produced or summarized by LLMs exhibits biases. The studies presented here demonstrate that the LLM ChatGPT-3 reflects human biases for certain types of content in its production. The presence of these biases in LLM output has implications for its common use, as it may magnify human tendencies for content which appeals to these biases.


Here are the main points:
  • LLMs display stereotype-consistent biases, just like humans: Similar to people, LLMs were more likely to preserve information confirming stereotypes over information contradicting them.
  • Bias location might differ: Unlike humans, whose biases can shift throughout the retelling process, LLMs primarily showed bias in the first retelling. This suggests their biases stem from their training data rather than a complex cognitive process.
  • Simple summarization may suffice: The first retelling step caused the most content change, implying that even a single summarization by an LLM can reveal its biases. This simplifies the research needed to detect and analyze LLM bias.
  • Prompting for different viewpoints could reduce bias: The study suggests experimenting with different prompts to encourage LLMs to consider broader perspectives and potentially mitigate inherent biases.
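For readers curious about the mechanics, below is a minimal sketch of the transmission-chain setup the authors adapt: a seed story is retold repeatedly, with each retelling fed back as the next input, and the content that survives each generation is then inspected for bias. The llm_retell function here is a placeholder stub, not the authors' code; in the actual experiments the retellings were produced by ChatGPT-3.

```python
# Minimal sketch of a transmission-chain protocol with an LLM.
# llm_retell() is a placeholder; swap in a real chat-completion call
# to approximate the setup used in the paper.


def llm_retell(story: str) -> str:
    """Placeholder for an LLM call that 'retells' a story from memory.
    A real implementation would prompt the model, e.g.
    'Please retell the following story for another reader: ...'."""
    return story  # identity stub so the sketch runs as-is


def transmission_chain(seed_story: str, generations: int = 3) -> list[str]:
    """Pass a story through successive retellings, keeping every generation.

    Content that preferentially survives across generations (e.g.
    stereotype-consistent, social, negative, or threat-related details)
    indicates a transmission bias.
    """
    chain = [seed_story]
    for _ in range(generations):
        chain.append(llm_retell(chain[-1]))
    return chain


if __name__ == "__main__":
    seed = (
        "A story mixing stereotype-consistent and stereotype-inconsistent details, "
        "social and non-social information, threats and neutral facts."
    )
    for i, version in enumerate(transmission_chain(seed)):
        print(f"Generation {i}: {version[:80]}")
```

Comparing which kinds of details survive from one generation to the next (stereotype-consistent vs. inconsistent, social vs. non-social, threat-related vs. neutral) is what allows the authors to quantify the biases summarized above.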

Thursday, April 11, 2024

FDA Clears the First Digital Therapeutic for Depression, But Will Payers Cover It?

Frank Vinluan
MedCityNews.com
Originally posted 1 April 24

A software app that modifies behavior through a series of lessons and exercises has received FDA clearance for treating patients with major depressive disorder, making it the first prescription digital therapeutic for this indication.

The product, known as CT-152 during its development by partners Otsuka Pharmaceutical and Click Therapeutics, will be commercialized under the brand name Rejoyn.

Rejoyn is an alternative way to offer cognitive behavioral therapy, a type of talk therapy in which a patient works with a clinician in a series of in-person sessions. In Rejoyn, the cognitive behavioral therapy lessons, exercises, and reminders are digitized. The treatment is intended for use three times weekly for six weeks, though lessons may be revisited for an additional four weeks. The app was initially developed by Click Therapeutics, a startup that develops apps that use exercises and tasks to retrain and rewire the brain. In 2019, Otsuka and Click announced a collaboration in which the Japanese pharma company would fully fund development of the depression app.


Here is a quick summary:

Rejoyn is the first prescription digital therapeutic (PDT) authorized by the FDA for the adjunctive treatment of major depressive disorder (MDD) symptoms in adults. 

Rejoyn is a 6-week remote treatment program that combines clinically-validated cognitive emotional training exercises and brief therapeutic lessons to help enhance cognitive control of emotions. The app aims to improve connections in the brain regions affected by depression, allowing the areas responsible for processing and regulating emotions to work better together and reduce MDD symptoms. 

The FDA clearance for Rejoyn was based on data from a 13-week pivotal clinical trial that compared the app to a sham control app in 386 participants aged 22-64 with MDD who were taking antidepressants. The study found that Rejoyn users showed a statistically significant improvement in depression symptom severity compared to the control group, as measured by clinician-reported and patient-reported scales. No adverse effects were observed during the trial. 

Rejoyn is expected to be available for download on iOS and Android devices in the second half of 2024. It represents a novel, clinically-validated digital therapeutic option that can be used as an adjunct to traditional MDD treatments under the guidance of healthcare providers.

Wednesday, April 10, 2024

Why the world cannot afford the rich

R. G. Wilkinson & K. E. Pickett
Nature.com
Originally published 12 March 24

Here is an excerpt:

Inequality also increases consumerism. Perceived links between wealth and self-worth drive people to buy goods associated with high social status and thus enhance how they appear to others — as US economist Thorstein Veblen set out more than a century ago in his book The Theory of the Leisure Class (1899). Studies show that people who live in more-unequal societies spend more on status goods.

Our work has shown that the amount spent on advertising as a proportion of gross domestic product is higher in countries with greater inequality. The well-publicized lifestyles of the rich promote standards and ways of living that others seek to emulate, triggering cascades of expenditure for holiday homes, swimming pools, travel, clothes and expensive cars.

Oxfam reports that, on average, each of the richest 1% of people in the world produces 100 times the emissions of the average person in the poorest half of the world’s population. That is the scale of the injustice. As poorer countries raise their material standards, the rich will have to lower theirs.

Inequality also makes it harder to implement environmental policies. Changes are resisted if people feel that the burden is not being shared fairly. For example, in 2018, the gilets jaunes (yellow vests) protests erupted across France in response to President Emmanuel Macron’s attempt to implement an ‘eco-tax’ on fuel by adding a few percentage points to pump prices. The proposed tax was seen widely as unfair — particularly for the rural poor, for whom diesel and petrol are necessities. By 2019, the government had dropped the idea. Similarly, Brazilian truck drivers protested against rises in fuel tax in 2018, disrupting roads and supply chains.

Do unequal societies perform worse when it comes to the environment, then? Yes. For rich, developed countries for which data were available, we found a strong correlation between levels of equality and a score on an index we created of performance in five environmental areas: air pollution; recycling of waste materials; the carbon emissions of the rich; progress towards the United Nations Sustainable Development Goals; and international cooperation (UN treaties ratified and avoidance of unilateral coercive measures).


The article argues that rising economic inequality is a major threat to the world's well-being. Here are the key points:

The rich are capturing a growing share of wealth: The richest 1% are accumulating wealth much faster than everyone else, and their lifestyles contribute heavily to environmental damage.

Inequality harms everyone: High levels of inequality are linked to social problems like crime, mental health issues, and lower social mobility. It also makes it harder to address environmental challenges because people resist policies seen as unfair.

More equal societies perform better: Countries with a more even distribution of wealth tend to have better social and health outcomes, as well as stronger environmental performance.

Policymakers need to take action: The article proposes progressive taxation, closing tax havens, and encouraging more equitable business practices like employee ownership.

The overall message is that reducing inequality is essential for solving a range of environmental, social, and health problems.

Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study conducted by the Forecasting Research Institute aimed to understand the root of these disagreements through an "adversarial collaboration" where both groups engaged in extensive discussions and exposure to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, April 8, 2024

Delusions shape our reality

Lisa Bortolotti
iai.tv
Originally posted 12 March 24

Here is an excerpt:

But what makes it the case that a delusion disqualifies the speaker from further engagement? When we call a person’s belief “delusional”, we assume that that person’s capacity to exercise agency is compromised. So, we may recognise that the person has a unique perspective on the world, but it won’t seem to us as a valuable perspective. We may realise that the person has concerns, but we won’t think of those concerns as legitimate and worth addressing. We may come to the conviction that, due to the delusional belief, the person is not in a position to effect change or participate in decision making because their grasp on reality is tenuous. If they were simply mistaken about something, we could correct them. If Laura thought that a latte at the local coffee shop cost £2.50 when it costs £3.50, we could show her the price list and set her straight. But her belief that her partner is unfaithful because the lamp post is unlit cannot be corrected that way, because what Laura considers evidence for the claim is not likely to overlap with what we consider evidence for it. When this happens, and we feel that there is no sufficient common ground for a fruitful exchange, we may see Laura as a problem to be fixed or a patient to be diagnosed and treated, as opposed to an agent with a multiplicity of needs and interests, and a person worth interacting with.

I challenge the assumption that delusional beliefs are marks of compromised agency by default and I do so based on two main arguments. First, there is nothing in the way in which delusional beliefs are developed, maintained, or defended that can be legitimately described as a dysfunctional process. Some cognitive biases may help explain why a delusional explanation is preferred to alternative explanations, or why it is not discarded after a challenge. For instance, people who report delusional beliefs often jump to conclusions. Rashid might have the belief that the US government strives to manipulate citizens’ behaviour and concludes that the tornadoes are created for this purpose, without considering arguments against the feasibility of a machine that controls the weather with that precision. Also, people who report delusional beliefs tend to see meaningful connections between independent events—as Laura who takes the lamp post being unlit as evidence for her partner’s unfaithfulness. But these cognitive biases are a common feature of human cognition and not a dysfunction giving rise to a pathology: they tend to be accentuated at stressful times when we may be strongly motivated to come up with a quick causal explanation for a distressing event.


Here is my summary:

The article argues that delusions, though often seen as simply false beliefs, can significantly impact a person's experience of the world. It highlights that delusions can be complex and offer a kind of internal logic, even if it doesn't match objective reality.

Bortolotti also points out that the term "delusion" can be judgmental and may overlook the reasons behind the belief. Delusions can sometimes provide comfort or a sense of control in a confusing situation.

Overall, the article suggests a more nuanced view of delusions, acknowledging their role in shaping a person's reality while still recognizing the importance of distinguishing them from objective reality.