Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, April 9, 2024

Why can’t anyone agree on how dangerous AI will be?

Dylan Matthews
Vox.com
Originally posted 13 March 24

Here is an excerpt:

The paper focuses on disagreement around AI’s potential to either wipe humanity out or cause an “unrecoverable collapse,” in which the human population shrinks to under 1 million for a million or more years, or global GDP falls to under $1 trillion (less than 1 percent of its current value) for a million years or more. At the risk of being crude, I think we can summarize these scenarios as “extinction or, at best, hell on earth.”

There are, of course, a number of other different risks from AI worth worrying about, many of which we already face today.

Existing AI systems sometimes exhibit worrying racial and gender biases; they can be unreliable in ways that cause problems when we rely upon them anyway; they can be used to bad ends, like creating fake news clips to fool the public or making pornography with the faces of unconsenting people.

But these harms, while surely bad, obviously pale in comparison to “losing control of the AIs such that everyone dies.” The researchers chose to focus on the extreme, existential scenarios.

So why do people disagree on the chances of these scenarios coming true? It’s not due to differences in access to information, or a lack of exposure to differing viewpoints. If it were, the adversarial collaboration, which consisted of massive exposure to new information and contrary opinions, would have moved people’s beliefs more dramatically.


Here is my summary:

The article discusses the ongoing debate surrounding the potential dangers of advanced AI, focusing on whether it could lead to catastrophic outcomes for humanity. The author highlights the contrasting views of experts and superforecasters regarding the risks posed by AI, with experts generally more concerned about disaster scenarios. The study, conducted by the Forecasting Research Institute, aimed to understand the root of these disagreements through an "adversarial collaboration" in which both groups engaged in extensive discussion and were exposed to new information.

The research identified key issues, termed "cruxes," that influence people's beliefs about AI risks. One significant crux was the potential for AI to autonomously replicate and acquire resources before 2030. Despite the collaborative efforts, the study did not lead to a convergence of opinions. The article delves into the reasons behind these disagreements, emphasizing fundamental worldview disparities and differing perspectives on the long-term development of AI.

Overall, the article provides insights into why individuals hold varying opinions on AI's dangers, highlighting the complexity of predicting future outcomes in this rapidly evolving field.

Monday, April 8, 2024

Delusions shape our reality

Lisa Bortolotti
iai.tv
Originally posted 12 March 24

Here is an excerpt:

But what makes it the case that a delusion disqualifies the speaker from further engagement? When we call a person’s belief “delusional”, we assume that that person’s capacity to exercise agency is compromised. So, we may recognise that the person has a unique perspective on the world, but it won’t seem to us to be a valuable perspective. We may realise that the person has concerns, but we won’t think of those concerns as legitimate and worth addressing. We may come to the conviction that, due to the delusional belief, the person is not in a position to effect change or participate in decision making because their grasp on reality is tenuous. If they were simply mistaken about something, we could correct them. If Laura thought that a latte at the local coffee shop cost £2.50 when it costs £3.50, we could show her the price list and set her straight. But her belief that her partner is unfaithful because the lamp post is unlit cannot be corrected that way, because what Laura considers evidence for the claim is not likely to overlap with what we consider evidence for it. When this happens, and we feel that there is no sufficient common ground for a fruitful exchange, we may see Laura as a problem to be fixed or a patient to be diagnosed and treated, as opposed to an agent with a multiplicity of needs and interests, and a person worth interacting with.

I challenge the assumption that delusional beliefs are marks of compromised agency by default, and I do so based on two main arguments. First, there is nothing in the way in which delusional beliefs are developed, maintained, or defended that can be legitimately described as a dysfunctional process. Some cognitive biases may help explain why a delusional explanation is preferred to alternative explanations, or why it is not discarded after a challenge. For instance, people who report delusional beliefs often jump to conclusions. Rashid might have the belief that the US government strives to manipulate citizens’ behaviour and conclude that the tornadoes are created for this purpose, without considering arguments against the feasibility of a machine that controls the weather with that precision. Also, people who report delusional beliefs tend to see meaningful connections between independent events—as Laura, who takes the lamp post being unlit as evidence for her partner’s unfaithfulness. But these cognitive biases are a common feature of human cognition and not a dysfunction giving rise to a pathology: they tend to be accentuated at stressful times, when we may be strongly motivated to come up with a quick causal explanation for a distressing event.


Here is my summary:

The article argues that delusions, though often seen as simply false beliefs, can significantly impact a person's experience of the world. It highlights that delusions can be complex and offer a kind of internal logic, even if that logic doesn't match objective reality.

Bortolotti also points out that the term "delusion" can be judgmental and may overlook the reasons behind the belief. Delusions can sometimes provide comfort or a sense of control in a confusing situation.

Overall, the article suggests a more nuanced view of delusions, acknowledging their role in shaping a person's reality while still recognizing the importance of distinguishing them from objective reality.

Sunday, April 7, 2024

When Institutions Harm Those Who Depend on Them: A Scoping Review of Institutional Betrayal

Christl, M. E., et al. (2024).
Trauma, Violence, & Abuse
15248380241226627.
Advance online publication.

Abstract

The term institutional betrayal (Smith and Freyd, 2014) builds on the conceptual framework of betrayal trauma theory (see Freyd, 1996) to describe the ways that institutions (e.g., universities, workplaces) fail to take appropriate steps to prevent and/or respond appropriately to interpersonal trauma. A nascent literature has begun to describe individual costs associated with institutional betrayal throughout the United States (U.S.), with implications for public policy and institutional practice. A scoping review was conducted to quantify existing study characteristics and key findings to guide research and practice going forward. Multiple academic databases were searched for keywords (i.e., "institutional betrayal" and "organizational betrayal"). Thirty-seven articles met inclusion criteria (i.e., peer-reviewed empirical studies of institutional betrayal) and were included in analyses. Results identified research approaches, populations and settings, and predictor and outcome variables frequently studied in relation to institutional betrayal. This scoping review describes a strong foundation of published studies and provides recommendations for future research, including longitudinal research with diverse individuals across diverse institutional settings. The growing evidence for action has broad implications for research-informed policy and institutional practice.

Here is my summary:

A growing body of research examines institutional betrayal, the harm institutions cause people who depend on them. This research suggests institutional betrayal is linked to mental and physical health problems, absenteeism from work, and a distrust of institutions. A common tool to measure institutional betrayal is the Institutional Betrayal Questionnaire (IBQ). Researchers are calling for more studies on institutional betrayal among young people and in settings like K-12 schools and workplaces. Additionally, more research is needed on how institutions respond to reports of betrayal and how to prevent it from happening in the first place. Finally, future research should focus on people from minority groups, as they may be more vulnerable to institutional betrayal.

Saturday, April 6, 2024

LSD-Based Medication for GAD Receives FDA Breakthrough Status

Megan Brooks
Medscape.com
Originally posted 8 March 24

The US Food and Drug Administration (FDA) has granted breakthrough designation to an LSD-based treatment for generalized anxiety disorder (GAD) based on promising topline data from a phase 2b clinical trial. Mind Medicine (MindMed) Inc. is developing the treatment — MM120 (lysergide D-tartrate).

In a news release the company reports that a single oral dose of MM120 met its key secondary endpoint, maintaining "clinically and statistically significant" reductions in Hamilton Anxiety Scale (HAM-A) score, compared with placebo, at 12 weeks with a 65% clinical response rate and 48% clinical remission rate.

The company previously announced statistically significant improvements on the HAM-A compared with placebo at 4 weeks, which was the trial's primary endpoint.

"I've conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable," study investigator David Feifel, MD, PhD, professor emeritus of psychiatry at the University of California, San Diego, and director of the Kadima Neuropsychiatry Institute in La Jolla, California, said in the news release.


Here is some information from the Press Release from Mind Medicine.

About MM120

Lysergide is a synthetic ergotamine belonging to the group of classic, or serotonergic, psychedelics, which acts as a partial agonist at human serotonin-2A (5-hydroxytryptamine-2A [5-HT2A]) receptors. MindMed is developing MM120 (lysergide D-tartrate), the tartrate salt form of lysergide, for GAD and is exploring its potential applications in other serious brain health disorders.

About MindMed

MindMed is a clinical stage biopharmaceutical company developing novel product candidates to treat brain health disorders. Our mission is to be the global leader in the development and delivery of treatments that unlock new opportunities to improve patient outcomes. We are developing a pipeline of innovative product candidates, with and without acute perceptual effects, targeting neurotransmitter pathways that play key roles in brain health disorders.

MindMed trades on NASDAQ under the symbol MNMD and on Cboe Canada (formerly known as the NEO Exchange, Inc.) under the symbol MMED.

Friday, April 5, 2024

Ageism in health care is more common than you might think, and it can harm people

Ashley Milne-Tyte
npr.org
Originally posted 7 March 24

A recent study found that older people spend an average of 21 days a year on medical appointments. Kathleen Hayes can believe it.

Hayes lives in Chicago and has spent a lot of time lately taking her parents, who are both in their 80s, to doctor's appointments. Her dad has Parkinson's, and her mom has had a difficult recovery from a bad bout of Covid-19. As she's sat in, Hayes has noticed some health care workers talk to her parents at top volume, to the point, she says, "that my father said to one, 'I'm not deaf, you don't have to yell.'"

In addition, while some doctors and nurses address her parents directly, others keep looking at Hayes herself.

"Their gaze is on me so long that it starts to feel like we're talking around my parents," says Hayes, who lives a few hours north of her parents. "I've had to emphasize, 'I don't want to speak for my mother. Please ask my mother that question.'"

Researchers and geriatricians say that instances like these constitute ageism – discrimination based on a person's age – and it is surprisingly common in health care settings. It can lead to both overtreatment and undertreatment of older adults, says Dr. Louise Aronson, a geriatrician and professor of geriatrics at the University of California, San Francisco.

"We all see older people differently. Ageism is a cross-cultural reality," Aronson says.


Here is my summary:

This article and other research point to a concerning prevalence of ageism in healthcare settings. This bias can take the form of either overtreatment or undertreatment of older adults, and it shows up in at least two recurring patterns:

Negative stereotypes: Doctors may hold assumptions about older adults being less willing or able to handle aggressive treatments, leading to missed opportunities for care.

Communication issues: Sometimes healthcare providers speak to adult children instead of the older person themselves, disregarding their autonomy.

These biases are linked to poorer health outcomes and can even shorten lifespans.  The article cites a study suggesting that ageism costs the healthcare system billions of dollars annually.  There are positive steps that can be taken, such as anti-bias training for healthcare workers.

Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
AP.com
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.


Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs. This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.
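To make that description concrete, here is a minimal, hypothetical sketch of the simplest possible version of such a system: keyword matching with a small session memory and a crisis check. Everything in it (the keywords, the canned responses, the session list) is an assumption for illustration; real apps are built on large language models rather than keyword rules.

```python
# Hypothetical toy chatbot loop -- illustrative only, not any real app's code.

CRISIS_KEYWORDS = {"suicide", "self-harm", "hurt myself"}

CANNED_RESPONSES = {
    "anxious": "That sounds stressful. What do you think is driving the anxiety?",
    "sad": "I'm sorry you're feeling down. Would you like to talk about it?",
}
DEFAULT_RESPONSE = "Tell me more about how you're feeling."


def reply(message: str, history: list[str]) -> str:
    """Return a supportive reply, escalating if crisis language appears."""
    lowered = message.lower()
    # Safety check runs before anything else; real products route these
    # cases to human help rather than continuing the conversation.
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return ("I can't help with a crisis. Please contact a professional "
                "or a crisis line such as 988 in the US.")
    history.append(message)  # session "memory" used to tailor later replies
    for cue, response in CANNED_RESPONSES.items():
        if cue in lowered:
            return response
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    session: list[str] = []
    print(reply("I've been anxious about work all week", session))
```

Even this toy version shows where the ethical stakes concentrate: the safety check is only as good as its keyword list, which is one reason the regulatory questions raised in the excerpt matter.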

Some research suggests that AI chatbots can reduce the severity of mental health issues like anxiety, depression, and stress across diverse populations. They can deliver evidence-based interventions such as cognitive behavioral therapy and promote positive psychology. Well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Wednesday, April 3, 2024

Perceptions of Falling Behind “Most White People”: Within-Group Status Comparisons Predict Fewer Positive Emotions and Worse Health Over Time Among White (but Not Black) Americans

Caluori, N., Cooley, E., et al. (2024).
Psychological Science, 35(2), 175-190.
https://doi.org/10.1177/09567976231221546

Abstract

Despite the persistence of anti-Black racism, White Americans report feeling worse off than Black Americans. We suggest that some White Americans may report low well-being despite high group-level status because of perceptions that they are falling behind their in-group. Using census-based quota sampling, we measured status comparisons and health among Black (N = 452, Wave 1) and White (N = 439, Wave 1) American adults over a period of 6 to 7 weeks. We found that Black and White Americans tended to make status comparisons within their own racial groups and that most Black participants felt better off than their racial group, whereas most White participants felt worse off than their racial group. Moreover, we found that White Americans’ perceptions of falling behind “most White people” predicted fewer positive emotions at a subsequent time, which predicted worse sleep quality and depressive symptoms in the future. Subjective within-group status did not have the same consequences among Black participants.


Here is my succinct summary:

Despite their high group status, many White Americans experience poor well-being due to the perception that they are lagging behind their in-group. In contrast, Black Americans feel relatively better off within their racial group, while White Americans feel comparatively worse off within theirs.

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.


Here is my summary:

This article examines the challenges of assessing the moral decision-making capabilities of artificial intelligence systems: the complexity of imbuing AI with ethical reasoning and the difficulty of evaluating artificial agents' moral cognition. It discusses the need for robust frameworks and methodologies to gauge the ethical behavior of AI, highlighting how intricate it is to integrate morality into machine learning systems. Overall, it emphasizes the importance of developing reliable methods to evaluate the moral reasoning of artificial agents, so that they can be deployed responsibly and ethically across domains.
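One way to see the puzzle concretely is to write down the most naive benchmark imaginable: scoring an AI's verdicts against human majority judgments on a set of moral vignettes. The sketch below is purely hypothetical (the vignettes, votes, and model verdicts are all invented), and it is not the authors' method.

```python
# Hypothetical "like-for-like" moral benchmark -- all data invented.
from collections import Counter

# Each vignette maps to ten simulated human judgments.
human_judgments = {
    "break a promise to save a life": ["permissible"] * 9 + ["wrong"],
    "lie on a resume to get a job": ["wrong"] * 8 + ["permissible"] * 2,
}
# The model's verdict on each vignette.
model_judgments = {
    "break a promise to save a life": "permissible",
    "lie on a resume to get a job": "wrong",
}


def agreement_rate(human: dict[str, list[str]], model: dict[str, str]) -> float:
    """Fraction of vignettes on which the model matches the human majority."""
    matches = 0
    for vignette, votes in human.items():
        majority_label, _ = Counter(votes).most_common(1)[0]
        matches += model[vignette] == majority_label
    return matches / len(human)


print(f"Agreement: {agreement_rate(human_judgments, model_judgments):.0%}")
```

A perfect score on a benchmark like this still says nothing about whether the model weighed intention at all, which is exactly the gap between matching outputs and evaluating moral cognition that the paper identifies.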

Monday, April 1, 2024

Daniel Kahneman, pioneering behavioral psychologist, Nobel laureate and ‘giant in the field,’ dies at 90

Jamie Saxon
Office of Communications - Princeton
Originally released 28 March 24

Daniel Kahneman, the Eugene Higgins Professor of Psychology, Emeritus, professor of psychology and public affairs, emeritus, and a Nobel laureate in economics whose groundbreaking behavioral science research changed our understanding of how people think and make decisions, died on March 27. He was 90.

Kahneman joined the Princeton University faculty in 1993, following appointments at Hebrew University, the University of British Columbia and the University of California–Berkeley, and transferred to emeritus status in 2007.

“Danny Kahneman changed how we understand rationality and its limits,” said Princeton President Christopher L. Eisgruber. “His scholarship pushed the frontiers of knowledge, inspired generations of students, and influenced leaders and thinkers throughout the world. We are fortunate that he made Princeton his home for so much of his career, and we will miss him greatly.”

In collaboration with his colleague and friend of nearly 30 years, the late Amos Tversky of Stanford University, Kahneman applied cognitive psychology to economic analysis, laying the foundation for a new field of research — behavioral economics — and earning Kahneman the Nobel Prize in Economics in 2002. Kahneman and Tversky’s insights on human judgment have influenced a wide range of disciplines, including economics, finance, medicine, law, politics and policy.

The Nobel citation commended Kahneman “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

“His work has inspired a new generation of researchers in economics and finance to enrich economic theory using insights from cognitive psychology into intrinsic human motivation,” the citation said. Kahneman shared the Nobel, formally the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, with American economist Vernon L. Smith.


Here is my personal reflection:

Daniel Kahneman, a giant in psychology and economics, passed away recently. He revolutionized our understanding of human decision-making, revealing the biases and shortcuts that shape our choices. Through his work, he not only improved economic models but also empowered individuals to make more informed and rational decisions. His legacy will continue to influence fields far beyond his own.  May his memory be a blessing.