Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, December 27, 2023

This algorithm could predict your health, income, and chance of premature death

Holly Barker
Science.org
Originally published 18 Dec 23

Here is an excerpt:

The researchers trained the model, called “life2vec,” on every individual’s life story from 2008 to 2016, and the model sought patterns in these stories. Next, they used the algorithm to predict whether someone on the Danish national registers had died by 2020.

The model’s predictions were accurate 78% of the time. It identified several factors associated with a greater risk of premature death, including having a low income, having a mental health diagnosis, and being male. The model’s misses were typically caused by accidents or heart attacks, which are difficult to predict.

Although the results are intriguing—if a bit grim—some scientists caution that the patterns might not hold true for non-Danish populations. “It would be fascinating to see the model adapted using cohort data from other countries, potentially unveiling universal patterns, or highlighting unique cultural nuances,” says Youyou Wu, a psychologist at University College London.

Biases in the data could also confound its predictions, she adds. (The overdiagnosis of schizophrenia among Black people could cause algorithms to mistakenly label them at a higher risk of premature death, for example.) That could have ramifications for things such as insurance premiums or hiring decisions, Wu adds.


Here is my summary:

A new algorithm, trained on a mountain of Danish life stories, can peer into your future with unsettling precision, predicting your health, income, and even your odds of an early demise. The model achieves this by analyzing the sequence of life events, like getting a job or falling ill, and that capability raises both possibilities and ethical concerns. A toy sketch of the idea follows.
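To make the sequence-based approach concrete, here is a minimal, hypothetical sketch: life events are coded as tokens, each person's story becomes a sequence, and a classifier learns to predict an outcome label. The event codes, toy data, and bag-of-events classifier are all invented for illustration; the actual life2vec model is far more sophisticated (a transformer-style model trained on national registry data).

# Hypothetical sketch only -- not the authors' code or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy "life stories": each string is a sequence of coded life events.
stories = [
    "job_start diagnosis_depression income_low moved_city",
    "job_start income_high marriage child_birth",
    "income_low diagnosis_heart_attack job_loss",
    "education_degree job_start income_high",
]
died_by_2020 = [1, 0, 1, 0]  # invented labels: 1 = died, 0 = survived

# A simple bag-of-events model stands in for life2vec's sequence embeddings.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(stories)
model = LogisticRegression().fit(X, died_by_2020)

# Predict the outcome for a new, invented life story.
new_story = vectorizer.transform(["income_low diagnosis_depression job_loss"])
print(model.predict(new_story))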

On one hand, imagine the potential for good: nudges towards healthier habits or financial foresight, tailored to your personal narrative. On the other, anxieties around bias and discrimination loom. We must ensure this powerful tool is used wisely, for the benefit of all, lest it exacerbate existing inequalities or create new ones. The algorithm’s gaze into the future, while remarkable, is just that – a glimpse, not a script. 

Tuesday, December 26, 2023

Who did it? Moral wrongness for us and them in the UK, US, and Brazil

Boggio, P. S., et al. (2023)
Philosophical Psychology
DOI: 10.1080/09515089.2023.2278637

Abstract

Morality has traditionally been described in terms of an impartial and objective “moral law”, and moral psychological research has largely followed in this vein, focusing on abstract moral judgments. But might our moral judgments be shaped not just by what the action is, but who is doing it? We looked at ratings of moral wrongness, manipulating whether the person doing the action was a friend, a refugee, or a stranger. We looked at these ratings across various moral foundations, and conducted the study in Brazil, US, and UK samples. Our most robust and consistent findings are that purity violations were judged more harshly when committed by ingroup members and less harshly when committed by refugees in comparison to unspecified agents; that the difference between refugee and unspecified agents decays from liberals to conservatives, i.e., conservatives judge refugees more harshly than liberals do; and that Brazilian participants are harsher than the US and UK participants. Our results suggest that purity violations are judged differently according to who committed them and according to the political ideology of the judges. We discuss the findings in light of various theories of group dynamics, such as moral hypocrisy, moral disengagement, and the black sheep effect.


Here is my summary:

The study explores how moral judgments vary depending on both the agent committing the act and the nationality of the person making the judgment. The findings challenge the notion that moral judgments are universal, suggesting instead that they are shaped by cultural and national factors.

The researchers investigated how participants from the UK, US, and Brazil judged moral violations committed by different agents: friends, strangers, refugees, and unspecified individuals. Purity violations were judged more harshly when committed by ingroup members (such as friends) and less harshly when committed by refugees, relative to unspecified agents. Political ideology and nationality also mattered: conservatives judged refugees' violations more harshly than liberals did, and Brazilian participants judged violations more harshly overall than participants from the US and UK.

The study's findings suggest that moral judgments are not simply based on the severity of the act itself, but also on factors such as the relationship between the agent and the victim, and the cultural background of the person making the judgment. These findings have implications for understanding cross-cultural moral conflicts and for developing more effective moral education programs.

Monday, December 25, 2023

Pope Francis approves Catholic blessings for same-sex couples, but not for marriage

Becky Sullivan
npr.org
Originally posted 18 Dec 23

Pope Francis has granted his formal approval allowing Catholic priests to bless same-sex couples so long as they do not appear to endorse their marriage, marking the church's most permissive decree yet on the issue of same-sex couples.

The declaration, published Monday in a new document titled "Fiducia Supplicans: On the Pastoral Meaning of Blessings," marks a major departure for the Vatican, which only two years ago had said God "cannot bless sin" in a controversial 2021 decision about same-sex couples. Monday's document was approved by Pope Francis.

Still, the Vatican stressed that marriage remains exclusively between a man and a woman, and any priests granting a blessing to a same-sex couple must "avoid any form of confusion or scandal" that could suggest otherwise.

Francis, 87, has made liberalization toward LGBTQ Catholics a hallmark of his papacy. Since he became pope in 2013, he has urged the decriminalization of homosexuality. When asked in 2013 about gay priests, he famously replied: "If someone is gay and he searches for the Lord and has good will, who am I to judge?"

Monday's declaration is a "major step forward" for the church in regards to LGBTQ people, said the Rev. James Martin, an American Jesuit priest who has advocated for the LGBTQ Catholic community.

The declaration "recognizes the deep desire in many Catholic same-sex couples for God's presence in their loving relationships," Martin wrote on the social media site X, formerly known as Twitter. "In short, yesterday, as a priest, I was forbidden to bless same-sex couples at all. Today, with some limitations, I can."

What the declaration says about blessings for same-sex couples

In the document, the Vatican draws a distinction between what it described as "ritual and liturgical" blessings and those that are more informal and spontaneous.

"This Declaration remains firm on the traditional doctrine of the Church about marriage, not allowing any type of liturgical rite or blessing similar to a liturgical rite that can create confusion," wrote prefect Cardinal Victor Manuel Fernández in an introduction to the document.


The moral arc of the universe is long and complicated, and we hope it bends toward justice.

-paraphrasing Theodore Parker and Martin Luther King Jr.

Sunday, December 24, 2023

Dual character concepts and the normative dimension of conceptual representation

Knobe, J., Prasada, S., & Newman, G. E. (2013).
Cognition, 127(2), 242–257. 

Abstract

Five experiments provide evidence for a class of ‘dual character concepts.’ Dual character concepts characterize their members in terms of both (a) a set of concrete features and (b) the abstract values that these features serve to realize. As such, these concepts provide two bases for evaluating category members and two different criteria for category membership. Experiment 1 provides support for the notion that dual character concepts have two bases for evaluation. Experiments 2–4 explore the claim that dual character concepts have two different criteria for category membership. The results show that when an object possesses the appropriate concrete features, but does not fulfill the appropriate abstract value, it is judged to be a category member in one sense but not in another. Finally, Experiment 5 uses the theory developed here to construct artificial dual character concepts and examines whether participants react to these artificial concepts in the same way as naturally occurring dual character concepts. The present studies serve to define the nature of dual character concepts and distinguish them from other types of concepts (e.g., natural kind concepts), which share some, but not all of the properties of dual character concepts. More broadly, these phenomena suggest a normative dimension in everyday conceptual representation.

Here is my summary of the research, which has its critics:

This research challenged traditional understandings of categorization and evaluation. Dual character concepts, exemplified by terms like "artist," "scientist," and "teacher," possess two distinct dimensions:

Concrete Features: These are the observable, physical attributes or characteristics that members of the category share.

Abstract Values: These are the underlying goals, ideals, or purposes that the concrete features serve to realize.

Unlike other types of concepts, dual character concepts allow for two distinct bases for evaluation:

Good/Bad Evaluation: This assessment is based on how well the concrete features of an entity align with the expected characteristics of a category member.

True/False Evaluation: This judgment is based on whether the abstract values embedded in the concept are fulfilled by the concrete features of an entity.

This dual-pronged evaluation process leads to intriguing consequences for categorization and judgment. An object may be deemed a "good" category member based on its concrete features, yet not a "true" member if it fails to uphold the abstract values associated with the concept.

The researchers provide compelling evidence for the existence of dual character concepts through a series of experiments. These studies demonstrate that people have two distinct ways of characterizing category members and that dual character concepts influence judgments of category membership.

The concept of dual character concepts highlights the normative dimension of conceptual representation, suggesting that our concepts not only reflect the world but also embody our values and beliefs. This normative dimension shapes how we categorize objects, evaluate entities, and make decisions in our daily lives.

Saturday, December 23, 2023

Folk Psychological Attributions of Consciousness to Large Language Models

Colombatto, C., & Fleming, S. M.
(2023, November 22). PsyArXiv

Abstract

Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of Large Language Models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here we consider the question of whether AI could have subjective experiences such as feelings and sensations (“phenomenological consciousness”). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown how the general population attributes phenomenology to AI. We surveyed a sample of US residents (N=300) and found that a majority of participants were willing to attribute phenomenological consciousness to LLMs. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenology – but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions – with important implications for the legal and ethical status of AI.


My summary:

The results of the study show that a majority of participants were willing to attribute some degree of phenomenal consciousness to LLMs, though most still rated them as less conscious than humans. The authors interpret these attributions through the lens of folk psychology, the everyday tendency to explain the behavior of others in terms of mental states.

The authors also found that people's attributions of consciousness to LLMs were influenced by their beliefs about the nature of consciousness and their familiarity with LLMs. Participants who were more familiar with LLMs were more likely to attribute consciousness to them, and participants who believed that consciousness is a product of complex computation were also more likely to attribute consciousness to LLMs.

Overall, the study suggests that people are generally open to the possibility that LLMs may be conscious, but they also recognize that LLMs are not as conscious as humans. These findings have implications for the development and use of LLMs, as they suggest that people may be more willing to trust and interact with LLMs that they believe are conscious.

Friday, December 22, 2023

Differential cortical network engagement during states of un/consciousness in humans

Zelmann, R., Paulk, A., et al. (2023).
Neuron, 111(21)

Summary

What happens in the human brain when we are unconscious? Despite substantial work, we are still unsure which brain regions are involved and how they are impacted when consciousness is disrupted. Using intracranial recordings and direct electrical stimulation, we mapped global, network, and regional involvement during wake vs. arousable unconsciousness (sleep) vs. non-arousable unconsciousness (propofol-induced general anesthesia). Information integration and complex processing were reduced, while variability increased in any type of unconscious state. These changes were more pronounced during anesthesia than sleep and involved different cortical engagement. During sleep, changes were mostly uniformly distributed across the brain, whereas during anesthesia, the prefrontal cortex was the most disrupted, suggesting that the lack of arousability during anesthesia results not from just altered overall physiology but from a disconnection between the prefrontal and other brain areas. These findings provide direct evidence for different neural dynamics during loss of consciousness compared with loss of arousability.

Highlights

• Decreased complexity and connectivity, with increased variability when unconscious
• Changes were more pronounced during propofol-induced general anesthesia than sleep
• During sleep, changes were homogeneously distributed across the human brain
• During anesthesia, substantial prefrontal disconnection is related to lack of arousability


Here is my summary:

State-Dependent Cortical Network Engagement

The human brain undergoes significant changes in its functional organization during different states of consciousness, including wakefulness, sleep, and general anesthesia. This study investigated the neural underpinnings of these state-dependent changes by comparing cortical network engagement during wakefulness, sleep, and propofol-induced general anesthesia.

Prefrontal Cortex Disruption during Anesthesia

The findings revealed that loss of consciousness, whether due to sleep or anesthesia, resulted in reduced information integration and increased response variability compared to wakefulness. However, these changes were more pronounced during anesthesia than sleep. Notably, anesthesia was associated with a specific disruption of the prefrontal cortex (PFC), a brain region crucial for higher-order cognitive functions such as decision-making and self-awareness.

Implications for Understanding Consciousness

These findings suggest that the PFC plays a critical role in maintaining consciousness and that its disruption contributes to the loss of consciousness during anesthesia. The study also highlights the distinct neural mechanisms underlying sleep and anesthesia, suggesting that these states involve different modes of brain function.

Thursday, December 21, 2023

Chatbot therapy is risky. It’s also not useless

A.W. Ohlheiser
vox.com
Originally posted 14 Dec 23

Here is an excerpt:

So what are the risks of chatbot therapy?

There are some obvious concerns here: Privacy is a big one. That includes the handling of the training data used to make generative AI tools better at mimicking therapy as well as the privacy of the users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the biases built into many of these systems as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.

But the biggest risk of chatbot therapy — whether it’s poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by not providing good support and care. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not for her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch.

Stade, in her working paper, notes that while large language models have a “promising” capacity to conduct some of the skills needed for psychotherapy, there’s a difference between “simulating therapy skills” and “implementing them effectively.” She noted specific concerns around how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.

Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on that behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on more of that. Over time, that therapist and patient might make the connection between recent life events: Maybe the patient’s husband recently retired. She’s angry because suddenly he’s home all the time, taking up her space.

“So much of therapy is being responsive to emerging context, what you’re seeing, what you’re noticing,” Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.


Here is my take:

The promise of AI in mental health care dances on a delicate knife's edge. Chatbot therapy, with its alluring accessibility and anonymity, tempts us with a quick fix for the ever-growing burden of mental illness. Yet, as with any powerful tool, its potential can be both a balm and a poison, demanding a wise touch for its ethical wielding.

On the one hand, imagine a world where everyone, regardless of location or circumstance, can find a non-judgmental ear, a gentle guide through the labyrinth of their own minds. Chatbots, tireless and endlessly patient, could offer a first step of support, a bridge to human therapy when needed. In the hushed hours of isolation, they could remind us we're not alone, providing solace and fostering resilience.

But let us not be lulled into a false sense of ease. Technology, however sophisticated, lacks the warmth of human connection, the nuanced understanding of a shared gaze, the empathy that breathes life into words. We must remember that a chatbot can never replace the irreplaceable – the human relationship at the heart of genuine healing.

Therefore, our embrace of chatbot therapy must be tempered with prudence. We must ensure adequate safeguards, preventing these tools from masquerading as a panacea that neglects the complex needs of human beings. Transparency is key – users must be aware of the limitations, of the algorithms whispering behind the chatbot's words. Above all, let us never sacrifice the sacred space of therapy for the cold efficiency of code.

Chatbot therapy can be a bridge, a stepping stone, but never the destination. Let us use technology with wisdom, acknowledging its potential good while holding fast to the irreplaceable value of human connection in the intricate tapestry of healing. Only then can we, as mental health professionals, navigate the ethical tightrope and make technology safe and effective, when and where possible.

Wednesday, December 20, 2023

Dehumanization: Beyond the Intergroup to the Interpersonal

Karantzas, G. C., Simpson, J. A., & Haslam, N. (2023).
Current Directions in Psychological Science, 0(0).

Abstract

Over the past two decades, there has been a significant shift in how dehumanization is conceptualized and studied. This shift has broadened the construct from the blatant denial of humanness to groups to include more subtle dehumanization within people’s interpersonal relationships. In this article, we focus on conceptual and empirical advances in the study of dehumanization in interpersonal relationships, with a particular focus on dehumanizing behaviors. In the first section, we describe the concept of interpersonal dehumanization. In the second section, we review social cognitive and behavioral research into interpersonal dehumanization. Within this section, we place special emphasis on the conceptualization and measurement of dehumanizing behaviors. We then propose a conceptual model of interpersonal dehumanization to guide future research. While doing so, we provide a novel review and integration of cutting-edge research on interpersonal dehumanization.

Conclusion

This review shines a spotlight on interpersonal dehumanization, with a specific emphasis on dehumanizing behaviors. Our review highlights that interpersonal dehumanization is a rapidly expanding and innovative field of research. It provides a clearer understanding of the current and emerging directions of research investigating how even subtle forms of negative behavior may, at times, thwart social connection and human bonding. It also provides a theoretical platform for scholars to launch new streams of research on interpersonal dehumanization processes and outcomes.

My summary

Traditionally, dehumanization has been studied in the context of intergroup conflict and prejudice, where individuals or groups are perceived as less human than others. However, recent research has demonstrated that dehumanization can also manifest in interpersonal interactions, affecting how individuals perceive, treat, and interact with each other.

The article argues that interpersonal dehumanization is a prevalent and impactful phenomenon that can have significant consequences for both individuals and relationships. It can lead to reduced empathy, increased hostility, and justification for aggression and violence.

The authors propose a conceptual model of interpersonal dehumanization that identifies three key components:

Dehumanizing Cognitions & Perceptions: The tendency to view others as less human-like, lacking essential human qualities like emotions, thoughts, and feelings.

Dehumanizing Behaviors: Actions or expressions that convey a disregard for another's humanity, such as insults, mockery, or exclusion.

Dehumanizing Consequences: The negative effects of dehumanization on individuals and relationships, including reduced empathy, increased hostility, and justification for aggression.

By understanding the mechanisms and consequences of interpersonal dehumanization, we can better address its prevalence and mitigate its harmful effects. The article concludes by emphasizing the importance of fostering empathy, promoting inclusive environments, and encouraging respectful interactions to combat dehumanization and promote healthy interpersonal relationships.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed. Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.


Here is my summary.

Because algorithms learn from observed user behavior, they absorb the psychological biases that shape that behavior, which can lead them away from what users actually value and, at times, toward discriminatory outcomes. The authors argue that algorithms are not simply objective tools; they embody the values and assumptions of their creators, including the assumption that revealed preferences equal true preferences. To address this, they propose a framework for psychologically informed algorithms that better capture users' normative preferences and enhance social welfare, an approach that goes beyond technical considerations to account for the human element.
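To illustrate the gap the authors describe, here is a hypothetical sketch of my own (not their analysis): a system ranked purely on observed behavior can diverge from what users say they value. All of the data and names below are invented.

# Hypothetical sketch only: revealed vs. normative preferences.
# Invented data: users click sensational items more than their stated
# ratings would suggest, so a click-trained ranking over-serves them.
items = {
    "sensational_story": {"clicks": 90, "stated_rating": 2.0},
    "in_depth_report": {"clicks": 30, "stated_rating": 4.5},
}

# Revealed-preference ranking: order items by observed clicks.
by_clicks = sorted(items, key=lambda k: items[k]["clicks"], reverse=True)

# Normative-preference ranking: order items by what users say they value.
by_stated = sorted(items, key=lambda k: items[k]["stated_rating"], reverse=True)

print("Click-trained ranking:", by_clicks)      # sensational_story first
print("Stated-preference ranking:", by_stated)  # in_depth_report first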