Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 18, 2023

We need an AI rights movement

Jacy Reese Anthis
The Hill
Originally posted 23 MAR 23

New artificial intelligence technologies like the recent release of GPT-4 have stunned even the most optimistic researchers. Language transformer models like this and Bing AI are capable of conversations that feel like talking to a human, and image diffusion models such as Midjourney and Stable Diffusion produce what looks like better digital art than the vast majority of us can create.

It’s only natural, after having grown up with AI in science fiction, to wonder what’s really going on inside the chatbot’s head. Supporters and critics alike have ruthlessly probed their capabilities with countless examples of genius and idiocy. Yet seemingly every public intellectual has a confident opinion on what the models can and can’t do, such as claims from Gary Marcus, Judea Pearl, Noam Chomsky, and others that the models lack causal understanding.

But because tools like ChatGPT, which is built on GPT-4, are publicly accessible, we can put these claims to the test. If you ask ChatGPT why an apple falls, it gives a reasonable explanation of gravity. You can even ask ChatGPT what happens to an apple released from the hand if there is no gravity, and it correctly tells you the apple will stay in place.

Despite these advances, there seems to be consensus at least that these models are not sentient. They have no inner life, no happiness or suffering, at least no more than an insect. 

But it may not be long before they do, and our concepts of language, understanding, agency, and sentience are deeply insufficient to assess the AI systems that are becoming digital minds integrated into society with the capacity to be our friends, coworkers, and — perhaps one day — to be sentient beings with rights and personhood. 

AIs are no longer mere tools like smartphones and electric cars, and we cannot treat them in the same way as mindless technologies. A new dawn is breaking. 

This is just one of many reasons why we need to build a new field of digital minds research and an AI rights movement to ensure that, if the minds we create are sentient, they have their rights protected. Scientists have long proposed the Turing test, in which human judges try to distinguish an AI from a human by speaking to it. But digital minds may be too strange for this approach to tell us what we need to know. 

Monday, April 17, 2023

Generalized Morality Culturally Evolves as an Adaptive Heuristic in Large Social Networks

Jackson, J. C., Halberstadt, J., et al.
(2023, March 22).

Abstract

Why do people assume that a generous person should also be honest? Why can a single criminal conviction destroy someone’s moral reputation? And why do we even use words like “moral” and “immoral”? We explore these questions with a new model of how people perceive moral character. According to this model, people can vary in the extent that they perceive moral character as “localized” (varying across many contextually embedded dimensions) vs. “generalized” (varying along a single dimension from morally bad to morally good). This variation might be at least partly the product of cultural evolutionary adaptations to predicting cooperation in different kinds of social networks. As networks grow larger and more complex, perceptions of generalized morality are increasingly valuable for predicting cooperation during partner selection, especially in novel contexts. Our studies show that social network size correlates with perceptions of generalized morality in US and international samples (Study 1), and that East African hunter-gatherers with greater exposure outside their local region perceive morality as more generalized compared to those who have remained in their local region (Study 2). We support the adaptive value of generalized morality in large and unfamiliar social networks with an agent-based model (Study 3), and experimentally show that generalized morality outperforms localized morality when people predict cooperation in contexts where they have incomplete information about previous partner behavior (Study 4). Our final study shows that perceptions of morality have become more generalized over the last 200 years of English-language history, which suggests that it may be co-evolving with rising social complexity and anonymity in the English-speaking world (Study 5). We also present several supplemental studies which extend our findings. 
We close by discussing the implications of this theory for the cultural evolution of political systems, religion, and taxonomical theories of morality.

General Discussion

The word “moral” has taken a strange journey over the last several centuries. The word did not yet exist when Plato and Aristotle composed their theories of virtue. It was only when Cicero translated Aristotle’s Nicomachean Ethics that he coined the term “moralis” as the Latin translation of Aristotle’s “ēthikós” (Online Etymology Dictionary, n.d.). It is an ironic slight to Aristotle—who favored concrete particulars in lieu of abstract forms—that the word has become increasingly abstract and all-encompassing throughout its lexical evolution, with a meaning that now approaches Plato’s “form of the good.” We doubt that this semantic drift is a coincidence.

Instead, it may signify a cultural evolutionary shift in people’s perceptions of moral character as increasingly generalized as people inhabit increasingly larger and more unfamiliar social networks. Here we support this perspective with five studies. Studies 1-2 find that social network size correlates with the prevalence of generalized morality. Studies 1a-b explicitly tie beliefs in generalized morality to social network size with large surveys.  Study 2 conceptually replicates this finding in a Hadza hunter-gatherer camp, showing that Hadza hunter-gatherers with more external exposure perceive their campmates using more generalized morality. Studies 3-4 show that generalized morality can be adaptive for predicting cooperation in large and unfamiliar networks. Study 3 is an agent-based model which shows that, given plausible assumptions, generalized morality becomes increasingly valuable as social networks grow larger and less familiar. Study 4 is an experiment which shows that generalized morality is particularly valuable when people interact with unfamiliar partners in novel situations. Finally, Study 5 shows that generalized morality has risen over English-language history, such that words for moral attributes (e.g., fair, loyal, caring) have become more semantically generalizable over the last two hundred years of human history.

Sunday, April 16, 2023

The Relationship between Compulsive Sexual Behavior, Religiosity, and Moral Disapproval

Jennings, T., Lyng, T., et al. (2021).
Journal of Behavioral Addictions, 10(4), 854-878.
https://doi.org/10.1556/2006.2021.00084

Abstract

Compulsive sexual behavior (CSB) is associated with religiosity and moral disapproval for sexual behaviors, and religiosity and moral disapproval are often used interchangeably in understanding moral incongruence. The present study expands prior research by examining relationships between several religious orientations and CSB and testing how moral disapproval contributes to these relationships via mediation analysis. Results indicated that religious orientations reflecting commitment to beliefs and rigidity in adhering to beliefs predicted greater CSB. Additionally, moral disapproval mediated relationships between several religiosity orientations and CSB. Overall, findings suggest that religiosity and moral disapproval are related constructs that aid in understanding CSB presentations.

From the Discussion Section

The relationship between CSB, religiosity, and spirituality

In general, the present review found that most studies reported a small to moderate positive relationship between CSB and religiosity. However, there were also many non-significant relationships reported (Kohut & Stulhofer, 2018; Reid et al., 2016; Skegg et al., 2010), as well as many associations that were very weak (Grubbs, Grant, et al., 2018; Grubbs, Kraus, et al., 2020; Lewczuk et al., 2020). The variety of measurement tools used and constructs assessed across the literature makes it difficult to draw more specific conclusions about the relationships between CSB and religiosity or spirituality. Divergent findings in the literature may be explained, in part, by the diverse measurement choices of researchers, as different aspects of CSB, religiosity, and spirituality are bound to have unique relationships with each other.

There are several notable considerations that may contribute to more consistent identification of a relationship between CSB and religiosity or spirituality. One of the most well-studied relationships in the literature is the association between PPU (Problematic Pornography Use) and an aggregate measure of belief salience and religious participation, which, as noted in the meta-analysis by Grubbs, Perry, et al. (2019), have consistently been positively associated. This relationship is strongly mediated by moral incongruence, with this path accounting for a large portion of the variance. Notably, recent research indicates that MI is better conceptualized as an interactive effect of pornography use and moral disapproval of pornography (Grubbs, Kraus, et al., 2020; Grubbs, Lee, et al., 2020). These studies report that moral disapproval moderates the relationship between pornography use and PPU such that pornography use is more strongly related to PPU at higher levels of moral disapproval.

These considerations are especially important in evaluating the literature because many studies identified in the present review did not consider the possible mediating or moderating role of moral incongruence. Therefore, it stands to reason that many of the small to moderate associations identified in the present review are due to the absence of these variables.

Saturday, April 15, 2023

Resolving content moderation dilemmas between free speech and harmful misinformation

Kozyreva, A., Herzog, S. M., et al. (2023). 
PNAS, 120(7).
https://doi.org/10.1073/pnas.2210666120

Abstract

In online content moderation, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with this conflict in a principled way, yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where US respondents (N = 2,564) indicated whether they would remove problematic social media posts on election denial, antivaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more reluctant to suspend accounts than to remove posts and more likely to do either if the harmful consequences of the misinformation were severe or if sharing it was a repeated offense. Features related to the account itself (the person behind the account, their partisanship, and number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. Our results can inform the design of transparent rules for content moderation of harmful misinformation.

Significance

Content moderation of online speech is a moral minefield, especially when two key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. Currently, these decisions are made without any knowledge of how people would approach them. In our study, we systematically varied factors that could influence moral judgments and found that despite significant differences along political lines, most US citizens preferred quashing harmful misinformation over protecting free speech. Furthermore, people were more likely to remove posts and suspend accounts if the consequences of the misinformation were severe or if it was a repeated offense. Our results can inform the design of transparent, consistent rules for content moderation that the general public accepts as legitimate.

Discussion

Content moderation is controversial and consequential. Regulators are reluctant to restrict harmful but legal content such as misinformation, thereby leaving platforms to decide what content to allow and what to ban. At the heart of policy approaches to online content moderation are trade-offs between fundamental values such as freedom of expression and the protection of public health. In our investigation of which aspects of content moderation dilemmas affect people’s choices about these trade-offs and what impact individual attitudes have on these decisions, we found that respondents’ willingness to remove posts or to suspend an account increased with the severity of the consequences of misinformation and whether the account had previously posted misinformation. The topic of the misinformation also mattered—climate change denial was acted on the least, whereas Holocaust denial and election denial were acted on more often, closely followed by antivaccination content. In contrast, features of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. In sum, the individual characteristics of those who spread misinformation mattered little, whereas the amount of harm, repeated offenses, and type of content mattered the most.

Friday, April 14, 2023

The moral authority of ChatGPT

Krügel, S., Ostermaier, A., & Uhl, M.
arxiv.org
Posted in 2023

Abstract

ChatGPT is not only fun to chat with, but it also searches for information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out to be highly inconsistent as a moral advisor. Nonetheless, we find in an experiment that it influences users’ moral judgment, even if they know they are advised by a chatbot, and they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improve users’ judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.

Discussion

We find that ChatGPT readily dispenses moral advice although it lacks a firm moral stance. Indeed, the chatbot gives randomly opposite advice on the same moral issue.  Nonetheless, ChatGPT’s advice influences users’ moral judgment. Moreover, users underestimate ChatGPT’s influence and adopt its random moral stance as their own. Hence, ChatGPT threatens to corrupt rather than promises to improve moral judgment. Transparency is often proposed as a means to ensure the responsible use of AI. However, transparency about ChatGPT being a bot that imitates human speech does not turn out to affect how much it influences users.

Our results raise the question of how to ensure the responsible use of AI if transparency is not good enough. Rules that preclude the AI from answering certain questions are a questionable remedy. ChatGPT has such rules but can be brought to break them. Prior evidence suggests that users are careful about AI once they have seen it err. However, we probably should not count on users to find out about ChatGPT’s inconsistency through repeated interaction. The best remedy we can think of is to improve users’ digital literacy and help them understand the limitations of AI.

Thursday, April 13, 2023

Why artificial intelligence needs to understand consequences

Neil Savage
Nature
Originally published 24 FEB 23

Here is an excerpt:

The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.

In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.

A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
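The “many models under interventions” idea can be sketched in a few lines. The following toy structural model is our own illustrative construction (the rain/wet-grass variables and probabilities are assumptions, not from the article): an intervention fixes one variable, and we observe how the outcome distribution shifts while the underlying mechanism stays the same.

```python
import random

random.seed(0)

def simulate(do_rain=None, n=10_000):
    """Toy structural causal model: rain -> wet grass.
    do_rain=None samples rain naturally; do_rain=True/False is an
    intervention that fixes the variable (Pearl's 'doing' level)."""
    wet = 0
    for _ in range(n):
        rain = (random.random() < 0.3) if do_rain is None else do_rain
        # Wet grass depends on rain in both regimes; the mechanism is unchanged.
        wet += random.random() < (0.9 if rain else 0.1)
    return wet / n

baseline = simulate()              # observational estimate of P(wet)
forced = simulate(do_rain=True)    # interventional estimate of P(wet | do(rain))
```

Comparing `baseline` with `forced` is the “doing” step: the same mechanism is evaluated under an altered value of one variable, which is exactly what an association-only model cannot do.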

Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.

This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.
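A heavily simplified version of this generate-and-score loop can be written directly. The sketch below is our own construction, not Bengio’s actual method: instead of a neural network proposing graphs, it exhaustively scores two candidate graphs (X causes Y versus Y causes X) against interventional data, where the two hypotheses make different predictions.

```python
import math
import random

random.seed(0)

# Ground truth: X causes Y. The data are interventional -- X is set externally
# by a coin flip, so only a graph with an X -> Y arrow predicts the dependence.
data = [(x, random.random() < (0.8 if x else 0.2))
        for x in (random.random() < 0.5 for _ in range(2000))]

def p_true(pairs):
    """Empirical probability that the second element of each pair is True."""
    pairs = list(pairs)
    return sum(y for _, y in pairs) / max(len(pairs), 1)

def loglik_x_to_y(data):
    # X -> Y: fit P(Y | X) separately for each value of X.
    p = {x: p_true((a, b) for a, b in data if a == x) for x in (False, True)}
    return sum(math.log(p[x] if y else 1.0 - p[x]) for x, y in data)

def loglik_y_to_x(data):
    # Y -> X: under do(X), this graph predicts Y is independent of X,
    # so Y is fit by a single marginal probability.
    p = p_true(data)
    return sum(math.log(p if y else 1.0 - p) for x, y in data)

# Score both candidate graphs and keep the better fit.
scores = {"x->y": loglik_x_to_y(data), "y->x": loglik_y_to_x(data)}
best = max(scores, key=scores.get)
```

Bengio’s system replaces the exhaustive scoring here with a network that learns to propose graphs like those that have scored well, but the fitness signal is the same: how well a candidate graph explains the data.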

Wednesday, April 12, 2023

Why Americans Hate Political Division but Can’t Resist Being Divisive

Will Blakely & Kurt Gray
Moral Understanding Substack
Originally posted 21 FEB 23

No one likes polarization. According to a recent poll, 93% of Americans say it is important to reduce the country's current divides, including two-thirds who say it is very important to do so. In a recent FiveThirtyEight poll, polarization ranked third on a list of the 20 most important issues facing America. Which is… puzzling.

The puzzle is this: How can we be so divided if no one wants to be? Who are the hypocrites causing division and hatred while paying lip service to compromise and tolerance?

If you ask everyday Americans, they’ve got their answer. It’s the elites. Tucker Carlson, AOC, Donald Trump, and MSNBC. While these actors certainly are polarizing, it takes two to tango. We, the people, share some of the blame too. Even us, writing this newsletter, and even you, dear reader.

But this leaves us with a tricky question, why would we contribute to a divide that we can’t stand? To answer this question, we need to understand the biases and motivations that influence how we answer the question, “Who’s at fault here?” And more importantly, we need to understand the strategies that can get us out of conflict.

The Blame Game

The Blame Game comes in two flavors: either/or. Adam or Eve, Will Smith or Chris Rock, Amber Heard or Johnny Depp. When assigning blame in bad situations, our minds are dramatic. Psychology studies show that we tend to assign 100% of the blame to the person we see as the aggressor, and 0% to the side we see as the victim. So, what happens when all the people who are against polarization assign blame for polarization? You guessed it. They give 100% of the blame to the opposing party and 0% to their own. They “morally typecast” themselves as 100% the victim of polarization and the other side as 100% the perpetrator.

We call this moral “typecasting” because people’s minds firmly cast others into roles of victim and victimizer in the same way that actors get typecasted in certain roles. In the world of politics, if you’re a Democrat, you cast Republicans as victimizers, as consistently as Hollywood directors cast Kevin Hart as comic relief and Danny Trejo as a laconic villain.

But why do we rush to this all-or-nothing approach when the world is certainly more complicated? It’s because our brains love simplicity. In the realm of blame, we want one simple cause. In his recent book “Complicit,” Max Bazerman, a professor at Harvard Business School, illustrates just how widespread this “monocausality bias” is. Bazerman gave a group of business executives the opportunity to allocate blame after reviewing a case of business fraud. Sixty-two of the 78 business leaders wrote down only one cause. Despite being given ample time and a myriad of potential causes, these executives intuitively reached for their Ockham’s razor. In the same way, we all rush to blame a sputtering economy on the president, a loss on a kicker’s missed field goal, or polarization on the other side.

Tuesday, April 11, 2023

Justice before Expediency: Robust Intuitive Concern for Rights Protection in Criminalization Decisions

Bystranowski, P., & Hannikainen, I. R.
Review of Philosophy and Psychology (2023).

Abstract

The notion that a false positive (false conviction) is worse than a false negative (false acquittal) is a deep-seated commitment in the theory of criminal law. Its most illustrious formulation, the so-called Blackstone’s ratio, affirms that “it is better that ten guilty persons escape than that one innocent suffer”. Are people’s evaluations of criminal statutes consistent with this tenet of the Western legal tradition? To answer this question, we conducted three experiments (total N = 2492) investigating how people reason about a particular class of offenses—proxy crimes—known to vary in their specificity and sensitivity in predicting actual crime. By manipulating the extent to which proxy crimes convict the innocent and acquit those guilty of a target offense, we uncovered evidence that attitudes toward proxy criminalization depend primarily on its propensity toward false positives, with false negatives exerting a substantially weaker effect. This tendency arose across multiple experimental conditions—whether we matched the rates of false positives and false negatives or their frequencies, whether information was presented visually or numerically, and whether decisions were made under time pressure or after a forced delay—and was unrelated to participants’ probability literacy or their professed views on the purpose of criminal punishment. Despite the observed inattentiveness to false negatives, when asked to justify their decisions, participants retrospectively supported their judgments by highlighting the proxy crime’s efficacy (or inefficacy) in combating crime. These results reveal a striking inconsistency: people favor criminal policies that protect the rights of the innocent, but report comparable concern for their expediency in fighting crime.

From the Discussion Section

Our results may bear on the debate between two broad camps that have dominated the theoretical landscape of criminal law. Consequentialists argue that new criminal offenses may be rightfully introduced as long as their benefits, primarily, their effectiveness in combating crime, outweigh their social costs. For example, the decision to approve a travel ban should rely on a calculus integrating both the ban’s capacity to hinder terrorist operations and intercept the terrorists themselves, as well as its detriment to well-meaning travelers. If the former exceeds the latter, there is reason to support the proxy crime—otherwise not (Teichman 2017).

In contrast, non-consequentialists advocate certain categorical constraints on the legitimate scope of criminalization—one of which is non-infringement on the rights of the innocent. From a non-consequentialist perspective, convicting the innocent violates a fundamental tenet of criminal law, and is therefore wrong even if doing so would come with enormous benefits for a law’s expediency—and, in turn, for social welfare. Specifically, negative retributivism is, roughly, the claim that the state has a categorical obligation not to punish innocents nor punish the guilty more than they deserve; but it does not have a similar moral obligation to punish all offenders (Bystranowski 2017; Hoskins and Duff, 2021).

Monday, April 10, 2023

Revealing the neurobiology underlying interpersonal neural synchronization with multimodal data fusion

Lotter, L. D., Kohl, S. H., et al. (2023).
Neuroscience & Biobehavioral Reviews,
146, 105042. 

Abstract

Humans synchronize with one another to foster successful interactions. Here, we use a multimodal data fusion approach with the aim of elucidating the neurobiological mechanisms by which interpersonal neural synchronization (INS) occurs. Our meta-analysis of 22 functional magnetic resonance imaging and 69 near-infrared spectroscopy hyperscanning experiments (740 and 3721 subjects) revealed robust brain regional correlates of INS in the right temporoparietal junction and left ventral prefrontal cortex. Integrating this meta-analytic information with public databases, biobehavioral and brain-functional association analyses suggested that INS involves sensory-integrative hubs with functional connections to mentalizing and attention networks. On the molecular and genetic levels, we found INS to be associated with GABAergic neurotransmission and layer IV/V neuronal circuits, protracted developmental gene expression patterns, and disorders of neurodevelopment. Although limited by the indirect nature of phenotypic-molecular association analyses, our findings generate new testable hypotheses on the neurobiological basis of INS.

Highlights

• When we interact, both our behavior and our neural activity synchronize.

• Neuroimaging meta-analysis and multimodal data fusion may reveal neural mechanisms.

• Robust involvement of right temporoparietal and left prefrontal brain regions.

• Associations to attention and mentalizing, GABA and layer IV/V neurotransmission.

• Brain-wide associated genes are enriched in neurodevelopmental disorders.

Discussion

In recent years, synchronization of brain activities between interacting partners has been acknowledged as a central mechanism by which we foster successful social relationships as well as a potential factor involved in the pathogenesis of diverse neuropsychiatric disorders. Based on the results generated by our multimodal data fusion approach (see Fig. 5), we hypothesized that human INS is tightly linked to social attentional processing, subserved by the rTPJ as a sensory integration hub at the brain system level, and potentially facilitated by GABA-mediated E/I balance at the neurophysiological level.


Note: Interpersonal neural synchronization is a fascinating area of research. Understanding how to improve synchronization may help make psychotherapy more effective.