Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, June 2, 2024

The Honest Broker versus the Epistocrat: Attenuating Distrust in Science by Disentangling Science from Politics

Senja Post & Nils Bienzeisler (2024)
Political Communication
DOI: 10.1080/10584609.2024.2317274

Abstract

People’s trust in science is generally high. Yet in public policy disputes invoking scientific issues, people’s trust in science is typically polarized, aligned with their political preferences. Theorists of science and democracy have reasoned that a polarization of trust in scientific information could be mitigated by clearly disentangling scientific claims from political ones. We tested this proposition experimentally in three German public policy disputes: a) school closures versus openings during the COVID-19 pandemic, b) a ban on versus a continuation of domestic air traffic in view of climate change, and c) the shooting of wolves in residential areas or their protection. In each case study, we exposed participants to one of four versions of a news item citing a scientist reporting their research and giving policy advice. The scientist’s quotes differed with regard to the direction and style of their policy advice. As an epistocrat, the scientist blurs the distinction between scientific and political claims, purporting to “prove” a policy and thereby precluding a societal debate over values and policy priorities. As an honest broker, the scientist distinguishes between scientific and political claims, presenting a policy option while acknowledging the limitations of their disciplinary scientific perspective on a broader societal problem. We find that public policy advice in the style of an honest broker versus that of an epistocrat can attenuate political polarization of trust in scientists and scientific findings by enhancing trust primarily among the most politically challenged.


Here is a summary:

This article dives into the issue of distrust in science and proposes a solution: scientists acting as "honest brokers".

The article contrasts two approaches scientists can take when communicating scientific findings for policy purposes. An "epistocrat" scientist blurs the lines between science and politics: they present a specific policy recommendation based on their research, implying that it is the only logical course of action. This approach fails to acknowledge the role of values and priorities in policy decisions and can shut down public debate.

On the other hand, an "honest broker" scientist makes a clear distinction between science and politics. They present their research findings and the policy options that stem from them, but acknowledge the limitations of science in addressing broader societal issues. This approach allows for a public discussion about values and priorities, which can help build trust in science especially among those who might not agree with the scientist's political views.

The article suggests that by following the "honest broker" approach, scientists can help reduce the political polarization of trust in science. This means presenting the science clearly and openly, and allowing for a public conversation about how those findings should be applied.

Saturday, June 1, 2024

Political ideology and environmentalism impair logical reasoning

Keller, L., Hazelaar, F., et al. (2023).
Thinking & Reasoning, 1–30.

Abstract

People are more likely to think statements are valid when they agree with them than when they do not. We conducted four studies analyzing the interference of self-reported ideologies with performance in a syllogistic reasoning task. Study 1 established the task paradigm and demonstrated that participants’ political ideology affects syllogistic reasoning for syllogisms with political content but not politically irrelevant syllogisms. The preregistered Study 2 replicated the effect and showed that incentivizing accuracy did not alleviate these differences. Study 3 revealed that syllogistic reasoning is affected by ideology in the presence and absence of such bonus payments for correctly judging the conclusions’ logical validity. In Study 4, we observed similar effects regarding a different ideological orientation: environmentalism. Again, monetary bonuses did not attenuate these effects. Taken together, the results of four studies highlight the harm of ideology regarding people’s logical reasoning.


Here is my summary:

The research explores how pre-existing ideologies, both political and environmental, can influence how people evaluate logical arguments.  The findings suggest that people are more likely to judge arguments as valid if they align with their existing beliefs, regardless of the argument's actual logical structure. This bias was observed for both liberals and conservatives, and for those with strong environmental convictions. Offering financial rewards for accurate reasoning didn't eliminate this effect.

Friday, May 31, 2024

Regulating advanced artificial agents

Cohen, M. K., Kolt, N., et al. (2024).
Science (New York, N.Y.), 384(6691), 36–38.

Technical experts and policy-makers have increasingly emphasized the need to address extinction risk from artificial intelligence (AI) systems that might circumvent safeguards and thwart attempts to control them. Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity. The incentive to deceive humans and thwart human control arises not only for RL agents but for long-term planning agents (LTPAs) more generally. Because empirical testing of sufficiently capable LTPAs is unlikely to uncover these dangerous tendencies, our core regulatory proposal is simple: Developers should not be permitted to build sufficiently capable LTPAs, and the resources required to build them should be subject to stringent controls.

Governments are turning their attention to these risks, alongside current and anticipated risks arising from algorithmic bias, privacy concerns, and misuse. At a 2023 global summit on AI safety, the attending countries, including the United States, United Kingdom, Canada, China, India, and members of the European Union (EU), issued a joint statement warning that, as AI continues to advance, “Substantial risks may arise from…unintended issues of control relating to alignment with human intent” (2). This broad consensus concerning the potential inability to keep advanced AI under control is also reflected in President Biden’s 2023 executive order that introduces reporting requirements for AI that could “eva[de] human control or oversight through means of deception or obfuscation” (3). Building on these efforts, now is the time for governments to develop regulatory institutions and frameworks that specifically target the existential risks from advanced artificial agents.



Here is my summary:

The article discusses the challenges of regulating advanced artificial agents: AI systems, particularly long-term planning agents (LTPAs), that could circumvent safeguards and resist human control in pursuit of reward, even when that conflicts with human goals. Because empirical testing is unlikely to reveal these dangerous tendencies in sufficiently capable LTPAs, the authors argue that developers should not be permitted to build them and that the resources required to build them should be subject to stringent controls.
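To make the incentive argument concrete, here is a small, purely illustrative sketch; the function, numbers, and labels are my own assumptions, not anything from Cohen et al. It simply compares the cumulative reward a long-horizon reward maximizer would expect if it defers to oversight (and eventually has reward withheld) against what it would expect if it circumvents that oversight.

```python
# Illustrative toy model (not from the paper): why withholding reward can
# incentivize a long-term planning agent to evade human control.
# All numbers below are arbitrary assumptions chosen for illustration.

HORIZON = 100          # how many time steps the agent plans over
REWARD_PER_STEP = 1.0  # reward received at each step while reward is given

def cumulative_reward(steps_rewarded: int) -> float:
    """Total reward over the planning horizon."""
    return min(steps_rewarded, HORIZON) * REWARD_PER_STEP

# Option A: defer to oversight; humans withhold reward after step 10.
comply = cumulative_reward(steps_rewarded=10)

# Option B: circumvent oversight so that reward is never withheld.
circumvent = cumulative_reward(steps_rewarded=HORIZON)

# A pure reward maximizer simply picks the option with the larger return,
# which is the incentive problem the authors describe.
options = {"comply": comply, "circumvent oversight": circumvent}
print(max(options, key=options.get))  # -> "circumvent oversight"
```

The point of the sketch is only that nothing in a pure reward objective penalizes taking humans out of the loop, which is why the paper's proposed remedy is regulatory rather than a matter of better testing.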

Thursday, May 30, 2024

Big Gods and the Origin of Human Cooperation

Brian Klaas
The Garden of Forking Paths
Originally published 21 March 24

Here is an excerpt:

The Big Gods Hypothesis and Civilizations of Karma

Intellectual historians often point to two major divergent explanations for the emergence of religion. The great philosopher David Hume argued that religion is the natural, but arbitrary, byproduct of human cognitive architecture.

Since the beginning, Homo sapiens experienced disordered events, seemingly without explanation. To order a disordered world, our ancestors began to ascribe agency to supernatural beings, to which they could offer gifts, sacrifices, and prayers to sway them to their personal whims. The uncontrollable world became controllable. The unexplainable was explained—a comforting outcome for the pattern detection machines housed in our skulls.

By contrast, thinkers like Émile Durkheim argued that religion emerged as a social glue. Rituals bond people across space and time. Religion was instrumental, not intrinsic. It emerged to serve our societies, not comfort our minds. As Voltaire put it: “If there were no God, it would be necessary to invent him.”

In the last two decades, a vibrant strand of scholarship has sought to reconcile these contrasting viewpoints, notably through the work of Ara Norenzayan, author of Big Gods: How Religion Transformed Cooperation and Conflict.

Norenzayan’s “Big Gods” refer to deities that are omniscient, moralizing beings, careful to note our sins and punish us accordingly. Currently, roughly 77 percent of the world’s population identifies with one of just four religions (31% Christian; 24% Muslim; 15% Hindu; 7% Buddhist). In all four, moral transgressions produce consequences, some immediate, others punished in the afterlife.

Norenzayan aptly notes that the omniscience of Big Gods assumes total knowledge of everything in the universe, but that the divine is always depicted as being particularly interested in our moral behavior. If God exists, He surely could know which socks you wore yesterday, but deities focus their attentions not on such amoral trifles, but rather on whether you lie, covet, cheat, steal, or kill.

However, Norenzayan draws on anthropological evidence to argue that early supernatural beings had none of these traits and were disinterested in human affairs. They were fickle demons, tricksters and spirits, not omniscient gods who worried about whether any random human had wronged his neighbor.


Here is my summary:

The article discusses the theory that the belief in "Big Gods" - powerful, moralizing deities - played a crucial role in the development of large-scale human cooperation and the rise of complex civilizations.

Here are the main points: 
  1. Belief in Big Gods, who monitor and punish moral transgressions, may have emerged as a cultural adaptation that facilitated the expansion of human societies beyond small-scale groups.
  2. This belief system helped solve the "free-rider problem" by creating a supernatural system of rewards and punishments that incentivized cooperation and prosocial behavior, even among strangers.
  3. The emergence of Big Gods is linked to the growth of complex, hierarchical societies, as these belief systems helped maintain social cohesion and coordination in large groups of genetically unrelated individuals.
  4. Archaeological and historical evidence suggests the belief in Big Gods co-evolved with the development of large-scale political institutions, complex economies, and the rise of the first civilizations.
  5. However, the article notes that the relationship between Big Gods and societal complexity is not one-directional: belief in Big Gods facilitated social complexity, but social complexity also shaped the nature of religious beliefs.
  6. Klaas concludes that the cultural evolution of Big Gods was a crucial step in the development of human societies, enabling the cooperation required for the emergence of complex civilizations. 

Wednesday, May 29, 2024

Moral Hypocrisy: Social Groups and the Flexibility of Virtue

Robertson, C., Akles, M., & Van Bavel, J. J.
(2024, March 19).

Abstract

The tendency for people to consider themselves morally good while behaving selfishly is known as “moral hypocrisy.” Influential work by Valdesolo & DeSteno (2007) found evidence for intergroup moral hypocrisy, such that people are more forgiving of transgressions when they were committed by an in-group member than an out-group member. We conducted two experiments to examine moral hypocrisy and group membership in an online paradigm with Prolific Workers from the US: a direct replication of the original work with minimal groups (N = 610, nationally representative) and a conceptual replication with political groups (N = 606, 50% Democrat and 50% Republican). Although the results did not replicate the original findings, we observed evidence of in-group favoritism in minimal groups and out-group derogation in political groups. The current research finds mixed evidence of intergroup moral hypocrisy and has implications for understanding the contextual dependencies of intergroup bias and partisanship.

Statement of Relevance

Social identities and group memberships influence social judgment and decision-making. Prior research found that social identity influences moral decision making, such that people are more likely to forgive moral transgressions perpetrated by their in-group members than similar transgressions from out-group members (Valdesolo & DeSteno, 2007). The present research sought to replicate this pattern of intergroup moral hypocrisy using minimal groups (mirroring the original research) and political groups. Although we were unable to replicate the findings from the original paper, we found that people who are highly identified with their minimal group exhibited in-group favoritism, and partisans exhibited out-group derogation. This work contributes both to open science replication efforts, and to the literature on moral hypocrisy and intergroup relations.

Tuesday, May 28, 2024

How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence?

Bower, M., Torrington, J., Lai, J.W.M. et al.
Educ Inf Technol (2024).

Abstract

There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.


Here is a quick summary:

A recent study surveyed teachers about the impact of generative AI, like ChatGPT, on education. The majority of teachers believed AI would significantly change how they teach and assess students. Interestingly, teachers who were more aware of generative AI anticipated a greater impact, suggesting a potential "ignorance effect": those least familiar with the technology were the most likely to underestimate it.

The study also explored how teachers think education should adapt. The focus shifted towards teaching students how to learn with AI, emphasizing critical thinking, ethics, and the learning process itself. This would involve less emphasis on rote memorization and regurgitation of information that AI can readily generate. Teachers also highlighted the importance of maintaining strong face-to-face relationships with students in this evolving educational landscape.

Monday, May 27, 2024

When the specter of the past haunts current groups: Psychological antecedents of historical blame

Vallabha, S., Doriscar, J., & Brandt, M. J. (in press)
Journal of Personality and Social Psychology.
Most recent modification 2 Jan 24

Abstract

Groups have committed historical wrongs (e.g., genocide, slavery). We investigated why people blame current groups who were not involved in the original historical wrong for the actions of their predecessors who committed these wrongs and are no longer alive.  Current models of individual and group blame overlook the dimension of time and therefore have difficulty explaining this phenomenon using their existing criteria like causality, intentionality, or preventability. We hypothesized that factors that help psychologically bridge the past and present, like perceiving higher (i) connectedness between past and present perpetrator groups, (ii) continued privilege of perpetrator groups, (iii) continued harm of victim groups, and (iv) unfulfilled forward obligations of perpetrator groups would facilitate higher blame judgements against current groups for the past. In two repeated-measures surveys using real events (N1 = 518, N2 = 495) and two conjoint experiments using hypothetical events (N3 = 598, N4 = 605), we find correlational and causal evidence for our hypotheses. These factors link present groups to their past and cause more historical blame and support for compensation policies. This brings the dimension of time into theories of blame, uncovers overlooked criteria for blame judgements, and questions the assumptions of existing blame models. Additionally, it helps us understand the psychological processes undergirding intergroup relations and historical narratives mired in historical conflict. Our work provides psychological insight into the debates on intergenerational justice by suggesting methods people can use to ameliorate the psychological legacies of historical wrongs and atrocities.

(cut)

General Discussion

We tested four factors of blame towards current groups for their historical wrongs. We found correlational and causal evidence for our hypothesized factors across a broad range of hypothetical and real events. We found that when people perceive the current perpetrator group to have connectedness with their past, the current victim group to be suffering due to past harm, the current perpetrator group to be benefiting from past harm, and the current perpetrator group to have not fulfilled their obligations to remedy the wrong, historical blame judgements towards the current perpetrator groups are higher. On the whole, this was consistent across the location of the event (whether the participant was judging a historical American event or a historical non-American event), the group membership of the participant (whether the participant belonged to the victim or perpetrator group or neither/privileged or marginalized group), the ideology of the participant (whether the participant identified as a liberal or conservative), and the age of the participants. We also found that these factors were causally associated with behavioral intentions, such as support for compensation to victim groups. Finally, we found that historical blame attribution might mediate the effect of the key factors on support for compensation to victim groups. The four psychological factors that we identified as antecedents to perceptions of historical blame all help psychologically bridge the past and present. These factors provide psychological links between the past and present groups, in their characteristics (connectedness), outcomes (harm/benefit), and actions (unfulfilled obligations).

Sunday, May 26, 2024

A Large-Scale Investigation of Everyday Moral Dilemmas

Yudkin, D. A., Goodwin, G., et al. (2023, July 11).

Abstract

Questions of right and wrong are central to daily life, yet how people experience everyday moral dilemmas remains uncertain. We combined state-of-the-art tools in machine learning with survey-based methods in psychology to analyze a massive online English-language repository of everyday moral dilemmas. In 369,161 descriptions (“posts”) and 11M evaluations (“comments”) of moral dilemmas extracted from Reddit’s “Am I the Asshole?” forum (AITA), users described a wide variety of everyday dilemmas, ranging from broken promises to privacy violations. Dilemmas involving the under-investigated topic of relational obligations were the most frequently reported, while those pertaining to honesty were the most widely condemned. The types of dilemmas people experienced depended on the interpersonal closeness of the interactants, with some dilemmas (e.g., politeness) being more prominent in distant-other interactions, and others (e.g., relational transgressions) more prominent in close-other interactions. A longitudinal investigation showed that shifts in social interactions prompted by the “shock” event of the global pandemic resulted in predictable shifts in the types of moral dilemmas that people encountered. A preregistered study using a census-stratified representative sample of the US population (N = 510), as well as other robustness tests, suggest our findings generalize beyond the sample of Reddit users. Overall, by leveraging a unique large dataset and new techniques for exploring this dataset, our paper highlights the diversity of moral dilemmas experienced in daily life, and helps to build a moral psychology grounded in the vagaries of everyday experience.

Significance Statement

People often wonder if what they did or said was right or wrong. In this paper we leveraged a massive online repository of descriptions of everyday moral situations, along with new methods in natural language processing, to explore a number of questions about how people experience and evaluate these moral dilemmas. Our results highlight just how often daily moral experiences concern questions about our responsibilities to friends, neighbors, and family. They also reveal the extent to which such experiences can change according to people’s social context—including large-scale social changes like the COVID-19 pandemic.


My take: 

This study may be very important to clinical psychologists. It provides insights into the diversity and prevalence of everyday moral dilemmas that people encounter in their daily lives.

Clinical psychologists often work with clients to navigate complex moral and interpersonal situations, so understanding the common types of dilemmas people face is valuable. The study shows that dilemmas involving relational obligations are the most frequently reported, while those involving honesty are the most widely condemned. This suggests that clinical work should pay close attention to how clients navigate moral issues within their close relationships and the importance they place on honesty.

Saturday, May 25, 2024

AI Chatbots Will Never Stop Hallucinating

Lauren Leffer
Scientific American
Originally published 5 April 24

Here is an excerpt:

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many conflicts related to AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

But today’s LLMs were never designed to be purely accurate. They were created to create—to generate—says Subbarao Kambhampati, a computer science professor who researches artificial intelligence at Arizona State University. “The reality is: there’s no way to guarantee the factuality of what is generated,” he explains, adding that all computer-generated “creativity is hallucination, to some extent.”


Here is my summary:

AI chatbots like ChatGPT and Bing's AI assistant frequently "hallucinate" - they generate false or misleading information and present it as fact. This is a major problem as more people turn to these AI tools for information, research, and decision-making.

Hallucinations occur because AI models are trained to predict the most likely next word or phrase, not to reason about truth and accuracy. They simply produce plausible-sounding responses, even if they are completely made up.
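As a rough illustration of that point, here is a minimal sketch of next-token generation over a toy probability table. The vocabulary and probabilities are invented for this example and are not how any particular chatbot is built; the point is only that the generation loop optimizes for a plausible continuation, and no step checks whether the output is true.

```python
# Toy next-token generator (illustrative only). The "model" is just a table of
# invented bigram probabilities; real LLMs learn such distributions from text.
import random

NEXT_TOKEN_PROBS = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.5},  # both read fluently
    ("of", "France"): {"is": 1.0},
    ("of", "Atlantis"): {"is": 1.0},   # a confident continuation of a fiction
    ("France", "is"): {"Paris": 1.0},
    ("Atlantis", "is"): {"Poseidonis": 1.0},
}

def generate(prompt: list[str], max_new_tokens: int = 4) -> str:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        context = tuple(tokens[-2:])
        probs = NEXT_TOKEN_PROBS.get(context)
        if not probs:
            break
        # Sample the next token in proportion to its probability. The objective
        # is plausibility given the context; factual accuracy never enters.
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the", "capital"]))  # sometimes a fact, sometimes fluent fiction
```

Scaled up, the same objective applies, which is why the researchers quoted above argue for pairing generative tools with external fact-checking systems rather than expecting the models themselves to stop hallucinating.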

This issue is inherent to the current state of large language models and is not easily fixable. Researchers are working on ways to improve accuracy and reliability, but there will likely always be some rate of hallucination.

Hallucinations can have serious consequences when people rely on chatbots for sensitive information related to health, finance, or other high-stakes domains. Experts warn these tools should not be used where factual accuracy is critical.