Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, July 21, 2025

Emotion and deliberative reasoning in moral judgment.

Cummins, D. D., & Cummins, R. C. (2012).
Frontiers in Psychology, 3, 328.

Abstract

According to an influential dual-process model, a moral judgment is the outcome of a rapid, affect-laden process and a slower, deliberative process. If these outputs conflict, decision time is increased in order to resolve the conflict. Violations of deontological principles proscribing the use of personal force to inflict intentional harm are presumed to elicit negative affect, which biases judgments early in the decision-making process. This model was tested in three experiments. Moral dilemmas were classified using (a) decision time and consensus as measures of system conflict and (b) the aforementioned deontological criteria. In Experiment 1, decision time was either unlimited or reduced. The dilemmas asked whether it was appropriate to take a morally questionable action to produce a “greater good” outcome. Limiting decision time reduced the proportion of utilitarian (“yes”) decisions, but contrary to the model’s predictions, (a) vignettes that involved more deontological violations logged faster decision times, and (b) violation of deontological principles was not predictive of decisional conflict profiles. Experiment 2 ruled out the possibility that time pressure simply makes people more likely to say “no.” Participants made a first decision under time constraints and a second decision under no time constraints. One group was asked whether it was appropriate to take the morally questionable action while a second group was asked whether it was appropriate to refuse to take the action. The results replicated those of Experiment 1 regardless of whether “yes” or “no” constituted a utilitarian decision. In Experiment 3, participants rated the pleasantness of positive visual stimuli prior to making a decision. Contrary to the model’s predictions, the number of deontological decisions increased in the positive affect rating group compared to a group that engaged in a cognitive task or a control group that engaged in neither task. These results are consistent with the view that early moral judgments are influenced by affect. But they are inconsistent with the view that (a) violation of deontological principles is predictive of differences in early, affect-based judgment or that (b) engaging in tasks that are inconsistent with the negative emotional responses elicited by such violations diminishes their impact.

Here are some thoughts:

This research investigates the role of emotion and cognitive processes in moral decision-making, testing a dual-process model that posits moral judgments arise from a conflict between rapid, affect-driven (System 1) and slower, deliberative (System 2) processes. Across three experiments, participants were presented with moral dilemmas involving utilitarian outcomes (sacrificing few to save many) and deontological violations (using personal force to intentionally harm), with decision times manipulated to assess how these factors influence judgment. The findings challenge the assumption that deontological decisions are always driven by fast emotional responses: while limiting decision time generally reduced utilitarian judgments, exposure to pleasant emotional stimuli unexpectedly increased deontological responses, suggesting that emotional context, not just negative affect from deontological violations, plays a significant role. Additionally, decisional conflict—marked by low consensus and long decision times—was not fully predicted by deontological criteria, indicating other factors influence moral judgment. Overall, the study supports a dual-process framework but highlights the complexity of emotion's role, showing that both utilitarian and deontological judgments can be influenced by affective states and intuitive heuristics rather than purely deliberative reasoning.

Saturday, July 19, 2025

Morality on the road: the ADC model in low-stakes traffic vignettes

Pflanzer, M., Cecchini, D., Cacace, S.,
& Dubljević, V. (2025).
Frontiers in Psychology, 16.

Introduction: In recent years, the ethical implications of traffic decision-making, particularly in the context of autonomous vehicles (AVs), have garnered significant attention. While much of the existing research has focused on high-stakes moral dilemmas, such as those exemplified by the trolley problem, everyday traffic situations—characterized by mundane, low-stakes decisions—remain underexplored.

Methods: This study addresses this gap by empirically investigating the applicability of the Agent-Deed-Consequences (ADC) model in the moral judgment of low-stakes traffic scenarios. Using a vignette approach, we surveyed professional philosophers to examine how their moral judgments are influenced by the character of the driver (Agent), their adherence to traffic rules (Deed), and the outcomes of their actions (Consequences).

Results: Our findings support the primary hypothesis that each component of the ADC model significantly influences moral judgment, with positive valences in agents, deeds, and consequences leading to greater moral acceptability. We additionally explored whether participants’ normative ethical leanings (classified as deontological, utilitarian, or virtue ethics) influenced how they weighted ADC components. However, no moderating effects of moral preference were observed. The results also reveal interaction effects among some components, illustrating the complexity of moral reasoning in traffic situations.

Discussion: The study’s implications are crucial for the ethical programming of AVs, suggesting that these systems should be designed to navigate not only high-stakes dilemmas but also the nuanced moral landscape of everyday driving. Our work creates a foundation for stakeholders to integrate human moral judgments into AV decision-making algorithms. Future research should build on these findings by including a more diverse range of participants and exploring the generalizability of the ADC model across different cultural contexts.

Here are some thoughts on the modern day trolley problem:

This article presents an alternative to the trolley problem framework for understanding moral decision-making in traffic scenarios, particularly for autonomous vehicle programming. While trolley problem research focuses on high-stakes, life-or-death dilemmas where one must choose between unavoidable harms, the authors argue this approach oversimplifies real-world traffic scenarios and lacks ecological validity. Instead, they propose the Agent-Deed-Consequences (ADC) model, which evaluates moral judgment based on three components: the character and intentions of the driver (Agent), their compliance with traffic rules (Deed), and the outcome of their actions (Consequences). The study surveyed 274 professional philosophers using low-stakes traffic vignettes and found that all three ADC components significantly influence moral judgment, with rule-following having the strongest effect, followed by character and outcomes. Notably, philosophers with different ethical frameworks (utilitarian, deontological, virtue ethics) showed similar judgment patterns, suggesting broad consensus on traffic morality. The researchers argue that "moral decision-making in everyday situations may contribute to the prevention of high-stakes emergencies, which do not arise without mundane bad decisions happening first," emphasizing that autonomous vehicles should be programmed to handle the nuanced moral landscape of ordinary driving decisions rather than just extreme emergency scenarios. This approach integrates virtue ethics, deontological ethics, and consequentialist considerations into a comprehensive framework that better reflects the complexity of real-world traffic moral reasoning.
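
To make the ADC idea concrete, here is a minimal, hypothetical sketch (not the authors' actual analysis or code) of how a traffic vignette might be scored by combining the valences of the Agent, Deed, and Consequences components. The weights are illustrative placeholders, chosen only to echo the summary's observation that rule-following (Deed) carried the most weight, followed by character and outcomes; a purely additive combination like this also ignores the interaction effects the authors report.

# Hypothetical sketch of ADC-style scoring; weights and scale are illustrative,
# not taken from Pflanzer et al. (2025).
from dataclasses import dataclass

@dataclass
class Vignette:
    agent: float         # valence of the driver's character, -1 (bad) to +1 (good)
    deed: float          # valence of the action, e.g., -1 rule violation, +1 compliance
    consequences: float  # valence of the outcome, -1 harmful to +1 benign

def adc_acceptability(v: Vignette,
                      w_agent: float = 0.30,
                      w_deed: float = 0.45,        # assumed largest weight (rule-following)
                      w_consequences: float = 0.25) -> float:
    """Return an illustrative moral-acceptability score in [-1, 1]."""
    return (w_agent * v.agent
            + w_deed * v.deed
            + w_consequences * v.consequences)

# Example: a well-intentioned driver who breaks a traffic rule with a harmless outcome.
print(adc_acceptability(Vignette(agent=1.0, deed=-1.0, consequences=1.0)))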

Friday, May 23, 2025

Different judgment frameworks for moral compliance and moral violation

Shirai, R., & Watanabe, K. (2024).
Scientific Reports, 14(1).

Abstract

In recent decades, the field of moral psychology has focused on moral judgments based on some moral foundations/categories (e.g., harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity). When discussing the moral categories, however, whether a person judges moral compliance or moral violation has rarely been considered. We examined the extent to which moral judgments are influenced by each other across moral categories and explored whether the frameworks of judgment for moral violation and compliance would be different. For this purpose, we developed a set of episodes describing moral and affective behaviors. For each episode, participants evaluated valence, arousal, morality, and the degree of relevance to each of Haidt's five moral foundations. The cluster analysis showed that the moral compliance episodes were divided into three clusters, whereas the moral violation episodes were divided into two clusters. Also, the additional experiment indicated that the clusters might not be stable over time. These findings suggest that people have different frameworks of judgment for moral compliance and moral violation.

Here are some thoughts:

This study investigates the nuances of moral judgment by examining whether people employ distinct frameworks when evaluating moral compliance versus moral violation. Researchers designed a series of scenarios encompassing moral and affective dimensions, and participants rated these scenarios across valence, arousal, morality, and relevance to Haidt's five moral foundations. The findings revealed that moral compliance and moral violation appear to be judged using different frameworks, as evidenced by the cluster analysis which showed different cluster divisions for compliance and violation episodes. 
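
For readers unfamiliar with the method, the sketch below shows the general shape of such an analysis: each episode is represented by its mean ratings (valence, arousal, morality, and relevance to the five foundations) and episodes are then grouped by similarity. This is a generic illustration assuming scikit-learn and made-up data, not the authors' code or their specific clustering algorithm.

# Generic illustration of clustering episodes by their rating profiles.
# The data and the choice of k-means are assumptions; Shirai & Watanabe (2024)
# may have used a different algorithm and preprocessing.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows = episodes; columns = mean ratings of valence, arousal, morality,
# and relevance to the five moral foundations (8 features total).
ratings = rng.uniform(1, 7, size=(40, 8))

scaled = StandardScaler().fit_transform(ratings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # cluster membership for each episode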

This research carries significant implications for psychologists, deepening our understanding of the complexities inherent in moral decision-making and extending the insights of theories like Moral Foundations Theory. Furthermore, the study provides valuable tools, such as the developed set of moral and affective scenarios, for future investigations in moral psychology. Ultimately, a more refined grasp of moral judgment processes can inform efforts to mediate conflicts and foster enhanced social understanding.

Sunday, May 18, 2025

Moral judgement and decision-making: theoretical predictions and null results

Hertz, U., Jia, F., & Francis, K. B. (2023).
Scientific Reports, 13(1).

Abstract

The study of moral judgement and decision making examines the way predictions made by moral and ethical theories fare in real world settings. Such investigations are carried out using a variety of approaches and methods, such as experiments, modeling, and observational and field studies, in a variety of populations. The current Collection on moral judgments and decision making includes works that represent this variety, while focusing on some common themes, including group morality and the role of affect in moral judgment. The Collection also includes a significant number of studies that made theoretically driven predictions and failed to find support for them. We highlight the importance of such null-results papers, especially in fields that are traditionally governed by theoretical frameworks.

Here are some thoughts:

The article explores how predictions from moral theories—particularly deontological and utilitarian ethics—hold up in empirical studies. Drawing from a range of experiments involving moral dilemmas, economic games, and cross-cultural analyses, the authors highlight the increasing importance of null results—findings where expected theoretical effects were not observed.

These outcomes challenge assumptions such as the idea that deontologists are inherently more trusted than utilitarians or that moral responsibility diffuses more in group settings. The studies also show how individual traits (e.g., depression, emotional awareness) and cultural or ideological contexts influence moral decisions.

For practicing psychologists, this research underscores the importance of moving beyond theoretical assumptions toward a more evidence-based, context-sensitive understanding of moral reasoning. It emphasizes the relevance of emotional processes in moral evaluation, the impact of group dynamics, and the necessity of accounting for cultural and psychological diversity in decision-making. Additionally, the article advocates for valuing null results as critical to theory refinement and scientific integrity in the study of moral behavior.

Monday, May 12, 2025

Morality in Our Mind and Across Cultures and Politics

Gray, K., & Pratt, S. (2024).
Annual Review of Psychology.

Abstract

Moral judgments differ across cultures and politics, but they share a common theme in our minds: perceptions of harm. Both cultural ethnographies on moral values and psychological research on moral cognition highlight this shared focus on harm. Perceptions of harm are constructed from universal cognitive elements—including intention, causation, and suffering—but depend on the cultural context, allowing many values to arise from a common moral mind. This review traces the concept of harm across philosophy, cultural anthropology, and psychology, then discusses how different values (e.g., purity) across various taxonomies are grounded in perceived harm. We then explore two theories connecting culture to cognition—modularity and constructionism—before outlining how pluralism across human moral judgment is explained by the constructed nature of perceived harm. We conclude by showing how different perceptions of harm help drive political disagreements and reveal how sharing stories of harm can help bridge moral divides.

Here are some thoughts:

This article explores morality in our minds, across cultures, and within political ideologies. It shows how moral judgments differ across cultures and political ideologies, but share a common theme: perceptions of harm. The research highlights that perceptions of harm are constructed from universal cognitive elements, such as intention, causation, and suffering, but are shaped by cultural context.

The article discusses how different values are grounded in perceived harm. It also explores theories connecting culture to cognition and explains how pluralism in human moral judgment arises from the constructed nature of perceived harm. The article concludes by demonstrating how differing perceptions of harm contribute to political disagreements and how sharing stories of harm can help bridge moral divides.

This research is important for psychologists because it provides a deeper understanding of the cognitive and cultural underpinnings of morality. By understanding how perceptions of harm are constructed and how they vary across cultures and political ideologies, psychologists can gain insights into the roots of moral disagreements. This knowledge is crucial for addressing social issues, resolving conflicts, and fostering a more inclusive and harmonious society.

Wednesday, April 30, 2025

Politics makes bastards of us all: Why moral judgment is politically situational

Hull, K., Warren, C., & Smith, K. (2024).
Political Psychology, 45(6), 1013–1029.

Abstract

Moral judgment is politically situational—people are more forgiving of transgressive copartisans and more likely to behave punitively and unethically toward political opponents. Such differences are widely observed, but not fully explained. If moral values are nonnegotiable first-principle beliefs about right and wrong, why do similar transgressions elicit different moral judgment in the personal and political realm? We argue this pattern arises from the same forces intuitionist frameworks of moral psychology use to explain the origins of morality: the adaptive need to suppress individual behavior to ensure ingroup success. We hypothesize that ingroups serve as moral boundaries and that the relatively tight constraints morality exerts over ingroup relations loosen in competitive group environments because doing so also serves ingroup interests. We find support for this hypothesis in four independent samples and also find that group antipathy—internalized dislike of the outgroup—pushes personal and political moral boundaries farther apart.


Here are some thoughts:

This research explores why moral judgments differ between personal and political contexts. The authors argue that moral flexibility in politics arises from the adaptive function of morality: to promote ingroup success.  Ingroup loyalty loosens moral constraints when group competition is present.  The study also reveals that disliking the opposing political group increases this effect.    

This study offers psychologists a deeper understanding of moral flexibility and political behavior. It explains how group dynamics and intergroup conflict influence moral judgment, highlighting the situational nature of morality.  It also links moral psychology with political science by examining how political affiliations and antipathies shape moral judgments.   

Saturday, April 26, 2025

Culture points the moral compass: Shared basis of culture and morality

Matsuo, A., & Brown, C. M. (2022).
Culture and Brain, 10(2), 113–139.

Abstract

The present work reviews moral judgment from the perspective of culture. Culture is a dynamic system of human beings interacting with their environment, and morality is both a product of this system and a means of maintaining it. When members of a culture engage in moral judgment, they communicate their “social morality” and gain a reputation as a productive member who contributes to the culture’s prosperity. People in different cultures emphasize different moral domains, which is often understood through the individualism-collectivism distinction that is widely utilized in cultural psychology. However, traditional morality research lacks the interactive perspective of culture, where people communicate with shared beliefs about what is good or bad. As a consequence, past work has had numerous limitations and even potential confounds created by methodologies that are grounded in the perspective of WEIRD (i.e., Western, Educated, Industrialized, Rich and Democratic) cultures. Great attention should be paid to the possibly misleading assumption that researchers and participants share the same understanding of the stimuli. We must address this bias in sampling and in the minds of researchers and better clarify the concept of culture in intercultural morality research. The theoretical and practical findings from research on culture can then contribute to a better understanding of the mechanisms of moral judgment.

The article is paywalled, so I tried to give more of a summary. Here it is:

This article discusses moral judgment from a cultural perspective. The authors argue that morality is a product of culture and helps to maintain it. They claim that people from different cultures emphasize different moral domains, which is often understood using the individualism-collectivism distinction. The authors also suggest that traditional morality research lacks an interactive perspective of culture, where people communicate shared beliefs about what is good or bad, and that this past research has had limitations and potential confounds due to methodologies that are grounded in WEIRD cultures.    

The authors discuss theories of moral judgment, including Lawrence Kohlberg’s theory of stages of moral development, the social intuitionist model, and moral pluralism. They claim that moral judgment is a complex process involving self-recognition, social cognition, and decision-making and that the brain is designed to process multiple moralities in different ways. They also explore the social function of morality, stating that behaving morally according to the standards of one’s group helps people be included in the group, and moral norms are used to identify desirable and undesirable group membership.    

In a significant part of the article, the authors discuss the concept of culture, defining it as a structured system of making sense of the environment, which shapes individuals in order to fit into their environment. They explain that the need to belong is a basic human motivation, and people form groups as a means of survival and reproduction. Norms applied to a particular group regulate group members’ behaviors, and culture emerges from these norms. The authors use the individualism-collectivism dimension, a common concept in cultural psychology, to explain how people from different cultures perceive and interpret the world in different ways. They claim that culture is a dynamic interaction between humans and their environment and that moral judgment achieves its social function because people assume that ingroup members share common representations of what is right or wrong. 

Monday, April 14, 2025

Moral Judgment and Decision Making

Bartels, D., et al. (n.d.).
In The Wiley Blackwell Handbook of
Judgment and Decision Making.

Abstract

This chapter focuses on moral flexibility, a term the authors use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices (they really want to get it right, they really want to do the right thing), but context strongly influences which moral beliefs are brought to bear in a given situation. It reviews contemporary research on moral judgment and decision making, and suggests ways that the major themes in the literature relate to the notion of moral flexibility. The chapter explains what makes moral judgment and decision making unique. It also reviews three major research themes and their explananda: morally prohibited value trade-offs in decision making; rules, reason, and emotion in trade-offs; and judgments of moral blame and punishment. The chapter also comments on methodological desiderata and presents understudied areas of inquiry.

Here are some thoughts:

This chapter explores the psychology of moral judgment and decision-making. The authors argue that people are motivated to adhere to moral beliefs, but context strongly influences which beliefs are applied in a given situation, resulting in moral flexibility.  The chapter reviews three major research themes: moral value tradeoffs, the role of rules, reason, and emotion in moral tradeoffs, and judgments of moral blame and punishment.  The authors discuss normative ethical theories, including consequentialism (utilitarianism), deontology, and virtue ethics.  They also examine the influence of protected values and sacred values on moral decision-making, highlighting the conflict between rule-based and consequentialist decision strategies.  Furthermore, the chapter investigates the interplay of emotion, reason, automaticity, and cognitive control in moral judgment, discussing dual-process models, moral grammar, and the reconciliation of rules and emotions.  The authors explore factors influencing moral blame and punishment, including the role of intentions, outcomes, and character evaluations.  The chapter concludes by emphasizing the complexity of moral decision-making and the importance of considering contextual influences. 

Sunday, March 23, 2025

What We Do When We Define Morality (and Why We Need to Do It)

Dahl, A. (2023).
Psychological Inquiry, 34(2), 53–79.

Abstract

Psychological research on morality relies on definitions of morality. Yet the various definitions often go unstated. When unstated definitions diverge, theoretical disagreements become intractable, as theories that purport to explain “morality” actually talk about very different things. This article argues that we need to define morality and considers four common ways of doing so: the linguistic, the functionalist, the evaluating, and the normative. Each has encountered difficulties. To surmount those difficulties, I propose a technical, psychological, empirical, and distinctive definition of morality: obligatory concerns with others’ welfare, rights, fairness, and justice, as well as the reasoning, judgment, emotions, and actions that spring from those concerns. By articulating workable definitions of morality, psychologists can communicate more clearly across paradigms, separate definitional from empirical disagreements, and jointly advance the field of moral psychology.


Here are some thoughts:

The article discusses the importance of defining morality in psychological research and the challenges associated with this task. Dahl argues that all psychological research on morality relies on definitions, but these definitions often go unstated, leading to communication problems and intractable disagreements when researchers use different unstated definitions.

The article examines four common approaches to defining morality: linguistic (whatever people call "moral"), functionalist (defined by social function), evaluating (collection of right actions), and normative (all judgments about right and wrong). After discussing the difficulties with each approach, Dahl proposes an alternative definition of morality: "obligatory concerns with others' welfare, rights, fairness, and justice, as well as the reasoning, judgment, emotions, and actions that spring from those concerns." This definition is described as technical, psychological, empirical, and distinctive.

The article emphasizes the need for clear definitions to communicate across paradigms, separate definitional from empirical disagreements, and advance the field of moral psychology. Dahl provides examples of debates in moral psychology (e.g., about obedience to authority, harm-based morality) that are complicated by lack of clear definitions. In conclusion, while defining morality is challenging due to its many meanings in ordinary language, Dahl argues that a workable scientific definition is both possible and necessary for progress in the field of moral psychology.

Sunday, March 16, 2025

Computational Approaches to Morality

Bello, P., & Malle, B. F. (2023).
In R. Sun (Ed.), Cambridge Handbook 
of Computational Cognitive Sciences
(pp. 1037-1063). Cambridge University Press.

Introduction

Morality regulates individual behavior so that it complies with community interests (Curry et al., 2019; Haidt, 2001; Hechter & Opp, 2001). Humans achieve this regulation by motivating and deterring certain behaviors through the imposition of norms – instructions of how one should or should not act in a particular context (Fehr & Fischbacher, 2004; Sripada & Stich, 2006) – and, if a norm is violated, by levying sanctions (Alexander, 1987; Bicchieri, 2006). This chapter examines the mental and behavioral processes that facilitate human living in moral communities and how these processes might be represented computationally and ultimately engineered in embodied agents.

Computational work on morality arises from two major sources. One is empirical moral science, which accumulates knowledge about a variety of phenomena of human morality, such as moral decision making, judgment, and emotions. Resulting computational work tries to model and explain these human phenomena. The second source is philosophical ethics, which has for millennia discussed moral principles by which humans should live. Resulting computational work is often labeled machine ethics, which is the attempt to create artificial agents with moral capacities reflecting one or more of the ethical theories. A brief discussion of these two sources will ground the subsequent discussion of computational morality.


Here are some thoughts:

This chapter examines computational approaches to morality, driven by two goals: modeling human moral cognition and creating artificial moral agents ("machine ethics"). It maps key moral phenomena – behavior, judgments, emotions, sanctions, and communication – arguing these are shaped by social norms rather than innate brain circuits. Norms are community instructions specifying acceptable/unacceptable behavior. The chapter explores philosophical ethics: deontology (duty-based ethics, exemplified by Kant, Rawls, Ross) and consequentialism (outcome-based ethics, particularly utilitarianism). It addresses computational challenges like scaling, conflicting preferences, and framing moral problems. Finally, it surveys rule-based approaches, case-based reasoning, reinforcement learning, and cognitive science perspectives in modeling moral decision-making.
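
As a toy illustration of the rule-based style the chapter surveys, the following sketch encodes a couple of context-sensitive norms and checks a proposed action against them. It is a deliberately simplified example of my own, with invented norms and contexts, not code from Bello and Malle.

# Toy rule-based norm checker, in the spirit of the rule-based approaches
# surveyed by Bello & Malle (2023). Norms, contexts, and sanctions are invented.
from typing import NamedTuple, List

class Norm(NamedTuple):
    context: str        # situation in which the norm applies
    forbidden: str      # action the norm proscribes
    sanction: str       # response if the norm is violated

NORMS: List[Norm] = [
    Norm(context="library", forbidden="shout", sanction="reprimand"),
    Norm(context="crosswalk", forbidden="fail_to_yield", sanction="blame"),
]

def evaluate(context: str, action: str) -> str:
    """Return the sanction a violated norm prescribes, or 'permissible'."""
    for norm in NORMS:
        if norm.context == context and norm.forbidden == action:
            return norm.sanction
    return "permissible"

print(evaluate("crosswalk", "fail_to_yield"))  # -> "blame"
print(evaluate("library", "whisper"))          # -> "permissible"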

Thursday, November 21, 2024

Moral Judgment Is Sensitive to Bargaining Power

Le Pargneux, A., & Cushman, F. (2024).
Journal of Experimental Psychology: General.
Advance online publication.

Abstract

For contractualist accounts of morality, actions are moral if they correspond to what rational or reasonable agents would agree to do, were they to negotiate explicitly. This, in turn, often depends on each party’s bargaining power, which varies with each party’s stakes in the potential agreement and available alternatives in case of disagreement. If there is an asymmetry, with one party enjoying higher bargaining power than another, this party can usually get a better deal, as often happens in real negotiations. A strong test of contractualist accounts of morality, then, is whether moral judgments do take bargaining power into account. We explore this in five preregistered experiments (n = 3,025; U.S.-based Prolific participants). We construct scenarios depicting everyday social interactions between two parties in which one of them can perform a mutually beneficial but unpleasant action. We find that the same actions (asking the other to perform the unpleasant action or explicitly refusing to do it) are perceived as less morally appropriate when performed by the party with lower bargaining power, as compared to the party with higher bargaining power. In other words, participants tend to give more moral leeway to parties with better bargaining positions and to hold disadvantaged parties to stricter moral standards. This effect appears to depend only on the relative bargaining power of each party but not on the magnitude of the bargaining power asymmetry between them. We discuss implications for contractualist theories of moral cognition and the emergence and persistence of unfair norms and inequality.

Public Significance Statement

Many social interactions involve opportunities for mutual benefit. By engaging in negotiation—sometimes explicitly, but often tacitly—we decide what each party should do and enter arrangements that we anticipate will be advantageous for everyone involved. Contractualist theories of morality insist on the fundamental role played by such bargaining procedures in determining what constitutes appropriate and inappropriate behavior. But the outcome of a negotiation often depends on each party’s bargaining power and their relative positions if an agreement cannot be reached. And situations in which each party enjoys equal bargaining power are rare. Here, we investigate the influence of bargaining power on our moral judgments. Consistent with contractualist accounts, we find that moral judgments take bargaining power considerations into account, to the benefit of the powerful party, and that parties with lower bargaining power are held to stricter moral standards.

Here are some thoughts:

This research provides insights into how people perceive fairness and morality in social interactions, which is fundamental to understanding human behavior and relationships. Mental health professionals often deal with clients struggling with interpersonal conflicts, and recognizing the role of bargaining power in these situations can help them better analyze and address these issues.

Secondly, the findings suggest that people tend to give more moral leeway to those with higher bargaining power and hold disadvantaged individuals to stricter moral standards. This knowledge is essential for therapists working with clients from diverse socioeconomic backgrounds, as it can help them recognize and address potential biases in their own judgments and those of their clients.

Furthermore, the research implications regarding the emergence and persistence of inequality are particularly relevant for mental health professionals. Understanding how moral intuitions may contribute to the perpetuation of unfair norms and outcomes can help therapists develop more effective strategies for addressing issues related to social inequality and its impact on mental health.

Lastly, the findings highlight the complexity of moral cognition and decision-making processes. This knowledge can enhance therapists' ability to help clients explore their own moral reasoning and decision-making patterns, potentially leading to more insightful and effective therapeutic interventions.

Friday, November 1, 2024

Relational morality in psychology and philosophy: past, present, and future

Earp, B. D., Calcott, R., et al. (in press).
In S. Laham (ed.), Handbook of
Ethics and Social Psychology. 
Cheltenham, UK: Edward Elgar.

Abstract

Moral psychology research often frames participant judgments in terms of adherence to abstract principles, such as utilitarianism or Kant's categorical imperative, and focuses on hypothetical interactions between strangers. However, real-world moral judgments typically involve concrete evaluations of known individuals within specific social relationships. Acknowledging this, a growing number of moral psychologists are shifting their focus to the study of moral judgment in social-relational contexts. This chapter provides an overview of recent work in this area, highlighting strengths and weaknesses, and describes a new 'relational norms' model of moral judgment developed by the authors and colleagues. The discussion is situated within influential philosophical theories of human morality that emphasize relational context, and suggests that these theories should receive more attention from moral psychologists. The chapter concludes by exploring future applications of relational-moral frameworks, such as modeling and predicting norms and judgments related to human-AI cooperation.


It's a great chapter. Here are some thoughts:

The field of moral psychology is undergoing a significant shift, known as the "relational turn." This movement recognizes that real-world morality is deeply embedded in social relationships, rather than being based solely on impartial principles and abstract dilemmas. Researchers are now focusing on the intricate web of social roles, group memberships, and interpersonal dynamics that shape our everyday moral experiences.

Traditional Western philosophical traditions, such as utilitarianism and Kantian deontology, have emphasized impartiality as a cornerstone of moral reasoning. However, empirical evidence suggests that moral judgments are influenced by factors like group membership, relationship type, and social context. This challenges the idea that moral principles should be applied uniformly, regardless of the individuals involved.

The relational context of a situation greatly impacts our moral judgments. For example, helping a stranger move might be seen as kind, but missing work for it seems excessive. Similarly, expecting payment for helping a family member feels at odds with the implicit rules of familial relationships. Philosophical perspectives such as Confucianism, African moral traditions, and feminist care ethics support the importance of relationships in shaping moral norms and obligations.

Evolutionary theory provides a compelling explanation for why relationships matter in moral decision-making. Our moral instincts likely evolved to solve coordination problems and reduce conflict within social groups, primarily consisting of family, kin, and close allies. This "friends-and-family cooperation bias" has led to the development of specific moral norms tailored to different relationship categories.

Research in relational morality highlights the importance of understanding the structure and dynamics of interpersonal relationships. Various relational models, such as Fiske's Relationship Regulation Theory, propose that different relationships are associated with specific moral motives. However, real-life relationships are complex and multifaceted, drawing on multiple models simultaneously.

The developmental trajectory of relational morality suggests that even young children display a preference for friends and family in resource allocation tasks. However, the ability to make nuanced moral judgments based on social roles and relationship types emerges gradually with age.

Emerging research areas within relational morality include impartial beneficence, moral obligations to future generations, and the psychological underpinnings of extending moral concern to strangers. By shifting focus from abstract principles to social relationships, researchers can develop more nuanced and ecologically valid models of moral judgment and behavior.

This relational turn promises to deepen our understanding of the social and evolutionary roots of human morality, shedding light on the complex interplay between personal connections and our sense of right and wrong. By recognizing the importance of relationships in moral decision-making, researchers can develop more effective strategies for promoting moral growth, cooperation, and well-being.

Tuesday, August 6, 2024

Artificial Morality: Differences in Responses to Moral Choices by Human and Artificial Agents

Armbruster, D., Mandl, S., Zeiler, A., & Strobel, A.
(2024, June 4).

Abstract

A consensus on moral "rights" and "wrongs" is essential for ensuring societal functioning. Moral decision-making has been investigated for decades, focusing on human agents. More recently, research has begun to examine how humans evaluate artificial moral agents. With the increasing presence of artificial intelligence (AI) in society, this question becomes ever more relevant. We investigated responses from a third-party perspective to moral judgments of human and artificial agents in high-stakes and low-stakes dilemmas. High-stakes dilemmas describe life-or-death scenarios, while low-stakes dilemmas have non-lethal but nevertheless substantial negative consequences. In two online studies, participants responded to the actions or inactions of human and artificial agents in four high-stakes scenarios (N1 = 491) and four low-stakes dilemmas (N2 = 490). In line with previous research, agents received generally more blame in high-stakes scenarios and actions resulted overall in more blame than inactions. While there was no effect of scenario type on trust, agents were more trusted when they did not act. Although humans, on average, were blamed more than artificial agents, they were nevertheless also more trusted. The most important predictor for blame and trust was whether participants agreed with the moral choice of an agent and considered the chosen course of action as morally appropriate – regardless of the nature of the agent. Religiosity emerged as a further predictor for blaming both human and artificial agents, while trait psychopathy was associated with more blame of and less trust in human agents. Additionally, negative attitudes towards robots predicted blame and trust in artificial agents.


Here are some thoughts:

This study on moral judgments of human and artificial agents in high- and low-stakes dilemmas offers valuable insights for ethics education and ethical decision-making. The research reveals that while there were no overall differences in perceived appropriateness of actions between human and artificial agents, gender differences emerged in high-stakes scenarios. Women were less likely to endorse harmful actions for the greater good when performed by human agents, but more likely to approve such actions when performed by artificial agents. This gender disparity in moral judgments highlights the need to be aware of potential biases in ethical reasoning.

The study also found that blame and trust were affected by dilemma type and decision type, with actions generally resulting in higher blame and reduced trust compared to inactions. This aligns with previous research on omission bias and emphasizes the complexity of moral decision-making. Additionally, the research identified several individual differences that influenced moral judgments, blame attribution, and trust. Factors such as religiosity, psychopathy, and negative attitudes towards robots were found to be predictors of blame and trust in both human and artificial agents. These findings underscore the importance of considering individual differences in ethical decision-making processes and when interpreting clients' moral reasoning.

Furthermore, the study touched on the role of Need for Cognition (NFC) in moral judgments, suggesting that cognitive abilities and motivation may contribute to differences in processing moral problems. This is particularly relevant for clinical psychologists when assessing clients' decision-making processes and designing interventions to improve ethical reasoning. The research also highlighted cultural differences in attitudes towards robots and AI, which is crucial for clinical psychologists working in diverse settings or with multicultural populations. As AI becomes more prevalent in healthcare, including mental health, understanding how people perceive and trust artificial agents in moral decision-making is essential for clinical psychologists considering the implementation of AI-assisted tools in their practice.

Tuesday, December 26, 2023

Who did it? Moral wrongness for us and them in the UK, US, and Brazil

Boggio, P. S., et al. (2023).
Philosophical Psychology.
DOI: 10.1080/09515089.2023.2278637

Abstract

Morality has traditionally been described in terms of an impartial and objective “moral law”, and moral psychological research has largely followed in this vein, focusing on abstract moral judgments. But might our moral judgments be shaped not just by what the action is, but who is doing it? We looked at ratings of moral wrongness, manipulating whether the person doing the action was a friend, a refugee, or a stranger. We looked at these ratings across various moral foundations, and conducted the study in Brazil, US, and UK samples. Our most robust and consistent findings are that purity violations were judged more harshly when committed by ingroup members and less harshly when committed by refugees in comparison to unspecified agents; that the difference between refugee and unspecified agents decays from liberals to conservatives (i.e., conservatives judge refugees more harshly than liberals do); and that Brazilian participants are harsher than the US and UK participants. Our results suggest that purity violations are judged differently according to who committed them and according to the political ideology of the judges. We discuss the findings in light of various theories of group dynamics, such as moral hypocrisy, moral disengagement, and the black sheep effect.


Here is my summary:

The study explores how moral judgments vary depending on both the agent committing the act and the nationality of the person making the judgment. The study's findings challenge the notion that moral judgments are universal and instead suggest that they are influenced by cultural and national factors.

The researchers investigated how participants from the UK, US, and Brazil judged moral violations committed by different agents: friends, strangers, refugees, and unspecified individuals. They found that participants from all three countries generally judged violations committed by friends more harshly than violations committed by other agents. However, there were also significant cultural differences in the severity of judgments. Brazilians tended to judge violations of purity as less wrong than Americans, but judged violations of care, liberty, and fairness as more wrong than Americans.

The study's findings suggest that moral judgments are not simply based on the severity of the act itself, but also on factors such as the relationship between the agent and the victim, and the cultural background of the person making the judgment. These findings have implications for understanding cross-cultural moral conflicts and for developing more effective moral education programs.

Saturday, December 16, 2023

Older people are perceived as more moral than younger people: data from seven culturally diverse countries

Sorokowski, P., et al. (2023).
Ethics & Behavior.
DOI: 10.1080/10508422.2023.2248327

Abstract

Given the adage “older and wiser,” it seems justified to assume that older people may be stereotyped as more moral than younger people. We aimed to study whether assessments of a person’s morality differ depending on their age. We asked 661 individuals from seven societies (Australians, Britons, Burusho of Pakistan, Canadians, Dani of Papua, New Zealanders, and Poles) whether younger (~20-year-old), middle-aged (~40-year-old), or older (~60-year-old) people were more likely to behave morally and have a sense of right and wrong. We observed that older people were perceived as more moral than younger people. The effect was particularly salient when comparing 20-year-olds to either 40- or 60-year-olds and was culturally universal, as we found it in both WEIRD (i.e. Western, Educated, Industrialized, Rich, Democratic) and non-WEIRD societies.


Here is my summary:

The researchers found that older people were rated as more moral than younger people, and this effect was particularly strong when comparing 20-year-olds to either 40- or 60-year-olds. The effect was also consistent across cultures, suggesting that it is a universal phenomenon.

The researchers suggest that there are a few possible explanations for this finding. One possibility is that older people are simply seen as having more life experience and wisdom, which are both associated with morality. Another possibility is that older people are more likely to conform to social norms, which are often seen as being moral. Finally, it is also possible that people simply have a positive bias towards older people, which leads them to perceive them as being more moral.

Whatever the explanation, the finding that older people are perceived as more moral than younger people has a number of implications. For example, it suggests that older people may be more likely to be trusted and respected, and they may also be more likely to be seen as leaders. Additionally, the finding suggests that ageism may be a form of prejudice, as it involves making negative assumptions about people based on their age.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.

Abstract

Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.


My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J. B., & Anderson, M. H. (2022).
Journal of Business Ethics, 178, 961–976.

Abstract

Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.

The article is paywalled.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. It is clear that there is no single, monolithic standard of morality that can be applied to AI systems. Instead, we need to consider a plurality of moral foundations when evaluating the ethicality of AI systems.
  • The article also highlights the challenges of assessing the ethicality of AI systems. It is difficult to measure the impact of AI systems on human well-being, and there is no single, objective way to determine whether an AI system is ethical. However, the article suggests that a pluralistic approach to ethical evaluation, which takes into account a variety of moral perspectives, is the best way to assess the ethicality of AI systems.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area of inquiry, and I hope more work is conducted on it in the future.

Monday, June 19, 2023

On the origin of laws by natural selection

DeScioli, P. (2023).
Evolution and Human Behavior,
44(3), 195–209.

Abstract

Humans are lawmakers like we are toolmakers. Why do humans make so many laws? Here we examine the structure of laws to look for clues about how humans use them in evolutionary competition. We will see that laws are messages with a distinct combination of ideas. Laws are similar to threats but critical differences show that they have a different function. Instead, the structure of laws matches moral rules, revealing that laws derive from moral judgment. Moral judgment evolved as a strategy for choosing sides in conflicts by impartial rules of action—rather than by hierarchy or faction. For this purpose, humans can create endless laws to govern nearly any action. However, as prolific lawmakers, humans produce a confusion of contradictory laws, giving rise to a perpetual battle to control the laws. To illustrate, we visit some of the major conflicts over laws of violence, property, sex, faction, and power.

(cut)

Moral rules are not for cooperation

We have briefly summarized the major divisions and operations of moral judgment. Why then did humans evolve such elaborate powers of the mind devoted to moral rules? What is all this rule making for?

One common opinion is that moral rules are for cooperation. That is, we make and enforce a moral code in order to cooperate more effectively with other people. Indeed, traditional theories beginning with Darwin assume that morality is the same as cooperation. These theories successfully explain many forms of cooperation, such as why humans and other animals care for offspring, trade favors, respect property, communicate honestly, and work together in groups. For instance, theories of reciprocity explain why humans keep records of other people’s deeds in the form of reputation, why we seek partners who are nice, kind, and generous, why we praise these virtues, and why we aspire to attain them.

However, if we look closely, these theories explain cooperation, not moral judgment. Cooperation pertains to our decisions to benefit or harm someone, whereas moral judgment pertains to our judgments of someone’s action as right or wrong. The difference is crucial because these mental faculties operate independently and they evolved separately. For instance, people can use moral judgment to cooperate but also to cheat, such as a thief who hides the theft because they judge it to be wrong, or a corrupt leader who invents a moral rule that forbids criticism of the leader. Likewise, people use moral judgment to benefit others but also to harm them, such as falsely accusing an enemy of murder to imprison them.

Regarding their evolutionary history, moral judgment is a recent adaptation while cooperation is ancient and widespread, some forms as old as the origins of life and multicellular organisms. Recalling our previous examples, social animals like gorillas, baboons, lions, and hyenas cooperate in numerous ways. They care for offspring, share food, respect property, work together in teams, form reputations, and judge others’ characters as nice or nasty. But these species do not communicate rules of action, nor do they learn, invent, and debate the rules. Like language, moral judgment most likely evolved recently in the human lineage, long after complex forms of cooperation.

From the Conclusion

Having anchored ourselves to concrete laws, we next asked, What are laws for? This is the central question for any mental power because it persists only by aiding an animal in evolutionary competition. In this search, we should not be deterred by the magnificent creativity and variety of laws. Some people suppose that natural selection could impart no more than a few fixed laws in the human mind, but there are no grounds for this supposition. Natural selection designed all life on Earth and its creativity exceeds our own. The mental adaptations of animals outperform our best computer programs on routine tasks such as locomotion and vision. Why suppose that human laws must be far simpler than, for instance, the flight controllers in the brain of a hummingbird? And there are obvious counterexamples. Language is a complex adaptation but this does not mean that humans speak just a few sentences. Tool use comes from mental adaptations including an intuitive theory of physics, and again these abilities do not limit but enable the enormous variety of tools.

Friday, June 2, 2023

Is it good to feel bad about littering? Conflict between moral beliefs and behaviors for everyday transgressions

Schwartz, Stephanie A. and Inbar, Yoel
SSRN.
Originally posted 22 June 2022

Abstract

People sometimes do things that they think are morally wrong. We investigate how actors’ perceptions of the morality of their own behaviors affect observers’ evaluations. In Study 1 (n = 302), we presented participants with six different descriptions of actors who routinely engaged in a morally questionable behavior and varied whether the actors thought the behavior was morally wrong. Actors who believed their behavior was wrong were seen as having better moral character, but their behavior was rated as more wrong. In Study 2 (n = 391) we investigated whether perceptions of actor metadesires were responsible for the effects of actor beliefs on judgments. We used the same stimuli and measures as in Study 1 but added a measure of the actor’s perceived desires to engage in the behaviors. As predicted, the effect of actors’ moral beliefs on judgments of their behavior and moral character was mediated by perceived metadesires.

General Discussion

In two studies, we find that actors’ beliefs about their own everyday immoral behaviors affect both how the acts and the actors are evaluated—albeit in opposite directions. An actor’s belief that his or her act is morally wrong causes observers to see the act itself as less morally acceptable, while, at the same time, it leads to more positive character judgments of the actor. In Study 2, we find that these differences in character judgments are mediated by people’s perceptions of the actor’s metadesires. Actors who see their behavior as morally wrong are presumed to have a desire not to engage in it, and this in turn leads to more positive evaluations of their character. These results suggest that one benefit of believing one’s own behavior to be immoral is that others—if they know this—will evaluate one’s character more positively.

(cut)

Honest Hypocrites 

In research on moral judgments of hypocrites, Jordan et al. (2017) found that people who publicly espouse a moral standard that they privately violate are judged particularly negatively. However, they also found that “honest hypocrites” (those who publicly condemn a behavior while admitting they engage in it themselves) are judged more positively than traditional hypocrites and equivalently to control transgressors (people who simply engage in the negative behavior without taking a public stand on its acceptability). This might seem to contradict our findings in the current studies, where people who transgressed despite thinking that the behavior was morally wrong were judged more positively than those who simply transgressed. We believe the key distinction that explains the difference between Jordan et al.’s results and ours is that in their paradigm, hypocrites publicly condemned others for engaging in the behavior in question. As Jordan et al. show, public condemnation is interpreted as a strong signal that someone is unlikely to engage in that behavior themselves; hypocrites therefore are disliked both for engaging in a negative behavior and for falsely signaling (by their public condemnation) that they wouldn’t. Honest hypocrites, who explicitly state that they engage in the negative behavior, are not falsely signaling. However, Jordan et al.’s scenarios imply to participants that honest hypocrites do condemn others—something that may strike people as unfair coming from a person who engages in the behavior themselves. Thus, honest hypocrites may be penalized for public condemnation, even as they are credited for more positive metadesires. In contrast, in our studies participants were told that the scenario protagonists thought the behavior was morally wrong but not that they publicly condemned anyone else for engaging in it. This may have allowed protagonists to benefit from more positive perceived metadesires without being penalized for public condemnation. This explanation is admittedly speculative but could be tested in future research that we outline below.


Suppose you do something bad. Will people blame you more if you knew it was wrong? Or will they blame you less?

The answer seems to be: They will think your act is more wrong, but your character is less bad.

Thursday, May 18, 2023

People Construe a Corporation as an Individual to Ascribe Responsibility in Cases of Corporate Wrongdoing

Sharma, N., Flores-Robles, G., & Gantman, A. P.
(2023, April 11). PsyArXiv

Abstract

In cases of corporate wrongdoing, it is difficult to assign blame across multiple agents who played different roles. We propose that people have dualist ideas of corporate hierarchies: with the boss as “the mind,” and the employee as “the body,” and the employee appears to carry out the will of the boss like the mind appears to will the body (Wegner, 2003). Consistent with this idea, three experiments showed that moral responsibility was significantly higher for the boss, unless the employee acted prior to, inconsistently with, or outside of the boss’s will. People even judge the actions of the employee as mechanistic (“like a billiard ball”) when their actions mirror the will of the boss. This suggests that the same features that tell us our minds cause our actions, also facilitate the sense that a boss has willed the behavior of an employee and is ultimately responsible for bad outcomes in the workplace.

From the General Discussion

Practical Implications

Our findings offer a number of practical implications for organizations. First, our research provides insight into how people currently make judgments of moral responsibility within an organization (and specifically, when a boss gives instructions to an employee). Second, our research provides insight into the decision-making process of whether to fire a boss-figure like a CEO (or other decision-maker) or invest in lasting change in organizational culture following an organizational wrongdoing. From a scapegoating perspective, replacing a CEO is not intended to produce lasting change in underlying organizational problems and signals a desire to maintain the status quo (Boeker, 1992; Shen & Cannella, 2002). Scapegoating may not always be in the best interest of investors. Previous research has shown that following financial misrepresentation, investors react positively only to CEO successions wherein the replacement comes from the outside, which serves as a costly signal of the firm’s understanding of the need for change (Gangloff et al., 2016). And so, by allocating responsibility to the CEO without creating meaningful change, organizations may lose investors. Finally, this research has implications for building public trust in organizations. Following the Wells Fargo scandal, two-thirds of Wells Fargo customers (65%) claimed they trusted their bank less, and about half of Wells Fargo customers (51%) were willing to switch to another bank if they perceived it to be more trustworthy (Business Wire, 2017). Thus, how organizations deal with wrongdoing (e.g., whether they fire individuals, create lasting change, or both) can influence public trust. If corporations want to build trust among the general public, and in doing so, create a larger customer base, they can look at how people understand and ascribe responsibility and consequently punish organizational wrongdoings.