Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Sacrificial Dilemmas.

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology
Volume 108, September 2023, 104499


A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.
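To make the paper's key manipulation concrete, here is a minimal sketch of how an "expected ratio" stays constant when harm becomes probabilistic. The numbers are illustrative, not taken from the paper's actual stimuli.

```python
# Sketch: the "expected ratio" of lives saved to lives harmed when
# outcomes are probabilistic. Illustrative numbers only.

def expected_ratio(saved, harmed, p_save=1.0, p_harm=1.0):
    """Expected lives saved per expected life harmed."""
    return (saved * p_save) / (harmed * p_harm)

# A certain 5:1 tradeoff (the standard dilemma).
certain = expected_ratio(5, 1)

# A probabilistic harm: a 10% chance of harming 10 people while
# certainly saving 5 yields the same expected ratio of 5.0.
probabilistic = expected_ratio(5, 10, p_harm=0.1)
```

Study 3 asks whether participants treat these two cases as morally equivalent, as a pure expected-value view would predict.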

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on a single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with a ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.

My summary:

This research suggests that people are not simply weighing the number of lives saved against the number lost. They are also sensitive to the ratio of lives saved to lives lost and to the probability of each outcome, a pattern better captured by threshold deontology than by a simple consequentialist/non-consequentialist split. This has implications for how we model moral decision-making and for the design of moral education programs.

Saturday, December 24, 2022

How Stable are Moral Judgments?

Rehren, P., Sinnott-Armstrong, W.
Rev. Phil. Psych. (2022).


Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. In this paper, we want to highlight one aspect that to date has been relatively neglected: the stability of moral judgment over time. After explaining why philosophers and psychologists should consider stability and then surveying previous research, we will present the results of an original three-wave longitudinal study. We asked participants to make judgments about the same acts in a series of sacrificial dilemmas three times, 6–8 days apart. In addition to investigating the stability of our participants’ ratings over time, we also explored some potential explanations for instability. To end, we will discuss these and other potential psychological sources of moral stability (or instability) and highlight possible philosophical implications of our findings.

From the General Discussion

We have argued that the stability of moral judgments over time is an important feature of moral cognition for philosophers and psychologists to consider. Next, we presented an original empirical study into the stability over 6–8 days of moral judgments about acts in sacrificial dilemmas. Like Helzer et al. (2017, Study 1), we found an overall test-retest correlation of 0.66. Moreover, we observed moderate to large proportions of rating shifts, and small to moderate proportions of rating revisions (M = 14%), rejections (M = 5%) and adoptions (M = 6%)—that is, the participants in question judged p in one wave, but did not judge p in the other wave.
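For readers unfamiliar with the statistic, here is a brief sketch of how a test-retest correlation and the proportion of rating shifts might be computed from two waves of ratings. The ratings below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical 7-point moral-judgment ratings for the same dilemmas
# collected from one participant pool in two waves, a week apart.
wave1 = np.array([7, 2, 5, 6, 1, 4, 7, 3])
wave2 = np.array([6, 2, 4, 6, 2, 5, 7, 3])

# Test-retest reliability: Pearson correlation between the two waves.
r = np.corrcoef(wave1, wave2)[0, 1]

# Proportion of items whose rating shifted at all between waves.
shift = np.mean(wave1 != wave2)
```

Note that a fairly high correlation can coexist with a sizable proportion of rating shifts, which is the pattern the authors report.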

What Explains Instability?

One potential explanation of our results is that they are not a genuine feature of moral judgments about sacrificial dilemmas, but instead are due to measurement error. Measurement error is the difference between the observed and the true value of a variable. So, it may be that most of the rating changes we observed do not mean that many real-life moral judgments about acts in sacrificial dilemmas are (or would be) unstable over short periods of time. Instead, it may be that when people make moral judgments about sacrificial dilemmas in real life, their judgments remain very stable from one week to the next, but our study (perhaps any study) was not able to capture this stability.

To the extent that real-life moral judgment is what moral psychologists and philosophers are interested in, this may suggest a problem with the type of study design used in this and many other papers. If there is enough measurement error, then it may be very difficult to draw firm conclusions about real-life moral judgments from this research. Other researchers have raised related objections. Most forcefully, Bauman et al. (2014) have argued that participants often do not take the judgment tasks used by moral psychologists seriously enough for them to engage with these tasks in anything like the way they would if they came across the same tasks in the real world (also, see, Ryazanov et al. 2018). In our view, moral psychologists would do well to more frequently move their studies outside of the (online) lab and into the real world (e.g., Bollich et al. 2016; Hofmann et al. 2014).


Instead, our findings may tell us something about a genuine feature of real-life moral judgment. If so, then a natural question to ask is what makes moral judgments unstable (or stable) over time. In this paper, we have looked at three possible explanations, but we did not find evidence for them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from instruments involving other moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around. Third, Study 1b did not find evidence that rating changes, when they occurred, were often due to changes in light of reasons and reflection. Note that this does not mean that we can rule out any of these potential explanations for unstable moral judgments completely. As we point out below, our research is limited in the extent to which it could test each of these explanations, and so one or more of them may still have been the cause for some proportion of the rating changes we observed.

Sunday, July 10, 2022

Situational factors shape moral judgements in the trolley dilemma in Eastern, Southern and Western countries in a culturally diverse sample

Bago, B., Kovacs, M., Protzko, J. et al. 
Nat Hum Behav (2022).


The study of moral judgements often centres on moral dilemmas in which options consistent with deontological perspectives (that is, emphasizing rules, individual rights and duties) are in conflict with options consistent with utilitarian judgements (that is, following the greater good based on consequences). Greene et al. (2009) showed that psychological and situational factors (for example, the intent of the agent or the presence of physical contact between the agent and the victim) can play an important role in moral dilemma judgements (for example, the trolley problem). Our knowledge is limited concerning both the universality of these effects outside the United States and the impact of culture on the situational and psychological factors affecting moral judgements. Thus, we empirically tested the universality of the effects of intent and personal force on moral dilemma judgements by replicating the experiments of Greene et al. in 45 countries from all inhabited continents. We found that personal force and its interaction with intention exert influence on moral judgements in the US and Western cultural clusters, replicating and expanding the original findings. Moreover, the personal force effect was present in all cultural clusters, suggesting it is culturally universal. The evidence for the cultural universality of the interaction effect was inconclusive in the Eastern and Southern cultural clusters (depending on exclusion criteria). We found no strong association between collectivism/individualism and moral dilemma judgements.

From the Discussion

In this research, we replicated the design of Greene et al. using a culturally diverse sample across 45 countries to test the universality of their results. Overall, our results support the proposition that the effect of personal force on moral judgements is likely culturally universal. This finding makes it plausible that the personal force effect is influenced by basic cognitive or emotional processes that are universal for humans and independent of culture. Our findings regarding the interaction between personal force and intention were more mixed. We found strong evidence for the interaction of personal force and intention among participants coming from Western countries regardless of familiarity and dilemma context (trolley or speedboat), fully replicating the results of Greene et al. However, the evidence was inconclusive among participants from Eastern countries in all cases. Additionally, this interaction result was mixed for participants from countries in the Southern cluster. We found strong enough evidence only when people familiar with these dilemmas were included in the sample, and only for the trolley (not speedboat) dilemma.

Our general observation is that the size of the interaction was smaller on the speedboat dilemmas in every cultural cluster. It is yet unclear whether this effect is caused by some deep-seated (and unknown) differences between the two dilemmas (for example, participants experiencing smaller emotional engagement in the speedboat dilemmas that changes response patterns) or by some unintended experimental confound (for example, an effect of the order of presentation of the dilemmas).

Sunday, February 6, 2022

Trolley Dilemma in Papua. Yali horticulturalists refuse to pull the lever

Sorokowski, P., Marczak, M., Misiak, M. et al. 
Psychon Bull Rev 27, 398–403 (2020).


Although many studies show cultural or ecological variability in moral judgments, cross-cultural responses to the trolley problem (kill one person to save five others) indicate that certain moral principles might be prevalent in human populations. We conducted a study in a traditional, indigenous, non-Western society inhabiting the remote Yalimo valley in Papua, Indonesia. We modified the original trolley dilemma to produce an ecologically valid “falling tree dilemma.” Our experiment showed that the Yali are significantly less willing than Western people to sacrifice one person to save five others in this moral dilemma. The results indicate that utilitarian responses to the trolley dilemma might be less widespread than previously supposed. Rather, they are likely to be mediated by sociocultural factors.


Our study showed that Yali participants were significantly less willing than Western participants to sacrifice one person to save five others in the moral dilemma. More specifically, the difference was so large that the odds of pushing the tree were approximately 73% smaller for a Papuan participant than for a Canadian one.
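A quick sketch of the arithmetic behind "odds approximately 73% smaller": the figure is an odds ratio, not a difference in percentages. The proportions below are hypothetical, chosen only to reproduce a reduction near 73%; they are not the paper's actual cell counts.

```python
# Sketch: how an "odds 73% smaller" claim is derived from
# group-level proportions. Hypothetical numbers only.

def odds(p):
    """Convert a probability of acting into odds of acting."""
    return p / (1 - p)

p_canadian = 0.60   # hypothetical share of Canadians willing to push
p_yali = 0.29       # hypothetical share of Yali willing to push

odds_ratio = odds(p_yali) / odds(p_canadian)
reduction = 1 - odds_ratio   # close to 0.73, i.e., "odds 73% smaller"
```

Note that odds shrink faster than probabilities: a drop from 60% to 29% willingness corresponds to a much larger (roughly 73%) drop in odds.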

Our findings reflect cultural differences between the Western and Yali participants, which are illustrated by the two most common explanations provided by Papuans immediately after the experiment. First, owing to the extremely harsh consequences of causing someone’s death in Yali society, our Papuan participants did not want to expose themselves to any potential trouble and were, therefore, unwilling to take any action in the tree dilemma. The rules of conduct in Yali society mean that a person accused of contributing to someone’s death is killed. However, the whole extended family of the blamed individual, and even their village, are also in danger of death (Koch, 1974). This is because the relatives of the deceased person are obliged to compensate for the wrongdoing by killing the same or a greater number of persons.

Another common explanation was related to religion. The Yali often argued that people should not interfere with the divine decision about someone’s life and death (e.g., “I’m not God, so I can’t make the decision”). Hence, although reason may suggest an action as appropriate, religion may suggest otherwise, with religious believers deciding in favor of the latter (Piazza & Landy, 2013). In turn, more traditional populations may appeal to religion more than secular, modern WEIRD populations do.

Monday, January 10, 2022

Sequential decision-making impacts moral judgment: How iterative dilemmas can expand our perspective on sacrificial harm

D. H. Bostyn and A. Roets
Journal of Experimental Social Psychology
Volume 98, January 2022, 104244


When are sacrificial harms morally appropriate? Traditionally, research within moral psychology has investigated this issue by asking participants to render moral judgments on batteries of single-shot, sacrificial dilemmas. Each of these dilemmas has its own set of targets and describes a situation independent from those described in the other dilemmas. Every decision that participants are asked to make thus takes place within its own, separate moral universe. As a result, people's moral judgments can only be influenced by what happens within that specific dilemma situation. This research methodology ignores that moral judgments are interdependent and that people might try to balance multiple moral concerns across multiple decisions. In the present series of studies we present participants with iterative versions of sacrificial dilemmas that involve the same set of targets across multiple iterations. Using this novel approach, and across five preregistered studies (total n = 1890), we provide clear evidence that a) responding to dilemmas in a sequential, iterative manner impacts the type of moral judgments that participants favor and b) that participants' moral judgments are not only motivated by the desire to refrain from harming others (usually labelled as deontological judgment), or a desire to minimize harms (utilitarian judgment), but also by a desire to spread out harm across all possible targets.


• Research on sacrificial harm usually asks participants to judge single-shot dilemmas.

• We investigate sacrificial moral dilemma judgment in an iterative context.

• Sequential decision making impacts moral preferences.

• Many participants express a non-utilitarian concern for the overall spread of harm.

Moral deliberation in iterative contexts

The iterative lens we have adopted prompts some intriguing questions about the nature of moral deliberation in the context of sacrificial harm. Existing theoretical models on sacrificial harm can be described as ‘competition models’ (for instance, Conway & Gawronski, 2013; Gawronski et al., 2017; Greene et al., 2001, 2004; Hennig & Hütter, 2020). These models argue that opposing psychological processes compete to deliver a specific moral judgment and that the process that wins out will determine the nature of that moral judgment. As such, these models presume that moral deliberation is about deciding whether to refrain from harm or to minimize harm in a mutually exclusive manner. Even if participants are tempted by both options, eventually, their judgment settles wholly on one or the other. This is sensible in the context of non-iterative dilemmas in which outcomes hinge on a single decision, but is it equally sensible in iterative contexts?

Consider the results of Study 4. In this study, we asked (a subset of) participants how many shocks they would divert out of a total of six shocks. Interestingly, 32% of these participants decided to divert a single shock out of the six (see Fig. 6), thus shocking the individual once, and the group five times. How should such a decision be interpreted? These participants did not fully refrain from harming others, nor did they fully minimize harm, nor did they spread harm in the most balanced of ways. Responses like this seem to straddle different moral concerns. While future research will need to corroborate these findings, we suggest that such responses cannot be explained by competition models but necessitate theoretical models that explicitly take into account that participants might strive to strike an (idiosyncratic) pluralistic balance between multiple moral concerns.

Thursday, September 24, 2020

A Failure of Empathy Led to 200,000 Deaths. It Has Deep Roots.

Olga Khazan
The Atlantic
Originally published 22 September 2020

Here is an excerpt:

Indeed, doctors follow a similar logic. In a May paper in the New England Journal of Medicine, a group of doctors from different countries suggested that hospitals consider prioritizing younger patients if they are forced to ration ventilators. “Maximizing benefits requires consideration of prognosis—how long the patient is likely to live if treated—which may mean giving priority to younger patients and those with fewer coexisting conditions,” they wrote. Perhaps, on a global scale, we’ve internalized the idea that the young matter more than the old.

The Moral Machine is not without its criticisms. Some psychologists say that the trolley problem, a similar and more widely known moral dilemma, is too silly and unrealistic to say anything about our true ethics. In a response to the Moral Machine experiment, another group of researchers conducted a comparable study and found that people actually prefer to treat everyone equally, if given the option to do so. In other words, people didn’t want to kill the elderly; they just opted to do so over killing young people, when pressed. (In that experiment, though, people still would kill the criminals.) Shariff says these findings simply show that people don’t like dilemmas. Given the option, anyone would rather say “treat everybody equally,” just so they don’t have to decide.

Bolstering that view, in another recent paper, which has not yet been peer-reviewed, people preferred giving a younger hypothetical COVID-19 patient an in-demand ventilator rather than an older one. They did this even when they were told to imagine themselves as potentially being the older patient who would therefore be sacrificed. The participants were hidden behind a so-called veil of ignorance—told they had a “50 percent chance of being a 65-year-old who gets to live another 15 years, and a 50 percent chance of dying at age 25.” That prompt made the participants favor the young patient even more. When told to look at the situation objectively, saving young lives seemed even better.

Tuesday, February 18, 2020

Is it okay to sacrifice one person to save many? How you answer depends on where you’re from.

Sigal Samuel
Originally posted 24 Jan 2020

Here is an excerpt:

It turns out that people across the board, regardless of their cultural context, give the same response when they’re asked to rank the moral acceptability of acting in each case. They say Switch is most acceptable, then Loop, then Footbridge.

That’s probably because in Switch, the death of the worker is an unfortunate side effect of the action that saves the five, whereas in Footbridge, the death of the large man is not a side effect but a means to an end — and it requires the use of personal force against him.


The info is here.

Monday, December 3, 2018

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.


In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.

The article is here.

Friday, October 27, 2017

Is utilitarian sacrifice becoming more morally permissible?

Ivar R. Hannikainen, Edouard Machery, & Fiery A. Cushman
Cognition, Volume 170, January 2018, Pages 95-101


A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.

Here is a portion of the Discussion section:

A vibrant discussion among philosophers and cognitive scientists has focused on distinguishing the virtues and pitfalls of the human moral faculty (Bloom, 2017; Greene, 2014; Singer, 2005). On a pessimistic note, our results dovetail with evidence about the socialization and development of recent cohorts (e.g., Shonkoff et al., 2012): Utilitarian judgment has been shown to correlate with Machiavellian and psychopathic traits (Bartels & Pizarro, 2011), and also with the reduced capacity to distinguish felt emotions (Patil & Silani, 2014). At the same time, leading theories credit highly acclaimed instances of moral progress to the exercise of rational scrutiny over prevailing moral norms (Greene, 2014; Singer, 2005), and the persistence of parochialism and prejudice to the unbridled command of intuition (Bloom, 2017). From this perspective, greater disapproval of intuitive deontological principles among recent cohorts may stem from the documented rise in cognitive abilities (i.e., the Flynn effect; see Pietschnig & Voracek, 2015) and foreshadow an expanding commitment to the welfare-maximizing resolution of contemporary moral challenges.

Tuesday, July 18, 2017

Human decisions in moral dilemmas are largely described by Utilitarianism

Anja Faulhaber, Anke Dittmer, Felix Blind, and others


Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in a utilitarian way, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past years; especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as the driver in a virtual reality environment. Participants had to make decisions between two discrete options: driving on one of two lanes where different obstacles came into view. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, we tested the influence of a sidewalk as a potential safe harbor and a condition implicating a self-sacrifice. Results showed that subjects, in general, decided in a utilitarian manner, sparing the highest number of avatars possible with a limited influence of the other variables. Our findings support that people’s behavior is in line with the utilitarian approach to moral decision making. This may serve as a guideline for the implementation of moral decisions in ADVs.

The article is here.

Wednesday, October 12, 2016

Utilitarian preferences or action preferences? De-confounding action and moral code in sacrificial dilemmas

Damien L. Crone & Simon M. Laham
Personality and Individual Differences, Volume 104, January 2017, Pages 476-481


A large literature in moral psychology investigates utilitarian versus deontological moral preferences using sacrificial dilemmas (e.g., the Trolley Problem) in which one can endorse harming one person for the greater good. The validity of sacrificial dilemma responses as indicators of one's preferred moral code is a neglected topic of study. One underexplored cause for concern is that standard sacrificial dilemmas confound the endorsement of specific moral codes with the endorsement of action such that endorsing utilitarianism always requires endorsing action. Two studies show that, after de-confounding these factors, the tendency to endorse action appears about as predictive of sacrificial dilemma responses as one's preference for a particular moral code, suggesting that, as commonly used, sacrificial dilemma responses are poor indicators of moral preferences. Interestingly however, de-confounding action and moral code may provide a more valid means of inferring one's preferred moral code.

The article is here.

Tuesday, August 12, 2014

Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology

By C. W. Bauman, A. P. McGraw, D. M. Bartels, and C. Warren


Sacrificial dilemmas, especially trolley problems, have rapidly become the most recognizable scientific exemplars of moral situations; they are now a familiar part of the psychological literature and are featured prominently in textbooks and the popular press. We are concerned that studies of sacrificial dilemmas may lack experimental, mundane, and psychological realism and therefore suffer from low external validity. Our apprehensions stem from three observations about trolley problems and other similar sacrificial dilemmas: (i) they are amusing rather than sobering, (ii) they are unrealistic and unrepresentative of the moral situations people encounter in the real world, and (iii) they do not elicit the same psychological processes as other moral situations. We believe it would be prudent to use more externally valid stimuli when testing descriptive theories that aim to provide comprehensive accounts of moral judgment and behavior.

The entire paper is here.