Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Utilitarianism.

Friday, October 6, 2023

Taking the moral high ground: Deontological and absolutist moral dilemma judgments convey self-righteousness

Weiss, A., Burgmer, P., Rom, S. C., & Conway, P. (2024). 
Journal of Experimental Social Psychology, 110, 104505.

Abstract

Individuals who reject sacrificial harm to maximize overall outcomes, consistent with deontological (vs. utilitarian) ethics, appear warmer, more moral, and more trustworthy. Yet, deontological judgments may not only convey emotional reactions, but also strict adherence to moral rules. We therefore hypothesized that people view deontologists as more morally absolutist and hence self-righteous—as perceiving themselves as morally superior. In addition, both deontologists and utilitarians who base their decisions on rules (vs. emotions) should appear more self-righteous. Four studies (N = 1254) tested these hypotheses. Participants perceived targets as more self-righteous when they rejected (vs. accepted) sacrificial harm in classic moral dilemmas where harm maximizes outcomes (i.e., deontological vs. utilitarian judgments), but not parallel cases where harm fails to maximize outcomes (Study 1). Preregistered Study 2 replicated the focal effect, additionally indicating mediation via perceptions of moral absolutism. Study 3 found that targets who reported basing their deontological judgments on rules, compared to emotional reactions or when processing information was absent, appeared particularly self-righteous. Preregistered Study 4 included both deontological and utilitarian targets and manipulated whether their judgments were based on rules versus emotion (specifically sadness). Grounding either moral position in rules conveyed self-righteousness, while communicating emotions was a remedy. Furthermore, participants perceived targets as more self-righteous the more targets deviated from their own moral beliefs. Studies 3 and 4 additionally examined participants' self-disclosure intentions. In sum, deontological dilemma judgments may convey an absolutist, rule-focused view of morality, but any judgment stemming from rules (in contrast to sadness) promotes self-righteousness perceptions.


My quick take:

The authors also found that people were more likely to perceive deontologists as self-righteous if they based their judgments on rules rather than emotions. This suggests that it is not just the deontological judgment itself that leads to perceptions of self-righteousness, but also the way in which the judgment is made.

Overall, the findings of this study suggest that people who make deontological judgments in moral dilemmas are more likely to be perceived as self-righteous. This is because deontological judgments are often seen as reflecting a rigid and absolutist view of morality, which can come across as arrogant or condescending.

It is important to note that the findings of this study do not mean that all deontologists are self-righteous. However, the study does suggest that people should be aware of how their moral judgments may be perceived by others. If you want to avoid being perceived as self-righteous, it may be helpful to explain your reasons for making a deontological judgment, and to acknowledge the emotional impact of the situation.

Friday, September 9, 2022

Online Moral Conformity: How Powerful is a Group of Online Strangers When Influencing an Individual’s Moral Judgments?

Paruzel-Czachura, M., Wojciechowska, D., 
& Bostyn, D. H. (2022, May 21). 
https://doi.org/10.31234/osf.io/4g2bn

Abstract

People make moral decisions every day, and when making them, they may be influenced by their companions (the so-called moral conformity effect). Nowadays, people make many decisions in online environments like video meetings. In the current preregistered experiment, we studied the online moral conformity effect. We applied an Asch conformity paradigm in an online context by asking participants (N = 120) to reply to sacrificial moral dilemmas through the online video communication tool Zoom when sitting in the “virtual” room with strangers (confederates instructed on how to answer; experimental condition) or when sitting alone (control condition). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in the aggregate.

Discussion       

Social conformity is a well-known phenomenon (Asch, 1951, 1952, 1955, 1956; Sunstein, 2019). Moreover, past research has demonstrated that conformity effects occur for moral issues as well (Aramovich et al., 2012; Bostyn & Roets, 2017; Crutchfield, 1955; Kelly et al., 2017; Kundu & Cummins, 2013; Lisciandra et al., 2013). However, the extent to which moral conformity occurs when people interact in digital spaces, such as video conferencing software, has not yet been investigated.

We conducted a well-powered experimental study to determine if the effect of online moral conformity exists. Two study conditions were used: an experimental one in which study participants were answering along with a group of confederates and a control condition in which study participants were answering individually. In both conditions, participants were invited to a video meeting and asked to orally respond to a set of moral dilemmas with their cameras turned on. All questions and study conditions were the same, apart from the presence of other people in the experimental condition. In the experimental condition, importantly, the experimenter pretended that all people were study participants, but in fact, only the last person was an actual study participant, and all four other participants were confederates who were trained to answer in a specific manner. Confederates answered contrary to what most people had decided in past studies (Gawronski et al., 2017; Greene et al., 2008; Körner et al., 2020). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in aggregate.

Wednesday, February 2, 2022

Psychopathy and Moral-Dilemma Judgment: An Analysis Using the Four-Factor Model of Psychopathy and the CNI Model of Moral Decision-Making

Luke, D. M., Neumann, C. S., & Gawronski, B.
(2021). Clinical Psychological Science. 
https://doi.org/10.1177/21677026211043862

Abstract

A major question in clinical and moral psychology concerns the nature of the commonly presumed association between psychopathy and moral judgment. In the current preregistered study (N = 443), we aimed to address this question by examining the relation between psychopathy and responses to moral dilemmas pitting consequences for the greater good against adherence to moral norms. To provide more nuanced insights, we measured four distinct facets of psychopathy and used the CNI model to quantify sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction over action (I) in responses to moral dilemmas. Psychopathy was associated with a weaker sensitivity to moral norms, which showed unique links to the interpersonal and affective facets of psychopathy. Psychopathy did not show reliable associations with either sensitivity to consequences or general preference for inaction over action. Implications of these findings for clinical and moral psychology are discussed.
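A note on the CNI model mentioned in the abstract: it is a multinomial-processing-tree model. As a rough sketch only, and assuming the standard formulation from Gawronski et al. (2017) rather than anything specific to this study, a given response is driven by consequences with probability C; otherwise by moral norms with probability N; and otherwise by a general preference for inaction with probability I. The Python snippet below illustrates how those three parameters generate a predicted probability of choosing action in each of the four dilemma variants. The function name and parameter values are illustrative only, not estimates reported in the paper.

# Illustrative sketch of the CNI processing-tree logic (after Gawronski et al., 2017).
# Parameter values are hypothetical, NOT estimates from Luke, Neumann, & Gawronski (2021).

def p_action(c, n, i, norm_proscribes_action, benefits_exceed_costs):
    """Predicted probability of choosing action in one dilemma variant.

    c: probability the response is driven by consequences
    n: probability the response is driven by moral norms (given not consequences)
    i: general preference for inaction (given neither consequences nor norms)
    """
    # Driven by consequences: act only if acting maximizes outcomes.
    by_consequences = c * (1.0 if benefits_exceed_costs else 0.0)
    # Driven by norms: act only if the norm prescribes (rather than proscribes) action.
    by_norms = (1 - c) * n * (0.0 if norm_proscribes_action else 1.0)
    # Neither: fall back on the general action/inaction preference.
    by_default = (1 - c) * (1 - n) * (1 - i)
    return by_consequences + by_norms + by_default

if __name__ == "__main__":
    # Hypothetical parameters for a respondent with weak norm sensitivity,
    # loosely mirroring the pattern the abstract describes for psychopathy.
    c, n, i = 0.4, 0.2, 0.5
    for norm_proscribes in (True, False):
        for benefits_exceed in (True, False):
            p = p_action(c, n, i, norm_proscribes, benefits_exceed)
            print(f"norm proscribes action={norm_proscribes}, "
                  f"benefits exceed costs={benefits_exceed}: P(action)={p:.2f}")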

From the Discussion

In support of our hypotheses, general psychopathy scores and a superordinate latent variable (representing the broad syndrome of psychopathy) showed significant negative relations with sensitivity to moral norms, which suggests that people with elevated psychopathic traits were less sensitive to moral norms in their responses to moral dilemmas in comparison with other people. Further analyses at the facet level suggested that sensitivity to moral norms was uniquely associated with the interpersonal-affective facets of psychopathy. Both of these findings persisted when controlling for gender. As predicted, the antisocial facet showed a negative zero-order correlation with sensitivity to moral norms, but this association fell to nonsignificance when controlling for other facets of psychopathy and gender. At the manifest variable level, neither general psychopathy scores nor the four facets showed reliable relations with either sensitivity to consequences or general preference for inaction over action.

(cut)

More broadly, the current findings have important implications for both clinical and moral psychology. For clinical psychology, our findings speak to ongoing questions about whether people with elevated levels of psychopathy exhibit disturbances in moral judgment. In a recent review of the literature on psychopathy and moral judgment, Larsen et al. (2020) claimed there was “no consistent, well-replicated evidence of observable deficits in . . . moral judgment” (p. 305). However, a notable limitation of this review is that its analysis of moral-dilemma research focused exclusively on studies that used the traditional approach. Consistent with past research using the CNI model (e.g., Gawronski et al., 2017; Körner et al., 2020; Luke & Gawronski, 2021a) and in contrast to Larsen et al.’s conclusion, the current findings indicate substantial deviations in moral-dilemma judgments among people with elevated psychopathic traits, particularly conformity to moral norms.

Saturday, September 11, 2021

Virtues for Real-World Utilitarians

Schubert, S., & Caviola, L. (2021, August 3)
https://doi.org/10.31234/osf.io/w52zm

Abstract

Utilitarianism says that we should maximize aggregate well-being, impartially considered. But utilitarians that try to apply this principle will encounter many psychological obstacles, ranging from selfishness to moral biases to limits to epistemic and instrumental rationality. In this chapter, we argue that utilitarians should cultivate a number of virtues that allow them to overcome the most important of these obstacles. We select virtues based on two criteria. First, the virtues should be impactful: they should greatly increase your impact (according to utilitarian standards), if you acquire them. Second, the virtues should be acquirable: they should be psychologically realistic to acquire. Using these criteria, we argue that utilitarians should prioritize six virtues: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. Finally, we discuss how our suggested list of virtues compares with standard conceptions of utilitarianism, as well as with common sense morality.

Conclusions 

We have suggested six virtues that utilitarians should cultivate to overcome psychological obstacles to utilitarianism and maximize their impact in the real world: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. To reiterate, this list is tentative, and should be seen more as a starting point for further research than as a well-consolidated set of findings. It is plausible that some of our suggested virtues should be refined, and that we should add further virtues to the list. We hope it will inspire a debate among philosophers and psychologists about what virtues utilitarians should prioritize the most.

Saturday, June 5, 2021

Absolutely Right and Relatively Good: Consequentialists See Bioethical Disagreement in a Relativist Light

H. Viciana, I. R. Hannikainen & D. Rodríguez-Arias 
(2021) AJOB Empirical Bioethics
DOI: 10.1080/23294515.2021.1907476

Abstract

Background
Contemporary societies are rife with moral disagreement, resulting in recalcitrant disputes on matters of public policy. In the context of ongoing bioethical controversies, are uncompromising attitudes rooted in beliefs about the nature of moral truth?

Methods
To answer this question, we conducted both exploratory and confirmatory studies, with both a convenience and a nationally representative sample (total N = 1501), investigating the link between people’s beliefs about moral truth (their metaethics) and their beliefs about moral value (their normative ethics).

Results
Across various bioethical issues (e.g., medically-assisted death, vaccine hesitancy, surrogacy, mandatory organ conscription, or genetically modified crops), consequentialist attitudes were associated with weaker beliefs in an objective moral truth. This association was not explained by domain-general reflectivity, theism, personality, normative uncertainty, or subjective knowledge.

Conclusions
We find a robust link between the way people characterize prescriptive disagreements and their sensibility to consequences. In addition, both societal consensus and personal conviction contribute to objectivist beliefs, but these effects appear to be asymmetric, i.e., stronger for opposition than for approval.

From the Discussion

The evidence is now strong that individuals tend to embrace rather diverse metaethical attitudes regarding moral disagreement, depending on the issues at stake (e.g., Cova & Ravat 2008; Pölzler & Cole-Wright 2020). Thus, we believe that the time is ripe for empirical bioethics to update this assumption.

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572

Abstract

Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support to the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support to its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Thursday, September 24, 2020

A Failure of Empathy Led to 200,000 Deaths. It Has Deep Roots.

Olga Khazan
The Atlantic
Originally published 22 September 20

Here is an excerpt:

Indeed, doctors follow a similar logic. In a May paper in the New England Journal of Medicine, a group of doctors from different countries suggested that hospitals consider prioritizing younger patients if they are forced to ration ventilators. “Maximizing benefits requires consideration of prognosis—how long the patient is likely to live if treated—which may mean giving priority to younger patients and those with fewer coexisting conditions,” they wrote. Perhaps, on a global scale, we’ve internalized the idea that the young matter more than the old.

The Moral Machine is not without its criticisms. Some psychologists say that the trolley problem, a similar and more widely known moral dilemma, is too silly and unrealistic to say anything about our true ethics. In a response to the Moral Machine experiment, another group of researchers conducted a comparable study and found that people actually prefer to treat everyone equally, if given the option to do so. In other words, people didn’t want to kill the elderly; they just opted to do so over killing young people, when pressed. (In that experiment, though, people still would kill the criminals.) Shariff says these findings simply show that people don’t like dilemmas. Given the option, anyone would rather say “treat everybody equally,” just so they don’t have to decide.

Bolstering that view, in another recent paper, which has not yet been peer-reviewed, people preferred giving a younger hypothetical COVID-19 patient an in-demand ventilator rather than an older one. They did this even when they were told to imagine themselves as potentially being the older patient who would therefore be sacrificed. The participants were hidden behind a so-called veil of ignorance—told they had a “50 percent chance of being a 65-year-old who gets to live another 15 years, and a 50 percent chance of dying at age 25.” That prompt made the participants favor the young patient even more. When told to look at the situation objectively, saving young lives seemed even better.

Friday, August 7, 2020

Your Ancestors Knew Death in Ways You Never Will

Donald McNeil, Jr.
The New York Times
Originally posted 15 July 20

Here is the end:

As a result, New Yorkers took certain steps — sometimes very expensive and contentious, but all based on science: They dug sewers to pipe filth into the Hudson and East Rivers instead of letting it pool in the streets. In 1842, they built the Croton Aqueduct to carry fresh water to Manhattan. In 1910, they chlorinated its water to kill more germs. In 1912, they began requiring dairies to heat their milk because a Frenchman named Louis Pasteur had shown that doing so spared children from tuberculosis. Over time, they made smallpox vaccination mandatory.

Libertarians battled almost every step. Some fought sewers and water mains being dug through their properties, arguing that they owned perfectly good wells and cesspools. Some refused smallpox vaccines until the Supreme Court put an end to that in 1905, in Jacobson v. Massachusetts.

In the Spanish flu epidemic of 1918, many New Yorkers donned masks but 4,000 San Franciscans formed an Anti-Mask League. (The city’s mayor, James Rolph, was fined $50 for flouting his own health department’s mask order.) Slowly, science prevailed, and death rates went down.

Today, Americans are facing the same choice our ancestors did: We can listen to scientists and spend money to save lives, or we can watch our neighbors die.

“The people who say ‘Let her rip, let’s go for herd immunity’ — that’s just public-health nihilism,” said Dr. Joia S. Mukherjee, the chief medical officer of Partners in Health, a medical charity fighting the virus. “How many deaths do we have to accept to get there?”

A vaccine may be close at hand, and so may treatments like monoclonal antibodies that will cut our losses.

Till then, we need not accept death as our overlord — we can simply hang on and outlast him.

The info is here.

Tuesday, July 7, 2020

Can COVID-19 re-invigorate ethics?

Louise Campbell
BMJ Blogs
Originally posted 26 May 20

The COVID-19 pandemic has catapulted ethics into the spotlight.  Questions previously deliberated about by small numbers of people interested in or affected by particular issues are now being posed with an unprecedented urgency right across the public domain.  One of the interesting facets of this development is the way in which the questions we are asking now draw attention, not just to the importance of ethics in public life, but to the very nature of ethics as practice, namely ethics as it is applied to specific societal and environmental concerns.

Some of these questions which have captured the public imagination were originally debated specifically within healthcare circles and at the level of health policy: what measures must be taken to prevent hospitals from becoming overwhelmed if there is a surge in the number of people requiring hospitalisation?  How will critical care resources such as ventilators be prioritised if need outstrips supply?  In a crisis situation, will older people or people with disabilities have the same opportunities to access scarce resources, even though they may have less chance of survival than people without age-related conditions or disabilities?  What level of risk should healthcare workers be expected to assume when treating patients in situations in which personal protective equipment may be inadequate or unavailable?   Have the rights of patients with chronic conditions been traded off against the need to prepare the health service to meet a demand which to date has not arisen?  Will the response to COVID-19 based on current evidence compromise the capacity of the health system to provide routine outpatient and non-emergency care to patients in the near future?

Other questions relate more broadly to the intersection between health and society: how do we calculate the harms of compelling entire populations to isolate themselves from loved ones and from their communities?  How do we balance these harms against the risks of giving people more autonomy to act responsibly?  What consideration is given to the fact that, in an unequal society, restrictions on liberty will affect certain social groups in disproportionate ways?  What does the catastrophic impact of COVID-19 on residents of nursing homes say about our priorities as a society and to what extent is their plight our collective responsibility?  What steps have been taken to protect marginalised communities who are at greater risk from an outbreak of infectious disease: for example, people who have no choice but to coexist in close proximity with one another in direct provision centres, in prison settings and on halting sites?

The info is here.

Sunday, July 5, 2020

Utilitarianism and the pandemic

J. Savulescu, I. Persson, & D. Wilkinson
Bioethics
Originally published 20 May 20

Abstract

There are no egalitarians in a pandemic. The scale of the challenge for health systems and public policy means that there is an ineluctable need to prioritize the needs of the many. It is impossible to treat all citizens equally, and a failure to carefully consider the consequences of actions could lead to massive preventable loss of life. In a pandemic there is a strong ethical need to consider how to do most good overall. Utilitarianism is an influential moral theory that states that the right action is the action that is expected to produce the greatest good. It offers clear operationalizable principles. In this paper we provide a summary of how utilitarianism could inform two challenging questions that have been important in the early phase of the pandemic: (a) Triage: which patients should receive access to a ventilator if there is overwhelming demand outstripping supply? (b) Lockdown: how should countries decide when to implement stringent social restrictions, balancing preventing deaths from COVID‐19 with causing deaths and reductions in well‐being from other causes? Our aim is not to argue that utilitarianism is the only relevant ethical theory, or in favour of a purely utilitarian approach. However, clearly considering which options will do the most good overall will help societies identify and consider the necessary cost of other values. Societies may choose either to embrace or not to embrace the utilitarian course, but with a clear understanding of the values involved and the price they are willing to pay.

The info is here.

Tuesday, May 19, 2020

Uncovering the moral heuristics of altruism: A philosophical scale

Friedland J, Emich K, Cole BM (2020)
PLoS ONE 15(3): e0229124.
https://doi.org/10.1371/journal.pone.0229124

Abstract

Extant research suggests that individuals employ traditional moral heuristics to support their observed altruistic behavior; yet findings have largely been limited to inductive extrapolation and rely on relatively few traditional frames in so doing, namely, deontology in organizational behavior and virtue theory in law and economics. Given that these and competing moral frames such as utilitarianism can manifest as identical behavior, we develop a moral framing instrument—the Philosophical Moral-Framing Measure (PMFM)—to expand and distinguish traditional frames associated and disassociated with observed altruistic behavior. The validation of our instrument based on 1015 subjects in 3 separate real stakes scenarios indicates that heuristic forms of deontology, virtue-theory, and utilitarianism are strongly related to such behavior, and that egoism is an inhibitor. It also suggests that deontic and virtue-theoretical frames may be commonly perceived as intertwined and opens the door for new research on self-abnegation, namely, a perceived moral obligation toward suffering and self-denial. These findings hold the potential to inform ongoing conversations regarding organizational citizenship and moral crowding out, namely, how financial incentives can undermine altruistic behavior.

The research is here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly-cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition on this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Tuesday, January 15, 2019

The ends justify the meanness: An investigation of psychopathic traits and utilitarian moral endorsement

Justin Balasha and Diana M. Falkenbach
Personality and Individual Differences
Volume 127, 1 June 2018, Pages 127-132

Abstract

Although psychopathy has traditionally been synonymous with immorality, little research exists on the ethical reasoning of psychopathic individuals. Recent examination of psychopathy and utilitarianism suggests that psychopaths' moral decision-making differs from nonpsychopaths (Koenigs et al., 2012). The current study examined the relationship between psychopathic traits (PPI-R, Lilienfeld & Widows, 2005; TriPM, Patrick, 2010) and utilitarian endorsement (moral dilemmas, Greene et al., 2001) in a college sample (n = 316). The relationships between utilitarian decisions and triarchic dimensions were explored and empathy and aggression were examined as mediating factors. Hypotheses were partially supported, with Disinhibition and Meanness traits relating to personal utilitarian decisions; aggression partially mediated the relationship between psychopathic traits and utilitarian endorsements. Implications and future directions are further discussed.

Highlights

• Authors examined the relationship between psychopathy and utilitarian decision-making.

• Empathy and aggression were explored as mediating factors.

• Disinhibition and Meanness were positively related to personal utilitarian decisions.

• Meanness, Coldheartedness, and PPI-R-II were associated with personal utilitarian decisions.

• Aggression partially mediated the relationship between psychopathy and utilitarian decisions.

The research can be found here.

Tuesday, June 19, 2018

Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas

Dries H. Bostyn, Sybren Sevenhant, and Arne Roets
Psychological Science 
First Published May 9, 2018

Abstract

Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.

The research is here.

Tuesday, January 30, 2018

Utilitarianism’s Missing Dimensions

Erik Parens
Quillette
Originally published on January 3, 2018

Here is an excerpt:

Missing the “Impartial Beneficence” Dimension of Utilitarianism

In a word, the Oxfordians argue that, whereas utilitarianism in fact has two key dimensions, the Harvardians have been calling attention to only one. A significant portion of the new paper is devoted to explicating a new scale they have created—the Oxford Utilitarianism Scale—which can be used to measure how utilitarian someone is or, more precisely, how closely a person’s moral decision-making tendencies approximate classical (act) utilitarianism. The measure is based on how much one agrees with statements such as, “If the only way to save another person’s life during an emergency is to sacrifice one’s own leg, then one is morally required to make this sacrifice,” and “It is morally right to harm an innocent person if harming them is a necessary means to helping several other innocent people.”

According to the Oxfordians, while utilitarianism is a unified theory, its two dimensions push in opposite directions. The first, positive dimension of utilitarianism is “impartial beneficence.” It demands that human beings adopt “the point of view of the universe,” from which none of us is worth more than another. This dimension of utilitarianism requires self-sacrifice. Once we see that children on the other side of the planet are no less valuable than our own, we grasp our obligation to sacrifice for those others as we would for our own. Those of us who have more than we need to flourish have an obligation to give up some small part of our abundance to promote the well-being of those who don’t have what they need.

The Oxfordians dub the second, negative dimension of utilitarianism “instrumental harm,” because it demands that we be willing to sacrifice some innocent others if doing so is necessary to promote the greater good. So, we should be willing to sacrifice the well-being of one person if, in exchange, we can secure the well-being of a larger number of others. This is of course where the trolleys come in.

The article is here.

Saturday, January 13, 2018

The costs of being consequentialist: Social perceptions of those who harm and help for the greater good

Everett, J. A. C., Faber, N. S., Savulescu, J., & Crockett, M. (2017, December 15).
The Cost of Being Consequentialist. Retrieved from psyarxiv.com/a2kx6

Abstract

Previous work has demonstrated that people are more likely to trust “deontological” agents who reject instrumentally harming one person to save a greater number than “consequentialist” agents who endorse such harm in pursuit of the greater good. It has been argued that these differential social perceptions of deontological vs. consequentialist agents could explain the higher prevalence of deontological moral intuitions. Yet consequentialism involves much more than decisions to endorse instrumental harm: another critical dimension is impartial beneficence, defined as the impartial maximization of the greater good, treating the well-being of every individual as equally important. In three studies (total N = 1,634), we investigated preferences for deontological vs. consequentialist social partners in both the domains of instrumental harm and impartial beneficence, and consider how such preferences vary across different types of social relationships.  Our results demonstrate consistent preferences for deontological over consequentialist agents across both domains of instrumental harm and impartial beneficence: deontological agents were viewed as more moral and trustworthy, and were actually entrusted with more money in a resource distribution task. However, preferences for deontological agents were stronger when those preferences were revealed via aversion to instrumental harm than impartial beneficence. Finally, in the domain of instrumental harm, deontological agents were uniformly preferred across a variety of social roles, but in the domain of impartial beneficence, people prefer deontologists for roles requiring direct interaction (friend, spouse, boss) but not for more distant roles with little-to-no personal interaction (political leader).

The research is here.

Friday, October 27, 2017

Is utilitarian sacrifice becoming more morally permissible?

Ivar R. Hannikainen, Edouard Machery, & Fiery A. Cushman
Cognition
Volume 170, January 2018, Pages 95-101

Abstract

A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.


Here is a portion of the Discussion section:

A vibrant discussion among philosophers and cognitive scientists has focused on distinguishing the virtues and pitfalls of the human moral faculty (Bloom, 2017; Greene, 2014; Singer, 2005). On a pessimistic note, our results dovetail with evidence about the socialization and development of recent cohorts (e.g., Shonkoff et al., 2012): Utilitarian judgment has been shown to correlate with Machiavellian and psychopathic traits (Bartels & Pizarro, 2011), and also with the reduced capacity to distinguish felt emotions (Patil & Silani, 2014). At the same time, leading theories credit highly acclaimed instances of moral progress to the exercise of rational scrutiny over prevailing moral norms (Greene, 2014; Singer, 2005), and the persistence of parochialism and prejudice to the unbridled command of intuition (Bloom, 2017). From this perspective, greater disapproval of intuitive deontological principles among recent cohorts may stem from the documented rise in cognitive abilities (i.e., the Flynn effect; see Pietschnig & Voracek, 2015) and foreshadow an expanding commitment to the welfare-maximizing resolution of contemporary moral challenges.

Sunday, May 7, 2017

Individual Differences in Moral Disgust Do Not Predict Utilitarian Judgments, Sexual and Pathogen Disgust Do

Michael Laakasuo, Jukka Sundvall & Marianna Drosinou
Scientific Reports 7, Article number: 45526 (2017)
doi:10.1038/srep45526

Abstract

The role of emotional disgust and disgust sensitivity in moral judgment and decision-making has been debated intensively for over 20 years. Until very recently, there were two main evolutionary narratives for this rather puzzling association. One of the models suggests that it developed through some form of group selection mechanism, where the internal norms of the groups were acting as pathogen safety mechanisms. Another model suggested that these mechanisms were developed through hygiene norms, which were piggybacking on pathogen disgust mechanisms. In this study we present another alternative, namely that this mechanism might have evolved through sexual disgust sensitivity. We note that though the role of disgust in moral judgment has been questioned recently, few studies have taken disgust sensitivity into account. We present data from a large sample (N = 1300) where we analyzed the associations between The Three Domain Disgust Scale and the most commonly used 12 moral dilemmas measuring utilitarian/deontological preferences with Structural Equation Modeling. Our results indicate that of the three domains of disgust, only sexual disgust is associated with more deontological moral preferences. We also found that pathogen disgust was associated with more utilitarian preferences. Implications of the findings are discussed.

The article is here.

Sunday, February 26, 2017

Why We Love Moral Rigidity

Matthew Hutson
Scientific American
Originally published on November 1, 2016

Here is an excerpt:

We don't evaluate others based on their philosophical ideologies per se, Pizarro says. Rather we look at how others' moral decisions “express the kind of motives, commitments and emotions we want people to have.” Coolheaded calculation has its benefits, but we want our friends to at least flinch before personally harming others. Indeed, people in the study who had argued for pushing the man were trusted more when they claimed that the decision was difficult.

Politicians and executives should pay heed. Leading requires making hard trade-offs—is a war or a cut in employee benefits worth the pain it inflicts? According to Pizarro, “you want your leader to genuinely have or at least be really good at displaying the right kinds of emotions when they're talking about that decision, to show that they didn't arrive at it callously.” Calmly weighing costs and benefits may do the most good for the most people, but it can also be a good way to lose friends.

The article is here.

Saturday, April 25, 2015

On the Normative Significance of Experimental Moral Psychology

Victor Kumar and Richmond Campbell
Philosophical Psychology 
Vol. 25, Iss. 3, 2012, 311-330.

Experimental research in moral psychology can be used to generate debunking arguments in ethics. Specifically, research can indicate that we draw a moral distinction on the basis of a morally irrelevant difference. We develop this naturalistic approach by examining a recent debate between Joshua Greene and Selim Berker. We argue that Greene’s research, if accurate, undermines attempts to reconcile opposing judgments about trolley cases, but that his attempt to debunk deontology fails. We then draw some general lessons about the possibility of empirical debunking arguments in ethics.

The entire article is here.