Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Showing posts with label Utilitarian.

Monday, January 10, 2022

Sequential decision-making impacts moral judgment: How iterative dilemmas can expand our perspective on sacrificial harm

D.H. Bostyn and A. Roets
Journal of Experimental Social Psychology
Volume 98, January 2022, 104244

Abstract

When are sacrificial harms morally appropriate? Traditionally, research within moral psychology has investigated this issue by asking participants to render moral judgments on batteries of single-shot, sacrificial dilemmas. Each of these dilemmas has its own set of targets and describes a situation independent from those described in the other dilemmas. Every decision that participants are asked to make thus takes place within its own, separate moral universe. As a result, people's moral judgments can only be influenced by what happens within that specific dilemma situation. This research methodology ignores that moral judgments are interdependent and that people might try to balance multiple moral concerns across multiple decisions. In the present series of studies we present participants with iterative versions of sacrificial dilemmas that involve the same set of targets across multiple iterations. Using this novel approach, and across five preregistered studies (total n = 1890), we provide clear evidence that (a) responding to dilemmas in a sequential, iterative manner impacts the type of moral judgments that participants favor and (b) participants' moral judgments are motivated not only by a desire to refrain from harming others (usually labelled deontological judgment) or a desire to minimize harms (utilitarian judgment), but also by a desire to spread out harm across all possible targets.

Highlights

• Research on sacrificial harm usually asks participants to judge single-shot dilemmas.

• We investigate sacrificial moral dilemma judgment in an iterative context.

• Sequential decision making impacts moral preferences.

• Many participants express a non-utilitarian concern for the overall spread of harm.


Moral deliberation in iterative contexts

The iterative lens we have adopted prompts some intriguing questions about the nature of moral deliberation in the context of sacrificial harm. Existing theoretical models on sacrificial harm can be described as ‘competition models’ (for instance, Conway & Gawronski, 2013; Gawronski et al., 2017; Greene et al., 2001, 2004; Hennig & Hütter, 2020). These models argue that opposing psychological processes compete to deliver a specific moral judgment and that the process that wins out will determine the nature of that moral judgment. As such, these models presume that the goal of moral deliberation is to decide whether to refrain from harm or minimize harm in a mutually exclusive manner. Even if participants are tempted by both options, eventually, their judgment settles wholly on one or the other. This is sensible in the context of non-iterative dilemmas in which outcomes hinge on a single decision, but is it equally sensible in iterative contexts?

Consider the results of Study 4. In this study, we asked (a subset of) participants how many shocks they would divert out of a total of six. Interestingly, 32% of these participants decided to divert a single shock out of the six (see Fig. 6), thus shocking the individual once and the group five times. How should such a decision be interpreted? These participants did not fully refrain from harming others, nor did they fully minimize harm, nor did they spread harm in the most balanced of ways. While future research will need to corroborate these findings, we suggest that responses like this, which seem to straddle multiple moral concerns, cannot be explained by competition models; they necessitate theoretical models that explicitly take into account that participants might strive to strike an (idiosyncratic) pluralistic balance between multiple moral concerns.
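
Editor's Note: to make the response space concrete, here is a minimal sketch (in Python, with hypothetical labels; the study's actual materials and instructions differ) of the seven ways six shocks can be split between the single individual and the group, and the moral pattern each split most resembles:

    # Illustrative only: enumerate allocations of six shocks between one
    # individual and a group target, as in an iterative sacrificial dilemma.
    TOTAL_SHOCKS = 6

    def describe(to_individual: int) -> str:
        """Label the moral pattern an allocation most resembles (hypothetical)."""
        to_group = TOTAL_SHOCKS - to_individual
        if to_individual == 0:
            pattern = "fully refrains from redirecting harm (deontological)"
        elif to_individual == TOTAL_SHOCKS:
            pattern = "fully minimizes harm to the group (utilitarian)"
        elif to_individual == TOTAL_SHOCKS // 2:
            pattern = "spreads harm most evenly across targets"
        else:
            pattern = "straddles several moral concerns at once"
        return f"individual: {to_individual}, group: {to_group} -> {pattern}"

    for k in range(TOTAL_SHOCKS + 1):
        print(describe(k))

The 32% response discussed above corresponds to to_individual = 1, which falls in the "straddles" band that competition models cannot accommodate.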

Friday, August 13, 2021

Moral dilemmas and trust in leaders during a global health crisis

Everett, J.A.C., Colombatto, C., Awad, E. et al. 
Nat Hum Behav (2021). 

Abstract

Trust in leaders is central to citizen compliance with public policies. One potential determinant of trust is how leaders resolve conflicts between utilitarian and non-utilitarian ethical principles in moral dilemmas. Past research suggests that utilitarian responses to dilemmas can both erode and enhance trust in leaders: sacrificing some people to save many others (‘instrumental harm’) reduces trust, while maximizing the welfare of everyone equally (‘impartial beneficence’) may increase trust. In a multi-site experiment spanning 22 countries on six continents, participants (N = 23,929) completed self-report (N = 17,591) and behavioural (N = 12,638) measures of trust in leaders who endorsed utilitarian or non-utilitarian principles in dilemmas concerning the COVID-19 pandemic. Across both the self-report and behavioural measures, endorsement of instrumental harm decreased trust, while endorsement of impartial beneficence increased trust. These results show how support for different ethical principles can impact trust in leaders, and inform effective public communication during times of global crisis.

Discussion

The COVID-19 pandemic has raised a number of moral dilemmas that engender conflicts between utilitarian and non-utilitarian ethical principles. Building on past work on utilitarianism and trust, we tested the hypothesis that endorsement of utilitarian solutions to pandemic dilemmas would impact trust in leaders. Specifically, in line with suggestions from previous work and case studies of public communications during the early stages of the pandemic, we predicted that endorsing instrumental harm would decrease trust in leaders, while endorsing impartial beneficence would increase trust.

Tuesday, December 15, 2020

(How) Do You Regret Killing One to Save Five? Affective and Cognitive Regret Differ After Utilitarian and Deontological Decisions

Goldstein-Greenwood J, et al.
Personality and Social Psychology Bulletin, 2020;46(9):1303-1317.
doi:10.1177/0146167219897662

Abstract

Sacrificial moral dilemmas, in which opting to kill one person will save multiple others, are definitionally suboptimal: Someone dies either way. Decision-makers, then, may experience regret about these decisions. Past research distinguishes affective regret, negative feelings about a decision, from cognitive regret, thoughts about how a decision might have gone differently. Classic dual-process models of moral judgment suggest that affective processing drives characteristically deontological decisions to reject outcome-maximizing harm, whereas cognitive deliberation drives characteristically utilitarian decisions to endorse outcome-maximizing harm. Consistent with this model, we found that people who made or imagined making sacrificial utilitarian judgments reliably expressed relatively more affective regret and sometimes expressed relatively less cognitive regret than those who made or imagined making deontological dilemma judgments. In other words, people who endorsed causing harm to save lives generally felt more distressed about their decision, yet less inclined to change it, than people who rejected outcome-maximizing harm.

General Discussion

Across four studies, we found that different sacrificial moral dilemma decisions elicit different degrees of affective and cognitive regret. We found robust evidence that utilitarian decision-makers who accept outcome-maximizing harm experience far more affective regret than their deontological decision-making counterparts who reject outcome-maximizing harm, and we found somewhat weaker evidence that utilitarian decision-makers experience less cognitive regret than deontological decision-makers. The significant interaction between dilemma decision and regret type predicted in H1 emerged both when participants freely endorsed dilemma decisions (Studies 1, 3, and 4) and when they were randomly assigned to imagine making a decision (Study 2). Hence, the present findings cannot simply be attributed to chronic differences in the types of regret that people who prioritize each decision experience. Moreover, we found tentative evidence for H2: Focusing on the counterfactual world in which they made the alternative decision attenuated utilitarian decision-makers' heightened affective regret compared with factual reflection, and reduced differences in affective regret between utilitarian and deontological decision-makers (Study 4). Furthermore, our findings do not appear attributable to impression management concerns, as there were no differences between public and private reports of regret.

Wednesday, August 5, 2020

A genetic profile of oxytocin receptor improves moral acceptability of outcome-maximizing harm in male insurance brokers

S. Palumbo, V. Mariotti, et al.
Behavioural Brain Research
Volume 392, 17 August 2020, 112681

Abstract

In recent years, conflicting findings have been reported in the scientific literature about the influence of dopaminergic, serotonergic and oxytocinergic gene variants on moral behavior. Here, we utilized a moral judgment paradigm to test the potential effects on moral choices of three polymorphisms of the oxytocin receptor (OXTR): rs53576, rs2268498 and rs1042770. We analyzed the influence of each single polymorphism and of genetic profiles obtained by different combinations of their genotypes in a sample of male insurance brokers (n = 129), as compared to control males (n = 109). Insurance brokers were significantly more oriented toward maximizing outcomes than control males; that is, they expressed the utilitarian attitude phenotype more strongly than controls. When analyzed individually, none of the selected variants influenced the responses to moral dilemmas. In contrast, a composite genetic profile that potentially increases OXTR activity was associated with higher moral acceptability in brokers. We hypothesize that this genetic profile promotes outcome-maximizing behavior in brokers by focusing their attention on what represents a greater good, that is, saving the highest number of people, even at the cost of sacrificing one individual. Our data suggest that investigations in a sample that most expresses the phenotype of interest, combined with the analysis of composite genetic profiles rather than individual variants, represent a promising strategy for detecting weak genetic influences on complex phenotypes, such as moral behavior.

Highlights

• Male insurance brokers as a sample to study utilitarian attitude.

• They are more aligned with utilitarianism than control males.

• Frequency of outcome-maximizing choices positively correlates with impulsivity in brokers.

• Genetic profiles affecting OXTR activity make outcome-maximizing harm more acceptable.

• Improved OXT transmission directs attention to choices more advantageous for society.

The research is here.

Sunday, March 29, 2020

Who gets the ventilator in the coronavirus pandemic?

Julian Savulescu & Dominic Wilkinson
abc.net.au
Updated on 17 March 20

Here is an excerpt:

4. Flatten the curve: the 'too little, too late' approach

There are two wishful-thinking approaches that try to make the problem go away.

The first is that we need to impose more restrictions on the liberty and movement of citizens in an effort to "flatten the curve", reduce the number of coronavirus cases and the pressure on hospitals, and allow everyone who needs a ventilator to get one.

That may have been possible early on (Singapore and Taiwan adopted severe liberty restriction and seemed to have controlled the epidemic).

However, that horse has bolted and it is now inevitable that there will be a shortage of life-saving medical supplies, as there is in Italy.

This approach is a case of too little, too late.

5. Paternalism: the 'greater harm' myth

The second wishful-thinking approach is to argue that it is harmful to ventilate older patients, or patients with a poorer prognosis.

One intensive care consultant wrote an open letter to older patients claiming that he and his colleagues would not discriminate against them:

"But we won't use the things that won't work. We won't use machines that can cause harm."

But all medical treatments can cause harm. It is simply incorrect that intensive care "would not work" in a patient with COVID-19 who is older than 60, or who has comorbidities.

Is a 1/1,000 chance of survival worth the discomfort of a month on a ventilator? That is a complex value judgement, and people may reasonably differ. I would take the chance.

The claim that intensive care doctors will only withhold treatment that is harmful is either paternalistic or it is confused.

If the doctor claims that they will withhold ventilation when it is harmful, this is a paternalistic value judgement. Where a ventilator has some chance of saving a person's life, it is largely up to that person to decide whether it is a harm or a benefit to take that chance.

Instead, this statement is obscuring the necessary resource allocation decision. It is sanitising rationing by pretending that intensive care doctors are only doing what is best for every patient. That is simply false.

The info is here.

Saturday, March 28, 2020

Hospitals consider universal do-not-resuscitate orders for coronavirus patients

Ariana Eunjung Cha
The Washington Post
Originally posted 25 March 20

Hospitals on the front lines of the pandemic are engaged in a heated private debate over a calculation few have encountered in their lifetimes — how to weigh the “save at all costs” approach to resuscitating a dying patient against the real danger of exposing doctors and nurses to the contagion of coronavirus.

The conversations are driven by the realization that the risk to staff amid dwindling stores of protective equipment — such as masks, gowns and gloves — may be too great to justify the conventional response when a patient “codes,” and their heart or breathing stops.

Northwestern Memorial Hospital in Chicago has been discussing a do-not-resuscitate policy for infected patients, regardless of the wishes of the patient or their family members — a wrenching decision to prioritize the lives of the many over the one.

Richard Wunderink, one of Northwestern’s intensive-care medical directors, said hospital administrators would have to ask Illinois Gov. J.B. Pritzker for help in clarifying state law and whether it permits the policy shift.

“It’s a major concern for everyone,” he said. “This is something about which we have had lots of communication with families, and I think they are very aware of the grave circumstances.”

Officials at George Washington University Hospital in the District say they have had similar conversations, but for now will continue to resuscitate covid-19 patients using modified procedures, such as putting plastic sheeting over the patient to create a barrier. The University of Washington Medical Center in Seattle, one of the country’s major hot spots for infections, is dealing with the problem by severely limiting the number of responders to a contagious patient in cardiac or respiratory arrest.

The info is here.

Tuesday, March 24, 2020

The effectiveness of moral messages on public health behavioral intentions during the COVID-19 pandemic

J. Everett, C. Colombatto, et al.
PsyArXiv Preprints
Originally posted 20 March 20

Abstract

With the COVID-19 pandemic threatening millions of lives, changing our behaviors to prevent the spread of the disease is a moral imperative. Here, we investigated the effectiveness of messages inspired by three major moral traditions on public health behavioral intentions. A sample of US participants representative for age, sex and race/ethnicity (N=1032) viewed messages from either a leader or citizen containing deontological, virtue-based, utilitarian, or non-moral justifications for adopting social distancing behaviors during the COVID-19 pandemic. We measured the messages’ effects on participants’ self-reported intentions to wash hands, avoid social gatherings, self-isolate, and share health messages, as well as their beliefs about others’ intentions, impressions of the messenger’s morality and trustworthiness, and beliefs about personal control and responsibility for preventing the spread of disease. Consistent with our pre-registered predictions, deontological messages had modest effects across several measures of behavioral intentions, second-order beliefs, and impressions of the messenger, while virtue-based messages had modest effects on personal responsibility for preventing the spread. These effects were observed for messages from leaders and citizens alike. Our findings are at odds with participants’ own beliefs about moral persuasion: a majority of participants predicted the utilitarian message would be most effective. We caution that these effects are modest in size, likely due to ceiling effects on our measures of behavioral intentions and strong heterogeneity across all dependent measures along several demographic dimensions including age, self-identified gender, self-identified race, political conservatism, and religiosity. Although the utilitarian message was the least effective among those tested, individual differences in one key dimension of utilitarianism—impartial concern for the greater good—were strongly and positively associated with public health intentions and beliefs. Overall, our preliminary results suggest that public health messaging focused on duties and responsibilities toward family, friends and fellow citizens will be most effective in slowing the spread of COVID-19 in the US. Ongoing work is investigating whether deontological persuasion generalizes across different populations, what aspects of deontological messages drive their persuasive effects, and how such messages can be most effectively delivered across global populations.

The research is here.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.
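
Editor's Note: the logic of the manipulation can be illustrated with a back-of-the-envelope expected-value calculation (the numbers below use the classic one-versus-five setup, not the paper's exact stimuli):

    # Behind the veil, assume you are equally likely to be any of the six
    # people affected by a one-versus-five sacrificial dilemma.
    n_saved, n_sacrificed = 5, 1
    n_affected = n_saved + n_sacrificed

    p_survive_if_sacrifice = n_saved / n_affected       # 5/6, about 0.83
    p_survive_if_no_action = n_sacrificed / n_affected  # 1/6, about 0.17

    print(f"P(survive | sacrifice one): {p_survive_if_sacrifice:.2f}")
    print(f"P(survive | do nothing):    {p_survive_if_no_action:.2f}")

Under this uniform prior, a purely self-interested chooser prefers the sacrificial option, which is exactly the utilitarian choice.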

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.

Thursday, August 8, 2019

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd and Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposites. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual-process theorists' claims about the link between reflection and dilemma judgments.

A copy of the paper is here.

Thursday, February 7, 2019

Do People Believe That They Are More Deontological Than Others?

Ming-Hui Li and Li-Lin Rao
Personality and Social Psychology Bulletin
First published January 20, 2019

Abstract

The question of how we decide that someone else has done something wrong is at the heart of moral psychology. Little work has been done to investigate whether people believe that others' moral judgment differs from their own in moral dilemmas. We conducted four experiments using various measures and diverse samples to demonstrate the self–other discrepancy in moral judgment. We found that (a) people were more deontological when they made moral judgments themselves than when they judged a stranger (Studies 1-4) and (b) a protected values (PVs) account outperformed an emotion account and a construal-level theory account in explaining this self–other discrepancy (Studies 3 and 4). We argued that the self–other discrepancy in moral judgment may serve as a protective mechanism co-evolving alongside the social exchange mechanism and may contribute to a better understanding of the obstacles preventing people from cooperating.

The research is here.

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
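
Editor's Note: for readers unfamiliar with process dissociation, here is a minimal sketch of the standard equations from Conway and Gawronski (2013); the example proportions are invented:

    # p_cong: proportion of "harm unacceptable" judgments on congruent
    # dilemmas (harm does NOT maximize outcomes); p_incong: the same
    # proportion on incongruent dilemmas (harm DOES maximize outcomes).
    def pd_parameters(p_cong: float, p_incong: float) -> tuple[float, float]:
        """Return (U, D), the utilitarian and deontological parameters."""
        # The processing tree assumes:
        #   p_cong   = U + (1 - U) * D
        #   p_incong =     (1 - U) * D
        u = p_cong - p_incong
        d = p_incong / (1 - u) if u < 1 else float("nan")
        return u, d

    u, d = pd_parameters(p_cong=0.90, p_incong=0.40)  # hypothetical data
    print(f"U = {u:.2f}, D = {d:.2f}")                # U = 0.50, D = 0.80

This is how the paper can report deontological and utilitarian inclinations independently rather than as diametric opposites.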

The research is here.

Wednesday, June 20, 2018

Can a machine be ethical? Why teaching AI ethics is a minefield.

Scotty Hendricks
Bigthink.com
Originally published May 31, 2018

Here is an excerpt:

Dr. Moor gives the example of Isaac Asimov’s three rules of robotics. For those who need a refresher, they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The rules are hierarchical, and the robots in Asimov’s books are all obligated to follow them.

Dr. Moor suggests that the problems with these rules are obvious. The first rule is so general that an artificial intelligence following it “might be obliged by the First Law to roam the world attempting to prevent harm from befalling human beings” and therefore be useless for its original function!

Such problems can be common in deontological systems, where following good rules can lead to funny results. Asimov himself wrote several stories about potential problems with the laws. Attempts to solve this issue abound, but the challenge of making enough rules to cover all possibilities remains. 
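
Editor's Note: the hierarchy Dr. Moor describes is essentially a lexicographic filter. A toy sketch in Python (everything here is hypothetical; reducing "harm" to a boolean flag is precisely the hard part):

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool        # First Law (including harm by inaction)
        obeys_order: bool        # Second Law
        self_destructive: bool   # Third Law

    def choose(candidates: list[Action]) -> Action | None:
        """Filter candidate actions law by law, in strict priority order."""
        safe = [a for a in candidates if not a.harms_human]
        obedient = [a for a in safe if a.obeys_order] or safe
        surviving = [a for a in obedient if not a.self_destructive] or obedient
        return surviving[0] if surviving else None

Note how the filter never trades a higher law against a lower one; the overbreadth problem arises because almost no real-world action makes harms_human cleanly False.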

On the other hand, a machine could be programmed to stick to utilitarian calculus when facing an ethical problem. This would be simple to do, as the computer would only have to be given a variable and told to make choices that maximize it. While human happiness is a common choice, wealth, well-being, or security are also possibilities.
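
Editor's Note: the utilitarian approach described above really is short to write down; the hard part is choosing and measuring the variable. A sketch with invented options and utility numbers:

    # Toy utilitarian chooser: given one target variable, pick whichever
    # option maximizes it. Option names and values are invented.
    options = {
        "option_a": {"happiness": 7.0},
        "option_b": {"happiness": 9.5},
        "option_c": {"happiness": 3.2},
    }

    def utilitarian_choice(opts: dict, variable: str) -> str:
        """Return the option with the highest value on the chosen variable."""
        return max(opts, key=lambda name: opts[name][variable])

    print(utilitarian_choice(options, "happiness"))  # -> option_b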

The article is here.

Monday, January 29, 2018

Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others

Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017

The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested that such decisions actually reflect self-focused concern about causing harm, rather than concern about others' suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.

The article is here.

Monday, December 18, 2017

Is Pulling the Lever Sexy? Deontology as a Downstream Cue to Long-Term Mate Quality

Mitch Brown and Donald Sacco
Journal of Social and Personal Relationships
November 2017

Abstract

Deontological and utilitarian moral decisions have unique communicative functions within the context of group living. Deontology more strongly communicates prosocial intentions, fostering greater perceptions of trust and desirability in general affiliative contexts. This general trustworthiness may extend to perceptions of fidelity in romantic relationships, leading to perceptions of deontological persons as better long-term mates, relative to utilitarians. In two studies, participants indicated desirability of both deontologists and utilitarians in long- and short-term mating contexts. In Study 1 (n = 102), women perceived a deontological man as more interested in long-term bonds, more desirable for long-term mating, and less prone to infidelity, relative to a utilitarian man. However, utilitarian men were undesirable as short-term mates. Study 2 (n = 112) had both men and women rate opposite-sex targets’ desirability after learning of their moral decisions in a trolley problem. We replicated women’s preference for deontological men as long-term mates. Interestingly, both men and women reporting personal deontological motives were particularly sensitive to deontology communicating long-term desirability and fidelity, which could be a product of the general affiliative signal from deontology. Thus, one’s moral basis for decision-making, particularly deontologically-motivated moral decisions, may communicate traits valuable in long-term mating contexts.

The research is here.

Thursday, May 25, 2017

In a moral dilemma, choose the one you love: Impartial actors are seen as less moral than partial ones

Jamie S. Hughes
British Journal of Social Psychology

Abstract

Although impartiality and concern for the greater good are lauded by utilitarian philosophies, it was predicted that when values conflict, those who acted impartially rather than partially would be viewed as less moral. Across four studies, using life-or-death scenarios and more mundane ones, support for the idea that relationship obligations are important in moral attribution was found. In Studies 1–3, participants rated an impartial actor as less morally good and his or her action as less moral compared to a partial actor. Experimental and correlational evidence showed the effect was driven by inferences about an actor's capacity for empathy and compassion. In Study 4, the relationship obligation hypothesis was refined. The data suggested that violations of relationship obligations are perceived as moral as long as strong alternative justifications sanction them. Discussion centres on the importance of relationships in understanding moral attributions.

The article is here.

Tuesday, May 16, 2017

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Wednesday, December 28, 2016

Inference of trustworthiness from intuitive moral judgments

Everett, J.A., Pizarro, D.A., & Crockett, M.J.
Journal of Experimental Psychology: General, Vol 145(6), Jun 2016, 772-787.

Moral judgments play a critical role in motivating and enforcing human cooperation, and research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here, we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games. These findings provide empirical support for a partner choice account of moral intuitions whereby typically deontological judgments confer an adaptive function by increasing a person's likelihood of being chosen as a cooperation partner. Therefore, deontological moral intuitions may represent an evolutionarily prescribed prior that was selected for through partner choice mechanisms.
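
Editor's Note: the "economic games" mentioned here are typically variants of the investment (trust) game; a sketch of its standard payoff structure, with hypothetical parameters:

    # Sender is endowed with points and sends some amount, which is
    # multiplied in transit; the receiver decides how much to return.
    ENDOWMENT = 10   # hypothetical endowment
    MULTIPLIER = 3   # standard tripling of the amount sent

    def payoffs(sent: int, returned: int) -> tuple[int, int]:
        """Return (sender, receiver) payoffs; the amount sent indexes trust."""
        assert 0 <= sent <= ENDOWMENT and 0 <= returned <= sent * MULTIPLIER
        sender = ENDOWMENT - sent + returned
        receiver = sent * MULTIPLIER - returned
        return sender, receiver

    # Full trust met with an even split of the surplus:
    print(payoffs(sent=10, returned=15))  # -> (15, 15)

Entrusting more pays off only if the partner reciprocates, which is why the amount sent can serve as a behavioral measure of trust in a deontological versus utilitarian target.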

The article is here.

Saturday, December 24, 2016

The Adaptive Utility of Deontology: Deontological Moral Decision-Making Fosters Perceptions of Trust and Likeability

Sacco, D.F., Brown, M., Lustgraaf, C.J.N. et al.
Evolutionary Psychological Science (2016).
doi:10.1007/s40806-016-0080-6

Abstract

Although various motives underlie moral decision-making, recent research suggests that deontological moral decision-making may have evolved, in part, to communicate trustworthiness to conspecifics, thereby facilitating cooperative relations. Specifically, social actors whose decisions are guided by deontological (relative to utilitarian) moral reasoning are judged as more trustworthy, are preferred more as social partners, and are trusted more in economic games. The current study extends this research by using an alternative manipulation of moral decision-making as well as the inclusion of target facial identities to explore the potential role of participant and target sex in reactions to moral decisions. Participants viewed a series of male and female targets, half of whom were manipulated to have responded to five moral dilemmas consistent with an underlying deontological motive and half with a utilitarian motive; participants indicated their liking and trust toward each target. Consistent with previous research, participants liked and trusted targets whose decisions were consistent with deontological motives more than targets whose decisions were more consistent with utilitarian motives; this effect was stronger for perceptions of trust. Additionally, women reported greater dislike than men did for targets whose decisions were consistent with utilitarianism. Results suggest that deontological moral reasoning evolved, in part, to facilitate positive relations among conspecifics and aid group living, and that women may be particularly sensitive to the implications of the various motives underlying moral decision-making.

The research is here.

Editor's Note: This research may apply to psychotherapy, leadership style, and politics.

Friday, September 30, 2016

Gender Differences in Responses to Moral Dilemmas: A Process Dissociation Analysis

Rebecca Friesdorf, Paul Conway, and Bertram Gawronski
Pers Soc Psychol Bull, first published on April 3, 2015
doi:10.1177/0146167215575731

Abstract

The principle of deontology states that the morality of an action depends on its consistency with moral norms; the principle of utilitarianism implies that the morality of an action depends on its consequences. Previous research suggests that deontological judgments are shaped by affective processes, whereas utilitarian judgments are guided by cognitive processes. The current research used process dissociation (PD) to independently assess deontological and utilitarian inclinations in women and men. A meta-analytic re-analysis of 40 studies with 6,100 participants indicated that men showed a stronger preference for utilitarian over deontological judgments than women when the two principles implied conflicting decisions (d = 0.52). PD further revealed that women exhibited stronger deontological inclinations than men (d = 0.57), while men exhibited only slightly stronger utilitarian inclinations than women (d = 0.10). The findings suggest that gender differences in moral dilemma judgments are due to differences in affective responses to harm rather than cognitive evaluations of outcomes.

The article is here.

Thursday, July 14, 2016

At the Heart of Morality Lies Neuro-Visceral Integration: Lower Cardiac Vagal Tone Predicts Utilitarian Moral Judgment

Gewnhi Park, Andreas Kappes, Yeojin Rho, and Jay J. Van Bavel
Soc Cogn Affect Neurosci first published online June 17, 2016
doi:10.1093/scan/nsw077

Abstract

To not harm others is widely considered the most basic element of human morality. The aversion to harming others can be either rooted in the outcomes of an action (utilitarianism) or in reactions to the action itself (deontology). We speculated that human moral judgments rely on the integration of neural computations of harm and visceral reactions. The present research examined whether utilitarian or deontological aspects of moral judgment are associated with cardiac vagal tone, a physiological proxy for neuro-visceral integration. We investigated the relationship between cardiac vagal tone and moral judgment by using a mix of moral dilemmas, mathematical modeling, and psychophysiological measures. An index of bipolar deontology-utilitarianism was correlated with resting heart rate variability—an index of cardiac vagal tone—such that more utilitarian judgments were associated with lower heart rate variability. Follow-up analyses using process dissociation, which independently quantifies utilitarian and deontological moral inclinations, provided further evidence that utilitarian (but not deontological) judgments were associated with lower heart rate variability. Our results suggest that the functional integration of neural and visceral systems during moral judgments can restrict outcome-based, utilitarian moral preferences. Implications for theories of moral judgment are discussed.
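
Editor's Note: the paper indexes cardiac vagal tone with resting heart rate variability. One standard time-domain HRV index is RMSSD, sketched below (the paper's exact metric and preprocessing are not specified here; the RR intervals are invented):

    import math

    def rmssd(rr_intervals_ms: list[float]) -> float:
        """Root mean square of successive differences between RR intervals."""
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    print(rmssd([812.0, 845.0, 790.0, 830.0, 818.0]))  # about 38.3 ms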

A copy of the paper is here.