Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Dilemmas.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.

Abstract

Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.


My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Friday, October 6, 2023

Taking the moral high ground: Deontological and absolutist moral dilemma judgments convey self-righteousness

Weiss, A., Burgmer, P., Rom, S. C., & Conway, P. (2024). 
Journal of Experimental Social Psychology, 110, 104505.

Abstract

Individuals who reject sacrificial harm to maximize overall outcomes, consistent with deontological (vs. utilitarian) ethics, appear warmer, more moral, and more trustworthy. Yet, deontological judgments may not only convey emotional reactions, but also strict adherence to moral rules. We therefore hypothesized that people view deontologists as more morally absolutist and hence self-righteous—as perceiving themselves as morally superior. In addition, both deontologists and utilitarians who base their decisions on rules (vs. emotions) should appear more self-righteous. Four studies (N = 1254) tested these hypotheses. Participants perceived targets as more self-righteous when they rejected (vs. accepted) sacrificial harm in classic moral dilemmas where harm maximizes outcomes (i.e., deontological vs. utilitarian judgments), but not parallel cases where harm fails to maximize outcomes (Study 1). Preregistered Study 2 replicated the focal effect, additionally indicating mediation via perceptions of moral absolutism. Study 3 found that targets who reported basing their deontological judgments on rules, compared to emotional reactions or when processing information was absent, appeared particularly self-righteous. Preregistered Study 4 included both deontological and utilitarian targets and manipulated whether their judgments were based on rules versus emotion (specifically sadness). Grounding either moral position in rules conveyed self-righteousness, while communicating emotions was a remedy. Furthermore, participants perceived targets as more self-righteous the more targets deviated from their own moral beliefs. Studies 3 and 4 additionally examined participants' self-disclosure intentions. In sum, deontological dilemma judgments may convey an absolutist, rule-focused view of morality, but any judgment stemming from rules (in contrast to sadness) promotes self-righteousness perceptions.


My quick take:

The authors found that people were more likely to perceive deontologists as self-righteous if they based their judgments on rules rather than emotions. This suggests that it is not just the deontological judgment itself that leads to perceptions of self-righteousness, but also the way in which the judgment is made.

Overall, the findings of this study suggest that people who make deontological judgments in moral dilemmas are more likely to be perceived as self-righteous. This is because deontological judgments are often seen as reflecting a rigid and absolutist view of morality, which can come across as arrogant or condescending.

It is important to note that the findings of this study do not mean that all deontologists are self-righteous. However, the study does suggest that people should be aware of how their moral judgments may be perceived by others. If you want to avoid being perceived as self-righteous, it may help to communicate the emotions behind your judgment, such as sadness about the harm involved, rather than presenting it purely as a matter of following rules.

Wednesday, February 2, 2022

Psychopathy and Moral-Dilemma Judgment: An Analysis Using the Four-Factor Model of Psychopathy and the CNI Model of Moral Decision-Making

Luke, D. M., Neumann, C. S., & Gawronski, B.
(2021). Clinical Psychological Science. 
https://doi.org/10.1177/21677026211043862

Abstract

A major question in clinical and moral psychology concerns the nature of the commonly presumed association between psychopathy and moral judgment. In the current preregistered study (N = 443), we aimed to address this question by examining the relation between psychopathy and responses to moral dilemmas pitting consequences for the greater good against adherence to moral norms. To provide more nuanced insights, we measured four distinct facets of psychopathy and used the CNI model to quantify sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction over action (I) in responses to moral dilemmas. Psychopathy was associated with a weaker sensitivity to moral norms, which showed unique links to the interpersonal and affective facets of psychopathy. Psychopathy did not show reliable associations with either sensitivity to consequences or general preference for inaction over action. Implications of these findings for clinical and moral psychology are discussed.
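
A quick gloss on the CNI model for readers unfamiliar with it: as described in the modeling literature (e.g., Gawronski et al., 2017), it is a multinomial processing tree in which a judgment is driven by consequences with probability C, by moral norms with probability (1 − C) × N, and otherwise by a general preference for inaction I. Below is a minimal sketch of the predicted response probabilities under that reading; the function and its example values are my own illustration, not code from the paper.

    # Sketch of the CNI processing tree (my reading of the model;
    # the C, N, and I parameters are those named in the abstract above).
    def p_action(C, N, I, action_maximizes_outcomes, norm_prescribes_action):
        """Predicted probability of choosing action in one dilemma variant."""
        p = 0.0
        if action_maximizes_outcomes:
            p += C                        # consequences drive the response
        if norm_prescribes_action:
            p += (1 - C) * N              # norms drive the response
        p += (1 - C) * (1 - N) * (1 - I)  # otherwise, act with prob. 1 - I
        return p

    # Classic sacrificial dilemma: the norm proscribes action, but acting
    # maximizes outcomes. Weaker norm sensitivity (lower N), which the paper
    # links to elevated psychopathic traits, raises the probability of action:
    print(p_action(C=0.3, N=0.6, I=0.5,
                   action_maximizes_outcomes=True, norm_prescribes_action=False))  # 0.44
    print(p_action(C=0.3, N=0.2, I=0.5,
                   action_maximizes_outcomes=True, norm_prescribes_action=False))  # 0.58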

From the Discussion

In support of our hypotheses, general psychopathy scores and a superordinate latent variable (representing the broad syndrome of psychopathy) showed significant negative relations with sensitivity to moral norms, which suggests that people with elevated psychopathic traits were less sensitive to moral norms in their responses to moral dilemmas in comparison with other people. Further analyses at the facet level suggested that sensitivity to moral norms was uniquely associated with the interpersonal-affective facets of psychopathy. Both of these findings persisted when controlling for gender. As predicted, the antisocial facet showed a negative zero-order correlation with sensitivity to moral norms, but this association fell to nonsignificance when controlling for other facets of psychopathy and gender. At the manifest variable level, neither general psychopathy scores nor the four facets showed reliable relations with either sensitivity to consequences or general preference for inaction over action.

(cut)

More broadly, the current findings have important implications for both clinical and moral psychology. For clinical psychology, our findings speak to ongoing questions about whether people with elevated levels of psychopathy exhibit disturbances in moral judgment. In a recent review of the literature on psychopathy and moral judgment, Larsen et al. (2020) claimed there was “no consistent, well-replicated evidence of observable deficits in . . . moral judgment” (p. 305). However, a notable limitation of this review is that its analysis of moral-dilemma research focused exclusively on studies that used the traditional approach. Consistent with past research using the CNI model (e.g., Gawronski et al., 2017; Körner et al., 2020; Luke & Gawronski, 2021a) and in contrast to Larsen et al.’s conclusion, the current findings indicate substantial deviations in moral-dilemma judgments among people with elevated psychopathic traits, particularly conformity to moral norms.

Thursday, January 27, 2022

Many heads are more utilitarian than one

Keshmirian, A., Deroy, O., & Bahrami, B.
Cognition
Volume 220, March 2022, 104965

Abstract

Moral judgments have a very prominent social nature, and in everyday life, they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions. To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character's action or inaction violated a moral principle to benefit the greatest number of people. Participants decided if these utilitarian decisions were morally acceptable or not. In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members compared to both first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. In Experiment 2, we tested this hypothesis more directly: measuring participants' state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collectives were more utilitarian than those of individuals and that state anxiety level was reduced during and after social interaction. The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.

Highlights

• Collective consensual judgments made via group interactions were more utilitarian than individual judgments.

• Group discussion did not change the individual judgments, indicating a normative conformity effect.

• Individuals consented to a group judgment that they did not necessarily buy into personally.

• Collectives were less stressed than individuals after responding to moral dilemmas.

• Interactions reduced aversive emotions (e.g., stress) associated with violation of moral norms.

From the Discussion

Our analysis revealed that groups, in comparison to individuals, are more utilitarian in their moral judgments. Thus, our findings are inconsistent with Virtue-Signaling (VS), which proposed the opposite effect. Crucially, the collective utilitarian boost was short-lived: it was only seen at the collective level and not when participants rated the same questions individually again. Previous research shows that moral change at the individual level, as the result of social deliberation, is rather long-lived and not transient (e.g., see Ueshima et al., 2021). Thus, this collective utilitarian boost could not have resulted from deliberation and reasoning or due to conscious application of utilitarian principles with authentic reasons to maximize the total good. If this was the case, the effect would have persisted in the second individual judgment as well. That was not what we observed. Consequently, our findings are inconsistent with the Social Deliberation (SD) hypotheses.

Wednesday, April 21, 2021

Target Dehumanization May Influence Decision Difficulty and Response Patterns for Moral Dilemmas

Bai, H., et al. (2021, February 25). 
https://doi.org/10.31234/osf.io/fknrd

Abstract

Past research on moral dilemmas has thoroughly investigated the roles of personality and situational variables, but the role of targets in moral dilemmas has been relatively neglected. This paper presents findings from four experiments that manipulate the perceived dehumanization of targets in moral dilemmas. Studies 1, 2 and 4 suggest that dehumanized targets may render the decision easier, and with less emotion. Findings from Studies 1 and 3, though not Studies 2 and 4, show that dehumanization of targets in dilemmas may lead participants to make less deontological judgments. Study 3, but not Study 4, suggests that it is potentially because dehumanization has an effect on reducing deontological, but not utilitarian judgments. Though the patterns are somewhat inconsistent across studies, overall, results suggest that targets’ dehumanization can play a role in how people make their decisions in moral dilemmas.

General Discussion

Together, the four studies described in this paper contribute to the literature by providing evidence that the dehumanization of targets may play an important role in how people make decisions in moral dilemmas. In particular, we found some evidence in Studies 1, 2 and 4 suggesting that dehumanized targets may affect how people experience their decisions, rendering the decisions easier and less emotional. We also found some evidence from Studies 1 and 3, though not Studies 2 and 4, that dehumanization of targets in dilemmas may affect what decision people eventually make, suggesting that dehumanized targets may elicit less deontological responses to some extent. Finally, Study 3, but not Study 4, suggests that the decreased level of deontological response pattern may be potentially explained by dehumanization’s effect on reducing deontological, but not utilitarian tendencies. To this point, we conducted a mini-meta-analysis across the combined data for Studies 3 and 4 and compared the differences in the D parameter between the dehumanized condition and humanized conditions. We found an effect size of d = .135, which suggests that if dehumanization has an effect, it may not be a very big effect.
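
For context on that number: Cohen's d expresses a difference between two group means in pooled-standard-deviation units, so d = .135 means the dehumanized and humanized conditions differed by roughly a seventh of a standard deviation on the D parameter. A minimal sketch of the arithmetic, with illustrative values rather than the paper's data:

    import math

    def cohens_d(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference using a pooled standard deviation."""
        pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
        return (m1 - m2) / math.sqrt(pooled_var)

    # Illustrative numbers only: a 0.03-point gap in mean D with SDs of 0.22
    # yields a small effect of about the size the authors report.
    print(round(cohens_d(0.52, 0.22, 150, 0.49, 0.22, 150), 3))  # ~0.136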

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572

Abstract

Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To which extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support to the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support to its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Sunday, January 31, 2021

Free Will & The Brain

Kevin Loughran
Philosophy Now (2020)

The idea of free will touches human decision-making and action, and so the workings of the brain. So the science of the brain can inform the argument about free will. Technology, especially in the form of brain scanning, has provided new insights into what is happening in our brains prior to us taking action. And some brain studies – especially the ones led by Benjamin Libet at the University of California in San Francisco in the 1980s – have indicated the possibility of unconscious brain activity setting up our body to act on our decisions before we are conscious of having decided to act. For some people, such studies have confirmed the judgement that we lack free will. But do these studies provide sufficient data to justify such a generalisation about free will?

First, these studies do touch on the issue of how we make choices and reach decisions; but they do so in respect of some simple, and directed, tasks. For example, in one of Libet’s studies, he asked volunteers to move a hand in one direction or another and to note the time when they consciously decided to do so (50 Ideas You Really Need to Know about the Human Brain, Moheb Costandi, p.60, 2013). The data these and similar brain studies provide might justly be taken to prove that when research volunteers are asked by a researcher to do one simple thing or another, and they do it, then unconscious brain processes may have moved them towards a choice a fraction of a second before they were conscious of making that choice. The question is, can they be taken to prove more than that?

To explore this question let’s first look at some of the range of choices we make in our lives day by day and week by week, then ask what they might tell us about how we come to make decisions and how this might relate to experimental results such as Libet’s. At the very least, examining the range of our choices might provide a better, wider range of research projects in the future.

Tuesday, December 15, 2020

(How) Do You Regret Killing One to Save Five? Affective and Cognitive Regret Differ After Utilitarian and Deontological Decisions

Goldstein-Greenwood J, et al.
Personality and Social Psychology Bulletin. 2020;46(9):1303-1317.
doi:10.1177/0146167219897662

Abstract

Sacrificial moral dilemmas, in which opting to kill one person will save multiple others, are definitionally suboptimal: Someone dies either way. Decision-makers, then, may experience regret about these decisions. Past research distinguishes affective regret, negative feelings about a decision, from cognitive regret, thoughts about how a decision might have gone differently. Classic dual-process models of moral judgment suggest that affective processing drives characteristically deontological decisions to reject outcome-maximizing harm, whereas cognitive deliberation drives characteristically utilitarian decisions to endorse outcome-maximizing harm. Consistent with this model, we found that people who made or imagined making sacrificial utilitarian judgments reliably expressed relatively more affective regret and sometimes expressed relatively less cognitive regret than those who made or imagined making deontological dilemma judgments. In other words, people who endorsed causing harm to save lives generally felt more distressed about their decision, yet less inclined to change it, than people who rejected outcome-maximizing harm.

General Discussion

Across four studies, we found that different sacrificial moral dilemma decisions elicit different degrees of affective and cognitive regret. We found robust evidence that utilitarian decision-makers who accept outcome-maximizing harm experience far more affective regret than their deontological decision-making counterparts who reject outcome-maximizing harm, and we found somewhat weaker evidence that utilitarian decision-makers experience less cognitive regret than deontological decision-makers. The significant interaction between dilemma decision and regret type predicted in H1 emerged both when participants freely endorsed dilemma decisions (Studies 1, 3, and 4) and when they were randomly assigned to imagine making a decision (Study 2). Hence, the present findings cannot simply be attributed to chronic differences in the types of regret that people who prioritize each decision experience. Moreover, we found tentative evidence for H2: Focusing on the counterfactual world in which they made the alternative decision attenuated utilitarian decision-makers’ heightened affective regret compared with factual reflection, and reduced differences in affective regret between utilitarian and deontological decision-makers (Study 4). Furthermore, our findings do not appear attributable to impression management concerns, as there were no differences between public and private reports of regret.
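
To make the predicted interaction concrete: the claim is about a difference of differences, i.e., the utilitarian-deontological gap runs one way for affective regret and shrinks or reverses for cognitive regret. A toy illustration with made-up cell means on a 1-7 regret scale (not the paper's data):

    # 2 x 2 design: dilemma decision (utilitarian vs. deontological)
    # crossed with regret type (affective vs. cognitive).
    affective = {"utilitarian": 4.8, "deontological": 3.6}
    cognitive = {"utilitarian": 3.2, "deontological": 3.9}

    gap_affective = affective["utilitarian"] - affective["deontological"]  # +1.2
    gap_cognitive = cognitive["utilitarian"] - cognitive["deontological"]  # -0.7
    interaction = gap_affective - gap_cognitive                            # 1.9
    print(interaction)  # a nonzero contrast is the H1-style interaction pattern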

Wednesday, August 5, 2020

A genetic profile of oxytocin receptor improves moral acceptability of outcome-maximizing harm in male insurance brokers

S. Palumbo, V. Mariotti, et al.
Behavioural Brain Research
Volume 392, 17 August 2020, 112681

Abstract

In recent years, conflicting findings have been reported in the scientific literature about the influence of dopaminergic, serotonergic and oxytocinergic gene variants on moral behavior. Here, we utilized a moral judgment paradigm to test the potential effects on moral choices of three polymorphisms of the Oxytocin receptor (OXTR): rs53576, rs2268498 and rs1042770. We analyzed the influence of each single polymorphism and of genetic profiles obtained by different combinations of their genotypes in a sample of male insurance brokers (n = 129), as compared to control males (n = 109). Insurance brokers resulted significantly more oriented to maximize outcomes than control males, thus they expressed more than controls the utilitarian attitude phenotype. When analyzed individually, none of the selected variants influenced the responses to moral dilemmas. In contrast, a composite genetic profile that potentially increases OXTR activity was associated with higher moral acceptability in brokers. We hypothesize that this genetic profile promotes outcome-maximizing behavior in brokers by focusing their attention on what represents a greater good, that is, saving the highest number of people, even though at the cost of sacrificing one individual. Our data suggest that investigations in a sample that most expresses the phenotype of interest, combined with the analysis of composite genetic profiles rather than individual variants, represent a promising strategy to find out weak genetic influences on complex phenotypes, such as moral behavior.

Highlights

• Male insurance brokers as a sample to study utilitarian attitude.

• They are more aligned with utilitarianism than control males.

• Frequency of outcome-maximizing choices positively correlates with impulsivity in brokers.

• Genetic profiles affecting OXTR activity make outcome-maximizing harm more acceptable.

• Improved OXT transmission directs attention to choices more advantageous for society.

The research is here.

Sunday, March 22, 2020

Our moral instincts don’t match this crisis

Yascha Mounk
The Atlantic
Originally posted March 19, 2020

Here is an excerpt:

There are at least three straightforward explanations.

The first has to do with simple ignorance. For those of us who have spent the past weeks obsessing about every last headline regarding the evolution of the crisis, it can be easy to forget that many of our fellow citizens simply don’t follow the news with the same regularity—or that they tune into radio shows and television networks that have, shamefully, been downplaying the extent of the public-health emergency. People crowding into restaurants or hanging out in big groups, then, may simply fail to realize the severity of the pandemic. Their sin is honest ignorance.

The second explanation has to do with selfishness. Going out for trivial reasons imposes a real risk on those who will likely die if they contract the disease. Though the coronavirus does kill some young people, preliminary data from China and Italy suggest that they are, on average, less strongly affected by it. For those who are far more likely to survive, it is—from a purely selfish perspective—less obviously irrational to chance such social encounters.

The third explanation has to do with the human tendency to make sacrifices for the suffering that is right in front of our eyes, but not the suffering that is distant or difficult to see.

The philosopher Peter Singer presented a simple thought experiment in a famous paper. If you went for a walk in a park, and saw a little girl drowning in a pond, you would likely feel that you should help her, even if you might ruin your fancy shirt. Most people recognize a moral obligation to help another at relatively little cost to themselves.

Then Singer imagined a different scenario. What if a girl was in mortal danger halfway across the world, and you could save her by donating the same amount of money it would take to buy that fancy shirt? The moral obligation to help, he argued, would be the same: The life of the distant girl is just as important, and the cost to you just as small. And yet, most people would not feel the same obligation to intervene.

The same might apply in the time of COVID-19. Those refusing to stay home may not know the victims of their actions, even if they are geographically proximate, and might never find out about the terrible consequences of what they did. Distance makes them unjustifiably callous.

The info is here.

Sunday, December 15, 2019

The automation of ethics: the case of self-driving cars

Raffaele Rodogno, Marco Nørskov
Forthcoming in C. Hasse and
D. M. Søndergaard (Eds.)
Designing Robots.

Abstract
This paper explores the disruptive potential of artificial moral decision-makers on our moral practices by investigating the concrete case of self-driving cars. Our argument unfolds in two movements, the first purely philosophical, the second bringing self-driving cars into the picture. More in particular, in the first movement, we bring to the fore three features of moral life, to wit, (i) the limits of the cognitive and motivational capacities of moral agents; (ii) the limited extent to which moral norms regulate human interaction; and (iii) the inescapable presence of tragic choices and moral dilemmas as part of moral life. Our ultimate aim, however, is not to provide a mere description of moral life in terms of these features but to show how a number of central moral practices can be envisaged as a response to, or an expression of these features. With this understanding of moral practices in hand, we broach the second movement. Here we expand our study to the transformative potential that self-driving cars would have on our moral practices. We locate two cases of potential disruption. First, the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road. Second, the introduction of this technology would contribute to the dissipation of accountability and lead to the partial truncation of our moral practices. People will be harmed and suffer in road related accidents, but it will be harder to find anyone to take responsibility for what happened—i.e. do what we normally do when we need to repair human relations after harm, loss, or suffering has occurred.

The book chapter is here.

Wednesday, November 13, 2019

Dynamic Moral Judgments and Emotions

Magda Osman
Published Online June 2015 in SciRes.
http://www.scirp.org/journal/psych

Abstract

We may experience strong moral outrage when we read a news headline that describes a prohibited action, but when we gain additional information by reading the main news story, do our emotional experiences change at all, and if they do in what way do they change? In a single online study with 80 participants the aim was to examine the extent to which emotional experiences (disgust, anger) and moral judgments track changes in information about a moral scenario. The evidence from the present study suggests that we systematically adjust our moral judgments and our emotional experiences as a result of exposure to further information about the morally dubious action referred to in a moral scenario. More specifically, the way in which we adjust our moral judgments and emotions appears to be based on information signalling whether a morally dubious act is permitted or prohibited.

From the Discussion

The present study showed that moral judgments changed in response to different details concerning the moral scenarios, and while participants gave the most severe judgments for the initial limited information regarding the scenario (i.e. the headline), they adjusted the severity of their judgments downwards as more information was provided (i.e. main story, conclusion). In other words, when context was provided for why a morally dubious action was carried out, people used this to inform their later judgments and consciously integrated this new information into their judgments of the action. Crucially, this reflects the fact that judgments and emotions are not fixed, and that they are likely to operate on rational processes (Huebner, 2011, 2014; Teper et al., 2015). More to the point, this evidence suggests that there may well be an integrated representation of the moral scenario that is based on informational content as well as personal emotional experiences that signal the valence on which the information should be judged. The evidence from the present study suggests that both moral judgments and emotional experiences change systematically in response to changes in information that critically concern the way in which a morally dubious action should be evaluated.

A pdf can be downloaded here.

Tuesday, November 12, 2019

Errors in Moral Forecasting: Perceptions of Affect Shape the Gap Between Moral Behaviors and Moral Forecasts

Teper, R., Zhong, C.‐B., and Inzlicht, M. (2015)
Social and Personality Psychology Compass, 9, 1– 14,
doi: 10.1111/spc3.12154

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well‐documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real‐life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real‐life moral behavior.

Conclusion

Long gone are the days when emotion was written off as a distractor or a roadblock to effective moral decision making. There now exists a great deal of evidence bolstering the idea that emotions are actually necessary for initiating adaptive behavior (Bechara, 2004; Damasio, 1994; Panksepp & Biven, 2012). Furthermore, evidence from the field of moral psychology points to the fact that individuals rely quite heavily on emotional and intuitive processes when engaging in moral judgments (e.g. Haidt, 2001). However, up until recently, the playing field of moral psychology has been heavily dominated by research revolving around moral judgments alone, especially when investigating the role that emotions play in motivating moral decision-making.

A pdf can be downloaded here.

Monday, November 11, 2019

Incidental emotions in moral dilemmas: the influence of emotion regulation.

Raluca D. Szekely & Andrei C. Miu
Cogn Emot. 2015;29(1):64-75.
doi: 10.1080/02699931.2014.895300.

Abstract

Recent theories have argued that emotions play a central role in moral decision-making and suggested that emotion regulation may be crucial in reducing emotion-linked biases. The present studies focused on the influence of emotional experience and individual differences in emotion regulation on moral choice in dilemmas that pit harming another person against social welfare. During these "harm to save" moral dilemmas, participants experienced mostly fear and sadness but also other emotions such as compassion, guilt, anger, disgust, regret and contempt (Study 1). Fear and disgust were more frequently reported when participants made deontological choices, whereas regret was more frequently reported when participants made utilitarian choices. In addition, habitual reappraisal negatively predicted deontological choices, and this effect was significantly carried through emotional arousal (Study 2). Individual differences in the habitual use of other emotion regulation strategies (i.e., acceptance, rumination and catastrophising) did not influence moral choice. The results of the present studies indicate that negative emotions are commonly experienced during "harm to save" moral dilemmas, and they are associated with a deontological bias. By efficiently reducing emotional arousal, reappraisal can attenuate the emotion-linked deontological bias in moral choice.

General Discussion

Using H2S ("harm to save") moral dilemmas, the present studies yielded three main findings: (1) a wide spectrum of emotions are experienced during these moral dilemmas, with self-focused emotions such as fear and sadness being the most common (Study 1); (2) there is a positive relation between emotional arousal during moral dilemmas and deontological choices (Studies 1 and 2); and (3) individual differences in reappraisal, but not other emotion regulation strategies (i.e., acceptance, rumination or catastrophising) are negatively associated with deontological choices and this effect is carried through emotional arousal (Study 2).
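
The phrase "carried through emotional arousal" refers to statistical mediation: reappraisal predicts arousal (path a), arousal predicts deontological choice controlling for reappraisal (path b), and the indirect effect is the product a × b. A minimal simulation of that logic; the variable names and coefficients here are mine and purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    reappraisal = rng.normal(size=n)
    # Hypothesized chain: habitual reappraisal lowers arousal during the
    # dilemma, and arousal in turn raises deontological responding.
    arousal = -0.5 * reappraisal + rng.normal(size=n)
    deontological = 0.6 * arousal + rng.normal(size=n)

    def slope(y, *xs):
        """OLS coefficient of the first predictor in xs."""
        X = np.column_stack([np.ones(len(y)), *xs])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1]

    a = slope(arousal, reappraisal)                 # path a
    b = slope(deontological, arousal, reappraisal)  # path b
    print("indirect effect a*b:", round(a * b, 2))  # close to -0.5 * 0.6 = -0.3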

A pdf can be downloaded here.


Thursday, August 8, 2019

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd & Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual process theorists’ claims about the link between reflection and dilemma judgments.

A copy of the paper is here.

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
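
For readers unfamiliar with process dissociation (PD): as I understand the Conway and Gawronski (2013) approach this abstract builds on, "incongruent" dilemmas pit the two inclinations against each other (harm maximizes outcomes) while "congruent" dilemmas align them (harm does not), and the U and D parameters are recovered from the two harm-rejection rates. A hedged sketch of that arithmetic; this is my reconstruction, not code from the paper:

    def pd_parameters(p_reject_congruent, p_reject_incongruent):
        """Recover utilitarian (U) and deontological (D) inclinations.

        Assumed processing tree (my reading of Conway & Gawronski, 2013):
          p(reject | congruent)   = U + (1 - U) * D
          p(reject | incongruent) = (1 - U) * D
        """
        U = p_reject_congruent - p_reject_incongruent
        D = p_reject_incongruent / (1 - U) if U < 1 else float("nan")
        return U, D

    # Example: rejecting harm in 90% of congruent but only 40% of incongruent
    # dilemmas gives U = 0.5 and D = 0.8 -- the two tendencies are measured
    # independently rather than as diametric opposites.
    print(pd_parameters(0.9, 0.4))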

The research is here.

Tuesday, July 17, 2018

Social observation increases deontological judgments in moral dilemmas

Minwoo Lee, Sunhae Sul, Hackjin Kim
Evolution and Human Behavior
Available online 18 June 2018

Abstract

A concern for positive reputation is one of the core motivations underlying various social behaviors in humans. The present study investigated how experimentally induced reputation concern modulates judgments in moral dilemmas. In a mixed-design experiment, participants were randomly assigned to the observed vs. the control group and responded to a series of trolley-type moral dilemmas either in the presence or absence of observers, respectively. While no significant baseline difference in personality traits and moral decision style were found across two groups of participants, our analyses revealed that social observation promoted deontological judgments especially for moral dilemmas involving direct physical harm (i.e., the personal moral dilemmas), yet with an overall decrease in decision confidence and significant prolongation of reaction time. Moreover, participants in the observed group, but not in the control group, showed the increased sensitivities towards warmth vs. competence traits words in the lexical decision task performed after the moral dilemma task. Our findings suggest that reputation concern, once triggered by the presence of potentially judgmental others, could activate a culturally dominant norm of warmth in various social contexts. This could, in turn, induce a series of goal-directed processes for self-presentation of warmth, leading to increased deontological judgments in moral dilemmas. The results of the present study provide insights into the reputational consequences of moral decisions that merit further exploration.

The article is here.

Saturday, February 10, 2018

Could Biologically Enhancing Our Morality Save Our Species?

Julian Savulescu
Leapsmag.com
Originally published January 12, 2017

Here is an excerpt:

Our limitations have also become apparent in another form of existential threat: resource depletion. Despite our best efforts at educating, nudging, and legislating on climate change, carbon dioxide emissions in 2017 are expected to come in at the highest ever following a predicted rise of 2 percent. Why? We aren’t good at cooperating in larger groups where freeriding is not easily spotted. We also deal with problems in order of urgency. A problem close by is much more significant to us than a problem in the future. That’s why even if we accept there is a choice between economic recession now or natural disasters and potential famine in the future, we choose to carry on drilling for oil. And if the disasters and famine are present day, but geographically distant, we still choose to carry on drilling.

So what is our radical solution? We propose that there is a need for what we call moral bioenhancement. That is, for seeking a biological intervention that can help us overcome our evolved moral limitations. For example, adapting our biology so that we can appreciate the suffering of foreign or future people in the same instinctive way we do our friends and neighbors. Or, in the case of individuals, in addressing the problem of psychopathy from a biological perspective.

The information is here.

Friday, October 27, 2017

Is utilitarian sacrifice becoming more morally permissible?

Ivar R. Hannikainen, Edouard Machery, & Fiery A. Cushman
Cognition
Volume 170, January 2018, Pages 95-101

Abstract

A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.


Here is a portion of the Discussion section:

A vibrant discussion among philosophers and cognitive scientists has focused on distinguishing the virtues and pitfalls of the human moral faculty (Bloom, 2017; Greene, 2014; Singer, 2005). On a pessimistic note, our results dovetail with evidence about the socialization and development of recent cohorts (e.g., Shonkoff et al., 2012): Utilitarian judgment has been shown to correlate with Machiavellian and psychopathic traits (Bartels & Pizarro, 2011), and also with the reduced capacity to distinguish felt emotions (Patil & Silani, 2014). At the same time, leading theories credit highly acclaimed instances of moral progress to the exercise of rational scrutiny over prevailing moral norms (Greene, 2014; Singer, 2005), and the persistence of parochialism and prejudice to the unbridled command of intuition (Bloom, 2017). From this perspective, greater disapproval of intuitive deontological principles among recent cohorts may stem from the documented rise in cognitive abilities (i.e., the Flynn effect; see Pietschnig & Voracek, 2015) and foreshadow an expanding commitment to the welfare-maximizing resolution of contemporary moral challenges.

Tuesday, October 24, 2017

‘The deserving’: Moral reasoning and ideological dilemmas in public responses to humanitarian communications

Irene Bruna Seu
British Journal of Social Psychology 55 (4), pp. 739-755.

Abstract

This paper investigates everyday moral reasoning in relation to donations and prosocial behaviour in a humanitarian context. The discursive analysis focuses on the principles of deservingness which members of the public use to decide who to help and under what conditions. The paper discusses three repertoires of deservingness: 'Seeing a difference', 'Waiting in queues' and 'Something for nothing' to illustrate participants' dilemmatic reasoning and to examine how the position of 'being deserving' is negotiated in humanitarian crises. Discursive analyses of these dilemmatic repertoires of deservingness identify the cultural and ideological resources behind these constructions and show how humanitarianism intersects and clashes with other ideologies and value systems. The data suggest that a neoliberal ideology, which endorses self-gratification and materialistic and individualistic ethics, and cultural assimilation of helper and receiver play important roles in decisions about humanitarian helping. The paper argues for the need for psychological research to engage more actively with the dilemmas involved in the moral reasoning related to humanitarianism and to contextualize decisions about giving and helping within the socio-cultural and ideological landscape in which the helper operates.

The research is here.