Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Judgments.

Sunday, December 1, 2019

Moral Reasoning and Emotion

Joshua May & Victor Kumar
Published in
The Routledge Handbook of Moral Epistemology,
eds. Karen Jones, Mark Timmons, and
Aaron Zimmerman, Routledge (2018), pp. 139-156.

Abstract:

This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.

(cut)

Conclusion

We draw two main conclusions. First, on a fair and plausible characterization of reasoning and emotion, they are both integral to moral judgment. In particular, when our moral beliefs undergo changes over long periods of time, there is ample space for both reasoning and emotion to play an iterative role. Second, it’s difficult to cleave reasoning from emotional processing. When the two affect moral judgment, especially across time, their interplay can make it artificial or fruitless to impose a division, even if a distinction can still be drawn between inference and valence in information processing.

To some degree, our conclusions militate against extreme characterizations of the rationalism-sentimentalism divide. However, the debate is best construed as a question about which psychological process is more fundamental or essential to distinctively moral cognition. The answer still affects both theoretical and practical problems, such as how to make artificial intelligence capable of moral judgment. At the moment, the more nuanced dispute is difficult to adjudicate, but it may be addressed by further research and theorizing.

The book chapter can be downloaded here.

Sunday, November 17, 2019

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Stefan Schubert, Lucius Caviola & Nadira S. Faber
Scientific Reports volume 9, Article number: 15100 (2019)

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

The research is here.

Tuesday, November 12, 2019

Errors in Moral Forecasting: Perceptions of Affect Shape the Gap Between Moral Behaviors and Moral Forecasts

Teper, R., Zhong, C.‐B., and Inzlicht, M. (2015)
Social and Personality Psychology Compass, 9, 1–14,
doi: 10.1111/spc3.12154

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well‐documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real‐life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real‐life moral behavior.

Conclusion

Long gone are the days when emotion was written off as a distractor or a roadblock to effective moral decision making. There now exists a great deal of evidence bolstering the idea that emotions are actually necessary for initiating adaptive behavior (Bechara, 2004; Damasio, 1994; Panksepp & Biven, 2012). Furthermore, evidence from the field of moral psychology points to the fact that individuals rely quite heavily on emotional and intuitive processes when engaging in moral judgments (e.g., Haidt, 2001). However, up until recently, the playing field of moral psychology has been heavily dominated by research revolving around moral judgments alone, especially when investigating the role that emotions play in motivating moral decision-making.

A pdf can be downloaded here.

Monday, November 4, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third parties judge those behaviors has numerous social consequences: People judged as behaving immorally can be socially ostracized, less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. But on the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting—how rights and wrongs add up in moral judgment.

Monday, June 24, 2019

Not so Motivated After All? Three Replication Attempts and a Theoretical Challenge to a Morally-Motivated Belief in Free Will

Andrew E. Monroe and Dominic Ysidron
Preprint

Abstract

Free will is often appraised as a necessary input for holding others morally or legally responsible for misdeeds. Recently, however, Clark and colleagues (2014) argued for the opposite causal relationship. They assert that moral judgments and the desire to punish motivate people’s belief in free will. In three experiments—two exact replications (Studies 1 & 2b) and one close replication (Study 2a)—we seek to replicate these findings. Additionally, in a novel experiment (Study 3) we test a theoretical challenge derived from attribution theory, which suggests that immoral behaviors do not uniquely influence free will judgments. Instead, our norm-violation model argues that norm deviations of any kind—good, bad, or strange—cause people to attribute more free will to agents, and attributions of free will are explained via desire inferences. Across replication experiments we found no evidence for the original claim that witnessing immoral behavior causes people to increase their belief in free will, though we did replicate the finding that people attribute more free will to agents who behave immorally compared to a neutral control (Studies 2a & 3). Finally, our novel experiment demonstrated broad support for our norm-violation account, suggesting that people’s willingness to attribute free will to others is malleable, but not because people are motivated to blame. Instead, this experiment shows that attributions of free will are best explained by people’s expectations for norm adherence, and when these expectations are violated, people infer that an agent expressed their free will to do so.

From the Discussion Section:

Together these findings argue for a non-moral explanation of free will judgments, with norm violation as the key driver. This account explains people’s tendency to attribute more free will to agents behaving badly: because people generally expect others to follow moral norms, when they don’t, people believe that there must have been a strong desire to perform the behavior. In addition, a norm-violation account is able to explain why people attribute more free will to agents behaving in odd or morally positive ways. Any deviation from what is expected causes people to attribute more desire and choice (i.e., free will) to that agent. Thus our findings suggest that people’s willingness to ascribe free will to others is indeed malleable, but considerations of free will are driven by basic social cognitive representations of norms, expectations, and desire. Moreover, these data indicate that when people endorse free will for themselves or for others, they are not making claims about broad metaphysical freedom. Instead, if desires and norm constraints are what affect ascriptions of free will, this suggests that what it means to have (or believe in) free will is to be rational (i.e., making choices informed by desires and preferences) and able to overcome constraints.

A preprint can be found here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly-cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition in this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm, or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Sunday, March 17, 2019

Actions Speak Louder Than Outcomes in Judgments of Prosocial Behavior

Daniel A. Yudkin, Annayah M. B. Prosser, and Molly J. Crockett
Emotion (2018).

Recently proposed models of moral cognition suggest that people's judgments of harmful acts are influenced by their consideration both of those acts' consequences ("outcome value"), and of the feeling associated with their enactment ("action value"). Here we apply this framework to judgments of prosocial behavior, suggesting that people's judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.
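
The framework the abstract describes is additive at heart: a deed's praiseworthiness reflects both how good it feels to perform (action value) and how much benefit it produces (outcome value). Here is a minimal sketch of that idea; the weights, inputs, and linear form are illustrative assumptions, not the authors' fitted model. The paper's finding corresponds only to the action weight exceeding the outcome weight for good deeds.

```python
def praiseworthiness(action_value: float, outcome_value: float,
                     w_action: float = 0.7, w_outcome: float = 0.3) -> float:
    """Additive sketch: praise reflects both the feeling of performing the
    deed (action value) and the benefit it confers (outcome value).
    The weights are illustrative; the result reported in the paper is
    only that w_action > w_outcome when judging good deeds."""
    return w_action * action_value + w_outcome * outcome_value

# Same benefit delivered warmly vs. grudgingly (hypothetical values):
print(praiseworthiness(action_value=0.9, outcome_value=0.5))  # 0.78
print(praiseworthiness(action_value=0.2, outcome_value=0.5))  # 0.29
```

On this sketch, two deeds with identical benefits earn very different praise, which is the asymmetry the experiments document.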

Here is an excerpt:

On a broader level, past work has suggested that judging the wrongness of harmful actions involves a process of “evaluative simulation,” whereby we evaluate the moral status of another’s action by simulating the affective response that we would experience performing the action ourselves (Miller et al., 2014). Our results are consistent with the possibility that evaluative simulation also plays a role in judging the praiseworthiness of helpful actions. If people evaluate helpful actions by simulating what it feels like to perform the action, then we would expect to see similar biases in moral evaluation as those that exist for moral action. Previous work has shown that individuals often do not act to maximize the benefits that others receive, but instead to maximize the good feelings associated with performing good deeds (Berman et al., 2018; Gesiarz & Crockett, 2015; Ribar & Wilhelm, 2002). Thus, the asymmetry in moral evaluation seen in the present studies may reflect a correspondence between first-person moral decision-making and third-person moral evaluation.

Download the pdf here.

Tuesday, February 19, 2019

How Our Attitude Influences Our Sense Of Morality

Konrad Bocian
Science Trends
Originally posted January 18, 2019

Here is an excerpt:

People think that their moral judgment is as rational and objective as scientific statements, but science does not confirm that belief. Within the last two decades, scholars interested in moral psychology discovered that people produce moral judgments based on fast and automatic intuitions rather than rational and controlled reasoning. For example, moral cognition research showed that moral judgments arise in approximately 250 milliseconds, and even then we are not able to explain them. Developmental psychologists proved that already at the age of 3 months, babies who do not have any language skills can distinguish a good protagonist (a helping one) from a bad one (a hindering one). But this does not mean that people’s moral judgments are based solely on intuitions. We can use deliberative processes when conditions are favorable – when we are both motivated to engage in and capable of conscious responding.

When we imagine how we would morally judge other people in a specific situation, we refer to actual rules and norms. If the laws are violated, the act itself is immoral. But we forget that intuitive reasoning also plays a role in forming a moral judgment. It is easy to condemn the librarian when our interest is involved on paper, but the whole picture changes when real money is on the table. We have known that rule for a very long time, but we still forget to use it when we predict our moral judgments.

Based on previous research on the intuitive nature of moral judgment, we decided to test how far our attitudes can impact our perception of morality. In our daily life, we meet a lot of people who are to some degree familiar, and we either have a positive or negative attitude toward these people.

The info is here.

Sunday, January 27, 2019

Expectations Bias Moral Evaluations

Derek Powell and Zachary Horne
PsyArXiv Preprints
Originally created on December 23, 2018

Abstract

People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.
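
The "information-theoretic principles" invoked here are essentially Shannon surprisal: an event's informativeness is -log p(event), so rarer events carry more information. A minimal sketch, with made-up base rates; the paper reports only the directional bias, not these probabilities or any particular mapping from surprisal to moral upset.

```python
import math

def surprisal(p: float) -> float:
    """Shannon surprisal in bits: rarer events carry more information."""
    return -math.log2(p)

# Hypothetical base rates (illustrative, not from the paper):
events = {
    "robbery at a convenience store": 0.020,  # relatively expected
    "robbery at a clothing store": 0.002,     # relatively unexpected
}

for event, p in events.items():
    print(f"{event}: p = {p:.3f}, surprisal = {surprisal(p):.2f} bits")

# The unexpected robbery carries roughly 3.3 more bits of information,
# which on this account predicts the stronger moral reaction the
# experiments observed, despite identical harm to the victims.
```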

The preprint is here.

Thursday, November 22, 2018

The Importance of Making the Moral Case for Immigration

Ilya Somin
reason.com
Originally posted on October 23, 2018

Here is an excerpt:

The parallels between racial discrimination and hostility to immigration were in fact noted by such nineteenth century opponents of slavery as Abraham Lincoln and Frederick Douglass. These similarities suggest that moral appeals similar to those made by the antislavery and civil rights movements can also play a key role in the debate over immigration.

Moral appeals were in fact central to the two issues on which public opinion has been most supportive of immigrants in recent years: DACA and family separation. Overwhelming majorities support letting undocumented immigrants who were brought to America as children stay in the US, and oppose the forcible separation of children from their parents at the border. In both cases, public opinion seems driven by considerations of justice and morality, not narrow self-interest (although letting DACA recipients stay would indeed benefit the US economy). Admittedly, these are relatively "easy" cases because both involve harming children for the alleged sins of their parents. But they nonetheless show the potency of moral considerations in the immigration debate. And most other immigration restrictions are only superficially different: instead of punishing children for their parents' illegal border-crossing, they victimize adults and children alike because their parents gave birth to them in the wrong place.

The key role of moral principles in struggles for liberty and equality should not be surprising. Contrary to popular belief, voters' political views on most issues are not determined by narrow self-interest. Public attitudes are instead generally driven by a combination of moral principles and perceived benefits to society as a whole. Immigration is not an exception to that tendency.

This is not to say that voters weigh the interests of all people equally. Throughout history, they have often ignored or downgraded those of groups seen as inferior, or otherwise undeserving of consideration. Slavery and segregation persisted in large part because, as Supreme Court Chief Justice Roger Taney notoriously put it, many whites believed that blacks "had no rights which the white man was bound to respect." Similarly, the subordination of women was not seriously questioned for many centuries, because most people believed that it was a natural part of life, and that men were entitled to rule over the opposite sex. In much the same way, today most people assume that natives are entitled to keep out immigrants either to preserve their culture against supposedly inferior ways or because they analogize a nation to a house or club from which the "owners" can exclude newcomers for almost any reason they want.

The info is here.

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
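
Process dissociation, as applied to moral dilemmas, estimates utilitarian (U) and deontological (D) response tendencies from answers to "congruent" dilemmas (where harm does not maximize outcomes) and "incongruent" dilemmas (where it does). A minimal sketch of that arithmetic, following the standard equations from Conway and Gawronski (2013); the response proportions below are invented for illustration, not data from this paper.

```python
def process_dissociation(p_reject_congruent: float,
                         p_reject_incongruent: float) -> tuple[float, float]:
    """Estimate utilitarian (U) and deontological (D) parameters.

    Congruent dilemmas (harm does NOT maximize outcomes): rejecting harm
    follows from utilitarian processing, or from deontological processing
    when utilitarian processing is absent:
        P(reject | congruent) = U + (1 - U) * D
    Incongruent dilemmas (harm DOES maximize outcomes): rejecting harm
    follows only from deontological processing absent utilitarian processing:
        P(reject | incongruent) = (1 - U) * D
    Solving these two equations yields U and D.
    """
    U = p_reject_congruent - p_reject_incongruent
    D = p_reject_incongruent / (1 - U)
    return U, D

# Invented proportions for one participant:
U, D = process_dissociation(p_reject_congruent=0.90, p_reject_incongruent=0.40)
print(f"U = {U:.2f}, D = {D:.2f}")  # U = 0.50, D = 0.80
```

On this decomposition, the paper's key point can be stated directly: antisociality shows up as a lower D, not a higher U.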

The research is here.

Tuesday, July 17, 2018

Social observation increases deontological judgments in moral dilemmas

Minwoo Lee, Sunhae Sul, Hackjin Kim
Evolution and Human Behavior
Available online 18 June 2018

Abstract

A concern for positive reputation is one of the core motivations underlying various social behaviors in humans. The present study investigated how experimentally induced reputation concern modulates judgments in moral dilemmas. In a mixed-design experiment, participants were randomly assigned to the observed vs. the control group and responded to a series of trolley-type moral dilemmas in the presence or absence of observers, respectively. While no significant baseline differences in personality traits or moral decision style were found across the two groups of participants, our analyses revealed that social observation promoted deontological judgments, especially for moral dilemmas involving direct physical harm (i.e., the personal moral dilemmas), yet with an overall decrease in decision confidence and a significant prolongation of reaction time. Moreover, participants in the observed group, but not in the control group, showed increased sensitivity to warmth vs. competence trait words in a lexical decision task performed after the moral dilemma task. Our findings suggest that reputation concern, once triggered by the presence of potentially judgmental others, could activate a culturally dominant norm of warmth in various social contexts. This could, in turn, induce a series of goal-directed processes for self-presentation of warmth, leading to increased deontological judgments in moral dilemmas. The results of the present study provide insights into the reputational consequences of moral decisions that merit further exploration.

The article is here.

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth MJ Byrne
Quarterly Journal of Experimental Psychology
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.

Thursday, July 5, 2018

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.
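
The core pattern is that judgments shift with the gap between what the agent did and the descriptive norm. A toy sketch of that relation; the linear form, the sensitivity parameter, and the tipping numbers are illustrative assumptions, not the authors' model.

```python
def judgment_shift(agent_behavior: float, descriptive_norm: float,
                   sensitivity: float = 1.0) -> float:
    """Toy model: moral praise/blame shifts with how far the agent's
    behavior sits above or below what most people do. Positive values
    mean more praise; negative values mean more blame. The paper
    reports the direction of this effect, not this functional form,
    and finds it only when the agent is aware of the norm (Study 4)."""
    return sensitivity * (agent_behavior - descriptive_norm)

# Study 3 used cultural differences in tipping norms; hypothetically,
# the same 15% tip reads as generous where the norm is 10% and as
# stingy where the norm is 20%:
print(judgment_shift(agent_behavior=15, descriptive_norm=10))  #  5.0 (praise)
print(judgment_shift(agent_behavior=15, descriptive_norm=20))  # -5.0 (blame)
```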

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Sunday, June 24, 2018

Moral hindsight for good actions and the effects of imagined alternatives to reality

Ruth M.J. Byrne and Shane Timmons
Cognition
Volume 178, September 2018, Pages 82–91

Abstract

Five experiments identify an asymmetric moral hindsight effect for judgments about whether a morally good action should have been taken, e.g., Ann should run into traffic to save Jill who fell before an oncoming truck. Judgments are increased when the outcome is good (Jill sustained minor bruises), as Experiment 1 shows; but they are not decreased when the outcome is bad (Jill sustained life-threatening injuries), as Experiment 2 shows. The hindsight effect is modified by imagined alternatives to the outcome: judgments are amplified by a counterfactual that if the good action had not been taken, the outcome would have been worse, and diminished by a semi-factual that if the good action had not been taken, the outcome would have been the same. Hindsight modification occurs when the alternative is presented with the outcome, and also when participants have already committed to a judgment based on the outcome, as Experiments 3A and 3B show. The hindsight effect occurs not only for judgments in life-and-death situations but also in other domains such as sports, as Experiment 4 shows. The results are consistent with a causal-inference explanation of moral judgment and go against an aversive-emotion one.

Highlights
• Judgments a morally good action should be taken are increased when it succeeds.
• Judgments a morally good action should be taken are not decreased when it fails.
• Counterfactuals that the outcome would have been worse amplify judgments.
• Semi-factuals that the outcome would have been the same diminish judgments.
• The asymmetric moral hindsight effect supports a causal-inference theory.

The research is here.

Wednesday, June 6, 2018

Welcome to America, where morality is judged along partisan lines

Joan Vennochi
Boston Globe
Originally posted May 8, 2018

Here are some excerpts:

“It’s OK to lie to the press?” asked Stephanopoulos. To which, Giuliani replied: “Gee, I don’t know — you know a few presidents who did that.”

(cut)

Twenty years later, special counsel Robert Mueller has been investigating allegations of collusion between the Trump campaign and the Russian government. Trump’s lawyer, Cohen, is now entangled in the collusion investigation, as well as with the payment to Daniels, which also entangles Trump — who, according to Giuliani, might invoke the Fifth Amendment to avoid testifying under oath. That must be tempting, given Trump’s well-established contempt for truthfulness and personal accountability.

(cut)

So it goes in American politics, where morality is judged strictly along partisan lines, and Trump knows it.

The information is here.

Wednesday, May 16, 2018

Moral Fatigue: The Effects of Cognitive Fatigue on Moral Reasoning

S. Timmons and R. Byrne
Quarterly Journal of Experimental Psychology (March 2018)

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgments compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgment that focuses on the harmful action, killing one person, but not when they make a judgment that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgments about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The research is here.

Wednesday, January 3, 2018

The neuroscience of morality and social decision-making

Keith Yoder and Jean Decety
Psychology, Crime & Law
doi: 10.1080/1068316X.2017.1414817

Abstract
Across cultures, humans care deeply about morality and create institutions, such as criminal courts, to enforce social norms. In such contexts, judges and juries engage in complex social decision-making to ascertain a defendant’s capacity, blameworthiness, and culpability. Cognitive neuroscience investigations have begun to reveal the distributed neural networks which interact to implement moral judgment and social decision-making, including systems for reward learning, valuation, mental state understanding, and salience processing. These processes are fundamental to morality, and their underlying neural mechanisms are influenced by individual differences in empathy, caring, and justice sensitivity. This new knowledge has important implications in legal settings for understanding how triers of fact reason. Moreover, recent work demonstrates how disruptions within the social decision-making network facilitate immoral behavior, as in the case of psychopathy. Incorporating neuroscientific methods with psychology and clinical neuroscience has the potential to improve predictions of recidivism, future dangerousness, and responsivity to particular forms of rehabilitation.

The article is here.

From the Conclusion section:

Current neuroscience work demonstrates that social decision-making and moral reasoning rely on multiple partially overlapping neural networks which support domain-general processes, such as executive control, saliency processing, perspective-taking, reasoning, and valuation. Neuroscience investigations have contributed to a growing understanding of the role of these processes in moral cognition and in judgments of blame and culpability, exactly the sorts of judgments required of judges and juries. Dysfunction of these networks can lead to dysfunctional social behavior and a propensity to immoral behavior, as in the case of psychopathy. Significant progress has been made in clarifying which aspects of social decision-making network functioning are most predictive of future recidivism. Psychopathy, in particular, constitutes a complex type of moral disorder and a challenge to the criminal justice system.

Worth reading...