Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral judgment.

Saturday, November 20, 2021

Narrative media’s emphasis on distinct moral intuitions alters early adolescents’ judgments

Hahn, L., et al. (2021).
Journal of Media Psychology: 
Theories, Methods, and Applications. 
Advance online publication.

Abstract

Logic from the model of intuitive morality and exemplars (MIME) suggests that narrative media emphasizing moral intuitions can increase the salience of those intuitions in audiences. To date, support for this logic has been limited to adults. Across two studies, the present research tested MIME predictions in early adolescents (ages 10–14). The salience of care, fairness, loyalty, and authority intuitions was manipulated in a pilot study with verbal prompts (N = 87) and in the main study with a comic book (N = 107). In both studies, intuition salience was measured after induction. The pilot study demonstrated that exposure to verbal prompts emphasizing care, fairness, and loyalty increased the salience of their respective intuitions. The main study showed that exposure to comic books emphasizing all four separate intuitions increased salience of their respective intuitions in early adolescents. Results are discussed in terms of relevance for the MIME and understanding narrative media’s influence on children’s moral judgments. 

Conclusion

Moral education is often at the forefront of parents’ concern for their children’s well-being. Although there is value in directly teaching children moral principles through instruction about what to do or not do, our results support an indirect approach to socializing children’s morality (Haidt & Bjorklund, 2008). This first step in exploring narrative media’s ability to activate moral intuitions in young audiences should be accompanied by additional work examining how “direct route” lessons, such as those contained in the Ten Commandments, may complement narrative media’s impact on children’s morality.

Our studies provide evidence supporting the MIME’s predictions about narrative content’s influence on moral intuition salience. Future research should build on these findings to examine whether this elevated intuition salience can influence broader values, judgments, and behaviors in children. Such examinations will be especially important for researchers interested in both the mechanism responsible for media’s influence and the extent of media’s impact on malleable, developing children, who may be socialized by media content.


Monday, November 15, 2021

On Defining Moral Enhancement: A Clarificatory Taxonomy

Carl Jago
Journal of Experimental Social Psychology
Volume 95, July 2021, 104145

Abstract

In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups, which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

From the General Discussion

In multiple experiments aimed at examining the influence of base rates per se, we found that base rates do indeed influence judgments, but the size of the effect we observed was very small. We considered that, in discovering moral judgments’ resilience to influence from base rates, we may have only rediscovered a general tendency, such as the fundamental attribution error, whereby people discount situational factors. If so, this tendency would then also apply broadly to non-moral scenarios. We therefore conducted another study in which our experimental materials were modified so as to remove the moral components. We found a substantial base-rate effect on participants’ judgments of performance regarding non-moral behavior. This finding suggests that the resilience to base rates observed in the preceding studies is unlikely to be the result of a more general tendency, and may instead be unique to moral judgment.

The main reason we concluded that the most closely related extant research could not answer the present research question was that those studies involved injunctive norms and special reference groups. To confirm that these factors could inflate any influence of base rates on moral judgment, we modified our experiments in the final pair of studies so as to include them. Specifically, in one study, we crossed prescriptive and proscriptive injunctive norms with high and low base rates and found that the impact of an injunctive norm outweighs any impact of the base rate. In the other study, we found that simply mentioning, for example, that there were some good people among those who engaged in a high base-rate behavior resulted in a large effect on moral judgment: not only on judgments of the target’s character, but also on judgments of blame and wrongness.

Saturday, November 13, 2021

Moral behavior in games: A review and call for additional research

E. Clarkson
New Ideas in Psychology
Volume 64, January 2022, 100912

Abstract

The current review has several specific aims. First, it acknowledges and details a new and growing body of research that associates moral judgments with behavior in social dilemmas and economic games. Second, it addresses how the study of moral behavior improves on past research that measured morality exclusively by asking about moral judgment or belief. In analyzing these advantages, it is argued that additional research associating moral judgments with behavior is better equipped to answer debates within the field, such as whether sacrificial judgments do reflect a concern for the greater good and whether utilitarianism (or other moral theories) is better suited to solving certain collective action problems (like tragedies of the commons). To this end, future researchers should use methods that require participants to make decisions with real-world behavioral consequences.

Highlights

• Prior work has long investigated moral judgments in hypothetical scenarios.

• Arguments concerning the validity of this method are reviewed.

• New research is investigating the association between moral judgments and behavior.

• Future studies should continue and broaden these investigations to new moral theories.


Friday, October 22, 2021

A Meta-Analytic Investigation of the Antecedents, Theoretical Correlates, and Consequences of Moral Disengagement at Work

Ogunfowora, B. T., et al. (2021)
The Journal of Applied Psychology
Advance online publication. 
https://doi.org/10.1037/apl0000912

Abstract

Moral disengagement refers to a set of cognitive tactics people employ to sidestep moral self-regulatory processes that normally prevent wrongdoing. In this study, we present a comprehensive meta-analytic review of the nomological network of moral disengagement at work. First, we test its dispositional and contextual antecedents, theoretical correlates, and consequences, including ethics outcomes (workplace misconduct and organizational citizenship behaviors [OCBs]) and non-ethics outcomes (turnover intentions and task performance). Second, we examine Bandura's postulation that moral disengagement fosters misconduct by diminishing moral cognitions (moral awareness and moral judgment) and anticipatory moral self-condemning emotions (guilt). We also test a contrarian view that moral disengagement is limited in its capacity to effectively curtail moral emotions after wrongdoing. The results show that Honesty-Humility, guilt proneness, moral identity, trait empathy, conscientiousness, idealism, and relativism are key individual antecedents. Further, abusive supervision and perceived organizational politics are strong contextual enablers of moral disengagement, while ethical leadership and organizational justice are relatively weak deterrents. We also found that narcissism, Machiavellianism, psychopathy, and psychological entitlement are key theoretical correlates, although moral disengagement shows incremental validity over these "dark" traits. Next, moral disengagement was positively associated with workplace misconduct and turnover intentions, and negatively related to OCBs and task performance. Its positive impact on misconduct was mediated by lower moral awareness, moral judgment, and anticipated guilt. Interestingly, however, moral disengagement was positively related to guilt and shame post-misconduct. In sum, we find strong cumulative evidence for the pertinence of moral disengagement in the workplace.

From the Discussion

Our moderator analyses reveal several noteworthy findings. First, the relationship between moral disengagement and misconduct did not significantly differ depending on whether it is operationalized as a trait or state. This suggests that the impact of moral disengagement – at least with respect to workplace misconduct – is equally devastating when it is triggered in specific situations or when it is captured as a stable propensity. This provides initial support for conceptualizing moral disengagement along a continuum – from “one off” instances in specific contexts (i.e., state moral disengagement) to a “dynamic disposition” (Bandura, 1999b) that is relatively stable, but which may also shift in response to different situations (Moore et al., 2019).  

Second, there may be utility in exploring specific disengagement tactics. For instance, euphemistic labeling exerted stronger effects on misconduct compared to moral justification and diffusion of responsibility. Relative weight analyses further showed that some tactics contribute more to understanding misconduct and OCBs. Scholars have proposed that exploring moral disengagement tactics that match the specific context may offer new insights (Kish-Gephart et al., 2014; Moore et al., 2019). It is possible that moral justification might be critical in situations where participants must conjure up rationales to justify their misdeeds (Duffy et al., 2005), while diffusion of responsibility might matter more in team settings where morally disengaging employees can easily assign blame to the collective (Alnuaimi et al., 2010). These possibilities suggest that specific disengagement tactics may offer novel theoretical insights that may be overlooked when scholars focus on overall moral disengagement. However, we acknowledge that this conclusion is preliminary given the small number of studies available for these analyses. 

Wednesday, October 6, 2021

Immoral actors’ meta-perceptions are accurate but overly positive

Lees, J. M., Young, L., & Waytz, A.
(2021, August 16).
https://doi.org/10.31234/osf.io/j24tn

Abstract

We examine how actors think others perceive their immoral behavior (moral meta-perception) across a diverse set of real-world moral violations. Utilizing a novel methodology, we solicit written instances of actors’ immoral behavior (N_total = 135), measure motives and meta-perceptions, then provide these accounts to separate samples of third-party observers (N_total = 933), using US convenience and representative samples (N_actor-observer pairs = 4,615). We find that immoral actors can accurately predict how they are perceived, how they are uniquely perceived relative to the average immoral actor, and how they are misperceived. Actors who are better at judging the motives of other immoral actors also have more accurate meta-perceptions. Yet accuracy is accompanied by two distinct biases: overestimating the positive perceptions others hold, and believing one’s motives are more clearly perceived than they are. These results contribute to a detailed account of the multiple components underlying both accuracy and bias in moral meta-perception.

From the General Discussion

These results collectively suggest that individuals who have engaged in immoral behavior can accurately forecast how others will react to their moral violations.  

Studies 1-4 also found similar evidence for accuracy in observers’ judgments of the unique motives of immoral actors, suggesting that individuals are able to successfully perspective-take with those who have committed moral violations. Observers higher in cognitive ability (Studies 2-3) and empathic concern (Studies 2-4) were consistently more accurate in these judgments, while observers higher in Machiavellianism (Studies 2-4) and the propensity to engage in unethical workplace behaviors (Studies 3-4) were consistently less accurate. This latter result suggests that more frequently engaging in immoral behavior does not grant one insight into the moral minds of others, and in fact is associated with less ability to understand the motives behind others’ immoral behavior.

Despite strong evidence for meta-accuracy (and observer accuracy) across studies, actors’ accuracy in judging how they would be perceived was accompanied by two judgment biases.  Studies 1-4 found evidence for a transparency bias among immoral actors (Gilovich et al., 1998), meaning that actors overestimated how accurately observers would perceive their self-reported moral motives. Similarly, in Study 4 an examination of actors’ meta-perception point estimates found evidence for a positivity bias. Actors systematically overestimate the positive attributions, and underestimate the negative attributions, made of them and their motives. In fact, the single meta-perception found to be the most inaccurate in its average point estimate was the meta-perception of harm caused, which was significantly underestimated.

Saturday, August 21, 2021

The relational logic of moral inference

Crockett, M., Everett, J. A. C., Gill, M., & Siegel, J. 
(2021, July 9). https://doi.org/10.31234/osf.io/82c6y

Abstract

How do we make inferences about the moral character of others? Here we review recent work on the cognitive mechanisms of moral inference and impression updating. We show that moral inference follows basic principles of Bayesian inference, but also departs from the standard Bayesian model in ways that may facilitate the maintenance of social relationships. Moral inference is not only sensitive to whether people make moral decisions, but also to features of decisions that reveal their suitability as a relational partner. Together these findings suggest that moral inference follows a relational logic: people form and update moral impressions in ways that are responsive to the demands of ongoing social relationships and particular social roles. We discuss implications of these findings for theories of moral cognition and identify new directions for research on human morality and person perception.
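
For readers who want the Bayesian baseline made explicit, standard Bayesian updating of a character impression would take the following form (a generic sketch in our own notation, not the authors’ model):

\[
P(C \mid b) \;=\; \frac{P(b \mid C)\,P(C)}{P(b \mid C)\,P(C) + P(b \mid \neg C)\,P(\neg C)},
\]

where C is the hypothesis that an agent has good moral character, b is an observed behavior or decision, and P(C) is the prior impression. The abstract’s claim is that moral impression updating approximately follows this rule while departing from it in ways that help preserve ongoing relationships.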

Summary

There is growing evidence that people infer moral character from behaviors that are not explicitly moral. The data so far suggest that people who are patient, hard-working, tolerant of ambiguity, risk-averse, and actively open-minded are seen as more moral and trustworthy. While at first blush this collection of preferences may seem arbitrary, considering moral inference from a relational perspective reveals a coherent logic. All of these preferences are correlated with cooperative behavior, and comprise traits that are desirable for long-term relationship partners. Reaping the benefits of long-term relationships requires patience and a tolerance for ambiguity: sometimes people make mistakes despite good intentions. Erring on the side of caution and actively seeking evidence to inform decision-making in social situations not only helps prevent harmful outcomes (Kappes et al., 2019), but also signals respect: social life is fraught with uncertainty (FeldmanHall & Shenhav, 2019; Kappes et al., 2019), and assuming we know what’s best for another person can have bad consequences, even when our intentions are good. If evidence continues to suggest that certain types of non-moral preferences are preferred in social partners, partner choice mechanisms may explain the prevalence of those preferences in the broader population.

Friday, July 16, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R.A., et al.
Cognition
Volume 214, September 2021, 104770

Abstract

People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents correlates negatively with scores on a psychopathy scale.

From the General Discussion

It seems reasonable to think that there would be some benefit to communicating these moral emotions as a signal of character, and to being able to glean information about the character of others from observations of their emotional responses. If a propensity to feel guilt makes it more likely that a person is cooperative and trustworthy, observers would need to discriminate between people who are and are not prone to guilt. Guilt could therefore serve as an effective regulator of moral behavior in others in its role as a reliable signal of good character.  This account is consistent with theoretical accounts of emotional expressions more generally, either in the face, voice, or body, as a route by which observers make inferences about a person’s underlying dispositions (Frank, 1988). Our results suggest that false positive emotional responses specifically may provide an additional, and apparently informative, source of evidence for one’s propensity toward moral emotions and moral behavior.

Tuesday, July 13, 2021

Valence framing effects on moral judgments: A meta-analysis

McDonald, K., et al.
Cognition
Volume 212, July 2021, 104703

Abstract

Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g. lives saved) or their negative outcomes (e.g. lives lost). When such framing effects occur in the domain of moral judgments, they have been taken to cast doubt on the reliability of moral judgments and raise questions about the extent to which these moral judgments are self-evident or justified in themselves. One important factor in this debate is the magnitude and variability of the extent to which differences in framing presentation impact moral judgments. Although moral framing effects have been studied by psychologists, the overall strength of these effects pooled across published studies is not yet known. Here we conducted a meta-analysis of 109 published articles (contributing a total of 146 unique experiments with 49,564 participants) involving valence framing effects on moral judgments and found a moderate effect (d = 0.50) among between-subjects designs as well as several moderator variables. While we find evidence for publication bias, statistically accounting for publication bias attenuates, but does not eliminate, this effect (d = 0.22). This suggests that the magnitude of valence framing effects on moral decisions is small, yet significant when accounting for publication bias.
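
For calibration, the d values reported above are standardized mean differences (Cohen’s d); for a between-subjects comparison of two framing conditions, the standard definition (a general formula, not anything specific to this meta-analysis) is

\[
d \;=\; \frac{\bar{x}_{A} - \bar{x}_{B}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_A - 1)\,s_A^2 + (n_B - 1)\,s_B^2}{n_A + n_B - 2}},
\]

where A and B are the positive and negative framing conditions. By Cohen’s rough benchmarks (0.2 small, 0.5 medium, 0.8 large), the uncorrected estimate of 0.50 is medium-sized and the bias-corrected estimate of 0.22 is small, which is what the abstract’s closing sentence conveys.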

Wednesday, July 7, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R. A., et al.
Cognition
Volume 214, September 2021, 104770


General discussion

Collectively, our results support the hypothesis that false positive moral emotions are associated with both judgments of moral character and traits associated with moral character. We consistently found that observers use an agent’s false positive experience of moral emotions (e.g., guilt, gratitude) to infer their underlying moral character and social likability, and to predict both their future emotional responses and their future moral behavior. Specifically, we found that observers judge an agent who experienced “false positive” guilt (in response to an accidental harm) as a more moral person, more likeable, less likely to commit future moral infractions, and more trustworthy than an agent who experienced no guilt. Our results help explain the second “puzzle” regarding guilt for accidental actions (Kamtekar & Nichols, 2019). That is, one reason that observers may find an accidental agent less blameworthy, and yet still be wary if the agent does not feel guilt, is that such false positive guilt provides an important indicator of that agent’s underlying character.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.

Wednesday, May 19, 2021

Population ethical intuitions

Caviola, L., Althaus, D., Mogensen, A., 
& Goodwin, G. (2021, April 1). 

Abstract

We investigated lay people’s population ethical intuitions (N = 4,374), i.e., their moral evaluations of populations that differ in size and composition. First, we found that people place greater relative weight on, and are more sensitive to, suffering compared to happiness. Participants, on average, believed that more happy people are needed to outweigh a given number of unhappy people in a population (Studies 1a-c). Second, we found that—in contrast to so-called person-affecting views—people do not consider the creation of new people as morally neutral. Participants considered it good to create a new happy person and bad to create a new unhappy person (Study 2). Third, we found that people take into account both the average level (averagism) and the total level (totalism) of happiness when evaluating populations. Participants preferred populations with greater total happiness levels when the average level remained constant (Study 3) and populations with greater average happiness levels when the total level remained constant (Study 4). When the two principles were in conflict, participants’ preferences lay in between the recommendations of the two principles, suggesting that both are applied simultaneously (Study 5). In certain cases, participants even showed averagist preferences when averagism disfavors adding more happy people and favors adding more unhappy people to a population (Study 6). However, when participants were prompted to reflect as opposed to rely on their intuitions, their preferences became more totalist (Studies 5-6). Our findings have implications for moral psychology, philosophy and policy making.

From the Discussion

Suffering is more bad than happiness is good

We found that people weigh suffering more than happiness when they evaluate the goodness of populations consisting of both happy and unhappy people. Thus, people are neither following strict negative utilitarianism (minimizing suffering, giving no weight to maximizing happiness at all) nor strict classical utilitarianism (minimizing suffering and maximizing happiness, weighing both equally). Instead, the average person’s intuitions seem to track a mixture of these two theories. In Studies 1a-c, participants on average believed that approximately 1.5-3 times more happy people are required to outweigh a given number of unhappy people. The precise trade ratio between happiness and suffering depended on the intensity levels of happiness and suffering. (In additional preliminary studies, we found that the trade ratio can also heavily depend on the framing of the question.) Study 1c clarified that, on average, participants continued to believe that more happiness was needed to outweigh suffering even when the happiness and suffering units were exactly equally intense. This suggests that people generally weigh suffering more than happiness in their moral assessments above and beyond perceiving suffering to be more intense than happiness. However, our studies also made clear that there are individual differences and that a substantial proportion of participants weighed happiness and suffering equally strongly, in line with classical utilitarianism.
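
To make the reported trade ratio concrete (a sketch in our own notation, not a model the authors present): if each unhappy person counts k times as much as an equally intense happy person, then a population with H happy and U unhappy people is judged net-positive only when

\[
H - k\,U > 0 \quad\Longleftrightarrow\quad H > k\,U, \qquad k \approx 1.5\text{--}3.
\]

With k = 2, for example, 100 unhappy people are outweighed only by more than 200 happy people. In this parameterization, strict classical utilitarianism corresponds to k = 1, and strict negative utilitarianism to the limit as k grows without bound.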

Saturday, April 17, 2021

Binding Moral Values Gain Importance in the Presence of Close Others

Yudkin, D. A., et al. (2019, April 12). 
https://doi.org/10.31234/osf.io/tcq65

Abstract

A key function of morality is to regulate social behavior. Research suggests moral values may be divided into two types: binding values, which govern behavior in groups, and individualizing values, which promote personal rights and freedoms. Because people tend to mentally activate concepts in situations in which they may prove useful, the importance they afford moral values may vary according to whom they are with in the moment. In particular, because binding values help regulate communal behavior, people may afford these values more importance when in the presence of close (versus distant) others. Five studies test and support this hypothesis. First, we use a custom smartphone application to repeatedly record participants’ (n = 1,166) current social context and the importance they afforded moral values. Results show people rate moral values as more important when in the presence of close others, and this effect is stronger for binding than individualizing values—an effect that replicates in a large preregistered online sample (n = 2,016). A lab study (n = 390) and two preregistered online experiments (n = 580 and n = 752) provide convergent evidence that people afford binding, but not individualizing, values more importance when in the real or imagined presence of close others. Our results suggest people selectively activate different moral values according to the demands of the situation, and show how the mere presence of others can affect moral thinking.

Discussion

Centuries of thought in moral philosophy suggest that the purpose of moral values is to regulate social behavior. However, the psychology underlying this process remains underspecified. Here we show that the mere presence of close others increases the importance people afford binding moral values. By contrast, individualizing values are not reliably associated with relational context. In other words, people appear to selectively activate those moral values most relevant to their current social situation. This “moral activation” may play a functional role by helping people to abide by the relevant moral values in a given relational context and monitor adherence to those values in others. 

Our results are consistent with the view that different values play different functional roles in social life. Past research contrasts the values that encourage cohesion in groups and relationships with those that emphasize individual rights and freedoms. Because violations of individualizing values may be considered wrong regardless of where and when they occur, the importance people ascribe to them may be unaffected by who they are with. By contrast, because binding values concern the moral duties conferred by specific social relationships, they may be particularly subject to social influence.

Thursday, March 25, 2021

Religious Affiliation and Conceptions of the Moral Domain

Levine, S., Rottman, J., et al.
(2019, November 14). 

Abstract

What is the relationship between religious affiliation and conceptions of the moral domain? Putting aside the question of whether people from different religions agree about how to answer moral questions, here we investigate a more fundamental question: How much disagreement is there across religions about which issues count as moral in the first place? That is, do people from different religions conceptualize the scope of morality differently? Using a new methodology to map out how individuals conceive of the moral domain, we find dramatic differences among adherents of different religions. Mormon and Muslim participants moralized their religious norms, while Jewish participants did not. Hindu participants in our sample did not seem to make a moral/non-moral distinction of the same kind. These results suggest a profound relationship between religious affiliation and conceptions of the scope of the moral domain.

From the Discussion

We have found that it is neither true that secular people and religious people share a common conception of the moral domain nor that religious morality is expanded beyond secular morality in a uniform manner. Furthermore, when participants in a group did make a moral/non-moral distinction, there was broad agreement that norms related to harm, justice, and rights counted as moral norms. However, some religious individuals (such as the Mormon and Muslim participants) also moralized norms from their own religion that are not related to these themes. Meanwhile, others (such as the Jewish participants) acknowledged the special status of their own norms but did not moralize them. Yet others (such as the Hindu participants in our sample) seemed to make no distinction between the moral and the non-moral in the way that the other groups did. Our dataset, therefore, suggests that any theory about the lay conception of the scope of morality needs to explain why the Jewish participants in our dataset do not consider their own norms to be moral norms and why the Mormon and Muslim participants do. To the extent that Social Domain Theory (SDT) and Moral Foundations Theory (MFT) make any predictions about how lay people decide whether a norm is moral, they too must find a way to explain these datasets.

Monday, March 1, 2021

Morality justifies motivated reasoning in the folk ethics of belief

Corey Cusimano & Tania Lombrozo
Cognition
19 January 2021

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one's friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral considerations affect how people evaluate others' beliefs. First, the moral value of a belief affects the evidential threshold required to believe, such that morally beneficial beliefs demand less evidence than morally risky beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and on that basis, sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the General Discussion

5.2. Implications for motivated reasoning

Psychologists have long speculated that commonplace deviations from rational judgments and decisions could reflect commitments to different normative standards for decision making rather than merely cognitive limitations or unintentional errors (Cohen, 1981; Koehler, 1996; Tribe, 1971). This speculation has been largely confirmed in the domain of decision making, where work has documented that people will refuse to make certain decisions because of a normative commitment to not rely on certain kinds of evidence (Nesson, 1985; Wells, 1992), or because of a normative commitment to prioritize deontological concerns over utility-maximizing concerns (Baron & Spranca, 1997; Tetlock et al., 2000). And yet, there has been comparatively little investigation in the domain of belief formation. While some work has suggested that people evaluate beliefs in ways that favor non-objective, or non-evidential criteria (e.g., Armor et al., 2008; Cao et al., 2019; Metz, Weisberg, & Weisberg, 2018; Tenney et al., 2015), this work has failed to demonstrate that people prescribe beliefs that violate what objective, evidence-based reasoning would warrant. To our knowledge, our results are the first to demonstrate that people will knowingly endorse non-evidential norms for belief, and specifically, prescribe motivated reasoning to others.

(cut)

Our findings suggest more proximate explanations for these biases: that lay people see these beliefs as morally beneficial and treat these moral benefits as legitimate grounds for motivated reasoning. Thus, overconfidence or over-optimism may persist in communities because people hold others to lower standards of evidence for adopting morally-beneficial optimistic beliefs than they do for pessimistic beliefs, or otherwise treat these benefits as legitimate reasons to ignore the evidence that one has.

Thursday, January 21, 2021

Reexamining the role of intent in moral judgements of purity violations

Kupfer, T. R., Inbar, Y., & Tybur, J. M.
Journal of Experimental Social Psychology
Volume 91, November 2020, 104043

Abstract

Perceived intent is a pivotal factor in moral judgement: intentional moral violations are considered more morally wrong than accidental ones. However, a body of recent research argues that intent is less important for moral judgements of impure acts – that is, those acts that are condemned because they elicit disgust. But the literature supporting this claim is limited in multiple ways. We conducted a new test of the hypothesis that condemnation of purity violations operates independently from intent. In Study 1, participants judged the wrongness of moral violations that were either intentional or unintentional and were either harmful (e.g., stealing) or impure (e.g., public defecation). Results revealed a large effect of intent on moral wrongness ratings that did not vary across harmful and disgusting scenarios. In Study 2, a registered report, participants judged the wrongness of disgust-eliciting moral violations that were either mundane and dyadic (e.g., serving contaminated food) or abnormal and self-directed (e.g., consuming urine). Results revealed a large effect of intent on moral wrongness judgements that did not vary across mundane and abnormal scenarios. Findings challenge the claim that moral judgements about purity violations rely upon unique psychological mechanisms that are insensitive to information about the wrongdoer's mental state.

From the Discussion

Across two studies, we found that participants rated intentional disgusting acts more morally wrong than unintentional disgusting acts. Study 1 showed that intent had a large effect on moral judgement of mundane, dyadic impure acts, such as serving contaminated food or urinating in public. Moreover, the effect of intent on moral judgement was not different for harm and purity violations. Study 2 showed that there was also a large effect of intent on moral judgement of abnormal, self-directed purity violations, using scenarios similar to those frequently used in past research, such as eating a pet dog (e.g., Barrett et al., 2016), drinking urine (e.g., Young & Saxe, 2011), or eating cloned human meat (e.g., Russell & Giner-Sorolla, 2011). In Study 2 the effect of intent did not differ across abnormal, self-directed purity violations and mundane, dyadic purity violations. These results are inconsistent with previous findings purporting to show little or no effect of intent on moral judgements of impure acts (e.g., Barrett et al., 2016; Chakroff et al., 2015; Young & Saxe, 2011).


Monday, January 18, 2021

Children punish third parties to satisfy both consequentialist and retributive motives

Marshall, J., Yudkin, D.A. & Crockett, M.J. 
Nat Hum Behav (2020). 

Abstract

Adults punish moral transgressions to satisfy both retributive motives (such as wanting antisocial others to receive their ‘just deserts’) and consequentialist motives (such as teaching transgressors that their behaviour is inappropriate). Here, we investigated whether retributive and consequentialist motives for punishment are present in children approximately between the ages of five and seven. In two preregistered studies (N = 251), children were given the opportunity to punish a transgressor at a cost to themselves. Punishment either exclusively satisfied retributive motives by only inflicting harm on the transgressor, or additionally satisfied consequentialist motives by teaching the transgressor a lesson. We found that children punished when doing so satisfied only retributive motives, and punished considerably more when doing so also satisfied consequentialist motives. Together, these findings provide evidence for the presence of both retributive and consequentialist motives in young children.

Discussion

Overall, these two preregistered studies provide clear evidence for the presence of both consequentialist and retributive motives in young children, supporting the naive pluralism hypothesis. Our observations cohere with past research showing that children between the ages of five and seven are willing to engage in costly third-party punishment, and reveal the motives behind children’s punitive behaviour. Children reliably engaged in purely retributive punishment: they punished solely to make an antisocial other sad, without any possibility of deterring future antisocial behaviour. Children did not punish in the non-communicative condition out of a preference for locking iPads in boxes, as shown by the fact that children punished less in the baseline control condition. Furthermore, non-communicative punishment could not be explained by erroneous beliefs that punishing would teach the transgressor a lesson. This demonstrates that young children are not pure consequentialists. Rather, our data suggest that young children engaged in costly third-party punishment for purely retributive reasons.

Wednesday, November 18, 2020

Virtuous Victims

Jordan, J., & Kouchaki, M. (2020, April 11).
https://doi.org/10.31234/osf.io/yz8r6

Abstract

Humans ubiquitously encounter narratives about immoral acts and their victims. Here, we demonstrate that these narratives can influence perceptions of victims’ moral character. Specifically, across a wide range of contexts, victims are seen as more moral than non-victims who have behaved identically. Using 13 experiments (total n = 8,358), we explore this Virtuous Victim effect. We show that it is specific to victims of immorality (i.e., it does not extend equally to victims of accidental misfortune) and to moral virtue (i.e., it does not extend equally to positive nonmoral traits). We also show that the Virtuous Victim effect can occur online and in the lab, when subjects have other morally relevant information about the victim, when subjects have a direct opportunity to condemn the perpetrator, and in the context of both third- and first-person victim narratives. Finally, we provide support for the Justice Restoration Hypothesis, which posits that people see victims as moral in order to motivate adaptive justice-restorative action (i.e., punishment of perpetrators and helping of victims). We show that people see victims as having elevated moral character, but do not expect them to behave more morally or less immorally—a pattern that is consistent with the Justice Restoration Hypothesis, but not readily explained by alternative explanations for the Virtuous Victim effect. And we provide both correlational and causal evidence for a key prediction of the Justice Restoration Hypothesis: when people do not perceive incentives to help victims and punish perpetrators, the Virtuous Victim effect disappears.

From the Discussion

Our theory and results contradict the hypothesis that people see victims as morally deserving of mistreatment in order to maintain just world beliefs. We suggest that, when exposed to apparent injustice, the default reaction is not to justify what has occurred, but rather to seek to restore justice (by punishing the perpetrator and/or helping the victim). It has been proposed that restoring justice is another route through which people can maintain just world beliefs (25, 26). And we have argued it is typically a more adaptive response to wrongdoing, because people frequently face incentives for justice-restorative action. Our experiments are consistent with the hypothesis that, in order to adaptively motivate such action, people see victims as morally good. Future research should investigate whether people also see victims as possessing other traits (e.g., helplessness, neediness, or innocence) that might motivate justice-restorative action.

Thursday, August 20, 2020

Morality justifies motivated reasoning

Corey Cusimano and Tania Lombrozo
Paper found online

Abstract

A great deal of work argues that people demand impartial, evidence-based reasoning from others. However, recent findings show that moral values occupy a cardinal position in people’s evaluation of others, raising the possibility that people sometimes prescribe morally-good but evidentially-poor beliefs. We report two studies investigating how people evaluate beliefs when these two ideals conflict and find that people regularly endorse motivated reasoning when it can be morally justified. Furthermore, we document two ways that moral considerations result in prescribed motivated reasoning. First, morality can provide an alternative justification for belief, leading people to prescribe evidentially unsupported beliefs to others. And, second, morality can affect how people evaluate the way evidence is weighed by lowering or raising the threshold of required evidence for morally good and bad beliefs, respectively. These results illuminate longstanding questions about the nature of motivated reasoning and the social regulation of belief.

From the General Discussion

These results can potentially explain the presence and persistence of certain motivated beliefs. In particular, morally-motivated beliefs could persist in part because people do not demand that they or others reason accurately or acquire equal evidence for their beliefs (Metz, Weisberg, & Weisberg, 2018). These findings also invite a reinterpretation of some classic biases, which are in general interpreted as unintentional errors (Kunda, 1990). We suggest instead that some apparent errors reflect convictions that one ought to be biased or discount evidence. Future work investigating biased belief formation should incorporate the perceived moral value of the belief.


Wednesday, June 10, 2020

Metacognition in moral decisions: judgment extremity and feeling of rightness in moral intuitions

Solange Vega and others
Thinking & Reasoning

This research investigated the metacognitive underpinnings of moral judgment. Participants in two studies were asked to provide quick intuitive responses to moral dilemmas and to indicate their feeling of rightness about those responses. Afterwards, participants were given extra time to rethink their responses, and change them if they so wished. The feeling of rightness associated with the initial judgments was predictive of whether participants chose to change their responses and how long they spent rethinking them. Thus, one’s metacognitive experience upon first coming up with a moral judgment influences whether one sticks to that initial gut feeling or decides to put more thought into it and revise it. Moreover, while the type of moral judgment (i.e., deontological vs. utilitarian) was not consistently predictive of metacognitive experience, the extremity of that judgment was: Extreme judgments (either deontological or utilitarian) were quicker and felt more right than moderate judgments.

From the General Discussion

Also consistent with Bago and De Neys’ findings (2018), these results show that few people revise their responses from one type of moral judgment to the other (i.e., from deontological to utilitarian, or vice-versa). Still, many people do revise their responses, though these are subtler revisions of extremity within one type of response. These results speak against the traditional corrective model, whereby people tend to change from deontological intuitions to utilitarian deliberations in the course of making moral judgments. At the same time, they suggest a more nuanced perspective than what one might conclude from Bago and De Neys’ results that few people revise their responses. In sum, few people make revisions in the kind of response they give, but many do revise the degree to which they defend a certain moral position.


Monday, February 17, 2020

Religion’s Impact on Conceptions of the Moral Domain

S. Levine and others
PsyArXiv Preprints
Last edited 2 Jan 20

Abstract

How does religious affiliation impact conceptions of the moral domain? Putting aside the question of whether people from different religions agree about how to answer moral questions, here we investigate a more fundamental question: How much disagreement is there across religions about which issues count as moral in the first place? That is, do people from different religions conceptualize the scope of morality differently? Using a new methodology to map out how individuals conceive of the moral domain, we find dramatic differences among adherents of different religions. Mormons and Muslims moralize their religious norms, while Jews do not. Hindus do not seem to make a moral/non-moral distinction at all. These results suggest that religious affiliation has a profound effect on conceptions of the scope of morality.

From the General Discussion:

The results of Studies 3 and 3a are predicted by neither Social Domain Theory nor Moral Foundations Theory: it is neither true that secular people and religious people share a common conception of the moral domain (as Social Domain Theory argues), nor that religious morality is expanded beyond secular morality in a uniform manner (as Moral Foundations Theory suggests). When participants in a group did make a moral/non-moral distinction, there was broad agreement that norms related to harm, justice, and rights count as moral norms. However, some religious individuals (such as the Mormon and Muslim participants) also moralized norms from their own religion that are not related to these themes. Meanwhile, others (such as the Jewish participants) acknowledged the special status of their own norms but did not moralize them. Yet others (such as the Hindu participants) made no distinction between the moral and the non-moral.
