Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, December 26, 2023

Who did it? Moral wrongness for us and them in the UK, US, and Brazil

Paulo Sérgio Boggio, et al. (2023) 
Philosophical Psychology
DOI: 10.1080/09515089.2023.2278637

Abstract

Morality has traditionally been described in terms of an impartial and objective “moral law”, and moral psychological research has largely followed in this vein, focusing on abstract moral judgments. But might our moral judgments be shaped not just by what the action is, but by who is doing it? We looked at ratings of moral wrongness, manipulating whether the person doing the action was a friend, a refugee, or a stranger. We looked at these ratings across various moral foundations, and conducted the study in Brazil, US, and UK samples. Our most robust and consistent findings are that purity violations were judged more harshly when committed by ingroup members and less harshly when committed by refugees in comparison to unspecified agents; that the difference between refugee and unspecified agents decays from liberals to conservatives (i.e., conservatives judge refugees more harshly than liberals do); and that Brazilian participants are harsher than the US and UK participants. Our results suggest that purity violations are judged differently according to who committed them and according to the political ideology of the judges. We discuss the findings in light of various theories of group dynamics, such as moral hypocrisy, moral disengagement, and the black sheep effect.


Here is my summary:

The study explores how moral judgments vary depending on both the agent committing the act and the nationality of the person making the judgment. The study's findings challenge the notion that moral judgments are universal and instead suggest that they are influenced by cultural and national factors.

The researchers investigated how participants from the UK, US, and Brazil judged moral violations committed by different agents: friends, strangers, refugees, and unspecified individuals. They found that purity violations were judged more harshly when committed by ingroup members (such as friends) and less harshly when committed by refugees, relative to unspecified agents. There were also significant cultural differences in the severity of judgments: Brazilian participants judged the violations more harshly than participants from the US and UK.

The study's findings suggest that moral judgments are not based simply on the severity of the act itself, but also on factors such as the relationship between the person judging and the agent committing the act, and the cultural background of the person making the judgment. These findings have implications for understanding cross-cultural moral conflicts and for developing more effective moral education programs.

Saturday, December 16, 2023

Older people are perceived as more moral than younger people: data from seven culturally diverse countries

Piotr Sorokowski, et al. (2023)
Ethics & Behavior
DOI: 10.1080/10508422.2023.2248327

Abstract

Given the adage “older and wiser,” it seems justified to assume that older people may be stereotyped as more moral than younger people. We aimed to study whether assessments of a person’s morality differ depending on their age. We asked 661 individuals from seven societies (Australians, Britons, Burusho of Pakistan, Canadians, Dani of Papua, New Zealanders, and Poles) whether younger (~20-year-old), middle-aged (~40-year-old), or older (~60-year-old) people were more likely to behave morally and have a sense of right and wrong. We observed that older people were perceived as more moral than younger people. The effect was particularly salient when comparing 20-year-olds to either 40- or 60-year-olds and was culturally universal, as we found it in both WEIRD (i.e. Western, Educated, Industrialized, Rich, Democratic) and non-WEIRD societies.


Here is my summary:

The researchers found that older people were rated as more moral than younger people, and this effect was particularly strong when comparing 20-year-olds to either 40- or 60-year-olds. The effect was also consistent across cultures, suggesting that it is a universal phenomenon.

The researchers suggest that there are a few possible explanations for this finding. One possibility is that older people are simply seen as having more life experience and wisdom, which are both associated with morality. Another possibility is that older people are more likely to conform to social norms, which are often seen as being moral. Finally, it is also possible that people simply have a positive bias towards older people, which leads them to perceive them as being more moral.

Whatever the explanation, the finding that older people are perceived as more moral than younger people has a number of implications. For example, it suggests that older people may be more likely to be trusted, respected, and seen as leaders. It also underscores how readily people form judgments of others based on age alone, the same process that underlies ageist prejudice.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.

Abstract

Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.


My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Tuesday, October 24, 2023

The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Telkamp, J.B., Anderson, M.H. 
J Bus Ethics 178, 961–976 (2022).

Abstract

Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI—and the AI itself—process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization’s use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person’s moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.

The article is paywalled; the link is above.

Here are some additional points:
  • The article raises important questions about the ethicality of AI systems. It is clear that there is no single, monolithic standard of morality that can be applied to AI systems. Instead, we need to consider a plurality of moral foundations when evaluating the ethicality of AI systems.
  • The article also highlights the challenges of assessing the ethicality of AI systems. It is difficult to measure the impact of AI systems on human well-being, and there is no single, objective way to determine whether an AI system is ethical. However, the article suggests that a pluralistic approach to ethical evaluation, which takes into account a variety of moral perspectives, is the best way to assess the ethicality of AI systems.
  • The article concludes by calling for more research on the implications of diverse human moral foundations for the ethicality of AI. This is an important area, and I hope to see more work on it in the future.

Monday, June 19, 2023

On the origin of laws by natural selection

DeScioli, P.
Evolution and Human Behavior
Volume 44, Issue 3, May 2023, Pages 195-209

Abstract

Humans are lawmakers like we are toolmakers. Why do humans make so many laws? Here we examine the structure of laws to look for clues about how humans use them in evolutionary competition. We will see that laws are messages with a distinct combination of ideas. Laws are similar to threats but critical differences show that they have a different function. Instead, the structure of laws matches moral rules, revealing that laws derive from moral judgment. Moral judgment evolved as a strategy for choosing sides in conflicts by impartial rules of action—rather than by hierarchy or faction. For this purpose, humans can create endless laws to govern nearly any action. However, as prolific lawmakers, humans produce a confusion of contradictory laws, giving rise to a perpetual battle to control the laws. To illustrate, we visit some of the major conflicts over laws of violence, property, sex, faction, and power.

(cut)

Moral rules are not for cooperation

We have briefly summarized the major divisions and operations of moral judgment. Why then did humans evolve such elaborate powers of the mind devoted to moral rules? What is all this rule making for?

One common opinion is that moral rules are for cooperation. That is, we make and enforce a moral code in order to cooperate more effectively with other people. Indeed, traditional theories beginning with Darwin assume that morality is the same as cooperation. These theories successfully explain many forms of cooperation, such as why humans and other animals care for offspring, trade favors, respect property, communicate honestly, and work together in groups. For instance, theories of reciprocity explain why humans keep records of other people’s deeds in the form of reputation, why we seek partners who are nice, kind, and generous, why we praise these virtues, and why we aspire to attain them.

However, if we look closely, these theories explain cooperation, not moral judgment. Cooperation pertains to our decisions to benefit or harm someone, whereas moral judgment pertains to our judgments of someone’s action as right or wrong. The difference is crucial because these mental faculties operate independently and they evolved separately. For instance, people can use moral judgment to cooperate but also to cheat, such as a thief who hides the theft because they judge it to be wrong, or a corrupt leader who invents a moral rule that forbids criticism of the leader. Likewise, people use moral judgment to benefit others but also to harm them, such as falsely accusing an enemy of murder to imprison them.

Regarding their evolutionary history, moral judgment is a recent adaptation while cooperation is ancient and widespread, some forms as old as the origins of life and multicellular organisms. Recalling our previous examples, social animals like gorillas, baboons, lions, and hyenas cooperate in numerous ways. They care for offspring, share food, respect property, work together in teams, form reputations, and judge others’ characters as nice or nasty. But these species do not communicate rules of action, nor do they learn, invent, and debate the rules. Like language, moral judgment most likely evolved recently in the human lineage, long after complex forms of cooperation.

From the Conclusion

Having anchored ourselves to concrete laws, we next asked: What are laws for? This is the central question for any mental power because it persists only by aiding an animal in evolutionary competition. In this search, we should not be deterred by the magnificent creativity and variety of laws. Some people suppose that natural selection could impart no more than a few fixed laws in the human mind, but there are no grounds for this supposition. Natural selection designed all life on Earth and its creativity exceeds our own. The mental adaptations of animals outperform our best computer programs on routine tasks such as locomotion and vision. Why suppose that human laws must be far simpler than, for instance, the flight controllers in the brain of a hummingbird? And there are obvious counterexamples. Language is a complex adaptation but this does not mean that humans speak just a few sentences. Tool use comes from mental adaptations including an intuitive theory of physics, and again these abilities do not limit but enable the enormous variety of tools.

Friday, June 2, 2023

Is it good to feel bad about littering? Conflict between moral beliefs and behaviors for everyday transgressions

Schwartz, Stephanie A. and Inbar, Yoel
SSRN.
Originally posted 22 June 2022

Abstract

People sometimes do things that they think are morally wrong. We investigate how actors’ perceptions of the morality of their own behaviors affect observers’ evaluations. In Study 1 (n = 302), we presented participants with six different descriptions of actors who routinely engaged in a morally questionable behavior and varied whether the actors thought the behavior was morally wrong. Actors who believed their behavior was wrong were seen as having better moral character, but their behavior was rated as more wrong. In Study 2 (n = 391) we investigated whether perceptions of actor metadesires were responsible for the effects of actor beliefs on judgments. We used the same stimuli and measures as in Study 1 but added a measure of the actor’s perceived desires to engage in the behaviors. As predicted, the effect of actors’ moral beliefs on judgments of their behavior and moral character was mediated by perceived metadesires.

General Discussion

In two studies, we find that actors’ beliefs about their own everyday immoral behaviors affect both how the acts and the actors are evaluated—albeit in opposite directions. An actor’s belief that his or her act is morally wrong causes observers to see the act itself as less morally acceptable, while, at the same time, it leads to more positive character judgments of the actor. In Study 2, we find that these differences in character judgments are mediated by people’s perceptions of the actor’s metadesires. Actors who see their behavior as morally wrong are presumed to have a desire not to engage in it, and this in turn leads to more positive evaluations of their character. These results suggest that one benefit of believing one’s own behavior to be immoral is that others—if they know this—will evaluate one’s character more positively.

(cut)

Honest Hypocrites 

In research on moral judgments of hypocrites, Jordan et al. (2017) found that people who publicly espouse a moral standard that they privately violate are judged particularly negatively. However, they also found that “honest hypocrites” (those who publicly condemn a behavior while admitting they engage in it themselves) are judged more positively than traditional hypocrites and equivalently to control transgressors (people who simply engage in the negative behavior without taking a public stand on its acceptability). This might seem to contradict our findings in the current studies, where people who transgressed despite thinking that the behavior was morally wrong were judged more positively than those who simply transgressed. We believe the key distinction that explains the difference between Jordan et al.’s results and ours is that in their paradigm, hypocrites publicly condemned others for engaging in the behavior in question. As Jordan et al. show, public condemnation is interpreted as a strong signal that someone is unlikely to engage in that behavior themselves; hypocrites therefore are disliked both for engaging in a negative behavior and for falsely signaling (by their public condemnation) that they wouldn’t. Honest hypocrites, who explicitly state that they engage in the negative behavior, are not falsely signaling. However, Jordan et al.’s scenarios imply to participants that honest hypocrites do condemn others—something that may strike people as unfair coming from a person who engages in the behavior themselves. Thus, honest hypocrites may be penalized for public condemnation, even as they are credited for more positive metadesires. In contrast, in our studies participants were told that the scenario protagonists thought the behavior was morally wrong but not that they publicly condemned anyone else for engaging in it. This may have allowed protagonists to benefit from more positive perceived metadesires without being penalized for public condemnation. This explanation is admittedly speculative but could be tested in future research that we outline below.


Suppose you do something bad. Will people blame you more if you knew it was wrong? Or will they blame you less?

The answer seems to be: They will think your act is more wrong, but your character is less bad.

Thursday, May 18, 2023

People Construe a Corporation as an Individual to Ascribe Responsibility in Cases of Corporate Wrongdoing

Sharma, N., Flores-Robles, G., & Gantman, A. P.
(2023, April 11). PsyArXiv

Abstract

In cases of corporate wrongdoing, it is difficult to assign blame across multiple agents who played different roles. We propose that people have dualist ideas of corporate hierarchies: with the boss as “the mind,” and the employee as “the body,” and the employee appears to carry out the will of the boss like the mind appears to will the body (Wegner, 2003). Consistent with this idea, three experiments showed that moral responsibility was significantly higher for the boss, unless the employee acted prior to, inconsistently with, or outside of the boss’s will. People even judge the actions of the employee as mechanistic (“like a billiard ball”) when their actions mirror the will of the boss. This suggests that the same features that tell us our minds cause our actions, also facilitate the sense that a boss has willed the behavior of an employee and is ultimately responsible for bad outcomes in the workplace.

From the General Discussion

Practical Implications

Our findings offer a number of practical implications for organizations. First, our research provides insight into how people currently make judgments of moral responsibility within an organization (and specifically, when a boss gives instructions to an employee). Second, our research provides insight into the decision-making process of whether to fire a boss-figure like a CEO (or other decision-maker) or invest in lasting change in organizational culture following an organizational wrongdoing. From a scapegoating perspective, replacing a CEO is not intended to produce lasting change in underlying organizational problems and signals a desire to maintain the status quo (Boeker, 1992; Shen & Cannella, 2002). Scapegoating may not always be in the best interest of investors. Previous research has shown that following financial misrepresentation, investors react positively only to CEO successions wherein the replacement comes from the outside, which serves as a costly signal of the firm’s understanding of the need for change (Gangloff et al., 2016). And so, by allocating responsibility to the CEO without creating meaningful change, organizations may lose investors. Finally, this research has implications for building public trust in organizations. Following the Wells Fargo scandal, two-thirds of Wells Fargo customers (65%) claimed they trusted their bank less, and about half of Wells Fargo customers (51%) were willing to switch to another bank if they perceived it to be more trustworthy (Business Wire, 2017). Thus, how organizations deal with wrongdoing (e.g., whether they fire individuals, create lasting change, or both) can influence public trust. If corporations want to build trust among the general public, and in doing so, create a larger customer base, they can look at how people understand and ascribe responsibility and consequently punish organizational wrongdoings.

Thursday, April 6, 2023

People recognize and condone their own morally motivated reasoning

Cusimano, C., & Lombrozo, T. (2023).
Cognition, 234, 105379.

Abstract

People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an “illusion of objectivity.” We identify an important domain of life in which people harbor little illusion about their biases – when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2–4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People's tendency to be aware and proud of their biases provides both new opportunities, and new challenges, for resolving ideological conflict and improving reasoning.

Highlights

• Dominant theories assume people form beliefs only under an illusion of objectivity.

• We document a boundary condition on this illusion: morally desirable biases.

• People endorse beliefs they regard as evidentially weak but morally desirable.

• People realize when they have just engaged in morally motivated reasoning.

• Accurate self-attributions of moral bias fully attenuate the ‘bias blind spot’.

From the General discussion

Our beliefs about our beliefs – including whether they are biased or justified – play a crucial role in guiding inquiry, shaping belief revision, and navigating disagreement. One line of research suggests that these judgments are almost universally characterized by an illusion of objectivity such that people consciously reason with the goal of being objective and basing their beliefs on evidence, and because of this, people nearly always assume that their current beliefs meet those standards. Another line of work suggests that people sometimes think that values legitimately bear on whether someone is justified to hold a belief (Cusimano & Lombrozo, 2021b). These findings raise the possibility, consistent with some prior theoretical proposals (Cusimano & Lombrozo, 2021a; Tetlock, 2002), that people will knowingly violate norms of impartiality, or knowingly maintain beliefs that lack evidential support, when doing so advances what they consider to be morally laudable goals. Two predictions follow. First, people should evaluate their beliefs in part based on their perceived moral value. And second, in situations in which people engage in morally motivated reasoning, they should recognize that they have done so and should evaluate their morally motivated reasoning as appropriate. We document support for these predictions across four studies (Table 1).

Conclusion

A great deal of work has assumed that people treat objectivity and evidence-based reasoning as cardinal norms governing their belief formation. This assumption has grown increasingly tenuous in light of recent work highlighting the importance of moral concerns in almost all facets of life. Consistent with this recent work, we find evidence that people’s evaluations of the moral quality of a proposition predict their subjective confidence that it is true, their likelihood of claiming that they believe it and know it, and the extent to which they take their belief to be justified. Moreover, people exhibit metacognitive awareness of this fact and approve of morality’s influence on their reasoning. People often want to be right, but they also want to be good – and they know it.

Tuesday, April 4, 2023

Chapter One - Moral inconsistency

Effron, D. A., & Helgason, B. A.
Advances in Experimental Social Psychology
Volume 67, 2023, Pages 1-72

Abstract

We review a program of research examining three questions. First, why is the morality of people's behavior inconsistent across time and situations? We point to people's ability to convince themselves they have a license to sin, and we demonstrate various ways people use their behavioral history and others—individuals, groups, and society—to feel licensed. Second, why are people's moral judgments of others' behavior inconsistent? We highlight three factors: motivation, imagination, and repetition. Third, when do people tolerate others who fail to practice what they preach? We argue that people only condemn others' inconsistency as hypocrisy if they think the others are enjoying an “undeserved moral benefit.” Altogether, this program of research suggests that people are surprisingly willing to enact and excuse inconsistency in their moral lives. We discuss how to reconcile this observation with the foundational social psychological principle that people hate inconsistency.

(cut)

The benefits of moral inconsistency

The present chapter has focused on the negative consequences of moral inconsistency. We have highlighted how the factors that promote moral inconsistency can allow people to lie, cheat, express prejudice, and reduce their condemnation of others' morally suspect behaviors ranging from leaving the scene of an accident to spreading fake news. At the same time, people's apparent proclivity for moral inconsistency is not all bad.

One reason is that, in situations that pit competing moral values against each other, moral inconsistency may be unavoidable. For example, when a friend asks whether you like her unflattering new haircut, you must either say no (which would be inconsistent with your usual kind behavior) or yes (which would be inconsistent with your usual honest behavior; Levine, Roberts, & Cohen, 2020). If you discover corruption in your workplace, you might need to choose between blowing the whistle (which would be inconsistent with your typically loyal behavior toward the company) or staying silent (which would be inconsistent with your typically fair behavior; Dungan, Waytz, & Young, 2015; Waytz, Dungan, & Young, 2013).

Another reason is that people who strive for perfect moral consistency may incur steep costs. They may be derogated and shunned by others, who feel threatened and judged by these “do-gooders” (Howe & Monin, 2017; Minson & Monin, 2012; Monin, Sawyer, & Marquez, 2008; O’Connor & Monin, 2016). Or they may sacrifice themselves and loved ones more than they can afford, like the young social worker who consistently donated to charity until she and her partner were living on 6% of their already-modest income, or the couple who, wanting to consistently help children in need of a home, adopted 22 kids (MacFarquhar, 2015). In short, we may enjoy greater popularity and an easier life if we allow ourselves at least some moral inconsistency.

Finally, moral inconsistency can sometimes benefit society. Evolving moral beliefs about smoking (Rozin, 1999; Rozin & Singh, 1999) have led to considerable public health benefits. Stalemates in partisan conflict are hard to break if both sides rigidly refuse to change their judgments and behavior surrounding potent moral issues (Brandt, Wetherell, & Crawford, 2016). Same-sex marriage, women's sexual liberation, and racial desegregation required inconsistency in how people treated actions that were once considered wrong. In this way, moral inconsistency may be necessary for moral progress.

Wednesday, January 25, 2023

Outcome effects, moral luck and the hindsight bias

M. Kneer & I. Skoczen
Cognition
Volume 232, March 2023, 105258

Abstract

In a series of ten preregistered experiments (N = 2043), we investigate the effect of outcome valence on judgments of probability, negligence, and culpability – a phenomenon sometimes labelled moral (and legal) luck. We found that harmful outcomes, when contrasted with neutral outcomes, lead to an increased perceived probability of harm ex post, and consequently, to a greater attribution of negligence and culpability. Rather than simply postulating hindsight bias (as is common), we employ a variety of empirical means to demonstrate that the outcome-driven asymmetry across perceived probabilities constitutes a systematic cognitive distortion. We then explore three distinct strategies to alleviate the hindsight bias and its downstream effects on mens rea and culpability ascriptions. Not all strategies are successful, but some prove very promising. They should, we argue, be considered in criminal jurisprudence, where distortions due to the hindsight bias are likely considerable and deeply disconcerting.

Highlights

• In a series of ten studies (N = 2043) we examine the relation between moral luck, negligence and probability

• Most people deem outcome irrelevant for ascriptions of negligence & blame in within-subjects (WS) studies, so there’s no “puzzle of moral luck”

• In between-subjects designs, the effect of luck on negligence and blame seems to be driven by the hindsight bias

• We examine three strategies to alleviate the hindsight bias on perceived probability, negligence and blame

• Two alleviation strategies significantly decrease the hindsight bias and could potentially be used in legal trials

Conclusion

In a series of experiments with 2043 participants, we explored the effect of outcome on judgments of subjective and objective probability, mens rea and culpability. For mens rea and blame attributions (though not for deserved punishment), the outcome effect constitutes a bias. The distorted assessment of mens rea and blame, we showed, is ultimately rooted in the hindsight bias: People tend to assess a potential harm as more likely when it does come to pass than when it does not; they therefore ascribe more negligence to the agent, and consequently consider him more culpable.

Echoing the literature from behavioral economics and legal psychology, we argued that the downstream effects of the hindsight bias constitute a serious threat to the just adjudication of legal trials, in particular in countries where mens rea is determined by lay juries (such as the US and the UK). And although it is well established that the hindsight bias is pervasive and difficult to overcome, we have shown that there are measures to reduce its impact. Among a series of different debiasing strategies we have put to the test, we showed that expert probability stabilizing (which, on occasion, is already in use in courts) and entertaining counterfactual outcomes hold considerable promise. We would strongly urge further research conducted jointly with legal practitioners that explores the most suitable ways of introducing (or further implementing) these techniques in the courtroom, so as to make the law more just and equal.

Saturday, December 24, 2022

How Stable are Moral Judgments?

Rehren, P., Sinnott-Armstrong, W.
Rev. Phil. Psych. (2022).
https://doi.org/10.1007/s13164-022-00649-7

Abstract

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. In this paper, we want to highlight one aspect that to date has been relatively neglected: the stability of moral judgment over time. After explaining why philosophers and psychologists should consider stability and then surveying previous research, we will present the results of an original three-wave longitudinal study. We asked participants to make judgments about the same acts in a series of sacrificial dilemmas three times, 6–8 days apart. In addition to investigating the stability of our participants’ ratings over time, we also explored some potential explanations for instability. To end, we will discuss these and other potential psychological sources of moral stability (or instability) and highlight possible philosophical implications of our findings.

From the General Discussion

We have argued that the stability of moral judgments over time is an important feature of moral cognition for philosophers and psychologists to consider. Next, we presented an original empirical study into the stability over 6–8 days of moral judgments about acts in sacrificial dilemmas. Like Helzer et al. (2017, Study 1), we found an overall test-retest correlation of 0.66. Moreover, we observed moderate to large proportions of rating shifts, and small to moderate proportions of rating revisions (M = 14%), rejections (M = 5%) and adoptions (M = 6%)—that is, the participants in question judged p in one wave, but did not judge p in the other wave.
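
To make these quantities concrete, here is a minimal sketch of how a wave-to-wave test-retest correlation and the shares of changed ratings might be computed. The data are simulated; the 7-point scale, the sample size, and the operationalizations of “rejection” and “adoption” are illustrative assumptions, not the authors' definitions.

```python
# Illustrative only: simulated two-wave ratings, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
n = 455                                    # hypothetical sample size
wave1 = rng.integers(1, 8, n)              # 7-point moral wrongness ratings
drift = rng.integers(-1, 2, n)             # small week-to-week change
wave2 = np.clip(wave1 + drift, 1, 7)

r = np.corrcoef(wave1, wave2)[0, 1]        # test-retest correlation
shifted = np.mean(wave1 != wave2)          # any change in rating
# One possible operationalization: a "rejection" moves from agreement
# (rating > 4) to non-agreement; an "adoption" moves the other way.
rejections = np.mean((wave1 > 4) & (wave2 <= 4))
adoptions = np.mean((wave1 <= 4) & (wave2 > 4))
print(f"r = {r:.2f}, shifted = {shifted:.0%}, "
      f"rejections = {rejections:.0%}, adoptions = {adoptions:.0%}")
```

The point of separating these measures is that a respectable correlation (like the 0.66 reported above) can coexist with a sizable fraction of individual judgments changing category from one week to the next.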

What Explains Instability?

One potential explanation of our results is that they are not a genuine feature of moral judgments about sacrificial dilemmas, but instead are due to measurement error. Measurement error is the difference between the observed and the true value of a variable. So, it may be that most of the rating changes we observed do not mean that many real-life moral judgments about acts in sacrificial dilemmas are (or would be) unstable over short periods of time. Instead, it may be that when people make moral judgments about sacrificial dilemmas in real life, their judgments remain very stable from one week to the next, but our study (perhaps any study) was not able to capture this stability.

To the extent that real-life moral judgment is what moral psychologists and philosophers are interested in, this may suggest a problem with the type of study design used in this and many other papers. If there is enough measurement error, then it may be very difficult to draw firm conclusions about real-life moral judgments from this research. Other researchers have raised related objections. Most forcefully, Bauman et al. (2014) have argued that participants often do not take the judgment tasks used by moral psychologists seriously enough for them to engage with these tasks in anything like the way they would if they came across the same tasks in the real world (also see Ryazanov et al., 2018). In our view, moral psychologists would do well to more frequently move their studies outside of the (online) lab and into the real world (e.g., Bollich et al., 2016; Hofmann et al., 2014).

(cut)

Instead, our findings may tell us something about a genuine feature of real-life moral judgment. If so, then a natural question to ask is what makes moral judgments unstable (or stable) over time. In this paper, we have looked at three possible explanations, but we did not find evidence for them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from instruments involving other moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around. Third, Study 1b did not find evidence that rating changes, when they occurred, were often due to changes in light of reasons and reflection. Note that this does not mean that we can rule out any of these potential explanations for unstable moral judgments completely. As we point out below, our research is limited in the extent to which it could test each of these explanations, and so one or more of them may still have been the cause for some proportion of the rating changes we observed.

Friday, September 9, 2022

Online Moral Conformity: How Powerful is a Group of Online Strangers When Influencing an Individual’s Moral Judgments?

Paruzel-Czachura, M., Wojciechowska, D., 
& Bostyn, D. H. (2022, May 21). 
https://doi.org/10.31234/osf.io/4g2bn

Abstract

People make moral decisions every day, and when making them, they may be influenced by their companions (the so-called moral conformity effect). Nowadays, people make many decisions in online environments like video meetings. In the current preregistered experiment, we studied the online moral conformity effect. We applied an Asch conformity paradigm in an online context by asking participants (N = 120) to reply to sacrificial moral dilemmas through the online video communication tool Zoom when sitting in the “virtual” room with strangers (confederates instructed on how to answer; experimental condition) or when sitting alone (control condition). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in the aggregate.

Discussion       

Social conformity is a well-known phenomenon (Asch, 1951, 1952, 1955, 1956; Sunstein, 2019). Moreover, past research has demonstrated that conformity effects occur for moral issues as well (Aramovich et al., 2012; Bostyn & Roets, 2017; Crutchfield, 1955; Kelly et al., 2017; Kundu & Cummins, 2013; Lisciandra et al., 2013). However, the extent to which moral conformity occurs when people interact in digital spaces, such as video conferencing software, has not yet been investigated.

We conducted a well-powered experimental study to determine if the effect of online moral conformity exists. Two study conditions were used: an experimental one in which study participants were answering along with a group of confederates and a control condition in which study participants were answering individually. In both conditions, participants were invited to a video meeting and asked to orally respond to a set of moral dilemmas with their cameras turned on. All questions and study conditions were the same, apart from the presence of other people in the experimental condition. In the experimental condition, importantly, the experimenter pretended that all people were study participants, but in fact, only the last person was an actual study participant, and all four other participants were confederates who were trained to answer in a specific manner. Confederates answered contrary to what most people had decided in past studies (Gawronski et al., 2017; Greene et al., 2008; Körner et al., 2020). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in aggregate.

Monday, May 16, 2022

Exploring the Association between Character Strengths and Moral Functioning

Han, H., Dawson, K. J., et al. 
(2022, April 6). PsyArXiv
https://doi.org/10.1080/10508422.2022.2063867

Abstract

We explored the relationship between 24 character strengths measured by the Global Assessment of Character Strengths (GACS), which was revised from the original VIA instrument, and moral functioning comprising postconventional moral reasoning, empathic traits and moral identity. Bayesian Model Averaging (BMA) was employed to explore the best models, which were more parsimonious than full regression models estimated through frequentist regression, predicting moral functioning indicators with the 24 candidate character strength predictors. Our exploration was conducted with a dataset collected from 666 college students at a public university in the Southern United States. Results showed that character strengths as measured by GACS partially predicted relevant moral functioning indicators. Performance evaluation results demonstrated that the best models identified by BMA performed significantly better than the full models estimated by frequentist regression in terms of AIC, BIC, and cross-validation accuracy. We discuss theoretical and methodological implications of the findings for future studies addressing character strengths and moral functioning.

From the Discussion

Although postconventional reasoning was relatively weakly associated with character strengths, several character strengths were still significantly associated with it. We were able to discover its association with several character strengths, particularly those within the domain of intellectual ability. One possible explanation is that intellectual strengths enable people to evaluate moral issues from diverse perspectives and appreciate moral values and principles beyond existing conventions and norms (Kohlberg, 1968). Having such intellectual strengths can thus allow them to engage in sophisticated moral reasoning. For instance, wisdom, judgment, and curiosity demonstrated positive correlations with postconventional reasoning, as Han (2019) proposed. Another possible explanation is that the DIT focuses on hypothetical, abstract moral reasoning, instead of decision making in concrete situations (Rest et al., 1999b). Therefore, the emergence of a positive association between intellectual strengths and postconventional moral reasoning in the current study is plausible.

The trend of positive relationships between character strengths and moral functioning indicators was also reported from the best-model exploration through BMA. First, postconventional reasoning was best predicted by the intellectual strengths of curiosity and wisdom, plus kindness. Second, empathic concern (EC) was positively predicted by love, kindness, and gratitude. Third, perspective taking (PT) was positively associated with wisdom and gratitude in the best model. Fourth, moral internalization was positively predicted by kindness and gratitude.
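
For readers curious about the mechanics, a minimal sketch of BIC-based Bayesian Model Averaging over candidate predictor subsets appears below. It is a generic reconstruction, not the authors' code: the predictor names, the simulated data, and the exp(-BIC/2) weighting (which approximates posterior model probabilities under equal model priors) are all illustrative assumptions.

```python
# Illustrative only: simulated data and made-up predictor names, not the
# study's GACS dataset or the authors' analysis code.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bma_weights(df, outcome, candidates, max_size=3):
    """Fit every small predictor subset; weight models by exp(-BIC/2)."""
    models = []
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(candidates, k):
            X = sm.add_constant(df[list(subset)])
            models.append((subset, sm.OLS(df[outcome], X).fit().bic))
    bics = np.array([bic for _, bic in models])
    w = np.exp(-(bics - bics.min()) / 2)   # approx. posterior model probs
    w /= w.sum()                           # (assumes equal model priors)
    return sorted(zip([s for s, _ in models], w), key=lambda t: -t[1])

# Simulated stand-in: postconventional reasoning driven by two strengths.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(666, 4)),
                  columns=["wisdom", "curiosity", "kindness", "gratitude"])
df["postconventional"] = (0.4 * df["wisdom"] + 0.3 * df["curiosity"]
                          + rng.normal(size=666))
print(bma_weights(df, "postconventional",
                  ["wisdom", "curiosity", "kindness", "gratitude"])[:3])
```

Because the weights concentrate on parsimonious subsets that fit nearly as well as the full model, this kind of procedure tends to select smaller models than frequentist full-model regression, which matches the pattern the authors report.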

Saturday, April 16, 2022

Morality, punishment, and revealing other people’s secrets.

Salerno, J. M., & Slepian, M. L. (2022).
Journal of Personality & Social Psychology, 
122(4), 606–633. 
https://doi.org/10.1037/pspa0000284

Abstract

Nine studies represent the first investigation into when and why people reveal other people’s secrets. Although people keep their own immoral secrets to avoid being punished, we propose that people will be motivated to reveal others’ secrets to punish them for immoral acts. Experimental and correlational methods converge on the finding that people are more likely to reveal secrets that violate their own moral values. Participants were more willing to reveal immoral secrets as a form of punishment, and this was explained by feelings of moral outrage. Using hypothetical scenarios (Studies 1, 3–6), two controversial events in the news (hackers leaking citizens’ private information; Study 2a–2b), and participants’ behavioral choices to keep or reveal thousands of diverse secrets that they learned in their everyday lives (Studies 7–8), we present the first glimpse into when, how often, and one explanation for why people reveal others’ secrets. We found that theories of self-disclosure do not generalize to others’ secrets: Across diverse methodologies, including real decisions to reveal others’ secrets in everyday life, people reveal others’ secrets as punishment in response to moral outrage elicited from others’ secrets.

From the Discussion

Our data serve as a warning flag: one should be aware of a potential confidant’s views with regard to the morality of the behavior. Across 14 studies (Studies 1–8; Supplemental Studies S1–S5), we found that people are more likely to reveal other people’s secrets to the degree that they, personally, view the secret act as immoral. Emotional reactions to the immoral secrets explained this effect, such as moral outrage as well as anger and disgust, which were associated correlationally and experimentally with revealing the secret as a form of punishment. People were significantly more likely to reveal the same secret if the behavior was done intentionally (vs. unintentionally), if it had gone unpunished (vs. already punished by someone else), and in the context of a moral framing (vs. no moral framing). These experiments suggest a causal role for both the degree to which the secret behavior is immoral and the participants’ desire to see the behavior punished. Additionally, we found that this psychological process did not generalize to non-secret information. Although people were more likely to reveal both secret and non-secret information when they perceived it to be more immoral, they did so for different reasons: as an appropriate punishment for the immoral secrets, and as interesting fodder for gossip for the immoral non-secrets.

Friday, February 18, 2022

Measuring Impartial Beneficence: A Kantian Perspective on the Oxford Utilitarianism Scale

Mihailov, E. (2022). 
Rev. Phil. Psych.
https://doi.org/10.1007/s13164-021-00600-2

Abstract

To capture genuine utilitarian tendencies, Kahane et al. (Psychological Review 125:131, 2018) developed the Oxford Utilitarianism Scale (OUS) based on two subscales, which measure the commitment to impartial beneficence and the willingness to cause harm for the greater good. In this article, I argue that the impartial beneficence subscale, which breaks ground with previous research on utilitarian moral psychology, does not distinctively measure utilitarian moral judgment. I argue that Kantian ethics captures the all-encompassing impartial concern for the well-being of all human beings. The Oxford Utilitarianism Scale draws, in fact, a point of division that places Kantian and utilitarian theories on the same track. I suggest that the impartial beneficence subscale needs to be significantly revised in order to capture distinctively utilitarian judgments. Additionally, I propose that psychological research should focus on exploring multiple sources of the phenomenon of impartial beneficence without categorizing it as exclusively utilitarian.

Conclusion

The narrow focus of psychological research on sacrificial harm contributes to a Machiavellian picture of utilitarianism. By developing the Oxford Utilitarianism Scale, Kahane and his colleagues have shown how important it is for the study of moral judgment to include the inspiring ideal of impartial concern. However, this significant contribution goes beyond the utilitarian/deontological divide. We learn to divide moral theories depending on whether they are, at the root, either Kantian or utilitarian. Kant famously denounced lying, even if it would save someone’s life, whereas utilitarianism accepts transgression of moral rules if it maximizes the greater good. However, in regard to promoting the ideal of impartial beneficence, Kantian ethics and utilitarianism overlap because both theories contributed to the Enlightenment project of moral reform. In Kantian ethics, the very concepts of duty and moral community are interpreted in radically impartial and cosmopolitan terms. Thus, a fruitful area for future research opens on exploring the diverse psychological sources of impartial beneficence.

Wednesday, February 2, 2022

Psychopathy and Moral-Dilemma Judgment: An Analysis Using the Four-Factor Model of Psychopathy and the CNI Model of Moral Decision-Making

Luke, D. M., Neumann, C. S., & Gawronski, B.
(2021). Clinical Psychological Science. 
https://doi.org/10.1177/21677026211043862

Abstract

A major question in clinical and moral psychology concerns the nature of the commonly presumed association between psychopathy and moral judgment. In the current preregistered study (N = 443), we aimed to address this question by examining the relation between psychopathy and responses to moral dilemmas pitting consequences for the greater good against adherence to moral norms. To provide more nuanced insights, we measured four distinct facets of psychopathy and used the CNI model to quantify sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction over action (I) in responses to moral dilemmas. Psychopathy was associated with a weaker sensitivity to moral norms, which showed unique links to the interpersonal and affective facets of psychopathy. Psychopathy did not show reliable associations with either sensitivity to consequences or general preference for inaction over action. Implications of these findings for clinical and moral psychology are discussed.

From the Discussion

In support of our hypotheses, general psychopathy scores and a superordinate latent variable (representing the broad syndrome of psychopathy) showed significant negative relations with sensitivity to moral norms, which suggests that people with elevated psychopathic traits were less sensitive to moral norms in their responses to moral dilemmas in comparison with other people. Further analyses at the facet level suggested that sensitivity to moral norms was uniquely associated with the interpersonal-affective facets of psychopathy. Both of these findings persisted when controlling for gender. As predicted, the antisocial facet showed a negative zero-order correlation with sensitivity to moral norms, but this association fell to nonsignificance when controlling for other facets of psychopathy and gender. At the manifest variable level, neither general psychopathy scores nor the four facets showed reliable relations with either sensitivity to consequences or general preference for inaction over action.

(cut)

More broadly, the current findings have important implications for both clinical and moral psychology. For clinical psychology, our findings speak to ongoing questions about whether people with elevated levels of psychopathy exhibit disturbances in moral judgment. In a recent review of the literature on psychopathy and moral judgment, Larsen et al. (2020) claimed there was “no consistent, well-replicated evidence of observable deficits in . . . moral judgment” (p. 305). However, a notable limitation of this review is that its analysis of moral-dilemma research focused exclusively on studies that used the traditional approach. Consistent with past research using the CNI model (e.g., Gawronski et al., 2017; Körner et al., 2020; Luke & Gawronski, 2021a) and in contrast to Larsen et al.’s conclusion, the current findings indicate substantial deviations in moral-dilemma judgments among people with elevated psychopathic traits, particularly conformity to moral norms.
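
For context on what the C, N, and I parameters mean in practice, below is a small sketch of the CNI processing-tree arithmetic as I understand it from the published CNI literature; the parameterization is my reconstruction, not code from this study, and the parameter values are arbitrary.

```python
# My reconstruction of the CNI processing tree (after Gawronski et al., 2017):
# with probability C the response is driven by consequences; failing that,
# with probability N it is driven by the moral norm; failing both, the
# respondent acts with probability 1 - I (I = general inaction preference).
def p_action(C, N, I, norm_prohibits_action, benefits_favor_action):
    consequence_response = 1.0 if benefits_favor_action else 0.0
    norm_response = 0.0 if norm_prohibits_action else 1.0
    baseline_response = 1.0 - I
    return (C * consequence_response
            + (1 - C) * N * norm_response
            + (1 - C) * (1 - N) * baseline_response)

# A respondent with low norm sensitivity (as the paper reports for elevated
# psychopathic traits) acts more often when the norm prohibits action:
for n_param in (0.9, 0.3):
    p = p_action(C=0.2, N=n_param, I=0.5,
                 norm_prohibits_action=True, benefits_favor_action=True)
    print(f"N = {n_param}: p(action) = {p:.2f}")
# N = 0.9 -> 0.24; N = 0.3 -> 0.48
```

Because the model writes response probabilities for all four dilemma types in terms of the same three parameters, a lower fitted N shows up as weaker norm adherence independent of any change in C or I, which is the dissociation the study reports.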

Monday, January 17, 2022

Social threat indirectly increases moral condemnation via thwarting fundamental social needs

Henderson, R.K., Schnall, S.
Sci Rep 11, 21709 (2021).

Abstract

Individuals who experience threats to their social needs may attempt to avert further harm by condemning wrongdoers more severely. Three pre-registered studies tested whether threatened social esteem is associated with increased moral condemnation. In Study 1 (N = 381) participants played a game in which they were socially included or excluded and then evaluated the actions of moral wrongdoers. We observed an indirect effect: Exclusion increased social needs-threat, which in turn increased moral condemnation. Study 2 (N = 428) was a direct replication, and also showed this indirect effect. Both studies demonstrated the effect across five moral foundations; it was most pronounced for harm violations. Study 3 (N = 102) examined dispositional concerns about social needs threat, namely social anxiety, and showed a positive correlation between this trait and moral judgments. Overall, results suggest threatened social standing is linked to moral condemnation, presumably because moral wrongdoers pose a further threat when one’s ability to cope is already compromised.

From the General Discussion

These findings indicating that social threat is associated with harsher moral judgments suggest that various threats to survival can influence assessments of moral wrongdoing. Indeed, it has been proposed that the reason social exclusion reliably results in negative emotions is because social disconnectedness has been detrimental throughout human societies. As we found in Studies 1 and 2, and consistent with prior research, even brief exclusion via a simulated computer game can thwart fundamental social needs. Taken together, these experimental and correlational findings suggest that an elevated sense of danger appears to fortify moral judgment, because when safety is compromised, wrongdoers represent yet another source of potential danger. As a consequence, vulnerable individuals may be motivated to condemn moral violations more harshly. Interestingly, the null finding for loneliness suggests that amplified moral condemnation is not associated with having no social connections in the first place, but rather, with the existence or prospect of social threat. Relatedly, prior research has shown that greater cortisol release is associated with social anxiety but not with loneliness, indicating that the body’s stress response does not react to loneliness in the same way as it does to social threat.
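
Indirect effects like the one reported above are commonly tested with a percentile bootstrap of the a*b product from two regressions; here is a minimal sketch on simulated data (not the study's), with the effect sizes and path names chosen arbitrarily for illustration.

```python
# Illustrative bootstrap test of an indirect effect
# (exclusion -> social needs-threat -> moral condemnation); simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 381                                          # Study 1's N, for flavor
exclusion = rng.integers(0, 2, n).astype(float)  # 0 = included, 1 = excluded
needs_threat = 0.5 * exclusion + rng.normal(size=n)      # mediator
condemnation = 0.6 * needs_threat + rng.normal(size=n)   # outcome

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]    # path a: x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b                                         # path b given x

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)   # resample participants with replacement
    boot.append(indirect_effect(exclusion[idx], needs_threat[idx],
                                condemnation[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for a*b: [{lo:.3f}, {hi:.3f}]")  # excludes zero here
```

A confidence interval for a*b that excludes zero is the usual criterion for claiming mediation in designs like this one.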

Monday, January 10, 2022

Sequential decision-making impacts moral judgment: How iterative dilemmas can expand our perspective on sacrificial harm

D.H. Bostyn and A.Roets
Journal of Experimental Social Psychology
Volume 98, January 2022, 104244

Abstract

When are sacrificial harms morally appropriate? Traditionally, research within moral psychology has investigated this issue by asking participants to render moral judgments on batteries of single-shot, sacrificial dilemmas. Each of these dilemmas has its own set of targets and describes a situation independent from those described in the other dilemmas. Every decision that participants are asked to make thus takes place within its own, separate moral universe. As a result, people's moral judgments can only be influenced by what happens within that specific dilemma situation. This research methodology ignores that moral judgments are interdependent and that people might try to balance multiple moral concerns across multiple decisions. In the present series of studies we present participants with iterative versions of sacrificial dilemmas that involve the same set of targets across multiple iterations. Using this novel approach, and across five preregistered studies (total n = 1890), we provide clear evidence that a) responding to dilemmas in a sequential, iterative manner impacts the type of moral judgments that participants favor and b) that participants' moral judgments are not only motivated by the desire to refrain from harming others (usually labelled as deontological judgment), or a desire to minimize harms (utilitarian judgment), but also by a desire to spread out harm across all possible targets.

Highlights

• Research on sacrificial harm usually asks participants to judge single-shot dilemmas.

• We investigate sacrificial moral dilemma judgment in an iterative context.

• Sequential decision making impacts moral preferences.

• Many participants express a non-utilitarian concern for the overall spread of harm.


Moral deliberation in iterative contexts

The iterative lens we have adopted prompts some intriguing questions about the nature of moral deliberation in the context of sacrificial harm. Existing theoretical models of sacrificial harm can be described as ‘competition models’ (for instance, Conway & Gawronski, 2013; Gawronski et al., 2017; Greene et al., 2001, 2004; Hennig & Hütter, 2020). These models argue that opposing psychological processes compete to deliver a specific moral judgment and that whichever process wins out determines the nature of that judgment. As such, these models presume that moral deliberation amounts to deciding, in a mutually exclusive manner, whether to refrain from harm or to minimize it. Even if participants are tempted by both options, their judgment eventually settles wholly on one or the other. This is sensible in the context of non-iterative dilemmas, in which outcomes hinge on a single decision, but is it equally sensible in iterative contexts?
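
To make the all-or-nothing character of competition models concrete, here is a toy accumulator-race sketch in Python. It is our illustration of the general idea only, not an implementation of any of the cited models: two noisy processes accumulate evidence, and whichever reaches threshold first wholly determines the judgment.

    # Toy 'competition model' sketch (our illustration, not any cited model):
    # two processes -- 'refrain from harm' vs. 'minimize harm' -- accumulate
    # noisy evidence, and whichever hits threshold first dictates the judgment.
    import random

    def competing_judgment(threshold=10.0, deont_drift=0.5, util_drift=0.5, seed=None):
        rng = random.Random(seed)
        deont, util = 0.0, 0.0
        while deont < threshold and util < threshold:
            deont += deont_drift + rng.gauss(0, 1)  # evidence for refraining from harm
            util += util_drift + rng.gauss(0, 1)    # evidence for minimizing harm
        return "refrain from harm" if deont >= threshold else "minimize harm"

    print(competing_judgment(seed=42))

Note that such a race can only ever output one option or the other, which is precisely why the authors argue it cannot accommodate responses that blend multiple moral concerns.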

Consider the results of Study 4. In this study, we asked (a subset of) participants how many shocks they would divert out of a total of six. Interestingly, 32% of these participants decided to divert a single shock (see Fig. 6), thus shocking the individual once and the group five times. How should such a decision be interpreted? These participants did not fully refrain from harming others, nor did they fully minimize harm, nor did they spread harm in the most balanced way. Such responses seem to straddle different moral concerns. While future research will need to corroborate these findings, we suggest that responses of this kind cannot be explained by competition models; they necessitate theoretical models that explicitly take into account that participants may strive to strike an (idiosyncratic) pluralistic balance between multiple moral concerns.
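
The choice space can be enumerated directly. The sketch below reconstructs the diversion options from the description above: each of six shocks hits a group unless diverted to a single individual. The five-person group size is our assumption for illustration, not a figure taken from the paper.

    # Toy enumeration of the choice space in the iterative shock dilemma.
    # Reconstruction for illustration: each of 6 shocks hits a group unless
    # diverted to one individual; GROUP_SIZE = 5 is an assumption, not a
    # figure taken from the paper.
    GROUP_SIZE = 5
    N_TRIALS = 6

    for diverted in range(N_TRIALS + 1):
        group_shocks = N_TRIALS - diverted
        total_harm = diverted + group_shocks * GROUP_SIZE  # person-shocks in total
        if diverted == 0:
            label = "refrain from harming (deontological)"
        elif diverted == N_TRIALS:
            label = "minimize harm (utilitarian)"
        else:
            label = "spread harm (pluralistic balance)"
        print(f"divert {diverted}/{N_TRIALS}: individual shocked {diverted}x, "
              f"group shocked {group_shocks}x, total harm {total_harm:2d} -> {label}")

Under these assumptions, the "divert 1 of 6" response matches the 32% pattern described above: it neither refrains from harm nor minimizes it, landing instead at an intermediate point in the choice space.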

Friday, December 10, 2021

How social relationships shape moral wrongness judgments

Earp, B.D., McLoughlin, K.L., Monrad, J.T. et al. 
Nat Commun 12, 5776 (2021).

Abstract

Judgments of whether an action is morally wrong depend on who is involved and the nature of their relationship. But how, when, and why social relationships shape moral judgments is not well understood. We provide evidence to address these questions, measuring cooperative expectations and moral wrongness judgments in the context of common social relationships such as romantic partners, housemates, and siblings. In a pre-registered study of 423 U.S. participants nationally representative for age, race, and gender, we show that people normatively expect different relationships to serve cooperative functions of care, hierarchy, reciprocity, and mating to varying degrees. In a second pre-registered study of 1,320 U.S. participants, these relationship-specific cooperative expectations (i.e., relational norms) enable highly precise out-of-sample predictions about the perceived moral wrongness of actions in the context of particular relationships. In this work, we show that this ‘relational norms’ model better predicts patterns of moral wrongness judgments across relationships than alternative models based on genetic relatedness, social closeness, or interdependence, demonstrating how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.
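
As a schematic of what "out-of-sample prediction" means here, the sketch below fits a linear model of wrongness ratings on the four cooperative-function expectations and predicts each held-out relationship. All relationship profiles and numbers are invented for illustration; the authors' actual model, measures, and data differ.

    # Schematic of the relational-norms prediction idea (invented toy numbers,
    # not the authors' data or model): predict how wrong an action seems in a
    # relationship from how strongly that relationship is expected to serve
    # care, hierarchy, reciprocity, and mating functions.
    import numpy as np

    relationships = ["romantic partner", "sibling", "friend",
                     "housemate", "coworker", "stranger"]
    # Columns: care, hierarchy, reciprocity, mating (hypothetical 0-1 expectations).
    norms = np.array([[0.9, 0.2, 0.6, 0.9],
                      [0.8, 0.3, 0.5, 0.0],
                      [0.7, 0.1, 0.7, 0.0],
                      [0.4, 0.2, 0.8, 0.0],
                      [0.3, 0.5, 0.8, 0.0],
                      [0.1, 0.1, 0.6, 0.0]])
    # Hypothetical wrongness (1-7) of a care violation in each relationship.
    wrongness = np.array([6.5, 6.0, 5.5, 3.8, 3.0, 2.0])

    # Leave-one-out: fit on five relationships, predict the held-out one.
    for held_out, name in enumerate(relationships):
        train = [i for i in range(len(relationships)) if i != held_out]
        X = np.column_stack([np.ones(len(train)), norms[train]])
        coef, *_ = np.linalg.lstsq(X, wrongness[train], rcond=None)
        pred = np.concatenate([[1.0], norms[held_out]]) @ coef
        print(f"{name:>16}: predicted {pred:5.2f}, observed {wrongness[held_out]:.2f}")

The design choice mirrors the paper's logic: if relational norms carry the relevant information, a model trained on some relationships should predict wrongness judgments for relationships it has never seen.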

From the General Discussion

From a theoretical perspective, one aspect of our current account that requires further attention is the reciprocity function. In contrast with the other three functions considered, relationship-specific prescriptions for reciprocity did not significantly predict moral judgments of reciprocity violations. Why might this be so? One possibility is that the model we tested did not distinguish between two different types of reciprocity. In some relationships, such as those between strangers, acquaintances, or individuals doing business with one another, each party tracks the specific benefits contributed to, and received from, the other. In these relationships, reciprocity takes a tit-for-tat form in which benefits are offered and accepted on a highly contingent basis. This type of reciprocity is transactional: resources are provided not in response to a real or perceived need on the part of the other, but in response to the past or expected future provision of a similarly valued resource from the cooperation partner. It thus relies on an explicit accounting of who owes what to whom and is characteristic of so-called “exchange” relationships.

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work on moral judgments in relational context should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.
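
The contrast between the two decision rules can be summarized in a minimal sketch; the function names and numbers are hypothetical illustrations of the distinction the authors propose, not code from the paper.

    # Minimal sketch (hypothetical names) contrasting the two reciprocity
    # types described above; the paper proposes the distinction, not this code.

    def exchange_gives(ledger_balance: int) -> bool:
        """Tit-for-tat (exchange) reciprocity: provide a benefit only on a
        contingent basis -- here, only if the partner is not in my debt.
        ledger_balance = benefits received minus benefits given."""
        return ledger_balance >= 0

    def communal_gives(partner_need: float, my_ability: float) -> float:
        """Communal reciprocity: respond to the partner's need to the best
        of my ability, regardless of who 'owes' whom."""
        return min(partner_need, my_ability)

    # An exchange partner withholds help when owed; a communal partner
    # helps to capacity whenever a need arises.
    print(exchange_gives(ledger_balance=-2))                  # False: "you owe me"
    print(communal_gives(partner_need=3.0, my_ability=2.0))   # 2.0: helps to capacity

The key difference is what each rule tracks: a running ledger of benefits in the exchange case versus the partner's current need in the communal case.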

Saturday, December 4, 2021

Virtuous Victims

Jordan, Jillian J., and Maryam Kouchaki
Science Advances 7, no. 42 (October 15, 2021).

Abstract

How do people perceive the moral character of victims? We find, across a range of transgressions, that people frequently see victims of wrongdoing as more moral than nonvictims who have behaved identically. Across 17 experiments (total n = 9676), we document this Virtuous Victim effect and explore the mechanisms underlying it. We also find support for the Justice Restoration Hypothesis, which proposes that people see victims as moral because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these “justice-restorative” actions. Our results validate predictions of this hypothesis and suggest that the Virtuous Victim effect does not merely reflect (i) that victims look good in contrast to perpetrators, (ii) that people are generally inclined to positively evaluate those who have suffered, or (iii) that people hold a genuine belief that victims tend to be people who behave morally.

Discussion

Across 17 experiments (total n = 9676), we have documented and explored the Virtuous Victim effect. We find that victims are frequently seen as more virtuous than nonvictims—not because of their own behavior, but because others have mistreated them. We observe this effect across a range of moral transgressions and find evidence that it is not moderated by the victim’s (white versus black) race or gender. Humans ubiquitously—and perhaps increasingly (1, 2)—encounter narratives about immoral acts and their victims. By demonstrating that these narratives have the power to confer moral status, our results shed new light on the ways that victims are perceived by society.

We have also explored the boundaries of the Virtuous Victim effect and illuminated the mechanisms that underlie it. For example, we find that the Virtuous Victim effect may be especially likely to flow from victim narratives that describe a transgression’s perpetrator and are presented by a third-person narrator (or perhaps, more generally, a narrator who is unlikely to be doubted). We also find that the effect is specific to victims of immorality (i.e., it does not extend to accident victims) and to moral virtue (i.e., it does not extend equally to positive but nonmoral traits). Furthermore, the effect shapes perceptions of moral character but not predictions about moral behavior.

We have also evaluated several potential explanations for the Virtuous Victim effect. Ultimately, our results provide evidence for the Justice Restoration Hypothesis, which proposes that people see victims as virtuous because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these justice-restorative actions.