Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, August 31, 2023

It’s not only political conservatives who worry about moral purity

K. Gray, W. Blakey, & N. DiMaggio
Originally posted 13 July 2023

Here are two excerpts:

What does this have to do with differences in moral psychology? Well, moral psychologists have suggested that politically charged arguments about sexuality, spirituality and other subjects reflect deep differences in the moral values of liberals and conservatives. Research involving scenarios like this one has seemed to indicate that conservatives, unlike liberals, think that maintaining ‘purity’ is a moral good in itself – which for them might mean supporting what they construe as the ‘sanctity of marriage’, for example.

It may seem strange to think about ‘purity’ as a core driver of political differences. But purity, in the moral sense, is an old concept. It pops up in the Hebrew Bible a lot, in taboos around food, menstruation, and divine encounters. When Moses meets God at the Burning Bush, God says to Moses: ‘Do not come any closer, take off your sandals, for the place where you are standing is holy ground.’ Why does God tell Moses to take off his shoes? Not because his shoes magically hurt God, but because shoes are dirty, and it’s disrespectful to wear your shoes in the presence of the creator of the universe. Similarly, in ancient Greece, worshippers were often required to endure long purification rituals before looking at sacred religious idols or engaging in different spiritual rites. These ancient moral practices seem to reflect an intuition that ‘cleanliness is next to Godliness’.

In the modern era, purity has repeatedly appeared at the centre of political battlegrounds, as in clashes between US conservatives and liberals over sexual education and mores in the 1990s. It was around this time that the psychologist Jonathan Haidt began formulating a theory to help explain the moral divide. Moral foundations theory argues that liberals and conservatives are divided because they rely on distinct moral values, including purity, to different degrees.


A harm-focused perspective on moral judgments related to ‘purity’ could help us better understand and communicate with moral opponents. We all grasp the importance of protecting ourselves and our loved ones from harm. Learning that people on the ‘other side’ of a political divide care about questions of purity because they connect these to their understanding of harm can help us empathise with different moral opinions. It is easy for a liberal to dismiss a conservative’s condemnation of dead-chicken sex when it is merely said to be ‘impure’; it is harder to be dismissive if it’s suggested that someone who makes a habit of that behaviour might end up harming people.

Explicitly grounding discussions of morality in perceptions of harm could help us all to be better citizens of a ‘small-L liberal’ society – one in which the right to swing our fists ends where others’ noses begin. If something seems disgusting, impure and immoral to you, take some time to try to articulate the harms you intuitively perceive. Talking about these potential harms may help other people understand where you are coming from. Of course, someone might not share your judgment that harm is being done. But identifying perceived harms at least puts the conversation in terms that everyone understands.

Here is my summary:

The authors define purity as "the state of being free from contamination or pollution."  They argue that people on both the left and the right care about purity because they associate it with safety and well-being.
They provide examples of how liberals and conservatives can both use purity-related language, such as "desecrate" and "toxic." They propose a new explanation of moral judgments that suggests that people care about purity when they perceive that 'impure' acts can lead to harm.

Sunday, April 23, 2023

Produced and counterfactual effort contribute to responsibility attributions in collaborative tasks

Xiang, Y., Landy, J., et al. (2023, March 8). 


How do people judge responsibility in collaborative tasks? Past work has proposed a number of metrics that people may use to attribute blame and credit to others, such as effort, competence, and force. Some theories consider only the produced effort or force (individuals are more responsible if they produce more effort or force), whereas others consider counterfactuals (individuals are more responsible if some alternative behavior on their or their collaborator's part could have altered the outcome). Across four experiments (N = 717), we found that participants’ judgments are best described by a model that considers both produced and counterfactual effort. This finding generalized to an independent validation data set (N = 99). Our results thus support a dual-factor theory of responsibility attribution in collaborative tasks.

General discussion

Responsibility for the outcomes of collaborations is often distributed unevenly. For example, the lead author on a project may get the bulk of the credit for a scientific discovery, the head of a company may shoulder the blame for a failed product, and the lazier of two friends may get the greater share of blame for failing to lift a couch. However, past work has provided conflicting accounts of the computations that drive responsibility attributions in collaborative tasks. Here, we compared each of these accounts against human responsibility attributions in a simple collaborative task where two agents attempted to lift a box together. We contrasted seven models that predict responsibility judgments based on metrics proposed in past work, comprising three production-style models (Force, Strength, Effort), three counterfactual-style models (Focal-agent-only, Non-focal-agent-only, Both-agent), and one Ensemble model that combines the best-fitting production- and counterfactual-style models. Experiment 1a and Experiment 1b showed that the Effort model and the Both-agent counterfactual model capture the data best among the production-style models and the counterfactual-style models, respectively. However, neither provided a fully adequate fit on its own. We then showed that predictions derived from the average of these two models (i.e., the Ensemble model) outperform all other models, suggesting that responsibility judgments are likely a combination of production-style reasoning and counterfactual reasoning. Further evidence came from analyses performed on individual participants, which revealed that the Ensemble model explained more participants’ data than any other model. These findings were subsequently supported by Experiment 2a and Experiment 2b, which replicated the results when additional force information was shown to the participants, and by Experiment 3, which validated the model predictions with a broader range of stimuli.
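The Ensemble model described above simply averages the predictions of the best-fitting production-style model (Effort) and the best-fitting counterfactual-style model (Both-agent). A minimal sketch of that combination rule, using hypothetical normalized values rather than the paper's fitted quantities:

```python
def ensemble_responsibility(produced_effort, counterfactual_effort):
    """Average a production-style prediction (responsibility tracks the
    effort an agent actually produced) with a counterfactual-style
    prediction (responsibility tracks how much the outcome depended on
    either agent behaving differently). Both inputs are assumed to be
    normalized to the [0, 1] range."""
    return (produced_effort + counterfactual_effort) / 2

# Hypothetical agent: produced a lot of effort, but the outcome
# depended only weakly on either agent behaving differently.
print(ensemble_responsibility(0.9, 0.3))  # roughly 0.6
```

The averaging captures the paper's dual-factor claim: an agent who worked hard but whose effort made little counterfactual difference receives an intermediate responsibility judgment rather than a high or low one.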

Summary: Effort exerted by each member and counterfactual thinking both play a crucial role in attributing responsibility for success or failure in collaborative tasks. The study suggests that higher effort leads to more responsibility for success, while lower effort leads to more responsibility for failure.

Sunday, October 16, 2022

A framework for understanding reasoning errors: From fake news to climate change and beyond

Pennycook, G. (2022, August 31).


Humans have the capacity, but perhaps not always the willingness, for great intelligence. From global warming to the spread of misinformation and beyond, our species is facing several major challenges that are the result of the limits of our own reasoning and decision-making. So, why are we so prone to errors during reasoning? In this chapter, I will outline a framework for understanding reasoning errors that is based on a three-stage dual-process model of analytic engagement (intuition, metacognition, and reason). The model has two key implications: 1) that a mere lack of deliberation and analytic thinking is a primary source of errors, and 2) that when deliberation is activated, it generally reduces errors (via questioning intuitions and integrating new information) rather than increasing them (via rationalization and motivated reasoning). In support of these claims, I review research showing the extensive predictive validity of measures that index individual differences in analytic cognitive style – even beyond explicit errors per se. In particular, analytic thinking is not only predictive of skepticism about a wide range of epistemically suspect beliefs (paranormal, conspiratorial, COVID-19 misperceptions, pseudoscience and alternative medicines) as well as decreased susceptibility to bullshit, fake news, and misinformation, but also important differences in people’s moral judgments and values as well as their religious beliefs (and disbeliefs). Furthermore, in some (but not all) cases, there is evidence from experimental paradigms that supports a causal role of analytic thinking in determining judgments, beliefs, and behaviors. The findings reviewed here provide some reason for optimism for the future: It may be possible to foster analytic thinking and therefore improve the quality of our decisions.

Evaluating the evidence: Does reason matter?

Thus far, I have prioritized explaining the various alternative frameworks. I will now turn to an in-depth review of some of the key relevant evidence that helps adjudicate between these accounts. I will organize this review around two key implications that emerge from the framework that I have proposed.

First, the primary difference between the three-stage model (and related dual-process models) and the social-intuitionist models (and related intuitionist models) is that the former argues that people should be able to overcome intuitive errors using deliberation whereas the latter argues that reason is generally infirm and therefore that intuitive errors will simply dominate. Thus, the reviewed research will investigate the apparent role of deliberation in driving people’s choices, beliefs, and behaviors.

Second, the primary difference between the three-stage model (and related dual-process models) and the identity-protective cognition model is that the latter argues that deliberation facilitates biased information processing whereas the former argues that deliberation generally facilitates accuracy. Thus, the reviewed research will also focus on whether deliberation is linked with inaccuracy in politically-charged or identity-relevant contexts.

Friday, September 9, 2022

Online Moral Conformity: How Powerful is a Group of Online Strangers When Influencing an Individual’s Moral Judgments?

Paruzel-Czachura, M., Wojciechowska, D., 
& Bostyn, D. H. (2022, May 21). 


People make moral decisions every day, and when making them, they may be influenced by their companions (the so-called moral conformity effect). Nowadays, people make many decisions in online environments like video meetings. In the current preregistered experiment, we studied the online moral conformity effect. We applied an Asch conformity paradigm in an online context by asking participants (N = 120) to reply to sacrificial moral dilemmas through the online video communication tool Zoom when sitting in the “virtual” room with strangers (confederates instructed on how to answer; experimental condition) or when sitting alone (control condition). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in the aggregate.


Social conformity is a well-known phenomenon (Asch, 1951, 1952, 1955, 1956; Sunstein, 2019). Moreover, past research has demonstrated that conformity effects occur for moral issues as well (Aramovich et al., 2012; Bostyn & Roets, 2017; Crutchfield, 1955; Kelly et al., 2017; Kundu & Cummins, 2013; Lisciandra et al., 2013). However, the extent to which moral conformity occurs when people interact in digital spaces, such as video conferencing software, has not yet been investigated.

We conducted a well-powered experimental study to determine if the effect of online moral conformity exists. Two study conditions were used: an experimental one in which study participants were answering along with a group of confederates and a control condition in which study participants were answering individually. In both conditions, participants were invited to a video meeting and asked to orally respond to a set of moral dilemmas with their cameras turned on. All questions and study conditions were the same, apart from the presence of other people in the experimental condition. In the experimental condition, importantly, the experimenter pretended that all people were study participants, but in fact, only the last person was an actual study participant, and all four other participants were confederates who were trained to answer in a specific manner. Confederates answered contrary to what most people had decided in past studies (Gawronski et al., 2017; Greene et al., 2008; Körner et al., 2020). We found an effect of online moral conformity on half of the dilemmas included in our study as well as in aggregate.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., 
& Young, L. (2022, May 27). 


Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we experimentally show that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism for the way that the observability of actions impacts virtue discounting. We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.
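The "portion of the effect mediated" figure quoted above is conventionally computed as the indirect effect divided by the total effect in a simple mediation decomposition. A hedged sketch of that arithmetic; the path coefficients below are illustrative placeholders, not the paper's estimates:

```python
def proportion_mediated(a, b, c_prime):
    """Simple single-mediator decomposition:
    a       -- path from predictor (here, observability) to mediator
               (inferred motivation)
    b       -- path from mediator to outcome (moral-goodness rating)
    c_prime -- direct path from predictor to outcome
    Proportion mediated = indirect effect / total effect."""
    indirect = a * b
    total = indirect + c_prime
    return indirect / total

# Illustrative coefficients only, chosen so the indirect path
# accounts for most of the total effect.
print(round(proportion_mediated(0.8, 0.9, 0.272), 3))
```

Reported percentages like 72.6% are this ratio expressed as a percentage; the paper's actual estimate comes from structural equation modeling rather than this two-line decomposition.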

Monday, August 15, 2022

Modular Morals: Mapping the organisation of the moral brain

Wilkinson, J., Curry, O.S., et al.
OSF Home
Last Updated: 2022-07-12


Is morality the product of multiple domain-specific psychological mechanisms, or one domain-general mechanism? Previous research suggests that morality consists of a range of solutions to the problems of cooperation recurrent in human social life. This theory of ‘morality as cooperation’ suggests that there are (at least) seven specific moral domains: family values, group loyalty, reciprocity, heroism, deference, fairness and property rights. However, it is unclear how these types of morality are implemented at the neuroanatomical level. The possibilities are that morality is (1) the product of multiple distinct domain-specific adaptations for cooperation, (2) the product of a single domain-general adaptation which learns a range of moral rules, or (3) the product of some combination of domain-specific and domain-general adaptations. To distinguish between these possibilities, we first conducted an anatomical likelihood estimation meta-analysis of previous studies investigating the relationship between these seven moral domains and neuroanatomy. This meta-analysis provided evidence for a combination of specific and general adaptations. Next, we investigated the relationship between the seven types of morality – as measured by the Morality as Cooperation Questionnaire (Relevance) – and grey matter volume in a large neuroimaging (n=607) sample. No associations between moral values and grey matter volume survived whole-brain exploratory testing. We conclude that whatever combination of mechanisms are responsible for morality, either they are not neuroanatomically localised, or else their localisation is not manifested in grey matter volume. Future research should employ phylogenetically informed a priori predictions, as well as alternative measures of morality and of brain function.

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D.A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375


Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.


In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Sunday, July 31, 2022

What is 'purity'? Conceptual murkiness in moral psychology

Gray, K., DiMaggio, N., Schein, C., 
& Kachanoff, F. (2021, February 3). 


Purity is an important topic in psychology. It has a long history in moral discourse, has helped catalyze paradigm shifts in moral psychology, and is thought to underlie political differences. But what exactly is “purity?” To answer this question, we review the history of purity and then systematically examine 158 psychology papers that define and operationalize (im)purity. In contrast to the many concepts defined by what they are, purity is often understood by what it isn’t—obvious dyadic harm. Because of this “contra”-harm understanding, definitions and operationalizations of purity are quite varied. Acts used to operationalize impurity include taking drugs, eating your sister’s scab, vandalizing a church, wearing unmatched clothes, buying music with sexually explicit lyrics, and having a messy house. This heterogeneity makes purity a “chimera”—an entity composed of various distinct elements. Our review reveals that the “contra-chimera” of purity has 9 different scientific understandings, and that most papers define purity differently from how they operationalize it. Although people clearly moralize diverse concerns—including those related to religion, sex, and food—such heterogeneity in conceptual definitions is problematic for theory development. Shifting definitions of purity provide “theoretical degrees of freedom” that make falsification extremely difficult. Doubts about the coherence and consistency of purity raise questions about key purity-related claims of modern moral psychology, including the nature of political differences and the cognitive foundations of moral judgment.


Purity is an ancient concept that has moved from historical religious rhetoric to modern moral psychology. Many things have changed in this leap—Dr. Kellogg would never have imagined a scientific discipline catalyzed by loving incest—but purity still seems to be a heterogeneous concept with diverse understandings. This diversity makes purity an exciting topic to study, but our review suggests that purity lacks a common core, beyond involving acts that are less-than-obviously harmful. Without a consistent and non-tautological understanding of purity, it is difficult to argue that purity is a unique and distinct construct, and it is impossible to argue for a mental mechanism dedicated to purity. It is clear, however, that purity is featured in moral rhetoric and can help shed light on cultural differences. Moving forward, we suggest that the field should unpack the richness of purity and individually explore its many understandings. When conducting this research, we should consider not only what purity isn’t, but what it really is.

Friday, September 24, 2021

Hanlon’s Razor

N. Ballantyne and P. H. Ditto
Midwest Studies in Philosophy
August 2021


“Never attribute to malice that which is adequately explained by stupidity” – so says Hanlon’s Razor. This principle is designed to curb the human tendency toward explaining other people’s behavior by moralizing it. In this article, we ask whether Hanlon’s Razor is good or bad advice. After offering a nuanced interpretation of the principle, we critically evaluate two strategies purporting to show it is good advice. Our discussion highlights important, unsettled questions about an idea that has the potential to infuse greater humility and civility into discourse and debate.

From the Conclusion

Is Hanlon’s Razor good or bad advice? In this essay, we criticized two proposals in favor of the Razor.  One sees the benefits of the principle in terms of making us more accurate. The other sees benefits in terms of making us more charitable. Our discussion has been preliminary, but we hope careful empirical investigation can illuminate when and why the Razor is beneficial, if it is. For the time being, what else can we say about the Razor?

The Razor attempts to address the problem of detecting facts that explain opponents’ mistakes. Why do our opponents screw up? For hypermoralists, detecting stupidity in the noise of malice can be difficult: we are too eager to attribute bad motives and unsavory character to people who disagree with us. When we try to explain their mistakes, we are subject to two distinct errors:

Misidentifying-stupidity error: attributing an error to malice that is due to stupidity

Misidentifying-malice error: attributing an error to stupidity that is due to malice 

The idea driving the Razor is simple enough. People make misidentifying-stupidity errors too frequently and they should minimize those errors by risking misidentifying-malice errors. The Razor attempts to adjust our criterion for detecting the source of opponents’ mistakes. People should see stupidity more often in their opponents, even if that means they sometimes see stupidity where there is in fact malice. 
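The Razor's trade-off can be read as a criterion shift in a signal-detection sense: raising the evidential threshold for inferring malice reduces misidentifying-stupidity errors at the price of more misidentifying-malice errors. A toy sketch of that trade-off, with made-up "evidence of malice" scores (all names and values here are illustrative, not from the article):

```python
def classify_errors(cases, threshold):
    """Each case is a pair (evidence_of_malice, truly_malicious).
    We infer malice when evidence exceeds the threshold, then count
    the two error types defined in the text."""
    misid_stupidity = 0  # inferred malice, but the cause was stupidity
    misid_malice = 0     # inferred stupidity, but the cause was malice
    for evidence, truly_malicious in cases:
        inferred_malice = evidence > threshold
        if inferred_malice and not truly_malicious:
            misid_stupidity += 1
        elif not inferred_malice and truly_malicious:
            misid_malice += 1
    return misid_stupidity, misid_malice

# Made-up cases: mostly stupidity, with middling evidence scores.
cases = [(0.4, False), (0.5, False), (0.6, False),
         (0.55, True), (0.7, True), (0.3, False)]
print(classify_errors(cases, 0.35))  # low threshold: (3, 0)
print(classify_errors(cases, 0.65))  # Razor-style high threshold: (0, 1)
```

Moving the threshold up, as the Razor advises, eliminates the three misidentifying-stupidity errors but introduces one misidentifying-malice error: the criterion shift trades one error type for the other rather than reducing errors overall.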

Saturday, September 11, 2021

Virtues for Real-World Utilitarians

Schubert, S., & Caviola, L. (2021, August 3)


Utilitarianism says that we should maximize aggregate well-being, impartially considered. But utilitarians that try to apply this principle will encounter many psychological obstacles, ranging from selfishness to moral biases to limits to epistemic and instrumental rationality. In this chapter, we argue that utilitarians should cultivate a number of virtues that allow them to overcome the most important of these obstacles. We select virtues based on two criteria. First, the virtues should be impactful: they should greatly increase your impact (according to utilitarian standards), if you acquire them. Second, the virtues should be acquirable: they should be psychologically realistic to acquire. Using these criteria, we argue that utilitarians should prioritize six virtues: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. Finally, we discuss how our suggested list of virtues compares with standard conceptions of utilitarianism, as well as with common sense morality.


We have suggested six virtues that utilitarians should cultivate to overcome psychological obstacles to utilitarianism and maximize their impact in the real world: moderate altruism, moral expansiveness, effectiveness-focus, truth-seeking, collaborativeness, and determination. To reiterate, this list is tentative, and should be seen more as a starting point for further research than as a well-consolidated set of findings. It is plausible that some of our suggested virtues should be refined, and that we should add further virtues to the list. We hope that it will inspire a debate among philosophers and psychologists about what virtues utilitarians should prioritize the most.

Tuesday, August 17, 2021

One -- but Not the Same

Schwenkler, J., Byrd, N., Lambert, E., & Taylor, M.
Philosophical Studies


Ordinary judgments about personal identity are complicated by the fact that phrases like “same person” and “different person” have multiple uses in ordinary English. This complication calls into question the significance of recent experimental work on this topic. For example, Tobia (2015) found that judgments of personal identity were significantly affected by whether the moral change described in a vignette was for the better or for the worse, while Strohminger and Nichols (2014) found that loss of moral conscience had more of an effect on identity judgments than loss of biographical memory. In each case, however, there are grounds for questioning whether the judgments elicited in these experiments engaged a concept of numerical personal identity at all (cf. Berniūnas and Dranseika 2016; Dranseika 2017; Starmans and Bloom 2018). In two pre-registered studies we validate this criticism while also showing a way to address it: instead of attempting to engage the concept of numerical identity through specialized language or the terms of an imaginary philosophical debate, we should consider instead how the identity of a person is described through the connected use of proper names, definite descriptions, and the personal pronouns “I”, “you”, “he”, and “she”. When the experiments above are revisited in this way, there is no evidence that the differences in question had an effect on ordinary identity judgments.

From the Discussion

Our findings do, however, suggest a promising strategy for the experimental study of how philosophically important concepts are employed by people without formal philosophical training. As we noted above, in philosophy we use phrases like “numerical identity” and “qualitative identity” in a somewhat artificial way, in order thereby to disambiguate between the different meanings a phrase like “same person” can have in ordinary language. But we cannot easily disambiguate things in this way when we wish to investigate how these concepts are understood by non-philosophers: for a question like “Is the man after the accident numerically the same as the man before?” cannot be posed to such a person without first explicating the meaning of the italicized phrase.

Sunday, April 11, 2021

Personal experiences bridge moral and political divides better than facts

E. Kubin, C. Puryear, C. Schein, & K. Gray
Proceedings of the National Academy of Sciences 
Feb 2021, 118 (6) e2008389118
DOI: 10.1073/pnas.2008389118


Both liberals and conservatives believe that using facts in political discussions helps to foster mutual respect, but 15 studies—across multiple methodologies and issues—show that these beliefs are mistaken. Political opponents respect moral beliefs more when they are supported by personal experiences, not facts. The respect-inducing power of personal experiences is revealed by survey studies across various political topics, a field study of conversations about guns, an analysis of YouTube comments from abortion opinion videos, and an archival analysis of 137 interview transcripts from Fox News and CNN. The personal experiences most likely to encourage respect from opponents are issue-relevant and involve harm. Mediation analyses reveal that these harm-related personal experiences increase respect by increasing perceptions of rationality: everyone can appreciate that avoiding harm is rational, even in people who hold different beliefs about guns, taxes, immigration, and the environment. Studies show that people believe in the truth of both facts and personal experiences in nonmoral disagreement; however, in moral disagreements, subjective experiences seem truer (i.e., are doubted less) than objective facts. These results provide a concrete demonstration of how to bridge moral divides while also revealing how our intuitions can lead us astray. Stretching back to the Enlightenment, philosophers and scientists have privileged objective facts over experiences in the pursuit of truth. However, furnishing perceptions of truth within moral disagreements is better accomplished by sharing subjective experiences, not by providing facts.


All Americans are affected by rising political polarization, whether because of a gridlocked Congress or antagonistic holiday dinners. People believe that facts are essential for earning the respect of political adversaries, but our research shows that this belief is wrong. We find that sharing personal experiences about a political issue—especially experiences involving harm—helps to foster respect via increased perceptions of rationality. This research provides a straightforward pathway for increasing moral understanding and decreasing political intolerance. These findings also raise questions about how science and society should understand the nature of truth in the era of “fake news.” In moral and political disagreements, everyday people treat subjective experiences as truer than objective facts.

Friday, April 9, 2021

The Ordinary Concept of a Meaningful Life

Prinzing, M., De Freitas, J., & Fredrickson, B. 
(2020, May 5). 


The desire for a meaningful life is ubiquitous, yet the ordinary concept of a meaningful life is poorly understood. Across six experiments (total N = 2,539), we investigated whether third-person attributions of meaning depend on the psychological states an agent experiences (feelings of interest, engagement, and fulfillment), or on the objective conditions of their life (e.g., their effects on others). Studies 1a–b found that laypeople think subjective and objective factors contribute independently to the meaningfulness of a person’s life. Studies 2a–b found that positive mental states are thought to make a life more meaningful, even if derived from senseless activities (e.g., hand-copying the dictionary). Studies 3a–b found that agents engaged in morally bad activities are not thought to have meaningful lives, even if they feel fulfilled. In short, both an agent’s subjective mental states and their objective impact on the world affect how meaningful their life appears.

General Discussion

What, according to the ordinary concept, makes a life meaningful? Studies 1a–b found that laypeople think positive mental states (interest, engagement, fulfillment) can make an agent’s life meaningful. These studies also found that, according to lay assessments, doing something that has value for others can also make an agent’s life meaningful. These findings conflict with the predominant philosophical theories of meaning in life. These theories posit an exclusive role for either positive mental states (subjectivist theories) or objective states of an agent’s life (objectivist theories), or they require that both criteria be met (hybrid theories). In contrast, we found that laypeople think an agent’s life is meaningful when either criterion is met. This indicates that the ordinary concept of a meaningful life does not fit neatly with any of these three philosophical theories. Instead, lay judgments seem to be captured by what we will call the independent-additive theory: subjective factors (positive mental states like fulfillment) and objective factors (like contribution, sensibility, and morality) each affect the meaningfulness of an agent’s life, and their effects are both independent and additive.

We investigated the roles of sensibility and morality as plausible boundary conditions for lay attributions of meaningfulness. For sensibility, we saw somewhat mixed results. Study 2a found no evidence that a life characterized by sensible activities (wine connoisseurship) was seen as more meaningful than a life characterized by senseless activities (rubber band collecting). However, Study 2b, with a larger sample and wider variety of vignettes, did find such an effect. Nevertheless, in both studies, fulfilling lives were seen as more meaningful than unfulfilling ones—regardless of whether that fulfillment was derived from sensible or senseless activities. Hence, on the ordinary concept, sensibility contributes to meaningfulness, though not as much as fulfillment does. Moreover, in alignment with the independent-additive theory, fulfillment maintains its additive effect, independently of sensibility. Regarding morality, Studies 3a–b found that morally good lives were viewed as much more meaningful than morally bad ones. In fact, morally bad agents were not thought to live meaningful lives, even if those agents felt very fulfilled. In contrast, morally good agents were seen as having meaningful lives even if they didn’t feel fulfilled. Nevertheless, though the effect of morality was larger than that of fulfillment, participants still thought that a fulfilled, immoral agent was living more meaningfully than an unfulfilled, immoral agent. Supporting the independent-additive theory, the additive effect of fulfillment was independent of morality.

In short, we identified four factors (fulfillment, contribution, sensibility, and morality) that seem to have independent, additive effects on third-person attributions of meaningfulness. There may well be more such factors. But the evidence from these six experiments supports a model of third-person meaningfulness judgments that—in contrast to subjectivist, objectivist, and hybrid theories—emphasizes independent and additive factors that contribute to the meaning in a person’s life. We have called such a model the “independent-additive theory”.
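The “independent and additive” structure described above can be summarized in a simple linear form. The following equation is an illustrative formalization, not one given by the authors; the coefficient names are hypothetical:

```latex
\text{Meaningfulness} = \beta_0
  + \beta_F \cdot \text{fulfillment}
  + \beta_C \cdot \text{contribution}
  + \beta_S \cdot \text{sensibility}
  + \beta_M \cdot \text{morality}
```

The absence of interaction terms captures independence (each factor contributes regardless of the others’ levels), while the positive coefficients capture additivity. On the paper’s findings, one would expect each coefficient to be positive, with the morality term carrying the largest weight (Studies 3a–b) and sensibility the smallest (Studies 2a–b).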

Thursday, April 8, 2021

How social relationships shape moral judgment

Earp, B. D., McLoughlin, et al.
(2020, September 18).


Our judgments of whether an action is morally wrong depend on who is involved and their relationship to one another. But how, when, and why do social relationships shape such judgments? Here we provide new theory and evidence to address this question. In a pre-registered study of U.S. participants (n = 423, nationally representative for age, race and gender), we show that particular social relationships (like those between romantic partners, housemates, or siblings) are normatively expected to serve distinct cooperative functions – including care, reciprocity, hierarchy, and mating – to different degrees. In a second pre-registered study (n = 1,320) we show that these relationship-specific norms, in turn, influence the severity of moral judgments concerning the wrongness of actions that violate cooperative expectations. These data provide evidence for a unifying theory of relational morality that makes highly precise out-of-sample predictions about specific patterns of moral judgments across relationships. Our findings show how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the Discussion

Lewin famously argued that behavior is a product of the person and the situation. In a similar spirit, our data confirm that judgments of moral behavior cannot be understood solely with reference to a given act or actor, but rather, must be interpreted in light of the interaction between the parties. And crucially, the nature of their relationship—including the cooperative norms by which the relationship is governed in a given society—will typically be one of the most important situational factors in terms of explanatory power. Although relationship theorists have, for decades, worked to characterize the structural elements of various close relationships and have sometimes categorized relationships in terms of cooperative functions necessary for human thriving, here we systematically described lay perceptions of the ideal functional make-up of a wide range of common relationships. Moreover, we were able to use this information to make accurate out-of-sample predictions of moral judgments concerning a host of actions that are likely to occur in daily life. We hope that our approach will inspire further research in this vein, both theoretical and empirical, at the interface of relationship science and moral psychology. Ideally, such research will help to integrate and enrich work in both domains, which has so far remained largely separate.

From a theoretical perspective, one aspect of our current account that requires further attention is the reciprocity function. In contrast with the other three functions considered, relationship-specific functional expectations for reciprocity did not significantly predict relationship-specific judgments concerning reciprocity violations. Why might this be so? One possibility, suggested by previous research, is that the model we tested did not distinguish between two different types of reciprocity. In some relationships, such as those between strangers, acquaintances, or individuals doing business with one another, reciprocity takes a tit-for-tat form in which benefits are offered and accepted on a highly contingent basis. This type of reciprocity is transactional, in that resources are provided, not in response to a real or perceived need on the part of the other, but rather, in response to the past or expected future provision of a similarly valued resource from the cooperation partner. In this, it relies on an explicit accounting of who owes what to whom, and is thus characteristic of so-called “exchange” relationships.

Monday, March 15, 2021

What is 'purity'? Conceptual murkiness in moral psychology.

Gray, K., DiMaggio, N., Schein, C., 
& Kachanoff, F. (2021, February 3).


Purity is an important topic in psychology. It has a long history in moral discourse, has helped catalyze paradigm shifts in moral psychology, and is thought to underlie political differences. But what exactly is “purity”? To answer this question, we review the history of purity and then systematically examine 158 psychology papers that define and operationalize (im)purity. In contrast to the many concepts defined by what they are, purity is often understood by what it isn’t—obvious dyadic harm. Because of this “contra”-harm understanding, definitions and operationalizations of purity are quite varied. Acts used to operationalize impurity include taking drugs, eating your sister’s scab, vandalizing a church, wearing unmatched clothes, buying music with sexually explicit lyrics, and having a messy house. This heterogeneity makes purity a “chimera”—an entity composed of various distinct elements. Our review reveals that the “contra-chimera” of purity has 9 different scientific understandings, and that most papers define purity differently from how they operationalize it. Although people clearly moralize diverse concerns—including those related to religion, sex, and food—such heterogeneity in conceptual definitions is problematic for theory development. Shifting definitions of purity provide “theoretical degrees of freedom” that make falsification extremely difficult. Doubts about the coherence and consistency of purity raise questions about key purity-related claims of modern moral psychology, including the nature of political differences and the cognitive foundations of moral judgment.

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition, Volume 209, 
April 2021, 104572


Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support for the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support for its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Saturday, November 14, 2020

Do ethics classes influence student behavior? Case study: Teaching the ethics of eating meat

Schwitzgebel, E. et al.
Volume 203, October 2020


Do university ethics classes influence students' real-world moral choices? We aimed to conduct the first controlled study of the effects of ordinary philosophical ethics classes on real-world moral choices, using non-self-report, non-laboratory behavior as the dependent measure. We assigned 1332 students in four large philosophy classes to either an experimental group on the ethics of eating meat or a control group on the ethics of charitable giving. Students in each group read a philosophy article on their assigned topic and optionally viewed a related video, then met with teaching assistants for 50-minute group discussion sections. They expressed their opinions about meat ethics and charitable giving in a follow-up questionnaire (1032 respondents after exclusions). We obtained 13,642 food purchase receipts from campus restaurants for 495 of the students, before and after the intervention. Purchase of meat products declined in the experimental group (52% of purchases of at least $4.99 contained meat before the intervention, compared to 45% after) but remained the same in the control group (52% both before and after). Ethical opinion also differed, with 43% of students in the experimental group agreeing that eating the meat of factory farmed animals is unethical compared to 29% in the control group. We also attempted to measure food choice using vouchers, but voucher redemption rates were low and no effect was statistically detectable. It remains unclear what aspect of instruction influenced behavior.

Saturday, November 7, 2020

Psychopathy as moral blindness: a qualifying exploration of the blindness-analogy in psychopathy theory and research

Rasmus Rosenberg Larsen (2020) 
Philosophical Explorations, 23:3, 214-233
DOI: 10.1080/13869795.2020.1799662


The term psychopathy refers to a personality disorder associated with callous personality traits and antisocial behaviors. Throughout its research history, psychopathy has frequently been described as a peculiar form of moral blindness, engendering a narrative about a patient stereotype incapable of taking a genuine moral perspective, similar to a blind person who is deprived of proper visual perceptions. However, recent empirical research has shown that clinically diagnosed psychopaths are morally more fit than initially thought, and the blindness-analogy now comes across as largely misleading. In this contribution, the moral-blindness analogy is explored in an attempt to qualify anew its relevance in psychopathy theory and research. It is demonstrated that there are indeed theoretically relevant parallels to be drawn between blindness and psychopathy, parallels that are especially illuminating when accounting for the potential symptomatology, dimensionality, and etiological nature of the disorder.

Concluding remarks

In summary, what has been proposed throughout this paper is a perspective on how to interpret and improve psychopathy research, an approach that lends itself to theorizing psychopathy as a peculiar form of moral blindness. Following leading research, it was posited that psychopathy must, first of all, be understood as an emotional disorder, that is, a disorder of substantial emotional attenuation. Building on Prinz’s constructivist sentimentalism, it was demonstrated how said emotional incapacity could manifest in moral psychological impairments, as an inability to perceive the degrees of moral rightness and wrongness. Prinz’s theory was then expanded by adding (or amending) that psychopaths are not necessarily impaired in terms of perceiving the categorical value of a given moral situation, i.e. judging whether something is either right or wrong. Indeed, psychopaths must perceive this basic information by the mere fact that they do have some levels of valenced emotional experience. Instead, what is predicted is that globally low emotional attenuation (i.e. psychopathy) leads to observable differences in terms of judging the degree of rightness and wrongness of a situation.

Thursday, November 5, 2020

Are psychopaths moral‐psychologically impaired? Reassessing emotion‐theoretical explanations

Rasmus Rosenberg Larsen
Mind & Language. 2020; 1– 17. 


Psychopathy has been theorized as a disorder of emotion, which impairs moral judgments. However, these theories are increasingly being abandoned as empirical studies show that psychopaths seem to make proper moral judgments. In this contribution, these findings are reassessed, and it is argued that prevalent emotion‐theories of psychopathy appear to operate with the unjustified assumption that psychopaths have no emotions, which leads to the hypothesis that psychopaths are completely unable to make moral judgments. An alternative and novel explanation is proposed, theorizing psychopathy as a degree‐specific emotional deficiency, which causes degree‐specific differences in moral judgments.

From the Conclusion Section

Motivated by a suite of ostensibly undermining empirical studies, this paper sought to defend and qualify emotion-theories of psychopathy by explicating in detail the philosophical and psychological commitments these theories appear to be implicitly endorsing, namely, a (constructivist) sentimentalist framework. This explication demonstrated, above all, that psychopathy studies appear to operate with an inconsistent set of hypotheses when trying to capture the differences between diagnosed psychopaths and controls in terms of their moral judgments and values. This led to a consideration of alternative research designs particularly aimed at capturing the potential moral psychological differences that follow from having diminished emotional dispositions, namely, degree-specific differences related to the two-dimensional value spectrum, as opposed to differences related to answers on moral categorical issues.

Tuesday, July 14, 2020

The MAD Model of Moral Contagion: The role of motivation, attention and design in the spread of moralized content online

Brady WJ, Crockett MJ, Van Bavel JJ.
Perspect Psychol Sci. 2020;1745691620917336.


With over 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process in order to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call ‘moral contagion’). Next, we propose a psychological model to explain moral contagion. The ‘MAD’ model of moral contagion argues that people have group identity-based motivations to share moral-emotional content; that such content is especially likely to capture our attention; and that the design of social media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.
