Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Judgments.

Friday, February 9, 2024

The Dual-Process Approach to Human Sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation

Capraro, V. (2023, May 8).
Journal of Personality and Social Psychology

Abstract

Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet, a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology, and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favours a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this paper highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.

Here is my summary:

This article proposes a dual-process approach to human sociality. Capraro argues that two main systems govern human social behavior: an intuitive system and a deliberative system. The intuitive system is fast, automatic, and often based on heuristics, or mental shortcuts. The deliberative system is slower, more effortful, and based on a more careful consideration of the evidence.

Capraro argues that the intuitive system plays a key role in cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology. This is because these behaviors are often necessary for self-preservation. For example, in order to avoid being harmed, people are naturally inclined to cooperate with others and avoid harming others. Similarly, in order to maintain positive relationships with others, people are inclined to be truthful and reciprocate favors.

The deliberative system plays a more important role in complex social situations, such as when people must make decisions with long-term consequences or take the needs of others into account. In these cases, people are more likely to weigh the evidence and the different options carefully before deciding. Capraro concludes that the dual-process approach to human sociality provides a framework for understanding the complex cognitive basis of human social behavior, one that can explain a wide range of social phenomena, from cooperation and altruism to truth-telling and deontology.
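For readers who want a concrete sense of what the "meta-analytic evidence" in the paper's title involves, here is a minimal sketch of random-effects pooling using the DerSimonian-Laird estimator. This is a generic illustration, not Capraro's actual analysis; the effect sizes and variances below are invented.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., effects of promoting intuition
# on cooperation) and their sampling variances -- invented for illustration.
effects = np.array([0.21, 0.05, 0.33, 0.12, -0.02, 0.18])
variances = np.array([0.010, 0.020, 0.015, 0.008, 0.025, 0.012])

# Fixed-effect weights and the heterogeneity statistic Q
w = 1.0 / variances
pooled_fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled_fixed) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2
df = len(effects) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights incorporate between-study heterogeneity
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled effect = {pooled:.3f} (SE = {se:.3f}), tau^2 = {tau2:.3f}")
```

The random-effects model is the standard choice when, as here, studies differ in designs, games, and populations: it treats each study's true effect as drawn from a distribution rather than assuming a single common effect.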

Wednesday, April 12, 2023

Why Americans Hate Political Division but Can’t Resist Being Divisive

Will Blakely & Kurt Gray
Moral Understanding Substack
Originally posted 21 Feb 2023

No one likes polarization. According to a recent poll, 93% of Americans say it is important to reduce the country's current divides, including two-thirds who say it is very important to do so. In a recent FiveThirtyEight poll, polarization ranked third on a list of 20 of the most important issues facing America. Which is… puzzling.

The puzzle is this: How can we be so divided if no one wants to be? Who are the hypocrites causing division and hatred while paying lip service to compromise and tolerance?

If you ask everyday Americans, they’ve got their answer. It’s the elites. Tucker Carlson, AOC, Donald Trump, and MSNBC. While these actors certainly are polarizing, it takes two to tango. We, the people, share some of the blame too. Even us, writing this newsletter, and even you, dear reader.

But this leaves us with a tricky question: why would we contribute to a divide that we can’t stand? To answer it, we need to understand the biases and motivations that shape how we answer the question, “Who’s at fault here?” And more importantly, we need to understand the strategies that can get us out of conflict.

The Blame Game

The Blame Game comes in two flavors: either/or. Adam or Eve, Will Smith or Chris Rock, Amber Heard or Johnny Depp. When assigning blame in bad situations, our minds are dramatic. Psychology studies show that we tend to assign 100% of the blame to the person we see as the aggressor, and 0% to the side we see as the victim. So, what happens when all the people who are against polarization assign blame for polarization? You guessed it. They give 100% of the blame to the opposing party and 0% to their own. They “morally typecast” themselves as 100% the victim of polarization and the other side as 100% the perpetrator.

We call this moral “typecasting” because people’s minds firmly cast others into roles of victim and victimizer in the same way that actors get typecast in certain roles. In the world of politics, if you’re a Democrat, you cast Republicans as victimizers, as consistently as Hollywood directors cast Kevin Hart as comic relief and Danny Trejo as a laconic villain.

But why do we rush to this all-or-nothing approach when the world is certainly more complicated? It’s because our brains love simplicity. In the realm of blame, we want one simple cause. In his recent book, Complicit, Max Bazerman, a professor at Harvard Business School, illustrates just how widespread this “monocausality bias” is. Bazerman gave a group of business executives the opportunity to allocate blame after reviewing a case of business fraud. Sixty-two of the 78 business leaders wrote down only one cause. Despite having ample time and a myriad of potential causes to choose from, these executives intuitively reached for their Ockham’s razor. In the same way, we all rush to blame a sputtering economy on the president, a loss on a kicker’s missed field goal, or polarization on the other side.

Sunday, July 10, 2022

Situational factors shape moral judgements in the trolley dilemma in Eastern, Southern and Western countries in a culturally diverse sample

Bago, B., Kovacs, M., Protzko, J. et al. 
Nat Hum Behav (2022).
https://doi.org/10.1038/s41562-022-01319-5

Abstract

The study of moral judgements often centres on moral dilemmas in which options consistent with deontological perspectives (that is, emphasizing rules, individual rights and duties) are in conflict with options consistent with utilitarian judgements (that is, following the greater good based on consequences). Greene et al. (2009) showed that psychological and situational factors (for example, the intent of the agent or the presence of physical contact between the agent and the victim) can play an important role in moral dilemma judgements (for example, the trolley problem). Our knowledge is limited concerning both the universality of these effects outside the United States and the impact of culture on the situational and psychological factors affecting moral judgements. Thus, we empirically tested the universality of the effects of intent and personal force on moral dilemma judgements by replicating the experiments of Greene et al. in 45 countries from all inhabited continents. We found that personal force and its interaction with intention exert influence on moral judgements in the US and Western cultural clusters, replicating and expanding the original findings. Moreover, the personal force effect was present in all cultural clusters, suggesting it is culturally universal. The evidence for the cultural universality of the interaction effect was inconclusive in the Eastern and Southern cultural clusters (depending on exclusion criteria). We found no strong association between collectivism/individualism and moral dilemma judgements.

From the Discussion

In this research, we replicated the design of Greene et al. using a culturally diverse sample across 45 countries to test the universality of their results. Overall, our results support the proposition that the effect of personal force on moral judgements is likely culturally universal. This finding makes it plausible that the personal force effect is influenced by basic cognitive or emotional processes that are universal for humans and independent of culture. Our findings regarding the interaction between personal force and intention were more mixed. We found strong evidence for the interaction of personal force and intention among participants coming from Western countries regardless of familiarity and dilemma context (trolley or speedboat), fully replicating the results of Greene et al. However, the evidence was inconclusive among participants from Eastern countries in all cases. Additionally, this interaction result was mixed for participants from countries in the Southern cluster. We only found strong enough evidence when people familiar with these dilemmas were included in the sample and only for the trolley (not speedboat) dilemma.

Our general observation is that the size of the interaction was smaller on the speedboat dilemmas in every cultural cluster. It is yet unclear whether this effect is caused by some deep-seated (and unknown) differences between the two dilemmas (for example, participants experiencing smaller emotional engagement in the speedboat dilemmas that changes response patterns) or by some unintended experimental confound (for example, an effect of the order of presentation of the dilemmas).

Wednesday, June 8, 2022

Humans first: Why people value animals less than humans

L. Caviola, S. Schubert, G. Kahane, & N. S. Faber
Cognition
Volume 225, August 2022, 105139

Abstract

People routinely give humans moral priority over other animals. Is such moral anthropocentrism based in perceived differences in mental capacity between humans and non-humans or merely because humans favor other members of their own species? We investigated this question in six studies (N = 2217). We found that most participants prioritized humans over animals even when the animals were described as having equal or more advanced mental capacities than the humans. This applied to both mental capacity at the level of specific individuals (Studies 1a-b) and at the level typical for the respective species (Study 2). The key driver behind moral anthropocentrism was thus mere species-membership (speciesism). However, all else equal, participants still gave more moral weight to individuals with higher mental capacities (individual mental capacity principle), suggesting that the belief that humans have higher mental capacities than animals is part of the reason that they give humans moral priority. Notably, participants found mental capacity more important for animals than for humans—a tendency which can itself be regarded as speciesist. We also explored possible sub-factors driving speciesism. We found that many participants judged that all individuals (not only humans) should prioritize members of their own species over members of other species (species-relativism; Studies 3a-b). However, some participants also exhibited a tendency to see humans as having superior value in an absolute sense (pro-human species-absolutism, Studies 3–4). Overall, our work demonstrates that speciesism plays a central role in explaining moral anthropocentrism and may be itself divided into multiple sub-factors.

From the General Discussion

The distal sources of moral anthropocentrism

So far, we have discussed how the factors of moral anthropocentrism are related to each other. We now turn to briefly discuss what ultimate factors may explain moral anthropocentrism, though at present there is little evidence that directly bears on this question. However, evolutionary considerations suggest a preliminary, even if inevitably speculative, account of the ultimate sources of moral anthropocentrism. Such an explanation could also shed light on the role of the sub-factors of speciesism.

There is extensive evidence that people categorize individuals into different groups (cf. Tajfel, Billig, Bundy, & Flament, 1971), identify with their own group (Hornsey, 2008; Tajfel & Turner, 1979), and prioritize members of their ingroup over members of their outgroup (Balliet, Wu, & De Dreu, 2014; Crimston et al., 2016; Fu et al., 2012; Sherif, 1961; Yamagishi & Kiyonari, 2000; for bounded generalized reciprocity theory, cf. Yamagishi & Mifune, 2008). Ingroup favoritism is expressed in many different contexts. People have, for example, a tendency to favor others who share their ethnicity, nationality, religion, or political affiliation. (Rand et al., 2009; Whitt & Wilson, 2007). It has been argued that ingroup favoritism is an innate tendency since it can promote safety and help to encourage mutual cooperation among ingroup members (Gaertner & Insko, 2000). It seems, therefore, that there are good reasons to assume that speciesism is a form of ingroup favoritism analogous to ingroup favoritism among human groups.

While typical human ingroups would be far smaller than humanity itself, our similarity to other humans would be salient in contexts where a choice needs to be made between a human and a non-human. Since the differences between humans and animals are perceived as vast—in terms of biology, physical appearance, mental capacities, and behavior—and the boundaries between the groups so wide and clear, one would expect ingroup favoritism between humans and animals to be particularly strong. Indeed, research suggests that perceived similarity with outgroup members can reduce ingroup favoritism—as long as they are seen as non-threatening (Henderson-King, Henderson-King, Zhermer, Posokhova, & Chiker, 1997). Similarly, it has been shown that people have more positive reactions towards animals that are perceived as biologically, physically, mentally, or behaviorally more similar to humans than animals that are dissimilar (Burghardt & Herzog, 1989; Kellert & Berry, 1980).

Monday, January 24, 2022

Children Prioritize Humans Over Animals Less Than Adults Do

Wilks M, Caviola L, Kahane G, Bloom P.
Psychological Science. 2021;32(1):27-38. 
doi:10.1177/0956797620960398

Abstract

Is the tendency to morally prioritize humans over animals weaker in children than adults? In two preregistered studies (total N = 622), 5- to 9-year-old children and adults were presented with moral dilemmas pitting varying numbers of humans against varying numbers of either dogs or pigs and were asked who should be saved. In both studies, children had a weaker tendency than adults to prioritize humans over animals. They often chose to save multiple dogs over one human, and many valued the life of a dog as much as the life of a human. Although they valued pigs less, the majority still prioritized 10 pigs over one human. By contrast, almost all adults chose to save one human over even 100 dogs or pigs. Our findings suggest that the common view that humans are far more morally important than animals appears late in development and is likely socially acquired.

From the Discussion section

What are the origins of this tendency? One possibility is that it is an unlearned preference. For much of human history, animals played a central role in human life—whether as a threat or as a resource. It therefore seems possible that humans would develop distinctive psychological mechanisms for thinking about animals. Even if there are no specific cognitive adaptations for thinking about animals, it is hardly surprising that humans prefer humans over animals—similar to their preference for tribe members over strangers. Similarly, given that in-group favoritism in human groups (e.g., racism, sexism, minimal groups) tends to emerge as early as preschool years (Buttelmann & Böhm, 2014), one would expect that a basic tendency to prioritize humans over animals also emerges early.

But we would suggest that the much stronger tendency to prioritize humans over animals in adults has a different source that, given the lack of correlation between age and speciesism in children, emerges late in development. Adolescents may learn and internalize the socially held speciesist notion—or ideology—that humans are morally special and deserve full moral status, whereas animals do not. 

Sunday, November 7, 2021

Moral Judgment as Categorization

McHugh, C., McGann, M., Igou, E. R., & Kinsella, E. L. (2021).
Perspectives on Psychological Science
https://doi.org/10.1177/1745691621990636

Abstract

Observed variability and complexity of judgments of "right" and "wrong" cannot be readily accounted for within extant approaches to understanding moral judgment. In response to this challenge, we present a novel perspective on categorization in moral judgment. Moral judgment as categorization (MJAC) incorporates principles of category formation research while addressing key challenges of existing approaches to moral judgment. People develop skills in making context-relevant categorizations. They learn that various objects (events, behaviors, people, etc.) can be categorized as morally right or wrong. Repetition and rehearsal result in reliable, habitualized categorizations. According to this skill-formation account of moral categorization, the learning and the habitualization of the forming of moral categories occur within goal-directed activity that is sensitive to various contextual influences. By allowing for the complexity of moral judgments, MJAC offers greater explanatory power than existing approaches while also providing opportunities for a diverse range of new research questions.

Summarizing the Differences Between MJAC and Existing Approaches

Above, we have outlined how MJAC differs from existing theories in terms of assumptions and explanation. These theories make assumptions based on content, and this results in essentialist theorizing: either implicit or explicit attempts to define an “essence” of morality. In contrast, MJAC rejects essentialism, instead assuming that moral categorizations are dynamical, context-dependent, and occurring as part of goal-directed activity. Each of the theories discussed is explicitly or implicitly (e.g., Schein & Gray, 2018, p. 41) based on dual-process assumptions, with related dichotomous assumptions regarding the cognitive mechanisms (where these mechanisms are specified). MJAC does not assume distinct, separable processes, instead adopting type-token interpretation, occurring as part of goal-directed activity (Barsalou, 2003, 2017), as the mechanism that underlies moral categorization. These differences in assumptions underlie the differences in explanation discussed above.

Sunday, September 26, 2021

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgments of Immoral Actors

Walker, A. C., et al. (2020, March 24).

Abstract

Across six studies (N = 2,646), we demonstrate the role that perceptions of predictability play in judgments of moral character, finding that people demonstrate a moral preference for more predictable immoral actors. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action, along with an additional immoral action (e.g., theft), for a well-understood immoral reason (Studies 1-4). Additionally, agents performing an immoral action for an unintelligible reason were judged as less predictable and less moral compared to agents performing the same immoral act for an unstated reason (Studies 3-5). This moral preference persisted when participants viewed video footage of each agent’s immoral action (Study 5). Finally, agents performing immoral actions in an unusual way were judged as less predictable and less moral than those performing the same actions in a more common manner (Study 6). The present research demonstrates how immoral actions performed without a clear motive or in an unpredictable way are perceived to be especially indicative of poor moral character. In revealing people’s moral preference for predictable immoral actors, we propose that perceptions of predictability play an important, yet overlooked, role in judgments of moral character. Furthermore, we propose that predictability influences judgments of moral character for its ultimate role in reducing social uncertainty and facilitating cooperation with trustworthy individuals and discuss how these findings may be accommodated by person-centered theories of moral judgment and theories of morality-as-cooperation.

From the Discussion

From traditional act-based perspectives (e.g., deontology and utilitarianism; Kant, 1785/1959; Mill, 1861/1998) this moral preference may appear puzzling, as participants judged actors causing more harm and violating more moral rules as more moral. Nevertheless, recent work suggests that people view actions not as the endpoint of moral evaluation, but as a source of information for assessing the moral character of those who perform them (Tannenbaum et al., 2011; Uhlmann et al., 2013). From this person-centered perspective (Pizarro & Tannenbaum, 2011; Uhlmann et al., 2015), a moral preference for more predictable immoral actors can be understood as participants judging the same immoral action (e.g., assault) as more indicative of negative character traits (e.g., a lack of empathy) when performed without an intelligible motive. That is, a person assaulting a stranger seemingly without reason or in an unusual manner (e.g., with a frozen fish) may be viewed as a more inherently unstable, violent, and immoral person compared to an individual performing an identical assault for a well-understood reason (e.g., to escape punishment for a crime in progress). Such negative character assessments may lead unpredictable immoral actors to be considered a greater risk for causing future harms of uncertain severity to potentially random victims. Consistent with these claims, past work has shown that people judge those performing harmless-but-offensive acts (e.g., masturbating inside a dead chicken) as not only possessing more negative character traits compared to others performing more harmful acts (e.g., theft), but also as likely to engage in more harmful actions in the future (Chakroff et al., 2017; Uhlmann & Zhu, 2014).

Sunday, July 18, 2021

‘They’re Not True Humans’: Beliefs About Moral Character Drive Categorical Denials of Humanity

Phillips, B. (2021, May 29). 

Abstract

In examining the cognitive processes that drive dehumanization, laboratory-based research has focused on non-categorical denials of humanity. Here, we examine the conditions under which people are willing to categorically deny that someone else is human. In doing so, we argue that people harbor a dual character concept of humanity. Research has found that dual character concepts have two independent sets of criteria for their application, one of which is normative. Across four experiments, we found evidence that people deploy one criterion according to which being human is a matter of being a Homo sapiens; as well as a normative criterion according to which being human is a matter of possessing a deep-seated commitment to do the morally right thing. Importantly, we found that people are willing to affirm that someone is human in the species sense, but deny that they are human in the normative sense, and vice versa. These findings suggest that categorical denials of humanity are not confined to extreme cases outside the laboratory. They also suggest a solution to “the paradox of dehumanization.”

(cut)

6.2. The paradox of dehumanization

The findings reported here also suggest a solution to the paradox of dehumanization. Recall that in paradigmatic cases of dehumanization, such as the Holocaust, the perpetrators tend to attribute certain uniquely human traits to their victims. For example, the Nazis frequently characterized Jewish people as criminals and traitors. They also treated them as moral agents, and subjected them to severe forms of punishment and humiliation (see Gutman and Berenbaum, 1998). Criminality, treachery, and moral agency are not capacities that we tend to attribute to nonhuman animals. Thus, can we really say that the Nazis thought of their victims as nonhuman? In responding to this paradox, some theorists have suggested that the perpetrators in these paradigmatic cases do not, in fact, think of their victims as nonhuman (see Appiah, 2008; Bloom, 2017; Manne, 2016, 2018, chapter 5; Over, 2020; Rai et al., 2017). Other theorists have suggested that the perpetrators harbor inconsistent representations of their victims, simultaneously thinking of them as both human and subhuman (Smith, 2016, 2020). Our findings suggest a third possibility: namely, that the perpetrators harbor a dual character concept of humanity, categorizing their victims as human in one sense, but denying that they are human in another sense. For example, it is true that the Nazis attributed certain uniquely human traits to their victims, such as criminality. However, when categorizing their victims as evil criminals, the Nazis may have been thinking of them as nonhuman in the normative sense, while recognizing them as human in the species sense (for a relevant discussion, see Steizinger, 2018). This squares with the fact that when the Nazis likened Jewish people to certain animals, such as rats, this often took on a moralizing tone. For example, in an antisemitic book entitled The Eternal Jew (Nachfolger, 1937), Jewish neighborhoods in Berlin were described as “breeding grounds of criminal and political vermin.” Similarly, when the Nazis referred to Jews as “subhumans,” they often characterized them as bad moral agents. For example, as was mentioned above, Goebbels described Bolshevism as “the declaration of war by Jewish-led international subhumans against culture itself.” Similarly, in one 1943 Nazi pamphlet, Marxist values are described as appealing to subhumans, while liberalist values are described as “allowing the triumph of subhumans” (Anonymous, 1943, chapter 1).

Friday, March 12, 2021

The Psychology of Existential Risk: Moral Judgments about Human Extinction

Schubert, S., Caviola, L. & Faber, N.S. 
Sci Rep 9, 15100 (2019). 
https://doi.org/10.1038/s41598-019-50145-9

Abstract

The 21st century will likely see growing risks of human extinction, but currently, relatively small resources are invested in reducing such existential risks. Using three samples (UK general public, US general public, and UK students; total N = 2,507), we study how laypeople reason about human extinction. We find that people think that human extinction needs to be prevented. Strikingly, however, they do not think that an extinction catastrophe would be uniquely bad relative to near-extinction catastrophes, which allow for recovery. More people find extinction uniquely bad when (a) asked to consider the extinction of an animal species rather than humans, (b) asked to consider a case where human extinction is associated with less direct harm, and (c) they are explicitly prompted to consider long-term consequences of the catastrophes. We conclude that an important reason why people do not find extinction uniquely bad is that they focus on the immediate death and suffering that the catastrophes cause for fellow humans, rather than on the long-term consequences. Finally, we find that (d) laypeople—in line with prominent philosophical arguments—think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.

Discussion

Our studies show that people find that human extinction is bad, and that it is important to prevent it. However, when presented with a scenario involving no catastrophe, a near-extinction catastrophe and an extinction catastrophe as possible outcomes, they do not see human extinction as uniquely bad compared with non-extinction. We find that this is partly because people feel strongly for the victims of the catastrophes, and therefore focus on the immediate consequences of the catastrophes. The immediate consequences of near-extinction are not that different from those of extinction, so this naturally leads them to find near-extinction almost as bad as extinction. Another reason is that they neglect the long-term consequences of the outcomes. Lastly, their empirical beliefs about the quality of the future make a difference: telling them that the future will be extraordinarily good makes more people find extinction uniquely bad.

Thus, when asked in the most straightforward and unqualified way, participants do not find human extinction uniquely bad. 

Sunday, February 21, 2021

Moral Judgment as Categorization (MJAC)

McHugh, C., et al. (2019, September 17).
https://doi.org/10.31234/osf.io/72dzp

Abstract

Observed variability and complexity of judgments of 'right' and 'wrong' cannot currently be readily accounted for within extant approaches to understanding moral judgment. In response to this challenge, we present a novel perspective on categorization in moral judgment. Moral judgment as categorization (MJAC) incorporates principles of category formation research while addressing key challenges to existing approaches to moral judgment. People develop skills in making context-relevant categorizations. That is, they learn that various objects (events, behaviors, people, etc.) can be categorized as morally ‘right’ or ‘wrong’. Repetition and rehearsal result in reliable, habitualized categorizations. According to this skill-formation account of moral categorization, the learning and the habitualization of the forming of moral categories occur within goal-directed activity that is sensitive to various contextual influences. By allowing for the complexity of moral judgments, MJAC offers greater explanatory power than existing approaches, while also providing opportunities for a diverse range of new research questions.

Conclusion

It is not terribly simple, the good guys are not always stalwart and true, and the bad guys are not easily distinguished by their pointy horns or black hats. Knowing right from wrong is not a simple process of applying an abstract principle to a particular situation. Decades of research in moral psychology have shown that our moral judgments can vary from one situation to the next, while a growing body of evidence indicates that people cannot always provide reasons for their moral judgments. Understanding the making of moral judgments requires accounting for the full complexity and variability of our moral judgments. MJAC provides a framework for studying moral judgment that incorporates this dynamism and context-dependency into its core assumptions. We have argued that this sensitivity to the dynamical and context-dependent nature of moral judgments provides MJAC with superior explanations for known moral phenomena while simultaneously providing MJAC with the power to explain a greater and more diverse range of phenomena than existing approaches.

Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements

Wiegmann, A., & Horvath, J. (2020, December 22).

Abstract

According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.

Conclusion

We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example, Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, however, we need not worry about just a few cases and a few human biases—and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Monday, February 15, 2021

Response time modelling reveals evidence for multiple, distinct sources of moral decision caution

Andrejević, M., et al. (2020, November 13).

Abstract

People are often cautious in delivering moral judgments of others’ behaviours, as falsely accusing others of wrongdoing can be costly for social relationships. Caution might further be present when making judgements in information-dynamic environments, as contextual updates can change our minds. This study investigated the processes with which moral valence and context expectancy drive caution in moral judgements. Across two experiments, participants (N = 122) made moral judgements of others’ sharing actions. Prior to judging, participants were informed whether contextual information regarding the deservingness of the recipient would follow. We found that participants slowed their moral judgements when judging negatively valenced actions and when expecting contextual updates. Using a diffusion decision model framework, these changes were explained by shifts in drift rate and decision bias (valence) and boundary setting (context), respectively. These findings demonstrate how moral decision caution can be decomposed into distinct aspects of the unfolding decision process.

From the Discussion

Our finding that participants slowed their judgments when expecting contextual information is consistent with previous research showing that people are more cautious when aware that they are more prone to making mistakes. Notably, previous research has demonstrated this effect for decision mistakes in tasks in which people are not given additional information or a chance to change their minds. The current findings show that this effect also extends to dynamic decision-making contexts, in which learning additional information can lead to changes of mind. Crucially, here we show that this type of caution can be explained by the widening of the decision boundary separation in a process model of decision-making.
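For readers unfamiliar with the diffusion decision model, the parameters named above (drift rate, starting-point bias, boundary separation) are easiest to grasp in simulation. The sketch below is a generic illustration, not the authors' fitted model; all parameter values are invented.

```python
import numpy as np

def simulate_ddm(drift, boundary, bias=0.5, noise=1.0, dt=0.001,
                 max_t=5.0, rng=None):
    """Simulate one diffusion-decision trial.

    Evidence starts at bias * boundary and accumulates noisily until it
    crosses the upper boundary (one response) or the lower boundary at 0
    (the other response). Returns (response_time, hit_upper_boundary).
    """
    rng = rng or np.random.default_rng()
    x, t = bias * boundary, 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= boundary

rng = np.random.default_rng(0)
for a in (1.0, 1.5):  # narrow vs. widened boundary separation
    rts = [simulate_ddm(drift=0.8, boundary=a, rng=rng)[0]
           for _ in range(2000)]
    print(f"boundary separation {a}: mean RT = {np.mean(rts):.3f} s")
```

Under these made-up settings, the wider boundary produces noticeably longer mean response times at the same drift rate, which is the signature of increased decision caution that the authors attribute to expecting contextual updates.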

Wednesday, February 10, 2021

Beyond Moral Dilemmas: The Role of Reasoning in Five Categories of Utilitarian Judgment

F. Jaquet & F. Cova
Cognition
Volume 209, April 2021, 104572

Abstract

Over the past two decades, the study of moral reasoning has been heavily influenced by Joshua Greene’s dual-process model of moral judgment, according to which deontological judgments are typically supported by intuitive, automatic processes while utilitarian judgments are typically supported by reflective, conscious processes. However, most of the evidence gathered in support of this model comes from the study of people’s judgments about sacrificial dilemmas, such as Trolley Problems. To what extent does this model generalize to other debates in which deontological and utilitarian judgments conflict, such as the existence of harmless moral violations, the difference between actions and omissions, the extent of our duties of assistance, and the appropriate justification for punishment? To find out, we conducted a series of five studies on the role of reflection in these kinds of moral conundrums. In Study 1, participants were asked to answer under cognitive load. In Study 2, participants had to answer under a strict time constraint. In Studies 3 to 5, we sought to promote reflection through exposure to counter-intuitive reasoning problems or direct instruction. Overall, our results offer strong support to the extension of Greene’s dual-process model to moral debates on the existence of harmless violations and partial support to its extension to moral debates on the extent of our duties of assistance.

From the Discussion Section

The results of Study 1 led us to conclude that certain forms of utilitarian judgments require more cognitive resources than their deontological counterparts. The results of Studies 2 and 5 suggest that making utilitarian judgments also tends to take more time than making deontological judgments. Finally, because our manipulation was unsuccessful, Studies 3 and 4 do not allow us to conclude anything about the intuitiveness of utilitarian judgments. In Study 5, we were nonetheless successful in manipulating participants’ reliance on their intuitions (in the Intuition condition), but this did not seem to have any effect on the rate of utilitarian judgments. Overall, our results allow us to conclude that, compared to deontological judgments, utilitarian judgments tend to rely on slower and more cognitively demanding processes. But they do not allow us to conclude anything about characteristics such as automaticity or accessibility to consciousness: for example, while solving a very difficult math problem might take more time and require more resources than solving a mildly difficult math problem, there is no reason to think that the former process is more automatic and more conscious than the latter—though it will clearly be experienced as more effortful. Moreover, although one could be tempted to conclude that our data show that, as predicted by Greene, utilitarian judgment is experienced as more effortful than deontological judgment, we collected no data about such experience—only data about the role of cognitive resources in utilitarian judgment. Though it is reasonable to think that a process that requires more resources will be experienced as more effortful, we should keep in mind that this conclusion is based on an inferential leap.

Friday, January 29, 2021

Moral psychology of sex robots: An experimental study

M. Koverola, et al.
Journal of Behavioral Robotics
Volume 11, Issue 1

Abstract

The idea of sex with robots seems to fascinate the general public, raising both enthusiasm and revulsion. We ran two experimental studies (Ns = 172 and 260) where we compared people’s reactions to variants of stories about a person visiting a bordello. Our results show that paying for the services of a sex robot is condemned less harshly than paying for the services of a human sex worker, especially if the payer is married. We have for the first time experimentally confirmed that people are somewhat unsure about whether using a sex robot while in a committed monogamous relationship should be considered as infidelity. We also shed light on the psychological factors influencing attitudes toward sex robots, including disgust sensitivity and interest in science fiction. Our results indicate that sex with a robot is indeed genuinely considered as sex, and a sex robot is genuinely seen as a robot; thus, we show that standard research methods on sexuality and robotics are also applicable in research on sex robotics.

(cut)

Conclusion

Our results show that people condemn a married person less harshly if they pay for a robot sex worker than for a human sex worker. This likely reflects the fact that many people do not consider sex with a robot as infidelity or consider it as “cheating, but less so than with a human person”. These results therefore function as a stepping-stone into new avenues of interesting research that might be appealing to evolutionary and moral psychologists alike. Most likely, sociologists and market researchers will also be interested in increasing our understanding regarding the complex relations between humans and members of new ontological categories (robots, artificial intelligences (AIs), etc.). Future research will offer new possibilities to understand both human sexual and moral cognition by focusing on how humans relate to sexual relationships with androids beyond mere fantasies produced by science fiction like Westworld or Blade Runner. As sex robots enter mass production in the near future, public opinion regarding moral attitudes toward sex with robots will presumably stabilize.


Monday, December 14, 2020

Should you save the more useful? The effect of generality on moral judgments about rescue and indirect effects

Caviola, L., Schubert, S., & Mogensen, A. (2020, October 23).

Abstract

Across eight experiments (N = 2,310), we studied whether people would prioritize rescuing individuals who may be thought to contribute more to society. We found that participants were generally dismissive of general rules that prioritize more socially beneficial individuals, such as doctors instead of unemployed people. By contrast, participants were more supportive of one-off decisions to save the life of a more socially beneficial individual, even when such cases were the same as those covered by the rule. This generality effect occurred robustly even when controlling for various factors. It occurred when the decision-maker was the same in both cases, when the pairs of people differing in the extent of their indirect social utility were varied, when the scenarios were varied, when the participant samples came from different countries, and when the general rule only covered cases that are exactly the same as the situation described in the one-off condition. The effect occurred even when the general rule was introduced via a concrete precedent case. Participants’ tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when they evaluated the two proposals jointly as opposed to separately. Finally, the effect also occurred in sacrificial moral dilemmas, suggesting it is a more general phenomenon in certain moral contexts. We discuss possible explanations of the effect, including concerns about negative consequences of the rule and a deontological aversion against making difficult trade-off decisions unless they are absolutely necessary.

General Discussion

Across our studies we found evidence for a generality effect: participants were more supportive of a proposal to prioritize people who are more beneficial to society than others if it applies to a concrete one-off situation than if it describes a general rule. The effect held robustly even when controlling for various factors. It occurred even when the decision-maker was the same in both cases (Study 2), when the pairs of people differing in the extent of their indirect social utility were varied (Study 3), when the scenarios were varied (Study 3, Study 6), when the participant samples came from different countries (Study 3), and when the rule only entails cases that are exactly the same as the one-off case (Study 6). The effect also occurred when the general rule was introduced via a concrete precedent case (Studies 4 and 6). The tendency to be more supportive of the one-off proposal than the general rule was significantly reduced when participants evaluated the two proposals jointly as opposed to separately (Study 7). Finally, we found that the effect also occurs in sacrificial moral dilemmas (Study 8), suggesting that it is a more general phenomenon in moral contexts.

Saturday, October 10, 2020

A Theory of Moral Praise

Anderson, R. A., Crockett, M. J., & Pizarro, D.
Trends in Cognitive Sciences
Volume 24, Issue 9, September 2020, Pages 694-703

Abstract

How do people judge whether someone deserves moral praise for their actions? In contrast to the large literature on moral blame, work on how people attribute praise has, until recently, been scarce. However, there is a growing body of recent work from a variety of subfields in psychology (including social, cognitive, developmental, and consumer) suggesting that moral praise is a fundamentally unique form of moral attribution and not simply the positive moral analogue of blame attributions. A functional perspective helps explain asymmetries in blame and praise: we propose that while blame is primarily for punishment and signaling one’s moral character, praise is primarily for relationship building.

Concluding Remarks

Moral praise, we have argued, is a psychological response that, like other forms of moral judgment, serves a particular functional role in establishing social bonds, encouraging cooperative alliances, and promoting good behavior. Through this lens, seemingly perplexing asymmetries between judgments of blame for immoral acts and judgments of praise for moral acts can be understood as consistent with the relative roles, and associated costs, played by these two kinds of moral judgments. While both blame and praise judgments require that an agent played some causal and intentional role in the act being judged, praise appears to be less sensitive to these features and more sensitive to general features of an individual’s stable, underlying character traits. In other words, we believe that the growth of studies on moral praise in the past few years demonstrates that, when deciding whether or not doling out praise is justified, individuals seem to care less about how the action was performed and far more about what kind of person performed the action. We suggest that future research on moral attribution should seek to complement the rich literature examining moral blame by examining potentially unique processes engaged in moral praise, guided by an understanding of their differing costs and benefits, as well as their potentially distinct functional roles in social life.


Monday, October 5, 2020

Kinship intensity and the use of mental states in moral judgment across societies

C. M. Curtin, et al.
Evolution and Human Behavior
Volume 41, Issue 5, September 2020, Pages 415-429

Abstract

Decades of research conducted in Western, Educated, Industrialized, Rich, & Democratic (WEIRD) societies have led many scholars to conclude that the use of mental states in moral judgment is a human cognitive universal, perhaps an adaptive strategy for selecting optimal social partners from a large pool of candidates. However, recent work from a more diverse array of societies suggests there may be important variation in how much people rely on mental states, with people in some societies judging accidental harms just as harshly as intentional ones. To explain this variation, we develop and test a novel cultural evolutionary theory proposing that the intensity of kin-based institutions will favor less attention to mental states when judging moral violations. First, to better illuminate the historical distribution of the use of intentions in moral judgment, we code and analyze anthropological observations from the Human Relations Area Files. This analysis shows that notions of strict liability—wherein the role for mental states is reduced—were common across diverse societies around the globe. Then, by expanding an existing vignette-based experimental dataset containing observations from 321 people in a diverse sample of 10 societies, we show that the intensity of a society's kin-based institutions can explain a substantial portion of the population-level variation in people's reliance on intentions in three different kinds of moral judgments. Together, these lines of evidence suggest that people's use of mental states has coevolved culturally to fit their local kin-based institutions. We suggest that although reliance on mental states has likely been a feature of moral judgment in human communities over historical and evolutionary time, the relational fluidity and weak kin ties of today's WEIRD societies position these populations' psychology at the extreme end of the global and historical spectrum.

General Discussion

We have argued that some of the variation in the use of mental states in moral judgment can be explained as a psychological calibration to the social incentives, informational constraints, and cognitive demands of kin-based institutions, which we have assessed using our construct of kinship intensity. Our examination of ethnographic accounts of norms that diminish the importance of mental states reveals that these are likely common across the ethnographic record, while our analysis of data on moral judgments of hypothetical violations from a diverse sample of ten societies indicates that kinship intensity is associated with a reduced tendency to rely on intentions in moral judgment. Together, these lines of ethnographic and psychological inquiry provide evidence that (i) the heavy reliance of contemporary, WEIRD populations on intentions is likely neither globally nor historically representative, and (ii) kinship intensity may explain some of the population-level variation in the use of mental-state reasoning in moral judgment.


Wednesday, August 26, 2020

Morality justifies motivated reasoning in the folk ethics of belief

Cusimano, C., & Lombrozo, T. (2020, July 20).
https://doi.org/10.31234/osf.io/7r5yb

Abstract

When faced with a dilemma between believing what is supported by an impartial assessment of the evidence (e.g., that one’s friend is guilty of a crime) and believing what would better fulfill a moral obligation (e.g., that the friend is innocent), people often believe in line with the latter. But is this how people think beliefs ought to be formed? We addressed this question across three studies and found that, across a diverse set of everyday situations, people treat moral considerations as legitimate grounds for believing propositions that are unsupported by objective, evidence-based reasoning. We further document two ways in which moral evaluations affect how people prescribe beliefs to others. First, the moral value of a belief affects the evidential threshold required to believe, such that morally good beliefs demand less evidence than morally bad beliefs. Second, people sometimes treat the moral value of a belief as an independent justification for belief, and so sometimes prescribe evidentially poor beliefs to others. Together these results show that, in the folk ethics of belief, morality can justify and demand motivated reasoning.

From the Discussion

Additionally, participants reported that moral concerns affected the standards of evidence that apply to belief, such that morally-desirable beliefs require less evidence than morally-undesirable beliefs. In Study 1, participants reported that, relative to an impartial observer with the same information, someone with a moral reason to be optimistic had a wider range of beliefs that could be considered “consistent with” and “based on” the evidence. Critically, however, the broader range of beliefs that were consistent with the same evidence were only beliefs that were more morally desirable; morally undesirable beliefs were not more consistent with the evidence. In Studies 2 and 3, participants agreed more strongly that someone who had a moral reason to adopt a desirable belief had sufficient evidence to do so compared to someone who lacked a moral reason, even though they formed the same belief on the basis of the same evidence. Likewise, on average, participants more often judged that someone who adopted the morally undesirable belief had insufficient evidence for doing so, relative to someone who lacked a moral reason (again, even though they formed the same belief on the basis of the same evidence). Finally, in Study 2 (though not in Study 3), these judgments replicated using an indirect measure of evidentiary quality; namely, attributions of knowledge. In sum, these findings document that one reason people may prescribe a motivated belief to someone is that morality changes how much evidence they consider necessary to hold the belief in an evidentially sound way.

Editor's Note: Huge implications for psychotherapy.

Friday, April 10, 2020

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgment

A. Walker, M. Turpin, et al.
PsyArXiv Preprints
Updated 6 April 2020

Abstract

Across four studies (N = 1,806 US residents), we demonstrate the role perceptions of predictability play in judgments of moral character, finding that less predictable agents were also judged as less moral. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action for a well-understood immoral reason (Studies 1-3). Additionally, agents performing an action in an unusual way were judged as less predictable and less moral than those performing the same action in a common manner (Study 4). These results challenge monist theories of moral psychology, which reduce morality to a single dimension (e.g., harm), as well as pluralist accounts that fail to consider the role predictability plays in moral judgments. We propose that predictability influences judgments of moral character for its ultimate role in facilitating cooperation and discuss how these findings may be accommodated by theories of morality-as-cooperation.

From the General Discussion

Supporting the idea that judgments of predictability guide judgments of moral character, we show that people judge agents they perceive as less predictable to be less moral. Those signalling unpredictability with their actions, either by acting without an intelligible motive (Studies 1-3) or by performing an immoral act in an unusual manner (Study 4), are consistently viewed as possessing an especially poor moral character.

Despite its importance for cooperation, and therefore moral judgments (Curry, 2016; Curry et al., 2019; Greene, 2013; Haidt, 2012; Rai & Fiske, 2011; Tomasello & Vaish, 2013), dominant theories of moral psychology have not explicitly considered the role predictability plays in judgments of moral character. Here we presented novel scenarios for which many popular theoretical frameworks fail to accurately capture participants’ moral impressions.


Wednesday, February 19, 2020

How to talk someone out of bigotry

Brian Resnick
vox.com
Originally published 29 Jan 2020

Here is an excerpt:

Topping and dozens of other canvassers were a part of that 2016 effort. It was an important study: Not only has social science found very few strategies that work, in experiments, to change minds on issues of prejudice, but even fewer tests of those strategies have occurred in the real world.

Typically, the conversations begin with the canvasser asking the voter for their opinion on a topic, like abortion access, immigration, or LGBTQ rights. Canvassers (who may or may not be members of the impacted community) listen nonjudgmentally. They don’t say if they are pleased or hurt by the response. They are supposed “to appear genuinely interested in hearing the subject ruminate on the question,” as Broockman and Kalla’s latest study instructions read.

The canvassers then ask if the voters know anyone in the affected community, and ask if they relate to the person’s story. If they don’t, and even if they do, they’re asked a question like, “When was a time someone showed you compassion when you really needed it?” to get them to reflect on their experience when they might have felt something similar to the people in the marginalized community.

The canvassers also share their own stories: about being an immigrant, about being a member of the LGBTQ community, or about just knowing people who are.

It’s a type of conversation that’s closer to what a psychotherapist might have with a patient than a typical political argument. (One clinical therapist I showed it to said it sounded a bit like “motivational interviewing,” a technique used to help clients work through ambivalent feelings.) It’s not about listing facts or calling people out on their prejudicial views. It’s about sharing and listening, all the while nudging people to be analytical and think about their shared humanity with marginalized groups.
