Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Blame.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, about how sexbots will alter intimate human relations, are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding an imaginative speculation about the possibilities of AI with a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Sunday, September 10, 2023

Seeing and sanctioning structural unfairness

Flores-Robles, G., & Gantman, A. P. (2023, June 28).
PsyArXiv

Abstract

People tend to explain wrongdoing as the result of a bad actor or bad system. In five studies (four U.S. online convenience samples, one U.S. representative sample), we tested whether the way people understand unfairness affects how they sanction it. In Pilot 1A (N = 40), people interpreted unfair offers in an economic game as the result of a bad actor (vs. unfair rules), unless incentivized (Pilot 1B, N = 40), which, in Study 1 (N = 370), predicted costly punishment of individuals (vs. changing unfair rules). In Studies 2 (N = 500) and 3 (N = 470, representative of age, gender, and ethnicity in the U.S.), we found that people paid to change the rules for the final round of the game (vs. punished individuals) when they were randomly assigned a bad system (vs. bad actor) explanation for prior identical unfair offers. Explanations for unfairness affect how people sanction it.

Statement of Relevance

Humans are facing massive problems including economic and social inequality. These problems are often framed in the media, and by friends and experts, as a problem either of individual action (e.g., racist beliefs) or of structures (e.g., discriminatory housing laws). The current research uses a context-free economic game to ask whether these explanations have any effect on what people think should happen next. We find that people tend to explain unfair offers in the game in terms of bad actors (unless incentivized), which is related to punishing individuals over changing the game itself. When people are told that the unfairness they witnessed was the result of a bad actor, they prefer to punish that actor; when they are told that the same unfair behavior is the result of unfair rules, they prefer to change the rules. Our understanding of the mechanisms of inequality affects how we want to sanction it.

My summary:

The article discusses how people tend to explain wrongdoing as the result of either a bad actor or a bad system. In essence, this is a human decision-making process. The authors conducted five studies to test whether the way people understand unfairness affects how they sanction it. They found that people are more likely to punish individuals for unfair behavior when they believe the behavior is the result of a bad actor, but more likely to try to change the system (or the rules) when they believe the behavior is the result of a bad system.

The authors argue that these findings have important implications for ethics, morality, and values. They suggest that we need to be more aware of how we explain unfairness, because our explanations can influence how we respond to it: how an individual frames the issue shapes both the solutions considered and the biases at play. They also suggest that we need to be more critical of the systems we live in, because these systems can create unfairness.

The article raises a number of ethical, moral, and value-related questions. For example, what is the responsibility of individuals to challenge unfair systems? What is the role of government in addressing structural unfairness? And what are the limits of individual and collective action in addressing unfairness?

The article does not provide easy answers to these questions. However, it does provide a valuable framework for thinking about unfairness and how we can respond to it.

Thursday, May 18, 2023

People Construe a Corporation as an Individual to Ascribe Responsibility in Cases of Corporate Wrongdoing

Sharma, N., Flores-Robles, G., & Gantman, A. P.
(2023, April 11). PsyArXiv

Abstract

In cases of corporate wrongdoing, it is difficult to assign blame across multiple agents who played different roles. We propose that people have dualist ideas of corporate hierarchies: with the boss as “the mind,” and the employee as “the body,” and the employee appears to carry out the will of the boss like the mind appears to will the body (Wegner, 2003). Consistent with this idea, three experiments showed that moral responsibility was significantly higher for the boss, unless the employee acted prior to, inconsistently with, or outside of the boss’s will. People even judge the actions of the employee as mechanistic (“like a billiard ball”) when their actions mirror the will of the boss. This suggests that the same features that tell us our minds cause our actions, also facilitate the sense that a boss has willed the behavior of an employee and is ultimately responsible for bad outcomes in the workplace.

From the General Discussion

Practical Implications

Our findings offer a number of practical implications for organizations. First, our research provides insight into how people currently make judgments of moral responsibility within an organization (and specifically, when a boss gives instructions to an employee). Second, our research provides insight into the decision-making process of whether to fire a boss-figure like a CEO (or other decision-maker) or invest in lasting change in organizational culture following an organizational wrongdoing. From a scapegoating perspective, replacing a CEO is not intended to produce lasting change in underlying organizational problems and signals a desire to maintain the status quo (Boeker, 1992; Shen & Cannella, 2002). Scapegoating may not always be in the best interest of investors. Previous research has shown that following financial misrepresentation, investors react positively only to CEO successions wherein the replacement comes from the outside, which serves as a costly signal of the firm’s understanding of the need for change (Gangloff et al., 2016). And so, by allocating responsibility to the CEO without creating meaningful change, organizations may lose investors. Finally, this research has implications for building public trust in organizations. Following the Wells Fargo scandal, two-thirds of Wells Fargo customers (65%) claimed they trusted their bank less, and about half of Wells Fargo customers (51%) were willing to switch to another bank if they perceived it to be more trustworthy (Business Wire, 2017). Thus, how organizations deal with wrongdoing (e.g., whether they fire individuals, create lasting change, or both) can influence public trust. If corporations want to build trust among the general public, and in doing so, create a larger customer base, they can look at how people understand and ascribe responsibility and consequently punish organizational wrongdoings.

Tuesday, December 20, 2022

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A.
Cognition, Volume 231, February 2023, 105323

Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent, and enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third-person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3), and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third-party motives.

General Discussion

In this paper, we identified a subset of rules, which are explicitly codified (e.g., in professional tennis, in an economic game, by the U.S. legal system), frequently violated, and rarely enforced. As a result, their apparent punishability is particularly ambiguous and subject to motivation. These rules show us that codified rules, which are meant to apply equally to all, can be used to sanction behaviors outside of their jurisdiction. We named this subclass of rules phantom rules and found evidence that people enforce them according to their desire to punish a different behavior (i.e., a social norm violation), recognize them in the U.S. legal system, and employ motivated reasoning to determine their punishability. We hypothesized and found, across behavioral and survey experiments, that phantom rules—rules where the descriptive norms of enforcement are low—seem enforceable, punishable, and legitimate only when one has an external active motivation to punish. Indeed, we found that phantom rules were judged to be more justifiably enforced and more morally wrong to violate when the person who broke the rule had also violated a social norm—unless they were also punished for that social norm violation. Together, we take this as evidence of the existence of phantom rules and the malleability of their apparent punishability via active (vs. satiated) punishment motivation.

The ambiguity of phantom rule enforcement makes it possible for them to serve a hidden function; they can be used to punish behavior outside of the purview of the official rules. Phantom rule violations are technically wrong but, on average, seen as less morally wrong. This means, for the most part, that people are unlikely to feel strongly when they see these rules violated, and indeed, people frequently violate phantom rules without consequence. This pattern fits well with previous work in experimental philosophy that shows that motivations can affect how we reason about what constitutes breaking a rule in the first place. For example, when rule breaking occurs blamelessly (e.g., unintentionally), people are less likely to say a rule was violated at all and look for reasons to excuse the behavior (Turri, 2019; Turri & Blouw, 2015). Indeed, our findings mirror this pattern. People find a reason to punish phantom rule violations only when they are particularly or dispositionally motivated to punish.

Friday, December 9, 2022

Neural and Cognitive Signatures of Guilt Predict Hypocritical Blame

Yu, H., Contreras-Huerta, L. S., et al. (2022).
Psychological Science, 0(0).

Abstract

A common form of moral hypocrisy occurs when people blame others for moral violations that they themselves commit. It is assumed that hypocritical blamers act in this manner to falsely signal that they hold moral standards that they do not really accept. We tested this assumption by investigating the neurocognitive processes of hypocritical blamers during moral decision-making. Participants (62 adult UK residents; 27 males) underwent functional MRI scanning while deciding whether to profit by inflicting pain on others and then judged the blameworthiness of others’ identical decisions. Observers (188 adult U.S. residents; 125 males) judged participants who blamed others for making the same harmful choice to be hypocritical, immoral, and untrustworthy. However, analyzing hypocritical blamers’ behaviors and neural responses shows that hypocritical blame was positively correlated with conflicted feelings, neural responses to moral standards, and guilt-related neural responses. These findings demonstrate that hypocritical blamers may hold the moral standards that they apply to others.

Statement of Relevance

Hypocrites blame other people for moral violations they themselves have committed. Common perceptions of hypocrites assume they are disingenuous and insincere. However, the mental states and neurocognitive processes underlying hypocritical blamers’ behaviors are not well understood. We showed that people who hypocritically blamed others reported stronger feelings of moral conflict during moral decision-making, had stronger neural responses to moral standards in lateral prefrontal cortex, and exhibited more guilt-related neurocognitive processes associated with harming others. These findings suggest that some hypocritical blamers do care about the moral standards they use to condemn other people but sometimes fail to live up to those standards themselves, contrary to the common philosophical and folk perception.

Discussion

In this study, we developed a laboratory paradigm to precisely quantify hypocritical blame, in which people blame others for committing the same transgressions they committed themselves (Todd, 2019). At the core of this operationalization of hypocrisy is a discrepancy between participants’ moral judgments and their behaviors in a moral decision-making task. Therefore, we measured participants’ choices in an incentivized moral decision-making task that they believed had real impact on their own monetary payoff and painful electric shocks delivered to a receiver. We then compared those choices with moral judgments they made a week later of other people in the same choice context. By comparing participants’ judgments with their own behaviors, we were able to quantify the degree to which they judge other people more harshly for making the same choices they themselves made previously (i.e., hypocritical blame).

Sunday, September 11, 2022

Mental control and attributions of blame for negligent wrongdoing

Murray, S., Krasich, K., et al. (2022).
Journal of Experimental Psychology: 
General. Advance online publication.
https://doi.org/10.1037/xge0001262

Abstract

Third-personal judgments of blame are typically sensitive to what an agent knows and desires. However, when people act negligently, they do not know what they are doing and do not desire the outcomes of their negligence. How, then, do people attribute blame for negligent wrongdoing? We propose that people attribute blame for negligent wrongdoing based on perceived mental control, or the degree to which an agent guides their thoughts and attention over time. To acquire information about others’ mental control, people self-project their own perceived mental control to anchor third-personal judgments about mental control and concomitant responsibility for negligent wrongdoing. In four experiments (N = 841), we tested whether perceptions of mental control drive third-personal judgments of blame for negligent wrongdoing. Study 1 showed that the ease with which people can counterfactually imagine an individual being non-negligent mediated the relationship between judgments of control and blame. Studies 2a and 2b indicated that perceived mental control has a strong effect on judgments of blame for negligent wrongdoing and that first-personal judgments of mental control are moderately correlated with third-personal judgments of blame for negligent wrongdoing. Finally, we used an autobiographical memory manipulation in Study 3 to make personal episodes of forgetfulness salient. Participants for whom past personal episodes of forgetfulness were made salient judged negligent wrongdoers less harshly compared with a control group for whom past episodes of negligence were not salient. Collectively, these findings suggest that first-personal judgments of mental control drive third-personal judgments of blame for negligent wrongdoing and indicate a novel role for counterfactual thinking in the attribution of responsibility.

Conclusion

Models of blame attribution predict that judgments of blame for negligent wrongdoing are sensitive to the perceived capacity of the individual to avoid being negligent. In this paper, we explored two extensions of these models. The first is that people use perceived degree of mental control to inform judgments of blame for negligent wrongdoing. Information about mental control is acquired through self-projection. These results suggest a novel role for counterfactual thinking in attributing blame, namely that counterfactual thinking is the process whereby people self-project to acquire information that is used to inform judgments of blame.

Saturday, August 27, 2022

Counterfactuals and the logic of causal selection

Quillien, T., & Lucas, C. G. (2022, June 13)
https://doi.org/10.31234/osf.io/ts76y

Abstract

Everything that happens has a multitude of causes, but people make causal judgments effortlessly. How do people select one particular cause (e.g. the lightning bolt that set the forest ablaze) out of the set of factors that contributed to the event (the oxygen in the air, the dry weather. . . )? Cognitive scientists have suggested that people make causal judgments about an event by simulating alternative ways things could have happened. We argue that this counterfactual theory explains many features of human causal intuitions, given two simple assumptions. First, people tend to imagine counterfactual possibilities that are both a priori likely and similar to what actually happened. Second, people judge that a factor C caused effect E if C and E are highly correlated across these counterfactual possibilities. In a reanalysis of existing empirical data, and a set of new experiments, we find that this theory uniquely accounts for people’s causal intuitions.
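The theory's two assumptions lend themselves to a small simulation. The sketch below is purely illustrative and is not the authors' code: the lightning/oxygen toy priors, the `stability` parameter standing in for similarity to the actual world, and the use of Pearson correlation across sampled worlds are all assumptions chosen to make the idea concrete.

```python
import random
from statistics import mean, pstdev

def correlate(xs, ys):
    """Pearson correlation of two equal-length 0/1 sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def sample_counterfactual(prior, actual, stability):
    # Assumption 1: imagined possibilities stay close to what actually
    # happened (with probability `stability`) but are otherwise drawn
    # from the event's a priori likelihood.
    if random.random() < stability:
        return actual
    return 1 if random.random() < prior else 0

def causal_strengths(n=20000, stability=0.5, seed=1):
    """Assumption 2: judge C a cause of E to the extent that C and E
    correlate across the imagined counterfactual worlds."""
    random.seed(seed)
    # Actual world: lightning struck (rare a priori), oxygen was
    # present (nearly certain a priori), and the forest burned.
    lightning_prior, oxygen_prior = 0.1, 0.99
    L, O, F = [], [], []
    for _ in range(n):
        l = sample_counterfactual(lightning_prior, 1, stability)
        o = sample_counterfactual(oxygen_prior, 1, stability)
        L.append(l)
        O.append(o)
        F.append(l & o)  # fire occurs only if lightning AND oxygen
    return correlate(L, F), correlate(O, F)

lightning_r, oxygen_r = causal_strengths()
```

Under these toy numbers the lightning-fire correlation comes out far higher than the oxygen-fire correlation, because counterfactual worlds almost always retain the oxygen but often lack the lightning, mirroring the intuition that the lightning, not the oxygen, caused the fire.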

From the General Discussion

Judgments of causation are closely related to assignments of blame, praise, and moral responsibility.  For instance, when two cars crash at an intersection, we say that the accident was caused by the driver who went through a red light (not by the driver who went through a green light; Knobe and Fraser, 2008; Icard et al., 2017; Hitchcock and Knobe, 2009; Roxborough and Cumby, 2009; Alicke, 1992; Willemsen and Kirfel, 2019); and we also blame that driver for the accident. According to some theorists, the fact that we judge the norm-violator to be blameworthy or morally responsible explains why we judge that he was the cause of the accident. This might be because our motivation to blame distorts our causal judgment (Alicke et al., 2011), because our intuitive concept of causation is inherently normative (Sytsma, 2021), or because of pragmatics confounds in the experimental tasks that probe the effect of moral violations on causal judgment (Samland & Waldmann, 2016).

Under these accounts, the explanation for why moral considerations affect causal judgment should be completely different than the explanation for why other factors (e.g., prior probabilities, what happened in the actual world, the causal structure of the situation) affect causal judgment. We favor a more parsimonious account: the counterfactual approach to causal judgment (of which our theory is one instantiation) provides a unifying explanation for the influence of both moral and non-moral considerations on causal judgment (Hitchcock & Knobe, 2009).

Finally, many formal theories of causal reasoning aim to model how people make causal inferences (e.g. Cheng, 1997; Griffiths & Tenenbaum, 2005; Lucas & Griffiths, 2010; Bramley et al., 2017; Jenkins & Ward, 1965). These theories are not concerned with the problem of causal selection, the focus of the present paper. It is in principle possible that people use the same algorithms they use for causal inference when they engage in causal selection, but in practice models of causal inference have not been able to predict how people select causes (see Quillien and Barlev, 2022; Morris et al., 2019).

Wednesday, January 5, 2022

Outrage Fatigue? Cognitive Costs and Decisions to Blame

Bambrah, V., Cameron, D., & Inzlicht, M.
(2021, November 30).

Abstract

Across nine studies (N=1,672), we assessed the link between cognitive costs and the choice to express outrage by blaming. We developed the Blame Selection Task, a binary free-choice paradigm that examines the propensity to blame transgressors (versus an alternative choice)—either before or after reading vignettes and viewing images of moral transgressions. We hypothesized that participants’ choice to blame wrongdoers would negatively relate to how cognitively inefficacious, effortful, and aversive blaming feels (compared to the alternative choice). With vignettes, participants approached blaming and reported that blaming felt more efficacious. With images, participants avoided blaming and reported that blaming felt more inefficacious, effortful, and aversive. Blame choice was greater for vignette-based transgressions than image-based transgressions. Blame choice was positively related to moral personality constructs, blame-related social-norms, and perceived efficacy of blaming, and inversely related to perceived effort and aversiveness of blaming. The BST is a valid behavioral index of blame propensity, and choosing to blame is linked to its cognitive costs.

Discussion

Moral norm violations cause people to experience moral outrage and to express it in various ways (Crockett, 2017), such as shaming/dehumanizing, punishing, or blaming. These forms of expressing outrage are less than moderately related to one another (r’s < .30; see Bastian et al., 2013 for more information), which suggests that a considerable amount of variance between shaming/dehumanizing, punishing, and blaming remains unexplained and that these are distinct enough demonstrations of outrage in response to norm violations. Yet, despite its moralistic implications (see Crockett, 2017), there is still little empirical work not only on the phenomenon of outrage fatigue but also on the role of motivated cognition in expressing outrage via blame. Social costs alter blame judgments, even when people’s cognitive resources are depleted (Monroe & Malle, 2019). But how do the inherent cognitive costs of blaming relate to people’s decisions towards moral outrage and blame? Here, we examined how felt cognitive costs associate with the choice to express outrage through blame.

Monday, November 15, 2021

On Defining Moral Enhancement: A Clarificatory Taxonomy

Carl Jago
Journal of Experimental Social Psychology
Volume 95, July 2021, 104145

Abstract

In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

From the General Discussion

In multiple experiments aimed at examining the influence of base rates per se, we found that base rates do indeed influence judgments, but the size of the effect we observed was very small. We considered that, in discovering moral judgments’ resilience to influence from base rates, we may have only rediscovered a general tendency, such as the fundamental attribution error, whereby people discount situational factors. If so, this tendency would then also apply broadly to non-moral scenarios. We therefore conducted another study in which our experimental materials were modified so as to remove the moral components. We found a substantial base-rate effect on participants’ judgments of performance regarding non-moral behavior. This finding suggests that the resilience to base rates observed in the preceding studies is unlikely the result of a more general tendency, and may instead be unique to moral judgment.

The main reasons why we concluded that the results from the most closely related extant research could not answer the present research question were the involvement in those studies of injunctive norms and special reference groups. To confirm that these factors could inflate any influence of base rates on moral judgment, in the final pair of studies, we modified our experiments so as to include them. Specifically, in one study, we crossed prescriptive and proscriptive injunctive norms with high and low base rates and found that the impact of an injunctive norm outweighs any impact of the base rate. In the other study, we found that simply mentioning, for example, that there were some good people among those who engaged in a high base-rate behavior resulted in a large effect on moral judgment; not only on judgments of the target’s character, but also on judgments of blame and wrongness.

Monday, October 11, 2021

Good deeds and hard knocks: The effect of past suffering on praise for moral behavior

P. Robbins, F. Alvera, & P. Litton
Journal of Experimental Social Psychology
Volume 97, November 2021

Abstract

Are judgments of praise for moral behavior modulated by knowledge of an agent's past suffering at the hands of others, and if so, in what direction? Drawing on multiple lines of research in experimental social psychology, we identify three hypotheses about the psychology of praise — typecasting, handicapping, and non-historicism — each of which supports a different answer to the question above. Typecasting predicts that information about past suffering will augment perceived patiency and thereby diminish perceived agency, making altruistic actions seem less praiseworthy; handicapping predicts that this information will make altruistic actions seem more effortful, and hence more praiseworthy; and non-historicism predicts that judgments of praise will be insensitive to information about an agent's experiential history. We report the results of two studies suggesting that altruistic behavior tends to attract more praise when the experiential history of the agent involves coping with adversity in childhood rather than enjoying prosperity (Study 1, N = 348, p = .03, d = 0.45; Study 2, N = 400, p = .02, d = 0.39), as well as the results of a third study suggesting that altruistic behavior tends to be evaluated more favorably when the experiential history of the agent includes coping with adversity than in the absence of information about the agent's past experience (N = 226, p = .002). This pattern of results, we argue, is more consistent with handicapping than typecasting or non-historicism.

From the Discussion

One possibility is that a history of suffering is perceived as depleting the psychological resources required for acting morally, making it difficult for someone to shift attention from their own needs to the needs of others. This is suggested by the stereotype of people who have suffered hardships in early life, especially at the hands of caregivers, which includes a tendency to be socially anxious, insecure, and withdrawn — a stereotype which may have some basis in fact (Elliott, Cunningham, Linder, Colangelo, & Gross, 2005). A history of suffering, that is, might seem like an obstacle to developing the kind of social mindedness exemplified by acts of altruism and other forms of prosocial behavior, which are typically motivated by feelings of compassion or empathic concern. This is an open empirical question, worthy of investigation not just in connection with handicapping and typecasting (and historicist accounts of praise more generally) but in its own right.


This research may have implications for psychotherapy.

Friday, September 17, 2021

The Case Against Non-Moral Blame

Matheson, B., & Milam, P.E.
Forthcoming in the Oxford Studies 
in Normative Ethics 11

Abstract

Non-moral blame seems to be widespread and widely accepted in everyday life—tolerated at least, but often embraced. We blame athletes for poor performance, artists for bad or boring art, scientists for faulty research, and voters for flawed reasoning. This paper argues that non-moral blame is never justified—i.e. it’s never a morally permissible response to a non-moral failure. Having explained what blame is and how non-moral blame differs from moral blame, the paper presents the argument in four steps. First, it argues that many (perhaps most) apparent cases of non-moral blame are actually cases of moral blame. Second, it argues that even if non-moral blame is pro tanto permissible—because its target is blameworthy for their substandard performance—it often (perhaps usually) fails to meet other permissibility conditions, such as fairness or standing. Third, it goes further and challenges the claim that non-moral blame is ever even pro tanto permissible. Finally, it considers a number of arguments in support of non-moral obligations and argues that none of them succeed.


This philosophical piece highlights, in part, the Fundamental Attribution Error in the context of moral judgment.

Wednesday, August 18, 2021

The Shape of Blame: How statistical norms impact judgments of blame and praise

Bostyn, D. H., & Knobe, J. (2020, April 24). 
https://doi.org/10.31234/osf.io/2hca8

Abstract

For many types of behaviors, whether a specific instance of that behavior is either blame- or praiseworthy depends on how much of the behavior is done or how people go about doing it. For instance, for a behavior such as “replying quickly to emails”, whether a specific reply is blame- or praiseworthy will depend on the timeliness of that reply. Such behaviors lie on a continuum in which part of the continuum is praiseworthy (replying quickly) and another part of the continuum is blameworthy (replying late). As praise shifts towards blame along such behavioral continua, the resulting blame-praise curve must have a specific shape. A number of questions therefore arise. What determines the shape of that curve? And what determines “the neutral point”, i.e., the point along a behavioral continuum at which people neither blame nor praise? Seven studies explore these issues, focusing specifically on the impact of statistical information, and provide evidence for a hypothesis we call the “asymmetric frequency hypothesis.”

From the Discussion

Asymmetric frequency and moral cognition

The results obtained here appear to support the asymmetric frequency hypothesis. So far, we have summarized this hypothesis as “People tend to perceive frequent behaviors as not blameworthy.” But how exactly is this hypothesis best understood?

Importantly, the asymmetric frequency effect does not imply that whenever a behavior becomes more frequent, the associated moral judgment will shift towards the neutral. Behaviors that are considered to be praiseworthy do not appear to become more neutral simply because they become more frequent. The effect of frequency only appears to occur when a behavior is blameworthy, which is why we dubbed it an asymmetric effect.

An enlightening historical example in this regard is perhaps the “gay revolution” (Faderman, 2015). As knowledge of the rate of homosexuality has spread across society and people have become more familiar with homosexuality within their own communities, moral norms surrounding homosexuality have shifted from hostility to increasing acceptance (Gallup, 2019). Crucially, however, those who already lauded others for having a loving homosexual relationship did not shift their judgment towards neutral indifference over the same time period. While frequency mitigates blameworthiness, it does not cause a general shift towards neutrality. Even when everyone does the right thing, it does not lose its moral shine.

Tuesday, June 8, 2021

Action and inaction in moral judgments and decisions: Meta-analysis of omission-bias omission-commission asymmetries

Jamison, J., Yay, T., & Feldman, G.
Journal of Experimental Social Psychology
Volume 89, July 2020, 103977

Abstract

Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = −0.26 to −0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret.

From the Discussion

Regret: Deviation from the action-effect 

The classic action-effect (Kahneman & Tversky, 1982) findings were that actions leading to a negative outcome are regretted more than inactions leading to the same negative outcome. We added a regret measure to examine whether the action-effect findings would extend to situations of morality involving intended harmful behavior. Our findings were opposite to the expected action-effect omission-commission asymmetry, with participants rating omissions as more regretted than commissions (d = 0.18 to 0.26).

One explanation for this surprising finding may be an intermingling of the perception of an actor’s regret for their behavior with their regret for the outcome. In typical action-effect scenarios, actors behave in a way that is morally neutral but are faced with an outcome that deviates from expectations, such as losing money over an investment. In this study’s omission bias scenarios, the actors behaved immorally to harm others for personal or interpersonal gain, and were then faced with an outcome that deviated from expectation. We hypothesized that participants would perceive actors as being more regretful for taking action that would immorally harm another person rather than allowing that harm through inaction. Yet it is plausible that participants were focused on the regret that actors would feel for not taking more direct action towards their goal of personal or interpersonal gain.

Another possible explanation for the regret finding is the side-taking hypothesis (DeScioli, 2016; DeScioli & Kurzban, 2013). This states that group members side against a wrongdoer who has performed an action perceived as morally wrong, in part by attributing a lack of remorse or regret to them. The negative relationship observed between the positive characteristic of regret and the negative characteristics of immorality, causality, and intentionality supports this explanation. Future research may be able to explore the true mechanisms of regret in such scenarios.

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• Deliberating agent earns more blame than implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer.

• Asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• Effect is attenuated if decider's job is primarily to implement.

Saturday, January 23, 2021

Norms Affect Prospective Causal Judgments

Henne, P., et al.
(2019, December 30). 

Abstract

People more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some outcome. Until recently, this abnormal-selection effect has been studied using retrospective vignette-based paradigms. We use a novel set of video stimuli to investigate this effect for prospective causal judgments—i.e., judgments about the cause of some future outcome. Four experiments show that people more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some future outcome. We show that the abnormal-selection effects are not primarily explained by the perception of agency (Experiment 4). We discuss these results in relation to recent efforts to model causal judgment.

From the Discussion

The results of these experiments have some important consequences for the study of causal cognition. While accounting for some of the limitations of past work on abnormal selection, we present strong evidence in support of modal explanations for abnormal-selection effects. Participants in our studies select norm-violating factors as causes for stimuli that reduce the presence of agential cues (Experiments 1-3), and increasing agency cues does not change this tendency (Experiment 4). Social explanations might account for abnormal-selection behavior in some contexts, but, in general, abnormal-selection behavior likely does not depend on perceived intentions of agents, assessments of blame, or other social concerns. Rather, abnormal-selection effects seem to reflect a more general causal reasoning process, not just processes related to social or moral cognition, that involves modal cognition.

The modal explanations for abnormal-selection effects predict the results that we present here; in non-social situations, abnormal-selection effects should occur, and they should occur for prospective causal judgments. Even if the social explanation can account for the results of Experiments 1-3, it does not predict the results of Experiment 4. In Experiment 4, we increased agency cues, and we saw an increase in perceived intentionality attributed to the objects in our stimuli. But we did not see a change in abnormal-selection behavior, as social explanations predict. While these results are not evidence that the social explanation is completely mistaken about causal-selection behavior, we have strong evidence that modal explanations account for these effects—even when agency cues are increased.

-----

Editor's note: This research is very important for psychologists, clinicians, and psychotherapists trying to understand and conceptualize their patients' behaviors and symptoms.  Studies show clinicians have poor inter-rater reliability when explaining the causes of behaviors and symptoms.  In this study, norm violations were more likely to be seen as causes, a bias we all need to understand.

Saturday, October 10, 2020

A Theory of Moral Praise

Anderson, R. A, Crockett, M. J., & Pizarro, D.
Trends in Cognitive Sciences
Volume 24, Issue 9, September 2020, 
Pages 694-703

Abstract

How do people judge whether someone deserves moral praise for their actions? In contrast to the large literature on moral blame, work on how people attribute praise has, until recently, been scarce. However, there is a growing body of recent work from a variety of subfields in psychology (including social, cognitive, developmental, and consumer) suggesting that moral praise is a fundamentally unique form of moral attribution and not simply the positive moral analogue of blame attributions. A functional perspective helps explain asymmetries in blame and praise: we propose that while blame is primarily for punishment and signaling one’s moral character, praise is primarily for relationship building.

Concluding Remarks

Moral praise, we have argued, is a psychological response that, like other forms of moral judgment, serves a particular functional role in establishing social bonds, encouraging cooperative alliances, and promoting good behavior. Through this lens, seemingly perplexing asymmetries between judgments of blame for immoral acts and judgments of praise for moral acts can be understood as consistent with the relative roles, and associated costs, played by these two kinds of moral judgments. While both blame and praise judgments require that an agent played some causal and intentional role in the act being judged, praise appears to be less sensitive to these features and more sensitive to more general features about an individual’s stable, underlying character traits. In other words, we believe that the growth of studies on moral praise in the past few years demonstrates that, when deciding whether or not doling out praise is justified, individuals seem to care less about how the action was performed and far more about what kind of person performed the action. We suggest that future research on moral attribution should seek to complement the rich literature examining moral blame by examining potentially unique processes engaged in moral praise, guided by an understanding of their differing costs and benefits, as well as their potentially distinct functional roles in social life.

The article is here.

Monday, August 17, 2020

It’s in Your Control: Free Will Beliefs and Attribution of Blame to Obese People and People with Mental Illness

Chandrashekar, S. P. (2020).
Collabra: Psychology, 6(1), 29.
DOI: http://doi.org/10.1525/collabra.305

Abstract

People’s belief in free will is shown to influence the perception of personal control in self and others. The current study tested the hypothesis that individuals who believe in free will attribute stronger personal blame to obese people and to people with mental illness (schizophrenia) for their adverse health outcomes. Results from a sample of 1110 participants showed that the belief in free will subscale is positively correlated with perceptions of the controllability of these adverse health conditions. The findings suggest that free will beliefs are correlated with attribution of blame to people with obesity and mental health issues. The study contributes to the understanding of the possible negative implications of people’s free will beliefs.

Discussion

The purpose of this brief report was to test the hypothesis that belief in free will is strongly correlated with attribution of personal blame to obese people and to people with mental illness for their adverse health outcomes. The results showed consistent positive correlations between the free will subscale and the extent of blame to obese individuals and individuals with mental illness. The study employed both generic survey measures of internal blame attributions and a survey that measured the responses based on a person described in a vignette. The current study, although correlational, contributes to recent work that argues that belief in free will is linked to processes underlying human social perception (Genschow et al., 2017). Besides theoretical implications, the findings demonstrate the societal consequences of free-will beliefs. Perception of controllability and personal responsibility is a well-documented predictor of negative stereotypes and stigma associated with people with mental illness and obesity (Blaine & Williams, 2004; Crandall, 1994). Perceptions of controllability related to people with health issues have detrimental social outcomes such as social rejection of the affected individuals (Crandall & Moriarty, 1995), and reduced social support and help from others (Crandall, 1994). The current study underlines that belief in free will as an individual-level factor is particularly relevant for developing a broader understanding of predictors of stigmatization of those with mental illness and obesity.

Sunday, August 16, 2020

Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective

Zhu, Q., Williams, T., Jackson, B. et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00246-w

Abstract

Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes to respond to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.

Friday, May 22, 2020

Is identity illusory?

Andreas L. Mogensen
European Journal of Philosophy
First published 29 April 2020

Abstract

Certain of our traits are thought more central to who we are: they comprise our individual identity. What makes these traits privileged in this way? What accounts for their identity centrality? Although considerations of identity play a key role in many different areas of moral philosophy, I argue that we currently have no satisfactory account of the basis of identity centrality. Nor should we expect one. Rather, we should adopt an error theory: we should concede that there is nothing in reality corresponding to the perceived distinction between the central and peripheral traits of a person.

Here is an excerpt:

Considerations of identity play a key role in many different areas of contemporary moral philosophy. The following is not intended as an exhaustive survey. I will focus on just four key issues: the ethics of biomedical enhancement; blame and responsibility; constructivist theories in meta‐ethics; and the value of moral testimony.

The wide‐ranging moral importance of individual identity plausibly reflects its intimate connection to the ethics of authenticity (Taylor, 1991). To a first approximation, authenticity is achieved when the way a person lives is expressive of her most centrally defining traits. Inauthenticity occurs when she fails to give expression to these traits. The key anxiety attached to the ideal of authenticity is that the conditions of modern life conspire to mask the true self beneath the demands of social conformity and the enticements of mass culture (Riesman, Glazer, & Denney, 1961/2001; Rousseau, 1782/2011). In spite of this perceived incongruity, authenticity is considered one of the constitutive ideals of modernity (Guignon, 2004; Taylor, 1989, 1991).

Considerations of authenticity have played a key role in recent debates on human enhancement (Juth, 2011). The specific type of enhancement at issue here is cosmetic psychopharmacology: the use of psychiatric drugs to bring about changes in mood and personality, allowing already healthy individuals to lead happier and more successful lives by becoming less shy, more confident, etc. (Kramer, 1993). Many find cosmetic psychopharmacology disturbing. In an influential paper, Elliott (1998) suggests that what disturbs us is the apparent inauthenticity involved in this kind of personal transformation: the pursuit of a new, enhanced personality represents a flight from the real you. Defenders of enhancement charge that Elliott's concern rests on a mistaken conception of identity. DeGrazia (2000, 2005) argues that Elliott fails to appreciate the extent to which a person's identity is determined by her own reflexive attitudes. Because of the authoritative role assigned to a person's self‐conception, DeGrazia concludes that if a person wholeheartedly desires to change some aspect of herself, she cannot meaningfully be accused of inauthenticity.

The paper is here.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).
https://doi.org/10.1038/s41562-019-0762-8

Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption [19], public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public under-reaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based courtroom decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.