Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, April 26, 2020

Donald Trump: a political determinant of covid-19

Gavin Yamey and Greg Gonsalves
BMJ 2020; 369 (Published 24 April 2020)
doi: https://doi.org/10.1136/bmj.m1643

He downplayed the risk and delayed action, costing countless avertable deaths

On 23 January 2020, the World Health Organization told all governments to get ready for the transmission of a novel coronavirus in their countries. “Be prepared,” it said, “for containment, including active surveillance, early detection, isolation and case management, contact tracing and prevention of onward spread.” Some countries listened. South Korea, for example, acted swiftly to contain its covid-19 epidemic. But US President Donald Trump was unmoved by WHO’s warning, downplaying the threat and calling criticisms of his failure to act “a new hoax.”

Trump’s anaemic response led the US to become the current epicentre of the global covid-19 pandemic, with almost one third of the world’s cases and a still rising number of new daily cases. In our interconnected world, the uncontrolled US epidemic has become an obstacle to tackling the global pandemic. Yet the US crisis was an avertable catastrophe.

Dismissing prescient advice on pandemic preparedness from the outgoing administration of the former president, Barack Obama, the Trump administration went on to weaken the nation’s pandemic response capabilities in multiple ways. In May 2018, it eliminated the White House global health security office that Obama established after the 2014-16 Ebola epidemic to foster cross-agency pandemic preparedness. In late 2019, it ended a global early warning programme, PREDICT, that identified viruses with pandemic potential. There were also cuts to critical programmes at the Centers for Disease Control and Prevention (CDC), part and parcel of Trump’s repeated rejections of evidence based policy making for public health.

Denial
After the US confirmed its first case of covid-19 on 22 January 2020, Trump responded with false reassurances, delayed federal action, and the denigration of science. From January to mid-March, he denied that the US faced a serious epidemic risk, comparing the threat to seasonal influenza. He repeatedly reassured Americans that they had nothing to worry about, telling the public: “We think it's going to have a very good ending for us” (30 January), “We have it very much under control in this country” (23 February), and “The virus will not have a chance against us. No nation is more prepared, or more resilient, than the United States” (11 March).

The info is here.

Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.

Tuesday, November 19, 2019

Moral Responsibility

Talbert, Matthew
The Stanford Encyclopedia of Philosophy 
(Winter 2019 Edition), Edward N. Zalta (ed.)

Making judgments about whether a person is morally responsible for her behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for her behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing her behavior as arising (in the right way) from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is the task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and non-human animals, very young children, and those suffering from severe developmental disabilities or dementia (to give a few examples) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that she is morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012: 16–17 and M. Zimmerman 1988: 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good.

The information is here.

Sunday, November 10, 2019

For whom does determinism undermine moral responsibility? Surveying the conditions for free will across cultures

Ivar Hannikainen and others
PsyArXiv Preprints
Originally published October 15, 2019

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

The research is here.

Saturday, November 9, 2019

Debunking (the) Retribution (Gap)

Steven R. Kraaijeveld
Science and Engineering Ethics
https://doi.org/10.1007/s11948-019-00148-6

Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Monday, June 24, 2019

Not so Motivated After All? Three Replication Attempts and a Theoretical Challenge to a Morally-Motivated Belief in Free Will

Andrew E. Monroe and Dominic Ysidron
Preprint

Abstract

Free will is often appraised as a necessary input for holding others morally or legally responsible for misdeeds. Recently, however, Clark and colleagues (2014) argued for the opposite causal relationship. They assert that moral judgments and the desire to punish motivate people’s belief in free will. In three experiments—two exact replications (Studies 1 & 2b) and one close replication (Study 2a)—we seek to replicate these findings. Additionally, in a novel experiment (Study 3) we test a theoretical challenge derived from attribution theory, which suggests that immoral behaviors do not uniquely influence free will judgments. Instead, our norm-violation model argues that norm deviations of any kind—good, bad, or strange—cause people to attribute more free will to agents, and attributions of free will are explained via desire inferences. Across replication experiments we found no evidence for the original claim that witnessing immoral behavior causes people to increase their belief in free will, though we did replicate the finding that people attribute more free will to agents who behave immorally compared to a neutral control (Studies 2a & 3). Finally, our novel experiment demonstrated broad support for our norm-violation account, suggesting that people’s willingness to attribute free will to others is malleable, but not because people are motivated to blame. Instead, this experiment shows that attributions of free will are best explained by people’s expectations for norm adherence, and when these expectations are violated people infer that an agent expressed their free will to do so.

From the Discussion Section:

Together these findings argue for a non-moral explanation of free will judgments, with norm violation as the key driver. This account explains people’s tendency to attribute more free will to agents who behave badly: people generally expect others to follow moral norms, and when they don’t, people believe that there must have been a strong desire to perform the behavior. In addition, a norm-violation account is able to explain why people attribute more free will to agents behaving in odd or morally positive ways. Any deviation from what is expected causes people to attribute more desire and choice (i.e., free will) to that agent. Thus our findings suggest that people’s willingness to ascribe free will to others is indeed malleable, but considerations of free will are driven by basic social-cognitive representations of norms, expectations, and desire. Moreover, these data indicate that when people endorse free will for themselves or for others, they are not making claims about broad metaphysical freedom. Instead, if desires and norm constraints are what affect ascriptions of free will, this suggests that what it means to have (or believe in) free will is to be rational (i.e., making choices informed by desires and preferences) and able to overcome constraints.

A preprint can be found here.

Saturday, February 16, 2019

There’s No Such Thing as Free Will

Stephen Cave
The Atlantic
Originally published June 2016

Here is an excerpt:

What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.

This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?

(cut)

Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.

The info is here.

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Thursday, July 5, 2018

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Sunday, June 17, 2018

Does Non-Moral Ignorance Exculpate? Situational Awareness and Attributions of Blame and Forgiveness

Kissinger-Knox, A., Aragon, P. & Mizrahi, M.
Acta Anal (2018) 33: 161. https://doi.org/10.1007/s12136-017-0339-y

Abstract

In this paper, we set out to test empirically an idea that many philosophers find intuitive, namely that non-moral ignorance can exculpate. Many philosophers find it intuitive that moral agents are responsible only if they know the particular facts surrounding their action (or inaction). Our results show that whether moral agents are aware of the facts surrounding their (in)action does have an effect on people’s attributions of blame, regardless of the consequences or side effects of the agent’s actions. In general, it was more likely that a situationally aware agent will be blamed for failing to perform the obligatory action than a situationally unaware agent. We also tested attributions of forgiveness in addition to attributions of blame. In general, it was less likely that a situationally aware agent will be forgiven for failing to perform the obligatory action than a situationally unaware agent. When the agent is situationally unaware, it is more likely that the agent will be forgiven than blamed. We argue that these results provide some empirical support for the hypothesis that there is something intuitive about the idea that non-moral ignorance can exculpate.

The article is here.

Monday, May 14, 2018

No Luck for Moral Luck

Markus Kneer, University of Zurich; Edouard Machery, University of Pittsburgh
Draft, March 2018

Abstract

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.

The manuscript is here.

Friday, February 9, 2018

Robots, Law and the Retribution Gap

John Danaher
Ethics and Information Technology
December 2016, Volume 18, Issue 4, pp 299–309

We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

From the Discussion Section

Third, and finally, I have argued that this retributive gap has three potentially significant social implications: (i) it could lead to an increased risk of moral scapegoating; (ii) it could erode confidence in the rule of law; and (iii) it could present a strategic opening for those who favour nonretributive approaches to crime and punishment.

The paper is here.

Saturday, January 6, 2018

The Myth of Responsibility

Raoul Martinez
RSA.org
Originally posted December 7, 2017

Are we wholly responsible for our actions? We don’t choose our brains, our genetic inheritance, our circumstances, our milieu – so how much control do we really have over our lives? Philosopher Raoul Martinez argues that no one is truly blameworthy.  Our most visionary scientists, psychologists and philosophers have agreed that we have far less free will than we think, and yet most of society’s systems are structured around the opposite principle – that we are all on a level playing field, and we all get what we deserve.

The four-minute video is worth watching.

Tuesday, December 19, 2017

Beyond Blaming the Victim: Toward a More Progressive Understanding of Workplace Mistreatment

Lilia M. Cortina, Verónica Caridad Rabelo, & Kathryn J. Holland
Industrial and Organizational Psychology
Published online: 21 November 2017

Theories of human aggression can inform research, policy, and practice in organizations. One such theory, victim precipitation, originated in the field of criminology. According to this perspective, some victims invite abuse through their personalities, styles of speech or dress, actions, and even their inactions. That is, they are partly at fault for the wrongdoing of others. This notion is gaining purchase in industrial and organizational (I-O) psychology as an explanation for workplace mistreatment. The first half of our article provides an overview and critique of the victim precipitation hypothesis. After tracing its history, we review the flaws of victim precipitation as catalogued by scientists and practitioners over several decades. We also consider real-world implications of victim precipitation thinking, such as the exoneration of violent criminals. Confident that I-O can do better, the second half of this article highlights alternative frameworks for researching and redressing hostile work behavior. In addition, we discuss a broad analytic paradigm—perpetrator predation—as a way to understand workplace abuse without blaming the abused. We take the position that these alternative perspectives offer stronger, more practical, and more progressive explanations for workplace mistreatment. Victim precipitation, we conclude, is an archaic ideology. Criminologists have long since abandoned it, and so should we.

The article is here.

Friday, December 8, 2017

Autonomous future could question legal ethics

Becky Raspe
Cleveland Jewish News
Originally published November 21, 2017

Here is an excerpt:

Northman said he finds the ethical implications of an autonomous future interesting, but completely contradictory to what he learned in law school in the 1990s.

“People were expected to be responsible for their activities,” he said. “And as long as it was within their means to stop something or more tellingly anticipate a problem before it occurs, they have an obligation to do so. When you blend software over the top of that this level of autonomy, we are left with some difficult boundaries to try and assess where a driver’s responsibility starts or the software programmers continues on.”

When considering the ethics surrounding autonomous living, Paris referenced the “trolley problem,” which goes like this: an automated vehicle is operating on an open road; ahead, there are five people in the road and one person off to the side. The question, Paris said, is whether the vehicle should continue on and hit the five people or swerve and hit just the one.

“When humans are driving vehicles, they are the moral decision makers that make those choices behind the wheel,” she said. “Can engineers program automated vehicles to replace that moral thought with an algorithm? Will they prioritize the five lives or that one person? There are a lot of questions and not too many solutions at this point. With these ethical dilemmas, you have to be careful about what is being implemented.”

The article is here.

Friday, September 15, 2017

Robots and morality

The Big Read (which is actually in podcast form)
The Financial Times
Originally posted August 2017

Now our mechanical creations can act independently, what happens when AI goes wrong? Where does moral, ethical and legal responsibility for robots lie — with the manufacturers, the programmers, the users or the robots themselves, asks John Thornhill. And who owns their rights?

Click on the link below to access the 13-minute podcast.

Podcast is here.

Thursday, September 7, 2017

Are morally good actions ever free?

Cory J. Clark, Adam Shniderman, Jamie Luguri, Roy Baumeister, and Peter Ditto
SSRN Electronic Journal, August 2017

Abstract

A large body of work has demonstrated that people ascribe more responsibility to morally bad actions than both morally good and morally neutral ones, creating the impression that people do not attribute responsibility to morally good actions. The present work demonstrates that this is not so: People attributed more free will to morally good actions than morally neutral ones (Studies 1a-1b). Studies 2a-2b distinguished the underlying motives for ascribing responsibility to morally good and bad actions. Free will ascriptions for morally bad actions were driven predominantly by affective punitive responses. Free will judgments for morally good actions were similarly driven by affective reward responses, but also less affectively-charged and more pragmatic considerations (the perceived utility of reward, normativity of the action, and willpower required to perform the action). Responsibility ascriptions to morally good actions may be more carefully considered, leading to generally weaker, but more contextually-sensitive free will judgments.

The research is here.

Friday, September 1, 2017

Political differences in free will belief are driven by differences in moralization

Clark, C. J., Everett, J. A. C., Luguri, J. B., Earp, B. D., Ditto, P., & Shariff, A.
PsyArXiv. (2017, August 1).

Abstract

Five studies tested whether political conservatives’ stronger free will beliefs are driven by their broader view of morality, and thus a broader motivation to assign responsibility. On an individual difference level, Study 1 found that political conservatives’ higher moral wrongness judgments accounted for their higher belief in free will. In Study 2, conservatives ascribed more free will for negative events than liberals, while no differences emerged for positive events. For actions ideologically equivalent in perceived moral wrongness, free will judgments also did not differ (Study 3), and actions that liberals perceived as more wrong, liberals judged as more free (Study 4). Finally, higher wrongness judgments mediated the effect of conservatism on free will beliefs (Study 5). Higher free will beliefs among conservatives may be explained by conservatives’ tendency to moralize, which strengthens motivation to justify blame with stronger belief in free will and personal accountability.

The preprint research article is here.

Monday, August 28, 2017

Sometimes giving a person a choice is an act of terrible cruelty

Lisa Tessman
aeon.com
Originally posted August 9, 2017

It is not always good to have the opportunity to make a choice. When we must decide to take one action rather than another, we also, ordinarily, become at least partly responsible for what we choose to do. Usually this is appropriate; it’s what makes us the kinds of creatures who can be expected to abide by moral norms. 

Sometimes, making a choice works well. For instance, imagine that while leaving the supermarket parking lot you accidentally back into another car, visibly denting it. No one else is around, nor do you think there are any surveillance cameras. You face a choice: you could drive away, fairly confident that no one will ever find out that you damaged someone’s property, or you could leave a note on the dented car’s windshield, explaining what happened and giving contact information, so that you can compensate the car’s owner.

Obviously, the right thing to do is to leave a note. If you don’t do this, you’ve committed a wrongdoing that you could have avoided just by making a different choice. Even though you might not like having to take responsibility – and paying up – it’s good to be in the position of being able to do the right thing.

Yet sometimes, having a choice means deciding to commit one bad act or another. Imagine being a doctor or nurse caught in the following fictionalised version of real events at a hospital in New Orleans in the aftermath of Hurricane Katrina in 2005. Due to a tremendous level of flooding after the hurricane, the hospital must be evacuated. The medical staff have been ordered to get everyone out by the end of the day, but not all patients can be removed. As time runs out, it becomes clear that you have a choice, but it’s a choice between two horrifying options: euthanise the remaining patients without consent (because many of them are in a condition that renders them unable to give it) or abandon them to suffer a slow, painful and terrifying death alone. Even if you’re anguished at the thought of making either choice, you might be confident that one action – let’s say administering a lethal dose of drugs – is better than the other. Nevertheless, you might have the sense that no matter which action you perform, you’ll be violating a moral requirement.

Tuesday, August 15, 2017

Inferences about moral character moderate the impact of consequences on blame and praise

Jenifer Z. Siegel, Molly J. Crockett, and Raymond J. Dolan
Cognition
Volume 167, October 2017, Pages 201-211

Abstract

Moral psychology research has highlighted several factors critical for evaluating the morality of another’s choice, including the detection of norm-violating outcomes, the extent to which an agent caused an outcome, and the extent to which the agent intended good or bad consequences, as inferred from observing their decisions. However, person-centered accounts of moral judgment suggest that a motivation to infer the moral character of others can itself impact on an evaluation of their choices. Building on this person-centered account, we examine whether inferences about agents’ moral character shape the sensitivity of moral judgments to the consequences of agents’ choices, and agents’ role in the causation of those consequences. Participants observed and judged sequences of decisions made by agents who were either bad or good, where each decision entailed a trade-off between personal profit and pain for an anonymous victim. Across trials we manipulated the magnitude of profit and pain resulting from the agent’s decision (consequences), and whether the outcome was caused via action or inaction (causation). Consistent with previous findings, we found that moral judgments were sensitive to consequences and causation. Furthermore, we show that the inferred character of an agent moderated the extent to which people were sensitive to consequences in their moral judgments. Specifically, participants were more sensitive to the magnitude of consequences in judgments of bad agents’ choices relative to good agents’ choices. We discuss and interpret these findings within a theoretical framework that views moral judgment as a dynamic process at the intersection of attention and social cognition.

The article is here.