Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, July 15, 2022

How inferred motives shape moral judgements

Carlson, R.W., Bigman, Y.E., Gray, K. et al. 
Nat Rev Psychol (2022).
https://doi.org/10.1038/s44159-022-00071-x

Abstract

When people judge acts of kindness or cruelty, they often look beyond the act itself to infer the agent’s motives. These inferences, in turn, can powerfully influence moral judgements. The mere possibility of self-interested motives can taint otherwise helpful acts, whereas morally principled motives can exonerate those behind harmful acts. In this Review, we survey research showcasing the importance of inferred motives for moral judgements, and show how motive inferences are connected to judgements of actions, intentions and character. This work suggests that the inferences observers draw about people’s motives are sufficient for moral judgement (they drive character judgements even without actions) and functional (they effectively aid observers in predicting people’s future behaviour). Research that directly probes when and how people infer motives, and how motive properties guide those inferences, can deepen our understanding of the role of inferred motives in moral life.

From Summary and future directions

Moral psychology has long emphasized the importance of actions and character in moral judgements. However, observers frequently go beyond judging actions and seek to understand people’s motives. Moral psychology paradigms often feature cues to motives which carry moral weight, such as an agent’s desire to harm others physically, or their lack of motivation to prevent harm to others. The inferences people draw about others’ motives are crucial for moral judgement in two respects. First, the mere presence of certain motives can drive moral judgements of character, even in the absence of any action. Second, inferred motives shape what an agent’s actions reveal about their character to observers, and thereby allow observers to better predict others’ future actions. To integrate past work and guide future research in moral psychology, we reviewed research connecting motives with actions, character and other key constructs. These insights can enrich our understanding of moral judgement, and shed light on emerging social phenomena that are relevant to moral psychology (see Box 1). The motive properties reviewed (motive strength, direction and conflict), as well as motive and action multiplicity, offer a guide for future work.

From Box 1

Motives and emerging social challenges

Researchers and ethicists are expressing growing concern about autonomous technologies and their rapidly increasing role in human life. Robots and other artificial agents are perceived as less driven by motives than humans. These agents are increasingly tasked with decisions that have moral implications, such as allocating scarce medical resources, informing parole decisions and guiding autonomous vehicles. Understanding the influence of motives in moral judgement can shed light on how the perceived motivelessness of artificial agents shapes people’s responses to their decisions. On the one hand, people are averse to having artificial agents make morally relevant decisions, which can be explained by people perceiving robots as lacking helpful motives. On the other hand, people see artificial agents as less capable of discrimination, and are less outraged when they do discriminate, which can be explained by people perceiving robots as lacking harmful motives, such as prejudice.

Sunday, July 4, 2021

Understanding Side-Effect Intentionality Asymmetries: Meaning, Morality, or Attitudes and Defaults?

Laurent SM, Reich BJ, Skorinko JLM. 
Personality and Social Psychology Bulletin. 
2021;47(3):410-425. 
doi:10.1177/0146167220928237

Abstract

People frequently label harmful (but not helpful) side effects as intentional. One proposed explanation for this asymmetry is that moral considerations fundamentally affect how people think about and apply the concept of intentional action. We propose something else: People interpret the meaning of questions about intentionally harming versus helping in fundamentally different ways. Four experiments substantially support this hypothesis. When presented with helpful (but not harmful) side effects, people interpret questions concerning intentional helping as literally asking whether helping is the agents’ intentional action or believe questions are asking about why agents acted. Presented with harmful (but not helpful) side effects, people interpret the question as asking whether agents intentionally acted, knowing this would lead to harm. Differences in participants’ definitions consistently helped to explain intentionality responses. These findings cast doubt on whether side-effect intentionality asymmetries are informative regarding people’s core understanding and application of the concept of intentional action.

From the Discussion

Second, questions about intentionality of harm may focus people on two distinct elements presented in the vignette: the agent’s intentional action (e.g., starting a profit-increasing program) and the harmful secondary outcome he knows this goal-directed action will cause. Because the concept of intentionality is most frequently applied to actions rather than consequences of actions (Laurent, Clark, & Schweitzer, 2015), reframing the question as asking about an intentional action undertaken with foreknowledge of harm has advantages. It allows consideration of key elements from the story and is responsive to what people may feel is at the heart of the question: “Did the chairman act intentionally, knowing this would lead to harm?” Notably, responses to questions capturing this idea significantly mediated intentionality responses in each experiment presented here, whereas other variables tested failed to consistently do so.

Saturday, February 13, 2021

Allocating moral responsibility to multiple agents

Gantman, A. P., Sternisko, A., et al.
Journal of Experimental Social Psychology
Volume 91, November 2020

Abstract

Moral and immoral actions often involve multiple individuals who play different roles in bringing about the outcome. For example, one agent may deliberate and decide what to do while another may plan and implement that decision. We suggest that the Mindset Theory of Action Phases provides a useful lens through which to understand these cases and the implications that these different roles, which correspond to different mindsets, have for judgments of moral responsibility. In Experiment 1, participants learned about a disastrous oil spill in which one company made decisions about a faulty oil rig, and another installed that rig. Participants judged the company who made decisions as more responsible than the company who implemented them. In Experiment 2 and a direct replication, we tested whether people judge implementers to be morally responsible at all. We examined a known asymmetry in blame and praise. Moral agents received blame for actions that resulted in a bad outcome but not praise for the same action that resulted in a good outcome. We found this asymmetry for deciders but not implementers, an indication that implementers were judged through a moral lens to a lesser extent than deciders. Implications for allocating moral responsibility across multiple agents are discussed.

Highlights

• Acts can be divided into parts and thereby roles (e.g., decider, implementer).

• A deliberating agent earns more blame than an implementing one for a bad outcome.

• Asymmetry in blame vs. praise for the decider but not the implementer.

• Asymmetry in blame vs. praise suggests only the decider is judged as a moral agent.

• Effect is attenuated if decider's job is primarily to implement.

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth MJ Byrne
Quarterly Journal of Experimental Psychology (March 2018)
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.


Sunday, February 18, 2018

Responsibility and Consciousness

Matt King and Peter Carruthers

1. Introduction

Intuitively, consciousness matters for responsibility. A lack of awareness generally provides the
basis for an excuse, or at least for blameworthiness to be mitigated. If you are aware that what
you are doing will unjustifiably harm someone, it seems you are more blameworthy for doing so
than if you harm them without awareness. There is thus a strong presumption that consciousness
is important for responsibility. The position we stake out below, however, is that consciousness,
while relevant to moral responsibility, isn’t necessary.

The background for our discussion is an emerging consensus in the cognitive sciences
that a significant portion, perhaps even a substantial majority, of our mental lives takes place
unconsciously. For example, routine and habitual actions are generally guided by the so-called
“dorsal stream” of the visual system, whose outputs are inaccessible to consciousness (Milner &
Goodale 1995; Goodale 2014). And there has been extensive investigation of the processes that
accompany conscious as opposed to unconscious forms of experience (Dehaene 2014). While
there is room for disagreement at the margins, there is little doubt that our actions are much more
influenced by unconscious factors than might intuitively seem to be the case. At a minimum,
therefore, theories of responsibility that ignore the role of unconscious factors supported by the
empirical data proceed at their own peril (King & Carruthers 2012). The crucial area of inquiry
for those interested in the relationship between consciousness and responsibility concerns the
relative strength of that relationship and the extent to which it should be impacted by findings in
the empirical sciences.

The paper is here.

Tuesday, January 2, 2018

The Neuroscience of Changing Your Mind

Bret Stetka
Scientific American
Originally published on December 7, 2017

Here are two excerpts:

Scientists have long accepted that our ability to abruptly stop or modify a planned behavior is controlled via a single region within the brain’s prefrontal cortex, an area involved in planning and other higher mental functions. By studying other parts of the brain in both humans and monkeys, however, a team from Johns Hopkins University has now concluded that last-minute decision-making is a lot more complicated than previously known, involving complex neural coordination among multiple brain areas. The revelations may help scientists unravel certain aspects of addictive behaviors and understand why accidents like falls grow increasingly common as we age, according to the Johns Hopkins team.

(cut)

Tracking these eye movements and neural action let the researchers resolve the very confusing question of what brain areas are involved in these split-second decisions, says Vanderbilt University neuroscientist Jeffrey Schall, who was not involved in the research. “By combining human functional brain imaging with nonhuman primate neurophysiology, [the investigators] weave together threads of research that have too long been separate strands,” he says. “If we can understand how the brain stops or prevents an action, we may gain ability to enhance that stopping process to afford individuals more control over their choices.”

The article is here.

Wednesday, March 22, 2017

Act versus Impact: Conservatives and Liberals Exhibit Different Structural Emphases in Moral Judgment

Ivar R. Hannikainen, Ryan M. Miller, & Fiery A. Cushman
Ratio: Special Issue on ‘Experimental Philosophy as Applied Philosophy’
Forthcoming

Conservatives and liberals disagree sharply on matters of morality and public policy. We propose a novel account of the psychological basis of these differences. Specifically, we find that conservatives tend to emphasize the intrinsic value of actions during moral judgment, in part by mentally simulating themselves performing those actions, while liberals instead emphasize the value of the expected outcomes of the action. We then demonstrate that a structural emphasis on actions is linked to the condemnation of victimless crimes, a distinctive feature of conservative morality. Next, we find that the conservative and liberal structural approaches to moral judgment are associated with their corresponding patterns of reliance on distinct moral foundations. In addition, the structural approach uniquely predicts that conservatives will be more opposed to harm in circumstances like the well-known trolley problem, a result which we replicate. Finally, we show that the structural approaches of conservatives and liberals are partly linked to underlying cognitive styles (intuitive versus deliberative). Collectively, these findings forge a link between two important yet previously independent lines of research in political psychology: cognitive style and moral foundations theory.

The article is here.