Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, September 26, 2021

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgments of Immoral Actors

Walker, A. C., et al.
(2020, March 24).

Abstract

Across six studies (N = 2,646), we demonstrate the role that perceptions of predictability play in judgments of moral character, finding that people demonstrate a moral preference for more predictable immoral actors. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action, along with an additional immoral action (e.g., theft), for a well-understood immoral reason (Studies 1-4). Additionally, agents performing an immoral action for an unintelligible reason were judged as less predictable and less moral compared to agents performing the same immoral act for an unstated reason (Studies 3-5). This moral preference persisted when participants viewed video footage of each agent’s immoral action (Study 5). Finally, agents performing immoral actions in an unusual way were judged as less predictable and less moral than those performing the same actions in a more common manner (Study 6). The present research demonstrates how immoral actions performed without a clear motive or in an unpredictable way are perceived to be especially indicative of poor moral character. In revealing people’s moral preference for predictable immoral actors, we propose that perceptions of predictability play an important, yet overlooked, role in judgments of moral character. Furthermore, we propose that predictability influences judgments of moral character for its ultimate role in reducing social uncertainty and facilitating cooperation with trustworthy individuals and discuss how these findings may be accommodated by person-centered theories of moral judgment and theories of morality-as-cooperation.

From the Discussion

From traditional act-based perspectives (e.g., deontology and utilitarianism; Kant, 1785/1959; Mill, 1861/1998) this moral preference may appear puzzling, as participants judged actors causing more harm and violating more moral rules as more moral. Nevertheless, recent work suggests that people view actions not as the endpoint of moral evaluation, but as a source of information for assessing the moral character of those who perform them (Tannenbaum et al., 2011; Uhlmann et al., 2013). From this person-centered perspective (Pizarro & Tannenbaum, 2011; Uhlmann et al., 2015), a moral preference for more predictable immoral actors can be understood as participants judging the same immoral action (e.g., assault) as more indicative of negative character traits (e.g., a lack of empathy) when performed without an intelligible motive. That is, a person assaulting a stranger seemingly without reason or in an unusual manner (e.g., with a frozen fish) may be viewed as a more inherently unstable, violent, and immoral person compared to an individual performing an identical assault for a well-understood reason (e.g., to escape punishment for a crime in progress). Such negative character assessments may lead unpredictable immoral actors to be considered a greater risk for causing future harms of uncertain severity to potentially random victims. Consistent with these claims, past work has shown that people judge those performing harmless-but-offensive acts (e.g., masturbating inside a dead chicken) as not only possessing more negative character traits compared to others performing more harmful acts (e.g., theft), but also as likely to engage in more harmful actions in the future (Chakroff et al., 2017; Uhlmann & Zhu, 2014).

Friday, April 10, 2020

Better the Two Devils You Know, Than the One You Don’t: Predictability Influences Moral Judgment

A. Walker, M. Turpin, & others
PsyArXiv Preprints
Updated April 6, 2020

Abstract

Across four studies (N = 1,806 US residents), we demonstrate the role perceptions of predictability play in judgments of moral character, finding that less predictable agents were also judged as less moral. Participants judged agents performing an immoral action (e.g., assault) for an unintelligible reason as less predictable and less moral than agents performing the same immoral action for a well-understood immoral reason (Studies 1-3). Additionally, agents performing an action in an unusual way were judged as less predictable and less moral than those performing the same action in a common manner (Study 4). These results challenge monist theories of moral psychology, which reduce morality to a single dimension (e.g., harm), and pluralist accounts failing to consider the role predictability plays in moral judgments. We propose that predictability influences judgments of moral character for its ultimate role in facilitating cooperation and discuss how these findings may be accommodated by theories of morality-as-cooperation.

From the General Discussion

Supporting the idea that judgments of predictability guide judgments of moral character, we show that people judge agents they perceive as less predictable to be less moral. Those signalling unpredictability with their actions, either by acting without an intelligible motive (Studies 1-3) or by performing an immoral act in an unusual manner (Study 4), are consistently viewed as possessing an especially poor moral character.

Despite its importance for cooperation, and therefore moral judgments (Curry, 2016; Curry et al., 2019; Greene, 2013; Haidt, 2012; Rai & Fiske, 2011; Tomasello & Vaish, 2013), dominant theories of moral psychology have not explicitly considered the role predictability plays in judgments of moral character. Here we presented novel scenarios for which many popular theoretical frameworks fail to accurately capture participants’ moral impressions.

The research is here.

Thursday, August 1, 2019

Ethics in the Age of Artificial Intelligence

Shohini Kundu
Scientific American
Originally published July 3, 2019

Here is an excerpt:

Unfortunately, in that decision-making process, AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity. The logic for the move is not only unknown to the players, but also unknown to the creators of the program. As AI makes decisions for us, transparency and predictability of decision-making may become a thing of the past.

Imagine a situation in which your child comes home to you and asks for an allowance to go see a movie with her friends. You oblige. A week later, your other child comes to you with the same request, but this time, you decline. This will immediately raise the issue of unfairness and favoritism. To avoid any accusation of favoritism, you explain to your child that she must finish her homework before qualifying for any pocket money.

Without any explanation, there is bound to be tension in the family. Now imagine replacing your role with an AI system that has gathered data from thousands of families in similar situations. By studying the consequence of allowance decisions on other families, it comes to the conclusion that one sibling should get the pocket money while the other sibling should not.

But the AI system cannot really explain the reasoning—other than to say that it weighed your child’s hair color, height, weight and all other attributes that it has access to in arriving at a decision that seems to work best for other families. How is that going to work?

The info is here.

Friday, October 26, 2018

Ethics, a Psychological Perspective

Andrea Dobson
www.infoq.com
Originally posted September 22, 2018

Key Takeaways
  • With emerging technologies like machine learning, developers can now achieve much more than ever before. But this new power has a downside.
  • When we talk about ethics - the principles that govern a person's behaviour - it is impossible to not talk about psychology. 
  • Processes like obedience, conformity, moral disengagement, cognitive dissonance and moral amnesia all reveal why, though we see ourselves as inherently good, in certain circumstances we are likely to behave badly.
  • Recognising that although people aren’t rational, they are to a large degree predictable, has profound implications for how tech and business leaders can approach making their organisations more ethical.
  • The strongest way to make a company more ethical is to start with the individual. Companies become ethical one person at a time, one decision at a time. We all want to be seen as good people, a sense known as our moral identity, and that identity carries the responsibility to act accordingly.

Monday, January 30, 2017

Finding trust and understanding in autonomous technologies

David Danks
The Conversation
Originally published December 30, 2016

Here is an excerpt:

Autonomous technologies are rapidly spreading beyond the transportation sector, into health care, advanced cyberdefense and even autonomous weapons. In 2017, we’ll have to decide whether we can trust these technologies. That’s going to be much harder than we might expect.

Trust is complex and varied, but also a key part of our lives. We often trust technology based on predictability: I trust something if I know what it will do in a particular situation, even if I don’t know why. For example, I trust my computer because I know how it will function, including when it will break down. I stop trusting if it starts to behave differently or surprisingly.

In contrast, my trust in my wife is based on understanding her beliefs, values and personality. More generally, interpersonal trust does not involve knowing exactly what the other person will do – my wife certainly surprises me sometimes! – but rather why they act as they do. And of course, we can trust someone (or something) in both ways, if we know both what they will do and why.

I have been exploring possible bases for our trust in self-driving cars and other autonomous technology from both ethical and psychological perspectives. These are devices, so predictability might seem like the key. Because of their autonomy, however, we need to consider the importance and value – and the challenge – of learning to trust them in the way we trust other human beings.

The article is here.