Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Punishment.

Saturday, April 25, 2020

Punitive but discriminating: Reputation fuels ambiguously-deserved punishment but also sensitivity to moral nuance

Jordan, J., & Kteily, N.
(2020, March 21).
https://doi.org/10.31234/osf.io/97nhj

Abstract

Reputation concerns can motivate moralistic punishment, but existing evidence comes exclusively from contexts in which punishment is unambiguously deserved. Recent debates surrounding “virtue signaling” and “outrage culture” raise the question of whether reputation may also fuel punishment in more ambiguous cases—and even encourage indiscriminate punishment that ignores moral nuance. But when the moral case for punishment is ambiguous, do people actually expect punishing to make them look good? And if so, are people willing to use ambiguously-deserved punishment to gain reputational benefits, or do personal reservations about whether punishment is merited restrain them from doing so? We address these questions across 11 experiments (n = 9448) employing both hypothetical vignette and costly behavioral paradigms. We find that reputation does fuel ambiguously-deserved punishment. Subjects expect even ambiguously-deserved punishment to look good, especially when the audience is highly ideological. Furthermore, despite personally harboring reservations about its morality, subjects readily use ambiguously-deserved punishment to gain reputational benefits. Yet we also find that reputation can do more to fuel unambiguously-deserved punishment. Subjects robustly expect unambiguously-deserved punishment to look better than ambiguously-deserved punishment, even when the audience is highly ideological. And we find evidence that as a result, introducing reputational incentives can preferentially increase unambiguously-deserved punishment—causing punishers to differentiate more between ambiguous and unambiguous cases and thereby heightening sensitivity to moral nuance. We thus conclude that the drive to signal virtue can make people more punitive but also more discriminating, painting a nuanced picture of the role that reputation plays in outrage culture.

From the Discussion:

Here, we have provided a novel framework for understanding the influence of reputational incentives on moralistic punishment in ambiguous and unambiguous cases. By looking beyond contexts in which punishment is unambiguously merited, and by considering the important role of audience ideology, our work fills critical theoretical gaps in our understanding of the human moral psychology surrounding punishment and reputation. Our findings also speak directly to concerns raised by critics of “outrage culture”, who have suggested that “virtue signaling” fuels ambiguously-deserved punishment and even encourages indiscriminate punishment that ignores moral nuance, thereby contributing to negative societal outcomes (e.g., by unfairly harming alleged perpetrators and chilling social discourse). More specifically, our results present a complex portrait of the role that reputation plays in outrage culture, lending credence to some concerns about virtue signaling but casting doubt on others.

Wednesday, April 22, 2020

Your Code of Conduct May Be Sending the Wrong Message

F. Gino, M. Kouchaki, & Y. Feldman
Harvard Business Review
Originally posted March 13, 2020


Here is an excerpt:

We examined the relationship between the language used (personal or impersonal) in these codes and corporate illegality. Research assistants blind to our research questions and hypotheses coded each document based on the degree to which it used “we” or “member/employee” language. Next, we searched media sources for any type of illegal acts these firms may have been involved in, such as environmental violations, anticompetitive actions, false claims, and fraudulent actions. Our analysis showed that firms that used personal language in their codes of conduct were more likely to be found guilty of illegal behaviors.
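
As a rough illustration of the kind of language coding described above, here is a toy sketch. The study relied on trained human coders, not keyword counts, so the word lists and scoring function below are purely illustrative assumptions.

```python
import re

# Toy proxy for coding a code of conduct as "personal" vs. "impersonal"
# language. Purely an assumption for illustration; the actual study used
# human raters blind to the hypotheses.
PERSONAL = {"we", "us", "our", "ours"}
IMPERSONAL = {"employee", "employees", "member", "members"}

def personal_language_share(text):
    """Fraction of coded words that are personal (1.0 = fully personal)."""
    words = re.findall(r"[a-z']+", text.lower())
    personal = sum(w in PERSONAL for w in words)
    impersonal = sum(w in IMPERSONAL for w in words)
    total = personal + impersonal
    return personal / total if total else None

print(personal_language_share("We put our customers first."))  # -> 1.0
print(personal_language_share("Organizational members are expected to put customers first."))  # -> 0.0
```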

We found this initial evidence to be compelling enough to dig further into the link between personal “we” language and unethical behavior. What would explain such a link? We reasoned that when language communicating ethical standards is personal, employees tend to assume they are part of a community where members are easygoing, helpful, cooperative, and forgiving. By contrast, when the language is impersonal — for example, “organizational members are expected to put customers first” — employees feel they are part of a transactional relationship in which members are more formal and distant.

Here’s the problem: When we view our organization as tolerant and forgiving, we believe we’re less likely to be punished for misconduct. Across nine different studies, using data from lab- and field-based experiments as well as a large dataset of S&P firms, we find that personal language (“we,” “us”) leads to less ethical behavior than impersonal language (“employees,” “members”) does, apparently because people encountering more personal language believe their organization is less serious about punishing wrongdoing.

The info is here.

Sunday, November 10, 2019

For whom does determinism undermine moral responsibility? Surveying the conditions for free will across cultures

Ivar Hannikainen and others
PsyArXiv Preprints
Originally published October 15, 2019

Abstract

Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions since in a deterministic universe, people are arguably not the ultimate source of their actions nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.

The research is here.

Monday, October 14, 2019

Why we don’t always punish: Preferences for non-punitive responses to moral violations

Joseph Heffner & Oriel FeldmanHall
Scientific Reports, volume 9, 
Article number: 13219 (2019) 

Abstract

While decades of research demonstrate that people punish unfair treatment, recent work illustrates that alternative, non-punitive responses may also be preferred. Across five studies (N = 1,010) we examine non-punitive methods for restoring justice. We find that in the wake of a fairness violation, compensation is preferred to punishment, and once maximal compensation is available, punishment is no longer the favored response. Furthermore, compensating the victim—as a method for restoring justice—also generalizes to judgments of more severe crimes: participants allocate more compensation to the victim as perceived severity of the crime increases. Why might someone refrain from punishing a perpetrator? We investigate one possible explanation, finding that punishment acts as a conduit for different moral signals depending on the social context in which it arises. When choosing partners for social exchange, there are stronger preferences for those who previously punished as third-party observers but not those who punished as victims. This is in part because third-parties are perceived as relatively more moral when they punish, while victims are not. Together, these findings demonstrate that non-punitive alternatives can act as effective avenues for restoring justice, while also highlighting that moral reputation hinges on whether punishment is enacted by victims or third-parties.

The research is here.

Readers may want to consider the implications for patients in psychotherapy and for licensing board actions.

Tuesday, September 17, 2019

When do we punish people who don’t?

Martin, J., Jordan, J., Rand, D., & Cushman, F.
(2019). Cognition, 193(August)
https://doi.org/10.1016/j.cognition.2019.104040

Abstract

People often punish norm violations. In what cases is such punishment viewed as normative—a behavior that we “should” or even “must” engage in? We approach this question by asking when people who fail to punish a norm violator are, themselves, punished. (For instance, a boss who fails to punish transgressive employees might, herself, be fired). We conducted experiments exploring the contexts in which higher-order punishment occurs, using both incentivized economic games and hypothetical vignettes describing everyday situations. We presented participants with cases in which an individual fails to punish a transgressor, either as a victim (second-party) or as an observer (third-party). Across studies, we consistently observed higher-order punishment of non-punishing observers. Higher-order punishment of non-punishing victims, however, was consistently weaker, and sometimes non-existent. These results demonstrate the selective application of higher-order punishment, provide a new perspective on the psychological mechanisms that support it, and provide some clues regarding its function.

The research can be found here.

Monday, June 24, 2019

Motivated free will belief: The theory, new (preregistered) studies, and three meta-analyses

Clark, C. J., Winegard, B. M., & Shariff, A. F. (2019).
Manuscript submitted for publication.

Abstract

Do desires to punish lead people to attribute more free will to individual actors (motivated free will attributions) and to stronger beliefs in human free will (motivated free will beliefs) as suggested by prior research? Results of 14 new (7 preregistered) studies (n=4,014) demonstrated consistent support for both of these. These findings consistently replicated in studies (k=8) in which behaviors meant to elicit desires to punish were rated as equally or less counternormative than behaviors in control conditions. Thus, greater perceived counternormativity cannot account for these effects. Additionally, three meta-analyses of the existing data (including eight vignette types and eight free will judgment types) found support for motivated free will attributions (k=22; n=7,619; r=.25, p<.001) and beliefs (k=27; n=8,100; r=.13, p<.001), which remained robust after removing all potential moral responsibility confounds (k=26; n=7,953; r=.12, p<.001). The size of these effects varied by vignette type and free will belief measurement. For example, presenting the FAD+ free will belief subscale mixed among three other subscales (as in Monroe and Ysidron’s [2019] failed replications) produced a smaller average effect size (r=.04) than shorter and more immediate measures (rs=.09-.28). Also, studies with neutral control conditions produced larger effects (Attributions: r=.30; Beliefs: rs=.14-.16) than those with control conditions involving bad actions (Attributions: r=.05; Beliefs: rs=.04-.06). Removing these two kinds of studies from the meta-analyses produced larger average effect sizes (Attributions: r=.28; Beliefs: rs=.17-.18). We discuss the relevance of these findings for past and future research and the significance of these findings for human responsibility.
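
For readers unfamiliar with how pooled effect sizes like those above (e.g., r=.25 across k=22 studies) are typically computed, here is a minimal sketch of a fixed-effect meta-analysis using Fisher's z transformation. The (r, n) pairs are invented, and the authors' actual meta-analytic procedure (including their moderator analyses) may differ.

```python
import math

# Invented per-study (correlation r, sample size n) pairs for illustration.
studies = [(0.30, 310), (0.12, 450), (0.25, 120)]

def meta_analytic_r(studies):
    """Inverse-variance weighted mean of Fisher-z values; weight = n - 3,
    since Var(z) is approximately 1/(n - 3). Back-transformed to r."""
    weighted = [(math.atanh(r) * (n - 3), n - 3) for r, n in studies]
    z_bar = sum(zw for zw, _ in weighted) / sum(w for _, w in weighted)
    return math.tanh(z_bar)

print(f"pooled r = {meta_analytic_r(studies):.3f}")
```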

From the Discussion Section:

We suspect that motivated free will beliefs have become more common as society has become more humane and more concerned about proportionate punishment. Many people now assiduously reflect upon their own society’s punitive practices and separate those who deserve to be punished from those who are incapable of being fully responsible for their actions. Free will is crucial here because it is often considered a prerequisite for moral responsibility (Nichols & Knobe, 2007; Sarkissian et al., 2010; Shariff et al., 2014). Therefore, when one is motivated to punish another person, one is also motivated to inflate free will beliefs and free will attributions to specific perpetrators as a way to justify punishing the person.

A preprint can be downloaded here.

Friday, May 10, 2019

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be free and responsible more than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.

The info is here.

Friday, January 18, 2019

House Democrats Look to Crack Down on Feds With Conflicts of Interest, Ethics Violations

Eric Katz
Government Executive
Originally posted January 3, 2019

Federal employees who pass through the revolving door with the private sector and engage in other actions that could present conflicts of interest would come under intensified scrutiny in a slew of reforms House Democrats introduced on Friday aimed at boosting ethics oversight in government.

The new House majority put forward the For the People Act (H.R. 1) as its first legislative priority, after the more immediate concern of reopening the full government. The package involves an array of issues House Speaker Nancy Pelosi, D-Calif., said were critical to “restoring integrity in government,” such as voting rights access and campaign finance changes. It would also place new restrictions on federal workers before, during and after their government service, with special obligations for senior officials and the president.

“Over the last two years President Trump set the tone from the top of his administration that behaving ethically and complying with the law is optional,” said newly minted House Oversight and Reform Committee Chairman Rep. Elijah Cummings, D-Md. “That is why we are introducing the For the People Act. This bill contains a number of reforms that will strengthen accountability for executive branch officials, including the president.”

All federal employees would face a ban on using their official positions to participate in matters related to their former employers. Violators would face fines and one to five years in prison. Agency heads, in consultation with the director of the Office of Government Ethics, could issue waivers if it were deemed in the public interest.

The info is here.

Monday, August 20, 2018

Massachusetts allows school to continue with electric shocks

Jeffrey Delfin
theguardian.com
Originally posted July 12, 2018

Here is an excerpt:

The device is not used in what we might call “electroshock therapy” – where small shocks are passed through the brain under anesthesia. Rather, the GED is used as a variation of “aversive conditioning”, in which negative stimulation is applied to a patient when he or she performs an unwanted action. The patient is awake, and feeling pain is the point of the shock.

The GED, when activated, outputs an electric shock that is distributed to the patient’s skin for up to two seconds. Students wear a backpack containing the shocking device, with electrodes constantly affixed to their skin. Staff are able to shock students at any point during the day. Previous attendees at JRC have spoken of up to five electrodes being attached to their bodies. One, Jen Msumba, who blogs about her time at the facility, said electrodes were applied under her fingers or the bottoms of her feet to increase the pain.

“We’ve all experienced aversive conditioning. We touch the stove while it’s still hot, it hurts, then we become very cautious about touching it,” says Dr Jean Mercer, the leader of the group Advocates for Children in Therapy, a not-for-profit organization dedicated to ending harmful practices for treating children’s mental health.

The information is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values (specifically fairness preferences) during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
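
The "reinforcement mechanism" the abstract refers to is, in its standard form, a delta-rule update. The sketch below is an illustrative assumption, not the authors' actual model: the learning rate, starting value, and feedback sequence are all invented.

```python
# Rescorla-Wagner style (delta-rule) update: nudge the current estimate
# toward each trial's feedback in proportion to the prediction error.
def update_preference(value, feedback, alpha=0.3):
    """value:    current estimate that the receiver prefers punishment (0-1)
    feedback: trial outcome (1 = receiver endorsed punishment, 0 = not)
    alpha:    learning rate (invented here for illustration)"""
    return value + alpha * (feedback - value)

v = 0.2  # start near a non-punitive baseline
for outcome in [1, 1, 1, 0, 1, 1]:  # feedback from a highly punitive receiver
    v = update_preference(v, outcome)
    print(f"estimated punishment preference: {v:.2f}")
```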

The research is here.

Monday, May 14, 2018

No Luck for Moral Luck

Markus Kneer, University of Zurich
Edouard Machery, University of Pittsburgh
Draft, March 2018

Abstract

Moral philosophers and psychologists often assume that people judge morally lucky and morally unlucky agents differently, an assumption that stands at the heart of the puzzle of moral luck. We examine whether the asymmetry is found for reflective intuitions regarding wrongness, blame, permissibility and punishment judgments, whether people's concrete, case-based judgments align with their explicit, abstract principles regarding moral luck, and what psychological mechanisms might drive the effect. Our experiments produce three findings: First, in within-subjects experiments favorable to reflective deliberation, wrongness, blame, and permissibility judgments across different moral luck conditions are the same for the vast majority of people. The philosophical puzzle of moral luck, and the challenge to the very possibility of systematic ethics it is frequently taken to engender, thus simply does not arise. Second, punishment judgments are significantly more outcome-dependent than wrongness, blame, and permissibility judgments. While this is evidence in favor of current dual-process theories of moral judgment, the latter need to be qualified since punishment does not pattern with blame. Third, in between-subjects experiments, outcome has an effect on all four types of moral judgments. This effect is mediated by negligence ascriptions and can ultimately be explained as due to differing probability ascriptions across cases.

The manuscript is here.

Friday, February 9, 2018

Robots, Law and the Retribution Gap

John Danaher
Ethics and Information Technology
December 2016, Volume 18, Issue 4, pp 299–309

We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

From the Discussion Section

Third, and finally, I have argued that this retributive gap has three potentially significant social implications: (i) it could lead to an increased risk of moral scapegoating; (ii) it could erode confidence in the rule of law; and (iii) it could present a strategic opening for those who favour nonretributive approaches to crime and punishment.

The paper is here.

Sunday, October 8, 2017

Moral outrage in the digital age

Molly J. Crockett
Nature Human Behaviour (2017)
Originally posted September 18, 2017

Moral outrage is an ancient emotion that is now widespread on digital media and online social networks. How might these new technologies change the expression of moral outrage and its social consequences?

Moral outrage is a powerful emotion that motivates people to shame and punish wrongdoers. Moralistic punishment can be a force for good, increasing cooperation by holding bad actors accountable. But punishment also has a dark side — it can exacerbate social conflict by dehumanizing others and escalating into destructive feuds.

Moral outrage is at least as old as civilization itself, but civilization is rapidly changing in the face of new technologies. Worldwide, more than a billion people now spend at least an hour a day on social media, and moral outrage is all the rage online. In recent years, viral online shaming has cost companies millions, candidates elections, and individuals their careers overnight.

As digital media infiltrates our social lives, it is crucial that we understand how this technology might transform the expression of moral outrage and its social consequences. Here, I describe a simple psychological framework for tackling this question (Fig. 1). Moral outrage is triggered by stimuli that call attention to moral norm violations. These stimuli evoke a range of emotional and behavioural responses that vary in their costs and constraints. Finally, expressing outrage leads to a variety of personal and social outcomes. This framework reveals that digital media may exacerbate the expression of moral outrage by inflating its triggering stimuli, reducing some of its costs and amplifying many of its personal benefits.

The article is here.

Friday, September 29, 2017

The Dark Side of Morality: Group Polarization and Moral-Belief Formation

Marcus Arvan
University of Tampa

Most of us are accustomed to thinking of morality in a positive light. Morality, we say, is a matter of “doing good” and treating ourselves and each other “rightly.” However, moral beliefs and discourse also plausibly play a role in group polarization, the tendency of social groups to divide into progressively more extreme factions, each of which regards other groups to be “wrong.” Group polarization often occurs along moral lines and is known to have many disturbing effects: increasing racial prejudice among the already moderately prejudiced; leading group decisions to be more selfish, competitive, less trusting, and less altruistic than individual decisions; eroding public trust; leading juries to impose more severe punishments in trial; generating more extreme political decisions; and contributing to war, genocide, and other violent behavior.

This paper argues that three empirically-supported theories of group polarization predict that polarization is likely caused in substantial part by a conception of morality that I call the Discovery Model, a model which holds that moral truths exist to be discovered through moral intuition, moral reasoning, or some other process.

The paper is here.

Thursday, September 7, 2017

Are morally good actions ever free?

Cory J. Clark, Adam Shniderman, Jamie Luguri, Roy Baumeister, and Peter Ditto
SSRN Electronic Journal, August 2017

Abstract

A large body of work has demonstrated that people ascribe more responsibility to morally bad actions than both morally good and morally neutral ones, creating the impression that people do not attribute responsibility to morally good actions. The present work demonstrates that this is not so: People attributed more free will to morally good actions than morally neutral ones (Studies 1a-1b). Studies 2a-2b distinguished the underlying motives for ascribing responsibility to morally good and bad actions. Free will ascriptions for morally bad actions were driven predominantly by affective punitive responses. Free will judgments for morally good actions were similarly driven by affective reward responses, but also less affectively-charged and more pragmatic considerations (the perceived utility of reward, normativity of the action, and willpower required to perform the action). Responsibility ascriptions to morally good actions may be more carefully considered, leading to generally weaker, but more contextually-sensitive free will judgments.

The research is here.

Thursday, August 3, 2017

The Wellsprings of Our Morality

Daniel M.T. Fessler
What can evolution tell us about morality?
http://www.humansandnature.org

Mother Nature is amoral, yet morality is universal. The natural world lacks both any guiding hand and any moral compass. And yet all human societies have moral rules, and, with the exception of some individuals suffering from pathology, all people experience profound feelings that shape their actions in light of such rules. Where then did these constellations of rules and feelings come from?

The term “morality” jumbles rules and feelings, as well as judgments of others’ actions that result from the intersection of rules and feelings. Rules, like other features of culture, are ideas transmitted from person to person: “It is laudable to do X,” “It is a sin to do Y,” etc. Feelings are internal states evoked by events, or by thoughts of future possibilities: “I am proud that she did X,” “I am outraged that he did Y,” and so on. Praise or condemnation are social acts, often motivated by feelings, in response to other people’s behavior. All of this is commonly called “morality.”

So, what does it mean to say that morality is universal? You don’t need to be an anthropologist to recognize that, while people everywhere experience strong feelings about others’ behavior—and, as a result, reward or punish that behavior—cultures differ with regard to the beliefs on which they base such judgments. Is injustice a graver sin than disrespect for tradition? Which is more important, the autonomy of the individual or the harmony of the group? The answer is that it depends on whom you ask.

The information is here.

Monday, July 31, 2017

Truth or Punishment: Secrecy and Punishing the Self

Michael L. Slepian and Brock Bastian
Personality and Social Psychology Bulletin
First Published July 14, 2017, 1–17

Abstract

We live in a world that values justice; when a crime is committed, just punishment is expected to follow. Keeping one’s misdeed secret therefore appears to be a strategic way to avoid (just) consequences. Yet, people may engage in self-punishment to right their own wrongs to balance their personal sense of justice. Thus, those who seek an escape from justice by keeping secrets may in fact end up serving that same justice on themselves (through self-punishment). Six studies demonstrate that thinking about secret (vs. confessed) misdeeds leads to increased self-punishment (increased denial of pleasure and seeking of pain). These effects were mediated by the feeling one deserved to be punished, moderated by the significance of the secret, and were observed for both self-reported and behavioral measures of self-punishment.

Here is an excerpt:

Recent work suggests, however, that people who are reminded of their own misdeeds will sometimes seek out their own justice. That is, even subtle acts of self-punishment can restore a sense of personal justice, whereby a wrong feels to have been righted (Bastian et al., 2011; Inbar et al., 2013). Thus, we predicted that even though keeping a misdeed secret could lead one to avoid being punished by others, it still could prompt a desire for punishment all the same, one inflicted by the self.

The article is here.

Note: This article has significant implications for psychotherapists.

Wednesday, May 10, 2017

How do you punish a criminal robot?

Christopher Markou
The Independent
Originally posted on April 20, 2017

Here is an excerpt:

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around. Whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in autopilot.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright Brothers took the Kitty Hawk for a joyride. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

The article is here.

Friday, March 24, 2017

A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity

Rothschild, Z.K. & Keefer, L.A.
Motiv Emot (2017). doi:10.1007/s11031-017-9601-2

Abstract

Why do people express moral outrage? While this sentiment often stems from a perceived violation of some moral principle, we test the counter-intuitive possibility that moral outrage at third-party transgressions is sometimes a means of reducing guilt over one’s own moral failings and restoring a moral identity. We tested this guilt-driven account of outrage in five studies examining outrage at corporate labor exploitation and environmental destruction. Study 1 showed that personal guilt uniquely predicted moral outrage at corporate harm-doing and support for retributive punishment. Ingroup (vs. outgroup) wrongdoing elicited outrage at corporations through increased guilt, while the opportunity to express outrage reduced guilt (Study 2) and restored perceived personal morality (Study 3). Study 4 tested whether effects were due merely to downward social comparison and Study 5 showed that guilt-driven outrage was attenuated by an affirmation of moral identity in an unrelated context.
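
The claim that ingroup wrongdoing "elicited outrage at corporations through increased guilt" is a mediation claim. Below is a toy sketch of the simple regression logic behind such a claim; the data, coefficients, and variable names are all invented, and the authors' actual analysis (e.g., bootstrapped indirect effects) may differ.

```python
import numpy as np

# Simulated data for illustration only: condition -> guilt -> outrage.
rng = np.random.default_rng(0)
n = 500
ingroup = rng.integers(0, 2, size=n).astype(float)  # 0 = outgroup, 1 = ingroup wrongdoing
guilt = 0.5 * ingroup + rng.normal(0, 1, n)         # path a baked into the simulation
outrage = 0.6 * guilt + 0.1 * ingroup + rng.normal(0, 1, n)  # paths b and c'

# Path a: effect of condition on the mediator (slope of a simple regression).
a = np.polyfit(ingroup, guilt, 1)[0]

# Paths b and c': regress outrage on both the mediator and the condition.
X = np.column_stack([np.ones(n), guilt, ingroup])
coefs, *_ = np.linalg.lstsq(X, outrage, rcond=None)
b, c_prime = coefs[1], coefs[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {c_prime:.3f}")
```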

The article is here.