Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Motivated Moral Reasoning.

Sunday, May 28, 2023

Above the law? How motivated moral reasoning shapes evaluations of high performer unethicality

Campbell, E. M., Welsh, D. T., & Wang, W. (2023).
Journal of Applied Psychology.
Advance online publication.

Abstract

Recent revelations have brought to light the misconduct of high performers across various fields and occupations who were promoted up the organizational ladder rather than punished for their unethical behavior. Drawing on principles of motivated moral reasoning, we investigate how employee performance biases supervisors’ moral judgment of employee unethical behavior and how supervisors’ performance-focus shapes how they account for moral judgments in promotion recommendations. We test our model in three studies: a field study of 587 employees and their 124 supervisors at a Fortune 500 telecom company, an experiment with two samples of working adults, and an experiment that directly varied explanatory mechanisms. Evidence revealed a moral double standard such that supervisors rendered less punitive judgment of the unethical acts of higher performing employees. In turn, supervisors’ bottom-line mentality (i.e., fixation on achieving results) influenced the degree to which they incorporated their punitive judgments into promotability considerations. By revealing the moral leniency afforded to higher performers and the uneven consequences meted out by supervisors, our results carry implications for behavioral ethics research and for organizations seeking to retain and promote their higher performers while also maintaining ethical standards that are applied fairly across employees.

Here is the opening:

Allegations of unethical conduct perpetrated by prominent, high-performing professionals have been exploding across newsfeeds (Zacharek et al., 2017). From customer service employees and their managers (e.g., Wells Fargo fake accounts; Levitt & Schoenberg, 2020), to actors, producers, and politicians (e.g., long-term corruption of Belarus’ president; Simmons, 2020), to reporters and journalists (e.g., the National Broadcasting Company’s alleged cover-up; Farrow, 2019), to engineers and executives (e.g., Volkswagen’s emissions fraud; Vlasic, 2017), the public has been repeatedly shocked by the egregious behaviors committed by individuals recognized as high performers within their respective fields (Bennett, 2017). 

In the wake of such widespread unethical, corrupt, and exploitative behavior, many have wondered how supervisors could have systematically ignored the conduct of high-performing individuals for so long while they ascended organizational ladders. How could such misconduct have resulted in their advancement to leadership roles rather than stalled or derailed the transgressors’ careers?

The story of Carlos Ghosn at Nissan hints at why and when individuals’ unethical behavior (i.e., lying, cheating, and stealing; Treviño et al., 2006, 2014) may result in less punitive judgment (i.e., the extent to which observed behavior is morally evaluated as negative, incorrect, or inappropriate). During his 30-year career in the automotive industry, Ghosn differentiated himself as a high performer known for effective cost-cutting, strategic planning, and spearheading change; however, in 2018, he fell from grace over allegations of years of financial malfeasance and embezzlement (Leggett, 2019). When allegations broke, Nissan’s CEO stood firm in his punitive judgment that Ghosn’s behavior “cannot be tolerated by the company” (Kageyama, 2018). Still, many questioned why the executives levied judgment on the misconduct that they had overlooked for years. Tokyo bureau chief of the New York Times, Motoko Rich, reasoned that Ghosn “probably would have continued to get away with it … if the company was continuing to be successful. But it was starting to slow down. There were signs that the magic had gone” (Barbaro, 2019). Similarly, an executive pointed squarely to the relevance of Ghosn’s performance, lamenting: “what [had he] done for us lately?” (Chozick & Rich, 2018). As a high performer, Ghosn’s unethical behavior evaded punitive judgment and career consequences from Nissan executives, but their motivation to leniently judge Ghosn’s behavior seemed to wane with his level of performance. In her reporting, Rich observed: “you can get away with whatever you want as long as you’re successful. And once you’re not so successful anymore, then all that rule-breaking and brashness doesn’t look so attractive and appealing anymore” (Barbaro, 2019).

Tuesday, April 12, 2016

Rationalization in Moral and Philosophical Thought

Eric Schwitzgebel and Jonathan Ellis

Abstract

Rationalization, in our intended sense of the term, occurs when a person favors a particular conclusion as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, if that factor then biases the person’s subsequent search for, and assessment of, potential justifications for the conclusion.  Empirical evidence suggests that rationalization is common in ordinary people’s moral and philosophical thought.  We argue that it is likely that the moral and philosophical thought of philosophers and moral psychologists is also pervaded by rationalization.  Moreover, although rationalization has some benefits, overall it would be epistemically better if the moral and philosophical reasoning of both ordinary people and professional academics were not as heavily influenced by rationalization as it likely is.  We discuss the significance of our arguments for cognitive management and epistemic responsibility.


Tuesday, September 15, 2015

Explanatory Judgment, Moral Offense and Value-Free Science

By Matteo Colombo, Leandra Bucher, & Yoel Inbar
Review of Philosophy and Psychology
August 2015

Abstract

A popular view in philosophy of science contends that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses is not influenced by moral, political, economic, or social values, but only by the available evidence. A large body of results in the psychology of motivated reasoning has put pressure on the empirical adequacy of this view. The present study extends this body of results by providing direct evidence that the moral offensiveness of a scientific hypothesis biases explanatory judgment along several dimensions, even when prior credence in the hypothesis is controlled for. Furthermore, it is shown that this bias is insensitive to an economic incentive to be accurate in the evaluation of the evidence. These results contribute to calling into question the attainability of the ideal of a value-free science.


Wednesday, July 8, 2015

How could they?

By Tage Rai
Aeon Magazine
Originally published June 18, 2015

Here is an excerpt:

It would be easier to live in a world where perpetrators believe that violence is wrong and engage in it anyway. That is not the world we live in. While our refusal to acknowledge this basic fact may have helped to orient our own moral compass, it has also stood in the way of interventions that might actually reduce harm. Let’s put aside the philosophical questions that arise once we accept that there is moral disagreement about violence. How does the message that violence is morally motivated aid our efforts to reduce it?

For years, we have been trying to reduce crime by enacting mass incarceration, by placing restrictions on the mentally ill, and by teaching potential perpetrators how to exercise more self-control. On the face of it, these all sound like plausible strategies. But all of them miss their target.

One of the most robust findings in criminology is that increasing the severity of punishment has little deterrent effect. People simply aren’t as sensitive to the potential costs of crime as the rational-choice model predicts they should be, and so efforts to reduce it by cracking down have failed to justify the immense fiscal and social costs of mass incarceration. Meanwhile, because most violent crimes are committed by psychologically healthy individuals, legislation that focuses on the mentally ill – for example, by stopping them from buying guns – would lead to only a small reduction.


Monday, January 6, 2014

Motivated Moral Reasoning in Psychotherapy

John D. Gavazzi, Psy.D., ABPP
Samuel Knapp, Ed.D., ABPP

            In the research literature on psychology and morality, the concept of motivated moral reasoning is relevant to psychotherapy. Motivated moral reasoning occurs when a person’s decision-making is driven toward a specific, desired moral conclusion. Motivated moral reasoning can be influenced by factors such as the perceived intentionality of others and the social nature of moral reasoning (Ditto, Pizarro, & Tannenbaum, 2009). In this article, we will focus on the intuitive, automatic, and affective nature of motivated moral reasoning as these types of judgments occur in psychotherapy. The goal of this article is to help psychologists remain vigilant about the possibilities of motivated moral reasoning in the psychotherapy relationship.


Individuals typically believe that moral judgments are primarily principle-based, well-reasoned, and cognitive. Individuals also trust that moral judgments are made from a top-down approach, meaning moral agents start with moral ideals or principles first, and then apply those principles to a specific situation. In other words, individuals typically believe moral decisions rest on well-reasoned principles that are consistent over time and reliable across situations. In reality, the research reveals that, unless primed for a specific moral dilemma (such as serving on jury duty), individuals typically use a bottom-up strategy in moral reasoning. Research on self-reports of moral decisions shows that individuals seek justifications and ad hoc confirmatory data points to support their reflexive decisions. Furthermore, the reasoning behind moral decisions is context-dependent, meaning that the same moral principles are not applied consistently over time and across situations. Finally, individuals use automatic, intuitive, and emotional processes when making important decisions (Ditto, Pizarro, & Tannenbaum, 2009). While the complexity of moral reasoning depends on a number of factors, individuals tend to make moral judgments first and answer questions later (and only if asked).


Sunday, October 6, 2013

Why We Should Choose Science over Beliefs

By Michael Shermer
Scientific American
Originally published September 24, 2013

Ever since college I have been a libertarian—socially liberal and fiscally conservative. I believe in individual liberty and personal responsibility. I also believe in science as the greatest instrument ever devised for understanding the world. So what happens when these two principles are in conflict? My libertarian beliefs have not always served me well. Like most people who hold strong ideological convictions, I find that, too often, my beliefs trump the scientific facts. This is called motivated reasoning, in which our brain reasons our way to supporting what we want to be true. Knowing about the existence of motivated reasoning, however, can help us overcome it when it is at odds with evidence.


Tuesday, October 1, 2013

Sorry

By William Germano
Lingua Franca - Blog - The Chronicle of Higher Education
Originally posted September 18, 2013

Are academics ever really sorry?

A recent kerfuffle (a good Chronicle of Higher Ed word) at Johns Hopkins involved an interim dean who apologized for asking a research professor to remove a blog post.

When the dean’s apology came forth, my friend Christopher Newfield at the University of California at Santa Barbara tweeted “an explanation would be better than an apology.” I take his point to be that when somebody does what they say they shouldn’t have, it’s not the expression of contrition we’re after; it’s the detailed rationale, the sequence of missteps, that led to the action that finally produced the apology.

(cut)

So what do we do when caught out? We tend to the deflective (“I’m sorry, but my hands were tied”), the absorptive (“I’m sorry, but I had to do what I thought was right”), or the obstructive (“I’m sorry you feel that way”).
