Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Rationalization.

Sunday, October 16, 2022

A framework for understanding reasoning errors: From fake news to climate change and beyond

Pennycook, G. (2022, August 31).
https://doi.org/10.31234/osf.io/j3w7d

Abstract

Humans have the capacity, but perhaps not always the willingness, for great intelligence. From global warming to the spread of misinformation and beyond, our species is facing several major challenges that are the result of the limits of our own reasoning and decision-making. So, why are we so prone to errors during reasoning? In this chapter, I will outline a framework for understanding reasoning errors that is based on a three-stage dual-process model of analytic engagement (intuition, metacognition, and reason). The model has two key implications: 1) That a mere lack of deliberation and analytic thinking is a primary source of errors and 2) That when deliberation is activated, it generally reduces errors (via questioning intuitions and integrating new information) rather than increasing errors (via rationalization and motivated reasoning). In support of these claims, I review research showing the extensive predictive validity of measures that index individual differences in analytic cognitive style – even beyond explicit errors per se. In particular, analytic thinking is not only predictive of skepticism about a wide range of epistemically suspect beliefs (paranormal, conspiratorial, COVID-19 misperceptions, pseudoscience and alternative medicines) as well as decreased susceptibility to bullshit, fake news, and misinformation, but also of important differences in people’s moral judgments and values as well as their religious beliefs (and disbeliefs). Furthermore, in some (but not all) cases, there is evidence from experimental paradigms that supports a causal role of analytic thinking in determining judgments, beliefs, and behaviors. The findings reviewed here provide some reason for optimism for the future: It may be possible to foster analytic thinking and therefore improve the quality of our decisions.

Evaluating the evidence: Does reason matter?

Thus far, I have prioritized explaining the various alternative frameworks. I will now turn to an in-depth review of some of the key relevant evidence that helps adjudicate between these accounts. I will organize this review around two key implications that emerge from the framework that I have proposed.

First, the primary difference between the three-stage model (and related dual-process models) and the social-intuitionist models (and related intuitionist models) is that the former argues that people should be able to overcome intuitive errors using deliberation whereas the latter argues that reason is generally infirm and therefore that intuitive errors will simply dominate. Thus, the reviewed research will investigate the apparent role of deliberation in driving people’s choices, beliefs, and behaviors.

Second, the primary difference between the three-stage model (and related dual-process models) and the identity-protective cognition model is that the latter argues that deliberation facilitates biased information processing whereas the former argues that deliberation generally facilitates accuracy. Thus, the reviewed research will also focus on whether deliberation is linked with inaccuracy in politically-charged or identity-relevant contexts.

Wednesday, December 22, 2021

Dominant groups support digressive victimhood claims to counter accusations of discrimination

F. Danbold et al.
Journal of Experimental Social Psychology
Volume 98, January 2022, 104233

Abstract

When dominant groups are accused of discrimination against non-dominant groups, they often seek to portray themselves as the victims of discrimination instead. Sometimes, however, members of dominant groups counter accusations of discrimination by invoking victimhood on a new dimension of harm, changing the topic being discussed. Across three studies (N = 3081), we examine two examples of this digressive victimhood – Christian Americans responding to accusations of homophobia by claiming threatened religious liberty, and White Americans responding to accusations of racism by claiming threatened free speech. We show that members of dominant groups endorse digressive victimhood claims more strongly than conventional competitive victimhood claims (i.e., ones that claim “reverse discrimination”). Additionally, accounting for the fact that these claims may also stand to benefit a wider range of people and appeal to more abstract principles, we show that this preference is driven by the perception that digressive victimhood claims are more effective at silencing further criticism from the non-dominant group. Underscoring that these claims may be used strategically, we observed that individuals high in outgroup prejudice were willing to express a positive endorsement of the digressive victimhood claims even when they did not fully support the principle they claimed to be defending (e.g., freedom of religion or speech). We discuss implications for real-world intergroup conflicts and the psychology of dominant groups.

Highlights

• Charged with discrimination, dominant groups often claim victimhood.

• These claims can be digressive, shifting the topic of conversation.

• Members of dominant groups prefer digressive claims over competitive claims.

• They see digressive claims as effective in silencing further criticism.

• Digressive victimhood claims are endorsed strategically and sometimes insincerely.

Saturday, June 13, 2020

Rationalization is rational

Fiery Cushman
Behavioral and Brain Sciences, 43, E28 (2020).
doi:10.1017/S0140525X19001730

Abstract

Rationalization occurs when a person has performed an action and then concocts the beliefs and desires that would have made it rational. Then, people often adjust their own beliefs and desires to match the concocted ones. While many studies demonstrate rationalization, and a few theories describe its underlying cognitive mechanisms, we have little understanding of its function. Why is the mind designed to construct post hoc rationalizations of its behavior, and then to adopt them? This may accomplish an important task: transferring information between the different kinds of processes and representations that influence our behavior. Human decision making does not rely on a single process; it is influenced by reason, habit, instinct, norms, and so on. Several of these influences are not organized according to rational choice (i.e., computing and maximizing expected value). Rationalization extracts implicit information – true beliefs and useful desires – from the influence of these non-rational systems on behavior. This is a useful fiction – fiction, because it imputes reason to non-rational psychological processes; useful, because it can improve subsequent reasoning. More generally, rationalization belongs to the broader class of representational exchange mechanisms, which transfer information between many different kinds of psychological representations that guide our behavior. Representational exchange enables us to represent any information in the manner best suited to the particular tasks that require it, balancing accuracy, efficiency, and flexibility in thought. The theory of representational exchange reveals connections between rationalization and theory of mind, inverse reinforcement learning, thought experiments, and reflective equilibrium.

From the Conclusion

But human action is also shaped by non-rational forces. In these cases, any answer to the question Why did I do that? that invokes belief, desire, and reason is at best a useful fiction.  Whether or not we realize it, the question we are actually answering is: What facts would have made that worth doing? Like an amnesic government agent, we are trying to divine our programmer’s intent – to understand the nature of the world we inhabit and our purpose in it. In these cases, rationalization implements a kind of rational inference. Specifically, we infer an adaptive set of representations that guide subsequent reasoning, based on the behavioral prescriptions of non-rational systems. This inference is valid because reasoning, like non-rational processes, is ultimately designed to maximize biological fitness. It is akin to IRL as well as to Bayesian models of theory of mind, and thus it offers a new interpretation of the function of these processes.

The target article is here, along with expert commentary.

Saturday, August 18, 2018

Rationalization is rational


Fiery Cushman
Preprint
Uploaded July 18, 2018

Abstract

Rationalization occurs when a person has performed an action and then concocts the beliefs and desires that would have made it rational. Then, people often adjust their own beliefs and desires to match the concocted ones. While many studies demonstrate rationalization, and a few theories identify its underlying cognitive mechanisms, we have little understanding of its function. Why is the mind designed to construct post hoc rationalizations of its behavior, and then to adopt them? This design may accomplish an important task: to transfer information between the many different processes and representations that influence our behavior. Human decision-making does not rely on a single process; it is influenced by reason, habit, instincts, cultural norms and so on. Several of the processes that influence our behavior are not organized according to rational choice (i.e., maximizing desires conditioned on belief). Thus, rationalization extracts implicit information—true beliefs and useful desires—from the influence of these non-rational systems on behavior. This is not a process of self-perception as traditionally conceived, in which one infers the hidden contents of unconscious reasons. Rather, it is a useful fiction. It is a fiction because it imputes reason to non-rational psychological processes; it is useful because it can improve subsequent reasoning. More generally, rationalization is one example of a broader class of “representational exchange” mechanisms, which transfer information between many different psychological processes that guide our behavior. This perspective reveals connections to theory of mind, inverse reinforcement learning, and reflective equilibrium.

The paper is here.

Asking patients why they engaged in a behavior is another example of a useful fiction. Dr. Cushman suggests psychologists ask: What made that worth doing?

Thursday, July 19, 2018

Ethics Policies Don't Build Ethical Cultures

Dori Meinert
www.shrm.org
Originally posted June 19, 2018

Here is an excerpt:

Most people think they would never voluntarily commit an unethical or illegal act. But when Gallagher asked how many people in the audience had ever received a speeding ticket, numerous hands were raised. Similarly, employees rationalize their misuse of company supplies all the time, such as shopping online on their company-issued computer during work hours.

"It's easy to make unethical choices when they are socially acceptable," he said.

But those seemingly small choices can start people down a slippery slope.

Be on the Lookout for Triggers

No one plans to destroy their career by breaking the law or violating their company's ethics policy. There are usually personal stressors that push them over the edge, triggering a "fight or flight" response. At that point, they're not thinking rationally, Gallagher said.

Financial problems, relationship problems or health issues are the most common emotional stressors, he said.

"If you're going to be an ethical leader, are you paying attention to your employees' emotional triggers?"

The information is here.

Friday, January 12, 2018

The Normalization of Corruption in Organizations

Blake E. Ashforth and Vikas Anand
Research in Organizational Behavior
Volume 25, 2003, Pages 1-52

Abstract

Organizational corruption imposes a steep cost on society, easily dwarfing that of street crime. We examine how corruption becomes normalized, that is, embedded in the organization such that it is more or less taken for granted and perpetuated. We argue that three mutually reinforcing processes underlie normalization: (1) institutionalization, where an initial corrupt decision or act becomes embedded in structures and processes and thereby routinized; (2) rationalization, where self-serving ideologies develop to justify and perhaps even valorize corruption; and (3) socialization, where naïve newcomers are induced to view corruption as permissible if not desirable. The model helps explain how otherwise morally upright individuals can routinely engage in corruption without experiencing conflict, how corruption can persist despite the turnover of its initial practitioners, how seemingly rational organizations can engage in suicidal corruption, and how an emphasis on the individual as evildoer misses the point that systems and individuals are mutually reinforcing.

The article is here.

Monday, January 1, 2018

Leaders Don't Make Deals About Ethics

John Baldoni
Forbes.com
Originally published December 8, 2017

Here is an excerpt:

Partisanship abides in darker recesses of our human nature; it’s about winning at all costs. Partisans comfort themselves that their side is in the right, and therefore, whatever they do to promote it is correct. To them I quote Abraham Lincoln: “My concern is not whether God is on our side; my greatest concern is to be on God’s side, for God is always right.”

Human values do not need to be sanctioned through religious faith. Human values as they relate to morality, equality and dignity are bedrock principles that when cast aside allow aberrant and abhorrent behaviors to flourish. The least among us become the most preyed-upon among us.

Ethics therefore knows no party. The Me Too movement is apolitical; it gives voice to women who have been abused. The preyed upon are beginning to take back what they never should have lost in the first place – their dignity. To argue about which party – or which industry – has the most sexual harassers is a fool’s errand. Sexual harassers exist within every social stratum as well as every political persuasion.

Living by a moral code is putting into practice what you believe is right. That is, you call out men who abuse women – as well as all those who give the abusers sanctuary. Right now, men in powerful positions in the media, business and politics are tumbling like dominoes.

But make no mistake — there are bosses in organizations of every kind who are guilty of sexual harassment and worse. A moral code demands that such men be exposed for their predatory behaviors. It also demands protection for their accusers.

The article is here.

Wednesday, February 8, 2017

Medical culture encourages doctors to avoid admitting mistakes

By Lawrence Schlachter
STAT News
Originally published on January 13, 2017

Here are two excerpts:

In reality, the factor that most influences doctors to hide or disclose medical errors should be clear to anyone who has spent much time in the profession: The culture of medicine frowns on admitting mistakes, usually under the pretense of fear of malpractice lawsuits.

But what’s really at risk are doctors’ egos and the preservation of a system that lets physicians avoid accountability by ignoring problems or shifting blame to “the system” or any culprit other than themselves.

(cut)

What is a patient to do in this environment? The first thing is to be aware of your own predisposition to take everything your doctor says at face value. Listen closely and you may hear cause for more intense questioning.

You will likely never hear the terms negligence, error, mistake, or injury in a hospital. Instead, these harsh but truthful words and phrases are replaced with softer ones like accident, adverse event, or unfortunate outcome. If you hear any of these euphemisms, ask more questions or seek another opinion from a different doctor, preferably at a different facility.

Most doctors would never tell a flagrant lie. But in my experience as a neurosurgeon and as an attorney, too many of them resort to half-truths and glaring omissions when it comes to errors. Beware of passive language like “the patient experienced bleeding” rather than “I made a bad cut”; attributing an error to random chance or a nameless, faceless system; or trivialization of the consequences of the error by claiming something was “a blessing in disguise.”

The article is here.

Saturday, January 7, 2017

The Irrationality Within Us

By Elly Vintiadis
Scientific American blog
Originally published on December 12, 2016

We like to think of ourselves as special because we can reason and we like to think that this ability expresses the essence of what it is to be human. In many ways this belief has formed our civilization; throughout history, we have used supposed differences in rationality to justify moral and political distinctions between different races, genders, and species, as well as between “healthy” and “diseased” individuals. Even to this day, people often associate mental disorder with irrationality and this has very real effects on people living with mental disorders.

But are we really that rational? And is rationality really what distinguishes people who live with mental illness from those who do not? It seems not. After decades of research, there is compelling evidence that we are not as rational as we think we are and that, rather than irrationality being the exception, it is part of who we normally are.

So what does it mean to be rational? We usually distinguish between two kinds of rationality: epistemic rationality, which is involved in acquiring true beliefs about the world and which sets the standard for what we ought to believe, and instrumental rationality, which is involved in decision-making and behavior and is the standard for how we ought to act.

The article is here.

Friday, December 16, 2016

Why moral companies do immoral things

Michael Skapinker
Financial Times
Originally published November 23, 2016

Here is an excerpt:

But I wondered about the “better than average” research cited above. Could the illusion of moral superiority apply to organisations as well as individuals? And could companies believe they were so superior morally that the occasional lapse into immorality did not matter much? The Royal Holloway researchers said they had recently conducted experiments examining just these issues and were preparing to publish the results. They had found that political groups with a sense of moral superiority felt justified in behaving aggressively towards opponents. In experiments, this meant denying them a monetary benefit.

“It isn’t difficult to imagine a similar scenario arising in a competitive organisational context. To the extent that employees may perceive their organisation to be morally superior to other organisations, they might feel licensed to ‘cut corners’ or behave somewhat unethically — for example, to give their organisation a competitive edge.

“These behaviours may be perceived as justified … or even ethical, insofar as they promote the goals of their morally superior organisation,” they told me.

The article is here.

Friday, October 7, 2016

Three Ways To Prevent Getting Set Up For Ethical Failure

Ron Carucci
Forbes.com
Originally posted

Here are two excerpts:

To survive the injustice of unresolved competing goals, leaders, usually middle management, become self-protective, putting the focus of their team or department ahead of others. Such self-protection turns to self-interest as chronic pain persists from living in the gap between unrealistic demands and unfair resource allocation. Resentment turns to justification as people conclude, “I’m not going down with the ship.” And eventually, unfettered self-interest and its inherent justification become conscious choices to compromise, usually from a sense of entitlement. People simply conclude, “I have no choice” or “I deserve this.” Says Jonathan Haidt, Professor of Business Ethics at NYU and founder of Ethical Systems, “Good people will do terrible things when people around them are even gently encouraging them to do so.” In many cases, that “gentle encouragement” comes in the form of simply ignoring what might provoke poor choices.

(cut)

3. Clarify decision rights. Organizational governance – which is different from “Corporate Governance” – is the distribution of authority, resources, and decision rights across an organization. Carefully designed, it synchronizes an organization and ensures natural tensions are openly managed. Knowing which leaders are accountable for which decisions and resources removes the uncertainty many organizations suffer from. When there is confusion about decision rights, competing priorities proliferate, setting the stage for organizational contradictions to arise.

The article is here.

Wednesday, September 21, 2016

Woman uses Indiana religious objections law in defense against child abuse charges

The Chicago Tribune
Originally published August 31, 2016

The attorney for a woman charged with child abuse for allegedly beating her son with a coat hanger says Indiana's religious objections law gives her the right to discipline her children according to her evangelical Christian beliefs.

Kihn Par Thaing, 30, of Indianapolis was arrested in February on felony abuse and neglect charges after a teacher discovered her 7-year-old son's injuries. Thaing is accused of beating her son with a coat hanger, leaving him with 36 bruises and red welts.

Her attorney, Greg Bowes, argues in court documents filed July 29 that the state shouldn't interfere with Thaing's right to raise her children as she deems appropriate. He cited Indiana's Religious Freedom Restoration Act as part of her defense, saying it gives her the right to discipline her children according to her beliefs.

Court documents cite biblical Scripture and state that a parent who "spares the rod, spoils the child."

The article is here.

Monday, August 22, 2016

Rationalizing our Way into Moral Progress

Jesse S. Summers
Ethical Theory and Moral Practice, 1-12 (forthcoming)

Abstract

Research suggests that the explicit reasoning we offer to ourselves and to others is often rationalization, that we act instead on instincts, inclinations, stereotypes, emotions, neurobiology, habits, reactions, evolutionary pressures, unexamined principles, or justifications other than the ones we think we’re acting on, and that we then tell a post hoc story to justify our actions. This is troubling for views of moral progress according to which moral progress proceeds from our engagement with our own and others’ reasons. I consider an account of rationalization, based on Robert Audi’s, to make clear that rationalization, unlike simple lying, can be sincere. Because it can be sincere, and because we also have a desire to be consistent with ourselves, I argue that rationalization sets us up for becoming better people over time, and that a similar case can be made to explain how moral progress among groups of people can proceed via rationalization.

Friday, July 22, 2016

What This White-Collar Felon Can Teach You About Your Temptation To Cross That Ethical Line

Ron Carucci
Forbes.com
Originally posted June 28, 2016

The sobering truth of Law Professor Donald Langevoort’s words silenced the room like a loud mic-drop: “We’re not as ethical as we think we are.” Participants at Ethical Systems’ recent Ethics By Design conference were visibly uncomfortable…because they all knew it was true.

Research strongly indicates that people overestimate how strong their ethics are. I wanted to learn more about why genuinely honest people can be lured to cross lines about which they surely would have predicted, “I would never do that!”

The article is here.

Wednesday, May 11, 2016

Procedural Moral Enhancement

G. Owen Schaefer and Julian Savulescu
Neuroethics, pp. 1-12
First online: 20 April 2016

Abstract

While philosophers are often concerned with the conditions for moral knowledge or justification, in practice something arguably less demanding is just as, if not more, important – reliably making correct moral judgments. Judges and juries should hand down fair sentences, government officials should decide on just laws, members of ethics committees should make sound recommendations, and so on. We want such agents, more often than not and as often as possible, to make the right decisions. The purpose of this paper is to propose a method of enhancing the moral reliability of such agents. In particular, we advocate for a procedural approach: certain internal processes generally contribute to people’s moral reliability. Building on the early work of Rawls, we identify several particular factors related to moral reasoning that are specific enough to be the target of practical intervention: logical competence, conceptual understanding, empirical competence, openness, empathy and bias. Improving these processes can in turn make people more morally reliable in a variety of contexts and has implications for recent debates over moral enhancement.

Monday, April 18, 2016

The Benjamin Franklin Effect

David McRaney
You Are Not So Smart Blog: A Celebration of Self Delusion
October 5, 2011

The Misconception: You do nice things for the people you like and bad things to the people you hate.

The Truth: You grow to like people for whom you do nice things and hate people you harm.

(cut)

Sometimes you can’t find a logical, moral or socially acceptable explanation for your actions. Sometimes your behavior runs counter to the expectations of your culture, your social group, your family or even the person you believe yourself to be. In those moments you ask, “Why did I do that?” and if the answer damages your self-esteem, a justification is required. You feel like a bag of sand has ruptured in your head, and you want relief. You can see the proof in an MRI scan of someone presented with political opinions that conflict with their own. The brain scans of a person shown statements that oppose their political stance show that the highest areas of the cortex, the portions responsible for providing rational thought, get less blood until another statement is presented which confirms their beliefs. Your brain literally begins to shut down when you feel your ideology is threatened. Try it yourself. Watch a pundit you hate for 15 minutes. Resist the urge to change the channel. Don’t complain to the person next to you. Don’t get online and rant. Try and let it go. You will find this is excruciatingly difficult.

The blog post is here.

Note: How do you perceive complex patients or those who do not respond well to psychotherapy?

Tuesday, April 12, 2016

Rationalization in Moral and Philosophical Thought

Eric Schwitzgebel and Jonathan Ellis

Abstract

Rationalization, in our intended sense of the term, occurs when a person favors a particular conclusion as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, if that factor then biases the person’s subsequent search for, and assessment of, potential justifications for the conclusion.  Empirical evidence suggests that rationalization is common in ordinary people’s moral and philosophical thought.  We argue that it is likely that the moral and philosophical thought of philosophers and moral psychologists is also pervaded by rationalization.  Moreover, although rationalization has some benefits, overall it would be epistemically better if the moral and philosophical reasoning of both ordinary people and professional academics were not as heavily influenced by rationalization as it likely is.  We discuss the significance of our arguments for cognitive management and epistemic responsibility.

The paper is here.

Tuesday, February 2, 2016

What Makes Us Cheat? Experiment 2

by Simon Oxenham
BigThink
Originally published January 13, 2016

Dan Ariely, the psychologist who popularised behavioral economics, has made a fascinating documentary exploring what makes us dishonest. I’ve just finished watching it and it’s something of a masterpiece of psychological storytelling, delving deep into contemporary tales of dishonesty, and supporting its narrative with cunningly designed experiments that have been neatly reconstructed for the film camera.

Self-Deception

The article is here.

Monday, February 1, 2016

What Makes Us Cheat? Experiment 1

by Simon Oxenham
BigThink
Originally published January 13, 2016

Dan Ariely, the psychologist who popularised behavioral economics, has made a fascinating documentary exploring what makes us dishonest. I’ve just finished watching it and it’s something of a masterpiece of psychological storytelling, delving deep into contemporary tales of dishonesty, and supporting its narrative with cunningly designed experiments that have been neatly reconstructed for the film camera.

Matrix Experiments and Big Cheaters vs Little Cheaters

The article is here.

How You Justified 10 Lies (or Didn’t)

By Gerald Dworkin
The New York Times - The Stone
Originally published January 14, 2016

Thanks to Stone readers who submitted a response — there were more than 10,000 — to my article, “Are These 10 Lies Justified?” Judging from the number of replies, the task of determining when it is or is not acceptable to lie is obviously one that many people have faced in their own lives. Many of you gave your own examples of lies told and why you believed they were or were not justified. It was heartening to find so many people prepared to reason thoughtfully about important moral issues.

With few exceptions, readers disagreed with me about the legitimacy of one or more of the lies, all of which I believe are justified. (You can revisit the original article here.)

The results, as well as the original scenarios that you were asked to respond to, are below.

The article is here.