Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Behavioral Ethics.

Sunday, November 12, 2023

Ignorance by Choice: A Meta-Analytic Review of the Underlying Motives of Willful Ignorance and Its Consequences

Vu, L., Soraperra, I., Leib, M., et al. (2023).
Psychological Bulletin, 149(9-10), 611–635.
https://doi.org/10.1037/bul0000398

Abstract

People sometimes avoid information about the impact of their actions as an excuse to be selfish. Such “willful ignorance” reduces altruistic behavior and has detrimental effects in many consumer and organizational contexts. We report the first meta-analysis on willful ignorance, testing the robustness of its impact on altruistic behavior and examining its underlying motives. We analyze 33,603 decisions made by 6,531 participants in 56 different treatment effects, all employing variations of an experimental paradigm assessing willful ignorance. Meta-analytic results reveal that 40% of participants avoid easily obtainable information about the consequences of their actions on others, leading to a 15.6-percentage point decrease in altruistic behavior compared to when information is provided. We discuss the motives behind willful ignorance and provide evidence consistent with excuse-seeking behaviors to maintain a positive self-image. We investigate the moderators of willful ignorance and address the theoretical, methodological, and practical implications of our findings on who engages in willful ignorance, as well as when and why.

Public Significance Statement

We present the first meta-analysis on willful ignorance—when individuals avoid information about the negative consequences of their actions to maximize personal outcomes—covering 33,603 decisions made by 6,531 participants across 56 treatment effects. Results demonstrate that the ability to avoid such information decreases altruistic behavior, and that seemingly altruistic behavior may not reflect a true concern for others.


Key findings of the meta-analysis include:

Prevalence of Willful Ignorance: Approximately 40% of participants in the analyzed studies chose to avoid learning about the negative impact of their actions on others.

Impact on Altruism: Willful ignorance significantly reduces altruistic behavior. When provided with information about the consequences of their actions, participants were 15.6 percentage points more likely to engage in altruistic acts compared to those who chose to remain ignorant.

Motives for Willful Ignorance: The study suggests that willful ignorance may serve as a self-protective mechanism to maintain a positive self-image. By avoiding information about the harm caused by their actions, individuals can protect their self-perception as moral and ethical beings.

Tuesday, April 4, 2023

Chapter One - Moral inconsistency

Effron, D. A., & Helgason, B. A.
Advances in Experimental Social Psychology
Volume 67, 2023, Pages 1-72

Abstract

We review a program of research examining three questions. First, why is the morality of people's behavior inconsistent across time and situations? We point to people's ability to convince themselves they have a license to sin, and we demonstrate various ways people use their behavioral history and others—individuals, groups, and society—to feel licensed. Second, why are people's moral judgments of others' behavior inconsistent? We highlight three factors: motivation, imagination, and repetition. Third, when do people tolerate others who fail to practice what they preach? We argue that people only condemn others' inconsistency as hypocrisy if they think the others are enjoying an “undeserved moral benefit.” Altogether, this program of research suggests that people are surprisingly willing to enact and excuse inconsistency in their moral lives. We discuss how to reconcile this observation with the foundational social psychological principle that people hate inconsistency.

(cut)

The benefits of moral inconsistency

The present chapter has focused on the negative consequences of moral inconsistency. We have highlighted how the factors that promote moral inconsistency can allow people to lie, cheat, express prejudice, and reduce their condemnation of others' morally suspect behaviors ranging from leaving the scene of an accident to spreading fake news. At the same time, people's apparent proclivity for moral inconsistency is not all bad.

One reason is that, in situations that pit competing moral values against each other, moral inconsistency may be unavoidable. For example, when a friend asks whether you like her unflattering new haircut, you must either say no (which would be inconsistent with your usual kind behavior) or yes (which would be inconsistent with your usual honest behavior; Levine, Roberts, & Cohen, 2020). If you discover corruption in your workplace, you might need to choose between blowing the whistle (which would be inconsistent with your typically loyal behavior toward the company) or staying silent (which would be inconsistent with your typically fair behavior; Dungan, Waytz, & Young, 2015; Waytz, Dungan, & Young, 2013).

Another reason is that people who strive for perfect moral consistency may incur steep costs. They may be derogated and shunned by others, who feel threatened and judged by these “do-gooders” (Howe & Monin, 2017; Minson & Monin, 2012; Monin, Sawyer, & Marquez, 2008; O’Connor & Monin, 2016). Or they may sacrifice themselves and loved ones more than they can afford, like the young social worker who consistently donated to charity until she and her partner were living on 6% of their already-modest income, or the couple who, wanting to consistently help children in need of a home, adopted 22 kids (MacFarquhar, 2015). In short, we may enjoy greater popularity and an easier life if we allow ourselves at least some moral inconsistency.

Finally, moral inconsistency can sometimes benefit society. Evolving moral beliefs about smoking (Rozin, 1999; Rozin & Singh, 1999) have led to considerable public health benefits. Stalemates in partisan conflict are hard to break if both sides rigidly refuse to change their judgments and behavior surrounding potent moral issues (Brandt, Wetherell, & Crawford, 2016). Same-sex marriage, women's sexual liberation, and racial desegregation required inconsistency in how people treated actions that were once considered wrong. In this way, moral inconsistency may be necessary for moral progress.

Monday, November 18, 2019

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights. Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson is based on the notion that the greatest challenge to having effective C&E programs in organizations is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we were as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense, and the power of logical thinking would do the rest. Behavioral ethics teaches that this assumption is ill-founded.

The info is here.

Tuesday, June 11, 2019

The Lawyer Who Wants to Transform Legal Ethics with Behavioral Science

Brian Gallagher
www.ethicalsystems.org
Originally posted May 28, 2019

Here is an excerpt:

In a paper on the psychology of conflicts of interest, you wrote that, “Too often, the Supreme Court has made assumptions about the behavior of defense lawyers without empirical support.” How does behavioral science inform the way the Supreme Court should think about defense lawyers?

In the last 40 years, the Supreme Court has analyzed conflicts of interest in a manner that, I believe, makes unsupported assumptions about how criminal defense lawyers respond to allegations about their own misbehavior. My argument is that lawyers—like all people—are poorly equipped to recognize and address their own conflicts of interest. As a result, I propose that constitutional standards for conflicts of interest should be treated more like the ethical rules concerning conflicts, which focus on the risk that a conflict will influence a lawyer’s behavior rather than whether a conflict has, in fact, caused an adverse effect on the legal representation that a client received. I’m happy that my analysis has been cited by a few state courts that have looked at these and similar issues—and who knows, maybe someday the Supreme Court will cite behavioral research in forming its opinion on this topic.

You recently shared a paper on your blog, calling it a “fascinating discussion of the role of behavioral ethics in the context of judicial decision-making.” Which points or lessons stood out to you the most?

Interestingly, in a series of decisions about the constitutional standards for judicial conflicts of interest, the Supreme Court seems to be a bit more behaviorally realistic about conflicts of interest than it has been about attorney conflicts. For instance, in a case from a few terms ago, the Supreme Court—in deciding whether a justice on the Pennsylvania Supreme Court could properly adjudicate a death penalty case when he had previously been the prosecutor who authorized capital charges against the defendant—noted that “bias is easy to attribute to others and difficult to discern in oneself.” The Court went even further, noting that when a judge is asked to participate in a case in which he or she previously served as a prosecutor, there is “a risk that the judge would be so psychologically wedded to his or her previous position as a prosecutor that the judge would consciously or unconsciously avoid the appearance of having erred or changed position.”

The info is here.

Thursday, November 29, 2018

Ethical Free Riding: When Honest People Find Dishonest Partners

Jörg Gross, Margarita Leib, Theo Offerman, & Shaul Shalvi
Psychological Science
https://doi.org/10.1177/0956797618796480

Abstract

Corruption is often the product of coordinated rule violations. Here, we investigated how such corrupt collaboration emerges and spreads when people can choose their partners versus when they cannot. Participants were assigned a partner and could increase their payoff by coordinated lying. After several interactions, they were either free to choose whether to stay with or switch their partner or forced to stay with or switch their partner. Results reveal that both dishonest and honest people exploit the freedom to choose a partner. Dishonest people seek a partner who will also lie—a “partner in crime.” Honest people, by contrast, engage in ethical free riding: They refrain from lying but also from leaving dishonest partners, taking advantage of their partners’ lies. We conclude that to curb collaborative corruption, relying on people’s honesty is insufficient. Encouraging honest individuals not to engage in ethical free riding is essential.

Conclusion
The freedom to select partners is important for the establishment of trust and cooperation. As we show here, however, it is also associated with potential moral hazards. For individuals who seek to keep the risk of collusion low, policies providing the freedom to choose one’s partners should be implemented with caution. Relying on people’s honesty may not always be sufficient because honest people may be willing to tolerate others’ rule violations if they stand to profit from them. Our results clarify yet again that people who are not willing to turn a blind eye and stand up to corruption should receive all praise.

Friday, November 23, 2018

The Moral Law Within: The Scientific Case For Self-Governance

Carsten Tams
Forbes.com
Originally posted September 26, 2018

Here is an excerpt:

The behavioral ethics literature, and its reception in the ethics and compliance field, is following a similar trend. Behavioral ethics is often defined as the discipline that helps to explain why good people do bad things. It frequently focuses on how various biases, cognitive heuristics, blind spots, ethical fading, bounded ethicality, or rationalizations compromise people’s ethical intentions.

To avoid misunderstandings, I am a fan and avid consumer of behavioral science literature. Understanding unethical biases is fascinating and raising awareness about them is useful. But it is only half the story. There is more to behavioral science than biases and fallacies. A lopsided focus on biases may lead us to view people’s morality as hopelessly flawed. Standing amidst a forest crowded by biases and fallacies, we may forget that people often judge and act morally.

Such an anthropological bias has programmatic consequences. If we frame organizational ethics simply as a problem of people’s ethical biases, we will focus on keeping these negative biases in check. This framing, however, does not provide a rationale for supporting people’s capacity for self-governed ethical behavior. For such a rationale, we would need evidence that such a capacity exists. The human capacity for morality has been a subject of rigorous inquiry across diverse behavioral disciplines. In the following, this article will highlight a selection of major contributions to this inquiry.

The info is here.

Sunday, June 25, 2017

Managing for Academic Integrity in Higher Education: Insights From Behavioral Ethics

Sheldene Simola
Scholarship of Teaching and Learning in Psychology
Vol 3(1), Mar 2017, 43–57.

Despite the plethora of research on factors associated with academic dishonesty and ways of averting it, such dishonesty remains a significant concern. There is a need to identify overarching frameworks through which academic dishonesty might be understood, which might also suggest novel yet research-supported practical insights aimed at prevention. Hence, this article draws upon the burgeoning field of behavioral ethics to highlight a dual processing framework on academic dishonesty and to provide additional and sometimes counterintuitive practical insights into preventing this predicament. Six themes from within behavioral ethics are elaborated. These indicate the roles of reflective, conscious deliberation in academic (dis)honesty, as well as reflexive, nonconscious judgment; the roles of rationality and emotionality; and the ways in which conscious and nonconscious situational cues can cause individual moral identity or moral standards to become more or less salient to, and therefore influential in, decision-making. Practical insights and directions for future research are provided.

The article is here.

Friday, February 14, 2014

Better than ever? Employee reactions to ethical failures in organizations, and the ethical recovery paradox

By Marshall Schminke, James Caldwell, Maureen L. Ambrose, Sean R. McMahon
Organizational Behavior and Human Decision Processes
Volume 123, Issue 2, March 2014, Pages 206–219

Abstract

This research examines organizational attempts to recover internally from ethical failures witnessed by employees. Drawing on research on service failure recovery, relationship repair, and behavioral ethics, we investigate how witnessing unethical acts in an organization impacts employees and their relationship with their organization. In two studies—one in the lab and one in the field—we examine the extent to which it is possible for organizations to recover fully from these ethical lapses. Results reveal an ethical recovery paradox, in which exemplary organizational efforts to recover internally from ethical failure may enhance employee perceptions of the organization to a more positive level than if no ethical failure had occurred.

The entire article is here, behind a paywall.