Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Transgressions.

Tuesday, July 12, 2022

Donald Trump and the rationalization of transgressive behavior: The role of group prototypicality and identity advancement

Davies, B., Leicht, C., & Abrams, D.
Journal of Applied Social Psychology
Volume 52, Issue 7, July 2022
Pages 481-495

Abstract

Transgressive leadership, especially in politics, can have significant consequences for groups and communities. However, research suggests that transgressive leaders are often granted deviance credit and regarded sympathetically by followers due to perceptions of the leader's group prototypicality and identity advancement. We extend previous work by examining whether these perceptions additionally play a role in rationalizing the transgressions of a leader and whether deviance credit persists after a leader exits their leadership position. The present three-wave longitudinal study (N = 200) addresses these questions using the applied context of the 2020 US Presidential election. Across three survey waves administered during and after Donald Trump's election loss, Republicans perceived three transgressive behaviors (sharing false information, nepotism, and abuse of power) as less unethical when committed by Donald Trump than when the same behaviors were viewed in isolation. Perceptions of Trump's identity advancement, but not his group prototypicality, predicted the extent to which Republicans downplayed the unethicalness of his transgressions. Decreases in identity advancement across time were also related to increases in perceptions of Trump's unethicalness. Implications for the social identity theory of leadership, subjective group dynamics, and the broader consequences of deviance credit to transgressive leaders are discussed.

Discussion

This study aimed to understand how followers of transgressive leaders rationalize their leader's behavior, to what extent group prototypicality and identity advancement encourage this rationalization, and whether these effects would persist after a leader exits their leadership position. Specifically, we expected that Republicans would downplay the perceived unethicalness of behavior by Donald Trump relative to the same behavior when unattributed, and that this downplaying would be predicted by perceptions of Trump's group prototypicality and identity advancement. We also expected that, following his election loss, Donald Trump would be perceived as less prototypical and less identity advancing, and concomitantly as more unethical. In partial support of these hypotheses, we found that Republicans did indeed downplay the perceived unethicalness of Donald Trump's behavior, but that this was only predicted by perceptions of his identity advancement, and not his group prototypicality. In contrast to expectations, perceptions of Donald Trump's prototypicality and identity advancement, after controlling for his encouragement of the Capitol riots, did not decrease after his election loss, and neither did perceptions of his unethicalness increase. However, we found that intra-individual drops in perceptions of Trump's identity advancement (but not group prototypicality) did correspond with increases in perceptions of his unethicalness for two of the three transgressive behaviors. Evidence from the cross-lagged analysis is consistent with the interpretation that initial perceptions of identity advancement influenced later evaluations of Donald Trump's unethicalness, rather than the reverse. Overall, these results provide an important extension of previous deviance credit theory and research, highlighting the role of identity advancement and presenting the rationalization of a leader's behavior as a novel mechanism in the support of transgressive leaders. The applied and longitudinal nature of this study additionally demonstrates how social psychological processes operate in real-world contexts, providing a much-needed contribution to more ecologically valid behavioral research.


Editor's note: Contemplate this research as you watch the J6 committee findings today and in the future. I wonder whether these perceptions will change once the J6 hearings have been seen in their entirety.

Saturday, January 12, 2019

Monitoring Moral Virtue: When the Moral Transgressions of In-Group Members Are Judged More Severely

Karim Bettache, Takeshi Hamamura, J.A. Idrissi, R.G.J. Amenyogbo, & C. Chiu
Journal of Cross-Cultural Psychology
First Published December 5, 2018
https://doi.org/10.1177/0022022118814687

Abstract

Literature indicates that people tend to judge the moral transgressions committed by out-group members more severely than those of in-group members. However, these transgressions often conflate a moral transgression with some form of intergroup harm. There is little research examining in-group versus out-group transgressions of harmless offenses, which violate moral standards that bind people together (binding foundations). As these moral standards center around group cohesiveness, a transgression committed by an in-group member may be judged more severely. The current research presented Dutch Muslims (Study 1), American Christians (Study 2), and Indian Hindus (Study 3) with a set of fictitious stories depicting harmless and harmful moral transgressions. Consistent with our expectations, participants who strongly identified with their religious community judged harmless moral offenses committed by in-group members, relative to out-group members, more severely. In contrast, this effect was absent when participants judged harmful moral transgressions. We discuss the implications of these results.

Wednesday, December 20, 2017

Can psychopathic offenders discern moral wrongs? A new look at the moral/conventional distinction.

Aharoni, E., Sinnott-Armstrong, W., & Kiehl, K. A.
Journal of Abnormal Psychology, 121(2), 484-497 (2012)

Abstract

A prominent view of psychopathic moral reasoning suggests that psychopathic individuals cannot properly distinguish between moral wrongs and other types of wrongs. The present study evaluated this view by examining the extent to which 109 incarcerated offenders with varying degrees of psychopathy could distinguish between moral and conventional transgressions relative to each other and to nonincarcerated healthy controls. Using a modified version of the classic Moral/Conventional Transgressions task that uses a forced-choice format to minimize strategic responding, the present study found that total psychopathy score did not predict performance on the task. Task performance was explained by some individual subfacets of psychopathy and by other variables unrelated to psychopathy, such as IQ. The authors conclude that, contrary to earlier claims, insufficient data exist to infer that psychopathic individuals cannot know what is morally wrong.

The article is here.

Saturday, December 9, 2017

The Root of All Cruelty

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.

Saturday, November 18, 2017

For some evangelicals, a choice between Moore and morality

Marc Fisher
The Washington Post
Originally posted November 16, 2017

Here is an excerpt:

What’s happening in the churches of Alabama — a state where half the residents consider themselves evangelical Christians, double the national average, according to a Pew Research study — is nothing less than a battle for the meaning of evangelism, some church leaders say. It is a titanic struggle between those who believe there must be one clear, unalterable moral standard and those who argue that to win the war for the nation’s soul, Christians must accept morally flawed leaders.

Evangelicals are not alone in shifting their view of the role moral character should play in choosing political leaders. Between 2011 and last year, the percentage of Americans who say politicians who commit immoral acts in their private lives can still behave ethically in public office jumped to 61 percent from 44 percent, according to a Public Religion Research Institute/Brookings poll. During the same period, the shift among evangelicals was even more dramatic, moving to 72 percent from 30 percent, the survey found.

“What you’re seeing here is rank hypocrisy,” said John Fea, an evangelical Christian who teaches history at Messiah College in Mechanicsburg, Pa. “These are evangelicals who have decided that the way to win the culture is now uncoupled from character. Their goal is the same as it was 30 years ago, to restore America to its Christian roots, but the political playbook has changed.”

The article is here.

And yes, I live in Mechanicsburg, PA, but I don't know John Fea.

Friday, November 17, 2017

Going with your gut may mean harsher moral judgments

Jeff Sossamon
www.futurity.org
Originally posted November 2, 2017

Going with your intuition could make you judge others’ moral transgressions more harshly and keep you from changing your mind, even after considering all the facts, a new study suggests.

The findings show that people who strongly rely on intuition automatically condemn actions they perceive to be morally wrong, even if there is no actual harm.

In psychology, intuition, or “gut instinct,” is defined as the ability to understand something immediately, without the need for reasoning.

“It is now widely acknowledged that intuitive processing influences moral judgment,” says Sarah Ward, a doctoral candidate in social and personality psychology at the University of Missouri.

“We thought people who were more likely to trust their intuition would be more likely to condemn things that are shocking, whereas people who don’t rely on gut feelings would not condemn these same actions as strongly,” Ward says.

Ward and Laura King, professor of psychological sciences, had study participants read through a series of scenarios and judge whether the action was wrong, such as an individual giving a gift to a partner that had previously been purchased for an ex.

The article is here.

Thursday, September 21, 2017

When is a lie acceptable? Work and private life lying acceptance depends on its beneficiary

Katarzyna Cantarero, Piotr Szarota, E. Stamkou, M. Navas & A. del Carmen Dominguez Espinosa
The Journal of Social Psychology 
Pages 1-16 | Received 02 Jan 2017, Accepted 25 Apr 2017, Published online: 14 Aug 2017

ABSTRACT

In this article we show that when analyzing attitudes towards lying in a cross-cultural setting, both the beneficiary of the lie (self vs. other) and the context (private life vs. professional domain) should be considered. In a study conducted in Estonia, Ireland, Mexico, The Netherlands, Poland, Spain, and Sweden (N = 1345), in which participants evaluated stories presenting various types of lies, we found it useful to rely on these dimensions. Results showed that in the joint sample the most acceptable were other-oriented lies concerning private life, then other-oriented lies in the professional domain, followed by egoistic lies in the professional domain; the least acceptance was shown for egoistic lies regarding one’s private life. We found a negative correlation between acceptance of a behavior and the evaluation of its deceitfulness.

Here is an excerpt:

Research shows differences in reactions to moral transgressions depending on the culture of the respondent as culture influences our moral judgments (e.g., Gold, Colman, & Pulford, 2014; Graham, Meindl, Beall, Johnson, & Zhang, 2016). For example, when analyzing transgressions of community (e.g., hearing children talking with their teacher the same way as they do towards their peers) Indian participants showed more moral outrage than British participants (Laham, Chopra, Lalljee, & Parkinson, 2010). Importantly, one of the main reasons why we can observe cross-cultural differences in reactions to moral transgressions is that culture influences our perception of whether an act itself constitutes a moral transgression at all (Haidt, 2001; Haidt & Joseph, 2004; Shweder, Mahapatra, & Miller, 1987; Shweder, Much, Mahapatra, & Park, 1997). Haidt, Koller and Dias (1993) showed that Brazilian participants would perceive some acts of victimless yet offensive actions more negatively than did Americans. The authors argue that for American students some of the acts that were being evaluated (e.g., using an old flag of one’s country to clean the bathroom) fall outside the moral domain and are only a matter of social convention, whereas Brazilians would perceive them as morally wrong.

The paper is here.

Saturday, August 26, 2017

Liars, Damned Liars, and Zealots: The Effect of Moral Mandates on Transgressive Advocacy Acceptance

Allison B. Mueller, Linda J. Skitka
Social Psychological and Personality Science 
First published: July 25, 2017

Abstract

This research explored people’s reactions to targets who “went too far” to support noble causes. We hypothesized that observers’ moral mandates would shape their perceptions of others’ advocacy, even when that advocacy was transgressive, that is, when it used norm-violating means (i.e., lying) to achieve a preferred end. Observers were expected to accept others’ advocacy, independent of its credibility, to a greater extent when it bolstered their strong (vs. weak) moral mandate. Conversely, observers with strong (vs. weak) moral conviction for the cause were expected to condemn others’ advocacy—independent of its credibility—to a greater degree when it represented progress for moral opponents. Results supported these predictions. When evaluating a target in a persuasive communication setting, people’s judgments were uniquely shaped by the degree to which the target bolstered or undermined a cherished moral mandate.

Here is part of the Discussion Section:

These findings expand our knowledge of the moral mandate effect in two key ways. First, this work suggests that the moral mandate effect extends to specific individuals, not just institutions and authorities. Moral mandates may shape people’s perceptions of any target who engages in norm-violating behaviors that uphold moralized causes: co-workers, politicians, or CEOs. Second, this research suggests that, although people are not comfortable excusing others for heinous crimes that serve a moralized end (Mullen & Skitka, 2006), they appear comparatively tolerant of norm violations like lying.

A troubling and timely implication of these findings is that political figures may be able to act in corrupt ways without damaging their images (at least in the eyes of their supporters).

The article is here.

Tuesday, July 11, 2017

Moral Judgments and Social Stereotypes: Do the Age and Gender of the Perpetrator and the Victim Matter?

Qiao Chu, Daniel Grühn
Social Psychological and Personality Science
First Published June 19, 2017

Abstract

We investigated how moral judgments were influenced by (a) the age and gender of the moral perpetrator and victim, (b) the moral judge’s benevolent ageism and benevolent sexism, and (c) the moral judge’s gender. We systematically manipulated the age and gender of the perpetrators and victims in moral scenarios, and participants in two studies made judgments about the moral transgressions. We found that (a) people made more negative judgments when the victims were old or female rather than young or male, (b) benevolent ageism influenced people’s judgments about young versus old perpetrators, and (c) people had differential moral expectations of perpetrators who belonged to their same-gender group versus opposite-gender group. The findings suggest that age and gender stereotypes are so salient that they bias people’s moral judgments even when the transgression is undoubtedly intentional and hostile.

The article is here.

Wednesday, June 7, 2017

What do White House Rules Mean if They Can Be Circumvented?

Sheelah Kolhatkar
The New Yorker
Originally posted June 6, 2017

Here is an excerpt:

Each Administration establishes its own ethics rules, often by executive order, which go beyond ethics laws codified by Congress (those laws require such things as financial-disclosure forms from government employees, the divestiture of assets if they pose conflicts, and recusal from government matters if they intersect with personal business). While the rules established by law are hard and fast, officials can be granted waivers from the looser executive-order rules. The Obama Administration granted a handful of such waivers over the course of its eight years. What’s startling with the Trump White House is just how many waivers have been issued so early in Trump’s term—more than a dozen were disclosed last week, with another twenty-four expected this week, according to a report in the Wall Street Journal—as well as the Administration’s attempt to keep them secret, all while seeming to flout the laws that dictate how the whole system should work.

The ethics waivers made public last week apply to numerous officials who are now working on matters affecting the same companies and industries they represented before joining the Administration. The documents were only released after the Office of Government Ethics pressed the Trump Administration to make them public, which is how they have been handled in the past; the White House initially refused, attempting to argue that the ethics office lacked the standing to even ask for them. After a struggle, the Administration relented, but many of the waivers it released were missing critical information, such as the dates when they were issued. One waiver in particular, which appears to apply to Trump’s chief strategist, Stephen Bannon, without specifically naming him, grants Administration staff permission to communicate with news organizations where they might have formerly worked (Breitbart News, in Bannon’s case). The Bannon-oriented waiver, issued by the “Counsel to the President,” contains the line “I am issuing this memorandum retroactive to January 20, 2017.”

Walter Shaub, the head of the Office of Government Ethics, quickly responded that there is no such thing as a “retroactive” ethics waiver. Shaub told the Times, “If you need a retroactive waiver, you have violated a rule.”

The article is here.

Monday, November 21, 2016

A Theory of Hypocrisy

Eric Schwitzgebel
The Splintered Mind blog
Originally posted in October

Here is an excerpt:

Furthermore, if they are especially interested in the issue, violations of those norms might be more salient and visible to them than for the average person. The person who works in the IRS office sees how frequent and easy it is to cheat on one's taxes. The anti-homosexual preacher sees himself in a world full of gays. The environmentalist grumpily notices all the giant SUVs rolling down the road. Due to an increased salience of violations of the norms they most care about, people might tend to overestimate the frequency of the violations of those norms -- and then when they calibrate toward mediocrity, their scale might be skewed toward estimating high rates of violation. This combination of increased salience of unpunished violations plus calibration toward mediocrity might partly explain why hypocritical norm violations are more common than a purely strategic account might suggest.

But I don't think that's enough by itself to explain the phenomenon, since one might still expect people to tend to avoid conspicuous moral advocacy on issues where they know they are average-to-weak; and even if their calibration scale is skewed a bit high, they might hope to pitch their own behavior especially toward the good side on that particular issue -- maybe compensating by allowing themselves more laxity on other issues.

The blog post is here.

Saturday, November 19, 2016

Risk Management and You: 9 Most Frequent Violations for Psychologists

Ken Pope and Melba Vasquez
Ethics in Psychotherapy and Counseling: A Practical Guide (5th edition)
(2016)

For U.S. and Canadian psychologists, the 9 most frequent causes among the 5,582 disciplinary actions over the years were (in descending order of frequency):

  1. unprofessional conduct, 
  2. sexual misconduct, 
  3. negligence, 
  4. nonsexual dual relationships, 
  5. conviction of a crime, 
  6. failure to maintain adequate or accurate records, 
  7. failure to comply with continuing education or competency requirements, 
  8. inadequate or improper supervision or delegation, and 
  9. substandard or inadequate care. 

Friday, November 18, 2016

The shame of public shaming

Russell Blackford
The Conversation
Originally published May 6, 2016

Here is an excerpt:

Shaming is on the rise. We’ve shifted – much of the time – to a mode of scrutinising each other for purity. Very often, we punish decent people for small transgressions or for no real transgressions at all. Online shaming, conducted via the blogosphere and our burgeoning array of social networking services, creates an environment of surveillance, fear and conformity.

The making of a call-out culture

I noticed the trend – and began to talk about it – around five years ago. I’d become increasingly aware of cases where people with access to large social media platforms used them to “call out” and publicly vilify individuals who’d done little or nothing wrong. Few onlookers were prepared to support the victims. Instead, many piled on with glee (perhaps to signal their own moral purity; perhaps, in part, for the sheer thrill of the hunt).

Since then, the trend to an online call-out culture has continued and even intensified, but something changed during 2015. Mainstream journalists and public intellectuals finally began to express their unease.

The article is here.

Friday, September 2, 2016

But Did They Do It on Purpose?

By Dan Falk
Scientific American
Originally published July 1, 2016

Here is an excerpt:

In all societies, the most severe transgressions draw the harshest judgments, but cultures differ on whether or not intent is weighed heavily in such crimes. One scenario, for example, asked respondents to imagine that someone had poisoned a communal well, harming dozens of villagers. In many nonindustrial societies, this was seen as the most severe wrongdoing—and yet intent seemed to matter very little. The very act of poisoning the well “was judged to be so bad that, whether it was on purpose or accidental, it ‘maxed out’ the badness judgments,” explains lead author H. Clark Barrett of the University of California, Los Angeles. “They accepted that it was accidental but said it's your responsibility to be vigilant in cases that cause that degree of harm.”

The findings also suggest that people in industrial societies are more likely in general than those in traditional societies to consider intent. This, Barrett says, may reflect the fact that people raised in the West are immersed in complex sets of rules; judges, juries and law books are just the tip of the moral iceberg. “In small-scale societies, judgment may be equally sophisticated, but it isn't codified in these elaborate systems,” he notes. “In some of these societies, people argue about moral matters for just as long as they do in any court in the U.S.”

The article is here.

Sunday, March 1, 2015

Online processing of moral transgressions: ERP evidence for spontaneous evaluation

Hartmut Leuthold, Angelika Kunkel, Ian G. Mackenzie and Ruth Filik
Soc Cogn Affect Neurosci (2015)
doi: 10.1093/scan/nsu151

Abstract

Experimental studies using fictional moral dilemmas indicate that both automatic emotional processes and controlled cognitive processes contribute to moral judgments. However, not much is known about how people process socio-normative violations that are more common to their everyday life nor the time-course of these processes. Thus, we recorded participants’ electrical brain activity while they were reading vignettes that either contained morally acceptable vs unacceptable information or text materials that contained information which was either consistent or inconsistent with their general world knowledge. A first event-related brain potential (ERP) positivity peaking at ∼200 ms after critical word onset (P200) was larger when this word involved a socio-normative or knowledge-based violation. Subsequently, knowledge-inconsistent words triggered a larger centroparietal ERP negativity at ∼320 ms (N400), indicating an influence on meaning construction. In contrast, a larger ERP positivity (larger late positivity), which also started at ∼320 ms after critical word onset, was elicited by morally unacceptable compared with acceptable words. We take this ERP positivity to reflect an implicit evaluative (good–bad) categorization process that is engaged during the online processing of moral transgressions.

The article is here.

Thursday, December 4, 2014

‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good

By G. Kahane, J. Everett, B. Earp, M. Farias, and J. Savulescu
Cognition, Vol 134, Jan 2015, pp 193-209.

Highlights

• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.

Abstract

A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.

The entire article is here.

Tuesday, July 8, 2014

A Lack of Material Resources Causes Harsher Moral Judgments

By Marko Pitesa and Stefan Thau
Psychological Science 
March 2014 vol. 25 no. 3 702-710

Abstract

In the research presented here, we tested the idea that a lack of material resources (e.g., low income) causes people to make harsher moral judgments because a lack of material resources is associated with a lower ability to cope with the effects of others’ harmful behavior. Consistent with this idea, results from a large cross-cultural survey (Study 1) showed that both a chronic (due to low income) and a situational (due to inflation) lack of material resources were associated with harsher moral judgments. The effect of inflation was stronger for low-income individuals, whom inflation renders relatively more vulnerable. In a follow-up experiment (Study 2), we manipulated whether participants perceived themselves as lacking material resources by employing different anchors on the scale they used to report their income. The manipulation led participants in the material-resources-lacking condition to make harsher judgments of harmful, but not of nonharmful, transgressions, and this effect was explained by a sense of vulnerability. Alternative explanations were excluded. These results demonstrate a functional and contextually situated nature of moral psychology.

The entire article is here.