Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, April 7, 2023

Dishonor Code: What Happens When Cheating Becomes the Norm?

Suzy Weiss
The Free Press
Originally posted 16 MAR 23

Here are two excerpts:

Amy Kind, a philosophy professor at Claremont McKenna, said that, at the prestigious liberal arts college just east of Los Angeles, “Cheating is a big concern among the faculty.”

Nor do students have much incentive to turn back the clock: they’re getting better grades for less work than ever. 

Exhibit A: Greye Dunn, a recent Boston University graduate who majored in international relations and minored in Spanish. Dunn said he never cheated per se, but he benefited handsomely from the new, lower standards. His pre-Covid GPA was just north of 3.0; during Covid, he averaged a 3.5. And he knows plenty of students who flouted the rules.

“Many students want the credential, and they just want the easiest way to get that,” Gabriel Rossman, a sociology professor at UCLA, told me. 

A sophomore at the University of Pennsylvania’s prestigious business school, who declined to give me her name, said: “They’re here for the Wharton brand, a 4.0 GPA, and to party.”

“The students see school as a stepping stone,” Beyda told me. He meant they went on to graduate school or to jobs at consulting firms like McKinsey or Bain or in finance at Goldman Sachs, and then a spouse, a house, children, private school, vacations in Provence—all the nice things in life. 

(cut)

Professors describe feeling demoralized—“I didn’t get into academia to be a cop,” a CUNY professor in the English department told me. Faculty at other schools likewise describe feeling helpless when it comes to calling out cheating, or even catching it. There’s always another app, another workaround. 

Plus, it’s not necessarily smart to report bad behavior. 

“Nontenured faculty have no real choice but to compromise their professional standards and the quality of the students’ own education to take a customer’s-always-right approach,” Gabriel Rossman at UCLA told me. 

That’s because lower-level courses, where cheating is more rampant, tend to be taught by nontenured faculty with little job security—the kind of people who fear getting negative student evaluations. “Students can be tyrants,” the CUNY English professor said. “It’s like Yelp. The only four people who are going to review the restaurant are the people who are mad.” 

Thursday, April 6, 2023

People recognize and condone their own morally motivated reasoning

Cusimano, C., & Lombrozo, T. (2023).
Cognition, 234, 105379.

Abstract

People often engage in biased reasoning, favoring some beliefs over others even when the result is a departure from impartial or evidence-based reasoning. Psychologists have long assumed that people are unaware of these biases and operate under an “illusion of objectivity.” We identify an important domain of life in which people harbor little illusion about their biases – when they are biased for moral reasons. For instance, people endorse and feel justified believing morally desirable propositions even when they think they lack evidence for them (Study 1a/1b). Moreover, when people engage in morally desirable motivated reasoning, they recognize the influence of moral biases on their judgment, but nevertheless evaluate their reasoning as ideal (Studies 2–4). These findings overturn longstanding assumptions about motivated reasoning and identify a boundary condition on Naïve Realism and the Bias Blind Spot. People's tendency to be aware and proud of their biases provides both new opportunities, and new challenges, for resolving ideological conflict and improving reasoning.

Highlights

• Dominant theories assume people form beliefs only under an illusion of objectivity.

• We document a boundary condition on this illusion: morally desirable biases.

• People endorse beliefs they regard as evidentially weak but morally desirable.

• People realize when they have just engaged in morally motivated reasoning.

• Accurate self-attributions of moral bias fully attenuate the ‘bias blind spot’.

From the General discussion

Our beliefs about our beliefs – including whether they are biased or justified – play a crucial role in guiding inquiry, shaping belief revision, and navigating disagreement. One line of research suggests that these judgments are almost universally characterized by an illusion of objectivity such that people consciously reason with the goal of being objective and basing their beliefs on evidence, and because of this, people nearly always assume that their current beliefs meet those standards. Another line of work suggests that people sometimes think that values legitimately bear on whether someone is justified to hold a belief (Cusimano & Lombrozo, 2021b). These findings raise the possibility, consistent with some prior theoretical proposals (Cusimano & Lombrozo, 2021a; Tetlock, 2002), that people will knowingly violate norms of impartiality, or knowingly maintain beliefs that lack evidential support, when doing so advances what they consider to be morally laudable goals. Two predictions follow. First, people should evaluate their beliefs in part based on their perceived moral value. And second, in situations in which people engage in morally motivated reasoning, they should recognize that they have done so and should evaluate their morally motivated reasoning as appropriate. We document support for these predictions across four studies (Table 1).

Conclusion

A great deal of work has assumed that people treat objectivity and evidence-based reasoning as cardinal norms governing their belief formation. This assumption has grown increasingly tenuous in light of recent work highlighting the importance of moral concerns in almost all facets of life. Consistent with this recent work, we find evidence that people’s evaluations of the moral quality of a proposition predict their subjective confidence that it is true, their likelihood of claiming that they believe it and know it, and the extent to which they take their belief to be justified. Moreover, people exhibit metacognitive awareness of this fact and approve of morality’s influence on their reasoning. People often want to be right, but they also want to be good – and they know it.

Wednesday, April 5, 2023

Moral Judgments of Human vs. AI Agents in Moral Dilemmas

Zhang, Y., Wu, J., Yu, F., & Xu, L. (2023).
Behavioral Sciences, 13(2), 181. MDPI AG.

Abstract

Artificial intelligence has quickly integrated into human society and its moral decision-making has also begun to slowly seep into our lives. The significance of moral judgment research on artificial intelligence behavior is becoming increasingly prominent. The present research aims to examine how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments. Specifically, participants rated AI agents’ behavior as more immoral and deserving of more blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that in different types of moral dilemmas, people apply different modes of moral judgment to artificial intelligence; this may be because, when people make moral judgments in different types of moral dilemmas, they engage different processing systems.

From the Discussion

Overall, these findings revealed that, in the trolley dilemma, people are more interested in the difference between humans and AI agents than in action versus inaction. Conversely, in the footbridge dilemma, people are more interested in action versus inaction. This may be because people’s moral judgments were driven by different response processes in the two dilemmas: controlled cognitive processes often occur in response to dilemmas such as the trolley dilemma, and automatic emotional responses often occur in response to dilemmas such as the footbridge dilemma. Thus, in the trolley dilemma, controlled cognitive processes may drive people’s attention to the agent type and lead to the judgment that it is inappropriate for AI agents to make moral decisions. In the footbridge dilemma, the action of pushing someone off a footbridge may evoke a stronger negative emotion than the action of operating a switch in the trolley dilemma. Driven by these automatic negative emotional responses, people focus more on whether the agent performed the harmful act, and judge that act to be less acceptable and more morally wrong.

However, it should be noted that our work has some limitations and suggests several avenues for future research. First, the current study only examined how people make moral judgments about humans and AI agents; it did not investigate the underlying psychological mechanism, so all interpretations of the results are speculative. Future research could explore why people are reluctant to let AI agents make moral decisions in the trolley dilemma, why people apply the same moral norms to humans and AI agents in the footbridge dilemma, and why people show different patterns of moral judgment across the two dilemmas. Previous research provides some pointers. For example, interpretability and consistency of behavior increase people’s acceptance of AI, and greater anthropomorphism of an autonomous agent mitigates blame for the agent’s involvement in an undesirable outcome. Individual differences, including personality, developmental experiences, and cultural background, may also influence people’s attitudes toward AI agents. Second, to exclude the potential influence of individual differences between Experiments 1 and 2, we conducted Experiment 3 with a within-subjects design in which participants read both scenarios; however, the processing system activated by the first scenario may influence judgments about the subsequent scenario. For example, participants who read the footbridge dilemma first may attend to whether the character acted because of the strong negative emotion that dilemma evokes, and that emotion may lead them to focus on the character’s action in the subsequent trolley dilemma, just as they did in the footbridge dilemma. Future research could consider other methodological approaches to exclude both individual differences and order effects.

Tuesday, April 4, 2023

Chapter One - Moral inconsistency

Effron, D. A., & Helgason, B. A. 
Advances in Experimental Social Psychology
Volume 67, 2023, Pages 1-72

Abstract

We review a program of research examining three questions. First, why is the morality of people's behavior inconsistent across time and situations? We point to people's ability to convince themselves they have a license to sin, and we demonstrate various ways people use their behavioral history and others—individuals, groups, and society—to feel licensed. Second, why are people's moral judgments of others' behavior inconsistent? We highlight three factors: motivation, imagination, and repetition. Third, when do people tolerate others who fail to practice what they preach? We argue that people only condemn others' inconsistency as hypocrisy if they think the others are enjoying an “undeserved moral benefit.” Altogether, this program of research suggests that people are surprisingly willing to enact and excuse inconsistency in their moral lives. We discuss how to reconcile this observation with the foundational social psychological principle that people hate inconsistency.

(cut)

The benefits of moral inconsistency

The present chapter has focused on the negative consequences of moral inconsistency. We have highlighted how the factors that promote moral inconsistency can allow people to lie, cheat, express prejudice, and reduce their condemnation of others' morally suspect behaviors ranging from leaving the scene of an accident to spreading fake news. At the same time, people's apparent proclivity for moral inconsistency is not all bad.

One reason is that, in situations that pit competing moral values against each other, moral inconsistency may be unavoidable. For example, when a friend asks whether you like her unflattering new haircut, you must either say no (which would be inconsistent with your usual kind behavior) or yes (which would be inconsistent with your usual honest behavior; Levine, Roberts, & Cohen, 2020). If you discover corruption in your workplace, you might need to choose between blowing the whistle (which would be inconsistent with your typically loyal behavior toward the company) or staying silent (which would be inconsistent with your typically fair behavior; Dungan, Waytz, & Young, 2015; Waytz, Dungan, & Young, 2013).

Another reason is that people who strive for perfect moral consistency may incur steep costs. They may be derogated and shunned by others, who feel threatened and judged by these “do-gooders” (Howe & Monin, 2017; Minson & Monin, 2012; Monin, Sawyer, & Marquez, 2008; O’Connor & Monin, 2016). Or they may sacrifice themselves and loved ones more than they can afford, like the young social worker who consistently donated to charity until she and her partner were living on 6% of their already-modest income, or the couple who, wanting to consistently help children in need of a home, adopted 22 kids (MacFarquhar, 2015). In short, we may enjoy greater popularity and an easier life if we allow ourselves at least some moral inconsistency.

Finally, moral inconsistency can sometimes benefit society. Evolving moral beliefs about smoking (Rozin, 1999; Rozin & Singh, 1999) have led to considerable public health benefits. Stalemates in partisan conflict are hard to break if both sides rigidly refuse to change their judgments and behavior surrounding potent moral issues (Brandt, Wetherell, & Crawford, 2016). Same-sex marriage, women's sexual liberation, and racial desegregation required inconsistency in how people treated actions that were once considered wrong. In this way, moral inconsistency may be necessary for moral progress.

Monday, April 3, 2023

The Mercy Workers

Melanie Garcia
The Marshall Project
Originally published 2 March 2023

Here are two excerpts:

Like her more famous anti-death penalty peers, such as Bryan Stevenson and Sister Helen Prejean, Baldwin argues that people should be judged on more than their worst actions. But she also speaks in more spiritual terms about the value of unearthing her clients’ lives. “We look through a more merciful lens,” she told me, describing her role as that of a “witness who knows and understands, without condemning.” This work, she believes, can have a healing effect on the client, the people they hurt, and even society as a whole. “The horrible thing to see is the crime,” she said. “We’re saying, ‘Please, please, look past that, there’s a person here, and there’s more to it than you think.’”

The United States has inherited competing impulses: It’s “an eye for an eye,” but also “blessed are the merciful.” Some Americans believe that our criminal justice system — rife with excessively long sentences, appalling prison conditions and racial disparities — fails to make us safer. And yet, tell the story of a violent crime and a punishment that sounds insufficient, and you’re guaranteed to get eyerolls.

In the midst of that impasse, I’ve come to see mitigation specialists like Baldwin as ambassadors from a future where we think more richly about violence. For the last few decades, they have documented the traumas, policy failures, family dynamics and individual choices that shape the lives of people who kill. Leaders in the field say it’s impossible to accurately count mitigation specialists — there is no formal license — but there may be fewer than 1,000. They’ve actively avoided media attention, and yet the stories they uncover occasionally emerge in Hollywood scripts and Supreme Court opinions. Over three decades, mitigation specialists have helped drive down death sentences from more than 300 annually in the mid-1990s to fewer than 30 in recent years.

(cut)

The term “mitigation specialist” is often credited to Scharlette Holdman, a brash Southern human rights activist famous for her personal devotion to her clients. The so-called Unabomber, Ted Kaczynski, tried to deed his cabin to her. (The federal government stopped him.) Her last client was accused 9/11 plotter Khalid Shaikh Mohammad. While working his case, Holdman converted to Islam and made a pilgrimage to Mecca. She died in 2017 and had a Muslim burial.

Holdman began a crusade to stop executions in Florida in the 1970s, during a unique moment of American ambivalence towards the punishment. After two centuries of hangings, firing squads and electrocutions, the Supreme Court struck down the death penalty in 1972. The court found that there was no logic guiding which prisoners were executed and which were spared.

The justices eventually let executions resume, but declared, in the 1976 case of Woodson v. North Carolina, that jurors must be able to look at prisoners as individuals and consider “compassionate or mitigating factors stemming from the diverse frailties of humankind.”

Sunday, April 2, 2023

Being good to look good: Self-reported moral character predicts moral double standards among reputation-seeking individuals

Dong, M., Kupfer, T. R., et al. (2022).
British Journal of Psychology
First published 4 NOV 22

Abstract

Moral character is widely expected to lead to moral judgements and practices. However, such expectations are often breached, especially when moral character is measured by self-report. We propose that because self-reported moral character partly reflects a desire to appear good, people who self-report a strong moral character will show moral harshness towards others and downplay their own transgressions—that is, they will show greater moral hypocrisy. This self-other discrepancy in moral judgements should be pronounced among individuals who are particularly motivated by reputation. Employing diverse methods including large-scale multination panel data (N = 34,323), and vignette and behavioural experiments (N = 700), four studies supported our proposition, showing that various indicators of moral character (Benevolence and Universalism values, justice sensitivity, and moral identity) predicted harsher judgements of others' more than own transgressions. Moreover, these double standards emerged particularly among individuals possessing strong reputation management motives. The findings highlight how reputational concerns moderate the link between moral character and moral judgement.

Practitioner points
  • Self-reported moral character does not predict actual moral performance well.
  • Good moral character based on self-report can sometimes predict strong moral hypocrisy.
  • Good moral character based on self-report indicates high moral standards, though only for others and not necessarily for the self.
  • Hypocrites can be good at detecting reputational cues and presenting themselves as morally decent persons.

From the General Discussion

A well-known Golden Rule of morality is to treat others as you wish to be treated yourself (Singer, 1963). People with a strong moral character might be expected to follow this Golden Rule, and judge others no more harshly than they judge themselves. However, when moral character is measured by self-reports, it is often intertwined with socially desirable responding and reputation management motives (Anglim et al., 2017; Hertz & Krettenauer, 2016; Reed & Aquino, 2003). The current research examines the potential downstream effects of moral character and reputation management motives on moral decisions. By attempting to differentiate the ‘genuine’ and ‘reputation managing’ components of self-reported moral character, we posited an association between moral character and moral double standards on the self and others. Imposing harsh moral standards on oneself often comes with a cost to self-interest; to signal one's moral character, criticizing others' transgressions can be a relatively cost-effective approach (Jordan et al., 2017; Kupfer & Giner-Sorolla, 2017; Simpson et al., 2013). To the extent that the demonstration of a strong moral character is driven by reputation management motives, we, therefore, predicted that it would be related to increased hypocrisy, that is, harsher judgements of others' transgressions but not stricter standards for own misdeeds.

Conclusion

How moral character guides moral judgements and behaviours depends on reputation management motives. When people are motivated to attain a good reputation, their self-reported moral character may predict more hypocrisy by displaying stronger moral harshness towards others than towards themselves. Thus, claiming oneself as a moral person does not always translate into doing good deeds, but can manifest as showcasing one's morality to others. Desires for a positive reputation might help illuminate why self-reported moral character often fails to capture real-life moral decisions, and why (some) people who appear to be moral are susceptible to accusations of hypocrisy—for applying higher moral standards to others than to themselves.

Saturday, April 1, 2023

The effect of reward prediction errors on subjective affect depends on outcome valence and decision context

Forbes, L., & Bennett, D. (2023, January 20). 
https://doi.org/10.31234/osf.io/v86bx

Abstract

The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). These results deepen our understanding of the effects of reward prediction errors on subjective affect.
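The excerpt does not give the authors’ model equations, but the “influential computational models” it refers to typically express momentary affect as a weighted, exponentially decaying sum of recent rewards, expectations, and reward prediction errors (in the style of Rutledge and colleagues’ momentary-happiness model). The sketch below illustrates that general form only; the weights, decay rate, and function names are illustrative assumptions, not the fitted model from this preprint.

```python
import numpy as np

def momentary_affect(certain_rewards, expected_values, rpes,
                     w0=0.0, w_cr=0.5, w_ev=0.3, w_rpe=0.7, gamma=0.6):
    """Illustrative momentary-affect model (Rutledge et al., 2014 style).

    Affect after trial t is modeled as a weighted, exponentially decaying
    sum of recent certain rewards (CR), expected values of chosen gambles
    (EV), and reward prediction errors (RPE = outcome - EV). All weights
    and the decay rate gamma are placeholder values, not fitted parameters.
    """
    t = len(rpes)
    decay = gamma ** np.arange(t - 1, -1, -1)   # more recent trials weigh more
    return (w0
            + w_cr * np.dot(decay, certain_rewards)
            + w_ev * np.dot(decay, expected_values)
            + w_rpe * np.dot(decay, rpes))

# Example: three risky trials, no certain-reward trials.
outcomes = np.array([10.0, -5.0, 0.0])
evs = np.array([4.0, 2.0, 1.0])
rpes = outcomes - evs            # positive = better than expected
crs = np.zeros(3)
print(momentary_affect(crs, evs, rpes))
```

To test the asymmetry the authors report, the single RPE weight would be split into separate weights for positive and negative prediction errors (and, in their design, estimated separately for free versus forced choices and for trials with versus without counterfactual feedback), with only the negative-RPE weight expected to differ from zero.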

From the General Discussion section

Our findings were twofold: first, we found that after accounting for the effects of concurrent reward amounts (gains/losses of points) on affect, the effects of RPEs were subtler and more nuanced than has been previously appreciated. Specifically, contrary to previous research, we found that only negative RPEs influenced subjective affect within our task, with no discernible effect of positive RPEs.  Second, we found that even the effect of negative RPEs on affect was dependent on the decision context within which the RPEs occurred.  We manipulated two features of decision context (Experiment 1: free-choice versus forced-choice trials; Experiment 2: trials with counterfactual feedback versus trials without counterfactual feedback) and found that both features of decision context significantly moderated the effect of negative RPEs on subjective affect. In Experiment 1, we found that negative RPEs only influenced subjective affect in free-choice trials, with no effect of negative RPEs in forced-choice trials. In Experiment 2, we similarly found that negative RPEs only influenced subjective affect when counterfactual feedback was absent, with no effect of negative RPEs when counterfactual feedback was present. We unpack and discuss each of these results separately below.


Editor's synopsis: Consistent with a large body of other research, "bad" is stronger than "good" in shaping appraisals and decisions, at least in contexts of free (not forced) choice where no counterfactual information is available.

These are important data points when working with patients who are making large life decisions.

Friday, March 31, 2023

Do conspiracy theorists think too much or too little?

N.M. Brashier
Current Opinion in Psychology
Volume 49, February 2023, 101504

Abstract

Conspiracy theories explain distressing events as malevolent actions by powerful groups. Why do people believe in secret plots when other explanations are more probable? On the one hand, conspiracy theorists seem to disregard accuracy; they tend to endorse mutually incompatible conspiracies, think intuitively, use heuristics, and hold other irrational beliefs. But by definition, conspiracy theorists reject the mainstream explanation for an event, often in favor of a more complex account. They exhibit a general distrust of others and expend considerable effort to find ‘evidence’ supporting their beliefs. In searching for answers, conspiracy theorists likely expose themselves to misleading information online and overestimate their own knowledge. Understanding when elaboration and cognitive effort might backfire is crucial, as conspiracy beliefs lead to political disengagement, environmental inaction, prejudice, and support for violence.

Implications

People who are drawn to conspiracy theories exhibit other stable traits – like lower cognitive ability, intuitive thinking, and proneness to cognitive biases – that suggest they are ‘lazy thinkers.’ On the other hand, conspiracy theorists also exhibit extreme levels of skepticism and expend energy justifying their beliefs; this effortful processing can ironically reinforce conspiracy beliefs. Thus, people carelessly fall down rabbit holes at some points (e.g., when reading repetitive conspiratorial claims) and methodically climb down at others (e.g., when initiating searches online). Conspiracy theories undermine elections, threaten the environment, and harm human health, so it is vitally important that interventions aimed at increasing evaluation and reducing these beliefs do not inadvertently backfire.

Thursday, March 30, 2023

Institutional Courage Buffers Against Institutional Betrayal, Protects Employee Health, and Fosters Organizational Commitment Following Workplace Sexual Harassment

Smidt, A. M., Adams-Clark, A. A., & Freyd, J. J. (2023).
PLOS ONE, 18(1), e0278830. 
https://doi.org/10.1371/journal.pone.0278830

Abstract

Workplace sexual harassment is associated with negative psychological and physical outcomes. Recent research suggests that harmful institutional responses to reports of wrongdoing—called institutional betrayal—are associated with additional psychological and physical harm. It has been theorized that supportive responses and an institutional climate characterized by transparency and proactiveness—called institutional courage—may buffer against these negative effects. The current study examined the association of institutional betrayal and institutional courage with workplace outcomes and psychological and physical health among employees reporting exposure to workplace sexual harassment. Adults who were employed full-time for at least six months were recruited through Amazon’s Mechanical Turk platform and completed an online survey (N = 805). Of the full sample, 317 participants reported experiences with workplace sexual harassment, and only this subset was included in analyses. We used existing survey instruments and developed the Institutional Courage Questionnaire-Specific to assess individual experiences of institutional courage within the context of workplace sexual harassment. Of participants who experienced workplace sexual harassment, nearly 55% also experienced institutional betrayal, and 76% experienced institutional courage. Results of correlational analyses indicated that institutional betrayal was associated with decreased job satisfaction and organizational commitment and with increased somatic symptoms; institutional courage was associated with the reverse. Furthermore, results of multiple regression analyses indicated that institutional courage appeared to attenuate negative outcomes. Overall, our results suggest that institutional courage is important in the context of workplace sexual harassment. These results are in line with previous research on institutional betrayal, may inform policies and procedures related to workplace sexual harassment, and provide a starting point for research on institutional courage.
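The excerpt does not spell out the regression models, so the following is only a minimal sketch of one common way such a buffering ("attenuation") effect is tested: a moderation regression in which the betrayal-to-symptoms slope is allowed to weaken as institutional courage increases. The variable names and simulated data are illustrative assumptions, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey data (illustrative only): standardized
# institutional-betrayal and institutional-courage scores, plus a somatic-symptom
# outcome in which courage weakens the positive betrayal -> symptoms slope.
rng = np.random.default_rng(0)
n = 317
betrayal = rng.normal(size=n)
courage = rng.normal(size=n)
somatic = 0.5 * betrayal - 0.3 * courage - 0.2 * betrayal * courage + rng.normal(size=n)
df = pd.DataFrame({"betrayal": betrayal, "courage": courage, "somatic": somatic})

# Buffering as moderation: a negative betrayal:courage interaction term means
# the harm associated with betrayal shrinks as institutional courage increases.
model = smf.ols("somatic ~ betrayal * courage", data=df).fit()
print(model.params)
```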

Conclusion

Underlying all research on institutional betrayal and institutional courage is the idea that how one responds to a negative event—whether sexual harassment, sexual assault, or another type of victimization—is often as important for future outcomes as, or more important than, the original event itself. In other words, it’s not only about what happens; it’s also about what happens next. In this study, institutional betrayal and institutional courage appear to have a tangible association with employee workplace and health outcomes. Furthermore, institutional courage appears to attenuate negative outcomes in both the employee workplace and health domains.

While we once again find that institutional betrayal is harmful, this study indicates that institutional courage can buffer against those harms. The ultimate goal of this research is to eliminate institutional betrayal at all levels of institutions by replacing it with institutional courage. The current study provides a starting point to achieving that goal by introducing a new measure of institutional courage to be used in future investigations and by reporting findings that demonstrate the power of institutional courage with respect to workplace sexual harassment.