Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, March 23, 2024

How prosocial actors use power hierarchies to build moral reputation

Inesi, M. E., & Rios, K. (2023).
Journal of Experimental Social Psychology,
106, 104441.


Power hierarchies are ubiquitous, emerging formally and informally, in both personal and professional contexts. When prosocial acts are offered within power hierarchies, there is a widespread belief that people who choose lower-power beneficiaries are altruistically motivated, and that those who choose higher-power beneficiaries hold a self-interested motive to ingratiate. In contrast, the current research empirically demonstrates that people can also choose lower-power beneficiaries for self-interested reasons – namely, to bolster their own moral reputation in the group. Across three pre-registered studies, involving different contexts and types of prosocial behavior, and including real financial incentives, we demonstrate that people are more likely to choose lower-power beneficiaries when reputation concerns are more salient. We also provide evidence of the mechanism underlying this pattern: people believe that choosing a lower-power beneficiary more effectively signals their own moral character.


• How do prosocial actors choose their beneficiaries in hierarchies?

• People increasingly choose lower-power beneficiaries when concerned with reputation

• This pattern is driven by a desire to signal high moral character to others

• This implies a short-term redistribution of resources to lower-power individuals

Some thoughts:

This research challenges the common assumption that prosocial behavior towards lower-status individuals always stems from altruism, while helping those with higher power reflects self-interest. It explores how actors navigate power hierarchies to build their moral reputation.

Key findings:

Reputation matters: People are more likely to choose lower-power beneficiaries when their moral reputation is salient (e.g., being observed by others).

Strategic signaling: Choosing lower-power recipients is seen as a stronger signal of good character, even if the motivation is self-serving.

Not just altruism: Prosocial behavior can be used strategically to gain social approval and build a positive reputation, regardless of the beneficiary's status.

Sunday, August 20, 2023

When Scholars Sue Their Accusers. Francesca Gino is the Latest. Such Litigation Rarely Succeeds.

Adam Marcus and Ivan Oransky
The Chronicle of Higher Education
Originally posted 18 AUG 23

Francesca Gino has made headlines twice since June: once when serious allegations of misconduct involving her work became public, and again when she filed a $25-million lawsuit against her accusers, including Harvard University, where she is a professor at the business school.

The suit itself met with a barrage of criticism from those who worried that, as one scientist put it, it would have a “chilling effect on fraud detection.” A smaller number of people supported the move, saying that Harvard and her accusers had abandoned due process and that they believed in Gino’s integrity.

How the case will play out, of course, remains to be seen. But Gino is hardly the first researcher to sue her critics and her employer when faced with misconduct findings. As the founders of Retraction Watch, a website devoted to covering problems in the scientific literature, we’ve reported many of these kinds of cases since we launched our blog in 2010. Plaintiffs tend to claim defamation, but sometimes sue over wrongful termination or employment discrimination, and these kinds of cases typically end up in federal courts. A look at how some other suits fared might yield recommendations for how to limit the pain they can cause.

The first thing to know about defamation and employment suits is that most plaintiffs, but not all, lose. Mario Saad, a diabetes researcher at Brazil’s Unicamp, found that out when he sued the American Diabetes Association in the very same federal district court in Massachusetts where Gino filed her case.

Saad was trying to prevent Diabetes, the flagship research journal of the American Diabetes Association, from publishing expressions of concern about four of his papers following allegations of image manipulation. He lost that effort in 2015, and has now had 18 papers retracted.


Such cases can be extremely expensive — not only for the defense, whether the costs are borne by institutions or insurance companies, but also for the plaintiffs. Ask Carlo Croce and Mark Jacobson.

Croce, a cancer researcher at Ohio State University, has at various points sued The New York Times, a Purdue University biologist named David Sanders, and Ohio State. He has lost all of those cases, including on appeal. The suits against the Times and Sanders claimed that a front-page story in 2017 that quoted Sanders had defamed Croce. His suit against Ohio State alleged that he had been improperly removed as department chair.

Croce racked up some $2 million in legal bills — and was sued for nonpayment. A judge has now ordered Croce’s collection of old masters paintings to be seized and sold for the benefit of his lawyers, and has also garnished Croce’s bank accounts. Another judgment means that his lawyers may now foreclose on his house to recoup their costs. Ohio State has been garnishing his wages since March by about $15,600 each month, or about a quarter of his paycheck. He continues to earn more than $800,000 per year from the university, even after a professorship and the chair were taken away from him.

When two researchers published a critique of the work of Mark Jacobson, an energy researcher at Stanford University, in the Proceedings of the National Academy of Sciences, Jacobson sued them along with the journal’s publisher for $10 million. He dropped the case just months after filing it.

But thanks to a so-called anti-SLAPP statute, “designed to provide for early dismissal of meritless lawsuits filed against people for the exercise of First Amendment rights,” a judge has ordered Jacobson to pay $500,000 in legal fees to the defendants. Jacobson wants Stanford to pay those costs, and California’s labor commissioner said the university had to pay at least some of them because protecting his reputation was part of Jacobson’s job. The fate of those fees, and who will pay them, is up in the air, with Jacobson once again appealing the judgment against him.

Friday, May 19, 2023

What’s wrong with virtue signaling?

Hill, J., Fanciullo, J. 
Synthese 201, 117 (2023).


A novel account of virtue signaling and what makes it bad has recently been offered by Justin Tosi and Brandon Warmke. Despite plausibly vindicating the folk’s conception of virtue signaling as a bad thing, their account has recently been attacked by both Neil Levy and Evan Westra. According to Levy and Westra, virtue signaling actually supports the aims and progress of public moral discourse. In this paper, we rebut these recent defenses of virtue signaling. We suggest that virtue signaling only supports the aims of public moral discourse to the extent it is an instance of a more general phenomenon that we call norm signaling. We then argue that, if anything, virtue signaling will undermine the quality of public moral discourse by undermining the evidence we typically rely on from the testimony and norm signaling of others. Thus, we conclude, not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.


In this paper, we have challenged two recent defenses of virtue signaling. Whereas Levy ascribes a number of good features to virtue signaling—its providing higher-order evidence for the truth of certain moral judgments, its helping us delineate groups of reliable moral cooperators, and its not involving any hypocrisy on the part of its subject—it seems these good features are ascribable to virtue signaling ultimately and only because they are good features of norm signaling, and virtue signaling entails norm signaling. Similarly, whereas Westra suggests that virtue signaling uniquely benefits public moral discourse by supporting moral progress in a way that mere norm signaling does not, it seems virtue signaling also uniquely harms public moral discourse by supporting moral regression in a way that mere norm signaling does not. It therefore seems that in each case, to the extent it differs from norm signaling, virtue signaling simply isn’t needed.

Moreover, we have suggested that, if anything, virtue signaling will undermine the higher-order evidence we typically can and should rely on from the testimony of others. Virtue signaling essentially involves a motivation that aims at affecting public moral discourse but that does not aim at the truth. When virtue signaling is rampant – when we are aware that this ulterior motive is common among our peers – we should give less weight to the higher-order evidence provided by the testimony of others than we otherwise would, on pain of double counting evidence and falling into unwarranted confidence. We conclude, therefore, that not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.

Wednesday, May 3, 2023

Advocates of high court reform give Roberts poor marks

Kelsey Reichmann
Courthouse News Service
Originally published 27 April 23

The final straw for ethics experts wondering if the leader of one of the nation’s most powerful bodies would uphold the institutionalist views associated with his image came on Tuesday as Chief Justice John Roberts declined to testify before Congress about ethical concerns at the Supreme Court. 

“You can't actually have checks and balances if one branch is so powerful that the other branches cannot, in fact, engage in their constitutionally mandated role to provide a check on inappropriate or illegal behavior,” Caroline Fredrickson, a distinguished visitor from practice at Georgetown Law, said in a phone interview. “Then we have a defective system.” 

Roberts cited concerns about separation of powers as the basis for declining to testify before the Senate Judiciary Committee on the court’s ethical standards — or lack thereof. Fredrickson said it was a canard that a system based on checks and balances would not be able to do just that. 

“It sort of puts the question to the entire structure of separation of powers and checks and balances,” Fredrickson said. 

For the past several weeks, one of the associate justices has been at the heart of controversy. After blockbuster reporting revealed that Republican megadonor Harlan Crow has footed the bill for decades of luxury vacations enjoyed by Justice Clarence Thomas, the revelations brought scrutiny on the disclosure laws that bind the justices and it called into question why the justices are not bound by ethics standards like the rest of the judiciary and other branches of government.

“For it to function, it relies on the public trust, and the trust of the other institutions to abide by the court's findings,” Virginia Canter, chief ethics counsel at Citizens for Responsibility and Ethics in Washington, said in a phone call. “If the court and its members are willing to live without any standards, then I think that ultimately the whole process and the institution start to unravel.” 

Many court watchers saw opportunity for action here on a call that has been made for years: the adoption of an ethics code.

“The idea that the Supreme Court would continue to operate without one, it's just ridiculous,” Gabe Roth, executive director of Fix the Court, said in a phone call. 

Along with his letter declining to testify before Congress on the court’s ethics, Roberts included a statement listing principles and practices the court “subscribes” to. The statement was signed by all nine justices. 

For ethics experts raising alarm bells on this subject, a restatement of guidelines that the justices are already supposed to follow did not meet the moment.

“It's just a random — in my view at least — conglomeration of paragraphs that rehash things you already knew, but, yeah, good for him for getting all nine justices on board with something that already exists,” Roth said. 

Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.


Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Saturday, April 29, 2023

Observation moderates the moral licensing effect: A meta-analytic test of interpersonal and intrapsychic mechanisms.

Rotella, A., Jung, J., Chinn, C., 
& Barclay, P. (2023, March 28).


Moral licensing occurs when someone who initially behaved morally subsequently acts less morally. We apply reputation-based theories to predict when and why moral licensing would occur. Specifically, our pre-registered predictions were that (1) participants observed during the licensing manipulation would have larger licensing effects, and (2) unambiguous dependent variables would have smaller licensing effects. In a pre-registered multi-level meta-analysis of 111 experiments (N = 19,335), we found a larger licensing effect when participants were observed (Hedge’s g = 0.61) compared to unobserved (Hedge’s g = 0.14). Ambiguity did not moderate the effect. The overall moral licensing effect was small (Hedge’s g = 0.18). We replicated these analyses using robust Bayesian meta-analysis and found strong support for the moral licensing effect only when participants are observed. These results suggest that the moral licensing effect is predominantly an interpersonal effect based on reputation, rather than an intrapsychic effect based on self-image.

Statement of Relevance

When and why will people behave morally? Every day, people make decisions to act in ways that are more or less moral – holding a door open for others, donating to charity, or assisting a colleague. Yet, it is not well understood how people’s prior actions influence their subsequent behaviors. In this study, we investigated how observation influences the moral licensing effect, which is when someone who was initially moral subsequently behaves less morally, as if they had “license” to act badly. In a review of existing literature, we found a larger moral licensing effect when people were seen to act morally compared to when they were unobserved, which suggests that once someone establishes a moral reputation with others, they can behave slightly less morally and still maintain a moral reputation. This finding advances our understanding of the moral licensing mechanism and how reputation and observation impact moral actions.
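The meta-analysis above reports its effect sizes as Hedge’s g (0.61 when observed vs. 0.14 when unobserved). For readers unfamiliar with the metric, here is a minimal sketch of how such a standardized mean difference is computed; the group summaries below are purely hypothetical, not data from the paper, and a real meta-analysis would use dedicated software rather than a hand-rolled formula:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample bias correction.

    Illustrative implementation only; meta-analyses typically rely on
    dedicated packages (e.g., metafor in R) for effect-size computation.
    """
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction
    return d * correction

# Hypothetical group summaries (observed vs. unobserved condition)
g = hedges_g(mean1=5.2, mean2=4.6, sd1=1.1, sd2=1.0, n1=50, n2=50)
print(round(g, 2))  # prints 0.57
```

The correction factor matters mainly for small samples; as group sizes grow, Hedge’s g converges to Cohen’s d.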

Sunday, April 2, 2023

Being good to look good: Self-reported moral character predicts moral double standards among reputation-seeking individuals

Dong, M., Kupfer, T. R., et al. (2022).
British Journal of Psychology
First published 4 NOV 22


Moral character is widely expected to lead to moral judgements and practices. However, such expectations are often breached, especially when moral character is measured by self-report. We propose that because self-reported moral character partly reflects a desire to appear good, people who self-report a strong moral character will show moral harshness towards others and downplay their own transgressions—that is, they will show greater moral hypocrisy. This self-other discrepancy in moral judgements should be pronounced among individuals who are particularly motivated by reputation. Employing diverse methods including large-scale multination panel data (N = 34,323), and vignette and behavioural experiments (N = 700), four studies supported our proposition, showing that various indicators of moral character (Benevolence and Universalism values, justice sensitivity, and moral identity) predicted harsher judgements of others' more than own transgressions. Moreover, these double standards emerged particularly among individuals possessing strong reputation management motives. The findings highlight how reputational concerns moderate the link between moral character and moral judgement.

Practitioner points
  • Self-reported moral character does not predict actual moral performance well.
  • Good moral character based on self-report can sometimes predict strong moral hypocrisy.
  • Good moral character based on self-report indicates high moral standards, though only for others and not necessarily for the self.
  • Hypocrites can be good at detecting reputational cues and presenting themselves as morally decent persons.
From the General Discussion

A well-known Golden Rule of morality is to treat others as you wish to be treated yourself (Singer, 1963). People with a strong moral character might be expected to follow this Golden Rule, and judge others no more harshly than they judge themselves. However, when moral character is measured by self-reports, it is often intertwined with socially desirable responding and reputation management motives (Anglim et al., 2017; Hertz & Krettenauer, 2016; Reed & Aquino, 2003). The current research examines the potential downstream effects of moral character and reputation management motives on moral decisions. By attempting to differentiate the ‘genuine’ and ‘reputation managing’ components of self-reported moral character, we posited an association between moral character and moral double standards on the self and others. Imposing harsh moral standards on oneself often comes with a cost to self-interest; to signal one's moral character, criticizing others' transgressions can be a relatively cost-effective approach (Jordan et al., 2017; Kupfer & Giner-Sorolla, 2017; Simpson et al., 2013). To the extent that the demonstration of a strong moral character is driven by reputation management motives, we, therefore, predicted that it would be related to increased hypocrisy, that is, harsher judgements of others' transgressions but not stricter standards for own misdeeds.


How moral character guides moral judgements and behaviours depends on reputation management motives. When people are motivated to attain a good reputation, their self-reported moral character may predict more hypocrisy by displaying stronger moral harshness towards others than towards themselves. Thus, claiming oneself as a moral person does not always translate into doing good deeds, but can manifest as showcasing one's morality to others. Desires for a positive reputation might help illuminate why self-reported moral character often fails to capture real-life moral decisions, and why (some) people who appear to be moral are susceptible to accusations of hypocrisy—for applying higher moral standards to others than to themselves.

Sunday, March 26, 2023

State medical board chair Dr. Brian Hyatt resigns, faces Medicaid fraud allegations

Ashley Savage
Arkansas Democrat Gazette
Originally published 3 MAR 23

Dr. Brian Hyatt stepped down as chairman of the Arkansas State Medical Board Thursday in a special meeting following "credible allegations of fraud," noted in a letter from the state's office of Medicaid inspector general.

Members of the board met remotely Thursday with only one item on the agenda: "Discussion of Arkansas State Board's leadership."

The motion to approve Hyatt's request to step down as chairman and out of an executive role on the board was approved unanimously.

Board members also decided that Dr. Rhys Branman will take over as the interim chairman until an election to fill the seat is held in April.

According to the board Thursday, the vacant seats for vice chair and chair of the board will be voted on separate ballots in the April elections.

The Medicaid letter states "red flags" were discovered in Hyatt's use of Medicaid claims and process of billing for medical services. In Arkansas, Medicaid fraud resulting in an overpayment over $2,500 is a felony.

"Dr. Hyatt is a clear outlier, and his claims are so high they skew the averages on certain codes for the entire Medicaid program in Arkansas," the affidavit states.

"The suspension is temporary and there's a right to appeal. I see only allegations and I don't see any actual charges and I haven't dealt with this a lot," said Branman.

Hyatt has 30 days to appeal his suspension from the Medicaid program.

Other information from the letter shows that Hyatt is alleged to have billed more Medicaid patients at the 99233 code than any other doctor billed for all of their Medicaid patients between January of 2019 and June 30, 2022.

Wednesday, January 18, 2023

Too Good to be Liked? When and How Prosocial Others are Disliked

Boileau, L. L. A., Grüning, D. J., & Bless, H. (2021).
Frontiers in Psychology, 12.


Outstandingly prosocial individuals may not always be valued and admired, but sometimes depreciated and rejected. While prior research has mainly focused on devaluation of highly competent or successful individuals, comparable research in the domain of prosociality is scarce. The present research suggests two mechanisms why devaluation of extreme prosocial individuals may occur: they may (a) constitute very high comparison standards for observers, and may (b) be perceived as communal narcissists. Two experiments test these assumptions. We confronted participants with an extreme prosocial or an ordinary control target and manipulated comparative aspects of the situation (salient vs. non-salient comparison, Experiment 1), and narcissistic aspects of the target (showing off vs. being modest, Experiment 2). Consistent with our assumptions, the extreme prosocial target was liked less than the control target, and even more so when the comparison situation was salient (Experiment 1), and when the target showed off with her good deeds (Experiment 2). Implications that prosociality does not always breed more liking are discussed.

General Discussion

The present research demonstrates that individuals who perform an outstanding degree of prosocial behaviors may be devalued—due to their prosocial behaviors. Specifically, across two experiments, the prosocial target was liked less than the control target. This consistent pattern is unlikely to be due to participants' perception that the displayed behaviors did not unambiguously reflect prosocial behavior: When explicitly evaluating prosociality, the prosocial target was clearly perceived as prosocial (and more so than the control target). The finding that prosocial behaviors may decrease rather than increase liking seems rather surprising at first glance. Past research suggests that liking and perceptions of prosociality in others are in fact very highly correlated (Imhoff and Koch, 2017). However, the observed devaluation is in line with prior empirical research suggesting that superior prosocial others are indeed sometimes devalued through rejection and dislike (Fisher et al., 1982; Herrmann et al., 2008; Parks and Stone, 2010; Pleasant and Barclay, 2018).

The present research goes beyond prior research that has similarly demonstrated a possible disliking of prosocial targets by suggesting and investigating two possible underlying processes. Thus, it responds to the call that mediating mechanisms for the dislike of very prosocial targets are yet to be investigated (Parks et al., 2013).

First, the reduced liking of the prosocial target was more pronounced when comparisons between the target and the observers were induced by the information that observers would first evaluate the target and then themselves on the very same items. Eliciting such a comparison expectation increased disliking of the prosocial target. Presumably, in this situation, the extremely prosocial target constituted a very high comparison standard, and this high standard would have negative consequences for participants' evaluations of themselves (Mussweiler, 2003; Bless and Schwarz, 2010; Morina, 2021). This conclusion extends indirect evidence by Parks and Stone (2010) by providing an experimental manipulation of the assumed comparison component.

Second, as predicted, the dislike of the prosocial target was increased when perceptions of communal narcissism (Gebauer et al., 2012; Nehrlich et al., 2019) were elicited by informing participants that the target actively sought to let others know about her prosocial behaviors. This finding suggests that a target's prosocial behavior will not turn into more liking but backfire when that target is perceived as someone who exerts “excessive self-enhancement” in the domain of prosociality and who is showing off with her good deeds (Rentzsch and Gebauer, 2019; p. 1373).

Tuesday, September 20, 2022

The Look Over Your Shoulder: Unethical Behaviour Decreases in the Physical Presence of Observers

Köbis, N., van der Lingen, S., et al. (2019, February 5).


Research in behavioural ethics repeatedly emphasizes the importance of others for people’s decisions to break ethical rules. Yet, in most lab experiments participants faced ethical dilemmas in full privacy settings. We conducted three experiments in which we compare such private set-ups to situations in which a second person is co-present in the lab. Study 1 manipulated whether that second person was a mere observer or co-benefitted from the participants’ unethical behaviour. Study 2 investigated social proximity between participant and observer – being a friend versus a stranger. Study 3 tested whether the mere presence of another person who cannot observe the participant’s behaviour suffices to decrease unethical behaviour. By using different behavioural paradigms of unethical behaviour, we obtain three main results: first, the presence of an observing other curbs unethical behaviour. Second, neither the payoff structure (Study 1) nor the social proximity towards the observing other (Study 2) qualifies this effect. Third, the mere presence of others does not reduce unethical behaviour if they do not observe the participant (Study 3). Implications, limitations and avenues for future research are discussed.

General Discussion

Taken together, the results of three experiments suggest that the physical presence of others reduces unethical behaviour, yet only if that other person can actually observe the behaviour. Even though the second person had no means to formally sanction wrongdoing, onlookers’ presence curtailed unethical behaviour, while the local social utility (co-beneficiary or observer, Study 1) and the level of proximity (friend vs. stranger, Study 2) played a less important role. When others are merely present without being able to observe, no such attenuating effect on unethical behaviour occurs (Study 3). Introducing the physical presence of another person to the rapidly growing stream of behavioural ethics research, our experiments provide some of the first empirical insights into the actual social aspects of unethical behaviour.

Humans are social animals who spend a substantial proportion of their time in company. Many decisions are made while in the presence or the gaze of others. At the same time, the overwhelming majority of lab experiments in behavioural ethics consists of individuals making decisions in isolation (for a meta-analysis, see Abeler et al., 2016). Field experiments, too, have only sparsely examined the tangible social elements of unethical behaviour (for a review, see Pierce & Balasubramanian, 2015). Nevertheless, the behavioural ethics literature emphasizes that appearing moral to others is one of the main factors explaining when and how people break ethical rules (Mazar, Amir, & Ariely, 2008; Pillutla & Murnighan, 1995). Yet, so far, behavioural research on the presence and observability of actual others remains sparse. Providing some of the first insights into how the physical presence of others shapes our moral compass can contribute to the advancement of behavioural ethics and potentially inform the design of practical interventions.

This has direct application to those who practice independently.

Thursday, September 1, 2022

When does moral engagement risk triggering a hypocrite penalty?

Jordan, J. & Sommers, R.
Current Opinion in Psychology
Volume 47, October 2022, 101404


Society suffers when people stay silent on moral issues. Yet people who engage morally may appear hypocritical if they behave imperfectly themselves. Research reveals that hypocrites can—but do not always—trigger a “hypocrisy penalty,” whereby they are evaluated as more immoral than ordinary (non-hypocritical) wrongdoers. This pattern reflects that moral engagement can confer reputational benefits, but can also carry reputational costs when paired with inconsistent moral conduct. We discuss mechanisms underlying these costs and benefits, illuminating when hypocrisy is (and is not) evaluated negatively. Our review highlights the role that dishonesty and other factors play in engendering disdain for hypocrites, and offers suggestions for how, in a world where nobody is perfect, people can engage morally without generating backlash.

Conclusion: how to walk the moral tightrope

To summarize, hypocrites can—but do not always—incur a “hypocrisy penalty,” whereby they are evaluated more negatively than they would have been had they not engaged. As this review has suggested, when observers scrutinize hypocritical moral engagement, they seem to ask at least three questions. First, does the actor signal to others, through his engagement, that he behaves more morally than he actually does? Second, does the actor, by virtue of his engagement, see himself as more moral than he really is? And third, is the actor's engagement preventing others from reaping benefits that he has already enjoyed? Evidence suggests that hypocritical moral engagement is more likely to carry reputational costs when the answer to these questions is “yes.” At the same time, observers do not seem to reliably impose a hypocrisy penalty just because the transgressions of hypocrites constitute personal moral failings—even as these failings convey weakness of will, highlight inconsistency with the actor's personal values, and reveal that the actor has knowingly done something that she believes to be wrong.

In a world where nobody is perfect, then, how can one engage morally while limiting the risk of subsequently being judged negatively as a hypocrite? We suggest that the answer comes down to two key factors: maximizing the reputational benefits that flow directly from one's moral engagement, and minimizing the reputational costs that flow from the combination of one's engagement and imperfect track record. While more research is needed, here we draw on the mechanisms we have reviewed to highlight four suggestions for those seeking to walk the moral tightrope.

Tuesday, August 16, 2022

Virtue Discounting: Observers Infer that Publicly Virtuous Actors Have Less Principled Motivations

Kraft-Todd, G., Kleiman-Weiner, M., 
& Young, L. (2022, May 27). 


Behaving virtuously in public presents a paradox: only by doing so can people demonstrate their virtue and also influence others through their example, yet observers may derogate actors’ behavior as mere “virtue signaling.” We introduce the term virtue discounting to refer broadly to the reasons that people devalue actors’ virtue, bringing together empirical findings across diverse literatures as well as theories explaining virtuous behavior. We investigate the observability of actors’ behavior as one reason for virtue discounting, and its mechanism via motivational inferences using the comparison of generosity and impartiality as a case study among virtues. Across 14 studies (7 preregistered, total N=9,360), we show that publicly virtuous actors are perceived as less morally good than privately virtuous actors, and that this effect is stronger for generosity compared to impartiality (i.e. differential virtue discounting). An exploratory factor analysis suggests that three types of motives—principled, reputation-signaling, and norm-signaling—affect virtue discounting. Using structural equation modeling, we show that the effect of observability on ratings of actors’ moral goodness is largely explained by inferences that actors have less principled motivations. Further, we provide experimental evidence that observers’ motivational inferences mechanistically contribute to virtue discounting. We discuss the theoretical and practical implications of our findings, as well as future directions for research on the social perception of virtue.

General Discussion

Across three analyses marshaling data from 14 experiments (seven preregistered, total N=9,360), we provide robust evidence of virtue discounting. In brief, we show that the observability of actors’ behavior is a reason that people devalue actors’ virtue, and that this effect can be explained by observers’ inferences about actors’ motivations. In Analysis 1—which includes a meta-analysis of all experiments we ran—we show that observability causes virtue discounting, and that this effect is larger in the context of generosity compared to impartiality. In Analysis 2, we provide suggestive evidence that participants’ motivational inferences mediate a large portion (72.6%) of the effect of observability on their ratings of actors’ moral goodness. In Analysis 3, we show experimentally that when we stipulate actors’ motivation, observability loses its significant effect on participants’ judgments of actors’ moral goodness. This gives further evidence for the hypothesis that observers’ inferences about actors’ motivations are a mechanism by which the observability of actions drives virtue discounting. We now consider the contributions of our findings to the empirical literature, how these findings interact with our theoretical account, and the limitations of the present investigation (discussing promising directions for future research throughout). Finally, we conclude with practical implications for effective prosocial advocacy.
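A "proportion mediated" figure like the 72.6% above is commonly computed with the product-of-coefficients approach: the indirect effect (path from observability to motive inferences, times the path from motive inferences to goodness ratings) divided by the total effect. The sketch below is purely illustrative; the path coefficients are invented, not the study's estimates.

```python
# Hypothetical sketch of the product-of-coefficients mediation calculation.
# The coefficients are made up for illustration only.

def proportion_mediated(a, b, c_total):
    """a: X->M path; b: M->Y path (controlling for X); c_total: total X->Y effect."""
    indirect = a * b          # indirect effect through the mediator
    return indirect / c_total # share of the total effect carried by the mediator

# e.g. observability -> inferred principled motivation (a, negative),
# inferred motivation -> rated moral goodness (b, positive),
# total effect of observability on goodness (c, negative):
print(round(proportion_mediated(a=-0.6, b=0.55, c_total=-0.45), 3))
```

Here about 73% of the (hypothetical) total effect runs through the mediator, analogous in form to the paper's reported figure.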

Monday, July 11, 2022

Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality

André, J., Debove, S., Fitouchi, L., & Baumard, N. 
(2022, May 24). https://doi.org/10.31234/osf.io/2hxgu


Our goal in this paper is to use an evolutionary approach to explain the existence and design-features of human moral cognition. Our approach is based on the premise that human beings are under selection to appear as good cooperative investments. Hence they face a trade-off between maximizing the immediate gains of each social interaction, and maximizing its long-term reputational effects. In a simple 2-player model, we show that this trade-off leads individuals to maximize the generalized Nash product at evolutionary equilibrium, i.e., to behave according to the generalized Nash bargaining solution. We infer from this result the theoretical proposition that morality is a domain-general calculator of this bargaining solution. We then proceed to describe the generic consequences of this approach: (i) everyone in a social interaction deserves to receive a net benefit, (ii) people ought to act in ways that would maximize social welfare if everyone was acting in the same way, (iii) all domains of social behavior can be moralized, (iv) moral duties can seem both principled and non-contractual, and (v) morality shall depend on the context. Next, we apply the approach to some of the main areas of social life and show that it allows us to explain, with a single logic, the entire set of what are generally considered to be different moral domains. Lastly, we discuss the relationship between this account of morality and other evolutionary accounts of morality and cooperation.

From The psychological signature of morality: the right, the wrong and the duty Section

Cooperating for the sake of reputation always entails that, at some point along social interactions, one is in a position to access benefits, but one decides to give them up, not for a short-term instrumental purpose, but for the long-term aim of having a good reputation. And by this we mean precisely: the long-term aim of being considered someone with whom cooperation ends up bringing a net benefit rather than a net cost, not only in the eyes of a particular partner, but in the eyes of any potential future partner. This specific and universal property of reputation-based cooperation explains the specific and universal phenomenology of moral decisions.

To understand this, one must distinguish between what people do in practice and what they think it is right to do. In practice, people may sometimes cheat, i.e., not respect the contract. They may do so conditionally on the specific circumstances, if they evaluate that the actual reputational benefits of doing their duty are lower than the immediate cost (e.g., if their cheating has a chance to go unnoticed). This should not – and in fact does not (Knoch et al., 2009; Kogut, 2012; Sheskin et al., 2014; Smith et al., 2013) – change their assessment of what would have been the right thing to do. This assessment can only be absolute, in the sense that it depends only on what one needs to do to ensure that the interaction ends up bringing a net benefit to one’s partner rather than a cost, i.e., to respect the contract, and is not affected by the actual reputational stake of the specific interaction. Or, to put it another way, people must calculate their moral duty by thinking “If someone was looking at me, what would they think?”, regardless of whether anyone is actually looking at them.
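The generalized Nash bargaining solution the authors invoke has a concrete form: for two players splitting a surplus, it is the division that maximizes the product of each player's gain over their disagreement point, each raised to that player's bargaining weight. The sketch below finds it by brute-force grid search; the surplus, disagreement points, and weights are hypothetical simplifications, not the paper's model.

```python
# Illustrative sketch: the generalized Nash bargaining solution for two
# players splitting a surplus of 1.0, found by grid search. All parameter
# values here are invented for illustration.

def nash_product(share, weight=0.5, total=1.0, d1=0.0, d2=0.0):
    """Generalized Nash product: (u1 - d1)^w * (u2 - d2)^(1 - w)."""
    u1, u2 = share - d1, (total - share) - d2
    if u1 <= 0 or u2 <= 0:      # both players must gain over disagreement
        return 0.0
    return (u1 ** weight) * (u2 ** (1 - weight))

def bargaining_solution(weight=0.5, steps=10_000):
    """Player 1's share that maximizes the Nash product, on a fine grid."""
    shares = [i / steps for i in range(steps + 1)]
    return max(shares, key=lambda s: nash_product(s, weight))

# Equal weights and no outside options -> an even split; a higher weight
# for player 1 shifts the split toward them.
print(bargaining_solution(0.5))  # 0.5
print(bargaining_solution(0.7))  # 0.7
```

This makes the paper's consequence (i) visible: at the solution, every party receives a strictly positive net benefit relative to their disagreement point.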

Sunday, March 27, 2022

Observers penalize decision makers whose risk preferences are unaffected by loss–gain framing

Dorison, C. A., & Heller, B. H. (2022). 
Journal of Experimental Psychology: 
General. Advance online publication.


A large interdisciplinary body of research on human judgment and decision making documents systematic deviations between prescriptive decision models (i.e., how individuals should behave) and descriptive decision models (i.e., how individuals actually behave). One canonical example is the loss–gain framing effect on risk preferences: the robust tendency for risk preferences to shift depending on whether outcomes are described as losses or gains. Traditionally, researchers argue that decision makers should always be immune to loss–gain framing effects. We present three preregistered experiments (N = 1,954) that qualify this prescription. We predict and find that while third-party observers penalize decision makers who make risk-averse (vs. risk-seeking) choices when choice outcomes are framed as losses, this result reverses when outcomes are framed as gains. This reversal holds across five social perceptions, three decision contexts, two sample populations of United States adults, and with financial stakes. This pattern is driven by the fact that observers themselves fall victim to framing effects and socially derogate (and financially punish) decision makers who disagree. Given that individuals often care deeply about their reputation, our results challenge the long-standing prescription that they should always be immune to framing effects. The results extend understanding not only for decision making under risk, but also for a range of behavioral tendencies long considered irrational biases. Such understanding may ultimately reveal not only why such biases are so persistent but also novel interventions: our results suggest a necessary focus on social and organizational norms.

From the General Discussion

But what makes an optimal belief or choice? Here, we argue that an expanded focus on the goals decision makers themselves hold (i.e., reputation management) questions whether such deviations from rational-agent models should always be considered suboptimal. We test this broader theorizing in the context of loss-gain framing effects on risk preferences not because we think the psychological dynamics at play are
unique to this context, but rather because such framing effects have been uniquely influential for both academic discourse and applied interventions in policy and organizations. In fact, the results hold preliminary implications not only for decision making under risk, but also for extending understanding of a range of other behavioral tendencies long considered irrational biases in the research literature on judgment and decision making (e.g., sunk cost bias; see Dorison, Umphres, & Lerner, 2021).

An important clarification of our claims merits note. We are not claiming that it is always rational to be biased just because others are. For example, it would be quite odd to claim that someone is rational for believing that eating sand provides enough nutrients to survive, simply because others may like them for holding this belief or because others in their immediate social circle hold this belief. In this admittedly bizarre case, it would still be clearly irrational to attempt to subsist on sand, even if there are reputational advantages to doing so—that is, the costs substantially outweigh the reputational benefits. In fact, the vast majority of framing effect studies in the lab do not have an explicit reputational/strategic component at all. 

Sunday, December 5, 2021

The psychological foundations of reputation-based cooperation

Manrique, H., et al. (2021, June 2).


Humans care about having a positive reputation, which may prompt them to help in scenarios where the return benefits are not obvious. Various game-theoretical models support the hypothesis that concern for reputation may stabilize cooperation beyond kin, pairs or small groups. However, such models are not explicit about the underlying psychological mechanisms that support reputation-based cooperation. These models therefore cannot account for the apparent rarity of reputation-based cooperation in other species. Here we identify the cognitive mechanisms that may support reputation-based cooperation in the absence of language. We argue that a large working memory enhances the ability to delay gratification, to understand others' mental states (which allows for perspective-taking and attribution of intentions), and to create and follow norms, which are key building blocks for increasingly complex reputation-based cooperation. We review the existing evidence for the appearance of these processes during human ontogeny as well as their presence in non-human apes and other vertebrates. Based on this review, we predict that most non-human species are cognitively constrained to show only simple forms of reputation-based cooperation.


We have presented four basic psychological building blocks that we consider important facilitators for complex reputation-based cooperation: working memory, delay of gratification, theory of mind, and social norms. Working memory allows for parallel processing of diverse information, to properly assess others’ actions and update their reputation scores. Delay of gratification is useful for many types of cooperation, but may be particularly relevant for reputation-based cooperation, where the returns come from a future interaction with an observer rather than an immediate reciprocation by one’s current partner. Theory of mind makes it easier to properly assess others’ actions and reduces the risk that spreading errors will undermine cooperation. Finally, norms support theory of mind by giving individuals a benchmark of what is right or wrong. The more developed each of these building blocks is, the more complex the interaction structure can become. We are aware that by picking these four socio-cognitive mechanisms we leave out other processes that might be involved, e.g. long-term memory, yet we think the ones we picked are more critical and better allow for comparison across species.

Thursday, November 18, 2021

Ethics Pays: Summary for Businesses

September 2021

Is good ethics good for business? Crime and sleazy behavior sometimes pay off handsomely. People would not do such things if they didn’t think they were more profitable than the alternatives.

But let us make two distinctions right up front. First, let us contrast individual employees with companies. Of course, it can benefit individual employees to lie, cheat, and steal when they can get away with it. But these benefits usually come at the expense of the firm and its shareholders, so leaders and managers should work very hard to design ethical systems that will discourage such self-serving behavior (known as the “principal-agent problem”).

The harder question is whether ethical violations committed by the firm or for the firm’s benefit are profitable. Cheating customers, avoiding taxes, circumventing costly regulations, and undermining competitors can all directly increase shareholder value.

And here we must make the second distinction: short-term vs. long-term. Of course, bad ethics can be extremely profitable in the short run. Business is a complex web of relationships, and it is easy to increase revenues or decrease costs by exploiting some of those relationships. But what happens in the long run?

Customers are happy and confident in knowing they’re dealing with an honest company. Ethical companies retain the bulk of their employees for the long-term, which reduces costs associated with turnover. Investors have peace of mind when they invest in companies that display good ethics because they feel assured that their funds are protected. Good ethics keep share prices high and protect businesses from takeovers.

Culture has a tremendous influence on ethics and its application in a business setting. A corporation’s ability to deliver ethical value is dependent on the state of its culture. The culture of a company influences the moral judgment of employees and stakeholders. Companies that work to create a strong ethical culture motivate everyone to speak and act with honesty and integrity. Companies that portray strong ethics attract customers to their products and services, and are far more likely to manage their negative environmental and social externalities well.

Wednesday, October 6, 2021

Immoral actors’ meta-perceptions are accurate but overly positive

Lees, J. M., Young, L., & Waytz, A.
(2021, August 16).


We examine how actors think others perceive their immoral behavior (moral meta-perception) across a diverse set of real-world moral violations. Utilizing a novel methodology, we solicit written instances of actors’ immoral behavior (N_total=135), measure motives and meta-perceptions, then provide these accounts to separate samples of third-party observers (N_total=933), using US convenience and representative samples (N_actor-observer pairs=4,615). We find that immoral actors can accurately predict how they are perceived, how they are uniquely perceived relative to the average immoral actor, and how they are misperceived. Actors who are better at judging the motives of other immoral actors also have more accurate meta-perceptions. Yet accuracy is accompanied by two distinct biases: overestimating the positive perceptions others hold, and believing one’s motives are more clearly perceived than they are. These results contribute to a detailed account of the multiple components underlying both accuracy and bias in moral meta-perception.

From the General Discussion

These results collectively suggest that individuals who have engaged in immoral behavior can accurately forecast how others will react to their moral violations.  

Studies 1-4 also found similar evidence for accuracy in observers’ judgments of the unique motives of immoral actors, suggesting that individuals are able to successfully perspective-take with those who have committed moral violations. Observers higher in cognitive ability (Studies 2-3) and empathic concern (Studies 2-4) were consistently more accurate in these judgments, while observers higher in Machiavellianism (Studies 2-4) and the propensity to engage in unethical workplace behaviors (Studies 3-4) were consistently less accurate. This latter result suggests that more frequently engaging in immoral behavior does not grant one insight into the moral minds of others, and in fact is associated with less ability to understand the motives behind others’ immoral behavior.

Despite strong evidence for meta-accuracy (and observer accuracy) across studies, actors’ accuracy in judging how they would be perceived was accompanied by two judgment biases.  Studies 1-4 found evidence for a transparency bias among immoral actors (Gilovich et al., 1998), meaning that actors overestimated how accurately observers would perceive their self-reported moral motives. Similarly, in Study 4 an examination of actors’ meta-perception point estimates found evidence for a positivity bias. Actors systematically overestimate the positive attributions, and underestimate the negative attributions, made of them and their motives. In fact, the single meta-perception found to be the most inaccurate in its average point estimate was the meta-perception of harm caused, which was significantly underestimated.
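Meta-accuracy and positivity bias of the sort described here are often operationalized as two separate statistics: tracking accuracy (the correlation between actors' predicted and observers' actual ratings) and mean-level bias (the average signed gap between them). The sketch below illustrates that distinction with invented numbers; it is not the study's data or analysis code.

```python
# Illustrative sketch (all data invented): an actor sample can track how
# they are judged (high correlation) while still being too optimistic
# (positive mean gap) -- accuracy and bias are separable.
from math import sqrt
from statistics import mean

predicted = [4.2, 3.8, 5.0, 2.9, 4.5]  # actors' meta-perceptions (hypothetical)
observed  = [3.5, 3.1, 4.6, 2.0, 3.9]  # observers' mean ratings (hypothetical)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

meta_accuracy   = pearson(predicted, observed)            # tracking accuracy
positivity_bias = mean(p - o for p, o in zip(predicted, observed))  # mean gap

print(round(meta_accuracy, 2), round(positivity_bias, 2))
```

In this toy sample, predictions track actual ratings almost perfectly, yet every prediction sits above the rating it tracks, mirroring the paper's pattern of accuracy accompanied by a positivity bias.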

Sunday, August 15, 2021

Prosocial Behavior and Reputation: When Does Doing Good Lead to Looking Good?

Berman, J. Z., & Silver, I.
(2021). Current Opinion in Psychology
Available online 9 July 2021


One reason people engage in prosocial behavior is to reap the reputational benefits associated with being seen as generous. Yet, there isn’t a direct connection between doing good deeds and being seen as a good person. Rather, prosocial actors are often met with suspicion, and sometimes castigated as disingenuous braggarts, empty virtue-signalers, or holier-than-thou hypocrites. In this article, we review recent research on how people evaluate those who engage in prosocial behavior and identify key factors that influence whether observers will praise or denigrate a prosocial actor for doing a good deed.


Obligations to Personal Relations

One complicating factor that affects how actors are judged is whether they are donating to a cause that benefits a close personal relation. Recent theories of morality suggest that people see others as obligated to help close personal relations over distant strangers. Despite these obligations, or perhaps because of them, prosocial actors are afforded less credit when they donate to causes that benefit close others: doing so is seen as relatively selfish compared to helping strangers. At the same time, helping a stranger instead of a close other is seen as a violation of one’s commitments and obligations, which can also damage one’s reputation. Understanding the role of relationship-specific obligations in judgments of selfless behavior remains an emerging area of research.

Wednesday, July 7, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R. A., et al.
Volume 214, September 2021, 104770


People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents negatively correlates with higher scores on a psychopathy scale.

General discussion

Collectively, our results support the hypothesis that false positive moral emotions are associated with both judgments of moral character and traits associated with moral character. We consistently found that observers use an agent's false positive experience of moral emotions (e.g., guilt, gratitude) to infer their underlying moral character, their social likability, and to predict both their future emotional responses and their future moral behavior. Specifically, we found that observers judge an agent who experienced “false positive” guilt (in response to an accidental harm) as a more moral person, more likeable, less likely to commit future moral infractions, and more trustworthy than an agent who experienced no guilt. Our results help explain the second “puzzle” regarding guilt for accidental actions (Kamtekar & Nichols, 2019). Specifically, one reason that observers may find an accidental agent less blameworthy, and yet still be wary if the agent does not feel guilt, is that such false positive guilt provides an important indicator of that agent's underlying character.

Saturday, May 8, 2021

When does empathy feel good?

Ferguson, A. M., Cameron, D., & Inzlicht, M. 
(2021, March 12). 


Empathy has many benefits. When we are willing to empathize, we are more likely to act prosocially (and receive help from others in the future), to have satisfying relationships, and to be viewed as moral actors. Moreover, empathizing in certain contexts can actually feel good, regardless of the content of the emotion itself—for example, we might feel a sense of connection after empathizing with and supporting a grieving friend. Does this feeling come from empathy itself, or from its real and implied consequences? We suggest that the rewards that flow from empathy confound our experience of it, and that the pleasant feelings associated with engaging empathy are extrinsically tied to the results of some action, not to the experience of empathy itself. When we observe people’s decisions related to empathy in the absence of these acquired rewards, as we can in experimental settings, empathy appears decidedly less pleasant.