Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Character. Show all posts

Tuesday, May 21, 2024

Technology and the Situationist Challenge to Virtue Ethics

Tollon, F.
Sci Eng Ethics 30, 10 (2024).

Abstract

In this paper, I introduce a “promises and perils” framework for understanding the “soft” impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the “situationist challenge” and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.


Here is my summary:

The paper examines how technological environments can undermine the development of virtuous character traits by shaping situational factors that influence moral behavior, posing a challenge to virtue ethics.

The Situationist critique argues that character traits are less stable and predictive of behavior than virtue ethics assumes. Instead, situational factors like social pressure and environmental cues often have a stronger influence on moral actions.

The author argues that many modern technologies, from social media to surveillance systems, create situational contexts that can override or undermine the development of virtuous character. For example, technologies that increase social monitoring and evaluation may inhibit moral courage.

The author suggests that virtues like honesty, compassion, and integrity may be more difficult to cultivate in technological environments that emphasize efficiency, productivity, and conformity over moral development.

The paper calls for virtue ethicists to grapple with how emerging technologies shape moral behavior, and to develop new approaches that account for the powerful situational influences created by technological systems.

In summary, this research highlights how the Situationist critique challenges traditional virtue ethics: technological environments can undermine the development of stable moral character, so any adequate ethical framework must account for the situational factors that shape human behavior.

Saturday, March 23, 2024

How prosocial actors use power hierarchies to build moral reputation

Inesi, M. E., & Rios, K. (2023).
Journal of Experimental Social Psychology,
106, 104441.

Abstract

Power hierarchies are ubiquitous, emerging formally and informally, in both personal and professional contexts. When prosocial acts are offered within power hierarchies, there is a widespread belief that people who choose lower-power beneficiaries are altruistically motivated, and that those who choose higher-power beneficiaries hold a self-interested motive to ingratiate. In contrast, the current research empirically demonstrates that people can also choose lower-power beneficiaries for self-interested reasons – namely, to bolster their own moral reputation in the group. Across three pre-registered studies, involving different contexts and types of prosocial behavior, and including real financial incentives, we demonstrate that people are more likely to choose lower-power beneficiaries when reputation concerns are more salient. We also provide evidence of the mechanism underlying this pattern: people believe that choosing a lower-power beneficiary more effectively signals their own moral character.

Highlights

• How do prosocial actors choose their beneficiaries in hierarchies?

• People increasingly choose lower-power beneficiaries when concerned with reputation

• This pattern is driven by a desire to signal high moral character to others

• This implies a short-term redistribution of resources to lower-power individuals

Some thoughts:

This research challenges the common assumption that prosocial behavior towards lower-status individuals always stems from altruism, while helping those with higher power reflects self-interest. It explores how actors navigate power hierarchies to build their moral reputation.

Key findings:

Reputation matters: People are more likely to choose lower-power beneficiaries when their moral reputation is salient (e.g., being observed by others).

Strategic signaling: Choosing lower-power recipients is seen as a stronger signal of good character, even if the motivation is self-serving.

Not just altruism: Prosocial behavior can be used strategically to gain social approval and build a positive reputation, regardless of the beneficiary's status.

Saturday, November 4, 2023

One strike and you’re a lout: Cherished values increase the stringency of moral character attributions

Rottman, J., Foster-Hanson, E., & Bellersen, S.
(2023). Cognition, 239, 105570.

Abstract

Moral dilemmas are inescapable in daily life, and people must often choose between two desirable character traits, like being a diligent employee or being a devoted parent. These moral dilemmas arise because people hold competing moral values that sometimes conflict. Furthermore, people differ in which values they prioritize, so we do not always approve of how others resolve moral dilemmas. How are we to think of people who sacrifice one of our most cherished moral values for a value that we consider less important? The “Good True Self Hypothesis” predicts that we will reliably project our most strongly held moral values onto others, even after these people lapse. In other words, people who highly value generosity should consistently expect others to be generous, even after they act frugally in a particular instance. However, reasoning from an error-management perspective instead suggests the “Moral Stringency Hypothesis,” which predicts that we should be especially prone to discredit the moral character of people who deviate from our most deeply cherished moral ideals, given the potential costs of affiliating with people who do not reliably adhere to our core moral values. In other words, people who most highly value generosity should be quickest to stop considering others to be generous if they act frugally in a particular instance. Across two studies conducted on Prolific (N = 966), we found consistent evidence that people weight moral lapses more heavily when rating others’ membership in highly cherished moral categories, supporting the Moral Stringency Hypothesis. In Study 2, we examined a possible mechanism underlying this phenomenon. Although perceptions of hypocrisy played a role in moral updating, personal moral values and subsequent judgments of a person’s potential as a good cooperative partner provided the clearest explanation for changes in moral character attributions. 
Overall, the robust tendency toward moral stringency carries significant practical and theoretical implications.


My takeaways:

The results showed that participants were more likely to rate the person as having poor moral character when the transgression violated a cherished value. This suggests that when we see someone violate a value that we hold dear, it can lead us to question their entire moral compass.

The authors argue that this finding has important implications for how we think about moral judgment. They suggest that our own values play a significant role in how we judge others' moral character. This is something to keep in mind the next time we're tempted to judge someone harshly.

Here are some additional points that are made in the article:
  • The effect of cherished values on moral judgment is stronger for people who are more strongly identified with their values.
  • The effect is also stronger for transgressions that are seen as more serious.
  • The effect is not limited to personal values. It can also occur for group-based values, such as patriotism or religious beliefs.

Wednesday, February 22, 2023

How and Why People Want to Be More Moral

Sun, J., Wilt, J. A., et al. (2022, October 13).
https://doi.org/10.31234/osf.io/6smzh

Abstract

What types of moral improvements do people wish to make? Do they hope to become more good, or less bad? Do they wish to be more caring? More honest? More loyal? And why exactly do they want to become more moral? Presumably, most people want to improve their morality because this would benefit others, but is this in fact their primary motivation? Here, we begin to investigate these questions. Across two large, preregistered studies (N = 1,818), participants provided open-ended descriptions of one change they could make in order to become more moral; they then reported their beliefs about and motives for this change. In both studies, people most frequently expressed desires to improve their compassion and more often framed their moral improvement goals in terms of amplifying good behaviors than curbing bad ones. The strongest predictor of moral motivation was the extent to which people believed that making the change would have positive consequences for their own well-being. Together, these studies provide rich descriptive insights into how ordinary people want to be more moral, and show that they are particularly motivated to do so for their own sake.

From the General Discussion section

Self-Interest Is a Key Motivation for Moral Improvement

What motivates people to be more moral? From the perspective that the function of morality is to suppress selfishness for the benefit of others (Haidt & Kesebir, 2010; Wolf, 1982), we might expect people to believe that moral improvements would primarily benefit others (rather than themselves). By a similar logic, people should also primarily want to be more moral for the sake of others (rather than for their own sake).

Surprisingly, however, this was not overwhelmingly the case. Instead, across both studies, participants were approximately equally split between those who believed that others would benefit the most and those who believed that they themselves would benefit the most (with the exception of compassion; see Figure S2). The finding that people perceive some personal benefits to becoming more moral has been demonstrated in recent research (Sun & Berman, in prep). In light of evidence that moral people tend to be happier (Sun et al., in prep) and that the presence of moral struggles predicts symptoms of depression and anxiety (Exline et al., 2014), such beliefs might also be somewhat accurate. However, it is unclear why people believe that becoming more moral would benefit themselves more than it would others. Speculatively, one possibility is that people can more vividly imagine the impacts of their own actions on their own well-being, whereas they are much more uncertain about how their actions would affect others—especially when the impacts might be spread across many beneficiaries.

However, it is also possible that this finding only applies to self-selected moral improvements, rather than the universe of all possible moral improvements. That is, when asked what they could do to become more moral, people might more readily think of improvements that would improve their own well-being to a greater extent than the well-being of others. But, if we were to ask people to predict who would benefit the most from various moral improvements that were selected by researchers, people may be less likely to believe that it would be themselves. Future research should systematically study people’s evaluations of how various moral improvements would impact their own and others’ well-being.

Similarly, when explicitly asked for whose sake they were most motivated to make their moral improvement, almost half of the participants admitted that they were most motivated to change for their own sake (rather than for the sake of others).  However, when predicting motivation from both the expected well-being consequences for the self and the well-being consequences for others, we found that people’s perceptions of personal well-being consequences was a significantly stronger predictor in both studies.  In other words, if anything, people are relatively more motivated to make moral improvements for their own sake than for the sake of others.  This is consistent with the findings of another study which examined people’s interest in changing a variety of moral and nonmoral traits, and showed that people are particularly interested in improving the traits that they believed would make them relatively happier (Sun & Berman, in prep). Here, it is striking that personal fulfilment remains the most important motivator of personal improvement even exclusively in the moral domain.

Wednesday, June 29, 2022

Abuse case reveals therapist’s dark past, raises ethical concerns

Associated Press
Originally posted 11 JUN 22

Here is an excerpt:

Dushame held a valid driver’s license despite five previous drunken driving convictions, and it was his third fatal crash — though the others didn’t involve alcohol. The Boston Globe called him “the most notorious drunk driver in New England history.”

But over time, he dedicated himself to helping people recovering from addiction, earning a master’s degree in counseling psychology and leading treatment programs from behind bars.

Two years later, he legally changed his name to Peter Stone. He was released from prison in 2002 and eventually set up shop as a licensed drug and alcohol counselor.

Last July, he was charged with five counts of aggravated felonious sexual assault under a law that criminalizes any sexual contact between patients and their therapists or health care providers. Such behavior also is prohibited by the American Psychological Association’s ethical code of conduct.

In a recent interview, the 61-year-old woman said she developed romantic feelings for Stone about six months after he began treating her for anxiety, depression and alcohol abuse in June 2013. Though he told her a relationship would be unethical, he initiated sexual contact in February 2016, she said.

“‘That crossed the line,’” the woman remembers him saying after he pulled up his pants. “‘When am I seeing you again?’”

While about half the states have no restrictions on name changes after felony convictions, 15 have bans or temporary waiting periods for those convicted of certain crimes, according to the ACLU in Illinois, which has one of the most restrictive laws.

Stone appropriately disclosed his criminal record on licensing applications and other documents, according to a review of records obtained by the AP. Disclosure to clients isn’t mandatory, said Gary Goodnough, who teaches counseling ethics at Plymouth State University. But he believes clients have a right to know about some convictions, including vehicular homicide.

Wednesday, March 30, 2022

When Good People Break Bad: Moral Impression Violations in Everyday Life

Guan, K. W., & Heine, S. J. (2022).
Social Psychological and Personality Science. 
https://doi.org/10.1177/19485506221076685

Abstract

The present research investigated the emotional, interpersonal, and impression-updating consequences of witnessing events that violate the moral character impressions people hold of others. Across three studies, moral character-violations predicted broad disruptions to participants’ sense of meaning, confidence judging moral character, and expectations of others’ moral characters. Participants who were in real life closer to perpetrators, directly victimized, and higher in preferences for closure and behavioral stability reported more negative outcomes. Moreover, experimental manipulations showed that character-violations lead to worse outcomes than the comparable experience of encountering consistently immoral others. The authors discuss implications for research on moral perception and meaning, as well as on understanding responses to everyday revelations about people’s characters.

From the General Discussion

Moral character-violations appear frequently in the media and occasionally in everyday life. The present research provides an explanation of how these experiences affect perceivers, grounded in the meaning maintenance model (Heine et al., 2006) and social perception literature (Goodwin et al., 2014). Across all three studies, good-to-bad character-violations were associated with disruptions in perceivers’ sense that they understand the world, their confidence judging character, and their impressions of people’s morality in general. In other words, the psychological impact is not restricted to people’s views of specific character-violated targets, but spills over to color how people view other people more generally. Studies 1 and 2 also illuminated the types of moral character-violations people tend to encounter in everyday life, exploring additional situational and dispositional factors that predict stronger feelings of loss of meaning. Study 3 found causal evidence for these effects.

Our findings speak to general experiences, but a few key variables powerfully predict recalled outcomes. Directly victimized targets, and those with higher preferences for closure and personality stability reported greater disruptions in meaning. These findings line up with past evidence that being directly betrayed or transgressed upon leads to strong negative emotions (Adams & Inesi, 2016; Hutcherson & Gross, 2011), and having higher dispositional needs for stability and closure predicts more negative reactions to meaning-violations (e.g., Doherty, 1998).

Monday, February 14, 2022

Beauty Goes Down to the Core: Attractiveness Biases Moral Character Attributions

Klebl, C., Rhee, J.J., Greenaway, K.H. et al. 
J Nonverbal Behav (2021). 
https://doi.org/10.1007/s10919-021-00388-w

Abstract

Physical attractiveness is a heuristic that is often used as an indicator of desirable traits. In two studies (N = 1254), we tested whether facial attractiveness leads to a selective bias in attributing moral character—which is paramount in person perception—over non-moral traits. We argue that because people are motivated to assess socially important traits quickly, these may be the traits that are most strongly biased by physical attractiveness. In Study 1, we found that people attributed more moral traits to attractive than unattractive people, an effect that was stronger than the tendency to attribute positive non-moral traits to attractive (vs. unattractive) people. In Study 2, we conceptually replicated the findings while matching traits on perceived warmth. The findings suggest that the Beauty-is-Good stereotype particularly skews in favor of the attribution of moral traits. As such, physical attractiveness biases the perceptions of others even more fundamentally than previously understood.

From the Discussion

The present investigation advances the Beauty-is-Good stereotype literature. Our findings are consistent with extensive research showing that people attribute positive traits more strongly to attractive compared to unattractive individuals (Dion et al., 1972). Most significantly, the present studies add to the previous literature by providing evidence that attractiveness does not bias the attribution of positive traits uniformly. Attractiveness especially biases the attribution of moral traits compared to positive non-moral traits, constituting an update to the Beauty-is-Good stereotype. One possible explanation for this selective bias is that because people are particularly motivated to assess socially important traits—traits that help us quickly decide who our allies are (Goodwin et al., 2014)—physical attractiveness selectively biases the attribution of those traits over socially less important traits. While in many instances this may allow us to assess moral character quickly and accurately (cf. Ambady et al., 2000) and thus obtain valuable information about whether the target is a threat or ally, where morally relevant information is absent (such as during initial impression formation) this motivation to assess moral character may lead to an overreliance on heuristic cues.

Tuesday, January 4, 2022

Changing impressions in competence-oriented domains: The primacy of morality endures

A. Luttrell, S. Sacchi, & M. Brambilla
Journal of Experimental Social Psychology
Volume 98, January 2022, 104246

Abstract

The Moral Primacy Model proposes that throughout the multiple stages of developing impressions of others, information about the target's morality is more influential than information about their competence or sociability. Would morality continue to exert outsized influence on impressions in the context of a decision for which people view competence as the most important attribute? In three experiments, we used an impression updating paradigm to test how much information about a target's morality versus competence changed perceivers' impressions of a job candidate. Despite several pilot studies in which people said they would prioritize competence over morality when deciding to hire a potential employee, results of the main studies reveal that impressions changed more when people received new information about a target's immorality than about his incompetence. This moral primacy effect held both for global impressions and willingness to hire the target, but direct effects on evaluations of the target as an employee did not consistently emerge. When the new information about the target was positive, we did not reliably observe a moral primacy effect. These findings provide important insight on the generalizability of moral primacy in impression updating.

Highlights

• People reported that hiring decisions should favor competence over morality.

• Impressions of a job candidate changed more based on his morality (vs. competence).

• Moral primacy in this context emerged only when the new information was negative.

• Moral primacy occurred for general impressions more than hiring-specific judgments.

Conclusion

In sum, we tested the boundaries of moral primacy and found that even in a context where other dimensions could dominate, information about a job candidate's immorality continued to have disproportionate influence on general impressions of him and evaluations of his suitability as an employee. However, our findings further show that the relative effect of negative moral versus competence information on domain-specific judgments tended to be smaller than effects on general impressions. In addition, unlike prior research on impression updating (Brambilla et al., 2019), we observed no evidence for moral primacy in this context when the new information was positive (although this pattern may be indicative of a more general valence asymmetry in the effects of morally relevant information). Together, these findings provide an important extension of the Moral Primacy Model but also provide useful insight on the generalizability of the effect.

Tuesday, November 23, 2021

The Moral Identity Picture Scale (MIPS): Measuring the Full Scope of Moral Identity

Amelia Goranson, Connor O’Fallon, & Kurt Gray
Research Paper, in press

Abstract

Morality is core to people’s identity. Existing moral identity scales measure good/moral vs. bad/immoral, but the Theory of Dyadic Morality highlights two dimensions of morality: valence (good/moral vs. bad/immoral) and agency (high/agent vs. low/recipient). The Moral Identity Picture Scale (MIPS) measures this full space through 16 vivid pictures. Participants receive scores for each of four moral roles: hero, villain, victim, and beneficiary. The MIPS can also provide summary scores for good, evil, agent, and patient, and possesses test-retest reliability and convergent/divergent validity. Self-identified heroes are more empathic and higher in locus of control, villains are less agreeable and higher in narcissism, victims are higher in depression and lower in self-efficacy, and beneficiaries are lower in Machiavellianism. Although people generally see themselves as heroes, comparisons across known groups reveal relative differences: Duke MBA students self-identify more as villains, UNC social work students self-identify more as heroes, and workplace bullying victims self-identify more as victims. Data also reveal that the beneficiary role is ill-defined, collapsing the two-dimensional space of moral identity into a triangle anchored by hero, villain, and victim.

From the Discussion

We hope that, in providing this new measure of moral identity, future work can examine a broader sense of the moral world—beyond simple identifications of good vs. evil—using our expanded measure that captures not only valence but also role as a moral agent or patient. This measure expands upon previous measures related to moral identity (e.g., Aquino & Reed, 2002; Barriga et al., 2001; Reimer & Wade-Stein, 2004), replicating prior work that we divide the moral world up into good and evil, but demonstrating that the moral identification space includes another component as well: moral agency and moral patiency. Most past work has examined this “agent” side of moral identity—heroes and villains—but we can gain a fuller and more nuanced view of the moral world if we also examine their counterparts—moral patients/recipients. The MIPS provides us with the ability to examine moral identity across these 2 dimensions of valence (positive vs. negative) and agency (agent vs. patient). 

Saturday, August 21, 2021

The relational logic of moral inference

Crockett, M., Everett, J. A. C., Gill, M., & Siegel, J. 
(2021, July 9). https://doi.org/10.31234/osf.io/82c6y

Abstract

How do we make inferences about the moral character of others? Here we review recent work on the cognitive mechanisms of moral inference and impression updating. We show that moral inference follows basic principles of Bayesian inference, but also departs from the standard Bayesian model in ways that may facilitate the maintenance of social relationships. Moral inference is not only sensitive to whether people make moral decisions, but also to features of decisions that reveal their suitability as a relational partner. Together these findings suggest that moral inference follows a relational logic: people form and update moral impressions in ways that are responsive to the demands of ongoing social relationships and particular social roles. We discuss implications of these findings for theories of moral cognition and identify new directions for research on human morality and person perception.

Summary

There is growing evidence that people infer moral character from behaviors that are not explicitly moral. The data so far suggest that people who are patient, hard-working, tolerant of ambiguity, risk-averse, and actively open-minded are seen as more moral and trustworthy. While at first blush this collection of preferences may seem arbitrary, considering moral inference from a relational perspective reveals a coherent logic. All of these preferences are correlated with cooperative behavior, and comprise traits that are desirable in long-term relationship partners. Reaping the benefits of long-term relationships requires patience and a tolerance for ambiguity: sometimes people make mistakes despite good intentions. Erring on the side of caution and actively seeking evidence to inform decision-making in social situations not only helps prevent harmful outcomes (Kappes et al., 2019), but also signals respect: social life is fraught with uncertainty (FeldmanHall & Shenhav, 2019; Kappes et al., 2019), and assuming we know what’s best for another person can have bad consequences, even when our intentions are good. If evidence continues to suggest that certain types of non-moral preferences are preferred in social partners, partner choice mechanisms may explain the prevalence of those preferences in the broader population.
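To make the abstract's Bayesian framing concrete, here is a purely illustrative toy sketch (my own, not the authors' model, and the probabilities are made up): an observer holds a prior belief that a partner has good character, observes one act, and updates that belief by Bayes' rule.

```python
# Toy illustration of moral impression updating as Bayesian inference.
# All numbers are hypothetical; the paper argues real moral inference
# follows these principles only partially.

def update_impression(prior_good, p_act_given_good, p_act_given_bad):
    """Posterior probability of good character after observing one act.

    prior_good:        prior P(good character)
    p_act_given_good:  likelihood of the observed act given good character
    p_act_given_bad:   likelihood of the observed act given bad character
    """
    numerator = p_act_given_good * prior_good
    denominator = numerator + p_act_given_bad * (1.0 - prior_good)
    return numerator / denominator

# Start neutral (0.5), then observe a cooperative act that good-character
# partners produce four times as often as bad-character ones.
belief = update_impression(0.5, p_act_given_good=0.8, p_act_given_bad=0.2)
print(round(belief, 2))  # 0.8
```

On this idealized account, each new observation simply re-weights the prior; the review's point is that actual observers depart from this, weighting cues about relational suitability in ways a plain Bayesian updater would not.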

Sunday, August 15, 2021

Prosocial Behavior and Reputation: When Does Doing Good Lead to Looking Good?

Berman, J. Z., & Silver, I.
(2021). Current Opinion in Psychology
Available online 9 July 2021

Abstract

One reason people engage in prosocial behavior is to reap the reputational benefits associated with being seen as generous. Yet, there isn’t a direct connection between doing good deeds and being seen as a good person. Rather, prosocial actors are often met with suspicion, and sometimes castigated as disingenuous braggarts, empty virtue-signalers, or holier-than-thou hypocrites. In this article, we review recent research on how people evaluate those who engage in prosocial behavior and identify key factors that influence whether observers will praise or denigrate a prosocial actor for doing a good deed.

(cut)

Obligations to Personal Relations

One complicating factor that affects how actors are judged concerns whether they are donating to a cause that benefits a close personal relation. Recent theories of morality suggest that people see others as obligated to help close personal relations over distant strangers.  Despite these obligations, or perhaps because of them, prosocial actors are afforded less credit when they donate to causes that benefit close others: doing so is seen as relatively selfish compared to helping strangers. At the same time, helping a stranger instead of helping a close other is seen as a violation of one’s commitments and obligations, which can also damage one’s reputation. Understanding the role of relationship-specific obligations in judgments of selfless behavior is still nascent and represents an emerging area of research. 

Friday, July 16, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R.A., et al.
Cognition
Volume 214, September 2021, 104770

Abstract

People often feel guilt for accidents—negative events that they did not intend or have any control over. Why might this be the case? Are there reputational benefits to doing so? Across six studies, we find support for the hypothesis that observers expect “false positive” emotions from agents during a moral encounter – emotions that are not normatively appropriate for the situation but still trigger in response to that situation. For example, if a person accidentally spills coffee on someone, most normative accounts of blame would hold that the person is not blameworthy, as the spill was accidental. Self-blame (and the guilt that accompanies it) would thus be an inappropriate response. However, in Studies 1–2 we find that observers rate an agent who feels guilt, compared to an agent who feels no guilt, as a better person, as less blameworthy for the accident, and as less likely to commit moral offenses. These attributions of moral character extend to other moral emotions like gratitude, but not to nonmoral emotions like fear, and are not driven by perceived differences in overall emotionality (Study 3). In Study 4, we demonstrate that agents who feel extremely high levels of inappropriate (false positive) guilt (e.g., agents who experience guilt but are not at all causally linked to the accident) are not perceived as having a better moral character, suggesting that merely feeling guilty is not sufficient to receive a boost in judgments of character. In Study 5, using a trust game design, we find that observers are more willing to trust others who experience false positive guilt compared to those who do not. In Study 6, we find that false positive experiences of guilt may actually be a reliable predictor of underlying moral character: self-reported predicted guilt in response to accidents negatively correlates with higher scores on a psychopathy scale.

From the General Discussion

It seems reasonable to think that there would be some benefit to communicating these moral emotions as a signal of character, and to being able to glean information about the character of others from observations of their emotional responses. If a propensity to feel guilt makes it more likely that a person is cooperative and trustworthy, observers would need to discriminate between people who are and are not prone to guilt. Guilt could therefore serve as an effective regulator of moral behavior in others in its role as a reliable signal of good character.  This account is consistent with theoretical accounts of emotional expressions more generally, either in the face, voice, or body, as a route by which observers make inferences about a person’s underlying dispositions (Frank, 1988). Our results suggest that false positive emotional responses specifically may provide an additional, and apparently informative, source of evidence for one’s propensity toward moral emotions and moral behavior.
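The Study 6 result, a negative correlation between self-reported predicted guilt and psychopathy scores, can be pictured with simulated data. The following is a minimal Python sketch; all numbers and variable names are hypothetical and invented for illustration, not the paper's data or analysis:

```python
import numpy as np

# Hypothetical illustration of the Study 6 analysis pattern: a negative
# correlation between predicted guilt for accidents and psychopathy scores.
# Every value here is simulated; nothing is taken from the paper.
rng = np.random.default_rng(0)

n = 200
psychopathy = rng.normal(50, 10, n)               # hypothetical scale scores
noise = rng.normal(0, 5, n)
predicted_guilt = 80 - 0.6 * psychopathy + noise  # negative slope built in

# Pearson correlation between the two simulated measures
r = np.corrcoef(psychopathy, predicted_guilt)[0, 1]
print(f"Pearson r = {r:.2f}")  # negative by construction
```

With this construction the true correlation is about -0.77, so the sketch reproduces the qualitative pattern (higher psychopathy, lower predicted guilt) rather than any specific effect size from the study.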

Wednesday, July 7, 2021

“False positive” emotions, responsibility, and moral character

Anderson, R. A., et al.
Cognition
Volume 214, September 2021, 104770


General discussion

Collectively, our results support the hypothesis that false positive moral emotions are associated with both judgments of moral character and traits associated with moral character. We consistently found that observers use an agent's false positive experience of moral emotions (e.g., guilt, gratitude) to infer their underlying moral character, their social likability, and to predict both their future emotional responses and their future moral behavior. Specifically, we found that observers judge an agent who experienced “false positive” guilt (in response to an accidental harm) as a more moral person, more likeable, less likely to commit future moral infractions, and more trustworthy than an agent who experienced no guilt. Our results help explain the second “puzzle” regarding guilt for accidental actions (Kamtekar & Nichols, 2019). Specifically, one reason that observers may find an accidental agent less blameworthy, and yet still be wary if the agent does not feel guilt, is that such false positive guilt provides an important indicator of that agent's underlying character.

Tuesday, May 18, 2021

Moderators of The Liking Bias in Judgments of Moral Character

Bocian, K., Baryla, W., & Wojciszke, B. (2021).
Personality and Social Psychology Bulletin.

Abstract 

Previous research found evidence for a liking bias in moral character judgments because judgments of liked people are higher than those of disliked or neutral ones. The present article sought conditions moderating this effect. In Study 1 (N = 792), the impact of the liking bias on moral character judgments was strongly attenuated when participants were educated that attitudes bias moral judgments. In Study 2 (N = 376), the influence of liking on moral character attributions was eliminated when participants were accountable for the justification of their moral judgments. Overall, these results suggest that even though liking biases moral character attributions, this bias might be reduced or eliminated when deeper information processing is required to generate judgments of others’ moral character.

Keywords: moral judgments, moral character, attitudes, liking bias, accountability

General Discussion

In this research, we sought to replicate past results demonstrating the influence of liking on moral character judgments, and we investigated conditions that could limit this influence. We demonstrated that liking elicited by similarity (Study 1) and mimicry (Study 2) biases perceptions of another person’s moral character. Thus, we corroborated previous findings by Bocian et al. (2018), who found that attitudes bias moral judgments. More importantly, we identified conditions that moderate the liking bias. Specifically, in Study 1, we found evidence that forewarning participants that liking can bias moral character judgments roughly halved the liking bias. In Study 2, we demonstrated that the liking bias was eliminated when we made participants accountable for their moral decisions.

By systematically examining the conditions that reduce the influence of liking on moral character attributions, we built on and extended past work on moral cognition and bias reduction. First, while past studies have focused on the impact of accountability on the fundamental attribution error (Tetlock, 1985), overconfidence (Tetlock & Kim, 1987), or order of information (Schadewald & Limberg, 1992), we examined the effectiveness of accountability in debiasing moral judgments. Thus, we demonstrated that biased moral judgments can be effectively corrected when people are obliged to justify their judgments to others. Second, we showed that educating people that attitudes might bias their moral judgments helped them, to some extent, debias their moral character judgments. We thus extended past research on the effectiveness of forewarning people of biases in social judgment and decision-making (Axt et al., 2018; Hershberger et al., 1997) to biases in moral judgments.
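The moderation pattern described above, a liking bias that shrinks under forewarning, can be sketched with simulated ratings. This is a hypothetical illustration only: the group means, sample sizes, and the assumption that forewarning halves the bias are invented, not taken from the studies.

```python
import numpy as np

# Hypothetical sketch of a moderated liking bias: liked targets get higher
# morality ratings, and forewarning shrinks that gap. Simulated values only.
rng = np.random.default_rng(2)

def mean_rating(liked, forewarned, n=300):
    """Mean simulated morality rating for one condition cell."""
    bias = 0.5 if forewarned else 1.0      # assumption: forewarning halves the bias
    base = 4.0 + (bias if liked else 0.0)  # liked targets rated higher
    return float(np.mean(base + rng.normal(0, 0.8, n)))

# The liking bias is the liked-minus-neutral difference within each condition
bias_control = mean_rating(True, False) - mean_rating(False, False)
bias_forewarned = mean_rating(True, True) - mean_rating(False, True)
print(f"bias (control) = {bias_control:.2f}, bias (forewarned) = {bias_forewarned:.2f}")
```

The interaction (a smaller liked-minus-neutral gap in the forewarned condition) is what "moderation" refers to in the abstract.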

Saturday, October 31, 2020

The new trinity of religious moral character: the Cooperator, the Crusader, and the Complicit

Abrams, S., Jackson, J., & Gray, K. (2021).
Current Opinion in Psychology, 40, 99–105.

Abstract

Does religion make people good or bad? We suggest that there are at least three distinct profiles of religious morality: the Cooperator, the Crusader, and the Complicit. Cooperators forego selfishness to benefit others, crusaders harm outgroups to bolster their own religious community, and the complicit use religion to justify selfish behavior and reduce blame. Different aspects of religion motivate each character: religious reverence makes people cooperators, religious tribalism makes people crusaders, and religious absolution makes people complicit. This framework makes sense of previous research by explaining when and how religion can make people more or less moral.

Highlights

• Different aspects of religion inspire both morality and immorality.

• These distinct influences are summarized through three profiles of moral character.

• The ‘Cooperator’ profile shows how religious reverence encourages people to sacrifice self-interest.

• The ‘Crusader’ profile shows how religious tribalism motivates ingroup loyalty and outgroup hostility.

• The ‘Complicit’ profile shows how religious absolution allows people to justify selfish behavior.

From the Conclusion

Religion and morality are complex, and so is their relationship. This review makes sense of religious and moral complexity through a taxonomy of three moral characters — the Cooperator, the Crusader, and the Complicit — each of which is facilitated by different aspects of religion. Religious reverence encourages people to be cooperators, religious tribalism justifies people to behave like crusaders, and religious absolution allows people to be complicit.

Thursday, March 19, 2020

Does virtue lead to status? Testing the moral virtue theory of status attainment.

Bai, F., Ho, G. C. C., & Yan, J. (2020).
Journal of Personality and Social Psychology, 118(3), 501–531.

Abstract

The authors perform one of the first empirical tests of the moral virtue theory of status attainment (MVT), a conceptual framework for showing that morality leads to status. Studies 1a to 1d are devoted to developing and validating a 15-item status attainment scale (SAS) to measure how virtue leads to admiration (virtue–admiration), how dominance leads to fear (dominance–fear), and how competence leads to respect (competence–respect). Studies 2a and 2b are an exploration of the nomological network and discriminant validity to show that peer-reported virtue–admiration is positively related to moral character and perceptions such as perceived warmth and unrelated to amoral constructs such as neuroticism. In addition, virtue–admiration mediates the positive effect of several self-reported moral character traits, such as moral identity-internalization, on status conferral. Study 3 supports the external validity of the virtue route to status in a sample of full-time managers from China. In Study 4, a preregistered experiment, virtue evokes superior status while selfishness evokes inferior status. Perceivers who are high in moral character show stronger perceptions of superior status. Finally, Study 5, another preregistered experiment, shows that virtue leads to higher status through inducing virtue–admiration rather than competence–respect, even for incompetent actors. The findings provide initial support for MVT arguing that virtue is a distinct, third route to status.

The research is here.

Tuesday, November 5, 2019

Moral Enhancement: A Realistic Approach

Greg Conan
British Medical Journal Blogs
Originally published August 29, 2019

Here is an excerpt:

If you could take a pill to make yourself a better person, would you do it? Could you justifiably make someone else do it, even if they do not want to?

When presented so simplistically, the idea might seem unrealistic or even impossible. The concepts of “taking a pill” and “becoming a better person” seem to belong to different categories. But many of the traits commonly considered to make one a “good person”—such as treating others fairly and kindly without violence—are psychological traits strongly influenced by neurobiology, and neurobiology can be changed using medicine. So when and how, if ever, should medicine be used to improve moral character?

Moral bioenhancement (MBE), the concept of improving moral character using biomedical technology, has fascinated me for years—especially once I learned that it has been hotly debated in the bioethics literature since 2008. I have greatly enjoyed diving into the literature to learn about how the concept has been analyzed and presented. Much of the debate has focused on its most abstract topics, like defining its terms and relating MBE to freedom. Although my fondness for analytic philosophy means that I cannot condemn anyone for working to examine ideas with maximum clarity and specificity, any MBE proponent who actually wants MBE to be implemented must focus on realistic methods.

The info is here.

Saturday, September 14, 2019

Do People Want to Be More Moral?

Jessie Sun and Geoffrey Goodwin
PsyArXiv Preprints
Originally posted August 26, 2019

Abstract

Most people want to change some aspects of their personality, but does this phenomenon extend to moral character, and to close others? Targets (N = 800) and well-acquainted informants (N = 958) rated targets’ personality traits and reported how much they wanted the target to change each trait. Targets and informants reported a lower desire to change more morally-relevant traits (e.g., honesty, compassion), compared to less morally-relevant traits (e.g., anxiety, sociability). Moreover, although targets and informants generally wanted targets to improve more on traits that targets had less desirable levels of, targets’ moral change goals were less calibrated to their current levels. Finally, informants wanted targets to change in similar ways, but to a lesser extent, than targets themselves did. These findings shed light on self–other similarities and asymmetries in personality change goals, and suggest that the general desire for self-improvement may be less prevalent in the moral domain.

From the Discussion:

Why don’t people particularly want to be more moral? One possibility is that people see less room for improvement on moral traits, especially given the relatively high ratings on these traits.  Our data cannot speak directly to this possibility, because people might not be claiming that they have the lowest or highest possible levels of each trait when they “strongly disagree” or “strongly agree” with each trait description (Blanton & Jaccard, 2006). Testing this idea would therefore require a more direct measure of where people think they stand, relative to these extremes.

A related possibility is that people are less motivated to improve moral traits because they already see themselves as being quite high on such traits, and therefore morally “good enough”—even if they think they could be morally better (see Schwitzgebel, 2019). Consistent with this idea, supplemental analyses showed that people are less inclined to change the traits that they rate themselves higher on, compared to traits that they rate themselves lower on. However, even controlling for current levels, people are still less inclined to change more morally-relevant traits (see Supplemental Material for these within-person analyses), suggesting that additional psychological factors might reduce people’s desire to change morally-relevant traits.

One additional possibility is that people are more motivated to change in ways that will improve their own well-being (Hudson & Fraley, 2016). Whereas becoming less anxious has obvious personal benefits, people might believe that becoming more moral would result in few personal benefits (or even costs).

The research is here.

Friday, September 13, 2019

Intention matters to make you (im)moral: Positive-negative asymmetry in moral character evaluations

Hirozawa, P. Y., Karasawa, M., & Matsuo, A. (2019).
The Journal of Social Psychology.
DOI: 10.1080/00224545.2019.1653254

Abstract

Is intention, even if unfulfilled, enough to make a person appear to be good or bad? In this study, we investigated the influence of unfulfilled intentions of an agent on subsequent moral character evaluations. We found a positive-negative asymmetry in the effect of intentions. Factual information concerning failure to fulfill a positive intention mitigated the morality judgment of the actor, yet this mitigation was not as evident for the negative vignettes. Participants rated an actor who failed to fulfill their negative intention as highly immoral, as long as there was an external explanation to its unfulfillment. Furthermore, both emotional and cognitive (i.e., informativeness) processes mediated the effect of negative intention on moral character. For the positive intention, there was a significant mediation by emotions, yet not by informativeness. Results evidence the relevance of mental states in moral character evaluations and offer affective and cognitive explanations to the asymmetry.

Conclusion

In this study, we investigated whether intentions by themselves are enough to make an agent appear to be good or bad. The answer is yes, but with a qualification: we found that negative intentions are more diagnostic of an immoral character than positive intentions are of a moral character. Simply intending to offer cookies should not, after all, make a neighbor particularly virtuous, unless the intention is acted out. The positive-negative asymmetry demonstrated in the present study may capture a fundamental aspect of people’s moral judgments, particularly for disposition-based evaluations.
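The mediation claims in this abstract (emotional and cognitive processes carrying the effect of intention on character judgments) rest on the standard product-of-coefficients logic. Below is a minimal sketch of that logic with simulated data, assuming simple OLS regressions; the coefficients, variable names, and model are invented for illustration and are not the paper's actual analysis:

```python
import numpy as np

# Product-of-coefficients mediation sketch: X (intention) -> M (emotion) -> Y
# (character judgment). All data are simulated with known path strengths.
rng = np.random.default_rng(1)
n = 500

intention = rng.integers(0, 2, n).astype(float)   # 0 = neutral, 1 = negative intent
emotion = 2.0 * intention + rng.normal(0, 1, n)   # path a: X -> M
judgment = 1.5 * emotion + 0.5 * intention + rng.normal(0, 1, n)  # paths b and c'

def slopes(y, predictors):
    """OLS coefficients for y ~ predictors (intercept included, then dropped)."""
    design = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1:]

a = slopes(emotion, [intention])[0]                  # effect of X on the mediator
b, c_prime = slopes(judgment, [emotion, intention])  # mediator effect, direct effect
indirect = a * b                                     # mediated (indirect) effect
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect:.2f}, direct = {c_prime:.2f}")
```

A large indirect effect (a * b) alongside a smaller direct effect (c') is the signature pattern behind a claim that a process "mediates" an effect; published mediation analyses typically also bootstrap a confidence interval around the indirect effect.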

Tuesday, June 11, 2019

Moral character: What it is and what it does

Cohen, T. R., & Morse, L. (2014).
In A. P. Brief & B. M. Staw (Eds.), Research in Organizational Behavior.

Abstract

Moral character can be conceptualized as an individual’s disposition to think, feel, and behave in an ethical versus unethical manner, or as the subset of individual differences relevant to morality. This essay provides an organizing framework for understanding moral character and its relationship to ethical and unethical work behaviors. We present a tripartite model for understanding moral character, with the idea that there are motivational, ability, and identity elements. The motivational element is consideration of others—referring to a disposition toward considering the needs and interests of others, and how one’s own actions affect other people. The ability element is self-regulation—referring to a disposition toward regulating one’s behavior effectively, specifically with reference to behaviors that have positive short-term consequences but negative long-term consequences for oneself or others. The identity element is moral identity—referring to a disposition toward valuing morality and wanting to view oneself as a moral person. After unpacking what moral character is, we turn our attention to what moral character does, with a focus on how it influences unethical behavior, situation selection, and situation creation. Our research indicates that the impact of moral character on work outcomes is significant and consequential, with important implications for research and practice in organizational behavior.

A copy can be downloaded here.