Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label In Group. Show all posts

Saturday, January 14, 2023

Individuals prefer to harm their own group rather than help an opposing group

Rachel Gershon and Ariel Fridman
PNAS, 119 (49) e2215633119


Group-based conflict enacts a severe toll on society, yet the psychological factors governing behavior in group conflicts remain unclear. Past work finds that group members seek to maximize relative differences between their in-group and out-group (“in-group favoritism”) and are driven by a desire to benefit in-groups rather than harm out-groups (the “in-group love” hypothesis). This prior research studies how decision-makers approach trade-offs between two net-positive outcomes for their in-group. However, in the real world, group members often face trade-offs between net-negative options, entailing either losses to their group or gains for the opposition. Anecdotally, under such conditions, individuals may avoid supporting their opponents even if this harms their own group, seemingly inconsistent with “in-group love” or a harm minimizing strategy. Yet, to the best of our knowledge, these circumstances have not been investigated. In six pre-registered studies, we find consistent evidence that individuals prefer to harm their own group rather than provide even minimal support to an opposing group across polarized issues (abortion access, political party, gun rights). Strikingly, in an incentive-compatible experiment, individuals preferred to subtract more than three times as much from their own group rather than support an opposing group, despite believing that their in-group is more effective with funds. We find that identity concerns drive preferences in group decision-making, and individuals believe that supporting an opposing group is less value-compatible than harming their own group. Our results hold valuable insights for the psychology of decision-making in intergroup conflict as well as potential interventions for conflict resolution.


Understanding the principles guiding decisions in intergroup conflicts is essential to recognizing the psychological barriers to compromise and cooperation. We introduce a novel paradigm for studying group decision-making, demonstrating that individuals are so averse to supporting opposing groups that they prefer equivalent or greater harm to their own group instead. While previous models of group decision-making claim that group members are driven by a desire to benefit their in-group (“in-group love”) rather than harm their out-group, our results cannot be explained by in-group love or by a harm minimizing strategy. Instead, we propose that identity concerns drive this behavior. Our theorizing speaks to research in psychology, political theory, and negotiations by examining how group members navigate trade-offs among competing priorities.

From the Conclusion

We synthesize prior work on support-framing and propose the Identity-Support model, which can parsimoniously explain our findings across win-win and lose-lose scenarios. The model suggests that individuals act in group conflicts to promote their identity, and they do so primarily by providing support to causes they believe in (and avoid supporting causes they oppose; see also SI Appendix, Study S1). Simply put, in win-win contexts, supporting the in-group is more expressive of one’s identity as a group member than harming the opposing group, thereby leading to a preference for in-group support. In lose-lose contexts, supporting the opposing group is more negatively expressive of one’s identity as a group member than harming the in-group, resulting in a preference for in-group harm. Therefore, the principle that individuals make decisions in group conflicts to promote and protect their identity, primarily by allocating their support in ways that most align with their values, offers a single framework that predicts individual behavior in group conflicts in both win-win and lose-lose contexts.

Sunday, March 6, 2022

Investigating the role of group-based morality in extreme behavioral expressions of prejudice

Hoover, J., Atari, M., et al. 
Nat Commun 12, 4585 (2021). 


Understanding the motivations underlying acts of hatred is essential for developing strategies to prevent such extreme behavioral expressions of prejudice (EBEPs) against marginalized groups. In this work, we investigate the motivations underlying EBEPs as a function of moral values. Specifically, we propose EBEPs may often be best understood as morally motivated behaviors grounded in people’s moral values and perceptions of moral violations. As evidence, we report five studies that integrate spatial modeling and experimental methods to investigate the relationship between moral values and EBEPs. Our results, from these U.S.-based studies, suggest that moral values oriented around group preservation are predictive of the county-level prevalence of hate groups and associated with the belief that extreme behavioral expressions of prejudice against marginalized groups are justified. Additional analyses suggest that the association between group-based moral values and EBEPs against outgroups can be partly explained by the belief that these groups have done something morally wrong.

From the Discussion

Notably, Study 5 provided tentative evidence that binding values may be a particularly important risk factor for the perceived justification of EBEPs. Participants who were experimentally manipulated to believe an outgroup had done something immoral were more likely to perceive acts of hate against that outgroup as justified when they felt that the outgroup’s behavior was more morally wrong. However, this association between PMW and the justification of hate acts was strongly moderated by people’s binding values, but not by their individualizing values. Ultimately, comparing people high on binding values to people high on individualizing values, we found that the average causal mediation effect in the domain of binding values was more than six times the average causal mediation effect in the domain of individualizing values. In other words, our results suggest that if two people see an outgroup’s binding values violation as equally morally wrong, but one of them has higher binding values, the person with higher binding values will see EBEPs against the outgroup as more justified. However, no such difference was observed in the domain of individualizing values.

Accordingly, our results suggest that people who attribute moral violations to an outgroup may be at higher risk for justifying, or perhaps even expressing, extreme prejudice toward outgroups; however, our results also suggest that people who prioritize the binding values may be particularly susceptible to this dynamic when they perceive a violation of ingroup loyalty, respect for authority, and physical or spiritual purity. In this sense, our findings are consistent with the hypothesis that acts of hate—a class of behaviors of which many have received their own special legal designation as particularly heinous crimes4—are partly motivated by individuals’ moral beliefs. This view is well-grounded in current understandings of the relationship between morality and acts of extremism or violence.

Saturday, December 11, 2021

Older adults across the globe exhibit increased prosocial behavior but also greater in-group preferences

Cutler, J., Nitschke, J.P., Lamm, C. et al. 
Nat Aging 1, 880–888 (2021).


Population aging is a global phenomenon with substantial implications across society. Prosocial behaviors—actions that benefit others—promote mental and physical health across the lifespan and can save lives during the COVID-19 pandemic. We examined whether age predicts prosociality in a preregistered global study (46,576 people aged 18–99 across 67 countries) using two acutely relevant measures: distancing during COVID-19 and willingness to donate to hypothetical charities. Age positively predicted prosociality on both measures, with increased distancing and donations among older adults. However, older adults were more in-group focused than younger adults in choosing who to help, making larger donations to national over international charities and reporting increased in-group preferences. In-group preferences helped explain greater national over international donations. Results were robust to several control analyses and internal replication. Our findings have vital implications for predicting the social and economic impacts of aging populations, increasing compliance with public health measures and encouraging charitable donations.


Prosocial behaviors have critical individual and societal impacts. Emerging evidence suggests that older adults might be more prosocial than younger adults on measures including economic games, learning about rewards for others, effortful actions, and charitable donations. In line with this, theoretical accounts of lifespan development, such as socioemotional selectivity theory, propose that motivation for socially and emotionally meaningful behaviors increases as a result of age-related differences in goals and priorities. However, most research has tested participants from western, educated, industrialized, rich and democratic populations. It is unknown whether increased prosociality is shown by older adults across the world. Moreover, although some studies point to increased prosocial behavior, others find no association or even heightened negative behaviors, including greater bias toward one’s own emotions, increased stereotyping of outgroups and less support for foreign aid. Together, these findings suggest that age might be associated with both increased positive helping behaviors and heightened self-serving and in-group preferences.

Wednesday, September 9, 2020

Hate Trumps Love: The Impact of Political Polarization on Social Preferences

Eugen Dimant
Published 4 September 20


Political polarization has ruptured the fabric of U.S. society. The focus of this paper is to examine various layers of (non-)strategic decision-making that are plausibly affected by political polarization through the lens of one's feelings of hate and love for Donald J. Trump. In several pre-registered experiments, I document the behavioral-, belief-, and norm-based mechanisms through which perceptions of interpersonal closeness, altruism, and cooperativeness are affected by polarization, both within and between political factions. To separate ingroup-love from outgroup-hate, the political setting is contrasted with a minimal group setting. I find strong heterogeneous effects: ingroup-love occurs in the perceptional domain (how close one feels towards others), whereas outgroup-hate occurs in the behavioral domain (how one helps/harms/cooperates with others). In addition, the pernicious outcomes of partisan identity also comport with the elicited social norms. Notably, the rich experimental setting also allows me to examine the drivers of these behaviors, suggesting that the observed partisan rift might not be as forlorn as previously suggested: in the contexts studied here, the adverse behavioral impact of the resulting intergroup conflict can be attributed to one's grim expectations about the cooperativeness of the opposing faction, as opposed to one's actual unwillingness to cooperate with them.

From the Conclusion and Discussion

Along all investigated dimensions, I obtain strong effects and the following results: for one, polarization produces ingroup/outgroup differentiation in all three settings (nonstrategic, Experiment 1; strategic, Experiment 2; social norms, Experiment 3), leading participants to actively harm and cooperate less with participants from the opposing faction. For another, lack of cooperation is not the result of a categorical unwillingness to cooperate across factions, but based on one’s grim expectations about the other’s willingness to cooperate. Importantly, however, the results also cast light on the nuance with which ingroup-love and outgroup-hate – something that existing literature often takes as being two sides of the same coin – occur. In particular, by comparing behavior between the Trump Prime and minimal group prime treatments, the results suggest that ingroup-love can be observed in terms of feeling close to one another, whereas outgroup-hate appears in the form of taking money away from and being less cooperative with each other. The elicited norms are consistent with these observations and also point out that those who love Trump have a much weaker ingroup/outgroup differentiation than those who hate Trump do.

Friday, May 15, 2020

“Do the right thing” for whom? An experiment on ingroup favouritism, group assorting and moral suasion

E. Bilancini, L. Boncinelli, & others
Judgment and Decision Making, 
Vol. 15, No. 2, March 2020, pp. 182-192


In this paper we investigate the effect of moral suasion on ingroup favouritism. We report a well-powered, pre-registered, two-stage 2x2 mixed-design experiment. In the first stage, groups are formed on the basis of how participants answer a set of questions, concerning non-morally relevant issues in one treatment (assorting on non-moral preferences), and morally relevant issues in another treatment (assorting on moral preferences). In the second stage, participants choose how to split a given amount of money between participants of their own group and participants of the other group, first in the baseline setting and then in a setting where they are told to do what they believe to be morally right (moral suasion). Our main results are: (i) in the baseline, participants tend to favour their own group to a greater extent when groups are assorted according to moral preferences, compared to when they are assorted according to non-moral preferences; (ii) the net effect of moral suasion is to decrease ingroup favouritism, but there is also a non-negligible proportion of participants for whom moral suasion increases ingroup favouritism; (iii) the effect of moral suasion is substantially stable across group assorting and four pre-registered individual characteristics (gender, political orientation, religiosity, pro-life vs pro-choice ethical convictions).

From the Discussion:

The interest in moral suasion stems, at least in part, from being a cheap and possibly effective policy tool that could be applied to foster prosocial behaviours. While the literature on moral behaviour has so far produced a substantial body of evidence showing the effectiveness of moral suasion, its dependence on the identity of the recipients of the decision-maker’s actions is far less studied, leaving open the possibility that individuals react to moral suasion by reducing prosociality towards some types of recipients. This paper has addressed this issue in the setting of a decision to split a given amount of money between members of one’s own group and members of another group, providing experimental evidence that, on average, moral suasion increases pro-sociality towards both the ingroup and the outgroup; however, the increase towards the outgroup is greater than the increase towards the ingroup, and this results in the fact that ingroup favouritism, on average, declines under moral suasion.


Saturday, January 18, 2020

Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N., & Gray, K. (2019).
American Psychologist.

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (ΣN = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also shown larger increases in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.


Wednesday, October 9, 2019

Whistle-blowers act out of a sense of morality

Alice Walton
Originally posted September 16, 2019

Here is an excerpt:

To understand the factors that predict the likelihood of whistle-blowing, the researchers analyzed data from more than 42,000 participants in the ongoing Merit Principles Survey, which has polled US government employees since 1979, and which covers whistle-blowing. Respondents answer questions about their past experiences with unethical behavior, the approaches they’d take in dealing with future unethical behavior, and their personal characteristics, including their concern for others and their feelings about their organizations.

Concern for others was the strongest predictor of whistle-blowing, the researchers find. This was true both of people who had already blown the whistle on bad behavior and of people who expected they might in the future.

Loyalty to an immediate community—or ingroup, in psychological terms—was also linked to whistle-blowing, but in an inverse way. “The greater people’s concern for loyalty, the less likely they were to blow the whistle,” write the researchers. 

Organizational factors—such as people’s perceptions about their employer, their concern for their job, and their level of motivation or engagement—were largely unconnected to whether people spoke up. The only ones that appeared to matter were how fair people perceived their organization to be, as well as the extent to which the organization educated its employees about ways to expose bad behavior and the rights of whistle-blowers. The data suggest these two factors were linked to whether whistle-blowers opted to address the unethical behavior through internal or external avenues. 


Wednesday, August 28, 2019

Profit Versus Prejudice: Harnessing Self-Interest to Reduce In-Group Bias

Stagnaro, M. N., Dunham, Y., & Rand, D. G. (2018).
Social Psychological and Personality Science, 9(1), 50–58.


We examine the possibility that self-interest, typically thought to undermine social welfare, might reduce in-group bias. We compared the dictator game (DG), where participants unilaterally divide money between themselves and a recipient, and the ultimatum game (UG), where the recipient can reject these offers. Unlike the DG, there is a self-interested motive for UG giving: If participants expect the rejection of unfair offers, they have a monetary incentive to be fair even to out-group members. Thus, we predicted substantial bias in the DG but little bias in the UG. We tested this hypothesis in two studies (N = 3,546) employing a 2 (in-group/out-group, based on abortion position) × 2 (DG/UG) design. We observed the predicted significant group by game interaction, such that the substantial in-group favoritism observed in the DG was almost entirely eliminated in the UG: Giving the recipient bargaining power reduced the premium offered to in-group members by 77.5%.
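The strategic contrast the authors describe can be sketched as simple payoff logic: in the dictator game any split stands, while in the ultimatum game an offer below the recipient's acceptance threshold leaves both players with nothing. This is an illustrative sketch only, not the authors' experimental materials; the pot size and rejection threshold are hypothetical values chosen for clarity.

```python
# Illustrative payoff logic for the dictator game (DG) and ultimatum game (UG).
# The pot size and the recipient's rejection threshold are hypothetical.

POT = 10  # total amount to split


def dictator_payoffs(offer: int) -> tuple[int, int]:
    """DG: the recipient has no say, so any split stands."""
    return POT - offer, offer


def ultimatum_payoffs(offer: int, min_acceptable: int) -> tuple[int, int]:
    """UG: the recipient rejects offers below their threshold,
    leaving both players with nothing."""
    if offer < min_acceptable:
        return 0, 0
    return POT - offer, offer


def best_selfish_offer(min_acceptable: int) -> int:
    """A purely self-interested proposer's optimal UG offer:
    the smallest offer the recipient will still accept."""
    return max(range(POT + 1),
               key=lambda o: ultimatum_payoffs(o, min_acceptable)[0])


# In the DG, a biased proposer can low-ball an out-group recipient at no cost;
# in the UG, anticipated rejection makes that same unfairness expensive.
print(dictator_payoffs(1))      # biased DG split stands: (9, 1)
print(ultimatum_payoffs(1, 4))  # same offer rejected in the UG: (0, 0)
print(best_selfish_offer(4))    # self-interest pushes the offer up to 4
```

The sketch captures why the design isolates self-interest: giving the recipient veto power changes the proposer's incentives without changing the proposer's group attitudes.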

Here we have provided evidence that self-interest has the potential to override in-group bias based on a salient and highly charged real-world grouping (abortion stance). In the DG, where participants had the power to offer whatever they liked, we saw clear evidence of behavior favoring in-group members. In the UG, where the recipient could reject the offer, acting on such biases had the potential to severely reduce earnings. Participants anticipated this, as shown by their expectations of partner behavior, and made fair offers to both in-group and out-group participants.

Traditionally, self-interest is considered a negative force in intergroup relations. For example, an individual might give free rein to a preference for interacting with similar others, and even be willing to pay a cost to satisfy those preferences, resulting in what has been called “taste-based” discrimination (Becker, 1957). Although we do not deny that such discrimination can (and often does) occur, we suggest that in the right context, the costs it can impose serve as a disincentive. In particular, when strategic concerns are heightened, as they are in multilateral interactions where the parties must come to an agreement and failing to do so is both salient and costly (such as the UG), self-interest has the opportunity to mitigate biased behavior. Here, we provide one example of such a situation: We find that participants successfully withheld bias in the UG, making equally fair offers to both in-group and out-group recipients.

Saturday, January 12, 2019

Monitoring Moral Virtue: When the Moral Transgressions of In-Group Members Are Judged More Severely

Karim Bettache, Takeshi Hamamura, J.A. Idrissi, R.G.J. Amenyogbo, & C. Chiu
Journal of Cross-Cultural Psychology
First Published December 5, 2018


Literature indicates that people tend to judge the moral transgressions committed by out-group members more severely than those of in-group members. However, these transgressions often conflate a moral transgression with some form of intergroup harm. There is little research examining in-group versus out-group transgressions of harmless offenses, which violate moral standards that bind people together (binding foundations). As these moral standards center around group cohesiveness, a transgression committed by an in-group member may be judged more severely. The current research presented Dutch Muslims (Study 1), American Christians (Study 2), and Indian Hindus (Study 3) with a set of fictitious stories depicting harmless and harmful moral transgressions. Consistent with our expectations, participants who strongly identified with their religious community judged harmless moral offenses committed by in-group members, relative to out-group members, more severely. In contrast, this effect was absent when participants judged harmful moral transgressions. We discuss the implications of these results.

Wednesday, August 1, 2018

Why our brains see the world as ‘us’ versus ‘them’

Leslie Henderson
The Conversation
Originally posted June 2018

Here is an excerpt:

As opposed to fear, distrust and anxiety, circuits of neurons in brain regions called the mesolimbic system are critical mediators of our sense of “reward.” These neurons control the release of the transmitter dopamine, which is associated with an enhanced sense of pleasure. The addictive nature of some drugs, as well as pathological gaming and gambling, are correlated with increased dopamine in mesolimbic circuits.

In addition to dopamine itself, neurochemicals such as oxytocin can significantly alter the sense of reward and pleasure, especially in relationship to social interactions, by modulating these mesolimbic circuits.

Methodological variations indicate further study is needed to fully understand the roles of these signaling pathways in people. That caveat acknowledged, there is much we can learn from the complex social interactions of other mammals.

The neural circuits that govern social behavior and reward arose early in vertebrate evolution and are present in birds, reptiles, bony fishes and amphibians, as well as mammals. So while there is not a lot of information on reward pathway activity in people during in-group versus out-group social situations, there are some tantalizing results from studies on other mammals.


Thursday, May 3, 2018

Why Pure Reason Won’t End American Tribalism

Robert Wright
Originally published April 9, 2018

Here is an excerpt:

Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.

So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.

But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.

I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.


Friday, March 30, 2018

Not Noble Savages After All: Limits to Early Altruism

Karen Wynn, Paul Bloom, Ashley Jordan, Julia Marshall, Mark Sheskin
Current Directions in Psychological Science 
Vol 27, Issue 1, pp. 3 - 8
First Published December 22, 2017


Many scholars draw on evidence from evolutionary biology, behavioral economics, and infant research to argue that humans are “noble savages,” endowed with indiscriminate kindness. We believe this is mistaken. While there is evidence for an early-emerging moral sense—even infants recognize and favor instances of fairness and kindness among third parties—altruistic behaviors are selective from the start. Babies and young children favor people who have been kind to them in the past and favor familiar individuals over strangers. They hold strong biases for in-group over out-group members and for themselves over others, and indeed are more unequivocally selfish than older children and adults. Much of what is most impressive about adult morality arises not through inborn capacities but through a fraught developmental process that involves exposure to culture and the exercise of rationality.


Tuesday, September 26, 2017

The Influence of War on Moral Judgments about Harm

Hanne M Watkins and Simon M Laham


How does war influence moral judgments about harm? While the general rule is “thou shalt not kill,” war appears to provide an unfortunately common exception to the moral prohibition on intentional harm. In three studies (N = 263, N = 557, N = 793), we quantify the difference in moral judgments across peace and war contexts, and explore two possible explanations for the difference. Taken together, the findings of the present studies have implications for moral psychology researchers who use war-based scenarios to study broader cognitive or affective processes. If the war context changes judgments of moral scenarios by triggering group-based reasoning or altering the perceived structure of the moral event, using such scenarios to make “decontextualized” claims about moral judgment may not be warranted.

Here is part of the discussion.

A number of researchers have begun to investigate how social contexts may influence moral judgment, whether those social contexts are grounded in groups (Carnes et al., 2015; Ellemers & van den Bos, 2009) or relationships (Fiske & Rai, 2014; Simpson, Laham, & Fiske, 2015). The war context is another specific context which influences moral judgments: in the present study we found that the intergroup nature of war influenced people’s moral judgments about harm in war – even if they belonged to neither of the two groups actually at war – and that the usually robust difference between switch and footbridge scenarios was attenuated in the war context. One implication of these findings is that some caution may be warranted when using war-based scenarios for studying morality in general. As mentioned in the introduction, scenarios set in war are often used in the study of broad domains or general processes of judgment (e.g., Graham et al., 2009; Phillips & Young, 2011; Piazza et al., 2013). Given the interaction of war context with intergroup considerations and with the construed structure of the moral event in the present studies, researchers are well advised to avoid making generalizations to morality writ large on the basis of war-related scenarios (see also Bauman, McGraw, Bartels, & Warren, 2014; Bloom, 2011).


Wednesday, December 7, 2016

Do conservatives value ‘moral purity’ more than liberals?

Kate Johnson and Joe Hoover
The Conversation
Originally posted November 21, 2016

Here is an excerpt:

Our results were remarkably consistent with our first study. When people thought the person they were being partnered with did not share their purity concerns, they tended to avoid them. And, when people thought their partner did share their purity concerns, they wanted to associate with them.

As on Twitter, people were much more likely to associate with the other person when they had similar responses to the moral purity scenarios and to avoid them when they had dissimilar responses. And this pattern of responding was much stronger for purity concerns than for similarities or differences on any other moral concern, regardless of people’s religious and political affiliation and the religious and political affiliation they attributed to their partner.

There are many examples of how moral purity concerns are woven deeply into the fabric of social life. For example, have you noticed that when we derogate another person or social group we often rely on adjectives like “dirty,” and “disgusting”? Whether we are talking about “dirty hippies” or an entire class of “untouchables” or “deplorables,” we tend to signal inferiority and separation through moral terms grounded in notions of bodily and spiritual purity.


Sunday, November 6, 2016

The Psychology of Disproportionate Punishment

Daniel Yudkin
Scientific American
Originally published October 18, 2016

Here is an excerpt:

These studies suggest that certain features of the human mind are prone to “intergroup bias” in punishment. While our slow, thoughtful deliberative side may desire to maintain strong standards of fairness and equality, our more basic, reflexive side may be prone to hostility and aggression to anyone deemed an outsider.

Indeed, this is consistent with what we know about the evolutionary heritage of our species, which spent thousands of years in tightly knit tribal groups competing for scarce resources on the African savannah. Intergroup bias may be tightly woven into the fabric of everyone's DNA, ready to emerge under conditions of hurry or stress.

But the picture of human relationships is not all bleak. Indeed, another line of research in which I am involved, led by Avital Mentovich, sheds light on the ways we might transcend the biases that lurk beneath the surface of the psyche.

The article is here.

Friday, April 29, 2016

No, You Can’t Feel Sorry for Everyone

By Adam Waytz
Originally posted April 14, 2015

Here is an excerpt:

Morality can’t be everywhere at once—we humans have trouble extending equal compassion to foreign earthquake victims and hurricane victims in our own country. Our capacity to feel and act prosocially toward another person is finite. And one moral principle can constrain another. Even political liberals who prize universalism recoil when it distracts from a targeted focus on socially disadvantaged groups. Empathy draws our attention toward particular targets, and whether that target represents the underprivileged, blood relatives, refugees from a distant country, or players on a sports team, those targets draw our attention away from other equally (or more) deserving ones.

That means we need to abandon an idealized cultural sensitivity that gives all moral values equal importance. We must instead focus our limited moral resources on a few values, and make tough choices about which ones are more important than others. Collectively, we must decide that these actions affect human happiness more than those actions, and therefore the first set must be deemed more moral than the second set.

The article is here.

Thursday, February 4, 2016

Empathy can be learned by sharing positive experiences

Yahoo News
Originally published December 28, 2015

A study by researchers at the University of Zurich indicates that empathy towards strangers can be learned and that positive experiences with others influence empathic brain responses.

According to a recent Swiss study, we are all capable of feeling empathy towards strangers. By repeating positive experiences with strangers, our brain learns and develops empathic responses.

The article is here.

Saturday, December 12, 2015

The Whitewashing Effect: Using Racial Contact to Signal Trustworthiness and Competence

Stephen T. La Macchia, Winnifred R. Louis, Matthew J. Hornsey, M. Thai, & F. K. Barlow
Personality and Social Psychology Bulletin, 42(1), 118-129, January 2016


The present research examines whether people use racial contact to signal positive and negative social attributes. In two experiments, participants were instructed to fake good (trustworthy/competent) or fake bad (untrustworthy/incompetent) when reporting their amount of contact with a range of different racial groups. In Experiment 1 (N = 364), participants faking good reported significantly more contact with White Americans than with non-White Americans, whereas participants faking bad did not. In Experiment 2 (N = 1,056), this pattern was replicated and was found to be particularly pronounced among those with stronger pro-White bias. These findings suggest that individuals may use racial contact as a social signal, effectively “whitewashing” their apparent contact and friendships when trying to present positively.

The entire article is here.

Thursday, October 29, 2015

Choosing Empathy

A Conversation with Jamil Zaki
The Edge
Originally published October 19, 2015

Here are some excerpts:

The first narrative is that empathy is automatic. This goes all the way back to Adam Smith, who, to me, generated the first modern account of empathy in his beautiful book, The Theory of Moral Sentiments. Smith described what he called the "fellow-feeling," through which people take on each other's states—very similar to what I would call experience sharing.              


That's one narrative, that empathy is automatic, and again, it’s compelling—backed by lots of evidence. But if you believe that empathy always occurs automatically, you run into a freight train of evidence to the contrary. As many of us know, there are lots of instances in which people could feel empathy, but don't. The prototype case here is intergroup settings. People who are divided by a war, or a political issue, or even a sports rivalry, often experience a collapse of their empathy. In many cases, these folks feel apathy for others on the other side of a group boundary. They fail to share, or think about, or feel concern for those other people's emotions.              

In other cases, it gets even worse: people feel overt antipathy towards others, for instance, taking pleasure when some misfortune befalls someone on the other side of a group boundary. What's interesting to me is that this occurs not only for group boundaries that are meaningful, like ethnicity or religion, but also for totally arbitrary groups. If I were to divide us into a red and blue team, without that taking on any more significance, you would be more likely to experience empathy for fellow red team members than for me (apparently I'm on team blue today).

The entire post and video is here.

Tuesday, August 18, 2015

Not Just Empathy: Meaning of Life TV

Robert Wright and Paul Bloom
Meaning of Life.tv

Bob and Paul discuss empathy, compassion, values, moral development, beliefs, in-group/out-group biases, and evolutionary psychology. There are some pithy remarks and humorous lines, but also a great deal of research and wisdom in this 42-minute video. It is truly a video worth watching.