Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Economic Games.

Tuesday, February 14, 2023

Helping the ingroup versus harming the outgroup: Evidence from morality-based groups

Grigoryan, L., Seo, S., Simunovic, D., & Hoffman, W.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104436

Abstract

The discrepancy between ingroup favoritism and outgroup hostility is well established in social psychology. Under which conditions does “ingroup love” turn into “outgroup hate”? Studies with natural groups suggest that when group membership is based on (dis)similarity of moral beliefs, people are willing not only to help the ingroup, but also to harm the outgroup. The key limitation of these studies is that the use of natural groups confounds the effects of shared morality with the history of intergroup relations. We tested the effect of morality-based group membership on intergroup behavior using artificial groups that help disentangle these effects. We used the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game, which differentiates between the behavioral options of weak parochialism (helping the ingroup), strong parochialism (harming the outgroup), universal cooperation (helping both groups), and egoism (profiting individually). In three preregistered experiments, we find that morality-based groups exhibit less egoism and more universal cooperation than non-morality-based groups. We also find some evidence of stronger ingroup favoritism in morality-based groups, but no evidence of stronger outgroup hostility. Stronger ingroup favoritism in morality-based groups is driven by expectations from the ingroup, but not the outgroup. These findings contradict earlier evidence from natural groups and suggest that (dis)similarity of moral beliefs is not sufficient to cross the boundary between “ingroup love” and “outgroup hate”.
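
For readers unfamiliar with the paradigm, the sketch below lays out the four IPUC behavioral options as a simple payoff mapping in Python. The numeric stakes are illustrative placeholders only; the actual payoffs used by Grigoryan et al. are not reported in this excerpt.

```python
# Minimal sketch of the IPUC choice structure described in the abstract.
# The payoff values are illustrative placeholders, not the ones used in the study.
from dataclasses import dataclass

@dataclass
class Payoffs:
    self_: int      # change to the decision-maker's own earnings
    ingroup: int    # change to the ingroup's earnings
    outgroup: int   # change to the outgroup's earnings

# The four behavioral options the IPUC game differentiates between.
IPUC_OPTIONS = {
    "egoism":                Payoffs(self_=+10, ingroup=0,   outgroup=0),    # keep the endowment
    "weak_parochialism":     Payoffs(self_=+5,  ingroup=+10, outgroup=0),    # help the ingroup only
    "strong_parochialism":   Payoffs(self_=+5,  ingroup=+10, outgroup=-10),  # help the ingroup, harm the outgroup
    "universal_cooperation": Payoffs(self_=+5,  ingroup=+10, outgroup=+10),  # help both groups
}

for name, p in IPUC_OPTIONS.items():
    print(f"{name}: self {p.self_:+}, ingroup {p.ingroup:+}, outgroup {p.outgroup:+}")
```

The useful property of this structure is that helping the ingroup and harming the outgroup are separate options rather than two ends of a single scale, which is what lets the game distinguish “ingroup love” from “outgroup hate”.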

General discussion

When does “ingroup love” turn into “outgroup hate”? Previous studies conducted on natural groups suggest that the centrality of morality to the group’s identity is one such condition: morality-based groups showed more hostility towards outgroups than non-morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). We set out to test this hypothesis in a minimal group setting, using the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game. Across three pre-registered studies, we found no evidence that morality-based groups show more hostility towards outgroups than non-morality-based groups. Instead, morality-based groups exhibited less egoism and more universal cooperation (helping both the ingroup and the outgroup) than non-morality-based groups. This finding is consistent with earlier research showing that the salience of morality makes people more cooperative (Capraro et al., 2019). Importantly, our morality manipulation was not specific to any pro-cooperation moral norm. Simply asking participants to think about the criteria they use to judge what is right and what is wrong was enough to increase universal cooperation.

Our findings are inconsistent with research showing stronger outgroup hostility in morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). The key difference between the set of studies presented here and the earlier studies that find outgroup hostility in morality-based groups is the use of natural groups in the latter. What potential confounding variables might account for the emergence of outgroup hostility in natural groups?

Wednesday, December 7, 2022

Corrupt third parties undermine trust and prosocial behaviour between people.

Spadaro, G., Molho, C., Van Prooijen, J.-W., et al.
Nat Hum Behav (2022).

Abstract

Corruption is a pervasive phenomenon that affects the quality of institutions, undermines economic growth and exacerbates inequalities around the globe. Here we tested whether perceiving representatives of institutions as corrupt undermines trust and subsequent prosocial behaviour among strangers. We developed an experimental game paradigm modelling representatives as third-party punishers to manipulate or assess corruption and examine its relationship with trust and prosociality (trust behaviour, cooperation and generosity). In a sequential dyadic die-rolling task, the participants observed the dishonest behaviour of a target who would subsequently serve as a third-party punisher in a trust game (Study 1a, N = 540), in a prisoner’s dilemma (Study 1b, N = 503) and in dictator games (Studies 2–4, N = 765, pre-registered). Across these five studies, perceiving a third party as corrupt undermined interpersonal trust and, in turn, prosocial behaviour. These findings contribute to our understanding of the critical role that representatives of institutions play in shaping cooperative relationships in modern societies.
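
As a rough illustration of how the die-rolling phase can signal corruption, here is a minimal Python sketch: the target privately rolls a die, reports an outcome, and an observer flags reporting patterns that are implausibly high. The always-report-six strategy and the flagging threshold are assumptions made for illustration, not the authors' procedure.

```python
# Illustrative sketch of the sequential dyadic die-rolling logic: over-reporting
# pays, so implausibly high reports reveal dishonesty to an observer.
# The strategy and threshold below are assumptions, not the study's implementation.
import random

def roll_and_report(dishonest: bool) -> tuple[int, int]:
    """Return (actual roll, reported roll) for one trial."""
    actual = random.randint(1, 6)
    reported = 6 if dishonest else actual   # a dishonest target always claims the top outcome
    return actual, reported

def looks_corrupt(reports: list[int]) -> bool:
    """Crude heuristic: the fair expectation of a die report is 3.5,
    so a much higher mean suggests the target is inflating rolls."""
    return sum(reports) / len(reports) > 4.5

random.seed(1)
reports = [roll_and_report(dishonest=True)[1] for _ in range(20)]
print("Target perceived as corrupt:", looks_corrupt(reports))
```

In the studies, this perception then carried over to how much participants trusted, cooperated with, or gave to strangers whom the same target would later oversee as a third-party punisher.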

Discussion

Considerable research in various scientific disciplines has addressed the intricate associations between the degree to which institutions are corrupt and the extent to which people trust one another and build cooperative relations. One perspective suggests that the success of institutions is rooted in interpersonal processes such as trust. Another perspective assumes a top-down process, suggesting that the functioning of institutions serves as a basis to promote and sustain interpersonal trust. However, as far as we know, this latter claim has not been tested in experimental settings.

In the present research, we provided an initial test of a top-down perspective, examining the role of a corrupt versus honest institutional representative, here operationalized as a third-party observer with the power to regulate interaction through punishment. To do so, we revisited the sequential dyadic die-rolling paradigm where the participants could learn whether the third party was corrupt or not via second-hand learning or via first-hand experience. Across five studies (N = 1,808), we found support for the central hypothesis guiding this research: perceiving third parties as corrupt is associated with a decline in interpersonal trust, and subsequent prosocial behaviour, towards strangers. This result was robust across a broad set of economic games and designs.

Friday, February 4, 2022

Latent motives guide structure learning during adaptive social choice

van Baar, J.M., Nassar, M.R., Deng, W. et al.
Nat Hum Behav (2021). 
https://doi.org/10.1038/s41562-021-01207-4

Abstract

Predicting the behaviour of others is an essential part of social cognition. Despite its ubiquity, social prediction poses a poorly understood generalization problem: we cannot assume that others will repeat past behaviour in new settings or that their future actions are entirely unrelated to the past. We demonstrate that humans solve this challenge using a structure learning mechanism that uncovers other people’s latent, unobservable motives, such as greed and risk aversion. In four studies, participants (N = 501) predicted other players’ decisions across four economic games, each with different social tensions (for example, Prisoner’s Dilemma and Stag Hunt). Participants achieved accurate social prediction by learning the stable motivational structure underlying a player’s changing actions across games. This motive-based abstraction enabled participants to attend to information diagnostic of the player’s next move and disregard irrelevant contextual cues. Participants who successfully learned another’s motives were more strategic in a subsequent competitive interaction with that player in entirely new contexts, reflecting that social structure learning supports adaptive social behaviour.

Significance statement

A hallmark of human cognition is being able to predict the behavior of others. How do we achieve social prediction given that we routinely encounter others in a dizzying array of social situations? We find people achieve accurate social prediction by inferring another’s hidden motives—motives that do not necessarily have a one-to-one correspondence with observable behaviors. Participants were able to infer another’s motives using a structure learning mechanism that enabled generalization. Individuals used what they learned about others in one setting to predict their actions in an entirely new setting. This cognitive process can explain a wealth of social behaviors, ranging from strategic economic decisions to stereotyping and racial bias.

From the Discussion

How do people construct and apply abstracted mental models of others’ motives? Our data suggest that attention plays a key role in guiding this process. Attention is a fundamental cognitive mechanism as it affords optimal access to behaviorally relevant information with limited processing capacity. Our findings show how attention supports social prediction. In the Social Prediction Game, as in everyday social interactions, there were multiple cues that could be predictive of another’s behavior, from the player payoffs S and T to the order of the games or even the initials of the player. Structure learning allowed participants to disregard superficial cues and attend to information relevant to the players’ latent motives. Although this process facilitated accurate social prediction with limited effort if the inferred motives were correct, incorrect structure learning directed attention counterproductively to irrelevant information. For example, participants who did not consider risk aversion failed to shift their attention to the sucker’s payoff (S) during the Pessimist block and instead kept looking at the temptation to defect (T), thereby missing out on information predictive of the player’s choices. This suggests that what we can learn about other people is limited by our expectations.
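
The role of the S and T payoffs can be made concrete with a toy decision rule per motive: a Python sketch in which a “greedy” player attends to the temptation payoff T and a “risk-averse” player guards against the sucker’s payoff S. This is a simplified caricature for illustration, not the computational model estimated in the paper.

```python
# Toy illustration of motive-based prediction in 2x2 games defined by the
# standard payoffs R (mutual cooperation), S (sucker), T (temptation),
# P (mutual defection). The decision rules are deliberate simplifications.
from dataclasses import dataclass

@dataclass
class Game:
    R: float
    S: float
    T: float
    P: float

def predict_choice(game: Game, motive: str) -> str:
    if motive == "greedy":
        # Greed: defect whenever the temptation payoff beats mutual cooperation.
        return "defect" if game.T > game.R else "cooperate"
    if motive == "risk_averse":
        # Risk aversion: defect whenever cooperating risks the worst outcome.
        return "defect" if game.S < game.P else "cooperate"
    return "cooperate"   # default: unconditional cooperator

prisoners_dilemma = Game(R=3, S=0, T=5, P=1)
stag_hunt = Game(R=4, S=0, T=3, P=2)
for motive in ("greedy", "risk_averse"):
    print(motive,
          "| PD:", predict_choice(prisoners_dilemma, motive),
          "| Stag Hunt:", predict_choice(stag_hunt, motive))
```

With these toy rules the two motives only come apart in the Stag Hunt, which is why a predictor who attends to T instead of S misses exactly the information that is diagnostic of a risk-averse player’s next move.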

Thursday, November 14, 2019

Cooperation and Learning in Unfamiliar Situations

McAuliffe, W. H. B., Burton-Chellew, M. N., & McCullough, M. E. (2019).
Current Directions in Psychological Science, 28(5), 436–440.
https://doi.org/10.1177/0963721419848673

Abstract

Human social life is rife with uncertainty. In any given encounter, one can wonder whether cooperation will generate future benefits. Many people appear to resolve this dilemma by initially cooperating, perhaps because (a) encounters in everyday life often have future consequences, and (b) the costs of alienating oneself from long-term social partners often outweighed the short-term benefits of acting selfishly over our evolutionary history. However, because cooperating with other people does not always advance self-interest, people might also learn to withhold cooperation in certain situations. Here, we review evidence for two ideas: that people (a) initially cooperate or not depending on the incentives that are typically available in their daily lives and (b) also learn through experience to adjust their cooperation on the basis of the incentives of unfamiliar situations. We compare these claims with the widespread view that anonymously helping strangers in laboratory settings is motivated by altruistic desires. We conclude that the evidence is more consistent with the idea that people stop cooperating in unfamiliar situations because they learn that it does not help them, either financially or through social approval.

Conclusion

Experimental economists have long emphasized the role of learning in social decision-making (e.g., Binmore, 1999). However, cooperation researchers have only recently considered how people’s past social interactions shape their expectations in novel social situations. An important lesson from the research reviewed here is that people’s behavior in any single situation is not necessarily a direct read-out of how selfish or altruistic they are, especially if the situation’s incentives differ from what they normally encounter in everyday life.

Saturday, July 28, 2018

Costs, needs, and integration efforts shape helping behavior toward refugees

Robert Böhm, Maik M. P. Theelen, Hannes Rusch, and Paul A. M. Van Lange
PNAS, June 25, 2018, 201805601; published ahead of print June 25, 2018

Abstract

Recent political instabilities and conflicts around the world have drastically increased the number of people seeking refuge. The challenges associated with the large number of arriving refugees have revealed a deep divide among the citizens of host countries: one group welcomes refugees, whereas another rejects them. Our research aim is to identify factors that help us understand host citizens’ (un)willingness to help refugees. We devise an economic game that captures the basic structural properties of the refugee situation. We use it to investigate both economic and psychological determinants of citizens’ prosocial behavior toward refugees. In three controlled laboratory studies, we find that helping refugees becomes less likely when it is individually costly to the citizens. At the same time, helping becomes more likely with the refugees’ neediness: helping increases when it prevents a loss rather than generates a gain for the refugees. Moreover, particularly citizens with higher degrees of prosocial orientation are willing to provide help at a personal cost. When refugees have to exert a minimum level of effort to be eligible for support by the citizens, these mandatory “integration efforts” further increase prosocial citizens’ willingness to help. Our results underscore that economic factors play a key role in shaping individual refugee helping behavior but also show that psychological factors modulate how individuals respond to them. Moreover, our economic game is a useful complement to correlational survey measures and can be used for pretesting policy measures aimed at promoting prosocial behavior toward refugees.
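
The decision structure the game captures can be sketched schematically: a citizen chooses whether to pay a personal cost so that a refugee either gains an amount or avoids losing it, possibly conditional on a demonstrated integration effort. The weights and the simple threshold rule below are illustrative assumptions, not the parameters of the authors' game.

```python
# Schematic sketch of the helping decision described in the abstract.
# All numeric weights and the threshold rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HelpingDecision:
    cost_to_citizen: float     # what helping costs the citizen personally
    benefit_to_refugee: float  # size of the refugee's gain or avoided loss
    prevents_loss: bool        # True: helping averts a loss; False: it creates a gain
    effort_shown: bool         # whether a mandatory "integration effort" was exerted

def will_help(d: HelpingDecision, prosociality: float) -> bool:
    """Toy rule: weigh the refugee's outcome by the citizen's prosocial
    orientation, with extra weight for loss prevention and shown effort."""
    weight = prosociality
    if d.prevents_loss:
        weight *= 1.5   # neediness: an averted loss looms larger than an equal gain
    if d.effort_shown:
        weight *= 1.2   # integration effort further raises prosocial citizens' willingness
    return weight * d.benefit_to_refugee > d.cost_to_citizen

scenario = HelpingDecision(cost_to_citizen=5, benefit_to_refugee=10,
                           prevents_loss=True, effort_shown=True)
print(will_help(scenario, prosociality=0.4))   # True: a fairly prosocial citizen helps
print(will_help(scenario, prosociality=0.2))   # False: a less prosocial citizen does not
```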


Wednesday, December 28, 2016

Oxytocin modulates third-party sanctioning of selfish and generous behavior within and between groups

Katie Daughters, Antony S.R. Manstead, Femke S. Ten Velden, Carsten K.W. De Dreu
Psychoneuroendocrinology, Available online 3 December 2016

Abstract

Human groups function because members trust each other and reciprocate cooperative contributions, and reward others’ cooperation and punish their non-cooperation. Here we examined the possibility that such third-party punishment and reward of others’ trust and reciprocation is modulated by oxytocin, a neuropeptide generally involved in social bonding and in-group (but not out-group) serving behavior. Healthy males and females (N = 100) self-administered a placebo or 24 IU of oxytocin in a randomized, double-blind, between-subjects design. Participants were asked to indicate, in incentivized and costly decisions, their level of reward or punishment for in-group (out-group) investors donating generously or fairly to in-group (out-group) trustees, who back-transferred generously, fairly or selfishly. Punishment (reward) was higher for selfish (generous) investments and back-transfers when (i) investors were in-group rather than out-group, and (ii) trustees were in-group rather than out-group, especially when (iii) participants received oxytocin rather than placebo. It follows, first, that oxytocin leads individuals to ignore out-groups as long as out-group behavior is not relevant to the in-group and, second, that oxytocin contributes to creating and enforcing in-group norms of cooperation and trust.
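
The design crosses several factors; the short sketch below simply enumerates the cells as read from the abstract (drug × investor group × trustee group × investment × back-transfer). It is one reading of the design, not the authors' materials.

```python
# Schematic enumeration of the design factors named in the abstract.
# In each cell, the third-party participant chooses a costly level of
# reward or punishment for the investor and trustee behavior observed.
from itertools import product

drugs          = ["oxytocin", "placebo"]      # between-subjects
investor_group = ["in-group", "out-group"]
trustee_group  = ["in-group", "out-group"]
investment     = ["generous", "fair"]
back_transfer  = ["generous", "fair", "selfish"]

cells = list(product(drugs, investor_group, trustee_group, investment, back_transfer))
print(len(cells), "cells, e.g.", cells[0])   # 48 cells in total
```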
