Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cooperation.

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, and about how sexbots will alter intimate human relations are the subjects of active research programs. Many more as-yet-unasked questions will be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding imaginative speculation about the possibilities of AI in a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., & Tepper, S. J.
Nat Rev Psychol (2023).

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  
For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Friday, May 19, 2023

What’s wrong with virtue signaling?

Hill, J., & Fanciullo, J.
Synthese 201, 117 (2023).

Abstract

A novel account of virtue signaling and what makes it bad has recently been offered by Justin Tosi and Brandon Warmke. Despite plausibly vindicating the folk's conception of virtue signaling as a bad thing, their account has recently been attacked by both Neil Levy and Evan Westra. According to Levy and Westra, virtue signaling actually supports the aims and progress of public moral discourse. In this paper, we rebut these recent defenses of virtue signaling. We suggest that virtue signaling only supports the aims of public moral discourse to the extent it is an instance of a more general phenomenon that we call norm signaling. We then argue that, if anything, virtue signaling will undermine the quality of public moral discourse by undermining the evidence we typically rely on from the testimony and norm signaling of others. Thus, we conclude, not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.

Conclusion

In this paper, we have challenged two recent defenses of virtue signaling. Whereas Levy ascribes a number of good features to virtue signaling—its providing higher-order evidence for the truth of certain moral judgments, its helping us delineate groups of reliable moral cooperators, and its not involving any hypocrisy on the part of its subject—it seems these good features are ascribable to virtue signaling ultimately and only because they are good features of norm signaling, and virtue signaling entails norm signaling. Similarly, whereas Westra suggests that virtue signaling uniquely benefits public moral discourse by supporting moral progress in a way that mere norm signaling does not, it seems virtue signaling also uniquely harms public moral discourse by supporting moral regression in a way that mere norm signaling does not. It therefore seems that in each case, to the extent it differs from norm signaling, virtue signaling simply isn’t needed.

Moreover, we have suggested that, if anything, virtue signaling will undermine the higher order evidence we typically can and should rely on from the testimony of others. Virtue signaling essentially involves a motivation that aims at affecting public moral discourse but that does not aim at the truth. When virtue signaling is rampant—when we are aware that this ulterior motive is common among our peers—we should give less weight to the higher-order evidence provided by the testimony of others than we otherwise would, on pain of double counting evidence and falling for unwarranted confidence. We conclude, therefore, that not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation. 

Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Knowablemagazine.org
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.
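The cost-benefit logic in this definition can be made concrete with a standard linear public goods game. A minimal sketch, in which the endowment and multiplier values are illustrative assumptions rather than figures from the article:

```python
# Linear public goods game: each player keeps an endowment minus their
# contribution, and every contribution is multiplied and shared equally.
def payoff(my_contribution, others_contributions, endowment=10, multiplier=1.6):
    pool = (my_contribution + sum(others_contributions)) * multiplier
    share = pool / (1 + len(others_contributions))
    return endowment - my_contribution + share

others = [10, 10, 10]            # three fully cooperating partners
cooperate = payoff(10, others)   # pay the personal cost
freeload = payoff(0, others)     # enjoy the shared benefit for free

# Because the multiplier is below the group size, freeloading beats
# cooperating no matter what the others do, even though a group of
# freeloaders ends up worse off than a group of cooperators.
assert freeload > cooperate
```

With these assumed numbers the cooperator earns 16 and the freeloader 22, while four universal freeloaders would each earn only 10: the individual incentive and the collective interest point in opposite directions, which is exactly the puzzle the excerpt describes.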

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.

(cut)

Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Thursday, March 23, 2023

Are there really so many moral emotions? Carving morality at its functional joints

Fitouchi L., André J., & Baumard N.
To appear in L. Al-Shawaf & T. K. Shackelford (Eds.)
The Oxford Handbook of Evolution and the Emotions.
New York: Oxford University Press.

Abstract

In recent decades, a large body of work has highlighted the importance of emotional processes in moral cognition. Since then, a heterogeneous bundle of emotions as varied as anger, guilt, shame, contempt, empathy, gratitude, and disgust have been proposed to play an essential role in moral psychology.  However, the inclusion of these emotions in the moral domain often lacks a clear functional rationale, generating conflations between merely social and properly moral emotions. Here, we build on (i) evolutionary theories of morality as an adaptation for attracting others’ cooperative investments, and on (ii) specifications of the distinctive form and content of moral cognitive representations. On this basis, we argue that only indignation (“moral anger”) and guilt can be rigorously characterized as moral emotions, operating on distinctively moral representations. Indignation functions to reclaim benefits to which one is morally entitled, without exceeding the limits of justice. Guilt functions to motivate individuals to compensate their violations of moral contracts. By contrast, other proposed moral emotions (e.g. empathy, shame, disgust) appear only superficially associated with moral cognitive contents and adaptive challenges. Shame doesn’t track, by design, the respect of moral obligations, but rather social valuation, the two being not necessarily aligned. Empathy functions to motivate prosocial behavior between interdependent individuals, independently of, and sometimes even in contradiction with the prescriptions of moral intuitions. While disgust is often hypothesized to have acquired a moral role beyond its pathogen-avoidance function, we argue that both evolutionary rationales and psychological evidence for this claim remain inconclusive for now.

Conclusion

In this chapter, we have suggested that a specification of the form and function of moral representations leads to a clearer picture of moral emotions. In particular, it enables a principled distinction between moral and non-moral emotions, based on the particular types of cognitive representations they process. Moral representations have a specific content: they represent a precise quantity of benefits that cooperative partners owe each other, a legitimate allocation of costs and benefits that ought to be, irrespective of whether it is achieved by people’s actual behaviors. Humans intuit that they have a duty not to betray their coalition, that innocent people do not deserve to be harmed, that their partner has a right not to be cheated on. Moral emotions can thus be defined as superordinate programs orchestrating cognition, physiology and behavior in accordance with the specific information encoded in these moral representations.

On this basis, indignation and guilt appear as prototypical moral emotions. Indignation (“moral anger”) is activated when one receives fewer benefits than one deserves, and recruits bargaining mechanisms to enforce the violated moral contract. Guilt, symmetrically, is sensitive to one’s failure to honor one’s obligations toward others, and motivates compensation to provide them the missing benefits they deserve. By contrast, often-proposed “moral” emotions – shame, empathy, disgust – seem not to function to compute distinctively moral representations of cooperative obligations, but serve other, non-moral functions – social status management, interdependence, and pathogen avoidance (Figure 2).

Tuesday, February 14, 2023

Helping the ingroup versus harming the outgroup: Evidence from morality-based groups

Grigoryan, L., Seo, S., Simunovic, D., & Hoffman, W.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104436

Abstract

The discrepancy between ingroup favoritism and outgroup hostility is well established in social psychology. Under which conditions does “ingroup love” turn into “outgroup hate”? Studies with natural groups suggest that when group membership is based on (dis)similarity of moral beliefs, people are willing not only to help the ingroup, but also to harm the outgroup. The key limitation of these studies is that the use of natural groups confounds the effects of shared morality with the history of intergroup relations. We tested the effect of morality-based group membership on intergroup behavior using artificial groups that help disentangle these effects. We used the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game, which differentiates between behavioral options of weak parochialism (helping the ingroup), strong parochialism (harming the outgroup), universal cooperation (helping both groups), and egoism (profiting individually). In three preregistered experiments, we find that morality-based groups exhibit less egoism and more universal cooperation than non-morality-based groups. We also find some evidence of stronger ingroup favoritism in morality-based groups, but no evidence of stronger outgroup hostility. Stronger ingroup favoritism in morality-based groups is driven by expectations from the ingroup, but not the outgroup. These findings contradict earlier evidence from natural groups and suggest that (dis)similarity of moral beliefs is not sufficient to cross the boundary between “ingroup love” and “outgroup hate”.

General discussion

When does “ingroup love” turn into “outgroup hate”? Previous studies conducted on natural groups suggest that centrality of morality to the group’s identity is one such condition: morality-based groups showed more hostility towards outgroups than non-morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). We set out to test this hypothesis in a minimal group setting, using the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game. Across three pre-registered studies, we found no evidence that morality-based groups show more hostility towards outgroups than non-morality-based groups. Instead, morality-based groups exhibited less egoism and more universal cooperation (helping both the ingroup and the outgroup) than non-morality-based groups. This finding is consistent with earlier research showing that salience of morality makes people more cooperative (Capraro et al., 2019). Importantly, our morality manipulation was not specific to any pro-cooperation moral norm. Simply asking participants to think about the criteria they use to judge what is right and what is wrong was enough to increase universal cooperation.

Our findings are inconsistent with research showing stronger outgroup hostility in morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). The key difference between the set of studies presented here and the earlier studies that find outgroup hostility in morality-based groups is the use of natural groups in the latter. What potential confounding variables might account for the emergence of outgroup hostility in natural groups?

Tuesday, January 24, 2023

On the value of modesty: How signals of status undermine cooperation

Srna, S., Barasch, A., & Small, D. A. (2022). 
Journal of Personality and Social Psychology, 
123(4), 676–692.
https://doi.org/10.1037/pspa0000303

Abstract

The widespread demand for luxury is best understood by the social advantages of signaling status (i.e., conspicuous consumption; Veblen, 1899). In the present research, we examine the limits of this perspective by studying the implications of status signaling for cooperation. Cooperation is principally about caring for others, which is fundamentally at odds with the self-promotional nature of signaling status. Across behaviorally consequential Prisoner’s Dilemma (PD) games and naturalistic scenario studies, we investigate both sides of the relationship between signaling and cooperation: (a) how people respond to others who signal status, as well as (b) the strategic choices people make about whether to signal status. In each case, we find that people recognize the relative advantage of modesty (i.e., the inverse of signaling status) and behave strategically to enable cooperation. That is, people are less likely to cooperate with partners who signal status compared to those who are modest (Studies 1 and 2), and more likely to select a modest person when cooperation is desirable (Study 3). These behaviors are consistent with inferences that status signalers are less prosocial and less prone to cooperate. Importantly, people also refrain from signaling status themselves when it is strategically beneficial to appear cooperative (Studies 4–6). Together, our findings contribute to a better understanding of the conditions under which the reputational costs of conspicuous consumption outweigh its benefits, helping integrate theoretical perspectives on strategic interpersonal dynamics, cooperation, and status signaling.

From the General Discussion

Implications

The high demand for luxury goods is typically explained by the social advantages of status signaling (Veblen, 1899). We do not dispute that status signaling is beneficial in many contexts. Indeed, we find that status signaling helps a person gain acceptance into a group that is seeking competitive members (see Supplemental Study 1). However, our research suggests a more nuanced view regarding the social effects of status signaling. Specifically, the findings caution against using this strategy indiscriminately.  Individuals should consider how important it is for them to appear prosocial, and strategically choose modesty when the goal to achieve cooperation is more important than other social goals (e.g., to appear wealthy or successful).

These strategic concerns are particularly important in the era of social media, where people can easily broadcast their consumption choices to large audiences. Many people show off their status through posts on Instagram, Twitter, and Facebook (e.g., Sekhon et al., 2015). Such posts may be beneficial for communicating one’s wealth and status, but as we have shown, they can also have negative effects. A boastful post could wind up on social media accounts such as “Rich Kids of the Internet,” which highlights extreme acts of status signaling and has over 350,000 followers and countless angry comments (Hoffower, 2020). Celebrities and other public figures also risk their reputations when they post about their status. For instance, when Louise Linton, wife of the U.S. Secretary of the Treasury, posted a photo of herself from an official government visit with many luxury-branded hashtags, she was vilified on social media and in the press (Calfas, 2017).

Monday, January 16, 2023

The origins of human prosociality: Cultural group selection in the workplace and the laboratory

Francois, P., Fujiwara, T., & van Ypersele, T. (2018).
Science Advances, 4(9).
https://doi.org/10.1126/sciadv.aat2201

Abstract

Human prosociality toward non-kin is ubiquitous and almost unique in the animal kingdom. It remains poorly understood, although a proliferation of theories has arisen to explain it. We present evidence from survey data and laboratory treatment of experimental subjects that is consistent with a set of theories based on group-level selection of cultural norms favoring prosociality. In particular, increases in competition increase trust levels of individuals who (i) work in firms facing more competition, (ii) live in states where competition increases, (iii) move to more competitive industries, and (iv) are placed into groups facing higher competition in a laboratory experiment. The findings provide support for cultural group selection as a contributor to human prosociality.

Discussion

There is considerable experimental evidence, referenced earlier, supporting the conclusion that people are conditional cooperators: They condition actions based on their beliefs regarding prevailing norms of behavior. They cooperate if they believe their partners are also likely to do so, and they are unlikely to act cooperatively if they believe that others will not.

The environment in which people interact shapes both the social and economic returns to following cooperative norms. For instance, many aspects of groups within the work environment will determine whether cooperation can be an equilibrium in behavior among group members or whether it is strictly dominated by more selfish actions. Competition across firms can play two distinct roles in affecting this. First, there is a static equilibrium effect, which arises from competition altering rewards from cooperative versus selfish behavior, even without changing the distribution of firms. Competition across firms punishes individual free-riding behavior and rewards cooperative behavior. In the absence of competitive threats, members of groups can readily shirk without serious payoff consequences for their firm. This is not so if a firm faces an existential threat. Less markedly, even if a firm is not close to the brink of survival, more intense market competition renders firm-level payoffs more responsive to the efforts of group members. With intense competition, the deleterious effects of shirking are magnified by large loss of market share, revenues, and, in turn, lower group-level payoffs. Without competition, attendant declines in quality or efficiency arising from poor performance have weaker, and perhaps nonexistent, payoff consequences. These effects on individuals are likely to be small in large firms where any specific worker’s actions are unlikely to be pivotal. However, it is possible that employees overestimate the impact of their actions or instinctively respond to competition with more prosocial attitudes, even in large teams.

Competition across firms does not typically lead to a unique equilibrium in social norms but, if intense enough, can sustain a cooperative group norm. Depending on the setting, multiple different cooperative group equilibria differentiated by the level of costly effort can also be sustained. For example, if individuals are complementary in production, then an individual believing co-workers to all be shirkers and thus unable to produce a viable product will similarly also choose to exert low effort. An equilibrium where no one voluntarily contributes to cooperative tasks is sustained, and such a workplace looks to have noncooperative norms. In contrast, with the same complementary production process, and a workplace where all other workers are believed to be contributing high effort, a single worker will optimally choose to exert high effort as well to ensure viable output. In that case, a cooperative norm is sustained. When payoffs are continuous in both the quality of the product and the intensity of the competition, then the degree of cooperative effort that can be sustained can be continuously increasing in the intensity of market competition across firms. We have formalized this in an economic model that we include in the Supplementary Materials.
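The multiple-equilibria argument above can be illustrated with a toy best-reply calculation. This is a sketch under assumed payoff numbers and an assumed functional form, not the authors' formal model (which appears in their Supplementary Materials):

```python
# Illustrative sketch: with complementary production, a worker's best reply
# matches what co-workers are believed to do, so both "all shirk" and
# "all cooperate" are self-sustaining norms under the same fundamentals.
def best_effort(others_high_effort, competition, effort_cost=2.0):
    """Return 'high' or 'low' effort as a best reply to beliefs about co-workers.

    Output is viable only if everyone works (complementarity); the assumed
    `competition` parameter scales how strongly revenue responds to effort.
    """
    revenue_if_high = 10.0 * competition if others_high_effort else 0.0
    revenue_if_low = 0.0  # one shirker already ruins the complementary product
    return "high" if revenue_if_high - effort_cost > revenue_if_low else "low"

# Believing co-workers shirk makes shirking optimal; believing they cooperate
# makes cooperating optimal: two equilibria, differing only in beliefs.
assert best_effort(others_high_effort=False, competition=1.0) == "low"
assert best_effort(others_high_effort=True, competition=1.0) == "high"
# With weak enough competition, the payoff to effort vanishes and only the
# noncooperative norm survives.
assert best_effort(others_high_effort=True, competition=0.1) == "low"
```

The design choice mirrors the text: competition does not force cooperation, it merely makes the cooperative norm sustainable as one equilibrium among several.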

Competition’s first effect is thus to make it possible, but not necessary, for group-level cooperative norms to arise as equilibria. The literature has shown that there are many other ways to stabilize cooperative norms as equilibria, such as institutional punishment, third-party punishment, or reputations. Cross-group competition may also enhance these other well-studied mechanisms for generating cooperative norm equilibria, but with or without these factors, it has a general effect of tilting the set of equilibria toward those featuring cooperative norms.

Wednesday, January 11, 2023

How neurons, norms, and institutions shape group cooperation

Van Bavel, J. J., Pärnamets, P., Reinero, D. A., 
& Packer, D. (2022, April 7).
https://doi.org/10.1016/bs.aesp.2022.04.004

Abstract

Cooperation occurs at all stages of human life and is necessary for small groups and large-scale societies alike to emerge and thrive. This chapter bridges research in the fields of cognitive neuroscience, neuroeconomics, and social psychology to help understand group cooperation. We present a value-based framework for understanding cooperation, integrating neuroeconomic models of decision-making with psychological and situational variables involved in cooperative behavior, particularly in groups. According to our framework, the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neuro-cognitive processes such as attention, affect, memory, and learning. We describe factors that directly or indirectly shape the value of cooperation decisions, including cultural contexts and social norms, personal and social identity, and intergroup relations. We also highlight the role of economic, social, and cultural institutions in shaping cooperative behavior. We discuss the implications for future research on cooperation.

(cut)

Social Institutions

Trust production is crucial for fostering cooperation (Zucker, 1986). We have already discussed two forms of trust production above: the trust and resulting cooperation that develops from experience with and knowledge about individuals, and trust based on social identities. The third form of trust production is institution-based, in which formal mechanisms or processes are used to foster trust (and that do not rely on personal characteristics, a history of exchange, or identity characteristics). At the societal level, trust-supporting institutions include governments, corporate structures, criminal and civil legal systems, contract law and property rights, insurance, and stock markets. When they function effectively, institutions allow for broader cooperation, helping people extend trust beyond other people they know or know of and, crucially, also beyond the boundaries of their in-groups (Fabbri, 2022; Hruschka & Henrich, 2013; Rothstein & Stolle, 2008; Zucker, 1986). Conversely, when these sorts of structures do not function well, “institutional distrust strips away a basic sense that one is protected from exploitation, thus reducing trust between strangers, which is at the core of functioning societies” (van Prooijen, Spadaro, & Wang, 2022).

When strangers with different cultural backgrounds have to interact, they often lack the interpersonal or group-level trust necessary for cooperation. For instance, reliance on tightly knit social networks, where everyone knows everyone, is often impossible in larger, more diverse environments. Communities can compensate by relying more on group-based trust. For example, banks may loan money primarily within separate kin or ethnic groups (Zucker, 1986). However, the disruption of homogeneous social networks, combined with the increasing need to cooperate across group boundaries, creates incentives to develop and participate in broader sets of institutions. Institutions can facilitate cooperation, and individuals prefer institutions that help regulate interactions and foster trust.

People often seek to build institutions embodying principles, norms, rules, or procedures that foster group-based cooperation. In turn, these institutions shape decisions by altering the value people place on cooperative decisions. One study, for instance, examined these institutional and psychological dynamics over 30 rounds of a public goods game (Gürerk, Irlenbusch & Rockenbach, 2006). Every round had three stages. First, participants chose whether they wanted to play that round with or without a “sanctioning institution” that would provide a means of rewarding or punishing other players based on their behavior in the game. Second, they played the public goods game with (and only with) other participants who had selected the same institutional structure for that round. After making their decisions (to contribute to the common pool), they then saw how much everyone else in their institutional context had contributed. Third, participants who had opted to play the round with a sanctioning institution could choose, for a price, to punish or reward other players.
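The three-stage round just described can be sketched in code. This is a schematic reading of the design, not the study's implementation: the endowment, multiplier, sanction fee, fine, and the "punish anyone below the mean" rule are all illustrative assumptions.

```python
# One round of a public goods game with institution choice, loosely following
# the three stages described in Gürerk et al. (2006); parameters are assumed.
def play_round(players, endowment=20, multiplier=1.6, sanction_fee=1, fine=3):
    # Stage 1: each player has chosen an institution for this round.
    groups = {"sanctioning": [], "free": []}
    for p in players:
        groups[p["institution"]].append(p)

    # Stage 2: a public goods game played only with same-institution members.
    for members in groups.values():
        pool = sum(p["contribution"] for p in members) * multiplier
        for p in members:
            p["payoff"] = endowment - p["contribution"] + pool / max(len(members), 1)

    # Stage 3: in the sanctioning institution, players may pay to punish
    # low contributors (here, a simple "punish anyone below the mean" rule).
    members = groups["sanctioning"]
    if members:
        mean_c = sum(p["contribution"] for p in members) / len(members)
        for punisher in members:
            for target in members:
                if target is not punisher and target["contribution"] < mean_c:
                    punisher["payoff"] -= sanction_fee
                    target["payoff"] -= fine
    return players

players = [
    {"institution": "sanctioning", "contribution": 20},
    {"institution": "sanctioning", "contribution": 0},
    {"institution": "free", "contribution": 5},
]
play_round(players)
```

Under these assumed numbers the punished free-rider still out-earns the cooperator in a single round; the study's point is that, over repeated rounds, players migrate toward the sanctioning institution because it makes high contributions sustainable.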

Wednesday, January 4, 2023

How social identity tunes moral cognition

Van Bavel, J. J., Packer, D.,  et al.
PsyArXiv.com (2022, November 18). 
https://doi.org/10.31234/osf.io/9efsb

Abstract

In this chapter, we move beyond the treatment of intuition and reason as competing systems and outline how social contexts, and especially social identities, allow people to flexibly “tune” their cognitive reactions to moral contexts—a process we refer to as “moral tuning.” Collective identities—identities based on shared group memberships—significantly influence judgments and decisions of many kinds, including in the moral domain. We explain why social identities influence all aspects of moral cognition, including processes traditionally classified as intuition and reasoning. We then explain how social identities tune preferences and goals, expectations, and which outcomes people care about. Finally, we propose directions for future research in moral psychology.

Social Identities Tune Preferences and Goals

Morally-relevant situations often involve choices in which the interests of different parties are in tension. Moral transgressions typically involve an agent putting their own desires ahead of the interests, needs, or rights of others, thus causing them harm (e.g., Gray et al., 2012), whereas acts worthy of moral praise usually involve an agent sacrificing self-interest for the sake of someone else’s interests or the greater good. Value-computation frameworks of cooperation model how much people weigh the interests of different parties (e.g., their own versus others’) in terms of social preferences (see Van Bavel et al., 2022). Social preference parameters can, for example, capture individual differences in how much people prioritize their own outcomes over others’ (e.g., pro-selfs versus pro-socials as indexed by social value orientation; Balliet et al., 2009). These preferences, along with social norms, inform the computations that underlie decisions to engage in selfish or pro-social behavior (Hackel, Wills & Van Bavel, 2020).

We argue that social identity also influences social preferences, such that people tend to care more about outcomes incurred by in-group than out-group members (Tajfel & Turner, 1979; Van Bavel & Packer, 2021). For instance, highly identified group members appear to experience vicarious reward when they observe in-group (but not out-group) members experiencing positive outcomes, as indexed by activity in the ventral striatum, a brain region implicated in hedonic reward (Hackel et al., 2017). Intergroup competition may exacerbate differences in concern for in-group versus out-group targets, causing people to feel empathy when in-group targets experience negative outcomes, but schadenfreude (pleasure in others’ pain) when out-group members experience these same events (Cikara et al., 2014). Shared social identities can also lead people to put collective interests ahead of their own individual interests in social dilemmas. For instance, making collective identities salient causes selfish individuals to contribute more to their group than when these same people were reminded of their individual self (De Cremer & Van Vugt, 1999). This shift in behavior was not necessarily because they were less selfish, but rather because their sense of self had shifted from the individual to the collective level.

(cut)

Conclusion

For centuries, philosophers and scientists have debated the role of emotional intuition and reason in moral judgment. Thanks to theoretical and methodological developments over the past few decades, we believe it is time to move beyond these debates. We argue that social identity can tune the intuitions and reasoning processes that underlie moral cognition (Van Bavel et al., 2015). Extensive research has found that social identities have a significant influence on social and moral judgment and decision-making (Oakes et al., 1994; Van Bavel & Packer, 2021). This approach offers an important complement to other theories of moral psychology and suggests a powerful way to shift moral judgments and decisions—by changing identities and norms, rather than hearts and minds.

Monday, December 19, 2022

Socially evaluative contexts facilitate mentalizing

Woo, B. M., Tan, E., Yuen, F. L, & Hamlin, J. K.
Trends in Cognitive Sciences, 2022

Abstract

Our ability to understand others’ minds stands at the foundation of human learning, communication, cooperation, and social life more broadly. Although humans’ ability to mentalize has been well-studied throughout the cognitive sciences, little attention has been paid to whether and how mentalizing differs across contexts. Classic developmental studies have examined mentalizing within minimally social contexts, in which a single agent seeks a neutral inanimate object. Such object-directed acts may be common, but they are typically consequential only to the object-seeking agent themselves. Here, we review a host of indirect evidence suggesting that contexts providing the opportunity to evaluate prospective social partners may facilitate mentalizing across development. Our article calls on cognitive scientists to study mentalizing in contexts where it counts.

Highlights

Cognitive scientists have long studied the origins of our ability to mentalize. Remarkably little is known, however, about whether there are particular contexts where humans are more likely to mentalize.
We propose that mentalizing is facilitated in contexts where others’ actions shed light on their status as a good or bad social partner. Mentalizing within socially evaluative contexts supports effective partner choice.

Our proposal is based on three lines of evidence. First, infants leverage their understanding of others’ mental states to evaluate others’ social actions. Second, infants, children, and adults demonstrate enhanced mentalizing within socially evaluative contexts. Third, infants, children, and adults are especially likely to mentalize when agents cause negative outcomes.  Direct tests of this proposal will contribute to a more comprehensive understanding of human mentalizing.

Concluding remarks

Mental state reasoning is not only used for social evaluation, but may be facilitated, and even overactivated, when humans engage in social evaluation. Human infants begin mentalizing in socially evaluative contexts as soon as they do so in nonevaluative contexts, if not earlier, and mental state representations across human development may be stronger in socially evaluative contexts, particularly when there are negative outcomes. This opinion article supports the possibility that mentalizing is privileged within socially evaluative contexts, perhaps due to its key role in facilitating the selection of appropriate cooperative partners. Effective partner choice may provide a strong foundation upon which humans’ intensely interdependent and cooperative nature can flourish.

The work cited herein is highly suggestive, and more work is clearly needed to further explore this possibility (see Outstanding questions). We have mostly reviewed and compared data across experiments that have studied mentalizing in either socially evaluative or nonevaluative contexts, pulling from a wide range of ages and methods; to our knowledge, no research has directly compared both socially evaluative and nonevaluative contexts within the same experiment.  Experiments using stringent minimal contrast designs would provide stronger tests of our central claims. In addition to such experiments, in the same way that meta-analyses have explored other predictors of mentalizing, we call on future researchers to conduct meta-analyses of findings that come from socially evaluative and nonevaluative contexts. We look forward to such research, which together will move us towards a more comprehensive understanding of humans’ early mentalizing.

Sunday, December 11, 2022

Strategic Behavior with Tight, Loose, and Polarized Norms

Dimant, E., Gelfand, M. J., Hochleitner, A., 
& Sonderegger, S. (2022).
SSRN.com

Abstract

Descriptive norms – the behavior of other individuals in one’s reference group – play a key role in shaping individual decisions. When characterizing the behavior of others, a standard approach in the literature is to focus on average behavior. In this paper, we argue both theoretically and empirically that not only averages, but the shape of the whole distribution of behavior can play a crucial role in how people react to descriptive norms. Using a representative sample of the U.S. population, we experimentally investigate how individuals react to strategic environments that are characterized by different distributions of behavior, focusing on the distinction between tight (i.e., characterized by low behavioral variance), loose (i.e., characterized by high behavioral variance), and polarized (i.e., characterized by u-shaped behavior) environments. We find that individuals indeed strongly respond to differences in the variance and shape of the descriptive norm they are facing: loose norms generate greater behavioral variance and polarization generates polarized responses. In polarized environments, most individuals prefer extreme actions that expose them to considerable strategic risk to intermediate actions that would minimize such risk. Importantly, we also find that, in polarized and loose environments, personal traits and values play a larger role in determining actual behavior. This provides important insights into how individuals navigate environments that contain strategic uncertainty.

Conclusion

In this study, we investigate how individuals respond to differences in the observed distribution of others’ behavior. In particular, we test how different distributions of cooperative behavior affect an individual’s own willingness to cooperate. We first develop a theoretical framework based on the assumption that individuals are conditional cooperators who interpret differences in the observed distribution as a shift in strategic uncertainty. We then test our framework empirically in the context of a public goods game (PGG). To do so, we measure behavior in the PGG both before and after participants receive information about the distribution from which a co-player’s contribution is drawn. We thereby vary both the mean (high/low) and the variance/shape (high variance / low variance / u-shaped) of the observed distribution.

Our results confirm previous research showing that information about average behavior has an important effect on subsequent decisions. Individuals contribute significantly more in high-mean conditions than in low-mean conditions. However, the mean is not the only important feature of the distribution. In line with our theoretical framework, we find that looser environments generate a larger variance in individual responses compared to tighter environments. In other words, “tight breeds tight” and “loose breeds loose”. Moreover, we find that, when confronted with a polarized (u-shaped) distribution, participants’ responses are polarized as well. A possible interpretation of these results is that people react heterogeneously to situations characterized by high strategic uncertainty, while they react rather similarly when strategic uncertainty is low. Finally, we find that personal values have higher predictive power for contribution decisions in loose and polarized environments than in tight ones. This suggests that an individual’s reaction to strategic uncertainty may be mediated by their personal values. This in turn has practical implications for behavioral change interventions: when intervening in contexts with loose or polarized empirical norms, it may be more fruitful to focus on personal values, whereas in contexts with tight empirical norms it may be more fruitful to focus on the behaviors of others.
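The three environments can be illustrated numerically. Below is a toy generator, not the authors' stimuli, for tight (low-variance), loose (high-variance), and polarized (u-shaped) contribution distributions on a hypothetical 0-20 contribution scale:

```python
import random
import statistics

def descriptive_norm(shape, mean=10, n=1000, rng=random):
    """Draw n co-player contributions (0-20 scale) from an
    illustrative distribution of the named shape."""
    if shape == "tight":        # low behavioral variance around the mean
        return [min(20, max(0, round(rng.gauss(mean, 1)))) for _ in range(n)]
    if shape == "loose":        # high behavioral variance, same mean
        return [min(20, max(0, round(rng.gauss(mean, 6)))) for _ in range(n)]
    if shape == "polarized":    # u-shaped: mass piled at both extremes
        return [rng.choice((0, 20)) for _ in range(n)]
    raise ValueError(shape)
```

All three shapes can share the same mean while differing sharply in variance, which is the paper's point: the mean alone underdescribes the descriptive norm.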

Overall, we show that when studying empirical norms it is crucial to not only consider the average behavior, but the whole distribution. Doing so provides substantial analytical richness that can form the basis for a better understanding of the different behavioral patterns observed across societies.

Sunday, November 13, 2022

Cross-cultural variation in cooperation: A meta-analysis

Spadaro, G., Graf, C., et al.
JPSP, 123(5), 1024–1088.

Abstract

Impersonal cooperation among strangers enables societies to create valuable public goods, such as infrastructure, public services, and democracy. Several factors have been proposed to explain variation in impersonal cooperation across societies, referring to institutions (e.g., rule of law), religion (e.g., belief in God as a third-party punisher), cultural beliefs (e.g., trust) and values (e.g., collectivism), and ecology (e.g., relational mobility). We tested 17 preregistered hypotheses in a meta-analysis of 1,506 studies of impersonal cooperation in social dilemmas (e.g., the Public Goods Game) conducted across 70 societies (k = 2,271), where people make costly decisions to cooperate among strangers. After controlling for 10 study characteristics that can affect the outcome of studies, we found very little cross-societal variation in impersonal cooperation. Categorizing societies into cultural groups explained no variance in cooperation. Similarly, cultural, ancestral, and linguistic distance between societies explained little variance in cooperation. None of the cross-societal factors hypothesized to relate to impersonal cooperation explained variance in cooperation across societies. We replicated these conclusions when meta-analyzing 514 studies across 41 states and nine regions in the United States (k = 783). Thus, we observed that impersonal cooperation occurred in all societies, and to a similar degree across societies, suggesting that prior research may have overemphasized the magnitude of differences between modern societies in impersonal cooperation. We discuss the discrepancy between theory, past empirical research and the meta-analysis, address a limitation of experimental research on cooperation to study culture, and raise possible directions for future research.

Discussion

Humans cooperate within multiple domains in daily life, such as sharing common pool resources and producing large-scale public goods. Cooperation can be expressed in many ways, including strategies to favor kin (Hamilton, 1964), allies and coalitional members (Balliet et al., 2014; Yamagishi et al., 1999), and it can even occur in interactions among strangers with no known future interactions (Delton et al., 2011; Macy & Skvoretz, 1998). Here, we focused on this latter kind of impersonal cooperation, in which people interact for the first time, have no knowledge of their partner’s reputation, and have no known possibility of future interaction outside the experiment. Impersonal cooperation can enable societies to develop, expand, and compete, impacting wealth and prosperity. Although impersonal cooperation occurs in all modern, industrialized, market-based societies, prior research has documented cross-societal variation in impersonal cooperation (Henrich, Ensminger, et al., 2010; Hermann et al., 2008; Romano et al., 2021). To date, several perspectives have been advanced to explain why and how impersonal cooperation varies across societies.

Friday, November 11, 2022

Moral disciplining: The cognitive and evolutionary foundations of puritanical morality

Fitouchi, L., André, J., & Baumard, N. (2022).
Behavioral and Brain Sciences, 1-71.
doi:10.1017/S0140525X22002047

Abstract

Why do many societies moralize apparently harmless pleasures, such as lust, gluttony, alcohol, drugs, and even music and dance? Why do they erect temperance, asceticism, sobriety, modesty, and piety as cardinal moral virtues? According to existing theories, this puritanical morality cannot be reduced to concerns for harm and fairness: it must emerge from cognitive systems that did not evolve for cooperation (e.g., disgust-based “Purity” concerns). Here, we argue that, despite appearances, puritanical morality is no exception to the cooperative function of moral cognition. It emerges in response to a key feature of cooperation, namely that cooperation is (ultimately) a long-term strategy, requiring (proximately) the self-control of appetites for immediate gratification. Puritanical moralizations condemn behaviors which, although inherently harmless, are perceived as indirectly facilitating uncooperative behaviors, by impairing the self-control required to refrain from cheating. Drinking, drugs, immodest clothing, and unruly music and dance, are condemned as stimulating short-term impulses, thus facilitating uncooperative behaviors (e.g., violence, adultery, free-riding). Overindulgence in harmless bodily pleasures (e.g., masturbation, gluttony) is perceived as making people slave to their urges, thus altering abilities to resist future antisocial temptations. Daily self-discipline, ascetic temperance, and pious ritual observance are perceived as cultivating the self-control required to honor prosocial obligations. We review psychological, historical, and ethnographic evidence supporting this account. We use this theory to explain the fall of puritanism in WEIRD societies, and discuss the cultural evolution of puritanical norms. Explaining puritanical norms does not require adding mechanisms unrelated to cooperation in our models of the moral mind.

Conclusion

Many societies develop apparently unnecessarily austere norms, depriving people of the harmless pleasures of life. In the face of the apparent disconnect of puritanical values from cooperation, these values have either been ignored by cooperation-centered theories of morality, or been explained by mechanisms orthogonal to cooperative challenges, such as concerns for the purity of the soul, rooted in disgust intuitions. We have argued for a theoretical reintegration of puritanical morality into the otherwise theoretically grounded and empirically supported perspective of morality as cooperation. For deep evolutionary reasons, cooperation as a long-term strategy requires resisting impulses for immediate pleasures. To protect cooperative interactions from the threat of temptation, many societies develop preemptive moralizations aimed at facilitating moral self-control. This may explain why, aside from values of fairness, reciprocity, solidarity or loyalty, many societies develop hedonically restrictive standards of sobriety, asceticism, temperance, modesty, piety, and self-discipline.

Wednesday, August 3, 2022

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E.A. et al.
Nat Rev Psychol (2022). 
https://doi.org/10.1038/s44159-022-00081-9

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

Social implications

People who score higher in intellectual humility are more likely to display tolerance of opposing political and religious views, exhibit less hostility toward members of those opposing groups, and are more likely to resist derogating outgroup members as intellectually and morally bankrupt. Although intellectually humbler people are capable of intergroup prejudice, they are more willing to question themselves and to consider rival viewpoints. Indeed, people with greater intellectual humility display less myside bias, expose themselves to opposing perspectives more often and show greater openness to befriending outgroup members on social media platforms. By comparison, people with lower intellectual humility display features of cognitive rigidity and are more likely to hold inflexible opinions and beliefs.

Monday, July 11, 2022

Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality

André, J., Debove, S., Fitouchi, L., & Baumard, N. 
(2022, May 24). https://doi.org/10.31234/osf.io/2hxgu

Abstract

Our goal in this paper is to use an evolutionary approach to explain the existence and design-features of human moral cognition. Our approach is based on the premise that human beings are under selection to appear as good cooperative investments. Hence, they face a trade-off between maximizing the immediate gains of each social interaction, and maximizing its long-term reputational effects. In a simple 2-player model, we show that this trade-off leads individuals to maximize the generalized Nash product at evolutionary equilibrium, i.e., to behave according to the generalized Nash bargaining solution. We infer from this result the theoretical proposition that morality is a domain-general calculator of this bargaining solution. We then proceed to describe the generic consequences of this approach: (i) everyone in a social interaction deserves to receive a net benefit, (ii) people ought to act in ways that would maximize social welfare if everyone was acting in the same way, (iii) all domains of social behavior can be moralized, (iv) moral duties can seem both principled and non-contractual, and (v) morality shall depend on the context. Next, we apply the approach to some of the main areas of social life and show that it allows us to explain, with a single logic, the entire set of what are generally considered to be different moral domains. Lastly, we discuss the relationship between this account of morality and other evolutionary accounts of morality and cooperation.
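As a numerical illustration of the solution concept the abstract names (this is our sketch, not the authors' model code), the generalized Nash product over a divisible surplus can be maximized by brute-force grid search; the function name, parameters, and values below are hypothetical:

```python
def nash_bargain(surplus, d1=0.0, d2=0.0, w1=0.5, w2=0.5, steps=10_000):
    """Grid-search the split x of `surplus` that maximizes the
    generalized Nash product (u1 - d1)**w1 * (u2 - d2)**w2,
    where d1, d2 are disagreement payoffs and w1, w2 are
    bargaining weights."""
    best_x, best_val = None, float("-inf")
    for i in range(steps + 1):
        x = surplus * i / steps          # player 1's share
        u1, u2 = x, surplus - x
        if u1 <= d1 or u2 <= d2:         # both parties must gain from the deal
            continue
        val = (u1 - d1) ** w1 * (u2 - d2) ** w2
        if val > best_val:
            best_x, best_val = x, val
    return best_x
```

With equal weights and no outside options the split is even; raising one side's disagreement payoff or bargaining weight shifts the split in their favor, matching the closed form x = d1 + w1*(S - d1 - d2) when w1 + w2 = 1.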

From The psychological signature of morality: the right, the wrong and the duty Section

Cooperating for the sake of reputation always entails that, at some point along social interactions, one is in a position to access benefits, but one decides to give them up, not for a short-term instrumental purpose, but for the long-term aim of having a good reputation. And by this, we mean precisely: the long-term aim of being considered someone with whom cooperation ends up bringing a net benefit rather than a net cost, not only in the eyes of a particular partner, but in the eyes of any potential future partner. This specific and universal property of reputation-based cooperation explains the specific and universal phenomenology of moral decisions.

To understand, one must distinguish between what people do in practice and what they think is right to do. In practice, people may sometimes cheat, i.e., not respect the contract. They may do so conditionally on the specific circumstances, if they evaluate that the actual reputational benefits of doing their duty are lower than the immediate cost (e.g., if their cheating has a chance to go unnoticed). This should not – and in fact does not (Knoch et al., 2009; Kogut, 2012; Sheskin et al., 2014; Smith et al., 2013) – change their assessment of what would have been the right thing to do. This assessment can only be absolute, in the sense that it depends only on what one needs to do to ensure that the interaction ends up bringing a net benefit to one’s partner rather than a cost, i.e., to respect the contract, and is not affected by the actual reputational stake of the specific interaction. Or, to put it another way, people must calculate their moral duty by thinking “If someone was looking at me, what would they think?”, regardless of whether anyone is actually looking at them.

Saturday, July 2, 2022

Shadow of conflict: How past conflict influences group cooperation and the use of punishment

J. Gross, C. K. W. De Dreu, & L. Reddmann
Organizational Behavior and Human Decision Processes
Volume 171, July 2022, 104152

Abstract

Intergroup conflict profoundly affects the welfare of groups and can deteriorate intergroup relations long after the conflict is over. Here, we experimentally investigate how the experience of an intergroup conflict influences the ability of groups to establish cooperation after conflict. We induced conflict by using a repeated attacker-defender game in which groups of four are divided into two ‘attackers’ that can invest resources to take away resources from the other two participants in the role of ‘defenders.’ After the conflict, groups engaged in a repeated public goods game with peer-punishment, in which group members could invest resources to benefit the group and punish other group members for their decisions. Previous conflict did not significantly reduce group cooperation compared to a control treatment in which groups did not experience the intergroup conflict. However, when having experienced an intergroup conflict, individuals punished free-riding during the repeated public goods game less harshly and did not react to punishment by previous attackers, ultimately reducing group welfare. This result reveals an important boundary condition for peer punishment institutions. Peer punishment is less able to efficiently promote cooperation amid a ‘shadow of conflict.’ In a third treatment, we tested whether such ‘maladaptive’ punishment patterns induced by previous conflict can be mitigated by hiding the group members’ conflict roles during the subsequent public goods provision game. We find more cooperation when individuals could not identify each other as (previous) attackers and defenders and maladaptive punishment patterns disappeared. Results suggest that intergroup conflict undermines past perpetrators’ legitimacy to enforce cooperation norms. More generally, results reveal that past conflict can reduce the effectiveness of institutions for managing the commons.

Highlights

• Intergroup conflict reduces the effectiveness of peer punishment to promote cooperation.

• Previous attackers lose their legitimacy to enforce cooperation norms.

• Hiding previous conflict roles allows groups to re-establish cooperation.


From the Discussion

Across all treatments, we observed that groups with a shadow of conflict earned progressively less and, hence, benefitted less from the cooperation opportunities they had after the conflict episode compared to groups without a previous intergroup conflict (control treatment) and groups in which previous conflict roles were hidden (reset treatment). By analyzing the patterns of punishment, we found that groups that experienced a shadow of conflict did not punish free-riders as harshly as in the other treatments. Further, punishment by past attackers was less effective in inducing subsequent cooperation, suggesting that attackers lose their legitimacy to enforce norms of cooperation when their past role in the conflict is identifiable (see also Baldassarri and Grossman, 2011, Faillo et al., 2013, Gross et al., 2016 for related findings on the role of legitimacy for the effectiveness of punishment in non-conflict settings). Even previous attackers did not significantly change their subsequent cooperation after receiving punishment from a fellow previous attacker. Hiding previous group affiliations, by contrast, made punishment by previous attackers as effective in promoting cooperation as in the control treatment.

These results reveal an important boundary condition for peer punishment institutions. While many experiments have shown that peer punishment can stabilize cooperation in groups (Fehr and Gächter, 2000, Masclet et al., 2003, Yamagishi, 1986), other research also showed that peer punishment can be misused or underused and is not always aimed at free-riders. In such cases, the ability to punish group members can have detrimental consequences for cooperation and group earnings (Abbink et al., 2017, Engelmann and Nikiforakis, 2015, Herrmann et al., 2008, Nikiforakis, 2008).

Friday, June 3, 2022

Cooperation as a signal of time preferences

Lie-Panis, J., & André, J. (2021, June 23).
https://doi.org/10.31234/osf.io/p6hc4

Abstract

Many evolutionary models explain why we cooperate with non-kin, but few explain why cooperative behavior and trust vary. Here, we introduce a model of cooperation as a signal of time preferences, which addresses this variability. At equilibrium in our model, (i) future-oriented individuals are more motivated to cooperate, (ii) future-oriented populations have access to a wider range of cooperative opportunities, and (iii) spontaneous and inconspicuous cooperation reveal stronger preference for the future, and therefore inspire more trust. Our theory sheds light on the variability of cooperative behavior and trust. Since affluence tends to align with time preferences, results (i) and (ii) explain why cooperation is often associated with affluence, in surveys and field studies. Time preferences also explain why we trust others based on proxies for impulsivity, and, following result (iii), why uncalculating, subtle and one-shot cooperators are deemed particularly trustworthy. Time preferences provide a powerful and parsimonious explanatory lens through which we can better understand the variability of trust and cooperation.

From the Discussion Section

Trust depends on revealed time preferences

Result (iii) helps explain why we infer trustworthiness from traits which appear unrelated to cooperation, but happen to predict time preferences. We trust known partners and strangers based on how impulsive we perceive them to be (Peetz & Kammrath, 2013; Righetti & Finkenauer, 2011); impulsivity is associated with both time preferences and cooperativeness in laboratory experiments (Aguilar-Pardo et al., 2013; Burks et al., 2009; Cohen et al., 2014; Martinsson et al., 2014; Myrseth et al., 2015; Restubog et al., 2010). Other studies show we infer cooperative motivation from a wide variety of proxies for partner self-control, including indicators of their indulgence in harmless sensual pleasures (for a review see Fitouchi et al., 2021), as well as proxies for environmental affluence (Moon et al., 2018; Williams et al., 2016).

Time preferences further offer a parsimonious explanation for why some forms of cooperation inspire more trust than others. When the probability of observation p or the cost-benefit ratio r/c is small in our model, helpful behavior reveals a long time horizon, and cooperators may be perceived as relatively genuine or disinterested. We derive two types of conclusions from this principle.
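The authors' equilibrium conditions are not reproduced in this excerpt, but the intuition can be sketched under a standard exponential-discounting assumption (a hypothetical simplification, not the paper's actual model): an agent pays a cooperation cost c now and, with observation probability p, earns a future reputational benefit r each round, discounted by factor δ. Cooperation then pays only for sufficiently patient agents, and the patience threshold rises as p or r/c shrinks — which is why inconspicuous cooperation signals a stronger preference for the future.

```python
def min_discount_factor(p: float, r: float, c: float) -> float:
    """Smallest discount factor delta for which cooperating pays.

    Cooperating pays iff the discounted benefit stream exceeds the cost:
        p * r * delta / (1 - delta) >= c
    Solving at equality gives delta = c / (c + p * r).
    """
    return c / (c + p * r)

# Conspicuous helping (high p) is rationalized even by fairly impatient agents;
# inconspicuous helping (low p) only pays for very patient ones, so observing it
# licenses a stronger inference about the helper's time preferences.
print(min_discount_factor(p=0.9, r=2.0, c=1.0))  # ~0.36
print(min_discount_factor(p=0.1, r=2.0, c=1.0))  # ~0.83
```

The same monotonic logic applies to the cost-benefit ratio: as r/c falls, only increasingly future-oriented agents find cooperation worthwhile.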

Saturday, May 21, 2022

Cross-Cultural Variation in Cooperation: A Meta-Analysis

Spadaro, G., Graf, C., et al. (2022).
Journal of Personality and Social Psychology.
Advance online publication.
https://doi.org/10.1037/pspi0000389

Abstract

Impersonal cooperation among strangers enables societies to create valuable public goods, such as infrastructure, public services, and democracy. Several factors have been proposed to explain variation in impersonal cooperation across societies, referring to institutions (e.g., rule of law), religion (e.g., belief in God as a third-party punisher), cultural beliefs (e.g., trust) and values (e.g., collectivism), and ecology (e.g., relational mobility). We tested 17 pre-registered hypotheses in a meta-analysis of 1,506 studies of impersonal cooperation in social dilemmas (e.g., the Public Goods Game) conducted across 70 societies (k = 2,271), where people make costly decisions to cooperate among strangers. After controlling for 10 study characteristics that can affect the outcome of studies, we found very little cross-societal variation in impersonal cooperation. Categorizing societies into cultural groups explained no variance in cooperation. Similarly, cultural, ancestral, and linguistic distance between societies explained little variance in cooperation. None of the cross-societal factors hypothesized to relate to impersonal cooperation explained variance in cooperation across societies. We replicated these conclusions when meta-analyzing 514 studies across 41 states and nine regions in the United States (k = 783). Thus, we observed that impersonal cooperation occurred in all societies – and to a similar degree across societies – suggesting that prior research may have overemphasized the magnitude of differences between modern societies in impersonal cooperation. We discuss the discrepancy between theory, past empirical research and the meta-analysis, address a limitation of experimental research on cooperation to study culture, and raise possible directions for future research.

From the Discussion

In the present meta-analysis, we found little variation in impersonal cooperation across 70 societies and 8 cultural groups. In fact, we found no significant differences in cooperation between cultural groups, which suggests there is little variation both within and between cultures. Moreover, linguistic and cultural distance between each pair of societies were only weakly related to differences in cooperation between societies, and genetic distance was not significantly associated with cooperation. If there existed substantial, systematic differences between societies in impersonal cooperation, we would expect a strong association between cultural distance and cooperation. Furthermore, we gathered all the societal indicators that have been hypothesized to explain cross-societal variation in impersonal cooperation and found that none of these were associated with cooperation. We also analyzed variation in cooperation across U.S. states and regions and found mixed evidence for variation in cooperation across the U.S. Contrary to what we observed within the global data, we found some variation in cooperation across U.S. regions, but only in one out of eight comparisons (i.e., South Atlantic region vs. East North Central region). That said, we did not find evidence for any between-state variation in cooperation.

Friday, April 22, 2022

Generous with individuals and selfish to the masses

Alós-Ferrer, C.; García-Segarra, J.; Ritschel, A.
(2022). Nature Human Behaviour, 6(1):88-96.

Abstract

The seemingly rampant economic selfishness suggested by many recent corporate scandals is at odds with empirical results from behavioural economics, which demonstrate high levels of prosocial behaviour in bilateral interactions and low levels of dishonest behaviour. We design an experimental setting, the ‘Big Robber’ game, where a ‘robber’ can obtain a large personal gain by appropriating the earnings of a large group of ‘victims’. In a large laboratory experiment (N = 640), more than half of all robbers took as much as possible and almost nobody declined to rob. However, the same participants simultaneously displayed standard, predominantly prosocial behaviour in Dictator, Ultimatum and Trust games. Thus, we provide direct empirical evidence showing that individual selfishness in high-impact decisions affecting a large group is compatible with prosociality in bilateral low-stakes interactions. That is, human beings can simultaneously be generous with others and selfish with large groups.

From the Discussion

Our results demonstrate that socially relevant selfishness in the large is fully compatible with evidence from experimental economics on bilateral, low-stakes games at the individual level, without requiring arguments relying on population differences (in fact, we found no statistically significant differences in the behavior of participants with or without an economics background). The same individuals can behave selfishly when interacting with a large group of other people while, at the same time, displaying standard levels of prosocial behavior in commonly used laboratory tasks where only one other individual is involved. Additionally, however, individual differences in behavior in the Big Robber Game correlate with individual selfishness in the DG/UG/TG, i.e., Extreme Robbers gave less in the DG, offered less in the UG, and transferred less in the TG than Moderate Robbers.

The finding that people behave selfishly toward a large group while being generous toward individuals suggests that harming many individuals might be easier than harming just one, in line with prior evidence that people are more willing to help one individual than many. It also reflects the tradeoff between personal gain and other-regarding concerns encompassed in standard models of social preferences, although this particular implication had not previously been demonstrated. When facing a single opponent in a bilateral game, appropriating a given monetary amount can create a large interpersonal difference. When appropriating income from a large group of people, the same personal gain involves a smaller percentage difference per person. Correspondingly, creating a given level of inequality with respect to others results in a much larger personal gain when income is taken from a group than when it is taken from just one other person, and hence that gain is much more likely to offset the disutility from inequality aversion in the former case.
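This tradeoff can be made concrete with the Fehr-Schmidt inequality-aversion model (Fehr & Schmidt, 1999) — a standard model of the kind the authors reference, not the paper's own specification, and the parameter values below are illustrative. A robber who takes t from each of N equally endowed victims ends up (N + 1)·t ahead of every victim; Fehr-Schmidt guilt averages that gap over the N other players, so guilt grows roughly linearly in t while the material gain grows linearly in N·t.

```python
def robber_net_utility(n_victims: int, t: float, beta: float) -> float:
    """Net utility of taking t from each of n_victims equally endowed players.

    Material gain: n_victims * t.
    Fehr-Schmidt guilt: beta * (average advantageous gap over the other
    players). The robber is (n_victims + 1) * t ahead of each victim, and
    averaging over n_victims others leaves guilt = beta * (n_victims + 1) * t.
    """
    gain = n_victims * t
    guilt = beta * (n_victims + 1) * t
    return gain - guilt

beta = 0.7  # a moderately inequality-averse robber (illustrative value)
print(robber_net_utility(1, 1.0, beta))    # ~ -0.4: taking from one person does not pay
print(robber_net_utility(100, 1.0, beta))  # ~ 29.3: taking the same per-person amount from a crowd does
```

For any guilt parameter beta between 0.5 and 1, robbing a single partner is unattractive while robbing a large group is profitable — matching the observed coexistence of bilateral generosity with selfishness toward the masses.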