Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Social Dilemmas. Show all posts

Sunday, August 23, 2020

Suckers or Saviors? Consistent Contributors in Social Dilemmas

Weber JM, Murnighan JK.
J Pers Soc Psychol. 2008;95(6):1340-1353.
doi:10.1037/a0012454

Abstract

Groups and organizations face a fundamental problem: They need cooperation but their members have incentives to free ride. Empirical research on this problem has often been discouraging, and economic models suggest that solutions are unlikely or unstable. In contrast, the authors present a model and 4 studies that show that an unwaveringly consistent contributor can effectively catalyze cooperation in social dilemmas. The studies indicate that consistent contributors occur naturally, and their presence in a group causes others to contribute more and cooperate more often, with no apparent cost to the consistent contributor and often gain. These positive effects seem to result from a consistent contributor's impact on group members' cooperative inferences about group norms.

From the Discussion:

Practical Implications

These findings may also have important practical implications. Should an individual who is joining a new group take the risk and be a CC (Consistent Contributor)? The alternative is to risk being in a group without one. Even though CCs seemed to benefit economically from their actions, they also tended to get relatively little credit for their positive influence, if they got any credit at all. Thus, future research might explore how consistent contributions can be encouraged and appreciated and how people can overcome the fears that are naturally associated with becoming a CC.

These data also provide further support for Kelley and Stahelski’s (1970) observation that people consistently underestimate their roles in creating their own social environments. In particular, in the contexts that we studied here, the common characterization of self-interested choices as “strategic” or “rational” appears to be behaviorally inappropriate. Characterizing CCs as suckers may be both misleading and fallacious (see Moore & Loewenstein, 2004, p. 200). If “rational” choices maximize personal outcomes, our data suggest that the choice to be a CC can actually be rational. In this research, we examined CCs’ effects, not their motives or strategies. The data suggest that in these kinds of groups, CCs are saviors rather than suckers.

A serious impediment to the emergence of CCs is the fact that, like Axelrod’s (1984) tit-for-tat players, CCs can never do better than the other members of their own groups. This means that CCs cannot do better than their exchange partners: anyone who cooperates less, even if they ultimately move to mutual cooperation, will obtain better short-term outcomes than CCs. The common tendency to make social comparisons (Festinger, 1954) means that these outcome disparities will probably be noticed. Relatively disadvantageous outcomes are particularly noxious (e.g., Loewenstein, Thompson, & Bazerman, 1989), as is feeling exploited (e.g., Kelley & Stahelski, 1970). Thus, in the absence of formal agreements and binding contracts (which have their own problems; Malhotra & Murnighan, 2002), cooperative action can be exploited. The inclination to self-interested action may even be a common default (Moore & Loewenstein, 2004).
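The arithmetic behind this "can never do better" point is easy to see in a standard linear public goods game. A minimal sketch, assuming the textbook payoff function and made-up parameter values (an endowment of 10 and a marginal per-capita return of 0.5), not the actual design of Weber and Murnighan's studies:

```python
def payoff(own_contrib, group_total, endowment=10, mpcr=0.5):
    """Linear public goods payoff: keep whatever you don't contribute,
    plus an equal share (mpcr) of everything the group contributed."""
    return endowment - own_contrib + mpcr * group_total

# Four-person group where only the CC contributes (10) and three
# free riders contribute nothing, so group_total = 10:
cc_alone = payoff(10, 10)       # 5.0  -- the CC earns the least in the group
free_rider = payoff(0, 10)      # 15.0 -- lower contributors always earn more

# If the CC catalyzes full cooperation, everyone contributes 10
# and group_total = 40:
all_cooperate = payoff(10, 40)  # 20.0 -- better than free riding above
```

Within any single group, the CC never out-earns a lower contributor (5.0 vs. 15.0 here). But if consistent contribution moves the group to mutual cooperation, the CC ends up with 20.0, which is the sense in which being a CC can still be "rational."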

------------------

In essence: Economic theories often assume people look out mostly for themselves, cooperating only when punished or cajoled. But even in anonymous experiments, some people consistently cooperate. These people also (i) perform better and (ii) inspire others to cooperate.

Monday, August 28, 2017

Maintaining cooperation in complex social dilemmas using deep reinforcement learning

Adam Lerer and Alexander Peysakhovich
(2017)

Abstract

In social dilemmas individuals face a temptation to increase their payoffs in the short run at a cost to the long run total welfare. Much is known about how cooperation can be stabilized in the simplest of such settings: repeated Prisoner’s Dilemma games. However, there is relatively little work on generalizing these insights to more complex situations. We start to fill this gap by showing how to use modern reinforcement learning methods to generalize a highly successful Prisoner’s Dilemma strategy: tit-for-tat. We construct artificial agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (following a bad turn try to return to mutual cooperation). We show both theoretically and experimentally that generalized tit-for-tat agents can maintain cooperation in more complex environments. In contrast, we show that employing purely reactive training techniques can lead to agents whose behavior results in socially inefficient outcomes.

The paper is here.
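As a reference point, the classic tit-for-tat strategy that the paper generalizes can be sketched in a few lines for the repeated Prisoner's Dilemma. This is a minimal illustration of the baseline strategy, not the authors' deep reinforcement learning agents, and it assumes the conventional payoff values (temptation 5, reward 3, punishment 1, sucker 0):

```python
# Row: (my move, opponent's move) -> (my payoff, opponent's payoff)
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    """Nice (cooperate first), provokable and forgiving
    (simply mirror the opponent's last move)."""
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return 'D'

def play(strat_a, strat_b, rounds=5):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees only the opponent's history
        b = strat_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (15, 15): mutual cooperation
print(play(tit_for_tat, always_defect))  # (4, 9): exploited once, then retaliates
```

Note that tit-for-tat never outscores its own partner in a single match (4 vs. 9 against a defector), yet mutual tit-for-tat earns more (15 each) than defectors earn against it, which echoes the Weber and Murnighan point above about consistent contributors.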

Friday, October 24, 2014

When do people cooperate? The neuroeconomics of prosocial decision making.

Declerck CH, Boone C, Emonds G. When do people cooperate? The neuroeconomics
of prosocial decision making. Brain Cogn. 2013 Feb;81(1):95-117. 
doi: 10.1016/j.bandc.2012.09.009.

Abstract

Understanding the roots of prosocial behavior is an interdisciplinary research endeavor that has generated an abundance of empirical data across many disciplines. This review integrates research findings from different fields into a novel theoretical framework that can account for when prosocial behavior is likely to occur. Specifically, we propose that the motivation to cooperate (or not), generated by the reward system in the brain (extending from the striatum to the ventromedial prefrontal cortex), is modulated by two neural networks: a cognitive control system (centered on the lateral prefrontal cortex) that processes extrinsic cooperative incentives, and/or a social cognition system (including the temporo-parietal junction, the medial prefrontal cortex and the amygdala) that processes trust and/or threat signals. The independent modulatory influence of incentives and trust on the decision to cooperate is substantiated by a growing body of neuroimaging data and reconciles the apparent paradox between economic versus social rationality in the literature, suggesting that we are in fact wired for both. Furthermore, the theoretical framework can account for substantial behavioral heterogeneity in prosocial behavior. Based on the existing data, we postulate that self-regarding individuals (who are more likely to adopt an economically rational strategy) are more responsive to extrinsic cooperative incentives and therefore rely relatively more on cognitive control to make (un)cooperative decisions, whereas other-regarding individuals (who are more likely to adopt a socially rational strategy) are more sensitive to trust signals to avoid betrayal and recruit relatively more brain activity in the social cognition system. Several additional hypotheses with respect to the neural roots of social preferences are derived from the model and suggested for future research.

(cut)

6. Concluding remarks and directions for future research
Prosociality includes a wide array of behavior, including mutual cooperation, pure altruism, and the costly act of punishing norm violators. Neurologically, these behaviors are all motivated by neural networks dedicated to reward, indicating that prosocial acts (such as cooperating in a social dilemma) are carried out because they were desired and feel good. However, the underlying reasons for the pleasant feelings associated with cooperative behavior may differ. First, cooperation may be valued because of accruing benefits, making it economically rational. This route to cooperation is made possible through brain regions in the lateral frontal cortex that generate cognitive control and process the presence or absence of extrinsic cooperative incentives. Second, consistent with proponents of social rationality, cooperation can also occur when people expect to experience reward through a “warm glow of giving.” Such intrinsically motivated cooperation yields collective benefits from which all group members may eventually benefit, but it can only be sustained when it exists in concert with a mechanism to detect and deter free-riding. Hence socially rational cooperation is facilitated by a neural network dedicated to social cognition that processes trust signals.