Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Prisoner's Dilemma. Show all posts

Saturday, December 17, 2022

Interaction between games give rise to the evolution of moral norms of cooperation

Salahshour M (2022)
PLoS Comput Biol 18(9): e1010429.
https://doi.org/10.1371/journal.pcbi.1010429

Abstract

In many biological populations, such as human groups, individuals face a complex strategic setting, where they need to make strategic decisions over a diverse set of issues, and their behavior in one strategic context can affect their decisions in another. This raises the question of how the interaction between different strategic contexts affects individuals’ strategic choices and social norms. To address this question, I introduce a framework where individuals play two games with different structures and decide upon their strategy in a second game based on their knowledge of their opponent’s strategy in the first game. I consider both multistage games, where the same opponents play the two games consecutively, and a reputation-based model, where individuals play their two games with different opponents but receive information about their opponent’s strategy. By considering a case where the first game is a social dilemma, I show that when the second game is a coordination or anti-coordination game, the Nash equilibria of the coupled game can be decomposed into two classes: a defective equilibrium, composed of two simple equilibria of the two games, and a cooperative equilibrium, in which coupling between the two games emerges and sustains cooperation in the social dilemma. For the cooperative equilibrium to exist, the cost of cooperation must be smaller than a value determined by the structure of the second game. Investigation of the evolutionary dynamics shows that a cooperative fixed point exists in a mixed population when the second game belongs to the coordination or anti-coordination class. However, the basin of attraction of the cooperative fixed point is much smaller for the coordination class, and this fixed point disappears in a structured population.
When the second game belongs to the anti-coordination class, the system possesses a spontaneous symmetry-breaking phase transition, above which the symmetry between cooperation and defection breaks. A set of cooperation-supporting moral norms emerges, according to which cooperation stands out as a valuable trait. Notably, the moral system also brings a more efficient allocation of resources in the second game. This observation suggests that a moral system has two different roles: promotion of cooperation, which is against individuals’ self-interest but beneficial for the population, and promotion of organization and order, which is in both the population’s and the individual’s self-interest. Interestingly, the latter acts like a Trojan horse: once established out of individuals’ self-interest, it brings the former with it. Importantly, the fact that the evolution of moral norms depends only on the cost of cooperation, and is independent of the benefit of cooperation, implies that moral norms can be harmful and incur a pure collective cost, yet be just as effective in promoting order and organization. Finally, the model predicts that recognition noise can have a surprisingly positive effect on the evolution of moral norms and facilitates cooperation in the Snowdrift game in structured populations.
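To make the baseline concrete, here is a minimal replicator-dynamics sketch of the social dilemma on its own (a donation-game Prisoner's Dilemma with assumed payoff values, not the paper's coupled-game model): without a second game to condition behavior on, cooperation always collapses, which is the situation the coupled equilibria above escape.

```python
# Illustrative sketch only, not the paper's model: replicator dynamics for an
# uncoupled donation-game Prisoner's Dilemma. Benefit b and cost c are
# assumed example values.

def replicator_step(x, b=3.0, c=1.0, dt=0.01):
    """One Euler step of replicator dynamics; x is the cooperator fraction."""
    f_c = b * x - c          # a cooperator's payoff against the population
    f_d = b * x              # a defector's payoff (receives b, pays nothing)
    f_bar = x * f_c + (1 - x) * f_d
    return x + dt * x * (f_c - f_bar)

x = 0.99                     # start from 99% cooperators
for _ in range(20_000):
    x = replicator_step(x)
print(round(x, 4))           # -> 0.0: cooperation collapses without coupling
```

Because a defector always outearns a cooperator by the cost c, the cooperator share shrinks from any interior starting point; this is the sense in which the cooperative fixed point in the coupled model is doing real work.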

Author summary

How do moral norms spontaneously evolve in the presence of selfish incentives? An answer to this question is provided by the observation that moral systems have two distinct functions: besides encouraging self-sacrificing cooperation, they also bring organization and order into societies. In contrast to the former, which is costly for the individuals but beneficial for the group, the latter is beneficial for both the group and the individuals. A simple evolutionary model suggests this latter aspect is what makes a moral system evolve based on individuals’ self-interest. However, a moral system behaves like a Trojan horse: once established out of the individuals’ self-interest to promote order and organization, it also brings self-sacrificing cooperation.

Sunday, November 21, 2021

Moral labels increase cooperation and costly punishment in a Prisoner’s Dilemma game with punishment option

Mieth, L., Buchner, A. & Bell, R.
Sci Rep 11, 10221 (2021). 
https://doi.org/10.1038/s41598-021-89675-6

Abstract

To determine the role of moral norms in cooperation and punishment, we examined the effects of a moral-framing manipulation in a Prisoner’s Dilemma game with a costly punishment option. In each round of the game, participants decided whether to cooperate or to defect. The Prisoner’s Dilemma game was identical for all participants, with the exception that the behavioral options were paired with moral labels (“I cooperate” and “I cheat”) in the moral-framing condition and with neutral labels (“A” and “B”) in the neutral-framing condition. After each round of the Prisoner’s Dilemma game, participants had the opportunity to invest some of their money to punish their partners. In two experiments, moral framing increased moral and hypocritical punishment: participants were more likely to punish partners for defection when moral labels were used than when neutral labels were used. When the participants’ cooperation was enforced by their partners’ moral punishment, moral framing not only increased moral and hypocritical punishment but also increased cooperation. The results suggest that moral framing activates a cooperative norm that specifically increases moral and hypocritical punishment. Furthermore, the experience of moral punishment by the partners may increase the importance of social norms for cooperation, which may explain why moral-framing effects on cooperation were found only when participants were subject to moral punishment.

General discussion

In human social life, a large variety of behaviors are regulated by social norms that set standards for how individuals should behave. One of these norms is the norm of cooperation. In many situations, people are expected to set aside their egoistic interests to achieve the collectively best outcome. Within economic research, cooperation is often studied in social dilemma games. In these games, the complexities of human social interactions are reduced to their incentive structures. However, human behavior is not determined by monetary incentives alone. There are many other important determinants of behavior, among which social norms are especially powerful. The participants’ decisions in social dilemma situations are thus affected by their interpretation of whether a certain behavior is socially appropriate or inappropriate. Moral labels can help to reduce the ambiguity of the social dilemma game by creating associations to real-life cooperation norms. Thereby, the moral framing may support a moral interpretation of the social dilemma situation, resulting in the moral rejection of egoistic behaviors. Often, social norms are enforced by punishment. It has been argued “that the maintenance of social norms typically requires a punishment threat, as there are almost always some individuals whose self-interest tempts them to violate the norm” [p. 185].
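The payoff logic of a punishment stage like the one described above can be sketched as follows; the payoff matrix, fee, and fine are illustrative assumptions, not the experiment's parameters.

```python
# Hedged sketch of one round of a Prisoner's Dilemma with a costly punishment
# stage: after the game, a player may pay a fee to impose a larger fine on
# the partner. All numbers are assumed examples.

PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def round_payoffs(move_a, move_b, a_punishes=False, b_punishes=False,
                  fee=1, fine=3):
    """Return (payoff_a, payoff_b) after the punishment stage."""
    pay_a, pay_b = PD[(move_a, move_b)]
    if a_punishes:               # the punisher pays the fee, the target the fine
        pay_a -= fee
        pay_b -= fine
    if b_punishes:
        pay_b -= fee
        pay_a -= fine
    return pay_a, pay_b

# "Moral" punishment: a cooperator punishes a defector.
print(round_payoffs('C', 'D', a_punishes=True))   # -> (-1, 2)
# "Hypocritical" punishment: a defector punishes a defector.
print(round_payoffs('D', 'D', a_punishes=True))   # -> (0, -2)
```

The sketch makes the dilemma of norm enforcement visible: the punisher ends up worse off than if they had let the defection pass, which is why a punishment threat, rather than individual self-interest, is needed to maintain the norm.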

Tuesday, May 29, 2018

Choosing partners or rivals

The Harvard Gazette
Originally published April 27, 2018

Here is the conclusion:

“The interesting observation is that natural selection always chooses either partners or rivals,” Nowak said. “If it chooses partners, the system naturally moves to cooperation. If it chooses rivals, it goes to defection, and is doomed. An approach like ‘America First’ embodies a rival strategy which guarantees the demise of cooperation.”

In addition to shedding light on how cooperation might evolve in a society, Nowak believes the study offers an instructive example of how to foster cooperation among individuals.

“With the partner strategy, I have to accept that sometimes I’m in a relationship where the other person gets more than me,” he said. “But I can nevertheless provide an incentive structure where the best thing the other person can do is to cooperate with me.

“So the best I can do in this world is to play a strategy such that the other person gets the maximum payoff if they always cooperate,” he continued. “That strategy does not prevent a situation where the other person, to some extent, exploits me. But if they exploit me, they get a lower payoff than if they fully cooperated.”

The information is here.

Tuesday, February 6, 2018

Do the Right Thing: Experimental Evidence that Preferences for Moral Behavior, Rather Than Equity or Efficiency per se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(January 11, 2018). Judgment and Decision Making.

Abstract

Decades of experimental research show that some people forgo personal gains to benefit others in unilateral anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are incompatible with these standard social preference models. We use a “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labelling of the options to describe the equitable versus efficient option as morally right completely reverses the correlation between behavior in the TOG and play in a separate Dictator Game (DG) or Prisoner’s Dilemma (PD): people who take the action framed as moral in the TOG, be it equitable or efficient, are much more prosocial in the DG and PD. Rather than preferences for equity and/or efficiency per se, our results suggest that prosociality in games such as the DG and PD is driven by a generalized morality preference that motivates people to do what they think is morally right.

Download the paper here.

Monday, October 2, 2017

Cooperation in the Finitely Repeated Prisoner’s Dilemma

Matthew Embrey, Guillaume R. Fréchette, and Sevgi Yuksel
The Quarterly Journal of Economics
Published: 26 August 2017

Abstract

More than half a century after the first experiment on the finitely repeated prisoner’s dilemma, evidence on whether cooperation decreases with experience, as suggested by backward induction, remains inconclusive. This paper provides a meta-analysis of prior experimental research and reports the results of a new experiment to elucidate how cooperation varies with the environment in this canonical game. We describe forces that affect initial play (formation of cooperation) and unraveling (breakdown of cooperation). First, contrary to the backward induction prediction, the parameters of the repeated game have a significant effect on initial cooperation. We identify how these parameters impact the value of cooperation, as captured by the size of the basin of attraction of Always Defect, to account for an important part of this effect. Second, despite these initial differences, the evolution of behavior is consistent with the unraveling logic of backward induction for all parameter combinations. Importantly, despite the seemingly contradictory results across studies, this paper establishes a systematic pattern of behavior: subjects converge to using threshold strategies that conditionally cooperate until a threshold round; and conditional on establishing cooperation, the first defection round moves earlier with experience. Simulation results generated from a learning model estimated at the subject level provide insights into the long-term dynamics and the forces that slow down the unraveling of cooperation.
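The "size of the basin of attraction of Always Defect" can be illustrated with the standard two-strategy calculation: against an opponent believed to play Grim trigger with probability q and Always Defect (AD) otherwise, find the range of beliefs for which AD is the best reply. The payoff values (T > R > P > S) below are assumed examples, not the paper's parameters.

```python
# Hedged sketch, not the paper's exact measure: belief threshold below which
# Always Defect beats Grim trigger in an n-round Prisoner's Dilemma.
# Assumed payoffs: R (mutual cooperation), S (sucker), T (temptation),
# P (mutual defection), with T > R > P > S.

def ad_basin_size(n, R=2.0, S=0.0, T=3.0, P=1.0):
    """Fraction of beliefs for which Always Defect is the best reply.

    q = believed probability the opponent plays Grim. Grim earns n*R against
    Grim versus T + (n-1)*P for AD; against AD, Grim earns S + (n-1)*P
    versus n*P for AD. AD is best for all q below the returned threshold,
    so a larger value means a larger basin of attraction for AD.
    """
    gain_coop = n * R - (T + (n - 1) * P)   # Grim's edge vs a Grim opponent
    loss_coop = P - S                        # Grim's shortfall vs an AD opponent
    return loss_coop / (loss_coop + gain_coop)

print(ad_basin_size(2))             # -> 1.0: with 2 rounds, AD always best
print(round(ad_basin_size(10), 3))  # -> 0.111: a long horizon shrinks AD's basin
```

This is the mechanism behind the abstract's first finding: lengthening the horizon (or otherwise raising the value of cooperation) shrinks AD's basin, which predicts more initial cooperation even though backward induction is indifferent to these parameters.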

The paper is here.

Monday, August 28, 2017

Maintaining cooperation in complex social dilemmas using deep reinforcement learning

Adam Lerer and Alexander Peysakhovich
(2017)

Abstract

In social dilemmas, individuals face a temptation to increase their payoffs in the short run at a cost to long-run total welfare. Much is known about how cooperation can be stabilized in the simplest of such settings: repeated Prisoner’s Dilemma games. However, there is relatively little work on generalizing these insights to more complex situations. We start to fill this gap by showing how to use modern reinforcement learning methods to generalize a highly successful Prisoner’s Dilemma strategy: tit-for-tat. We construct artificial agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (after a bad turn, try to return to mutual cooperation). We show both theoretically and experimentally that generalized tit-for-tat agents can maintain cooperation in more complex environments. In contrast, we show that employing purely reactive training techniques can lead to agents whose behavior results in socially inefficient outcomes.
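The three properties the abstract lists for tit-for-tat can be seen in a minimal repeated-PD simulation; the payoff values and the scripted partner below are assumed for illustration, not taken from the paper.

```python
# Minimal sketch of tit-for-tat in a repeated Prisoner's Dilemma, showing that
# it is nice (starts with C), provokable (answers D with D), and forgiving
# (returns to C once the partner does). Payoffs are assumed standard values.

PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Nice: cooperate first. Otherwise copy the partner's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def sometimes_defect(opponent_history):
    """A scripted partner who defects exactly once, in the second round."""
    return 'D' if len(opponent_history) == 1 else 'C'

def play(strategy_a, strategy_b, rounds=6):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_b)   # each strategy sees the other's history
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b, ''.join(hist_a)

score_a, score_b, moves = play(tit_for_tat, sometimes_defect)
print(moves)   # -> 'CCDCCC': provoked once (round 3), then forgiving
```

The single retaliatory D is what the paper's generalized agents reproduce in richer environments: punish a defection just enough to make exploitation unprofitable, then steer back to mutual cooperation.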

The paper is here.

Tuesday, May 30, 2017

Game Theory and Morality

Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete
The Evolution of Morality
Part of the series Evolutionary Psychology pp 289-316

Here is an excerpt:

The key result for evolutionary dynamic models is that, except under extreme conditions, behavior converges to Nash equilibria. This result rests on one simple, noncontroversial assumption shared by all evolutionary dynamics: Behaviors that are relatively successful will increase in frequency. Based on this logic, game theory models have been fruitfully applied in biological contexts to explain phenomena such as animal sex ratios (Fisher, 1958), territoriality (Smith & Price, 1973), cooperation (Trivers, 1971), sexual displays (Zahavi, 1975), and parent–offspring conflict (Trivers, 1974). More recently, evolutionary dynamic models have been applied in human contexts where conscious deliberation is believed to not play an important role, such as in the adoption of religious rituals (Sosis & Alcorta, 2003), in the expression and experience of emotion (Frank, 1988; Winter, 2014), and in the use of indirect speech (Pinker, Nowak, & Lee, 2008).

Crucially for this chapter, because our behaviors are mediated by moral intuitions and ideologies, if our moral behaviors converge to Nash, so must the intuitions and ideologies that motivate them. The resulting intuitions and ideologies will bear the signature of their game-theoretic origins, and this signature will lend clarity to the puzzling, counterintuitive, and otherwise hard-to-explain features of our moral intuitions, as exemplified by our motivating examples.

In order for game theory to be relevant to understanding our moral intuitions and ideologies, we need only the following simple assumption: Moral intuitions and ideologies that lead to higher payoffs become more frequent. This assumption can be met if moral intuitions that yield higher payoffs are held more tenaciously, are more likely to be imitated, or are genetically encoded. For example, if every time you transgress by commission you are punished, but every time you transgress by omission you are not, you will start to intuit that commission is worse than omission.
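The assumption that successful behaviors increase in frequency can be illustrated with a toy payoff-monotone dynamic in a Stag Hunt (the payoff numbers are assumptions for illustration, not from the chapter): the population converges to one of the game's two pure Nash equilibria, with the outcome determined by where it starts.

```python
# Hedged illustration of the chapter's core assumption: strategies that earn
# more grow in frequency, and the population settles on a Nash equilibrium.
# Stag Hunt with assumed payoffs: Stag pays 4 only when matched; Hare pays 3
# regardless of the partner.

STAG, HARE = 0, 1
PAYOFF = [[4.0, 0.0],   # Stag vs (Stag, Hare)
          [3.0, 3.0]]   # Hare vs (Stag, Hare)

def imitate_step(x, rate=0.1):
    """x = share of Stag players; the more successful strategy grows."""
    f_stag = x * PAYOFF[STAG][STAG] + (1 - x) * PAYOFF[STAG][HARE]
    f_hare = x * PAYOFF[HARE][STAG] + (1 - x) * PAYOFF[HARE][HARE]
    return min(1.0, max(0.0, x + rate * x * (1 - x) * (f_stag - f_hare)))

def converge(x, steps=2000):
    for _ in range(steps):
        x = imitate_step(x)
    return round(x, 3)

print(converge(0.9))   # -> 1.0: above the 3/4 tipping point, all-Stag
print(converge(0.5))   # -> 0.0: below it, all-Hare
```

Both endpoints are Nash equilibria, so which intuitions a population ends up with can depend on history as much as on payoffs, which is one source of the puzzling variation in moral intuitions the chapter sets out to explain.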

The book chapter is here.

Friday, March 31, 2017

Signaling Emotion and Reason in Cooperation

Emma Edelman Levine, Alixandra Barasch, David G. Rand, Jonathan Z. Berman, and Deborah A. Small (February 23, 2017).

Abstract

We explore the signal value of emotion and reason in human cooperation. Across four experiments utilizing dyadic prisoner’s dilemma games, we establish three central results. First, individuals believe that a reliance on emotion signals that one will cooperate more so than a reliance on reason. Second, these beliefs are generally accurate: those who act based on emotion are more likely to cooperate than those who act based on reason. Third, individuals’ behavioral responses towards signals of emotion and reason depend on their own decision mode: those who rely on emotion tend to conditionally cooperate (that is, cooperate only when they believe that their partner has cooperated), whereas those who rely on reason tend to defect regardless of their partner’s signal. These findings shed light on how different decision processes, and lay theories about decision processes, facilitate and impede cooperation.

Available at SSRN: https://ssrn.com/abstract=2922765

Editor's note: This research has implications for developing the therapeutic relationship.

Thursday, December 29, 2016

The Tragedy of Biomedical Moral Enhancement

Stefan Schlag
Neuroethics (2016). pp 1-13.
doi:10.1007/s12152-016-9284-5

Abstract

In Unfit for the Future, Ingmar Persson and Julian Savulescu present a challenging argument in favour of biomedical moral enhancement (BME). In light of the existential threats of climate change, the insufficient moral capacities of the human species seem to require a cautiously shaped programme of biomedical moral enhancement. The story of the tragedy of the commons creates the impression that climate catastrophe is unavoidable and consequently lends strength to the argument. The present paper analyses to what extent a policy in favour of biomedical moral enhancement can thereby be justified, and puts special emphasis on the political context. By reconstructing the theoretical assumptions of the argument and taking them seriously, it is revealed that the argument is self-defeating. The tragedy of the commons may make moral enhancement appear necessary, but when it comes to its implementation, a second-order collective-action problem emerges and impedes the execution of the idea. The paper examines several modifications of the argument and shows how it can be based on the easier enforceability of BME. While this implies enforcement, that is not an obstacle to the justification of BME. Rather, enforceability might be the decisive advantage of BME over other means. To take account of the global character of climate change, the paper closes with an inquiry into possible justifications of enforced BME on a global level. The upshot of the entire line of argumentation is that Unfit for the Future cannot justify BME because it ignores the nature of the problem of climate protection and the political prerequisites of any solution.

The article is here.