Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Game Theory.

Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Knowablemagazine.org
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.
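The freeloading logic in that definition is the payoff structure of a standard public goods game, which can be sketched in a few lines. The parameters below (four players, a contribution cost of 1, the common pot doubled before an equal split) are illustrative assumptions, not figures from the article:

```python
def payoff(contributes: bool, n_others_contributing: int,
           n_players: int = 4, cost: float = 1.0, multiplier: float = 2.0) -> float:
    """One player's payoff in a one-shot public goods game."""
    total = n_others_contributing + (1 if contributes else 0)
    share = total * cost * multiplier / n_players   # pot is multiplied, then split equally
    return share - (cost if contributes else 0.0)

# Whatever the other three players do, freeloading pays more:
for others in range(4):
    assert payoff(False, others) > payoff(True, others)

# Yet universal cooperation beats universal freeloading: that is the dilemma.
assert payoff(True, 3) > payoff(False, 0)   # 1.0 > 0.0
```

Because the multiplier (2) is less than the group size (4), each player recoups only half of their own contribution, so contributing is individually dominated even though everyone prefers the all-contribute outcome.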

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.

(cut)

Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Saturday, December 17, 2022

Interaction between games give rise to the evolution of moral norms of cooperation

Salahshour M (2022)
PLoS Comput Biol 18(9): e1010429.
https://doi.org/10.1371/journal.pcbi.1010429

Abstract

In many biological populations, such as human groups, individuals face a complex strategic setting, where they need to make strategic decisions over a diverse set of issues, and their behavior in one strategic context can affect their decisions in another. This raises the question of how the interaction between different strategic contexts affects individuals’ strategic choices and social norms. To address this question, I introduce a framework where individuals play two games with different structures and decide upon their strategy in a second game based on their knowledge of their opponent’s strategy in the first game. I consider both multistage games, where the same opponents play the two games consecutively, and a reputation-based model, where individuals play their two games with different opponents but receive information about their opponent’s strategy. By considering a case where the first game is a social dilemma, I show that when the second game is a coordination or anti-coordination game, the Nash equilibria of the coupled game can be decomposed into two classes: a defective equilibrium, which is composed of two simple equilibria of the two games, and a cooperative equilibrium, in which coupling between the two games emerges and sustains cooperation in the social dilemma. For the cooperative equilibrium to exist, the cost of cooperation should be smaller than a value determined by the structure of the second game. Investigation of the evolutionary dynamics shows that a cooperative fixed point exists when the second game belongs to the coordination or anti-coordination class in a mixed population. However, the basin of attraction of the cooperative fixed point is much smaller for the coordination class, and this fixed point disappears in a structured population.
When the second game belongs to the anti-coordination class, the system possesses a spontaneous symmetry-breaking phase transition above which the symmetry between cooperation and defection breaks. A set of cooperation-supporting moral norms emerges, according to which cooperation stands out as a valuable trait. Notably, the moral system also brings a more efficient allocation of resources in the second game. This observation suggests a moral system has two different roles: promotion of cooperation, which is against individuals’ self-interest but beneficial for the population, and promotion of organization and order, which is in both the population’s and the individual’s self-interest. Interestingly, the latter acts like a Trojan horse: once established out of individuals’ self-interest, it brings the former with itself. Importantly, the fact that the evolution of moral norms depends only on the cost of cooperation and is independent of the benefit of cooperation implies that moral norms can be harmful and incur a pure collective cost, yet they are just as effective in promoting order and organization. Finally, the model predicts that recognition noise can have a surprisingly positive effect on the evolution of moral norms and facilitates cooperation in the Snow Drift game in structured populations.
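The coupling mechanism the abstract describes can be caricatured in a few lines. This is a hedged sketch of the general idea, not Salahshour's actual model: assume game 1 is a donation game (cooperation costs c and gives the partner b), game 2 is a pure coordination game paying v for matching and 0 otherwise, and the "moral" strategy coordinates only with players who cooperated in game 1. All parameter values are made up for illustration:

```python
def total_payoff(my_coop: bool, opp_coop: bool,
                 b: float = 3.0, c: float = 1.0, v: float = 2.0) -> float:
    """Combined payoff from a donation game coupled to a coordination game."""
    game1 = (b if opp_coop else 0.0) - (c if my_coop else 0.0)
    # Coupling: game-2 coordination succeeds only between players who made
    # the same game-1 choice (a norm keyed on cooperative reputation).
    game2 = v if my_coop == opp_coop else 0.0
    return game1 + game2

# Against moral cooperators, defecting saves c but forfeits v in game 2,
# so cooperation is an equilibrium whenever c < v:
assert total_payoff(True, True) > total_payoff(False, True)     # 4.0 > 3.0

# The all-defect state is also self-consistent (the "defective equilibrium"):
assert total_payoff(False, False) >= total_payoff(True, False)  # 2.0 > -1.0
```

The two assertions mirror the abstract's two equilibrium classes: a cooperative equilibrium that exists only when the cooperation cost is below a threshold set by the second game (here, c < v), and a defective equilibrium in which the games decouple.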

Author summary

How do moral norms spontaneously evolve in the presence of selfish incentives? An answer to this question is provided by the observation that moral systems have two distinct functions: Besides encouraging self-sacrificing cooperation, they also bring organization and order into the societies. In contrast to the former, which is costly for the individuals but beneficial for the group, the latter is beneficial for both the group and the individuals. A simple evolutionary model suggests this latter aspect is what makes a moral system evolve based on the individuals’ self-interest. However, a moral system behaves like a Trojan horse: Once established out of the individuals’ self-interest to promote order and organization, it also brings self-sacrificing cooperation.

Thursday, December 9, 2021

'Moral molecules’ – a new theory of what goodness is made of

Oliver Scott Curry and others
www.psyche.com
Originally posted 1 NOV 21

Here are two excerpts:

Research is converging on the idea that morality is a collection of rules for promoting cooperation – rules that help us work together, get along, keep the peace and promote the common good. The basic idea is that humans are social animals who have lived together in groups for millions of years. During this time, we have been surrounded by opportunities for cooperation – for mutually beneficial social interaction – and we have evolved and invented a range of ways of unlocking these benefits. These cooperative strategies come in different shapes and sizes: instincts, intuitions, inventions, institutions. Together, they motivate our cooperative behaviour and provide the criteria by which we evaluate the behaviour of others. And it is these cooperative strategies that philosophers and others have called ‘morality’.

This theory of ‘morality as cooperation’ relies on the mathematical analysis of cooperation provided by game theory – the branch of maths that is used to describe situations in which the outcome of one’s decisions depends on the decisions made by others. Game theory distinguishes between competitive ‘zero-sum’ interactions or ‘games’, where one player’s gain is another’s loss, and cooperative ‘nonzero-sum’ games, win-win situations in which both players benefit. What’s more, game theory tells us that there is not just one type of nonzero-sum game; there are many, with many different cooperative strategies for playing them. At least seven different types of cooperation have been identified so far, and each one explains a different type of morality.
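The zero-sum versus nonzero-sum distinction is easy to make concrete with two textbook games (my choice of examples, not the essay's): matching pennies, where every outcome's payoffs sum to zero, and the stag hunt, where a win-win outcome exists.

```python
# Entries are (row player's payoff, column player's payoff).

matching_pennies = {            # competitive: one player's gain is the other's loss
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}
stag_hunt = {                   # cooperative: hunting stag together beats hunting hare alone
    ("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0), ("hare", "hare"): (3, 3),
}

assert all(a + b == 0 for a, b in matching_pennies.values())   # zero-sum throughout
assert any(a + b > 6 for a, b in stag_hunt.values())           # a win-win outcome exists
```

The stag hunt is one of the nonzero-sum structures the 'morality as cooperation' literature draws on: both players do best by coordinating, but each needs to trust that the other will too.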

(cut)

Hence, seven types of cooperation explain seven types of morality: love, loyalty, reciprocity, heroism, deference, fairness and property rights. And so, according to this theory, it is morally good to: 1) love your family; 2) be loyal to your group; 3) return favours; 4) be heroic; 5) defer to superiors; 6) be fair; and 7) respect property. (And it is morally bad to: 1) neglect your family; 2) betray your group; 3) cheat; 4) be a coward; 5) disrespect authority; 6) be unfair; or 7) steal.) These morals are evolutionarily ancient, genetically distinct, psychologically discrete and cross-culturally universal.

The theory of ‘morality as cooperation’ explains, from first principles, many of the morals on those old lists. Some of the morals correspond to one of the basic types of cooperation (as in the case of courage), while others correspond to component parts of a basic type (as in the case of gratitude, which is a component of reciprocity).

Tuesday, May 29, 2018

Choosing partners or rivals

The Harvard Gazette
Originally published April 27, 2018

Here is the conclusion:

“The interesting observation is that natural selection always chooses either partners or rivals,” Nowak said. “If it chooses partners, the system naturally moves to cooperation. If it chooses rivals, it goes to defection, and is doomed. An approach like ‘America First’ embodies a rival strategy which guarantees the demise of cooperation.”

In addition to shedding light on how cooperation might evolve in a society, Nowak believes the study offers an instructive example of how to foster cooperation among individuals.

“With the partner strategy, I have to accept that sometimes I’m in a relationship where the other person gets more than me,” he said. “But I can nevertheless provide an incentive structure where the best thing the other person can do is to cooperate with me.

“So the best I can do in this world is to play a strategy such that the other person gets the maximum payoff if they always cooperate,” he continued. “That strategy does not prevent a situation where the other person, to some extent, exploits me. But if they exploit me, they get a lower payoff than if they fully cooperated.”
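Nowak's "partner strategy" idea can be illustrated with tit-for-tat as a stand-in partner (the study's strategies are more general; the payoffs below are the standard illustrative R=3, S=0, T=5, P=1, not the paper's). Against tit-for-tat in a long repeated prisoner's dilemma, the opponent's average payoff is maximized by always cooperating, and any exploitation lowers the exploiter's own payoff, exactly the incentive structure described in the quote:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def payoff_vs_tft(opponent_moves):
    """Opponent's average per-round payoff against tit-for-tat."""
    tft_move, total = "C", 0
    for m in opponent_moves:
        total += PAYOFF[(tft_move, m)][1]
        tft_move = m                      # TFT copies the opponent's last move
    return total / len(opponent_moves)

always_c  = payoff_vs_tft(["C"] * 100)       # full cooperation: 3.0 per round
alternate = payoff_vs_tft(["D", "C"] * 50)   # exploit, then repair: (5 + 0) / 2 = 2.5
always_d  = payoff_vs_tft(["D"] * 100)       # one temptation payoff, then mutual punishment

assert always_c > alternate and always_c > always_d
```

Exploitation still happens (the first defection earns 5), but because 2R > T + S here, no pattern of defection beats steady cooperation in the long run.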


Tuesday, May 30, 2017

Game Theory and Morality

Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete
The Evolution of Morality
Part of the series Evolutionary Psychology, pp. 289-316

Here is an excerpt:

The key result for evolutionary dynamic models is that, except under extreme conditions, behavior converges to Nash equilibria. This result rests on one simple, noncontroversial assumption shared by all evolutionary dynamics: Behaviors that are relatively successful will increase in frequency. Based on this logic, game theory models have been fruitfully applied in biological contexts to explain phenomena such as animal sex ratios (Fisher, 1958), territoriality (Smith & Price, 1973), cooperation (Trivers, 1971), sexual displays (Zahavi, 1975), and parent–offspring conflict (Trivers, 1974). More recently, evolutionary dynamic models have been applied in human contexts where conscious deliberation is believed to not play an important role, such as in the adoption of religious rituals (Sosis & Alcorta, 2003), in the expression and experience of emotion (Frank, 1988; Winter, 2014), and in the use of indirect speech (Pinker, Nowak, & Lee, 2008).
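The convergence claim above can be demonstrated with the simplest evolutionary dynamic, the replicator equation, applied to a prisoner's dilemma (the payoff numbers below are standard illustrative values, not from the chapter):

```python
# Standard prisoner's dilemma payoffs: reward, sucker, temptation, punishment.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

x = 0.99                           # initial fraction of cooperators
for _ in range(2000):
    f_c = x * R + (1 - x) * S      # expected payoff of a cooperator
    f_d = x * T + (1 - x) * P      # expected payoff of a defector
    f_bar = x * f_c + (1 - x) * f_d
    x += 0.01 * x * (f_c - f_bar)  # replicator update: success breeds frequency

# The population converges to the game's only Nash equilibrium, all-defection:
assert x < 0.01
```

Nothing here requires conscious reasoning: the update rule is exactly the "relatively successful behaviors increase in frequency" assumption, and it is enough to drive the population to the Nash equilibrium.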

Crucially for this chapter, because our behaviors are mediated by moral intuitions and ideologies, if our moral behaviors converge to Nash, so must the intuitions and ideologies that motivate them. The resulting intuitions and ideologies will bear the signature of their game theoretic origins, and this signature will lend clarity to the puzzling, counterintuitive, and otherwise hard-to-explain features of our moral intuitions, as exemplified by our motivating examples.

In order for game theory to be relevant to understanding our moral intuitions and ideologies, we need only the following simple assumption: Moral intuitions and ideologies that lead to higher payoffs become more frequent. This assumption can be met if moral intuitions that yield higher payoffs are held more tenaciously, are more likely to be imitated, or are genetically encoded. For example, if every time you transgress by commission you are punished, but every time you transgress by omission you are not, you will start to intuit that commission is worse than omission.


Thursday, January 21, 2016

Intuition, deliberation, and the evolution of cooperation

Adam Bear and David G. Rand
PNAS, 113(4): 936-941 (2016). doi:10.1073/pnas.1517780113

Abstract

Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
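A stripped-down sketch of the payoff logic in the abstract (illustrative parameters and a deliberately crude assortment assumption, not the paper's model): a fraction p of interactions are reciprocal, where mutual cooperation earns b − c per round; in one-shot games cooperating just costs c; deliberation costs d and lets an agent tailor its play to the game type. Each strategy's payoff below is computed against its own type:

```python
b, c, p, d = 4.0, 1.0, 0.8, 0.1   # benefit, cost, share of reciprocal games, deliberation cost

# Per-interaction payoffs when each type is matched with its own kind:
intuitive_defector = 0.0                          # never cooperates, never pays d
intuitive_cooperator = p * (b - c) - (1 - p) * c  # cooperates even in one-shot games
dual_process = p * (b - c) - d                    # pays d to defect in one-shot games

# Deliberation is worthwhile when its cost is below the cooperation cost
# it saves in one-shot games, i.e. whenever d < (1 - p) * c:
assert dual_process > intuitive_cooperator > intuitive_defector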


Saturday, October 17, 2015

Chimpanzee choice rates in competitive games match equilibrium game theory predictions

Christopher Flynn Martin, Rahul Bhui, Peter Bossaerts, Tetsuro Matsuzawa & Colin Camerer
Scientific Reports 4, Article number: 5182 (2014)
doi:10.1038/srep05182

Abstract

The capacity for strategic thinking about the payoff-relevant actions of conspecifics is not well understood across species. We use game theory to make predictions about choices and temporal dynamics in three abstract competitive situations with chimpanzee participants. Frequencies of chimpanzee choices are extremely close to equilibrium (accurate-guessing) predictions, and shift as payoffs change, just as equilibrium theory predicts. The chimpanzee choices are also closer to the equilibrium prediction, and more responsive to past history and payoff changes, than two samples of human choices from experiments in which humans were also initially uninformed about opponent payoffs and could not communicate verbally. The results are consistent with a tentative interpretation of game theory as explaining evolved behavior, with the additional hypothesis that chimpanzees may retain or practice a specialized capacity to adjust strategy choice during competition to perform at least as well as, or better than, humans have.
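The "accurate-guessing" (mixed-strategy Nash) prediction the abstract refers to can be computed directly for a 2x2 matching-type game. The payoffs below are an illustrative stand-in, not the study's actual task parameters: the matcher earns w_left for matching on Left and w_right for matching on Right, while the mismatcher earns 1 for any mismatch.

```python
def equilibrium(w_left: float, w_right: float):
    """Mixed-strategy Nash equilibrium of an asymmetric matching game.

    Each player mixes so as to leave the other indifferent between
    their two actions.
    """
    # Mismatcher's Left-rate solves q * w_left = (1 - q) * w_right:
    q = w_right / (w_left + w_right)
    # Matcher's Left-rate solves (1 - x) * 1 = x * 1:
    x = 0.5
    return x, q

# Counterintuitive equilibrium logic: raising the matcher's reward for
# matching on Left shifts the OPPONENT's mix, not the matcher's own.
print(equilibrium(2, 1))   # matcher mixes 50/50; mismatcher plays Left 1/3 of the time
print(equilibrium(1, 1))   # symmetric game: both mix 50/50
```

This is the kind of payoff-sensitive shifting the study tested: when payoffs change, equilibrium theory predicts specific changes in choice frequencies, and the chimpanzees tracked those predictions closely.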

Sunday, March 15, 2015

Game Theory Analysis Shows How Evolution Favors Cooperation’s Collapse

By Katherine Unger Baillie
University of Pennsylvania
Press Release
Originally released on November 24, 2014

Last year, University of Pennsylvania researchers Alexander J. Stewart and Joshua B. Plotkin published a mathematical explanation for why cooperation and generosity have evolved in nature. Using the classical game theory match-up known as the Prisoner’s Dilemma, they found that generous strategies were the only ones that could persist and succeed in a multi-player, iterated version of the game over the long term.

But now they’ve come out with a somewhat less rosy view of evolution. With a new analysis of the Prisoner’s Dilemma played in a large, evolving population, they found that adding more flexibility to the game can allow selfish strategies to be more successful. The work paints a dimmer but likely more realistic view of how cooperation and selfishness balance one another in nature.

“It’s a somewhat depressing evolutionary outcome, but it makes intuitive sense,” said Plotkin, a professor in Penn’s Department of Biology in the School of Arts & Sciences, who coauthored the study with Stewart, a postdoctoral researcher in his lab. “We had a nice picture of how evolution can promote cooperation even amongst self-interested agents and indeed it sometimes can, but, when we allow mutations that change the nature of the game, there is a runaway evolutionary process, and suddenly defection becomes the more robust outcome.”
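A hedged sketch (not Stewart and Plotkin's analysis) of why a generous strategy can persist in the iterated Prisoner's Dilemma: generous tit-for-tat (GTFT) retaliates against defection but forgives with some probability, so GTFT pairs sustain full cooperation while a defector who tries to exploit GTFT earns less than mutual cooperation would pay. Payoffs and the forgiveness rate below are illustrative choices:

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def gtft(opp_last: str, forgive: float = 0.3) -> str:
    """Generous tit-for-tat: retaliate after a defection, but sometimes forgive."""
    if opp_last == "D" and random.random() > forgive:
        return "D"
    return "C"

def play(strategy_a, strategy_b, rounds: int = 10000):
    """Average per-round payoffs of two memory-one strategies."""
    last_a, last_b, score_a, score_b = "C", "C", 0.0, 0.0
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a / rounds, score_b / rounds

random.seed(0)
both_generous, _ = play(gtft, gtft)                        # sustained mutual cooperation
gtft_vs_defector, defector_score = play(gtft, lambda _: "D")

assert both_generous == 3.0
assert defector_score < both_generous   # exploiting generosity pays less than reciprocating it
```

The press release's caveat applies here too: this stability depends on the game staying fixed. The new analysis shows that once mutations can change the game's payoffs themselves, defection can become the more robust outcome.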


Saturday, January 10, 2015

Robert Wright: The evolution of compassion

TED Talk Video
Originally published October 2009

Robert Wright uses evolutionary biology and game theory to explain why we appreciate the Golden Rule ("Do unto others..."), why we sometimes ignore it and why there’s hope that, in the near future, we might all have the compassion to follow it.


Friday, November 14, 2014

Compensation and punishment: ‘Justice’ depends on whether or not we’re a victim

New York University
Press Release
Originally released on October 28, 2014

We’re more likely to punish wrongdoing as a third party to a non-violent offense than when we’re victimized by it, according to a new study by New York University psychology researchers. The findings, which appear in the journal Nature Communications, may offer insights into how juries differ from plaintiffs in seeking to restore justice.

Their study, conducted in the laboratory of NYU Professor Elizabeth Phelps, also shows that victims, rather than seeking to punish an offender, instead seek to restore what they’ve lost.

“In our legal system, individuals are presented with the option to punish the transgressor or not, but such a narrow choice set may fail to capture alternative preferences for restoring justice,” observes Oriel FeldmanHall, the study’s lead author and a post-doctoral fellow in NYU’s Department of Psychology. “In this study we show that victims actually prefer other forms of justice restoration, such as compensation to the victim, rather than punishment of the transgressor.”