Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Cooperation.

Sunday, July 6, 2025

In similarity we trust: Like-mindedness, rather than just the type of moral judgment, drives inferences of trustworthiness

Chandrashekar, S., et al. (2025, May 26).
PsyArXiv Preprints

Abstract

Trust plays a central role in social interactions. Recent research has highlighted the importance of others’ moral decisions in shaping trust inference: individuals who reject sacrificial harm in moral dilemmas (which aligns with deontological ethics) are generally perceived as more trustworthy than those who condone sacrificial harm (which aligns with utilitarian ethics). Across five studies (N = 1234), we investigated trust inferences in the context of iterative moral dilemmas, which allow individuals to not only make deontological or utilitarian decisions, but also harm-balancing decisions. Our findings challenge the prevailing perspective: While we did observe effects of the type of moral decision that people make, the direction of these effects was inconsistent across studies. In contrast, moral similarity (i.e., whether a decision aligns with one’s own perspective) consistently predicted increased trust. Our findings suggest that trust is not just about adhering to specific moral frameworks but also about shared moral perspectives.

Here are some thoughts:

This research is important to practicing psychologists for several key reasons. It demonstrates that like-mindedness—specifically, sharing similar moral judgments or decision-making patterns—is a strong determinant of perceived trustworthiness. This insight is valuable across clinical, organizational, and social psychology, particularly in understanding how moral alignment influences interpersonal relationships.

Unlike past studies focused on isolated moral dilemmas like the trolley problem, this work explores iterative dilemmas, offering a more realistic model of how people make repeated moral decisions over time. For psychologists working in ethics or behavioral interventions, this provides a nuanced framework for promoting cooperation and ethical behavior in dynamic contexts.

The study also challenges traditional views by showing that individuals who switch between utilitarian and deontological reasoning are not necessarily seen as less trustworthy, suggesting flexibility in moral judgment may be contextually appropriate. Additionally, the research highlights how moral decisions shape perceptions of traits such as bravery, warmth, and competence—key factors in how people are judged socially and professionally.

These findings can aid therapists in helping clients navigate relational issues rooted in moral misalignment or trust difficulties. Overall, the research bridges moral psychology and social perception, offering practical tools for improving interpersonal trust across diverse psychological domains.

Monday, March 24, 2025

Relational Norms for Human-AI Cooperation

Earp, B. D., et al. (2025).
arXiv

Abstract

How we should design and interact with so-called “social” artificial intelligence (AI) depends, in part, on the socio-relational role the AI serves to emulate or occupy. In human society, different types of social relationship exist (e.g., teacher-student, parent-child, neighbors, siblings, and so on) and are associated with distinct sets of prescribed (or proscribed) cooperative functions, including hierarchy, care, transaction, and mating. These relationship-specific patterns of prescription and proscription (i.e., “relational norms”) shape our judgments of what is appropriate or inappropriate for each partner within that relationship. Thus, what is considered ethical, trustworthy, or cooperative within one relational context, such as between friends or romantic partners, may not be considered as such within another relational context, such as between strangers, housemates, or work colleagues. Moreover, what is appropriate for one partner within a relationship, such as a boss giving orders to their employee, may not be appropriate for the other relationship partner (i.e., the employee giving orders to their boss) due to the relational norm(s) associated with that dyad in the relevant context (here, hierarchy and transaction in a workplace context). Now that artificially intelligent “agents” and chatbots powered by large language models (LLMs), are increasingly being designed and used to fill certain social roles and relationships that are analogous to those found in human societies (e.g., AI assistant, AI mental health provider, AI tutor, AI “girlfriend” or “boyfriend”), it is imperative to determine whether or how human-human relational norms will, or should, be applied to human-AI relationships. Here, we systematically examine how AI systems' characteristics that differ from those of humans, such as their likely lack of conscious experience and immunity to fatigue, may affect their ability to fulfill relationship-specific cooperative functions, as well as their ability to (appear to) adhere to corresponding relational norms. We also highlight the "layered" nature of human-AI relationships, wherein a third party (the AI provider) mediates and shapes the interaction. This analysis, which is a collaborative effort by philosophers, psychologists, relationship scientists, ethicists, legal experts, and AI researchers, carries important implications for AI systems design, user behavior, and regulation. While we accept that AI systems can offer significant benefits such as increased availability and consistency in certain socio-relational roles, they also risk fostering unhealthy dependencies or unrealistic expectations that could spill over into human-human relationships. We propose that understanding and thoughtfully shaping (or implementing) suitable human-AI relational norms—for a wide range of relationship types—will be crucial for ensuring that human-AI interactions are ethical, trustworthy, and favorable to human well-being.

Here are some thoughts:

This article details the intricate dynamics of how artificial intelligence (AI) systems, particularly those designed to mimic social roles, should interact with humans in a manner that is both ethically sound and socially beneficial. Authored by a diverse team of experts from various disciplines, the paper posits that understanding and applying human-human relational norms to human-AI interactions is essential for fostering ethical, trustworthy, and advantageous outcomes. The authors draw upon the Relational Norms model, which identifies four primary cooperative functions in human relationships—care, transaction, hierarchy, and mating—that guide behavior and expectations within different types of relationships, such as parent-child, teacher-student, or romantic partnerships.

As AI systems increasingly occupy social roles traditionally held by humans, such as assistants, tutors, and companions, the paper examines how AI's unique characteristics, such as the lack of consciousness and immunity to fatigue, influence their ability to fulfill these roles and adhere to relational norms. A significant aspect of human-AI relationships highlighted in the document is their "layered" nature, where a third party—the AI provider—mediates and shapes the interaction. This structure can introduce risks, such as changes in AI behavior or the monetization of user interactions, which may not align with the user's best interests.

The authors emphasize the importance of transparency in AI design, urging developers to clearly communicate the capabilities, limitations, and data practices of their systems to prevent exploitation and build trust. They also call for adaptive regulatory frameworks that consider the specific relational contexts of AI systems, ensuring user protection and ethical alignment. Users, too, are encouraged to educate themselves about AI and relational norms to engage more effectively and safely with these technologies. The paper concludes by advocating for ongoing interdisciplinary research and collaboration to address the evolving challenges posed by AI in social roles, ensuring that AI systems are developed and governed in ways that respect human values and contribute positively to society.

Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications of this are profound, as it raises ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Sunday, February 9, 2025

Does Morality Do Us Any Good?

Nikhil Krishnan
The New Yorker
Originally published 23 Dec 24

Here is an excerpt:

As things became more unequal, we developed a paradoxical aversion to inequality. In time, patterns began to appear that are still with us. Kinship and hierarchy were replaced or augmented by coöperative relationships that individuals entered into voluntarily—covenants, promises, and the economically essential contracts. The people of Europe, at any rate, became what Joseph Henrich, the Harvard evolutionary biologist and anthropologist, influentially termed “WEIRD”: Western, educated, industrialized, rich, and democratic. WEIRD people tend to believe in moral rules that apply to every human being, and tend to downplay the moral significance of their social communities or personal relations. They are, moreover, much less inclined to conform to social norms that lack a moral valence, or to defer to such social judgments as shame and honor, but much more inclined to be bothered by their own guilty consciences.

That brings us to the past fifty years, decades that inherited the familiar structures of modernity: capitalism, liberal democracy, and the critics of these institutions, who often fault them for failing to deliver on the ideal of human equality. The civil-rights struggles of these decades have had an urgency and an excitement that, Sauer writes, make their supporters think victory will be both quick and lasting. When it is neither, disappointment produces the “identity politics” that is supposed to be the essence of the present cultural moment.

His final chapter, billed as an account of the past five years, connects disparate contemporary phenomena—vigilance about microaggressions and cultural appropriation, policies of no-platforming—as instances of the “punitive psychology” of our early hominin ancestors. Our new sensitivities, along with the twenty-first-century terms they’ve inspired (“mansplaining,” “gaslighting”), guide us as we begin to “scrutinize the symbolic markers of our group membership more and more closely and to penalize any non-compliance.” We may have new targets, Sauer says, but the psychology is an old one.


Here are some thoughts:

Understanding the origins of human morality is relevant for practicing psychologists because it illuminates the psychological foundations of our moral behaviors and professional interactions. These insights bear both on work with patients and on our own ethical codes. The article explores how our moral intuitions have evolved over millions of years, revealing that our current moral frameworks are not fixed absolutes but dynamic systems shaped by biological and social processes. Other scholars, including Haidt, de Waal, and Tomasello, have conceptualized morality in similar ways.

Hanno Sauer's work illuminates a similar journey of moral development, tracing how early human survival strategies of cooperation and altruism gradually transformed into complex ethical systems. Psychologists can gain insights from this evolutionary perspective, understanding that our moral convictions are deeply rooted in our species' adaptive mechanisms rather than being purely rational constructs.

The article highlights several key insights:
  • Moral beliefs are significantly influenced by social context and evolutionary history
  • Our moral intuitions often precede rational justification
  • Cooperation and punishment played crucial roles in shaping human moral psychology
  • Universal moral values exist across different cultures, despite apparent differences

Particularly compelling is the exploration of how our "punitive psychology" emerged as a mechanism for social regulation, demonstrating how psychological processes have been instrumental in creating societal norms. For practicing psychologists, this understanding can provide a more nuanced approach to understanding patient behaviors, moral reasoning, and the complex interplay between individual experiences and broader evolutionary patterns. Notably, morality is always contextual, as I have pointed out in other summaries.

Finally, the article offers an optimistic perspective on moral progress, suggesting that our fundamental values are more aligned than we might initially perceive. This insight can be helpful for psychologists working with individuals from diverse backgrounds, emphasizing our shared psychological and evolutionary heritage.

Thursday, January 2, 2025

Negative economic shocks and the compliance to social norms

Bogliacino, F., et al. (2024).
Judgment and Decision Making, 19.

Abstract

We study why suffering a negative economic shock, i.e., a significant loss, may trigger a change in other-regarding behavior. We conjecture that people trade off concern for money with a conditional preference to follow social norms and that suffering a shock makes extrinsic motivation more salient, leading to more norm violation. This hypothesis is grounded on the premise that preferences are norm-dependent. We study this question experimentally: after administering losses on the earnings from a real-effort task, we analyze choices in prosocial and antisocial settings. To derive our predictions, we elicit social norms for each context analyzed in the experiments. We find evidence that shock increases deviations from norms.

Here are some thoughts:

The research indicates another way in which moral norms shift based on context. The study investigates how experiencing significant financial losses, termed negative economic shocks (NES), influences individuals' adherence to social norms. The authors hypothesize that when individuals face NES, they become more focused on monetary concerns, leading to a higher likelihood of violating social norms. This hypothesis is grounded in the concept of norm-dependent utility, where individuals weigh the psychological costs of deviating from norms against their financial needs. The researchers conducted three experiments where participants experienced an 80% loss in earnings from a real-effort task and subsequently engaged in various tasks measuring norm compliance, including stealing, cheating, and cooperation.

The findings reveal that participants who experienced NES exhibited increased norm violations across several contexts. Specifically, there was a notable rise in stealing behaviors and a significant increase in cheating during the "die-under-the-cup" task. Additionally, the study found that retaliation behaviors decreased markedly in "joy of destruction" scenarios. Importantly, the results suggest that the effects of NES on social behavior are distinct from mere wealth effects, indicating that experiencing a shock alters individuals' motivations and decision-making processes. Overall, this research contributes valuable insights into the complex interplay between economic stressors and social behavior, highlighting how financial adversity can lead to deviations from established social norms.
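
A note on method for readers unfamiliar with the paradigm: in die-under-the-cup tasks, participants privately roll a die and report the outcome, with higher reports earning more money. No single report can be verified, so cheating is inferred at the group level by comparing reported outcomes against what a fair die would produce. Here is a minimal sketch of that inference logic in Python (illustrative assumptions only; not the authors' actual analysis, data, or parameters):

```python
import random

FAIR_DIE_MEAN = 3.5  # (1 + 2 + ... + 6) / 6

def simulate_reports(n, p_cheat):
    # Hypothetical model: each participant rolls fairly, but with
    # probability p_cheat reports the highest-paying outcome (6) instead.
    reports = []
    for _ in range(n):
        roll = random.randint(1, 6)
        reports.append(6 if random.random() < p_cheat else roll)
    return reports

def cheating_signal(reports):
    # Group-level cheating shows up as an excess of the observed mean
    # report over the fair-die benchmark; individuals stay anonymous.
    return sum(reports) / len(reports) - FAIR_DIE_MEAN

random.seed(1)
baseline = simulate_reports(1000, p_cheat=0.05)  # hypothetical control group
shocked = simulate_reports(1000, p_cheat=0.25)   # hypothetical post-shock group
print(f"baseline excess: {cheating_signal(baseline):+.2f}")
print(f"shocked excess:  {cheating_signal(shocked):+.2f}")
```

The gap between the two excesses is how an increase in cheating after a shock would surface in aggregate data, which is why the paradigm can document norm violations without accusing any individual participant.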

Thursday, May 30, 2024

Big Gods and the Origin of Human Cooperation

Brian Klaas
The Garden of Forking Paths
Originally published 21 March 24

Here is an excerpt:

The Big Gods Hypothesis and Civilizations of Karma

Intellectual historians often point to two major divergent explanations for the emergence of religion. The great philosopher David Hume argued that religion is the natural, but arbitrary, byproduct of human cognitive architecture.

Since the beginning, Homo sapiens experienced disordered events, seemingly without explanation. To order a disordered world, our ancestors began to ascribe agency to supernatural beings, to which they could offer gifts, sacrifices, and prayers to sway them to their personal whims. The uncontrollable world became controllable. The unexplainable was explained—a comforting outcome for the pattern detection machines housed in our skulls.

By contrast, thinkers like Émile Durkheim argued that religion emerged as a social glue. Rituals bond people across space and time. Religion was instrumental, not intrinsic. It emerged to serve our societies, not comfort our minds. As Voltaire put it: “If there were no God, it would be necessary to invent him.”

In the last two decades, a vibrant strand of scholarship has sought to reconcile these contrasting viewpoints, notably through the work of Ara Norenzayan, author of Big Gods: How Religion Transformed Cooperation and Conflict.

Norenzayan’s “Big Gods” refer to deities that are omniscient, moralizing beings, careful to note our sins and punish us accordingly. Currently, roughly 77 percent of the world’s population identifies with one of just four religions (31% Christian; 24% Muslim; 15% Hindu; 7% Buddhist). In all four, moral transgressions produce consequences, some immediate, others punished in the afterlife.

Norenzayan aptly notes that the omniscience of Big Gods assumes total knowledge of everything in the universe, but that the divine is always depicted as being particularly interested in our moral behavior. If God exists, He surely could know which socks you wore yesterday, but deities focus their attentions not on such amoral trifles, but rather on whether you lie, covet, cheat, steal, or kill.

However, Norenzayan draws on anthropological evidence to argue that early supernatural beings had none of these traits and were disinterested in human affairs. They were fickle demons, tricksters and spirits, not omniscient gods who worried about whether any random human had wronged his neighbor.


Here is my summary:

The article discusses the theory that the belief in "Big Gods" - powerful, moralizing deities - played a crucial role in the development of large-scale human cooperation and the rise of complex civilizations.

Here are the main points: 
  1. Belief in Big Gods, who monitor and punish moral transgressions, may have emerged as a cultural adaptation that facilitated the expansion of human societies beyond small-scale groups.
  2. This belief system helped solve the "free-rider problem" by creating a supernatural system of rewards and punishments that incentivized cooperation and prosocial behavior, even among strangers.
  3. The emergence of Big Gods is linked to the growth of complex, hierarchical societies, as these belief systems helped maintain social cohesion and coordination in large groups of genetically unrelated individuals.
  4. Archaeological and historical evidence suggests the belief in Big Gods co-evolved with the development of large-scale political institutions, complex economies, and the rise of the first civilizations.
  5. However, the article notes that the relationship between Big Gods and societal complexity is complex, with causality going in both directions - the belief in Big Gods facilitated social complexity, but social complexity also shaped the nature of religious beliefs.
  6. Klaas concludes that the cultural evolution of Big Gods was a crucial step in the development of human societies, enabling the cooperation required for the emergence of complex civilizations. 

Friday, May 17, 2024

Moral universals: A machine-reading analysis of 256 societies

Alfano, M., Cheong, M., & Curry, O. S. (2024).
Heliyon, 10(6).
https://doi.org/10.1016/j.heliyon.2024.e25940

Abstract

What is the cross-cultural prevalence of the seven moral values posited by the theory of “morality-as-cooperation”? Previous research, using laborious hand-coding of ethnographic accounts of ethics from 60 societies, found examples of most of the seven morals in most societies, and observed these morals with equal frequency across cultural regions. Here we replicate and extend this analysis by developing a new Morality-as-Cooperation Dictionary (MAC-D) and using Linguistic Inquiry and Word Count (LIWC) to machine-code ethnographic accounts of morality from an additional 196 societies (the entire Human Relations Area Files, or HRAF, corpus). Again, we find evidence of most of the seven morals in most societies, across all cultural regions. The new method allows us to detect minor variations in morals across region and subsistence strategy. And we successfully validate the new machine-coding against the previous hand-coding. In light of these findings, MAC-D emerges as a theoretically-motivated, comprehensive, and validated tool for machine-reading moral corpora. We conclude by discussing the limitations of the current study, as well as prospects for future research.

Significance statement

The empirical study of morality has hitherto been conducted primarily in WEIRD contexts and with living participants. This paper addresses both of these shortcomings by examining the global anthropological record. In addition, we develop a novel methodological tool, the morality-as-cooperation dictionary, which makes it possible to use natural language processing to extract a moral signal from text. We find compelling evidence that the seven moral elements posited by the morality-as-cooperation hypothesis are documented in the anthropological record in all regions of the world and among all subsistence strategies. Furthermore, differences in moral emphasis between different types of cultures tend to be non-significant and small when significant. This is evidence for moral universalism.


Here is my summary:

The study aimed to investigate potential moral universals across human societies by analyzing a large dataset of ethnographic texts describing the norms and practices of 256 societies from around the world. The researchers used machine learning and natural language processing techniques to identify recurring concepts and themes related to morality across the texts.
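
To make the machine-coding step concrete: LIWC-style dictionary methods reduce to counting, for each moral dimension, what fraction of a text's words match that dimension's word list. The sketch below mimics that logic with a toy dictionary; the entries are invented for illustration and are not the published MAC-D, which is far larger, uses stem matching, and was validated against hand-coding:

```python
import re

# Toy stand-in for a moral dictionary. Real tools also match word
# stems (e.g., "equal*" would catch "equally"); this sketch uses
# exact matches to keep the counting logic visible.
MORAL_DICTIONARY = {
    "reciprocity": {"repay", "favor", "exchange", "return"},
    "fairness": {"fair", "equal", "share", "just"},
    "property": {"own", "steal", "possession", "property"},
}

def code_text(text):
    # Tokenize crudely, then report dictionary hits per dimension,
    # normalized by text length as LIWC-style coding does.
    tokens = re.findall(r"[a-z]+", text.lower())
    total = max(len(tokens), 1)
    return {
        dimension: sum(token in words for token in tokens) / total
        for dimension, words in MORAL_DICTIONARY.items()
    }

snippet = (
    "Those who steal another's property must repay the owner "
    "and share the harvest to make the exchange fair."
)
print(code_text(snippet))
```

Applied across the HRAF corpus, per-dimension scores like these are what let the authors compare the prevalence of each moral theme across regions and subsistence strategies.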

Some key findings:

1. The seven moral values posited by morality-as-cooperation were found in most societies, across all world regions:
            Helping kin (family values)
            Helping one's own group (group loyalty)
            Reciprocity (returning favors)
            Bravery (hawkish displays)
            Deference to authority (dovish displays)
            Fairness (dividing disputed resources)
            Property rights (respecting prior possession)

2. However, there was also substantial variation in how these principles were interpreted and prioritized across cultures.

3. Violations of principles such as harm avoidance and fairness were condemned more strongly when they affected one's own group than when they affected other groups.

4. Societies' mobility, population density, and reliance on agriculture or animal husbandry seemed to influence the relative importance placed on different moral principles.

The authors argue that while there do appear to be some common moral foundations widespread across societies, there is also substantial cultural variation in how these are expressed and prioritized. They suggest morality emerges from an interaction of innate psychological foundations and cultural evolutionary processes.

Friday, February 9, 2024

The Dual-Process Approach to Human Sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation

Capraro, V. (2023, May 8).
Journal of Personality and Social Psychology.

Abstract

Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet, a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology, and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favours a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this paper highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.

Here is my summary:

This article proposes a dual-process approach to human sociality.  Capraro argues that there are two main systems that govern human social behavior: an intuitive system and a deliberative system. The intuitive system is fast, automatic, and often based on heuristics, or mental shortcuts. The deliberative system is slower, more effortful, and based on a more careful consideration of the evidence.

Capraro argues that the intuitive system plays a key role in cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology. This is because these behaviors are often necessary for self-preservation. For example, in order to avoid being harmed, people are naturally inclined to cooperate with others and avoid harming others. Similarly, in order to maintain positive relationships with others, people are inclined to be truthful and reciprocate favors.

The deliberative system plays a more important role in complex social situations, such as when people must make decisions with long-term consequences or take the needs of others into account. In these cases, people are more likely to weigh the evidence carefully and compare options before deciding. Capraro concludes that the dual-process approach to human sociality provides a framework for understanding the complex cognitive basis of human social behavior, one that can explain a wide range of social phenomena, from cooperation and altruism to truth-telling and deontology.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current Opinion in Psychology, 54, 101696.

Abstract

The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefits of listening for mitigating social division has become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can diffuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which is not really helpful in spreading the word.  This information can be very helpful in couples and family therapy.  Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.

By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Sunday, December 10, 2023

Personality and prosocial behavior: A theoretical framework and meta-analysis

Thielmann, I., Spadaro, G., & Balliet, D. (2020).
Psychological Bulletin, 146(1), 30–90.

Abstract

Decades of research document individual differences in prosocial behavior using controlled experiments that model social interactions in situations of interdependence. However, theoretical and empirical integration of the vast literature on the predictive validity of personality traits to account for these individual differences is missing. Here, we present a theoretical framework that identifies 4 broad situational affordances across interdependent situations (i.e., exploitation, reciprocity, temporal conflict, and dependence under uncertainty) and more specific subaffordances within certain types of interdependent situations (e.g., possibility to increase equality in outcomes) that can determine when, which, and how personality traits should be expressed in prosocial behavior. To test this framework, we meta-analyzed 770 studies reporting on 3,523 effects of 8 broad and 43 narrow personality traits on prosocial behavior in interdependent situations modeled in 6 commonly studied economic games (Dictator Game, Ultimatum Game, Trust Game, Prisoner’s Dilemma, Public Goods Game, and Commons Dilemma). Overall, meta-analytic correlations ranged between −.18 ≤ ρ̂ ≤ .26, and most traits yielding a significant relation to prosocial behavior had conceptual links to the affordances provided in interdependent situations, most prominently the possibility for exploitation. Moreover, for several traits, correlations within games followed the predicted pattern derived from a theoretical analysis of affordances. On the level of traits, we found that narrow and broad traits alike can account for prosocial behavior, informing the bandwidth-fidelity problem. In sum, the meta-analysis provides a theoretical foundation that can guide future research on prosocial behavior and advance our understanding of individual differences in human prosociality.

Public Significance Statement

This meta-analysis provides a theoretical framework and empirical test identifying when, how, and which of 51 personality traits account for individual variation in prosocial behavior. The meta-analysis shows that the relations between personality traits and prosocial behavior can be understood in terms of a few situational affordances (e.g., a possibility for exploitation, a possibility for reciprocity, dependence on others under uncertainty) that allow specific traits to become expressed in behavior across a variety of interdependent situations. As such, the meta-analysis provides a theoretical basis for understanding individual differences in prosocial behavior in various situations that individuals face in their everyday social interactions.


A massive review of the literature finds that the best predictors of pro-social behavior are:
  1. social value orientation
  2. proneness to feel guilt
  3. honesty-humility

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies, that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, about how sexbots will alter intimate human relations, are the subjects of active research programs. Many more yet unasked questions will only be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding an imaginative speculation about the possibilities of AI with a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Friday, June 30, 2023

The psychology of zero-sum beliefs

Davidai, S., & Tepper, S. J. (2023).
Nature Reviews Psychology.

Abstract

People often hold zero-sum beliefs (subjective beliefs that, independent of the actual distribution of resources, one party’s gains are inevitably accrued at other parties’ expense) about interpersonal, intergroup and international relations. In this Review, we synthesize social, cognitive, evolutionary and organizational psychology research on zero-sum beliefs. In doing so, we examine when, why and how such beliefs emerge and what their consequences are for individuals, groups and society.  Although zero-sum beliefs have been mostly conceptualized as an individual difference and a generalized mindset, their emergence and expression are sensitive to cognitive, motivational and contextual forces. Specifically, we identify three broad psychological channels that elicit zero-sum beliefs: intrapersonal and situational forces that elicit threat, generate real or imagined resource scarcity, and inhibit deliberation. This systematic study of zero-sum beliefs advances our understanding of how these beliefs arise, how they influence people’s behaviour and, we hope, how they can be mitigated.

From the Summary and Future Directions section

We have suggested that zero-sum beliefs are influenced by threat, a sense of resource scarcity and lack of deliberation. Although each of these three channels can separately lead to zero-sum beliefs, simultaneously activating more than one channel might be especially potent. For instance, focusing on losses (versus gains) is both threatening and heightens a sense of resource scarcity. Consequently, focusing on losses might be especially likely to foster zero-sum beliefs. Similarly, insufficient deliberation on the long-term and dynamic effects of international trade might foster a view of domestic currency as scarce, prompting the belief that trade is zero-sum. Thus, any factor that simultaneously affects the threat that people experience, their perceptions of resource scarcity, and their level of deliberation is more likely to result in zero-sum beliefs, and attenuating zero-sum beliefs requires an exploration of all the different factors that lead to these experiences in the first place. For instance, increasing deliberation reduces zero-sum beliefs about negotiations by increasing people’s accountability, perspective taking or consideration of mutually beneficial issues. Future research could manipulate deliberation in other contexts to examine its causal effect on zero-sum beliefs. Indeed, because people express more moderate beliefs after deliberating policy details, prompting participants to deliberate about social issues (for example, asking them to explain the process by which one group’s outcomes influence another group’s outcomes) might reduce zero-sum beliefs. More generally, research could examine long-term and scalable solutions for reducing zero-sum beliefs, focusing on interventions that simultaneously reduce threat, mitigate views of resource scarcity and increase deliberation.  For instance, as formal training in economics is associated with lower zero-sum beliefs, researchers could examine whether teaching people basic economic principles reduces zero-sum beliefs across various domains. Similarly, because higher socioeconomic status is negatively associated with zero-sum beliefs, creating a sense of abundance might counter the belief that life is zero-sum.

Friday, May 19, 2023

What’s wrong with virtue signaling?

Hill, J., & Fanciullo, J. (2023).
Synthese, 201, 117.

Abstract

A novel account of virtue signaling and what makes it bad has recently been offered by Justin Tosi and Brandon Warmke. Despite plausibly vindicating the folk's conception of virtue signaling as a bad thing, their account has recently been attacked by both Neil Levy and Evan Westra. According to Levy and Westra, virtue signaling actually supports the aims and progress of public moral discourse. In this paper, we rebut these recent defenses of virtue signaling. We suggest that virtue signaling only supports the aims of public moral discourse to the extent it is an instance of a more general phenomenon that we call norm signaling. We then argue that, if anything, virtue signaling will undermine the quality of public moral discourse by undermining the evidence we typically rely on from the testimony and norm signaling of others. Thus, we conclude, not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation.

Conclusion

In this paper, we have challenged two recent defenses of virtue signaling. Whereas Levy ascribes a number of good features to virtue signaling—its providing higher-order evidence for the truth of certain moral judgments, its helping us delineate groups of reliable moral cooperators, and its not involving any hypocrisy on the part of its subject—it seems these good features are ascribable to virtue signaling ultimately and only because they are good features of norm signaling, and virtue signaling entails norm signaling. Similarly, whereas Westra suggests that virtue signaling uniquely benefits public moral discourse by supporting moral progress in a way that mere norm signaling does not, it seems virtue signaling also uniquely harms public moral discourse by supporting moral regression in a way that mere norm signaling does not. It therefore seems that in each case, to the extent it differs from norm signaling, virtue signaling simply isn’t needed.

Moreover, we have suggested that, if anything, virtue signaling will undermine the higher order evidence we typically can and should rely on from the testimony of others. Virtue signaling essentially involves a motivation that aims at affecting public moral discourse but that does not aim at the truth. When virtue signaling is rampant—when we are aware that this ulterior motive is common among our peers—we should give less weight to the higher-order evidence provided by the testimony of others than we otherwise would, on pain of double counting evidence and falling for unwarranted confidence. We conclude, therefore, that not only is virtue signaling not needed, but its epistemological effects warrant its bad reputation. 

Sunday, April 30, 2023

The secrets of cooperation

Bob Holmes
Knowablemagazine.org
Originally published 29 MAR 23

Here are two excerpts:

Human cooperation takes some explaining — after all, people who act cooperatively should be vulnerable to exploitation by others. Yet in societies around the world, people cooperate to their mutual benefit. Scientists are making headway in understanding the conditions that foster cooperation, research that seems essential as an interconnected world grapples with climate change, partisan politics and more — problems that can be addressed only through large-scale cooperation.

Behavioral scientists’ formal definition of cooperation involves paying a personal cost (for example, contributing to charity) to gain a collective benefit (a social safety net). But freeloaders enjoy the same benefit without paying the cost, so all else being equal, freeloading should be an individual’s best choice — and, therefore, we should all be freeloaders eventually.
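
The free-rider logic in that definition is easiest to see in a standard linear public goods game (my illustration, not the article's): whenever the multiplier on the common pool is smaller than the group size, keeping your endowment beats contributing it, even though a group of contributors ends up richer than a group of freeloaders.

```python
def payoff(my_contribution, others_contributions, endowment=10, multiplier=1.6):
    # Standard linear public goods game: all contributions are pooled,
    # multiplied, and shared equally among the players.
    contributions = [my_contribution] + list(others_contributions)
    public_share = multiplier * sum(contributions) / len(contributions)
    return endowment - my_contribution + public_share

others = [10, 10, 10]          # three partners who contribute fully
print(payoff(10, others))      # cooperate: 10 - 10 + 1.6 * 40 / 4 = 16.0
print(payoff(0, others))       # freeload:  10 - 0  + 1.6 * 30 / 4 = 22.0
```

Since every player faces the same incentive, universal freeloading leaves everyone with 10 rather than 16, which is exactly the obstacle the article says cultural-genetic coevolution helps us get past.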

Many millennia of evolution acting on both our genes and our cultural practices have equipped people with ways of getting past that obstacle, says Muthukrishna, who coauthored a look at the evolution of cooperation in the 2021 Annual Review of Psychology. This cultural-genetic coevolution stacked the deck in human society so that cooperation became the smart move rather than a sucker’s choice. Over thousands of years, that has allowed us to live in villages, towns and cities; work together to build farms, railroads and other communal projects; and develop educational systems and governments.

Evolution has enabled all this by shaping us to value the unwritten rules of society, to feel outrage when someone else breaks those rules and, crucially, to care what others think about us.

“Over the long haul, human psychology has been modified so that we’re able to feel emotions that make us identify with the goals of social groups,” says Rob Boyd, an evolutionary anthropologist at the Institute for Human Origins at Arizona State University.

(cut)

Reputation is more powerful than financial incentives in encouraging cooperation

Almost a decade ago, Yoeli and his colleagues trawled through the published literature to see what worked and what didn’t at encouraging prosocial behavior. Financial incentives such as contribution-matching or cash, or rewards for participating, such as offering T-shirts for blood donors, sometimes worked and sometimes didn’t, they found. In contrast, reputational rewards — making individuals’ cooperative behavior public — consistently boosted participation. The result has held up in the years since. “If anything, the results are stronger,” says Yoeli.

Financial rewards will work if you pay people enough, Yoeli notes — but the cost of such incentives could be prohibitive. One study of 782 German residents, for example, surveyed whether paying people to receive a Covid vaccine would increase vaccine uptake. It did, but researchers found that boosting vaccination rates significantly would have required a payment of at least 3,250 euros — a dauntingly steep price.

And payoffs can actually diminish the reputational rewards people could otherwise gain for cooperative behavior, because others may be unsure whether the person was acting out of altruism or just doing it for the money. “Financial rewards kind of muddy the water about people’s motivations,” says Yoeli. “That undermines any reputational benefit from doing the deed.”

Thursday, March 23, 2023

Are there really so many moral emotions? Carving morality at its functional joints

Fitouchi, L., André, J., & Baumard, N.
To appear in L. Al-Shawaf & T. K. Shackelford (Eds.)
The Oxford Handbook of Evolution and the Emotions.
New York: Oxford University Press.

Abstract

In recent decades, a large body of work has highlighted the importance of emotional processes in moral cognition. Since then, a heterogeneous bundle of emotions as varied as anger, guilt, shame, contempt, empathy, gratitude, and disgust have been proposed to play an essential role in moral psychology.  However, the inclusion of these emotions in the moral domain often lacks a clear functional rationale, generating conflations between merely social and properly moral emotions. Here, we build on (i) evolutionary theories of morality as an adaptation for attracting others’ cooperative investments, and on (ii) specifications of the distinctive form and content of moral cognitive representations. On this basis, we argue that only indignation (“moral anger”) and guilt can be rigorously characterized as moral emotions, operating on distinctively moral representations. Indignation functions to reclaim benefits to which one is morally entitled, without exceeding the limits of justice. Guilt functions to motivate individuals to compensate their violations of moral contracts. By contrast, other proposed moral emotions (e.g. empathy, shame, disgust) appear only superficially associated with moral cognitive contents and adaptive challenges. Shame doesn’t track, by design, the respect of moral obligations, but rather social valuation, the two being not necessarily aligned. Empathy functions to motivate prosocial behavior between interdependent individuals, independently of, and sometimes even in contradiction with the prescriptions of moral intuitions. While disgust is often hypothesized to have acquired a moral role beyond its pathogen-avoidance function, we argue that both evolutionary rationales and psychological evidence for this claim remain inconclusive for now.

Conclusion

In this chapter, we have suggested that a specification of the form and function of moral representations leads to a clearer picture of moral emotions. In particular, it enables a principled distinction between moral and non-moral emotions, based on the particular types of cognitive representations they process. Moral representations have a specific content: they represent a precise quantity of benefits that cooperative partners owe each other, a legitimate allocation of costs and benefits that ought to be, irrespective of whether it is achieved by people’s actual behaviors. Humans intuit that they have a duty not to betray their coalition, that innocent people do not deserve to be harmed, that their partner has a right not to be cheated on. Moral emotions can thus be defined as superordinate programs orchestrating cognition, physiology and behavior in accordance with the specific information encoded in these moral representations.

On this basis, indignation and guilt appear as prototypical moral emotions. Indignation (“moral anger”) is activated when one receives fewer benefits than one deserves, and recruits bargaining mechanisms to enforce the violated moral contract. Guilt, symmetrically, is sensitive to one’s failure to honor one’s obligations toward others, and motivates compensation to provide them the missing benefits they deserve. By contrast, often-proposed “moral” emotions – shame, empathy, disgust – seem not to function to compute distinctively moral representations of cooperative obligations, but serve other, non-moral functions – social status management, interdependence, and pathogen avoidance (Figure 2).

Tuesday, February 14, 2023

Helping the ingroup versus harming the outgroup: Evidence from morality-based groups

Grigoryan, L., Seo, S., Simunovic, D., & Hofmann, W.
Journal of Experimental Social Psychology
Volume 105, March 2023, 104436

Abstract

The discrepancy between ingroup favoritism and outgroup hostility is well established in social psychology. Under which conditions does “ingroup love” turn into “outgroup hate”? Studies with natural groups suggest that when group membership is based on (dis)similarity of moral beliefs, people are willing to not only help the ingroup, but also harm the outgroup. The key limitation of these studies is that the use of natural groups confounds the effects of shared morality with the history of intergroup relations. We tested the effect of morality-based group membership on intergroup behavior using artificial groups that help disentangling these effects. We used the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game which differentiates between behavioral options of weak parochialism (helping the ingroup), strong parochialism (harming the outgroup), universal cooperation (helping both groups), and egoism (profiting individually). In three preregistered experiments, we find that morality-based groups exhibit less egoism and more universal cooperation than non-morality-based groups. We also find some evidence of stronger ingroup favoritism in morality-based groups, but no evidence of stronger outgroup hostility. Stronger ingroup favoritism in morality-based groups is driven by expectations from the ingroup, but not the outgroup. These findings contradict earlier evidence from natural groups and suggest that (dis)similarity of moral beliefs is not sufficient to cross the boundary between “ingroup love” and “outgroup hate”.
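
To keep the four IPUC options straight, here is a schematic of who gains or loses under each choice. The numbers are placeholders chosen only to preserve the qualitative pattern described in the abstract; they are not the game's actual payoff parameters:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Signs show who gains (+) or loses (-); magnitudes are placeholders,
    # not the published IPUC payoffs.
    self_payoff: int
    ingroup: int
    outgroup: int

IPUC_OPTIONS = {
    "egoism": Outcome(self_payoff=3, ingroup=0, outgroup=0),
    "weak_parochialism": Outcome(self_payoff=1, ingroup=2, outgroup=0),
    "strong_parochialism": Outcome(self_payoff=1, ingroup=2, outgroup=-2),
    "universal_cooperation": Outcome(self_payoff=1, ingroup=2, outgroup=2),
}

for name, o in IPUC_OPTIONS.items():
    print(f"{name:22s} self={o.self_payoff:+d} "
          f"ingroup={o.ingroup:+d} outgroup={o.outgroup:+d}")
```

Separating "help the ingroup" from "hurt the outgroup" in this way is what lets the authors test whether shared morality pushes people from weak into strong parochialism.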

General discussion

When does “ingroup love” turn into “outgroup hate”? Previous studies conducted on natural groups suggest that centrality of morality to the group’s identity is one such condition: morality-based groups showed more hostility towards outgroups than non-morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). We set out to test this hypothesis in a minimal group setting, using the recently developed Intergroup Parochial and Universal Cooperation (IPUC) game. Across three pre-registered studies, we found no evidence that morality-based groups show more hostility towards outgroups than non-morality-based groups. Instead, morality-based groups exhibited less egoism and more universal cooperation (helping both the ingroup and the outgroup) than non-morality-based groups. This finding is consistent with earlier research showing that salience of morality makes people more cooperative (Capraro et al., 2019). Importantly, our morality manipulation was not specific to any pro-cooperation moral norm. Simply asking participants to think about the criteria they use to judge what is right and what is wrong was enough to increase universal cooperation.

Our findings are inconsistent with research showing stronger outgroup hostility in morality-based groups (Parker & Janoff-Bulman, 2013; Weisel & Böhm, 2015). The key difference between the set of studies presented here and the earlier studies that find outgroup hostility in morality-based groups is the use of natural groups in the latter. What potential confounding variables might account for the emergence of outgroup hostility in natural groups?

Tuesday, January 24, 2023

On the value of modesty: How signals of status undermine cooperation

Srna, S., Barasch, A., & Small, D. A. (2022). 
Journal of Personality and Social Psychology, 
123(4), 676–692.
https://doi.org/10.1037/pspa0000303

Abstract

The widespread demand for luxury is best understood by the social advantages of signaling status (i.e., conspicuous consumption; Veblen, 1899). In the present research, we examine the limits of this perspective by studying the implications of status signaling for cooperation. Cooperation is principally about caring for others, which is fundamentally at odds with the self-promotional nature of signaling status. Across behaviorally consequential Prisoner’s Dilemma (PD) games and naturalistic scenario studies, we investigate both sides of the relationship between signaling and cooperation: (a) how people respond to others who signal status, as well as (b) the strategic choices people make about whether to signal status. In each case, we find that people recognize the relative advantage of modesty (i.e., the inverse of signaling status) and behave strategically to enable cooperation. That is, people are less likely to cooperate with partners who signal status compared to those who are modest (Studies 1 and 2), and more likely to select a modest person when cooperation is desirable (Study 3). These behaviors are consistent with inferences that status signalers are less prosocial and less prone to cooperate. Importantly, people also refrain from signaling status themselves when it is strategically beneficial to appear cooperative (Studies 4–6). Together, our findings contribute to a better understanding of the conditions under which the reputational costs of conspicuous consumption outweigh its benefits, helping integrate theoretical perspectives on strategic interpersonal dynamics, cooperation, and status signaling.
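
For readers who have not run across it, the Prisoner's Dilemma that anchors Studies 1 and 2 has a simple structure: mutual cooperation beats mutual defection, but defecting against a cooperator pays best of all. The payoff matrix below uses generic textbook values, not the actual stakes from these studies:

```python
# Classic Prisoner's Dilemma with the textbook ordering T > R > P > S.
# Values are generic, not those used by Srna et al.
PD_PAYOFFS = {
    # (my move, partner's move): (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"): (0, 5),     # sucker's payoff vs. temptation
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),        # punishment for mutual defection
}

def play(my_move, partner_move):
    return PD_PAYOFFS[(my_move, partner_move)]

print(play("cooperate", "defect"))  # (0, 5): the cost of trusting a defector
```

Because cooperating exposes you to the sucker's payoff, cues about a partner's prosociality, such as whether they signal status or modesty, carry real weight in the decision to cooperate.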

From the General Discussion

Implications

The high demand for luxury goods is typically explained by the social advantages of status signaling (Veblen, 1899). We do not dispute that status signaling is beneficial in many contexts. Indeed, we find that status signaling helps a person gain acceptance into a group that is seeking competitive members (see Supplemental Study 1). However, our research suggests a more nuanced view regarding the social effects of status signaling. Specifically, the findings caution against using this strategy indiscriminately. Individuals should consider how important it is for them to appear prosocial, and strategically choose modesty when the goal of achieving cooperation is more important than other social goals (e.g., to appear wealthy or successful).

These strategic concerns are particularly important in the era of social media, where people can easily broadcast their consumption choices to large audiences. Many people show off their status through posts on Instagram, Twitter, and Facebook (e.g., Sekhon et al., 2015). Such posts may be beneficial for communicating one’s wealth and status, but as we have shown, they can also have negative effects. A boastful post could wind up on social media accounts such as “Rich Kids of the Internet,” which highlights extreme acts of status signaling and has over 350,000 followers and countless angry comments (Hoffower, 2020). Celebrities and other public figures also risk their reputations when they post about their status. For instance, when Louise Linton, wife of the U.S. Secretary of the Treasury, posted a photo of herself from an official government visit with many luxury-branded hashtags, she was vilified on social media and in the press (Calfas, 2017).

Monday, January 16, 2023

The origins of human prosociality: Cultural group selection in the workplace and the laboratory

Francois, P., Fujiwara, T., & van Ypersele, T. (2018).
Science Advances, 4(9).
https://doi.org/10.1126/sciadv.aat2201

Abstract

Human prosociality toward non-kin is ubiquitous and almost unique in the animal kingdom. It remains poorly understood, although a proliferation of theories has arisen to explain it. We present evidence from survey data and laboratory treatment of experimental subjects that is consistent with a set of theories based on group-level selection of cultural norms favoring prosociality. In particular, increases in competition increase trust levels of individuals who (i) work in firms facing more competition, (ii) live in states where competition increases, (iii) move to more competitive industries, and (iv) are placed into groups facing higher competition in a laboratory experiment. The findings provide support for cultural group selection as a contributor to human prosociality.

Discussion

There is considerable experimental evidence, referenced earlier, supporting the conclusion that people are conditional cooperators: They condition actions based on their beliefs regarding prevailing norms of behavior. They cooperate if they believe their partners are also likely to do so, and they are unlikely to act cooperatively if they believe that others will not.

The environment in which people interact shapes both the social and economic returns to following cooperative norms. For instance, many aspects of groups within the work environment will determine whether cooperation can be an equilibrium in behavior among group members or whether it is strictly dominated by more selfish actions. Competition across firms can play two distinct roles in affecting this. First, there is a static equilibrium effect, which arises from competition altering rewards from cooperative versus selfish behavior, even without changing the distribution of firms. Competition across firms punishes individual free-riding behavior and rewards cooperative behavior. In the absence of competitive threats, members of groups can readily shirk without serious payoff consequences for their firm. This is not so if a firm faces an existential threat. Less markedly, even if a firm is not close to the brink of survival, more intense market competition renders firm-level payoffs more responsive to the efforts of group members. With intense competition, the deleterious effects of shirking are magnified by large loss of market share, revenues, and, in turn, lower group-level payoffs. Without competition, attendant declines in quality or efficiency arising from poor performance have weaker, and perhaps nonexistent, payoff consequences. These effects on individuals are likely to be small in large firms where any specific worker’s actions are unlikely to be pivotal. However, it is possible that employees overestimate the impact of their actions or instinctively respond to competition with more prosocial attitudes, even in large teams.

Competition across firms does not typically lead to a unique equilibrium in social norms but, if intense enough, can sustain a cooperative group norm. Depending on the setting, multiple different cooperative group equilibria differentiated by the level of costly effort can also be sustained. For example, if individuals are complementary in production, then a worker who believes that co-workers are all shirking, and thus that no viable product will be produced, will likewise choose to exert low effort. An equilibrium where no one voluntarily contributes to cooperative tasks is sustained, and such a workplace appears to have noncooperative norms. In contrast, with the same complementary production process, and a workplace where all other workers are believed to be contributing high effort, a single worker will optimally choose to exert high effort as well to ensure viable output. In that case, a cooperative norm is sustained. When payoffs are continuous in both the quality of the product and the intensity of the competition, the degree of cooperative effort that can be sustained increases continuously with the intensity of market competition across firms. We have formalized this in an economic model that we include in the Supplementary Materials.
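A toy best-response calculation can illustrate the mechanism. This is a minimal sketch under assumed functional forms, not the authors' supplementary model: competition intensity scales how strongly a worker's payoff responds to group output, so high effort becomes a best response to high effort only when competition is intense enough.

# Toy sketch of the mechanism described above. Functional forms and
# numbers are illustrative assumptions, not the authors' formal model.

EFFORT_COST = 1.0  # assumed private cost of exerting high effort

def worker_payoff(my_effort, others_effort, c):
    # Complementary production: output tracks the minimum effort level;
    # competition intensity c scales the worker's stake in that output.
    output = min(my_effort, others_effort)
    return c * output - EFFORT_COST * my_effort

def best_response(others_effort, c):
    # Choose between high effort (1.0) and low effort (0.0).
    return max((1.0, 0.0), key=lambda e: worker_payoff(e, others_effort, c))

for c in (0.5, 2.0):
    print(f"c={c}: BR to high effort = {best_response(1.0, c)}, "
          f"BR to low effort = {best_response(0.0, c)}")

With weak competition (c = 0.5), low effort is the best response regardless of what others do, so only the noncooperative equilibrium survives. With strong competition (c = 2.0), a high-effort cooperative equilibrium exists alongside the low-effort one, matching the multiplicity of norms described above.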

Competition’s first effect is thus to make it possible, but not necessary, for group-level cooperative norms to arise as equilibria. The literature has shown that there are many other ways to stabilize cooperative norms as equilibria, such as institutional punishment, third-party punishment, or reputations. Cross-group competition may also enhance these other well-studied mechanisms for generating cooperative norm equilibria, but with or without these factors, it has a general effect of tilting the set of equilibria toward those featuring cooperative norms.

Wednesday, January 11, 2023

How neurons, norms, and institutions shape group cooperation

Van Bavel, J. J., Pärnamets, P., Reinero, D. A., 
& Packer, D. (2022, April 7).
https://doi.org/10.1016/bs.aesp.2022.04.004

Abstract

Cooperation occurs at all stages of human life and is necessary for small groups and large-scale societies alike to emerge and thrive. This chapter bridges research in the fields of cognitive neuroscience, neuroeconomics, and social psychology to help understand group cooperation. We present a value-based framework for understanding cooperation, integrating neuroeconomic models of decision-making with psychological and situational variables involved in cooperative behavior, particularly in groups. According to our framework, the ventromedial prefrontal cortex serves as a neural integration hub for value computation during cooperative decisions, receiving inputs from various neuro-cognitive processes such as attention, affect, memory, and learning. We describe factors that directly or indirectly shape the value of cooperation decisions, including cultural contexts and social norms, personal and social identity, and intergroup relations. We also highlight the role of economic, social, and cultural institutions in shaping cooperative behavior. We discuss the implications for future research on cooperation.

(cut)

Social Institutions

Trust production is crucial for fostering cooperation (Zucker, 1986). We have already discussed two forms of trust production above: the trust and resulting cooperation that develops from experience with and knowledge about individuals, and trust based on social identities. The third form of trust production is institution-based, in which formal mechanisms or processes are used to foster trust (and that do not rely on personal characteristics, a history of exchange, or identity characteristics). At the societal level, trust-supporting institutions include governments, corporate structures, criminal and civil legal systems, contract law and property rights, insurance, and stock markets. When they function effectively, institutions allow for broader cooperation, helping people extend trust beyond other people they know or know of and, crucially, also beyond the boundaries of their in-groups (Fabbri, 2022; Hruschka & Henrich, 2013; Rothstein & Stolle, 2008; Zucker, 1986). Conversely, when these sorts of structures do not function well, “institutional distrust strips away a basic sense that one is protected from exploitation, thus reducing trust between strangers, which is at the core of functioning societies” (van Prooijen, Spadaro, & Wang, 2022).

When strangers with different cultural backgrounds have to interact, their exchanges often lack the interpersonal or group-level trust necessary for cooperation. For instance, reliance on tightly knit social networks, where everyone knows everyone, is often impossible in larger, more diverse environments. Communities can compensate by relying more on group-based trust. For example, banks may loan money primarily within separate kin or ethnic groups (Zucker, 1986). However, the disruption of homogeneous social networks, combined with the increasing need to cooperate across group boundaries, creates incentives to develop and participate in broader sets of institutions. Institutions can facilitate cooperation, and individuals prefer institutions that help regulate interactions and foster trust.

People may seek to build institutions embodying principles, norms, rules, or procedures that foster group-based cooperation. In turn, these institutions shape decisions by altering the value people place on cooperative decisions. One study, for instance, examined these institutional and psychological dynamics over 30 rounds of a public goods game (Gürerk, Irlenbusch, & Rockenbach, 2006). Every round had three stages. First, participants chose whether they wanted to play that round with or without a “sanctioning institution” that would provide a means of rewarding or punishing other players based on their behavior in the game. Second, they played the public goods game with (and only with) other participants who had selected the same institutional structure for that round. After making their decisions (to contribute to the common pool), they then saw how much everyone else in their institutional context had contributed. Third, participants who had opted to play the round with a sanctioning institution could choose, for a price, to punish or reward other players.
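As a rough sketch of that three-stage round structure, consider the Python code below. The endowment, multiplier, institution labels, and sanction technology are illustrative assumptions rather than Gürerk and colleagues' exact parameters, and only the punishment channel is sketched (the study also allowed rewards).

# Sketch of one round: institution choice (stage 1, recorded in each
# player's 'institution' field), public goods contributions (stage 2),
# then costly punishment inside the sanctioning institution (stage 3).
# All parameter values are illustrative assumptions.

ENDOWMENT, MULTIPLIER = 20, 1.6     # assumed endowment and public-good multiplier
PUNISH_COST, PUNISH_IMPACT = 1, 3   # punisher pays 1 point to deduct 3

def play_round(players):
    """players: list of dicts with 'institution' ('SI' = sanctioning,
    'SFI' = sanction-free), 'contribution' (0..ENDOWMENT), and optional
    'punish' (mapping another player's index to punishment points)."""
    # Stage 2: the public goods game is played only with others who
    # chose the same institution.
    for inst in ("SI", "SFI"):
        members = [i for i, p in enumerate(players) if p["institution"] == inst]
        if not members:
            continue
        pool = sum(players[i]["contribution"] for i in members)
        share = MULTIPLIER * pool / len(members)
        for i in members:
            players[i]["payoff"] = ENDOWMENT - players[i]["contribution"] + share
    # Stage 3: costly punishment, available only in the sanctioning
    # institution once contributions are revealed.
    for p in players:
        if p["institution"] != "SI":
            continue
        for j, points in p.get("punish", {}).items():
            p["payoff"] -= PUNISH_COST * points
            players[j]["payoff"] -= PUNISH_IMPACT * points
    return players

# Example: three players in the sanctioning institution; player 0 pays
# to punish the free rider (player 2).
players = [
    {"institution": "SI", "contribution": 20, "punish": {2: 2}},
    {"institution": "SI", "contribution": 20},
    {"institution": "SI", "contribution": 0},
]
print([round(p["payoff"], 1) for p in play_round(players)])

In this single round the free rider still comes out ahead even after being punished, which is why the study's key result concerns dynamics: over repeated rounds, participants migrated toward the sanctioning institution, where high contributions could be stabilized.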

Wednesday, January 4, 2023

How social identity tunes moral cognition

Van Bavel, J. J., Packer, D., et al.
PsyArXiv.com (2022, November 18). 
https://doi.org/10.31234/osf.io/9efsb

Abstract

In this chapter, we move beyond the treatment of intuition and reason as competing systems and outline how social contexts, and especially social identities, allow people to flexibly “tune” their cognitive reactions to moral contexts—a process we refer to as ‘moral tuning’. Collective identities—identities based on shared group memberships—significantly influence judgments and decisions of many kinds, including in the moral domain. We explain why social identities influence all aspects of moral cognition, including processes traditionally classified as intuition and reasoning. We then explain how social identities tune preferences and goals, expectations, and which outcomes people care about. Finally, we propose directions for future research in moral psychology.

Social Identities Tune Preferences and Goals

Morally relevant situations often involve conflicts between choices in which the interests of different parties are in tension. Moral transgressions typically involve an agent putting their own desires ahead of the interests, needs, or rights of others, thus causing them harm (e.g., Gray et al., 2012), whereas acts worthy of moral praise usually involve an agent sacrificing self-interest for the sake of someone else or the greater good. Value-computation frameworks of cooperation model how much people weigh the interests of different parties (e.g., their own versus others’) in terms of social preferences (see Van Bavel et al., 2022). Social preference parameters can, for example, capture individual differences in how much people prioritize their own outcomes over others’ (e.g., pro-selfs versus pro-socials as indexed by social value orientation; Balliet et al., 2009). These preferences, along with social norms, inform the computations that underlie decisions to engage in selfish or pro-social behavior (Hackel, Wills, & Van Bavel, 2020).

We argue that social identity also influences social preferences, such that people tend to care more about outcomes incurred by in-group than out-group members (Tajfel & Turner, 1979; Van Bavel & Packer, 2021). For instance, highly identified group members appear to experience vicarious reward when they observe in-group (but not out-group) members experiencing positive outcomes, as indexed by activity in ventral striatum, a brain region implicated in hedonic reward (Hackel et al., 2017). Intergroup competition may exacerbate differences in concern for in-group versus out-group targets, causing people to feel empathy when in-group targets experience negative outcomes, but schadenfreude (pleasure in others’ pain) when out-group members experience these same events (Cikara et al., 2014). Shared social identities can also lead people to put collective interests ahead of their own individual interests in social dilemmas. For instance, making collective identities salient causes selfish individuals to contribute more to their group than when these same people were reminded of their individual self (De Cremer & Van Vugt, 1999). This shift in behavior was not necessarily because they were less selfish, but rather because their sense of self had shifted from the individual to the collective level.
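The weighting idea is easy to express as a toy value computation. The sketch below is minimal and the specific weights are illustrative assumptions, not estimated parameters: identity tunes how much the partner's outcome counts in the subjective value of each action.

# Sketch of identity-tuned social preferences: the weight placed on a
# partner's outcome depends on shared group membership. Weights and
# payoffs are illustrative assumptions.

def subjective_value(own_payoff, other_payoff, shared_identity,
                     w_ingroup=0.8, w_outgroup=0.1):
    # Identity tunes the weight on the partner's outcome.
    w = w_ingroup if shared_identity else w_outgroup
    return own_payoff + w * other_payoff

def choose(options, shared_identity):
    """options: mapping action -> (own_payoff, other_payoff)."""
    return max(options,
               key=lambda a: subjective_value(*options[a], shared_identity))

# A mini-dilemma: defecting pays the actor more but costs the partner.
options = {"cooperate": (3, 3), "defect": (5, 0)}
print(choose(options, shared_identity=True))   # cooperate: 3 + 0.8*3 = 5.4 beats 5.0
print(choose(options, shared_identity=False))  # defect:    3 + 0.1*3 = 3.3 loses to 5.0

The same payoffs yield cooperation with an in-group partner and defection with an out-group partner, purely because identity shifts the preference weight, which is the sense in which social identity "tunes" the value computation rather than replacing it.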

(cut)

Conclusion

For centuries, philosophers and scientists have debated the role of emotional intuition and reason in moral judgment. Thanks to theoretical and methodological developments over the past few decades, we believe it is time to move beyond these debates. We argue that social identity can tune the intuitions and reasoning processes that underlie moral cognition (Van Bavel et al., 2015). Extensive research has found that social identities have a significant influence on social and moral judgment and decision-making (Oakes et al., 1994; Van Bavel & Packer, 2021). This approach offers an important complement to other theories of moral psychology and suggests a powerful way to shift moral judgments and decisions—by changing identities and norms, rather than hearts and minds.