Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Reciprocity.

Friday, February 9, 2024

The Dual-Process Approach to Human Sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation

Capraro, Valerio (May 8, 2023).
Journal of Personality and Social Psychology.

Abstract

Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet, a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology, and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favours a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this paper highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.

Here is my summary:

This article proposes a dual-process approach to human sociality.  Capraro argues that there are two main systems that govern human social behavior: an intuitive system and a deliberative system. The intuitive system is fast, automatic, and often based on heuristics, or mental shortcuts. The deliberative system is slower, more effortful, and based on a more careful consideration of the evidence.

Capraro argues that the intuitive system plays a key role in cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology. This is because these behaviors are often necessary for self-preservation. For example, in order to avoid being harmed, people are naturally inclined to cooperate with others and avoid harming others. Similarly, in order to maintain positive relationships with others, people are inclined to be truthful and reciprocate favors.

The deliberative system plays a more important role in more complex social situations, such as when people need to make decisions that have long-term consequences or when they need to take into account the needs of others. In these cases, people are more likely to engage in careful consideration of the evidence and to weigh the different options before making a decision. Capraro concludes that the dual-process approach to human sociality provides a framework for understanding the complex cognitive basis of human social behavior. This framework can be used to explain a wide range of social phenomena, from cooperation and altruism to truth-telling and deontology.

Friday, August 19, 2022

Too cynical to reconnect: Cynicism moderates the effect of social exclusion on prosociality through empathy

B. K. C. Choy, K. Eom, & N. P. Li
Personality and Individual Differences
Volume 178, August 2021, 110871

Abstract

Extant findings are mixed on whether social exclusion impacts prosociality. We propose one factor that may underlie the mixed results: Cynicism. Specifically, cynicism may moderate the exclusion-prosociality link by influencing interpersonal empathy. Compared to less cynical individuals, we expected highly cynical individuals who were excluded to experience less empathy and, consequently, less prosocial behavior. Using an online ball-tossing game, participants were randomly assigned to an exclusion or inclusion condition. Consistent with our predictions, the effect of social exclusion on prosociality through empathy was contingent on cynicism, such that only less-cynical individuals responded to exclusion with greater empathy, which, in turn, was associated with higher levels of prosocial behavior. We further showed this effect to hold for cynicism, but not other similar traits typically characterized by high disagreeableness. Findings contribute to the social exclusion literature by suggesting a key variable that may moderate social exclusion's impact on resultant empathy and prosocial behavior and are consistent with the perspective that people who are excluded try to not only become included again but to establish alliances characterized by reciprocity.

From the Discussion

While others have proposed that empathy may be reflexively inhibited upon exclusion (DeWall & Baumeister, 2006; Twenge et al., 2007), our findings indicate that this process of inhibition—at least for empathy—may be more flexible than previously thought. If reflexive, individuals would have shown a similar level of empathy regardless of cynicism. That highly- and less-cynical individuals displayed different levels of empathy indicates that some other processes are in play. Our interpretation is that the process through which empathy is exhibited or inhibited may depend on one’s appraisals of the physical and social situation. 

Importantly, unlike cynicism, other similarly disagreeable dispositional traits such as Machiavellianism, psychopathy, and SDO (Social Dominance Orientation) did not modulate the empathy-mediated link between social exclusion and prosociality. This suggests that cynicism is conceptually different from other traits of a seemingly negative nature. Indeed, whereas cynics may hold a negative view of the intentions of others around them, Machiavellians are characterized by a negative view of others’ competence and a pragmatic and strategic approach to social interactions (Jones, 2016). Similarly, whereas cynics view others’ emotions as ingenuine, psychopathic individuals are further distinguished by their high levels of callousness and impulsivity (Paulhus, 2014). Likewise, whereas cynics may view the world as inherently competitive, they may not display the same preference for hierarchy that high-SDO individuals do (Ho et al., 2015). Thus, despite the similarities between these traits, our findings affirm their substantive differences from cynicism.

Monday, July 11, 2022

Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality

André, J., Debove, S., Fitouchi, L., & Baumard, N.
(2022, May 24). https://doi.org/10.31234/osf.io/2hxgu

Abstract

Our goal in this paper is to use an evolutionary approach to explain the existence and design-features of human moral cognition. Our approach is based on the premise that human beings are under selection to appear as good cooperative investments. Hence they face a trade-off between maximizing the immediate gains of each social interaction, and maximizing its long-term reputational effects. In a simple 2-player model, we show that this trade-off leads individuals to maximize the generalized Nash product at evolutionary equilibrium, i.e., to behave according to the generalized Nash bargaining solution. We infer from this result the theoretical proposition that morality is a domain-general calculator of this bargaining solution. We then proceed to describe the generic consequences of this approach: (i) everyone in a social interaction deserves to receive a net benefit, (ii) people ought to act in ways that would maximize social welfare if everyone was acting in the same way, (iii) all domains of social behavior can be moralized, (iv) moral duties can seem both principled and non-contractual, and (v) morality shall depend on the context. Next, we apply the approach to some of the main areas of social life and show that it allows us to explain, with a single logic, the entire set of what are generally considered to be different moral domains. Lastly, we discuss the relationship between this account of morality and other evolutionary accounts of morality and cooperation.

From the “The psychological signature of morality: the right, the wrong and the duty” section

Cooperating for the sake of reputation always entails that, at some point along social interactions, one is in a position to access benefits, but one decides to give them up, not for a short-term instrumental purpose, but for the long-term aim of having a good reputation. And, by this, we mean precisely: the long-term aim of being considered someone with whom cooperation ends up bringing a net benefit rather than a net cost, not only in the eyes of a particular partner, but in the eyes of any potential future partner. This specific and universal property of reputation-based cooperation explains the specific and universal phenomenology of moral decisions.

To understand, one must distinguish what people do in practice, and what they think is right to do. In practice, people may sometimes cheat, i.e., not respect the contract. They may do so conditionally on the specific circumstances, if they evaluate that the actual reputational benefits of doing their duty is lower than the immediate cost (e.g., if their cheating has a chance to go unnoticed). This should not – and in fact does not (Knoch et al., 2009; Kogut, 2012; Sheskin et al., 2014; Smith et al., 2013) – change their assessment of what would have been the right thing to do. This assessment can only be absolute, in the sense that it depends only on what one needs to do to ensure that the interaction ends up bringing a net benefit to one’s partner rather than a cost, i.e., to respect the contract, and is not affected by the actual reputational stake of the specific interaction. Or, to put it another way, people must calculate their moral duty by thinking “If someone was looking at me, what would they think?”, regardless of whether anyone is actually looking at them.
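
As a rough illustration of the generalized Nash product mentioned in the abstract, here is a minimal Python sketch of what maximizing it amounts to for a two-player division of a fixed surplus. This is not the authors' model; the surplus size, disagreement payoffs, and bargaining weights are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the paper's model): find the split of a fixed surplus
# that maximizes the generalized Nash product
#     (u1 - d1)^w1 * (u2 - d2)^w2,
# where d1, d2 are disagreement payoffs and w1, w2 are bargaining weights.
# Surplus, disagreement points, and weights are assumed values for illustration.

def generalized_nash_split(surplus=10.0, d=(0.0, 0.0), w=(1.0, 1.0), steps=1000):
    best_split, best_product = None, float("-inf")
    for i in range(steps + 1):
        x1 = surplus * i / steps           # share to player 1
        x2 = surplus - x1                  # share to player 2
        g1, g2 = x1 - d[0], x2 - d[1]      # gains relative to disagreement
        if g1 <= 0 or g2 <= 0:
            continue                       # both players must gain something
        product = (g1 ** w[0]) * (g2 ** w[1])
        if product > best_product:
            best_product, best_split = product, (x1, x2)
    return best_split

# Equal weights and equal disagreement points give the even split;
# raising player 1's weight shifts the division toward player 1.
print(generalized_nash_split())              # -> (5.0, 5.0)
print(generalized_nash_split(w=(2.0, 1.0)))  # -> roughly (6.67, 3.33)
```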

Friday, December 10, 2021

How social relationships shape moral wrongness judgments

Earp, B.D., McLoughlin, K.L., Monrad, J.T. et al. 
Nat Commun 12, 5776 (2021).

Abstract

Judgments of whether an action is morally wrong depend on who is involved and the nature of their relationship. But how, when, and why social relationships shape moral judgments is not well understood. We provide evidence to address these questions, measuring cooperative expectations and moral wrongness judgments in the context of common social relationships such as romantic partners, housemates, and siblings. In a pre-registered study of 423 U.S. participants nationally representative for age, race, and gender, we show that people normatively expect different relationships to serve cooperative functions of care, hierarchy, reciprocity, and mating to varying degrees. In a second pre-registered study of 1,320 U.S. participants, these relationship-specific cooperative expectations (i.e., relational norms) enable highly precise out-of-sample predictions about the perceived moral wrongness of actions in the context of particular relationships. In this work, we show that this ‘relational norms’ model better predicts patterns of moral wrongness judgments across relationships than alternative models based on genetic relatedness, social closeness, or interdependence, demonstrating how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the General Discussion

From a theoretical perspective, one aspect of our current account that requires further attention is the reciprocity function. In contrast with the other three functions considered, relationship-specific prescriptions for reciprocity did not significantly predict moral judgments for reciprocity violations. Why might this be so? One possibility is that the model we tested did not distinguish between two different types of reciprocity. In some relationships, such as those between strangers, acquaintances, or individuals doing business with one another, each party tracks the specific benefits contributed to, and received from, the other. In these relationships, reciprocity thus takes a tit-for-tat form in which benefits are offered and accepted on a highly contingent basis. This type of reciprocity is transactional, in that resources are provided, not in response to a real or perceived need on the part of the other, but rather, in response to the past or expected future provision of a similarly valued resource from the cooperation partner. In this, it relies on an explicit accounting of who owes what to whom, and is thus characteristic of so-called “exchange” relationships.

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work on moral judgments in relational context should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Saturday, September 25, 2021

The prefrontal cortex and (uniquely) human cooperation: a comparative perspective

Zoh, Y., Chang, S.W.C. & Crockett, M.J.
Neuropsychopharmacol. (2021). 

Abstract

Humans have an exceptional ability to cooperate relative to many other species. We review the neural mechanisms supporting human cooperation, focusing on the prefrontal cortex. One key feature of human social life is the prevalence of cooperative norms that guide social behavior and prescribe punishment for noncompliance. Taking a comparative approach, we consider shared and unique aspects of cooperative behaviors in humans relative to nonhuman primates, as well as divergences in brain structure that might support uniquely human aspects of cooperation. We highlight a medial prefrontal network common to nonhuman primates and humans supporting a foundational process in cooperative decision-making: valuing outcomes for oneself and others. This medial prefrontal network interacts with lateral prefrontal areas that are thought to represent cooperative norms and modulate value representations to guide behavior appropriate to the local social context. Finally, we propose that more recently evolved anterior regions of prefrontal cortex play a role in arbitrating between cooperative norms across social contexts, and suggest how future research might fruitfully examine the neural basis of norm arbitration.

Conclusion

The prefrontal cortex, in particular its more anterior regions, has expanded dramatically over the course of human evolution. In tandem, the scale and scope of human cooperation has dramatically outpaced its counterparts in nonhuman primate species, manifesting as complex systems of moral codes that guide normative behaviors even in the absence of punishment or repeated interactions. Here, we provided a selective review of the neural basis of human cooperation, taking a comparative approach to identify the brain systems and social behaviors that are thought to be unique to humans. Humans and nonhuman primates alike cooperate on the basis of kinship and reciprocity, but humans are unique in their abilities to represent shared goals and self-regulate to comply with and enforce cooperative norms on a broad scale. We highlight three prefrontal networks that contribute to cooperative behavior in humans: a medial prefrontal network, common to humans and nonhuman primates, that values outcomes for self and others; a lateral prefrontal network that guides cooperative goal pursuit by modulating value representations in the context of local norms; and an anterior prefrontal network that we propose serves uniquely human abilities to reflect on one’s own behavior, commit to shared social contracts, and arbitrate between cooperative norms across diverse social contexts. We suggest future avenues for investigating cooperative norm arbitration and how it is implemented in prefrontal networks.

Tuesday, September 7, 2021

Vaccination as a social contract

Korn, L., et al.
PNAS, June 2020, 117 (26), 14890-14899.
DOI: 10.1073/pnas.1919666117

Abstract

Most vaccines protect both the vaccinated individual and the society by reducing the transmission of infectious diseases. In order to eliminate infectious diseases, individuals need to consider social welfare beyond mere self-interest—regardless of ethnic, religious, or national group borders. It has therefore been proposed that vaccination poses a social contract in which individuals are morally obliged to get vaccinated. However, little is known about whether individuals indeed act upon this social contract. If so, vaccinated individuals should reciprocate by being more generous to a vaccinated other. On the contrary, if the other doesn’t vaccinate and violates the social contract, generosity should decline. Three preregistered experiments investigated how a person’s own vaccination behavior, others’ vaccination behavior, and others’ group membership influenced a person’s generosity toward respective others. The experiments consistently showed that especially compliant (i.e., vaccinated) individuals showed less generosity toward nonvaccinated individuals. This effect was independent of the others’ group membership, suggesting an unconditional moral principle. An internal metaanalysis (n = 1,032) confirmed the overall social contract effect. In a fourth experiment (n = 1,212), this pattern was especially pronounced among vaccinated individuals who perceived vaccination as a moral obligation. It is concluded that vaccination is a social contract in which cooperation is the morally right choice. Individuals act upon the social contract, and more so the stronger they perceive it as a moral obligation. Emphasizing the social contract could be a promising intervention to increase vaccine uptake, prevent free riding, and, eventually, support the elimination of infectious diseases.

Significance

Vaccines support controlling and eliminating infectious diseases. As most vaccines protect both vaccinated individuals and the society, vaccination is a prosocial act. Its success relies on a large number of contributing individuals. We study whether vaccination is a social contract where individuals reciprocate and reward others who comply with the contract and punish those who don’t. Four preregistered experiments demonstrate that vaccinated individuals indeed show less generosity toward nonvaccinated individuals who violate the social contract. This effect is independent of whether the individuals are members of the same or different social groups. Thus, individuals’ behavior follows the rules of a social contract, which provides a valuable basis for future interventions aiming at increasing vaccine uptake by emphasizing this social contract.

Wednesday, November 11, 2020

How social relationships shape moral judgment

Earp, B. D., et al. (2020, September 18).

Abstract

Our judgments of whether an action is morally wrong depend on who is involved and their relationship to one another. But how, when, and why do social relationships shape such judgments? Here we provide new theory and evidence to address this question. In a pre-registered study of U.S. participants (n = 423, nationally representative for age, race and gender), we show that particular social relationships (like those between romantic partners, housemates, or siblings) are normatively expected to serve distinct cooperative functions – including care, reciprocity, hierarchy, and mating – to different degrees. In a second pre-registered study (n = 1,320) we show that these relationship-specific norms, in turn, influence the severity of moral judgments concerning the wrongness of actions that violate cooperative expectations. These data provide evidence for a unifying theory of relational morality that makes highly precise out-of-sample predictions about specific patterns of moral judgments across relationships. Our findings show how the perceived morality of actions depends not only on the actions themselves, but also on the relational context in which those actions occur.

From the Discussion

In other relationships, by contrast, such as those between friends, family members, or romantic partners – so-called “communal” relationships – reciprocity takes a different form: that of mutually expected responsiveness to one another’s needs. In this form of reciprocity, each party tracks the other’s needs (rather than specific benefits provided) and strives to meet these needs to the best of their respective abilities, in proportion to the degree of responsibility each has assumed for the other’s welfare. Future work should distinguish between these two types of reciprocity: that is, mutual care-based reciprocity in communal relationships (when both partners have similar needs and abilities) and tit-for-tat reciprocity between “transactional” cooperation partners who have equal standing or claim on a resource.

Sunday, August 16, 2020

Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective

Zhu, Q., Williams, T., Jackson, B. et al.
Sci Eng Ethics (2020).
https://doi.org/10.1007/s11948-020-00246-w

Abstract

Empirical studies have suggested that language-capable robots have the persuasive power to shape the shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. By drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes to respond to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered as one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of the Confucian theories for designing socially integrated and morally competent robots.

Wednesday, April 1, 2020

The Ethics of Quarantine

Ross Upshur
Virtual Mentor. 2003;5(11):393-395.


Here are two excerpts:

There are 2 independent ethical considerations to consider here: whether the concept of quarantine is justified ethically and whether it is effective. It is also important to make a clear distinction between quarantine and isolation. Quarantine refers to the separation of those exposed individuals who are not yet symptomatic for a period of time (usually the known incubation period of the suspected pathogen) to determine whether they will develop symptoms. Quarantine achieves 2 goals. First, it stops the chain of transmission because it is less possible to infect others if one is not in social circulation. Second, it allows the individuals under surveillance to be identified and directed toward appropriate care if they become symptomatic. This is more important in diseases where there is presymptomatic shedding of virus. Isolation, on the other hand, is keeping those who have symptoms from circulation in general populations.

Justification of quarantine and quarantine laws stems from a general moral obligation to prevent harm to (infection of) others if this can be done. Most democracies have public health laws that do permit quarantine. Even though quarantine is a curtailment of civil liberties, it can be broadly justified if several criteria can be met.

(cut)

Secondly, the proportionality, or least-restrictive-means, principle should be observed. This holds that public health authorities should use the least restrictive measures proportional to the goal of achieving disease control. This would indicate that quarantine be made voluntary before more restrictive means and sanctions such as mandatory orders or surveillance devices, home cameras, bracelets, or incarceration are contemplated. It is striking to note that in the Canadian SARS outbreak in the Greater Toronto area, approximately 30,000 persons were quarantined at some time. Toronto Public Health reports writing only 22 orders for mandatory detainment [3]. Even if the report is a tenfold underestimate, the remaining instances of voluntary quarantine constitute an impressive display of civic-mindedness.

Thirdly, reciprocity must be upheld. If society asks individuals to curtail their liberties for the good of others, society has a reciprocal obligation to assist them in the discharge of their obligations. That means providing individuals with adequate food and shelter and psychological support, accommodating them in their workplaces, and not discriminating against them. They should suffer no penalty on account of discharging their obligations to society.

The info is here.

Saturday, February 29, 2020

Does Morality Matter? Depends On Your Definition Of Right And Wrong

Hannes Leroy
forbes.com
Originally posted January 30, 2020

Here is an excerpt:

For our research into morality we reviewed some 300 studies on moral leadership. We discovered that morality is – generally speaking – a good thing for leadership effectiveness but it is also a double-edged sword about which you need to be careful and smart. 

To do this, there are three basic approaches.

First, followers can be inspired by a leader who advocates the highest common good for all and is motivated to contribute to that common good from an expectation of reciprocity (servant leadership; consequentialism).

Second, followers can also be inspired by a leader who advocates the adherence to a set of standards or rules and is motivated to contribute to the clarity and safety this structure imposes for an orderly society (ethical leadership; deontology).

Third and finally, followers can also be inspired by a leader who advocates for moral freedom and corresponding responsibility and is motivated to contribute to this system in the knowledge that others will afford them their own moral autonomy (authentic leadership; virtue ethics).

The info is here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.
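
To make these strategies concrete, here is a minimal sketch of the kind of utility functions commonly used to formalize them for a trustee deciding how much to send back in a Trust Game: guilt aversion as a penalty for returning less than the investor is believed to expect, and inequity aversion (in the Fehr-Schmidt spirit) as a penalty for ending up ahead of the investor. The game amounts, beliefs, and parameter values are illustrative assumptions, not the authors' fitted model.

```python
# Minimal sketch (not the paper's exact model) of how a trustee in a Trust Game
# might choose a back-transfer under two moral strategies. The endowment,
# multiplier, beliefs, and parameters below are assumptions for illustration.

def guilt_averse_utility(returned, pot, expected_return, theta=1.5):
    # Guilt aversion: disutility from returning less than the investor
    # is believed to expect back.
    payoff = pot - returned
    guilt = max(0.0, expected_return - returned)
    return payoff - theta * guilt

def inequity_averse_utility(returned, pot, investor_payoff, beta=0.8):
    # Inequity aversion (advantageous inequality only, Fehr-Schmidt style):
    # disutility from ending up with more than the investor.
    payoff = pot - returned
    advantage = max(0.0, payoff - (investor_payoff + returned))
    return payoff - beta * advantage

def best_return(utility, pot, **kwargs):
    options = [x * 0.5 for x in range(int(pot * 2) + 1)]   # 0, 0.5, ..., pot
    return max(options, key=lambda r: utility(r, pot, **kwargs))

# Suppose the investor sent 5 of a 10-point endowment, tripled to a pot of 15,
# keeping 5, and is believed to expect 7.5 back.
print(best_return(guilt_averse_utility, pot=15, expected_return=7.5))   # 7.5
print(best_return(inequity_averse_utility, pot=15, investor_payoff=5))  # 5.0
```

Depending on the amounts and beliefs, the two utility functions can prescribe either the same or different back-transfers, which is what allows choices (and, in the study, neural activation patterns) to discriminate between the strategies.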

The research is here.

Friday, March 8, 2019

Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies

Oliver Scott Curry, Daniel Austin Mullins, and Harvey Whitehouse
Current Anthropology
The paper is here.

Abstract

What is morality? And to what extent does it vary around the world? The theory of “morality-as-cooperation” argues that morality consists of a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Morality-as-cooperation draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions, and it predicts that specific forms of cooperative behavior—including helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession—will be considered morally good wherever they arise, in all cultures. To test these predictions, we investigate the moral valence of these seven cooperative behaviors in the ethnographic records of 60 societies. We find that the moral valence of these behaviors is uniformly positive, and the majority of these cooperative morals are observed in the majority of cultures, with equal frequency across all regions of the world. We conclude that these seven cooperative behaviors are plausible candidates for universal moral rules, and that morality-as-cooperation could provide the unified theory of morality that anthropology has hitherto lacked.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Thursday, June 14, 2018

Sex robots are coming. We might even fall in love with them.

Sean Illing
www.vox.com
Originally published May 11, 2018

Here is an excerpt:

Sean Illing: Your essay poses an interesting question: Is mutual love with a robot possible? What’s the answer?

Lily Eva Frank:

Our essay tried to explore some of the core elements of romantic love that people find desirable, like the idea of being a perfect match for someone or the idea that we should treasure the little traits that make someone unique, even those annoying flaws or imperfections.

The key thing is that we love someone because there’s something about being with them that matters, something particular to them that no one else has. And we make a commitment to that person that holds even when they change, like aging, for example.

Could a robot do all these things? Our answer is, in theory, yes. But only a very advanced form of artificial intelligence could manage it because it would have to do more than just perform as if it were a person doing the loving. The robot would have to have feelings and internal experiences. You might even say that it would have to be self-aware.

But that would leave open the possibility that the sex bot might not want to have sex with you, which sort of defeats the purpose of developing these technologies in the first place.

(cut)

I think people are weird enough that it is probably possible for them to fall in love with a cat or a dog or a machine that doesn’t reciprocate the feelings. A few outspoken proponents of sex dolls and robots claim they love them. Check out the testimonials page on the websites of sex doll manufacturers; they say things like, “Three years later, I love her as much as the first day I met her.” I don’t want to dismiss these people’s reports.

The information is here.

Tuesday, June 5, 2018

Norms and the Flexibility of Moral Action

Oriel FeldmanHall, Jae-Young Son, and Joseph Heffner
Preprint

ABSTRACT

A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms: fairness, altruism, trust, and cooperation, and consider how a single social instrument—reciprocity—underpins compliance to these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.

The research is here.

Monday, April 30, 2018

Social norm complexity and past reputations in the evolution of cooperation

Fernando P. Santos, Francisco C. Santos & Jorge M. Pacheco
Nature volume 555, pages 242–245 (08 March 2018)

Abstract

Indirect reciprocity is the most elaborate and cognitively demanding of all known cooperation mechanisms, and is the most specifically human because it involves reputation and status. By helping someone, individuals may increase their reputation, which may change the predisposition of others to help them in future. The revision of an individual’s reputation depends on the social norms that establish what characterizes a good or bad action and thus provide a basis for morality. Norms based on indirect reciprocity are often sufficiently complex that an individual’s ability to follow subjective rules becomes important, even in models that disregard the past reputations of individuals, and reduce reputations to either ‘good’ or ‘bad’ and actions to binary decisions. Here we include past reputations in such a model and identify the key pattern in the associated norms that promotes cooperation. Of the norms that comply with this pattern, the one that leads to maximal cooperation (greater than 90 per cent) with minimum complexity does not discriminate on the basis of past reputation; the relative performance of this norm is particularly evident when we consider a ‘complexity cost’ in the decision process. This combination of high cooperation and low complexity suggests that simple moral principles can elicit cooperation even in complex environments.
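
For readers new to this literature, the sketch below shows the basic machinery such models build on: binary reputations, a social norm that assigns a new reputation based on what an observer sees, and an action rule that conditions cooperation on the recipient's reputation. It uses the well-known "stern judging" norm purely as an example; the paper's contribution concerns norms that also take past reputations into account, and the population size, error rate, and number of interactions here are arbitrary assumptions.

```python
# Generic sketch of an indirect-reciprocity model with binary reputations.
# This is illustrative machinery, not the specific norms analyzed in the paper.
import random

def stern_judging(action, recipient_rep):
    # Example norm: cooperating with 'G'ood recipients and defecting against
    # 'B'ad ones earns a good reputation; anything else earns a bad one.
    return 'G' if (action, recipient_rep) in {('C', 'G'), ('D', 'B')} else 'B'

def discriminator(recipient_rep):
    # Action rule: cooperate only with recipients in good standing.
    return 'C' if recipient_rep == 'G' else 'D'

def simulate(n_agents=50, interactions=5000, assessment_error=0.01):
    reps = ['G'] * n_agents
    cooperative_acts = 0
    for _ in range(interactions):
        donor, recipient = random.sample(range(n_agents), 2)
        action = discriminator(reps[recipient])
        cooperative_acts += (action == 'C')
        new_rep = stern_judging(action, reps[recipient])
        if random.random() < assessment_error:   # occasional misjudgment
            new_rep = 'B' if new_rep == 'G' else 'G'
        reps[donor] = new_rep
    return cooperative_acts / interactions

print(f"cooperation rate under stern judging: {simulate():.2f}")
```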

The article is here.

Wednesday, April 4, 2018

Simple moral code supports cooperation

Charles Efferson & Ernst Fehr
Nature
Originally posted March 7, 2018

The evolution of cooperation hinges on the benefits of cooperation being shared among those who cooperate. In a paper in Nature, Santos et al. investigate the evolution of cooperation using computer-based modelling analyses, and they identify a rule for moral judgements that provides an especially powerful system to drive cooperation.

Cooperation can be defined as a behaviour that is costly to the individual providing help, but which provides a greater overall societal benefit. For example, if Angela has a sandwich that is of greater value to Emmanuel than to her, Angela can increase total societal welfare by giving her sandwich to Emmanuel. This requires sacrifice on her part if she likes sandwiches. Reciprocity offers a way for benefactors to avoid helping uncooperative individuals in such situations. If Angela knows Emmanuel is cooperative because she and Emmanuel have interacted before, her reciprocity is direct. If she has heard from others that Emmanuel is a cooperative person, her reciprocity is indirect — a mechanism of particular relevance to human societies.

A strategy is a rule that a donor uses to decide whether or not to cooperate, and the evolution of reciprocal strategies that support cooperation depends crucially on the amount of information that individuals process. Santos and colleagues develop a model to assess the evolution of cooperation through indirect reciprocity. The individuals in their model can consider a relatively large amount of information compared with that used in previous studies.

The review is here.

Monday, October 23, 2017

Reciprocity Outperforms Conformity to Promote Cooperation

Angelo Romano, Daniel Balliet
Psychological Science
First Published September 6, 2017

Abstract

Evolutionary psychologists have proposed two processes that could give rise to the pervasiveness of human cooperation observed among individuals who are not genetically related: reciprocity and conformity. We tested whether reciprocity outperformed conformity in promoting cooperation, especially when these psychological processes would promote a different cooperative or noncooperative response. To do so, across three studies, we observed participants’ cooperation with a partner after learning (a) that their partner had behaved cooperatively (or not) on several previous trials and (b) that their group members had behaved cooperatively (or not) on several previous trials with that same partner. Although we found that people both reciprocate and conform, reciprocity has a stronger influence on cooperation. Moreover, we found that conformity can be partly explained by a concern about one’s reputation—a finding that supports a reciprocity framework.

The article is here.

Saturday, October 14, 2017

Who Sees What as Fair? Mapping Individual Differences in Valuation of Reciprocity, Charity, and Impartiality

Laura Niemi and Liane Young
Social Justice Research

When scarce resources are allocated, different criteria may be considered: impersonal allocation (impartiality), the needs of specific individuals (charity), or the relational ties between individuals (reciprocity). In the present research, we investigated how people’s perspectives on fairness relate to individual differences in interpersonal orientations. Participants evaluated the fairness of allocations based on (a) impartiality (b) charity, and (c) reciprocity. To assess interpersonal orientations, we administered measures of dispositional empathy (i.e., empathic concern and perspective-taking) and Machiavellianism. Across two studies, Machiavellianism correlated with higher ratings of reciprocity as fair, whereas empathic concern and perspective taking correlated with higher ratings of charity as fair. We discuss these findings in relation to recent neuroscientific research on empathy, fairness, and moral evaluations of resource allocations.

The article is here.

Thursday, July 16, 2015

It Pays to Be Nice

By Olga Khazan
The Atlantic
Originally published June 23, 2015

Here is an excerpt:

The conclusions of Rand’s studies support corporate do-gooders. Judging by his research, you should be nice even if you don’t trust the other person. In fact, you should keep on being nice even if the other person screws you over.

In one experiment, he found that people playing an unpredictable prisoner’s-dilemma type game benefitted from being lenient—forgiving their partner for acting against them. The same holds true in the business environment, which can be similarly “noisy,” as economists say. Sometimes, when someone seems to be trying to undermine you, they really are. But other times, it’s just an accident. If someone doesn’t credit you for a big idea in a meeting, you can’t know if he or she just forgot, or if it was an intentional slight. According to Rand’s research, you shouldn’t, say, turn around and tattle to the boss about that person’s chronic tardiness—at least not until he or she sabotages you at least a couple more times.

“If someone did something that hurt me, and I get pissed, and I screw them over, that destroys that relationship over a mistake,” Rand said. And losing allies, especially in a cooperative environment, can be costly. In his studies, “the strategy that earns the most money is giving someone a pass and letting the person take advantage of you two or three times.”
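
The kind of leniency Rand describes can be illustrated with a small simulation of a noisy repeated prisoner's dilemma, sketched below: a strict reciprocator (tit-for-tat) is compared with a forgiving variant (tit-for-two-tats) that retaliates only after two defections in a row. The payoff matrix, noise level, and match length are illustrative assumptions, not the parameters used in Rand's studies.

```python
# Illustrative sketch of lenient vs. strict reciprocity in a noisy repeated
# prisoner's dilemma. Payoffs, noise, and match length are assumed values.
import random

PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(partner_history):
    return partner_history[-1] if partner_history else 'C'

def tit_for_two_tats(partner_history):
    # Lenient: give isolated defections a pass; retaliate only after two in a row.
    return 'D' if partner_history[-2:] == ['D', 'D'] else 'C'

def play(strategy_a, strategy_b, rounds=50, noise=0.1):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        # Noise: an intended cooperation sometimes comes out as a defection.
        if move_a == 'C' and random.random() < noise:
            move_a = 'D'
        if move_b == 'C' and random.random() < noise:
            move_b = 'D'
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def average_scores(strategy_a, strategy_b, matches=200):
    totals = [0, 0]
    for _ in range(matches):
        a, b = play(strategy_a, strategy_b)
        totals[0] += a
        totals[1] += b
    return totals[0] / matches, totals[1] / matches

# Averaged over many matches, the lenient pairing typically sustains more
# cooperation (and higher payoffs) than two strict reciprocators under noise.
print("TFT vs TFT:      ", average_scores(tit_for_tat, tit_for_tat))
print("Lenient vs TFT:  ", average_scores(tit_for_two_tats, tit_for_tat))
```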

The entire article is here.