Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Altruism. Show all posts

Monday, November 4, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third parties judge those behaviors has numerous social consequences: People judged as behaving immorally can be socially ostracized, less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. On the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting—how rights and wrongs add up in moral judgment.

Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019


General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Because moral behavior is motivated largely by its expected reputational consequences, studying the psychology of third-party reputational judgments is key for understanding people’s behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now “spend” the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Monday, September 16, 2019

Increasing altruistic and cooperative behaviour with simple moral nudges

Valerio Capraro, Glorianna Jagfeld,
Rana Klein, Mathijs Mul & Iris van de Pol
Nature.com
Published Online August 15, 2019

The conflict between pro-self and pro-social behaviour is at the core of many key problems of our time, such as the reduction of air pollution and the redistribution of scarce resources. For the well-being of our societies, it is thus crucial to find mechanisms to promote pro-social choices over egoistic ones. Particularly important, because they are cheap and easy to implement, are those mechanisms that can change people’s behaviour without forbidding any options or significantly changing their economic incentives, the so-called “nudges”. Previous research has found that moral nudges (e.g., making norms salient) can promote pro-social behaviour. However, little is known about whether their effect persists over time and spills across contexts. This question is key in light of research showing that pro-social actions are often followed by selfish actions, suggesting that some moral manipulations may backfire. Here we present a class of simple moral nudges that have a large positive impact on pro-sociality. In Studies 1–4 (total N = 1,400), we use economic games to demonstrate that asking subjects to self-report “what they think is the morally right thing to do” increases pro-sociality not only in the choice immediately after, but also in subsequent choices, and even when the social context changes. In Study 5, we explore whether moral nudges promote charity donations to humanitarian organisations in a large (N = 1,800) crowdfunding campaign. We find that, in this context, moral nudges increase donations by about 44 percent.

The research is here.

Thursday, April 18, 2019

Why are smarter individuals more prosocial? A study on the mediating roles of empathy and moral identity

Qingke Guo, Peng Sun, Minghang Cai, Xiling Zhang, & Kexin Song
Intelligence
Volume 75, July–August 2019, Pages 1-8

Abstract

The purpose of this study is to examine whether there is an association between intelligence and prosocial behavior (PSB), and whether this association is mediated by empathy and moral identity. Chinese versions of Raven's Standard Progressive Matrices, the Self-Report Altruism Scale Distinguished by the Recipient, the Interpersonal Reactivity Index, and the Internalization subscale of the Self-Importance of Moral Identity Scale were administered to 518 undergraduate students (254 female; mean age = 19.79). The results showed that fluid intelligence was significantly correlated with self-reported PSB; moral identity, perspective taking, and empathic concern could account for the positive association between intelligence and PSB; and the mediation effects of moral identity and empathy were consistent across gender.

The article is here.

Here is part of the Discussion:

This is consistent with previous findings that highly intelligent individuals are more likely to engage in prosocial and civic activities (Aranda & Siyaranamual, 2014; Bekkers & Wiepking, 2011; Wiepking & Maas, 2009). One explanation of the intelligence-prosocial association is that highly intelligent individuals are better able to perceive and understand the desires and feelings of the person in need, and are quicker in making proper decisions and figuring out which behaviors should be enacted (Eisenberg et al., 2015; Gottfredson, 1997). Another explanation is that highly intelligent individuals are smart enough to realize that PSB is rewarding in the long run. PSB is rewarding because the helper is more likely to be selected as a coalition partner or a mate (Millet & Dewitte, 2007; Zahavi, 1977).

Sunday, March 17, 2019

Actions Speak Louder Than Outcomes in Judgments of Prosocial Behavior

Daniel A. Yudkin, Annayah M. B. Prosser, and Molly J. Crockett
Emotion (2018). Advance online publication.
http://dx.doi.org/10.1037/emo0000514

Recently proposed models of moral cognition suggest that people's judgments of harmful acts are influenced by their consideration both of those acts' consequences ("outcome value"), and of the feeling associated with their enactment ("action value"). Here we apply this framework to judgments of prosocial behavior, suggesting that people's judgments of the praiseworthiness of good deeds are determined both by the benefit those deeds confer to others and by how good they feel to perform. Three experiments confirm this prediction. After developing a new measure to assess the extent to which praiseworthiness is influenced by action and outcome values, we show how these factors make significant and independent contributions to praiseworthiness. We also find that people are consistently more sensitive to action than to outcome value in judging the praiseworthiness of good deeds, but not harmful deeds. This observation echoes the finding that people are often insensitive to outcomes in their giving behavior. Overall, this research tests and validates a novel framework for understanding moral judgment, with implications for the motivations that underlie human altruism.

Here is an excerpt:

On a broader level, past work has suggested that judging the wrongness of harmful actions involves a process of “evaluative simulation,” whereby we evaluate the moral status of another’s action by simulating the affective response that we would experience performing the action ourselves (Miller et al., 2014). Our results are consistent with the possibility that evaluative simulation also plays a role in judging the praiseworthiness of helpful actions. If people evaluate helpful actions by simulating what it feels like to perform the action, then we would expect to see similar biases in moral evaluation as those that exist for moral action. Previous work has shown that individuals often do not act to maximize the benefits that others receive, but instead to maximize the good feelings associated with performing good deeds (Berman et al., 2018; Gesiarz & Crockett, 2015; Ribar & Wilhelm, 2002). Thus, the asymmetry in moral evaluation seen in the present studies may reflect a correspondence between first-person moral decision-making and third-person moral evaluation.

Download the pdf here.


Friday, February 8, 2019

Empathy is hard work: People choose to avoid empathy because of its cognitive costs

Daryl Cameron, Cendri Hutcherson, Amanda Ferguson, and others
PsyArXiv Preprints
Last edited January 25, 2019

Abstract

Empathy is considered a virtue, yet fails in many situations, leading to a basic question: when given a choice, do people avoid empathy? And if so, why? Whereas past work has focused on material and emotional costs of empathy, here we examined whether people experience empathy as cognitively taxing and costly, leading them to avoid it. We developed the Empathy Selection Task, which uses free choices to assess desire to empathize. Participants make a series of binary choices, selecting situations that lead them to engage in empathy or an alternative course of action. In each of 11 studies (N=1,204) and a meta-analysis, we found a robust preference to avoid empathy, which was associated with perceptions of empathy as effortful, aversive, and inefficacious. Experimentally increasing empathy efficacy eliminated empathy avoidance, suggesting cognitive costs directly cause empathy choice. When given the choice to share others’ feelings, people act as if it’s not worth the effort.

The research is here.

Friday, September 7, 2018

23andMe's Pharma Deals Have Been the Plan All Along

Megan Molteni
www.wired.com
Originally posted August 3, 2018

Here is an excerpt:

So last week’s announcement that one of the world’s biggest drugmakers, GlaxoSmithKline, is gaining exclusive rights to mine 23andMe’s customer data for drug targets should come as no surprise. (Neither should GSK’s $300 million investment in the company). 23andMe has been sharing insights gleaned from consented customer data with GSK and at least six other pharmaceutical and biotechnology firms for the past three and a half years. And offering access to customer information in the service of science has been 23andMe’s business plan all along, as WIRED noted when it first began covering the company more than a decade ago.

But some customers were still surprised and angry, unaware of what they had already signed (and spat) away. GSK will receive the same kind of data pharma partners have generally received—summary level statistics that 23andMe scientists gather from analyses on de-identified, aggregate customer information—though it will have four years of exclusive rights to run analyses to discover new drug targets. Supporting this kind of translational work is why some customers signed up in the first place. But it’s clear the days of blind trust in the optimistic altruism of technology companies are coming to a close.

“I think we’re just operating now in a much more untrusting environment,” says Megan Allyse, a health policy researcher at the Mayo Clinic who studies emerging genetic technologies. “It’s no longer enough for companies to promise to make people healthy through the power of big data.”

The info is here.

Tuesday, June 5, 2018

Norms and the Flexibility of Moral Action

Oriel FeldmanHall, Jae-Young Son, and Joseph Heffner
Preprint

ABSTRACT

A complex web of social and moral norms governs many everyday human behaviors, acting as the glue for social harmony. The existence of moral norms helps elucidate the psychological motivations underlying a wide variety of seemingly puzzling behavior, including why humans help or trust total strangers. In this review, we examine four widespread moral norms (fairness, altruism, trust, and cooperation) and consider how a single social instrument—reciprocity—underpins compliance with these norms. Using a game theoretic framework, we examine how both context and emotions moderate moral standards, and by extension, moral behavior. We additionally discuss how a mechanism of reciprocity facilitates the adherence to, and enforcement of, these moral norms through a core network of brain regions involved in processing reward. In contrast, violating this set of moral norms elicits neural activation in regions involved in resolving decision conflict and exerting cognitive control. Finally, we review how a reinforcement mechanism likely governs learning about morally normative behavior. Together, this review aims to explain how moral norms are deployed in ways that facilitate flexible moral choices.

The research is here.

Wednesday, April 18, 2018

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid making mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Friday, March 30, 2018

Not Noble Savages After All: Limits to Early Altruism

Karen Wynn, Paul Bloom, Ashley Jordan, Julia Marshall, Mark Sheskin
Current Directions in Psychological Science 
Vol 27, Issue 1, pp. 3 - 8
First Published December 22, 2017

Abstract

Many scholars draw on evidence from evolutionary biology, behavioral economics, and infant research to argue that humans are “noble savages,” endowed with indiscriminate kindness. We believe this is mistaken. While there is evidence for an early-emerging moral sense—even infants recognize and favor instances of fairness and kindness among third parties—altruistic behaviors are selective from the start. Babies and young children favor people who have been kind to them in the past and favor familiar individuals over strangers. They hold strong biases for in-group over out-group members and for themselves over others, and indeed are more unequivocally selfish than older children and adults. Much of what is most impressive about adult morality arises not through inborn capacities but through a fraught developmental process that involves exposure to culture and the exercise of rationality.

The article is here.

Thursday, March 1, 2018

Concern for Others Leads to Vicarious Optimism

Andreas Kappes, Nadira S. Faber, Guy Kahane, Julian Savulescu, Molly J. Crockett
Psychological Science 
First Published January 30, 2018

Abstract

An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.

From the Discussion section

Optimism is a self-centered phenomenon in which people underestimate the likelihood of negative future events for themselves compared with others (Weinstein, 1980). Usually, the “other” is defined as a group of average others—an anonymous mass. When past studies asked participants to estimate the likelihood of an event happening to either themselves or the average population, participants did not show a learning bias for the average population (Garrett & Sharot, 2014). These findings are unsurprising given that people typically feel little concern for anonymous groups or anonymous individual strangers (Kogut & Ritov, 2005; Loewenstein et al., 2005). Yet people do care about identifiable others, and we accordingly found that people exhibit an optimistic learning bias for identifiable strangers and, even more markedly, for friends. Our research thereby suggests that optimism in learning is not restricted to oneself. We see not only our own lives through rose-tinted glasses but also the lives of those we care about.
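As a rough illustration of this kind of optimistic learning bias, the asymmetry can be sketched as a belief update that uses a larger learning rate for good news than for bad news. The function and learning rates below are hypothetical, for illustration only; they are not the authors' model or fitted parameters.

```python
# Hypothetical learning rates: good news is absorbed faster than bad news.
def update_belief(belief, evidence, lr_good=0.6, lr_bad=0.2):
    """Shift a risk estimate toward new evidence, faster for good news.

    For unpleasant future events, "good news" means evidence that the
    risk is lower than currently believed (a negative prediction error).
    """
    error = evidence - belief
    lr = lr_good if error < 0 else lr_bad  # lower-than-believed risk = good news
    return belief + lr * error

# Believing a 40% risk, then learning the actual rate:
print(round(update_belief(0.40, 0.20), 2))  # 0.28 -- good news, large update
print(round(update_belief(0.40, 0.60), 2))  # 0.44 -- bad news, small update
```

Under these made-up rates, a believed 40% risk drops sharply on good news but barely rises on bad news; the finding above is that people show this same asymmetry when updating beliefs about friends and identifiable strangers, not just about themselves.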

The research is here.

Sunday, February 25, 2018

The Moral Importance of Reflective Empathy

Ingmar Persson and Julian Savulescu
Neuroethics (2017). https://doi.org/10.1007/s12152-017-9350-7

Abstract

This is a reply to Jesse Prinz and Paul Bloom’s skepticism about the moral importance of empathy. It concedes that empathy is spontaneously biased to individuals who are spatio-temporally close, as well as discriminatory in other ways, and incapable of accommodating large numbers of individuals. But it is argued that we could partly correct these shortcomings of empathy by a guidance of reason because empathy for others consists in imagining what they feel, and, importantly, such acts of imagination can be voluntary – and, thus, under the influence of reflection – as well as automatic. Since empathizing with others motivates concern for their welfare, a reflectively justified empathy will lead to a likewise justified altruistic concern. In addition, we argue that such concern supports another central moral attitude, namely a sense of justice or fairness.

From the Conclusion

All in all, the picture that emerges is this. We have beliefs about how other individuals feel and how we can help them to feel better. There is both a set of properties such that: (1) if we believe individuals have any of these properties, this facilitates spontaneous empathy with these individuals, i.e. disposes us to imagine spontaneously how they feel, and (2) a set of properties such that if we believe that individuals have any of them, this hinders spontaneous empathy with them. In the former case, we will be spontaneously concerned about the well-being of these individuals; in the latter case, it will take voluntary reflection to empathize and be concerned about the individuals in question. We are also in possession of a sense of justice or fairness which not only animates us to benefit those whom justice requires to be benefited, but also to harm those whom justice requires be harmed.

The article can be accessed here.

Tuesday, February 6, 2018

Do the Right Thing: Experimental Evidence that Preferences for Moral Behavior, Rather Than Equity or Efficiency per se, Drive Human Prosociality

Valerio Capraro and David G. Rand
Judgment and Decision Making
Originally posted January 11, 2018

Abstract

Decades of experimental research show that some people forgo personal gains to benefit others in unilateral anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are incompatible with these standard social preference models. We use a “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labelling of the options to describe the equitable versus efficient option as morally right completely reverses the correlation between behavior in the TOG and play in a separate Dictator Game (DG) or Prisoner’s Dilemma (PD): people who take the action framed as moral in the TOG, be it equitable or efficient, are much more prosocial in the DG and PD. Rather than preferences for equity and/or efficiency per se, our results suggest that prosociality in games such as the DG and PD are driven by a generalized morality preference that motivates people to do what they think is morally right.
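To make the design concrete, here is a minimal sketch of the structure of a Trade-Off Game choice. It is a two-player simplification with hypothetical payoffs, not the study's actual stakes; the point is only that one option equalizes payoffs while the other maximizes total welfare at no extra cost to the chooser.

```python
# Hypothetical (self, other) payoffs; the chooser's own payoff is the same
# either way, so only concern for the other distinguishes the options.
equitable = (10, 10)   # equal split: minimizes inequality
efficient = (10, 16)   # same for self, more for the other: maximizes welfare

def total_welfare(option):
    return sum(option)

def payoff_gap(option):
    self_pay, other_pay = option
    return abs(self_pay - other_pay)

print(total_welfare(equitable), total_welfare(efficient))  # 20 26
print(payoff_gap(equitable), payoff_gap(efficient))        # 0 6
```

An inequity-averse player should always pick the equitable option and a welfare-maximizer the efficient one, regardless of labels; the paper's result is that framing either option as "the morally right thing to do" shifts choices toward it, and that this moral framing, rather than equity or efficiency per se, tracks prosociality in the Dictator Game and Prisoner's Dilemma.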

Download the paper here.

Wednesday, January 31, 2018

The Fear Factor

Matthieu Ricard
Medium.com
Originally published January 5, 2018

Here is an excerpt:

Research by Abigail Marsh and other neuroscientists reveals that psychopaths’ brains are marked by dysfunction in a structure called the amygdala, which is responsible for essential social and emotional functions. In psychopaths, the amygdala is not only under-responsive to images of people experiencing fear, but is also up to 20% smaller than average.

Marsh also wondered about people who are at the other end of the spectrum, extreme altruists: people filled with compassion, people who volunteer, for example, to donate one of their kidneys to a stranger. The answer is remarkable: extreme altruists surpass everyone in detecting expressions of fear in others and, while they do experience fear themselves, that does not stop them from acting in ways that are considered very courageous.

Since her initial discovery, several studies have confirmed that the ability to label other people’s fear predicts altruism better than gender, mood, or how compassionate people claim to be. In addition, Abigail Marsh found that, among extreme altruists, the amygdala is physically larger than average by about 8%. The significance of this fact held up even after finding something rather unexpected: the altruists’ brains are in general larger than those of the average person.

The information is here.

Tuesday, October 24, 2017

‘The deserving’: Moral reasoning and ideological dilemmas in public responses to humanitarian communications

Irene Bruna Seu
British Journal of Social Psychology 55 (4), pp. 739-755.

Abstract

This paper investigates everyday moral reasoning in relation to donations and prosocial behaviour in a humanitarian context. The discursive analysis focuses on the principles of deservingness which members of the public use to decide whom to help and under what conditions. The paper discusses three repertoires of deservingness: ‘Seeing a difference’, ‘Waiting in queues’, and ‘Something for nothing’ to illustrate participants’ dilemmatic reasoning and to examine how the position of ‘being deserving’ is negotiated in humanitarian crises. Discursive analyses of these dilemmatic repertoires of deservingness identify the cultural and ideological resources behind these constructions and show how humanitarianism intersects and clashes with other ideologies and value systems. The data suggest that a neoliberal ideology, which endorses self-gratification and materialistic and individualistic ethics, and cultural assimilation of helper and receiver play important roles in decisions about humanitarian helping. The paper argues for the need for psychological research to engage more actively with the dilemmas involved in the moral reasoning related to humanitarianism and to contextualize decisions about giving and helping within the socio-cultural and ideological landscape in which the helper operates.

The research is here.

Wednesday, October 18, 2017

When Doing Some Good Is Evaluated as Worse Than Doing No Good at All

George E. Newman and Daylian M. Cain
Psychological Science, published online January 8, 2014

Abstract

In four experiments, we found that the presence of self-interest in the charitable domain was seen as tainting: People evaluated efforts that realized both charitable and personal benefits as worse than analogous behaviors that produced no charitable benefit. This tainted-altruism effect was observed in a variety of contexts and extended to both moral evaluations of other agents and participants’ own behavioral intentions (e.g., reported willingness to hire someone or purchase a company’s products). This effect did not seem to be driven by expectations that profits would be realized at the direct cost of charitable benefits, or the explicit use of charity as a means to an end. Rather, we found that it was related to the accessibility of different counterfactuals: When someone was charitable for self-interested reasons, people considered his or her behavior in the absence of self-interest, ultimately concluding that the person did not behave as altruistically as he or she could have. However, when someone was only selfish, people did not spontaneously consider whether the person could have been more altruistic.

The article is here.

Thursday, September 7, 2017

Harm to self outweighs benefit to others in moral decision making

Lukas J. Volz, B. Locke Welborn, Matthias S. Gobel, Michael S. Gazzaniga, and Scott T. Grafton
PNAS 2017; published ahead of print July 10, 2017

Abstract

How we make decisions that have direct consequences for ourselves and others forms the moral foundation of our society. Whereas economic theory contends that humans aim at maximizing their own gains, recent seminal psychological work suggests that our behavior is instead hyperaltruistic: We are more willing to sacrifice gains to spare others from harm than to spare ourselves from harm. To investigate how such egoistic and hyperaltruistic tendencies influence moral decision making, we investigated trade-off decisions combining monetary rewards and painful electric shocks, administered to the participants themselves or an anonymous other. Whereas we replicated the notion of hyperaltruism (i.e., the willingness to forego reward to spare others from harm), we observed strongly egoistic tendencies in participants’ unwillingness to harm themselves for others’ benefit. The moral principle guiding intersubject trade-off decision making observed in our study is best described as egoistically biased altruism, with important implications for our understanding of economic and social interactions in our society.

Significance

Principles guiding decisions that affect both ourselves and others are of prominent importance for human societies. Previous accounts in economics and psychological science have often described decision making as either categorically egoistic or altruistic. Instead, the present work shows that genuine altruism is embedded in context-specific egoistic bias. Participants were willing to both forgo monetary reward to spare the other from painful electric shocks and also to suffer painful electric shocks to secure monetary reward for the other. However, across all trials and conditions, participants accrued more reward and less harm for the self than for the other person. These results characterize human decision makers as egoistically biased altruists, with important implications for psychology, economics, and public policy.
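One way to picture "egoistically biased altruism" is a simple linear scoring of trade-off options, with separate weights for money and harm to self versus other. This is an illustrative sketch with invented weights, not the authors' model: the weights encode both altruism (the other's pain outweighs one's own monetary gain) and egoistic bias (one's own pain and money are weighted more heavily than the other's).

```python
# Invented weights (not fitted parameters) that reproduce the qualitative pattern.
W_MONEY_SELF, W_MONEY_OTHER = 1.0, 0.6  # own money valued more than other's
W_HARM_SELF, W_HARM_OTHER = 2.0, 1.2    # own pain weighted more than other's, but
                                        # other's pain still outweighs own money

def option_value(money_self=0.0, money_other=0.0, shocks_self=0, shocks_other=0):
    """Linear score of a trade-off option; accept only if the score is positive."""
    return (W_MONEY_SELF * money_self + W_MONEY_OTHER * money_other
            - W_HARM_SELF * shocks_self - W_HARM_OTHER * shocks_other)

# Altruism: turns down $1 for self when it costs the other one shock.
print(round(option_value(money_self=1, shocks_other=1), 2))  # -0.2 -> reject
# Egoistic bias: also refuses one shock to self to earn the other $1.
print(round(option_value(money_other=1, shocks_self=1), 2))  # -1.4 -> reject
```

Across many such choices, a decision-maker with these weights accrues more money and less harm for the self than for the other, matching the pattern the authors describe as egoistically biased altruism.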

The article is here.

Monday, August 28, 2017

Death Before Dishonor: Incurring Costs to Protect Moral Reputation

Andrew J. Vonasch, Tania Reynolds, Bo M. Winegard, Roy F. Baumeister
Social Psychological and Personality Science 
First published July 21, 2017

Abstract

Predicated on the notion that people’s survival depends greatly on participation in cooperative society, and that reputation damage may preclude such participation, four studies with diverse methods tested the hypothesis that people would make substantial sacrifices to protect their reputations. A “big data” study found that maintaining a moral reputation is one of people’s most important values. In making hypothetical choices, high percentages of “normal” people reported preferring jail time, amputation of limbs, and death to various forms of reputation damage (i.e., becoming known as a criminal, Nazi, or child molester). Two lab studies found that 30% of people fully submerged their hands in a pile of disgusting live worms, and 63% endured physical pain to prevent dissemination of information suggesting that they were racist. We discuss the implications of reputation protection for theories about altruism and motivation.

The article is here.

Saturday, July 15, 2017

How do self-interest and other-need interact in the brain to determine altruistic behavior?

Jie Hu, Yue Li, Yunlu Yin, Philip R. Blue, Hongbo Yu, Xiaolin Zhou
NeuroImage
Volume 157, 15 August 2017, Pages 598–611

Abstract

Altruistic behavior, i.e., promoting the welfare of others at a cost to oneself, is subserved by the integration of various social, affective, and economic factors represented in extensive brain regions. However, it is unclear how different regions interact to process/integrate information regarding the helper's interest and recipient's need when deciding whether to behave altruistically. Here we combined an interactive game with functional Magnetic Resonance Imaging (fMRI) and transcranial direct current stimulation (tDCS) to characterize the neural network underlying the processing/integration of self-interest and other-need. At the behavioral level, high self-risk decreased helping behavior and high other-need increased helping behavior. At the neural level, activity in medial prefrontal cortex (MPFC) and right dorsolateral prefrontal cortex (rDLPFC) were positively associated with self-risk levels, and activity in right inferior parietal lobe (rIPL) and rDLPFC were negatively associated with other-need levels. Dynamic causal modeling further suggested that both MPFC and rIPL were extrinsically connected to rDLPFC; high self-risk enhanced the effective connectivity from MPFC to rDLPFC, and the modulatory effect of other-need on the connectivity from rIPL to rDLPFC positively correlated with the modulatory effect of other-need on individuals’ helping rate. Two tDCS experiments provided causal evidence that rDLPFC affects both self-interest and other-need concerns, and rIPL selectively affects the other-need concerns. These findings suggest a crucial role of the MPFC-IPL-DLPFC network during altruistic decision-making, with rDLPFC as a central node for integrating and modulating motives regarding self-interest and other-need.

The article is here.