Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Social Preferences.

Monday, August 22, 2022

Meta-Analysis of Inequality Aversion Estimates

Nunnari, S., & Pozzi, M. (2022).
SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.4169385

Conclusion

In this paper, we reported the results of a meta-analysis of empirical estimates of the inequality aversion coefficients in models of outcome-based other-regarding preferences à la Fehr and Schmidt (1999). We conducted both a frequentist analysis (using a multi-level random-effects model) and a Bayesian analysis (using a Bayesian hierarchical model) to provide a “weighted average” for α and β. The results from the two approaches are nearly identical and support the hypothesis of inequality concerns. From the frequentist analysis, we learn that the mean envy coefficient is 0.425 with a 95% confidence interval of [0.244, 0.606]; the mean guilt coefficient is 0.291 with a 95% confidence interval of [0.218, 0.363]. This means that, on average, an individual is willing to spend €0.41 to increase others’ earnings by €1 when ahead, and €0.74 to decrease others’ earnings by €1 when behind. The theoretical assumptions α ≥ β and 0 ≤ β < 1 are upheld in our empirical analysis, but we cannot conclude that the disadvantageous inequality coefficient is statistically greater than the coefficient for advantageous inequality. We also observe no correlation between the two parameters.
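For reference, the two-player utility function from Fehr and Schmidt (1999) that defines these parameters treats α (envy) as the weight on disadvantageous inequality and β (guilt) as the weight on advantageous inequality:

```latex
% Fehr-Schmidt (1999) inequality-aversion utility for player i paired with player j:
% the alpha term penalizes being behind (envy), the beta term penalizes being ahead (guilt).
U_i(x_i, x_j) = x_i - \alpha_i \max\{x_j - x_i,\ 0\} - \beta_i \max\{x_i - x_j,\ 0\}
```

Under this specification, a player who stays ahead after a transfer is willing to give up at most β/(1 − β) of a unit to raise the other player's earnings by one unit; with β = 0.291 that bound is roughly 0.41, which is where the €0.41 figure above comes from.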

Wednesday, September 9, 2020

Hate Trumps Love: The Impact of Political Polarization on Social Preferences

Eugen Dimant
ssrn.com
Published 4 September 2020

Abstract

Political polarization has ruptured the fabric of U.S. society. The focus of this paper is to examine various layers of (non-)strategic decision-making that are plausibly affected by political polarization through the lens of one's feelings of hate and love for Donald J. Trump. In several pre-registered experiments, I document the behavioral-, belief-, and norm-based mechanisms through which perceptions of interpersonal closeness, altruism, and cooperativeness are affected by polarization, both within and between political factions. To separate ingroup-love from outgroup-hate, the political setting is contrasted with a minimal group setting. I find strong heterogeneous effects: ingroup-love occurs in the perceptional domain (how close one feels towards others), whereas outgroup-hate occurs in the behavioral domain (how one helps/harms/cooperates with others). In addition, the pernicious outcomes of partisan identity also comport with the elicited social norms. Notably, the rich experimental setting also allows me to examine the drivers of these behaviors, suggesting that the observed partisan rift might not be as forlorn as previously suggested: in the contexts studied here, the adverse behavioral impact of the resulting intergroup conflict can be attributed to one's grim expectations about the cooperativeness of the opposing faction, as opposed to one's actual unwillingness to cooperate with them.

From the Conclusion and Discussion

Along all investigated dimensions, I obtain strong effects and the following results: for one, polarization produces ingroup/outgroup differentiation in all three settings (nonstrategic, Experiment 1; strategic, Experiment 2; social norms, Experiment 3), leading participants to actively harm and cooperate less with participants from the opposing faction. For another, lack of cooperation is not the result of a categorical unwillingness to cooperate across factions, but based on one's grim expectations about the other's willingness to cooperate. Importantly, however, the results also cast light on the nuance with which ingroup-love and outgroup-hate – something that existing literature often takes as being two sides of the same coin – occur. In particular, by comparing behavior between the Trump Prime and minimal group prime treatments, the results suggest that ingroup-love can be observed in terms of feeling close to one another, whereas outgroup-hate appears in the form of taking money away from and being less cooperative with each other. The elicited norms are consistent with these observations and also indicate that those who love Trump have a much weaker ingroup/outgroup differentiation than those who hate Trump do.

Tuesday, August 25, 2020

Uncertainty about the impact of social decisions increases prosocial behaviour

Kappes, A., Nussberger, A. M., et al.
Nature Human Behaviour, 2(8), 573–580.
https://doi.org/10.1038/s41562-018-0372-x

Abstract

Uncertainty about how our choices will affect others infuses social life. Past research suggests uncertainty has a negative effect on prosocial behavior by enabling people to adopt self-serving narratives about their actions. We show that uncertainty does not always promote selfishness. We introduce a distinction between two types of uncertainty that have opposite effects on prosocial behavior. Previous work focused on outcome uncertainty: uncertainty about whether or not a decision will lead to a particular outcome. But as soon as people’s decisions might have negative consequences for others, there is also impact uncertainty: uncertainty about how badly others’ well-being will be impacted by the negative outcome. Consistent with past research, we found decreased prosocial behavior under outcome uncertainty. In contrast, prosocial behavior was increased under impact uncertainty in incentivized economic decisions and hypothetical decisions about infectious disease threats. Perceptions of social norms paralleled the behavioral effects. The effect of impact uncertainty on prosocial behavior did not depend on the individuation of others or the mere mention of harm, and was stronger when impact uncertainty was made more salient. Our findings offer insights into communicating uncertainty, especially in contexts where prosocial behavior is paramount, such as responding to infectious disease threats.

From the Summary

To summarize, we show that uncertainty does not always decrease prosocial behavior; instead, the type of uncertainty matters. Replicating previous findings, we found that outcome uncertainty – uncertainty about the outcomes of decisions – made people behave more selfishly. However, impact uncertainty about how an outcome will impact another person’s well-being increased prosocial behavior in both economic and health domains. Examining the effect of impact uncertainty on prosociality more closely, we show that simply mentioning negative outcomes, or inducing uncertainty about aspects of the other person unrelated to the negative outcome, is not sufficient to increase prosociality. Rather, it seems that uncertainty relating to the impact of negative outcomes on others is needed to increase prosociality in our studies. Finally, we show that impact uncertainty is only effective when it is salient, thereby potentially overcoming people’s reluctance to contemplate the harm they might cause.

Wednesday, June 27, 2018

Understanding Moral Preferences Using Sentiment Analysis

Capraro, Valerio and Vanzo, Andrea
(May 28, 2018).

Abstract

Behavioral scientists have shown that people are not solely motivated by the economic consequences of the available actions, but also care about the actions themselves. Several models have been proposed to formalize this preference for "doing the right thing". However, a common limitation of these models is their lack of predictive power: given the instructions of a decision problem, they fail to make clear predictions about people's behavior. Here, we show that, at least in simple cases, the overall qualitative pattern of behavior can be predicted reasonably well using a Computational Linguistics technique known as Sentiment Analysis. The intuition is that people are reluctant to take actions that evoke negative emotions, and are eager to take actions that stimulate positive emotions. To demonstrate this point, we conduct an economic experiment in which decision makers face two options: either they get 50 cents and another person gets nothing, or the other person gets 50 cents and the decision maker gets nothing. We experimentally manipulate the wording describing the available actions using six words whose connotations range from very negative (e.g., stealing) to very positive (e.g., donating). In agreement with our theory, we show that sentiment polarity has a U-shaped effect on pro-sociality. We also propose a utility function that can qualitatively predict the observed behavior, as well as previously reported framing effects. Our results suggest that building bridges from behavioral sciences to Computational Linguistics can help improve our understanding of human decision making.

The research is here.
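As a rough illustration of the sentiment-scoring step this approach builds on, the sketch below rates candidate action verbs with an off-the-shelf analyzer (NLTK's VADER). The tool choice and the word list are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch: score the sentiment polarity of verbs that could be used to
# frame the same economic action (e.g., "steal" vs. "donate"). The analyzer
# (NLTK's VADER) and the word list are illustrative assumptions, not the
# authors' setup.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

action_verbs = ["steal", "take", "demand", "give", "donate", "boost"]
for verb in action_verbs:
    polarity = sia.polarity_scores(verb)["compound"]  # compound score in [-1, 1]
    print(f"{verb:>8}: compound polarity = {polarity:+.3f}")
```

Polarity scores of this kind can then be related to choice frequencies across framings, as in the U-shaped sentiment-prosociality relationship reported in the abstract.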

Saturday, June 9, 2018

Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis

Ben Tappin and Valerio Capraro
Preprint

Abstract

Prosociality is fundamental to the success of human social life, and, accordingly, much research has attempted to explain human prosocial behavior. Capraro and Rand (2018) recently advanced the hypothesis that prosocial behavior in anonymous, one-shot interactions is not driven by outcome-based social preferences for equity or efficiency, as classically assumed, but by a generalized morality preference for “doing the right thing”. Here we argue that the key experiments reported in Capraro and Rand (2018) contain prominent methodological confounds and leave open questions that bear on influential psychological theory. Specifically, their design confounds (i) preferences for efficiency with self-interest, and (ii) preferences for action with preferences for morality. Furthermore, their design fails to dissociate the preference to do “good” from the preference to avoid doing “bad”. We thus designed and conducted a preregistered, refined, and extended test of the morality preference hypothesis (N=801). Consistent with this hypothesis and the results of Capraro and Rand (2018), our findings indicate that prosocial behavior in anonymous, one-shot interactions is driven by a preference for doing the morally right thing. Inconsistent with influential psychological theory, however, our results suggest the preference to do “good” is as potent as the preference to avoid doing “bad” in prosocial choice.

The preprint is here.

Tuesday, February 6, 2018

Do the Right Thing: Experimental Evidence that Preferences for Moral Behavior, Rather Than Equity or Efficiency per se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(January 11, 2018). Judgment and Decision Making.

Abstract

Decades of experimental research show that some people forgo personal gains to benefit others in unilateral anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are incompatible with these standard social preference models. We use a “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labelling of the options to describe the equitable versus efficient option as morally right completely reverses the correlation between behavior in the TOG and play in a separate Dictator Game (DG) or Prisoner’s Dilemma (PD): people who take the action framed as moral in the TOG, be it equitable or efficient, are much more prosocial in the DG and PD. Rather than preferences for equity and/or efficiency per se, our results suggest that prosociality in games such as the DG and PD is driven by a generalized morality preference that motivates people to do what they think is morally right.

Download the paper here.
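To make the design concrete, here is a hypothetical sketch of how a generalized morality preference could produce the reported Trade-Off Game behavior. The payoff amounts, the premium mu, and the additive utility form are illustrative assumptions, not the authors' estimated model or the study's actual stakes.

```python
# Hypothetical sketch of Trade-Off Game (TOG) choice under a generalized morality
# preference: utility = own payoff + mu if the chosen option is framed as morally
# right. All numbers and the functional form are illustrative assumptions.

def tog_utility(own_payoff: float, framed_as_moral: bool, mu: float = 15.0) -> float:
    """Own monetary payoff (in cents) plus a fixed premium for choosing the
    option described as the morally right thing to do."""
    return own_payoff + (mu if framed_as_moral else 0.0)

# This sketch assumes the decision maker's own payoff is identical under both
# options (10 cents here) and only the other participants' payoffs differ, so
# the moral framing is what breaks the tie.
equitable = {"own": 10, "others": (10, 10)}
efficient = {"own": 10, "others": (20, 10)}

for moral_frame in ("equitable", "efficient"):
    u_eq = tog_utility(equitable["own"], framed_as_moral=(moral_frame == "equitable"))
    u_ef = tog_utility(efficient["own"], framed_as_moral=(moral_frame == "efficient"))
    chosen = "equitable" if u_eq > u_ef else "efficient"
    print(f"When the {moral_frame} option is framed as moral, choose the {chosen} option")
```

Under such a preference, whichever option carries the "morally right" label is chosen, which is consistent with the label-reversal result described in the abstract.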

Friday, May 26, 2017

Do the Right Thing: Preferences for Moral Behavior, Rather than Equity or Efficiency Per Se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(May 8, 2017).

Abstract

Decades of experimental research have shown that some people forgo personal gains to benefit others in unilateral one-shot anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are fundamentally incompatible with these standard social preference models. We introduce the “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labeling of the options to describe the equitable versus efficient option as morally right completely reverses people’s behavior in the TOG. Moreover, people who take the positively framed action, be it equitable or efficient, are more prosocial in a separate Dictator Game (DG) and Prisoner’s Dilemma (PD). Rather than preferences for equity and/or efficiency per se, we propose a generalized morality preference that motivates people to do what they think is morally right. When one option is clearly selfish and the other pro-social (e.g. equitable and/or efficient), as in the DG and PD, the economic outcomes are enough to determine what is morally right. When one option is not clearly more prosocial than the other, as in the TOG, framing resolves the ambiguity about which choice is moral. In addition to explaining our data, this account organizes prior findings that framing impacts cooperation in the standard simultaneous PD, but not in the asynchronous PD or the DG. Thus we present a new framework for understanding the basis of human prosociality.

The paper is here.

Tuesday, December 23, 2014

Harm to others outweighs harm to self in moral decision making

Molly J. Crockett, Zeb Kurth-Nelson, Jenifer Z. Siegel, Peter Dayan, and Raymond J. Dolan
PNAS 2014; published ahead of print November 17, 2014. doi:10.1073/pnas.1408988111

Abstract

Concern for the suffering of others is central to moral decision making. How humans evaluate others’ suffering, relative to their own suffering, is unknown. We investigated this question by inviting subjects to trade off profits for themselves against pain experienced either by themselves or an anonymous other person. Subjects made choices between different amounts of money and different numbers of painful electric shocks. We independently varied the recipient of the shocks (self vs. other) and whether the choice involved paying to decrease pain or profiting by increasing pain. We built computational models to quantify the relative values subjects ascribed to pain for themselves and others in this setting. In two studies we show that most people valued others’ pain more than their own pain. This was evident in a willingness to pay more to reduce others’ pain than their own and a requirement for more compensation to increase others’ pain relative to their own. This “hyperaltruistic” valuation of others’ pain was linked to slower responding when making decisions that affected others, consistent with an engagement of deliberative processes in moral decision making. Subclinical psychopathic traits correlated negatively with aversion to pain for both self and others, in line with reports of aversive processing deficits in psychopathy. Our results provide evidence for a circumstance in which people care more for others than themselves. Determining the precise boundaries of this surprisingly prosocial disposition has implications for understanding human moral decision making and its disturbance in antisocial behavior.

Significance

Concern for the welfare of others is a key component of moral decision making and is disturbed in antisocial and criminal behavior. However, little is known about how people evaluate the costs of others’ suffering. Past studies have examined people’s judgments in hypothetical scenarios, but there is evidence that hypothetical judgments cannot accurately predict actual behavior.  Here we addressed this issue by measuring how much money people will sacrifice to reduce the number of painful electric shocks delivered to either themselves or an anonymous stranger. Surprisingly, most people sacrifice more money to reduce a stranger’s pain than their own pain. This finding may help us better understand how people resolve moral dilemmas that commonly arise in medical, legal, and political decision making.
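The computational models mentioned in the abstract trade profit for oneself against pain for self or other. The sketch below shows one simple way such a money-versus-shocks trade-off can be formalized, with a harm-aversion weight and a softmax choice rule; the functional form and the parameter values are illustrative assumptions, not necessarily the authors' exact specification.

```python
# Illustrative money-vs-shocks trade-off in the spirit of the study: the value of
# the "more money, more shocks" option weighs extra profit against extra shocks
# via a harm-aversion parameter kappa, and choice follows a softmax rule. The
# exact functional form and all numbers are assumptions, not the paper's fit.
import math

def decision_value(delta_money: float, delta_shocks: float, kappa: float) -> float:
    """Relative value of taking more money at the cost of delivering more shocks.
    kappa in [0, 1]: higher kappa means stronger aversion to the shocks."""
    return (1.0 - kappa) * delta_money - kappa * delta_shocks

def p_take_money(delta_money: float, delta_shocks: float,
                 kappa: float, temperature: float = 1.0) -> float:
    """Softmax (logistic) probability of choosing the extra money despite the shocks."""
    return 1.0 / (1.0 + math.exp(-decision_value(delta_money, delta_shocks, kappa) / temperature))

# A "hyperaltruistic" pattern corresponds to a larger kappa when the shocks go to
# another person than when they go to oneself (made-up values below).
kappa_self, kappa_other = 0.4, 0.6
print(p_take_money(delta_money=5, delta_shocks=4, kappa=kappa_self))   # shocks to self
print(p_take_money(delta_money=5, delta_shocks=4, kappa=kappa_other))  # shocks to other
```

With these made-up numbers, the model is less likely to profit from shocks delivered to another person than from shocks delivered to oneself, mirroring the hyperaltruistic valuation reported above.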

The entire article is here.
