Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Preference.

Thursday, May 20, 2021

Behavioral and Neural Representations en route to Intuitive Action Understanding

L. Tarhan, J. De Freitas, & T. Konkle
bioRxiv
doi: https://doi.org/10.1101/2021.04.08.438996

Abstract

When we observe another person’s actions, we process many kinds of information – from how their body moves to the intention behind their movements. What kinds of information underlie our intuitive understanding about how similar actions are to each other? To address this question, we measured the intuitive similarities among a large set of everyday action videos using multi-arrangement experiments, then used a modeling approach to predict this intuitive similarity space along three hypothesized properties. We found that similarity in the actors’ inferred goals predicted the intuitive similarity judgments the best, followed by similarity in the actors’ movements, with little contribution from the videos’ visual appearance. In opportunistic fMRI analyses assessing brain-behavior correlations, we found evidence for an action processing hierarchy, in which these three kinds of action similarities are reflected in the structure of brain responses along a posterior-to-anterior gradient on the lateral surface of the visual cortex. Altogether, this work joins existing literature suggesting that humans are naturally tuned to process others’ intentions, and that the visuo-motor cortex computes the perceptual precursors of the higher-level representations over which intuitive action perception operates.
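
The modeling approach described above lends itself to a compact illustration. The following is a minimal sketch (not the authors' code) of how a behavioral similarity space can be predicted from candidate feature spaces: each hypothesized property (goals, movements, visual appearance) is expressed as a pairwise dissimilarity matrix, the lower triangles are regressed against the behavioral dissimilarities, and each predictor's unique variance is estimated by re-fitting without it. All data, names, and dimensions below are placeholders.

```python
# Minimal sketch (not the authors' pipeline): predicting intuitive action
# dissimilarities from three hypothesized feature spaces via regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_videos = 60  # placeholder for the size of the action-video set

def lower_triangle(mat):
    """Vectorize the lower triangle of a pairwise dissimilarity matrix."""
    return mat[np.tril_indices_from(mat, k=-1)]

def random_rdm(n):
    """Placeholder symmetric dissimilarity matrix; in practice these come from
    multi-arrangement judgments (behavior) or feature-based models."""
    d = rng.random((n, n))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0)
    return d

behavior = lower_triangle(random_rdm(n_videos))
models = {name: lower_triangle(random_rdm(n_videos))
          for name in ("goals", "movements", "visual")}

X_full = np.column_stack(list(models.values()))
r2_full = LinearRegression().fit(X_full, behavior).score(X_full, behavior)

# Unique variance of each predictor = full R^2 minus R^2 without it.
for i, name in enumerate(models):
    X_reduced = np.delete(X_full, i, axis=1)
    r2_reduced = LinearRegression().fit(X_reduced, behavior).score(X_reduced, behavior)
    print(f"{name}: unique R^2 = {r2_full - r2_reduced:.3f}")
```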

From the Discussion

Intuitive Action Representations in the Mind

Our primary finding was that judgments about the similarity of actors’ goals were the best predictor of intuitive action similarity judgments. In addition, these goals accounted for the most unique variance in the intuitive similarity data. We interpret this to mean that humans naturally and intuitively process other actors’ internal motivations and thoughts, even in the absence of an explicitly social task. This conclusion adds to a rich literature showing that humans automatically represent others in terms of their mental states, even from a very young age (Gergely and Csibra, 2003; Jara-Ettinger et al., 2016; Liu et al., 2017; Reid et al., 2007; Thornton et al., 2019a,b). In addition, we found that similarity in the actors’ movements also predicted intuitive judgments moderately well and accounted for a smaller amount of unique variance in the data. This finding goes beyond our current understanding of the factors driving natural action processing, to suggest that kinematic information also contributes to intuitive action perception. In contrast, similarity in the videos’ visual appearance did not account for any unique variance in the data, suggesting that lower-level visual properties such as color, form, and motion direction do not have much influence on natural action perception.

A natural extension of these findings is to investigate the specific features that we use to calculate actors’ goals and movements. For example, how important are speed, trajectory, and movement quality (e.g., shaky or smooth) for our assessment of the similarity among actions’ movements? And, do we consider physical variables – such as facial expression – when inferring actors’ goals? Digging into these specific feature dimensions will bring further clarity to the cognitive processes driving intuitive action perception.

Wednesday, March 24, 2021

Does observability amplify sensitivity to moral frames? Evaluating a reputation-based account of moral preferences

Capraro, V., Jordan, J., & Tappin, B. M. 
(2020, April 9). PsyArXiv.
https://doi.org/10.31234/osf.io/bqjcv

Abstract

A growing body of work suggests that people are sensitive to moral framing in economic games involving prosociality, suggesting that people hold moral preferences for doing the “right thing”. What gives rise to these preferences? Here, we evaluate the explanatory power of a reputation-based account, which proposes that people respond to moral frames because they are motivated to look good in the eyes of others. Across four pre-registered experiments (total N = 9,601), we investigated whether reputational incentives amplify sensitivity to framing effects. Studies 1-3 manipulated (i) whether moral or neutral framing was used to describe a Trade-Off Game (in which participants chose between prioritizing equality or efficiency) and (ii) whether Trade-Off Game choices were observable to a social partner in a subsequent Trust Game. These studies found that observability does not significantly amplify sensitivity to moral framing. Study 4 ruled out the alternative explanation that the observability manipulation from Studies 1-3 is too weak to influence behavior. In Study 4, the same observability manipulation did significantly amplify sensitivity to normative information (about what others see as moral in the Trade-Off Game). Together, these results suggest that moral frames may tap into moral preferences that are relatively deeply internalized, such that the power of moral frames is not strongly enhanced by making the morally-framed behavior observable to others.
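
The key test here is an interaction: does the effect of moral framing on Trade-Off Game choices grow when choices are observable to a partner? Below is a minimal sketch of such a test using simulated placeholder data, not the authors' dataset or analysis code; the variable names are assumptions for illustration.

```python
# Minimal sketch (simulated data, not the authors' analysis): testing whether
# observability amplifies a moral-framing effect via an interaction term in a
# logistic regression on a binary Trade-Off Game choice.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "moral_frame": rng.integers(0, 2, n),  # 0 = neutral frame, 1 = moral frame
    "observable": rng.integers(0, 2, n),   # 0 = private, 1 = observable to partner
})
# Simulate a framing effect with no framing-by-observability interaction.
logit_p = -0.2 + 0.5 * df["moral_frame"] + 0.1 * df["observable"]
df["chose_equality"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("chose_equality ~ moral_frame * observable", data=df).fit(disp=0)
print(fit.summary())  # the moral_frame:observable coefficient is the key test
```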

From the Discussion

Our results have implications for interventions that draw on moral framing effects to encourage socially desirable behavior. They suggest that such interventions can be successful even when behavior is not observable to others and thus reputation is not at stake—and in fact, that the efficacy of moral framing effects is not strongly enhanced by making behavior observable. Thus, our results suggest that targeting contexts where reputation is at stake is not an especially important priority for individuals seeking to maximize the impact of interventions based on moral framing. This conclusion provides an optimistic view of the potential of such interventions, given that there may be many contexts in which it is difficult to make behavior observable yet possible to frame a decision in a way that encourages prosociality—for example, when crowdsourcing donations anonymously (or nearly anonymously) on the Internet. Future research should investigate the power of moral framing to promote prosocial behaviour in anonymous contexts outside of the laboratory.

Friday, May 15, 2020

“Do the right thing” for whom? An experiment on ingroup favouritism, group assorting and moral suasion

E. Bilancini, L. Boncinelli, et al.
Judgment and Decision Making, 
Vol. 15, No. 2, March 2020, pp. 182-192

Abstract

In this paper we investigate the effect of moral suasion on ingroup favouritism. We report a well-powered, pre-registered, two-stage 2x2 mixed-design experiment. In the first stage, groups are formed on the basis of how participants answer a set of questions, concerning non-morally relevant issues in one treatment (assorting on non-moral preferences), and morally relevant issues in another treatment (assorting on moral preferences). In the second stage, participants choose how to split a given amount of money between participants of their own group and participants of the other group, first in the baseline setting and then in a setting where they are told to do what they believe to be morally right (moral suasion). Our main results are: (i) in the baseline, participants tend to favour their own group to a greater extent when groups are assorted according to moral preferences, compared to when they are assorted according to non-moral preferences; (ii) the net effect of moral suasion is to decrease ingroup favouritism, but there is also a non-negligible proportion of participants for whom moral suasion increases ingroup favouritism; (iii) the effect of moral suasion is substantially stable across group assorting and four pre-registered individual characteristics (gender, political orientation, religiosity, pro-life vs pro-choice ethical convictions).
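
For concreteness, ingroup favouritism in this design can be summarized as the share of the endowment allocated to the ingroup beyond an even split, and the moral-suasion effect as the within-subject change in that quantity between the baseline and suasion settings. The sketch below uses simulated placeholder data, not the authors' data or code.

```python
# Minimal sketch (simulated data, not the authors' analysis): quantifying
# ingroup favouritism and the within-subject effect of moral suasion.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "assorting": rng.choice(["moral", "non-moral"], n),
    "share_ingroup_baseline": rng.uniform(0.4, 1.0, n),
    "share_ingroup_suasion": rng.uniform(0.4, 1.0, n),
})

# Favouritism = share given to the ingroup above an even 50/50 split.
df["favouritism_baseline"] = df["share_ingroup_baseline"] - 0.5
df["favouritism_suasion"] = df["share_ingroup_suasion"] - 0.5
df["suasion_effect"] = df["favouritism_suasion"] - df["favouritism_baseline"]

print(df.groupby("assorting")[["favouritism_baseline", "suasion_effect"]].mean())
```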

From the Discussion:

The interest in moral suasion stems, at least in part, from its being a cheap and possibly effective policy tool that could be applied to foster prosocial behaviours. While the literature on moral behaviour has so far produced a substantial body of evidence showing the effectiveness of moral suasion, its dependence on the identity of the recipients of the decision-maker’s actions is far less studied, leaving open the possibility that individuals react to moral suasion by reducing prosociality towards some types of recipients. This paper has addressed this issue in the setting of a decision to split a given amount of money between members of one’s own group and members of another group, providing experimental evidence that, on average, moral suasion increases prosociality towards both the ingroup and the outgroup; however, the increase towards the outgroup is greater than the increase towards the ingroup, so that ingroup favouritism, on average, declines under moral suasion.

Tuesday, January 29, 2019

Even arbitrary norms influence moral decision-making

Campbell Pryor, Amy Perfors & Piers D. L. Howe
Nature Human Behaviour (2018)

Abstract

It is well known that individuals tend to copy behaviours that are common among other people—a phenomenon known as the descriptive norm effect. This effect has been successfully used to encourage a range of real-world prosocial decisions, such as increasing organ donor registrations. However, it is still unclear why it occurs. Here, we show that people conform to social norms, even when they understand that the norms in question are arbitrary and do not reflect the actual preferences of other people. These results hold across multiple contexts and when controlling for confounds such as anchoring or mere-exposure effects. Moreover, we demonstrate that the degree to which participants conform to an arbitrary norm is determined by the degree to which they self-identify with the group that exhibits the norm. Two prominent explanations of norm adherence—the informational and social sanction accounts—cannot explain these results, suggesting that these theories need to be supplemented by an additional mechanism that takes into account self-identity.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values (specifically, fairness preferences) during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together, this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
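
The reinforcement mechanism referred to above can be illustrated with a simple delta-rule update, in which the learner's own punishment preference is nudged toward each piece of feedback about the receiver's preference. This is a schematic sketch under that assumption, not the authors' fitted model; the parameter values are placeholders.

```python
# Minimal sketch (not the authors' model): a delta-rule (Rescorla-Wagner-style)
# update in which a learner's punishment preference drifts toward trial-by-trial
# feedback about another person's preferred response to unfairness.
def update_preference(pref, feedback, alpha=0.2):
    """Move the current punishment preference toward the observed feedback.

    pref     : current preference for punishment (0 = compensate, 1 = punish)
    feedback : observed preference of the receiver on this trial (0 or 1)
    alpha    : learning rate
    """
    return pref + alpha * (feedback - pref)

# A receiver who consistently prefers punitive responses (feedback = 1).
pref = 0.1  # baseline: mostly compensatory
for trial_feedback in [1, 1, 1, 1, 1, 1, 1, 1]:
    pref = update_preference(pref, trial_feedback)
print(f"preference for punishment after learning: {pref:.2f}")
```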

Wednesday, June 27, 2018

Understanding Moral Preferences Using Sentiment Analysis

Capraro, Valerio and Vanzo, Andrea
(May 28, 2018).

Abstract

Behavioral scientists have shown that people are not solely motivated by the economic consequences of the available actions, but they also care about the actions themselves. Several models have been proposed to formalize this preference for “doing the right thing”. However, a common limitation of these models is their lack of predictive power: given a set of instructions of a decision problem, they fail to make clear predictions of people's behavior. Here, we show that, at least in simple cases, the overall qualitative pattern of behavior can be predicted reasonably well using a Computational Linguistics technique, known as Sentiment Analysis. The intuition is that people are reluctant to take actions that evoke negative emotions, and are eager to take actions that stimulate positive emotions. To show this point, we conduct an economic experiment in which decision-makers either get 50 cents, and another person gets nothing, or the opposite, the other person gets 50 cents and the decision-maker gets nothing. We experimentally manipulate the wording describing the available actions using six words, from very negative (e.g., stealing) to very positive (e.g., donating) connotations. In agreement with our theory, we show that sentiment polarity has a U-shaped effect on pro-sociality. We also propose a utility function that can qualitatively predict the observed behavior, as well as previously reported framing effects. Our results suggest that building bridges from behavioral sciences to Computational Linguistics can help improve our understanding of human decision making.
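
As an illustration of the general idea, the sketch below scores candidate framing words with an off-the-shelf sentiment analyzer (VADER via NLTK, assumed here as a stand-in rather than the authors' tool) and relates polarity to prosociality with a quadratic term, so that a positive coefficient on the squared term corresponds to the reported U-shape. The word list (beyond the two examples named in the abstract) and the prosociality rates are hypothetical.

```python
# Minimal sketch (not the authors' implementation): score framing words with a
# sentiment model, then fit a quadratic (U-shaped) relation to prosociality.
import numpy as np
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Hypothetical framing words; only "stealing" and "donating" come from the abstract.
framing_words = ["stealing", "taking", "demanding", "accepting", "giving", "donating"]
polarity = np.array([analyzer.polarity_scores(w)["compound"] for w in framing_words])

# Hypothetical proportions of prosocial choices under each frame.
prosocial_rate = np.array([0.62, 0.48, 0.45, 0.44, 0.55, 0.66])

# A positive coefficient on the squared term indicates a U-shaped relationship.
coeffs = np.polyfit(polarity, prosocial_rate, deg=2)
print("quadratic, linear, intercept:", coeffs)
```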

Saturday, June 9, 2018

Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis

Ben Tappin and Valerio Capraro
Preprint

Abstract

Prosociality is fundamental to the success of human social life, and, accordingly, much research has attempted to explain human prosocial behavior. Capraro and Rand (2018) recently advanced the hypothesis that prosocial behavior in anonymous, one-shot interactions is not driven by outcome-based social preferences for equity or efficiency, as classically assumed, but by a generalized morality preference for “doing the right thing”. Here we argue that the key experiments reported in Capraro and Rand (2018) contain prominent methodological confounds and open questions that bear on influential psychological theory. Specifically, their design confounds: (i) preferences for efficiency with self-interest; and (ii) preferences for action with preferences for morality. Furthermore, their design fails to dissociate the preference to do “good” from the preference to avoid doing “bad”. We thus designed and conducted a preregistered, refined and extended test of the morality preference hypothesis (N=801). Consistent with this hypothesis and the results of Capraro and Rand (2018), our findings indicate that prosocial behavior in anonymous, one-shot interactions is driven by a preference for doing the morally right thing. Inconsistent with influential psychological theory, however, our results suggest the preference to do “good” is as potent as the preference to avoid doing “bad” in prosocial choice.
