Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Social Relationships.

Saturday, November 11, 2023

Discordant benevolence: How and why people help others in the face of conflicting values.

Cowan, S. K., Bruce, T. C., et al. (2022).
Science Advances, 8(7).

Abstract

What happens when a request for help from friends or family members invokes conflicting values? In answering this question, we integrate and extend two literatures: support provision within social networks and moral decision-making. We examine the willingness of Americans who deem abortion immoral to help a close friend or family member seeking one. Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, we find that a substantial minority of Americans morally opposed to abortion would enact what we call discordant benevolence: providing help when doing so conflicts with personal values. People negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion. This endeavor reveals both how personal values affect social support processes and how the nature of interaction shapes outcomes of moral decision-making.

Here is my summary:

Using data from the General Social Survey and 74 in-depth interviews from the National Abortion Attitudes Study, the authors find that a substantial minority of Americans morally opposed to abortion would enact discordant benevolence. They also find that people negotiate discordant benevolence by discriminating among types of help and by exercising commiseration, exemption, or discretion.

Commiseration involves understanding and sharing the other person's perspective, even if one does not agree with it. Exemption involves excusing oneself from helping, perhaps by claiming ignorance or lack of resources. Discretion involves helping in a way that minimizes the conflict with one's own values, such as by providing emotional support or practical assistance but not financial assistance.

The authors argue that discordant benevolence is a complex phenomenon that reflects the interplay of personal values, social relationships, and moral decision-making. They conclude that discordant benevolence is a significant form of social support, even though it is offered in the face of conflicting values.

In other words, the research suggests that people are willing to help others in need even when doing so conflicts with their own personal values, because they also place weight on social relationships and on helping others. They manage the conflict by discriminating among types of help or by exercising commiseration, exemption, or discretion.

Wednesday, July 14, 2021

Popularity is linked to neural coordination: Neural evidence for an Anna Karenina principle in social networks

Baek, E. C., et al. (2021)
https://doi.org/10.31234/osf.io/6fj2p

Abstract

People differ in how they attend to, interpret, and respond to their surroundings. Convergent processing of the world may be one factor that contributes to social connections between individuals. We used neuroimaging and network analysis to investigate whether the most central individuals in their communities (as measured by in-degree centrality, a notion of popularity) process the world in a particularly normative way. More central individuals had exceptionally similar neural responses to their peers and especially to each other in brain regions associated with high-level interpretations and social cognition (e.g., in the default-mode network), whereas less-central individuals exhibited more idiosyncratic responses. Self-reported enjoyment of and interest in stimuli followed a similar pattern, but accounting for these data did not change our main results. These findings suggest an “Anna Karenina principle” in social networks: Highly-central individuals process the world in exceptionally similar ways, whereas less-central individuals process the world in idiosyncratic ways.

Discussion

What factors distinguish highly-central individuals in social networks? Our results are consistent with the notion that popular individuals (who are central in their social networks) process the world around them in normative ways, whereas unpopular individuals process the world around them idiosyncratically. Popular individuals exhibited greater mean neural similarity with their peers than unpopular individuals in several regions of the brain, including ones in which similar neural responding has been associated with shared higher-level interpretations of events and social cognition (e.g., regions of the default mode network) while viewing dynamic, naturalistic stimuli. Our results indicate that the relationship between popularity and neural similarity follows an Anna Karenina principle. Specifically, we observed that popular individuals were very similar to each other in their neural responses, whereas unpopular individuals were dissimilar both to each other and to their peers’ normative way of processing the world. Our findings suggest that highly-central people process and respond to the world around them in a manner that allows them to relate to and connect with many of their peers and that less-central people exhibit idiosyncrasies that may result in greater difficulty in relating to others.
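The analysis described above combines two ingredients: each participant's in-degree centrality in the friendship network (how often peers name them as a friend) and the pairwise similarity of participants' neural response time courses. The sketch below is a minimal illustration of that idea, not the authors' actual pipeline; the toy data, variable names, and correlation-based similarity measure are my assumptions.

```python
# Minimal sketch: relate in-degree centrality ("popularity") to mean
# pairwise neural similarity. Toy data and names are illustrative only.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 20, 200

# Directed friendship nominations: edge u -> v means "u names v as a friend".
G = nx.gnp_random_graph(n_subjects, 0.2, seed=0, directed=True)
in_degree = np.array([G.in_degree(i) for i in range(n_subjects)])

# Simulated response time courses for one brain region (one row per subject).
responses = rng.standard_normal((n_subjects, n_timepoints))

# Pairwise inter-subject correlation matrix.
similarity = np.corrcoef(responses)

# Each subject's mean similarity to all peers (exclude the diagonal).
mask = ~np.eye(n_subjects, dtype=bool)
mean_similarity = (similarity * mask).sum(axis=1) / (n_subjects - 1)

# Association between popularity and normative (peer-similar) responding.
r = np.corrcoef(in_degree, mean_similarity)[0, 1]
print(f"correlation(in-degree, mean neural similarity) = {r:.3f}")
```

With random toy data this correlation hovers near zero; the paper's finding corresponds to a positive association in regions such as the default-mode network.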

Saturday, September 14, 2019

Do People Want to Be More Moral?

Jessie Sun and Geoffrey Goodwin
PsyArXiv Preprints
Originally posted August 26, 2019

Abstract

Most people want to change some aspects of their personality, but does this phenomenon extend to moral character, and to close others? Targets (N = 800) and well-acquainted informants (N = 958) rated targets’ personality traits and reported how much they wanted the target to change each trait. Targets and informants reported a lower desire to change more morally-relevant traits (e.g., honesty, compassion), compared to less morally-relevant traits (e.g., anxiety, sociability). Moreover, although targets and informants generally wanted targets to improve more on traits that targets had less desirable levels of, targets’ moral change goals were less calibrated to their current levels. Finally, informants wanted targets to change in similar ways, but to a lesser extent, than targets themselves did. These findings shed light on self–other similarities and asymmetries in personality change goals, and suggest that the general desire for self-improvement may be less prevalent in the moral domain.

From the Discussion:

Why don’t people particularly want to be more moral? One possibility is that people see less room for improvement on moral traits, especially given the relatively high ratings on these traits.  Our data cannot speak directly to this possibility, because people might not be claiming that they have the lowest or highest possible levels of each trait when they “strongly disagree” or “strongly agree” with each trait description (Blanton & Jaccard, 2006). Testing this idea would therefore require a more direct measure of where people think they stand, relative to these extremes.

A related possibility is that people are less motivated to improve moral traits because they already see themselves as being quite high on such traits, and therefore morally “good enough”—even if they think they could be morally better (see Schwitzgebel, 2019). Consistent with this idea, supplemental analyses showed that people are less inclined to change the traits that they rate themselves higher on, compared to traits that they rate themselves lower on. However, even controlling for current levels, people are still less inclined to change more morally-relevant traits (see Supplemental Material for these within-person analyses), suggesting that additional psychological factors might reduce people’s desire to change morally-relevant traits. One additional possibility is that people are more motivated to change in ways that will improve their own well-being (Hudson & Fraley, 2016). Whereas becoming less anxious has obvious personal benefits, people might believe that becoming more moral would result in few personal benefits (or even costs).


Wednesday, August 28, 2019

Profit Versus Prejudice: Harnessing Self-Interest to Reduce In-Group Bias

Stagnaro, M. N., Dunham, Y., & Rand, D. G. (2018).
Social Psychological and Personality Science, 9(1), 50–58.
https://doi.org/10.1177/1948550617699254

Abstract

We examine the possibility that self-interest, typically thought to undermine social welfare, might reduce in-group bias. We compared the dictator game (DG), where participants unilaterally divide money between themselves and a recipient, and the ultimatum game (UG), where the recipient can reject these offers. Unlike the DG, there is a self-interested motive for UG giving: If participants expect the rejection of unfair offers, they have a monetary incentive to be fair even to out-group members. Thus, we predicted substantial bias in the DG but little bias in the UG. We tested this hypothesis in two studies (N = 3,546) employing a 2 (in-group/out-group, based on abortion position) × 2 (DG/UG) design. We observed the predicted significant group by game interaction, such that the substantial in-group favoritism observed in the DG was almost entirely eliminated in the UG: Giving the recipient bargaining power reduced the premium offered to in-group members by 77.5%.

Discussion
Here we have provided evidence that self-interest has the potential to override in-group bias based on a salient and highly charged real-world grouping (abortion stance). In the DG, where participants had the power to offer whatever they liked, we saw clear evidence of behavior favoring in-group members. In the UG, where the recipient could reject the offer, acting on such biases had the potential to severely reduce earnings. Participants anticipated this, as shown by their expectations of partner behavior, and made fair offers to both in-group and out-group participants.

Traditionally, self-interest is considered a negative force in intergroup relations. For example, an individual might give free rein to a preference for interacting with similar others, and even be willing to pay a cost to satisfy those preferences, resulting in what has been called “taste-based” discrimination (Becker, 1957). Although we do not deny that such discrimination can (and often does) occur, we suggest that in the right context, the costs it can impose serve as a disincentive. In particular, when strategic concerns are heightened, as they are in multilateral interactions where the parties must come to an agreement and failing to do so is both salient and costly (such as the UG), self-interest has the opportunity to mitigate biased behavior. Here, we provide one example of such a situation: We find that participants successfully withheld bias in the UG, making equally fair offers to both in-group and out-group recipients.
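To see why bargaining power changes the proposer's calculus, consider a stylized version of the two games: in the dictator game the proposer keeps whatever is not given away, while in the ultimatum game a rejected offer leaves both players with nothing. The sketch below works through that expected-value logic; the rejection function and the specific offer amounts are illustrative assumptions, not parameters from the study.

```python
# Stylized expected payoffs for a proposer splitting a $10 pie.
# The rejection probabilities below are illustrative assumptions only.

PIE = 10.0

def dictator_payoff(offer: float) -> float:
    """Dictator game: the recipient has no veto, so the proposer keeps the rest."""
    return PIE - offer

def ultimatum_payoff(offer: float, reject_prob) -> float:
    """Ultimatum game: a rejected offer leaves the proposer with nothing."""
    return (PIE - offer) * (1.0 - reject_prob(offer))

def assumed_rejection(offer: float) -> float:
    """Toy belief: lower offers are rejected more often by the recipient."""
    return max(0.0, 0.8 - 0.15 * offer)   # e.g., a $1 offer is rejected ~65% of the time

for offer in (1.0, 3.0, 5.0):
    print(f"offer ${offer:.0f}: DG keeps ${dictator_payoff(offer):.2f}, "
          f"UG expects ${ultimatum_payoff(offer, assumed_rejection):.2f}")
```

Under these toy beliefs, a low, biased offer maximizes earnings in the DG, but a near-even split maximizes expected earnings in the UG. That is the strategic pressure the authors argue eliminates most of the in-group premium.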

Sunday, July 28, 2019

Community Standards of Deception

Levine, Emma
Booth School of Business
(June 17, 2019).
Available at SSRN: https://ssrn.com/abstract=3405538

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, I demonstrate that deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm. I identify nine implicit rules – pertaining to the targets of deception and the topic and timing of a conversation – that specify the systematic circumstances in which deception is perceived to cause unnecessary harm, and I document the causal effect of each implicit rule on the endorsement of deception. This research provides insight into when and why people value honesty, and paves the way for future research on when and why people embrace deception.

Thursday, July 16, 2015

It Pays to Be Nice

By Olga Khazan
The Atlantic
Originally published June 23, 2015

Here is an excerpt:

The conclusions of Rand’s studies support corporate do-gooders. Judging by his research, you should be nice even if you don’t trust the other person. In fact, you should keep on being nice even if the other person screws you over.

In one experiment, he found that people playing an unpredictable prisoner’s-dilemma type game benefitted from being lenient—forgiving their partner for acting against them. The same holds true in the business environment, which can be similarly “noisy,” as economists say. Sometimes, when it seems like someone is trying to undermine you, they really are. But other times, it’s just an accident. If someone doesn’t credit you for a big idea in a meeting, you can’t know if he or she just forgot, or if it was an intentional slight. According to Rand’s research, you shouldn’t, say, turn around and tattle to the boss about that person’s chronic tardiness—at least not until he or she sabotages you at least a couple more times.

“If someone did something that hurt me, and I get pissed, and I screw them over, that destroys that relationship over a mistake,” Rand said. And losing allies, especially in a cooperative environment, can be costly. In his studies, “the strategy that earns the most money is giving someone a pass and letting the person take advantage of you two or three times.”
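Rand's point about leniency in "noisy" repeated interactions can be illustrated with a toy simulation: when intended cooperation is occasionally flipped to defection by mistake, a strategy that forgives an apparent betrayal tends to out-earn one that retaliates forever. The simulation below is a rough sketch under assumed payoffs, noise level, and strategy rules, not a reproduction of Rand's experiments.

```python
# Toy noisy iterated prisoner's dilemma: forgiving vs. unforgiving strategies.
# Payoffs, noise rate, and strategy details are illustrative assumptions.
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
NOISE = 0.1          # chance an intended cooperation comes out as defection
ROUNDS = 200
TRIALS = 500

def grim(my_hist, their_hist):
    """Unforgiving: defect forever after the first observed defection."""
    return "D" if "D" in their_hist else "C"

def generous_tit_for_tat(my_hist, their_hist):
    """Forgiving: copy the partner's last move, but forgive a defection 1/3 of the time."""
    if not their_hist or their_hist[-1] == "C":
        return "C"
    return "C" if random.random() < 1 / 3 else "D"

def noisy(move):
    """Implementation noise: an intended cooperation sometimes comes out as defection."""
    return "D" if move == "C" and random.random() < NOISE else move

def average_score(strategy):
    """Average per-player, per-round score when two copies of `strategy` play each other."""
    total = 0
    for _ in range(TRIALS):
        h1, h2 = [], []
        for _ in range(ROUNDS):
            m1 = noisy(strategy(h1, h2))
            m2 = noisy(strategy(h2, h1))
            s1, s2 = PAYOFFS[(m1, m2)]
            total += s1 + s2
            h1.append(m1)
            h2.append(m2)
    return total / (2 * TRIALS * ROUNDS)

random.seed(0)
print("grim vs grim:        ", round(average_score(grim), 2))
print("generous vs generous:", round(average_score(generous_tit_for_tat), 2))
```

In this toy setup the forgiving pairing sustains mutual cooperation despite the noise, while the unforgiving pairing collapses into mutual defection after the first accidental slight, mirroring the "give someone a pass" result described above.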
