Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Tribes.

Saturday, July 3, 2021

Binding moral values gain importance in the presence of close others


Yudkin, D.A., Gantman, A.P., Hofmann, W. et al. 
Nat Commun 12, 2718 (2021). 
https://doi.org/10.1038/s41467-021-22566-6

Abstract

A key function of morality is to regulate social behavior. Research suggests moral values may be divided into two types: binding values, which govern behavior in groups, and individualizing values, which promote personal rights and freedoms. Because people tend to mentally activate concepts in situations in which they may prove useful, the importance they afford moral values may vary according to whom they are with in the moment. In particular, because binding values help regulate communal behavior, people may afford these values more importance when in the presence of close (versus distant) others. Five studies test and support this hypothesis. First, we use a custom smartphone application to repeatedly record participants’ (n = 1166) current social context and the importance they afforded moral values. Results show people rate moral values as more important when in the presence of close others, and this effect is stronger for binding than individualizing values—an effect that replicates in a large preregistered online sample (n = 2016). A lab study (n = 390) and two preregistered online experiments (n = 580 and n = 752) provide convergent evidence that people afford binding, but not individualizing, values more importance when in the real or imagined presence of close others. Our results suggest people selectively activate different moral values according to the demands of the situation, and show how the mere presence of others can affect moral thinking.

From the Discussion

Our findings converge with work highlighting the practical contexts where binding values are pitted against individualizing ones. Research on the psychology of whistleblowing, for example, suggests that the decision over whether to report unethical behavior in one’s own organization reflects a tradeoff between loyalty (to one’s community) and fairness (to society in general). Other research has found that increasing or decreasing people’s “psychological distance” from a situation affects the degree to which they apply binding versus individualizing principles. For example, research shows that prompting people to take a detached (versus immersed) perspective on their own actions renders them more likely to apply impartial principles in punishing close others for moral transgressions. By contrast, inducing feelings of empathy toward others (which could be construed as increasing feelings of psychological closeness) increases people’s likelihood of showing favoritism toward them in violation of general fairness norms. Our work highlights a psychological process that might help to explain these patterns of behavior: people are more prone to act according to binding values when they are with close others precisely because that relational context activates those values in the mind.

Wednesday, July 17, 2019

Deep Ethics: The Long-Term Quest to Decide Right from Wrong

Simon Beard
www.bbc.com
Originally posted June 18, 2019

Here is an excerpt:

Our sense of right and wrong goes back a long way, so it can be helpful to distinguish between ethics and “morality”. Morality is an individual’s, largely intuitive and emotional, sense of how they should treat others. It has probably existed for hundreds of thousands of years, and maybe even in other species. Ethics, on the other hand, is a formalised set of principles that claim to represent the truth about how people should behave. For instance, while almost everyone has a strong moral sense that killing is wrong and that it simply “mustn’t be done”, ethicists have long sought to understand why killing is wrong and under what circumstances (war, capital punishment, euthanasia) it may still be permissible.

Put a small group of people together in relative isolation and this natural moral sense will usually be enough to allow them to get along. However, at some point in our history, human societies became so large and complex that new principles of organisation were needed. Originally these were likely simple buttresses to our pre-existing emotions and intuitions: invoking a supernatural parent might bring together multiple kinship groups or identifying a common enemy might keep young men from fighting each other.

However, such buttresses are inherently unstable and attempts to codify more enduring principles began shortly after our ancestors began to form stable states. From the earliest written accounts, we see appeals to what are recognisably ethical values and principles.


Friday, May 10, 2019

An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Tuesday, October 18, 2016

The desire to fit in is the root of almost all wrongdoing

Christopher Freiman
Aeon.co
Originally published September 30, 2016

Here is an excerpt:

Doing the wrong thing is, for most of us, pretty mundane. It’s not usurping political power or stealing millions of dollars. It’s nervously joining in the chorus of laughs for your co-worker’s bigoted joke or lying about your politics to appease your family at Thanksgiving dinner. We ‘go along to get along’ in defiance of what we really value or believe because we don’t want any trouble. Immanuel Kant calls this sort of excessively deferential attitude servility. Rather than downgrading the values and commitments of others, servility involves downgrading your own values and commitments relative to those of others. The servile person is thus the mirror image of the conventional, self-interested immoralist found in Plato, Hobbes and Hume. Instead of stepping on whomever is in his way to get what he wants, the servile person is, in Kant’s words, someone who ‘makes himself a worm’ and thus ‘cannot complain afterwards if people step on him’.

Kant thinks that your basic moral obligation is to not treat humanity as a mere means. When you make a lying promise that you’ll pay back a loan or threaten someone unless he hands over his wallet, you’re treating your victim as a mere means. You’re using him like a tool that exists only to serve your purposes, not respecting him as a person who has value in himself.
