Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Groups.

Friday, March 8, 2019

Seven moral rules found all around the world

University of Oxford
phys.org
Originally released February 12, 2019

Anthropologists at the University of Oxford have discovered what they believe to be seven universal moral rules.

The rules: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others' property. These were found in a survey of 60 cultures from all around the world.

Previous studies have looked at some of these rules in some places – but none has looked at all of them in a large representative sample of societies. The present study, published in Current Anthropology, is the largest and most comprehensive cross-cultural survey of morals ever conducted.

The team from Oxford's Institute of Cognitive & Evolutionary Anthropology (part of the School of Anthropology & Museum Ethnography) analysed ethnographic accounts of ethics from 60 societies, comprising over 600,000 words from over 600 sources.

Dr. Oliver Scott Curry, lead author and senior researcher at the Institute for Cognitive and Evolutionary Anthropology, said: "The debate between moral universalists and moral relativists has raged for centuries, but now we have some answers. People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them. As predicted, these seven moral rules appear to be universal across cultures. Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do."

The study tested the theory that morality evolved to promote cooperation, and that – because there are many types of cooperation – there are many types of morality. According to this theory of 'morality as cooperation', kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favours, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains why we engage in costly displays of prowess such as bravery and generosity, why we defer to our superiors, why we divide disputed resources fairly, and why we recognise prior possession.

The information is here.

Monday, February 11, 2019

Escape the echo chamber

By C. Thi Nguyen
aeon.co
Originally posted April 9, 2018

Here is an excerpt:

Epistemic bubbles also threaten us with a second danger: excessive self-confidence. In a bubble, we will encounter exaggerated amounts of agreement and suppressed levels of disagreement. We’re vulnerable because, in general, we actually have very good reason to pay attention to whether other people agree or disagree with us. Looking to others for corroboration is a basic method for checking whether one has reasoned well or badly. This is why we might do our homework in study groups, and have different laboratories repeat experiments. But not all forms of corroboration are meaningful. Ludwig Wittgenstein says: imagine looking through a stack of identical newspapers and treating each next newspaper headline as yet another reason to increase your confidence. This is obviously a mistake. The fact that The New York Times reports something is a reason to believe it, but any extra copies of The New York Times that you encounter shouldn’t add any extra evidence.

But outright copies aren’t the only problem here. Suppose that I believe that the Paleo diet is the greatest diet of all time. I assemble a Facebook group called ‘Great Health Facts!’ and fill it only with people who already believe that Paleo is the best diet. The fact that everybody in that group agrees with me about Paleo shouldn’t increase my confidence level one bit. They’re not mere copies – they actually might have reached their conclusions independently – but their agreement can be entirely explained by my method of selection. The group’s unanimity is simply an echo of my selection criterion. It’s easy to forget how carefully pre-screened the members are, how epistemically groomed social media circles might be.
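Nguyen's corroboration point can be made concrete with a toy Bayesian update (my illustration, not the essay's; the reliability value q is an assumption). Independent confirming reports each multiply the likelihood ratio in the claim's favour, while copies, or group members pre-screened for agreement, contribute that ratio at most once:

```python
# Toy model: each independent source reports correctly with probability q.
# Copies (or members selected *because* they agree) just repeat the first
# report, so they carry no additional evidential weight.

def posterior(prior: float, n_reports: int, q: float, independent: bool) -> float:
    """Posterior probability of a claim after n confirming reports."""
    lr = q / (1 - q)                                  # likelihood ratio of one report
    total_lr = lr ** n_reports if independent else lr # copies count only once
    odds = prior / (1 - prior) * total_lr
    return odds / (1 + odds)

print(posterior(0.5, 5, 0.8, independent=True))   # ~0.999: five real corroborations
print(posterior(0.5, 5, 0.8, independent=False))  # 0.800: five echoes of one report
```

The second call is the Facebook-group case: unanimity that is fully explained by the selection method leaves you exactly where a single report left you.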

The information is here.

Saturday, February 9, 2019

Are groups more competitive, more selfish-rational or more prosocial bargainers?

Ulrike Vollstädt & Robert Böhm
Journal of Behavioral and Experimental Economics
Available online 14 December 2018

Abstract

Often, it is groups rather than individuals that make decisions. In previous experiments, groups have frequently been shown to act differently from individuals in several ways. It has been claimed that inter-group interactions may be (1) more competitive, (2) more selfish-rational, or (3) more prosocial than inter-individual interactions. While some of these observed differences may be due to differences in the experimental setups, it is still not clear which of the three kinds of behavior prevails, as they have hardly been distinguishable in previous experiments. We use Rubinstein's alternating-offers bargaining game to compare inter-individual with inter-group behavior, since it allows the predictions of competitive, selfish-rational, and prosocial behavior to be separated. We find that groups are, on average, more selfish-rational bargainers than individuals, in particular when in a weak as opposed to a strong position.

From the Conclusion section:

From these four results, we could infer that groups are not more competitive than individuals, since being more competitive would mean making higher first-round demands and needing more rounds than individuals in both discount factor combinations. Nevertheless, it was not clear whether the observed behavior was more rational or more prosocial.
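To see how the alternating-offers game separates the three candidate behaviors, here is a minimal sketch (not the authors' code; the discount factors are illustrative placeholders, not the experimental parameters). The selfish-rational benchmark is Rubinstein's subgame-perfect equilibrium, in which the proposer demands (1 − δ2)/(1 − δ1·δ2), where δ1 and δ2 are the proposer's and responder's discount factors:

```python
# Stylized first-round demands under the three behavioral hypotheses in
# Rubinstein's alternating-offers bargaining game. Discount factors are
# illustrative, not the paper's parameters.

def rubinstein_spe_share(d_proposer: float, d_responder: float) -> float:
    """Proposer's first-round share in the subgame-perfect equilibrium."""
    return (1 - d_responder) / (1 - d_proposer * d_responder)

def predicted_first_demand(kind: str, d_own: float, d_other: float) -> float:
    if kind == "competitive":       # grab (essentially) the whole pie
        return 1.0
    if kind == "selfish-rational":  # equilibrium demand, position-sensitive
        return rubinstein_spe_share(d_own, d_other)
    if kind == "prosocial":         # equal split regardless of position
        return 0.5
    raise ValueError(kind)

# A patient proposer facing an impatient responder is "strong"; only the
# selfish-rational demand moves with bargaining position.
for d_own, d_other, label in [(0.9, 0.5, "strong"), (0.5, 0.9, "weak")]:
    for kind in ("competitive", "selfish-rational", "prosocial"):
        demand = predicted_first_demand(kind, d_own, d_other)
        print(f"{label:6s} {kind:16s} demand = {demand:.2f}")
```

Because only the selfish-rational prediction varies with position, comparing first-round demands in weak versus strong positions is what lets the authors tell the three behaviors apart.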

A pdf can be downloaded here.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Thursday, June 28, 2018

Making better decisions in groups

The Royal Society
Originally published May 24, 2018

The animation and briefing on making better decisions in groups are based on the research of Dr Dan Bang and Professor Chris Frith FRS.

The briefing introduces the key concepts for improving decision making in groups, with the aim of prompting Royal Society committee chairs and panel members to consider how, by pooling diverse information and different areas of expertise, groups can make better decisions than individuals.
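One piece of the underlying statistical logic can be sketched in a few lines (a toy of my own, assuming independent, unbiased individual estimates; the briefing's treatment of confidence and expertise is richer). Averaging N independent estimates shrinks the expected error by roughly a factor of the square root of N:

```python
# Toy demonstration: pooling nine independent, unbiased estimates cuts the
# average error to roughly a third (1/sqrt(9)) of a lone individual's.

import random
import statistics

TRUTH, NOISE_SD = 100.0, 15.0
GROUP_SIZE, TRIALS = 9, 10_000

individual_err, group_err = [], []
for _ in range(TRIALS):
    estimates = [random.gauss(TRUTH, NOISE_SD) for _ in range(GROUP_SIZE)]
    individual_err.append(abs(estimates[0] - TRUTH))           # one member alone
    group_err.append(abs(statistics.mean(estimates) - TRUTH))  # pooled judgment

print(f"mean individual error: {statistics.mean(individual_err):5.2f}")
print(f"mean group error:      {statistics.mean(group_err):5.2f}")
```

The gain evaporates if members' errors are correlated, which is why the briefing stresses pooling diverse information rather than simply adding more like-minded voices.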

Tuesday, May 29, 2018

Choosing partners or rivals

The Harvard Gazette
Originally published April 27, 2018

Here is the conclusion:

“The interesting observation is that natural selection always chooses either partners or rivals,” Nowak said. “If it chooses partners, the system naturally moves to cooperation. If it chooses rivals, it goes to defection, and is doomed. An approach like ‘America First’ embodies a rival strategy which guarantees the demise of cooperation.”

In addition to shedding light on how cooperation might evolve in a society, Nowak believes the study offers an instructive example of how to foster cooperation among individuals.

“With the partner strategy, I have to accept that sometimes I’m in a relationship where the other person gets more than me,” he said. “But I can nevertheless provide an incentive structure where the best thing the other person can do is to cooperate with me.

“So the best I can do in this world is to play a strategy such that the other person gets the maximum payoff if they always cooperate,” he continued. “That strategy does not prevent a situation where the other person, to some extent, exploits me. But if they exploit me, they get a lower payoff than if they fully cooperated.”
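Nowak's closing remarks describe what is often modeled as a "generous" memory-one strategy in the iterated prisoner's dilemma. A small simulation sketch (my illustration, using the textbook payoffs T=5, R=3, P=1, S=0 rather than anything from the article) shows the partner property he describes: against generous tit-for-tat, the co-player's long-run payoff is maximized by full cooperation, and exploitation still pays the exploiter less:

```python
# Generous tit-for-tat (GTFT) as a "partner" strategy: it can be exploited,
# but the co-player earns the most by always cooperating. Payoffs are the
# standard illustrative prisoner's-dilemma values, not from the article.

import random

T, R, P, S = 5, 3, 1, 0
ROUNDS = 200_000
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def gtft(opponent_last: str, generosity: float = 1 / 3) -> str:
    """Cooperate after a C; forgive a D with probability `generosity`."""
    return "C" if opponent_last == "C" or random.random() < generosity else "D"

def long_run_payoffs(coplayer_coop_prob: float) -> tuple[float, float]:
    """Average per-round payoffs (GTFT, co-player) against a co-player who
    cooperates with a fixed probability each round."""
    mine_total = theirs_total = 0
    their_last = "C"
    for _ in range(ROUNDS):
        mine = gtft(their_last)
        theirs = "C" if random.random() < coplayer_coop_prob else "D"
        pay_mine, pay_theirs = PAYOFF[(mine, theirs)]
        mine_total += pay_mine
        theirs_total += pay_theirs
        their_last = theirs
    return mine_total / ROUNDS, theirs_total / ROUNDS

for q in (1.0, 0.5, 0.0):
    g, c = long_run_payoffs(q)
    print(f"co-player cooperates {q:4.0%}: GTFT earns {g:.2f}, co-player earns {c:.2f}")
```

With these payoffs the co-player earns about 3.00 when fully cooperative, about 2.83 when cooperating half the time, and about 2.33 when always defecting: exploitation is possible but never the co-player's best move, which is exactly the incentive structure Nowak describes.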

The information is here.

Tuesday, June 27, 2017

Resisting Temptation for the Good of the Group: Binding Moral Values and the Moralization of Self-Control

Mooijman, Marlon; Meindl, Peter; Oyserman, Daphna; Monterosso, John; Dehghani, Morteza; Doris, John M.; Graham, Jesse
Journal of Personality and Social Psychology, June 12, 2017.

Abstract

When do people see self-control as a moral issue? We hypothesize that the group-focused “binding” moral values of Loyalty/betrayal, Authority/subversion, and Purity/degradation play a particularly important role in this moralization process. Nine studies provide support for this prediction. First, moralization of self-control goals (e.g., losing weight, saving money) is more strongly associated with endorsing binding moral values than with endorsing individualizing moral values (Care/harm, Fairness/cheating). Second, binding moral values mediate the effect of other group-focused predictors of self-control moralization, including conservatism, religiosity, and collectivism. Third, guiding participants to consider morality as centrally about binding moral values increases moralization of self-control more than guiding participants to consider morality as centrally about individualizing moral values. Fourth, we replicate our core finding that moralization of self-control is associated with binding moral values across studies differing in measures and design—whether we measure the relationship between moral and self-control language across time, the perceived moral relevance of self-control behaviors, or the moral condemnation of self-control failures. Taken together, our findings suggest that self-control moralization is primarily group-oriented and is sensitive to group-oriented cues.

The article is here.

Sunday, February 5, 2017

Group-focused morality is associated with limited conflict detection and resolution capacity: Neuroanatomical evidence

Nash, Kyle, Baumgartner, Thomas, & Knoch, Daria
Biological Psychology
Volume 123, February 2017, Pages 235–240

Abstract

Group-focused moral foundations (GMFs) – moral values that help protect the group’s welfare – sharply divide conservatives from liberals and the religiously devout from non-believers. However, there is little evidence about what drives this divide. Moral foundations theory and the model of motivated social cognition both associate group-focused moral foundations with differences in conflict detection and resolution capacity, but in opposing directions. Individual differences in conflict detection and resolution implicate specific neuroanatomical differences. Examining neuroanatomy thus affords an objective and non-biased opportunity to contrast these influential theories. Here, we report that increased adherence to group-focused moral foundations was strongly associated (whole-brain corrected) with reduced gray matter volume in key regions of the conflict detection and resolution system (anterior cingulate cortex and lateral prefrontal cortex). Because reduced gray matter is reliably associated with reduced neural and cognitive capacity, these findings support the idea outlined in the model of motivated social cognition that belief in group-focused moral values is associated with reduced conflict detection and resolution capacity.

The article is here.

Tuesday, November 29, 2016

Everyone Thinks They’re More Moral Than Everyone Else

By Cari Romm
New York Magazine - The Science of Us
Originally posted November 15, 2016

There’s been a lot of talk over the past week about the “filter bubble” — the ideological cocoon that each of us inhabits, blinding us to opposing views. As my colleague Drake wrote the day after the election, the filter bubble is why so many people were so blindsided by Donald Trump’s win: They only saw, and only read, stories assuming that it wouldn’t happen.

Our filter bubbles are defined by the people and ideas we choose to surround ourselves with, but each of us also lives in a one-person bubble of sorts, viewing the world through our own distorted sense of self. The way we view ourselves in relation to others is a constant tug-of-war between two opposing forces: On one end of the spectrum is something called illusory superiority, a psychological quirk in which we tend to assume that we’re better than average — past research has found it to be true in people estimating their own driving skills, parents’ perceived ability to catch their kid in a lie, even cancer patients’ estimates of their own prognoses. And on the other end of the spectrum, there’s “social projection,” or the assumption that other people share your abilities or beliefs.

Thursday, March 17, 2016

Why being good is a miracle

By Michael Bond
The New Scientist
Originally published March 9, 2016

Here is an excerpt:

A quick look at leading evolutionary anthropologist Michael Tomasello’s A Natural History of Human Morality will tell you that this flux in social norms is all of a piece with group psychology. Interests and identities within groups often seem to hold sway over those of individuals. This can seem irrational, but in the context of our evolutionary history it is anything but. Tomasello aims to describe not only how these “us and them” attitudes evolved, but also how they came to define our sense of right and wrong.

The article is here.

Wednesday, March 2, 2016

Beyond the paleo

Our morality may be a product of natural selection, but that doesn’t mean it’s set in stone

by Russell Powell & Allen Buchanan
Aeon Magazine
Originally published December 12, 2013

For centuries now, conservative thinkers have argued that significant social reform is impossible, because human nature is inherently limited. The argument goes something like this: sure, it would be great to change the world, but it will never work, because people are too flawed, lacking the ability to see beyond their own interests and those of the groups to which they belong. They have permanent cognitive, motivational and emotional deficits that make any deliberate, systematic attempt to improve human society futile at best. Efforts to bring about social or moral progress are naive about the natural limits of the human animal and tend to have unintended consequences. They are likely to make things worse rather than better.

It’s tempting to nod along at this, and think humans are irredeemable, or at best, permanently flawed. But it’s not clear that such a view stands up to empirical scrutiny. For the conservative argument to prevail, it is not enough that humans exhibit tendencies toward selfishness, group-mindedness, partiality toward kin and kith, apathy toward strangers, and the like. It must also be the case that these tendencies are unalterable, either due to the inherent constraints of human psychology or to our inability to figure out how to modify these constraints without causing greater harms. The trouble is, these assumptions about human nature are largely based on anecdote or selective and controversial readings of history. A more thorough look at the historical record suggests they are due for revision.

The article is here.

Thursday, January 21, 2016

Intuition, deliberation, and the evolution of cooperation

Adam Bear and David G. Rand
PNAS, 2016, early edition. doi:10.1073/pnas.1517780113

Abstract

Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
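The model's core mechanics can be caricatured in a short evolutionary simulation (a stripped-down toy of mine, not the authors' model; the payoffs, reciprocity rate, and selection scheme are all illustrative). Each agent carries an intuitive cooperation probability and a deliberation threshold, and pays a randomly sampled cost to deliberate only when that cost falls below its threshold:

```python
# Toy dual-process evolution model: agents either act on a generalized
# intuition or pay a sampled cost to deliberate, in which case they
# cooperate only when the game involves reciprocity. All parameters are
# illustrative, not the paper's.

import random

B, C = 4.0, 1.0          # benefit returned / cost paid for cooperating
P_RECIPROCAL = 0.8       # fraction of games involving reciprocity
POP, GENS, GAMES = 200, 500, 20

def play(strategy: tuple[float, float]) -> float:
    """Payoff from one game. Cooperating costs C and pays B back only in
    reciprocal games; deliberation tailors the action to the game type."""
    intuitive_coop, threshold = strategy
    reciprocal = random.random() < P_RECIPROCAL
    cost_of_thinking = random.random()
    payoff = 0.0
    if cost_of_thinking < threshold:            # deliberate this round
        cooperate = reciprocal
        payoff -= cost_of_thinking
    else:                                       # fall back on intuition
        cooperate = random.random() < intuitive_coop
    if cooperate:
        payoff += (B - C) if reciprocal else -C
    return payoff

def mutate(x: float) -> float:
    return min(1.0, max(0.0, x + random.gauss(0, 0.05)))

pop = [(random.random(), random.random()) for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=lambda s: sum(play(s) for _ in range(GAMES)),
                    reverse=True)
    survivors = scored[: POP // 2]              # truncation selection
    offspring = [(mutate(s[0]), mutate(s[1]))
                 for s in random.choices(survivors, k=POP - len(survivors))]
    pop = survivors + offspring

print(f"mean intuitive cooperation: {sum(s[0] for s in pop) / POP:.2f}")
print(f"mean deliberation threshold: {sum(s[1] for s in pop) / POP:.2f}")
```

With reciprocity this common, runs typically end near full intuitive cooperation with a modest deliberation threshold: agents cooperate by default and occasionally pay to deliberate their way into defecting in one-shot games, echoing the dual-process agents the abstract describes. The toy flattens reciprocity into a fixed payoff bonus, so it does not capture the strategic feedback that drives the paper's stronger result that deliberation never evolves to override selfish impulses.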

The article is here.