Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Social Cognition. Show all posts

Wednesday, January 6, 2021

Moral “foundations” as the product of motivated social cognition: Empathy and other psychological underpinnings of ideological divergence in “individualizing” and “binding” concerns

Strupp-Levitsky M, et al.
PLoS ONE 15(11): e0241144. 

Abstract

According to moral foundations theory, there are five distinct sources of moral intuition on which political liberals and conservatives differ. The present research program seeks to contextualize this taxonomy within the broader research literature on political ideology as motivated social cognition, including the observation that conservative judgments often serve system-justifying functions. In two studies, a combination of regression and path modeling techniques was used to explore the motivational underpinnings of ideological differences in moral intuitions. Consistent with our integrative model, the “binding” foundations (in-group loyalty, respect for authority, and purity) were associated with epistemic and existential needs to reduce uncertainty and threat, as well as with system-justification tendencies, whereas the so-called “individualizing” foundations (fairness and avoidance of harm) were generally unrelated to epistemic and existential motives and were instead linked to empathic motivation. Taken as a whole, these results are consistent with the position taken by Hatemi, Crabtree, and Smith that moral “foundations” are themselves the product of motivated social cognition.

Concluding remarks

Taken in conjunction, the results presented here lead to several conclusions that should be of relevance to social scientists who study morality, social justice, and political ideology. First, we observe that so-called “binding” moral concerns pertaining to ingroup loyalty, authority, and purity are psychologically linked to epistemic and, to a lesser extent, existential motives to reduce uncertainty and threat. Second, so-called “individualizing” concerns for fairness and avoidance of harm are not linked to these same motives. Rather, they seem to be driven largely by empathic sensitivity. Third, it would appear that theories of moral foundations and motivated social cognition are in some sense compatible, as suggested by Van Leeuwen and Park, rather than incompatible, as suggested by Haidt and by Graham and Haidt. That is, the motivational basis of conservative preferences for “binding” intuitions seems to be no different than the motivational basis for many other conservative preferences, including system justification and the epistemic and existential motives that are presumed to underlie system justification.

Saturday, November 28, 2020

Toward a Hierarchical Model of Social Cognition: A Neuroimaging Meta-Analysis and Integrative Review of Empathy and Theory of Mind

Schurz, M. et al.
Psychological Bulletin. 
Advance online publication. 

Abstract

Along with the increased interest in and volume of social cognition research, there has been higher awareness of a lack of agreement on the concepts and taxonomy used to study social processes. Two central concepts in the field, empathy and Theory of Mind (ToM), have been identified as overlapping umbrella terms for different processes of limited convergence. Here, we review and integrate evidence of brain activation, brain organization, and behavior into a coherent model of social-cognitive processes. We start with a meta-analytic clustering of neuroimaging data across different social-cognitive tasks. Results show that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research. A higher level describes more broad and abstract classes of functioning, whereas a lower one explains how functions are applied to concrete contexts given by particular stimulus and task formats. Specifically, the higher level of our model suggests 3 groups of neurocognitive processes: (a) predominantly cognitive processes, which are engaged when mentalizing requires self-generated cognition decoupled from the physical world; (b) more affective processes, which are engaged when we witness emotions in others based on shared emotional, motor, and somatosensory representations; (c) combined processes, which engage cognitive and affective functions in parallel. We discuss how these processes are explained by an underlying principal gradient of structural brain organization. Finally, we validate the model by a review of empathy and ToM task interrelations found in behavioral studies.

Public Significance Statement

Empathy and Theory of Mind are important human capacities for understanding others. Here, we present a meta-analysis of neuroimaging data from 4,207 participants, which shows that these abilities can be deconstructed into specific and partially shared neurocognitive subprocesses. Our findings provide systematic, large-scale support for the hypothesis that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research.

Sunday, October 18, 2020

Beliefs have a social purpose. Does this explain delusions?

Anna Greenburgh
psyche.co
Originally published 

Here is an excerpt:

Of course, just because a delusion has logical roots doesn’t mean it’s helpful for the person once it takes hold. Indeed, this is why delusions are an important clinical issue. Delusions are often conceptualised as sitting at the extreme end of a continuum of belief, but how can they be distinguished from other beliefs? If not irrationality, then what demarcates a delusion?

Delusions are fixed, unchanging in the face of contrary evidence, and not shared by the person’s peers. In light of the social function of beliefs, these preconditions have added significance. The coalitional model underlines that beliefs arising from adaptive cognitive processes should show some sensitivity to social context and enable successful social coordination. Delusions lack this social function and adaptability. Clinical psychologists have documented the fixity of delusional beliefs: they are more resistant to change than other types of belief, and are intensely preoccupying, regardless of the social context or interpersonal consequences. In both ‘The Yellow Wallpaper’ and the novel Don Quixote (1605-15) by Miguel de Cervantes, the protagonists’ beliefs about their surroundings are unchangeable and, if anything, become increasingly intense and disruptive. It is this inflexibility to social context, once they take hold, that sets delusions apart from other beliefs.

Across the field of mental health, research showing the importance of the social environment has spurred a great shift in the way that clinicians interact with patients. For example, research exposing the link between trauma and psychosis has resulted in more compassionate, person-centred approaches. The coalitional model of delusions can now contribute to this movement. It opens up promising new avenues of research, which integrate our fundamental social nature and the social function of belief formation. It can also deepen how people experiencing delusions are understood – instead of contributing to stigma by dismissing delusions as irrational, it considers the social conditions that gave rise to such intensely distressing beliefs.

Wednesday, July 29, 2020

Survival of the Friendliest: Homo sapiens Evolved via Selection for Prosociality

Brian Hare
Annu. Rev. Psychol. 2017. 68:155–186.

Abstract

The challenge of studying human cognitive evolution is identifying unique features of our intelligence while explaining the processes by which they arose. Comparisons with nonhuman apes point to our early-emerging cooperative-communicative abilities as crucial to the evolution of all forms of human cultural cognition, including language. The human self-domestication hypothesis proposes that these early-emerging social skills evolved when natural selection favored increased in-group prosociality over aggression in late human evolution. As a by-product of this selection, humans are predicted to show traits of the domestication syndrome observed in other domestic animals. In reviewing comparative, developmental, neurobiological, and paleoanthropological research, compelling evidence emerges for the predicted relationship between unique human mentalizing abilities, tolerance, and the domestication syndrome in humans. This synthesis includes a review of the first a priori test of the self-domestication hypothesis as well as predictions for future tests.

A pdf can be downloaded from here.

Wednesday, July 15, 2020

Empathy is both a trait and a skill. Here's how to strengthen it.

Kristen Rogers
CNN.com
Originally posted June 24, 2020

Here is an excerpt:

Types of empathy

Empathy is more about looking for a common humanity, while sympathy entails feeling pity for someone's pain or suffering, Konrath said.

"Whereas empathy is the ability to perceive accurately what another person is feeling, sympathy is compassion or concern stimulated by the distress of another," Lerner said. "A common example of empathy is accurately detecting when your child is afraid and needs encouragement. A common example of sympathy is feeling sorry for someone who has lost a loved one."

(cut)

A "common mistake is to leap into sympathy before empathically understanding what another person is feeling," Lerner said. Two types of empathy can prevent that relationship blunder.

Emotional empathy, sometimes called compassion, is more intuitive and involves care and concern for others.

Cognitive empathy requires effort and more systematic thinking, so it may lead to more empathic accuracy, Lerner said. It entails considering others and their perspectives, and imagining what it's like to be them, Konrath added.

Some work managers and colleagues, for example, have had to practice empathy for parents juggling remote work with child care and virtual learning duties, said David Anderson, senior director of national programs and outreach at the Child Mind Institute… But since the outset of the pandemic in March, that empathy has faded — reflecting the notion that cognitive empathy does take effort.

It takes work to interpret what someone is feeling by all of his cues: facial expressions, tones of voice, posture, words and more. Then you have to connect those cues with what you know about him and the situation in order to accurately infer his feelings.

"This kind of inference is a highly complex social-cognitive task" that might involve a variation of mental processes, Lerner said.

The info is here.

Friday, January 24, 2020

How One Person Can Change the Conscience of an Organization

Nicholas W. Eyrich, Robert E. Quinn, and David P. Fessell
Harvard Business Review
Originally published December 27, 2019

Here is an excerpt:

A single person with a clarity of conscience and a willingness to speak up can make a difference. Contributing to the greater good is a deep and fundamental human need. When a leader, even a mid-level or lower level leader, skillfully brings a voice and a vision, others will follow and surprising things can happen—even culture change on a large scale. While Yamada did not set out to change a culture, his actions were catalytic and galvanized the organization. As news of the new “not for profit” focus of Tres Cantos spread, many of GSK’s top scientists volunteered to work there. Yamada’s voice spoke for many others, offering a clear path and a vision for a more positive future for all.

The info is here.

Saturday, January 18, 2020

Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N., & Gray, K. (2019).
American Psychologist.

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (ΣN = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5–6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased more in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.

The research is here.

Thursday, January 16, 2020

Inaccurate group meta-perceptions drive negative out-group attributions in competitive contexts

Lees, J., Cikara, M.
Nat Hum Behav (2019)

Abstract

Across seven experiments and one survey (N = 4,282), people consistently overestimated out-group negativity towards the collective behavior of their in-group. This negativity bias in group meta-perception was present across multiple competitive (but not cooperative) intergroup contexts, and appears to be yoked to group psychology more generally; we observed negativity bias for estimation of out-group, anonymized-group, and even fellow in-group members’ perceptions. Importantly, in the context of American politics greater inaccuracy was associated with increased belief that the out-group is motivated by purposeful obstructionism. However, an intervention that informed participants of the inaccuracy of their beliefs reduced negative out-group attributions, and was more effective for those whose group meta-perceptions were more inaccurate. In sum, we highlight a pernicious bias in social judgments of how we believe ‘they’ see ‘our’ behavior, demonstrate how such inaccurate beliefs can exacerbate intergroup conflict, and provide an avenue for reducing the negative effects of inaccuracy.

From the Discussion

Our findings highlight a consistent, pernicious inaccuracy in social perception, along with how these inaccurate perceptions relate to negative attributions towards out-groups. More broadly, inaccurate and overly negative GMPs exist across multiple competitive intergroup contexts, and we find no evidence they differ across the political spectrum. This suggests that there may be many domains of intergroup interaction where inaccurate GMPs could potentially diminish the likelihood of cooperation and instead exacerbate the possibility of conflict. However, our findings also highlight a straightforward manner in which simply informing individuals of their inaccurate beliefs can reduce these negative attributions.

A version of the research can be downloaded here.

Saturday, January 11, 2020

A Semblance of Aliveness

J. Grunsven & A. Wynsberghe
Techné: Research in Philosophy and Technology
Published on December 3, 2019

While the design of sex robots is still in the early stages, the social implications of the potential proliferation of sex robots into our lives has been heavily debated by activists and scholars from various disciplines. What is missing in the current debate on sex robots and their potential impact on human social relations is a targeted look at the boundedness and bodily expressivity typically characteristic of humans, the role that these dimensions of human embodiment play in enabling reciprocal human interactions, and the manner in which this contrasts with sex robot-human interactions. Through a fine-grained discussion of these themes, rooted in fruitful but largely untapped resources from the field of enactive embodied cognition, we explore the unique embodiment of sex robots. We argue that the embodiment of the sex robot is constituted by what we term restricted expressivity and a lack of bodily boundedness and that this is the locus of negative but also potentially positive implications. We discuss the possible benefits that these two dimensions of embodiment may have for people within a specific demographic, namely some persons on the autism spectrum. Our preliminary conclusion—that the benefits and the downsides of sex robots reside in the same capability of the robot, its restricted expressivity and lack of bodily boundedness as we call it—demands we take stock of future developments in the design of sex robot embodiment. Given the importance of evidence-based research pertaining to sex robots in particular, as reinforced by Nature (2017) for drawing correlations and making claims, the analysis is intended to set the stage for future research.

The info is here.

Thursday, December 26, 2019

Is virtue signalling a perversion of morality?

Neil Levy
aeon.co
Originally posted November 29, 2019

Here is an excerpt:

If such virtue signalling is a central – and justifying – function of public moral discourse, then the claim that it perverts this discourse is false. What about the hypocrisy claim?

The accusation that virtue signalling is hypocritical might be cashed out in two different ways. We might mean that virtue signallers are really concerned with displaying themselves in the best light – and not with climate change, animal welfare or what have you. That is, we might question their motives. In their recent paper, the management scholars Jillian Jordan and David Rand asked if people would virtue signal when no one was watching. They found that their participants’ responses were sensitive to opportunities for signalling: after a moral violation was committed, the reported degree of moral outrage was reduced when the participants had better opportunities to signal virtue. But the entire experiment was anonymous, so no one could link moral outrage to specific individuals. This suggests that, while virtue signalling is part (but only part) of the explanation for why we feel certain emotions, we nevertheless genuinely feel them, and we don’t express them just because we’re virtue signalling.

The second way of cashing out the hypocrisy accusation is the thought that virtue signallers might actually lack the virtue that they try to display. Dishonest signalling is also widespread in evolution. For instance, some animals mimic the honest signal that others give of being poisonous or venomous – hoverflies that imitate wasps, for example. It’s likely that some human virtue signallers are engaged in dishonest mimicry too. But dishonest signalling is worth engaging in only when there are sufficiently many honest signallers for it to make sense to take such signals into account.

The info is here.

Saturday, November 30, 2019

Are You a Moral Grandstander?

Scott Barry Kaufman
Scientific American
Originally published October 28, 2019

Here are two excerpts:

Do you strongly agree with the following statements?

  • When I share my moral/political beliefs, I do so to show people who disagree with me that I am better than them.
  • I share my moral/political beliefs to make people who disagree with me feel bad.
  • When I share my moral/political beliefs, I do so in the hopes that people different than me will feel ashamed of their beliefs.

If so, then you may be a card-carrying moral grandstander. Of course it's wonderful to have a social cause that you believe in genuinely, and which you want to share with the world to make it a better place. But moral grandstanding comes from a different place.

(cut)

Nevertheless, since we are such a social species, the human need for social status is very pervasive, and often our attempts at sharing our moral and political beliefs on public social media platforms involve a mix of genuine motives with social status motives. As one team of psychologists put it, yes, you probably are "virtue signaling" (a closely related concept to moral grandstanding), but that doesn't mean that your outrage is necessarily inauthentic. It just means that we often have a subconscious desire to signal our virtue, which when not checked, can spiral out of control and cause us to denigrate or be mean to others in order to satisfy that desire. When the need for status predominates, we may even lose touch with what we truly believe, or even what is actually the truth.

The info is here.

Monday, November 4, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Johnson, S. G. B., & Ahn, J.
(2019, September 10).
PsyArXiv
https://doi.org/10.31234/osf.io/xetwg

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

Here is the beginning of the General Discussion:

Much of our behavior is tinged with shades of morality. How third-parties judge those behaviors has numerous social consequences: People judged as behaving immorally can be socially ostracized, less interpersonally attractive, and less able to take advantage of win–win agreements. Indeed, our desire to avoid ignominy and maintain our moral reputations motivates much of our social behavior. But on the other hand, moral judgment is subject to a variety of heuristics and biases that appear to violate normative moral theories and lead to inconsistency (Bartels, Bauman, Cushman, Pizarro, & McGraw, 2015; Sunstein, 2005). Despite the dominating influence of moral judgment in everyday social cognition, little is known about how judgments of individual acts scale up into broader judgments about sequences of actions, such as moral offsetting (a morally bad act motivates a subsequent morally good act) or self-licensing (a morally good act motivates a subsequent morally bad act). That is, we need a theory of karmic accounting—how rights and wrongs add up in moral judgment.

Sunday, October 27, 2019

Language Is the Scaffold of the Mind

Anna Ivanova
nautil.us
Originally posted September 26, 2019

Can you imagine a mind without language? More specifically, can you imagine your mind without language? Can you think, plan, or relate to other people if you lack words to help structure your experiences?

Many great thinkers have drawn a strong connection between language and the mind. Oscar Wilde called language “the parent, and not the child, of thought”; Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world”; and Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.”

After all, language is what makes us human, what lies at the root of our awareness, our intellect, our sense of self. Without it, we cannot plan, cannot communicate, cannot think. Or can we?

Imagine growing up without words. You live in a typical industrialized household, but you are somehow unable to learn the language of your parents. That means that you do not have access to education; you cannot properly communicate with your family other than through a set of idiosyncratic gestures; you never get properly exposed to abstract ideas such as “justice” or “global warming.” All you know comes from direct experience with the world.

It might seem that this scenario is purely hypothetical. There aren’t any cases of language deprivation in modern industrialized societies, right? It turns out there are. Many deaf children born into hearing families face exactly this issue. They cannot hear and, as a result, do not have access to their linguistic environment. Unless the parents learn sign language, the child’s language access will be delayed and, in some cases, missing completely.

The info is here.


Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Moral behavior is motivated largely by its expected reputational consequences, thus studying the psychology of third-party reputational judgments is key for understanding people’s behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now “spend” the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Friday, December 28, 2018

The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm

Chelsea Schein & Kurt Gray
Personality and Social Psychology Review
Volume: 22 issue: 1, page(s): 32-70
Article first published online: May 14, 2017; Issue published: February 1, 2018

Abstract

The nature of harm—and therefore moral judgment—may be misunderstood. Rather than an objective matter of reason, we argue that harm should be redefined as an intuitively perceived continuum. This redefinition provides a new understanding of moral content and mechanism—the constructionist Theory of Dyadic Morality (TDM). TDM suggests that acts are condemned proportional to three elements: norm violations, negative affect, and—importantly—perceived harm. This harm is dyadic, involving an intentional agent causing damage to a vulnerable patient (A→P). TDM predicts causal links both from harm to immorality (dyadic comparison) and from immorality to harm (dyadic completion). Together, these two processes make the “dyadic loop,” explaining moral acquisition and polarization. TDM argues against intuitive harmless wrongs and modular “foundations,” but embraces moral pluralism through varieties of values and the flexibility of perceived harm. Dyadic morality impacts understandings of moral character, moral emotion, and political/cultural differences, and provides research guidelines for moral psychology.

The review is here.

Wednesday, December 12, 2018

Social relationships more important than hard evidence in partisan politics

phys.org
Dartmouth College
Originally posted November 13, 2018

Here is an excerpt:

Three factors drive the formation of social and political groups according to the research: social pressure to have stronger opinions, the relationship of an individual's opinions to those of their social neighbors, and the benefits of having social connections.

A key idea studied in the paper is that people choose their opinions and their connections to avoid differences of opinion with their social neighbors. By joining like-minded groups, individuals also prevent the psychological stress, or "cognitive dissonance," of considering opinions that do not match their own.

"Human social tendencies are what form the foundation of that political behavior," said Tucker Evans, a senior at Dartmouth who led the study. "Ultimately, strong relationships can have more value than hard evidence, even for things that some would take as proven fact."

The information is here.

The original research is here.

Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.co
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Thursday, July 5, 2018

On the role of descriptive norms and subjectivism in moral judgment

Andrew E. Monroe, Kyle D. Dillon, Steve Guglielmo, Roy F. Baumeister
Journal of Experimental Social Psychology
Volume 77, July 2018, Pages 1-10.

Abstract

How do people evaluate moral actions, by referencing objective rules or by appealing to subjective, descriptive norms of behavior? Five studies examined whether and how people incorporate subjective, descriptive norms of behavior into their moral evaluations and mental state inferences of an agent's actions. We used experimental norm manipulations (Studies 1–2, 4), cultural differences in tipping norms (Study 3), and behavioral economic games (Study 5). Across studies, people increased the magnitude of their moral judgments when an agent exceeded a descriptive norm and decreased the magnitude when an agent fell below a norm (Studies 1–4). Moreover, this differentiation was partially explained via perceptions of agents' desires (Studies 1–2); it emerged only when the agent was aware of the norm (Study 4); and it generalized to explain decisions of trust for real monetary stakes (Study 5). Together, these findings indicate that moral actions are evaluated in relation to what most other people do rather than solely in relation to morally objective rules.

Highlights

• Five studies tested the impact of descriptive norms on judgments of blame and praise.

• What is usual, not just what is objectively permissible, drives moral judgments.

• Effects replicate even when holding behavior constant and varying descriptive norms.

• Agents had to be aware of a norm for it to impact perceivers' moral judgments.

• Effects generalize to explain decisions of trust for real monetary stakes.

The research is here.

Sunday, April 29, 2018

Who Am I? The Role of Moral Beliefs in Children’s and Adults’ Understanding of Identity

Larisa Heiphetz, Nina Strohminger, Susan A. Gelman, and Liane L. Young
Forthcoming: Journal of Experimental Social Psychology

Abstract

Adults report that moral characteristics—particularly widely shared moral beliefs—are central to identity. This perception appears driven by the view that changes to widely shared moral beliefs would alter friendships and that this change in social relationships would, in turn, alter an individual’s personal identity. Because reasoning about identity changes substantially during adolescence, the current work tested pre- and post-adolescents to reveal the role that such changes could play in moral cognition. Experiment 1 showed that 8- to 10-year-olds, like adults, judged that people would change more after changes to their widely shared moral beliefs (e.g., whether hitting is wrong) than after changes to controversial moral beliefs (e.g., whether telling prosocial lies is wrong). Following up on this basic effect, a second experiment examined whether participants regard all changes to widely shared moral beliefs as equally impactful. Adults, but not children, reported that individuals would change more if their good moral beliefs (e.g., it is not okay to hit) transformed into bad moral beliefs (e.g., it is okay to hit) than if the opposite change occurred. This difference in adults was mediated by perceptions of how much changes to each type of belief would alter friendships. We discuss implications for moral judgment and social cognitive development.

The research is here.

Monday, November 20, 2017

Why we pretend to know things, explained by a cognitive scientist

Sean Illing
Vox.com
Originally posted November 3, 2017

Why do people pretend to know things? Why does confidence so often scale with ignorance? Steven Sloman, a professor of cognitive science at Brown University, has some compelling answers to these questions.

“We're biased to preserve our sense of rightness,” he told me, “and we have to be.”

Sloman, author of The Knowledge Illusion: Why We Never Think Alone, focuses his research on judgment, decision-making, and reasoning. He's especially interested in what's called "the illusion of explanatory depth" — the term cognitive scientists use for our tendency to overestimate our understanding of how the world works.

We do this, Sloman says, because of our reliance on other minds.

“The decisions we make, the attitudes we form, the judgments we make, depend very much on what other people are thinking,” he said.

If the people around us are wrong about something, there’s a good chance we will be too. Proximity to truth compounds in the same way.

In this interview, Sloman and I talk about the problem of unjustified belief. I ask him about the political implications of his research, and if he thinks the rise of “fake news” and “alternative facts” has amplified our cognitive biases.

The interview/article is here.