Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, June 14, 2024

What does my group consider moral?: How social influence shapes moral expressions

del Rosario, K., Van Bavel, J. J., & West, T.
PsyArXiv (2024, May 8).

Abstract

Although morality is often characterized as a set of stable values that are deeply held, we argue that moral expressions are highly malleable and sensitive to social norms. For instance, norms can either lead people to exaggerate their expressions of morality (such as on social media) or restrain them (such as in professional settings). In this paper, we discuss why moral expressions are subject to social influence by considering two goals that govern social influence: affiliation goals (the desire to affiliate with one’s group) and accuracy goals (the desire to be accurate in ambiguous situations). Different from other domains of social influence, we argue that moral expressions often satisfy both affiliation goals (“I want to fit in with the group”) and accuracy goals (“I want to do the right thing”). As such, the fundamental question governing moral expressions is: “what does my group consider moral?” We argue that this central consideration achieves both goals underlying social influence and drives moral expressions. We outline the ways in which social influence shapes moral expressions, from unconsciously copying others’ behavior to expressing outrage to gain status within the group. Finally, we describe when the same goals can result in different behaviors, highlighting how context-specific norms can encourage (or discourage) moral expressions. We explain how this framework will be helpful in understanding how identity, norms, and social contexts shape moral expressions.

Conclusion

Our review examines moral expressions through the lens of social influence, illustrating the critical role of the social environment in shaping moral expressions. Moral expressions serve a social purpose, such as affiliating with a group, and are influenced by various goals, including understanding the appropriate emotional response to moral issues and conforming to others' expressions to fit in. These influences become evident in different contexts, where norms either encourage exaggerated expressions, like on social media, or restraint, such as in professional settings. For this reason, different forms of influence can have vastly different implications. As such, the fundamental social question governing moral expressions for people in moral contexts is: “What does my group consider moral?” However, much of the morality literature does not account for the role of social influence in moral expressions. Thus, a social norms framework will be helpful in understanding how social contexts shape moral expression.

Here is a summary:

The research argues that moral expressions (outward displays of emotions related to right and wrong) are highly malleable and shaped by social norms and contexts, contrary to the view that morality reflects stable convictions. It draws from research on normative influence (conforming to gain social affiliation) and informational influence (seeking accuracy in ambiguous situations) to explain how moral expressions aim to satisfy both affiliation goals ("fitting in with the group") and accuracy goals ("doing the right thing").

The key points are:
  1. Moral expressions vary across contexts because people look to their social groups to determine what is considered moral behavior.
  2. Affiliation goals (fitting in) and accuracy goals (being correct) are intertwined for moral expressions, unlike in other domains where they are distinct.
  3. Social influence shapes moral expressions in various ways, from unconscious mimicry to outrage expressions for gaining group status.
  4. Context-specific norms can encourage or discourage moral expressions by prioritizing affiliation over accuracy goals, or vice versa.
  5. The motivation to be seen as moral contributes to the malleability of moral expressions across social contexts.

Sunday, July 30, 2023

Social influences in the digital era: When do people conform more to a human being or an artificial intelligence?

Riva, P., Aureli, N., & Silvestrini, F. 
(2022). Acta Psychologica, 229, 103681. 

Abstract

The spread of artificial intelligence (AI) technologies in ever-widening domains (e.g., virtual assistants) increases the chances of daily interactions between humans and AI. But can non-human agents influence human beings and perhaps even surpass the power of the influence of another human being? This research investigated whether people faced with different tasks (objective vs. subjective) could be more influenced by the information provided by another human being or an AI. We expected greater AI (vs. other humans) influence in objective tasks (i.e., based on a count and only one possible correct answer). By contrast, we expected greater human (vs. AI) influence in subjective tasks (based on attributing meaning to evocative images). In Study 1, participants (N = 156) completed a series of trials of an objective task to provide numerical estimates of the number of white dots pictured on black backgrounds. Results showed that participants conformed more with the AI's responses than the human ones. In Study 2, participants (N = 102) in a series of subjective tasks observed evocative images associated with two concepts ostensibly provided, again, by an AI or a human. Then, they rated how each concept described the images appropriately. Unlike the objective task, in the subjective one, participants conformed more with the human than the AI's responses. Overall, our findings show that under some circumstances, AI can influence people above and beyond the influence of other humans, offering new insights into social influence processes in the digital era.

Conclusion

Our research might offer new insights into social influence processes in the digital era. The results showed that people can conform more to non-human agents (than human ones) in a digital context under specific circumstances. For objective tasks eliciting uncertainty, people might be more prone to conform to AI agents than another human being, whereas for subjective tasks, other humans may continue to be the most credible source of influence compared with AI agents. These findings highlight the relevance of matching agents and the type of task to maximize social influence. Our findings could be important for non-human agent developers, showing under which circumstances a human is more prone to follow the guidance of non-human agents. Proposing a non-human agent in a task in which it is not so trusted could be suboptimal. Conversely, in objective-type tasks that elicit uncertainty, it might be advantageous to emphasize the nature of the agent as artificial intelligence, rather than trying to disguise the agent as human (as some existing chatbots tend to do). In conclusion, it is important to consider, on the one hand, that non-human agents can become credible sources of social influence and, on the other hand, the match between the type of agent and the type of task.

Summary:

The first study found that people conformed more to AI than to human sources on objective tasks, such as estimating the number of white dots on a black background. The second study found that people conformed more to human than to AI sources on subjective tasks, such as attributing meaning to evocative images.

The authors conclude that the findings of their studies suggest that AI can be a powerful source of social influence, especially on objective tasks. However, they also note that the literature on AI and social influence is still limited, and more research is needed to understand the conditions under which AI can be more or less influential than human sources.

Key points:
  • The spread of AI technologies is increasing the chances of daily interactions between humans and AI.
  • Research has shown that people can be influenced by AI on objective tasks, but they may be more influenced by humans on subjective tasks.
  • More research is needed to understand the conditions under which AI can be more or less influential than human sources.

Wednesday, August 3, 2022

Predictors and consequences of intellectual humility

Porter, T., Elnakouri, A., Meyers, E. A., et al.
Nat Rev Psychol (2022). 
https://doi.org/10.1038/s44159-022-00081-9

Abstract

In a time of societal acrimony, psychological scientists have turned to a possible antidote — intellectual humility. Interest in intellectual humility comes from diverse research areas, including researchers studying leadership and organizational behaviour, personality science, positive psychology, judgement and decision-making, education, culture, and intergroup and interpersonal relationships. In this Review, we synthesize empirical approaches to the study of intellectual humility. We critically examine diverse approaches to defining and measuring intellectual humility and identify the common element: a meta-cognitive ability to recognize the limitations of one’s beliefs and knowledge. After reviewing the validity of different measurement approaches, we highlight factors that influence intellectual humility, from relationship security to social coordination. Furthermore, we review empirical evidence concerning the benefits and drawbacks of intellectual humility for personal decision-making, interpersonal relationships, scientific enterprise and society writ large. We conclude by outlining initial attempts to boost intellectual humility, foreshadowing possible scalable interventions that can turn intellectual humility into a core interpersonal, institutional and cultural value.

Importance of intellectual humility

The willingness to recognize the limits of one’s knowledge and fallibility can confer societal and individual benefits, if expressed in the right moment and to the proper extent. This insight echoes the philosophical roots of intellectual humility as a virtue. State and trait intellectual humility have been associated with a range of cognitive, social and personality variables (Table 2). At the societal level, intellectual humility can promote societal cohesion by reducing group polarization and encouraging harmonious intergroup relationships. At the individual level, intellectual humility can have important consequences for wellbeing, decision-making and academic learning.

Notably, empirical research has provided little evidence regarding the generalizability of the benefits or drawbacks of intellectual humility beyond the unique contexts of WEIRD (Western, educated, industrialized, rich and democratic) societies. With this caveat, below is an initial set of findings concerning the implications of possessing high levels of intellectual humility. Unless otherwise specified, the evidence below concerns trait-level intellectual humility. After reviewing these benefits, we consider attempts to improve an individual’s intellectual humility and confer associated benefits.

Social implications

People who score higher in intellectual humility are more likely to display tolerance of opposing political and religious views, exhibit less hostility toward members of those opposing groups, and are more likely to resist derogating outgroup members as intellectually and morally bankrupt. Although intellectually humbler people are capable of intergroup prejudice, they are more willing to question themselves and to consider rival viewpoints. Indeed, people with greater intellectual humility display less myside bias, expose themselves to opposing perspectives more often and show greater openness to befriending outgroup members on social media platforms. By comparison, people with lower intellectual humility display features of cognitive rigidity and are more likely to hold inflexible opinions and beliefs.

Monday, April 11, 2022

Distinct neurocomputational mechanisms support informational and socially normative conformity

Mahmoodi A, Nili H, et al.
(2022) PLoS Biol 20(3): e3001565. 
https://doi.org/10.1371/journal.pbio.3001565

Abstract

A change of mind in response to social influence could be driven by informational conformity to increase accuracy, or by normative conformity to comply with social norms such as reciprocity. Disentangling the behavioural, cognitive, and neurobiological underpinnings of informational and normative conformity has proven elusive. Here, participants underwent fMRI while performing a perceptual task that involved both advice-taking and advice-giving to human and computer partners. The concurrent inclusion of 2 different social roles and 2 different social partners revealed distinct behavioural and neural markers for informational and normative conformity. Dorsal anterior cingulate cortex (dACC) BOLD response tracked informational conformity towards both human and computer but tracked normative conformity only when interacting with humans. A network of brain areas (dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (TPJ)) that tracked normative conformity increased their functional coupling with the dACC when interacting with humans. These findings enable differentiating the neural mechanisms by which different types of conformity shape social changes of mind.

Discussion

A key feature of adaptive behavioural control is our ability to change our mind as new evidence comes to light. Previous research has identified dACC as a neural substrate for changes of mind in both nonsocial situations, such as when receiving additional evidence pertaining to a previously made decision, and social situations, such as when weighing up one’s own decision against the recommendation of an advisor. However, unlike the nonsocial case, the role of dACC in social changes of mind can be driven by different, and often competing, factors that are specific to the social nature of the interaction. In particular, a social change of mind may be driven by a motivation to be correct, i.e., informational influence. Alternatively, a social change of mind may be driven by reasons unrelated to accuracy—such as social acceptance—a process called normative influence. To date, studies on the neural basis of social changes of mind have not disentangled these processes. It has therefore been unclear how the brain tracks and combines informational and normative factors.

Here, we leveraged a recently developed experimental framework that separates humans’ trial-by-trial conformity into informational and normative components to unpack the neural basis of social changes of mind. On each trial, participants first made a perceptual estimate and reported their confidence in it. In support of our task rationale, we found that, while participants’ changes of mind were affected by confidence (i.e., informational) in both human and computer settings, they were only affected by the need to reciprocate influence (i.e., normative) specifically in the human–human setting. It should be noted that participants’ perception of their partners’ accuracy is also an important factor in social change of mind (we tend to change our mind towards the more accurate participants). 
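A common way to quantify trial-by-trial conformity in advice-taking paradigms like this one is the "weight of advice." The sketch below is a generic illustration of that metric, not the paper's analysis code; the function name and all numbers are invented for the example.

```python
# Weight of advice (WOA): a standard conformity metric from the
# advice-taking literature, shown here as an illustrative sketch --
# it is an assumption that the study's measure works like this.
# WOA = (final - initial) / (advice - initial):
#   0 -> the advice was ignored; 1 -> the advice was fully adopted.

def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Fraction of the distance toward the partner's estimate that the
    participant moved, clipped to [0, 1]."""
    if advice == initial:            # no discrepancy: conformity undefined
        return 0.0
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))

# Hypothetical trial: first estimate 40 dots, the partner (human or
# computer) reports 60, and the revised estimate is 55.
print(weight_of_advice(40, 60, 55))  # -> 0.75
```

Averaging this quantity separately over human-partner and computer-partner trials is one simple way to compare conformity toward the two agent types.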

Thursday, January 27, 2022

Many heads are more utilitarian than one

Keshmirian, A., Deroy, O., & Bahrami, B.
Cognition
Volume 220, March 2022, 104965

Abstract

Moral judgments have a very prominent social nature, and in everyday life, they are continually shaped by discussions with others. Psychological investigations of these judgments, however, have rarely addressed the impact of social interactions. To examine the role of social interaction on moral judgments within small groups, we had groups of 4 to 5 participants judge moral dilemmas first individually and privately, then collectively and interactively, and finally individually a second time. We employed both real-life and sacrificial moral dilemmas in which the character's action or inaction violated a moral principle to benefit the greatest number of people. Participants decided if these utilitarian decisions were morally acceptable or not. In Experiment 1, we found that collective judgments in face-to-face interactions were more utilitarian than the statistical aggregate of their members compared to both first and second individual judgments. This observation supported the hypothesis that deliberation and consensus within a group transiently reduce the emotional burden of norm violation. In Experiment 2, we tested this hypothesis more directly: measuring participants' state anxiety in addition to their moral judgments before, during, and after online interactions, we found again that collectives were more utilitarian than those of individuals and that state anxiety level was reduced during and after social interaction. The utilitarian boost in collective moral judgments is probably due to the reduction of stress in the social setting.
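The comparison between collective judgments and the "statistical aggregate of their members" can be made concrete with a toy baseline. The sketch below is hypothetical: it uses a simple majority vote over members' private judgments as the aggregate, while the paper's actual statistical procedure may differ, and the data are invented.

```python
# Illustrative sketch (not the study's analysis code): compare groups'
# observed collective judgments against a "statistical aggregate" of
# their members -- here, a majority vote over the private individual
# judgments made before discussion. All data are hypothetical.

def majority_baseline(private_judgments):
    """Predicted group judgment if discussion added nothing:
    1 (utilitarian) when a strict majority privately judged utilitarian."""
    return int(sum(private_judgments) > len(private_judgments) / 2)

# Each tuple: (members' private judgments, observed collective judgment
# after face-to-face discussion); 1 = utilitarian, 0 = deontological.
groups = [
    ([1, 0, 0, 1], 1),
    ([0, 0, 1, 0, 1], 1),
    ([1, 1, 0, 0], 1),
]

predicted = [majority_baseline(p) for p, _ in groups]
observed = [c for _, c in groups]
# A positive difference indicates a utilitarian boost beyond aggregation.
print(sum(observed) - sum(predicted))  # -> 3
```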

Highlights

• Collective consensual judgments made via group interactions were more utilitarian than individual judgments.

• Group discussion did not change the individual judgments, indicating a normative conformity effect.

• Individuals consented to a group judgment that they did not necessarily buy into personally.

• Collectives were less stressed than individuals after responding to moral dilemmas.

• Interactions reduced aversive emotions (e.g., stress) associated with violation of moral norms.

From the Discussion

Our analysis revealed that groups, in comparison to individuals, are more utilitarian in their moral judgments. Thus, our findings are inconsistent with Virtue-Signaling (VS), which proposed the opposite effect. Crucially, the collective utilitarian boost was short-lived: it was only seen at the collective level and not when participants rated the same questions individually again. Previous research shows that moral change at the individual level, as the result of social deliberation, is rather long-lived and not transient (e.g., see Ueshima et al., 2021). Thus, this collective utilitarian boost could not have resulted from deliberation and reasoning or from the conscious application of utilitarian principles with authentic reasons to maximize the total good. If this were the case, the effect would have persisted in the second individual judgment as well. That was not what we observed. Consequently, our findings are inconsistent with the Social Deliberation (SD) hypotheses.

Monday, November 15, 2021

On Defining Moral Enhancement: A Clarificatory Taxonomy

Carl Jago
Journal of Experimental Social Psychology
Volume 95, July 2021, 104145

Abstract

In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions is suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

From the General Discussion

In multiple experiments aimed at examining the influence of base rates per se, we found that base rates do indeed influence judgments, but the size of the effect we observed was very small. We considered that, in discovering moral judgments’ resilience to influence from base rates, we may have only rediscovered a general tendency, such as the fundamental attribution error, whereby people discount situational factors. If so, this tendency would then also apply broadly to non-moral scenarios. We therefore conducted another study in which our experimental materials were modified so as to remove the moral components. We found a substantial base-rate effect on participants’ judgments of performance regarding non-moral behavior. This finding suggests that the resilience to base rates observed in the preceding studies is unlikely the result of a more general tendency, and may instead be unique to moral judgment.

The main reasons why we concluded that the results from the most closely related extant research could not answer the present research question were the involvement in those studies of injunctive norms and special reference groups. To confirm that these factors could inflate any influence of base rates on moral judgment, in the final pair of studies, we modified our experiments so as to include them. Specifically, in one study, we crossed prescriptive and proscriptive injunctive norms with high and low base rates and found that the impact of an injunctive norm outweighs any impact of the base rate. In the other study, we found that simply mentioning, for example, that there were some good people among those who engaged in a high base-rate behavior resulted in a large effect on moral judgment; not only on judgments of the target’s character, but also on judgments of blame and wrongness.

Saturday, October 30, 2021

Psychological barriers to effective altruism: An evolutionary perspective

Jaeger, B., & van Vugt, M.
Current Opinion in Psychology
Available online 17 September 2021

Abstract

People usually engage in (or at least profess to engage in) altruistic acts to benefit others. Yet, they routinely fail to maximize how much good is achieved with their donated money and time. An accumulating body of research has uncovered various psychological factors that can explain why people’s altruism tends to be ineffective. These prior studies have mostly focused on proximate explanations (e.g., emotions, preferences, lay beliefs). Here, we adopt an evolutionary perspective and highlight how three fundamental motives—parochialism, status, and conformity—can explain many seemingly disparate failures to do good effectively. Our approach outlines ultimate explanations for ineffective altruism and we illustrate how fundamental motives can be leveraged to promote more effective giving.

Summary and Implications

Even though donors and charities often highlight their desire to make a difference in the lives of others, an accumulating body of research demonstrates that altruistic acts are surprisingly ineffective in maximizing others’ welfare. To explain ineffective altruism, previous investigations have largely focused on the role of emotions, beliefs, preferences, and other proximate causes. Here, we adopted an evolutionary perspective to understand why these proximate mechanisms evolved in the first place. We outlined how three fundamental motives that likely evolved because they helped solve key challenges in humans’ ancestral past—parochialism, status, and conformity—can create psychological barriers to effective giving. Our framework not only provides a parsimonious explanation for many proximate causes of ineffective giving, it also provides an ultimate explanation for why these mechanisms exist.

Although parochialism, status concerns, and conformity can explain many forms of ineffective giving, there are additional causes that we did not address here. For example, many people focus too much on overhead costs when deciding where to donate. Everyday altruism is multi-faceted: People donate to charity, volunteer, give in church, and engage in various random acts of kindness. These diverse acts of altruism likely require diverse explanations, and more research is needed to understand the relative importance of different psychological factors for explaining different forms of altruism. Moreover, of the three fundamental motives reviewed here, conformity to social norms has probably received the least attention when it comes to explaining ineffective altruism. While there is ample evidence showing that social norms affect the decision of whether and how much to donate, more research is needed to understand how social norms influence the decision of where to donate and how they can lead to ineffective giving.

Saturday, February 27, 2021

Following your group or your morals? The in-group promotes immoral behavior while the out-group buffers against it

Vives, M., Cikara, M., & FeldmanHall, O. 
(2021, February 5). 
https://doi.org/10.31234/osf.io/jky9h

Abstract

People learn by observing others, albeit not uniformly. Witnessing an immoral behavior causes observers to commit immoral actions, especially when the perpetrator is part of the in-group. Does conformist behavior hold when observing the out-group? We conducted three experiments (N=1,358) exploring how observing an (im)moral in-/out-group member changed decisions relating to justice: punitive, selfish, or dishonest choices. Only immoral in-groups increased immoral actions, while the same immoral behavior from out-groups had no effect. In contrast, a compassionate or generous individual did not make people more moral, regardless of group membership. When there was a loophole to deny cheating, neither an immoral in- nor out-group member changed dishonest behavior. Compared to observing an honest in-group member, people became more honest themselves after observing an honest out-group member, revealing that out-groups can enhance morality. Depending on the severity of the moral action, the in-group licenses immoral behavior while the out-group buffers against it.

General discussion

Choosing compassion over punishment, generosity over selfishness, and honesty over dishonesty is the byproduct of many factors, including virtue-signaling, norm compliance, and self-interest. There are times, however, when moral choices are shaped by the mere observation of what others do in the same situation (Gino & Galinsky, 2012; Nook et al., 2016). Here, we investigated how moral decisions are shaped by one’s in- or out-group—a factor known to shift willingness to conform (Gino et al., 2009). Conceptually replicating past research (Gino et al., 2009), results reveal that immoral behaviors were only transmitted by the in-group: while participants became more punitive or selfish after observing a punitive or selfish in-group, they did not increase their immoral behavior after observing an immoral out-group (Experiments 1 & 2). However, when the same manipulation was deployed in a context where the immoral acts could not be traced, neither the dishonest in- nor out-group member produced any behavioral shifts in our subjects (Experiment 3). These results suggest that immoral behaviors are not transmitted equally by all individuals. Rather, they are more likely to be transmitted within groups than between groups. In contrast, pro-social behaviors were rarely transmitted by either group. Participants did not become more compassionate or generous after observing a compassionate or generous in- or out-group member (Experiments 1 & 2). We only find modifications for prosocial behavior when participants observe another participant behaving in a costly honest manner, and this was modulated by group membership. Witnessing an honest out-group member attenuated the degree to which participants themselves cheated compared to participants who witnessed an honest in-group member (see Table 1 for a summary of results).
Together, these findings suggest that the transmission of moral corruption is both determined by group membership and is sensitive to the degree of moral transgression. Namely, given the findings from Experiment 3, in-groups appear to license moral corruption, while virtuous out-groups can buffer against it.


Wednesday, June 17, 2020

Vice dressed as virtue

Paul Russell
aeon.com
Originally published 22 May 2020

Here is an excerpt:

When I speak of moralism, in this context, what I am concerned with, in general terms, is the misuse of morality for ends and purposes that are themselves vicious or corrupt. Moralisers present the facade of genuine moral concern but their real motivations rest with interests and satisfactions of a very different character. When these motivations are unmasked, they are shown to be tainted and considerably less attractive than we suppose. Among these motivations are cruelty, malice and sadism. Not all forms of moralism, however, are motivated in this way. On the contrary, it could be argued that the most familiar and common form of moralism is rooted not in cruelty but in vanity.

The basic idea behind vain moralism is that the agents’ (moral) conduct and conversation are motivated with a view to inflating their social and moral standing in the eyes of others. This is achieved by way of flaunting their moral virtues for others to praise and admire. Any number of moralists through the ages – reaching back to the likes of François de La Rochefoucauld (1613-80) and Bernard Mandeville (1670-1733) – have attempted to show that it is vanity that lies behind most, if not all, of our moral conduct and activity. While theories of this kind no doubt exaggerate and distort the truth, they do make sense of much of what troubles us about moralism.

One feature of vain moralism that is especially troubling is that an excessive or misplaced concern with our moral reputation and standing suggests that moralisers of this kind lack any deep or sincere commitment to the values, principles and ideals that they want others to believe animate their conduct and character. Moralisers of this kind are essentially superficial and fraudulent. We have, of course, countless examples of this sort of moral personality, ranging from Evangelical preachers caught in airport motels taking drugs with male prostitutes, to any number of highly paid professors wining and dining on the lecture circuit while explaining the need for social justice and advocating extreme forms of egalitarianism. For the most part, these characters and their activities – whatever their doctrine – are a matter of ridicule rather than of grave moral concern. Over time, the motivations behind their ‘grandstanding’ and ‘virtue signalling’ will be exposed for what they are, and the moralisers’ shallow commitment to their professed ideals and values becomes apparent to all. While we shouldn’t dismiss the vain moraliser as simply innocuous, there is no essential connection between moralism of this kind and cruelty or sadism.


Monday, June 15, 2020

The dual evolutionary foundations of political ideology

Claessens, S., Fischer, K., et al.
PsyArXiv
Originally published 18 June 2019

Abstract

What determines our views on taxation and crime, healthcare and religion, welfare and gender roles? And why do opinions about these seemingly disparate aspects of our social lives coalesce the way they do? Research over the last 50 years has suggested that political attitudes and values around the globe are shaped by two ideological dimensions, often referred to as economic and social conservatism. However, it remains unclear why this ideological structure exists. Here, we highlight the striking concordance between these two dimensions of ideology and two key aspects of human sociality: cooperation and group conformity. Humans cooperate to a greater degree than our great ape relatives, paying personal costs to benefit others. Humans also conform to group-wide social norms and punish norm violators in interdependent, culturally marked groups. Together, these two shifts in sociality are posited to have driven the emergence of large-scale complex human societies. We argue that fitness trade-offs and behavioural plasticity have maintained strategic individual differences in both cooperation and group conformity, naturally giving rise to the two dimensions of political ideology. Supported by evidence from psychology, behavioural genetics, behavioural economics, and primatology, this evolutionary framework promises novel insight into the biological and cultural basis of political ideology.

The research is here.

Friday, October 26, 2018

Ethics, a Psychological Perspective

Andrea Dobson
www.infoq.com
Originally posted September 22, 2018

Key Takeaways
  • With emerging technologies like machine learning, developers can now achieve much more than ever before. But this new power has a downside.
  • When we talk about ethics - the principles that govern a person's behaviour - it is impossible not to talk about psychology.
  • Processes like obedience, conformity, moral disengagement, cognitive dissonance and moral amnesia all reveal why, though we see ourselves as inherently good, in certain circumstances we are likely to behave badly.
  • Recognising that although people aren’t rational, they are to a large degree predictable has profound implications for how tech and business leaders can approach making their organisations more ethical.
  • The strongest way to make a company more ethical is to start with the individual. Companies become ethical one person at a time, one decision at a time. We all want to be seen as good people (our moral identity), which comes with the responsibility to act like it.

Friday, September 21, 2018

Why Social Science Needs Evolutionary Theory

Christine Legare
Nautilus.com
Originally posted June 15, 2018

Here is an excerpt:

Human cognition and behavior is the product of the interaction of genetic and cultural evolution. Gene-culture co-evolution has allowed us to adapt to highly diverse ecologies and to produce cultural adaptations and innovations. It has also produced extraordinary cultural diversity. In fact, cultural variability is one of our species’ most distinctive features. Humans display a wider repertoire of behaviors that vary more within and across groups than any other animal. Social learning enables cultural transmission, so the psychological mechanisms supporting it should be universal. These psychological mechanisms must also be highly responsive to diverse developmental contexts and cultural ecologies.

Take the conformity bias. It is a universal proclivity of all human psychology—even very young children imitate the behavior of others and conform to group norms. Yet beliefs about conformity vary substantially between populations. Adults in some populations are more likely to associate conformity with children’s intelligence, whereas others view creative non-conformity as linked with intelligence. Psychological adaptations for social learning, such as conformity bias, develop in complex and diverse cultural ecologies that work in tandem to shape the human mind and generate cultural variation.

The info is here.

Sunday, August 12, 2018

Evolutionary Origins of Morality: Insights From Non-human Primates

Judith Burkart, Rahel Brugger, and Carel van Schaik
Front. Sociol., 09 July 2018

The aim of this contribution is to explore the origins of moral behavior and its underlying moral preferences and intuitions from an evolutionary perspective. Such a perspective encompasses both the ultimate, adaptive function of morality in our own species, as well as the phylogenetic distribution of morality and its key elements across primates. First, with regard to the ultimate function, we argue that human moral preferences are best construed as adaptations to the affordances of the fundamentally interdependent hunter-gatherer lifestyle of our hominin ancestors. Second, with regard to the phylogenetic origin, we show that even though full-blown human morality is unique to humans, several of its key elements are not. Furthermore, a review of evidence from non-human primates regarding prosocial concern, conformity, and the potential presence of universal, biologically anchored and arbitrary cultural norms shows that these elements of morality are not distributed evenly across primate species. This suggests that they have evolved along separate evolutionary trajectories. In particular, the element of prosocial concern most likely evolved in the context of shared infant care, which can be found in humans and some New World monkeys. Strikingly, many if not all of the elements of morality found in non-human primates are only evident in individualistic or dyadic contexts, but not as third-party reactions by truly uninvolved bystanders. We discuss several potential explanations for the unique presence of a systematic third-party perspective in humans, but focus particularly on mentalizing ability and language. Whereas both play an important role in present day, full-blown human morality, it appears unlikely that they played a causal role for the original emergence of morality. Rather, we suggest that the most plausible scenario to date is that human morality emerged because our hominid ancestors, equipped on the one hand with large and powerful brains inherited from their ape-like ancestor, and on the other hand with strong prosocial concern as a result of cooperative breeding, could evolve into an ever more interdependent social niche.

The article is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values-specifically fairness preferences-during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate the acquisition of these moral values is governed by a reinforcement mechanism, revealing it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.

The research is here.

Monday, March 5, 2018

Donald Trump and the rise of tribal epistemology

David Roberts
Vox.com
Originally posted May 19, 2017 and still extremely important

Here is an excerpt:

Over time, this leads to what you might call tribal epistemology: Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.

Now tribal epistemology has found its way to the White House.

Donald Trump and his team represent an assault on almost every American institution — they make no secret of their desire to “deconstruct the administrative state” — but their hostility toward the media is unique in its intensity.

It is Trump’s obsession and favorite target. He sees himself as waging a “running war” on the mainstream press, which his consigliere Steve Bannon calls “the opposition party.”

The article is here.

Monday, October 23, 2017

Reciprocity Outperforms Conformity to Promote Cooperation

Angelo Romano, Daniel Balliet
Psychological Science
First Published September 6, 2017

Abstract

Evolutionary psychologists have proposed two processes that could give rise to the pervasiveness of human cooperation observed among individuals who are not genetically related: reciprocity and conformity. We tested whether reciprocity outperformed conformity in promoting cooperation, especially when these psychological processes would promote a different cooperative or noncooperative response. To do so, across three studies, we observed participants’ cooperation with a partner after learning (a) that their partner had behaved cooperatively (or not) on several previous trials and (b) that their group members had behaved cooperatively (or not) on several previous trials with that same partner. Although we found that people both reciprocate and conform, reciprocity has a stronger influence on cooperation. Moreover, we found that conformity can be partly explained by a concern about one’s reputation—a finding that supports a reciprocity framework.

The article is here.

Wednesday, May 17, 2017

Moral conformity in online interactions

Meagan Kelly, Lawrence Ngo, Vladimir Chituc, Scott Huettel, and Walter Sinnott-Armstrong
Social Influence 

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defend the responses in either a rational or emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

The article is here.

Thursday, August 11, 2016

Does children's moral compass waver under social pressure?

Kim EB, Chen C, Smetana J, Greenberger E
Journal of Experimental Child Psychology, 150, 241-251, June 2016
DOI: 10.1016/j.jecp.2016.06.006

Abstract

The current study tested whether preschoolers' moral and social-conventional judgments change under social pressure using Asch's conformity paradigm. A sample of 132 preschoolers (Mage = 3.83 years, SD = 0.85) rated the acceptability of moral and social-conventional events and also completed a visual judgment task (i.e., comparing line length) both independently and after having viewed two peers who consistently made immoral, unconventional, or visually inaccurate judgments. Results showed evidence of conformity on all three tasks, but conformity was stronger on the social-conventional task than on the moral and visual tasks. Older children were less susceptible to pressure for social conformity for the moral and visual tasks but not for the conventional task.

The article is here.

Friday, August 5, 2016

Moral Enhancement and Moral Freedom: A Critical Analysis

By John Danaher
Philosophical Disquisitions
Originally published July 19, 2016

The debate about moral neuroenhancement has taken off in the past decade. Although the term admits of several definitions, the debate primarily focuses on the ways in which human enhancement technologies could be used to ensure greater moral conformity, i.e. the conformity of human behaviour with moral norms. Imagine you have just witnessed a road rage incident. An irate driver, stuck in a traffic jam, jumped out of his car and proceeded to abuse the driver in the car behind him. We could all agree that this contravenes a moral norm. And we may well agree that the proximate cause of his outburst was a particular pattern of activity in the rage circuit of his brain. What if we could intervene in that circuit and prevent him from abusing his fellow motorists? Should we do it?

Proponents of moral neuroenhancement think we should — though they typically focus on much higher stakes scenarios. A popular criticism of their project has emerged. This criticism holds that trying to ensure moral conformity comes at the price of moral freedom. If our brains are prodded, poked and tweaked so that we never do the wrong thing, then we lose the ‘freedom to fall’ — i.e. the freedom to do evil. That would be a great shame. The freedom to do the wrong thing is, in itself, an important human value. We would lose it in the pursuit of greater moral conformity.

Wednesday, May 13, 2015

Science Cannot Teach Us to Be Good

By Daniel Jacobson
Big Ideas at Slate.com

Here is an excerpt:

Consider first the relativist conception of morality, on which goodness consists in conformity with the widely accepted practices of one’s society. Sometimes such behavior is called pro-social, a tendentious term that seems to imply that non-conformity is antisocial. If to be virtuous is to be fully enculturated, as this view claims, then moral dissent must be both mistaken (since moral facts are at bottom sociological facts) and vicious (since goodness is conformity).

Although many social scientists advocate this view, it rests on a premise that cannot claim any scientific backing: the normative principle that it is always illegitimate to criticize another culture by standards it does not accept. This is widely seen as a failure of tolerance, as cultural imperialism or ethnocentrism. The relativists don’t seem to notice that their own principle puts such criticism out of bounds, since conformity to widely accepted ethnocentrism is virtuous by their lights. In fact, no one holds the relativist principle consistently.

The entire article is here.