Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Moral Emotions. Show all posts

Wednesday, July 24, 2019

We need to talk marketing ethics: What to consider as a content writer

Natasha Lane
thenextweb.com
Originally posted June 30, 2019

Here is an excerpt:

Are content marketing ethics journalism ethics?

Content marketing and blogging aren’t journalism. Journalism is primarily impartial, and when content is created as part of a business’ marketing strategy, it’s understood that it can’t be entirely impartial.

However, well-written content (whether it’s blog posts, case studies, how-to guides, white papers, etc.) will have journalistic value. The bottom line is that a brand should seek to provide value through all of its marketing efforts, so rather than trying to sell, content created with the intention to inform and teach its audience will be a legitimate resource to its readers.

And to make something a trustworthy resource, you’ll need to stick to the same ethical principles as traditional journalism. It’s about being honest and upfront with the reader – honest in your intention to inform truthfully and honest about your biases. Providing appropriate disclosures to acknowledge potential conflicts of interest and making it clear in the byline for which company the author works are the most basic starting points.

Whose responsibility is it? 

In an era when audience trust is increasingly difficult to gain, businesses are best advised to follow white-hat practices across all their marketing strategies. But we have to face the facts – that won’t always be the case.

As freelance writers, we come across all sorts of offers. I know I did. It might be a request to write a review for a product you’ve never tried or to plug in some shady statistics.

The info is here.

Monday, July 22, 2019

Understanding the process of moralization: How eating meat becomes a moral issue

Feinberg, M., Kovacheff, C., Teper, R., & Inbar, Y. (2019).
Journal of Personality and Social Psychology, 117(1), 50-72.

Abstract

A large literature demonstrates that moral convictions guide many of our thoughts, behaviors, and social interactions. Yet, we know little about how these moral convictions come to exist. In the present research we explore moralization—the process by which something that was morally neutral takes on moral properties—examining what factors facilitate and deter it. In 3 longitudinal studies participants were presented with morally evocative stimuli about why eating meat should be viewed as a moral issue. Study 1 tracked students over a semester as they took a university course that highlighted the suffering animals endure because of human meat consumption. In Studies 2 and 3 participants took part in a mini-course we developed which presented evocative videos aimed at inducing moralization. In all 3 studies, we assessed participants’ beliefs, attitudes, emotions, and cognitions at multiple time points to track moral changes and potential factors responsible for such changes. A variety of factors, both cognitive and affective, predicted participants’ moralization or lack thereof. Model testing further pointed to two primary conduits of moralization: the experience of moral emotions (e.g., disgust, guilt) felt when contemplating the issue, and moral piggybacking (connecting the issue at hand with one’s existing fundamental moral principles). Moreover, we found individual differences, such as how much one holds their morality as central to their identity, also predicted the moralization process. We discuss the broad theoretical and applied implications of our results.

A pdf can be viewed here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)

Abstract

Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.

(cut)

From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.

The research is here.

Saturday, June 8, 2019

Anger, Fear, and Echo Chambers: The Emotional Basis for Online Behavior

Wollebæk, D., Karlsen, R., Steen-Johnsen, K., & Enjolras, B.
(2019). Social Media + Society. 
https://doi.org/10.1177/2056305119829859

Abstract

Emotions, such as anger and fear, have been shown to influence people’s political behavior. However, few studies link emotions specifically to how people debate political issues and seek political information online. In this article, we examine how anger and fear are related to politics-oriented digital behavior, attempting to bridge the gap between the thus far disconnected literature on political psychology and the digital media. Based on survey data, we show that anger and fear are connected to distinct behaviors online. Angry people are more likely to engage in debates with people having both similar and opposing views. They also seek out information confirming their views more frequently. Anxious individuals, by contrast, tend to seek out information contradicting their opinions. These findings reiterate predictions made in the extant literature concerning the role of emotions in politics. Thus, we argue that anger reinforces echo chamber dynamics and trench warfare dynamics in the digital public sphere, while fear counteracts these dynamics.

Discussion and Conclusion

The analyses have shown that anger and fear have distinct effects on echo chamber and trench warfare dynamics in the digital sphere. With regard to the debate dimension, we have shown that anger is positively related to participation in online debates. This finding confirms the results of a recent study by Hasell and Weeks (2016). Importantly, however, the impact of anger is not limited to echo chamber discussions with like-minded and similar people. Angry individuals are also over-represented in debates between people holding opposing views and belonging to a different class or ethnic background. This entails that, regarding online debates, anger contributes more to what has been previously labeled as trench warfare dynamics than to echo chamber dynamics.

The research is here.

Tuesday, May 14, 2019

Is ancient philosophy the future?

Donald Robertson
The Globe and Mail
Originally published April 19, 2019

Recently, a bartender in Nova Scotia showed me a quote from the Roman emperor Marcus Aurelius tattooed on his forearm. “Waste no more time arguing what a good man should be,” it said, “just be one.”

We live in an age when social media bombards everyone, especially the young, with advice about every aspect of their lives. Stoic philosophy, of which Marcus Aurelius was history’s most famous proponent, taught its followers not to waste time on diversions that don’t actually improve their character.

In recent decades, Stoicism has been experiencing a resurgence in popularity, especially among millennials. There has been a spate of popular self-help books that have helped to spread the word. One of the best known is Ryan Holiday and Steven Hanselman’s The Daily Stoic, which introduced a whole new generation to the concept of philosophy, based on the classics, as a way of life. It has fuelled interest among Silicon Valley entrepreneurs. So has endorsement from self-improvement guru Tim Ferriss, who describes Stoicism as the “ideal operating system for thriving in high-stress environments.”

Why should the thoughts of a Roman emperor who died nearly 2,000 years ago seem particularly relevant today, though? What’s driving this rebirth of Stoicism?

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Thursday, April 25, 2019

The New Science of How to Argue—Constructively

Jesse Singal
The Atlantic
Originally published April 7, 2019

Here is an excerpt:

Once you know a term like decoupling, you can identify instances in which a disagreement isn’t really about X anymore, but about Y and Z. When some readers first raised doubts about a now-discredited Rolling Stone story describing a horrific gang rape at the University of Virginia, they noted inconsistencies in the narrative. Others insisted that such commentary fit into destructive tropes about women fabricating rape claims, and therefore should be rejected on its face. The two sides weren’t really talking; one was debating whether the story was a hoax, while the other was responding to the broader issue of whether rape allegations are taken seriously. Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would “normalize” homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn. Because of a failure to decouple, people respond indignantly to factual claims when they’re actually upset about how those claims might be interpreted.

Nerst believes that the world can be divided roughly into “high decouplers,” for whom decoupling comes easy, and “low decouplers,” for whom it does not. This is the sort of area where erisology could produce empirical insights: What characterizes people’s ability to decouple? Nerst believes that hard-science types are better at it, on average, while artistic types are worse. After all, part of being an artist is seeing connections where other people don’t—so maybe it’s harder for them to not see connections in some cases. Nerst might be wrong. Either way, it’s the sort of claim that could be fairly easily tested if the discipline caught on.

The info is here.

Wednesday, April 3, 2019

Feeling Good: Integrating the Psychology and Epistemology of Moral Intuition and Emotion

Hossein Dabbagh
Journal of Cognition and Neuroethics 5 (3): 1–30.

Abstract

Is the epistemology of moral intuitions compatible with admitting a role for emotion? I argue in this paper that moral intuitions and emotions can be partners without creating an epistemic threat. I start off by offering some empirical findings to weaken Singer’s (and Greene’s and Haidt’s) debunking argument against moral intuition, which treats emotions as a distorting factor. In the second part of the paper, I argue that the standard contrast between intuition and emotion is a mistake. Moral intuitions and emotions are not contestants if we construe moral intuition as non-doxastic intellectual seeming and emotion as a non-doxastic perceptual-like state. This will show that emotions support, rather than distort, the epistemic standing of moral intuitions.

Here is an excerpt:

However, cognitive sciences, as I argued above, show us that seeing all emotions in this excessively pessimistic way is not plausible. To think about emotional experience as always being a source of epistemic distortion would be wrong. On the contrary, there are some reasons to believe that emotional experiences can sometimes make a positive contribution to our activities in practical rationality. So, there is a possibility that some emotions are not distorting factors. If this is right, we are no longer justified in saying that emotions always distort our epistemic activities. Instead, emotions (construed as quasi-perceptual experiences) might have some cognitive elements assessable for rationality.

The paper is here.

Thursday, March 21, 2019

Anger as a moral emotion: A 'bird's eye view' systematic review

Tim Lomas
Counseling Psychology Quarterly

Abstract

Anger is a common problem for which counseling/psychotherapy clients seek help, and is typically regarded as an invidious negative emotion to be ameliorated. However, it may be possible to reframe anger as a moral emotion, arising in response to perceived transgressions, thereby endowing it with meaning. In that respect, the current paper offers a ‘bird’s eye’ systematic review of empirical research on anger as a moral emotion (i.e., one focusing broadly on the terrain as a whole, rather than on specific areas). Three databases were reviewed from the start of their records to January 2019. Eligibility criteria included empirical research, published in English in peer-reviewed journals, on anger specifically as a moral emotion. 175 papers met the criteria, and fell into four broad classes of study: survey-based; experimental; physiological; and qualitative. In reviewing the articles, this paper pays particular attention to: how/whether anger can be differentiated from other moral emotions; antecedent causes and triggers; contextual factors that influence or mitigate anger; and outcomes arising from moral anger. Together, the paper offers a comprehensive overview of current knowledge of this prominent and problematic emotion. The results may be of use to counsellors and psychotherapists helping to address anger issues in their clients.

Download the paper here.

Note: Other "symptoms" in mental health can also be reframed as moral issues. PTSD is similar to Moral Injury. OCD is highly correlated with scrupulosity, an excessive concern about moral purity. Unhealthy guilt is found in many depressed individuals. And psychologists use forgiveness of self and others as a goal in treatment.

Friday, March 8, 2019

Seven moral rules found all around the world

University of Oxford
phys.org
Originally released February 12, 2019

Anthropologists at the University of Oxford have discovered what they believe to be seven universal moral rules.

The rules: help your family, help your group, return favours, be brave, defer to superiors, divide resources fairly, and respect others' property. These were found in a survey of 60 cultures from all around the world.

Previous studies have looked at some of these rules in some places – but none has looked at all of them in a large representative sample of societies. The present study, published in Current Anthropology, is the largest and most comprehensive cross-cultural survey of morals ever conducted.

The team from Oxford's Institute of Cognitive & Evolutionary Anthropology (part of the School of Anthropology & Museum Ethnography) analysed ethnographic accounts of ethics from 60 societies, comprising over 600,000 words from over 600 sources.

Dr. Oliver Scott Curry, lead author and senior researcher at the Institute for Cognitive and Evolutionary Anthropology, said: "The debate between moral universalists and moral relativists has raged for centuries, but now we have some answers. People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them. As predicted, these seven moral rules appear to be universal across cultures. Everyone everywhere shares a common moral code. All agree that cooperating, promoting the common good, is the right thing to do."

The study tested the theory that morality evolved to promote cooperation, and that – because there are many types of cooperation – there are many types of morality. According to this theory of 'morality as cooperation', kin selection explains why we feel a special duty of care for our families, and why we abhor incest. Mutualism explains why we form groups and coalitions (there is strength and safety in numbers), and hence why we value unity, solidarity, and loyalty. Social exchange explains why we trust others, reciprocate favours, feel guilt and gratitude, make amends, and forgive. And conflict resolution explains why we engage in costly displays of prowess such as bravery and generosity, why we defer to our superiors, why we divide disputed resources fairly, and why we recognise prior possession.

The information is here.

Saturday, February 23, 2019

The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017

Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen
Personality and Social Psychology Review, 1–35

Abstract

We review empirical research on (social) psychology of morality to identify which issues and relations are well documented by existing data and which areas of inquiry are in need of further empirical evidence. An electronic literature search yielded a total of 1,278 relevant research articles published from 1940 through 2017. These were subjected to expert content analysis and standardized bibliometric analysis to classify research questions and relate these to (trends in) empirical approaches that characterize research on morality. We categorize the research questions addressed in this literature into five different themes and consider how empirical approaches within each of these themes have addressed psychological antecedents and implications of moral behavior. We conclude that some key features of theoretical questions relating to human morality are not systematically captured in empirical research and are in need of further investigation.

Here is a portion of the article:

In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.

The review article can be downloaded here.

Tuesday, February 12, 2019

Certain Moral Values May Lead to More Prejudice, Discrimination

American Psychological Association Press Release
Released December 20, 2018

People who value following purity rules over caring for others are more likely to view gay and transgender people as less human, which leads to more prejudice and support for discriminatory public policies, according to a new study published by the American Psychological Association.

“After the Supreme Court decision affirming marriage equality and the debate over bathroom rights for transgender people, we realized that the arguments were often not about facts but about opposing moral beliefs,” said Andrew E. Monroe, PhD, of Appalachian State University and lead author of the study, published in the Journal of Experimental Psychology: General®.

“Thus, we wanted to understand if moral values were an underlying cause of prejudice toward gay and transgender people.”

Monroe and his co-author, Ashby Plant, PhD, of Florida State University, focused on two specific moral values — what they called sanctity, or a strict adherence to purity rules and disgust over any acts that are considered morally contaminating, and care, which centers on disapproval of others who cause suffering without just cause — because they predicted those values might be behind the often-heated debates over LGBTQ rights. 

The researchers conducted five experiments with nearly 1,100 participants. Overall, they found that people who prioritized sanctity over care were more likely to believe that gay and transgender people, people with AIDS and prostitutes were more impulsive, less rational and, therefore, something less than human. These attitudes increased prejudice and acceptance of discriminatory public policies, according to Monroe.

The info is here.

The research is here.

Monday, September 24, 2018

Distinct Brain Areas involved in Anger versus Punishment during Social Interactions

Olga M. Klimecki, David Sander & Patrik Vuilleumier
Scientific Reports volume 8, Article number: 10556 (2018)

Abstract

Although anger and aggression can have wide-ranging consequences for social interactions, there is sparse knowledge as to which brain activations underlie the feelings of anger and the regulation of related punishment behaviors. To address these issues, we studied brain activity while participants played an economic interaction paradigm called Inequality Game (IG). The current study confirms that the IG elicits anger through the competitive behavior of an unfair (versus fair) other and promotes punishment behavior. Critically, when participants see the face of the unfair other, self-reported anger is parametrically related to activations in temporal areas and amygdala – regions typically associated with mentalizing and emotion processing, respectively. During anger provocation, activations in the dorsolateral prefrontal cortex, an area important for regulating emotions, predicted the inhibition of later punishment behavior. When participants subsequently engaged in behavioral decisions for the unfair versus fair other, increased activations were observed in regions involved in behavioral adjustment and social cognition, comprising posterior cingulate cortex, temporal cortex, and precuneus. These data point to a distinction of brain activations related to angry feelings and the control of subsequent behavioral choices. Furthermore, they show a contribution of prefrontal control mechanisms during anger provocation to the inhibition of later punishment.

The research is here.

Wednesday, August 1, 2018

Why our brains see the world as ‘us’ versus ‘them’

Leslie Henderson
The Conversation
Originally posted June 2018

Here is an excerpt:

As opposed to fear, distrust and anxiety, circuits of neurons in brain regions called the mesolimbic system are critical mediators of our sense of “reward.” These neurons control the release of the transmitter dopamine, which is associated with an enhanced sense of pleasure. The addictive nature of some drugs, as well as pathological gaming and gambling, are correlated with increased dopamine in mesolimbic circuits.

In addition to dopamine itself, neurochemicals such as oxytocin can significantly alter the sense of reward and pleasure, especially in relationship to social interactions, by modulating these mesolimbic circuits.

Methodological variations indicate further study is needed to fully understand the roles of these signaling pathways in people. That caveat acknowledged, there is much we can learn from the complex social interactions of other mammals.

The neural circuits that govern social behavior and reward arose early in vertebrate evolution and are present in birds, reptiles, bony fishes and amphibians, as well as mammals. So while there is not a lot of information on reward pathway activity in people during in-group versus out-group social situations, there are some tantalizing results from studies on other mammals.

The article is here.

Monday, July 9, 2018

Learning from moral failure

Matthew Cashman & Fiery Cushman
In press: Becoming Someone New: Essays on Transformative Experience, Choice, and Change

Introduction

Pedagogical environments are often designed to minimize the chance of people acting wrongly; surely this is a sensible approach. But could it ever be useful to design pedagogical environments to permit, or even encourage, moral failure? If so, what are the circumstances where moral failure can be beneficial?  What types of moral failure are helpful for learning, and by what mechanisms? We consider the possibility that moral failure can be an especially effective tool in fostering learning. We also consider the obvious costs and potential risks of allowing or fostering moral failure. We conclude by suggesting research directions that would help to establish whether, when and how moral pedagogy might be facilitated by letting students learn from moral failure.

(cut)

Conclusion

Errors are an important source of learning, and educators often exploit this fact.  Failing helps to tune our sense of balance; Newtonian mechanics sticks better when we witness the failure of our folk physics. We consider the possibility that moral failure may also prompt especially strong or distinctive forms of learning.  First, and with greatest certainty, humans are designed to learn from moral failure through the feeling of guilt.  Second, and more speculatively, humans may be designed to experience moral failures by “testing limits” in a way that ultimately fosters an adaptive moral character.  Third—and highly speculatively—there may be ways to harness learning by moral failure in pedagogical contexts. Minimally, this might occur by imagination, observational learning, or the exploitation of spontaneous wrongful acts as “teachable moments”.

The book chapter is here.

Thursday, May 31, 2018

What did Hannah Arendt really mean by the banality of evil?

Thomas White
Aeon.co
Originally published April 23, 2018

Here is an excerpt:

Arendt dubbed these collective characteristics of Eichmann ‘the banality of evil’: he was not inherently evil, but merely shallow and clueless, a ‘joiner’, in the words of one contemporary interpreter of Arendt’s thesis: he was a man who drifted into the Nazi Party, in search of purpose and direction, not out of deep ideological belief. In Arendt’s telling, Eichmann reminds us of the protagonist in Albert Camus’s novel The Stranger (1942), who randomly and casually kills a man, but then afterwards feels no remorse. There was no particular intention or obvious evil motive: the deed just ‘happened’.

This wasn’t Arendt’s first, somewhat superficial impression of Eichmann. Even 10 years after his trial in Israel, she wrote in 1971:
I was struck by the manifest shallowness in the doer [ie Eichmann] which made it impossible to trace the uncontestable evil of his deeds to any deeper level of roots or motives. The deeds were monstrous, but the doer – at least the very effective one now on trial – was quite ordinary, commonplace, and neither demonic nor monstrous.

The banality-of-evil thesis was a flashpoint for controversy. To Arendt’s critics, it seemed absolutely inexplicable that Eichmann could have played a key role in the Nazi genocide yet have no evil intentions. Gershom Scholem, a fellow philosopher (and theologian), wrote to Arendt in 1963 that her banality-of-evil thesis was merely a slogan that ‘does not impress me, certainly, as the product of profound analysis’. Mary McCarthy, a novelist and good friend of Arendt, voiced sheer incomprehension: ‘[I]t seems to me that what you are saying is that Eichmann lacks an inherent human quality: the capacity for thought, consciousness – conscience. But then isn’t he a monster simply?’

The information is here.

Friday, January 12, 2018

The Normalization of Corruption in Organizations

Blake E. Ashforth and Vikas Anand
Research in Organizational Behavior
Volume 25, 2003, Pages 1-52

Abstract

Organizational corruption imposes a steep cost on society, easily dwarfing that of street crime. We examine how corruption becomes normalized, that is, embedded in the organization such that it is more or less taken for granted and perpetuated. We argue that three mutually reinforcing processes underlie normalization: (1) institutionalization, where an initial corrupt decision or act becomes embedded in structures and processes and thereby routinized; (2) rationalization, where self-serving ideologies develop to justify and perhaps even valorize corruption; and (3) socialization, where naïve newcomers are induced to view corruption as permissible if not desirable. The model helps explain how otherwise morally upright individuals can routinely engage in corruption without experiencing conflict, how corruption can persist despite the turnover of its initial practitioners, how seemingly rational organizations can engage in suicidal corruption, and how an emphasis on the individual as evildoer misses the point that systems and individuals are mutually reinforcing.

The article is here.

The Age of Outrage

Jonathan Haidt
Essay derived from a speech in City Journal
December 17, 2017

Here is an excerpt:

When we look back at the ways our ancestors lived, there’s no getting around it: we are tribal primates. We are exquisitely designed and adapted by evolution for life in small societies with intense, animistic religion and violent intergroup conflict over territory. We love tribal living so much that we invented sports, fraternities, street gangs, fan clubs, and tattoos. Tribalism is in our hearts and minds. We’ll never stamp it out entirely, but we can minimize its effects because we are a behaviorally flexible species. We can live in many different ways, from egalitarian hunter-gatherer groups of 50 individuals to feudal hierarchies binding together millions. And in the last two centuries, a lot of us have lived in large, multi-ethnic secular liberal democracies. So clearly that is possible. But how much margin of error do we have in such societies?

Here is the fine-tuned liberal democracy hypothesis: as tribal primates, human beings are unsuited for life in large, diverse secular democracies, unless you get certain settings finely adjusted to make possible the development of stable political life. This seems to be what the Founding Fathers believed. Jefferson, Madison, and the rest of those eighteenth-century deists clearly did think that designing a constitution was like designing a giant clock, a clock that might run forever if they chose the right springs and gears.

Thankfully, our Founders were good psychologists. They knew that we are not angels; they knew that we are tribal creatures. As Madison wrote in Federalist 10: “the latent causes of faction are thus sown in the nature of man.” Our Founders were also good historians; they were well aware of Plato’s belief that democracy is the second worst form of government because it inevitably decays into tyranny. Madison wrote in Federalist 10 about pure or direct democracies, which he said are quickly consumed by the passions of the majority: “such democracies have ever been spectacles of turbulence and contention . . . and have in general been as short in their lives as they have been violent in their deaths.”

So what did the Founders do? They built in safeguards against runaway factionalism, such as the division of powers among the three branches, and an elaborate series of checks and balances. But they also knew that they had to train future generations of clock mechanics. They were creating a new kind of republic, which would demand far more maturity from its citizens than was needed in nations ruled by a king or other Leviathan.

The full speech is here.

Saturday, December 23, 2017

What Makes Moral Disgust Special? An Integrative Functional Review

Roger Giner-Sorolla, Tom R. Kupfer, and John S. Sabo (2018)
Advances in Experimental Social Psychology, Volume 57

The role of disgust in moral psychology has been a matter of much controversy and experimentation over the past 20 or so years. We present here an integrative look at the literature, organized according to the four functions of emotion proposed by integrative functional theory: appraisal, associative, self-regulation, and communicative. Regarding appraisals, we review experimental, personality, and neuroscientific work that has shown differences between elicitors of disgust and anger in moral contexts, with disgust responding more to bodily moral violations such as incest, and anger responding more to sociomoral violations such as theft. We also present new evidence for interpreting the phenomenon of sociomoral disgust as an appraisal of bad character in a person. The associative nature of disgust is shown by evidence for “unreasoning disgust,” in which associations to bodily moral violations are not accompanied by elaborated reasons, and not modified by appraisals such as harm or intent. We also critically examine the literature about the ability of incidental disgust to intensify moral judgments associatively. For disgust's self-regulation function, we consider the possibility that disgust serves as an existential defense, regulating avoidance of thoughts that might threaten our basic self-image as living humans. Finally, we discuss new evidence from our lab that moral disgust serves a communicative function, implying that expressions of disgust serve to signal one's own moral intentions even when a different emotion is felt internally on the basis of appraisal. Within the scope of the literature, there is evidence that all four functions of Giner-Sorolla’s (2012) integrative functional theory of emotion may be operating, and that their variety can help explain some of the paradoxes of disgust.

The information is here.

Saturday, December 9, 2017

The Root of All Cruelty

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.