Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Emotion.

Thursday, September 14, 2023

Who supports redistribution? Replicating and refining effects of compassion, malicious envy, and self-interest

Lin, C.A., & Bates, T.C.
(2023). Evolution and Human Behavior

Abstract

Debate over wealth redistribution plays a prominent role in society, but the causes of differences in support for redistribution remain contested. A recent three-person two-situation model suggests these differences are shaped by evolved motivational systems of self-interest, compassion, and dispositional envy. We conducted a close replication testing this prediction; all subjects were British, recruited from an online subject pool. Study 1 (N = 206) confirmed the roles of self-interest (β = 0.20) and compassion for others (β = 0.37), as well as a predicted null effect of procedural fairness. Dispositional envy was non-significant (β = 0.06). In Study 2 (N = 304), we tested whether envy is better conceptualized as two separate emotions, benign envy and malicious envy. A significant effect of malicious envy was found (β = 0.13) and no significant effect of benign envy (β = −0.06). Study 3 (N = 501) closely replicated this improved model, confirming significant effects of compassion (β = 0.40), self-interest (β = 0.21), and malicious envy (β = 0.15), accounting for one third of the variance in support for redistribution. These results support the role of evolved motivational systems in explaining and improving important aspects of contemporary economic redistribution.


The authors conducted three studies to test their hypotheses. In Study 1, they replicated earlier findings that compassion and self-interest predict support for redistribution, while dispositional envy did not. In Study 2, they split envy into benign and malicious forms and found that malicious envy, but not benign envy, predicted support for redistribution. In Study 3, they replicated this refined model, with compassion, self-interest, and malicious envy together accounting for about one third of the variance in support for redistribution.

The authors conclude that their findings support the hypothesis that compassion, malicious envy, and self-interest all play a role in shaping people's support for wealth redistribution. They suggest that future research should examine the relative importance of these three motivational systems in different contexts.

Here are some additional key points from the article:
  • The authors test a previously proposed model of support for wealth redistribution based on three evolved motivational systems: compassion, malicious envy, and self-interest.
  • They conducted three studies to test their hypotheses.
  • The findings of the studies support the hypothesis that compassion, malicious envy, and self-interest all play a role in shaping people's support for wealth redistribution.
  • The authors suggest that future research should examine the relative importance of these three motivational systems in different contexts.
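
The standardized coefficients (β) and the variance-explained figure reported above come from a multiple-regression model. As a purely illustrative sketch (not the authors' analysis script), the snippet below simulates data and recovers standardized betas of roughly the magnitude reported for Study 3; the variable names and simulated values are assumptions.

```python
# Illustrative sketch only (not the authors' code): standardized multiple
# regression predicting support for redistribution from compassion,
# self-interest, and malicious envy, using simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # roughly the size of Study 3

# Simulated predictor scores (questionnaire scale scores in the real study)
compassion = rng.normal(size=n)
self_interest = rng.normal(size=n)
malicious_envy = rng.normal(size=n)

# Simulated outcome with weights chosen to echo the reported betas
support = (0.40 * compassion + 0.21 * self_interest +
           0.15 * malicious_envy + rng.normal(scale=0.88, size=n))

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

# Standardizing every variable makes the OLS slopes standardized betas
X = sm.add_constant(np.column_stack(
    [zscore(compassion), zscore(self_interest), zscore(malicious_envy)]))
fit = sm.OLS(zscore(support), X).fit()

print(fit.params[1:])  # approximate standardized betas for the three predictors
print(fit.rsquared)    # proportion of variance explained
```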

Friday, April 21, 2023

Moral Shock

Stockdale, K. (2022).
Journal of the American Philosophical Association, 8(3), 496-511.
doi:10.1017/apa.2021.15

Abstract

This paper defends an account of moral shock as an emotional response to intensely bewildering events that are also of moral significance. This theory stands in contrast to the common view that shock is a form of intense surprise. On the standard model of surprise, surprise is an emotional response to events that violated one's expectations. But I show that we can be morally shocked by events that confirm our expectations. What makes an event shocking is not that it violated one's expectations, but that the content of the event is intensely bewildering (and bewildering events are often, but not always, contrary to our expectations). What causes moral shock is, I argue, our lack of emotional preparedness for the event. And I show that, despite the relative lack of attention to shock in the philosophical literature, the emotion is significant to moral, social, and political life.

Conclusion

I have argued that moral shock is an emotional response to intensely bewildering events that are also of moral significance. Although shock is typically considered to be an intense form of surprise, where surprise is an emotional response to events that violate our expectations or are at least unexpected, I have argued that the contrary-expectation model is found wanting. For it seems that we are sometimes shocked by the immoral actions of others even when we expected them to behave in just the ways that they did. What is shocking is what is intensely bewildering—and the bewildering often, but not always, tracks the unexpected. The extent to which such events shock us is, I have argued, a function of our felt readiness to experience them. When we are not emotionally prepared for what we expect to occur, we might find ourselves in the grip of moral shock.

There is much more to be said about the emotion of moral shock and its significance to moral, social, and political life. This paper is meant to be a starting point rather than a decisive take on an undertheorized emotion. But by understanding more deeply the nature and effects of moral shock, we can gain richer insight into a common response to immoral actions; what prevents us from responding well in the moment; and how the brief and fleeting, yet intense events in our lives affect agency, responsibility, and memory. We might also be able to make better sense of the bewildering social and political events that shock us and those to which we have become emotionally resilient.


This appears to be a philosophical explication of "Moral Injury," a concept discussed in multiple places on this website.

Wednesday, June 15, 2022

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K. (2015). 
Personality and Social Psychology Review
19(4), 371–394.

Abstract

Morality and emotions are linked, but what is the nature of their correspondence? Many “whole number” accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality–emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality–emotion links.

Conclusion

The tension between whole number and constructionist accounts has existed in psychology since its beginning (e.g., Darwin, 1872/2005 vs. James, 1890; see Gendron & Barrett, 2009; Lindquist, 2013). Commonsense and essentialism suggest the existence of distinct and immutable psychological constructs. The intuitiveness of whole number accounts is reinforced by the communicative usefulness of distinguishing harm from purity (Graham et al., 2009), and anger from disgust (Barrett, 2006; Lindquist, Gendron, et al., 2013), but utility does not equal ontology. As decades of psychological research have demonstrated, intuitive experiences are poor guides to the structure of the mind (Barrett, 2009; Davies, 2009; James, 1890; Nisbett & Wilson, 1977; Roser & Gazzaniga, 2004; Ross & Ward, 1996; Wegner, 2003).  Although initially less intuitive, we suggest that constructionist approaches are actually better at capturing the nature of the powerful subjective phenomena long treasured by social psychologists (Gray & Wegner, 2013; Wegner & Gilbert, 2000). Whereas whole number theories impose taxonomies onto human experience and treat variability as noise or error, constructionist theories allow that experience is complex and messy. Rather than assuming that human experience is “wrong” when it fails to conform to a preferred taxonomy, constructionist theories appreciate this diversity and use domain-general mechanisms to explain it. Returning to our opening example, Jack and Diane may be soul-mates with a love that is unique, unchanging, and eternal, or they may just be two similar American kids who feel the rush of youth and the heat of a summer’s day. The first may be more romantic, but the second is more likely to be true.

Monday, April 18, 2022

The psychological drivers of misinformation belief and its resistance to correction

Ecker, U.K.H., Lewandowsky, S., Cook, J. et al. 
Nat Rev Psychol 1, 13–29 (2022).
https://doi.org/10.1038/s44159-021-00006-y

Abstract

Misinformation has been identified as a major contributor to various contentious contemporary events ranging from elections and referenda to the response to the COVID-19 pandemic. Not only can belief in misinformation lead to poor judgements and decision-making, it also exerts a lingering influence on people’s reasoning after it has been corrected — an effect known as the continued influence effect. In this Review, we describe the cognitive, social and affective factors that lead people to form or endorse misinformed views, and the psychological barriers to knowledge revision after misinformation has been corrected, including theories of continued influence. We discuss the effectiveness of both pre-emptive (‘prebunking’) and reactive (‘debunking’) interventions to reduce the effects of misinformation, as well as implications for information consumers and practitioners in various areas including journalism, public health, policymaking and education.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants. Researchers should also avoid relying on one-item questions with relatively low reliability. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences.

Saturday, October 16, 2021

Social identity shapes antecedents and functional outcomes of moral emotion expression in online networks

Brady, W. J., & Van Bavel, J. J. 
(2021, April 2). 

Abstract

As social interactions increasingly occur through social media platforms, intergroup affective phenomena such as “outrage firestorms” and “cancel culture” have emerged with notable consequences for society. In this research, we examine how social identity shapes the antecedents and functional outcomes of moral emotion expression online. Across four pre-registered experiments (N = 1,712), we find robust evidence that the inclusion of moral-emotional expressions in political messages has a causal influence on intentions to share the messages on social media. We find that individual differences in the strength of partisan identification are a consistent predictor of sharing messages with moral-emotional expressions, but little evidence that brief manipulations of identity salience increased sharing. Negative moral emotion expression in social media messages also causes the message author to be perceived as more strongly identified among their partisan ingroup, but less open-minded and less worthy of conversation to outgroup members. These experiments highlight the role of social identity in affective phenomena in the digital age, and showcase how moral emotion expressions in online networks can serve ingroup reputation functions while at the same time hindering discourse between political groups.

Conclusion

In the context of contentious political conversations online, moral-emotional language caused political partisans to share messages more often, and this effect was strongest among strong group identifiers. Expressing negative moral-emotional language in social media messages makes the message author appear more strongly identified with their group, but also makes outgroup members think the author is less open-minded and less worthy of conversation. This work sheds light on antecedents and functional outcomes of moral-emotion expression in the digital age, which is becoming increasingly important to study as intergroup affective phenomena such as viral outrage and affective polarization are reaching historic levels.

Wednesday, April 21, 2021

Target Dehumanization May Influence Decision Difficulty and Response Patterns for Moral Dilemmas

Bai, H., et al. (2021, February 25). 
https://doi.org/10.31234/osf.io/fknrd

Abstract

Past research on moral dilemmas has thoroughly investigated the roles of personality and situational variables, but the role of targets in moral dilemmas has been relatively neglected. This paper presents findings from four experiments that manipulate the perceived dehumanization of targets in moral dilemmas. Studies 1, 2 and 4 suggest that dehumanized targets may render the decision easier, and with less emotion. Findings from Studies 1 and 3, though not Studies 2 and 4, show that dehumanization of targets in dilemmas may lead participants to make less deontological judgments. Study 3, but not Study 4, suggests that it is potentially because dehumanization has an effect on reducing deontological, but not utilitarian judgments. Though the patterns are somewhat inconsistent across studies, overall, results suggest that targets’ dehumanization can play a role in how people make their decisions in moral dilemmas.

General Discussion

Together, the four studies described in this paper contribute to the literature by providing evidence that the dehumanization of targets may play an important role in how people make decisions in moral dilemmas. In particular, we found some evidence in Studies 1, 2 and 4 suggesting that dehumanized targets may affect how people experience their decisions, rendering the decisions easier and less emotional. We also found some evidence from Studies 1 and 3, though not Studies 2 and 4, that dehumanization of targets in dilemmas may affect what decision people eventually make, suggesting that dehumanized targets may elicit less deontological responses to some extent. Finally, Study 3, but not Study 4, suggests that the decreased level of deontological response pattern may be potentially explained by dehumanization’s effect on reducing deontological, but not utilitarian tendencies. To this point, we conducted a mini-meta-analysis across the combined data for Studies 3 and 4 and compared the differences in the D parameter between the dehumanized condition and humanized conditions. We found an effect size of d = .135, which suggests that if dehumanization has an effect, it may not be a very big effect.
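
For readers unfamiliar with the effect-size metric quoted above, Cohen's d is simply the difference between two group means divided by their pooled standard deviation. The snippet below is a generic illustration with simulated numbers, not the authors' data; the group labels and values are assumptions.

```python
# Generic Cohen's d calculation for two independent groups (illustrative only;
# the numbers below are made up, not the Bai et al. data).
import numpy as np

def cohens_d(group1, group2):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = (((n1 - 1) * np.var(group1, ddof=1) +
                   (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
dehumanized = rng.normal(loc=0.10, scale=1.0, size=300)   # hypothetical scores
humanized = rng.normal(loc=-0.05, scale=1.0, size=300)    # hypothetical scores

print(round(cohens_d(dehumanized, humanized), 3))  # a small effect, on the order of d = .1
```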

Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals, such as profit maximization via user engagement, can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.


People are more likely to express online "moral outrage" if they have been rewarded for it in the past or if it is common in their own social network. They are even willing to express far more moral outrage than they genuinely feel in order to fit in.
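
The reinforcement-learning account can be captured by a very simple learning rule: the propensity to express outrage is nudged toward the social feedback the last expression received. The toy simulation below is only a hedged illustration of that principle, not the authors' model; the learning rate, starting propensity, and feedback values are assumptions.

```python
# Toy illustration (not the authors' model) of reinforcement learning of online
# outrage: positive social feedback for an outrage expression increases the
# probability of expressing outrage again.
import random

def simulate_user(feedback_rate, learning_rate=0.1, n_posts=200, seed=42):
    """feedback_rate: probability that an outrage post receives positive feedback."""
    rng = random.Random(seed)
    p_outrage = 0.2  # assumed starting propensity to express outrage
    for _ in range(n_posts):
        if rng.random() < p_outrage:  # the user expresses outrage on this post
            reward = 1.0 if rng.random() < feedback_rate else 0.0
            # Simple Rescorla-Wagner-style update toward the obtained reward
            p_outrage += learning_rate * (reward - p_outrage)
            p_outrage = min(max(p_outrage, 0.01), 0.99)
    return p_outrage

print(simulate_user(feedback_rate=0.8))  # heavily rewarded user
print(simulate_user(feedback_rate=0.1))  # rarely rewarded user
# Over time the expression propensity drifts toward the feedback rate.
```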

Monday, February 22, 2021

Anger Increases Susceptibility to Misinformation

Greenstein M, Franklin N. 
Exp Psychol. 2020 May;67(3):202-209. 

Abstract

The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.

Saturday, November 28, 2020

Toward a Hierarchical Model of Social Cognition: A Neuroimaging Meta-Analysis and Integrative Review of Empathy and Theory of Mind

Schurz, M. et al.
Psychological Bulletin. 
Advance online publication. 

Abstract

Along with the increased interest in and volume of social cognition research, there has been higher awareness of a lack of agreement on the concepts and taxonomy used to study social processes. Two central concepts in the field, empathy and Theory of Mind (ToM), have been identified as overlapping umbrella terms for different processes of limited convergence. Here, we review and integrate evidence of brain activation, brain organization, and behavior into a coherent model of social-cognitive processes. We start with a meta-analytic clustering of neuroimaging data across different social-cognitive tasks. Results show that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research. A higher level describes more broad and abstract classes of functioning, whereas a lower one explains how functions are applied to concrete contexts given by particular stimulus and task formats. Specifically, the higher level of our model suggests 3 groups of neurocognitive processes: (a) predominantly cognitive processes, which are engaged when mentalizing requires self-generated cognition decoupled from the physical world; (b) more affective processes, which are engaged when we witness emotions in others based on shared emotional, motor, and somatosensory representations; (c) combined processes, which engage cognitive and affective functions in parallel. We discuss how these processes are explained by an underlying principal gradient of structural brain organization. Finally, we validate the model by a review of empathy and ToM task interrelations found in behavioral studies.

Public Significance Statement

Empathy and Theory of Mind are important human capacities for understanding others. Here, we present a meta-analysis of neuroimaging data from 4,207 participants, which shows that these abilities can be deconstructed into specific and partially shared neurocognitive subprocesses. Our findings provide systematic, large-scale support for the hypothesis that understanding others’ mental states can be described by a multilevel model of hierarchical structure, similar to models in intelligence and personality research.

Tuesday, November 3, 2020

The Political is Personal: Daily Politics as a Chronic Stressor

Feinberg, M., Ford, et al.
(2020, September 19).

Abstract

Politics and its controversies have permeated everyday life, but the daily impact of politics is largely unknown. Here, we conceptualize politics as a chronic stressor with important consequences for people’s daily lives. We used longitudinal, daily-diary methods to track U.S. participants as they experienced daily political events across two weeks (Study 1: N=198, observations=2,167) and, separately, across three weeks (Study 2: N=811, observations=12,790) to explore how daily political events permeate people’s lives and how they cope with this influence of politics. In both studies, daily political events consistently evoked negative emotions, which corresponded to worse psychological and physical well-being, but also increased motivation to take political action (e.g., volunteer, protest) aimed at changing the political system that evoked these emotions in the first place. Understandably, people frequently tried to regulate their politics-induced emotions; and successfully regulating these emotions using cognitive strategies (reappraisal and distraction) predicted greater well-being, but also weaker motivation to take action. Although people can protect themselves from the emotional impact of politics, frequently used regulation strategies appear to come with a trade-off between well-being and action. To examine whether an alternative approach to one’s emotions could avoid this trade-off, we measured emotional acceptance in Study 2 (i.e., accepting one’s emotions without trying to change them) and found that successful acceptance predicted greater daily well-being but no impairment to political action. Overall, this research highlights how politics can be a chronic stressor in people’s daily lives, underscoring the far-reaching influence politicians have beyond the formal powers endowed unto them.

Conclusion

In all, our research bridges political psychology and affective science theory and methods, and highlights how these distinct literatures can intersect to answer important, unexplored questions. Our findings show that the political is very much personal–a pattern with powerful consequences for people’s daily lives. More generally, by demonstrating how political events personally impact the average citizen, including their psychological and physical health, our study reveals the far-reaching impact politicians have, beyond the formal powers endowed unto them.

Tuesday, July 14, 2020

The MAD Model of Moral Contagion: The role of motivation, attention and design in the spread of moralized content online

Brady WJ, Crockett MJ, Van Bavel JJ.
Perspect Psychol Sci. 2020;1745691620917336.

Abstract

With over 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process in order to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call ‘moral contagion’). Next, we propose a psychological model to explain moral contagion. The ‘MAD’ model of moral contagion argues that people have group identity-based motivations to share moral-emotional content; that such content is especially likely to capture our attention; and that the design of social media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.

A copy of the research can be found here.

Tuesday, May 5, 2020

How stress influences our morality

Lucius Caviola and Nadira Faulmüller
Oxford Martin School

Abstract

Several studies show that stress can influence moral judgment and behavior. In personal moral dilemmas—scenarios where someone has to be harmed by physical contact in order to save several others—participants under stress tend to make more deontological judgments than nonstressed participants, i.e. they agree less with harming someone for the greater good. Other studies demonstrate that stress can increase pro-social behavior for in-group members but decrease it for out-group members. The dual-process theory of moral judgment in combination with an evolutionary perspective on emotional reactions seems to explain these results: stress might inhibit controlled reasoning and trigger people’s automatic emotional intuitions. In other words, when it comes to morality, stress seems to make us prone to follow our gut reactions instead of our elaborate reasoning.

From the Implications Section

The conclusions drawn from these studies seem to raise an important question: if our moral judgments are so dependent on stress, which of our judgments should we rely on—the ones elicited by stress or the ones we come to after careful consideration? Most people would probably not regard a physiological reaction, such as stress, as a relevant normative factor that should have a qualified influence on our moral values. Instead, our reflective moral judgments seem to represent better what we really care about. This should make us suspicious of the normative validity of emotional intuitions in general. Thus, in order to identify our moral values, we should not blindly follow our gut reactions, but try to think more deliberately about what we care about.

For example, as stated we might be more prone to help a poor beggar on the street when we are stressed. Here, even after careful reflection we might come to the conclusion that this emotional reaction elicited by stress is the morally right thing to do after all. However, in other situations this might not be the case. As we have seen we are less prone to donate money to charity when stressed (cf. Vinkers et al., 2013). But is this reaction really in line with what we consider to be the morally right thing to do after careful reflection? After all, if we care about the well-being of the single beggar, why then should the many more people’s lives, potentially benefiting from our donation, count less?

The research is here.

Friday, December 20, 2019

Study offers first large-sample evidence of the effect of ethics training on financial sector behavior

Shannon Roddel
phys.org
Originally posted 21 Nov 19


Here is an excerpt:

"Behavioral ethics research shows that business people often do not recognize when they are making ethical decisions," he says. "They approach these decisions by weighing costs and benefits, and by using emotion or intuition."

These results are consistent with the exam playing a "priming" role, where early exposure to rules and ethics material prepares the individual to behave appropriately later. Those passing the exam without prior misconduct appear to respond most to the amount of rules and ethics material covered on their exam. Those already engaging in misconduct, or having spent several years working in the securities industry, respond least or not at all.

The study also examines what happens when people with more ethics training find themselves surrounded by bad behavior, revealing these individuals are more likely to leave their jobs.

"We study this effect both across organizations and within Wells Fargo, during their account fraud scandal," Kowaleski explains. "That those with more ethics training are more likely to leave misbehaving organizations suggests the self-reinforcing nature of corporate culture."

The info is here.

Monday, November 25, 2019

The MAD Model of Moral Contagion: The role of motivation, attention and design in the spread of moralized content online

William Brady, Molly Crockett, and Jay Van Bavel
PsyArXiv
Originally posted March 11, 2019

Abstract

With over 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process in order to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call ‘moral contagion’). Next, we propose a psychological model to explain moral contagion. The ‘MAD’ model of moral contagion argues that people have group identity-based motivations to share moral-emotional content; that such content is especially likely to capture our attention; and that the design of social media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.

The research is here.

Thursday, July 4, 2019

Exposure to opposing views on social media can increase political polarization

Christopher Bail, Lisa Argyle, and others
PNAS September 11, 2018 115 (37) 9216-9221; first published August 28, 2018 https://doi.org/10.1073/pnas.1804840115

Abstract

There is mounting concern that social media sites contribute to political polarization by creating “echo chambers” that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for 1 month that exposed them to messages from those with opposing political ideologies (e.g., elected officials, opinion leaders, media organizations, and nonprofit groups). Respondents were resurveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative posttreatment. Democrats exhibited slight increases in liberal attitudes after following a conservative Twitter bot, although these effects are not statistically significant. Notwithstanding important limitations of our study, these findings have significant implications for the interdisciplinary literature on political polarization and the emerging field of computational social science.

The research is here.

Happy Fourth of July!!!

Saturday, June 22, 2019

Morality and Self-Control: How They are Intertwined, and Where They Differ

Wilhelm Hofmann, Peter Meindl, Marlon Mooijman, & Jesse Graham
PsyArXiv Preprints
Last edited November 18, 2018

Abstract

Despite sharing conceptual overlap, morality and self-control research have led largely separate lives. In this article, we highlight neglected connections between these major areas of psychology. To this end, we first note their conceptual similarities and differences. We then show how morality research, typically emphasizing aspects of moral cognition and emotion, may benefit from incorporating motivational concepts from self-control research. Similarly, self-control research may benefit from a better understanding of the moral nature of many self-control domains. We place special focus on various components of self-control and on the ways in which self-control goals may be moralized.

(cut)

Here is the Conclusion:

How do we resist temptation, prioritizing our future well-being over our present pleasure? And how do we resist acting selfishly, prioritizing the needs of others over our own self-interest? These two questions highlight the links between understanding self-control and understanding morality. We hope we have shown that morality and self-control share considerable conceptual overlap with regard to the way people regulate behavior in line with higher-order values and standards. As the psychological study of both areas becomes increasingly collaborative and integrated, insights from each subfield can better enable research and interventions to increase human health and flourishing.

The info is here.

Wednesday, May 15, 2019

Moral self-judgment is stronger for future than past actions

Sjåstad, H. & Baumeister, R.F.
Motiv Emot (2019).
https://doi.org/10.1007/s11031-019-09768-8

Abstract

When, if ever, would a person want to be held responsible for his or her choices? Across four studies (N = 915), people favored more extreme rewards and punishments for their future than their past actions. This included thinking that they should receive more blame and punishment for future misdeeds than for past ones, and more credit and reward for future good deeds than for past ones. The tendency to moralize the future more than the past was mediated by anticipating (one’s own) emotional reactions and concern about one’s reputation, which was stronger in the future as well. The findings fit the pragmatic view that people moralize the future partly to guide their choices and actions, such as by increasing their motivation to restrain selfish impulses and build long-term cooperative relationships with others. People typically believe that the future is open and changeable, while the past is not. We conclude that the psychology of moral accountability has a strong future component.

Here is a snip from Concluding Remarks

A recent article by Uhlmann, Pizarro, and Diermeier (2015) proposed an important shift in the foundation of moral psychology. Whereas most research has focused on how people judge moral actions, Uhlmann et al. proposed that the primary, focal purpose is to judge persons. They suggested that this has a prospective dimension: Ultimately, the pragmatic goal is to know whom one can cooperate with, rely on, and otherwise trust in the future. Judging past actions is a means toward predicting the future, with the focus on individual persons.

The present findings fit well with and even extend that analysis. The orientation toward the future is not limited to judging and predicting the moral character of others but also extends to oneself. If one functional purpose of morality is to promote group cohesion and cooperation in the future, people apparently think that part of that involves raising expectations and standards for their own future behavior as well.

The pre-print can be found here.

Saturday, January 5, 2019

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS July 11, 2017 114 (28) 7313-7318; published ahead of print June 26, 2017 https://doi.org/10.1073/pnas.1618923114

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.

Significance

Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.
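
The "20% for each additional word" finding is a multiplicative effect of the kind produced by count models of retweet data. As a rough back-of-the-envelope illustration (not the authors' analysis), each extra moral-emotional word multiplies a message's expected diffusion by about 1.20, so the boost compounds:

```python
# Back-of-the-envelope reading of the "20% per moral-emotional word" result
# (illustrative only, not the authors' count-regression model).
RATE_PER_WORD = 1.20  # reported multiplicative increase per additional word

for k in range(6):
    print(f"{k} moral-emotional words -> expected diffusion x {RATE_PER_WORD ** k:.2f}")
# A message with 5 such words would be expected to spread roughly 2.5x as widely
# as an otherwise comparable message with none, all else being equal.
```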

Friday, December 7, 2018

Lay beliefs about the controllability of everyday mental states.

Cusimano, C., & Goodwin, G.
In press, Journal of Experimental Psychology: General

Abstract

Prominent accounts of folk theory of mind posit that people judge others’ mental states to be uncontrollable, unintentional, or otherwise involuntary. Yet, this claim has little empirical support: few studies have investigated lay judgments about mental state control, and those that have done so yield conflicting conclusions. We address this shortcoming across six studies, which show that, in fact, lay people attribute to others a high degree of intentional control over their mental states, including their emotions, desires, beliefs, and evaluative attitudes. For prototypical mental states, people’s judgments of control systematically varied by mental state category (e.g., emotions were seen as less controllable than desires, which in turn were seen as less controllable than beliefs and evaluative attitudes). However, these differences were attenuated, sometimes completely, when the content of and context for each mental state were tightly controlled. Finally, judgments of control over mental states correlated positively with judgments of responsibility and blame for them, and to a lesser extent, with judgments that the mental state reveals the agent’s character. These findings replicated across multiple populations and methods, and generalized to people’s real-world experiences. The present results challenge the view that people judge others’ mental states as passive, involuntary, or unintentional, and suggest that mental state control judgments play a key role in other important areas of social judgment and decision making.

The research is here.

Important research for those practicing psychotherapy.

Tuesday, October 10, 2017

Reasons Probably Won’t Change Your Mind: The Role of Reasons in Revising Moral Decisions

Stanley, M. L., Dougherty, A. M., Yang, B. W., Henne, P., & De Brigard, F. (2017).
Journal of Experimental Psychology: General. Advance online publication.

Abstract

Although many philosophers argue that making and revising moral decisions ought to be a matter of deliberating over reasons, the extent to which the consideration of reasons informs people’s moral decisions and prompts them to change their decisions remains unclear. Here, after making an initial decision in 2-option moral dilemmas, participants examined reasons for only the option initially chosen (affirming reasons), reasons for only the option not initially chosen (opposing reasons), or reasons for both options. Although participants were more likely to change their initial decisions when presented with only opposing reasons compared with only affirming reasons, these effect sizes were consistently small. After evaluating reasons, participants were significantly more likely not to change their initial decisions than to change them, regardless of the set of reasons they considered. The initial decision accounted for most of the variance in predicting the final decision, whereas the reasons evaluated accounted for a relatively small proportion of the variance in predicting the final decision. This resistance to changing moral decisions is at least partly attributable to a biased, motivated evaluation of the available reasons: participants rated the reasons supporting their initial decisions more favorably than the reasons opposing their initial decisions, regardless of the reported strategy used to make the initial decision. Overall, our results suggest that the consideration of reasons rarely induces people to change their initial decisions in moral dilemmas.

The article is here, behind a paywall.

You can contact the lead investigator for a personal copy.