Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Emotions.

Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.


Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information are privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Thursday, June 8, 2023

Do Moral Beliefs Motivate Action?

Díaz, R.
Ethic Theory Moral Prac (2023).


Do moral beliefs motivate action? To answer this question, extant arguments have considered hypothetical cases of association (dissociation) between agents’ moral beliefs and actions. In this paper, I argue that this approach can be improved by studying people’s actual moral beliefs and actions using empirical research methods. I present three new studies showing that, when the stakes are high, associations between participants’ moral beliefs and actions are actually explained by co-occurring but independent moral emotions. These findings suggest that moral beliefs themselves have little or no motivational force, supporting the Humean picture of moral motivation.


In this paper, I showed that the use of hypothetical cases to extract conclusions regarding the (lack of) motivational power of moral beliefs faces important limitations. I argued that these limitations can be addressed using empirical research tools, and presented a series of studies doing so.

The results of the studies show that, when the stakes are high, the apparent motivational force of beliefs is in fact explained by co-occurring moral emotions. This supports Humean views of moral motivation. The results regarding low-stake situations, however, are open to both Humean and “watered-down” Anti-Humean interpretations.

In moral practice, it probably won’t matter if moral beliefs don’t motivate us much or at all. Arguably, most real-life moral choices involve countervailing motives with more than a little motivational strength, making moral beliefs irrelevant in any case. However, the situation might be different with regards to ethical theory. Accepting that moral beliefs have some motivational force (even if very low) could be enough to solve the Moral Problem (see Introduction), while rejecting that moral beliefs have motivational force would prompt us to reject one of the other claims involved in the puzzle. Future research should help us decide between competing interpretations of the results regarding low-stakes situations presented in this paper.

Overall, the results presented in this paper put pressure on Anti-Humean views of moral motivation, as they suggest that moral beliefs have little or no motivational force.

With regards to methodology, I showed that using empirical research tools improves upon the use of hypothetical cases of moral motivation by ruling out alternative interpretations. Note, however, that the empirical investigations presented in this paper build on extant hypothetical cases and the logical tools involved in the discussion of these cases. In this sense, the studies presented in this paper do not oppose, but rather continue extant work regarding cases. Hopefully, this paper paves the way for more empirical investigations, as well as discussions on the best ways to measure and test the relations between moral behavior, moral beliefs, and moral emotions.

Monday, July 25, 2022

Morally Exhausted: Why Russian Soldiers are Refusing to Fight in the Unprovoked War on Ukraine

Timofei Rozhanskiy
Originally posted 23 July 22

Here is an excerpt:

I Had To Refuse So I Could Stay Alive

Russia’s troops in Ukraine are largely made up of contract soldiers: volunteer personnel who sign fixed-term contracts for service. The range of experience varies. Other units include troops from private military companies like Vagner, or specialized, semiautonomous units overseen by Chechnya’s strongman leader, Ramzan Kadyrov.

The discontent in Kaminsky’s 11th Brigade is not an isolated case, and there are indications that Russian commanders are trying different tactics to keep the problem from spiraling out of control: for example, publicly shaming soldiers who are refusing to fight.

In Buryatia, where the 11th Brigade is based, dozens of personnel have sought legal assistance from local activists, seeking to break their contracts and get out of service in Ukraine, for various reasons.

In the southern Russian town of Budyonnovsk, at the home base of the 205th Cossack Motorized Rifle Brigade, commanders have erected a “wall of shame” with the names, ranks, and photographs of some 300 soldiers who have disobeyed orders in the Ukraine war.

“They forgot their military oaths, the ceremonial promise, their vows of duty to their Fatherland,” the board reads.

In conversations via the Russian social media giant VK, several soldiers from the brigade disputed the circumstances behind their inclusion on the wall of shame. All asked that their names be withheld for fear of further punishment or retaliation by commanders.

“I understand everything, of course. I signed a contract. I’m supposed to be ready for any situation; this war, this special operation,” one soldier wrote. “But I was thinking, I’m still young; at any moment, a piece of shrapnel, a bullet could fly into my head.”

The soldier said he broke his contract and resigned from the brigade before the February 24 invasion, once he realized it was in fact going forward.

“I thought a long time about it and came to the decision. I understood that I had to refuse so I could stay alive,” he said. “I don’t regret it one bit.”

Friday, October 22, 2021

A Meta-Analytic Investigation of the Antecedents, Theoretical Correlates, and Consequences of Moral Disengagement at Work

Ogunfowora, B. T., et al. (2021)
The Journal of Applied Psychology
Advance online publication. 


Moral disengagement refers to a set of cognitive tactics people employ to sidestep moral self-regulatory processes that normally prevent wrongdoing. In this study, we present a comprehensive meta-analytic review of the nomological network of moral disengagement at work. First, we test its dispositional and contextual antecedents, theoretical correlates, and consequences, including ethics (workplace misconduct and organizational citizenship behaviors [OCBs]) and non-ethics outcomes (turnover intentions and task performance). Second, we examine Bandura's postulation that moral disengagement fosters misconduct by diminishing moral cognitions (moral awareness and moral judgment) and anticipatory moral self-condemning emotions (guilt). We also test a contrarian view that moral disengagement is limited in its capacity to effectively curtail moral emotions after wrongdoing. The results show that Honesty-Humility, guilt proneness, moral identity, trait empathy, conscientiousness, idealism, and relativism are key individual antecedents. Further, abusive supervision and perceived organizational politics are strong contextual enablers of moral disengagement, while ethical leadership and organizational justice are relatively weak deterrents. We also found that narcissism, Machiavellianism, psychopathy, and psychological entitlement are key theoretical correlates, although moral disengagement shows incremental validity over these "dark" traits. Next, moral disengagement was positively associated with workplace misconduct and turnover intentions, and negatively related to OCBs and task performance. Its positive impact on misconduct was mediated by lower moral awareness, moral judgment, and anticipated guilt. Interestingly, however, moral disengagement was positively related to guilt and shame post-misconduct. In sum, we find strong cumulative evidence for the pertinence of moral disengagement in the workplace.

From the Discussion

Our moderator analyses reveal several noteworthy findings. First, the relationship between moral disengagement and misconduct did not significantly differ depending on whether it is operationalized as a trait or state. This suggests that the impact of moral disengagement – at least with respect to workplace misconduct – is equally devastating when it is triggered in specific situations or when it is captured as a stable propensity. This provides initial support for conceptualizing moral disengagement along a continuum – from “one off” instances in specific contexts (i.e., state moral disengagement) to a “dynamic disposition” (Bandura, 1999b) that is relatively stable, but which may also shift in response to different situations (Moore et al., 2019).  

Second, there may be utility in exploring specific disengagement tactics. For instance, euphemistic labeling exerted stronger effects on misconduct compared to moral justification and diffusion of responsibility. Relative weight analyses further showed that some tactics contribute more to understanding misconduct and OCBs. Scholars have proposed that exploring moral disengagement tactics that match the specific context may offer new insights (Kish-Gephart et al., 2014; Moore et al., 2019). It is possible that moral justification might be critical in situations where participants must conjure up rationales to justify their misdeeds (Duffy et al., 2005), while diffusion of responsibility might matter more in team settings where morally disengaging employees can easily assign blame to the collective (Alnuaimi et al., 2010). These possibilities suggest that specific disengagement tactics may offer novel theoretical insights that may be overlooked when scholars focus on overall moral disengagement. However, we acknowledge that this conclusion is preliminary given the small number of studies available for these analyses. 

Tuesday, August 10, 2021

The irrationality of transhumanists

Susan B. Levin
iai.tv Issue 9
Originally posted 11 Jan 21

Bioenhancement is among the hottest topics in bioethics today. The most contentious area of debate here is advocacy of “radical” enhancement (aka transhumanism). Because transhumanists urge us to categorically heighten select capacities, above all, rationality, it would be incorrect to say that the possessors of these abilities were human beings: to signal, unmistakably, the transcendent status of these beings, transhumanists call them “posthuman,” “godlike,” and “divine.” For many, the idea of humanity’s technological self-transcendence has a strong initial appeal; that appeal, intensified by transhumanists’ relentless confidence that radical bioenhancement will occur if only we commit adequate resources to the endeavor, yields a viscerally potent combination. On this of all topics, however, we should not let ourselves be ruled by viscera. 

Transhumanists present themselves as the sole rational parties to the debate over radical bioenhancement: merely questioning a dedication to skyrocketing rational capacity or lifespan testifies to one’s irrationality. Scientifically, for this charge of irrationality not to be intellectually perverse, the evidence on transhumanists’ side would have to be overwhelming.


Transhumanists are committed to extreme rational essentialism: they treasure the limitless augmentation of rational capacity, treating affect as irrelevant or targeting it (at minimum, the so-called negative variety) for elimination. Further disrupting transhumanists’ fixation with radical cognitive bioenhancement, therefore, is the finding that pharmacological boosts, such as they are, may not be entirely or even mainly cognitive. Motivation may be strengthened, with resulting boosts to subjects’ informational facility. What’s more, being in a “positive” (i.e., happy) mood can impair cognitive performance, while being in a “negative” (i.e., sad) one can strengthen it by, for instance, making subjects more disposed to reject stereotypes. 

Tuesday, July 27, 2021

Forms and Functions of the Social Emotions

Sznycer, D., Sell, A., & Lieberman, D. (2021). 
Current Directions in Psychological Science. 


In engineering, form follows function. It is therefore difficult to understand an engineered object if one does not examine it in light of its function. Just as understanding the structure of a lock requires understanding the desire to secure valuables, understanding structures engineered by natural selection, including emotion systems, requires hypotheses about adaptive function. Social emotions reliably solved adaptive problems of human sociality. A central function of these emotions appears to be the recalibration of social evaluations in the minds of self and others. For example, the anger system functions to incentivize another individual to value your welfare more highly when you deem the current valuation insufficient; gratitude functions to consolidate a cooperative relationship with another individual when there are indications that the other values your welfare; shame functions to minimize the spread of discrediting information about yourself and the threat of being devalued by others; and pride functions to capitalize on opportunities to become more highly valued by others. Using the lens of social valuation, researchers are now mapping these and other social emotions at a rapid pace, finding striking regularities across industrial and small-scale societies and throughout history.

From the Shame portion

The behavioral repertoire of shame is broad. From the perspective of the disgraced or to-be-disgraced individual, a trait (e.g., incompetence) or course of action (e.g., theft) that fellow group members view negatively can be shielded from others’ censure at each of various junctures: imagination, decision making, action, information diffusion within the community, and audience reaction. Shame appears to have authority over devaluation-minimizing responses relevant to each of these junctures. For example, shame can lead people to turn away from courses of actions that might lead others to devalue them, to interrupt their execution of discrediting actions, to conceal and destroy reputationally damaging information about themselves, and to hide. When an audience finds discrediting information about the focal individual and condemns or attacks that individual, the shamed individual may apologize, signal submission, appease, cooperate, obfuscate, lie, shift the blame to others, or react with aggression. These behaviors are heterogeneous from a tactical standpoint; some even work at cross-purposes if mobilized concurrently. But each of these behaviors appears to have the strategic potential to limit the threat of devaluation in certain contexts, combinations, or sequences.

Such shame-inspired behaviors as hiding, scapegoating, and aggressing are undesirable from the standpoint of victims and third parties. This has led to the view that shame is an ugly and maladaptive emotion (Tangney et al., 1996). However, note that those behaviors can enhance the welfare of the focal individual, who is pressed to escape detection and minimize or counteract devaluation by others. Whereas the consequences of social devaluation are certainly ugly for the individual being devalued, the form-function approach suggests instead that shame is an elegantly engineered system that transmits bad news of the potential for devaluation to the array of counter-devaluation responses available to the focal individual.

Important data points to share with trainees. A good refresher for seasoned therapists.

Monday, July 5, 2021

When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will

Nahmias, E., Allen, C. A., & Loveall, B.
In Free Will, Causality, & Neuroscience
Chapter 3

Imagine that, in the future, humans develop the technology to construct humanoid robots with very sophisticated computers instead of brains and with bodies made out of metal, plastic, and synthetic materials. The robots look, talk, and act just like humans and are able to integrate into human society and to interact with humans across any situation. They work in our offices and our restaurants, teach in our schools, and discuss the important matters of the day in our bars and coffeehouses. How do you suppose you’d respond to one of these robots if you were to discover them attempting to steal your wallet or insulting your friend? Would you regard them as free and morally responsible agents, genuinely deserving of blame and punishment?

If you’re like most people, you are more likely to regard these robots as having free will and being morally responsible if you believe that they are conscious rather than non-conscious. That is, if you think that the robots actually experience sensations and emotions, you are more likely to regard them as having free will and being morally responsible than if you think they simply behave like humans based on their internal programming but with no conscious experiences at all. But why do many people have this intuition? Philosophers and scientists typically assume that there is a deep connection between consciousness and free will, but few have developed theories to explain this connection. To the extent that they have, it’s typically via some cognitive capacity thought to be important for free will, such as reasoning or deliberation, that consciousness is supposed to enable or bolster, at least in humans. But this sort of connection between consciousness and free will is relatively weak. First, it’s contingent; given our particular cognitive architecture, it holds, but if robots or aliens could carry out the relevant cognitive capacities without being conscious, this would suggest that consciousness is not constitutive of, or essential for, free will. Second, this connection is derivative, since the main connection goes through some capacity other than consciousness. Finally, this connection does not seem to be focused on phenomenal consciousness (first-person experience or qualia), but instead, on access consciousness or self-awareness (more on these distinctions below).

From the Conclusion

In most fictional portrayals of artificial intelligence and robots (such as Blade Runner, A.I., and Westworld), viewers tend to think of the robots differently when they are portrayed in a way that suggests they express and feel emotions. No matter how intelligent or complex their behavior, they do not come across as free and autonomous until they seem to care about what happens to them (and perhaps others). Often this is portrayed by their showing fear of their own death or others, or expressing love, anger, or joy. Sometimes it is portrayed by the robots’ expressing reactive attitudes, such as indignation, or our feeling such attitudes towards them. Perhaps the authors of these works recognize that the robots, and their stories, become most interesting when they seem to have free will, and people will see them as free when they start to care about what happens to them, when things really matter to them, which results from their experiencing the actual (and potential) outcomes of their actions.

Monday, May 31, 2021

Disgust Can Be Morally Valuable

Charlie Kurth
Scientific American
Originally posted 9 May 21

Here is an excerpt:

Let’s start by considering disgust’s virtues. Not only do we tend to experience disgust toward moral wrongs like hypocrisy and exploitation, but the shunning and social excluding that disgust brings seems a fitting response to those who pollute the moral fabric in these ways. Moreover, in the face of worries about morally problematic disgust—disgust felt at the wrong time or in the wrong way—advocates respond that it’s an emotion we can substantively change for the better.

On this front, disgust’s advocates point to exposure and habituation; just like I might overcome the disgust I feel about exotic foods by trying them, I can overcome the disgust I feel about same-sex marriage by spending more time with gay couples. Moreover, work in psychology appears to support this picture. Medical school students, for instance, lose their disgust about touching dead bodies after a few months of dissecting corpses, and new mothers quickly become less disgusted by the smell of soiled diapers.

But these findings may be deceptive. For starters, when we look more closely at the results of the diaper experiment, we see that a mother’s reduced disgust sensitivity is most pronounced with regard to her own baby’s diapers, and additional research indicates that mothers have a general preference for the smell of their own children. This combination suggests, contra the disgust advocates, that a mother’s disgust is not being eliminated. Rather, her disgust at the soiled diapers is still there; it’s just being masked by the positive feelings that she’s getting from the smell of her newborn. Similarly, when we look carefully at the cadaver study, we see that while the disgust of medical students toward touching the cold bodies of the dissection lab is reduced with exposure, the disgust they feel toward touching the warm bodies of the recently deceased remained unchanged.

Wednesday, September 9, 2020

Hate Trumps Love: The Impact of Political Polarization on Social Preferences

Eugen Dimant
Published 4 September 20


Political polarization has ruptured the fabric of U.S. society. The focus of this paper is to examine various layers of (non-)strategic decision-making that are plausibly affected by political polarization through the lens of one's feelings of hate and love for Donald J. Trump. In several pre-registered experiments, I document the behavioral-, belief-, and norm-based mechanisms through which perceptions of interpersonal closeness, altruism, and cooperativeness are affected by polarization, both within and between political factions. To separate ingroup-love from outgroup-hate, the political setting is contrasted with a minimal group setting. I find strong heterogeneous effects: ingroup-love occurs in the perceptional domain (how close one feels towards others), whereas outgroup-hate occurs in the behavioral domain (how one helps/harms/cooperates with others). In addition, the pernicious outcomes of partisan identity also comport with the elicited social norms. Noteworthy, the rich experimental setting also allows me to examine the drivers of these behaviors, suggesting that the observed partisan rift might be not as forlorn as previously suggested: in the contexts studied here, the adverse behavioral impact of the resulting intergroup conflict can be attributed to one's grim expectations about the cooperativeness of the opposing faction, as opposed to one's actual unwillingness to cooperate with them.

From the Conclusion and Discussion

Along all investigated dimensions, I obtain strong effects and the following results: for one, polarization produces ingroup/outgroup differentiation in all three settings (nonstrategic, Experiment 1; strategic, Experiment 2; social norms, Experiment 3), leading participants to actively harm and cooperate less with participants from the opposing faction. For another, lack of cooperation is not the result of a categorical unwillingness to cooperate across factions, but based on one’s grim expectations about the other’s willingness to cooperate. Importantly, however, the results also cast light on the nuance with which ingroup-love and outgroup-hate – something that existing literature often takes as being two sides of the same coin – occurs. In particular, by comparing behavior between the Trump Prime and minimal group prime treatments, the results suggest that ingroup-love can be observed in terms of feeling close to one another, whereas outgroup hate appears in form of taking money away from and being less cooperative with each other. The elicited norms are consistent with these observations and also point out that those who love Trump have a much weaker ingroup/outgroup differentiation than those who hate Trump do.

Thursday, March 19, 2020

Does virtue lead to status? Testing the moral virtue theory of status attainment.

Bai, F., Ho, G. C. C., & Yan, J. (2020).
Journal of Personality and 
Social Psychology, 118(3), 501–531.


The authors perform one of the first empirical tests of the moral virtue theory of status attainment (MVT), a conceptual framework for showing that morality leads to status. Studies 1a to 1d are devoted to developing and validating a 15-item status attainment scale (SAS) to measure how virtue leads to admiration (virtue–admiration), how dominance leads to fear (dominance–fear), and how competence leads to respect (competence–respect). Studies 2a and 2b are an exploration of the nomological network and discriminant validity to show that peer-reported virtue–admiration is positively related to moral character and perceptions such as perceived warmth and unrelated to amoral constructs such as neuroticism. In addition, virtue–admiration mediates the positive effect of several self-reported moral character traits, such as moral identity-internalization, on status conferral. Study 3 supports the external validity of the virtue route to status in a sample of full-time managers from China. In Study 4, a preregistered experiment, virtue evokes superior status while selfishness evokes inferior status. Perceivers who are high in moral character show stronger perceptions of superior status. Finally, Study 5, another preregistered experiment, shows that virtue leads to higher status through inducing virtue–admiration rather than competence–respect, even for incompetent actors. The findings provide initial support for MVT arguing that virtue is a distinct, third route to status.

The research is here.

Sunday, December 1, 2019

Moral Reasoning and Emotion

Joshua May & Victor Kumar
Published in
The Routledge Handbook of Moral Epistemology,
eds. Karen Jones, Mark Timmons, and
Aaron Zimmerman, Routledge (2018), pp. 139-156.


This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.



We draw two main conclusions. First, on a fair and plausible characterization of reasoning and emotion, they are both integral to moral judgment. In particular, when our moral beliefs undergo changes over long periods of time, there is ample space for both reasoning and emotion to play an iterative role. Second, it’s difficult to cleave reasoning from emotional processing. When the two affect moral judgment, especially across time, their interplay can make it artificial or fruitless to impose a division, even if a distinction can still be drawn between inference and valence in information processing.

To some degree, our conclusions militate against extreme characterizations of the rationalism-sentimentalism divide. However, the debate is best construed as a question about which psychological process is more fundamental or essential to distinctively moral cognition.  The answer still affects both theoretical and practical problems, such as how to make artificial intelligence capable of moral judgment. At the moment, the more nuanced dispute is difficult to adjudicate, but it may be addressed by further research and theorizing.

The book chapter can be downloaded here.

Wednesday, November 13, 2019

Dynamic Moral Judgments and Emotions

Magda Osman
Published Online June 2015 in SciRes.


We may experience strong moral outrage when we read a news headline that describes a prohibited action, but when we gain additional information by reading the main news story, do our emotional experiences change at all, and if they do in what way do they change? In a single online study with 80 participants the aim was to examine the extent to which emotional experiences (disgust, anger) and moral judgments track changes in information about a moral scenario. The evidence from the present study suggests that we systematically adjust our moral judgments and our emotional experiences as a result of exposure to further information about the morally dubious action referred to in a moral scenario. More specifically, the way in which we adjust our moral judgments and emotions appears to be based on information signalling whether a morally dubious act is permitted or prohibited.

From the Discussion

The present study showed that moral judgments changed in response to different details concerning the moral scenarios, and while participants gave the most severe judgments for the initial limited information regarding the scenario (i.e. the headline), they adjusted the severity of their judgments downwards as more information was provided (i.e. main story, conclusion). In other words, when context was provided for why a morally dubious action was carried out, people used this to inform their later judgments and consciously integrated this new information into their judgments of the action. Crucially, this reflects the fact that judgments and emotions are not fixed, and that they are likely to operate on rational processes (Huebner, 2011, 2014; Teper et al., 2015). More to the point, this evidence suggests that there may well be an integrated representation of the moral scenario that is based on informational content as well as personal emotional experiences that signal the valance on which the information should be judged. The evidence from the present study suggests that both moral judgments and emotional experiences change systematically in response to changes in information that critically concern the way in which a morally dubious action should be evaluated.

A pdf can be downloaded here.

Monday, November 11, 2019

Incidental emotions in moral dilemmas: the influence of emotion regulation.

Raluca D. Szekely & Andrei C. Miu
Cogn Emot. 2015;29(1):64-75.
doi: 10.1080/02699931.2014.895300.


Recent theories have argued that emotions play a central role in moral decision-making and suggested that emotion regulation may be crucial in reducing emotion-linked biases. The present studies focused on the influence of emotional experience and individual differences in emotion regulation on moral choice in dilemmas that pit harming another person against social welfare. During these "harm to save" moral dilemmas, participants experienced mostly fear and sadness but also other emotions such as compassion, guilt, anger, disgust, regret and contempt (Study 1). Fear and disgust were more frequently reported when participants made deontological choices, whereas regret was more frequently reported when participants made utilitarian choices. In addition, habitual reappraisal negatively predicted deontological choices, and this effect was significantly carried through emotional arousal (Study 2). Individual differences in the habitual use of other emotion regulation strategies (i.e., acceptance, rumination and catastrophising) did not influence moral choice. The results of the present studies indicate that negative emotions are commonly experienced during "harm to save" moral dilemmas, and they are associated with a deontological bias. By efficiently reducing emotional arousal, reappraisal can attenuate the emotion-linked deontological bias in moral choice.

General Discussion

Using H2S moral dilemmas, the present studies yielded three main findings: (1) a wide spectrum of emotions are experienced during these moral dilemmas, with self-focused emotions such as fear and sadness being the most common (Study 1); (2) there is a positive relation between emotional arousal during moral dilemmas and deontological choices (Studies 1 and 2); and (3) individual differences in reappraisal, but not other emotion regulation strategies (i.e., acceptance, rumination or catastrophising) are negatively associated with deontological choices and this effect is carried through emotional arousal (Study 2).

A pdf can be downloaded here.

Tuesday, October 29, 2019

Should we create artificial moral agents? A Critical Analysis

John Danaher
Philosophical Disquisitions
Originally published September 21, 2019

Here is an excerpt:

So what argument is being made? At first, it might look like Sharkey is arguing that moral agency depends on biology, but I think that is a bit of a red herring. What she is arguing is that moral agency depends on emotions (particularly second personal emotions such as empathy, sympathy, shame, regret, anger, resentment etc). She then adds to this the assumption that you cannot have emotions without having a biological substrate. This suggests that Sharkey is making something like the following argument:

(1) You cannot have explicit moral agency without having second personal emotions.

(2) You cannot have second personal emotions without being constituted by a living biological substrate.

(3) Robots cannot be constituted by a living biological substrate.

(4) Therefore, robots cannot have explicit moral agency.

Assuming this is a fair reconstruction of the reasoning, I have some questions about it. First, taking premises (2) and (3) as a pair, I would query whether having a biological substrate really is essential for having second personal emotions. What is the necessary connection between biology and emotionality? This smacks of biological mysterianism or dualism to me, almost a throwback to the time when biologists thought that living creatures possessed some élan vital that separated them from the inanimate world. Modern biology and biochemistry cast all that into doubt. Living creatures are — admittedly extremely complicated — evolved biochemical machines. There is no essential and unbridgeable chasm between the living and the inanimate.

The info is here.

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.

The info is here.

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.

Thursday, October 10, 2019

Moral Distress and Moral Strength Among Clinicians in Health Care Systems: A Call for Research

Connie M. Ulrich and Christine Grady
NAM Perspectives. 

Here is an excerpt:

Evidence shows that dissatisfaction and wanting to leave one’s job—and the profession altogether—often follow morally distressing encounters. Ethics education that builds cognitive and communication skills, teaches clinicians ethical concepts, and helps them gain confidence may be essential in building moral strength. One study found, for example, that among practicing nurses and social workers, those with the least ethics education were also the least confident, the least likely to use ethics resources (if available), and the least likely to act on their ethical concerns. In this national study, as many as 23 percent of nurses reported having had no ethics education at all. But the question remains—is ethics education enough?

Many factors likely support or hinder a clinician’s capacity and willingness to act with moral strength. More research is needed to investigate how interdisciplinary ethics education and institutional resources can help nurses, physicians, and others voice their ethical concerns, help them agree on morally acceptable actions, and support their capacity and propensity to act with moral strength and confidence. Research on moral distress and ethical concerns in everyday clinical practice can begin to build a knowledge base that will inform clinical training—in both educational and health care institutions—and that will help create organizational structures and processes to prepare and support clinicians to encounter potentially distressing situations with moral strength. Research can help tease out what is important and predictive for taking (or not taking) ethical action in morally distressing circumstances. This knowledge would be useful for designing strategies to support clinician well-being. Indeed, studies should focus on the influences that affect clinicians’ ability and willingness to become involved or take ownership of ethically laden patient care issues, and their level of confidence in doing so.

Sunday, October 6, 2019

Thinking Fast and Furious: Emotional Intensity and Opinion Polarization in Online Media

David Asker & Elias Dinas
Public Opinion Quarterly
Published: 09 September 2019


How do online media increase opinion polarization? The “echo chamber” thesis points to the role of selective exposure to homogeneous views and information. Critics of this view emphasize the potential of online media to expand the ideological spectrum that news consumers encounter. Embedded in this discussion is the assumption that online media affect public opinion via the range of information that they offer to users. We show that online media can induce opinion polarization even among users exposed to ideologically heterogeneous views, by heightening the emotional intensity of the content. Higher affective intensity provokes motivated reasoning, which in turn leads to opinion polarization. The results of an online experiment focusing on the comments section, a user-driven tool of communication whose effects on opinion formation remain poorly understood, show that participants randomly assigned to read an online news article with a user comments section subsequently express more extreme views on the topic of the article than a control group reading the same article without any comments. Consistent with expectations, this effect is driven by the emotional intensity of the comments, lending support to the idea that motivated reasoning is the mechanism behind this effect.

From the Discussion:

These results should not be taken as a challenge to the echo chamber argument, but rather as a complement to it. Selective exposure to desirable information and motivated rejection of undesirable information constitute separate mechanisms whereby online news audiences may develop more extreme views. Whereas there is already ample empirical evidence about the first mechanism, previous research on the second has been scant. Our contribution should thus be seen as an attempt to fill this gap.

Tuesday, September 10, 2019

Can Ethics Be Taught?

Peter Singer
Project Syndicate
Originally published August 7, 2019

Can taking a philosophy class – more specifically, a class in practical ethics – lead students to act more ethically?

Teachers of practical ethics have an obvious interest in the answer to that question. The answer should also matter to students thinking of taking a course in practical ethics. But the question also has broader philosophical significance, because the answer could shed light on the ancient and fundamental question of the role that reason plays in forming our ethical judgments and determining what we do.

Plato, in the Phaedrus, uses the metaphor of a chariot pulled by two horses; one represents rational and moral impulses, the other irrational passions or desires. The role of the charioteer is to make the horses work together as a team. Plato thinks that the soul should be a composite of our passions and our reason, but he also makes it clear that harmony is to be found under the supremacy of reason.

In the eighteenth century, David Hume argued that this picture of a struggle between reason and the passions is misleading. Reason on its own, he thought, cannot influence the will. Reason is, he famously wrote, “the slave of the passions.”

The info is here.

Monday, September 9, 2019

Why Some Christians ‘Love the Meanest Parts’ of Trump

Emma Green
The Atlantic
Originally posted August 18, 2019

Ben Howe is angry at evangelicals. As he describes it, he is angry that they didn’t just vote for Donald Trump in record numbers, but repeatedly provide moral cover for his outrageous failings. He is angry that leaders of the religious right, who long claimed to be the champions of American morality, appear to have gladly traded their values for power. He is angry that Christians claim they support the president because they want to end abortion or protect religious liberty, when supporting Trump suggests that what they really want is a champion who will mock and crush their perceived enemies.

To redeem themselves, Howe believes, evangelicals have to give up their take-no-prisoners culture war.

This is the story Howe, a writer and pundit, tells in his new book, The Immoral Majority—the title aptly riffs on the Moral Majority, the 1980s-era Christian political machine created by the influential pastor Jerry Falwell. Right-wing Christianity is Howe’s native territory: He grew up attending Falwell’s church in Virginia, Thomas Road Baptist Church, down the street from Liberty University, where Howe’s father, a Southern Baptist pastor, taught classes. In other years, Howe’s family attended First Baptist Church in Dallas, which is now pastored by one of Trump’s most vocal supporters, Robert Jeffress. After being raised in the bosom of the religious right, Howe went on to become a filmmaker, a Tea Party activist, and a blogger for the conservative website RedState, where he spent a not insignificant portion of his time trolling progressives. He was later fired from that website, along with other writers, because of his vocally anti-Trump views, he claims. (Rosie Gray wrote about the purge for The Atlantic in the spring of 2018.)

The interview is here.

Monday, August 19, 2019

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists, primatologists, economists, sociologists, anthropologists, and political scientists.

The chapter can be found here.