Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, November 20, 2021

Narrative media’s emphasis on distinct moral intuitions alters early adolescents’ judgments

Hahn, L., et al. (2021).
Journal of Media Psychology: Theories, Methods, and Applications. Advance online publication.

Abstract

Logic from the model of intuitive morality and exemplars (MIME) suggests that narrative media emphasizing moral intuitions can increase the salience of those intuitions in audiences. To date, support for this logic has been limited to adults. Across two studies, the present research tested MIME predictions in early adolescents (ages 10–14). The salience of care, fairness, loyalty, and authority intuitions was manipulated in a pilot study with verbal prompts (N = 87) and in the main study with a comic book (N = 107). In both studies, intuition salience was measured after induction. The pilot study demonstrated that exposure to verbal prompts emphasizing care, fairness, and loyalty increased the salience of their respective intuitions. The main study showed that exposure to comic books emphasizing all four separate intuitions increased salience of their respective intuitions in early adolescents. Results are discussed in terms of relevance for the MIME and understanding narrative media’s influence on children’s moral judgments. 

Conclusion

Moral education is often at the forefront of parents’ concern for their children’s well-being. Although there is value in directly teaching children moral principles through instruction about what to do or not do, our results support an indirect approach to socializing children’s morality (Haidt & Bjorklund, 2008). This first step in exploring narrative media’s ability to activate moral intuitions in young audiences should be accompanied by additional work examining how “direct route” lessons, such as those contained in the Ten Commandments, may complement narrative media’s impact on children’s morality.

Our studies provide evidence supporting the MIME’s predictions about narrative content’s influence on moral intuition salience. Future research should build on these findings to examine whether this elevated intuition salience can influence broader values, judgments, and behaviors in children. Such examinations should be especially important for researchers interested in both the mechanism responsible for media’s influence and the extent of media’s impact on malleable, developing children, who may be socialized by media content.


Thursday, February 18, 2021

Intuitive Expertise in Moral Judgements.

Wiegmann, A., & Horvath, J. 
(2020, December 22). 

Abstract

According to the ‘expertise defence’, experimental findings which suggest that intuitive judgements about hypothetical cases are influenced by philosophically irrelevant factors do not undermine their evidential use in (moral) philosophy. This defence assumes that philosophical experts are unlikely to be influenced by irrelevant factors. We discuss relevant findings from experimental metaphilosophy that largely tell against this assumption. To advance the debate, we present the most comprehensive experimental study of intuitive expertise in ethics to date, which tests five well-known biases of judgement and decision-making among expert ethicists and laypeople. We found that even expert ethicists are affected by some of these biases, but also that they enjoy a slight advantage over laypeople in some cases. We discuss the implications of these results for the expertise defence, and conclude that they still do not support the defence as it is typically presented in (moral) philosophy.

Conclusion

We first considered the experimental restrictionist challenge to intuitions about cases, with a special focus on moral philosophy, and then introduced the expertise defence as the most popular reply. The expertise defence makes the empirically testable assumption that the case intuitions of expert philosophers are significantly less influenced by philosophically irrelevant factors than those of laypeople. The upshot of our discussion of relevant findings from experimental metaphilosophy was twofold: first, extant findings largely tell against the expertise defence, and second, the number of published studies and investigated biases is still fairly small. To advance the debate about the expertise defence in moral philosophy, we thus tested five well-known biases of judgement and decision-making among expert ethicists and laypeople. Averaged across all biases and scenarios, the intuitive judgements of both experts and laypeople were clearly susceptible to bias. However, moral philosophers were also less biased in two of the five cases (Focus and Prospect), although we found no significant expert-lay differences in the remaining three cases.

In comparison to previous findings (for example Schwitzgebel and Cushman [2012, 2015]; Wiegmann et al. [2020]), our results appear to be relatively good news for the expertise defence, because they suggest that moral philosophers are less influenced by some morally irrelevant factors, such as a simple saving/killing framing. On the other hand, our study does not support the very general armchair versions of the expertise defence that one often finds in metaphilosophy, which try to reassure (moral) philosophers that they need not worry about the influence of philosophically irrelevant factors. At best, however, we need not worry about just a few cases and a few human biases—and even that modest hypothesis can only be upheld on the basis of sufficient empirical research.

Sunday, August 9, 2020

The Extended Moral Foundations Dictionary (eMFD): Development and Applications

Hopp, F. R., Fisher, J. T., Cornell, D., Huskey, R., & Weber, R. (2020, June 12).
https://doi.org/10.3758/s13428-020-01433-0

Abstract

Moral intuitions are a central motivator in human behavior. Recent work highlights the importance of moral intuitions for understanding a wide range of issues, from online radicalization to vaccine hesitancy. Extracting and analyzing moral content in messages, narratives, and other forms of public discourse is a critical step toward understanding how the psychological influence of moral judgments unfolds at a global scale. Extant approaches for extracting moral content are limited in their ability to capture the intuitive nature of moral sensibilities, constraining their usefulness for understanding and predicting human moral behavior. Here we introduce the extended Moral Foundations Dictionary (eMFD), a dictionary-based tool for extracting moral content from textual corpora. The eMFD, unlike previous methods, is constructed from text annotations generated by a large sample of human coders. We demonstrate that the eMFD outperforms existing approaches in a variety of domains. We anticipate that the eMFD will contribute to advancing the study of moral intuitions and their influence on social, psychological, and communicative processes.

From the Discussion:

In a series of theoretically informed dictionary validation procedures, we demonstrated the eMFD’s increased utility compared to previous moral dictionaries. First, we showed that the eMFD more accurately predicts the presence of morally relevant article topics compared to previous dictionaries. Second, we showed that the eMFD more effectively detects distinctions between the moral language used by partisan news organizations. Word scores returned by the eMFD confirm that conservative sources place greater emphasis on the binding moral foundations of loyalty, authority, and sanctity, whereas more liberal-leaning sources tend to stress the individualizing foundations of care and fairness, supporting previous research on moral partisan news framing (Fulgoni et al., 2016). Third, we demonstrated that the eMFD more accurately predicts the share counts of morally loaded online newspaper articles. The eMFD produced a better model fit and explained more variance in overall share counts compared to previous approaches. Finally, we demonstrated the eMFD’s utility for linking moral actions to their respective moral agents and targets.
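The core mechanic of a dictionary-based tool like the eMFD—summing per-word foundation weights across a text and normalizing by length—can be sketched in a few lines of Python. The mini-dictionary and weights below are invented purely for illustration; the actual eMFD assigns annotation-derived probabilities to thousands of words.

```python
# Illustrative sketch of dictionary-based moral-content scoring.
# The word-to-foundation weights are hypothetical, not the eMFD's.

MINI_MFD = {
    "protect": {"care": 0.8, "authority": 0.1},
    "cheat":   {"fairness": 0.9},
    "betray":  {"loyalty": 0.85},
    "obey":    {"authority": 0.7},
}

def score_text(text, dictionary=MINI_MFD):
    """Sum per-foundation word weights, normalized by token count."""
    tokens = text.lower().split()
    totals = {}
    for tok in tokens:
        for foundation, weight in dictionary.get(tok, {}).items():
            totals[foundation] = totals.get(foundation, 0.0) + weight
    return {f: round(s / len(tokens), 3) for f, s in totals.items()}

scores = score_text("never betray or cheat those you protect")
```

A real pipeline would add lemmatization, syntactic parsing to attach scores to agents and targets (as in the final validation step above), and a far larger vocabulary; the normalization step is what lets documents of different lengths be compared.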

Wednesday, February 12, 2020

Empirical Work in Moral Psychology

Joshua May
Routledge Encyclopedia of Philosophy
Taylor and Francis
Originally published in 2017

Abstract

How do we form our moral judgments, and how do they influence behaviour? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as an additional tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.


Saturday, November 9, 2019

Debunking (the) Retribution (Gap)

Steven R. Kraaijeveld
Science and Engineering Ethics
https://doi.org/10.1007/s11948-019-00148-6

Abstract

Robotization is an increasingly pervasive feature of our lives. Robots with high degrees of autonomy may cause harm, yet in sufficiently complex systems neither the robots nor the human developers may be candidates for moral blame. John Danaher has recently argued that this may lead to a retribution gap, where the human desire for retribution faces a lack of appropriate subjects for retributive blame. The potential social and moral implications of a retribution gap are considerable. I argue that the retributive intuitions that feed into retribution gaps are best understood as deontological intuitions. I apply a debunking argument for deontological intuitions in order to show that retributive intuitions cannot be used to justify retributive punishment in cases of robot harm without clear candidates for blame. The fundamental moral question thus becomes what we ought to do with these retributive intuitions, given that they do not justify retribution. I draw a parallel from recent work on implicit biases to make a case for taking moral responsibility for retributive intuitions. In the same way that we can exert some form of control over our unwanted implicit biases, we can and should do so for unjustified retributive intuitions in cases of robot harm.

Monday, October 21, 2019

Moral Judgment as Categorization

Cillian McHugh, and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation, while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people, etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill-formation account of moral categorization, the learning and habitualization of moral categories occur as part of goal-directed activity and are sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity between categorization principles and the processes of moral judgment. Using a categorization framework, we provide an overview of moral category formation as the basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is also consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).


Monday, August 19, 2019

The evolution of moral cognition

Leda Cosmides, Ricardo Guzmán, and John Tooby
The Routledge Handbook of Moral Epistemology - Chapter 9

1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions.

This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists,  primatologists, economists, sociologists, anthropologists, and political scientists.


Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition in this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm, or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.


Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Wednesday, April 3, 2019

Feeling Good: Integrating the Psychology and Epistemology of Moral Intuition and Emotion

Hossein Dabbagh
Journal of Cognition and Neuroethics 5 (3): 1–30.

Abstract

Is the epistemology of moral intuitions compatible with admitting a role for emotion? I argue in this paper that moral intuitions and emotions can be partners without creating an epistemic threat. I start off by offering some empirical findings to weaken Singer’s (and Greene’s and Haidt’s) debunking argument against moral intuition, which treats emotions as a distorting factor. In the second part of the paper, I argue that the standard contrast between intuition and emotion is a mistake. Moral intuitions and emotions are not contestants if we construe moral intuition as non-doxastic intellectual seeming and emotion as a non-doxastic perceptual-like state. This will show that emotions support, rather than distort, the epistemic standing of moral intuitions.

Here is an excerpt:

However, the cognitive sciences, as I argued above, show us that seeing all emotions in this excessively pessimistic way is not plausible. To think of emotional experience as always being a source of epistemic distortion would be wrong. On the contrary, there are reasons to believe that emotional experiences can sometimes make a positive contribution to our activities in practical rationality. So there is a possibility that some emotions are not distorting factors. If this is right, we are no longer justified in saying that emotions always distort our epistemic activities. Instead, emotions (construed as quasi-perceptual experiences) might have some cognitive elements assessable for rationality.


Wednesday, August 15, 2018

Four Rules for Learning How to Talk To Each Other Again

Jason Pontin
www.wired.com
Originally posted

Here is an excerpt:

Here’s how to speak in a polity where we loathe each other. Let this be the Law of Parsimonious Claims:

1. Say nothing you know to be untrue, whether to deceive, confuse, or, worst of all, encourage a wearied cynicism.

2. Make mostly falsifiable assertions or offer prescriptions whose outcomes could be measured, always explaining how your assertion or prescription could be tested.

3. Whereof you have no evidence but possess only moral intuitions, say so candidly, and accept you must coexist with people who have different intuitions.

4. When evidence proves you wrong, admit it cheerfully, pleased that your mistake has contributed to the general progress.

Finally, as you listen, assume the good faith of your opponents, unless you have proof otherwise. Judge their assertions and prescriptions based on the plain meaning of their words, rather than on what you guess to be their motives. Often, people will tell you about experiences they found significant. If they are earnest, hear them sympathetically.
