Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral judgment.

Friday, October 25, 2019

Deciding Versus Reacting: Conceptions of Moral Judgment and the Reason-Affect Debate

Monin, B., Pizarro, D. A., & Beer, J. S. (2007).
Review of General Psychology, 11(2), 99–111.
https://doi.org/10.1037/1089-2680.11.2.99

Abstract

Recent approaches to moral judgment have typically pitted emotion against reason. In an effort to move beyond this debate, we propose that authors presenting diverging models are considering quite different prototypical situations: those focusing on the resolution of complex dilemmas conclude that morality involves sophisticated reasoning, whereas those studying reactions to shocking moral violations find that morality involves quick, affect-laden processes. We articulate these diverging dominant approaches and consider three directions for future research (moral temptation, moral self-image, and lay understandings of morality) that we propose have not received sufficient attention as a result of the focus on these two prototypical situations within moral psychology.

Concluding Thoughts

Recent theorizing on the psychology of moral decision making has pitted deliberative reasoning against quick affect-laden intuitions. In this article, we propose a resolution to this tension by arguing that it results from a choice of different prototypical situations: advocates of the reasoning approach have focused on sophisticated dilemmas, whereas advocates of the intuition/emotion approach have focused on reactions to other people’s moral infractions. Arbitrarily choosing one or the other as the typical moral situation has a significant impact on one’s characterization of moral judgment.

Monday, October 21, 2019

Moral Judgment as Categorization

Cillian McHugh, and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation, while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people, etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill-formation account of moral categorization, the learning and habitualization of moral categories occur as part of goal-directed activity and are sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity between categorization principles and the processes of moral judgment. Using a categorization framework, we provide an overview of moral category formation as a basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account of the emergence of moral intuitions predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow for the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is also consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).

The research is here.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., and others (2015)
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.

Conclusion

Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Monday, October 14, 2019

Principles of karmic accounting: How our intuitive moral sense balances rights and wrongs

Samuel Johnson and Jaye Ahn
PsyArXiv
Originally posted September 10, 2019

Abstract

We are all saints and sinners: Some of our actions benefit other people, while other actions harm people. How do people balance moral rights against moral wrongs when evaluating others’ actions? Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. Although good acts can partly offset bad acts—consistent with utilitarianism—they do so incompletely and in a manner relatively insensitive to magnitude, but sensitive to temporal order and the match between who is helped and harmed. Inferences about personal moral character best predicted blame judgments, explaining variance across items and across participants. However, there was modest evidence for both deontological and utilitarian processes too. These findings contribute to conversations about moral psychology and person perception, and may have policy implications.

General Discussion

These studies begin to map out the principles governing how the mind combines rights and wrongs to form summary judgments of blameworthiness. Moreover, these principles are explained by inferences about character, which also explain differences across scenarios and participants. These results overall buttress person-based accounts of morality (Uhlmann et al., 2014), according to which morality serves primarily to identify and track individuals likely to be cooperative and trustworthy social partners in the future.

These results also have implications for moral psychology beyond third-party judgments. Moral behavior is motivated largely by its expected reputational consequences; thus, studying the psychology of third-party reputational judgments is key for understanding people’s behavior when they have opportunities to perform licensing or offsetting acts. For example, theories of moral self-licensing (Merritt et al., 2010) disagree over whether licensing occurs due to moral credits (i.e., having done good, one can now “spend” the moral credit on a harm) versus moral credentials (i.e., having done good, later bad acts are reframed as less blameworthy).

The research is here.

Tuesday, February 26, 2019

The Role of Emotion Regulation in Moral Judgment

Helion, C. & Ochsner, K.N.
Neuroethics (2018) 11: 297.
https://doi.org/10.1007/s12152-016-9261-z

Abstract

Moral judgment has typically been characterized as a conflict between emotion and reason. In recent years, a central concern has been determining which process is the chief contributor to moral behavior. While classic moral theorists claimed that moral evaluations stem from consciously controlled cognitive processes, recent research indicates that affective processes may be driving moral behavior. Here, we propose a new way of thinking about emotion within the context of moral judgment, one in which affect is generated and transformed by both automatic and controlled processes, and moral evaluations are shifted accordingly. We begin with a review of how existing theories in psychology and neuroscience address the interaction between emotion and cognition, and how these theories may inform the study of moral judgment. We then describe how brain regions involved in both affective processing and moral judgment overlap and may make distinct contributions to the moral evaluation process. Finally, we discuss how this way of thinking about emotion can be reconciled with current theories in moral psychology before mapping out future directions in the study of moral behavior.

Here is an excerpt:

Individuals may up- or down- regulate their automatic emotional responses to moral stimuli in a way that encourages goal-consistent behavior. For example, individuals may down-regulate their disgust when evaluating dilemmas in which disgusting acts occurred but no one was harmed, or they may up-regulate anger when engaging in punishment or assigning blame. To observe this effect in the wild, one need go no further than the modern political arena. Someone who is politically liberal may be as disgusted by the thought of two men kissing as someone who is politically conservative, but may choose to down-regulate their response so that it is more in line with their political views [44]. They can do this in multiple ways, including reframing the situation as one about equality and fairness, construing the act as one of love and affection, or manipulating personal relevance by thinking about homosexual individuals whom the person knows. This affective transformation would rely on controlled emotional processes that shape the initial automatically elicited emotion (disgust) into a very different emotion (tolerance or acceptance). This process requires motivation, recognition (conscious or non-conscious) that one is experiencing an emotion that is in conflict with one’s goals and ideals, and a reconstruction of the situation and one’s emotions in order to come to a moral resolution. Comparatively, political conservatives may be less motivated to do so, and may instead up-regulate their disgust response so that their moral judgment is in line with their overarching goals. In contrast, the opposite regulatory pattern may occur (such that liberals up-regulate emotion and conservatives down-regulate emotion) when considering issues like the death penalty or gun control.

Saturday, February 23, 2019

The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017

Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen
Personality and Social Psychology Review, 1–35

Abstract

We review empirical research on (social) psychology of morality to identify which issues and relations are well documented by existing data and which areas of inquiry are in need of further empirical evidence. An electronic literature search yielded a total of 1,278 relevant research articles published from 1940 through 2017. These were subjected to expert content analysis and standardized bibliometric analysis to classify research questions and relate these to (trends in) empirical approaches that characterize research on morality. We categorize the research questions addressed in this literature into five different themes and consider how empirical approaches within each of these themes have addressed psychological antecedents and implications of moral behavior. We conclude that some key features of theoretical questions relating to human morality are not systematically captured in empirical research and are in need of further investigation.

Here is a portion of the article:

In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.

The review article can be downloaded here.

Thursday, February 7, 2019

Do People Believe That They Are More Deontological Than Others?

Ming-Hui Li and Li-Lin Rao
Personality and Social Psychology Bulletin
First published January 20, 2019

Abstract

The question of how we decide that someone else has done something wrong is at the heart of moral psychology. Little work has been done to investigate whether people believe that others’ moral judgment differs from their own in moral dilemmas. We conducted four experiments using various measures and diverse samples to demonstrate the self–other discrepancy in moral judgment. We found that (a) people were more deontological when they made moral judgments themselves than when they judged a stranger (Studies 1-4) and (b) a protected values (PVs) account outperformed an emotion account and a construal-level theory account in explaining this self–other discrepancy (Studies 3 and 4). We argued that the self–other discrepancy in moral judgment may serve as a protective mechanism co-evolving alongside the social exchange mechanism and may contribute to better understanding the obstacles preventing people from cooperating.

The research is here.

Thursday, November 15, 2018

Expectations Bias Moral Evaluations

Derek Powell & Zachary Horne
PsyArXiv
Originally posted September 13, 2018

Abstract

People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.

The research/preprint is here.

Monday, November 12, 2018

Optimality bias in moral judgment

Julian De Freitas and Samuel G. B. Johnson
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 149-163

Abstract

We often make decisions with incomplete knowledge of their consequences. Might people nonetheless expect others to make optimal choices, despite this ignorance? Here, we show that people are sensitive to moral optimality: that people hold moral agents accountable depending on whether they make optimal choices, even when there is no way that the agent could know which choice was optimal. This result held up whether the outcome was positive, negative, inevitable, or unknown, and across within-subjects and between-subjects designs. Participants consistently distinguished between optimal and suboptimal choices, but not between suboptimal choices of varying quality — a signature pattern of the Efficiency Principle found in other areas of cognition. A mediation analysis revealed that the optimality effect occurs because people find suboptimal choices more difficult to explain and assign harsher blame accordingly, while moderation analyses found that the effect does not depend on tacit inferences about the agent's knowledge or negligence. We argue that this moral optimality bias operates largely out of awareness, reflects broader tendencies in how humans understand one another's behavior, and has real-world implications.

The research is here.

Wednesday, June 8, 2016

Are You Morally Modified?: The Moral Effects of Widely Used Pharmaceuticals

Neil Levy, Thomas Douglas, Guy Kahane, Sylvia Terbeck, Philip J. Cowen, Miles Hewstone, and Julian Savulescu
Philos Psychiatr Psychol. 2014 June 1; 21(2): 111–125.
doi:10.1353/ppp.2014.0023.

Abstract

A number of concerns have been raised about the possible future use of pharmaceuticals designed to enhance cognitive, affective, and motivational processes, particularly where the aim is to produce morally better decisions or behavior. In this article, we draw attention to what is arguably a more worrying possibility: that pharmaceuticals currently in widespread therapeutic use are already having unintended effects on these processes, and thus on moral decision making and morally significant behavior. We review current evidence on the moral effects of three widely used drugs or drug types: (i) propranolol, (ii) selective serotonin reuptake inhibitors, and (iii) drugs that affect oxytocin physiology. This evidence suggests that the alterations to moral decision making and behavior caused by these agents may have important and difficult-to-evaluate consequences, at least at the population level. We argue that the moral effects of these and other widely used pharmaceuticals warrant further empirical research and ethical analysis.

The paper is here.

Tuesday, April 12, 2016

Most People Think Watching Porn Is Morally Wrong

By Emma Green
The Atlantic
Originally posted March 6, 2016

Here is an excerpt:

Recent debates about the porn industry haven't seemed to take this ambivalence into account. A Duke University freshman starred in hardcore porn videos and took to the blogs to defend her right to do so. Editorials about Britain's new Internet porn filter have focused on the government's right to regulate the web. Both of these are compelling and understandable points of concern, but they hinge on this issue of rights: The right to voluntarily work in the erotica industry without harassment, the right to enjoy sex work, the right to watch porn without interrogation from your government.

These are all valid issues. But even if 18-year-olds are free to make sex tapes and middle-aged men are free to watch them without Big Brother's scrutiny, there is a lingering moral question: Is watching porn a good thing to do?

The article is here.

Note: Some of the statistics in this article are fascinating.

Saturday, August 15, 2015

Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians

Ravi Iyer, Spassena Koleva, Jesse Graham, Peter Ditto, Jonathan Haidt
PLOS ONE
Published: August 21, 2012
DOI: 10.1371/journal.pone.0042366

Abstract

Libertarians are an increasingly prominent ideological group in U.S. politics, yet they have been largely unstudied. Across 16 measures in a large web-based sample that included 11,994 self-identified libertarians, we sought to understand the moral and psychological characteristics of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences. Our findings add to a growing recognition of the role of personality differences in the organization of political attitudes.

The entire article is here.

Friday, May 8, 2015

TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing

Jeurissen D, Sack AT, Roebroeck A, Russ BE and Pascual-Leone A (2014) TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing.
Front. Neurosci. 8:18. doi: 10.3389/fnins.2014.00018

Decision-making involves a complex interplay of emotional responses and reasoning processes. In this study, we use TMS to explore the neurobiological substrates of moral decisions in humans. To examine the effects of TMS on the outcome of a moral decision, we compare the decision outcomes of moral-personal and moral-impersonal dilemmas and examine the differential effects of applying TMS over the right DLPFC or right TPJ. In this comparison, we find that TMS-induced disruption of the DLPFC during the decision process affects the outcome of moral-personal judgments, while TMS-induced disruption of the TPJ affects only moral-impersonal conditions. In other words, we find a double dissociation between the DLPFC and TPJ in the outcome of a moral decision. Furthermore, we find that TMS-induced disruption of the DLPFC during non-moral, moral-impersonal, and moral-personal decisions leads to lower ratings of regret about the decision. Our results are in line with dual-process theory and suggest a role for both the emotional response and the cognitive reasoning process in moral judgment. Both the emotional and cognitive processes were shown to be involved in the decision outcome.

The entire article is here.

Friday, April 24, 2015

Gender Differences in Responses to Moral Dilemmas

By Rebecca Friesdorf, Paul Conway, and Bertram Gawronski
Pers Soc Psychol Bull April 3, 2015

Abstract

The principle of deontology states that the morality of an action depends on its consistency with moral norms; the principle of utilitarianism implies that the morality of an action depends on its consequences. Previous research suggests that deontological judgments are shaped by affective processes, whereas utilitarian judgments are guided by cognitive processes. The current research used process dissociation (PD) to independently assess deontological and utilitarian inclinations in women and men. A meta-analytic re-analysis of 40 studies with 6,100 participants indicated that men showed a stronger preference for utilitarian over deontological judgments than women when the two principles implied conflicting decisions (d = 0.52). PD further revealed that women exhibited stronger deontological inclinations than men (d = 0.57), while men exhibited only slightly stronger utilitarian inclinations than women (d = 0.10). The findings suggest that gender differences in moral dilemma judgments are due to differences in affective responses to harm rather than cognitive evaluations of outcomes.

The entire article is here.

Wednesday, March 25, 2015

Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents

By B.F. Malle, M. Scheutz, T. Arnold, J. Voiklis, and C. Cusimano
HRI '15, March 02 - 05 2015

Abstract

Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people’s moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a “utilitarian” choice), and they were blamed more than their human counterparts when they did not make that choice.  Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.

The entire article is here.

Saturday, March 21, 2015

How foreign language shapes moral judgment

By J. Geipel, C. Hadjichristidis, and L. Surian
Journal of Experimental Social Psychology
Volume 59, July 2015, Pages 8–17

Abstract

We investigated whether and how processing information in a foreign language as opposed to the native language affects moral judgments. Participants judged the moral wrongness of several private actions, such as consensual incest, that were depicted as harmless and presented in either the native or a foreign language. The use of a foreign language promoted less severe moral judgments and less confidence in them. Harmful and harmless social norm violations, such as saying a white lie to get a reduced fare, were also judged more leniently. The results do not support explanations based on facilitated deliberation, misunderstanding, or the adoption of a universalistic stance. We propose that the influence of foreign language is best explained by a reduced activation of social and moral norms when making moral judgments.

Highlights

  • We investigated whether and how foreign language influences moral judgment.
  • Foreign language prompted more lenient judgments for moral transgressions.
  • Foreign language reduced confidence in people's moral evaluations.
  • Violations of everyday norms were judged less harshly in a foreign language.
  • Foreign language might act through a reduced activation of social and moral norms.

Wednesday, March 4, 2015

‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good

By Guy Kahane, Jim A.C. Everett, Brian Earp, Miguel Farias, and Julian Savulescu
Cognition
Volume 134, January 2015, Pages 193–209

Abstract

A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.

Highlights

• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.

The entire article is here.

Sunday, March 1, 2015

Online processing of moral transgressions: ERP evidence for spontaneous evaluation

Hartmut Leuthold, Angelika Kunkel, Ian G. Mackenzie and Ruth Filik
Soc Cogn Affect Neurosci (2015)
doi: 10.1093/scan/nsu151

Abstract

Experimental studies using fictional moral dilemmas indicate that both automatic emotional processes and controlled cognitive processes contribute to moral judgments. However, not much is known about how people process socio-normative violations that are more common to their everyday life nor the time-course of these processes. Thus, we recorded participants’ electrical brain activity while they were reading vignettes that either contained morally acceptable vs unacceptable information or text materials that contained information which was either consistent or inconsistent with their general world knowledge. A first event-related brain potential (ERP) positivity peaking at ∼200 ms after critical word onset (P200) was larger when this word involved a socio-normative or knowledge-based violation. Subsequently, knowledge-inconsistent words triggered a larger centroparietal ERP negativity at ∼320 ms (N400), indicating an influence on meaning construction. In contrast, a larger ERP positivity (larger late positivity), which also started at ∼320 ms after critical word onset, was elicited by morally unacceptable compared with acceptable words. We take this ERP positivity to reflect an implicit evaluative (good–bad) categorization process that is engaged during the online processing of moral transgressions.

The article is here.

Tuesday, February 24, 2015

The Importance of Moral Construal

Moral versus Non-Moral Construal Elicits Faster, More Extreme, Universal Evaluations of the Same Actions

By Jay J. Van Bavel, Dominic J. Packer, Ingrid J. Haas, and William A. Cunningham
PLoS ONE 7(11): e48693. doi:10.1371/journal.pone.0048693

Abstract

Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions—the belief that absolutely nobody or everybody should engage in an action—than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggests that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.

The entire article is here.

Wednesday, February 18, 2015

Moral Judgment as a Natural Kind

By Victor Kumar
Forthcoming in Philosophical Studies

Moral judgments seem to be different from other normative judgments, even apart from their characteristic subject matter. Two people might both disapprove of an action, for example, although one judges it a moral violation and the other a breach of etiquette. Philosophers have traditionally attempted to define moral judgment through reflection alone. However, psychological research on the “moral/conventional distinction” offers a promising source of empirical evidence about the distinctive nature of moral judgment.

Several authors treat the ability to draw a distinction between morality and convention as a test for the presence of moral judgments (Blair 1995; Nichols 2004a; Prinz 2007; Levy 2007). None, however, develops the implied theory of moral judgment.

The entire article is here.