Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Dual Process.

Sunday, October 23, 2022

Advancing theorizing about fast-and-slow thinking

De Neys, W. (2022). Advancing theorizing about fast-and-slow thinking.
Behavioral and Brain Sciences, 1-68.
doi:10.1017/S0140525X2200142X

Abstract

Human reasoning is often conceived as an interplay between a more intuitive and a more deliberate thought process. In the last 50 years, influential fast-and-slow dual process models that capitalize on this distinction have been used to account for numerous phenomena, from logical reasoning biases and prosocial behavior to moral decision-making. The present paper clarifies that, despite their popularity, critical assumptions of these models are poorly conceived. My critique focuses on two interconnected foundational issues: the exclusivity and switch features. The exclusivity feature refers to the tendency to conceive of intuition and deliberation as generating unique responses, such that one type of response is assumed to be beyond the capability of the fast-intuitive processing mode. I review the empirical evidence in key fields and show that there is no solid ground for such exclusivity. The switch feature concerns the mechanism by which a reasoner decides to shift between more intuitive and more deliberate processing. I present an overview of leading switch accounts and show that they are conceptually problematic, precisely because they presuppose exclusivity. I build on these insights to sketch the groundwork for a more viable dual process architecture and illustrate how it can set a new research agenda to advance the field in the coming years.

Conclusion

In the last 50 years, dual process models of thinking have moved to center stage in research on human reasoning. These models have been instrumental in the initial exploration of human thinking in the cognitive sciences and related fields (Chater, 2018; De Neys, 2021). However, it is time to rethink foundational assumptions. Traditional dual process models have typically conceived intuition and deliberation as generating unique responses, such that one type of response is exclusively tied to deliberation and is assumed to be beyond the reach of the intuitive system. I reviewed empirical evidence from key dual process applications that argues against this exclusivity feature. I also showed how exclusivity leads to conceptual complications when trying to explain how a reasoner switches between intuitive and deliberate reasoning. To avoid these complications, I sketched an elementary non-exclusive working model in which it is the activation strength of competing intuitions within System 1 that determines System 2 engagement.

It will be clear that the working model is a starting point that will need to be further developed and specified. However, by avoiding the conceptual paradoxes that plague the traditional model, it presents a more viable basic architecture that can serve as theoretical groundwork for future dual process models in various fields. In addition, it should at the very least force dual process theorists to specify more explicitly how they address the switch issue. In the absence of such specification, dual process models might continue to provide an appealing narrative but will do little to advance our understanding of the interaction between intuitive and deliberate (fast and slow) thinking. It is in this sense that I hope the present paper can help to sketch the building blocks of a more judicious dual process future.
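The switch mechanism in this working model lends itself to a simple computational illustration. Below is a minimal sketch, in Python, of the non-exclusive architecture described above: System 1 generates several competing intuitions in parallel, and System 2 is engaged only when the activation strengths of the two strongest intuitions are too close to call. The activation values, threshold, and response labels are illustrative assumptions on my part, not De Neys's implementation.

```python
# Toy sketch of a non-exclusive dual process switch: deliberation is
# triggered by conflict between intuitions, not by a response type that
# intuition supposedly cannot produce. All numbers are made up.

def system1_intuitions():
    """System 1 output: candidate responses with activation strengths."""
    return {"heuristic_response": 0.8, "logical_response": 0.7}

def needs_deliberation(intuitions, threshold=0.2):
    """Engage System 2 when the two strongest intuitions are close in strength."""
    strengths = sorted(intuitions.values(), reverse=True)
    return (strengths[0] - strengths[1]) < threshold

def respond(intuitions, threshold=0.2):
    if needs_deliberation(intuitions, threshold):
        return "System 2 engaged: deliberate to settle the conflict"
    # Otherwise answer with the dominant intuition, fast and effortlessly.
    return max(intuitions, key=intuitions.get)

print(respond(system1_intuitions()))  # small strength gap -> deliberation
```

Note how exclusivity is dropped: a "logical" response is available within System 1 itself, and deliberation is recruited by the relative strength of competing intuitions rather than by the kind of answer required.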

Saturday, October 9, 2021

Nudgeability: Mapping Conditions of Susceptibility to Nudge Influence

de Ridder, D., Kroese, F., & van Gestel, L. (2021). 
Perspectives on Psychological Science
Advance online publication. 
https://doi.org/10.1177/1745691621995183

Abstract

Nudges are behavioral interventions to subtly steer citizens' choices toward "desirable" options. An important topic of debate concerns the legitimacy of nudging as a policy instrument, and there is a focus on issues relating to nudge transparency, the role of preexisting preferences people may have, and the premise that nudges primarily affect people when they are in "irrational" modes of thinking. Empirical insights into how these factors affect the extent to which people are susceptible to nudge influence (i.e., "nudgeable") are lacking in the debate. This article introduces the new concept of nudgeability and makes a first attempt to synthesize the evidence on when people are responsive to nudges. We find that nudge effects do not hinge on transparency or modes of thinking but that personal preferences moderate effects such that people cannot be nudged into something they do not want. We conclude that, in view of these findings, concerns about nudging legitimacy should be softened and that future research should attend to these and other conditions of nudgeability.

From the General Discussion

Finally, returning to the debates on nudging legitimacy that we addressed at the beginning of this article, it seems that concerns should be softened insofar as nudges do not impose choices that violate basic ethical requirements for good public policy. More than a decade ago, philosopher Luc Bovens (2009) formulated the following four principles for nudging to be legitimate: a nudge should allow people to act in line with their overall preferences; a nudge should not induce a change in preferences that would not hold under nonnudge conditions; a nudge should not lead to “infantilization,” such that people are no longer capable of making autonomous decisions; and a nudge should be transparent so that people have control over being in a nudge situation. With the findings from our review in mind, it seems that these legitimacy requirements are fulfilled: nudges do allow people to act in line with their overall preferences, nudges allow for making autonomous decisions insofar as nudge effects do not depend on being in a System 1 mode of thinking, and making the nudge transparent does not compromise nudge effects.

Wednesday, June 10, 2020

Metacognition in moral decisions: judgment extremity and feeling of rightness in moral intuitions

Solange Vega and others
Thinking & Reasoning

This research investigated the metacognitive underpinnings of moral judgment. Participants in two studies were asked to provide quick intuitive responses to moral dilemmas and to indicate their feeling of rightness about those responses. Afterwards, participants were given extra time to rethink their responses, and change them if they so wished. The feeling of rightness associated with the initial judgments was predictive of whether participants chose to change their responses and how long they spent rethinking them. Thus, one’s metacognitive experience upon first coming up with a moral judgment influences whether one sticks to that initial gut feeling or decides to put more thought into it and revise it. Moreover, while the type of moral judgment (i.e., deontological vs. utilitarian) was not consistently predictive of metacognitive experience, the extremity of that judgment was: Extreme judgments (either deontological or utilitarian) were quicker and felt more right than moderate judgments.

From the General Discussion

Also consistent with Bago and De Neys’ (2018) findings, these results show that few people revise their responses from one type of moral judgment to the other (i.e., from deontological to utilitarian, or vice versa). Still, many people do revise their responses, though these are subtler revisions of extremity within one type of response. These results speak against the traditional corrective model, whereby people tend to change from deontological intuitions to utilitarian deliberations in the course of making moral judgments. At the same time, they suggest a more nuanced perspective than what one might conclude from Bago and De Neys’ results that few people revise their responses. In sum, few people revise the kind of response they give, but many do revise the degree to which they defend a certain moral position.

The research is here.

Wednesday, November 13, 2019

Dynamic Moral Judgments and Emotions

Magda Osman
Published Online June 2015 in SciRes.
http://www.scirp.org/journal/psych

Abstract

We may experience strong moral outrage when we read a news headline that describes a prohibited action, but when we gain additional information by reading the main news story, do our emotional experiences change at all, and if they do, in what way do they change? In a single online study with 80 participants, the aim was to examine the extent to which emotional experiences (disgust, anger) and moral judgments track changes in information about a moral scenario. The evidence from the present study suggests that we systematically adjust our moral judgments and our emotional experiences as a result of exposure to further information about the morally dubious action referred to in a moral scenario. More specifically, the way in which we adjust our moral judgments and emotions appears to be based on information signalling whether a morally dubious act is permitted or prohibited.

From the Discussion

The present study showed that moral judgments changed in response to different details concerning the moral scenarios, and while participants gave the most severe judgments for the initial limited information regarding the scenario (i.e., the headline), they adjusted the severity of their judgments downwards as more information was provided (i.e., main story, conclusion). In other words, when context was provided for why a morally dubious action was carried out, people used this to inform their later judgments and consciously integrated this new information into their judgments of the action. Crucially, this reflects the fact that judgments and emotions are not fixed, and that they are likely to operate on rational processes (Huebner, 2011, 2014; Teper et al., 2015). More to the point, this evidence suggests that there may well be an integrated representation of the moral scenario that is based on informational content as well as personal emotional experiences that signal the valence on which the information should be judged. The evidence from the present study suggests that both moral judgments and emotional experiences change systematically in response to changes in information that critically concern the way in which a morally dubious action should be evaluated.

A pdf can be downloaded here.

Monday, October 21, 2019

Moral Judgment as Categorization

Cillian McHugh and others
PsyArXiv
Originally posted September 17, 2019

Abstract

We propose that the making of moral judgments is an act of categorization; people categorize events, behaviors, or people as ‘right’ or ‘wrong’. This approach builds on the currently dominant dual-processing approach to moral judgment in the literature, providing important links to developmental mechanisms in category formation, while avoiding recently developed critiques of dual-systems views. Stable categories are the result of skill in making context-relevant categorizations. People learn that various objects (events, behaviors, people, etc.) can be categorized as ‘right’ or ‘wrong’. Repetition and rehearsal then result in these categorizations becoming habitualized. According to this skill formation account of moral categorization, the learning and habitualization of moral categories occur as part of goal-directed activity and are sensitive to various contextual influences. Reviewing the literature, we highlight the essential similarity between categorization processes and the making of moral judgments. Using a categorization framework, we provide an overview of moral category formation as a basis for moral judgments. The implications for our understanding of the making of moral judgments are discussed.

Conclusion

We propose a revisiting of the categorization approach to the understanding of moral judgment proposed by Stich (1993). This approach, in providing a coherent account of the emergence of stability in the formation of moral categories, provides an account of the emergence of moral intuitions. This account predicts that emergent stable moral intuitions will mirror real-world social norms or collectively agreed moral principles. It is also possible that the emergence of moral intuitions can be informed by prior reasoning, allowing for the so-called “intelligence” of moral intuitions (e.g., Pizarro & Bloom, 2003; Royzman, Kim, & Leeman, 2015). This may even allow the traditionally opposing rationalist and intuitionist positions (e.g., Fine, 2006; Haidt, 2001; Hume, 2000/1748; Kant, 1959/1785; Kennett & Fine, 2009; Kohlberg, 1971; Nussbaum & Kahan, 1996; Cameron et al., 2013; Prinz, 2005; Pizarro & Bloom, 2003; Royzman et al., 2015; see also Mallon & Nichols, 2010, p. 299) to be integrated. In addition, the account of the emergence of moral intuitions described here is consistent with discussions of the emergence of moral heuristics (e.g., Gigerenzer, 2008; Sinnott-Armstrong, Young, & Cushman, 2010).

The research is here.

Thursday, August 8, 2019

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd & Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposites. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunities for additional nuance in dual process theorists’ claims about the link between reflection and dilemma judgments.

A copy of the paper is here.

Sunday, June 23, 2019

On the belief that beliefs should change according to evidence: Implications for conspiratorial, moral, paranormal, political, religious, and science beliefs

Gordon Pennycook, James Allan Cheyne, Derek Koehler, & Jonathan Fugelsang
PsyArXiv Preprints - Last edited on May 24, 2019

Abstract

Does one’s stance toward evidence evaluation and belief revision have relevance for actual beliefs? We investigate the role of having an actively open-minded thinking style about evidence (AOT-E) on a wide range of beliefs, values, and opinions. Participants indicated the extent to which they think beliefs (Study 1) or opinions (Studies 2 and 3) ought to change according to evidence on an 8-item scale. Across three studies with 1,692 participants from two different sources (Mechanical Turk and Lucid for Academics), we find that our short AOT-E scale correlates negatively with beliefs about topics ranging from extrasensory perception, to respect for tradition, to abortion, to God; and positively with topics ranging from anthropogenic global warming to support for free speech on college campuses. More broadly, the belief that beliefs should change according to evidence was robustly associated with political liberalism, the rejection of traditional moral values, the acceptance of science, and skepticism about religious, paranormal, and conspiratorial claims. However, we also find that AOT-E is much more strongly predictive for political liberals (Democrats) than conservatives (Republicans). We conclude that socio-cognitive theories of belief (both specific and general) should take into account people’s beliefs about when and how beliefs should change – that is, meta-beliefs – but that further work is required to understand how meta-beliefs about evidence interact with political ideology.

Conclusion

Our 8-item actively open-minded thinking about evidence (AOT-E) scale was strongly predictive of a wide range of beliefs, values, and opinions. People who reported believing that beliefs and opinions should change according to evidence were less likely to be religious, less likely to hold paranormal and conspiratorial beliefs, more likely to believe in a variety of scientific claims, and more politically liberal (in terms of overall ideology, partisan affiliation, moral values, and a variety of specific political opinions). Moreover, the effect sizes for these correlations were often large or very large, based on established norms (Funder & Ozer, 2019; Gignac & Szodorai, 2016). The size and diversity of AOT-E correlates strongly support one major, if broad, conclusion: socio-cognitive theories of belief (both specific and general) should take into account what people believe about when and how beliefs and opinions should change (i.e., meta-beliefs). That is, we should not assume that evidence is equally important for everyone. However, future work is required to more clearly delineate why AOT-E is more predictive for political liberals than conservatives.

A preprint can be downloaded here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly-cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition in this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm, or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Tuesday, January 8, 2019

The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

Sandra Monteiro, Geoff Norman, & Jonathan Sherbino
J Eval Clin Pract. 2018 Jun;24(3):666-673.

Abstract

There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack thereof, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine into everyday clinical reasoning. The second builds on the heuristics and biases research programme, appealing to a special class of dual process reasoning models that theorizes a rapid, error-prone cognitive process for problem solving alongside a slower, more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy.

A pdf can be downloaded here.

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.

The research is here.
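Since process dissociation does the analytic work in this paper (and in the Byrd and Conway study above), a worked example may help. The sketch below, in Python, uses the standard parameterization from this literature: congruent dilemmas are those where harm does not maximize outcomes (so both deontology and utilitarianism reject it), incongruent dilemmas are those where it does, and the utilitarian (U) and deontological (D) parameters are solved from a participant's rejection rates. The example numbers are hypothetical.

```python
# A minimal sketch of the process-dissociation arithmetic, assuming the
# standard parameterization used in this literature (illustrative only):
#   P(reject harm | congruent)   = U + (1 - U) * D   (both processes reject)
#   P(reject harm | incongruent) = (1 - U) * D       (only deontology rejects)

def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """Solve for the utilitarian (U) and deontological (D) parameters."""
    U = p_reject_congruent - p_reject_incongruent
    D = p_reject_incongruent / (1 - U) if U < 1 else float("nan")
    return U, D

# Hypothetical participant: rejects harm on 90% of congruent dilemmas
# but on only 30% of incongruent (harm-maximizes-outcomes) dilemmas.
U, D = process_dissociation(0.9, 0.3)
print(f"U = {U:.2f}, D = {D:.2f}")  # U = 0.60, D = 0.75
```

Because U and D are estimated independently, a variable such as antisociality can relate to one parameter (here, reduced D) without implying anything about the other, which is precisely the distinction the conventional bipolar analysis cannot make.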

Monday, January 29, 2018

Deontological Dilemma Response Tendencies and Sensorimotor Representations of Harm to Others

Leonardo Christov-Moore, Paul Conway, and Marco Iacoboni
Front. Integr. Neurosci., 12 December 2017

The dual process model of moral decision-making suggests that decisions to reject causing harm on moral dilemmas (where causing harm saves lives) reflect concern for others. Recently, some theorists have suggested such decisions actually reflect self-focused concern about causing harm, rather than witnessing others suffering. We examined brain activity while participants witnessed needles pierce another person’s hand, versus similar non-painful stimuli. More than a month later, participants completed moral dilemmas where causing harm either did or did not maximize outcomes. We employed process dissociation to independently assess harm-rejection (deontological) and outcome-maximization (utilitarian) response tendencies. Activity in the posterior inferior frontal cortex (pIFC) while participants witnessed others in pain predicted deontological, but not utilitarian, response tendencies. Previous brain stimulation studies have shown that the pIFC seems crucial for sensorimotor representations of observed harm. Hence, these findings suggest that deontological response tendencies reflect genuine other-oriented concern grounded in sensorimotor representations of harm.

The article is here.

Wednesday, June 7, 2017

On the cognitive (neuro)science of moral cognition: utilitarianism, deontology and the ‘fragmentation of value’

Alejandro Rosas
Working Paper: May 2017

Abstract

Scientific explanations of human higher capacities, traditionally denied to other animals, attract the attention of both philosophers and other workers in the humanities. They are often viewed with suspicion and skepticism. In this paper I critically examine the dual-process theory of moral judgment proposed by Greene and collaborators and the normative consequences drawn from that theory. I believe normative consequences are warranted, in principle, but I propose an alternative dual-process model of moral cognition that leads to a different normative consequence, which I dub ‘the fragmentation of value’. In the alternative model, the neat overlap between the deontological/utilitarian divide and the intuitive/reflective divide is abandoned. Instead, we have both utilitarian and deontological intuitions, equally fundamental and partially in tension. Cognitive control is sometimes engaged during a conflict between intuitions. When it is engaged, the result of control is not always utilitarian; sometimes it is deontological. I describe in some detail how this version is consistent with evidence reported by many studies, and what could be done to find more evidence to support it.

The working paper is here.

Monday, October 3, 2016

Moral learning: Why learning? Why moral? And why now?

Peter Railton
Cognition

Abstract

What is distinctive about bringing a learning perspective to moral psychology? Part of the answer lies in the remarkable transformations that have taken place in learning theory over the past two decades, which have revealed how powerful experience-based learning can be in the acquisition of abstract causal and evaluative representations, including generative models capable of attuning perception, cognition, affect, and action to the physical and social environment. When conjoined with developments in neuroscience, these advances in learning theory permit a rethinking of fundamental questions about the acquisition of moral understanding and its role in the guidance of behavior. For example, recent research indicates that spatial learning and navigation involve the formation of non-perspectival as well as ego-centric models of the physical environment, and that spatial representations are combined with learned information about risk and reward to guide choice and potentiate further learning. Research on infants provides evidence that they form non-perspectival expected-value representations of agents and actions as well, which help them to navigate the human environment. Such representations can be formed by highly-general mental processes such as causal and empathic simulation, and thus afford a foundation for spontaneous moral learning and action that requires no innate moral faculty and can exhibit substantial autonomy with respect to community norms. If moral learning is indeed integral with the acquisition and updating of causal and evaluative models, this affords a new way of understanding well-known but seemingly puzzling patterns in intuitive moral judgment—including the notorious “trolley problems.”

The article is here.

Thursday, September 29, 2016

Priming Children’s Use of Intentions in Moral Judgement with Metacognitive Training

Katarina Gvozdic and others
Frontiers in Psychology  
18 March 2016
http://dx.doi.org/10.3389/fpsyg.2016.00190

Abstract

Typically, adults give a primary role to the agent's intention to harm when performing a moral judgment of accidental harm. By contrast, children often focus on outcomes, underestimating the actor's mental states when judging someone for his action, and rely on what we suppose to be intuitive and emotional processes. The present study explored the processes involved in the development of the capacity to integrate agents' intentions into moral judgments of accidental harm in 5- to 8-year-old children. This was done using different metacognitive trainings reinforcing different abilities involved in moral judgments (mentalising abilities, executive abilities, or no reinforcement), similar to a paradigm previously used in the field of deductive logic. Children's moral judgments were gathered before and after the training with non-verbal cartoons depicting agents whose actions differed only based on their causal role or their intention to harm. We demonstrated that a metacognitive training could induce an important shift in children's moral abilities, showing that only children who were explicitly instructed to "not focus too much" on the consequences of accidental harm preferentially weighted the agents' intentions in their moral judgments. Our findings confirm that children between the ages of 5 and 8 are sensitive to the intentions of agents; however, at that age, this ability is insufficient to produce a "mature" moral judgment. Our experiment is the first to suggest a critical role for inhibitory resources in processing accidental harm.

The article is here.

Saturday, September 10, 2016

Rational and Emotional Sources of Moral Decision-Making: an Evolutionary-Developmental Account

Denton, K.K. & Krebs, D.L.
Evolutionary Psychological Science (2016), pp. 1-14.

Abstract

Some scholars have contended that moral decision-making is primarily rational, mediated by controlled, deliberative, and reflective forms of moral reasoning. Others have contended that moral decision-making is primarily emotional, mediated by automatic, affective, and intuitive forms of decision-making. Evidence from several lines of research suggests that people make moral decisions in both of these ways. In this paper, we review psychological and neurological evidence supporting dual-process models of moral decision-making and discuss research that has attempted to identify triggers for rational-reflective and emotional-intuitive processes. We argue that attending to the ways in which brain mechanisms evolved and develop throughout the life span supplies a basis for explaining why people possess the capacity to engage in two forms of moral decision-making, as well as accounting for the attributes that define each type and predicting when the mental mechanisms that mediate each of them will be activated and when one will override the other. We close by acknowledging that neurological research on moral decision-making mechanisms is in its infancy and suggesting that future research should be directed at distinguishing among different types of emotional, intuitive, rational, and reflective processes; refining our knowledge of the brain mechanisms implicated in different forms of moral judgment; and investigating the ways in which these mechanisms interact to produce moral decisions.

The article is here.

Tuesday, February 23, 2016

American attitudes toward nudges

Janice Y. Jung and Barbara A. Mellers
Judgment and Decision Making
Vol. 11, No. 1, January 2016, pp. 62-74

To successfully select and implement nudges, policy makers need a psychological understanding of who opposes nudges, how they are perceived, and when alternative methods (e.g., forced choice) might work better. Using two representative samples, we examined four factors that influence U.S. attitudes toward nudges – types of nudges, individual dispositions, nudge perceptions, and nudge frames. Most nudges were supported, although opt-out defaults for organ donations were opposed in both samples. “System 1” nudges (e.g., defaults and sequential orderings) were viewed less favorably than “System 2” nudges (e.g., educational opportunities or reminders). System 1 nudges were perceived as more autonomy threatening, whereas System 2 nudges were viewed as more effective for better decision making and more necessary for changing behavior. People with greater empathetic concern tended to support both types of nudges and viewed them as the “right” kind of goals to have. Individualists opposed both types of nudges, and conservatives tended to oppose both types. Reactant people and those with a strong desire for control opposed System 1 nudges. To see whether framing could influence attitudes, we varied the description of the nudge in terms of the target (Personal vs. Societal) and the reference point for the nudge (Costs vs. Benefits). Empathetic people were more supportive when framing highlighted societal costs or benefits, and reactant people were more opposed to nudges when frames highlighted the personal costs of rejection.

The article is here.

Saturday, February 6, 2016

Understanding Responses to Moral Dilemmas

Deontological Inclinations, Utilitarian Inclinations, and General Action Tendencies

Bertram Gawronski, Paul Conway, Joel B. Armstrong, Rebecca Friesdorf, and Mandy Hütter
In: J. P. Forgas, L. Jussim, & P. A. M. Van Lange (Eds.). (2016). Social psychology of morality. New York, NY: Psychology Press.

Introduction

For centuries, societies have wrestled with the question of how to balance the rights of the individual against the greater good (see Forgas, Jussim, & Van Lange, this volume): is it acceptable to ignore a person’s rights in order to increase the overall well-being of a larger number of people? The contentious nature of this issue is reflected in many contemporary examples, including debates about whether it is legitimate to cause harm in order to protect societies against threats (e.g., shooting down an abducted passenger plane to prevent a terrorist attack) and whether it is acceptable to refuse life-saving support for some people in order to protect the well-being of many others (e.g., refusing the return of American citizens who became infected with Ebola in Africa for treatment in the US). These issues have captured the attention of social scientists, politicians, philosophers, lawmakers, and citizens alike, partly because they involve a conflict between two moral principles.

The first principle, often associated with the moral philosophy of Immanuel Kant, emphasizes the irrevocable universality of rights and duties. According to the principle of deontology, the moral status of an action is derived from its consistency with context-independent norms (norm-based morality). From this perspective, violations of moral norms are unacceptable irrespective of the anticipated outcomes (e.g., shooting down an abducted passenger plane is always immoral because it violates the moral norm not to kill others). The second principle, often associated with the moral philosophy of John Stuart Mill, emphasizes the greater good. According to the principle of utilitarianism, the moral status of an action depends on its outcomes, more specifically its consequences for overall well-being (outcome-based morality).

Thursday, January 21, 2016

Intuition, deliberation, and the evolution of cooperation

Adam Bear and David G. Rand
PNAS (2016). doi:10.1073/pnas.1517780113

Abstract

Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.

The article is here.
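The model's agent architecture can be made concrete with a short simulation sketch. The code below, in Python, is my reconstruction of the decision rule described in the abstract, not the authors' code: an agent carries a generalized intuitive response plus a cost threshold, pays the stochastically drawn deliberation cost only when that cost falls below the threshold, and in that case tailors its play to whether the game is one-shot or reciprocal. The parameter values and bookkeeping are illustrative assumptions.

```python
import random

# Toy rendering of a dual-process agent from an intuition/deliberation
# evolutionary game model (a reconstruction under stated assumptions, not
# the published implementation). Evolutionary dynamics and payoffs omitted.

def play(intuitive_coop, T, p_repeated=0.7):
    """One interaction: return (cooperated?, deliberation cost paid)."""
    repeated = random.random() < p_repeated  # is this game reciprocal?
    d = random.random()                      # stochastically varying cost
    if d < T:
        # Deliberate: cooperate only when reciprocity makes it pay.
        return repeated, d
    # Intuit: apply the same generalized response to every game.
    return intuitive_coop, 0.0

# The strategy the model finds favored under sufficient reciprocity:
# intuitively cooperate, but sometimes deliberate and defect in one-shots.
random.seed(1)
results = [play(intuitive_coop=True, T=0.25) for _ in range(10_000)]
coop_rate = sum(c for c, _ in results) / len(results)
mean_cost = sum(d for _, d in results) / len(results)
print(f"cooperation rate = {coop_rate:.2f}, mean cost paid = {mean_cost:.3f}")
```

Even in this toy, deliberation can only ever subtract cooperation from an intuitive cooperator (by enabling one-shot defection), which mirrors the paper's claim that deliberation undermines, rather than rescues, cooperation with strangers.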

Tuesday, December 15, 2015

On the reception and detection of pseudo-profound bullshit

Gordon Pennycook, Allan Cheyne, Nathaniel Barr, Derek J. Koehler, & Jonathan A. Fugelsang
Judgment and Decision Making, Vol. 10, No. 6, November 2015, pp. 549–563

Abstract

Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive-sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.

The entire paper is here.