Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Deontological.

Sunday, September 11, 2016

Morality (Book Chapter)

Jonathan Haidt and Selin Kesebir
Handbook of Social Psychology. (2010) 3:III:22.

Here is a portion of the conclusion:

The goal of this chapter was to offer an account of what morality really is, where it came from, how it works, and why McDougall was right to urge social psychologists to make morality one of their fundamental concerns. The chapter used a simple narrative device to make its literature review more intuitively compelling: It told the history of moral psychology as a fall followed by redemption. (This is one of several narrative forms that people spontaneously use when telling the stories of their lives [McAdams, 2006].) To create the sense of a fall, the chapter began by praising the ancients and their virtue-based ethics; it praised some early sociologists and psychologists (e.g., McDougall, Freud, and Durkheim) who had “thick” emotional and sociological conceptions of morality; and it praised Darwin for his belief that intergroup competition contributed to the evolution of morality. The chapter then suggested that moral psychology lost these perspectives in the twentieth century as many psychologists followed philosophers and other social scientists in embracing rationalism and methodological individualism. Morality came to be studied primarily as a set of beliefs and cognitive abilities, located in the heads of individuals, which helped individuals to solve quandaries about helping and hurting other individuals. In this narrative, evolutionary theory also lost something important (while gaining much else) when it focused on morality as a set of strategies, coded into the genes of individuals, that helped individuals optimize their decisions about cooperation and defection when interacting with strangers. Both of these losses or “narrowings” led many theorists to think that altruistic acts performed toward strangers are the quintessence of morality.

The book chapter is here.

This chapter is an excellent summary for students or those beginning to read on moral psychology.

Thursday, July 14, 2016

At the Heart of Morality Lies Neuro-Visceral Integration: Lower Cardiac Vagal Tone Predicts Utilitarian Moral Judgment

Gewnhi Park, Andreas Kappes, Yeojin Rho, and Jay J. Van Bavel
Soc Cogn Affect Neurosci first published online June 17, 2016
doi:10.1093/scan/nsw077

Abstract

To not harm others is widely considered the most basic element of human morality. The aversion to harm others can be either rooted in the outcomes of an action (utilitarianism) or reactions to the action itself (deontology). We speculated that human moral judgments rely on the integration of neural computations of harm and visceral reactions. The present research examined whether utilitarian or deontological aspects of moral judgment are associated with cardiac vagal tone, a physiological proxy for neuro-visceral integration. We investigated the relationship between cardiac vagal tone and moral judgment by using a mix of moral dilemmas, mathematical modeling, and psychophysiological measures. An index of bipolar deontology-utilitarianism was correlated with resting heart rate variability—an index of cardiac vagal tone—such that more utilitarian judgments were associated with lower heart rate variability. Follow-up analyses using process dissociation, which independently quantifies utilitarian and deontological moral inclinations, provided further evidence that utilitarian (but not deontological) judgments were associated with lower heart rate variability. Our results suggest that the functional integration of neural and visceral systems during moral judgments can restrict outcome-based, utilitarian moral preferences. Implications for theories of moral judgment are discussed.

A copy of the paper is here.
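The resting heart rate variability used above as a proxy for cardiac vagal tone is typically computed from inter-beat (R-R) intervals; one common time-domain index is RMSSD. A minimal sketch, not the authors' pipeline, with made-up interval values for illustration:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical 8-beat recording: milliseconds between successive heartbeats.
rr = [812, 790, 835, 828, 801, 819, 843, 807]
print(round(rmssd(rr), 1))  # larger values indicate higher vagally mediated HRV
```

On this reading, the paper's correlational claim is that participants with lower values of an index like this one tended to make more utilitarian judgments.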

Sunday, July 10, 2016

Deontology Or Trustworthiness?

A Conversation Between Molly Crockett and Daniel Kahneman
Edge.org
June 16, 2016

Here is an excerpt:

DANIEL KAHNEMAN:  Molly, you started your career as a neuroscientist, and you still are. Yet, much of the work that you do now is about moral judgment. What journey got you there?            

MOLLY CROCKETT:  I've always been interested in how we make decisions. In particular, why is it that the same person will sometimes make a decision that follows one set of principles or rules, and other times make a wildly different decision? These intra-individual variations in decision making have always fascinated me, specifically in the moral domain, but also in other kinds of decision making, more broadly.

I got interested in brain chemistry because this seemed to be a neural implementation or solution for how a person could be so different in their disposition across time, because we know brain chemistry is sensitive to aspects of the environment. I picked that methodology as a tool with which to study why our decisions can shift so much, even within the same person; morality is one clear demonstration of how this happens.            

KAHNEMAN:  Are you already doing that research, connecting moral judgment to chemistry?

CROCKETT:  Yes. One of the first entry points into the moral psychology literature during my PhD was a study where we gave people different kinds of psychoactive drugs. We gave people an antidepressant drug that affected their serotonin, or an ADHD drug that affected their noradrenaline, and then we looked at how these drugs affected the way people made moral judgments. In that literature, you can compare two different schools of moral thought for how people ought to make moral decisions.

The entire transcript, video, and audio are here.

Sunday, May 22, 2016

Is Deontology a Moral Confabulation?

Emilian Mihailov
Neuroethics
April 2016, Volume 9, Issue 1, pp 1-13

Abstract

Joshua Greene has put forward the bold empirical hypothesis that deontology is a confabulation of moral emotions. Deontological philosophy does not stem from "true" moral reasoning, but from emotional reactions, backed up by post hoc rationalizations which play no role in generating the initial moral beliefs. In this paper, I will argue against the confabulation hypothesis. First, I will highlight several points in Greene’s discussion of confabulation, and identify two possible models. Then, I will argue that the evidence does not illustrate the relevant model of deontological confabulation. In fact, I will make the case that deontology is unlikely to be a confabulation because alarm-like emotions, which allegedly drive deontological theorizing, are resistant to confabulation. I will end by clarifying what kinds of claims the confabulation data can support. The upshot of the final section is that confabulation data cannot be used to undermine deontological theory in itself, and ironically, if one commits to the claim that a deontological justification is a confabulation in a particular case, then the data suggest that in general deontology has a prima facie validity.

The article is here.

Tuesday, April 26, 2016

Inference of Trustworthiness from Intuitive Moral Judgments

Jim A. C. Everett, David A. Pizarro, M. J. Crockett.
Journal of Experimental Psychology: General, 2016
DOI: 10.1037/xge0000165

Abstract

Moral judgments play a critical role in motivating and enforcing human cooperation. Research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across five studies we show that people who make characteristically deontological judgments (as opposed to judgments that align with other ethical traditions) are preferred as social partners, perceived as more moral and trustworthy, and trusted more in economic games. These findings provide empirical support for a partner choice account for why intuitive moral judgments often align with deontological theories.

The article can be downloaded here.

Tuesday, March 1, 2016

Does Bioethics Tell Us What to Do?

by J.S. Blumenthal-Barby, Ph.D.
bioethics.net
Originally posted February 15, 2016

Applied ethicists—including bioethicists—are in the business of making normative claims. Unlike, say, claims in meta-ethics, these are meant to guide action. Yet, when one examines the literature and discourse in applied ethics, there are three common barriers to these claims being action-guiding. First, they often lack precision and accuracy when examined under the lens of deontic logic. Second, even when accurately articulated in deontic language, they often fall into the category of claims about “permissibility,” a category that yields low utility with respect to action guidance. Third, they are often spectrum based rather than binary normative claims, which also yield low utility with respect to action guidance.

The blog post is here.

Monday, February 22, 2016

Morality is a muscle. Get to the gym.

Pascal-Emmanuel Gobry
The Week
Originally published January 18, 2016

Here is an excerpt:

Take the furor over "trigger warnings" in college classes and textbooks. One side believes that, in order to protect the sensitivities of some students, professors or writers should include a warning at the beginning of an article or course about controversial topics. Another side says that if someone can't handle rough material, then he can stop reading or step out of the room, and that trigger warnings are an unconscionable affront to freedom of thought. Interestingly, both schools clearly believe that there is one moral stance which takes the form of a rule that should be obeyed always and everywhere. Always and everywhere we should have trigger warnings to protect people's sensibilities, or always and everywhere we should not.

Both sides need a lecture in virtue ethics.

If I try to stretch my virtue of empathy, it doesn't seem at all absurd to me to imagine that, say, a young woman who has been raped might be made quite uncomfortable by a class discussion of rape in literature, and that this is something to which we should be sensitive. But the trigger warning people maybe should think more about the moral imperative to develop the virtue of courage, including intellectual courage. Then it seems to me that if you just put aside grand moral questions about freedom of inquiry, simple basic human courtesy would mean a professor would try to take into account a trauma victim's sensibilities while teaching sensitive material, and students would understand that part of the goal of a college class is to challenge them. We don't need to debate universal moral values, we just need to be reminded to exercise virtue more.

The article is here.

Saturday, February 6, 2016

Understanding Responses to Moral Dilemmas

Deontological Inclinations, Utilitarian Inclinations, and General Action Tendencies

Bertram Gawronski, Paul Conway, Joel B. Armstrong, Rebecca Friesdorf, and Mandy Hütter
In: J. P. Forgas, L. Jussim, & P. A. M. Van Lange (Eds.). (2016). Social psychology of morality. New York, NY: Psychology Press.

Introduction

For centuries, societies have wrestled with the question of how to balance the rights of the individual versus the greater good (see Forgas, Jussim, & Van Lange, this volume); is it acceptable to ignore a person’s rights in order to increase the overall well-being of a larger number of people? The contentious nature of this issue is reflected in many contemporary examples, including debates about whether it is legitimate to cause harm in order to protect societies against threats (e.g., shooting an abducted passenger plane to prevent a terrorist attack) and whether it is acceptable to refuse life-saving support for some people in order to protect the well-being of many others (e.g., refusing the return of American citizens who became infected with Ebola in Africa for treatment in the US). These issues have captured the attention of social scientists, politicians, philosophers, lawmakers, and citizens alike, partly because they involve a conflict between two moral principles.

The first principle, often associated with the moral philosophy of Immanuel Kant, emphasizes the irrevocable universality of rights and duties. According to the principle of deontology, the moral status of an action is derived from its consistency with context-independent norms (norm-based morality). From this perspective, violations of moral norms are unacceptable irrespective of the anticipated outcomes (e.g., shooting an abducted passenger plane is always immoral because it violates the moral norm not to kill others). The second principle, often associated with the moral philosophy of John Stuart Mill, emphasizes the greater good. According to the principle of utilitarianism, the moral status of an action depends on its outcomes, more specifically its consequences for overall well-being (outcome-based morality).

Monday, December 7, 2015

Poker-faced morality: Concealing emotions leads to utilitarian decision making

Jooa Julia Lee, Francesca Gino
Organizational Behavior and Human Decision Processes Volume 126, 
January 2015, Pages 49–64

Abstract

This paper examines how making deliberate efforts to regulate aversive affective responses influences people’s decisions in moral dilemmas. We hypothesize that emotion regulation—mainly suppression and reappraisal—will encourage utilitarian choices in emotionally charged contexts and that this effect will be mediated by the decision maker’s decreased deontological inclinations. In Study 1, we find that individuals who endorsed the utilitarian option (vs. the deontological option) were more likely to suppress their emotional expressions. In Studies 2a, 2b, and 3, we instruct participants to either regulate their emotions, using one of two different strategies (reappraisal vs. suppression), or not to regulate, and we collect data through the concurrent monitoring of psycho-physiological measures. We find that participants are more likely to make utilitarian decisions when asked to suppress their emotions rather than when they do not regulate their affect. In Study 4, we show that one’s reduced deontological inclinations mediate the relationship between emotion regulation and utilitarian decision making.

The article is here.

Saturday, April 25, 2015

On the Normative Significance of Experimental Moral Psychology

Victor Kumar and Richmond Campbell
Philosophical Psychology 
Vol. 25, Iss. 3, 2012, 311-330.

Experimental research in moral psychology can be used to generate debunking arguments in ethics. Specifically, research can indicate that we draw a moral distinction on the basis of a morally irrelevant difference. We develop this naturalistic approach by examining a recent debate between Joshua Greene and Selim Berker. We argue that Greene’s research, if accurate, undermines attempts to reconcile opposing judgments about trolley cases, but that his attempt to debunk deontology fails. We then draw some general lessons about the possibility of empirical debunking arguments in ethics.

The entire article is here.


Friday, April 24, 2015

Gender Differences in Responses to Moral Dilemmas

By Rebecca Friesdorf, Paul Conway, and Bertram Gawronski
Pers Soc Psychol Bull April 3, 2015

Abstract

The principle of deontology states that the morality of an action depends on its consistency with moral norms; the principle of utilitarianism implies that the morality of an action depends on its consequences. Previous research suggests that deontological judgments are shaped by affective processes, whereas utilitarian judgments are guided by cognitive processes. The current research used process dissociation (PD) to independently assess deontological and utilitarian inclinations in women and men. A meta-analytic re-analysis of 40 studies with 6,100 participants indicated that men showed a stronger preference for utilitarian over deontological judgments than women when the two principles implied conflicting decisions (d = 0.52). PD further revealed that women exhibited stronger deontological inclinations than men (d = 0.57), while men exhibited only slightly stronger utilitarian inclinations than women (d = 0.10). The findings suggest that gender differences in moral dilemma judgments are due to differences in affective responses to harm rather than cognitive evaluations of outcomes.

The entire article is here.
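The process dissociation (PD) technique used in this study separates the two inclinations algebraically from response rates on congruent dilemmas (where harm serves no greater good) and incongruent dilemmas (where harm maximizes overall welfare). A minimal sketch of one common parameterization (following Conway and Gawronski's processing-tree model; the response proportions below are hypothetical):

```python
def process_dissociation(p_unacceptable_congruent, p_unacceptable_incongruent):
    """Return (U, D): utilitarian and deontological parameter estimates.

    Inputs are a participant's proportions of "harm is unacceptable"
    judgments on congruent and incongruent dilemmas.
    """
    # Utilitarian processing alone rejects harm in congruent dilemmas but
    # accepts it in incongruent ones, so U is the difference in rejection rates.
    U = p_unacceptable_congruent - p_unacceptable_incongruent
    # Deontological processing drives rejection only when utilitarian
    # processing does not: P(reject | incongruent) = (1 - U) * D.
    D = p_unacceptable_incongruent / (1 - U) if U < 1 else float("nan")
    return U, D

# Hypothetical participant: rejects harm on 90% of congruent and
# 60% of incongruent dilemmas.
U, D = process_dissociation(0.90, 0.60)
print(U, D)
```

Because U and D are estimated independently, the abstract can report a large gender difference on D (d = 0.57) alongside a near-zero difference on U (d = 0.10), something a single bipolar index would conflate.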

Thursday, March 26, 2015

Should ethics be taught in schools?

By William Isdale
Practical Ethics
Originally published March 4, 2015

Here is an excerpt:

Can we teach ethics?

One problem with teaching ethics in schools is that there are many competing theories about what is right and wrong. For instance, one might think that our intentions matter morally (Kantianism), or that only consequences do (consequentialism). Some regard inequality as intrinsically problematic, whilst others do not. Unlike other subjects taught in schools, ethics seems to be one in which people can’t agree on even seemingly foundational issues.

In his book Essays on Religion and Education, the Oxford philosopher R.M. Hare argued that ethics can be taught in schools, because it involves learning a language with a determinate method, “such that, if you understand what a moral question is, you must know which arguments are legitimate, in the same way in which, in mathematics, if you know what mathematics is, you know that certain arguments in that field are legitimate and certain arguments not.”

The entire blog post is here.

Saturday, February 14, 2015

A Person-Centered Approach to Moral Judgment

By Eric Luis Uhlmann, David Pizarro, and Daniel Diermeier
Perspectives on Psychological Science January 2015 vol. 10 no. 1 72-81

Abstract

Both normative theories of ethics in philosophy and contemporary models of moral judgment in psychology have focused almost exclusively on the permissibility of acts, in particular whether acts should be judged based on their material outcomes (consequentialist ethics) or based on rules, duties, and obligations (deontological ethics). However, a longstanding third perspective on morality, virtue ethics, may offer a richer descriptive account of a wide range of lay moral judgments. Building on this ethical tradition, we offer a person-centered account of moral judgment, which focuses on individuals as the unit of analysis for moral evaluations rather than on acts. Because social perceivers are fundamentally motivated to acquire information about the moral character of others, features of an act that seem most informative of character often hold more weight than either the consequences of the act, or whether or not a moral rule has been broken. This approach, we argue, can account for a number of empirical findings that are either not predicted by current theories of moral psychology, or are simply categorized as biases or irrational quirks in the way individuals make moral judgments.

The entire article is here.

Tuesday, December 16, 2014

Core Values Versus Common Sense: Consequentialist Views Appear Less Rooted in Morality

By Tamar Kreps and Benoît Monin
Personality and Social Psychology Bulletin
November 2014 vol. 40 no. 11 1529-1542

Abstract

When a speaker presents an opinion, an important factor in audiences’ reactions is whether the speaker seems to be basing his or her decision on ethical (as opposed to more pragmatic) concerns. We argue that, despite a consequentialist philosophical tradition that views utilitarian consequences as the basis for moral reasoning, lay perceivers think that speakers using arguments based on consequences do not construe the issue as a moral one. Five experiments show that, for both political views (including real State of the Union quotations) and organizational policies, consequentialist views are seen to express less moralization than deontological views, and even sometimes than views presented with no explicit justification. We also demonstrate that perceived moralization in turn affects speakers’ perceived commitment to the issue and authenticity. These findings shed light on lay conceptions of morality and have practical implications for people considering how to express moral opinions publicly.

The entire article is here.

Friday, August 15, 2014

Moral judgement in adolescents: Age differences in applying and justifying three principles of harm

Paul C. Stey, Daniel Lapsley & Mary O. McKeever
European Journal of Developmental Psychology
Volume 10, Issue 2, 2013
DOI:10.1080/17405629.2013.765798

Abstract

This study investigated the application and justification of three principles of harm in a cross-sectional sample of adolescents in order to test recent theories concerning the source of intuitive moral judgements. Participants were 46 early (M age = 14.8 years) and 40 late adolescents (M age = 17.8 years). Participants rated the permissibility of various ethical dilemmas, and provided justifications for their judgements. Results indicated participants aligned their judgements with the three principles of harm, but had difficulty explaining their reasoning. Furthermore, although age groups were consistent in the application of the principles of harm, age differences emerged in their justifications. These differences were partly explained by differences in language ability. Additionally, participants who used emotional language in their justifications demonstrated a characteristically deontological pattern of moral judgement on certain dilemmas. We conclude adolescents in this age range apply the principles of harm but that the ability to explain their judgements is still developing.

The entire article is here.

Saturday, May 3, 2014

Religiosity, Political Orientation, and Consequentialist Moral Thinking

By Jared Piazza and Paulo Sousa
Social Psychological and Personality Science April 2014 vol. 5 no. 3 334-342

Abstract

Three studies demonstrated that the moral judgments of religious individuals and political conservatives are highly insensitive to consequentialist (i.e., outcome-based) considerations. In Study 1, both religiosity and political conservatism predicted a resistance toward consequentialist thinking concerning a range of transgressive acts, independent of other relevant dispositional factors (e.g., disgust sensitivity). Study 2 ruled out differences in welfare sensitivity as an explanation for these findings. In Study 3, religiosity and political conservatism predicted a commitment to judging “harmless” taboo violations morally impermissible, rather than discretionary, despite the lack of negative consequences arising from the act. Furthermore, non-consequentialist thinking style was shown to mediate the relationship that religiosity/conservatism had with impermissibility judgments, while intuitive thinking style did not. These data provide further evidence for the influence of religious and political commitments in motivating divergent moral judgments, while highlighting a new dispositional factor, non-consequentialist thinking style, as a mediator of these effects.

The entire article is here.

Tuesday, April 15, 2014

Automated ethics

When is it ethical to hand our decisions over to machines? And when is external automation a step too far?

by Tom Chatfield
Aeon Magazine
Originally published March 31, 2014

Here is an excerpt:

Automation, in this context, is a force pushing old principles towards breaking point. If I can build a car that will automatically avoid killing a bus full of children, albeit at great risk to its driver’s life, should any driver be given the option of disabling this setting? And why stop there: in a world that we can increasingly automate beyond our reaction times and instinctual reasoning, should we trust ourselves even to conduct an assessment in the first place?

Beyond the philosophical friction, this last question suggests another reason why many people find the trolley disturbing: because its consequentialist resolution presents not only the possibility that an ethically superior action might be calculable via algorithm (not in itself a controversial claim) but also that the right algorithm can itself be an ethically superior entity to us.

The entire article is here.

Wednesday, January 29, 2014

Moral luck: Neiladri Sinhababu

Published on Dec 2, 2013

A talk on moral luck that examines, through a number of examples, when blame and virtue can be assigned to human actions. Neil Sinhababu is Assistant Professor of Philosophy at the National University of Singapore. His research is mainly on ethics. His paper on romantic relationships with people from other universes, "Possible Girls", was featured in the Washington Post on Valentine's Day.


Monday, July 22, 2013

Models of Morality

By Molly J. Crockett
Wellcome Trust Centre for Neuroimaging

Moral dilemmas engender conflicts between two traditions: consequentialism, which evaluates actions based on their outcomes, and deontology, which evaluates the actions themselves. These strikingly resemble two distinct decision-making architectures: a model-based system that selects actions based on inferences about their consequences; and a model-free system that selects actions based on their reinforcement history. Here, I consider how these systems, along with a Pavlovian system that responds reflexively to rewards and punishments, can illuminate puzzles in moral psychology.

The entire article is here.
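The model-free/model-based distinction in the abstract comes from reinforcement learning, and the contrast can be sketched in a few lines. This toy example (all state names, probabilities, and utilities are hypothetical illustrations, not taken from the article) shows a model-free value cached from reinforcement history next to a model-based value computed from an explicit outcome model:

```python
# Model-free: cache an action's value from its reinforcement history.
def q_update(q, reward, alpha=0.1):
    """One temporal-difference update, nudging the cached value toward reward."""
    return q + alpha * (reward - q)

q = 0.0
for r in [1.0, 1.0, 0.0, 1.0]:  # hypothetical reinforcement history
    q = q_update(q, r)
print(round(q, 4))  # the cached value reflects past rewards only

# Model-based: evaluate an action by simulating its consequences.
outcome_model = {"pull_lever": {"five_saved": 0.9, "one_dies": 1.0}}
utilities = {"five_saved": 5.0, "one_dies": -1.0}

def model_based_value(action):
    """Expected utility from an explicit model of outcome probabilities."""
    return sum(p * utilities[o] for o, p in outcome_model[action].items())

print(model_based_value("pull_lever"))  # 0.9*5.0 - 1.0*1.0 = 3.5
```

The mapping Crockett proposes is roughly that outcome-weighing, consequentialist judgment resembles the second computation, while habit-like, action-focused responses resemble the first.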

Thanks to Molly for making this journal article public.

Friday, July 12, 2013

Kill Whitey. It’s the Right Thing to Do.

by David Dobbs
Neuron Culture
September 15, 2010

Here is an excerpt:

Researchers generally use these (trolley) scenarios to see whether people hold a) an absolutist or so-called “deontological” moral code or b) a utilitarian or “consequentialist” moral code. In an absolutist code, an act’s morality virtually never depends on context or secondary consequences. A utilitarian code allows that an act’s morality can depend on context and secondary consequences, such as whether taking one life can save two or three or a thousand.

In most studies, people start out insisting they have absolute codes. But when researchers tweak the settings, many people decide morality is relative after all: Propose, for instance, that the fat man is known to be dying, or was contemplating jumping off the bridge anyway — and the passengers are all children — and for some people, that makes it different. Or the guy is a murderer and the passengers nuns. In other scenarios the man might be slipping, and will fall and die if you don’t grab him: Do you save him … even if it means all those kids will die? By tweaking these settings, researchers can squeeze an absolutist pretty hard, but they usually find a mix of absolutists and consequentialists.

As a grad student, Pizarro liked trolleyology. Yet it struck him that these studies, in their targeting of an absolutist versus consequentialist spectrum, seemed to assume that most people would hold firm to their particular spots on that spectrum — that individuals generally held a roughly consistent moral compass. The compass needle might wobble, but it would generally point in the same direction.

Pizarro wasn’t so sure. He suspected we might be more fickle. That perhaps we act first and scramble for morality afterward, or something along those lines, and that we choose our rule set according to how well it fits our desires.

The entire blog post is here.