Inference of trustworthiness from intuitive moral judgments
Jim A. C. Everett, David A. Pizarro, & M. J. Crockett
Journal of Experimental Psychology: General, 2016
DOI: 10.1037/xge0000165
Abstract
Moral judgments play a critical role in motivating and enforcing human cooperation. Research on the proximate mechanisms of moral judgments highlights the importance of intuitive, automatic processes in forming such judgments. Intuitive moral judgments often share characteristics with deontological theories in normative ethics, which argue that certain acts (such as killing) are absolutely wrong, regardless of their consequences. Why do moral intuitions typically follow deontological prescriptions, as opposed to those of other ethical theories? Here we test a functional explanation for this phenomenon by investigating whether agents who express deontological moral judgments are more valued as social partners. Across five studies we show that people who make characteristically deontological judgments (as opposed to judgments that align with other ethical traditions) are preferred as social partners, perceived as more moral and trustworthy, and trusted more in economic games. These findings provide empirical support for a partner choice account for why intuitive moral judgments often align with deontological theories.
The article can be downloaded here.
Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Showing posts with label Utilitarian.
Tuesday, April 26, 2016
Tuesday, April 19, 2016
Divergent roles of autistic and alexithymic traits in utilitarian moral judgments in adults with autism
Indrajeet Patil, Jens Melsbach, Kristina Hennig-Fast & Giorgia Silani
Scientific Reports 6, Article number: 23637 (2016)
doi:10.1038/srep23637
Abstract
This study investigated hypothetical moral choices in adults with high-functioning autism and the role of empathy and alexithymia in such choices. We used a highly emotionally salient moral dilemma task to investigate autistics’ hypothetical moral evaluations about personally carrying out harmful utilitarian behaviours aimed at maximizing welfare. Results showed that they exhibited a normal pattern of moral judgments despite the deficits in social cognition and emotional processing. Further analyses revealed that this was due to mutually conflicting biases associated with autistic and alexithymic traits after accounting for shared variance: (a) autistic traits were associated with reduced utilitarian bias due to elevated personal distress of demanding social situations, while (b) alexithymic traits were associated with increased utilitarian bias on account of reduced empathic concern for the victim. Additionally, autistics relied on their non-verbal reasoning skills to rigidly abide by harm-norms. Thus, utilitarian moral judgments in autism were spared due to opposite influences of autistic and alexithymic traits and compensatory intellectual strategies. These findings demonstrate the importance of empathy and alexithymia in autistic moral cognition and have methodological implications for studying moral judgments in several other clinical populations.
The article is here.
Monday, February 22, 2016
Morality is a muscle. Get to the gym.
Pascal-Emmanuel Gobry
The Week
Originally published January 18, 2016
Here is an excerpt:
Take the furor over "trigger warnings" in college classes and textbooks. One side believes that in order to protect the sensitivities of some students, professors or writers should warn readers or students at the beginning of an article or course about controversial topics. Another side says that if someone can't handle rough material, then he can stop reading or step out of the room, and that trigger warnings are an unconscionable affront to freedom of thought. Interestingly, both schools clearly believe that there is one moral stance which takes the form of a rule that should be obeyed always and everywhere. Always and everywhere we should have trigger warnings to protect people's sensibilities, or always and everywhere we should not.
Both sides need a lecture in virtue ethics.
If I try to stretch my virtue of empathy, it doesn't seem at all absurd to me to imagine that, say, a young woman who has been raped might be made quite uncomfortable by a class discussion of rape in literature, and that this is something to which we should be sensitive. But the trigger warning people maybe should think more about the moral imperative to develop the virtue of courage, including intellectual courage. Then it seems to me that if you just put aside grand moral questions about freedom of inquiry, simple basic human courtesy would mean a professor would try to take into account a trauma victim's sensibilities while teaching sensitive material, and students would understand that part of the goal of a college class is to challenge them. We don't need to debate universal moral values, we just need to be reminded to exercise virtue more.
The article is here.
Saturday, February 6, 2016
Understanding Responses to Moral Dilemmas
Deontological Inclinations, Utilitarian Inclinations, and General Action Tendencies
Bertram Gawronski, Paul Conway, Joel B. Armstrong, Rebecca Friesdorf, and Mandy Hütter
In: J. P. Forgas, L. Jussim, & P. A. M. Van Lange (Eds.). (2016). Social psychology of morality. New York, NY: Psychology Press.
Introduction
For centuries, societies have wrestled with the question of how to balance the rights of the individual versus the greater good (see Forgas, Jussim, & Van Lange, this volume); is it acceptable to ignore a person’s rights in order to increase the overall well-being of a larger number of people? The contentious nature of this issue is reflected in many contemporary examples, including debates about whether it is legitimate to cause harm in order to protect societies against threats (e.g., shooting an abducted passenger plane to prevent a terrorist attack) and whether it is acceptable to refuse life-saving support for some people in order to protect the well-being of many others (e.g., refusing the return of American citizens who became infected with Ebola in Africa for treatment in the US). These issues have captured the attention of social scientists, politicians, philosophers, lawmakers, and citizens alike, partly because they involve a conflict between two moral principles.
The first principle, often associated with the moral philosophy of Immanuel Kant, emphasizes the irrevocable universality of rights and duties. According to the principle of deontology, the moral status of an action is derived from its consistency with context-independent norms (norm-based morality). From this perspective, violations of moral norms are unacceptable irrespective of the anticipated outcomes (e.g., shooting an abducted passenger plane is always immoral because it violates the moral norm not to kill others). The second principle, often associated with the moral philosophy of John Stuart Mill, emphasizes the greater good. According to the principle of utilitarianism, the moral status of an action depends on its outcomes, more specifically its consequences for overall well-being (outcome-based morality).
Monday, December 7, 2015
Poker-faced morality: Concealing emotions leads to utilitarian decision making
Jooa Julia Lee, Francesca Gino
Organizational Behavior and Human Decision Processes Volume 126,
January 2015, Pages 49–64
Abstract
This paper examines how making deliberate efforts to regulate aversive affective responses influences people’s decisions in moral dilemmas. We hypothesize that emotion regulation—mainly suppression and reappraisal—will encourage utilitarian choices in emotionally charged contexts and that this effect will be mediated by the decision maker’s decreased deontological inclinations. In Study 1, we find that individuals who endorsed the utilitarian option (vs. the deontological option) were more likely to suppress their emotional expressions. In Studies 2a, 2b, and 3, we instruct participants to either regulate their emotions, using one of two different strategies (reappraisal vs. suppression), or not to regulate, and we collect data through the concurrent monitoring of psycho-physiological measures. We find that participants are more likely to make utilitarian decisions when asked to suppress their emotions rather than when they do not regulate their affect. In Study 4, we show that one’s reduced deontological inclinations mediate the relationship between emotion regulation and utilitarian decision making.
The article is here.
Thursday, November 19, 2015
Is moral bioenhancement dangerous?
Nicholas Drake
J Med Ethics doi:10.1136/medethics-2015-102944
Abstract
In a recent response to Persson and Savulescu's Unfit for the Future, Nicholas Agar argues that moral bioenhancement is dangerous. His grounds for this are that normal moral judgement should be privileged because it involves a balance of moral subcapacities; moral bioenhancement, Agar argues, involves the enhancement of only particular moral subcapacities, and thus upsets the balance inherent in normal moral judgement. Mistaken moral judgements, he says, are likely to result. I argue that Agar's argument fails for two reasons. First, having strength in a particular moral subcapacity does not necessarily entail a worsening of moral judgement; it can involve strength in a particular aspect of morality. Second, normal moral judgement is not sufficiently likely to be correct to be the standard by which moral judgements are measured.
The entire article is here.
Tuesday, October 27, 2015
Intuitive and Counterintuitive Morality
Guy Kahane
Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics, Oxford University Press
Abstract
Recent work in the cognitive science of morality has been taken to show that moral judgment is largely based on immediate intuitions and emotions. However, according to Greene's influential dual process model, deliberative processing not only plays a significant role in moral judgment, but also favours a distinctive type of content: a broadly utilitarian approach to ethics. In this chapter, I argue that this proposed tie between process and content is based on conceptual errors, and on a misinterpretation of the empirical evidence. Drawing on some of our own empirical research, I will argue that so-called "utilitarian" judgments in response to trolley cases often have little to do with concern for the greater good, and may actually express antisocial tendencies. A more general lesson of my argument is that much of current empirical research in moral psychology is based on a far too narrow understanding of intuition and deliberation.
The entire book chapter is here.
Tuesday, September 1, 2015
The moral naivete of ethics by numbers
By Susan Dwyer
Aljazeera America
Originally posted August 13, 2015
What do bioethicists do? According to a recent Boston Globe op-ed by the Harvard cognitive psychologist Steven Pinker, they needlessly get in the way of saving and improving human lives by throwing up ethical red tape and slowing the speed of research, and in so doing, they undermine their right to call themselves ethicists at all.
In principle, it is correct that if 250,000 people die each year of a disease that is potentially treatable, the cost of every year’s delay in research is 250,000 lives. And it is certainly terrible to lose so many people to unnecessary delays. But Pinker doesn’t cite a single specific example in which bioethical scrutiny has produced such a result. Certainly, the withholding of experimental drugs has cost lives; for example, ZMapp, an experimental drug to treat Ebola, was not readily available to people in several African nations who were dying of the disease. Yet there was little of the drug on hand, in any case. But the problem here was not ethical red tape; it was the underfunding of research to treat “exotic” infectious disease.
The entire article is here.
Wednesday, July 29, 2015
The Logic of Effective Altruism
By Peter Singer
Boston Review
Originally posted July 6, 2015
Here is an excerpt:
Effective altruism is based on a very simple idea: we should do the most good we can. Obeying the usual rules about not stealing, cheating, hurting, and killing is not enough, or at least not enough for those of us who have the good fortune to live in material comfort, who can feed, house, and clothe ourselves and our families and still have money or time to spare. Living a minimally acceptable ethical life involves using a substantial part of our spare resources to make the world a better place. Living a fully ethical life involves doing the most good we can.
Most effective altruists are millennials—members of the first generation to have come of age in the new millennium. They are pragmatic realists, not saints, so very few claim to live a fully ethical life. Most of them are somewhere on the continuum between a minimally acceptable ethical life and a fully ethical life. That doesn’t mean they go about feeling guilty because they are not morally perfect. Effective altruists don’t see a lot of point in feeling guilty. They prefer to focus on the good they are doing. Some of them are content to know they are doing something significant to make the world a better place. Many of them like to challenge themselves to do a little better this year than last year.
The entire article is here.
Wednesday, March 4, 2015
‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good
By Guy Kahane, Jim A.C. Everett, Brian Earp, Miguel Farias, and Julian Savulescu
Cognition
Volume 134, January 2015, Pages 193–209
Abstract
A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.
Highlights
• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.
The entire article is here.
Thursday, December 4, 2014
‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good
By G. Kahane, J. Everett, Brian Earp, Miguel Farias, and J. Savulescu
Cognition, Vol 134, Jan 2015, pp 193-209.
Highlights
• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.
Abstract
A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.
The entire article is here.
Thursday, November 13, 2014
The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas
Aaron A. Duke and Laurent Bègue
Cognition
Volume 134, January 2015, Pages 121–127
Highlights
• Greene’s dual-process theory of moral reasoning needs revision.
• Blood alcohol concentration is positively correlated with utilitarianism.
• Self-reported disinhibition is positively correlated with utilitarianism.
• Decreased empathy predicts utilitarianism better than increased deliberation.
Abstract
The hypothetical moral dilemma known as the trolley problem has become a methodological cornerstone in the psychological study of moral reasoning, and yet there remains considerable debate as to the meaning of utilitarian responding in these scenarios. It is unclear whether utilitarian responding results primarily from increased deliberative reasoning capacity or from decreased aversion to harming others. In order to clarify this question, we conducted two field studies to examine the effects of alcohol intoxication on utilitarian responding. Alcohol holds promise in clarifying the above debate because it impairs both social cognition (i.e., empathy) and higher-order executive functioning. Hence, the direction of the association between alcohol and utilitarian vs. non-utilitarian responding should inform the relative importance of both deliberative and social processing systems in influencing utilitarian preference. In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France, participants were presented with a moral dilemma assessing their willingness to sacrifice one life to save five others. Participants’ blood alcohol concentrations were found to positively correlate with utilitarian preferences (r = .31, p < .001), suggesting a stronger role for impaired social cognition than intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma. Implications for Greene’s dual-process model of moral reasoning are discussed.
Wednesday, April 23, 2014
Damage to the prefrontal cortex increases utilitarian moral judgements
By Michael Koenigs, Liane Young, Ralph Adolphs, Daniel Tranel, Fiery Cushman, Marc Hauser, and Antonio Damasio
Nature. Apr 19, 2007; 446(7138): 908–911.
Published online Mar 21, 2007
doi: 10.1038/nature05631
Abstract
The psychological and neurobiological processes underlying moral judgement have been the focus of many recent empirical studies. Of central interest is whether emotions play a causal role in moral judgement, and, in parallel, how emotion-related areas of the brain contribute to moral judgement. Here we show that six patients with focal bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region necessary for the normal generation of emotions and, in particular, social emotions, produce an abnormally ‘utilitarian’ pattern of judgements on moral dilemmas that pit compelling considerations of aggregate welfare against highly emotionally aversive behaviours (for example, having to sacrifice one person’s life to save a number of other lives). In contrast, the VMPC patients’ judgements were normal in other classes of moral dilemmas. These findings indicate that, for a selective set of moral dilemmas, the VMPC is critical for normal judgements of right and wrong. The findings support a necessary role for emotion in the generation of those judgements.
The entire article is here.