Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Dilemmas.

Thursday, December 4, 2014

‘Utilitarian’ judgments in sacrificial moral dilemmas do not reflect impartial concern for the greater good

By Guy Kahane, Jim A. C. Everett, Brian D. Earp, Miguel Farias, and Julian Savulescu
Cognition, Vol. 134, January 2015, pp. 193-209.

Highlights

• ‘Utilitarian’ judgments in moral dilemmas were associated with egocentric attitudes and less identification with humanity.
• They were also associated with lenient views about clear moral transgressions.
• ‘Utilitarian’ judgments were not associated with views expressing impartial altruist concern for others.
• This lack of association remained even when antisocial tendencies were controlled for.
• So-called ‘utilitarian’ judgments do not express impartial concern for the greater good.

Abstract

A growing body of research has focused on so-called ‘utilitarian’ judgments in moral dilemmas in which participants have to choose whether to sacrifice one person in order to save the lives of a greater number. However, the relation between such ‘utilitarian’ judgments and genuine utilitarian impartial concern for the greater good remains unclear. Across four studies, we investigated the relationship between ‘utilitarian’ judgment in such sacrificial dilemmas and a range of traits, attitudes, judgments and behaviors that either reflect or reject an impartial concern for the greater good of all. In Study 1, we found that rates of ‘utilitarian’ judgment were associated with a broadly immoral outlook concerning clear ethical transgressions in a business context, as well as with sub-clinical psychopathy. In Study 2, we found that ‘utilitarian’ judgment was associated with greater endorsement of rational egoism, less donation of money to a charity, and less identification with the whole of humanity, a core feature of classical utilitarianism. In Studies 3 and 4, we found no association between ‘utilitarian’ judgments in sacrificial dilemmas and characteristic utilitarian judgments relating to assistance to distant people in need, self-sacrifice and impartiality, even when the utilitarian justification for these judgments was made explicit and unequivocal. This lack of association remained even when we controlled for the antisocial element in ‘utilitarian’ judgment. Taken together, these results suggest that there is very little relation between sacrificial judgments in the hypothetical dilemmas that dominate current research, and a genuine utilitarian approach to ethics.

The entire article is here.

Thursday, November 20, 2014

Teaching Moral Values

Panellists: Michael Portillo, Anne McElvoy, Claire Fox and Giles Fraser

Witnesses: Adrian Bishop, Dr. Sandra Cooke, Professor Jesse Prinz and Dr. Ralph Levinson

Teaching your children a set of moral values to live by is arguably one of the most important aspects of being a parent - and for some, one of the most neglected. In Japan that job could soon be handed to teachers and become part of the school curriculum. The Central Council for Education is making preparations to introduce moral education as an official school subject, on a par with traditional subjects like Japanese, mathematics and science. In a report, the council says that since moral education plays an important role not only in helping children realise a better life for themselves but also in ensuring the sustainable development of the Japanese state and society, it should be taught more formally and the subject codified.

The prospect of the state defining a set of approved values to be taught raises some obvious questions, but is it very far away from what we already accept? School websites often talk of their "moral ethos". The much quoted aphorism "give me the child until he is seven and I'll give you the man" is attributed to the Jesuits, and why are church schools so popular if not for their faith-based ethos? Moral philosophy is an enormously diverse subject, but why not use it to give children a broad set of tools and questions to ask, to help them make sense of a complex and contradictory world?

If we try to make classrooms morally neutral zones, are we just encouraging moral relativism? Our society is becoming increasingly secular and finding it hard to define a set of common values. As another disputed epigram puts it: "When men stop believing in God, they don't believe in nothing. They believe in anything."

Could moral education fill the moral vacuum?

Moral Maze - Presented by Michael Buerk

The audio file can be accessed here.

Monday, October 6, 2014

On Aiming for Moral Mediocrity

By Eric Schwitzgebel
The Splintered Mind Blog
Originally published October 2, 2014

People seem to calibrate toward moral mediocrity. If we see, or are told, that many people violate a norm, that seems to increase the rate at which we ourselves violate the norm (e.g., Cialdini et al., 2006; Keizer et al., 2011 [though see here]). Commit a good deed or think of yourself in a good light, and shortly thereafter you might be more likely to commit a bad deed, or less likely to commit another good deed, than you otherwise would have been ("moral self-licensing"; though see here). Susan Wolf tells us that people do not, and should not, aim to be moral saints. But maybe she understates the case: Not only do people not want to be saints, they don't even want to be particularly good.

The entire blog post is here.

Wednesday, October 1, 2014

Possible neurobiological basis for tradeoff between honesty, self-interest

By Ashley Wenners Herron
ScienceDirect
Originally published September 4, 2014

Summary:

What's the price on your integrity? Tell the truth; everyone has a tipping point. We all want to be honest, but at some point, we'll lie if the benefit is great enough. Now, scientists have confirmed the area of the brain in which we make that decision, using advanced imaging techniques to study how the brain makes choices about honesty.

(cut)

The study sheds light on the neuroscientific basis and broader nature of honesty. Moral philosophers and cognitive psychologists have had longstanding, contrasting hypotheses about the mechanisms governing the tradeoff between honesty and self-interest.

The "Grace" hypothesis suggests that people are innately honest and have to control honest impulses if they want to profit. The "Will" hypothesis holds that self-interest is our automatic response.

The entire article is here.

Tuesday, August 12, 2014

Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology

By C. W. Bauman, A. P. McGraw, D. M. Bartels, and C. Warren

Abstract

Sacrificial dilemmas, especially trolley problems, have rapidly become the most recognizable scientific exemplars of moral situations; they are now a familiar part of the psychological literature and are featured prominently in textbooks and the popular press. We are concerned that studies of sacrificial dilemmas may lack experimental, mundane, and psychological realism and therefore suffer from low external validity. Our apprehensions stem from three observations about trolley problems and other similar sacrificial dilemmas: (i) they are amusing rather than sobering, (ii) they are unrealistic and unrepresentative of the moral situations people encounter in the real world, and (iii) they do not elicit the same psychological processes as other moral situations. We believe it would be prudent to use more externally valid stimuli when testing descriptive theories that aim to provide comprehensive accounts of moral judgment and behavior.

The entire paper is here.

Thursday, July 17, 2014

Moral Dilemmas

The Stanford Encyclopedia of Philosophy
Revised June 30, 2014

Here is an excerpt:

What is common to the two well-known cases is conflict. In each case, an agent regards herself as having moral reasons to do each of two actions, but doing both actions is not possible. Ethicists have called situations like these moral dilemmas. The crucial features of a moral dilemma are these: the agent is required to do each of two (or more) actions; the agent can do each of the actions; but the agent cannot do both (or all) of the actions. The agent thus seems condemned to moral failure; no matter what she does, she will do something wrong (or fail to do something that she ought to do).

The Platonic case strikes many as too easy to be characterized as a genuine moral dilemma. For the agent's solution in that case is clear; it is more important to protect people from harm than to return a borrowed weapon. And in any case, the borrowed item can be returned later, when the owner no longer poses a threat to others. Thus in this case we can say that the requirement to protect others from serious harm overrides the requirement to repay one's debts by returning a borrowed item when its owner so demands. When one of the conflicting requirements overrides the other, we do not have a genuine moral dilemma. So in addition to the features mentioned above, in order to have a genuine moral dilemma it must also be true that neither of the conflicting requirements is overridden (Sinnott-Armstrong 1988, Chapter 1).

The entire page is here.

Editor's note: Anyone interested in ethics and morality needs to read this page. It is an excellent source for understanding moral dilemmas, as well as the ethical dilemmas psychologists encounter in their professional role.

Wednesday, April 30, 2014

The Heinz Dilemma Might Reveal That Morality Is Meaningless

By Esther Inglis-Arkell
io9.com
Originally published April 29, 2014

Here is an excerpt:

But if this finding is true, it seems there are bigger problems with morality. What this experiment seems to say is people can take the same situation, and argue the same principles - social roles, the importance of interpersonal relationships, the likelihood of punishment, and pure humanitarian principles - and come to exactly opposite moral conclusions. And they do this for their whole lives. Sure, it's interesting to see that principles evolve over time, but it's more interesting to see that principles - at least the ones confined solely to the human mind - are irrelevant. There is no method or guiding idea that could possibly allow any group of humanity to come to a consensus. Morality, then, is basically chaos. We can start from the same place, and follow the same principles, and end at diametrically opposite ends of a problem, and there's no way to resolve that.

The entire blog post is here.

Editor's note:

I posted this piece to demonstrate that many struggle to understand morality. First, moral psychology has moved well past Kohlberg. Psychologists, especially those who study moral psychology, understand the theoretical and research limitations of his model. Please listen to podcast Episode 7 to get a flavor of this.

Second, the belief that "morality, then, is basically chaos" is also uninformed. In moral decision-making, individuals can use different principles to generate different conclusions. This does not indicate that morality is in chaos; rather, it demonstrates how people use different moral systems to judge and respond to moral dilemmas.

Third, a true moral dilemma involves competing principles. If it is truly a moral dilemma, then there is no "correct" or "right" answer. A true dilemma places an individual in a moral or ethical bind, and there are cognitive and emotional strategies for generating solutions to these sometimes impossible problems. Podcasts 5 and 6 demonstrate how psychologists can knit together possible solutions to ethical dilemmas because, in part, they bring their own moral systems, values, and biases to their work.

The podcasts can be found here.


Wednesday, April 23, 2014

Damage to the prefrontal cortex increases utilitarian moral judgements

By Michael Koenigs, Liane Young, Ralph Adolphs, Daniel Tranel, Fiery Cushman, Marc Hauser, and Antonio Damasio
Nature. Apr 19, 2007; 446(7138): 908–911.
Published online Mar 21, 2007
doi: 10.1038/nature05631

Abstract

The psychological and neurobiological processes underlying moral judgement have been the focus of many recent empirical studies. Of central interest is whether emotions play a causal role in moral judgement, and, in parallel, how emotion-related areas of the brain contribute to moral judgement. Here we show that six patients with focal bilateral damage to the ventromedial prefrontal cortex (VMPC), a brain region necessary for the normal generation of emotions and, in particular, social emotions, produce an abnormally ‘utilitarian’ pattern of judgements on moral dilemmas that pit compelling considerations of aggregate welfare against highly emotionally aversive behaviours (for example, having to sacrifice one person’s life to save a number of other lives). In contrast, the VMPC patients’ judgements were normal in other classes of moral dilemmas. These findings indicate that, for a selective set of moral dilemmas, the VMPC is critical for normal judgements of right and wrong. The findings support a necessary role for emotion in the generation of those judgements.

The entire article is here.

Sunday, March 2, 2014

The Tragedy Of The Mental Commons

By Kevin Arnold
Films for Action
Originally published January 22, 2011

Here is an excerpt:

Thirty-five years ago, Garrett Hardin, a professor at the University of California, Santa Barbara, authored a ground-breaking article in the journal Science that introduced an idea: the tragedy of the commons. Our survival was at stake, he argued, if we failed to open our eyes and realize that Earth's physical resources were finite. Treating them as a free-for-all was no longer acceptable if we wanted to reduce human suffering and prolong our existence on this planet.

To illustrate the tragedy, he used the example of 14th-century common land. 'Picture a pasture open to all,' he wrote. 'It is to be expected that each herdsman will try to keep as many cattle as possible on the commons.' When a herder adds a cow to the pasture, he reaps the benefit of a larger herd. Meanwhile, the cost of the animal - the damage done to the pasture - is divided among all the herdsmen.

This continues until, finally, the herders reach a delicate point: as the pasture becomes overgrazed, each new animal threatens the well-being of the entire herd. 'At this point,' Hardin argues, 'the inherent logic of the commons remorselessly generates tragedy.'

The entire article is here.

Thanks to Ed Zuckerman for this article.

Wednesday, January 29, 2014

Moral luck: Neiladri Sinhababu

Published on Dec 2, 2013

A talk on moral luck, examining through a number of examples when blame and virtue can be assigned to human actions. Neil Sinhababu is Assistant Professor of Philosophy at the National University of Singapore. His research is mainly on ethics. His paper on romantic relationships with people from other universes, "Possible Girls", was featured in the Washington Post on Valentine's Day.


At Issue in 2 Wrenching Cases: What to Do After the Brain Dies

By BENEDICT CAREY and DENISE GRADY
The New York Times
Originally posted January 9, 2014

In one way, the cases are polar opposites: the parents of Jahi McMath in Oakland, Calif., have fought to keep their daughter connected to a ventilator, while the parents and husband of Marlise Muñoz in Fort Worth, Tex., want desperately to turn the machine off. In another way, the cases are identical: both families have been shocked to learn that a loved one was declared brain-dead — and that hospital officials defied the family’s wishes for treatment.

Their wrenching stories raise questions about how brain death is determined, and who has the right to decide how such patients are treated.

The entire story is here.

Wednesday, January 22, 2014

Understanding Moral Values in Psychotherapy

By John Gavazzi and Sam Knapp
Submitted for publication

Psychotherapy is not a value-free experience; hence, morality plays a role in the helping relationship. The psychologist’s role in psychotherapy inherently entails more power in the relationship. Therefore, to work in their patient’s best interest, psychologists need to remain aware of the power imbalance and their potential influence on the belief systems and values of their patients. All psychologists have the ability to influence their patients in many areas of their lives including the domains of morality, values, and ethics.

In terms of psychotherapy training, psychologists need to be aware of their moral beliefs as these apply to a variety of topics in psychotherapy. Patients come to psychotherapy with diverse beliefs and backgrounds, so psychologists need to be open to the diversities of modern American life. Psychologists also need to be aware of the limits of what they consider acceptable in terms of their patients' thoughts, feelings, and behaviors. Psychologists and patients who have congruent belief systems rarely discuss how their synchronous values work toward a positive outcome, although congruence between the value systems of clients and psychologists is correlated with successful outcomes in psychotherapy (Beutler & Bergan, 1991). Furthermore, research supports the idea that patient values shift toward psychologist values during therapy (Williams & Levitt, 2007). This finding is a less obvious result of psychotherapy, and typically not a planned goal of therapy.

Friday, December 6, 2013

Pushing the Intuitions behind Moral Internalism

Derek Leben and Kristine Wilckens

Introduction 

Moral Internalism claims that there is a necessary connection between judging that some action is morally right/wrong and being motivated to perform/avoid that action. For instance, if I sincerely believe that it is morally wrong to eat animals, then I would be automatically motivated not to eat animals. If I sincerely believe that it is morally required for me to take care of my children, then I would be automatically motivated to take care of my children. This claim is called ‘Internalism’ (or more technically, ‘Motivational Judgment Internalism’) because in such cases, the motivation is internal to the evaluative judgment. There are different types of Moral Internalism, but we will here be concerned with the conceptual variety advocated by Hare (1952), which claims that the link between moral judgments and motivation is an a priori conceptual truth.

The fact that Internalism appears intuitively to be true specifically for moral judgments has been extremely important to moral philosophers. In response to the skeptical question: “Why should I care about right and wrong?” some ethicists have argued that the question is nonsensical, since by making judgments about right and wrong, one is automatically motivated to care about these judgments. In response to the question: “What kind of judgments are moral judgments?” philosophers going back to Hume have argued that beliefs like ‘my car is black’ or ‘today is Tuesday’ can never in themselves motivate or direct anyone to perform some action, but only in conjunction with an emotion. If one adopts this Humean Theory of Motivation along with Moral Internalism, then, as Hume states, “it is impossible that the distinction betwixt moral good and evil can be made by reason; since that distinction has an influence on our actions, of which reason alone is incapable” (Hume, 1739). In other words, since beliefs are never inherently motivating, moral judgments cannot be normal beliefs about the world. This conclusion is known as (psychological) non-cognitivism, and has obvious consequences for how we engage in moral debate and consideration. 


Sunday, November 24, 2013

Vantage Points and The Trolley Problem

By Thomas Nadelhoffer
Leiter Reports: A Philosophy Blog
Originally posted November 10, 2013

Here is an excerpt:

The standard debates about scenarios like BAS (Bystander at the Switch) typically focus on what it is permissible for the bystander to do given the rights of the few who have to be sacrificed involuntarily in order to save the many. In a paper I have been working on in fits and starts for too damn long now, I try to shift the vantage point from which we view cases like BAS and I suggest doing so yields some interesting results.  Rather than looking at BAS from the perspective of the bystanders—and what it is permissible (or impermissible) for them to do—I examine BAS instead from the point of view of the individuals whose lives hang in the balance. This change of vantage points highlights some possible tensions that may exist in our ever shifting intuitions.

For instance, let’s reexamine BAS from the point of view of the five people who will be killed if the bystander perhaps understandably cannot bring herself to hit the switch. Imagine that one of the five workmen has a gun and it becomes clear that the bystander is not going to be able to bring herself to divert the trolley.  Would it be permissible for the workman with the gun to shoot and kill the bystander if doing so was the only way of getting her to fall onto the switch?

The entire blog post is here.

Saturday, October 19, 2013

Second-Person vs. Third-Person Presentations of Moral Dilemmas

By Eric Schwitzgebel
Experimental Philosophy Blog
Originally published on 10/03/2013

You know the trolley problems, of course. An out-of-control trolley is headed toward five people it will kill if nothing is done. You can flip a switch and send it to a side track where it will kill one different person instead. Should you flip the switch? What if, instead of flipping a switch, the only way to save the five is to push someone into the path of the trolley, killing that one person?

In evaluating this scenario, does it matter if the person standing near the switch with the life-and-death decision to make is "John" as opposed to "you"? Nadelhoffer & Feltz presented the switch version of the trolley problem to undergraduates from Florida State University. Forty-three saw the problem with "you" as the actor; 65% of them said it was permissible to throw the switch. Forty-two saw the problem with "John" as the actor; 90% of them said it was permissible to throw the switch, a statistically significant difference.
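For readers curious how a "statistically significant difference" is checked on counts like these, here is a minimal sketch of a Pearson chi-square test on a 2x2 table. The raw counts are reconstructed from the reported percentages (65% of 43 is roughly 28; 90% of 42 is roughly 38), so they are approximations for illustration, not the authors' exact data.

```python
# Stdlib-only chi-square test of independence for a 2x2 table.
def chi_square_2x2(table):
    """Return the Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: "you" condition, "John" condition.
# Columns: judged permissible, judged impermissible.
table = [[28, 15], [38, 4]]
stat = chi_square_2x2(table)
print(round(stat, 2))
# With 1 degree of freedom, the .05 critical value is 3.841,
# so a statistic above that threshold counts as significant.
print(stat > 3.841)
```

On these reconstructed counts the statistic comfortably exceeds the critical value, consistent with the significant difference the post reports.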

Saturday, March 3, 2012

Health Care Issues Intensify U.S. Debate Over Conscience in the Workplace

By Stephanie Simon
Reuters
Originally published February 22, 2012

Can a state require a pharmacy to stock and dispense emergency contraception -- even when the owner considers the drug immoral?

That's the question at the heart of a long-running legal battle in Washington state, expected to be decided Wednesday with a ruling from the U.S. District Court in Seattle.

It's the latest twist in a contentious national debate over the role of conscience in the workplace.

In recent weeks, the debate has been dominated by religious groups fighting to overturn a federal mandate that most health insurance plans provide free birth control. But the battle extends far beyond insurance regulations.

Asserting conscientious objections, nurses in New Jersey have said they would not check the vital signs of patients recovering from abortions. Infertility specialists in California would not perform artificial insemination on a lesbian. An ambulance driver in Illinois declined to transport a patient to an abortion clinic.

In the Washington case, a family-owned pharmacy in Olympia declined to stock emergency contraception, which can prevent pregnancy if taken within 72 hours of unprotected sex. Co-owner Kevin Stormans says he considers the drug equivalent to an abortion, because it can prevent implantation of a fertilized egg. His two pharmacists agree.

Their decision to keep the drug off their shelves came under fire in 2007, when the state Board of Pharmacy enacted a rule requiring pharmacies to stock and dispense all time-sensitive medications in demand in their community. In the case of the Olympia pharmacy, that includes emergency contraception, said Tim Church, a state Department of Health spokesman. The pharmacy's owner and employees filed suit to block the mandate.

"All our family wants ... is to serve our customers in keeping with our deepest values," Stormans said in a statement issued by his attorneys.

The state argues that it has a compelling interest in protecting the right of patients to legal medication.

The conscience debate has implications for a vast number of patients. A 2007 New England Journal of Medicine study found that 14% of doctors do not believe they are obligated to tell patients about possible treatments that they personally consider morally objectionable. Nearly 30% of physicians said they had no obligation to refer patients to another provider for treatments they wouldn't offer themselves. A more recent study, published last week in the Journal of Medical Ethics, echoed the finding on referrals.

And abortion and contraception aren't the only medical services at issue. Physicians also may object to following directives from terminally ill patients to remove feeding tubes or ventilators, said Kathryn Tucker, director of legal affairs for Compassion & Choices, an advocacy group that backs physician-assisted suicide.

Entire story is here.

Monday, January 2, 2012

Moral dilemma: Would you kill one person to save five?

Michigan State University News
Released December 1, 2011

C. D. Navarrete, PhD
EAST LANSING, Mich. — Imagine a runaway boxcar heading toward five people who can’t escape its path. Now imagine you had the power to reroute the boxcar onto different tracks with only one person along that route.

Would you do it?

That’s the moral dilemma posed by a team of Michigan State University researchers in a first-of-its-kind study published in the research journal Emotion. Research participants were put in a three-dimensional setting and given the power to kill one person (in this case, a realistic digital character) to save five.

The results? About 90 percent of the participants pulled a switch to reroute the boxcar, suggesting people are willing to violate a moral rule if it means minimizing harm.

“What we found is that the rule of ‘Thou shalt not kill’ can be overcome by considerations of the greater good,” said Carlos David Navarrete, lead researcher on the project.

As an evolutionary psychologist, Navarrete explores big-picture topics such as morality – in other words, how do we come to our moral judgments and does our behavior follow suit?

His latest experiment offers a new twist on the “trolley problem,” a moral dilemma that philosophers have contemplated for decades. But this is the first time the dilemma has been posed as a behavioral experiment in a virtual environment, “with the sights, sounds and consequences of our actions thrown into stark relief,” the study says.

The research participants were presented with a 3-D simulated version of the classic dilemma through a head-mounted device. Sensors were attached to their fingertips to monitor emotional arousal.

In the virtual world, each participant was stationed at a railroad switch where two sets of tracks veered off. Up ahead and to their right, five people hiked along the tracks in a steep ravine that prevented escape. On the opposite side, a single person hiked along in the same setting.

As the boxcar approached over the horizon, the participants could either do nothing – letting the coal-filled boxcar go along its route and kill the five hikers – or pull a switch (in this case a joystick) and reroute it to the tracks occupied by the single hiker.

Of the 147 participants, 133 (or 90.5 percent) pulled the switch to divert the boxcar, resulting in the death of the one hiker. Fourteen participants allowed the boxcar to kill the five hikers (11 participants did not pull the switch, while three pulled the switch but then returned it to its original position).
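The reported breakdown is easy to check from the counts stated in the paragraph above; a quick arithmetic sketch:

```python
# Verify the figures reported in the MSU study write-up.
participants = 147
diverted = 133             # pulled the switch, killing the one hiker
did_not_pull = 11          # never touched the switch
pulled_then_reversed = 3   # pulled it but returned it to its original position

allowed_five_deaths = did_not_pull + pulled_then_reversed
print(allowed_five_deaths)                          # 14, matching the text
print(round(100 * diverted / participants, 1))      # 90.5, matching the text
```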

The findings are consistent with past research that was not virtual-based, Navarrete said.

The study also found that participants who did not pull the switch were more emotionally aroused. The reasons for this are unknown, although it may be because people freeze up during highly anxious moments - akin to a soldier failing to fire his weapon in battle, Navarrete said.

“I think humans have an aversion to harming others that needs to be overridden by something,” Navarrete said. “By rational thinking we can sometimes override it – by thinking about the people we will save, for example. But for some people, that increase in anxiety may be so overpowering that they don’t make the utilitarian choice, the choice for the greater good.”

A 2-D example of the virtual environment used in this study is available at www.cdnresearch.net/vr.

Sunday, December 18, 2011

The Psychology of Moral Reasoning


This article is found in the public domain here.