Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Trolley Problem.

Thursday, October 1, 2015

Ethics Won't Be A Big Problem For Driverless Cars

By Adam Ozimek
Forbes Magazine
Originally posted September 13, 2015

Skeptics of driverless cars have a variety of criticisms, from technical to demand-based, but perhaps the most curious is the supposed ethical trolley problem the cars create. While the question of how driverless cars will behave in ethical situations is interesting and will ultimately have to be answered by programmers, critics greatly exaggerate its importance. In addition, they assume that driverless cars have to be perfect rather than just better.

(cut)

Patrick Lin asks “Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child?” But seriously, how often do drivers actually make this decision? Accidents that present this choice seem pretty rare. And if I am wrong and we’re actually living in a world rife with trolley problems for drivers, it seems likely that bad human driving and poor foresight create many of them. Driverless cars that don’t get distracted, don’t speed dangerously, and can see 360 degrees will make it less likely that split-second life-and-death choices need to be made.

The entire article is here.

Thursday, August 20, 2015

Life After Faith

Richard Marshall interviews Philip Kitcher
3:AM Magazine
Originally published on August 2, 2015

Here is an excerpt:

Thought experiments work when, and only when, they call into action cognitive capacities that might reliably deliver the conclusions drawn. When the question posed is imprecise, your thought experiment is typically useless. But even more crucial is the fact that the stripped-down scenarios many philosophers love simply don’t mesh with our intellectual skills. The story rules out by fiat the kinds of reactions we naturally have in the situation described. Think of the trolley problem in which you are asked to decide whether to push the fat man off the bridge. If you imagine yourself – seriously imagine yourself – in the situation, you’d look around for alternatives, you’d consider talking to the fat man, volunteering to jump with him, etc. etc. None of that is allowed. So you’re offered a forced choice about which most people I know are profoundly uneasy. The “data” delivered are just the poor quality evidence any reputable investigator would worry about using. (I like Joshua Greene’s fundamental idea of investigating people’s reactions; but I do wish he’d present them with better questions.)

Philosophers love to appeal to their “intuitions” about these puzzle cases. They seem to think they have access to little nuggets of wisdom. We’d all be much better off if the phrase “My intuition is …” were replaced by “Given my evolved psychological adaptations and my distinctive enculturation, when faced by this perplexing scenario, I find myself, more or less tentatively, inclined to say …” Maybe there are occasions in which the cases bring out some previously unnoticed facet of the meaning of a word. But, for a pragmatist like me, the important issues concern the words we might deploy to achieve our purposes, rather than the language we actually use.

If the intuition-mongering were abandoned, would that be the end of philosophy? It would be the end of a certain style of philosophy – a style that has cut philosophy off, not only from the humanities but from every other branch of inquiry and culture. (In my view, most of current Anglophone philosophy is quite reasonably seen as an ingrown conversation pursued by very intelligent people with very strange interests.) But it would hardly stop the kinds of investigation that the giants of the past engaged in. In my view, we ought to replace the notion of analytic philosophy by that of synthetic philosophy. Philosophers ought to aspire to know lots of different things and to forge useful synthetic perspectives.

The entire interview is here.

Friday, May 8, 2015

TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing

Jeurissen D, Sack AT, Roebroeck A, Russ BE and Pascual-Leone A (2014) TMS affects moral judgment, showing the role of DLPFC and TPJ in cognitive and emotional processing.
Front. Neurosci. 8:18. doi: 10.3389/fnins.2014.00018

Decision-making involves a complex interplay of emotional responses and reasoning processes. In this study, we use TMS to explore the neurobiological substrates of moral decisions in humans. To examine the effects of TMS on the outcome of a moral decision, we compare the decision outcomes of moral-personal and moral-impersonal dilemmas to each other and examine the differential effects of applying TMS over the right DLPFC or right TPJ. In this comparison, we find that TMS-induced disruption of the DLPFC during the decision process affects the outcome of moral-personal judgments, while TMS-induced disruption of the TPJ affects only moral-impersonal conditions. In other words, we find a double dissociation between DLPFC and TPJ in the outcome of a moral decision. Furthermore, we find that TMS-induced disruption of the DLPFC during non-moral, moral-impersonal, and moral-personal decisions leads to lower ratings of regret about the decision. Our results are in line with dual-process theory and suggest a role for both the emotional response and the cognitive reasoning process in moral judgment. Both the emotional and cognitive processes were shown to be involved in the decision outcome.

The entire article is here.

Saturday, April 25, 2015

On the Normative Significance of Experimental Moral Psychology

Victor Kumar and Richmond Campbell
Philosophical Psychology 
Vol. 25, Iss. 3, 2012, 311-330.

Experimental research in moral psychology can be used to generate debunking arguments in ethics. Specifically, research can indicate that we draw a moral distinction on the basis of a morally irrelevant difference. We develop this naturalistic approach by examining a recent debate between Joshua Greene and Selim Berker. We argue that Greene’s research, if accurate, undermines attempts to reconcile opposing judgments about trolley cases, but that his attempt to debunk deontology fails. We then draw some general lessons about the possibility of empirical debunking arguments in ethics.

The entire article is here.


Tuesday, January 13, 2015

Is Applied Ethics Applicable Enough? Acting and Hedging under Moral Uncertainty

By Grace Boey
3 Quarks Daily
Originally published December 16, 2014

Here are two excerpts:

Lots has been written about moral decision-making under factual uncertainty. Michael Zimmerman, for example, has written an excellent book on how such ignorance impacts morality. The point of most ethical thought experiments, though, is to eliminate precisely this sort of uncertainty. Ethicists are interested in finding out things like whether, once we know all the facts of the situation, and all other things being equal, it's okay to engage in certain actions. If we're still not sure of the rightness or wrongness of such actions, or of underlying moral theories themselves, then we experience moral uncertainty.

(cut)

So, what's the best thing to do when we're faced with moral uncertainty? Unless one thinks that anything goes once uncertainty enters the picture, then doing nothing by default is not a good strategy. As the trolley case demonstrates, inaction often has major consequences. Failure to act also comes with moral ramifications...

The entire blog post is here.

Thursday, December 4, 2014

Why I Am Not a Utilitarian

By Julian Savulescu
Practical Ethics Blog
Originally posted November 15, 2014

Utilitarianism is a widely despised, denigrated and misunderstood moral theory.

Kant himself described it as a morality fit only for English shopkeepers. (Kant had much loftier aspirations of entering his own “noumenal” world.)

The adjective “utilitarian” now has negative connotations like “Machiavellian”. It is associated with “the end justifies the means” or using people as a mere means or failing to respect human dignity, etc.

For example, consider the following negative uses of “utilitarian.”

“Don’t be so utilitarian.”

“That is a really utilitarian way to think about it.”

To say someone is behaving in a utilitarian manner is to say something derogatory about their behaviour.

The entire article is here.

Thursday, November 13, 2014

The drunk utilitarian: Blood alcohol concentration predicts utilitarian responses in moral dilemmas

Aaron A. Duke and Laurent Bègue
Cognition
Volume 134, January 2015, Pages 121–127

Highlights

• Greene’s dual-process theory of moral reasoning needs revision.
• Blood alcohol concentration is positively correlated with utilitarianism.
• Self-reported disinhibition is positively correlated with utilitarianism.
• Decreased empathy predicts utilitarianism better than increased deliberation.

Abstract

The hypothetical moral dilemma known as the trolley problem has become a methodological cornerstone in the psychological study of moral reasoning and yet, there remains considerable debate as to the meaning of utilitarian responding in these scenarios. It is unclear whether utilitarian responding results primarily from increased deliberative reasoning capacity or from decreased aversion to harming others. In order to clarify this question, we conducted two field studies to examine the effects of alcohol intoxication on utilitarian responding. Alcohol holds promise in clarifying the above debate because it impairs both social cognition (i.e., empathy) and higher-order executive functioning. Hence, the direction of the association between alcohol and utilitarian vs. non-utilitarian responding should inform the relative importance of both deliberative and social processing systems in influencing utilitarian preference. In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France, participants were presented with a moral dilemma assessing their willingness to sacrifice one life to save five others. Participants’ blood alcohol concentrations were found to positively correlate with utilitarian preferences (r = .31, p < .001) suggesting a stronger role for impaired social cognition than intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma. Implications for Greene’s dual-process model of moral reasoning are discussed.
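As a quick, back-of-the-envelope check on the reported effect, the significance of a Pearson correlation can be approximated from the published r and sample size alone using the standard t-transformation. The minimal Python sketch below (assuming SciPy is installed) uses only the figures quoted in the abstract; because r = .31 is rounded, the computed p-value will only approximate the exact value the authors report.

# Approximate significance check using only the reported r = .31 and N = 103.
# Rounding of r means this only approximates the published p-value.
import math
from scipy.stats import t as t_dist

r, n = 0.31, 103
df = n - 2
t_stat = r * math.sqrt(df) / math.sqrt(1 - r ** 2)   # t = r * sqrt(n-2) / sqrt(1 - r^2)
p_two_tailed = 2 * t_dist.sf(t_stat, df)             # two-tailed p from the t distribution
print(f"t({df}) = {t_stat:.2f}, two-tailed p = {p_two_tailed:.4f}")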

Sunday, September 21, 2014

Moral decision-making and the brain

NEURO.tv - Episode 11
Published on Aug 16, 2014

What experiments do psychologists use to identify the brain areas involved in moral decision-making? Do moral truths exist? We discuss with Joshua D. Greene, Professor of Psychology at Harvard University and author of Moral Tribes.




Wednesday, September 3, 2014

Is One of the Most Popular Psychology Experiments Worthless?

By Olga Khazan
The Atlantic
Originally published July 24, 2014

Here is an excerpt:

But one group of researchers thinks it might be time to retire the trolley. In an upcoming paper that will be published in Social and Personality Psychology Compass, Christopher Bauman of the University of California, Irvine, Peter McGraw of the University of Colorado, Boulder, and others argue that the dilemma is too silly and unrealistic to be applicable to real-life moral problems. Therefore, they contend, it doesn't tell us as much about the human condition as we might hope.

In a survey of undergraduates, Bauman and McGraw found that 63 percent laughed "at least a little bit" in the fat-man scenario and 33 percent did so in the track-switching scenario. And that's an issue, because "humor may alter the decision-making processes people normally use to evaluate moral situations," they note. "A large body of research shows how positivity is less motivating than negativity."

The entire article is here.

Sunday, August 24, 2014

Beyond Good And Evil: New Science Casts Light On Morality In The Brain

By Carey Goldberg
CommonHealth
Originally posted August 7, 2014

For two decades, researchers have scanned and analyzed the brains of psychopaths and murderers, but they haven’t pinpointed any single source of evil in the brain. What they’ve found instead, as Buckholtz puts it, “is that our folk concepts of good and evil are much more complicated, and multi-faceted, and riven with uncertainty than we ever thought possible before.”

In other words, so much for the old idea that we have an angel on one shoulder and a devil on the other, and that morality is simply a battle between the two. Using new technology, brain researchers are beginning to tease apart the biology that underlies our decisions to behave badly or do good deeds. They’re even experimenting with ways to alter our judgments of what is right and wrong, and our deep gut feelings of moral conviction.

The entire article is here.

Friday, August 15, 2014

Moral judgement in adolescents: Age differences in applying and justifying three principles of harm

Paul C. Stey, Daniel Lapsley & Mary O. McKeever
European Journal of Developmental Psychology
Volume 10, Issue 2, 2013
DOI:10.1080/17405629.2013.765798

Abstract

This study investigated the application and justification of three principles of harm in a cross-sectional sample of adolescents in order to test recent theories concerning the source of intuitive moral judgements. Participants were 46 early (M age = 14.8 years) and 40 late adolescents (M age = 17.8 years). Participants rated the permissibility of various ethical dilemmas, and provided justifications for their judgements. Results indicated participants aligned their judgements with the three principles of harm, but had difficulty explaining their reasoning. Furthermore, although age groups were consistent in the application of the principles of harm, age differences emerged in their justifications. These differences were partly explained by differences in language ability. Additionally, participants who used emotional language in their justifications demonstrated a characteristically deontological pattern of moral judgement on certain dilemmas. We conclude that adolescents in this age range apply the principles of harm but that the ability to explain their judgements is still developing.

The entire article is here.

Tuesday, August 12, 2014

Revisiting External Validity: Concerns about Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology

By C. W. Bauman, A. P. McGraw, D. M. Bartels, and C. Warren

Abstract

Sacrificial dilemmas, especially trolley problems, have rapidly become the most recognizable scientific exemplars of moral situations; they are now a familiar part of the psychological literature and are featured prominently in textbooks and the popular press. We are concerned that studies of sacrificial dilemmas may lack experimental, mundane, and psychological realism and therefore suffer from low external validity. Our apprehensions stem from three observations about trolley problems and other similar sacrificial dilemmas: (i) they are amusing rather than sobering, (ii) they are unrealistic and unrepresentative of the moral situations people encounter in the real world, and (iii) they do not elicit the same psychological processes as other moral situations. We believe it would be prudent to use more externally valid stimuli when testing descriptive theories that aim to provide comprehensive accounts of moral judgment and behavior.

The entire paper is here.

Saturday, June 21, 2014

Morality pills: reality or science fiction?

The complexities of ethics and the brain make it difficult for scientists to develop a pill to enhance human morals

By Molly Crockett
The Guardian
Originally published June 3, 2014

Could we create a "morality pill"? Once the stuff of science fiction, recent studies in neuroscience have shown that brain chemicals can subtly influence some aspects of moral judgments and decisions. However, science is very far from creating pills that can turn sinners into saints, as I have argued many times before. So imagine my surprise when I came across the headline, “‘Morality Pills’ Close to Reality, Claims Scientist” – referring to a lecture I gave recently in London. (I asked the newspaper where the reporter got his misinformation, but received no response to my query.)

The entire story is here.

Sunday, December 8, 2013

Clang Went the Trolley

‘Would You Kill the Fat Man?’ and ‘The Trolley Problem’

By Sarah Bakewell
The New York Times
Originally published November 22, 2013

Here is an excerpt:

Nothing intrigues philosophers more than a phenomenon that seems simultaneously self-evident and inexplicable. Thus, ever since the moral philosopher Philippa Foot set out Spur as a thought experiment in 1967, a whole enterprise of “trolleyology” has unfolded, with trolleyologists generating ever more fiendish variants. (Fat Man was developed by the philosopher Judith Jarvis Thomson, in 1985.)

Some find it frivolous: One philosopher is quoted as snapping, “I just don’t do trolleys.” But it really matters what we do in such situations, sometimes on a vast scale. In 1944, new German V-1 rockets started pounding the southern suburbs of London, though they were clearly aimed at more central areas. The British not only let the Germans think the rockets were on target, but used double agents to feed them information suggesting they should adjust their aim even farther south. The government deliberately placed southern suburbanites in danger, but one scientific adviser, whose own family lived in South London, estimated that some 10,000 lives were saved as a result. A still more momentous decision occurred the following year when America dropped atom bombs on Hiroshima and Nagasaki on the argument that a quick end to the war would save lives — and by macabre coincidence, the Nagasaki bomb was nicknamed Fat Man.

The entire story is here.

Sunday, November 24, 2013

Vantage Points and The Trolley Problem

By Thomas Nadelhoffer
Leiter Reports: A Philosophy Blog
Originally posted November 10, 2013

Here is an excerpt:

The standard debates about scenarios like BAS (Bystander at the Switch) typically focus on what it is permissible for the bystander to do given the rights of the few who have to be sacrificed involuntarily in order to save the many. In a paper I have been working on in fits and starts for too damn long now, I try to shift the vantage point from which we view cases like BAS and I suggest doing so yields some interesting results.  Rather than looking at BAS from the perspective of the bystanders—and what it is permissible (or impermissible) for them to do—I examine BAS instead from the point of view of the individuals whose lives hang in the balance. This change of vantage points highlights some possible tensions that may exist in our ever shifting intuitions.

For instance, let’s reexamine BAS from the point of view of the five people who will be killed if the bystander perhaps understandably cannot bring herself to hit the switch. Imagine that one of the five workmen has a gun and it becomes clear that the bystander is not going to be able to bring herself to divert the trolley.  Would it be permissible for the workman with the gun to shoot and kill the bystander if doing so was the only way of getting her to fall onto the switch?

The entire blog post is here.

Tuesday, November 19, 2013

You Can't Learn about Morality from Brain Scans

By Thomas Nagel
New Republic
Originally posted November 1, 2013

This piece discusses Joshua Greene's book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Here is an excerpt:

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups).... As with the evolution of faster carnivores, competition is essential for the evolution of cooperation.

The tragedy of commonsense morality is conceived by analogy with the familiar tragedy of the commons, to which commonsense morality does provide a solution. In the tragedy of the commons, the pursuit of private self-interest leads a collection of individuals to a result that is contrary to the interest of all of them (like over-grazing the commons or over-fishing the ocean). If they learn to limit their individual self-interest by agreeing to follow certain rules and sticking to them, the commons will not be destroyed and they will all do well.

The entire article is here.

Thursday, November 14, 2013

Robert Wright Interviews Joshua Greene on his New Book

The Robert Wright Show
Originally published October 13, 2013
Robert Wright interviews Joshua Greene about his new book, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.


The website is here.

Saturday, October 26, 2013

Ethical questions science can’t answer

By Massimo Pigliucci
Rationally Speaking Blog
Originally posted October 11, 2013

Yes, yes, we’ve covered this territory before. But you might have heard that Sam Harris has reopened the discussion by challenging his critics, luring them out of their hiding places with the offer of cold hard cash. You see, even though Sam has received plenty of devastating criticism in print and other venues for the thesis he presents in The Moral Landscape (roughly: there is no distinction between facts and values, hence science is the way to answer moral questions), he is — not surprisingly — unconvinced. Hence the somewhat gimmicky challenge. We’ll see how that one goes; I already have my entry ready (but the submission period doesn’t open until February 2nd).

Be that as it may, I’d like to engage my own thoughtful readers with a different type of challenge (sorry, no cash!), one from which I hope we can all learn something as the discussion unfolds. It seems to me pretty obvious (but I could be wrong) that there are plenty of ethical issues that simply cannot be settled by science, so I’m going to give a few examples below and ask all of you to: a) provide more and/or b) argue that I am mistaken, and that these questions really can be answered scientifically.

The entire article is here.

Saturday, October 19, 2013

Second-Person vs. Third-Person Presentations of Moral Dilemmas

By Eric Schwitzgebel
Experimental Philosophy Blog
Originally published on 10/03/2013

You know the trolley problems, of course. An out-of-control trolley is headed toward five people it will kill if nothing is done. You can flip a switch and send it to a side track where it will kill one different person instead. Should you flip the switch? What if, instead of flipping a switch, the only way to save the five is to push someone into the path of the trolley, killing that one person?

In evaluating this scenario, does it matter if the person standing near the switch with the life-and-death decision to make is "John" as opposed to "you"? Nadelhoffer & Feltz presented the switch version of the trolley problem to undergraduates from Florida State University. Forty-three saw the problem with "you" as the actor; 65% of them said it was permissible to throw the switch. Forty-two saw the problem with "John" as the actor; 90% of them said it was permissible to throw the switch, a statistically significant difference.
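To make the "statistically significant difference" concrete, here is a minimal Python sketch (assuming SciPy is installed) that runs Fisher's exact test on counts reconstructed from the rounded percentages above (65% of 43 is roughly 28; 90% of 42 is roughly 38). The reconstruction is approximate and the original study may have used a different test, so treat this only as an illustration of the size of the gap between the second-person and third-person presentations.

# Approximate counts reconstructed from the percentages reported above:
#   "you" condition:  ~28 of 43 judged flipping the switch permissible
#   "John" condition: ~38 of 42 judged flipping the switch permissible
from scipy.stats import fisher_exact

you_yes, you_no = 28, 43 - 28
john_yes, john_no = 38, 42 - 38

odds_ratio, p_value = fisher_exact([[you_yes, you_no],
                                    [john_yes, john_no]])
print(f"odds ratio = {odds_ratio:.2f}, two-tailed p = {p_value:.4f}")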