Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Moral Decision-making.

Friday, September 5, 2014

Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings

By Patrick Lin
Wired
Originally posted August 18, 2014

Here is an excerpt:

So why not let the user select the car’s “ethics setting”? The way this would work is one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car values all lives the same and minimizes harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

The entire story is here.

Friday, May 30, 2014

Now The Military Is Going To Build Robots That Have Morals

By Patrick Tucker
Defense One
Originally posted May 13, 2014

Are robots capable of moral or ethical reasoning? It’s no longer just a question for tenured philosophy professors or Hollywood directors. This week, it’s a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

The entire article is here.

Wednesday, April 30, 2014

The Heinz Dilemma Might Reveal That Morality Is Meaningless

By Esther Inglis-Arkell
io9.com
Originally published April 29, 2014

Here is an excerpt:

But if this finding is true, it seems there are bigger problems with morality. What this experiment seems to say is people can take the same situation, and argue the same principles - social roles, the importance of interpersonal relationships, the likelihood of punishment, and pure humanitarian principles - and come to exactly opposite moral conclusions. And they do this for their whole lives. Sure, it's interesting to see that principles evolve over time, but it's more interesting to see that principles - at least the ones confined solely to the human mind - are irrelevant. There is no method or guiding idea that could possibly allow any group of humanity to come to a consensus. Morality, then, is basically chaos. We can start from the same place, and follow the same principles, and end at diametrically opposite ends of a problem, and there's no way to resolve that.

The entire blog post is here.

Editor's note:

I posted this piece to demonstrate that many people struggle to understand morality. First, moral psychology has moved well past Kohlberg. Psychologists, especially those who study moral psychology, understand the theoretical and research limitations of his work. Please listen to Podcast Episode 7 to get a flavor of this.

Second, the belief that "morality, then, is basically chaos" is also uninformed. In moral decision-making, individuals can use different principles to reach different conclusions. This does not mean that morality is chaos; rather, it demonstrates how people use different moral systems to judge and respond to moral dilemmas.

Third, a true moral dilemma involves competing principles. If it is truly a moral dilemma, then there is no single "correct" or "right" answer. A true dilemma places an individual in a moral or ethical bind, and there are cognitive and emotional strategies for generating solutions to these sometimes impossible problems. Podcasts 5 and 6 demonstrate how psychologists can knit together possible solutions to ethical dilemmas because, in part, they bring their own moral systems, values, and biases to their work.

The podcasts can be found here.


Wednesday, February 19, 2014

Ethics Questions Arise as Genetic Testing of Embryos Increases

By Gina Kolata
The New York Times
Originally posted February 3, 2014

Here is an excerpt:

Genetic testing of embryos has been around for more than a decade, but its use has soared in recent years as methods have improved and more disease-causing genes have been discovered. The in vitro fertilization and testing are expensive — typically about $20,000 — but they make it possible for couples to ensure that their children will not inherit a faulty gene and to avoid the difficult choice of whether to abort a pregnancy if testing of a fetus detects a genetic problem.

But the procedure also raises unsettling ethical questions that trouble advocates for the disabled and have left some doctors struggling with what they should tell their patients.

The entire story is here.

Thursday, January 16, 2014

The Tragedy of Common-Sense Morality

Evolution didn’t equip us for modern judgments.

By Tiffany O'Callaghan
The New Scientist
Originally published December 14, 2013

Our instincts don't always serve us well. Moral psychologist Joshua Greene explains why, in the modern world, we need to figure out when to put our sense of right and wrong in manual mode. His new book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

Tiffany O’Callaghan: You say morality is more than it evolved to be. What do you mean?

Joshua Greene: Morality is essentially a suite of psychological mechanisms that enable us to cooperate. But, biologically at least, we only evolved to cooperate in a tribal way. Individuals who were more moral—more cooperative with those around them—could outcompete others who were not. However, we have the capacity to take a step back from this and ask what a more global morality would look like. Why are the lives of people on the other side of the world worth any less than those in my immediate community? Going through that reasoning process can allow our moral thinking to do something it never evolved to do.

TO: So we need to be able to switch from intuitive morality to more considered responses? When should we use which system?

JG: When it’s a matter of me versus us, my interests versus those of others, our instincts do pretty well. They don't do as well when it’s us versus them, my group’s interests and values versus another group’s. Our moral intuitions didn’t evolve to solve that problem in an even-handed way. When groups disagree about the right thing to do, we need to slow down and shift into manual mode.

The entire article is here.

Friday, January 10, 2014

Screening Newborns For Disease Can Leave Families In Limbo

By Nell Greenfieldboyce
NPR Health News
Originally posted December 23, 2013

For Matthew and Brianne Wojtesta, it all started about a week after the birth of their daughter Vera. Matthew was picking up his son from kindergarten when he got a phone call.

It was their pediatrician, with some shocking news. Vera had been flagged by New York's newborn screening program as possibly having a potentially deadly disease, and would need to go see a neurologist the next day.

Like every state, New York requires that newborns get a small heel prick so that a few drops of blood can be sent to a lab for testing. The idea is to catch health problems that could cause death or disability without early intervention.

But in recent years, patient advocacy groups have been pushing states to adopt mandatory newborn screening for more and more diseases, including ones that have no easy diagnosis or treatment.

One of those is Krabbe disease, a rare and devastating neurological disorder.

In 2006, New York became the first state to screen for Krabbe, and until recently it was the only state to do so. Screening for this disease is expanding, even though some experts say the treatment available doesn't seem to help affected children as much as was initially hoped — and testing can put some families in a kind of fearful limbo.

The entire story is here.

Monday, January 6, 2014

Motivated Moral Reasoning in Psychotherapy

John D. Gavazzi, Psy.D., ABPP
Samuel Knapp, Ed.D., ABPP

In the research literature on psychology and morality, the concept of motivated moral reasoning is relevant to psychotherapy. Motivated moral reasoning occurs when a person's reasoning is driven toward a specific moral conclusion. Motivated moral reasoning can be influenced by factors such as the perceived intentionality of others and the social nature of moral reasoning (Ditto, Pizarro, & Tannenbaum, 2009). In this article, we focus on the intuitive, automatic, and affective nature of motivated moral reasoning as these judgments occur in psychotherapy. The goal of this article is to help psychologists remain vigilant about the possibility of motivated moral reasoning in the psychotherapy relationship.


Individuals typically believe that moral judgments are primarily principle-based, well-reasoned, and cognitive. Individuals also trust that moral judgments are made from a top-down approach, meaning moral agents start with moral ideals or principles first, and then apply those principles to a specific situation. Individuals typically believe moral decisions are based on well-reasoned principles, consistent over time and reliable across situations. Ironically, the research reveals that, unless primed for a specific moral dilemma (such as serving on jury duty), individuals typically use a bottom-up strategy in moral reasoning. Research on self-report of moral decisions shows that individuals seek justifications and ad hoc confirmatory data points to support the person’s reflexive decision. Furthermore, the reasoning for moral decisions is context-dependent, meaning that the same moral principles are not applied consistently over time and across situations. Finally, individuals use automatic, intuitive, and emotional processes when making important decisions (Ditto, Pizarro, & Tannenbaum, 2009). While the complexity of moral reasoning depends on a number of factors, individuals tend to make moral judgments first, and answer questions later (and only if asked).

The entire article is here.

Tuesday, November 19, 2013

You Can't Learn about Morality from Brain Scans

By Thomas Nagel
New Republic
Originally posted November 1, 2013

This story includes information from Joshua Greene's book: Moral Tribes: Emotion, Reason, and the Gap Between Us and Them

Here is an excerpt:

Morality evolved to enable cooperation, but this conclusion comes with an important caveat. Biologically speaking, humans were designed for cooperation, but only with some people. Our moral brains evolved for cooperation within groups, and perhaps only within the context of personal relationships. Our moral brains did not evolve for cooperation between groups (at least not all groups).... As with the evolution of faster carnivores, competition is essential for the evolution of cooperation.

The tragedy of commonsense morality is conceived by analogy with the familiar tragedy of the commons, to which commonsense morality does provide a solution. In the tragedy of the commons, the pursuit of private self-interest leads a collection of individuals to a result that is contrary to the interest of all of them (like over-grazing the commons or over-fishing the ocean). If they learn to limit their individual self-interest by agreeing to follow certain rules and sticking to them, the commons will not be destroyed and they will all do well.

The entire article is here.

Monday, November 11, 2013

Getting In Touch With Your Inner Sexual Deviant

An Interview and article by David DiSalvo
Jesse Bering, Perv: The Sexual Deviant in all of Us
Forbes
Originally posted on October 24, 2013

Here is an excerpt:

Q: One of the themes that comes through is that we feel so sure about the origins and motivations of various sexual behaviors, and for a good many of them there’s no scientific basis for feeling this way – indeed, in many cases science is far from reaching a conclusion. Why do you think we’re so prone to staunchly believing that how we feel about a sexual behavior is automatically true?

A: It’s certainly one of those areas where everyone has an opinion. But if there’s one thing I discovered while working on this book, it’s that the strength of one’s moral convictions about sex usually reflects the depths of one’s ignorance about the science of sex. The more one learns in this area, paradoxically, the more uncertain one becomes.

Human beings are “stomach philosophers”—we allow our gut feelings to make decisions about other people’s sex lives on the basis of whether or not we’re personally disgusted or uncomfortable with their erotic desires or behaviors. I draw the line at harm, but defining harm can be a slippery matter, too. Since we would be harmed, we presume that others must be harmed as well, even when that’s far from apparent. I joke in the book about how I’d be irreparably damaged if Kate Upton were to pin me to my chair and do a slow strip tease on my lap. Lovely as she is, I’m gay, and not only would I not enjoy that experience, I’d be made deeply uncomfortable by it. My straight brother or my lesbian cousin, by contrast, would process this identical Upton event very differently.

The entire interview/article is here.

Tuesday, October 8, 2013

The Importance of the Afterlife. Seriously.

By Samuel Scheffler
The New York Times - Opinionator
Originally published September 21, 2013

I believe in life after death.

No, I don’t think that I will live on as a conscious being after my earthly demise. I’m firmly convinced that death marks the unqualified and irreversible end of our lives.

My belief in life after death is more mundane. What I believe is that other people will continue to live after I myself have died. You probably make the same assumption in your own case. Although we know that humanity won’t exist forever, most of us take it for granted that the human race will survive, at least for a while, after we ourselves are gone.

Because we take this belief for granted, we don’t think much about its significance. Yet I think that this belief plays an extremely important role in our lives, quietly but critically shaping our values, commitments and sense of what is worth doing. Astonishing though it may seem, there are ways in which the continuing existence of other people after our deaths — even that of complete strangers — matters more to us than does our own survival and that of our loved ones.

The entire story is here.

Sunday, September 1, 2013

Good Deeds Gone Bad

By Matthew Hutson
The New York Times
Published: August 16, 2013

On your way to work today you may have paused to let another car merge into your lane. Or you stopped to give a dollar to a subway artist. A minute later, another chance to do the same may have appeared. Did your first act make the second more tempting? Or did you decide you had done your good deed for the day?

Strangely, researchers have demonstrated both reactions — moral consistency and moral compensation — repeatedly in laboratories, leading them to ask why virtue sometimes begets more virtue and sometimes allows for vice. In doing so, they have shed an interesting light on how the conscience works.

We often look to past behavior for clues about who we are and what we want, and then behave accordingly. Of course, we seek consistency not only with desirable behaviors, but also with less noble acts: in one study, subjects assigned to wear sunglasses they knew were counterfeit were more likely to cheat during the experiment.

The entire article is here.

Sunday, August 18, 2013

The Whistle-Blower’s Quandary

By Adam Waytz, James Dungan and Liane Young
The New York Times
Published: August 2, 2013

Imagine you're thinking about blowing the whistle on your employer. As the impassioned responses to the actions of whistle-blowers like Edward J. Snowden have reminded us, you face a moral quandary: Is reporting misdeeds an act of heroism or betrayal?

(cut)

It makes sense that whistle-blowing brings these two moral values, fairness and loyalty, into conflict. Doing what is fair or just (e.g., promoting an employee based on talent alone) often conflicts with showing loyalty (e.g., promoting a longstanding but unskilled employee).

The entire story is here.

Friday, August 9, 2013

Is income inequality 'morally wrong'?

By John Sutter
CNN
Originally posted July 25, 2013

Here are some excerpts:

So is extreme inequality amoral?

To think this through, I called up four smart people -- Nigel Warburton, a freelance philosopher and writer, and host of the (wonderful) Philosophy Bites podcast; Arthur Brooks, president of the American Enterprise Institute and author of "Wealth and Justice"; Thomas Pogge, director of the Global Justice Program at Yale; and Kentaro Toyama, researcher at the University of California at Berkeley.

(cut)

I'll end this list back on John Rawls, the philosopher whose 1971 book, "A Theory of Justice," is a must-read (or at least a must-become-familiar-with) for people interested in this topic. One of Rawls' theories is that inequality can be justified only when it benefits everyone in society, particularly those who are most poor and vulnerable.

Saturday, August 3, 2013

Ethics, Charity and Overhead

Posted by Mike LaBossiere
Talking Philosophy
Originally posted July 19, 2013

While heading home after a race, I caught a segment on the radio discussing Dan Pallotta’s view of the moral assessment of charities and the notion that our moral intuitions regarding charities are erroneous. Pallotta’s main criticism is that people err in regarding frugality as being equivalent to being moral. So, for example, a charitable event with 5% overhead is regarded as morally superior to one with 70% overhead. This is an error, as he sees it, because what should be focused on is the accomplishments. If, for example, the event with the 5% overhead only raised $100 for charity and the event with 70% overhead raised a million dollars, then the second event would obviously have accomplished a great deal more. Naturally, it is being assumed that the overhead is for legitimate expenses such as salaries, advertising and such.

While I lack Pallotta's experience and expertise in running charities, I do think it is worthwhile to consider some of the ethical issues his discussion raises.

The entire story is here.

Thursday, July 18, 2013

Are We Born To Be Cruel and Competitive or Compassionate and Cooperative?

Samuel Knapp, EdD, ABPP
Director of Professional Affairs - Pennsylvania Psychological Association
The Pennsylvania Psychologist

What is the nature of humankind? Are we devils who only occasionally show sparks of morality? Or are we angels who sometimes slip into depravity? This question is not merely an interesting academic exercise. Instead, our assumptions about human nature, and our capacity for good or evil, help shape our expectations of each other and our expectations for ourselves. If we assume that humans are naturally evil and aggressive, we may tolerate or justify insensitive or cruel acts. On the other hand, if we assume that humans have a strong capacity for compassion and cooperation, then we may demand more of it from others and ourselves. [1]

Compassion and cooperation in non-human primates

Some claim that only the restraining force of civilization keeps people from "acting like animals." On this view, as with the children in Lord of the Flies, even a modest breakdown of external control can unleash the worst instincts lurking under a thin surface of civility. However, consider this event that occurred at the Brookfield Zoo outside of Chicago on August 16, 1996:
A 3-year-old boy climbed the wall around the gorilla enclosure and fell 18 feet onto the concrete below, where he lay unconscious. Spectators, certain that the gorilla would harm the child, gasped when the gorilla Binti Jua picked him up. However, Binti Jua gently cradled the boy with her right arm and carried him to an access entrance where a zookeeper was waiting to take him. Her own baby, Koola, clutched her back during the entire incident. (Jones, 2011)
Primatologist Frans deWaal (2010) could cite this and many other less dramatic incidents to illustrate the complexity of the behavior of non-human primates, including their capacity for prosocial behaviors. DeWaal is no sentimentalist. He knows that some primates, such as chimpanzees, can act with great brutality, as when they engage in lethal gang warfare against members of their own species. Nonetheless, he claims that non-human primates also show love, compassion, and social cooperation. It is simply scientifically inaccurate, he argues, to conclude that our biological heritage necessarily drives us toward cruelty and selfishness. On the contrary, empathy and cooperation, deWaal claims, may be an equal or even greater part of our biological nature than callousness and aggression.

Detailed observations of non-human primates support deWaal's conclusions. Primatologist Barbara Smuts states that "life in African ape societies possesses all the essential ingredients of first-rate soap operas: convoluted plots, passion, lots of sex and politics, surprise endings, and a cast of distinct characters" (2000, p. 80). Non-human primates rely heavily on their social networks and keep a detailed mental record of who has helped them in the past and to whom they owe obligations. They know their kin and gravitate toward them. Children remember their mothers; mothers appear depressed at the death of their children. Chimpanzees keep track of who groomed them in the morning when they share food in the afternoon, and they support their friends during fights. When endangered, they will cling to each other or hold hands. Friendships can last a lifetime.
Here are some examples of social cooperation:
Rachael, a monkey raised in the wild and later captured, raised orphaned children as her own (Smith, 2005).
A bonobo inserted herself between a poisonous snake and her friend at the risk of her own life (deWaal, 2011).
A high-ranking chimpanzee ensures that all members of his social group, even lower ranking members, get something to eat from his kill (deWaal, 2011). 
Cooperation and a sense of fairness even show up in controlled experiments. For example, monkeys are quite happy to receive a cucumber from experimenters, unless they see a companion getting a much more valued grape, whereupon they may reject the cucumber (deWaal, 2011).

Compassion and cooperation in human primates 

What evidence is there that these findings would generalize to human behavior? Are human primates as motivated by fairness as their non-human cousins? One source of information about human fairness and compassion comes from studies of game theory. Every fan of television crime shows has seen a version of the “prisoner’s dilemma” in which two people are arrested for a crime and are interrogated separately. Each prisoner knows that if they confess to the crime and implicate their partner, they will get a light sentence and their partner will get a heavy sentence (and conversely if their partner in crime confesses, they will get a heavy sentence, and their partner will get a light sentence), but if both prisoners refuse to talk, it is possible that neither of them will get any sentence at all.
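The structure of the prisoner's dilemma described above can be captured in a small payoff table. The Python sketch below is purely illustrative: the specific sentence lengths and the function name `outcome` are assumptions of the sketch, not figures from the article.

```python
# Hypothetical payoff table for the prisoner's dilemma described above.
# Values are (prisoner 1 sentence, prisoner 2 sentence) in years;
# lower is better. The specific numbers are illustrative assumptions.
SENTENCES = {
    ("confess", "confess"): (5, 5),    # both implicate each other
    ("confess", "silent"):  (1, 10),   # prisoner 1 gets the light sentence
    ("silent",  "confess"): (10, 1),   # prisoner 2 gets the light sentence
    ("silent",  "silent"):  (0, 0),    # neither can be convicted
}

def outcome(p1_action, p2_action):
    """Return the pair of sentences for the two prisoners' choices."""
    return SENTENCES[(p1_action, p2_action)]

print(outcome("silent", "silent"))    # mutual silence is best for both...
print(outcome("confess", "silent"))   # ...but confessing tempts each prisoner
```

The dilemma is visible in the table: whatever one prisoner does, the other does better individually by confessing, yet mutual silence beats mutual confession.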

Game theory, developed by behavioral economists, refers to simulations that are often modeled loosely on the prisoner’s dilemma. That is, in these situations participants can either gain or lose according to the degree of cooperation between them. Consider the Ultimatum Game:
Players are given a certain amount of money (for example $10) and put in separate rooms. Player one gets to decide how the money is to be split between him or her and player two. Player one could give it all away, keep it all, or give a share to player two. Then player two gets to decide whether to accept or reject the offer. If player two accepts the offer, then the offer is in effect. If player two rejects the offer, then no one gets anything. 
When Americans play the Ultimatum Game, offers of $2 were rejected half the time, and offers lower than $2 were rejected even more often. These findings run contrary to traditional economic theory, which says that people should be motivated primarily by rational self-interest; player two should therefore accept any offer of money, because something is better than nothing.
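The Ultimatum Game protocol above is simple enough to sketch in a few lines. In this hypothetical illustration, the $10 pot comes from the article, but modeling player two's fairness judgment as a minimum acceptable offer (`threshold`) is an assumption of the sketch.

```python
def ultimatum_round(offer, threshold, pot=10):
    """One round of the Ultimatum Game.

    `offer` is the dollar amount player one proposes to give player two;
    `threshold` is the smallest offer player two will accept. Returns the
    (player one, player two) payoffs.
    """
    if offer >= threshold:
        return pot - offer, offer   # offer accepted: the split takes effect
    return 0, 0                     # offer rejected: no one gets anything

# A purely self-interested player two (threshold 0) accepts any offer...
assert ultimatum_round(1, threshold=0) == (9, 1)
# ...but a player who rejects "unfair" splits leaves both with nothing.
assert ultimatum_round(2, threshold=3) == (0, 0)
```

The second case is the article's point: by rejecting a low offer, player two pays a real cost to punish perceived unfairness, which traditional rational self-interest cannot explain.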

However, consider the outcomes with a second type of game, The Dictator Game:
Player one is given a certain amount of money (for example $10) and is put in a room separate from player two. Player one gets to decide how much money to offer to player two. There is no opportunity to accept or reject: player two has to accept what is offered. 
When the Dictator Game is actually played, offers from player one typically amounted to 20% to 30% of the original amount, although the most common individual offers were nothing or one-half. That is, player one usually gave something to player two, and frequently gave player two the same amount that he or she kept. Traditional economic theory would predict that player one should offer player two nothing, since all players should be motivated primarily by their own financial interests.
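For contrast, the Dictator Game removes player two's veto entirely, so any giving is voluntary. A minimal sketch, again using the article's $10 pot; the function name `dictator_round` is an illustrative assumption.

```python
def dictator_round(pot, offer):
    """One round of the Dictator Game.

    Player one unilaterally splits `pot`, giving `offer` to player two,
    who must accept whatever is offered. Returns the (player one,
    player two) payoffs.
    """
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    return pot - offer, offer

print(dictator_round(10, 0))   # (10, 0): the purely self-interested prediction
print(dictator_round(10, 5))   # (5, 5): the even split many players choose
```

Because player two has no way to punish a stingy split, any positive offer here reflects fairness or generosity rather than strategic fear of rejection.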

Over the years behavioral economists have replicated or varied these games in hundreds of experiments. They have learned, for example, that in both the Dictator Game and the Ultimatum Game, several factors can influence the degree of cooperation including whether or not the players are anonymous, come from cultures where trading or commerce is common, or (in round robin games) previous participants were generous to them.

Just like the monkeys who reject the cucumber when it seems that they were being treated unfairly, players will often reject offers in the Ultimatum Game that they consider to be unfair. Like the monkeys, they would rather get nothing than submit to an unfair system. Just like the monkeys, baboons, and chimpanzees who feel social obligations to their relatives and friends, players in the Ultimatum and Dictator Games will be more generous with friends and relatives than with strangers, and will be more generous to those who have been generous to them in the past. Although we cannot automatically extrapolate every finding from the study of the behavior of non-human primates or game theory to other contexts, these sources of data suggest that humans are not motivated exclusively by short-term self-interest, but that fairness and cooperation also help drive human behavior.

References
deWaal, F. B. M. (2005, April). How animals do business. Scientific American, 292, 72-79.
deWaal, F. B. M. (2011). The age of empathy. New York: Harmony Books.
Jones, J. (2011, August 16). From the archives: Gorilla saves boy. Retrieved from http://www.nbcchicago.com/news/local/binji-jua-127910608.html
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. Journal of Applied Psychology, 95, 1-31.
Smith, H. J. (2005). Parenting for primates. Cambridge, MA: Harvard University Press.
Smuts, B. (2000, December). Common ground. Natural History, 78-83.

[1] Philosophers such as Thomas Hobbes and Niccolò Machiavelli expected the worst from others. But in their review of unethical behavior in business organizations, Kish-Gephart et al. (2010) found that those who had a Machiavellian interpretation of human behavior were more likely to engage in unethical behavior. 

Monday, July 1, 2013

Book Review: Moral Perception

Robert Audi, Moral Perception, Princeton University Press, 2013, 194pp.

Review by Antti Kauppinen
Notre Dame Philosophical Reviews
Originally published June 29, 2013

In everyday parlance, we sometimes report having seen that an audience member's standing up to a sexist keynote speaker was morally good or having heard how a husband wronged his wife. In philosophy, the idea that we can literally perceive moral facts has not exactly been popular, but it has had its proponents. In this volume, Robert Audi, who can lay claim to being the leading contemporary moral epistemologist in the intuitionist tradition, develops what is perhaps the most comprehensive defence of the possibility of moral perception to date.

What is moral perception? Suppose I see a teenager drowning a reluctant hamster. I may form the moral belief that the action is wrong straight away, without any conscious inference. This much is common ground between proponents of moral perception and sceptics about it. But where sceptics think that the quick belief is based on non-conscious inference or association or perhaps emotional response, those who believe in moral perception take it to be based on a distinct moral perceptual experience, which can justify the belief in the same way perception in general does.

The first step in making the case is clarifying what happens in perception in general. Audi takes this task up in the first chapter. As is his wont, he makes a series of careful distinctions, starting with three main kinds of perception. They are simple perception (seeing a flower), attributive perception (seeing a flower to be yellow), and propositional perception (seeing that a flower is yellow). The content of perceptual experience is formed by properties that are phenomenally represented in it. Such experience is distinct from belief -- we need not have beliefs corresponding to the content of our perception. For us to perceive something is for it to "produce or sustain, in the right way, an appropriate phenomenal representation of it" (20). We see an object by seeing some suitable subset of its properties. Roughly, an object instantiates an observable property, which causes me to instantiate a phenomenal property (such as being appeared to elliptically).

How about moral perception? Audi does not claim we can perceive that drowning the hamster is wrong in the same way we can perceive that a hat is red. Moral properties are not perceptual like colours and shapes, but they are perceptible. We perceive them by way of perceiving the non-moral properties they are grounded or consequential on. The phenomenal aspect of moral perception is a non-sensory "sense of injustice" (37) or a "felt sense of connection" (39) between the moral property, such as wrongness, and the perceived base property, such as intentionally causing pain to an animal. This representational element isn't "pictorial" or "cartographic" (37) as it might be in paradigmatic cases of perception, but, Audi says, we shouldn't expect that to be the case when it comes to moral properties. Nor are moral properties directly causally responsible for the phenomenal properties; rather, the relevant causal connection obtains between instantiations of base properties and instantiations of the distinctively moral phenomenal states.

The entire book review is here.