Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, July 31, 2017

Is it dangerous to recreate flawed human morality in machines?

Alexandra Myers-Lewis
Wired.com
Originally published July 13, 2017

Here are two excerpts:

The need for ethical machines may be one of the defining issues of our time. Algorithms are created to govern critical systems in our society, from banking to medicine, but with no concept of right and wrong, machines cannot understand the repercussions of their actions. A machine has never thrown a punch in a schoolyard fight, cheated on a test or a relationship, or been rapt with the special kind of self-doubt that funds our cosmetic and pharmaceutical industries. Simply put, an ethical machine will always be an it - but how can it be more?

(cut)

A self-driving car wouldn't just have to make decisions in life-and-death situations - as if that wasn't enough - but would also need to judge how much risk is acceptable at any given time. But who will ultimately restrict this decision-making process? Would it be the job of the engineer to determine in which circumstances it is acceptable to overtake a cyclist? You won't lose sleep pegging a deer over a goat. But a person? Choosing who potentially lives and dies based on a number has an inescapable air of dystopia. You may see tight street corners and hear the groan of oncoming traffic, but an algorithm will only see the world in numbers. These numbers will form its memories and its reason, the force that moves the car out into the road.

"I think people will be very uncomfortable with the idea of a machine deciding between life and death," Sütfeld says, "In this regard we believe that transparency and comprehensibility could be a very important factor to gain public acceptance of these systems. Or put another way, people may favour a transparent and comprehensible system over a more complex black-box system. We would hope that the people will understand this general necessity of a moral compass and that the discussion will be about what approach to take, and how such systems should decide. If this is put in, every car will make the same decision and if there is a good common ground in terms of model, this could improve public safety."

The article is here.

Friday, June 16, 2017

On What Basis Do Terrorists Make Moral Judgments?

Kendra Pierre-Louis
Popular Science
Originally published May 26, 2017

Here is an excerpt:

“Multiple studies across the world have systematically shown that in judging the morality of an action, civilized individuals typically attach greater importance to intentions than outcomes,” Ibáñez told PopSci. “If an action is aimed to induce harm, it does not matter whether it was successful or not: most people consider it as less morally admissible than other actions in which harm was neither intended nor inflicted, or even actions in which harm was caused by accident.”

For most of us, intent matters. If I mean to slam you to the ground and I fail, that’s far worse than if I don’t mean to slam you to the ground and I do. If that sounds like a no-brainer, you should know that for the terrorists in the study, the morality was flipped. They rated accidental harm as worse than the failed intentional harm, because in one situation someone doesn’t get hurt, while in the second situation someone does. Write the study’s authors, “surprisingly, this moral judgement resembles that observed at early development stages.”

Perhaps more chilling, this tendency to focus on the outcomes rather than the underlying intention means that the terrorists are focused more on outcomes than your average person, and that terror behavior is "goal directed." Write the study's authors "... our sample is characterized by a general tendency to focus more on the outcomes of actions than on the actions' underlying intentions." In essence terrorism is the world's worst productivity system, because when coupled with rational choice theory—which says that we tend to act in ways that maximize getting our way with the least amount of personal sacrifice—murdering a lot of people to get your goal, absent moral stigma, starts to make sense.

The article is here.

Tuesday, May 16, 2017

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Thursday, October 6, 2016

How Morality Changes in a Foreign Language

By Julie Sedivy
Scientific American
Originally published September 14, 2016

Here is an excerpt:

Why does it matter whether we judge morality in our native language or a foreign one? According to one explanation, such judgments involve two separate and competing modes of thinking—one of these, a quick, gut-level “feeling,” and the other, careful deliberation about the greatest good for the greatest number. When we use a foreign language, we unconsciously sink into the more deliberate mode simply because the effort of operating in our non-native language cues our cognitive system to prepare for strenuous activity. This may seem paradoxical, but is in line with findings that reading math problems in a hard-to-read font makes people less likely to make careless mistakes (although these results have proven difficult to replicate).

An alternative explanation is that differences arise between native and foreign tongues because our childhood languages vibrate with greater emotional intensity than do those learned in more academic settings. As a result, moral judgments made in a foreign language are less laden with the emotional reactions that surface when we use a language learned in childhood.

Thursday, September 15, 2016

Driven to extinction? The ethics of eradicating mosquitoes with gene-drive technologies

Jonathan Pugh
J Med Ethics 2016;42:578-581

Abstract

Mosquito-borne diseases represent a significant global disease burden, and recent outbreaks of such diseases have led to calls to reduce mosquito populations. Furthermore, advances in ‘gene-drive’ technology have raised the prospect of eradicating certain species of mosquito via genetic modification. This technology has attracted a great deal of media attention, and the idea of using gene-drive technology to eradicate mosquitoes has been met with criticism in the public domain. In this paper, I shall dispel two moral objections that have been raised in the public domain against the use of gene-drive technologies to eradicate mosquitoes. The first objection invokes the concept of the ‘sanctity of life’ in order to claim that we should not drive an animal to extinction. In response, I follow Peter Singer in raising doubts about general appeals to the sanctity of life, and argue that neither individual mosquitoes nor mosquito species considered holistically are appropriately described as bearing a significant degree of moral status. The second objection claims that seeking to eradicate mosquitoes amounts to displaying unacceptable degrees of hubris. Although I argue that this objection also fails, I conclude by claiming that it raises the important point that we need to acquire more empirical data about, inter alia, the likely effects of mosquito eradication on the ecosystem, and the likelihood of gene-drive technology successfully eradicating the intended mosquito species, in order to adequately inform our moral analysis of gene-drive technologies in this context.

The article is here.

Thursday, September 8, 2016

How Emotions Shape Moral Behavior: Some Answers (and Questions) for the Field of Moral Psychology

Teper R., Zhong C.-B., and Inzlicht M.
Social and Personality Psychology Compass (2015), 9, 1–14

Abstract

Within the past decade, the field of moral psychology has begun to disentangle the mechanics behind moral judgments, revealing the vital role that emotions play in driving these processes. However, given the well-documented dissociation between attitudes and behaviors, we propose that an equally important issue is how emotions inform actual moral behavior – a question that has been relatively ignored up until recently. By providing a review of recent studies that have begun to explore how emotions drive actual moral behavior, we propose that emotions are instrumental in fueling real-life moral actions. Because research examining the role of emotional processes on moral behavior is currently limited, we push for the use of behavioral measures in the field in the hopes of building a more complete theory of real-life moral behavior.

The article is here.

Monday, August 29, 2016

Should a Self-Driving Car Kill Two Jaywalkers or One Law-Abiding Citizen?

By Jacob Brogan
Future Tense
Originally published August 11, 2016

Anyone who’s followed the debates surrounding autonomous vehicles knows that moral quandaries inevitably arise. As Jesse Kirkpatrick has written in Slate, those questions most often come down to how the vehicles should perform when they’re about to crash. What do they do if they have to choose between killing a passenger and harming a pedestrian? How should they behave if they have to decide between slamming into a child or running over an elderly man?

It’s hard to figure out how a car should make such decisions in part because it’s difficult to get humans to agree on how we should make them. By way of evidence, look to Moral Machine, a website created by a group of researchers at the MIT Media Lab. As the Verge’s Russell Brandon notes, the site effectively gamifies the classic trolley problem, folding in a variety of complicated variations along the way. You’ll have to decide whether a vehicle should choose its passengers or people in an intersection. Others will present two differently composed groups of pedestrians—say, a handful of female doctors or a collection of besuited men—and ask which an empty car should slam into. Further complications—including the presence of animals and details about whether the pedestrians have the right of way—sometimes further muddle the question.

Wednesday, June 8, 2016

Are You Morally Modified?: The Moral Effects of Widely Used Pharmaceuticals

Neil Levy, Thomas Douglas, Guy Kahane, Sylvia Terbeck, Philip J. Cowen, Miles Hewstone, and Julian Savulescu
Philos Psychiatr Psychol. 2014 June 1; 21(2): 111–125. doi:10.1353/ppp.2014.0023.

Abstract

A number of concerns have been raised about the possible future use of pharmaceuticals designed to enhance cognitive, affective, and motivational processes, particularly where the aim is to produce morally better decisions or behavior. In this article, we draw attention to what is arguably a more worrying possibility: that pharmaceuticals currently in widespread therapeutic use are already having unintended effects on these processes, and thus on moral decision making and morally significant behavior. We review current evidence on the moral effects of three widely used drugs or drug types: (i) propranolol, (ii) selective serotonin reuptake inhibitors, and (iii) drugs that affect oxytocin physiology. This evidence suggests that the alterations to moral decision making and behavior caused by these agents may have important and difficult-to-evaluate consequences, at least at the population level. We argue that the moral effects of these and other widely used pharmaceuticals warrant further empirical research and ethical analysis.

The paper is here.

Sunday, February 21, 2016

Epistemology, Communication and Divine Command Theory

By John Danaher
Philosophical Disquisitions
Originally posted July 21, 2015

I have written about the epistemological objection to divine command theory (DCT) on a previous occasion. It goes a little something like this: According to proponents of the DCT, at least some moral statuses (like the fact that X is forbidden, or that X is bad) depend for their existence on God’s commands. In other words, without God’s commands those moral statuses would not exist. It would seem to follow that in order for anyone to know whether X is forbidden/bad (or whatever), they would need to have epistemic access to God’s commands. That is to say, they would need to know that God has commanded X to be forbidden/bad. The problem is that there is a certain class of non-believers — so-called ‘reasonable non-believers’ — who don’t violate any epistemic duties in their non-belief. Consequently, they lack epistemic access to God’s commands without being blameworthy for lacking this access. For them, X cannot be forbidden or bad.

This has been termed the ‘epistemological objection’ to DCT, and I will stick with that name throughout, but it may be a bit of a misnomer. This objection is not just about moral epistemology; it is also about moral ontology. It highlights the fact that at least some DCTs include a (seemingly) epistemic condition in their account of moral ontology. Consequently, if that condition is violated it implies that certain moral facts cease to exist (for at least some people). This is a subtle but important point: the epistemological objection does have ontological implications.

The blog post is here.

Thursday, November 12, 2015

The Ethics of Killing Baby Hitler

By Matt Ford
The Atlantic
Originally published October 24, 2015

Here is an excerpt:

The strongest argument for removing Hitler from history is the Holocaust, since it can be directly tied to his existence. The exact mechanisms of the Holocaust—the Nuremberg laws, Kristallnacht, the death squads, the gas chambers, the forced marches, and more—are unquestionably the products of Hitler and his disciples, and they likely would not have existed without him. All other things being equal, a choice between Hitler and the Holocaust is an easy one.

But focusing on Hitler’s direct responsibility for the Holocaust blinds us to more disturbing truths about the early 20th century. His absence from history would not remove the underlying political ideologies or social movements that fueled his ascendancy. Before his rise to power, eugenic theories already held sway in Western countries. Anti-Semitism infected civic discourse and state policy, even in the United States. Concepts like ethnic hierarchies and racial supremacy influenced mainstream political thought in Germany and throughout the West. Focusing on Hitler’s central role in the Holocaust also risks ignoring the thousands of participants who helped carry it out, both within Germany and throughout occupied Europe, and on the social and political forces that preceded it. It’s not impossible that in a climate of economic depression and scientific racism, another German leader could also move towards a similar genocidal end, even if he deviated from Hitler’s exact worldview or methods.

The entire article is here.

Sunday, September 27, 2015

Emotional and Utilitarian Appraisals of Moral Dilemmas Are Encoded in Separate Areas of the Brain

Cendri A. Hutcherson, Leila Montaser-Kouhsari, James Woodward, & Antonio Rangel
The Journal of Neuroscience, 9 September 2015, 35(36): 12593-12605
doi: 10.1523/JNEUROSCI.3402-14.2015

Abstract

Moral judgment often requires making difficult tradeoffs (e.g., is it appropriate to torture to save the lives of innocents at risk?). Previous research suggests that both emotional appraisals and more deliberative utilitarian appraisals influence such judgments and that these appraisals often conflict. However, it is unclear how these different types of appraisals are represented in the brain, or how they are integrated into an overall moral judgment. We addressed these questions using an fMRI paradigm in which human subjects provide separate emotional and utilitarian appraisals for different potential actions, and then make difficult moral judgments constructed from combinations of these actions. We found that anterior cingulate, insula, and superior temporal gyrus correlated with emotional appraisals, whereas temporoparietal junction and dorsomedial prefrontal cortex correlated with utilitarian appraisals. Overall moral value judgments were represented in an anterior portion of the ventromedial prefrontal cortex. Critically, the pattern of responses and functional interactions between these three sets of regions are consistent with a model in which emotional and utilitarian appraisals are computed independently and in parallel, and passed to the ventromedial prefrontal cortex where they are integrated into an overall moral value judgment.

Significance statement

Popular accounts of moral judgment often describe it as a battle for control between two systems, one intuitive and emotional, the other rational and utilitarian, engaged in winner-take-all inhibitory competition. Using a novel fMRI paradigm, we identified distinct neural signatures of emotional and utilitarian appraisals and used them to test different models of how they compete for the control of moral behavior. Importantly, we find little support for competitive inhibition accounts. Instead, moral judgments resembled the architecture of simple economic choices: distinct regions represented emotional and utilitarian appraisals independently and passed this information to the ventromedial prefrontal cortex for integration into an overall moral value signal.
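The significance statement above compares moral judgment to simple economic choice: emotional and utilitarian appraisals are computed in parallel and then integrated into a single moral value signal. Below is a minimal sketch of that kind of weighted-integration model; the linear weights and the logistic choice rule are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def overall_moral_value(emotional, utilitarian, w_emotional=1.0, w_utilitarian=1.0):
    """Linear integration of two independently computed appraisals.

    The weights are illustrative free parameters, not values reported
    in the paper.
    """
    return w_emotional * emotional + w_utilitarian * utilitarian

def p_endorse(value, temperature=1.0):
    """Logistic (two-option softmax) rule mapping the integrated value
    onto the probability of judging the action morally acceptable."""
    return 1.0 / (1.0 + np.exp(-value / temperature))

# Example: an action rated emotionally aversive (-2) but high in
# utilitarian benefit (+3); under equal weights the integrated value
# is +1, giving roughly a 73% chance of endorsement.
v = overall_moral_value(emotional=-2.0, utilitarian=3.0)
print(round(p_endorse(v), 2))
```

In this framing the two appraisals never inhibit one another; they are simply summed, which is the contrast the authors draw with winner-take-all accounts.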

The entire article is here.

Friday, September 25, 2015

The Effect of Probability Anchors on Moral Decision Making

By Chris Brand and Mike Oaksford

Abstract

The role of probabilistic reasoning in moral decision making has seen relatively little research, despite having potentially profound consequences for our models of moral cognition. To rectify this, two experiments were undertaken in which participants were presented with moral dilemmas with additional information designed to anchor judgements about how likely the dilemma’s outcomes were. It was found that these anchoring values significantly altered how permissible the dilemmas were judged, whether the anchors were presented explicitly or implicitly. This was the case even for dilemmas typically seen as eliciting deontological judgements. Implications of this finding for cognitive models of moral decision making are discussed.

The entire paper is here.

Tuesday, August 25, 2015

The Lion, the Myth, and the Morality Tale

By Brandon Ferdig
The American Thinker
Originally posted August 8, 2015

Here is an excerpt:

There’s nothing inherently wrong with myth and symbolism. They are emotional-mental tools used to categorize our world, to seek its improvement, to add meaning, to sink our emotional teeth into life and cultivate richness around our experience. Epic is awesome.

It was awesome for those who cried when seeing Barack Obama elected because of the interpreted representative step forward and victory of our nation. It’s awesome to feel moved by the sight of an animal that represents and elicits majesty. And it’s awesome to find other like-minded folks and bond in celebration or fight for a better world.

But there’s a risk.

To the degree that we subscribe to a particular ideology is the potential for us to color the events of our world with its tint. Suddenly we have something invested into these events -- our world view, our ego -- and exaggerated responses result. We’ll fight to defend our ideology, details and facts be damned. Get with like-minded folks, and you can create a mob.

The entire article is here.

Saturday, February 14, 2015

A Person-Centered Approach to Moral Judgment

By Eric Luis Uhlmann, David Pizarro, and Daniel Diermeier
Perspectives on Psychological Science January 2015 vol. 10 no. 1 72-81

Abstract

Both normative theories of ethics in philosophy and contemporary models of moral judgment in psychology have focused almost exclusively on the permissibility of acts, in particular whether acts should be judged based on their material outcomes (consequentialist ethics) or based on rules, duties, and obligations (deontological ethics). However, a longstanding third perspective on morality, virtue ethics, may offer a richer descriptive account of a wide range of lay moral judgments. Building on this ethical tradition, we offer a person-centered account of moral judgment, which focuses on individuals as the unit of analysis for moral evaluations rather than on acts. Because social perceivers are fundamentally motivated to acquire information about the moral character of others, features of an act that seem most informative of character often hold more weight than either the consequences of the act, or whether or not a moral rule has been broken. This approach, we argue, can account for a number of empirical findings that are either not predicted by current theories of moral psychology, or are simply categorized as biases or irrational quirks in the way individuals make moral judgments.

The entire article is here.

Tuesday, December 23, 2014

Self-Driving Cars: Safer, but What of Their Morals

By Justin Pritchard
Associated Press
Originally posted November 19, 2014

Here is an excerpt:

"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death," said Lin. "When we make programming decisions, we expect those to be as right as we can be."

What right looks like may differ from company to company, but according to Lin automakers have a duty to show that they have wrestled with these complex questions and publicly reveal the answers they reach.

The entire article is here.

Friday, December 12, 2014

The Neuroscience of Moral Decision Making

By Molly Crockett
Edge Video Series
Originally published November 18, 2014

Here is an excerpt:

The neurochemistry adds an interesting layer to this bigger question of whether punishment is prosocially motivated, because in some ways it's a more objective way to look at it. Serotonin doesn't have a research agenda; it's just a chemical. We had all this data and we started thinking differently about the motivations of so-called altruistic punishment. That inspired a purely behavioral study where we give people the opportunity to punish those who behave unfairly towards them, but we do it in two conditions. One is a standard case where someone behaves unfairly to someone else and then that person can punish them. Everyone has full information, and the guy who's unfair knows that he's being punished.

Then we added another condition, where we give people the opportunity to punish in secret— hidden punishment. You can punish someone without them knowing that they've been punished. They still suffer a loss financially, but because we obscure the size of the stake, the guy who's being punished doesn't know he's being punished. The punisher gets the satisfaction of knowing that the bad guy is getting less money, but there's no social norm being enforced.

The entire video and transcript are here.

Tuesday, December 9, 2014

What we say and what we do: The relationship between real and hypothetical moral choices

By Oriel FeldmanHall, Dean Mobbs, Davy Evans, Lucy Hiscox, Lauren Navrady, & Tim Dalgleish
Cognition. Jun 2012; 123(3): 434–441.
doi:  10.1016/j.cognition.2012.02.001

Abstract

Moral ideals are strongly ingrained within society and individuals alike, but actual moral choices are profoundly influenced by tangible rewards and consequences. Across two studies we show that real moral decisions can dramatically contradict moral choices made in hypothetical scenarios (Study 1). However, by systematically enhancing the contextual information available to subjects when addressing a hypothetical moral problem—thereby reducing the opportunity for mental simulation—we were able to incrementally bring subjects’ responses in line with their moral behaviour in real situations (Study 2). These results imply that previous work relying mainly on decontextualized hypothetical scenarios may not accurately reflect moral decisions in everyday life. The findings also shed light on contextual factors that can alter how moral decisions are made, such as the salience of a personal gain.

Highlights

  • We show people are unable to appropriately judge outcomes of moral behaviour.
  • Moral beliefs have weaker impact when there is a presence of significant self-gain.
  • People make highly self-serving choices in real moral situations.
  • Real moral choices contradict responses to simple hypothetical moral probes.
  • Enhancing context can cause hypothetical decisions to mirror real moral decisions.

Monday, December 8, 2014

Harm to others outweighs harm to self in moral decision making

By Molly J. Crockett, Zeb Kurth-Nelson, Jenifer Z. Siegel, Peter Dayan, and Raymond J. Dolan
PNAS 2014 ; published ahead of print November 17, 2014, doi:10.1073/pnas.1408988111

Abstract

Concern for the suffering of others is central to moral decision making. How humans evaluate others’ suffering, relative to their own suffering, is unknown. We investigated this question by inviting subjects to trade off profits for themselves against pain experienced either by themselves or an anonymous other person. Subjects made choices between different amounts of money and different numbers of painful electric shocks. We independently varied the recipient of the shocks (self vs. other) and whether the choice involved paying to decrease pain or profiting by increasing pain. We built computational models to quantify the relative values subjects ascribed to pain for themselves and others in this setting. In two studies we show that most people valued others’ pain more than their own pain. This was evident in a willingness to pay more to reduce others’ pain than their own and a requirement for more compensation to increase others’ pain relative to their own. This “hyperaltruistic” valuation of others’ pain was linked to slower responding when making decisions that affected others, consistent with an engagement of deliberative processes in moral decision making. Subclinical psychopathic traits correlated negatively with aversion to pain for both self and others, in line with reports of aversive processing deficits in psychopathy. Our results provide evidence for a circumstance in which people care more for others than themselves. Determining the precise boundaries of this surprisingly prosocial disposition has implications for understanding human moral decision making and its disturbance in antisocial behavior.


Significance

Concern for the welfare of others is a key component of moral decision making and is disturbed in antisocial and criminal behavior. However, little is known about how people evaluate the costs of others’ suffering. Past studies have examined people’s judgments in hypothetical scenarios, but there is evidence that hypothetical judgments cannot accurately predict actual behavior. Here we addressed this issue by measuring how much money people will sacrifice to reduce the number of painful electric shocks delivered to either themselves or an anonymous stranger. Surprisingly, most people sacrifice more money to reduce a stranger’s pain than their own pain. This finding may help us better understand how people resolve moral dilemmas that commonly arise in medical, legal, and political decision making.
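The abstract describes computational models that quantify how much subjects valued others’ pain relative to their own when trading money against shocks. One common way to formalize such a trade-off is a harm-aversion weight on shocks versus money combined with a softmax choice rule; the sketch below uses that form with made-up numbers and is not necessarily the paper’s exact parameterization.

```python
import numpy as np

def relative_value(delta_money, delta_shocks, kappa):
    """Value of taking extra money at the cost of extra shocks.

    kappa (between 0 and 1) is a harm-aversion weight: the larger it is,
    the more the shocks count against the money. Separate values can be
    fit for shocks delivered to oneself versus an anonymous other.
    """
    return (1.0 - kappa) * delta_money - kappa * delta_shocks

def p_take_money(delta_money, delta_shocks, kappa, inverse_temp=1.0):
    """Softmax probability of accepting the 'more money, more shocks' option."""
    v = relative_value(delta_money, delta_shocks, kappa)
    return 1.0 / (1.0 + np.exp(-inverse_temp * v))

# Hyperaltruism in this framing: a larger kappa for others' pain than for
# one's own, so the same offer is accepted less often when the shocks go
# to a stranger (~0.98 vs ~0.38 with these illustrative numbers).
print(round(p_take_money(delta_money=10, delta_shocks=5, kappa=0.4), 2))  # shocks to self
print(round(p_take_money(delta_money=10, delta_shocks=5, kappa=0.7), 2))  # shocks to other
```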

The entire article is here.

Tuesday, October 14, 2014

The Moral Instinct

By Steven Pinker
The New York Times
Originally posted January 13, 2008

Here is an excerpt:

The Moralization Switch

The starting point for appreciating that there is a distinctive part of our psychology for morality is seeing how moral judgments differ from other kinds of opinions we have on how people ought to behave. Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).

The first hallmark of moralization is that the rules it invokes are felt to be universal. Prohibitions of rape and murder, for example, are felt not to be matters of local custom but to be universally and objectively warranted. One can easily say, “I don’t like brussels sprouts, but I don’t care if you eat them,” but no one would say, “I don’t like killing, but I don’t care if you murder someone.”

The entire article is here.

Sunday, September 21, 2014

Moral decision-making and the brain

NEURO.tv - Episode 11
Published on Aug 16, 2014

What experiments do psychologists use to identify the brain areas involved in moral decision-making? Do moral truths exist? We discuss with Joshua D. Greene, Professor of Psychology at Harvard University and author of Moral Tribes.