Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, July 17, 2018

Social observation increases deontological judgments in moral dilemmas

Minwoo Lee, Sunhae Sul, Hackjin Kim
Evolution and Human Behavior
Available online 18 June 2018

Abstract

A concern for positive reputation is one of the core motivations underlying various social behaviors in humans. The present study investigated how experimentally induced reputation concern modulates judgments in moral dilemmas. In a mixed-design experiment, participants were randomly assigned to the observed vs. the control group and responded to a series of trolley-type moral dilemmas either in the presence or absence of observers, respectively. While no significant baseline differences in personality traits or moral decision style were found across the two groups of participants, our analyses revealed that social observation promoted deontological judgments, especially for moral dilemmas involving direct physical harm (i.e., the personal moral dilemmas), yet with an overall decrease in decision confidence and a significant prolongation of reaction time. Moreover, participants in the observed group, but not in the control group, showed increased sensitivity to warmth vs. competence trait words in a lexical decision task performed after the moral dilemma task. Our findings suggest that reputation concern, once triggered by the presence of potentially judgmental others, could activate a culturally dominant norm of warmth in various social contexts. This could, in turn, induce a series of goal-directed processes for self-presentation of warmth, leading to increased deontological judgments in moral dilemmas. The results of the present study provide insights into the reputational consequences of moral decisions that merit further exploration.

The article is here.

The Rise of the Robots and the Crisis of Moral Patiency

John Danaher
Pre-publication version; forthcoming in AI and Society

Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness are central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises, and address objections.

The paper is here.

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth M. J. Byrne
Quarterly Journal of Experimental Psychology
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.

Mind-body practices and the self: yoga and meditation do not quiet the ego, but instead boost self-enhancement

Gebauer, J., Nehrlich, A. D., Stahlberg, D., et al.
Psychological Science, 1-22. (In Press)

Abstract

Mind-body practices enjoy immense public and scientific interest. Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. However, this ego-quieting effect contradicts an apparent psychological universal, the self-centrality principle. According to this principle, practicing any skill renders it self-central, and self-centrality breeds self-enhancement. We examined those opposing predictions in the first tests of mind-body practices’ self-enhancement effects. Experiment 1 followed 93 yoga students over 15 weeks, assessing self-centrality and self-enhancement after yoga practice (yoga condition, n = 246) and without practice (control condition, n = 231). Experiment 2 followed 162 meditators over 4 weeks (meditation condition: n = 246; control condition: n = 245). Self-enhancement was higher in the yoga (Experiment 1) and meditation (Experiment 2) conditions, and those effects were mediated by greater self-centrality. Additionally, greater self-enhancement mediated mind-body practices’ well-being benefits. Evidently, neither yoga nor meditation quiets the ego; instead, they boost self-enhancement.

The paper can be downloaded here.

Sunday, July 15, 2018

Should the police be allowed to use genetic information in public databases to track down criminals?

Bob Yirka
Phys.org
Originally posted June 8, 2018

Here is an excerpt:

The authors point out that there is no law forbidding what the police did—the genetic profiles came from people who willingly and of their own accord gave up their DNA data. But should there be? If you send a swab to Ancestry.com, for example, should the genetic profile they create be off-limits to anyone but you and them? It is doubtful that many who take such actions fully consider the ways in which their profile might be used. Most such companies routinely sell their data to pharmaceutical companies or others looking to use the data to make a profit, for example. Should they also be compelled to give up such data due to a court order? The authors suggest that if the public wants their DNA information to remain private, they need to contact their representatives and demand legislation that lays out specific rules for data housed in public databases.

The article is here.

Saturday, July 14, 2018

10 Ways to Avoid False Memories

Christopher Chabris and Daniel Simons
Slate.com
Originally posted February 10, 2018

Here is an excerpt:

No one has, to our knowledge, tried to implant a false memory of being shot down in a helicopter. But researchers have repeatedly created other kinds of entirely false memory in the laboratory. Most famously, Elizabeth Loftus and Jacqueline Pickrell successfully convinced people that, as children, they had once been lost in a shopping mall. In another study, researchers Kimberly Wade, Maryanne Garry, Don Read, and Stephen Lindsay showed people a Photoshopped image of themselves as children, standing in the basket of a hot air balloon. Half of the participants later had either complete or partial false memories, sometimes “remembering” additional details from this event—an event that they never experienced. In a newly published study, Julia Shaw and Stephen Porter used structured interviews to convince 70 percent of their college student participants that they had committed a crime as an adolescent (theft, assault, or assault with a weapon) and that the crime had resulted in police contact. And outside the laboratory, people have fabricated rich and detailed memories of things that we can be almost 100 percent certain did not happen, such as having been abducted and impregnated by aliens.

Even memories for highly emotional events—like the Challenger explosion or the 9/11 attacks—can mutate substantially. As time passes, we can lose the link between things we’ve experienced and the details surrounding them; we remember the gist of a story, but we might not recall whether we experienced the events or just heard about them from someone else. We all experience this failure of “source memory” in small ways: Maybe you tell a friend a great joke that you heard recently, only to learn that he’s the one who told it to you. Or you recall having slammed your hand in a car door as a child, only to get into an argument over whether it happened instead to your sister. People sometimes even tell false stories directly to the people who actually experienced the original events, something that is hard to explain as intentional lying. (Just last month, Brian Williams let his exaggerated war story be told at a public event honoring one of the soldiers who had been there.)

The information is here.

Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by merit? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature amount almost to open self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves: law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — cannot, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically it has found great success in doing so with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.

The article is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values, specifically fairness preferences, during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.

The research is here.
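
The abstract's claim that a reinforcement mechanism governs this learning can be pictured with a minimal sketch. The Python snippet below shows a hypothetical delta-rule (Rescorla-Wagner style) update of a learner's estimate of how punitive the receiver is, driven by trial-by-trial feedback; the function name, parameter values, and update form are illustrative assumptions, not the authors' actual model.

# Minimal, hypothetical sketch of a reinforcement-style update of punishment
# preference; names, parameters, and update form are illustrative assumptions,
# not taken from the paper.

def update_punishment_preference(preference, feedback, learning_rate=0.3):
    """Nudge the current punishment-preference estimate toward observed feedback.

    preference    -- current estimate in [0, 1] of how punitive the receiver is
    feedback      -- 1.0 if the receiver endorsed punishment on this trial, else 0.0
    learning_rate -- step size controlling how quickly the estimate shifts
    """
    prediction_error = feedback - preference
    return preference + learning_rate * prediction_error


# Example: exposure to a single, highly punitive receiver shifts the estimate
# well above a mostly compensatory baseline.
preference = 0.2  # baseline: largely nonpunitive, compensatory behavior
for trial_feedback in (1.0, 1.0, 0.0, 1.0, 1.0):
    preference = update_punishment_preference(preference, trial_feedback)
print(round(preference, 2))

Running this example, the baseline preference of 0.2 climbs to roughly 0.72 after only five trials with a mostly punitive receiver, which mirrors the abstract's point that exposure to a single individual's preferences can shift one's own endorsement of punishment.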

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.

(cut)

But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.