Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth MJ Byrne
Quarterly Journal of Experimental Psychology
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.

Mind-body practices and the self: yoga and meditation do not quiet the ego, but instead boost self-enhancement

Gebauer, J. E., Nehrlich, A. D., Stahlberg, D., et al.
Psychological Science, 1–22 (in press)

Abstract

Mind-body practices enjoy immense public and scientific interest. Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. However, this ego-quieting effect contradicts an apparent psychological universal, the self-centrality principle. According to this principle, practicing any skill renders it self-central, and self-centrality breeds self-enhancement. We examined those opposing predictions in the first tests of mind-body practices’ self-enhancement effects. Experiment 1 followed 93 yoga students over 15 weeks, assessing self-centrality and self-enhancement after yoga practice (yoga condition, n = 246) and without practice (control condition, n = 231). Experiment 2 followed 162 meditators over 4 weeks (meditation condition: n = 246; control condition: n = 245). Self-enhancement was higher in the yoga (Experiment 1) and meditation (Experiment 2) conditions, and those effects were mediated by greater self-centrality. Additionally, greater self-enhancement mediated mind-body practices’ well-being benefits. Evidently, neither yoga nor meditation quiets the ego; instead, they boost self-enhancement.
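The mediation claims here are statistical: the effect of practice on self-enhancement is said to run through self-centrality. As a rough sketch of how such an indirect effect is typically estimated, the bootstrap below computes the product of the two regression paths on simulated data. The variable names and data are hypothetical, and the authors' actual analysis may well differ.

```python
import numpy as np

# Rough sketch of estimating a mediated (indirect) effect via bootstrap:
# condition -> self-centrality (path a), then self-centrality ->
# self-enhancement controlling for condition (path b); indirect = a * b.
# Variable names and simulated data are hypothetical illustrations only.

rng = np.random.default_rng(0)
n = 400
condition = rng.integers(0, 2, n)                 # 0 = control, 1 = practice
centrality = 0.5 * condition + rng.normal(size=n)
enhancement = 0.6 * centrality + rng.normal(size=n)

def indirect_effect(cond, med, out):
    a = np.polyfit(cond, med, 1)[0]               # slope of mediator ~ condition
    X = np.column_stack([np.ones_like(med), cond, med])
    b = np.linalg.lstsq(X, out, rcond=None)[0][2] # slope of outcome ~ mediator | condition
    return a * b

boot = []
for _ in range(2000):                             # resample participants with replacement
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(condition[idx], centrality[idx], enhancement[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero is the usual evidence offered for mediation of this kind.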

The paper can be downloaded here.

Sunday, July 15, 2018

Should the police be allowed to use genetic information in public databases to track down criminals?

Bob Yirka
Phys.org
Originally posted June 8, 2018

Here is an excerpt:

The authors point out that there is no law forbidding what the police did—the genetic profiles came from people who willingly and of their own accord gave up their DNA data. But should there be? If you send a swab to Ancestry.com, for example, should the genetic profile they create be off-limits to anyone but you and them? It is doubtful that many who take such actions fully consider the ways in which their profile might be used. Most such companies routinely sell their data to pharmaceutical companies or others looking to use the data to make a profit, for example. Should they also be compelled to give up such data due to a court order? The authors suggest that if the public wants their DNA information to remain private, they need to contact their representatives and demand legislation that lays out specific rules for data housed in public databases.

The article is here.

Saturday, July 14, 2018

10 Ways to Avoid False Memories

Christopher Chabris and Daniel Simons
Slate.com
Originally posted February 10, 2018

Here is an excerpt:

No one has, to our knowledge, tried to implant a false memory of being shot down in a helicopter. But researchers have repeatedly created other kinds of entirely false memory in the laboratory. Most famously, Elizabeth Loftus and Jacqueline Pickrell successfully convinced people that, as children, they had once been lost in a shopping mall. In another study, researchers Kimberly Wade, Maryanne Garry, Don Read, and Stephen Lindsay showed people a Photoshopped image of themselves as children, standing in the basket of a hot air balloon. Half of the participants later had either complete or partial false memories, sometimes “remembering” additional details from this event—an event that they never experienced. In a newly published study, Julia Shaw and Stephen Porter used structured interviews to convince 70 percent of their college student participants that they had committed a crime as an adolescent (theft, assault, or assault with a weapon) and that the crime had resulted in police contact. And outside the laboratory, people have fabricated rich and detailed memories of things that we can be almost 100 percent certain did not happen, such as having been abducted and impregnated by aliens.

Even memories for highly emotional events—like the Challenger explosion or the 9/11 attacks—can mutate substantially. As time passes, we can lose the link between things we’ve experienced and the details surrounding them; we remember the gist of a story, but we might not recall whether we experienced the events or just heard about them from someone else. We all experience this failure of “source memory” in small ways: Maybe you tell a friend a great joke that you heard recently, only to learn that he’s the one who told it to you. Or you recall having slammed your hand in a car door as a child, only to get into an argument over whether it happened instead to your sister. People sometimes even tell false stories directly to the people who actually experienced the original events, something that is hard to explain as intentional lying. (Just last month, Brian Williams let his exaggerated war story be told at a public event honoring one of the soldiers who had been there.)

The information is here.

Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by merit? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost pure self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly eroding stories we tell ourselves: law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not (cannot, currently) know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media have ads to sell, after all, and historically they have found great success in doing so with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.

The article is here.

Thursday, July 12, 2018

Learning moral values: Another's desire to punish enhances one's own punitive behavior

FeldmanHall O, Otto AR, Phelps EA.
J Exp Psychol Gen. 2018 Jun 7. doi: 10.1037/xge0000405.

Abstract

There is little consensus about how moral values are learned. Using a novel social learning task, we examine whether vicarious learning impacts moral values, specifically fairness preferences, during decisions to restore justice. In both laboratory and Internet-based experimental settings, we employ a dyadic justice game where participants receive unfair splits of money from another player and respond resoundingly to the fairness violations by exhibiting robust nonpunitive, compensatory behavior (baseline behavior). In a subsequent learning phase, participants are tasked with responding to fairness violations on behalf of another participant (a receiver) and are given explicit trial-by-trial feedback about the receiver's fairness preferences (e.g., whether they prefer punishment as a means of restoring justice). This allows participants to update their decisions in accordance with the receiver's feedback (learning behavior). In a final test phase, participants again directly experience fairness violations. After learning about a receiver who prefers highly punitive measures, participants significantly enhance their own endorsement of punishment during the test phase compared with baseline. Computational learning models illustrate that the acquisition of these moral values is governed by a reinforcement mechanism, revealing that it takes as little as being exposed to the preferences of a single individual to shift one's own desire for punishment when responding to fairness violations. Together this suggests that even in the absence of explicit social pressure, fairness preferences are highly labile.
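The "reinforcement mechanism" the abstract refers to is, in spirit, a prediction-error update. As a minimal sketch (not the authors' fitted model; the learning rate and the 0/1 coding of punitive feedback are assumptions), a delta-rule learner shifts its estimate of the receiver's punishment preference a little toward each trial's feedback:

```python
# Minimal sketch of a delta-rule (Rescorla-Wagner-style) update for learning
# another person's punishment preference from trial-by-trial feedback.
# Illustration only, not the authors' fitted model; the learning rate and
# the 0/1 coding of "prefers punishment" are assumptions.

def update_punishment_belief(belief, feedback, learning_rate=0.2):
    """Nudge the estimated preference toward the observed feedback.

    belief: current estimate that the receiver prefers punishment (0..1)
    feedback: 1 if the receiver endorsed punishment on this trial, else 0
    """
    prediction_error = feedback - belief
    return belief + learning_rate * prediction_error

belief = 0.1  # start near the baseline, compensatory preference
for feedback in [1, 1, 1, 1, 1]:  # five trials of punitive feedback
    belief = update_punishment_belief(belief, feedback)
    print(round(belief, 3))  # 0.28, 0.424, 0.539, 0.631, 0.705
```

After only a handful of consistent punitive trials, the estimate moves well away from the nonpunitive baseline, which mirrors the abstract's point that exposure to a single individual's preferences can be enough to shift one's own.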

The research is here.

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.

(cut)

But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.

Could Moral Enhancement Interventions Be Medically Indicated?

Sarah Carter
Health Care Analysis
December 2017, Volume 25, Issue 4, pp 338–353

Abstract

This paper explores the position that moral enhancement interventions could be medically indicated (and so considered therapeutic) in cases where they provide a remedy for a lack of empathy, when such a deficit is considered pathological. In order to argue this claim, the question as to whether a deficit of empathy could be considered to be pathological is examined, taking into account the difficulty of defining illness and disorder generally, and especially in the case of mental health. Following this, psychopathy and a fictionalised mental disorder (Moral Deficiency Disorder) are explored with a view to considering moral enhancement techniques as possible treatments for both conditions. At this juncture, having asserted and defended the position that moral enhancement interventions could, under certain circumstances, be considered medically indicated, this paper then goes on to briefly explore some of the consequences of this assertion. First, it is acknowledged that this broadening of diagnostic criteria in light of new interventions could fall foul of claims of medicalisation. It is then briefly noted that considering moral enhancement technologies to be akin to therapies in certain circumstances could lead to ethical and legal consequences and questions, such as those regarding regulation, access, and even consent.

The paper is here.

Tuesday, July 10, 2018

The Artificial Intelligence Ethics Committee

Zara Stone
Forbes.com
Originally published June 11, 2018

Here is an excerpt:

Back to the ethics problem: Some sort of bias is sadly inevitable in programming. “We humans all have a bias,” said computer scientist Ehsan Hoque, who leads the Human-Computer Interaction Lab at the University of Rochester. “There’s a study where judges make more favorable decisions after a lunch break. Machines have an inherent bias (as they are built by humans), so we need to empower users in ways to make decisions.”

For instance, Walworth's way of empowering his choices is by being conscious of what AI algorithms show him. “I recommend you do things that are counterintuitive,” he said. “For instance, read a spectrum of news, everything from Fox to CNN and The New York Times, to combat the algorithm that decides what you see.” Use the Cambridge Analytica election scandal as an example here: algorithms dictated what you’d see, how you’d see it, and whether more of the same got shown to you, and they were manipulated by Cambridge Analytica to sway voters.

The move to a consciousness of ethical AI is both a top-down and a bottom-up approach. “There’s a rising field of impact investing,” explained Walworth. “Investors and shareholders are demanding something higher than the bottom line, some accountability with the way they spend and invest money.”

The article is here.