Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, November 14, 2017

What is consciousness, and could machines have it?

Stanislas Dehaene, Hakwan Lau, & Sid Kouider
Science  27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

The article is here.

Facial recognition may reveal things we’d rather not tell the world. Are we ready?

Amitha Kalaichandran
The Boston Globe
Originally published October 27, 2017

Here is an excerpt:

Could someone use a smartphone snapshot, for example, to diagnose another person’s child at the playground? The Face2Gene app is currently limited to clinicians; while anyone can download it from the App Store on an iPhone, it can only be used after the user’s healthcare credentials are verified. “If the technology is widespread,” says Lin, “do I see people taking photos of others for diagnosis? That would be unusual, but people take photos of others all the time, so maybe it’s possible. I would obviously worry about the invasion of privacy and misuse if that happened.”

Humans are pre-wired to discriminate against others based on physical characteristics, and programmers could easily manipulate AI programming to mimic human bias. That’s what concerns Anjan Chatterjee, a neuroscientist who specializes in neuroesthetics, the study of what our brains find pleasing. He has found that, relying on baked-in prejudices, we often quickly infer character just from seeing a person’s face. In a paper slated for publication in Psychology of Aesthetics, Creativity, and the Arts, Chatterjee reports that a person’s appearance — and our interpretation of that appearance — can have broad ramifications in professional and personal settings. This conclusion has serious implications for artificial intelligence.

“We need to distinguish between classification and evaluation,” he says. “Classification would be, for instance, using it for identification purposes like fingerprint recognition. . . which was once a privacy concern but seems to have largely faded away. Using the technology for evaluation would include discerning someone’s sexual orientation or for medical diagnostics.” The latter raises serious ethical questions, he says. One day, for example, health insurance companies could use this information to adjust premiums based on a predisposition to a condition.

The article is here.

Monday, November 13, 2017

Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life

John Danaher
forthcoming in Science and Engineering Ethics

Abstract

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (i) the literature on technological unemployment and workplace automation; (ii) the antiwork critique — which I argue gives reasons to embrace technological unemployment; and (iii) the philosophical debate about the conditions for meaning in life — which I argue gives reasons for concern.

The article is here.
 

Medical Evidence Debated

Ralph Bartholdt
Coeur d’Alene Press 
Originally posted October 27, 2017

Here is an excerpt:

“The point of this is not that he had a choice,” he said. “But what’s been loaded into his system, what he’s making the choices with.”

Thursday’s expert witness, psychologist Richard Adler, further developed the argument that Renfro suffered from a brain disorder evidenced by a series of photograph-like images of Renfro’s brain that showed points of trauma. He pointed out degeneration of white matter responsible for transmitting information from the front to the back of the brain, and shrunken portions on one side of the brain that were not symmetrical with their mirror images on the other side.

Physical evidence consistent with the findings includes Renfro’s choppy speech patterns and mannerisms, as well as his inability to make cognitive connections and his lack of social skills, Adler said.

Defense attorney Jay Logsdon asked whether the images were obtained through a discredited method, one that has “been attacked as junk science.”

The method, quantitative electroencephalography (QEEG), which maps patterns of electrical activity in the brain’s cortex to determine impairment, was attacked in a 1997 article. The article’s criticism still stands today, Adler said.

Throughout the morning and into the afternoon, Adler reiterated his findings, linking them to the defendant’s actions and dovetailing them with the results of other psychological and cognitive tests conducted while Renfro has been incarcerated in the Kootenai County Jail.

The article is here.

Sunday, November 12, 2017

Why You Don’t See the Forest for the Trees When You Are Anxious: Anxiety Impairs Intuitive Decision Making

Carina Remmers and Thea Zander
Clinical Psychological Science
First Published September 27, 2017

Abstract

Intuitive decisions arise effortlessly from an unconscious, associative coherence detection process. In this way, they guide people adaptively through everyday life decision making. When people are anxious, however, they often make poor decisions or no decision at all. Is intuition impaired in a state of anxiety? The aim of the current experiment was to examine this question in a between-subjects design. A total of 111 healthy participants were randomly assigned to an anxious, positive, or neutral multimodal mood induction, after which they performed the established semantic coherence task. This task operationalizes intuition as the sudden, inexplicable detection of environmental coherence, based on automatic, unconscious processes of spreading activation. The current findings show that anxious participants exhibited impaired intuitive performance compared with participants in the positive and neutral mood groups. Trait anxiety did not moderate this effect. Accordingly, holistic, associative processes seem to be impaired by anxiety. Clinical implications and directions for future research are discussed.

The article is here.

Saturday, November 11, 2017

Did I just feed an addiction? Or ease a man’s pain? Welcome to modern medicine’s moral cage fight

Jay Baruch
STAT News
Originally published October 23, 2017

Here are two excerpts:

Will the opioid pills Sonny is asking for treat his pain, feed an addiction, or both? Will prescribing it fulfill my moral responsibility to alleviate his distress, contribute to the supply chain in the illicit pill economy, or both? Prescribing guidelines from the Centers for Disease Control and Prevention and recommendations from medical specialties and local hospitals are well-intentioned and necessary. But they do little to address the central anxiety that makes this decision a source of distress for physicians like me. It’s hard to evaluate pain without making some judgment about the patient and the patient’s story.

(cut)

A good story shortcuts analytical thinking. It can work its charms without our knowledge and sometimes against our better judgment. Once an emotional connection is made and the listener becomes invested in the story, the believability of the story matters less. In fact, the more extreme the story, the greater its capacity to enthrall the listener or reader.

Stories can elicit empathy and influence behavior in part by stimulating the release of the neurotransmitter oxytocin, which has ties to generosity, trustworthiness, and mother-infant bonding. I’m intrigued by the possibility that clinicians’ vulnerability to deceit is often grounded in the empathy they are reported to be lacking.

The article is here.

Friday, November 10, 2017

Court ruling on expert testimony could open door to junk science

Andis Robeznieks
AMA Wire
Originally posted October 20, 2017

The New Jersey Supreme Court is expected to issue a ruling soon that may affect more than 2,000 cases before the state’s courts. The court’s decision could have an even more far-reaching impact and might eventually undermine medical research, patient-physician decision making and informed consent.

The issue is whether courts should admit scientific testimony based on refuted theories that have not been subjected to peer review and that depart from the traditional hierarchy of scientific evidence. The litigation involves plaintiffs who claim their inflammatory bowel disease (IBD) was caused by the drug isotretinoin, marketed as Accutane by Hoffmann-La Roche, formerly headquartered in Nutley, N.J.

The first lawsuit on this matter was filed in July 2003. A hearing on whether the plaintiffs’ witnesses would be allowed to testify was held in February 2015, after which trial Judge Nelson C. Johnson barred their testimony. In May 2015, Johnson dismissed 2,076 related cases based on his ruling about the evidence.

This past July, a three-judge panel of the state appellate court reversed both the ruling barring the testimony and the dismissal of the cases. Hoffmann-La Roche has asked the state Supreme Court to review the ruling.

The press release is here.

Genetic testing of embryos creates an ethical morass

Andrew Joseph
STAT News
Originally published October 23, 2017

Here is an excerpt:

The issue also pokes at a broader puzzle ethicists and experts are trying to reckon with as genetic testing moves out of the lab and further into the hands of consumers. People have access to more information about their own genes — or, in this case, about the genes of their potential offspring — than ever before. But having that information doesn’t necessarily mean it can be used to inform real-life decisions.

A test can tell prospective parents that their embryo has an abnormal number of chromosomes in its cells, for example, but it cannot tell them what kind of developmental delays their child might have, or whether transferring that embryo into a womb will lead to a pregnancy at all. Families and physicians are gazing into five-day-old cells like crystal balls, seeking enlightenment about what might happen over a lifetime. Plus, the tests can be wrong.

“This is a problem that the rapidly developing field of genetics is facing every day and it’s no different with embryos than it is when someone is searching Ancestry.com,” said Judith Daar, a bioethicist and clinical professor at University of California, Irvine, School of Medicine. “We’ve learned a lot, and the technology is marvelous and can be predictive and accurate, but we’re probably at a very nascent stage of understanding the impact of what the genetic findings are on health.”

Preimplantation genetic testing, or PGT, emerged in the 1990s as a way to study the DNA of embryos before they’re transferred to a womb, and the technology has grown more advanced with time. Federal data show it has been used in about 5 percent of IVF procedures going back several years, but many experts pin the figure as high as 20 or 30 percent.

The article is here.

Thursday, November 9, 2017

Morality and Machines

Robert Fry
Prospect
Originally published October 23, 2017

Here is an excerpt:

It is axiomatic that robots are more mechanically efficient than humans; equally they are not burdened with a sense of self-preservation, nor is their judgment clouded by fear or hysteria. But it is that very human fallibility that requires the intervention of the defining human characteristic—a moral sense that separates right from wrong—and explains why the ethical implications of the autonomous battlefield are so much more contentious than the physical consequences. Indeed, an open letter in 2015 seeking to separate AI from military application included the signatures of such luminaries as Elon Musk, Steve Wozniak, Stephen Hawking and Noam Chomsky. For the first time, therefore, human agency may be necessary on the battlefield not to take the vital tactical decisions but to weigh the vital moral ones.

So, who will accept these new responsibilities and how will they be prepared for the task? The first point to make is that none of this is an immediate prospect and it may be that AI becomes such a ubiquitous and beneficial feature of other fields of human endeavour that we will no longer fear its application in warfare. It may also be that morality will co-evolve with technology. Either way, the traditional military skills of physical stamina and resilience will be of little use when machines will have an infinite capacity for physical endurance. Nor will the quintessential commander’s skill of judging tactical advantage have much value when cognitive computing will instantaneously integrate sensor information. The key human input will be to make the judgments that link moral responsibility to legal consequence.

The article is here.