Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, March 15, 2020

Will Past Criminals Reoffend? (Humans Are Terrible at Predicting; Algorithms Somewhat Better)

Sophie Bushwick
Scientific American
Originally published 14 Feb 2020

Here is an excerpt:

Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism. “It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says. But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly. “There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”

Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases. For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense. “Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.” Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper. “Part of the problem with this idea that you're going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.

The info is here.

Saturday, March 14, 2020

You’re Not Going to Kill Them With Kindness. You’ll Do Just the Opposite.

Judith Newman
The New York Times
Originally posted 8 Jan 20

It was New Year’s Eve, and my friends had just adopted a little girl, 4 years old, from China. The family was going around the table, suggesting what each thought the New Year’s resolution should be for the other. Fei Fei’s English was still shaky. When her turn came, though, she didn’t hesitate. She pointed at her new father, mother and sister in turn. “Be nice, be nice, be nice,” she said.

Fifteen years later, in this dark age for civility, a toddler’s cri de coeur resonates more than ever. In his recent remarks at the memorial service for Congressman Elijah Cummings, President Obama said, “Being a strong man includes being kind, and there’s nothing weak about kindness and compassion; nothing weak about looking out for others.” On a more pedestrian level, yesterday I walked into the Phluid Project, the NoHo gender-neutral shop where T-shirts have slogans like “Hatephobic” and “Be Your Self.” I asked the salesperson, “What is your current best seller?” She pointed to a shirt in the window imprinted with the slogan: “Be kind.”

So I’m not surprised that there’s been a little flurry of self-help books on basic human decency and what it will do for you.

Kindness is doing small acts for others without expecting anything in return. It’s the opposite of transactional, and therefore the opposite of what we’re seeing in our body politic today.

The info is here.

Friday, March 13, 2020

DoD unveils how it will keep AI in check with ethics principles

Scott Maucione
federalnewsnetwork.com
Originally posted 25 Feb 20

Here is an excerpt:

The principles are based on recommendations from a 15-month study by the Defense Innovation Board — a panel of science and technology experts from industry and academia.

The principles are as follows:

  1. Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

When Medical Debt Collectors Decide Who Gets Arrested

Lizzie Presser
Propublica.org
Originally posted 16 Oct 19

Here is an excerpt:

Across the country, thousands of people are jailed each year for failing to appear in court for unpaid bills, in arrangements set up much like this one. The practice spread in the wake of the recession as collectors found judges willing to use their broad powers of contempt to wield the threat of arrest. Judges have issued warrants for people who owe money to landlords and payday lenders, who never paid off furniture, or day care fees, or federal student loans. Some debtors who have been arrested owed as little as $28.

More than half of the debt in collections stems from medical care, which, unlike most other debt, is often taken on without a choice or an understanding of the costs. Since the Affordable Care Act of 2010, prices for medical services have ballooned; insurers have nearly tripled deductibles — the amount a person pays before their coverage kicks in — and raised premiums and copays, as well. As a result, tens of millions of people without adequate coverage are expected to pay larger portions of their rising bills.

The sickest patients are often the most indebted, and they’re not exempt from arrest. In Indiana, a cancer patient was hauled away from home in her pajamas in front of her three children; too weak to climb the stairs to the women’s area of the jail, she spent the night in a men’s mental health unit where an inmate smeared feces on the wall. In Utah, a man who had ignored orders to appear over an unpaid ambulance bill told friends he would rather die than go to jail; the day he was arrested, he snuck poison into the cell and ended his life.

The info is here.

Thursday, March 12, 2020

Business gets ready to trip

Jeffrey O'Brien
Forbes.com
Originally posted 17 Feb 20

Here is an excerpt:

The need for a change in approach is clear. “Mental illness” is an absurdly large grab bag of disorders, but taken as a whole, it exacts an astronomical toll on society. The National Institute of Mental Health says nearly one in five U.S. adults lives with some form of it. According to the World Health Organization, 300 million people worldwide have an anxiety disorder. And there’s a death by suicide every 40 seconds—that includes 20 veterans a day, according to the U.S. Department of Veterans Affairs. Almost 21 million Americans have at least one addiction, per the U.S. Surgeon General, and things are only getting worse. The Lancet Commission—a group of experts in psychiatry, public health, neuroscience, etc.—projects that the cost of mental disorders, currently on the rise in every country, will reach $16 trillion by 2030, including lost productivity. The current standard of care clearly benefits some. Antidepressant medication sales in 2017 surpassed $14 billion. But SSRI drugs—antidepressants that boost the level of serotonin in the brain—can take months to take hold; the first prescription is effective only about 30% of the time. Up to 15% of benzodiazepine users become addicted, and adults on antidepressants are 2.5 times as likely to attempt suicide.

Meanwhile, in various clinical trials, psychedelics are demonstrating both safety and efficacy across the terrain. Scientific papers have been popping up like, well, mushrooms after a good soaking, producing data to blow away conventional methods. Psilocybin, the psychoactive ingredient in magic mushrooms, has been shown to cause a rapid and sustained reduction in anxiety and depression in a group of patients with life-threatening cancer. When paired with counseling, it has improved the ability of some patients suffering from treatment-resistant depression to recognize and process emotion on people’s faces. That correlates to reducing anhedonia, or the inability to feel pleasure. The other psychedelic agent most commonly being studied, MDMA, commonly called ecstasy or molly, has in some scientific studies proved highly effective at treating patients with persistent PTSD. In one Phase II trial of 107 patients who’d had PTSD for an average of over 17 years, 56% no longer showed signs of the affliction after one session of MDMA-assisted therapy. Psychedelics are helping to break addictions, as well. A combination of psilocybin and cognitive therapy enabled 80% of one study’s participants to kick cigarettes for at least six months. Compare that with the 35% for the most effective available smoking-cessation drug, varenicline.

The info is here.

Artificial Intelligence in Health Care

M. Matheny, D. Whicher, & S. Israni
JAMA. 2020;323(6):509-510.
doi:10.1001/jama.2019.21579

The promise of artificial intelligence (AI) in health care offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health. Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized health care.  For example, recent innovations in AI have shown high levels of accuracy in imaging and signal detection tasks and are considered among the most mature tools in this domain.

However, there are challenges in realizing the potential for AI in health care. Disconnects between reality and expectations have led to prior precipitous declines in use of the technology, termed AI winters, and another such event is possible, especially in health care.  Today, AI has outsized market expectations and technology sector investments. Current challenges include using biased data for AI model development, applying AI outside of populations represented in the training and validation data sets, disregarding the effects of possible unintended consequences on care or the patient-clinician relationship, and limited data about actual effects on patient outcomes and cost of care.

AI in Healthcare: The Hope, The Hype, The Promise, The Peril, a publication by the National Academy of Medicine (NAM), synthesizes current knowledge and offers a reference document for the responsible development, implementation, and maintenance of AI in the clinical enterprise.  The publication outlines current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; presents an overview of the legal and regulatory landscape for health care AI; urges the prioritization of equity, inclusion, and a human rights lens for this work; and outlines considerations for moving forward. This Viewpoint shares highlights from the NAM publication.

The info is here.

Wednesday, March 11, 2020

Expertise in Child Abuse?

[Image: Dr. Woods, from a YouTube video]
Mike Hixenbaugh & Taylor Mirfendereski
NBCnews.com
Originally posted 14 Feb 20

Here is an excerpt:

Contrary to Woods’ testimony, there are more than 375 child abuse pediatricians certified by the American Board of Pediatrics in the U.S., all of whom have either completed an extensive fellowship program — first offered, not three, but nearly 15 years ago, while Woods was still in medical school — or spent years examining cases of suspected abuse prior to the creation of the medical subspecialty in 2009. The doctors are trained to differentiate accidental from inflicted injuries, which child abuse pediatricians say makes them better qualified than other doctors to determine whether a child has been abused. At least three physicians have met those qualifications and are practicing as board-certified child abuse pediatricians in the state of Washington.

Woods is not one of them.

Despite her lack of fellowship training, state child welfare and law enforcement officials in Washington have granted Woods remarkable influence over their decisions about whether to remove children from parents or pursue criminal charges, NBC News and KING 5 found. In four cases reviewed by reporters, child welfare workers took children from parents based on Woods’ reports — including some in which Woods misstated key facts, according to a review of records — despite contradictory opinions from other medical experts who said they saw no evidence of abuse.

In one instance, a pediatrician, Dr. Niran Al-Agba, insisted that a 2-year-old child’s bruise matched her parents’ description of an accidental fall onto a heating grate in their home. But Child Protective Services workers, who’d gotten a call from the child’s day care after someone noticed the bruise, asked Woods to look at photos of the injury.

Woods reported that the mark was most likely the result of abuse, even though she’d never seen the child in person or talked to the parents. The agency sided with her. To justify that decision, the Child Protective Services worker described Woods as “a physician with extensive training and experience in regard to child abuse and neglect,” according to a written report reviewed by reporters.

The info is here.

The Polarization of Reality

A. Alesina, A. Miano, and S. Stantcheva
American Economic Review Papers and Proceedings

Evidence is growing that Americans are polarized not only in their views on policy issues and attitudes toward government and society, but also in their perceptions of the same factual reality.

In this paper, we conceptualize how to think about the polarization of reality and review recent papers showing that Republicans and Democrats (as well as Trump and non-Trump voters since 2016) view the same reality through a different lens. Perhaps as a result, they hold different views about policies and about what should be done to address various economic and social issues.

The direction of causality is unclear: On the one hand, individuals could select into political affiliation based on their perceptions of reality. On the other hand, political affiliation affects the information one receives, the groups one interacts with, and the media one is exposed to, which in turn can shape perceptions of reality.

Regardless of the direction of causality, though, the issue is not that people hold different attitudes about economic or social phenomena or policies that could justifiably be viewed differently from different angles. What is striking is that they hold different perceptions of realities that can be factually checked.

We highlight evidence of differences in perceptions across the political spectrum on social mobility, inequality, immigration, and public policies. We also show that providing information leads to different reassessments of reality and different responses along the policy-support margin, depending on one's political leanings.

The paper can be downloaded here.

Tuesday, March 10, 2020

Three Unresolved Issues in Human Morality

Jerome Kagan
Perspectives on Psychological Science
First Published March 28, 2018

Abstract

This article discusses three major, but related, controversies surrounding the idea of morality. Is the complete pattern of features defining human morality unique to this species? How context dependent are moral beliefs and the emotions that often follow a violation of a moral standard? What developmental sequence establishes a moral code? This essay suggests that human morality rests on a combination of cognitive and emotional processes that are missing from the repertoires of other species. Second, the moral evaluation of every behavior, whether by self or others, depends on the agent, the action, the target of the behavior, and the context. The ontogeny of morality, which begins with processes that apes possess but adds language, inference, shame, and guilt, implies that humans are capable of experiencing blends of thoughts and feelings for which no semantic term exists. As a result, conclusions about a person’s moral emotions based only on questionnaires or interviews are limited to this evidence.

From the Summary

The human moral sense appears to contain some features not found in any other animal. The judgment of a behavior as moral or immoral, by self or community, depends on the agent, the action, and the setting. The development of a moral code involves changes in both cognitive and affective processes that are the result of maturation and experience. The ideas in this essay have pragmatic implications for psychological research. If most humans want others to regard them as moral agents, and, therefore, good persons, their answers to questionnaires or to interviewers as well as behaviors in laboratories will tend to conform to their understanding of what the examiner regards as the society’s values. That is why investigators should try to gather evidence on the behaviors that their participants exhibit in their usual settings.

The article is here.