Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, July 14, 2018

10 Ways to Avoid False Memories

Christopher Chabris and Daniel Simons
Slate.com
Originally posted February 10, 2018

Here is an excerpt:

No one has, to our knowledge, tried to implant a false memory of being shot down in a helicopter. But researchers have repeatedly created other kinds of entirely false memory in the laboratory. Most famously, Elizabeth Loftus and Jacqueline Pickrell successfully convinced people that, as children, they had once been lost in a shopping mall. In another study, researchers Kimberly Wade, Maryanne Garry, Don Read, and Stephen Lindsay showed people a Photoshopped image of themselves as children, standing in the basket of a hot air balloon. Half of the participants later had either complete or partial false memories, sometimes “remembering” additional details from this event—an event that they never experienced. In a newly published study, Julia Shaw and Stephen Porter used structured interviews to convince 70 percent of their college student participants that they had committed a crime as an adolescent (theft, assault, or assault with a weapon) and that the crime had resulted in police contact. And outside the laboratory, people have fabricated rich and detailed memories of things that we can be almost 100 percent certain did not happen, such as having been abducted and impregnated by aliens.

Even memories for highly emotional events—like the Challenger explosion or the 9/11 attacks—can mutate substantially. As time passes, we can lose the link between things we’ve experienced and the details surrounding them; we remember the gist of a story, but we might not recall whether we experienced the events or just heard about them from someone else. We all experience this failure of “source memory” in small ways: Maybe you tell a friend a great joke that you heard recently, only to learn that he’s the one who told it to you. Or you recall having slammed your hand in a car door as a child, only to get into an argument over whether it happened instead to your sister. People sometimes even tell false stories directly to the people who actually experienced the original events, something that is hard to explain as intentional lying. (Just last month, Brian Williams let his exaggerated war story be told at a public event honoring one of the soldiers who had been there.)

The information is here.

Monday, June 18, 2018

Groundhog Day for Medical Artificial Intelligence

Alex John London
The Hastings Report
Originally published May 26, 2018

Abstract

Following a boom in investment and overinflated expectations in the 1980s, artificial intelligence entered a period of retrenchment known as the “AI winter.” With advances in the field of machine learning and the availability of large datasets for training various types of artificial neural networks, AI is in another cycle of halcyon days. Although medicine is particularly recalcitrant to change, applications of AI in health care have professionals in fields like radiology worried about the future of their careers and have the public tittering about the prospect of soulless machines making life‐and‐death decisions. Medicine thus appears to be at an inflection point—a kind of Groundhog Day on which either AI will bring a springtime of improved diagnostic and predictive practices or the shadow of public and professional fear will lead to six more metaphorical weeks of winter in medical AI.

The brief perspective is here.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
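
As a back-of-the-envelope illustration of this accuracy criterion, here is a minimal sketch in Python. It is not from the article, and the case data and function names are hypothetical; it simply checks whether a system's predictions are at least as accurate as a clinician's on the same held-out cases.

```python
# Minimal sketch (not from the article) of the accuracy-based justification
# Leonard describes. All names and data are hypothetical.

def accuracy(predictions, outcomes):
    """Fraction of predictions that matched the observed outcomes."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

def justified_to_use(system_preds, clinician_preds, outcomes):
    """On this view, a system's predictions are justifiable when they are
    at least as accurate as a trained human's on the same held-out cases.
    (Leonard's full criterion also requires that no readily available
    system makes more accurate predictions.)"""
    return accuracy(system_preds, outcomes) >= accuracy(clinician_preds, outcomes)

# Hypothetical held-out cases: 1 = the drug cured the patient, 0 = it did not.
outcomes        = [1, 0, 1, 1, 0, 1, 0, 1]
system_preds    = [1, 0, 1, 1, 0, 1, 1, 1]   # 7/8 correct
clinician_preds = [1, 0, 0, 1, 0, 1, 1, 1]   # 6/8 correct

print(justified_to_use(system_preds, clinician_preds, outcomes))  # True
```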

The article is here.

Tuesday, April 17, 2018

Planning Complexity Registers as a Cost in Metacontrol

Kool, W., Gershman, S. J., & Cushman, F. A. (in press). Planning complexity registers as a
cost in metacontrol. Journal of Cognitive Neuroscience.

Abstract

Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, while others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains wouldn’t be “worth it”. We test and verify this prediction in a sequential RL task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.
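
The tradeoff the abstract describes lends itself to a toy illustration. The sketch below is not the authors' computational model, and the numbers are hypothetical; it only captures the cost-benefit logic of metacontrol: planning is chosen when its accuracy advantage outweighs its computational cost.

```python
# Toy illustration (hypothetical numbers, not the authors' model) of
# metacontrol as a cost-benefit choice between a cheap habitual controller
# and an accurate but costly planner.

def choose_controller(reward, p_habit, p_plan, planning_cost):
    """Pick whichever controller has the higher expected value net of effort."""
    ev_habit = p_habit * reward                   # model-free: cheap, less accurate
    ev_plan = p_plan * reward - planning_cost     # model-based: accurate, costly
    return "plan" if ev_plan > ev_habit else "habit"

# When the accuracy gain is worth the effort, planning wins...
print(choose_controller(reward=10, p_habit=0.6, p_plan=0.9, planning_cost=1.0))  # plan
# ...but as planning complexity (its cost) rises for the same stakes, habit wins.
print(choose_controller(reward=10, p_habit=0.6, p_plan=0.9, planning_cost=4.0))  # habit
```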

The research is here.

Tuesday, February 20, 2018

This Cat Sensed Death. What if Computers Could, Too?

Siddhartha Mukherjee
The New York Times
Originally published January 3, 2018

Here are two excerpts:

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.

(cut)

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.
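
To make the first excerpt's prediction task concrete, here is a rough sketch of how a patient record might be scored against the three-to-12-month palliative window. It is not Avati's actual code; the dates and the month-length constant are hypothetical simplifications.

```python
# Illustration of the labeling problem described in the first excerpt: not
# Avati's code; the dates and 30.44-day month length are simplifications.

from datetime import date

def in_palliative_window(prediction_date, death_date,
                         min_months=3, max_months=12):
    """True when death falls 3 to 12 months after the prediction date:
    sooner leaves too little preparatory time, later strains resources."""
    if death_date is None:               # patient still alive at follow-up
        return False
    months = (death_date - prediction_date).days / 30.44   # mean month length
    return min_months <= months <= max_months

print(in_palliative_window(date(2016, 1, 1), date(2016, 7, 1)))  # True  (~6 months)
print(in_palliative_window(date(2016, 1, 1), date(2016, 2, 1)))  # False (too soon)
print(in_palliative_window(date(2016, 1, 1), date(2017, 6, 1)))  # False (too late)
```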

The article is here.

Friday, December 1, 2017

Selling Bad Therapy to Trauma Victims

Jonathan Shedler
Psychology Today
Originally published November 19, 2017

Here is the conclusion:

First, do no harm

Many health insurance companies discriminate against psychotherapy. Congress has passed laws mandating mental health “parity” (equal coverage for medical and mental health conditions), but health insurers circumvent them. This has led to class action lawsuits against health insurance companies, but discrimination continues.

One way that health insurers circumvent parity laws is by shunting patients to the briefest and cheapest therapies — just the kind of therapies recommended by the APA’s treatment guidelines. Another way is by making therapy so impersonal and dehumanizing that patients drop out. Health insurers do not publicly say the treatment decisions are driven by economic self-interest. They say the treatments are scientifically proven — and point to treatment guidelines like those just issued by the APA.

It’s bad enough that most Americans don’t have adequate mental health coverage, without also being gaslighted and told that inadequate therapy is the best therapy.

The APA’s ethics code begins, “Psychologists strive to benefit those with whom they work and take care to do no harm.” APA has an honorable history of fighting for patients’ access to good care and against health insurance company abuses.

Blinded by RCT ideology, APA inadvertently handed a trump card to the worst apples in the health insurance industry.

The article is here.

Wednesday, August 23, 2017

Tell it to me straight, doctor: why openness from health experts is vital

Robin Bisson
The Guardian
Originally published August 3, 2017

Here is an excerpt:

It is impossible to overstate the importance of public belief that the medical profession acts in the interests of patients. Any suggestion that public health experts are not being completely open looks at best paternalistic and at worst plays into the hands of those, such as the anti-vaccination lobby, who have warped views about the medical establishment.

So when it comes out that public health messages such as “complete the course” aren’t backed up by evidence, it adds colour to the picture of a paternalistic medical establishment and risks undermining public trust.

Simple public health messages – wear sunscreen, eat five portions of fruit and veg a day – undoubtedly have positive effects on everyone’s health. But people are also capable of understanding nuance and the shifting sands of new evidence. The best way to guarantee people keep trusting experts is for experts to put their trust in people.

The article is here.

Tuesday, September 6, 2016

Truth in stereotypes

Lee Jussim
Aeon Magazine
Originally published August 15, 2016

Here is an excerpt:

These practices created what I call ‘The Myth of Stereotype Inaccuracy’. Famous psychologists declaring stereotypes inaccurate without a citation or evidence meant that anyone could do likewise, creating an illusion that pervasive stereotype inaccuracy was ‘settled science’. Subsequent researchers could declare stereotypes inaccurate and could create the appearance of scientific support by citing articles that also made the claim. Only if one looked for the empirical research underlying such claims did one discover that there was nothing there; just a black hole.

‘But wait!’ you say. ‘Researchers are often defining stereotypes as inaccurate, not declaring them to be empirically inaccurate, and they can define their terms how they choose.’ To which I reply: ‘Are you sure that is the argument you are going to use to defend the viability of “stereotypes are inaccurate”?’

Friday, July 1, 2016

Predicting Suicide Is Not Reliable, According to Recent Study

Matthew Large, M. Kaneson, N. Myles, H. Myles, P. Gunaratne, C. Ryan
PLOS One
Published: June 10, 2016
http://dx.doi.org/10.1371/journal.pone.0156322

Discussion

The pooled estimate from a large and representative body of research conducted over 40 years suggests a statistically strong association between high-risk strata and completed suicide. However, the meta-analysis of the sensitivity of suicide risk categorization found that about half of all suicides are likely to occur in lower-risk groups, and the meta-analysis of PPV suggests that 95% of high-risk patients will not suicide. Importantly, the pooled odds ratio (and the estimates of the sensitivity and PPV) and any assessment of the overall strength of risk assessment should be interpreted very cautiously in the context of several limitations documented below.

With respect to our first hypothesis, the statistical estimates of between-study heterogeneity and the distribution of the outlying, quartile, and median effect size values suggest that the statistical strength of suicide risk assessment cannot be considered consistent between studies, potentially limiting the generalizability of the pooled estimate.

With respect to our second hypothesis, we found no evidence that the statistical strength of suicide risk assessment has improved over time.
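
To make the pooled sensitivity and PPV figures concrete, here is a worked toy example with hypothetical counts chosen to echo the paper's estimates (sensitivity near 50% and PPV near 5%), not its actual data.

```python
# Worked toy example (hypothetical counts, not the paper's data) of the two
# pooled findings: sensitivity near 50% means about half of suicides occur
# in lower-risk groups; PPV near 5% means ~95% of high-risk patients do not
# die by suicide.

def sensitivity(tp, fn):
    """Share of all suicides that occurred in the high-risk stratum."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Share of high-risk patients who died by suicide."""
    return tp / (tp + fp)

tp = 50    # suicides among patients classified high risk
fn = 50    # suicides among patients classified lower risk
fp = 950   # high-risk patients with no suicide

print(f"sensitivity = {sensitivity(tp, fn):.0%}")               # 50%
print(f"PPV         = {positive_predictive_value(tp, fp):.0%}")  # 5%
```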

The research is here.

Wednesday, May 11, 2016

Procedural Moral Enhancement

G. Owen Schaefer and Julian Savulescu
Neuroethics, pp. 1–12
First online: 20 April 2016

Abstract

While philosophers are often concerned with the conditions for moral knowledge or justification, in practice something arguably less demanding is just as, if not more, important – reliably making correct moral judgments. Judges and juries should hand down fair sentences, government officials should decide on just laws, members of ethics committees should make sound recommendations, and so on. We want such agents, more often than not and as often as possible, to make the right decisions. The purpose of this paper is to propose a method of enhancing the moral reliability of such agents. In particular, we advocate for a procedural approach: certain internal processes generally contribute to people’s moral reliability. Building on the early work of Rawls, we identify several particular factors related to moral reasoning that are specific enough to be the target of practical intervention: logical competence, conceptual understanding, empirical competence, openness, empathy and bias. Improving on these processes can in turn make people more morally reliable in a variety of contexts and has implications for recent debates over moral enhancement.

Sunday, March 27, 2016

Reversing the legacy of junk science in the courtroom

By Kelly Servick
Science Magazine
Originally published March 7, 2016

Here is an excerpt:

Testing examiner accuracy using known samples can give the judge or jury a sense of general error rates in a field, but it can’t describe the level of uncertainty around a specific piece of evidence. Right now, only DNA identification includes that measure of uncertainty. (DNA analyses are based on 13 genetic variants, or alleles, that are statistically independent, and known to vary widely among individuals.) Mixtures of genetic material from multiple people can complicate the analysis, but DNA profiling is “a relatively easy statistical problem to solve,” says Nicholas Petraco, an applied mathematician at City University of New York’s John Jay College of Criminal Justice in New York City. Pattern evidence doesn’t operate under the same rules, he says. “What’s an allele on a tool mark?”; “What’s an allele on a hair or fiber?”
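
As a sketch of why DNA identification supports a quantified measure of uncertainty: with statistically independent loci, per-locus genotype frequencies multiply into a single random-match probability under the product rule. The frequencies below are hypothetical, not real allele data.

```python
# Sketch of why DNA identification supports a quantified uncertainty
# estimate: with statistically independent loci, per-locus genotype
# frequencies multiply into one random-match probability (the "product
# rule"). These frequencies are hypothetical, not real allele data.

from math import prod

def random_match_probability(genotype_freqs):
    """Probability that a random, unrelated person matches at every locus."""
    return prod(genotype_freqs)

# Hypothetical genotype frequencies at 13 independent loci.
freqs = [0.08, 0.11, 0.05, 0.09, 0.12, 0.07, 0.10,
         0.06, 0.08, 0.11, 0.09, 0.05, 0.07]

rmp = random_match_probability(freqs)
print(f"random-match probability: about 1 in {1 / rmp:,.0f}")
# No analogous frequencies exist for tool marks or hairs, which is why
# pattern evidence lacks this measure of uncertainty.
```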

The article is here.

Note: This article addresses evidence, such as fingerprints, that can have error. What does this say about neurological or psychological "evidence" in terms of accuracy, validity, and reliability?

Wednesday, July 22, 2015

Researchers Find Everyone Has a Bias Blind Spot

By Shilo Rea
Carnegie Mellon University
Originally published June 8, 2015

Here are two excerpts:

The most telling finding was that everyone is affected by blind spot bias — only one adult out of 661 said that he/she is more biased than the average person. However, they did find that the participants varied in the degree to which they thought they were less biased than others. This was true irrespective of whether they were actually unbiased or biased in their decision-making.

(cut)

“People seem to have no idea how biased they are. Whether a good decision-maker or a bad one, everyone thinks that they are less biased than their peers,” said Carey Morewedge, associate professor of marketing at Boston University. “This susceptibility to the bias blind spot appears to be pervasive, and is unrelated to people’s intelligence, self-esteem, and actual ability to make unbiased judgments and decisions.”

They also found that people with a high bias blind spot are those most likely to ignore the advice of peers or experts, and are least likely to learn from de-biasing training that could improve the quality of their decisions.

The entire article is here.

Thursday, March 19, 2015

Enduring and Emerging Challenges of Informed Consent

Christine Grady, Ph.D.
N Engl J Med 2015; 372:855-862
February 26, 2015
DOI: 10.1056/NEJMra1411250

Here is an excerpt:

A substantial body of literature corroborates a considerable gap between the practice of informed consent and its theoretical construct or intended goals and indicates many unresolved conceptual and practical questions.  Empirical evidence shows variation in the type and level of detail of information disclosed, in patient or research-participant understanding of the information, and in how their decisions are influenced.  Physicians receive little training regarding the practice of informed consent, are pressed for time and by competing demands, and often misinterpret the requirements and legal standards. Patients often have meager comprehension of the risks and alternatives of offered surgical or medical treatments, and their decisions are driven more by trust in their doctor or by deference to authority than by the information provided. Informed consent for research is more tightly regulated and detailed, yet research consent forms continue to increase in length, complexity, and incorporation of legal language, making them less likely to be read or understood. Studies also show that research participants have deficits in their understanding of study information, particularly of research methods such as randomization.

The entire article is here.