Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Prediction.

Friday, November 10, 2017

Genetic testing of embryos creates an ethical morass

Andrew Joseph
STAT News
Originally published October 23, 2017

Here is an excerpt:

The issue also pokes at a broader puzzle ethicists and experts are trying to reckon with as genetic testing moves out of the lab and further into the hands of consumers. People have access to more information about their own genes — or, in this case, about the genes of their potential offspring — than ever before. But having that information doesn’t necessarily mean it can be used to inform real-life decisions.

A test can tell prospective parents that their embryo has an abnormal number of chromosomes in its cells, for example, but it cannot tell them what kind of developmental delays their child might have, or whether transferring that embryo into a womb will lead to a pregnancy at all. Families and physicians are gazing into five-day-old cells like crystal balls, seeking enlightenment about what might happen over a lifetime. Plus, the tests can be wrong.

“This is a problem that the rapidly developing field of genetics is facing every day and it’s no different with embryos than it is when someone is searching Ancestry.com,” said Judith Daar, a bioethicist and clinical professor at University of California, Irvine, School of Medicine. “We’ve learned a lot, and the technology is marvelous and can be predictive and accurate, but we’re probably at a very nascent stage of understanding the impact of what the genetic findings are on health.”

Preimplantation genetic testing, or PGT, emerged in the 1990s as a way to study the DNA of embryos before they’re transferred to a womb, and the technology has grown more advanced with time. Federal data show it has been used in about 5 percent of IVF procedures going back several years, but many experts put the figure as high as 20 or 30 percent.

The article is here.

Wednesday, October 18, 2017

Danny Kahneman on AI versus Humans


NBER Economics of AI Workshop 2017

Here is a rough transcription of an excerpt:

One point made yesterday was the uniqueness of humans when it comes to evaluations. It was called “judgment”. In my own terms, it’s “evaluation of outcomes”: the utility side of the decision function. I really don’t see why that should be reserved for humans.

I’d like to make the following argument:
  1. The main characteristic of people is that they’re very “noisy”. Show them the same stimulus twice and they don’t give you the same response twice. Show them the same choice twice and they don’t make the same choice twice; that’s why we have stochastic choice theory, because there is so much variability in people’s choices given the same stimuli.
  2. What can be done, even without AI, is a program that observes an individual. That program will be better than the individual and will make better choices for the individual, because it will be noise-free.
  3. We know an interesting tidbit from the literature on prediction that Colin cited: if you take clinicians and have them predict some criterion a large number of times, and you then develop a simple equation that predicts not the outcome but the clinician’s judgment, that model does better at predicting the outcome than the clinician does.
That is fundamental. It is telling you that one of the major limitations on human performance is not bias, it is just noise.
I’m maybe partly responsible for this, but when people now talk about error, bias tends to be the first explanation that comes to mind. Well, there is bias, and it is an error. But in fact most of the errors people make are better viewed as random noise. And there’s an awful lot of it.
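Kahneman's "model of the judge" result can be illustrated with a small simulation. Everything below is a hypothetical sketch with invented cue weights and noise levels, not data from the studies he cites: a noisy clinician is regressed on the case features, and the resulting noise-free model predicts the criterion better than the clinician does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 5000, 3
X = rng.normal(size=(n_cases, n_cues))       # case features the clinician sees

# Hypothetical weights: how the cues actually drive the outcome.
w_true = np.array([0.5, 0.3, 0.2])
outcome = X @ w_true + rng.normal(scale=0.5, size=n_cases)

# The clinician weighs roughly the right cues but adds trial-to-trial noise.
w_clin = np.array([0.6, 0.25, 0.15])
judgment = X @ w_clin + rng.normal(scale=0.8, size=n_cases)

# "Model of the judge": regress the clinician's judgments on the cues.
coef, *_ = np.linalg.lstsq(X, judgment, rcond=None)
model_pred = X @ coef                        # the clinician, minus the noise

r_clin = np.corrcoef(judgment, outcome)[0, 1]
r_model = np.corrcoef(model_pred, outcome)[0, 1]
# r_model exceeds r_clin: stripping out the noise improves the prediction.
```

Because the model reproduces the clinician's cue weights without the random scatter, its predictions correlate more strongly with the criterion, which is the kind of finding Kahneman is describing.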

The entire transcript and target article is here.

Monday, June 5, 2017

AI May Hold the Key to Stopping Suicide

Bahar Gholipour
NBC News
Originally posted May 23, 2017

Here is an excerpt:

So far the results are promising. Using AI, Ribeiro and her colleagues were able to predict whether someone would attempt suicide within the next two years at about 80 percent accuracy, and within the next week at 92 percent accuracy. Their findings were recently reported in the journal Clinical Psychological Science.

This high level of accuracy was possible because of machine learning, as researchers trained an algorithm by feeding it anonymous health records from 3,200 people who had attempted suicide. The algorithm learns patterns through examining combinations of factors that lead to suicide, from medication use to the number of ER visits over many years. Bizarre factors may pop up as related to suicide, such as acetaminophen use a year prior to an attempt, but that doesn't mean taking acetaminophen can be isolated as a risk factor for suicide.
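The excerpt describes the general supervised-learning recipe: encode each patient's record as features, label whether an attempt followed, and fit a classifier that is scored on held-out accuracy. Here is a minimal sketch with synthetic data and invented features; the article does not specify which model Ribeiro's team used, so plain logistic regression stands in for it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patients = 2000
# Invented, standardized record features (e.g. ER visits, medication changes,
# prior diagnoses); the real feature set is far richer.
X = rng.normal(size=(n_patients, 3))
true_w = np.array([1.5, -1.0, 2.0])           # hypothetical risk weights
p = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p)                        # 1 = attempt within the window

# Logistic regression fit by gradient descent on the mean log-loss gradient.
w = np.zeros(3)
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (pred - y)) / n_patients

# Fraction of patients whose predicted risk lands on the correct side of 0.5.
acc = np.mean(((1 / (1 + np.exp(-(X @ w)))) > 0.5) == y)
```

On this synthetic data the classifier recovers the risk weights and classifies well above chance; the 80 and 92 percent figures in the article are the analogous evaluation numbers on real health records.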

"As humans, we want to understand what to look for," Ribeiro says. "But this is like asking what's the most important brush stroke in a painting."

With funding from the Department of Defense, Ribeiro aims to create a tool that can be used in clinics and emergency rooms to better find and help high-risk individuals.

The article is here.

Friday, March 3, 2017

Doctors suffer from the same cognitive distortions as the rest of us

Michael Lewis
Nautilus
Originally posted February 9, 2017

Here are two excerpts:

What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling is that the mistakes were predictable and systematic. They seemed ingrained in human nature. One passage in particular stuck with him—about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.” This wasn’t just about how many words in the English language started with the letter K. This was about life and death.

(cut)

Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

The article is here.

Monday, February 6, 2017

Misguided mental health system needs an overhaul

Jim Gottstein
Alaska Dispatch News
Originally posted January 12, 2017

The glaring failures surrounding Esteban Santiago, resulting in the tragic killing of five people and the wounding of eight others in Fort Lauderdale, Florida, prompt me to make some points about our misguided mental health system.

First, psychiatrists have no ability to predict who is going to be violent. In a Jan. 3, 2013, Washington Post article, "Predicting violence is a work in progress," writer David Brown, after reviewing the research, reported:

• "There is no instrument that is specifically useful or validated for identifying potential school shooters or mass murderers."

• "The best-known attempt to measure violence in mental patients found that mental illness by itself didn't predict an above-average risk of being violent."

• "(S)tudies have shown psychiatrists' accuracy in identifying patients who would become violent was slightly better than chance."

• "(T)he presence of a mental disorder (is) only a small contributor to risk, outweighed by other factors such as age, previous violent acts, alcohol use, impulsivity, gang membership and lack of family support."

The article is here.

Monday, January 30, 2017

Finding trust and understanding in autonomous technologies

David Danks
The Conversation
Originally published December 30, 2016

Here is an excerpt:

Autonomous technologies are rapidly spreading beyond the transportation sector, into health care, advanced cyberdefense and even autonomous weapons. In 2017, we’ll have to decide whether we can trust these technologies. That’s going to be much harder than we might expect.

Trust is complex and varied, but also a key part of our lives. We often trust technology based on predictability: I trust something if I know what it will do in a particular situation, even if I don’t know why. For example, I trust my computer because I know how it will function, including when it will break down. I stop trusting if it starts to behave differently or surprisingly.

In contrast, my trust in my wife is based on understanding her beliefs, values and personality. More generally, interpersonal trust does not involve knowing exactly what the other person will do – my wife certainly surprises me sometimes! – but rather why they act as they do. And of course, we can trust someone (or something) in both ways, if we know both what they will do and why.

I have been exploring possible bases for our trust in self-driving cars and other autonomous technology from both ethical and psychological perspectives. These are devices, so predictability might seem like the key. Because of their autonomy, however, we need to consider the importance and value – and the challenge – of learning to trust them in the way we trust other human beings.

The article is here.

Friday, January 20, 2017

Why is everyone talking about algorithms?

Discover Society
Originally published January 3, 2017

Here is an excerpt:

The notion of the algorithm though, is also becoming really quite powerful in its own right. The very notion of the algorithm has taken on a life of its own, especially in the popular media. Algorithms are becoming the shadowy figures that in some way embody our wider fears and concerns. The visions we have of algorithms chime with broader feelings of a loss of control, of accelerated lives that are speeding away from us, of our inability to cope with the unmanageable information that we are exposed to, or the feeling that our lives are governed for us and that we have less discretion, autonomy or voice.

The talk about algorithms is a product of the powerful role of algorithms in our lives, but the talk around algorithms also seems to tap into broader concerns about powerlessness and the limitations placed on our discretion and choice. The algorithm is coming to embody the sense of life as out of our control. Algorithms are evoked to speak to these fears and concerns. This is not to say that they don’t have material influences on our lives, they clearly have powerful consequences. But the idea of the algorithm is also now a powerful presence, jumping out suddenly from the mass of code within which everyday life is lived to give us the occasional fright or to remind us of our sense of limited autonomy.

The article is here.

Wednesday, January 4, 2017

Actuaries are bringing Netflix-like predictive modeling to health care

By Gary Gau
STAT News
Originally published on December 13, 2016

Here is an excerpt:

In today’s ever-changing landscape, the health actuary is part clinician, epidemiologist, health economist, and statistician. He or she combines financial, operational, and clinical data, such as information from electronic medical records, pharmacy use, and lab results, to provide insights on both individual patients and overall population health.

I see a future where predictive modeling helps health care companies not only suggest healthy behaviors but also convince patients and consumers to adopt them. Predictive modeling techniques can be applied to information that can influence an individual’s decision to use preventive care, accurately take prescribed medication, book a doctor appointment, lose weight, or become more physically active.

The trick will be identifying the trigger that gets him or her to act.

Insurers must understand their patient populations, including the barriers they face to achieving better health. To create solutions, insurers must first understand the psychology of motivation and what leads individuals to change their behavior. That’s where the precision approach comes into play.

The article is here.

Monday, December 5, 2016

The Simple Economics of Machine Intelligence

Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Harvard Business Review
Originally published November 17, 2016

Here are two excerpts:

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This matters because prediction is an input to a host of activities including transportation, agriculture, healthcare, energy, manufacturing, and retail.

When the cost of any input falls so precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise.

(cut)

As machine intelligence improves, the value of human prediction skills will decrease because machine prediction will provide a cheaper and better substitute for human prediction, just as machines did for arithmetic. However, this does not spell doom for human jobs, as many experts suggest. That’s because the value of human judgment skills will increase. Using the language of economics, judgment is a complement to prediction and therefore when the cost of prediction falls demand for judgment rises. We’ll want more human judgment.

The article is here.

Wednesday, July 13, 2016

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures

By Mitch Smith
The New York Times
Originally published June 23, 2016

Here is an excerpt:

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

Company officials say the algorithm’s results are backed by research, but they are tight-lipped about its details. They do acknowledge that men and women receive different assessments, as do juveniles, but the factors considered and the weight given to each are kept secret.

“The key to our product is the algorithms, and they’re proprietary,” said Jeffrey Harmon, Northpointe’s general manager. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business. It’s not about looking at the algorithms. It’s about looking at the outcomes.”

The article is here.

Tuesday, June 14, 2016

There’s No Such Thing as Free Will

By Stephen Cave
The Atlantic
Originally posted June 2016

Here are two excerpts:

The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable. If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy.

(cut)

The big problem, in Harris’s view, is that people often confuse determinism with fatalism. Determinism is the belief that our decisions are part of an unbreakable chain of cause and effect. Fatalism, on the other hand, is the belief that our decisions don’t really matter, because whatever is destined to happen will happen—like Oedipus’s marriage to his mother, despite his efforts to avoid that fate.

The article is here.

Sunday, January 18, 2015

Why the Myers-Briggs test is totally meaningless

By Joseph Stromberg
Vox
Published on January 5, 2015

The Myers-Briggs Type Indicator is probably the most widely used personality test in the world.

An estimated 2 million people take it annually, at the behest of corporate HR departments, colleges, and even government agencies. The company that makes and markets the test makes somewhere around $20 million each year.

The only problem? The test is completely meaningless.

"There's just no evidence behind it," says Adam Grant, an organizational psychologist at the University of Pennsylvania who's written about the shortcomings of the Myers-Briggs previously. "The characteristics measured by the test have almost no predictive power on how happy you'll be in a situation, how you'll perform at your job, or how happy you'll be in your marriage."

The entire article is here.

Saturday, December 20, 2014

Bioethics in 2025: what will be the challenges?

Deborah Bowman, Professor of Bioethics, Clinical Ethics and Medical Law at St. George’s University of London

Sarah Chan, Research Fellow in Bioethics and Law and Deputy Director of the Institute for Science, Ethics and Innovation at the University of Manchester

Molly Crockett, Associate Professor of Experimental Psychology at the University of Oxford

Gill Haddow, Senior Research Fellow in Science, Technology and Innovation Studies at the University of Edinburgh

For its 2014 annual public lecture, the Nuffield Council on Bioethics had four speakers from different disciplines present their take on what will be the main challenges in and for bioethics in the near future. Topics touched on included how to make bioethics more open and inclusive as a discipline; what role for bioethicists in meeting future societal challenges; whether we will be able to develop a 'morality pill' in near future; and how it might feel for people to have electronic or other material transplanted into them in the future to help their bodies cope with longer lives.


Monday, November 24, 2014

Psychologist paying $550,000 settlement in toddler’s death

By Tom Jackman
The Washington Post
Originally published November 8, 2014

The mother of a 15-month-old boy who died while on a visit to his father in Manassas in 2012 will be paid a $550,000 wrongful death settlement from the psychologist who testified that it was safe to leave the boy with his father, Joaquin Rams.

The settlement was entered in Fairfax Circuit Court on Oct. 17, the same day that Prince William County prosecutors, who are seeking to prove that Rams killed his son, revealed that Virginia’s chief medical examiner had changed the official ruling on the cause of death from drowning to “undetermined.”

The entire article is here.

Tuesday, June 3, 2014

Are Psychologists Violating their Ethics Code by Conducting Death Penalty Evaluations for Defendants with Mental Disabilities?

By Celia Fisher
The Center for Ethics Education
Originally posted on May 17, 2014

Imagine you are a forensic psychologist asked during the sentencing phase of a capital punishment case to assess the mental status of a homeless, African American defendant convicted of murder. Your evaluation report states that the defendant has an IQ and adaptive living score bordering on a diagnosis of intellectual disability, but the absence of educational and health records from childhood prevents you from definitively stating that he fits the Supreme Court’s definition of “mental retardation,” which would preclude the jury from recommending the death penalty. Subsequently, the defendant is sentenced to death.

The entire article is here.

Wednesday, March 26, 2014

Alzheimer's Blood Test Raises Ethical Questions

By Jon Hamilton
NPR News
Originally posted March 9, 2014

Here is an excerpt:

But the biggest concern about Alzheimer's testing probably has to do with questions of stigma and identity, Karlawish says. "How will other people interact with you if they learn that you have this information?" he says. "And how will you think about your own brain and your sort of sense of self?"

The stigma and fear surrounding Alzheimer's may decrease, though, as our understanding of the disease changes, Karlawish says. Right now, people still tend to think that "Either you have Alzheimer's disease dementia or you're normal, you don't have it," he says.

The entire story is here.

Sunday, April 28, 2013

College admission questions rarely identify criminal behavior

University of Colorado
Press Release
April 16, 2013

A new study shows that neither criminal background checks nor pre-admission screening questions accurately predict students likely to commit crime on college campuses.

"In an effort to reduce campus crime, more than half of all American colleges ask applicants about their criminal histories or require criminal background checks," said study author Carol Runyan, Ph.D., MPH, professor of epidemiology at the Colorado School of Public Health. "But there is no real evidence to show this reduces campus crime."

Colleges across the U.S. ramped up background checks after the 2007 Virginia Tech massacre, which killed 32 people and wounded another 17.

Yet Runyan found that only 3.3 percent of college seniors who engaged in misconduct actually reported precollege criminal histories during the admissions process. And just 8.5 percent of applicants with a criminal history were charged with misconduct during college.

The study surveyed 6,972 students at a large southern university. It found that students with criminal records prior to college were more likely to commit crimes once admitted but the screening process rarely identified them.

"We didn't look at cheating or minor alcohol offences," Runyan said. "We focused on significant offences like assault, robbery, property crimes, driving under the influence of alcohol, marijuana use and other drug-related crimes."

While colleges are generally safe environments, students can be both perpetrators and victims of crimes that pose risks to the entire campus community, Runyan said.

She noted that earlier studies had reported that up to 14 percent of all college men admitted to some kind of sexual assault or coercion while 30 percent of university males and 22 percent of females said they had driven under the influence of alcohol in the last year. Also, 19 percent of students reported illicit drug use.

Still, the screening questions have proven a weak tool in identifying would-be campus criminals, Runyan said.
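Treating the reported percentages as screening-test metrics makes the weakness concrete. This is a back-of-the-envelope reconstruction; the excerpt gives only the two percentages, not raw counts, and the per-1,000 framing is illustrative.

```python
# Percentages reported in the press release, read as screening metrics.
sensitivity = 0.033  # seniors with misconduct who had disclosed a record
ppv = 0.085          # applicants with a record later charged with misconduct

# Per 1,000 students who will go on to commit serious misconduct,
# the admissions question flags only about 33; the rest pass unflagged.
flagged = 1000 * sensitivity

# Per 1,000 applicants who disclose a criminal history, only about 85
# are ever charged with misconduct in college; ~915 never are.
eventual_offenders = 1000 * ppv
```

Both numbers sit far from any threshold at which denying admission on that basis would be defensible, which is the study's point.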

Runyan's findings indicate that students who engage in criminal activity during college are more likely to have engaged in misconduct prior to college, whether they admit it on their applications or not. However, she said current screening questions on the college application often fail to detect which students will engage in misconduct during college. And most of those who have records before college don't seem to continue the behaviors in college.

Even if the screenings could identify likely troublemakers, Runyan said, colleges would have to decide whether to admit the students given that the odds of them committing a crime on campus would still be low. And much of the reported precollege crime involves marijuana use and is not violent.

Another complication is possible discrimination. Students from more affluent backgrounds, who tend to be white, can often pay to have their early criminal records expunged while others, including many minorities, can't afford it.

"Based on our work, I cannot say with confidence that colleges should stop asking about criminal backgrounds, but I would use caution in thinking that this is the best strategy to address crime on campus," said Runyan who directs the University of Colorado's Pediatric Injury Prevention, Education and Research Program. "We need to ensure a safe and supportive environment for all students rather than limiting college access for students who may need extra help."

The study was recently published in the journal Injury Prevention and will be presented by Runyan at a conference in June.

Monday, April 8, 2013

Brain Scans Might Predict Future Criminal Behavior

Science Daily
Originally published March 28, 2013

A new study conducted by The Mind Research Network in Albuquerque, N.M., shows that neuroimaging data can predict the likelihood of whether a criminal will reoffend following release from prison.

The paper, which is to be published in the Proceedings of the National Academy of Sciences, studied impulsive and antisocial behavior and centered on the anterior cingulate cortex (ACC), a portion of the brain that deals with regulating behavior and impulsivity.

The study demonstrated that inmates with relatively low anterior cingulate activity were twice as likely to reoffend as inmates with high activity in this brain region.

"These findings have incredibly significant ramifications for the future of how our society deals with criminal justice and offenders," said Dr. Kent A. Kiehl, who was senior author on the study and is director of mobile imaging at MRN and an associate professor of psychology at the University of New Mexico.

The entire story is here.

The journal abstract is here.

Monday, March 11, 2013

Why Failing Med Students Don’t Get Failing Grades

By Pauline Chen
The New York Times
Originally published February 28, 2013

Here are some excerpts:

Medical educators have long understood that good doctoring, like ducks, elephants and obscenity, is easy to recognize but difficult to quantify. And nowhere is the need to catalog those qualities more explicit, and charged, than in the third year of medical school, when students leave the lecture halls and begin to work with patients and other clinicians in specialty-based courses referred to as “clerkships.” In these clerkships, students are evaluated by senior doctors and ranked on their nascent doctoring skills, with the highest-ranking students going on to the most competitive training programs and jobs.

A student’s performance at this early stage, the traditional thinking went, would be predictive of how good a doctor she or he would eventually become.

But in the mid-1990s, a group of researchers decided to examine grading criteria and asked directors of internal medicine clerkship courses across the country how accurate and consistent they believed their grading to be.

The entire story is here.

Friday, March 8, 2013

Evaluations of Dangerousness among those Adjudicated Not Guilty by Reason of Insanity

Edited by Christina M. Finello, JD, PhD
American Psychology Law Society
Winter 2013 News

In many states, following an indeterminate period of hospitalization, individuals adjudicated Not Guilty by Reason of Insanity (hereafter called “acquittees” despite different international legal terminology) are typically discharged under conditional release with provisions for ongoing monitoring and recommitment (Packer & Grisso, 2011). Studies have identified factors associated with conditional release, recommitment, and reoffending in this population. However, few studies have evaluated whether risk assessment measures could assist in predicting recommitment to forensic hospitals.

A number of static factors may be associated with decisions to retain or conditionally release acquittees. For example, Callahan and Silver (1998) found that female acquittees, those with diagnoses other than schizophrenia, and those who committed non-violent offenses were released most often. Additionally, low psychopathy and older age at one’s first criminal offense increased the likelihood of release (Manguno-Mire, Thompson, Bertman-Pate, Burnett, & Thompson, 2007). Dynamic and protective variables also influence decisions of retention versus release. For example, researchers identified that acquittees’ treatment compliance and responsiveness, substance use, risk of violence, and availability of structured activities in the community are relevant to release decisions (McDermott, Edens, Quanbeck, Busse, & Scott, 2008; Stredny, Parker, & Dibble, 2012).

Decisions regarding release versus retention involve determinations of future dangerousness (Jones v. United States, 1983), highlighting the relevance of violence risk assessment measures. However, available data do not indicate a strong relationship between scores on risk assessment measures and dispositional decisions. For example, McKee, Harris, and Rice (2007) observed that scores on the Violence Risk Appraisal Guide (VRAG; Quinsey, Harris, Rice, & Cormier, 1998) predicted clinicians’ recommendations for retention versus transfer from a maximum security facility, but did not predict the ultimate decisions. Côté, Crocker, Nicholls, and Seto (2012) reported that the factors of the Historical, Clinical, Risk Management-20 (HCR-20; Webster, Douglas, Eaves, & Hart, 1997) identified by researchers corresponded poorly (if at all) with those raised by evaluators in review hearings, the exceptions being previous violence, presence of major mental illness, substance use problems, active symptoms of major mental illness, and unresponsiveness to treatment.

The entire article can be found here.