Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Prediction.

Monday, July 8, 2019

Prediction Models for Suicide Attempts and Deaths: A Systematic Review and Simulation

Bradley Belsher, Derek Smolenski, Larry Pruitt, and others
JAMA Psychiatry. 2019;76(6):642-651.
doi:10.1001/jamapsychiatry.2019.0174

Abstract
Importance  Suicide prediction models have the potential to improve the identification of patients at heightened suicide risk by using predictive algorithms on large-scale data sources. Suicide prediction models are being developed for use across enterprise-level health care systems including the US Department of Defense, US Department of Veterans Affairs, and Kaiser Permanente.

Objectives
To evaluate the diagnostic accuracy of suicide prediction models in predicting suicide and suicide attempts and to simulate the effects of implementing suicide prediction models using population-level estimates of suicide rates.

Evidence Review
A systematic literature search was conducted in MEDLINE, PsycINFO, Embase, and the Cochrane Library to identify research evaluating the predictive accuracy of suicide prediction models in identifying patients at high risk for a suicide attempt or death by suicide. Each database was searched from inception to August 21, 2018. The search strategy included search terms for suicidal behavior, risk prediction, and predictive modeling. Reference lists of included studies were also screened. Two reviewers independently screened and evaluated eligible studies.

Findings
From a total of 7306 abstracts reviewed, 17 cohort studies met the inclusion criteria, representing 64 unique prediction models across 5 countries with more than 14 million participants. The research quality of the included studies was generally high. Global classification accuracy was good (≥0.80 in most models), while the predictive validity associated with a positive result for suicide mortality was extremely low (≤0.01 in most models). Simulations of the results suggest very low positive predictive values across a variety of population assessment characteristics.

Conclusions and Relevance
To date, suicide prediction models produce accurate overall classification models, but their accuracy of predicting a future event is near 0. Several critical concerns remain unaddressed, precluding their readiness for clinical applications across health systems.
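
The near-zero predictive value is largely a base-rate effect: when an outcome is very rare, even a classifier with good sensitivity and specificity flags mostly false positives. A minimal sketch, using assumed illustrative values rather than figures from the review:

```python
# Sketch: why positive predictive value (PPV) collapses for rare outcomes.
# Sensitivity, specificity, and base rates below are illustrative assumptions,
# not values reported in the JAMA Psychiatry review.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(event | positive flag), via Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

for base_rate in (0.0001, 0.0005, 0.002):   # rare outcomes, e.g. 1-20 per 10,000
    ppv = positive_predictive_value(0.80, 0.80, base_rate)
    print(f"base rate {base_rate:.4%} -> PPV {ppv:.4f}")

# Even with "good" global classification accuracy (0.80/0.80), the PPV stays
# well below 1%, which is the pattern the review's simulations describe.
```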

Monday, June 3, 2019

IVF couples could be able to choose the ‘smartest’ embryo

Hannah Devlin
TheGuardian.com
Originally posted May 24, 2019

Couples undergoing IVF treatment could be given the option to pick the “smartest” embryo within the next 10 years, a leading US scientist has predicted.

Stephen Hsu, senior vice president for research at Michigan State University, said scientific advances mean it will soon be feasible to reliably rank embryos according to potential IQ, posing profound ethical questions for society about whether or not the technology should be adopted.

Hsu’s company, Genomic Prediction, already offers a test aimed at screening out embryos with abnormally low IQ to couples being treated at fertility clinics in the US.

“Accurate IQ predictors will be possible, if not the next five years, the next 10 years certainly,” Hsu told the Guardian. “I predict certain countries will adopt them.”

Genomic Prediction’s tests are not currently available in the UK, but the company is planning to submit an application to the Human Fertilisation and Embryology Authority by the end of the year, initially to offer a test for risk of type 1 diabetes.

The info is here.

Thursday, May 23, 2019

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Saturday, March 9, 2019

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi
AMA J Ethics. 2019;21(2):E167-179.
doi: 10.1001/amajethics.2019.167.

Abstract

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems’ data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status.

Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy, and therefore machine bias, are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.
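
The kind of audit the authors describe, comparing prediction accuracy across demographic or payer groups, can be sketched in a few lines. The data, column names, and error rates below are synthetic placeholders, not the study's variables:

```python
# Sketch: auditing a classifier for group-level accuracy gaps.
# All data and column names are synthetic, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "insurance": rng.choice(["private", "public"], size=n),
    "outcome": rng.integers(0, 2, size=n),
})
# Hypothetical model whose predictions are noisier for one group.
flip_rate = np.where(df["insurance"] == "public", 0.30, 0.15)
flip = rng.random(n) < flip_rate
df["predicted"] = np.where(flip, 1 - df["outcome"], df["outcome"])

# Accuracy computed separately per subgroup; a sizeable gap would flag
# the sort of disparate impact the paper's framework is meant to surface.
per_group_accuracy = (df["predicted"] == df["outcome"]).groupby(df["insurance"]).mean()
print(per_group_accuracy)
```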

Tuesday, December 11, 2018

Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?

Nicole Martinez-Martin, Laura B. Dunn, and Laura Weiss Roberts
AMA J Ethics. 2018;20(9):E804-811.
doi: 10.1001/amajethics.2018.804.

Abstract

Machine learning is a method for predicting clinically relevant variables, such as opportunities for early intervention, potential treatment response, prognosis, and health outcomes. This commentary examines the following ethical questions about machine learning in a case of a patient with new onset psychosis: (1) When is clinical innovation ethically acceptable? (2) How should clinicians communicate with patients about the ethical issues raised by a machine learning predictive model?

(cut)

Conclusion

In order to implement the predictive tool in an ethical manner, Dr K will need to carefully consider how to give appropriate information—in an understandable manner—to patients and families regarding use of the predictive model. In order to maximize benefits from the predictive model and minimize risks, Dr K and the institution as a whole will need to formulate ethically appropriate procedures and protocols surrounding the instrument. For example, implementation of the predictive tool should consider the ability of a physician to override the predictive model in support of ethically or clinically important variables or values, such as beneficence. Such measures could help realize the clinical application potential of machine learning tools, such as this psychosis prediction model, to improve the lives of patients.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest-profile examples: Facebook may be able to infer protected attributes such as sexual orientation and race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on eligibility for loans and to infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.

The info is here.

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
forbes.com
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments – based on factors such as age, criminal record, and employment – will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments – which have existed for a long time – have been used in other areas such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.

The info is here.

Sunday, November 4, 2018

When Tech Knows You Better Than You Know Yourself

Nicholas Thompson
www.wired.com
Originally published October 4, 2018

Here is an excerpt:

Hacking a Human

NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.

YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do. You can understand how they feel and you can, of course, once you understand and predict, you can usually also manipulate and control and even replace. And of course it can't be done perfectly and it was possible to do it to some extent also a century ago. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us, they will never be perfect. There is no such thing as understanding perfectly everything or predicting everything. You don't need perfect, you just need to be better than the average human being.

If you have an hour, please watch the video.

Tuesday, August 7, 2018

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Sunday, August 5, 2018

How Do Expectations Shape Perception?

Floris P. de Lange, Micha Heilbron, & Peter Kok
Trends in Cognitive Sciences
Available online 29 June 2018

Abstract

Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.

Highlights

  • Expectations play a strong role in determining the way we perceive the world.
  • Prior expectations can originate from multiple sources of information, and correspondingly have different neural sources, depending on where in the brain the relevant prior knowledge is stored.
  • Recent findings from both human neuroimaging and animal electrophysiology have revealed that prior expectations can modulate sensory processing at both early and late stages, and both before and after stimulus onset. The response modulation can take the form of either dampening the sensory representation or enhancing it via a process of sharpening.
  • Theoretical computational frameworks of neural sensory processing aim to explain how the probabilistic integration of prior expectations and sensory inputs results in perception.
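
As a toy illustration of the probabilistic integration the authors discuss, here is a minimal sketch of precision-weighted combination of a Gaussian prior expectation with a Gaussian sensory likelihood; the numbers are arbitrary and this is a textbook simplification, not a model from the review:

```python
# Sketch: Bayesian combination of a Gaussian prior and a Gaussian likelihood.
# Illustrative values only; not a model from the Trends in Cognitive Sciences review.

def combine_gaussians(prior_mean, prior_var, sensory_mean, sensory_var):
    """Posterior of two Gaussians: a precision-weighted average of the means."""
    prior_precision = 1.0 / prior_var
    sensory_precision = 1.0 / sensory_var
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    posterior_mean = posterior_var * (
        prior_precision * prior_mean + sensory_precision * sensory_mean
    )
    return posterior_mean, posterior_var

# A precise prior (low variance) pulls the percept toward the expectation;
# reliable sensory input (low variance) pulls it toward the stimulus.
print(combine_gaussians(prior_mean=0.0, prior_var=1.0, sensory_mean=2.0, sensory_var=4.0))
print(combine_gaussians(prior_mean=0.0, prior_var=4.0, sensory_mean=2.0, sensory_var=1.0))
```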

Wednesday, June 27, 2018

Understanding Moral Preferences Using Sentiment Analysis

Capraro, Valerio and Vanzo, Andrea
(May 28, 2018).

Abstract

Behavioral scientists have shown that people are not solely motivated by the economic consequences of the available actions, but they also care about the actions themselves. Several models have been proposed to formalize this preference for "doing the right thing". However, a common limitation of these models is their lack of predictive power: given the instructions of a decision problem, they fail to make clear predictions of people's behavior. Here, we show that, at least in simple cases, the overall qualitative pattern of behavior can be predicted reasonably well using a Computational Linguistics technique, known as Sentiment Analysis. The intuition is that people are reluctant to take actions that evoke negative emotions, and are eager to take actions that stimulate positive emotions. To show this point, we conduct an economic experiment in which decision-makers either get 50 cents and another person gets nothing, or the opposite: the other person gets 50 cents and the decision-maker gets nothing. We experimentally manipulate the wording describing the available actions using six words, from very negative (e.g., stealing) to very positive (e.g., donating) connotations. In agreement with our theory, we show that sentiment polarity has a U-shaped effect on pro-sociality. We also propose a utility function that can qualitatively predict the observed behavior, as well as previously reported framing effects. Our results suggest that building bridges from behavioral sciences to Computational Linguistics can help improve our understanding of human decision making.
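
As a rough approximation of the paper's approach, an off-the-shelf sentiment analyzer can score how positively or negatively an action verb reads. The sketch below uses NLTK's VADER as a stand-in; the word list is illustrative and is not the authors' exact stimulus set or tooling:

```python
# Sketch: scoring the sentiment polarity of action verbs with NLTK's VADER.
# The verbs listed here are illustrative, not the paper's six stimulus words.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

action_verbs = ["stealing", "taking", "transferring", "giving", "donating"]
for verb in action_verbs:
    compound = sia.polarity_scores(verb)["compound"]  # -1 (negative) .. +1 (positive)
    print(f"{verb:>12}: {compound:+.3f}")

# The intuition being tested: framing the same monetary transfer with a
# negatively vs. positively connoted verb should shift how pro-social people are.
```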

The research is here.

Wednesday, June 6, 2018

The LAPD’s Terrifying Policing Algorithm: Yes It’s Basically ‘Minority Report’

Dan Robitzski
Futurism.com
Originally posted May 11, 2018

The Los Angeles Police Department was recently forced to release documents about their predictive policing and surveillance algorithms, thanks to a lawsuit from the Stop LAPD Spying Coalition (which turned the documents over to In Justice Today). And what do you think the documents have to say?

If you guessed “evidence that policing algorithms, which require officers to keep a checklist of (and keep an eye on) 12 people deemed most likely to commit a crime, are continuing to propagate a vicious cycle of disproportionately high arrests of black Angelenos, as well as other racial minorities,” you guessed correctly.

Algorithms, no matter how sophisticated, are only as good as the information that’s provided to them. So when you feed an AI data from a city where there’s a problem of demonstrably, mathematically racist over-policing of neighborhoods with concentrations of people of color, and then have it tell you who the police should be monitoring, the result will only be as great as the process. And the process? Not so great!
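
The dynamic described here, in which skewed historical records steer patrols and the patrols in turn skew future records, can be made concrete with a small simulation. All numbers are invented purely to illustrate the loop and are not drawn from the released LAPD documents:

```python
# Sketch: how a predictive-policing feedback loop can perpetuate a disparity
# in *recorded* crime even when underlying rates are identical. Illustrative only.
true_rate = {"A": 0.05, "B": 0.05}     # identical underlying crime rates
recorded = {"A": 120, "B": 80}         # historical over-policing of neighborhood A
patrol_budget = 100

for year in range(5):
    total = sum(recorded.values())
    # The model allocates patrols in proportion to past recorded crime.
    patrols = {k: patrol_budget * v / total for k, v in recorded.items()}
    # More patrols in a neighborhood means more of its crime gets recorded.
    for k in recorded:
        recorded[k] += true_rate[k] * patrols[k] * 20
    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: share of recorded crime in A = {share_a:.2%}")

# Although the true rates are equal, neighborhood A's share of recorded crime
# stays inflated, so the model keeps directing patrols (and arrests) there.
```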

The article is here.

Sunday, May 27, 2018

​The Ethics of Neuroscience - A Different Lens



New technologies are allowing us to have control over the human brain like never before. As we push the possibilities we must ask ourselves, what is neuroscience today and how far is too far?

The world’s best neurosurgeons can now provide treatments for things that were previously untreatable, such as Parkinson’s and clinical depression. Many patients are cured, while others develop side effects such as erratic behaviour and changes in their personality. 

Not only do we have greater understanding of clinical psychology, forensic psychology and criminal psychology, we also have more control. Professional athletes and gamers are now using this technology – some of it untested – to improve performance. However, with these amazing possibilities come great ethical concerns.

This manipulation of the brain has far-reaching effects, impacting the law, marketing, health industries and beyond. We need to investigate the capabilities of neuroscience and ask the ethical questions that will determine how far we can push the science of mind and behaviour.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making then, what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
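
One way to make the author's "sufficiently accurate" criterion operational is a straightforward benchmark against a trained-human baseline. The data and numbers below are hypothetical and only illustrate the comparison, not how any real system is validated:

```python
# Sketch: justifying a prediction engine by benchmarking it against a
# trained-human baseline rather than by explaining its internals.
# Labels and predictions here are hypothetical toy data.
from statistics import mean

def accuracy(predictions, outcomes):
    return mean(int(p == o) for p, o in zip(predictions, outcomes))

outcomes         = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
system_predicted = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]
human_predicted  = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

system_acc = accuracy(system_predicted, outcomes)
human_acc = accuracy(human_predicted, outcomes)

# The article's minimal bar: at least as accurate as a trained human,
# with no readily available system that predicts better.
justified = system_acc >= human_acc
print(f"system {system_acc:.0%}, human {human_acc:.0%}, justified: {justified}")
```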

The article is here.

Sunday, May 13, 2018

Facebook Uses AI To Predict Your Future Actions for Advertisers

Sam Biddle
The Intercept
Originally posted April 13, 2018

Here is an excerpt:

Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.

The article is here.

Friday, April 13, 2018

Computer Says "No": Part 1- Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the court room to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation.  One automatable task that’s received a lot of attention of late is decision-making.  So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail.  But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concerns.  In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear just what we’re talking about, as the term “automated decision” is incredibly misleading.  It suggests that a computer is making a decision, when in reality this is rarely the case.  What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer.  In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan.  A human will then use this prediction to make a decision about whether or not to grant a defendant bail or give an individual a credit card.  When described like this, it seems somewhat absurd to say that these systems are making decisions.  I therefore suggest that we call them what they actually are: prediction engines.
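
The division of labour Leonard describes, in which the system emits a probability and a human applies a decision rule to it, can be sketched in a few lines; the model, features, and thresholds below are hypothetical placeholders:

```python
# Sketch: separating the "prediction engine" from the human decision.
# The scoring formula, features, and thresholds are hypothetical placeholders.

def predicted_default_risk(applicant: dict) -> float:
    """Stand-in prediction engine: returns a probability, not a decision."""
    score = 0.3 * applicant["missed_payments"] - 0.000005 * applicant["income"]
    return min(max(0.05 + score, 0.0), 1.0)

def loan_officer_decision(risk: float, notes: str) -> bool:
    """The human decision: the prediction is one input among others."""
    if "verified new employment" in notes:
        return risk < 0.5          # the officer may accept a borderline risk
    return risk < 0.2

applicant = {"missed_payments": 1, "income": 42000}
risk = predicted_default_risk(applicant)
print(f"predicted risk: {risk:.2f}")
print("grant loan:", loan_officer_decision(risk, notes="verified new employment"))
```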

Tuesday, February 20, 2018

This Cat Sensed Death. What if Computers Could, Too?

Siddhartha Mukherjee
The New York Times
Originally published January 3, 2017

Here are two excerpts:

But what if an algorithm could predict death? In late 2016 a graduate student named Anand Avati at Stanford’s computer-science department, along with a small team from the medical school, tried to “teach” an algorithm to identify patients who were very likely to die within a defined time window. “The palliative-care team at the hospital had a challenge,” Avati told me. “How could we find patients who are within three to 12 months of dying?” This window was “the sweet spot of palliative care.” A lead time longer than 12 months can strain limited resources unnecessarily, providing too much, too soon; in contrast, if death came less than three months after the prediction, there would be no real preparatory time for dying — too little, too late. Identifying patients in the narrow, optimal time period, Avati knew, would allow doctors to use medical interventions more appropriately and more humanely. And if the algorithm worked, palliative-care teams would be relieved from having to manually scour charts, hunting for those most likely to benefit.
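
The "sweet spot" labelling Avati describes reduces to a simple window test on dates, as in the sketch below; the dates and code are an illustrative reconstruction, not the Stanford team's pipeline:

```python
# Sketch: flag deaths that fall 3 to 12 months after the prediction date,
# the "sweet spot" for palliative-care referral described in the article.
# Dates are invented; this is not the Stanford team's actual code.
from datetime import date, timedelta
from typing import Optional

def in_palliative_window(prediction_date: date, death_date: Optional[date]) -> bool:
    if death_date is None:                       # still alive / censored
        return False
    lead_time = death_date - prediction_date
    return timedelta(days=90) <= lead_time <= timedelta(days=365)

print(in_palliative_window(date(2016, 11, 1), date(2017, 5, 1)))   # ~6 months -> True
print(in_palliative_window(date(2016, 11, 1), date(2016, 12, 1)))  # < 3 months -> False
print(in_palliative_window(date(2016, 11, 1), None))               # no death observed -> False
```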

(cut)

So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.

The article is here.

Tuesday, January 30, 2018

Your Brain Creates Your Emotions

Lisa Feldman Barrett
TED Talk
Published December 2017

Can you look at someone's face and know what they're feeling? Does everyone experience happiness, sadness and anxiety the same way? What are emotions anyway? For the past 25 years, psychology professor Lisa Feldman Barrett has mapped facial expressions, scanned brains and analyzed hundreds of physiology studies to understand what emotions really are. She shares the results of her exhaustive research -- and explains how we may have more control over our emotions than we think.

Thursday, January 11, 2018

Is Blended Intelligence the Next Stage of Human Evolution?

Richard Yonck
TED Talk
Published December 8, 2017

What is the future of intelligence? Humanity is still an extremely young species and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, most especially during the past couple of centuries or so. Yet, it would be short-sighted of us to assume our species has reached the end of our journey, having become as intelligent as we will ever be. On the contrary, it seems far more likely that if we should survive our “infancy,” there is probably much more time ahead of us than there is looking back. If that’s the case, then our descendants of only a few thousand years from now will probably be very, very different from you and me.


Monday, January 1, 2018

What I Was Wrong About This Year

David Leonhardt
The New York Times
Originally posted December 24, 2017

Here is an excerpt:

But I’ve come to realize that I was wrong about a major aspect of probabilities.

They are inherently hard to grasp. That’s especially true for an individual event, like a war or election. People understand that if they roll a die 100 times, they will get some 1’s. But when they see a probability for one event, they tend to think: Is this going to happen or not?

They then effectively round to 0 or to 100 percent. That’s what the Israeli official did. It’s also what many Americans did when they heard Hillary Clinton had a 72 percent or 85 percent chance of winning. It’s what football fans did in the Super Bowl when the Atlanta Falcons had a 99 percent chance of victory.

And when the unlikely happens, people scream: The probabilities were wrong!

Usually, they were not wrong. The screamers were wrong.
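
The point about single-event probabilities is easy to check with a quick simulation; the 72 percent figure comes from the excerpt, while the code itself is just an illustration:

```python
# Sketch: a 72% favorite still loses roughly 28 times in 100.
# The forecast is not "wrong" when the less likely outcome happens.
import random

random.seed(42)
trials = 100_000
win_probability = 0.72          # figure cited in the excerpt
losses = sum(random.random() >= win_probability for _ in range(trials))
print(f"favorite lost {losses / trials:.1%} of simulated events")   # about 28%
```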

The article is here.