Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Predictions.

Wednesday, December 27, 2023

This algorithm could predict your health, income, and chance of premature death

Holly Barker
Science.org
Originally published 18 DEC 23

Here is an excerpt:

The researchers trained the model, called “life2vec,” on every individual’s life story between 2008 and 2016, and the model sought patterns in these stories. Next, they used the algorithm to predict whether someone on the Danish national registers had died by 2020.

The model’s predictions were accurate 78% of the time. It identified several factors associated with a greater risk of premature death, including having a low income, having a mental health diagnosis, and being male. The model’s misses typically involved deaths from accidents or heart attacks, which are difficult to predict.

Although the results are intriguing—if a bit grim—some scientists caution that the patterns might not hold true for non-Danish populations. “It would be fascinating to see the model adapted using cohort data from other countries, potentially unveiling universal patterns, or highlighting unique cultural nuances,” says Youyou Wu, a psychologist at University College London.

Biases in the data could also confound its predictions, she adds. (The overdiagnosis of schizophrenia among Black people could cause algorithms to mistakenly label them at a higher risk of premature death, for example.) That could have ramifications for things such as insurance premiums or hiring decisions, Wu adds.


Here is my summary:

A new algorithm, trained on a mountain of Danish life stories, can peer into your future with unsettling precision. It can predict your health, income, and even your odds of an early demise. The feat, achieved by analyzing sequences of life events such as getting a job or falling ill, raises both possibilities and ethical concerns.

On one hand, imagine the potential for good: nudges towards healthier habits or financial foresight, tailored to your personal narrative. On the other, anxieties around bias and discrimination loom. We must ensure this powerful tool is used wisely, for the benefit of all, lest it exacerbate existing inequalities or create new ones. The algorithm’s gaze into the future, while remarkable, is just that – a glimpse, not a script. 
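
The excerpt gives the gist of how life2vec works: each life is encoded as a sequence of registry events, and a model is trained to find mortality-relevant patterns in those sequences. As a rough illustration of that recipe (a simplified bag-of-events stand-in with invented data, not the authors' transformer-based model), a sketch might look like this:

```python
# Illustrative sketch only: invented events and labels, and a simple
# bag-of-events classifier rather than the paper's transformer model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each "life story" is a sequence of event tokens drawn from registry data.
lives = [
    "job_start diag_depression income_low hospital_visit",
    "education_degree job_start income_high marriage",
    "income_low diag_schizophrenia unemployment",
    "job_start income_high promotion",
] * 50  # toy corpus; the real study used millions of Danish records

died_by_2020 = [1, 0, 1, 0] * 50  # toy outcome labels

# Bag-of-events features ignore event order; life2vec itself learns
# order-aware embeddings, which is part of what it contributes.
X = CountVectorizer().fit_transform(lives)
X_tr, X_te, y_tr, y_te = train_test_split(X, died_by_2020, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```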

Saturday, May 7, 2022

Mathematical model offers clear-cut answers to how morals will change over time

The Institute for Futures Studies
Phys.org
Originally posted 13 APR 2022

Researchers at the Institute for Futures Studies in Stockholm, Sweden, have created a mathematical model to predict changes in moral opinion. It predicts that views on corporal punishment of children, abortion rights, and how parental leave should be shared between parents will all move in a liberal direction in the U.S. Results from a first test of the model, using data from large opinion surveys continuously conducted in the U.S., are promising.

Corporal punishment of children, such as spanking or paddling, is still widely accepted in the U.S. But public opinion is changing rapidly, and in the United States and elsewhere around the world, this norm will soon become a marginal position. The right to abortion is currently being threatened through a series of court cases—but though change is slow, the view of abortion as a right will eventually come to dominate. A majority of Americans today reject the claim that parental leave should be equally shared between parents, but within 15 years, public opinion will flip, and a majority will support an equal division.

"Almost all moral issues are moving in the liberal direction. Our model is based on large opinion surveys continuously conducted in the U.S., but our method for analyzing the dynamics of moral arguments to predict changing public opinion on moral issues can be applied anywhere," says social norm researcher Pontus Strimling, a research leader at the Institute for Futures Studies, who together with mathematician Kimmo Eriksson and statistician Irina Vartanova conducted the study that will be published in the journal Royal Society Open Science on Wednesday, April 13th.


From the Discussion

Overall, this study shows that moral opinion change can, to some extent, be predicted, even under unusually volatile circumstances. Note that the prediction method used in this paper is quite rudimentary: it relies only on a very simple survey measure of each opinion's argument advantage and on historical opinion data to calibrate a parameter for converting such measures into predicted change rates. Given that the direction of change is predicted entirely from surveys about argument advantage, it is remarkable that the direction was correctly predicted in two-thirds of the cases (three-quarters if issues related to singular events are excluded). Even so, the method can probably be improved.

Predicting how U.S. public opinion on moral issues will change from 2018 to 2020 and beyond, Royal Society Open Science (2022).
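
The prediction logic described in the discussion is simple enough to sketch: a survey-measured argument advantage for each issue is converted into a predicted rate of opinion change through a single calibrated parameter. In the sketch below, the advantage values and the calibration constant are invented for illustration; the published model calibrates against historical U.S. survey data.

```python
# Minimal sketch of the argument-advantage method (hypothetical numbers).
def predicted_annual_change(argument_advantage: float, k: float) -> float:
    """Convert a survey-measured argument advantage (liberal minus
    conservative, scaled to [-1, 1]) into a predicted yearly shift in
    the share holding the liberal position, in percentage points."""
    return k * argument_advantage

k = 0.8  # calibration parameter, fitted to historical opinion change
issues = {
    "oppose corporal punishment": 0.45,
    "abortion as a right": 0.20,
    "equal parental leave": 0.30,
}
for issue, advantage in issues.items():
    rate = predicted_annual_change(advantage, k)
    direction = "liberal" if rate > 0 else "conservative"
    print(f"{issue}: {direction} shift, ~{rate:.1f} pp/year")
```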

Sunday, August 29, 2021

A New Era of Designer Babies May Be Based on Overhyped Science

Laura Hercher
Scientific American
Originally published 12 July 21

Here is an excerpt:

Current polygenic risk scores have limited predictive strength and reflect the shortcomings of genetic databases, which are overwhelmingly Eurocentric. Alicia Martin, an instructor at Massachusetts General Hospital and the Broad Institute of the Massachusetts Institute of Technology and Harvard University, says her research examining polygenic risk scores suggests “they don’t transfer well to other populations that have been understudied.” In fact, the National Institutes of Health announced in mid-June that it will be giving out $38 million in grants over five years to find ways to enhance disease prediction in diverse populations using polygenic risk scores. Speaking of Orchid, Martin says, “I think it is premature to try to roll this out.”

In an interview about embryo screening and ethics featured on the company’s Web site, Jonathan Anomaly, a University of Pennsylvania bioethicist, suggested the current biases are a problem to be solved by getting customers and doing the testing. “As I understand it,” he said, “Orchid is actively building statistical models to improve ancestry adaptation and adjustments for genetic risk scores, which will increase accessibility of the product to all individuals.”

Still, better data sets will not allay all concerns about embryo selection. The combined expense of testing and IVF means that unequal access to these technologies will continue to be an issue. In her Mendelspod interview, Siddiqui insisted, “We think that everyone who wants to have a baby should be able to, and we want our technology to be as accessible to everyone who wants it,” adding that the lack of insurance coverage for IVF is a major problem that needs to be addressed in the U.S.

But should insurance companies pay for fertile couples to embryo-shop? This issue is complicated, especially in light of the fact that polygenic risk scores can generate predictions for more than just heart disease and cancer. They can be devised for any trait with a heritable component, and existing models offer predictions for educational attainment, neuroticism and same-sex sexual behavior, all with the same caveats and limitations as Orchid’s current tests for major diseases. To be clear, tests for these behavioral traits are not part of Orchid’s current genetic panel. But when talking about tests the company does offer, Siddiqui suggested that the ultimate decision makers should be the parents-to-be. “I think at the end of the day, you have to respect patient autonomy,” she said.
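
For readers unfamiliar with the arithmetic behind these tests, a polygenic risk score is essentially a weighted sum of risk-allele counts, with weights estimated from genome-wide association studies. The toy version below (invented variants and weights, not Orchid's pipeline) also shows why such scores inherit whatever biases the training cohorts carry: the weights themselves come from the reference data.

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# Effect sizes are invented; real scores use thousands to millions of
# variants with weights estimated from GWAS reference cohorts, which is
# where Eurocentric training data introduces bias.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype: dict) -> float:
    """genotype maps variant id -> risk-allele count (0, 1, or 2)."""
    return sum(w * genotype.get(v, 0) for v, w in effect_sizes.items())

embryo_a = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
embryo_b = {"rs0001": 0, "rs0002": 2, "rs0003": 1}
print("embryo A:", polygenic_score(embryo_a))
print("embryo B:", polygenic_score(embryo_b))
```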

Wednesday, May 26, 2021

Before You Answer, Consider the Opposite Possibility—How Productive Disagreements Lead to Better Outcomes

Ian Leslie
The Atlantic
Originally published 25 Apr 21

Here is an excerpt:

This raises the question of how a wise inner crowd can be cultivated. Psychologists have investigated various methods. One, following Stroop, is to harness the power of forgetting. Reassuringly for those of us who are prone to forgetting, people with poor working memories have been shown to have a wiser inner crowd; their guesses are more independent of one another, so they end up with a more diverse set of estimates and a more accurate average. The same effect has been achieved by spacing the guesses out in time.

More sophisticated methods harness the mind’s ability to inhabit different perspectives and look at a problem from more than one angle. People generate more diverse estimates when prompted to base their second or third guess on alternative assumptions; one effective technique is simply asking people to “consider the opposite” before giving a new answer. A fascinating recent study in this vein harnesses the power of disagreement itself. A pair of Dutch psychologists, Philippe Van de Calseyde and Emir Efendić, asked people a series of questions with numerical answers, such as the percentage of the world’s airports located in the U.S. Then they asked participants to think of someone in their life with whom they often disagreed—that uncle with whom they always argue about politics—and to imagine what that person would guess.

The respondents came up with second estimates that were strikingly different from their first estimate, producing a much more accurate inner crowd. The same didn’t apply when they were asked to imagine how someone they usually agree with would answer the question, which suggests that the secret is to incorporate the perspectives of people who think differently from us. That the respondents hadn’t discussed that particular question with their disagreeable uncle did not matter. Just the act of thinking about someone with whom they argued a lot was enough to jog them out of habitual assumptions.
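
The statistical intuition here is that averaging two guesses helps most when their errors are independent or oppositely biased, which is what imagining a disagreeing other appears to induce. A toy simulation (invented numbers, not the study's data) makes the point:

```python
# Toy "inner crowd" simulation with invented error distributions.
import random

random.seed(0)
truth, n = 30.0, 10_000
e_first = e_agree = e_disagree = 0.0

for _ in range(n):
    g1 = truth + random.gauss(5, 10)           # first guess, shared bias
    g_agree = g1 + random.gauss(0, 3)          # like-minded guess hugs g1
    g_disagree = truth + random.gauss(-5, 10)  # opposite bias, fresh error
    e_first += abs(g1 - truth)
    e_agree += abs((g1 + g_agree) / 2 - truth)
    e_disagree += abs((g1 + g_disagree) / 2 - truth)

print(f"first guess alone:       {e_first / n:.2f}")
print(f"averaged with agreer:    {e_agree / n:.2f}")
print(f"averaged with disagreer: {e_disagree / n:.2f}")
```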

Monday, January 25, 2021

Late Payments, Credit Scores May Predict Dementia

Judy George
MedPage Today
Originally posted 30 Nov 20

Problems paying bills and managing personal finances were evident years before a dementia diagnosis, retrospective data showed.

As early as 6 years before they were diagnosed with dementia, people with Alzheimer's disease and related dementias were more likely to miss credit account payments than their peers without dementia (7.7% vs 7.3%; absolute difference 0.4 percentage points, 95% CI 0.07-0.70), reported Lauren Hersch Nicholas, PhD, MPP, of Johns Hopkins University in Baltimore, and co-authors.

They also were more likely to develop subprime credit scores 2.5 years before their dementia diagnosis (8.5% vs 8.1%; absolute difference 0.38 percentage points, 95% CI 0.04-0.72), the researchers wrote in JAMA Internal Medicine.

Higher payment delinquency and subprime credit rates persisted for at least 3.5 years after a dementia diagnosis.

"Our study provides the first large-scale evidence of the financial symptoms of Alzheimer's disease and related dementias using administrative financial records," Nicholas said.

"These results are important because they highlight a new source of data -- consumer credit reports -- that can help detect early signs of Alzheimer's disease," she told MedPage Today. "While doctors have long believed that dementia presents in the checkbook, our study helps show that these financial symptoms are common and span years before and after diagnosis, suggesting unmet need for assistance managing money."
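
The figures reported above are standard two-proportion comparisons. As a sketch of how such an absolute difference and confidence interval are computed, here is a minimal version with hypothetical sample sizes (the excerpt doesn't give the study's cohort counts, so the interval below won't exactly match the published one):

```python
# Two-proportion risk difference with a Wald 95% CI.
# Sample sizes are hypothetical; the CI width depends on them.
from math import sqrt

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Return (difference, lower, upper) in percentage points."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return 100 * diff, 100 * (diff - z * se), 100 * (diff + z * se)

# missed-payment rates: dementia group vs. comparison group
diff, lo, hi = risk_difference_ci(0.077, 30_000, 0.073, 200_000)
print(f"difference: {diff:.2f} pp (95% CI {lo:.2f} to {hi:.2f})")
```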

Thursday, December 3, 2020

The psychologist rethinking human emotion

David Shariatmadari
The Guardian
Originally posted 25 Sept 20

Here is an excerpt:

Barrett’s point is that if you understand that “fear” is a cultural concept, a way of overlaying meaning on to high arousal and high unpleasantness, then it’s possible to experience it differently. “You know, when you have high arousal before a test, and your brain makes sense of it as test anxiety, that’s a really different feeling than when your brain makes sense of it as energised determination,” she says. “So my daughter, for example, was testing for her black belt in karate. Her sensei was a 10th degree black belt, so this guy is like a big, powerful, scary guy. She’s having really high arousal, but he doesn’t say to her, ‘Calm down’; he says, ‘Get your butterflies flying in formation.’ That changed her experience. Her brain could have made anxiety, but it didn’t, it made determination.”

In the lectures Barrett gives to explain this model, she talks of the brain as a prisoner in a dark, silent box: the skull. The only information it gets about the outside world comes via changes in light (sight), air pressure (sound), exposure to chemicals (taste and smell), and so on. It doesn’t know the causes of these changes, and so it has to guess at them in order to decide what to do next.

How does it do that? It compares those changes to similar changes in the past, and makes predictions about the current causes based on experience. Imagine you are walking through a forest. A dappled pattern of light forms a wavy black shape in front of you. You’ve seen many thousands of images of snakes in the past, and you know that snakes live in the forest. Your brain has already set in train an array of predictions.

The point is that this prediction-making is consciousness, which you can think of as a constant rolling process of guesses about the world being either confirmed or proved wrong by fresh sensory inputs. In the case of the dappled light, as you step forward you get information that confirms a competing prediction that it’s just a stick: the prediction of a snake was ultimately disproved, but not before it grew so strong that neurons in your visual cortex fired as though one was actually there, meaning that for a split second you “saw” it. So we are all creating our world from moment to moment. If you didn’t, your brain wouldn’t be able to make the changes necessary for your survival quickly enough. If the prediction “snake” wasn’t already in train, then the shot of adrenaline you might need in order to jump out of its way would come too late.
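
Barrett's dark-box account maps naturally onto Bayesian updating: the brain holds a prior over possible causes and revises it with each new sensory cue. The toy numbers below are invented purely to show the shape of that computation for the snake-or-stick example:

```python
# Toy Bayesian reading of the snake-or-stick story (invented numbers).
def posterior(prior: dict, likelihood: dict) -> dict:
    """Update a prior over hypotheses given per-hypothesis likelihoods."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"snake": 0.05, "stick": 0.95}       # forests hold far more sticks
wavy_shape = {"snake": 0.90, "stick": 0.30}  # P(wavy dark shape | cause)
closer_look = {"snake": 0.05, "stick": 0.90} # P(bark texture | cause)

after_glimpse = posterior(prior, wavy_shape)
after_step = posterior(after_glimpse, closer_look)
print("after the glimpse:", after_glimpse)   # snake belief spikes briefly
print("after stepping closer:", after_step)  # prediction corrected: a stick
```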

Friday, February 28, 2020

Slow response times undermine trust in algorithmic (but not human) predictions

E Efendic, P van de Calseyde, & A Evans
PsyArXiv Preprints
Last edited 22 Jan 20

Abstract

Algorithms consistently perform well on various prediction tasks, but people often mistrust their advice. Here, we demonstrate one component that affects people’s trust in algorithmic predictions: response time. In seven studies (total N = 1928 with 14,184 observations), we find that people judge slowly generated predictions from algorithms as less accurate and they are less willing to rely on them. This effect reverses for human predictions, where slowly generated predictions are judged to be more accurate. In explaining this asymmetry, we find that slower response times signal the exertion of effort for both humans and algorithms. However, the relationship between perceived effort and prediction quality differs for humans and algorithms. For humans, prediction tasks are seen as difficult and effort is therefore positively correlated with the perceived quality of predictions. For algorithms, however, prediction tasks are seen as easy and effort is therefore uncorrelated with the quality of algorithmic predictions. These results underscore the complex processes and dynamics underlying people’s trust in algorithmic (and human) predictions and the cues that people use to evaluate their quality.

General discussion 

When are people reluctant to trust algorithm-generated advice? Here, we demonstrate that it depends on the algorithm’s response time. People judged slowly (vs. quickly) generated predictions by algorithms as being of lower quality. Further, people were less willing to use slowly generated algorithmic predictions. For human predictions, we found the opposite: people judged slow human-generated predictions as being of higher quality. Similarly, they were more likely to use slowly generated human predictions. 

We find that the asymmetric effects of response time can be explained by different expectations of task difficulty for humans vs. algorithms. For humans, slower responses were congruent with expectations; the prediction task was presumably difficult so slower responses, and more effort, led people to conclude that the predictions were high quality. For algorithms, slower responses were incongruent with expectations; the prediction task was presumably easy so slower speeds, and more effort, were unrelated to prediction quality. 

The research is here.

Friday, February 7, 2020

People Who Second-Guess Themselves Make Worse Decisions

Christopher Ingraham
The Washington Post
Originally posted 9 Jan 20

Here is an excerpt:

The researchers specifically wanted to know whether the revisions were more accurate than the originals.

In theory, there are a lot of reasons to believe this might be the case. A person would presumably revise a prediction after obtaining new information, such as an analyst’s match forecast or a team roster change.

In practice, however, the opposite was true: Revised forecasts accurately predicted the final match score 7.7 percent of the time. But the unaltered forecasts were correct 9.3 percent of the time.

In other words, revised forecasts were about 17 percent less accurate than those that had never changed.
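
[To spell out the arithmetic behind that figure: the 17 percent is the relative drop in hit rate.]

```python
# Relative accuracy drop implied by the two hit rates quoted above.
revised, unrevised = 7.7, 9.3  # percent of forecasts exactly right
print(f"{(unrevised - revised) / unrevised:.1%} less accurate")  # 17.2%
```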

(cut)

So where did the second-guessers go wrong? For starters, the researchers controlled for match-to-match and player-to-player variation — it isn’t likely the case, in other words, that matches receiving more revisions were more difficult to predict, or that bad guessers were more likely to revise their forecasts.

The researchers found that revisions were more likely to go awry when forecasters dialed up the scores — by going, say, from predicting a 2-1 final score to 3-2. Indeed, across the data set, the bettors systematically underestimated the likelihood of a 0-0 draw: an outcome anticipated 1.5 percent of the time that actually occurs in 8.4 percent of matches.

The info is here.

Wednesday, December 11, 2019

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

The info is here.

Wednesday, May 22, 2019

Why Behavioral Scientists Need to Think Harder About the Future

Ed Brandon
www.behavioralscientist.org
Originally published January 17, 2019

Here is an excerpt:

It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.

As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.

The info is here.

Friday, October 5, 2018

Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm

Camillo Lamanna and Lauren Byrne
AMA J Ethics. 2018;20(9):E902-910.

Abstract

A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision as an output, the autonomy algorithm. We suggest that the proposed algorithm would result in more accurate predictions than existing methods, which are resource intensive and consider only small patient cohorts. This algorithm could become a valuable tool in medical decision-making processes, augmenting the capacity of all people to make health care decisions in difficult situations.
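
The paper proposes the idea rather than a specific implementation, but the pipeline it describes (patient data in, calibrated confidence estimate out) can be sketched. Everything below, including the features, the model choice, and the data, is hypothetical:

```python
# Hypothetical sketch of an "autonomy algorithm" pipeline: features
# derived from EHR and social-media data in, calibrated probability of
# consent out. Data, features, and model choice are all invented.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))  # toy stand-ins for patient features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

# Calibration matters: the output is meant to be read as a confidence
# estimate about a particular patient, not just a class label.
clf = CalibratedClassifierCV(RandomForestClassifier(random_state=0), cv=5)
clf.fit(X, y)

new_patient = rng.normal(size=(1, 8))
p_consent = clf.predict_proba(new_patient)[0, 1]
print(f"estimated probability the patient would consent: {p_consent:.2f}")
```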

The article is here.