Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Prediction.

Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).
https://doi.org/10.1177/01461672231191360

Abstract

In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower covid-related risk perception and lower support for early public-health interventions, the best-case prediction heuristic was ideologically symmetric.


Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic. In the first experiment, participants made predictions about their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the outcome of the next presidential election. For each event, participants made three predictions: a best-case scenario, a worst-case scenario, and a realistic scenario. The results showed that participants' "realistic" predictions were much closer to their best-case predictions than to their worst-case predictions.
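To make the asymmetry concrete, here is a toy illustration (my own, not the authors' analysis) of one way to index where a "realistic" prediction falls between the two extremes; the function and the example numbers are hypothetical:

    # Hypothetical index: where does a "realistic" prediction sit between
    # the worst-case (0.0) and best-case (1.0) scenarios?
    def relative_position(worst, realistic, best):
        return (realistic - worst) / (best - worst)

    # E.g., predicted relationship satisfaction on a 1-10 scale:
    print(relative_position(worst=3, realistic=8, best=9))  # ~0.83, near the best case

On an index like this, the best-case heuristic shows up as "realistic" predictions landing well above the 0.5 midpoint.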

The researchers found the same best-case asymmetry in the other three experiments, across health, relationships, and politics. The findings suggest that people use a best-case heuristic when making predictions about the future, even in serious and important matters.

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people to maintain a positive outlook on life and to cope with difficult challenges. On the other hand, it can also lead to unrealistic expectations and to a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Thursday, April 13, 2023

Why artificial intelligence needs to understand consequences

Neil Savage
Nature
Originally published 24 FEB 23

Here is an excerpt:

The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.

In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.

A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
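As a rough sketch of that idea (my construction, not code from the article), one can simulate interventions on a toy structural model and read the causal effect off the change in outcomes:

    # A minimal sketch: learn the causal effect of X on Y by intervening
    # on X in a toy structural model (ground truth: Y = 2X + noise).
    import numpy as np

    rng = np.random.default_rng(1)

    def observe(x):
        """The world's response when we *set* X to a chosen value."""
        return 2 * x + rng.normal(0, 0.1)

    # Intervene at several values of X and average the resulting Y.
    xs = [0.0, 1.0, 2.0]
    ys = [np.mean([observe(x) for _ in range(500)]) for x in xs]
    print(np.diff(ys))  # ~[2.0, 2.0]: each unit increase in X raises Y by ~2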

Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
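A heavily simplified version of that propose-and-score loop might look as follows (my toy illustration using a BIC-style fit score over a fixed list of candidate graphs, not Bengio's actual system):

    # Toy "generate a causal graph, score its fit" demo over X, Y, Z.
    # Ground truth: X -> Z <- Y (a collider).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    y = rng.normal(size=n)
    z = x + y + rng.normal(0, 0.5, size=n)

    def bic(resid, n_params):
        """Penalized fit score: lower is better."""
        m = len(resid)
        return m * np.log(np.var(resid)) + n_params * np.log(m)

    def residuals(target, *parents):
        """Least-squares residuals of target given its parents."""
        if not parents:
            return target - target.mean()
        X = np.column_stack(parents + (np.ones(len(target)),))
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return target - X @ beta

    # Score each candidate graph as the sum of its node-wise fits.
    candidates = {
        "X -> Z <- Y": bic(residuals(x), 1) + bic(residuals(y), 1)
                       + bic(residuals(z, x, y), 3),
        "X -> Y -> Z": bic(residuals(x), 1) + bic(residuals(y, x), 2)
                       + bic(residuals(z, y), 2),
        "no edges":    bic(residuals(x), 1) + bic(residuals(y), 1)
                       + bic(residuals(z), 1),
    }
    for graph, score in sorted(candidates.items(), key=lambda kv: kv[1]):
        print(f"{score:10.1f}  {graph}")  # the true collider scores best

In the article's scheme, a neural network takes over both the proposing and the learning-from-scores steps, rather than exhaustively scoring a fixed list of candidates as done here.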

This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.

Sunday, April 2, 2023

Being good to look good: Self-reported moral character predicts moral double standards among reputation-seeking individuals

Dong, M., Kupfer, T. R., et al. (2022).
British Journal of Psychology
First published 4 NOV 22

Abstract

Moral character is widely expected to lead to moral judgements and practices. However, such expectations are often breached, especially when moral character is measured by self-report. We propose that because self-reported moral character partly reflects a desire to appear good, people who self-report a strong moral character will show moral harshness towards others and downplay their own transgressions—that is, they will show greater moral hypocrisy. This self-other discrepancy in moral judgements should be pronounced among individuals who are particularly motivated by reputation. Employing diverse methods including large-scale multination panel data (N = 34,323), and vignette and behavioural experiments (N = 700), four studies supported our proposition, showing that various indicators of moral character (Benevolence and Universalism values, justice sensitivity, and moral identity) predicted harsher judgements of others' more than own transgressions. Moreover, these double standards emerged particularly among individuals possessing strong reputation management motives. The findings highlight how reputational concerns moderate the link between moral character and moral judgement.

Practitioner points
  • Self-reported moral character does not predict actual moral performance well.
  • Good moral character based on self-report can sometimes predict strong moral hypocrisy.
  • Good moral character based on self-report indicates high moral standards, though only for others and not necessarily for the self.
  • Hypocrites can be good at detecting reputational cues and presenting themselves as morally decent persons.

From the General Discussion

A well-known Golden Rule of morality is to treat others as you wish to be treated yourself (Singer, 1963). People with a strong moral character might be expected to follow this Golden Rule, and judge others no more harshly than they judge themselves. However, when moral character is measured by self-reports, it is often intertwined with socially desirable responding and reputation management motives (Anglim et al., 2017; Hertz & Krettenauer, 2016; Reed & Aquino, 2003). The current research examines the potential downstream effects of moral character and reputation management motives on moral decisions. By attempting to differentiate the ‘genuine’ and ‘reputation managing’ components of self-reported moral character, we posited an association between moral character and moral double standards on the self and others. Imposing harsh moral standards on oneself often comes with a cost to self-interest; to signal one's moral character, criticizing others' transgressions can be a relatively cost-effective approach (Jordan et al., 2017; Kupfer & Giner-Sorolla, 2017; Simpson et al., 2013). To the extent that the demonstration of a strong moral character is driven by reputation management motives, we, therefore, predicted that it would be related to increased hypocrisy, that is, harsher judgements of others' transgressions but not stricter standards for own misdeeds.

Conclusion

How moral character guides moral judgements and behaviours depends on reputation management motives. When people are motivated to attain a good reputation, their self-reported moral character may predict more hypocrisy by displaying stronger moral harshness towards others than towards themselves. Thus, claiming oneself as a moral person does not always translate into doing good deeds, but can manifest as showcasing one's morality to others. Desires for a positive reputation might help illuminate why self-reported moral character often fails to capture real-life moral decisions, and why (some) people who appear to be moral are susceptible to accusations of hypocrisy—for applying higher moral standards to others than to themselves.

Sunday, June 26, 2022

What drives mass shooters? Grievance, despair, and anger are more likely triggers than mental illness, experts say

Deanna Pan
Boston Globe
Originally posted 3 JUN 22

Here is an excerpt:

A 2018 study by the FBI’s Behavioral Analysis Unit evaluating 63 active shooters between 2000 and 2013 found that a quarter were known to have been diagnosed with a mental illness of any kind, and just 3 of the 63 had a verified psychotic disorder.

Although 62 percent of shooters showed signs that they were struggling with issues like depression, anxiety, or paranoia, their symptoms, the study notes, may ultimately have been “transient manifestations of behaviors and moods” that would not qualify them for a formal diagnosis.

Formally diagnosed mental illness, the study concludes, “is not a very specific predictor of violence of any type, let alone targeted violence,” given that roughly half of the US population experiences symptoms of mental illness over the course of their lifetimes.

Forensic psychologist Jillian Peterson, cofounder of The Violence Project, a think tank dedicated to reducing violence, said mass shooters are typically younger men, channeling their pain and anger through acts of violence and aggression. For many mass shooters, Peterson said, their path to violence begins with early childhood trauma. They often share a sense of “entitlement,” she said — to wealth, power, romance, and success. When they don’t achieve those goals, they become enraged and search for a scapegoat.

“As they get older, you see a lot of despair, hopelessness, self-hate — many of them attempt suicide — isolation. And then that kind of despair, isolation, that self-hatred turns outward,” Peterson said. “School shooters blame their schools. Some people blame a racial group or women or a religious group or the workplace.”

But mental illness, she said, is rarely an exclusive motive for mass shooters. In a study published last year, Peterson and her colleagues analyzed a dataset of 172 mass shooters for signs of psychosis — features of schizophrenia and other psychotic disorders. Although mental illness and psychotic disorders were overrepresented among the mass shooters they studied, Peterson’s study found most mass shooters were motivated by other factors, such as interpersonal conflicts, relationship problems, or a desire for fame.

Peterson’s study found psychotic symptoms, such as delusions or hallucinations, played no role in almost 70 percent of cases, and only a minor role in 11 percent of cases, where the shooters had other motives. In just 10 percent of cases, perpetrators were directly responding to their delusions or hallucinations when they were planning and committing their attacks.

Monday, March 29, 2021

The problem with prediction

Joseph Fridman
aeon.com
Originally published 25 Jan 21

Here is an excerpt:

Today, many neuroscientists exploring the predictive brain deploy contemporary economics as a similar sort of explanatory heuristic. Scientists have come a long way in understanding how ‘spending metabolic money to build complex brains pays dividends in the search for adaptive success’, remarks the philosopher Andy Clark, in a notable review of the predictive brain. The idea of the predictive brain makes sense because it is profitable, metabolically speaking. Similarly, the psychologist Lisa Feldman Barrett describes the primary role of the predictive brain as managing a ‘body budget’. In this view, she says, ‘your brain is kind of like the financial sector of a company’, predictively allocating resources, spending energy, speculating, and seeking returns on its investments. For Barrett and her colleagues, stress is like a ‘deficit’ or ‘withdrawal’ from the body budget, while depression is bankruptcy. In Blackmore’s day, the brain was made up of sentries and soldiers, whose collective melancholy became the sadness of the human being they inhabited. Today, instead of soldiers, we imagine the brain as composed of predictive statisticians, whose errors become our neuroses. As the neuroscientist Karl Friston said: ‘[I]f the brain is an inference machine, an organ of statistics, then when it goes wrong, it’ll make the same sorts of mistakes a statistician will make.’

The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’

Sunday, February 7, 2021

How people decide what they want to know

Sharot, T., Sunstein, C.R. 
Nat Hum Behav 4, 14–19 (2020). 

Abstract

Immense amounts of information are now accessible to people, including information that bears on their past, present and future. An important research challenge is to determine how people decide to seek or avoid information. Here we propose a framework of information-seeking that aims to integrate the diverse motives that drive information-seeking and its avoidance. Our framework rests on the idea that information can alter people’s action, affect and cognition in both positive and negative ways. The suggestion is that people assess these influences and integrate them into a calculation of the value of information that leads to information-seeking or avoidance. The theory offers a framework for characterizing and quantifying individual differences in information-seeking, which we hypothesize may also be diagnostic of mental health. We consider biases that can lead to both insufficient and excessive information-seeking. We also discuss how the framework can help government agencies to assess the welfare effects of mandatory information disclosure.
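Read literally, the framework implies a simple decision rule. Here is a schematic rendering based only on the abstract; the weights and numbers are my hypothetical stand-ins for the individual differences the authors describe:

    # Schematic: seek information when its combined influence on action,
    # affect and cognition (instrumental, hedonic, cognitive value) is positive.
    def information_value(instrumental, hedonic, cognitive, weights=(1.0, 1.0, 1.0)):
        w_i, w_h, w_c = weights
        return w_i * instrumental + w_h * hedonic + w_c * cognitive

    # A hypothetical medical test result: actionable but painful to learn.
    v = information_value(instrumental=2.0, hedonic=-3.0, cognitive=0.5)
    print("seek" if v > 0 else "avoid", v)  # avoid -0.5

On this reading, individual differences in information-seeking correspond to different weights on the three motives.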

Conclusion

It is increasingly possible for people to obtain information that bears on their future prospects, in terms of health, finance and even romance. It is also increasingly possible for them to obtain information about the past, the present and the future, whether or not that information bears on their personal lives. In principle, people’s decisions about whether to seek or avoid information should depend on some integration of instrumental value, hedonic value and cognitive value. But various biases can lead to both insufficient and excessive information-seeking. Individual differences in information-seeking may reflect different levels of susceptibility to those biases, as well as varying emphasis on instrumental, hedonic and cognitive utility. Such differences may also be diagnostic of mental health.

Whether positive or negative, the value of information bears directly on significant decisions of government agencies, which are often charged with calculating the welfare effects of mandatory disclosure and which have long struggled with that task. Our hope is that the integrative framework of information-seeking motives offered here will facilitate these goals and promote future research in this important domain.

Tuesday, December 29, 2020

Effects of Language on Visual Perception

Lupyan, G., et al. (2020, April 28). 

Abstract

Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition, and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.

Highlights
  • Our ability to detect, discriminate, and recognize perceptual stimuli is influenced both by their physical features and our prior experiences.
  • One potent prior experience is language. How might learning a language affect perception?
  • We review evidence of linguistic effects on perception, focusing on the effects of language on visual recognition, discrimination, and detection.
  • Language exerts both off-line and on-line effects on visual processing; these effects naturally emerge from taking a predictive processing approach to perception.

In sum, language shapes perception in terms of higher-level processes (recognition) and lower-level processes (discrimination and detection).

This is very important research for psychotherapy, given the central role language plays in treatment.

Sunday, April 19, 2020

On the ethics of algorithmic decision-making in healthcare

Grote T, Berens P
Journal of Medical Ethics 
2020;46:205-211.

Abstract

In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical diagnosis, it comes at the expense of opacity when trying to assess the reliability of a given diagnosis. Drawing on literature in social epistemology and moral responsibility, we argue that the uncertainty in question potentially undermines the epistemic authority of clinicians. Furthermore, we elucidate potential pitfalls of involving machine learning in healthcare with respect to paternalism, moral responsibility and fairness. At last, we discuss how the deployment of machine learning algorithms might shift the evidentiary norms of medical diagnosis. In this regard, we hope to lay the grounds for further ethical reflection of the opportunities and pitfalls of machine learning for enhancing decision-making in healthcare.

From the Conclusion

In this paper, we aimed at examining which opportunities and pitfalls machine learning potentially provides to enhance medical decision-making on epistemic and ethical grounds. As should have become clear, enhancing medical decision-making by deferring to machine learning algorithms requires trade-offs at different levels. Clinicians, or their respective healthcare institutions, are facing a dilemma: while there is plenty of evidence of machine learning algorithms outsmarting their human counterparts, their deployment comes at the cost of high degrees of uncertainty. On epistemic grounds, relevant uncertainty promotes risk-averse decision-making among clinicians, which then might lead to impoverished medical diagnosis. From an ethical perspective, deferring to machine learning algorithms blurs the attribution of accountability and imposes health risks on patients. Furthermore, the deployment of machine learning might also foster a shift of norms within healthcare. It needs to be pointed out, however, that none of the issues we discussed presents a knockout argument against deploying machine learning in medicine, and our article is not intended this way at all. On the contrary, we are convinced that machine learning provides plenty of opportunities to enhance decision-making in medicine.

Friday, March 20, 2020

Flawed science? Two efforts launched to improve scientific validity of psychological test evidence in court

Karen Franklin
forensicpsychologist Blog
Originally posted 15 Feb 20

Here is an excerpt:

New report slams “junk science” psychological assessments

In one of two significant developments, a group of researchers today released evidence of systematic problems with the state of psychological test admissibility in court. The researchers' comprehensive survey found that only about two-thirds of the tools used by clinicians in forensic settings were generally accepted in the field, while even fewer -- only about four in ten -- were favorably reviewed in authoritative sources such as the Mental Measurements Yearbook.

Despite this, psychological tests are rarely challenged when they are introduced in court, Tess M.S. Neal and her colleagues found. Even when they are, the challenges fail about two-thirds of the time. Worse yet, there is little relationship between a tool’s psychometric quality and the likelihood of it being challenged.

“Some of the weakest tools tend to get a pass from the courts,” write the authors of the newly issued report, “Psychological Assessments in Legal Contexts: Are Courts Keeping ‘Junk Science’ Out of the Courtroom?”

The report, currently in press in the journal Psychological Science in the Public Interest, proposes that standard batteries be developed for forensic use, based on the consensus of experts in the field as to which tests are the most reliable and valid for assessing a given psycho-legal issue. It further cautions against forensic deployment of newly developed tests that are being marketed by for-profit corporations before adequate research or review by independent professionals.

Sunday, March 15, 2020

Will Past Criminals Reoffend? (Humans are Terrible at Predicting; Algorithms Worse)

Sophie Bushwick
Scientific American
Originally published 14 Feb 2020

Here is an excerpt:

Based on the wider variety of experimental conditions, the new study concluded that algorithms such as COMPAS and LSI-R are indeed better than humans at predicting risk. This finding makes sense to Monahan, who emphasizes how difficult it is for people to make educated guesses about recidivism. “It’s not clear to me how, in real life situations—when actual judges are confronted with many, many things that could be risk factors and when they’re not given feedback—how the human judges could be as good as the statistical algorithms,” he says. But Goel cautions that his conclusion does not mean algorithms should be adopted unreservedly. “There are lots of open questions about the proper use of risk assessment in the criminal justice system,” he says. “I would hate for people to come away thinking, ‘Algorithms are better than humans. And so now we can all go home.’”

Goel points out that researchers are still studying how risk-assessment algorithms can encode racial biases. For instance, COMPAS can say whether a person might be arrested again—but one can be arrested without having committed an offense. “Rearrest for low-level crime is going to be dictated by where policing is occurring,” Goel says, “which itself is intensely concentrated in minority neighborhoods.” Researchers have been exploring the extent of bias in algorithms for years. Dressel and Farid also examined such issues in their 2018 paper. “Part of the problem with this idea that you're going to take the human out of [the] loop and remove the bias is: it’s ignoring the big, fat, whopping problem, which is the historical data is riddled with bias—against women, against people of color, against LGBTQ,” Farid says.

Monday, March 2, 2020

The Dunning-Kruger effect, or why the ignorant think they’re experts

Alexandru Micu
zmescience.com
Originally posted 13 Feb 20

Here is an excerpt:

It’s not specific only to technical skills but plagues all walks of human existence equally. One study found that 80% of drivers rate themselves as above average, a figure far higher than is plausible (and, if ‘average’ means the median, higher than is mathematically possible). We tend to gauge our own relative popularity the same way.
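A quick numeric example (mine, not the article's) shows why the meaning of 'average' matters: in a skewed distribution most people can sit above the mean, but never more than half can sit above the median.

    import numpy as np

    # Hypothetical skill scores: a few very bad drivers drag the mean down.
    scores = np.array([10, 20, 30] + [80] * 7)

    print(scores.mean(), np.median(scores))     # mean 62.0, median 80.0
    print((scores > scores.mean()).mean())      # 0.7 -> 70% are above the mean
    print((scores > np.median(scores)).mean())  # 0.0 -> nobody exceeds the median here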

It isn’t limited to people with low or nonexistent skills in a certain matter, either — it works on pretty much all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or nonexistent grasp on a topic, we literally know too little of it to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where they sit. But they also think that if a task is clear and simple to them, it must be so for everyone else as well.

A person in the first group and one in the second group are equally liable to use their own experience and background as the baseline and kinda just take it for granted that everyone is near that baseline. They both partake in the “illusion of confidence” — for one, that confidence is in themselves, for the other, in everyone else.

Sunday, February 16, 2020

Fast optimism, slow realism? Causal evidence for a two-step model of future thinking

Hallgeir Sjåstad and Roy F. Baumeister
PsyArXiv
Originally posted 6 Jan 20

Abstract

Future optimism is a widespread phenomenon, often attributed to the psychology of intuition. However, causal evidence for this explanation is lacking, and sometimes cautious realism is found. One resolution is that thoughts about the future have two steps: A first step imagining the desired outcome, and then a sobering reflection on how to get there. Four pre-registered experiments supported this two-step model, showing that fast predictions are more optimistic than slow predictions. The total sample consisted of 2,116 participants from USA and Norway, providing 9,036 predictions. In Study 1, participants in the fast-response condition thought positive events were more likely to happen and that negative events were less likely, as compared to participants in the slow-response condition. Although the predictions were optimistically biased in both conditions, future optimism was significantly stronger among fast responders. Participants in the fast-response condition also relied more on intuitive heuristics (CRT). Studies 2 and 3 focused on future health problems (e.g., getting a heart attack or diabetes), in which participants in the fast-response condition thought they were at lower risk. Study 4 provided a direct replication, with the additional finding that fast predictions were more optimistic only for the self (vs. the average person). The results suggest that when people think about their personal future, the first response is optimistic, which only later may be followed by a second step of reflective realism. Current health, income, trait optimism, perceived control and happiness were negatively correlated with health-risk predictions, but did not moderate the fast-optimism effect.

From the Discussion section:

Four studies found that people made more optimistic predictions when they relied on fast intuition rather than slow reflection. Apparently, a delay of 15 seconds is sufficient to enable second thoughts and a drop in future optimism. The slower responses were still "unrealistically optimistic" (Weinstein, 1980; Shepperd et al., 2013), but to a much lesser extent than the fast responses. We found this fast-optimism effect in relative comparisons to the average person and in isolated judgments of one's own likelihood, in two different languages across two different countries, and in one direct replication. All four experiments were pre-registered, and the total sample consisted of about 2,000 participants making more than 9,000 predictions.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
Forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”
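For the technically curious, such adversarial inputs are often generated with gradient-based methods. Below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch, one standard technique and not necessarily the one Watson describes; `model` is an assumed pre-trained image classifier:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Nudge each pixel slightly in the direction that increases the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()  # keep pixels in a valid range

A perturbation this small is usually invisible to a human observer, which is exactly the disconnect the excerpt describes.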

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

Friday, November 1, 2019

What Clinical Ethics Can Learn From Decision Science

Michele C. Gornick and Brian J. Zikmund-Fisher
AMA J Ethics. 2019;21(10):E906-912.
doi: 10.1001/amajethics.2019.906.

Abstract

Many components of decision science are relevant to clinical ethics practice. Decision science encourages thoughtful definition of options, clarification of information needs, and acknowledgement of the heterogeneity of people’s experiences and underlying values. Attention to decision-making processes reminds participants in consultations that how decisions are made and how information is provided can change a choice. Decision science also helps reveal affective forecasting errors (errors in predictions about how one will feel in a future situation) that happen when people consider possible future health states and suggests strategies for correcting these and other kinds of biases. Implementation of decision science innovations is not always feasible or appropriate in ethics consultations, but their uses increase the likelihood that an ethics consultation process will generate choices congruent with patients’ and families’ values.

Here is an excerpt:

Decision Science in Ethics Practice

Clinical ethicists can support informed, value-congruent decision making in ethically complex clinical situations by working with stakeholders to identify and address biases and the kinds of barriers just discussed. Doing so requires constantly comparing actual decision-making processes with ideal decision-making processes, responding to information deficits, and integrating stakeholder values. One key step involves regularly urging clinicians to clarify both available options and possible outcomes and encouraging patients to consider both their values and the possible meanings of different outcomes.

Tuesday, September 24, 2019

Cruel, Immoral Behavior Is Not Mental Illness

James L. Knoll & Ronald W. Pies
Psychiatric Times
Originally posted August 19, 2019

Here is an excerpt:

Another way of posing the question is to ask—Do immoral, callous, cruel, and supremely selfish behaviors constitute a mental illness? These socially deviant traits appear in those with and without mental illness, and are widespread in the general population. Are there some perpetrators suffering from a genuine psychotic disorder who remain mentally organized enough to carry out these attacks? Of course, but they are a minority. To further complicate matters, psychotic individuals can also commit violent acts that were motivated by base emotions (resentment, selfishness, etc.), while their psychotic symptoms may be peripheral or merely coincidental.

It bears repeating that reliable, clinically-based data or complete psychological autopsies on perpetrators of mass public shootings are very difficult to obtain. That said, some of the best available research on mass public shooters indicates that they often display “rigidness, hostility, or extreme self-centeredness.” A recent FBI study found that only 25% of mass shooters had ever had a mental illness diagnosis, and only 3 of these individuals had a diagnosis of a psychotic disorder. The FBI’s cautionary statement in this report is incisive: “. . . formally diagnosed mental illness is not a very specific predictor of violence of any type, let alone targeted violence…. declarations that all active shooters must simply be mentally ill are misleading and unhelpful.”

Psychiatric and mental health treatment has its limits, and is not traditionally designed to detect and uncover budding violent extremists. It is designed to work together with individuals who are invested in their own mental health and seek to increase their own degrees of freedom in life in a pro-social manner. This is why calls for more mental health laws or alterations in civil commitment laws are likely to be low-yield at best, with respect to preventing mass killing—and stagnating to mental health progress at worst.

Friday, September 13, 2019

The dynamics of social support among suicide attempters: A smartphone-based daily diary study

Coppersmith, D.D.L.; Kleiman, E.M.; Glenn, C.R.; Millner, A.J.; Nock, M.K.
Behaviour Research and Therapy (2018)

Abstract

Decades of research suggest that social support is an important factor in predicting suicide risk and resilience. However, no studies have examined dynamic fluctuations in day-by-day levels of perceived social support. We examined such fluctuations over 28 days among a sample of 53 adults who attempted suicide in the past year (992 total observations). Variability in social support was analyzed with between-person intraclass correlations and root mean square of successive differences. Multi-level models were conducted to determine the association between social support and suicidal ideation. Results revealed that social support varies considerably from day to day with 45% of social support ratings differing by at least one standard deviation from the prior assessment. Social support is inversely associated with same-day and next-day suicidal ideation, but not with next-day suicidal ideation after adjusting for same-day suicidal ideation (i.e., not with daily changes in suicidal ideation). These results suggest that social support is a time-varying protective factor for suicidal ideation.
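As a concrete illustration of one of those variability metrics (a toy example of mine, not the authors' code), the root mean square of successive differences captures day-to-day volatility rather than overall spread:

    import numpy as np

    # Hypothetical daily social-support ratings over eight days.
    support = np.array([5, 3, 4, 6, 2, 5, 5, 1])

    rmssd = np.sqrt(np.mean(np.diff(support) ** 2))
    print(f"RMSSD = {rmssd:.2f}")  # large day-to-day jumps -> high volatility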

Monday, August 26, 2019

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J.W. Janssens
Genes 2019, 10(6), 448
https://doi.org/10.3390/genes10060448

Abstract

Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.
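To see what disclosing an 'expected risk distribution' could look like in practice, here is a schematic sketch; it is entirely illustrative, since real proprietary algorithms are undisclosed, and all numbers are made up:

    # Schematic polygenic score: a weighted sum of risk-allele counts,
    # summarized over a benchmark population.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_variants = 10_000, 50
    weights = rng.normal(0, 0.1, n_variants)                # hypothetical effect sizes
    genotypes = rng.integers(0, 3, (n_people, n_variants))  # 0/1/2 risk alleles per variant

    scores = genotypes @ weights

    # The kind of disclosure the authors propose: the score's percentiles
    # in a benchmark dataset, so customers can compare test offers.
    print(np.percentile(scores, [5, 25, 50, 75, 95]))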

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.

Wednesday, August 21, 2019

Personal infidelity and professional conduct in 4 settings

John M. Griffin, Samuel Kruger, and Gonzalo Maturana
PNAS first published July 30, 2019
https://doi.org/10.1073/pnas.1905329116

Abstract

We study the connection between personal and professional behavior by introducing usage of a marital infidelity website as a measure of personal conduct. Police officers and financial advisors who use the infidelity website are significantly more likely to engage in professional misconduct. Results are similar for US Securities and Exchange Commission (SEC) defendants accused of white-collar crimes, and companies with chief executive officers (CEOs) or chief financial officers (CFOs) who use the website are more than twice as likely to engage in corporate misconduct. The relation is not explained by a wide range of regional, firm, executive, and cultural variables. These findings suggest that personal and workplace behavior are closely related.

Significance

The relative importance of personal traits compared with context for predicting behavior is a long-standing issue in psychology. This debate plays out in a practical way every time an employer, voter, or other decision maker has to infer expected professional conduct based on observed personal behavior. Despite its theoretical and practical importance, there is little academic consensus on this question. We fill this void with evidence connecting personal infidelity to professional behavior in 4 different settings.

The Conclusion:

More broadly, our findings suggest that personal and professional lives are connected and cut against the common view that ethics are predominantly situational. This supports the classical view that virtues such as honesty and integrity influence a person’s thoughts and actions across diverse contexts and has potentially important implications for corporate recruiting and codes of conduct. A possible implication of our findings is that the recent focus on eliminating sexual misconduct in the workplace may have the auxiliary effect of reducing fraudulent workplace activity.

Tech Is Already Reading Your Emotions - But Do Algorithms Get It Right?

Jessica Baron
Forbes.com
Originally published July 18, 2019

From measuring shopper satisfaction to detecting signs of depression, companies are employing emotion-sensing facial recognition technology that is based on flawed science, according to a new study.

If the idea of having your face recorded and then analyzed for mood so that someone can intervene in your life sounds creepy, that’s because it is. But that hasn’t stopped companies like Walmart from promising to implement the technology to improve customer satisfaction, despite numerous challenges from ethicists and other consumer advocates.

At the end of the day, this flavor of facial recognition software probably is all about making you safer and happier – it wants to let you know if you’re angry or depressed so you can calm down or get help; it wants to see what kind of mood you’re in when you shop so it can help stores keep you as a customer; it wants to measure your mood while driving, playing video games, or just browsing the Internet to see what goods and services you might like to buy to improve your life.


The problem is – well, aside from the obvious privacy issues and general creep factor – that computers aren’t really that good at judging our moods based on the information they get from facial recognition technology. To top it off, this technology exhibits that same kind of racial bias that other AI programs do, assigning more negative emotions to black faces, for example. That’s probably because it’s based on flawed science.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track-record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.
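The most basic of those mathematical methods is simply quantifying sampling error; here is a back-of-envelope sketch (my example, not Stewart's):

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # A poll of 1,000 people putting a candidate at 52%:
    print(f"±{margin_of_error(0.52, 1000):.1%}")  # roughly ±3.1 points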

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.
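A toy version of that kind of pattern extraction (my sketch, with invented counts) might simply flag words whose daily frequency spikes relative to a baseline:

    from collections import Counter

    baseline = Counter({"election": 120, "demagogue": 3, "weather": 40})
    today = Counter({"election": 150, "demagogue": 60, "weather": 35})

    # Flag words whose frequency jumped more than fivefold.
    spikes = {word: today[word] / max(baseline[word], 1)
              for word in today if today[word] > 5 * max(baseline[word], 1)}
    print(spikes)  # {'demagogue': 20.0}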

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed to meet it.
