Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, April 4, 2024

Ready or not, AI chatbots are here to help with Gen Z’s mental health struggles

Matthew Perrone
Originally posted 23 March 24

Here is an excerpt:

Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren’t regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.

The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.

But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.

“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.

Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.

Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”

Some health lawyers say such disclaimers aren’t enough.

Here is my summary:

AI chatbots can provide personalized, 24/7 mental health support and guidance to users through convenient mobile apps. They use natural language processing and machine learning to simulate human conversation and tailor responses to individual needs. This can be especially beneficial for those who face barriers to accessing traditional in-person therapy, such as cost, location, or stigma.

Some research suggests that AI chatbots can reduce the severity of mental health issues like anxiety, depression, and stress for diverse populations. They can deliver evidence-based interventions such as cognitive behavioral therapy and promote positive psychology. Well-known examples include Wysa, Woebot, Replika, Youper, and Tess.

However, there are also ethical concerns around the use of AI chatbots for mental health. There are risks of providing inadequate or even harmful support if the chatbot cannot fully understand the user's needs or respond empathetically. Algorithmic bias in the training data could also lead to discriminatory advice. It's crucial that users understand the limitations of the therapeutic relationship with an AI chatbot versus a human therapist.

Overall, AI chatbots have significant potential to expand access to mental health support, but must be developed and deployed responsibly with strong safeguards to protect user wellbeing. Continued research and oversight will be needed to ensure these tools are used effectively and ethically.

Thursday, February 22, 2024

Rising Suicide Rate Among Hispanics Worries Community Leaders

A. Miller and M. C. Work
KFF Health News
Originally posted 22 Jan 24

Here is an excerpt:

The suicide rate for Hispanic people in the United States has increased significantly over the past decade. The trend has community leaders worried: Even elementary school-aged Hispanic children have tried to harm themselves or expressed suicidal thoughts.

Community leaders and mental health researchers say the pandemic hit young Hispanics especially hard. Immigrant children are often expected to take more responsibility when their parents don’t speak English ― even if they themselves aren’t fluent. Many live in poorer households with some or all family members without legal residency. And cultural barriers and language may prevent many from seeking care in a mental health system that already has spotty access to services.

“Being able to talk about painful things in a language that you are comfortable with is a really specific type of healing,” said Alejandra Vargas, a bilingual Spanish program coordinator for the Suicide Prevention Center at Didi Hirsch Mental Health Services in Los Angeles.

“When we answer the calls in Spanish, you can hear that relief on the other end,” she said. “That, ‘Yes, they’re going to understand me.’”

The Centers for Disease Control and Prevention’s provisional data for 2022 shows a record high of nearly 50,000 suicide deaths for all racial and ethnic groups.

Grim statistics from KFF show that the rise in the suicide death rate has been more pronounced among communities of color: From 2011 to 2021, the suicide rate among Hispanics jumped from 5.7 per 100,000 people to 7.9 per 100,000, according to the data.

For Hispanic children 12 and younger, the rate increased 92.3% from 2010 to 2019, according to a study published in the Journal of Community Health.

Friday, January 5, 2024

Mathematical and Computational Modeling of Suicide as a Complex Dynamical System

Wang, S. B., Robinaugh, D., et al.
(2023, September 24). 



Despite decades of research, the current suicide rate is nearly identical to what it was 100 years ago. This slow progress is due, at least in part, to a lack of formal theories of suicide. Existing suicide theories are instantiated verbally, omitting details required for precise explanation and prediction, rendering them difficult to effectively evaluate and difficult to improve.  By contrast, formal theories are instantiated mathematically and computationally, allowing researchers to precisely deduce theory predictions, rigorously evaluate what the theory can and cannot explain, and thereby, inform how the theory can be improved.  This paper takes the first step toward addressing the need for formal theories in suicide research by formalizing an initial, general theory of suicide and evaluating its ability to explain suicide-related phenomena.


First, we formalized a General Escape Theory of Suicide as a system of stochastic and ordinary differential equations. Second, we used these equations to simulate behavior of the system over time. Third, we evaluated if the formal theory produced robust suicide-related phenomena including rapid onset and brief duration of suicidal thoughts, and zero-inflation of suicidal thinking in time series data.


Simulations successfully produced the proposed suicidal phenomena (i.e., rapid onset, short duration, and high zero-inflation of suicidal thoughts in time series data). Notably, these simulations also produced theorized phenomena following from the General Escape Theory of Suicide: that suicidal thoughts emerge when alternative escape behaviors fail to effectively regulate aversive internal states, and that effective use of long-term strategies may prevent the emergence of suicidal thoughts.


To our knowledge, the model developed here is the first formal theory of suicide, which was able to produce – and, thus, explain – well-established phenomena documented in the suicide literature. We discuss the next steps in a research program dedicated to studying suicide as a complex dynamical system, and describe how the integration of formal theories and empirical research may advance our understanding, prediction, and prevention of suicide.
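The paper's actual equations are not reproduced in this excerpt, but the modeling approach can be illustrated with a minimal sketch: a single aversive internal state following a mean-reverting stochastic process, with an escape urge that switches on only above a threshold. Every variable name and parameter value below is hypothetical, chosen for illustration rather than taken from the authors' model.

```python
import random

def simulate(steps=10000, dt=0.1, theta=0.5, mu=0.2, sigma=0.4,
             threshold=1.0, escape_efficacy=0.9, seed=1):
    """Toy escape-model simulation (hypothetical parameters, not the
    paper's equations). An aversive internal state follows a
    mean-reverting stochastic process; when it crosses `threshold`,
    an escape urge switches on and escape behavior pulls the state
    back down, producing brief, rare episodes."""
    rng = random.Random(seed)
    a = mu          # aversive internal state
    urge = []       # 1 if the escape urge is active at this step, else 0
    for _ in range(steps):
        active = a > threshold
        urge.append(1 if active else 0)
        # mean reversion plus random shocks (Euler-Maruyama step)
        drift = theta * (mu - a)
        if active:
            # escape behavior regulates the state back toward the threshold
            drift -= escape_efficacy * (a - threshold)
        a += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
    return urge

urge = simulate()
print("fraction of time urge active:", sum(urge) / len(urge))
```

Even this toy version reproduces in miniature the zero-inflation the abstract describes: the urge is absent at the vast majority of time points and appears only in short bursts.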

My take:

In essence, the paper demonstrates the potential value of using computational modeling and formal theorizing to improve understanding and prediction of suicidal behaviors, breaking from a reliance on narrative theories that have failed to significantly reduce suicide rates over the past century. The formal modeling approach allows more rigorous evaluation and refinement of theories over time.

Friday, September 1, 2023

Building Superintelligence Is Riskier Than Russian Roulette

Tam Hunt & Roman Yampolskiy
Originally posted 2 August 23

Here is an excerpt:

The precautionary principle is a long-standing approach for new technologies and methods that urges positive proof of safety before real-world deployment. Companies like OpenAI have so far released their tools to the public with no requirements at all to establish their safety. The burden of proof should be on companies to show that their AI products are safe—not on public advocates to show that those same products are not safe.

Recursively self-improving AI, the kind many companies are already pursuing, is the most dangerous kind, because it may lead to an intelligence explosion some have called “the singularity,” a point in time beyond which it becomes impossible to predict what might happen because AI becomes god-like in its abilities. That moment could happen in the next year or two, or it could be a decade or more away.

Humans won’t be able to anticipate what a far-smarter entity plans to do or how it will carry out its plans. Such superintelligent machines, in theory, will be able to harness all of the energy available on our planet, then the solar system, then eventually the entire galaxy, and we have no way of knowing what those activities will mean for human well-being or survival.

Can we trust that a god-like AI will have our best interests in mind? Similarly, can we trust that human actors using the coming generations of AI will have the best interests of humanity in mind? With the stakes so incredibly high in developing superintelligent AI, we must have a good answer to these questions—before we go over the precipice.

Because of these existential concerns, more scientists and engineers are now working toward addressing them. For example, the theoretical computer scientist Scott Aaronson recently said that he’s working with OpenAI to develop ways of implementing a kind of watermark on the text that the company’s large language models, like GPT-4, produce, so that people can verify the text’s source. It’s still far too little, and perhaps too late, but it is encouraging to us that a growing number of highly intelligent humans are turning their attention to these issues.

Philosopher Toby Ord argues, in his book The Precipice: Existential Risk and the Future of Humanity, that in our ethical thinking and, in particular, when thinking about existential risks like AI, we must consider not just the welfare of today’s humans but the entirety of our likely future, which could extend for billions or even trillions of years if we play our cards right. So the risks stemming from our AI creations need to be considered not only over the next decade or two, but for every decade stretching forward over vast amounts of time. That’s a much higher bar than ensuring AI safety “only” for a decade or two.

Skeptics of these arguments often suggest that we can simply program AI to be benevolent, and if or when it becomes superintelligent, it will still have to follow its programming. This ignores the ability of superintelligent AI to either reprogram itself or to persuade humans to reprogram it. In the same way that humans have figured out ways to transcend our own “evolutionary programming”—caring about all of humanity rather than just our family or tribe, for example—AI will very likely be able to find countless ways to transcend any limitations or guardrails we try to build into it early on.

Here is my summary:

The article argues that building superintelligence is a risky endeavor, even more so than playing Russian roulette. Further, there is no way to guarantee that we will be able to control a superintelligent AI, and that even if we could, it is possible that the AI would not share our values. This could lead to the AI harming or even destroying humanity.

The authors propose that we should pause current efforts to develop superintelligence and instead focus on understanding the risks involved. They argue that we need a better understanding of how to align AI with our values, and that we need safety mechanisms to prevent AI from harming humanity.  (See Shelley's Frankenstein as a literary example.)

Sunday, August 13, 2023

Beyond killing one to save five: Sensitivity to ratio and probability in moral judgment

Ryazanov, A. A., Wang, S. T., et al. (2023).
Journal of Experimental Social Psychology
Volume 108, September 2023, 104499


A great deal of current research on moral judgments centers on moral dilemmas concerning tradeoffs between one and five lives. Whether one considers killing one innocent person to save five others to be morally required or impermissible has been taken to determine whether one is appealing to consequentialist or non-consequentialist reasoning. But this focus on tradeoffs between one and five may obscure more nuanced commitments involved in moral decision-making that are revealed when the numbers and ratio of lives to be traded off are varied, and when the probabilities of each outcome occurring are less than certain. Four studies examine participants' reactions to scenarios that diverge in these ways from the standard ones. Study 1 examines the extent to which people are sensitive to the ratio of lives saved to lives ended by a particular action. Study 2 verifies that the ratio rather than the difference between the two values is operative. Study 3 examines whether participants treat probabilistic harm to some as equivalent to certainly harming fewer, holding expected ratio constant. Study 4 explores an analogous issue regarding the sensitivity of probabilistic saving. Participants are remarkably sensitive to expected ratio for probabilistic harms while deviating from expected value for probabilistic saving. Collectively, the studies provide evidence that people's moral judgments are consistent with the principle of threshold deontology.

General discussion

Collectively, our studies show that people are sensitive to expected ratio in moral dilemmas, and that they show this sensitivity across a range of probabilities. The particular kind of sensitivity to expected value participants display is consistent with the view that people's moral judgments are based on one single principle of threshold deontology. If one examines only participants' reactions to a single dilemma with a given ratio, one might naturally tend to sort participants' judgments into consequentialists (the ones who condone killing to save others) or non-consequentialists (the ones who do not). But this can be misleading, as is shown by the result that the number of participants who make judgments consistent with consequentialism in a scenario with ratio of 5:1 decreases when the ratio decreases (as if a larger number of people endorse deontological principles under this lower ratio). The fact that participants make some judgments that are consistent with consequentialism does not entail that these judgments are expressive of a generally consequentialist moral theory. When the larger set of judgments is taken into account, the only theory with which they are consistent is threshold deontology. On this theory, there is a general deontological constraint against killing, but this constraint is overridden when the consequences of inaction are bad enough. The variability across participants suggests that participants have different thresholds of the ratio at which the consequences count as “bad enough” for switching from supporting inaction to supporting action. This is consistent with the wide literature showing that participants' judgments can shift within the same ratio, depending on, for example, how the death of the one is caused.

My summary:

This research provides new insights into how people make moral judgments. It suggests that people are not simply weighing the number of lives saved against the number of lives lost, but that they are also taking into account the ratio of lives saved to lives lost and the probability of each outcome occurring. This research has important implications for our understanding of moral decision-making and for the development of moral education programs.
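Threshold deontology, as the paper's discussion describes it, can be captured in a few lines: each respondent has a personal ratio threshold and endorses the harmful action only when the ratio of lives saved to lives lost meets it. The thresholds below are illustrative stand-ins, not estimates from the study.

```python
# Each agent endorses killing-to-save only if the saved:lost ratio
# meets their personal threshold (threshold deontology).
def endorsement_count(thresholds, saved, lost):
    ratio = saved / lost
    return sum(1 for t in thresholds if ratio >= t)

# Hypothetical population of thresholds: 1.0 behaves like a pure
# consequentialist; float("inf") is an absolutist who never endorses.
population = [1.0, 2.0, 3.0, 5.0, 5.0, 10.0, float("inf")]

print(endorsement_count(population, saved=5, lost=1))  # 5:1 dilemma -> 5
print(endorsement_count(population, saved=2, lost=1))  # lower ratio -> 2
```

This mirrors the paper's point: the same population looks "more deontological" at a 2:1 ratio than at 5:1, even though no individual's underlying principle has changed.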

Tuesday, June 20, 2023

Ethical Accident Algorithms for Autonomous Vehicles and the Trolley Problem: Three Philosophical Disputes

Sven Nyholm
In Lillehammer, H. (ed.), The Trolley Problem.
Cambridge: Cambridge University Press, 2023


The Trolley Problem is one of the most intensively discussed and controversial puzzles in contemporary moral philosophy. Over the last half-century, it has also become something of a cultural phenomenon, having been the subject of scientific experiments, online polls, television programs, computer games, and several popular books. This volume offers newly written chapters on a range of topics including the formulation of the Trolley Problem and its standard variations; the evaluation of different forms of moral theory; the neuroscience and social psychology of moral behavior; and the application of thought experiments to moral dilemmas in real life. The chapters are written by leading experts on moral theory, applied philosophy, neuroscience, and social psychology, and include several authors who have set the terms of the ongoing debates. The volume will be valuable for students and scholars working on any aspect of the Trolley Problem and its intellectual significance.

Here is the conclusion:

Accordingly, it seems to me that just as the first methodological approach mentioned a few paragraphs above is problematic, so is the third methodological approach. In other words, we do best to take the second approach. We should neither rely too heavily (or indeed exclusively) on the comparison between the ethics of self-driving cars and the trolley problem, nor wholly ignore and pay no attention to the comparison between the ethics of self-driving cars and the trolley problem. Rather, we do best to make this one – but not the only – thing we do when we think about the ethics of self-driving cars. With what is still a relatively new issue for philosophical ethics to work with, and indeed also regarding older ethical issues that have been around much longer, using a mixed and pluralistic method that approaches the moral issues we are considering from many different angles is surely the best way to go. In this instance, that includes reflecting on – and reflecting critically on – how the ethics of crashes involving self-driving cars is both similar to and different from the philosophy of the trolley problem.

At this point, somebody might say, “what if I am somebody who really dislikes the self-driving cars/trolley problem comparison, and I would really prefer reflecting on the ethics of self-driving cars without spending any time on thinking about the similarities and differences between the ethics of self-driving cars and the trolley problem?” In other words, should everyone working on the ethics of self-driving cars spend at least some of their time reflecting on the comparison with the trolley problem? Luckily for those who are reluctant to spend any of their time reflecting on the self-driving cars/trolley problem comparison, there are others who are willing and able to devote at least some of their energies to this comparison.

In general, I think we should view the community that works on the ethics of this issue as being one in which there can be a division of labor, whereby different members of this field can partly focus on different things, and thereby together cover all of the different aspects that are relevant and important to investigate regarding the ethics of self-driving cars.  As it happens, there has been a remarkable variety in the methods and approaches people have used to address the ethics of self-driving cars (see Nyholm 2018 a-b).  So, while it is my own view that anybody who wants to form a complete overview of the ethics of self-driving cars should, among other things, devote some of their time to studying the comparison with the trolley problem, it is ultimately no big problem if not everyone wishes to do so. There are others who have been studying, and who will most likely continue to reflect on, this comparison.

Sunday, March 19, 2023

The role of attention in decision-making under risk in gambling disorder: an eye-tracking study

Hoven, M., Hirmas, A., Engelmann, J. B., 
& van Holst, R. (2022, June 30).


Gambling disorder (GD) is a behavioural addiction characterized by impairments in decision-making, favouring risk- and reward-prone choices. One explanatory factor for this behaviour is a deviation in attentional processes, as increasing evidence indicates that GD patients show an attentional bias toward gambling stimuli. However, previous attentional studies have not directly investigated attention during risky decision-making. 25 patients with GD and 27 healthy matched controls (HC) completed a mixed gambles task combined with eye-tracking to investigate attentional biases for potential gains versus losses during decision-making under risk. Results indicate that compared to HC, GD patients gambled more and were less loss averse. GD patients did not show a direct attentional bias towards gains (or relative to losses). Using a recent (neuro)economics model that considers average attention and trial-wise deviations in average attention, we conducted fine-grained exploratory analyses of the attentional data. Results indicate that the average attention in GD patients moderated the effect of gain value on gambling choices, whereas this was not the case for HC. GD patients with high average attention for gains started gambling at less high gain values. A similar trend-level effect was found for losses, where GD patients with high average attention for losses stopped gambling with lower loss values. This study gives more insight into how attentional processes in GD play a role in gambling behaviour, which could have implications for the development of future treatments focusing on attentional training or for the development of interventions that increase the salience of losses.

From the Discussion section

We extend the current literature by investigating the role of attention in risky decision-making using eye-tracking, which has been underexplored in GD thus far. Consistent with previous studies in HCs, subjects’ overall relative attention toward gains decreased in favor of attention toward losses when loss values increased. We did not find group differences in attention to either gains or losses, suggesting no direct attentional biases in GD. However, while HCs increased their attention to gains with higher gain values, patients with GD did not. Moreover, while patients with GD displayed lower loss aversion, they did not show less attention to losses; rather, in both groups, increased trial-by-trial attention to losses resulted in less gambling.

The question arises whether attention modulates the effect of gains and losses on choice behavior differently in GD relative to controls. Our exploratory analyses that differentiated between two different channels of attention indeed indicated that the effect of gain value on gambling choices was modulated by the amount of average attention on gains in GD only. In other words, patients with GD who focused more on gains exhibited a greater gambling propensity at relatively low gain values. Notably, the strength of the effect of gain value on choice only significantly differed at average and high levels of attention to gains between groups, while patients with GD and HCs with relatively low levels of average attention to gains did not differ. Moreover, patients with GD who had relatively more average attention to losses showed a reduction in gambling propensity at relatively lower loss values, but note that this was at trend level. Since average attention relates to goal-directed or top-down attention, this measure likely reflects one’s preferences and beliefs. Hence, the current results suggest that gambling choices in patients with GD, relative to HCs, are more influenced by their preferences for gains. Future studies are needed to verify if and how top-down attentional processes affect decision-making in GD.

Editor's note: Apparently, patients with GD who focus primarily on gains continue to gamble, while those (GD or HC) who focus on losses are more likely to stop. Therefore, psychologists treating people with impulse-control difficulties may want to help patients focus on potential losses and harms, as opposed to imagined gains.
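The loss-aversion result follows the standard mixed-gambles formulation, and the attention effect can be sketched by letting attention re-weight the gain term in the subjective value. The logistic choice rule, lambda values, and attention weights below are assumptions for illustration, not the authors' fitted model.

```python
import math

def gamble_prob(gain, loss, lam, attn_gain=1.0, temp=1.0):
    """Probability of accepting a 50/50 mixed gamble.
    Subjective value = attn_gain * gain - lam * loss, where lam is the
    loss-aversion coefficient; choice follows a logistic (softmax) rule.
    All parameter values are hypothetical."""
    value = attn_gain * gain - lam * loss
    return 1.0 / (1.0 + math.exp(-value / temp))

# A less loss-averse chooser (lower lambda) gambles more at the same stakes,
# matching the finding that GD patients gambled more than controls:
p_hc = gamble_prob(gain=10, loss=8, lam=2.0)  # lambda near 2 is typical
p_gd = gamble_prob(gain=10, loss=8, lam=1.2)
assert p_gd > p_hc

# Higher average attention to gains lowers the gain value at which
# gambling starts, as the exploratory analyses suggest for GD:
p_low_attn = gamble_prob(gain=6, loss=8, lam=1.2, attn_gain=1.0)
p_high_attn = gamble_prob(gain=6, loss=8, lam=1.2, attn_gain=1.5)
assert p_high_attn > p_low_attn
```

The design choice here is only to separate the two mechanisms the study discusses: lambda captures overall loss aversion, while attn_gain captures how goal-directed attention amplifies the weight placed on gains.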

Tuesday, January 25, 2022

Sexbots as Synthetic Companions: Comparing Attitudes of Official Sex Offenders and Non-Offenders.

Zara, G., Veggi, S. & Farrington, D.P. 
Int J of Soc Robotics (2021). 


This is the first Italian study to examine views on sexbots of adult male sex offenders and non-offenders, and their perceptions of sexbots as sexual partners, and sexbots as a means to prevent sexual violence. In order to explore these aspects 344 adult males were involved in the study. The study carried out two types of comparisons. 100 male sex offenders were compared with 244 male non-offenders. Also, sex offenders were divided into child molesters and rapists. Preliminary findings suggest that sex offenders were less open than non-offenders to sexbots, showed a lower acceptance of them, and were more likely to dismiss the possibility of having an intimate and sexual relationship with a sexbot. Sex offenders were also less likely than non-offenders to believe that the risk of sexual violence against people could be reduced if a sexbot was used in the treatment of sex offenders. No differences were found between child molesters and rapists. Though no definitive conclusion can be drawn about what role sexbots might play in the prevention and treatment of sex offending, this study emphasizes the importance of exploring how sexbots are both perceived and understood. Sex offenders in this study showed a high dynamic sexual risk and, paradoxically, despite, or because of, their sexual deviance (e.g. deficits in sexual self-regulation), they were more inclined to see sexbots as just machines and were reluctant to imagine them as social agents, i.e. as intimate or sexual arousal partners. How sex offenders differ in their dynamic risk and criminal careers can inform experts about the mechanisms that take place and can challenge their engagement in treatment and intervention.

From the Discussion

Being in a Relationship with a Sexbot: a Comparison Between Sex Offenders and Non-Offenders
Notwithstanding that previous studies suggest that those who are quite open in admitting their interest in having a relationship with a sexbot were not necessarily problematic in terms of psycho-sexual functioning and life satisfaction, some anecdotal evidence seems to indicate otherwise. In this study, sex offenders were more reluctant to speak about their preferences towards sexbots. While male non-offenders appeared to be open to sexbots and quite eager to imagine themselves having a relationship with a sexbot or having sexual intercourse with one of them, sex offenders were reluctant to admit any interest towards sexbots. No clinical data are available to support the assumption about whether the interaction with sexbots is in any way egodystonic (inconsistent with one’s ideal self) or egosyntonic (consistent with one’s ideal self). Thus, no-one can discount the influence of being in detention upon the offenders’ willingness to feel at ease in expressing their views. It is not unusual that, when in detention, offenders may put up a front. This might explain why the sex offenders in this study kept a low profile on sex matters (e.g. declaring that “sexbots are not for me, I’m not a pervert”, to use their words). Sexuality is a dirty word for sex offenders in detention and their willingness to be seen as reformed and «sexually normal» is what perhaps motivated them to deny that they had any form of curiosity or attraction for any sexbot presented to them.

Wednesday, October 20, 2021

The Fight to Define When AI Is ‘High Risk’

Khari Johnson
Originally posted 1 Sept 21

Here is an excerpt:

At the heart of much of that commentary is a debate over which kinds of AI should be considered high risk. The bill defines high risk as AI that can harm a person’s health or safety or infringe on fundamental rights guaranteed to EU citizens, like the right to life, the right to live free from discrimination, and the right to a fair trial. News headlines in the past few years demonstrate how these technologies, which have been largely unregulated, can cause harm. AI systems can lead to false arrests, negative health care outcomes, and mass surveillance, particularly for marginalized groups like Black people, women, religious minority groups, the LGBTQ community, people with disabilities, and those from lower economic classes. Without a legal mandate for businesses or governments to disclose when AI is used, individuals may not even realize the impact the technology is having on their lives.

The EU has often been at the forefront of regulating technology companies, such as on issues of competition and digital privacy. Like the EU's General Data Protection Regulation, the AI Act has the potential to shape policy beyond Europe’s borders. Democratic governments are beginning to create legal frameworks to govern how AI is used based on risk and rights. The question of what regulators define as high risk is sure to spark lobbying efforts from Brussels to London to Washington for years to come.

Thursday, October 7, 2021

Axiological futurism: The systematic study of the future of values

J. Danaher
Volume 132, September 2021


Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an important complement to other future-oriented inquiries. Second, it sets out a methodology and a set of methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological futurism’ might look like by developing a model of the axiological possibility space that humans are likely to navigate over the coming decades.


• Outlines a new field of inquiry: axiological futurism.

• Defends the role of axiological futurism in understanding technology in society.

• Develops a set of methods for undertaking this inquiry into the axiological future.

• Presents a model for understanding the impact of AI, robotics and ICTs on human values.

From the Conclusion

In conclusion, axiological futurism is the systematic and explicit inquiry into the axiological possibility space for future human (and post-human) civilisations. Axiological futurism is necessary because, given the history of axiological change and variation, it is very unlikely that our current axiological systems will remain static and unchanging in the future. Axiological futurism is also important because it is complementary to other futurological inquiries. While it might initially seem that axiological futurism cannot be a systematic inquiry, this is not the case. Axiological futurism is an exercise in informed speculation.

Sunday, June 27, 2021

On Top of Everything Else, the Pandemic Messed With Our Morals

Jonathan Moens
The Atlantic
Originally posted 8 June 21

Here is an excerpt:

The core features of moral injury are feelings of betrayal by colleagues, leaders, and institutions who forced people into moral quandaries, says Suzanne Shale, a medical ethicist. As a way to minimize exposure for the entire team, Kathleen Turner and other ICU nurses have had to take on multiple roles: cleaning rooms, conducting blood tests, running neurological exams, and standing in for families who can’t keep patients company. Juggling all those tasks has left Turner feeling abandoned and expendable. “It definitely exposes and highlights the power dynamics within health care of who gets to say ‘No, I'm too high risk; I can't go in that patient's room,’” she said. Kate Dupuis, a clinical neuropsychiatrist and researcher at Canada’s Sheridan College, also felt her moral foundations shaken after Ontario’s decision to shut down schools for in-person learning at the start of the pandemic. The closures have left her worrying about the potential mental-health consequences this will have on her children.

For some people dealing with moral injury right now, the future might hold what is known as “post-traumatic growth,” whereby people’s sense of purpose is reinforced during adverse events, says Victoria Williamson, a researcher who studies moral injury at Oxford University and King’s College London. Last spring, Ahmed Ali, an imam in Brooklyn, New York, felt his moral code violated when dead bodies that were sent to him to perform religious rituals were improperly handled and had blood spilling from detached IV tubes. The experience has invigorated his dedication to helping others in the name of God. “That was a spiritual feeling,” he said.

But moral injury may leave other people feeling befuddled and searching for some way to make sense of a very bad year. If moral injury is left unaddressed, Greenberg said, there’s a real risk that people will develop depression, alcohol misuse, and suicidality. People suffering from moral injury risk retreating into isolation, engaging in self-destructive behaviors, and disconnecting from their friends and family. In the U.K., moral injury among military veterans has been linked to a loss of faith in organized religion. The psychological cost of a traumatic event is largely determined by what happens afterward, meaning that a lack of support from family, friends, and experts who can help people process these events—now that some of us are clawing our way out of the pandemic—could have serious mental-health repercussions. “This phase that we’re in now is actually the phase that’s the most important,” Greenberg said.

Tuesday, June 15, 2021

Diagnostic Mistakes a Big Contributor to Malpractice Suits, Study Finds

Joyce Friedan
Originally posted 26 May 21

Here are two excerpts:

One problem is that "healthcare is inherently risky," she continued. For example, "there's ever-changing industry knowledge, growing bodies of clinical options, new diseases, and new technology. There are variable work demands -- boy, didn't we experience that this past year! -- and production pressure has long been a struggle and a challenge for our providers and their teams." Not to mention variable individual competency, an aging population, complex health issues, and evolving workforces.


Cognitive biases can also trigger diagnostic errors, Siegal said. "Anchor bias" occurs when "a provider anchors on a diagnosis, early on, and then through the course of the journey looks for things to confirm that diagnosis. Once they've confirmed it enough that 'search satisfaction' is met, that leads to premature closure" of the patient's case. But that causes a problem because "it means that there's a failure to continue exploring other options. What else could it be? It's a failure to establish, perhaps, every differential diagnosis."

To avoid this problem, providers "always want to think about, 'Am I anchoring too soon? Am I looking to confirm, rather than challenge, my diagnosis?'" she said. According to the study, 25% of cases didn't have evidence of a differential diagnosis, and 36% fell into the category of "confirmation bias" -- "I was looking for things to confirm what I knew, but there were relevant signs and symptoms or positive tests that were still present that didn't quite fit the picture, but it was close. So they were somehow discounted, and the premature closure took over and a diagnosis was made," she said.

She suggested that clinicians take a "diagnostic timeout" -- similar to a surgical timeout -- when they're arriving at a diagnosis. "What else could this be? Have I truly explored all the other possibilities that seem relevant in this scenario and, more importantly, what doesn't fit? Be sure to dis-confirm as well."

Monday, April 5, 2021

Japan has appointed a 'Minister of Loneliness' after seeing suicide rates in the country increase for the first time in 11 years

Katie Warren
Originally posted 22 Feb 21

Here is an excerpt:

Loneliness has long been an issue in Japan, often discussed alongside "hikikomori," or people who live in extreme social isolation. People have worked to create far-ranging solutions to this issue: Engineers in Japan previously designed a robot to hold someone's hand when they're lonely and one man charges people to "do nothing" except keep them company.

A rise in suicides during the pandemic

During the COVID-19 pandemic in 2020, with people more socially isolated than ever, Japan saw a rise in suicides for the first time in 11 years.

In October, more people died from suicide than had died from COVID-19 in Japan in all of 2020. There were 2,153 suicide deaths that month and 1,765 total virus deaths up to the end of October 2020, per the Japanese National Police Agency. (After a surge in new cases starting in December, Japan has now recorded 7,506 total coronavirus deaths as of February 22.) Studies show that loneliness has been linked to a higher risk of health issues like heart disease, dementia, and eating disorders.

Women in Japan, in particular, have contributed to the uptick in suicides. In October, 879 women died by suicide in Japan — a 70% increase compared to the same month in 2019. 

More and more single women live alone in Japan, but many of them don't have stable employment, Michiko Ueda, a Japanese professor who studies suicide in Japan, told the BBC last week.

"A lot of women are not married anymore," Ueda said. "They have to support their own lives and they don't have permanent jobs. So, when something happens, of course, they are hit very, very hard."

Sunday, June 21, 2020

Downloading COVID-19 contact tracing apps is a moral obligation

G. Owen Schaefer and Angela Ballantyne
BMJ Blogs
Originally posted 4 May 20

Should you download an app that could notify you if you had been in contact with someone who contracted COVID-19? Such apps are already available in countries such as Israel, Singapore, and Australia, with other countries like the UK and US soon to follow. Here, we explain why you might have an ethical obligation to use a tracing app during the COVID-19 pandemic, even in the face of privacy concerns.


Vulnerability and unequal distribution of risk

Marginalized populations are both hardest hit by pandemics and often have the greatest reason to be sceptical of supposedly benign State surveillance. COVID-19 is a jarring reminder of global inequality, structural racism, gender inequity, entrenched ableism, and many other social divisions. During the SARS outbreak, Toronto struggled to adequately respond to the distinctive vulnerabilities of people who were homeless. In America, people of colour are at greatest risk in several dimensions – less able to act on public health advice such as social distancing, more likely to contract the virus, and more likely to die from severe COVID if they do get infected. When public health advice switched to recommending (or in some cases requiring) masks, some African Americans argued it was unsafe for them to cover their faces in public. People of colour in the US are at increased risk of state surveillance and police violence, in part because they are perceived to be threatening and violent. In New York City, black and Latino patients are dying from COVID-19 at twice the rate of non-Hispanic white people.

Marginalized populations have historically been harmed by State health surveillance. For example, indigenous populations have been the victims of State data collection to inform and implement segregation, dispossession of land, forced migration, as well as removal and ‘re‐education’ of their children. Stigma and discrimination have impeded the public health response to HIV/AIDS, as many countries still have HIV-specific laws that prosecute people living with HIV for a range of offences.  Surveillance is an important tool for implementing these laws. Marginalized populations therefore have good reasons to be sceptical of health related surveillance.

Monday, May 25, 2020

How Could the CDC Make That Mistake?

Alexis C. Madrigal & Robinson Meyer
The Atlantic
Originally posted 21 May 20

The Centers for Disease Control and Prevention is conflating the results of two different types of coronavirus tests, distorting several important metrics and providing the country with an inaccurate picture of the state of the pandemic. We’ve learned that the CDC is making, at best, a debilitating mistake: combining test results that diagnose current coronavirus infections with test results that measure whether someone has ever had the virus. The upshot is that the government’s disease-fighting agency is overstating the country’s ability to test people who are sick with COVID-19. The agency confirmed to The Atlantic on Wednesday that it is mixing the results of viral and antibody tests, even though the two tests reveal different information and are used for different reasons.

This is not merely a technical error. States have set quantitative guidelines for reopening their economies based on these flawed data points.

Several states—including Pennsylvania, the site of one of the country’s largest outbreaks, as well as Texas, Georgia, and Vermont—are blending the data in the same way. Virginia likewise mixed viral and antibody test results until last week, but it reversed course and the governor apologized for the practice after it was covered by the Richmond Times-Dispatch and The Atlantic. Maine similarly separated its data on Wednesday; Vermont authorities claimed they didn’t even know they were doing this.

The widespread use of the practice means that it remains difficult to know exactly how much the country’s ability to test people who are actively sick with COVID-19 has improved.
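The distortion the article describes is easy to see with arithmetic. The sketch below uses entirely invented numbers (real state figures varied) to show why pooling antibody tests, which sample a broad and mostly healthy population, with viral tests of symptomatic patients drags the headline positivity rate down and overstates testing capacity:

```python
# Illustration with invented numbers: pooling antibody (serology) results
# with viral (PCR) results distorts the test-positivity rate, the metric
# several states tied to their reopening thresholds.

viral_tests, viral_positive = 10_000, 1_500        # people tested while sick
antibody_tests, antibody_positive = 40_000, 2_000  # broad past-infection surveys

true_positivity = viral_positive / viral_tests
blended_positivity = (viral_positive + antibody_positive) / (viral_tests + antibody_tests)

print(f"viral-only positivity: {true_positivity:.1%}")   # 15.0%
print(f"blended positivity:    {blended_positivity:.1%}")  # 7.0%
```

Blending more than halves the apparent positivity rate here, and it simultaneously inflates the count of "tests performed," which is exactly the double distortion the CDC's reporting produced.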

The info is here.

Monday, April 27, 2020

Drivers are blamed more than their automated cars when both make mistakes

Awad, E., Levine, S., Kleiman-Weiner, M. et al.
Nat Hum Behav 4, 134–143 (2020).


When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

From the Discussion:

Our central finding (diminished blame apportioned to the machine in dual-error cases) leads us to believe that, while there may be many psychological barriers to self-driving car adoption19, public over-reaction to dual-error cases is not likely to be one of them. In fact, we should perhaps be concerned about public underreaction. Because the public are less likely to see the machine as being at fault in dual-error cases like the Tesla and Uber crashes, the sort of public pressure that drives regulation might be lacking. For instance, if we were to allow the standards for automated vehicles to be set through jury-based court-room decisions, we expect that juries will be biased to absolve the car manufacturer of blame in dual-error cases, thereby failing to put sufficient pressure on manufacturers to improve car designs.

The article is here.

Friday, February 14, 2020

Judgment and Decision Making

Baruch Fischhoff and Stephen B. Broomell
Annual Review of Psychology 
2020 71:1, 331-355


The science of judgment and decision making involves three interrelated forms of research: analysis of the decisions people face, description of their natural responses, and interventions meant to help them do better. After briefly introducing the field's intellectual foundations, we review recent basic research into the three core elements of decision making: judgment, or how people predict the outcomes that will follow possible choices; preference, or how people weigh those outcomes; and choice, or how people combine judgments and preferences to reach a decision. We then review research into two potential sources of behavioral heterogeneity: individual differences in decision-making competence and developmental changes across the life span. Next, we illustrate applications intended to improve individual and organizational decision making in health, public policy, intelligence analysis, and risk management. We emphasize the potential value of coupling analytical and behavioral research and having basic and applied research inform one another.

The paper can be downloaded here.

Wednesday, February 5, 2020

A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?

Liz Szabo
Kaiser Health News
Originally published 30 Dec 19

Here is an excerpt:

“Almost none of the [AI] stuff marketed to patients really works,” said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices ― such as ones that help people count their daily steps ― need less scrutiny than ones that diagnose or treat disease.

Some software developers don’t bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. “It’s not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal,” said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy’s report. “That’s not how the U.S. economy works.”

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

The info is here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
Forbes
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.
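The "easy to fool" point can be made concrete with a toy linear classifier. The sketch below is not a deep network; it illustrates the fast-gradient-sign idea on a linear score, with hypothetical weights and features invented for the example: a tiny per-feature nudge in the direction that most reduces the score flips the predicted class, even though each feature barely changes.

```python
# Toy illustration of an adversarial perturbation on a linear scorer:
# nudge every feature slightly against the sign of its weight and the
# classification flips, though no single feature changes much.

w = [0.8, -0.5, 0.3, 0.9]   # hypothetical learned weights
x = [0.2, 0.1, 0.4, 0.3]    # hypothetical "banana" image features

def score(w, x):
    # positive score -> classified "banana", negative -> something else
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

eps = 0.25                   # small per-feature perturbation budget
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(score(w, x))       # 0.5  -> "banana"
print(score(w, x_adv))   # -0.125 -> misclassified
```

A human looking at the two inputs would see almost the same image; the model's verdict flips because the perturbation is coordinated across every feature, which is the disconnect Watson highlights between artificial and biological perception.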

The info is here.

Monday, August 26, 2019

Proprietary Algorithms for Polygenic Risk: Protecting Scientific Innovation or Hiding the Lack of It?

A. Cecile J.W. Janssens
Genes 2019, 10(6), 448


Direct-to-consumer genetic testing companies aim to predict the risks of complex diseases using proprietary algorithms. Companies keep algorithms as trade secrets for competitive advantage, but a market that thrives on the premise that customers can make their own decisions about genetic testing should respect customer autonomy and informed decision making and maximize opportunities for transparency. The algorithm itself is only one piece of the information that is deemed essential for understanding how prediction algorithms are developed and evaluated. Companies should be encouraged to disclose everything else, including the expected risk distribution of the algorithm when applied in the population, using a benchmark DNA dataset. A standardized presentation of information and risk distributions allows customers to compare test offers and scientists to verify whether the undisclosed algorithms could be valid. A new model of oversight in which stakeholders collaboratively keep a check on the commercial market is needed.

Here is the conclusion:

Oversight of the direct-to-consumer market for polygenic risk algorithms is complex and time-sensitive. Algorithms are frequently adapted to the latest scientific insights, which may make evaluations obsolete before they are completed. A standardized format for the provision of essential information could readily provide insight into the logic behind the algorithms, the rigor of their development, and their predictive ability. The development of this format gives responsible providers the opportunity to lead by example and show that much can be shared when there is nothing to hide.
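The disclosure the authors propose can be sketched in code. Everything here is invented for illustration (the weights, SNP count, allele frequencies, and cohort size are not from any real test): a company applies its secret scoring algorithm to a shared benchmark cohort and publishes only the resulting risk distribution, which lets customers and scientists compare offerings without seeing the algorithm itself.

```python
# Hypothetical sketch: publish the risk distribution of a (secret)
# polygenic scoring algorithm on a benchmark cohort, not the algorithm.
import random
import statistics

random.seed(0)

n_snps = 50
weights = [random.uniform(-0.2, 0.2) for _ in range(n_snps)]  # secret effect sizes
freqs = [random.uniform(0.05, 0.95) for _ in range(n_snps)]   # risk-allele frequencies

def genotype(freqs):
    # 0, 1, or 2 copies of the risk allele at each SNP
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

def score(g):
    # the proprietary part: a weighted sum over allele counts
    return sum(w * a for w, a in zip(weights, g))

cohort = [score(genotype(freqs)) for _ in range(5_000)]  # benchmark cohort

# The published summary: distribution parameters, not the weights.
qs = statistics.quantiles(cohort, n=4)
print(f"mean={statistics.mean(cohort):.3f}  sd={statistics.stdev(cohort):.3f}")
print(f"quartiles={[round(q, 3) for q in qs]}")
```

A standardized report of this kind, run against the same benchmark DNA dataset by every vendor, is the sort of comparable, verifiable disclosure the conclusion calls for.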