Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, January 9, 2024

Betrayal-Based Moral Injury and Mental Health Problems Among Healthcare and Hospital Workers Serving COVID-19 Patients

Soim Park, Johannes Thrul, et al. (2023)
Journal of Trauma & Dissociation

Abstract

One factor potentially driving healthcare and hospital workers’ (HHWs’) declining mental health during the COVID-19 pandemic is feeling betrayed by institutional leaders, coworkers, and/or others’ pandemic-related responses and behaviors. We investigated whether HHWs’ betrayal-based moral injury was associated with greater mental distress and post-traumatic stress disorder (PTSD) symptoms related to COVID-19. We also examined if these associations varied between clinical and non-clinical staff. From July 2020 to January 2021, cross-sectional online survey data were collected from 1,066 HHWs serving COVID-19 patients in a large urban US healthcare system. We measured betrayal-based moral injury in three groups: institutional leaders, coworkers/colleagues, and people outside of healthcare. Multivariate logistic regression analyses were performed to investigate whether betrayal-based moral injury was associated with mental distress and PTSD symptoms. Approximately one-third of HHWs reported feeling betrayed by institutional leaders and/or people outside healthcare. Clinical staff were more likely to report feelings of betrayal than non-clinical staff. For all respondents, 49.5% reported mental distress and 38.2% reported PTSD symptoms. Having any feelings of betrayal increased the odds of mental distress and PTSD symptoms by 2.9 and 3.3 times, respectively. These associations were not significantly different between clinical and non-clinical staff. As health systems seek to enhance support of HHWs, they need to carefully examine institutional structures, accountability, communication, and decision-making patterns that can result in staff feelings of betrayal. Building trust and repairing ruptures with HHWs could prevent potential mental health problems, increase retention, and reduce burnout, while likely improving patient care.
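The headline odds ratios come from multivariate logistic regression, in which exponentiated coefficients are adjusted odds ratios. A minimal sketch of that style of analysis in Python, on simulated stand-in data (the variable names, the clinical covariate, and the effect sizes below are illustrative assumptions, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1066  # survey size reported in the abstract

# Simulated stand-ins for the study's variables (illustrative only)
betrayal = rng.binomial(1, 0.33, n)   # ~one-third report betrayal
clinical = rng.binomial(1, 0.5, n)    # clinical vs. non-clinical staff

# Outcome generated so betrayal carries a log-odds effect near log(2.9)
logit = -0.4 + np.log(2.9) * betrayal + 0.1 * clinical
distress = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(pd.DataFrame({"betrayal": betrayal, "clinical": clinical}))
model = sm.Logit(distress, X).fit(disp=False)

# Exponentiating the fitted coefficients recovers adjusted odds ratios
print(np.exp(model.params))
```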

Here is my take for psychologists:

The article identifies betrayal-based moral injury as a significant factor contributing to mental health problems among healthcare workers during the COVID-19 pandemic. The research demonstrates a strong association between feelings of betrayal and both mental distress and PTSD symptoms. This suggests that interventions aimed at addressing betrayal-based moral injury could play a crucial role in improving the mental well-being of healthcare workers.

The article provides valuable insights into specific sources of betrayal experienced by healthcare workers. The study highlights that betrayal can stem from institutional leaders, coworkers, and even individuals outside of the healthcare system. This understanding can inform targeted interventions aimed at rebuilding trust and repairing ruptures within healthcare institutions and the broader community.

By understanding the impact of betrayal-based moral injury and its sources, clinical psychologists can develop more effective interventions to support healthcare workers' mental health. These efforts can improve the well-being of individuals working on the frontlines, potentially leading to better patient care and a more sustainable healthcare system.

Monday, January 8, 2024

Human-Algorithm Interactions Help Explain the Spread of Misinformation

McLoughlin, K. L., & Brady, W. J. (2023).
Current Opinion in Psychology, 101770.

Abstract

Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users’ news feeds to maximize attentional capture, moral and emotional information is privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach that combines person-centered and design-centered interventions to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.

Here is my summary:

This research highlights the crucial role of human-algorithm interactions in driving the spread of misinformation online. It argues that both human attentional biases and algorithmic amplification mechanisms contribute to this phenomenon.

Firstly, humans naturally gravitate towards information that evokes moral and emotional responses. This inherent bias makes us more susceptible to engaging with and sharing misinformation that leverages these emotions, such as outrage, fear, or anger.

Secondly, social media algorithms are designed to maximize user engagement, which often translates to prioritizing content that triggers strong emotions. This creates a feedback loop where emotionally charged misinformation is amplified, further attracting human attention and fueling its spread.
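A toy simulation makes this loop concrete: engagement rises with a post's emotional charge, and the ranking algorithm hands more exposure to whatever earned engagement, so charged content compounds its advantage. This is my own illustration of the framework, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_posts = 1000
emotional = rng.uniform(0, 1, n_posts)  # moral-emotional charge of each post
exposure = np.ones(n_posts)             # every post starts with equal reach

# Each round: engagement depends on exposure and emotional charge, and the
# ranking step reallocates future exposure in proportion to engagement.
for _ in range(10):
    engagement = exposure * (0.2 + 0.8 * emotional)
    exposure = n_posts * engagement / engagement.sum()

top = np.argsort(exposure)[-10:]
print("mean charge, all posts:           ", round(emotional.mean(), 2))
print("mean charge, most-amplified posts:", round(emotional[top].mean(), 2))
```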

The research concludes that effectively combating misinformation requires a multifaceted approach. It emphasizes the need for interventions that address both human psychology and algorithmic design. This includes promoting media literacy, encouraging critical thinking skills, and designing algorithms that prioritize factual accuracy and diverse perspectives over emotional engagement.

Sunday, January 7, 2024

The power of social influence: A replication and extension of the Asch experiment

Franzen A, Mader S (2023)
PLoS ONE 18(11): e0294325.

Abstract

In this paper, we pursue four goals: First, we replicate the original Asch experiment with five confederates and one naïve subject in each group (N = 210). Second, in a randomized trial we incentivize the decisions in the line experiment and demonstrate that monetary incentives lower the error rate, but that social influence is still at work. Third, we confront subjects with different political statements and show that the power of social influence can be generalized to matters of political opinion. Finally, we investigate whether intelligence, self-esteem, the need for social approval, and the Big Five are related to the susceptibility to provide conforming answers. We find an error rate of 33% for the standard length-of-line experiment which replicates the original findings by Asch (1951, 1955, 1956). Furthermore, in the incentivized condition the error rate decreases to 25%. For political opinions we find a conformity rate of 38%. However, besides openness, none of the investigated personality traits are convincingly related to susceptibility to group pressure.

My summary:

This research aimed to replicate and extend the classic Asch conformity experiment, investigating the extent to which individuals conform to group pressure in a line-judging task. The study involved 210 participants divided into groups, with one naive participant and five confederates who provided deliberately incorrect answers. Replicating the original findings, the researchers observed an average error rate of 33%, demonstrating the enduring power of social influence in shaping individual judgments.

Furthering the investigation, the study explored the impact of monetary incentives on conformity. The researchers found that offering rewards for independent judgments reduced the error rate, suggesting that individuals are more likely to resist social pressure when motivated by personal gain. However, the study still observed a significant level of conformity even with incentives, indicating that social influence remains a powerful force even when competing with personal interests.
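As a rough check on the incentive effect, the reported 33% versus 25% error rates can be compared with a two-proportion test. The trial counts below are placeholders of my own, since the abstract reports rates rather than trial-level denominators:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical numbers of critical trials per condition (not from the paper)
n_standard, n_incentive = 500, 500
errors = np.array([int(0.33 * n_standard), int(0.25 * n_incentive)])
trials = np.array([n_standard, n_incentive])

stat, pval = proportions_ztest(errors, trials)
print(f"z = {stat:.2f}, p = {pval:.4f}")
```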

Saturday, January 6, 2024

Worth the Risk? Greater Acceptance of Instrumental Harm Befalling Men than Women

Graso, M., Reynolds, T. & Aquino, K.
Arch Sex Behav 52, 2433–2445 (2023).

Abstract

Scientific and organizational interventions often involve trade-offs whereby they benefit some but entail costs to others (i.e., instrumental harm; IH). We hypothesized that the gender of the persons incurring those costs would influence intervention endorsement, such that people would more readily support interventions inflicting IH onto men than onto women. We also hypothesized that women would exhibit greater asymmetries in their acceptance of IH to men versus women. Three experimental studies (two pre-registered) tested these hypotheses. Studies 1 and 2 granted support for these predictions using a variety of interventions and contexts. Study 3 tested a possible boundary condition of these asymmetries using contexts in which women have traditionally been expected to sacrifice more than men: caring for infants, children, the elderly, and the ill. Even in these traditionally female contexts, participants still more readily accepted IH to men than women. Findings indicate people (especially women) are less willing to accept instrumental harm befalling women (vs. men). We discuss the theoretical and practical implications and limitations of our findings.

Here is my summary:

This research investigated the societal acceptance of "instrumental harm" (IH) based on the gender of the person experiencing it. Three studies found that people are more likely to tolerate IH when it happens to men than when it happens to women. This bias is especially pronounced among women and those holding egalitarian or feminist beliefs. Even in contexts traditionally associated with women's vulnerability, IH inflicted on men is seen as more acceptable.

These findings highlight a potential blind spot in our perception of harm and raise concerns about how policies might be influenced by this bias. Further research is needed to understand the underlying reasons for this bias and develop strategies to address it.

Friday, January 5, 2024

Mathematical and Computational Modeling of Suicide as a Complex Dynamical System

Wang, S. B., Robinaugh, D., et al.
(2023, September 24). 

Abstract

Background:

Despite decades of research, the current suicide rate is nearly identical to what it was 100 years ago. This slow progress is due, at least in part, to a lack of formal theories of suicide. Existing suicide theories are instantiated verbally, omitting details required for precise explanation and prediction, rendering them difficult to effectively evaluate and difficult to improve.  By contrast, formal theories are instantiated mathematically and computationally, allowing researchers to precisely deduce theory predictions, rigorously evaluate what the theory can and cannot explain, and thereby, inform how the theory can be improved.  This paper takes the first step toward addressing the need for formal theories in suicide research by formalizing an initial, general theory of suicide and evaluating its ability to explain suicide-related phenomena.

Methods:

First, we formalized a General Escape Theory of Suicide as a system of stochastic and ordinary differential equations. Second, we used these equations to simulate behavior of the system over time. Third, we evaluated if the formal theory produced robust suicide-related phenomena including rapid onset and brief duration of suicidal thoughts, and zero-inflation of suicidal thinking in time series data.
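The paper's actual equations are not reproduced in the abstract, but the general recipe, Euler-Maruyama integration of a stochastic aversive-state process damped by escape behavior, can be sketched as below. Every functional form and parameter here is my illustrative assumption, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 100.0
steps = int(T / dt)

a = np.zeros(steps)   # aversive internal state over time
escape_eff = 0.3      # effectiveness of escape behaviors (try 0.9 vs. 0.1)

for t in range(1, steps):
    urge = max(0.0, a[t - 1] - 0.5)          # urge to escape grows with state
    drift = 0.8 * (0.4 - a[t - 1]) + 0.6 - escape_eff * urge
    noise = 0.3 * np.sqrt(dt) * rng.standard_normal()
    a[t] = max(0.0, a[t - 1] + drift * dt + noise)

# Crude proxy for zero-inflated suicidal thinking: brief excursions above a
# threshold, which become rarer as escape behaviors regulate the state better
print(f"fraction of time above threshold: {(a > 1.0).mean():.3f}")
```

Lowering `escape_eff` (escape behaviors failing) makes threshold crossings more frequent, mirroring the theory's claim that suicidal thoughts emerge when alternative escape strategies stop working.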

Results:

Simulations successfully produced the proposed suicidal phenomena (i.e., rapid onset, short duration, and high zero-inflation of suicidal thoughts in time series data). Notably, these simulations also produced theorized phenomena following from the General Escape Theory of Suicide: that suicidal thoughts emerge when alternative escape behaviors fail to effectively regulate aversive internal states, and that effective use of long-term strategies may prevent the emergence of suicidal thoughts.

Conclusions:

To our knowledge, the model developed here is the first formal theory of suicide, which was able to produce, and thus explain, well-established phenomena documented in the suicide literature. We discuss the next steps in a research program dedicated to studying suicide as a complex dynamical system, and describe how the integration of formal theories and empirical research may advance our understanding, prediction, and prevention of suicide.

My take:

In essence, the paper demonstrates the potential value of using computational modeling and formal theorizing to improve understanding and prediction of suicidal behaviors, breaking from a reliance on narrative theories that have failed to significantly reduce suicide rates over the past century. The formal modeling approach allows more rigorous evaluation and refinement of theories over time.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Tuesday, January 2, 2024

Three Ways to Tell If Research Is Bunk

Arthur C. Brooks
The Atlantic
Originally posted 30 Nov 23

Here is an excerpt:

I follow three basic rules.

1. If it seems too good to be true, it probably is.

Over the past few years, three social scientists—Uri Simonsohn, Leif Nelson, and Joseph Simmons—have become famous for their sleuthing to uncover false or faked research results. To make the point that many apparently “legitimate” findings are untrustworthy, they tortured one particular data set until it showed the obviously impossible result that listening to the Beatles song “When I’m Sixty-Four” could literally make you younger.

So if a behavioral result is extremely unusual, I’m suspicious. If it is implausible or runs contrary to common sense, I steer clear of the finding entirely because the risk that it is false is too great. I like to subject behavioral science to what I call the “grandparent test”: Imagine describing the result to your worldly-wise older relative, and getting their response. (“Hey, Grandma, I found a cool new study showing that infidelity leads to happier marriages. What do you think?”)

2. Let ideas age a bit.

I tend to trust a sweet spot for how recent a particular research finding is. A study published more than 20 years ago is usually too old to reflect current social circumstances. But if a finding is too new, it may have so far escaped sufficient scrutiny—and been neither replicated nor shredded by other scholars. Occasionally, a brand-new paper strikes me as so well executed and sensible that it is worth citing to make a point, and I use it, but I am generally more comfortable with new-ish studies that are part of a broader pattern of results in an area I am studying. I keep a file (my “wine cellar”) of very recent studies that I trust but that I want to age a bit before using for a column.

3. Useful beats clever.

The perverse incentive is not limited to the academy alone. A lot of science journalism values novelty over utility, reporting on studies that turn out to be more likely to fail when someone tries to replicate them. As well as leading to confusion, this misunderstands the point of behavioral science, which is to provide not edutainment but insights that can improve well-being.

I rarely write a column because I find an interesting study. Instead, I come across an interesting topic or idea and write about that. Then I go looking for answers based on a variety of research and evidence. That gives me a bias—for useful studies over clever ones.

Beyond checking the methods, data, and design of studies, I feel that these three rules work pretty well in a world of imperfect research. In fact, they go beyond how I do my work; they actually help guide how I live.

In life, we’re constantly beset by fads and hacks—new ways to act and think and be, shortcuts to the things we want. Whether in politics, love, faith, or fitness, the equivalent of some hot new study with counterintuitive findings is always demanding that we throw out the old ways and accept the latest wisdom.


Here is my summary:

This article offers three rules of thumb for spotting unreliable research. First, if a finding seems too good to be true, it probably is: results that are wildly counterintuitive or defy common sense carry a high risk of being false, a lesson reinforced by the data-sleuthing work of Simonsohn, Nelson, and Simmons. Second, let findings age: brand-new studies may not yet have survived replication and scrutiny, while studies more than about 20 years old may no longer reflect current social circumstances, so the sweet spot is recent work that fits a broader pattern of results. Third, useful beats clever: novelty-driven findings are disproportionately likely to fail replication, so favor research that offers durable, practical insight over edutainment.
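Rule 1 rests on a statistical reality that is easy to demonstrate: torture a pure-noise data set with enough arbitrary subgroup tests and a "significant" result will surface by chance. A minimal simulation (my own sketch, not the Simonsohn-Nelson-Simmons analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 40

# Pure noise: the outcome is unrelated to the "treatment" by construction
listened = rng.binomial(1, 0.5, n)          # e.g., heard the song or not
outcome = rng.standard_normal(n)
covariates = rng.standard_normal((n, 20))   # 20 arbitrary ways to slice data

pvals = []
for k in range(20):
    mask = covariates[:, k] > 0             # one of many post-hoc subgroups
    g1 = outcome[(listened == 1) & mask]
    g0 = outcome[(listened == 0) & mask]
    pvals.append(stats.ttest_ind(g1, g0).pvalue)

# With 20 looks at null data, at least one p < .05 appears by chance roughly
# 64% of the time (1 - 0.95**20); that is how impossible "findings" get made
print(f"smallest of 20 subgroup p-values: {min(pvals):.3f}")
```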

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid and mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as reading electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the electrical signals they produce. The researchers report that the system learned tasks such as speech recognition and predicting a mathematical function, with less training than comparable conventional artificial neural networks. They believe this kind of biocomputing could have many applications, such as in artificial intelligence and medical research. However, there are also ethical concerns about using living human brain cells in computers.
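As described, the setup resembles reservoir computing: a fixed, complex dynamical system (here, the organoid) transforms input signals, and only a simple linear readout is trained on its responses. A toy echo-state sketch of that idea, using a simulated random reservoir in place of biological tissue, with all parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n_res = 200  # reservoir size

# Fixed random reservoir weights, standing in for the organoid's dynamics
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def run_reservoir(signal):
    """Drive the fixed reservoir with a 1-D signal; collect its states."""
    x, states = np.zeros(n_res), []
    for u in signal:
        x = np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x.copy())
    return np.array(states)

# Task: one-step-ahead prediction of a noisy sine wave
t = np.linspace(0, 40 * np.pi, 2000)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)
states = run_reservoir(signal[:-1])
target = signal[1:]

# Only this linear readout is trained (ridge regression); the reservoir,
# like the organoid, is never itself optimized
W_out = np.linalg.solve(states.T @ states + 1e-6 * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print("prediction RMSE:", round(float(np.sqrt(np.mean((pred - target) ** 2))), 4))
```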