Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, January 5, 2024

Mathematical and Computational Modeling of Suicide as a Complex Dynamical System

Wang, S. B., Robinaugh, D., et al.
(2023, September 24). 

Abstract

Background:

Despite decades of research, the current suicide rate is nearly identical to what it was 100 years ago. This slow progress is due, at least in part, to a lack of formal theories of suicide. Existing suicide theories are instantiated verbally, omitting details required for precise explanation and prediction, rendering them difficult to effectively evaluate and difficult to improve.  By contrast, formal theories are instantiated mathematically and computationally, allowing researchers to precisely deduce theory predictions, rigorously evaluate what the theory can and cannot explain, and thereby, inform how the theory can be improved.  This paper takes the first step toward addressing the need for formal theories in suicide research by formalizing an initial, general theory of suicide and evaluating its ability to explain suicide-related phenomena.

Methods:

First, we formalized a General Escape Theory of Suicide as a system of stochastic and ordinary differential equations. Second, we used these equations to simulate behavior of the system over time. Third, we evaluated if the formal theory produced robust suicide-related phenomena including rapid onset and brief duration of suicidal thoughts, and zero-inflation of suicidal thinking in time series data.
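
To make the Methods concrete: the paper's actual equations are not reproduced in this post, but the general recipe — formalize the theory as coupled stochastic/ordinary differential equations, then simulate the system forward in time — can be sketched in a few lines of Python. Everything below is a hypothetical toy, not the authors' model; the variables, parameters, and functional forms are my own assumptions for illustration.

```python
# Minimal Euler-Maruyama sketch of a stochastic dynamical system in the
# spirit of an "escape" theory. NOT the authors' model; all equations and
# parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

dt, T = 0.01, 200.0               # step size and total simulated time
n = int(T / dt)
A = np.zeros(n)                   # aversive internal state
U = np.zeros(n)                   # urge to escape the aversive state

for t in range(1, n):
    stress = 0.5 * np.sqrt(dt) * rng.standard_normal()   # stochastic input
    escape = 2.0 * A[t-1] if U[t-1] > 1.0 else 0.0        # regulation kicks in
                                                          # once the urge
                                                          # crosses a threshold
    dA = (0.8 - 0.5 * A[t-1] - escape) * dt + stress      # drift + noise
    dU = (A[t-1] - 0.7 * U[t-1]) * dt                     # urge tracks aversion
    A[t] = max(A[t-1] + dA, 0.0)
    U[t] = max(U[t-1] + dU, 0.0)

print(f"mean aversion: {A.mean():.2f}, time with high urge: {(U > 1.0).mean():.1%}")
```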

Results:

Simulations successfully produced the proposed suicidal phenomena (i.e., rapid onset, short duration, and high zero-inflation of suicidal thoughts in time series data). Notably, these simulations also produced theorized phenomena following from the General Escape Theory of Suicide: that suicidal thoughts emerge when alternative escape behaviors fail to effectively regulate aversive internal states, and that effective use of long-term strategies may prevent the emergence of suicidal thoughts.
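
Continuing the toy sketch above, phenomena like zero-inflation and brief episode duration can be checked directly on a simulated series. The thresholding and sampling choices here are again my assumptions, not the authors' procedure.

```python
# Hypothetical evaluation of the simulated series from the sketch above
# (reuses np, U, dt): zero-inflation and episode durations.
thinking = (U > 1.0).astype(int)          # presence/absence at each step
zero_inflation = 1.0 - thinking.mean()    # share of time points at zero

# Episode durations: lengths of consecutive runs where thinking == 1.
changes = np.flatnonzero(np.diff(thinking) != 0) + 1
runs = np.split(thinking, changes)
durations = [len(r) * dt for r in runs if r[0] == 1]

print(f"zero-inflation: {zero_inflation:.1%}")
if durations:
    print(f"median episode duration: {np.median(durations):.1f} time units")
```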

Conclusions:

To our knowledge, the model developed here is the first formal theory of suicide, which was able to produce, and thus explain, well-established phenomena documented in the suicide literature. We discuss the next steps in a research program dedicated to studying suicide as a complex dynamical system, and describe how the integration of formal theories and empirical research may advance our understanding, prediction, and prevention of suicide.

My take:

In essence, the paper demonstrates the potential value of using computational modeling and formal theorizing to improve understanding and prediction of suicidal behaviors, breaking from a reliance on narrative theories that have failed to significantly reduce suicide rates over the past century. The formal modeling approach allows more rigorous evaluation and refinement of theories over time.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive and negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Wednesday, January 3, 2024

Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight

Christina Jewett
The New York Times
Originally posted 30 October 23

In medicine, the cautionary tales about the unintended effects of artificial intelligence are already legendary.

There was the program meant to predict when patients would develop sepsis, a deadly bloodstream infection, that triggered a litany of false alarms. Another, intended to improve follow-up care for the sickest patients, appeared to deepen troubling health disparities.

Wary of such flaws, physicians have kept A.I. working on the sidelines: assisting as a scribe, as a casual second opinion and as a back-office organizer. But the field has gained investment and momentum for uses in medicine and beyond.

Within the Food and Drug Administration, which plays a key role in approving new medical products, A.I. is a hot topic. It is helping to discover new drugs. It could pinpoint unexpected side effects. And it is even being discussed as an aid to staff who are overwhelmed with repetitive, rote tasks.

Yet in one crucial way, the F.D.A.’s role has been subject to sharp criticism: how carefully it vets and describes the programs it approves to help doctors detect everything from tumors to blood clots to collapsed lungs.

“We’re going to have a lot of choices. It’s exciting,” Dr. Jesse Ehrenfeld, president of the American Medical Association, a leading doctors’ lobbying group, said in an interview. “But if physicians are going to incorporate these things into their workflow, if they’re going to pay for them and if they’re going to use them — we’re going to have to have some confidence that these tools work.”


My summary: 

This article delves into the growing integration of artificial intelligence (A.I.) in patient care, exploring the challenges and concerns raised by doctors regarding the perceived lack of oversight. The medical community is increasingly leveraging A.I. technologies to aid in diagnostics, treatment planning, and patient management. However, physicians express apprehension about the potential risks associated with the use of these technologies, emphasizing the need for comprehensive oversight and regulatory frameworks to ensure patient safety and uphold ethical standards. The article highlights the ongoing debate within the medical profession on striking a balance between harnessing the benefits of A.I. and addressing the associated uncertainties and risks.

Tuesday, January 2, 2024

Three Ways to Tell If Research Is Bunk

Arthur C. Brooks
The Atlantic
Originally posted 30 Nov 23

Here is an excerpt:

I follow three basic rules.

1. If it seems too good to be true, it probably is.

Over the past few years, three social scientists—Uri Simonsohn, Leif Nelson, and Joseph Simmons—have become famous for their sleuthing to uncover false or faked research results. To make the point that many apparently “legitimate” findings are untrustworthy, they tortured one particular data set until it showed the obviously impossible result that listening to the Beatles song “When I’m Sixty-Four” could literally make you younger.

So if a behavioral result is extremely unusual, I’m suspicious. If it is implausible or runs contrary to common sense, I steer clear of the finding entirely because the risk that it is false is too great. I like to subject behavioral science to what I call the “grandparent test”: Imagine describing the result to your worldly-wise older relative, and getting their response. (“Hey, Grandma, I found a cool new study showing that infidelity leads to happier marriages. What do you think?”)
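
Brooks's first rule has a simple statistical basis: run enough uncorrected analyses and "significant" findings will emerge from pure noise. Here is a minimal, generic demonstration in Python — my own illustration, not the sleuths' actual analysis of the Beatles data set.

```python
# Why "too good to be true" findings appear: with enough uncorrected
# tests on pure noise, some will come out "significant" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_per_group = 100, 30

false_positives = 0
for _ in range(n_tests):
    a = rng.standard_normal(n_per_group)   # two groups drawn from the
    b = rng.standard_normal(n_per_group)   # SAME distribution: no real effect
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests 'significant' despite no effect")
# Expect roughly 5: the 5% false-positive rate compounds across many tests.
```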

2. Let ideas age a bit.

I tend to trust a sweet spot for how recent a particular research finding is. A study published more than 20 years ago is usually too old to reflect current social circumstances. But if a finding is too new, it may have so far escaped sufficient scrutiny—and been neither replicated nor shredded by other scholars. Occasionally, a brand-new paper strikes me as so well executed and sensible that it is worth citing to make a point, and I use it, but I am generally more comfortable with new-ish studies that are part of a broader pattern of results in an area I am studying. I keep a file (my “wine cellar”) of very recent studies that I trust but that I want to age a bit before using for a column.

3. Useful beats clever.

The perverse incentive is not limited to the academy. A lot of science journalism values novelty over utility, reporting on studies that turn out to be more likely to fail when someone tries to replicate them. As well as leading to confusion, this misunderstands the point of behavioral science, which is to provide not edutainment but insights that can improve well-being.

I rarely write a column because I find an interesting study. Instead, I come across an interesting topic or idea and write about that. Then I go looking for answers based on a variety of research and evidence. That gives me a bias—for useful studies over clever ones.

Beyond checking the methods, data, and design of studies, I feel that these three rules work pretty well in a world of imperfect research. In fact, they go beyond how I do my work; they actually help guide how I live.

In life, we’re constantly beset by fads and hacks—new ways to act and think and be, shortcuts to the things we want. Whether in politics, love, faith, or fitness, the equivalent of some hot new study with counterintuitive findings is always demanding that we throw out the old ways and accept the latest wisdom.


Here is my summary:

This article offers three rules of thumb for spotting unreliable research. First, if a result seems too good to be true, it probably is: findings that are implausible or run contrary to common sense carry too great a risk of being false. Second, let ideas age a bit: very new findings may not yet have been replicated or scrutinized by other scholars, while studies more than about 20 years old may no longer reflect current social circumstances, so the sweet spot is newer work that fits a broader pattern of results. Third, useful beats clever: favor research that offers insight for improving well-being over novel, counterintuitive findings, which are more likely to fail replication.

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid, less than a millimeter in diameter, and they mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as reading electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and are organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the electrical signals that the brain cells produce. The researchers report that the organoid-on-a-chip system could learn tasks such as speech recognition and nonlinear equation prediction, in some cases with less training than conventional artificial neural networks require. They believe this approach could have applications in artificial intelligence and medical research. However, there are also ethical concerns about using living human brain cells in computers.
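
For readers curious about the computing paradigm: the write-signals-in, read-activity-out setup described here follows the reservoir-computing pattern, in which a fixed dynamical system transforms inputs and only a simple linear readout is trained. Below is a minimal echo-state-style sketch in Python, with a random simulated reservoir standing in for the organoid; it is a generic illustration of the paradigm, not Brainoware's actual code.

```python
# Echo-state-style reservoir computing sketch: a fixed random recurrent
# "reservoir" (standing in for the organoid) transforms an input signal,
# and only a linear readout is trained. Generic illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps = 200, 1000

W_in = rng.uniform(-0.5, 0.5, size=n_res)          # input weights (fixed)
W = rng.standard_normal((n_res, n_res)) * 0.05     # recurrent weights (fixed,
                                                   # scaled for stability)

u = np.sin(np.linspace(0, 20 * np.pi, n_steps))    # input signal
target = np.roll(u, -1)                            # task: predict next value

x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])               # reservoir dynamics
    states[t] = x

# Train only the linear readout (ridge regression).
ridge = 1e-6 * np.eye(n_res)
W_out = np.linalg.solve(states.T @ states + ridge, states.T @ target)

pred = states @ W_out
print(f"readout MSE: {np.mean((pred - target) ** 2):.4f}")
```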

Sunday, December 31, 2023

Problems with the interjurisdictional regulation of psychological practice

Taube, D. O., Shapiro, D. L., et al. (2023).
Professional Psychology: Research and Practice,
54(6), 389–402.

Abstract

The U.S. Constitutional structure creates ethical conflicts for the cross-jurisdictional practice of professional psychology. The profession has chosen to seek interstate agreements to overcome such barriers, and such agreements now include almost 80% of American jurisdictions. Although an improvement over a patchwork of state laws regarding practice, the structure of this agreement and the exclusion of the remaining states continue to pose barriers to the principles of beneficence and nonmaleficence. It creates a system that is extraordinarily difficult to change and places an unrealistic burden on professionals to know, address, and act under complex legal mandates. As psychological services have moved increasingly to remote platforms, cross-jurisdictional business models, and a nationwide mental health crisis emerged alongside the pandemic, it is time to consider a national professional licensing system more seriously, both to further reduce barriers to care and complexity and permit the best interests of patients to prevail.

Impact Statement

Access to and the ability to continue receiving mental health care across jurisdictions and nations have become increasingly urgent in the wake of the COVID-19 pandemic. This Ethics in Motion section highlights legal barriers to providing ethical care across jurisdictions, how those challenges developed, and strengths and limitations of current approaches and potential solutions.


My summary: 

The current system of interjurisdictional regulation of psychological practice in the United States is problematic because it creates ethical conflicts for psychologists and places an unrealistic burden on them to comply with complex legal mandates. The system is also extraordinarily difficult to change, and it excludes psychologists in states that have not joined the interstate agreement. As a result, the current system does not adequately protect the interests of patients.

A national professional licensing system would be a more effective way to regulate the practice of psychology across state lines. Such a system would eliminate the need for psychologists to comply with multiple state laws, and it would make it easier for them to provide care to patients who live in different states. A national system would also be more equitable, as it would ensure that all psychologists are held to the same standards.

Saturday, December 30, 2023

The ethics of doing human enhancement ethics

Rueda, J. (2023). 
Futures, 153, 103236.

Abstract

Human enhancement is one of the leading research topics in contemporary applied ethics. Interestingly, the widespread attention to the ethical aspects of future enhancement applications has generated misgivings. Are researchers who spend their time investigating the ethics of futuristic human enhancement scenarios acting in an ethically suboptimal manner? Are the methods they use to analyze future technological developments appropriate? Are institutions wasting resources by funding such research? In this article, I address the ethics of doing human enhancement ethics focusing on two main concerns. The Methodological Problem refers to the question of how we should methodologically address the moral aspects of future enhancement applications. The Normative Problem refers to what is the normative justification for investigating and funding the research on the ethical aspects of future human enhancement. This article aims to give a satisfactory response to both meta-questions in order to ethically justify the inquiry into the ethical aspects of emerging enhancement technologies.

Highlights

• Formulates second-order problems neglected in the literature on the ethics of future enhancement technologies.

• Discusses speculative ethics and anticipatory ethics methodologies for analyzing emerging enhancement innovations.

• Evaluates the main objections to engaging in research into the ethical aspects of future scenarios of human enhancement.

• Shows that methodological and normative meta-questions are key to advance the ethical debate on human enhancement.

Friday, December 29, 2023

A Hybrid Account of Harm

Unruh, C. F. (2022).
Australasian Journal of Philosophy, 1–14.
https://doi.org/10.1080/00048402.2022.2048401

Abstract

When does a state of affairs constitute a harm to someone? Comparative accounts say that being worse off constitutes harm. The temporal version of the comparative account is seldom taken seriously, due to apparently fatal counterexamples. I defend the temporal version against these counterexamples, and show that it is in fact more plausible than the prominent counterfactual version of the account. Non-comparative accounts say that being badly off constitutes harm. However, neither the temporal comparative account nor the non-comparative account can correctly classify all harms. I argue that we should combine them into a hybrid account of harm. The hybrid account is extensionally adequate and presents a unified view on the nature of harm.


Here's my take:

Charlotte Unruh proposes a new way of thinking about harm. Unruh argues that neither the traditional comparative account nor the non-comparative account of harm can adequately explain all cases of harm. The comparative account says that harm consists in being worse off than one would have been had some event not occurred. The non-comparative account says that harm consists in being in a bad state, regardless of how one would have fared otherwise.

Unruh proposes a hybrid account of harm that combines elements of both the comparative and non-comparative accounts. She says that an agent suffers harm if and only if either (i) the agent suffers ill-being or (ii) the agent's well-being is lower than it was before. This hybrid account is able to explain cases of harm that cannot be explained by either the comparative or non-comparative account alone. For example, the hybrid account explains why it is harmful to prevent someone from achieving a good that they would have otherwise achieved, even if the person is still in a good state overall.
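
Stated compactly, in my own notation (the paper's formalism may differ): write $W_a(t)$ for agent $a$'s well-being at time $t$, with negative values representing ill-being. For an event $e$ occurring between times $t_0$ and $t_1$, the hybrid account then reads:

$$\mathrm{Harm}(e, a) \iff \underbrace{W_a(t_1) < 0}_{\text{non-comparative clause: ill-being}} \;\lor\; \underbrace{W_a(t_1) < W_a(t_0)}_{\text{temporal comparative clause: worse off than before}}$$

Either clause alone suffices, which is how the account covers both someone left in a bad state and someone merely made worse off than they were.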

Unruh's hybrid account of harm has a number of advantages over other accounts of harm. It is extensionally adequate, meaning that it correctly classifies all cases of harm as harmful and all cases of non-harm as non-harmful. It is also normatively plausible, meaning that it accords with our intuitions about what counts as harm. Additionally, the hybrid account is able to explain a number of different phenomena related to harm, such as the severity of harm, the distribution of harm, and the compensation for harm.

Thursday, December 28, 2023

The Relative Importance of Target and Judge Characteristics in Shaping the Moral Circle

Jaeger, B., & Wilks, M. (2021). 
Cognitive Science. 

Abstract

People's treatment of others (humans, nonhuman animals, or other entities) often depends on whether they think the entity is worthy of moral concern. Recent work has begun to investigate which entities are included in a person's moral circle, examining how certain target characteristics (e.g., species category, perceived intelligence) and judge characteristics (e.g., empathy, political orientation) shape moral inclusion. However, the relative importance of target and judge characteristics in predicting moral inclusion remains unclear. When predicting whether a person will deem an entity worthy of moral consideration, how important is it to know who is making the judgment (i.e., characteristics of the judge), who is being judged (i.e., characteristics of the target), and potential interactions between the two factors? Here, we address this foundational question by conducting a variance component analysis of the moral circle. In two studies with participants from the Netherlands, the United States, the United Kingdom, and Australia (N = 836), we test how much variance in judgments of moral concern is explained by between-target differences, between-judge differences, and by the interaction between the two factors. We consistently find that all three components explain substantial amounts of variance in judgments of moral concern. Our findings provide two important insights. First, an increased focus on interactions between target and judge characteristics is needed, as these interactions explain as much variance as target and judge characteristics separately. Second, any theoretical account that aims to provide an accurate description of moral inclusion needs to consider target characteristics, judge characteristics, and their interaction.

Here is my take:

The authors begin by reviewing the literature on the moral circle, which is the group of beings that people believe are worthy of moral consideration. They note that both target characteristics (e.g., species category, perceived intelligence) and judge characteristics (e.g., empathy, political orientation) have been shown to influence moral inclusion. However, the relative importance of these two types of characteristics remains unclear.

To address this question, the authors conducted two studies with participants from the Netherlands, the United States, the United Kingdom, and Australia. In each study, participants were asked to rate how much moral concern they felt for a variety of targets, including humans, animals, and robots. Participants were also asked to complete a questionnaire about their own moral values and beliefs.

The authors' analysis revealed that both target and judge characteristics explained significant amounts of variance in judgments of moral concern. However, they also found that the interaction between target and judge characteristics was just as important as target and judge characteristics separately. This means that the moral circle is not simply a function of either target or judge characteristics, but rather of the complex interaction between the two.
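
To illustrate what a variance component analysis of a crossed judges-by-targets design involves, here is a simulated sketch in Python. It generates ratings from additive judge effects, target effects, a judge-by-target interaction, and noise, then recovers the components with a rough method-of-moments decomposition. This is my own illustration, not the authors' estimator or data.

```python
# Simulated variance-component decomposition for a crossed judges x targets
# design (a generic sketch; not the authors' data or exact estimator).
import numpy as np

rng = np.random.default_rng(7)
n_judges, n_targets, n_reps = 100, 20, 2   # n_reps > 1 separates the
                                           # interaction from residual noise

# True components: judge effects, target effects, judge-x-target interaction.
judge = rng.normal(0, 1.0, (n_judges, 1, 1))
target = rng.normal(0, 1.0, (1, n_targets, 1))
interact = rng.normal(0, 1.0, (n_judges, n_targets, 1))
noise = rng.normal(0, 0.5, (n_judges, n_targets, n_reps))

y = 4.0 + judge + target + interact + noise   # moral-concern ratings

# Method-of-moments estimates from mean squares (two-way random effects).
cell = y.mean(axis=2)                                      # cell means
ms_err = y.var(axis=2, ddof=1).mean()                      # within-cell noise
var_judge = cell.mean(axis=1).var(ddof=1)                  # between-judge
var_target = cell.mean(axis=0).var(ddof=1)                 # between-target
var_inter = cell.var(ddof=1) - var_judge - var_target \
    - ms_err / n_reps                                      # rough interaction

print(f"judge: {var_judge:.2f}, target: {var_target:.2f}, "
      f"interaction: {var_inter:.2f} (true values: 1.0, 1.0, 1.0)")
```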

The authors' findings have important implications for our understanding of the moral circle. They show that moral inclusion is not simply a matter of whether or not a target possesses certain characteristics (e.g., sentience, intelligence). Rather, it also depends on the characteristics of the judge, as well as the interaction between the two.

The authors' findings also have important implications for applied ethics. For example, they suggest that ethicists should be careful to avoid making generalizations about the moral status of entire groups of beings. Instead, they should consider the individual characteristics of both the target and the judge when making moral judgments.