Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, March 9, 2024

New Evidence Suggests Long COVID Could Be a Brain Injury

Sara Novak
MedScape.com
Originally posted 8 Feb 24

Brain fog is one of the most common, persistent complaints in patients with long COVID. It affects as many as 46% of patients who also deal with other cognitive concerns like memory loss and difficulty concentrating. 

Now, researchers believe they know why. A new study has found that these symptoms may be the result of a viral-borne brain injury that can cause cognitive and mental health issues persisting for years.

Researchers found that 351 patients hospitalized with severe COVID-19 had evidence of a long-term brain injury a year after contracting the SARS-CoV-2 virus. The findings were based on a series of cognitive tests, self-reported symptoms, brain scans, and biomarkers. 

Brain Deficits Equal to 20 Years of Brain Aging

As part of the preprint study, participants took a cognition test with their scores age-matched to those who had not suffered a serious bout of COVID-19. Then a blood sample was taken to look for specific biomarkers, showing that elevated levels of certain biomarkers were consistent with a brain injury. Using brain scans, researchers also found that certain regions of the brain associated with attention were reduced in volume.

Patients who participated in the study were "less accurate and slower" in their cognition, and suffered from at least one mental health condition, such as depression, anxiety, or posttraumatic stress disorder, according to researchers.

The brain deficits found in COVID-19 patients were equivalent to 20 years of brain aging and provided proof of what doctors have feared: that this virus can damage the brain and result in ongoing mental health issues. 

Friday, March 8, 2024

What Does Being Sober Mean Today? For Many, Not Full Abstinence

Ernesto Londono
The New York Times
Originally posted 4 Feb 24

Here are two excerpts:

Notions of what constitutes sobriety and problematic substance use have grown more flexible in recent years as younger Americans have shunned alcohol in increasing numbers while embracing cannabis and psychedelics - a phenomenon that alarms some addiction experts.

Not long ago, sobriety was broadly understood to mean abstaining from all intoxicating substances, and the term was often associated with people who had overcome severe forms of addiction. These days, it is used more expansively, including by people who have quit drinking alcohol but consume what they deem moderate amounts of other substances, including marijuana and mushrooms.

(cut)

As some drugs come to be viewed as wellness boosters by those who use them, adherence to the full abstinence model favored by organizations like Alcoholics Anonymous is shifting. Some people call themselves "California sober," a term popularized in a 2021 song by the pop star Demi Lovato, who later disavowed the idea, saying on social media that "sober sober is the only way to be."

Approaches that might have once seemed ludicrous - like treating opioid addiction with psychedelics - have gained broader enthusiasm among doctors as drug overdoses kill tens of thousands of Americans each year.

"The abstinence-only model is very restrictive," said Dr. Peter Grinspoon, a primary care physician at Massachusetts General Hospital who specializes in medical cannabis and is a recovering opioid addict. "We really have to meet people where they are and have a broader recovery tent."

It is impossible to know how many Americans consider themselves part of an increasingly malleable concept of sobriety, but there are indications of shifting views of acceptable substance use. Since 2000, alcohol use among younger Americans has declined significantly, according to a Gallup poll.

At the same time, the use of cannabis and psychedelics has risen as state laws and attitudes grow more permissive, even as both remain illegal under federal law.

A survey found that 44 percent of adults aged 19 to 30 said in 2022 that they had used cannabis in the past year, a record high. That year, 8 percent of adults in the same age range said they had used psychedelics, an increase from the 3 percent a decade earlier.

Thursday, March 7, 2024

Canada Postpones Plan to Allow Euthanasia for Mentally Ill

Craig McCulloh
Voice of America News
Originally posted 8 Feb 24

The Canadian government is delaying access to medically assisted death for people with mental illness.

Those suffering from mental illness were supposed to be able to access Medical Assistance in Dying — also known as MAID — starting March 17. The recent announcement by the government of Canadian Prime Minister Justin Trudeau was the second delay after original legislation authorizing the practice passed in 2021.

The delay came in response to a recommendation by a majority of the members of a committee made up of senators and members of Parliament.

One of the most high-profile proponents of MAID is British Columbia-based lawyer Chris Considine. In the mid-1990s, he represented Sue Rodriguez, who was dying from amyotrophic lateral sclerosis, commonly known as ALS.

Their bid for approval of a medically assisted death was rejected at the time by the Supreme Court of Canada. But a law passed in 2016 legalized euthanasia for individuals with terminal conditions. From then until 2022, more than 45,000 people chose to die.


Summary:

Canada originally planned to expand its Medical Assistance in Dying (MAiD) program to include individuals with mental illnesses in March 2024.
  • This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
  • The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
  • This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
I would be concerned about the following issues:
  • Vulnerability: Mental illness can impair judgment, raising concerns about informed consent and potential coercion.
  • Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
  • Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
  • Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.

Wednesday, March 6, 2024

We're good people: Moral conviction as social identity

Ekstrom, P. D. (2022, April 27).

Abstract

Moral convictions—attitudes that people construe as matters of right and wrong—have unique effects on behavior, from activism to intolerance. Less is known, though, about the psychological underpinnings of moral convictions themselves. I propose that moral convictions are social identities. Consistent with the idea that moral convictions are identities, I find in two studies that attitude-level moral conviction predicts (1) attitudes’ self-reported identity centrality and (2) reaction time to attitude-related stimuli in a me/not me task. Consistent with the idea that moral convictions are social identities, I find evidence that participants used their moral convictions to perceive, categorize, and remember information about other individuals’ positions on political issues, and that they did so more strongly when their convictions were more identity-central. In short, the identities that participants’ moral convictions defined were also meaningful social categories, providing a basis to distinguish “us” from “them.” However, I also find that non-moral attitudes can serve as meaningful social categories. Although moral convictions were more identity-central than non-moral attitudes, moral and non-moral attitudes may both define social identities that are more or less salient in certain situations. Regardless, social identity may help explain intolerance for moral disagreement, and identity-based interventions may help reduce that intolerance.

Here is my summary:

Main Hypothesis:
  • Moral convictions (beliefs about right and wrong) are seen as fundamental and universally true, distinct from other attitudes.
  • The research proposes that they shape how people view themselves and others, acting as social identities.
Key Points:
  • Moral convictions define group belonging: People use them to categorize themselves and others as "good" or "bad," similar to how we might use group affiliations like race or religion.
  • They influence our relationships: We tend to be more accepting and trusting of those who share our moral convictions.
  • They can lead to conflict: When morals clash, it can create animosity and division between groups with different convictions.
Evidence:
  • The research cites studies showing how people judge others based on their moral stances, similar to how they judge based on group membership.
  • It also shows how moral convictions predict behavior like activism and intolerance towards opposing views.
Implications:
  • Understanding how moral convictions function as social identities can help explain conflict, prejudice, and social movements.
  • It may also offer insights into promoting understanding and cooperation between groups with differing moral beliefs.
Overall:

This research suggests that moral convictions are more than just strong opinions; they act as powerful social identities shaping how we see ourselves and interact with others. Understanding this dynamic can offer valuable insights into social behavior and potential avenues for promoting tolerance and cooperation.

Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 FEB 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Monday, March 4, 2024

How to Deal with Counter-Examples to Common Morality Theory: A Surprising Result

Herissone-Kelly P.
Cambridge Quarterly of Healthcare Ethics.
2022;31(2):185-191.
doi:10.1017/S096318012100058X

Abstract

Tom Beauchamp and James Childress are confident that their four principles—respect for autonomy, beneficence, non-maleficence, and justice—are globally applicable to the sorts of issues that arise in biomedical ethics, in part because those principles form part of the common morality (a set of general norms to which all morally committed persons subscribe). Inevitably, however, the question arises of how the principlist ought to respond when presented with apparent counter-examples to this thesis. I examine a number of strategies the principlist might adopt in order to retain common morality theory in the face of supposed counter-examples. I conclude that only a strategy that takes a non-realist view of the common morality’s principles is viable. Unfortunately, such a view is likely not to appeal to the principlist.


Herissone-Kelly examines various strategies principlism could employ to address counter-examples:

  • Refine the principles: Clarify or reinterpret the principles to better handle specific cases.
  • Prioritize principles: Establish a hierarchy among the principles to resolve conflicts.
  • Supplement the principles: Introduce additional considerations or context-specific factors.
  • Limit the scope: Acknowledge that the principles may not apply universally to all cultures or situations.
Herissone-Kelly argues that none of these strategies are fully satisfactory. Refining or prioritizing principles risks distorting their original meaning or introducing arbitrariness. Supplementing them can lead to an unwieldy and complex framework. Limiting their scope undermines the theory's claim to universality.

He concludes that the most viable approach is to adopt a non-realist view of the common morality's principles. This means understanding them not as objective moral facts but as flexible tools for ethical reflection and deliberation, open to interpretation and adaptation in different contexts. While this may seem to weaken the theory's authority, Herissone-Kelly argues that it allows for a more nuanced and practical application of ethical principles in a diverse world.

Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Ed
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.
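The uniformity red flag is easy to see in a toy simulation. The sketch below is purely illustrative and is not the investigators' actual method - the distributions, bin width, and the `spread` statistic are my own assumptions. The point it demonstrates is the one above: uniformly generated mileage fills every bin about equally (about as many "drivers" at 40,000 miles as at 500), while plausibly skewed real-world data does not.

```python
import random

random.seed(0)

# Hypothetical illustration only -- not the actual Data Colada analysis.
# Real annual-mileage data is typically unimodal and right-skewed, while
# naively fabricated data drawn from a uniform distribution is flat.
n = 10_000
faked = [random.uniform(0, 50_000) for _ in range(n)]
# A lognormal centered near ~11,000 miles/year stands in for plausible data.
plausible = [min(random.lognormvariate(9.3, 0.5), 50_000) for _ in range(n)]

def bin_counts(data, width=10_000, top=50_000):
    """Count observations in each mileage bin of the given width."""
    bins = [0] * (top // width)
    for x in data:
        bins[min(int(x // width), len(bins) - 1)] += 1
    return bins

def spread(bins):
    """Ratio of the fullest bin to the emptiest: near 1 means suspiciously flat."""
    return max(bins) / max(min(bins), 1)

print(spread(bin_counts(faked)))      # near 1: every bin similarly full
print(spread(bin_counts(plausible)))  # much larger: realistic skew
```

A flatness check like this is only a screening heuristic, but it captures why the scrutinizing researchers found the mileage numbers implausible at a glance.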

In short, there is no doubt that the data were faked. The only question is, who did it?


This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while no evidence of fabricated data was found, Ariely did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Saturday, March 2, 2024

Unraveling the Mindset of Victimhood

Scott Barry Kaufman
Scientific American
Originally posted 29 June 2020

Here is an excerpt:

Constantly seeking recognition of one’s victimhood. Those who score high on this dimension have a perpetual need to have their suffering acknowledged. In general, this is a normal psychological response to trauma. Experiencing trauma tends to “shatter our assumptions” about the world as a just and moral place. Recognition of one’s victimhood is a normal response to trauma and can help reestablish a person’s confidence in their perception of the world as a fair and just place to live.

Also, it is normal for victims to want the perpetrators to take responsibility for their wrongdoing and to express feelings of guilt. Studies conducted on testimonies of patients and therapists have found that validation of the trauma is important for therapeutic recovery from trauma and victimization (see here and here).

A sense of moral elitism. Those who score high on this dimension perceive themselves as having an immaculate morality and view everyone else as being immoral. Moral elitism can be used to control others by accusing others of being immoral, unfair or selfish, while seeing oneself as supremely moral and ethical.

Moral elitism often develops as a defense mechanism against deeply painful emotions and as a way to maintain a positive self-image. As a result, those under distress tend to deny their own aggressiveness and destructive impulses and project them onto others. The “other” is perceived as threatening whereas the self is perceived as persecuted, vulnerable and morally superior.


Here is a summary:

Kaufman explores the concept of "interpersonal victimhood," a tendency to view oneself as the repeated target of unfair treatment by others. He identifies several key characteristics of this mindset, including:
  • Belief in inherent unfairness: The conviction that the world is fundamentally unjust and that one is disproportionately likely to experience harm.
  • Moral self-righteousness: The perception of oneself as more ethical and deserving of good treatment compared to others.
  • Rumination on past injustices: Dwelling on and replaying negative experiences, often with feelings of anger and resentment.
  • Difficulty taking responsibility: Attributing negative outcomes to external factors rather than acknowledging one's own role.
Kaufman argues that while acknowledging genuine injustices is important, clinging to a victimhood identity can be detrimental. It can hinder personal growth, strain relationships, and fuel negativity. He emphasizes the importance of developing a more balanced perspective, acknowledging both external challenges and personal agency. The article also offers strategies for fostering resilience.

Friday, March 1, 2024

AI needs the constraints of the human brain

Danyal Akarca
iai.tv
Originally posted 30 Jan 24

Here is an excerpt:

So, evolution shapes systems that are capable of solving competing problems that are both internal (e.g., how to expend energy) and external (e.g., how to act to survive), but in a way that can be highly efficient, in many cases elegant, and often surprising. But how does this evolutionary story of biological intelligence contrast with the current paradigm of AI?

In some ways, quite directly. Since the 50s, neural networks were developed as models inspired directly by neurons in the brain and the strength of their connections, and many successful architectures of the past were directly motivated by neuroscience experimentation and theory. Yet, AI research in the modern era has occurred with a significant absence of thought about intelligent systems in nature and their guiding principles. Why is this? There are many reasons. But one is that the exponential growth of computing capabilities, enabled by increases of transistors on integrated circuits (observed since the 1950s, known as Moore’s Law), has permitted AI researchers to leverage significant improvements in performance without necessarily requiring extraordinarily elegant solutions. This is not to say that modern AI algorithms are not widely impressive – they are. It is just that the majority of the heavy lifting has come from advances in computing power rather than their engineered design. Consequently, there has been relatively little recent need or interest from AI experts to look to the brain for inspiration.

But the tide is turning. From a hardware perspective, Moore’s law will not continue ad infinitum (at 7 nanometers, transistor channel lengths are now nearing fundamental limits of atomic spacing). We will therefore not be able to leverage ever improving performance delivered by increasingly compact microprocessors. It is likely therefore that we will require entirely new computing paradigms, some of which may be inspired by the types of computations we observe in the brain (the most notable being neuromorphic computing). From a software and AI perspective, it is becoming increasingly clear that – in part due to the reliance on increases to computational power – the AI research field will need to refresh its conceptions as to what makes systems intelligent at all. For example, this will require much more sophisticated benchmarks of what it means to perform at human or super-human performance. In sum, the field will need to form a much richer view of the possible space of intelligent systems, and how artificial models can occupy different places in that space.


Key Points:
  • Evolutionary pressures: Efficient, resource-saving brains are advantageous for survival, leading to optimized solutions for learning, memory, and decision-making.
  • AI's reliance on brute force: Modern AI often achieves performance through raw computing power, neglecting principles like energy efficiency.
  • Shifting AI paradigm: Moore's Law's end and limitations in conventional AI call for exploration of new paradigms, potentially inspired by the brain.
  • Neurobiology's potential: Brain principles like network structure, local learning, and energy trade-offs can inform AI design for efficiency and novel functionality.
  • Embodied AI with constraints: Recent research incorporates space and communication limitations into AI models, leading to features resembling real brains and potentially more efficient information processing.