Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Sunday, June 30, 2024

Reddit Provides Insight into How People Think About Moral Dilemmas

Sigal Samuel
Vox: Future Perfect
Undated post

Here is a sample:

Uncovering philosophy’s blind spots 

Let’s get a bit more precise: It’s not as though all of philosophy has ignored relational context. But one branch — utilitarianism — is strongly inclined in this direction. Utilitarians believe we should seek the greatest happiness for the greatest number of people — and we have to consider everybody’s happiness equally. So we’re not supposed to be partial to our own friends or family members. 

This ethical approach took off in the 18th century. Today, it’s extremely influential in Western philosophy — and not just in the halls of academia. Famous philosophers like Peter Singer have popularized it in the public sphere, too. 

Increasingly, though, some are challenging it. 

“Moral philosophy has for so long been about trying to identify universal moral principles that apply to all people regardless of their identity,” Yudkin told me. “And it’s because of this effort that moral philosophers have really moved away from the relational perspective. But the more that I think about the data, the more clear to me it is that you’re losing something essential from the moral equation when you abstract away from relationships.” 

Moral psychologists like Princeton’s Molly Crockett and Yale’s Margaret Clark have likewise been investigating the idea that moral obligations are relationship-specific.

“Here’s a classic example,” Crockett told me a few years ago. “Consider a woman, Wendy, who could easily provide a meal to a young child but fails to do so. Has Wendy done anything wrong? It depends on who the child is. If she’s failing to provide a meal to her own child, then absolutely she’s done something wrong! But if Wendy is a restaurant owner and the child is not otherwise starving, then they don’t have a relationship that creates special obligations prompting her to feed the child.”

According to Crockett, being a moral agent has become trickier for us with the rise of globalization, which forces us to think about how our actions might affect people we’re never going to meet. “Being a good global citizen now butts up against our very powerful psychological tendencies to prioritize our families and friends,” Crockett told me.


Here is my summary:

Reddit Provides Insight into How People Think About Moral Dilemmas
  • Psychologist Daniel Yudkin and colleagues analyzed millions of comments from Reddit's "Am I the Asshole?" forum to study how ordinary people reason about moral dilemmas in real-life situations.
  • They found the most common dilemmas involved "relational obligations" - what we owe to others based on our relationships with them, such as family, friends, and coworkers.
  • The types of moral dilemmas people faced varied based on the specific relationship context (e.g. with a sibling vs. manager).
Challenging the Impartiality of Utilitarianism
  • This challenges the utilitarian view in philosophy that we should impartially maximize happiness for everyone equally, ignoring special relationships.
  • Some argue this impartial view overlooks the deep psychological importance of prioritizing close relations like family over strangers.
  • While impartiality may be an ideal, critics say it is psychologically unrealistic to expect people to abandon loved ones to help larger numbers of strangers.
  • The research highlights how modern moral philosophy, especially utilitarianism, may fail to account for the central role relationships and social contexts play in ordinary moral reasoning and obligations.
As others have said better than me, moral norms and principles provide a shared framework for evaluating right and wrong behavior. They define obligations and duties we have towards others, especially those close to us. By adhering to moral codes, individuals can build trust, reciprocity, and a sense of fairness in their relationships.

The expression of moral judgments, both positive and negative, helps regulate self-interest and enforce cooperative norms within groups. When people can call out immoral actions and praise ethical conduct, it incentivizes prosocial behavior and discourages free-riding. This promotes cooperation for mutual benefit.

Saturday, June 29, 2024

OpenAI insiders are demanding a “right to warn” the public

Sigal Samuel
Vox.com
Originally posted 5 June 24

Here is an excerpt:

To be clear, the signatories are not saying they should be free to divulge intellectual property or trade secrets, but as long as they protect those, they want to be able to raise concerns about risks. To ensure whistleblowers are protected, they want the companies to set up an anonymous process by which employees can report their concerns “to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise.” 

An OpenAI spokesperson told Vox that current and former employees already have forums to raise their thoughts through leadership office hours, Q&A sessions with the board, and an anonymous integrity hotline.

“Ordinary whistleblower protections [that exist under the law] are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the signatories write in the proposal. They have retained a pro bono lawyer, Lawrence Lessig, who previously advised Facebook whistleblower Frances Haugen and whom the New Yorker once described as “the most important thinker on intellectual property in the Internet era.”


Here are some thoughts:

AI development is booming, but with great power comes great responsibility, typed the Spider-Man fan. AI researchers at OpenAI are calling for a "right to warn" the public about potential risks. In clinical psychology, we have a comparable "duty to warn" when a patient poses a credible threat of violence. This raises important ethical questions. On one hand, transparency and open communication are crucial for responsible AI development. On the other hand, companies have legitimate interests in protecting intellectual property and trade secrets. The key lies in striking a balance: researchers should have safe channels to voice concerns without fearing retaliation, and clear guidelines can help ensure responsible disclosure without compromising confidential information.

Ultimately, fostering a culture of open communication is essential to ensure AI benefits society without creating unforeseen risks. In this respect, AI developers need ethical guidelines similar to those that already govern psychologists.

Friday, June 28, 2024

Becoming a culturally responsive and socially just clinical supervisor

Spowart, J. K. P., & Robertson, S. E. (2024).
Canadian Psychology / Psychologie canadienne.
Advance online publication. 
https://doi.org/10.1037/cap0000388

Abstract

Clinical supervisors must learn to attend to and address a breadth of cultural, diversity and social justice factors and dynamics when providing supervision. Developing these abilities does not occur automatically; rather, training in clinical supervision has a significant impact on supervisors’ development. Unfortunately, there is relatively limited research on how supervisors develop these same ways of being and working. Therefore, the purpose of this study was to explore how counselling psychology doctoral students understand their experiences of becoming culturally responsive and socially just clinical supervisors. Findings from this study detail the developmental experiences of novice supervisors and highlight training needs, educational interventions, progression of competencies and experiences with counselling supervisees and supervisors-of-supervision. Implications for theories of supervisor development and approaches in graduate training programmes are discussed alongside calls to more robustly integrate culturally responsive and socially just training and approaches throughout the field of clinical supervision.

Impact Statement

Clinical supervisors are responsible for attending to and addressing issues of culture, diversity and advocacy so that they may better prepare new mental health practitioners to support populations from diverse backgrounds. Little is known about the training experiences and needs of clinical supervisors as they learn to carry out this important work. The present study addresses this gap in the literature by highlighting the experiences of supervisors-in-training and provides tangible education and training recommendations to help ensure more culturally responsive and socially just clinical supervision practices.

Here are two excerpts:

From the Introduction:

Clinical supervision is a distinct area of practice in psychology (Arthur & Collins, 2015). Historically, it was assumed that becoming a clinical supervisor was "a natural outgrowth of the acquisition of [counselling] experience" (Thériault & Gazzola, 2019, p. 155). Currently, it is recognised that becoming a clinical supervisor is a unique, complex and multifaceted developmental process in which distinct skills, knowledge, awareness and attitudes must be cultivated (Falender & Shafranske, 2017; Thériault & Gazzola, 2019). Adding to this, providing supervision alone does not guarantee supervisor development or the acquisition of clinical supervision competencies (Falender & Shafranske, 2004; C. E. Watkins, 2012). Rather, training in clinical supervision has been shown to have a significant impact on development as a supervisor (Christofferson et al., 2023; Gazzola & De Stefano, 2016; Milne et al., 2011). Individuals may obtain such training either during graduate school or through postgraduate professional development.

From the Discussion:

To begin, the importance of MCSJ (Multicultural Social Justice) factors and dynamics served as a context for the doctoral student SITs' (Supervisors In Training) experiences. As if it were a lens through which they understood their practice and development, their focus on MCSJ factors and dynamics was not something that could be divorced from their experiences. As they were transitioning into and taking on their new role, the SITs experienced some initial difficulties. At first, they felt they needed a road map. They did not have a clear understanding of how they could provide CRSJ (culturally responsive and socially just) supervision and wished they had received more initial guidance. Some of these initial difficulties abated as the doctoral student SITs were impacted by a number of supports to their development.

Thursday, June 27, 2024

When Therapists Lose Their Licenses, Some Turn to the Unregulated Life Coaching Industry Instead

Jessica Miller
Salt Lake Tribune
Originally published 17 June 24

A frustrated woman recently called the Utah official in charge of professional licensing, upset that his office couldn’t take action against a life coach she had seen. Mark Steinagel recalls the woman telling him: “I really think that we should be regulating life coaching. Because this person did a lot of damage to me.”

Reports about life coaches — who sell the promise of helping people achieve their personal or professional goals — come into Utah’s Division of Professional Licensing about once a month. But much of the time, Steinagel or his staff have to explain that there’s nothing they can do.

If the woman had been complaining about any of the therapist professions overseen by DOPL, Steinagel’s office might have been able to investigate and potentially order discipline, including fines.

But life coaches aren’t therapists and are mostly unregulated across the United States. They aren’t required to be trained in ethical boundaries the way therapists are, and there’s no universally accepted certification for those who work in the industry.


Here are some thoughts on the ethics of this trend:

The trend of therapists who have lost their licenses transitioning to the unregulated life coaching industry raises significant ethical concerns and risks. This shift allows individuals who have been deemed unfit to practice therapy to continue working with vulnerable clients without oversight or accountability. The lack of regulation in life coaching means that these practitioners can potentially continue harmful behaviors, misrepresent their qualifications, and exploit clients without facing the same consequences they would in the regulated therapy field.

This situation poses substantial risks to clients (and the integrity of coaching as a profession). Clients seeking help may not understand the difference between regulated therapy and unregulated life coaching, potentially exposing themselves to practitioners who have previously violated ethical standards. The presence of discredited therapists in the life coaching industry can erode public trust in mental health services and coaching alike, potentially deterring individuals from seeking necessary help. Moreover, clients have limited legal recourse if they are harmed by an unregulated life coach, leaving them vulnerable to financial and emotional distress.

To address these concerns, there is a pressing need for regulatory measures in the life coaching industry, particularly concerning practitioners with a history of ethical violations in related fields. Such regulations could help maintain the integrity of coaching, protect vulnerable clients, and ensure that those seeking help receive services from qualified and ethical practitioners. Without such measures, the potential for harm remains significant, undermining the valuable work done by ethical professionals in both therapy and life coaching.

Wednesday, June 26, 2024

Can Generative AI improve social science?

Bail, C. A. (2024).
Proceedings of the National Academy of
Sciences of the United States of America, 121(21). 

Abstract

Generative AI that can produce realistic text, images, and other human-like outputs is currently transforming many different industries. Yet it is not yet known how such tools might influence social science research. I argue Generative AI has the potential to improve survey research, online experiments, automated content analyses, agent-based models, and other techniques commonly used to study human behavior. In the second section of this article, I discuss the many limitations of Generative AI. I examine how bias in the data used to train these tools can negatively impact social science research—as well as a range of other challenges related to ethics, replication, environmental impact, and the proliferation of low-quality research. I conclude by arguing that social scientists can address many of these limitations by creating open-source infrastructure for research on human behavior. Such infrastructure is not only necessary to ensure broad access to high-quality research tools, I argue, but also because the progress of AI will require deeper understanding of the social forces that guide human behavior.

Here is a brief summary:

Generative AI, with its ability to produce realistic text, images, and data, has the potential to significantly impact social science research.  This article explores both the exciting possibilities and potential pitfalls of this new technology.

On the positive side, generative AI could streamline data collection and analysis, making social science research more efficient and allowing researchers to explore new avenues. For example, AI-powered surveys could be more engaging and lead to higher response rates. Additionally, AI could automate tasks like content analysis, freeing up researchers to focus on interpretation and theory building.
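
As a purely illustrative sketch of what that kind of automation might look like (none of this is from the paper; the model choice, coding categories, and example responses are assumptions), a researcher could hand open-ended survey responses to an off-the-shelf classifier for a first pass and have human coders audit a sample:

```python
# Illustrative sketch only (not from Bail's paper): automating a first pass of
# content analysis over open-ended survey responses. The model choice,
# categories, and example responses are all assumptions for illustration.
from collections import Counter
from transformers import pipeline

CATEGORIES = ["economic concern", "health concern", "environmental concern", "other"]

# Zero-shot classification stands in here for whatever generative or
# discriminative model a research team actually has access to.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def code_responses(responses):
    """Assign each response its best-fitting category; human coders audit a sample."""
    coded = []
    for text in responses:
        result = classifier(text, candidate_labels=CATEGORIES)
        coded.append(result["labels"][0])  # highest-scoring category
    return Counter(coded)

example_responses = [
    "I'm mostly worried about rent and grocery prices going up.",
    "Air quality in my neighborhood keeps getting worse.",
]
print(code_responses(example_responses))
```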

However, there are also ethical considerations. AI models can inherit and amplify biases present in the data they're trained on. This could lead to skewed research findings that perpetuate social inequalities. Furthermore, the opaqueness of some AI models can make it difficult to understand how they arrive at their conclusions, raising concerns about transparency and replicability in research.

Overall, generative AI offers a powerful tool for social scientists, but it's crucial to be mindful of the ethical implications and limitations of this technology. Careful development and application are essential to ensure that AI enhances, rather than hinders, our understanding of human behavior.

Tuesday, June 25, 2024

‘I’m dying, you’re not': Those terminally ill ask more states to legalize physician-assisted death

Jesse Bedayn
AP
Updated 6:39 PM EDT, April 12, 2024

On a brisk day at a restaurant outside Chicago, Deb Robertson sat with her teenage grandson to talk about her death.

She’ll probably miss his high school graduation. She declined the extended warranty on her car. Sometimes she wonders who will be at her funeral.

Those things don’t frighten her much. The 65-year-old didn’t cry when she learned two months ago that the cancerous tumors in her liver were spreading, portending a tormented death.

But later, she received a call. A bill moving through the Illinois Legislature to allow certain terminally ill patients to end their own lives with a doctor’s help had made progress.

Then she cried.

“Medical-aid in dying is not me choosing to die,” she says she told her 17-year-old grandson. “I am going to die. But it is my way of having a little bit more control over what it looks like in the end.”


Here is a summary:

The article discusses the ethical and moral debate surrounding physician-assisted death (PAD), also known as medical aid in dying (MAiD). It highlights the desire of terminally ill patients for more control over their end-of-life experience, including the option for a peaceful death facilitated by a doctor.

On one hand, the article presents the perspective of patients like Deb Robertson, who argues that MAiD isn't about choosing to die, but about choosing how to die with dignity on their own terms, avoiding prolonged suffering.

On the other hand, the patchwork of laws across different states raises ethical concerns.  Some states are considering legalizing MAiD, while others are proposing stricter bans. This creates a situation where some patients have to travel to distant states or forgo their wishes entirely.

The article doesn't take a definitive stance on the morality of MAiD, but rather presents the arguments on both sides, leaving the reader to consider the complex ethical questions surrounding end-of-life decisions.

Monday, June 24, 2024

Evidence-Based Care for Suicidality as an Ethical & Professional Imperative: How to Decrease Suicidal Suffering & Save Lives

Jobes, D. A., & Barnett, J. E. (2024).
The American Psychologist
10.1037/amp0001325.
Advance online publication.

Abstract

Suicide is a major public and mental health problem in the United States and around the world. According to recent survey research, there were 16,600,000 American adults and adolescents in 2022 who reported having serious thoughts of suicide (Substance Abuse and Mental Health Services Administration, 2023), which underscores a profound need for effective clinical care for people who are suicidal. Yet there is evidence that clinical providers may avoid patients who are suicidal (out of fear and perceived concerns about malpractice liability) and that too many rely on interventions (i.e., inpatient hospitalization and medications) that have little to no evidence for decreasing suicidal ideation and behavior (and may even increase risk). Fortunately, there is an emerging and robust evidence-based clinical literature on suicide-related assessment, acute clinical stabilization, and the actual treatment of suicide risk through psychological interventions supported by replicated randomized controlled trials. Considering the pervasiveness of suicidality, the life versus death implications, and the availability of proven approaches, it is argued that providers should embrace evidence-based practices for suicidal risk as their best possible risk management strategy. Such an embrace is entirely consistent with expert recommendations as well as professional and ethical standards. Finally, a call to action is made with a series of specific recommendations to help psychologists (and other disciplines) use evidence-based, suicide-specific, approaches to help decrease suicide-related suffering and deaths. It is argued that doing so has now become both an ethical and professional imperative. Given the challenge of this issue, it is also simply the right thing to do. 

Note: I really wish the APA would make these articles available to every mental health provider.

Here is my best summary:
  1. Use evidence-based suicide risk assessments like the Ask Suicide Questionnaire, Columbia Suicide Severity Rating Scale, and Patient Health Questionnaire-9 to identify suicide risk, but do not rely solely on them.
  2. Implement acute stabilization interventions for highly suicidal patients, such as the Safety Plan Intervention, Crisis Response Plan, reducing access to lethal means, crisis hotlines/text lines, and caring contact follow-ups.
  3. Utilize evidence-based psychological treatments focused specifically on suicidal thoughts and behaviors, rather than solely treating underlying mental disorders. Examples are Cognitive Therapy for Suicide Prevention, Dialectical Behavior Therapy, and the Collaborative Assessment and Management of Suicidality.
  4. Receive comprehensive training in evidence-based suicide assessment and treatment during education and through continuing education to increase competence and reduce fear of working with suicidal patients.
  5. Integrate significant others into treatment with patient consent for support, monitoring, and reducing hospitalization need, while addressing confidentiality.
  6. Follow risk management strategies like thorough informed consent, documentation, and consulting colleagues, which align with ethical principles and reduce liability concerns.

Sunday, June 23, 2024

Healthcare Needs Qualified Expert Witnesses More Than Ever

Baum, N., MD. (2024, May 15).
MedPage Today
Originally posted 15 May 24

Any physician or scientist who has served as an expert witness is no doubt familiar with the three golden rules of testifying in a civil or criminal trial: 1) Do unto others as you would have them do unto you. 2) Them that's got the gold, rules. 3) The lawyer with the best medical expert gets the gold.

Rule Three becomes more salient as the need for medical and scientific expert witnesses is likely to accelerate due to an explosion of jury awards. In the decade from 2013 to 2023, malpractice verdicts of $10 million or more grew by 67%, according to reinsurance company TransRe. Enormous malpractice awards like these are clearly on the rise.

In 2023, several massive payouts made splashy headlines. For example, in November, a Florida jury ordered Johns Hopkins All Children's Hospital in St. Petersburg to pay a whopping $261 million for alleged medical negligence and false imprisonment of a young girl.

The case inspired the Netflix documentary "Take Care of Maya" which chronicled events leading to the suicide of Maya Kowalski's mother over Maya's separation from her family during months of hospitalization.

Also in 2023, a Pennsylvania jury ordered the Hospital of the University of Pennsylvania to pay $183 million for an alleged birth gone wrong, resulting in cerebral palsy and substantial neurodevelopmental delays. And in New York, a jury awarded $120 million to a stroke victim for alleged delayed diagnosis and treatment leading to extensive brain damage.


Here are some thoughts:

The article raises a crucial point about the need for qualified expert witnesses in healthcare-related legal cases. The complexities of the healthcare system demand a deep understanding of medical practices, procedures, and the intricate web of regulations that govern the industry. Unqualified or ill-informed expert testimony can have severe consequences, potentially leading to miscarriages of justice and undermining public trust in the healthcare system. It is imperative that expert witnesses possess the necessary credentials, experience, and up-to-date knowledge to provide accurate and impartial assessments.

Furthermore, the ethical implications of expert witness testimony in healthcare cases cannot be overstated. Healthcare professionals are bound by strict ethical codes that prioritize patient well-being, informed consent, and the preservation of human dignity. Expert witnesses must uphold these ethical principles and ensure that their testimony aligns with the highest standards of professional conduct. They must resist any temptation to skew their opinions or present biased information, as doing so could compromise the integrity of the legal process and potentially harm patients or healthcare providers.

Saturday, June 22, 2024

The Ethical Implications of Illusionism

Frankish, K.
Neuroethics 17, 28 (2024).

Abstract

Illusionism is a revisionary view of consciousness, which denies the existence of the phenomenal properties traditionally thought to render experience conscious. The view has theoretical attractions, but some think it also has objectionable ethical implications. They take illusionists to be denying the existence of consciousness itself, or at least of the thing that gives consciousness its ethical value, and thus as undermining our established ethical attitudes. This article responds to this objection. I argue that, properly understood, illusionism neither denies the existence of consciousness nor entails that consciousness does not ground ethical value. It merely offers a different account of what consciousness is and why it grounds ethical value. The article goes on to argue that the theoretical revision proposed by illusionists does have some indirect implications for our ethical attitudes but that these are wholly attractive and progressive ones. The illusionist perspective on consciousness promises to make ethical decision making easier and to extend the scope of our ethical concern. Illusionism is good news.

The article is free, and linked above.

Here are some important points:

The illusionist perspective argues that our conscious experiences and choices are not the result of free will, but rather the product of unconscious neural processes and external factors beyond our control. This view suggests that we should shift our focus from solely blaming individuals for their actions to considering the external factors (e.g., social structures, environmental influences) that shape behavior. Ethicists must reevaluate the concept of individual responsibility and moral condemnation, as people's choices and actions may not be entirely their own. Instead, a more nuanced and empathetic approach that acknowledges the complex interplay of forces influencing human behavior is necessary for ethical decision-making.

Moreover, the illusionist perspective has the potential to expand the scope of our ethical concern. If conscious experiences are not real in the way we typically assume, then the boundaries of moral consideration may need to be extended beyond just conscious beings. This could have significant implications for ethical debates surrounding the treatment of non-human animals, artificial intelligence, and even the environment. Ethicists must grapple with these profound questions as our understanding of consciousness evolves.

Friday, June 21, 2024

Lab-grown sperm and eggs: ‘epigenetic’ reset in human cells paves the way

Heidi Ledford
Nature

Here is an excerpt:

Growing human sperm and eggs in the laboratory would offer hope to some couples struggling with infertility. It would also provide a way to edit disease-causing DNA sequences in sperm and eggs, sidestepping some of the technical complications of making such edits in embryos. And understanding how eggs and sperm develop can give researchers insight into some causes of infertility.

But in addition to its technical difficulty, growing eggs and sperm in a dish — called in vitro gametogenesis — would carry weighty social and ethical questions. Genetic modification to prevent diseases, for example, could lead to genetic enhancement to boost traits associated with intelligence or athleticism.

Epigenetic reprogramming is key to the formation of reproductive cells — without it, the primordial cells that would eventually give rise to sperm and eggs stop developing. Furthermore, the epigenome affects gene activity, helping cells with identical DNA sequences to take on unique identities. The epigenome helps to differentiate a brain cell, for example, from a liver cell.

Researchers know how to grow mouse eggs and sperm using stem-cell-like cells generated from skin. But the protocols used don’t work in human cells: “There is a big gap between mice and humans,” says Saitou.


Here are some moral/ethical issues:

The ability to derive human gametes (sperm and eggs) from reprogrammed somatic cells raises profound ethical questions that must be carefully considered:

Reproductive Autonomy

Deriving gametes from non-traditional cell sources could enable third parties to create human embryos without the consent or involvement of the cell donors. This raises concerns over violations of reproductive autonomy and the potential for coercion or exploitation, especially of vulnerable groups.

Access and Equity

If allowed for reproductive purposes, access to lab-grown gamete technology may be limited due to high costs, exacerbating existing disparities in access to assisted reproductive services. There are also concerns over the creation of "designer babies" if the technology enables extensive genetic selection.

Safety Considerations

Subtle epigenetic errors during reprogramming or gametogenesis could lead to developmental abnormalities or diseases in resulting children. Extensive research is needed to ensure the safety and efficacy of lab-grown gametes before clinical use.

Social and Cultural Implications

The ability to derive gametes from non-traditional sources challenges traditional notions of parenthood and kinship. The technology's impact on family structures, gender roles, and social norms must be carefully examined.

Robust public discourse, ethical guidelines, and regulatory frameworks will be essential to navigate the profound moral questions surrounding lab-grown human gametes as this technology continues to advance.

Thursday, June 20, 2024

Share of Adult Suicides After Recent Jail Release

Miller TR, Weinstock LM, Ahmedani BK, et al.
JAMA Network Open. 2024;7(5):e249965.

Key Points

Question  What proportion of US adults who died by suicide spent at least 1 night in jail shortly before their death?

Findings  In this cohort modeling study involving nearly 7.1 million US adults released from incarceration in 2019, nearly 20% of suicides occurred among those who were released from jail in the past year, and 7% occurred among those in their second year after jail release.

Meaning  Findings of this study suggest that focused suicide prevention efforts could reach a substantial number of adults who were formerly incarcerated within 2 years, when death by suicide is likely to occur.

----------------
Abstract

Importance  Although people released from jail have an elevated suicide risk, the potentially large proportion of this population in all adult suicides is unknown.

Objective  To estimate what percentage of adults who died by suicide within 1 year or 2 years after jail release could be reached if the jail release triggered community suicide risk screening and prevention efforts.

Design, Setting, and Participants  This cohort modeling study used estimates from meta-analyses and jail census counts instead of unit record data. The cohort included all adults who were released from US jails in 2019. Data analysis and calculations were performed between June 2021 and February 2024.

Main Outcomes and Measures  The outcomes were percentage of total adult suicides within years 1 and 2 after jail release and associated crude mortality rates (CMRs), standardized mortality ratios (SMRs), and relative risks (RRs) of suicide in incarcerated vs not recently incarcerated adults. Taylor expansion formulas were used to calculate the variances of CMRs, SMRs, and other ratios. Random-effects restricted maximum likelihood meta-analyses were used to estimate suicide SMRs in postrelease years 1 and 2 from 10 jurisdictions. An alternate estimate was computed using the ratio of suicides after release to suicides while incarcerated.

Conclusions and Relevance  This cohort modeling study found that adults who were released from incarceration at least once make up a large, concentrated population at greatly elevated risk for death by suicide; therefore, suicide prevention efforts focused on return to the community after jail release could reach many adults within 1 to 2 years of jail release, when suicide is likely to occur. Health systems could develop infrastructure to identify these high-risk adults and provide community-based suicide screening and prevention.
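
For readers less familiar with the metrics in the abstract, here is a minimal sketch of how a standardized mortality ratio (SMR) and an approximate 95% confidence interval are typically computed. It is not the authors' code, and the counts are invented for illustration rather than taken from the study.

```python
# Minimal sketch (not the authors' code) of how a standardized mortality
# ratio and an approximate 95% CI are typically computed. The counts below
# are invented for illustration; they are not data from the study.
import math

def smr_with_ci(observed_deaths, expected_deaths):
    """SMR = observed / expected; the CI uses var(ln SMR) ~ 1/observed,
    a Taylor (delta-method) approximation of the kind the abstract mentions."""
    smr = observed_deaths / expected_deaths
    se_log = math.sqrt(1.0 / observed_deaths)
    lower = smr * math.exp(-1.96 * se_log)
    upper = smr * math.exp(1.96 * se_log)
    return smr, (lower, upper)

# Hypothetical example: 250 observed suicides in the year after release vs.
# 40 expected from general-population rates applied to the cohort.
smr, ci = smr_with_ci(250, 40.0)
print(f"SMR = {smr:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```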

Wednesday, June 19, 2024

The Internal State of an LLM Knows When It's Lying

A. Azaria and T. Mitchell
arxiv.org
Last Revised 17 Oct 23

Abstract

While Large Language Models (LLMs) have shown exceptional performance in various tasks, their (arguably) most prominent drawback is generating inaccurate or false information with a confident tone. In this paper, we hypothesize that the LLM's internal state can be used to reveal the truthfulness of a statement. Therefore, we introduce a simple yet effective method to detect the truthfulness of LLM-generated statements, which utilizes the LLM's hidden layer activations to determine the veracity of statements. To train and evaluate our method, we compose a dataset of true and false statements in six different topics. A classifier is trained to detect which statement is true or false based on an LLM's activation values. Specifically, the classifier receives as input the activation values from the LLM for each of the statements in the dataset. Our experiments demonstrate that our method for detecting statement veracity significantly outperforms even few-shot prompting methods, highlighting its potential to enhance the reliability of LLM-generated content and its practical applicability in real-world scenarios.


Here is a summary:

The research presents evidence that a large language model's (LLM's) internal state, specifically the hidden layer activations, can reveal whether statements it generates or is given are truthful or false.

The approach is to train a classifier on the LLM's hidden activations when processing true and false statements. In experiments, this classifier achieved 71-83% accuracy in labeling statements as true or false, outperforming methods based solely on the probability the LLM assigns to statements.
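
A minimal sketch of that general approach is below. It is not the authors' code: the "gpt2" model, the choice of hidden layer, the toy statements, and the logistic-regression probe (the paper trains its own classifier on a dataset spanning six topics) are all illustrative assumptions.

```python
# Minimal sketch of the activation-probing idea described above; not the
# authors' code. The model, layer, toy statements, and probe are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

statements = [
    ("The capital of France is Paris.", 1),
    ("The capital of France is Rome.", 0),
    ("Water freezes at 0 degrees Celsius.", 1),
    ("Water freezes at 50 degrees Celsius.", 0),
    ("Spiders have eight legs.", 1),
    ("Spiders have twelve legs.", 0),
]

def final_token_activation(text, layer=-6):
    """Hidden state of the last token at a chosen intermediate layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.hidden_states is a tuple of (batch, seq_len, hidden_dim) tensors
    return outputs.hidden_states[layer][0, -1, :].numpy()

X = [final_token_activation(text) for text, _ in statements]
y = [label for _, label in statements]

# In the paper, the classifier is evaluated on held-out statements and topics;
# here we simply fit the probe on the toy set to show the mechanics.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", probe.score(X, y))
```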

While LLM probability is related to truthfulness, it is also influenced by sentence length and word frequencies. The trained classifier provides a more reliable way to detect truthfulness.

The findings suggest that while LLMs can generate false information confidently, their internal representations encode signals about the veracity of statements. Leveraging these signals could help enhance the reliability of LLM outputs.

However, the approach was evaluated on a limited dataset of true/false statements across topics. Its generalization to arbitrary statements or knowledge domains is unclear from the study.

Tuesday, June 18, 2024

Medical-Targeted Ransomware Is Breaking Records After Change Healthcare’s $22M Payout

Andy Greenberg
wired.com
Originally posted 12 June 24

When Change Healthcare paid $22 million in March to a ransomware gang that had crippled the company along with hundreds of hospitals, medical practices, and pharmacies across the US, the cybersecurity industry warned that Change's extortion payment would only fuel a vicious cycle: Rewarding hackers who had carried out a ruthless act of sabotage against the US health care system nationwide with one of the largest ransomware payments in history, it seemed, was bound to incentivize a new wave of attacks on similarly sensitive victims. Now that wave has arrived.

In April, cybersecurity firm Recorded Future tracked 44 cases of cybercriminal groups targeting health care organizations with ransomware attacks, stealing their data, encrypting their systems, and demanding payments from the companies while holding their networks hostage. That's more health care victims of ransomware than in any month Recorded Future has seen in its four years of collecting that data, says Allan Liska, a threat intelligence analyst at the company. Comparing that number to the 30 incidents in March, it's also the second biggest month-to-month jump in incidents the company has ever tracked.

While Liska notes that he can't be sure of the reason for that spike, he argues it's unlikely to be a coincidence that it follows in the wake of Change Healthcare's eight-figure payout to the hacker group known as AlphV or BlackCat that was tormenting the company.


Here are some thoughts:

The recent record-breaking ransom payment by a healthcare giant raises a troubling question: are profits being prioritized over patient well-being? This approach creates an ethical dilemma and poses serious psychological and public health risks.

Imagine needing urgent medical attention, only to find your records inaccessible due to a cyberattack. Ransomware disrupts services, causing immense anxiety for patients. Disrupted access to data can delay diagnoses, hinder treatment, and even threaten public health initiatives. Furthermore, these attacks essentially blackmail healthcare providers, potentially eroding trust in the medical system.

To combat this growing threat, we need a multi-pronged approach. Healthcare institutions must prioritize robust cybersecurity. International law enforcement collaboration is crucial to hold cybercriminals accountable. Finally, open communication with patients during and after an attack is essential to rebuild trust and minimize stress. By working together, we can build a more resilient healthcare system that safeguards patient data and well-being.

Monday, June 17, 2024

Political ideology and environmentalism impair logical reasoning

Keller, L., Hazelaar, F., et al. (2024).
Thinking & Reasoning, 30(1), 79–108.

Abstract

People are more likely to think statements are valid when they agree with them than when they do not. We conducted four studies analyzing the interference of self-reported ideologies with performance in a syllogistic reasoning task. Study 1 established the task paradigm and demonstrated that participants’ political ideology affects syllogistic reasoning for syllogisms with political content but not politically irrelevant syllogisms. The preregistered Study 2 replicated the effect and showed that incentivizing accuracy did not alleviate these differences. Study 3 revealed that syllogistic reasoning is affected by ideology in the presence and absence of such bonus payments for correctly judging the conclusions’ logical validity. In Study 4, we observed similar effects regarding a different ideological orientation: environmentalism. Again, monetary bonuses did not attenuate these effects. Taken together, the results of four studies highlight the harm of ideology regarding people’s logical reasoning.

Here is a summary:

This paper reports four studies investigating the influence of political ideology and environmentalism on logical reasoning performance using a syllogistic reasoning task. The key findings are:
  1. People were more likely to judge the conclusion of a syllogism as valid when it aligned with their ideology (liberal, conservative, or environmentalist), even when the conclusion was logically invalid. Conversely, they were more likely to recognize flaws in conclusions that went against their ideology.
  2. This ideological bias in reasoning occurred symmetrically across the political spectrum - both liberals and conservatives showed impaired logical reasoning for conclusions contradicting their ideology.
  3. The bias persisted even when participants were offered monetary incentives for accurately judging the logical validity of conclusions, across online and lab studies in the US and Germany.
  4. Political ideology and environmentalism did not impact logical reasoning for neutral, non-ideological syllogisms, suggesting the effect was specific to ideologically charged content.
The authors argue that ideological beliefs can negatively impact logical reasoning abilities in a systematic way when dealing with belief-consonant or belief-dissonant conclusions, even for abstract reasoning tasks devoid of real-world consequences. Monetary incentives failed to mitigate this ideological reasoning effect.

Sunday, June 16, 2024

Black Americans and racial conspiracy theories about the news media

Kiana Cox
Pew Research Center
Originally posted 10 June 24

Some of the most enduring stereotypes about Black people have their roots in images created during and immediately after slavery. From the docile Mammy and Uncle Tom characters that appeared in newspaper ads and on food containers to the threatening Mandingo in the film “Birth of a Nation” and the more recent controversy about whether television character Olivia Pope was a modern-day Jezebel, Black Americans’ relationship with media has been contentious at best.

Black Americans have also said the news media specifically characterizes them as disproportionately poor, welfare-dependent and criminal. This history of stereotypical imagery provides some context for Black Americans’ beliefs in racial conspiracy theories about the media.

What is a ‘racial conspiracy theory’?
In this report, the phrase “racial conspiracy theories” refers to the suspicions that Black adults might have about the actions of U.S. institutions based on their personal and collective historical experiences with racial discrimination.

A Pew Research Center survey from early 2023 shows that 63% of Black Americans say the news about Black people is often more negative than news about other racial and ethnic groups. Over half (57%) say the news only covers certain segments of Black communities, and 43% say the coverage significantly stereotypes Black people.


Here are some thoughts:

The Pew study on Black Americans' beliefs in racial conspiracy theories about the news media sheds light on the deep-rooted mistrust stemming from a long history of systemic racism and negative portrayals. Harmful stereotypes like the docile "Mammy" or threatening "Mandingo" have perpetuated biases. Black Americans perceive disproportionate coverage depicting them as poor, criminal, and welfare-dependent, reinforcing negative stereotypes.

This mistrust is rooted in well-documented acts of discrimination and harm against Black communities, such as the Tuskegee Syphilis Study and Tulsa Race Massacre. The racial conspiracy theories are not mere paranoia but valid beliefs shaped by these collective experiences of intentional and negligent harm by institutions like the media.

To rebuild trust, the news media must actively dismantle negative stereotypes through increased newsroom diversity, cultural competency training, amplifying Black voices, and accountability for biased coverage. Acknowledging their role in shaping public perceptions and taking proactive steps towards ethical, inclusive reporting is crucial for promoting racial equity and justice.

Saturday, June 15, 2024

Folk psychological attributions of consciousness to large language models

Colombatto, C., & Fleming, S. M. (2024).
Neuroscience of Consciousness, 2024(1)

Abstract

Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations ('phenomenal consciousness'). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality-but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions-with potential implications for the legal and ethical status of AI.

Conclusions

In summary, our investigation of folk psychological attributions of consciousness revealed that most people are willing to attribute some form of phenomenality to LLMs: only a third of our sample thought that ChatGPT definitely did not have subjective experience, while two-thirds of our sample thought that ChatGPT had varying degrees of phenomenal consciousness. The relatively high rates of consciousness attributions in this sample are somewhat surprising, given that experts in neuroscience and consciousness science currently estimate that LLMs are highly unlikely to be conscious (Butlin et al. 2023, LeDoux et al. 2023). These findings thus highlight a discrepancy between folk intuitions and expert opinions on artificial consciousness—with significant implications for the ethical, legal, and moral status of AI.


Summary:

The paper examines how people attribute consciousness and mental states to large language models (LLMs) like GPT-3 based on folk psychology intuitions.  Folk psychology refers to how laypeople reason about minds and mental states based on observable behavior.

Attributions of Consciousness

In a survey of 300 US residents, participants rated the extent to which they attributed conscious experience to ChatGPT.  A majority attributed at least some possibility of subjective experience, including states such as emotions and beliefs.  However, attributions were lower for more advanced capacities like self-awareness and intentionality.

Factors Influencing Attributions

Attributions increased the more the LLM's responses appeared coherent, thoughtful and human-like.
Attributions decreased when participants were reminded that the LLM is an artificial system without subjective experiences. Individuals' beliefs about machine consciousness and their exposure to AI also impacted attributions.

Implications

The studies reveal a tendency for people to anthropomorphize and over-attribute mental capacities to sophisticated language models based on their surface behavior.  This has implications for managing public expectations around AI capabilities and potential risks of deception. The findings highlight the need for transparency about the actual cognitive architectures underlying LLMs to mitigate misunderstandings.

In summary, the research demonstrates how people's folk psychology leads them to project human-like mental states onto LLMs in ways that may not accurately reflect the systems' true capabilities and nature.

Friday, June 14, 2024

What does my group consider moral?: How social influence shapes moral expressions

del Rosario, K., Van Bavel, J. J., & West, T.
PsyArXiv (2024, May 8).

Abstract

Although morality is often characterized as a set of stable values that are deeply held, we argue that moral expressions are highly malleable and sensitive to social norms. For instance, norms can either lead people to exaggerate their expressions of morality (such as on social media) or restrain them (such as in professional settings). In this paper, we discuss why moral expressions are subject to social influence by considering two goals that govern social influence: affiliation goals (the desire to affiliate with one’s group) and accuracy goals (the desire to be accurate in ambiguous situations). Different from other domains of social influence, we argue that moral expressions often satisfy both affiliation goals (“I want to fit in with the group”) and accuracy goals (“I want to do the right thing”). As such, the fundamental question governing moral expressions is: “what does my group consider moral?” We argue that this central consideration achieves both goals underlying social influence and drives moral expressions. We outline the ways in which social influence shapes moral expressions, from unconsciously copying others’ behavior to expressing outrage to gain status within the group. Finally, we describe when the same goals can result in different behaviors, highlighting how context-specific norms can encourage (or discourage) moral expressions. We explain how this framework will be helpful in understanding how identity, norms, and social contexts shape moral expressions.

Conclusion

Our review examines moral expressions through the lens of social influence, illustrating the critical role of the social environment in shaping moral expressions. Moral expressions serve a social purpose, such as affiliating with a group, and are influenced by various goals, including understanding the appropriate emotional response to moral issues and conforming to others' expressions to fit in. These influences become evident in different contexts, where norms either encourage exaggerated expressions, like on social media, or restraint, such as in professional settings. For this reason, different forms of influence can have vastly different implications. As such, the fundamental social question governing moral expressions for people in moral contexts is: “What does my group consider moral?” However, much of the morality literature does not account for the role of social influence in moral expressions. Thus, a social norms framework will be helpful in understanding how social contexts shape moral expression.

Here is a summary:

The research argues that moral expressions (outward displays of emotions related to right and wrong) are highly malleable and shaped by social norms and contexts, contrary to the view that morality reflects stable convictions. It draws from research on normative influence (conforming to gain social affiliation) and informational influence (seeking accuracy in ambiguous situations) to explain how moral expressions aim to satisfy both affiliation goals ("fitting in with the group") and accuracy goals ("doing the right thing").

The key points are:
  1. Moral expressions vary across contexts because people look to their social groups to determine what is considered moral behavior.
  2. Affiliation goals (fitting in) and accuracy goals (being correct) are intertwined for moral expressions, unlike in other domains where they are distinct.
  3. Social influence shapes moral expressions in various ways, from unconscious mimicry to outrage expressions for gaining group status.
  4. Context-specific norms can encourage or discourage moral expressions by prioritizing affiliation over accuracy goals, or vice versa.
  5. The motivation to be seen as moral contributes to the malleability of moral expressions across social contexts.

Thursday, June 13, 2024

Examining Potential Psychological Protective and Risk Factors for Stress and Burnout in Social Workers

Maddock, A.
Clin Soc Work J (2024).

Abstract

Social work professionals experience high levels of stress and burnout. Stress and burnout can have a negative impact on the individual social worker, the organisations they work for, and perhaps most importantly, the quality of care that marginalised groups that are supported by social workers receive. Several work-related predictors of stress and burnout have been identified; however, no studies have examined the underlying psychological protective and risk factors which might help to explain changes in social worker stress and burnout. Using the clinically modified Buddhist psychological model (CBPM) as a theoretical framework, this cross-sectional study attempted to identify psychological protective and risk factors for stress and burnout in 121 social workers in Northern Ireland, using structural equation modelling, and conditional process analyses. This study provided promising preliminary evidence for a mediated effect CBPM as being a potentially useful explanatory framework of variation in social worker stress, emotional exhaustion, and depersonalisation. This study also provided evidence that several CBPM domains could have a direct effect on personal accomplishment. This study provides preliminary evidence that support programmes, which have the capacity to improve each CBPM domain (mindfulness, acceptance, attention regulation/decentering, self-compassion, non-attachment, and non-aversion) and reduce experiences of worry and rumination, are likely to support social workers to experience reduced stress, emotional exhaustion, depersonalisation of service users, and improvements in personal accomplishment.

From the Discussion

The aims of this paper were to provide more theoretical transparency on what some of the most important protective and risk factors for social worker stress and burnout are, using the data attained from social workers in Northern Ireland. To support our analysis, the CBPM (Maddock, 2023), which is a multi-faceted stress coping, cognitive and emotional regulation theory was used. Using structural equation modelling, though the direct and mediated effects CBPM was found to be an acceptable fit to the data on perceived stress, emotional exhaustion, and depersonalisation, our results indicate that the mediated effects CBPM model was a better fit to the data on each of these outcomes. Most of the significant conditional effects found using Process, between the CBPM domains and perceived stress, emotional exhaustion, depersonalisation were also mediated by either worry or rumination and sometimes both (e.g., stress), highlighting that negative thinking styles, such as worry and rumination, are likely to be a key risk factor for the development of stress and emotional exhaustion in social workers along with the depersonalisation of service users. This supports Kazdin (2009), who asserted that individual risk or protective factors (in our case, worry and rumination respectively) can impact multiple outcomes. This highlights how interventions e.g., MBPs or CBT, that aim to reduce feelings of stress, emotional exhaustion, and depersonalisation of service users in social work, could be more parsimonious, and effective, if they focussed on supporting social workers to regulate the extent to which they engage in worry or rumination in response to feelings of stress or burnout. This could be achieved, particularly by MBPs, through the development of each CBPM domain (i.e., mindfulness, attention regulation/decentering, acceptance, self-compassion, non-attachment and non-aversion), each of which have been identified as approach oriented coping strategies, which have the capacity to support social workers to regulate the extent to which they worry or ruminate (Maddock, 2023).

It is clear from this study that the effects of different potential psychological protective and risk factors for social worker stress and burnout, are likely to be complex. The limited literature available attempting to explain the patterns of relationships between mindfulness variables and mental health and well-being outcomes such as stress and burnout has usually identified either significant direct (e.g., Hölzel et al., 2011) or mediated (e.g., Gu et al., 2015) pathways, but not both at the same time. This study thus highlights the potentially complex direct and mediated interactions between mindfulness variables e.g., acceptance, attention regulation, stress, and different domains of burnout in social work. This is supported by the fact that most of the significant effects of each CBPM domain on stress, burnout-emotional exhaustion, burnout-depersonalisation, and burnout-personal accomplishment were found to be mediated by either worry or rumination. A number of CBPM domains e.g., acceptance and attention regulation/decentering also appeared to have a direct effect on stress and burnout-depersonalisation. These findings also support Kazdin (2009) who highlighted that outcomes, such as stress and depersonalisation, can be reduced through multiple pathways i.e., through both direct and mediated relationships.
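
To make the "mediated effect" language above concrete, here is a minimal, simulated sketch of a simple mediation analysis (predictor -> mediator -> outcome, with a bootstrapped indirect effect). The variable names and data are assumptions for illustration; they are not the study's models, measures, or data.

```python
# Minimal sketch of a simple mediation analysis of the kind described above
# (predictor -> mediator -> outcome), not the authors' models. The variable
# names and the simulated data are assumptions made for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 121  # matches the study's sample size, but the data here are simulated
mindfulness = rng.normal(size=n)
rumination = -0.5 * mindfulness + rng.normal(size=n)                # "a" path
stress = 0.6 * rumination - 0.1 * mindfulness + rng.normal(size=n)  # "b" and "c'" paths

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate of the indirect (mediated) effect."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # m -> y, controlling for x
    return a * b

point_estimate = indirect_effect(mindfulness, rumination, stress)

# Percentile bootstrap for the indirect effect, as in conditional process analysis.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(mindfulness[idx], rumination[idx], stress[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point_estimate:.3f}, 95% bootstrap CI = [{ci_low:.3f}, {ci_high:.3f}]")
```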

Wednesday, June 12, 2024

The health care workforce crisis is already here

Caitlin Owens
axios.com
Originally posted 7 June 24

Demoralized doctors and nurses are leaving the field, hospitals are sounding the alarm about workforce shortages and employees are increasingly unionizing and even going on strike in high-profile disputes with their employers.

Why it matters: Dire forecasts of health care worker shortages often look to a decade or more from now, but the pandemic — and its ongoing fallout — has already ushered in a volatile era of dissatisfied workers and understaffed health care facilities.
  • Some workers and experts say understaffing is, in some cases, the result of intentional cost cutting. Regardless, patients' access to care and the quality of that care are at risk.
  • "There are 83 million Americans today who don't have access to primary care," said Jesse Ehrenfeld, president of the American Medical Association. "The problem is here. It's acute in rural parts of the country, it's acute in underserved communities."
The big picture: Complaints about understaffing, administrative burdens and inadequate wages aren't new, but they are getting much louder — and more health workers are leaving their jobs or cutting back their hours.


Here are some thoughts:

The news of the healthcare workforce crisis being "already here" is deeply concerning.  It's not just about future projections; it's about the impact on patient care, provider well-being, and the ethical obligations we all share.

Providers will likely walk an ethical tightrope, with negative consequences. Imagine a doctor facing a packed waiting room, knowing some patients won't receive the time and attention they deserve. This is the reality for many providers stretched thin by staffing shortages, and it forces a difficult question: how do you deliver quality care amidst overwhelming pressure? Burnout, compassion fatigue, and even medical errors become more likely. This is the likely starting point for moral distress and/or moral injury.

The crisis isn't just a burden on healthcare providers or institutions. It's a societal challenge. Policymakers, educators, and even patients themselves can play a role.

This isn't about pointing fingers; it's about recognizing a shared responsibility.  By working together, we can ensure a healthcare system that is ethical, sustainable, and provides quality care for all.

Tuesday, June 11, 2024

Morals Versus Ethics: Building An Organizational Culture Of Trust And Transparency

Pamela Furr
Forbes.com
Originally posted 6 May 24

Here are two excerpts:

Prioritize Transparency And Integrity

Our team is a diverse mix of ages, cultures, races and backgrounds, and we all bring unique experiences and perspectives to the table. If a colleague says or does something that doesn’t sit right with you, take a moment to pause, process and then approach them. Share how you felt in the moment—this can be as simple as saying, “My feelings were hurt when you did that” or “I didn’t think the language you used earlier was appropriate.” Give them the opportunity to explain or apologize before gossiping with coworkers or silently holding onto resentments. Trust each other to have open, honest conversations, and you can often defuse conflicts before they escalate.

(cut)

Build A Sense Of Community

Set the tone for open dialogue and mutual respect in your organization. By modeling these values in your interactions with others, you can inspire your team to uphold the same standards. Foster a culture in which you advocate for yourself and others and try to learn from others as well. Approach things you don’t understand with a spirit of curiosity and compassion, assuming positive intent until proven otherwise. Ask questions, and truly seek to understand someone else’s point of view.

I believe that an essential part of being a leader is ensuring that our employees feel safe, protected and heard when they come to work. We can work to hold external governing boards accountable to the standards they set, but we can also do everything in our power to create a culture of trust, transparency and accountability within our own organizations.


Here is my summary:

The article discusses the difference between morals and ethics. Morals are personal beliefs and values that guide our actions, while ethics are a set of rules established by a community or governing body.

The author describes a situation where a trainee made a false sexual harassment claim against her mentor. The certifying board refused to take any action because they saw it as an employment contract issue. The author argues that governing boards should take a stronger stance in upholding ethics within their professions.

The article concludes with the author's thoughts on creating an ethical and transparent workplace culture. The author emphasizes the importance of open communication, understanding policies and procedures, and building a sense of community. By following these principles, organizations can create a safe and supportive environment for their employees.

Monday, June 10, 2024

Attributions toward artificial agents in a modified Moral Turing Test

Aharoni, E., Fernandes, S., Brady, D.J. et al.
Sci Rep 14, 8458 (2024).

Abstract

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (Exp Theor Artif Intell 352:24–28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
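
As a rough illustration of what "performed significantly above chance levels" means, a simple one-sample binomial test against a 50 percent guessing rate is sketched below. This is not the authors' analysis, and the counts are invented placeholders.

```python
# Illustrative sketch only: testing human-vs-AI source identification
# against the 50% accuracy expected from guessing. The counts below are
# invented placeholders, not data from Aharoni et al. (2024).
from scipy.stats import binomtest

correct = 130   # hypothetical number of correct identifications
total = 200     # hypothetical number of identification trials

result = binomtest(correct, total, p=0.5, alternative="greater")
print(f"accuracy = {correct / total:.1%}")
print(f"one-sided p-value vs. chance = {result.pvalue:.4f}")
```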

Here is my summary:

The researchers conducted a modified Moral Turing Test (m-MTT) to investigate if people view moral evaluations by advanced AI systems similarly to those by humans. They had participants rate the quality of moral reasoning from the AI language model GPT-4 and from humans, while initially blinded to the source.

Key Findings
  • Remarkably, participants rated GPT-4's moral reasoning as superior in quality to humans' across dimensions like virtuousness, intelligence, and trustworthiness. This is consistent with passing the "comparative MTT" proposed previously.
  • When later asked to identify if the moral evaluations came from a human or computer, participants performed above chance levels.
  • However, GPT-4 did not definitively "pass" this test, potentially because its perceived superiority made it identifiable as AI.

Sunday, June 9, 2024

Artificial Intelligence Feedback on Physician Notes Improves Patient Care

NYU Langone Health
Research, Innovation
Originally posted 17 APR 24

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients’ future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors’ clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors’ notes achieved the “5 Cs”: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients’ future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.
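
Purely as a hedged sketch of what that kind of integration might look like (this is not NYU Langone's pipeline; the model choice, prompt wording, and rubric phrasing are assumptions), a chat model can be asked for a narrative critique of a draft note:

```python
# Hypothetical sketch of asking a chat model to critique a clinical note
# against a "5 Cs"-style rubric. NOT NYU Langone's system; the model name,
# prompts, and rubric phrasing are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

RUBRIC = (
    "Review the clinical note for the 5 Cs: completeness, conciseness, "
    "contingency planning, correctness, and clinical assessment. "
    "Return a short narrative of specific issues and suggested fixes."
)

def critique_note(note_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": note_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with a toy note (no real patient data):
print(critique_note("58 y/o with chest pain. Plan: follow up."))
```

In practice, any such system would require a compliant deployment and careful handling of protected health information; the sketch is meant only to show the shape of a grade-plus-narrative feedback loop.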


The article is linked above.  Here is the abstract:

Abstract

Electronic health records have become an integral part of modern health care, but their implementation has led to unintended consequences, such as poor note quality. This case study explores how NYU Langone Health leveraged artificial intelligence (AI) to address this challenge and improve the content and quality of medical documentation. By quickly and accurately analyzing large volumes of clinical documentation and providing feedback to organizational leadership and individually to providers, AI can help support a culture of continuous note quality improvement, allowing organizations to enhance a critical component of patient care.

Saturday, June 8, 2024

A Doctor at Cigna Said Her Bosses Pressured Her to Review Patients’ Cases Too Quickly

P. Rucker and D. Armstrong
Propublica.org
Originally posted 29 APR 24

Here is an excerpt:

As ProPublica and The Capitol Forum reported last year, Cigna built a computer program that allowed its medical directors to deny certain claims in bulk. The insurer’s doctors spent an average of just 1.2 seconds on each of those cases. Cigna at the time said the review system was created to speed up approval of claims for certain routine screenings; the company later posted a rebuttal to the story. A congressional committee and the Department of Labor launched inquiries into this Cigna program. A spokesperson for Rep. Cathy McMorris Rodgers, the chair of the congressional committee, said Rodgers continues to monitor the situation after Cigna shared some details about its process. The Labor Department is still examining such practices.

One figure on Cigna’s January and February 2022 dashboards was like a productivity score; the news organizations found that this number reflects the pace at which a medical director clears cases.

Cigna said it was incorrect to call that figure on its dashboard a productivity score and said its “view on productivity is defined by a range of factors beyond elements included in a single spreadsheet.” In addition, the company told the news organizations, “The copy of the dashboard that you have is inaccurate and secondary calculations made using its contents may also be inaccurate.” The news organizations asked what was inaccurate, but the company wouldn’t elaborate.

Nevertheless, Cigna said that because the dashboard created “inadvertent confusion” the company was “reassessing its use.”


Here is my summary:

The article reports on Dr. Debby Day, who alleges that Cigna, her employer, pressured her to prioritize speed over thoroughness when reviewing patients' requests for healthcare coverage.

According to Day, managers emphasized meeting quotas and processing claims quickly, even if it meant superficially reviewing cases. Dr. Day said Cigna expected medical directors to review cases in as little as 4 minutes, which she felt was too rushed to properly evaluate them.  The pressure to deny claims quickly was nicknamed "click and close" by some employees.

Day felt this practice compromised patient care and refused to expedite reviews at the expense of quality. The article suggests this may have led to threats of termination from Cigna.

Friday, June 7, 2024

Large Language Models as Moral Experts? GPT-4o Outperforms Expert Ethicist in Providing Moral Guidance

Dillion, D., Mondal, D., Tandon, N.,
& Gray, K. (2024, May 29).

Abstract

AI has demonstrated expertise across various fields, but its potential as a moral expert remains unclear. Recent work suggests that Large Language Models (LLMs) can reflect moral judgments with high accuracy. But as LLMs are increasingly used in complex decision-making roles, true moral expertise requires not just aligned judgments but also clear and trustworthy moral reasoning. Here, we advance work on the Moral Turing Test and find that advice from GPT-4o is rated as more moral, trustworthy, thoughtful, and correct than that of the popular The New York Times advice column, The Ethicist. GPT models outperformed both a representative sample of Americans and a renowned ethicist in providing moral explanations and advice, suggesting that LLMs have, in some respects, achieved a level of moral expertise. The present work highlights the importance of carefully programming ethical guidelines in LLMs, considering their potential to sway users' moral reasoning. More promisingly, it suggests that LLMs could complement human expertise in moral guidance and decision-making.


Here are my thoughts:

This research on GPT-4o's moral reasoning is fascinating, but caution is warranted. While exceeding human performance in explanations and perceived trustworthiness is impressive, true moral expertise goes beyond these initial results.

Here's why:

First, there are nuances to all moral dilemmas. Real-world dilemmas often lack clear-cut answers. Can GPT-4o navigate the gray areas and complexities of human experience?

Next, everyone brings rich experiences, values, perspectives, and biases to moral questions. What ethical framework guides GPT-4o's decisions? Transparency in its programming is crucial.

Finally, the consequences of AI-driven moral advice can be far-reaching. Careful evaluation of potential biases and unintended outcomes is essential. There is no objective algorithm, and there is no objective morality; every moral decision, no matter how well-reasoned, involves trade-offs. AI is therefore best used as a starting point for decision-making and planning, not as the final word.