Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, January 20, 2025

The Human Core of AI: Navigating Our Anxieties and Ethics in the Digital Age

Jesse Hirsh
medium.com
Originally posted 25 FEB 24


Artificial Intelligence (AI) serves as a profound mirror reflecting not just our technological ambitions, but the complex tapestry of human anxieties, ethical dilemmas, and societal challenges. As we navigate the burgeoning landscape of AI, the discourse surrounding it often reveals more about us as a society and as individuals than it does about the technology itself. This is fundamentally about the human condition, our fears, our hopes, and our ethical compass.

AI as a Reflection of Human Anxieties

When we talk about controlling AI, at its core, this discussion encapsulates our fears of losing control — not over machines, but over humans. The control over AI becomes a metaphor for our collective anxiety about unchecked power, the erosion of privacy, and the potential for new forms of exploitation. It’s an echo of our deeper concerns about how power is distributed and exercised in society.

Guardrails for AI as Guardrails for Humanity

The debate on implementing guardrails for AI is indeed a debate on setting boundaries for human behavior. It’s about creating a framework that ensures AI technologies are used ethically, responsibly, and for the greater good. These conversations underscore a pressing need to manage not just how machines operate, but how people use these tools — in ways that align with societal values and norms. Or perhaps guardrails are the wrong approach, as they limit what humans can do, not what machines can do.


Here are some thoughts:

The essay explores the relationship between Artificial Intelligence (AI) and humanity, arguing that AI reflects human anxieties, ethics, and societal challenges. It emphasizes that the discourse surrounding AI is more about human concerns than the technology itself. The author highlights the need to focus on human ethics, trust, and responsibility when developing and using AI, rather than viewing AI as a separate entity or threat.

This essay is important for psychologists for several reasons. Firstly, understanding anxieties related to AI and technology is crucial when working with clients who may be experiencing them. Secondly, the essay's emphasis on human ethics and responsibility in developing and using AI is essential for psychologists to consider when using AI-powered tools in their practice.

Furthermore, the text's focus on trust and human connection in the context of AI is critical for psychologists to understand when building therapeutic relationships with clients who may be impacted by AI-related issues. By recognizing the interconnectedness of human trust and AI, psychologists can foster deeper and more meaningful relationships with their clients.

Lastly, the author's suggestion to use AI as a tool to reconnect with humanity resonates with psychologists' goals of promoting emotional connection, empathy, and understanding in their clients. By leveraging AI in a way that promotes human connection, clinical psychologists can help their clients develop more authentic and meaningful relationships with others.

Sunday, January 19, 2025

Institutional Betrayal in Inpatient Psychiatry: Effects on Trust and Engagement With Care

Lewis, A., Lee, H. S., Zabelski, S., & Shields, M. C. (2024). Psychiatric Services.

Abstract

Objective:

Patients’ experiences of inpatient psychiatry have received limited empirical scrutiny. The authors examined patients’ likelihood of experiencing institutional betrayal (harmful actions or inactions toward patients) at facilities with for-profit, nonprofit, or government ownership; patient-level characteristics associated with experiencing institutional betrayal; associations between betrayal and patients’ trust in mental health providers; and associations between betrayal and patients’ willingness to engage in care postdischarge.

Methods:

Former psychiatric inpatients (N=814 adults) responded to an online survey. Data were collected on patients’ demographic characteristics; experiences of institutional betrayal; and the impact of psychiatric hospitalization on patients’ trust in providers, willingness to engage in care, and attendance at 30-day follow-up visits. Participants’ responses were linked to secondary data on facility ownership type.

Results:

Experiencing institutional betrayal was associated with less trust in mental health providers (25-percentage-point increase in reporting less trust, 95% CI=17–32), reduced willingness (by 45 percentage points, 95% CI=39–52) to voluntarily undergo hospitalization, reduced willingness (by 30 percentage points, 95% CI=23–37) to report distressing thoughts to mental health providers, and lower probability of reporting attendance at a 30-day follow-up visit (11-percentage-point decrease, 95% CI=5–18). Participants treated at a for-profit facility were significantly more likely (by 14 percentage points) to report experiencing institutional betrayal than were those treated at a nonprofit facility (p=0.01).

Conclusions:

Institutional betrayal is one mechanism through which inpatient psychiatric facilities may cause iatrogenic harm, and the potential for betrayal was larger at for-profit facilities. Further research is needed to identify the determinants of institutional betrayal and strategies to support improvement in care quality.


Here are some thoughts:

The study found that many patients experienced institutional betrayal, defined as harmful actions or inactions toward patients by the facilities they depend on for care.

Key findings of the study include:
  1. Patients who experienced institutional betrayal during their inpatient psychiatric stay reported decreased trust in healthcare providers and organizations.
  2. Institutional betrayal was associated with reduced engagement with care following discharge from inpatient psychiatry.
  3. The period following discharge from inpatient psychiatry is characterized by elevated suicide risk, unplanned readmissions, and lack of outpatient follow-up care.
  4. The study highlights the importance of addressing institutional betrayal in psychiatric care settings to improve patient outcomes and trust in the healthcare system.
These findings suggest that institutional betrayal in inpatient psychiatric care can have significant negative effects on patients' trust in healthcare providers and their willingness to engage with follow-up care. Addressing these issues may be crucial for improving patient outcomes and reducing risks associated with the post-discharge period.

Saturday, January 18, 2025

Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Ayers, J. W., et al. (2023).
JAMA internal medicine, 183(6), 589–596.

Abstract

Importance
The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective
To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants
In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 where a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, and very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

Results
Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for chatbot than physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions
In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.

Here are some thoughts:

This study compared physician and chatbot responses to patient questions posted on a public social media forum. Evaluators preferred the chatbot responses and rated them significantly higher for both quality and empathy. This research matters for psychologists because chatbots may eventually field routine questions about a practice, such as informed consent, accepted insurance, and the types of services offered. AI agents may be able to help psychologists streamline these kinds of administrative tasks.

Friday, January 17, 2025

Men's Suicidal thoughts and behaviors and conformity to masculine norms: A person-centered, latent profile approach

Eggenberger, L., et al. (2024).
Heliyon, 10(20), e39094.

Abstract

Background

Men are up to four times more likely to die by suicide than women. At the same time, men are less likely to disclose suicidal ideation and transition more rapidly from ideation to attempt. Recently, socialized gender norms and particularly conformity to masculine norms (CMN) have been discussed as driving factors for men's increased risk for suicidal thoughts and behaviors (STBs). This study aims to examine the individual interplay between CMN dimensions and their association with depression symptoms, help-seeking, and STBs.

Methods

Using data from an anonymous online survey of 488 cisgender men, latent profile analysis was performed to identify CMN subgroups. Multigroup comparisons and hierarchical regression analyses were used to estimate differences in sociodemographic characteristics, depression symptoms, psychotherapy use, and STBs.

Results

Three latent CMN subgroups were identified: Egalitarians (58.6%; characterized by overall low CMN), Players (16.0%; characterized by patriarchal beliefs, endorsement of sexual promiscuity, and heterosexual self-presentation), and Stoics (25.4%; characterized by restrictive emotionality, self-reliance, and engagement in risky behavior). Stoics showed a 2.32 times higher risk for a lifetime suicide attempt, younger age, stronger somatization of depression symptoms, and stronger unbearability beliefs.

Conclusion

The interplay between the CMN dimensions restrictive emotionality, self-reliance, and willingness to engage in risky behavior, paired with suicidal beliefs about the unbearability of emotional pain, may create a suicidogenic psychosocial system. Acknowledging this high-risk subgroup of men conforming to restrictive masculine norms may aid the development of tailored intervention programs, ultimately mitigating the risk for a suicide attempt.

Here are some thoughts:

Overall, the study underscores the critical role of social norms in shaping men's mental health and suicide risk. It provides valuable insights for developing targeted interventions and promoting healthier expressions of masculinity to prevent suicide in men.

This research article investigates the link between conformity to masculine norms (CMN) and suicidal thoughts and behaviors (STBs) in cisgender men. Using data from an online survey, the study employs latent profile analysis to identify distinct CMN subgroups, revealing three profiles: Egalitarians (low CMN), Players (patriarchal beliefs and promiscuity), and Stoics (restrictive emotionality, self-reliance, and risk-taking). Stoics demonstrated a significantly higher risk of lifetime suicide attempts, attributable to their CMN profile combined with beliefs about the unbearability of emotional pain. The study concludes that understanding CMN dimensions is crucial for developing targeted suicide prevention strategies for men.
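For readers unfamiliar with latent profile analysis, the gist can be illustrated with a Gaussian mixture model fit to standardized conformity-to-masculine-norms subscale scores, with the number of profiles chosen by BIC. This is a generic sketch with synthetic data and hypothetical subscale names, not the authors' analysis code.

```python
# Generic sketch of latent profile analysis via a Gaussian mixture model:
# fit models with 1-5 profiles to standardized CMN subscale scores and pick
# the number of profiles by BIC. Data and subscale names here are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
subscales = ["restrictive_emotionality", "self_reliance", "risk_taking",
             "playboy", "power_over_women"]  # hypothetical subscale labels
X = StandardScaler().fit_transform(rng.normal(size=(488, len(subscales))))

models = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))   # lowest BIC wins
profiles = models[best_k].predict(X)                   # profile membership per respondent

print("profiles chosen by BIC:", best_k)
print("profile sizes:", np.bincount(profiles))
print("profile means (z-scores):")
print(np.round(models[best_k].means_, 2))
```

With real subscale data, distinct profiles such as the Stoics group would show up as clusters of subscale means (e.g., high restrictive emotionality and self-reliance); with the synthetic noise above, BIC will typically settle on a single profile, which is the expected behavior of the mechanics being illustrated.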

Thursday, January 16, 2025

Faculty Must Protect Their Labor from AI Replacement

John Warner
Inside Higher Ed
Originally posted 11 Dec 24

Here is an excerpt:

A PR release from the UCLA Newsroom about a comparative lit class that is using a “UCLA-developed AI system” to substitute for labor that was previously done by faculty or teaching assistants lays out the whole deal. The course textbook has been generated from the professor’s previous course materials. Students will interact with the AI-driven courseware. A professor and teaching assistants will remain, for now, but for how long?

The professor argues—I would say rationalizes—that this is good for students because “Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.”

(Note: Whenever I see someone touting the benefit of an AI-driven practice as good pedagogy, I wonder what is stopping them from doing it without the AI component, and the answer is usually nothing.)

An additional apparent benefit is “that the platform can help professors ensure consistent delivery of course material. Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching—and offer students a very similar experience.”


This article argues that the survival of college faculty in an AI-driven world depends on recognizing themselves as laborers and resisting trends that devalue their work. The rise of adjunctification—prioritizing cheaper, non-tenured faculty over tenured ones—offers a cautionary tale. Similarly, the adoption of generative AI in teaching risks diminishing the human role in education. Examples like UCLA’s AI-powered courseware illustrate how faculty labor becomes interchangeable, paving the way for automation and eroding the value of teaching. Faculty must push back against policies, such as shifts in copyright, that enable these trends, emphasizing the irreplaceable value of their labor and resisting practices that jeopardize the future of academic teaching and learning.

Wednesday, January 15, 2025

AI Licensing for Authors: Who Owns the Rights and What’s a Fair Split?

The Authors Guild. (2024, December 13). 
The Authors Guild. 
Originally published 12 Dec 24

The Authors Guild believes it is crucial that authors, not publishers or tech companies, have control over the licensing of AI rights. Authors must be able to choose whether they want to allow their works to be used by AI and under what terms.

AI Training Is Not Covered Under Standard Publishing Agreements

A trade publishing agreement grants just that: a license to publish. AI training is not publishing, and a publishing contract does not in any way grant that right. AI training is not a new book format, it is not a new market, it is not a new distribution mechanism. Licensing for AI training is a right entirely unrelated to publishing, and is not a right that can simply be tacked onto a subsidiary-rights clause. It is a right reserved by authors, a right that must be negotiated individually for each publishing contract, and only if the author chooses to license that right at all.

Subsidiary Rights Do Not Include AI Rights

The contractual rights that authors do grant to publishers include the right to publish the book in print, electronic, and often audio formats (though many older contracts do not provide for electronic or audio rights). They also grant the publisher “subsidiary rights” authorizing it to license the book or excerpts to third parties in readable formats, such as foreign language editions, serials, abridgements or condensations, and readable digital or electronic editions. AI training rights to date have not been included as a subsidiary right in any contract we have been made aware of. Subsidiary rights have a range of “splits”—percentages of revenues that the publisher keeps and pays to the author. For certain subsidiary rights, such as “other digital” or “other electronic” rights (which some publishers have, we believe erroneously, argued gives them AI training rights), the publisher is typically required to consult with the author or get their approval before granting any subsidiary licenses.


Here are some thoughts:

The Authors Guild emphasizes that authors, not publishers or tech companies, should control AI licensing for their works. Standard publishing contracts don’t cover AI training, as it’s unrelated to traditional publishing rights. Authors retain copyright for AI uses and must negotiate these rights separately, ensuring they can approve or reject licensing deals. Publishers, if involved, should be fairly compensated based on their role, but authors should receive the majority—75-85%—of AI licensing revenues. The Guild also continues legal action against companies for past AI-related copyright violations, advocating for fair practices and author autonomy in this emerging market.

Tuesday, January 14, 2025

Agentic LLMs for Patient-Friendly Medical Reports

Sudarshan, M., Shih, S., et al. (2024).
arXiv.org

Abstract

The application of Large Language Models (LLMs) in healthcare is expanding rapidly, with one potential use case being the translation of formal medical reports into patient-legible equivalents. Currently, LLM outputs often need to be edited and evaluated by a human to ensure both factual accuracy and comprehensibility, and this is true for the above use case. We aim to minimize this step by proposing an agentic workflow with the Reflexion framework, which uses iterative self-reflection to correct outputs from an LLM. This pipeline was tested and compared to zero-shot prompting on 16 randomized radiology reports. In our multi-agent approach, reports had an accuracy rate of 94.94% when looking at verification of ICD-10 codes, compared to zero-shot prompted reports, which had an accuracy rate of 68.23%. Additionally, 81.25% of the final reflected reports required no corrections for accuracy or readability, while only 25% of zero-shot prompted reports met these criteria without needing modifications. These results indicate that our approach presents a feasible method for communicating clinical findings to patients in a quick, efficient and coherent manner whilst also retaining medical accuracy. The codebase is available for viewing at http://github.com/malavikhasudarshan/Multi-Agent-Patient-Letter-Generation.


Here are some thoughts:

The article focuses on using Large Language Models (LLMs) in healthcare to create patient-friendly versions of medical reports, specifically in the field of radiology. The authors present a new multi-agent workflow that aims to improve the accuracy and readability of these reports compared to traditional methods like zero-shot prompting. This workflow involves multiple steps: extracting ICD-10 codes from the original report, generating multiple patient-friendly reports, and using a reflection model to select the optimal version.

The study highlights the success of this multi-agent approach, demonstrating that it leads to higher accuracy in terms of including correct ICD-10 codes and produces reports that are more concise, structured, and formal compared to zero-shot prompting. The authors acknowledge that while their system significantly reduces the need for human review and editing, it doesn't completely eliminate it. The article emphasizes the importance of clear and accessible medical information for patients, especially as they increasingly gain access to their own records. The goal is to reduce patient anxiety and confusion, ultimately enhancing their understanding of their health conditions.
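The overall shape of such a pipeline can be sketched in a few functions: extract the ICD-10 codes from the original report, draft several lay-language versions, and have a reflection step pick the draft that best preserves the coded findings. This is a minimal, hypothetical sketch (the `llm()` helper stands in for any chat-completion client), not the authors' released code, which is available in the repository linked in the abstract.

```python
# Minimal sketch of the multi-agent workflow described above: extract ICD-10
# codes, draft several patient-friendly versions, then use a "reflection"
# step to pick the draft that preserves the coded findings.
# The llm() helper is hypothetical -- substitute any chat-completion client.
import re
from typing import List

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call."""
    raise NotImplementedError

def extract_icd10_codes(report: str) -> List[str]:
    """Ask the model for ICD-10 codes, then keep only strings shaped like codes (e.g., J18.9)."""
    raw = llm(f"List the ICD-10 codes implied by this radiology report:\n{report}")
    return re.findall(r"[A-TV-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?", raw)

def draft_patient_letters(report: str, n: int = 3) -> List[str]:
    """Generate several candidate lay-language versions of the report."""
    prompt = ("Rewrite this radiology report in plain language a patient can "
              f"understand, at roughly an 8th-grade reading level:\n{report}")
    return [llm(prompt) for _ in range(n)]

def reflect_and_select(report: str, drafts: List[str], codes: List[str]) -> str:
    """Score each draft by how many extracted codes it faithfully conveys; return the best."""
    def score(draft: str) -> int:
        covered = 0
        for code in codes:
            answer = llm(f"Original report:\n{report}\n\nPatient letter:\n{draft}\n\n"
                         f"Does the letter faithfully convey the finding coded {code}? Answer yes or no.")
            covered += answer.strip().lower().startswith("yes")
        return covered
    return max(drafts, key=score)
```

In the paper's actual pipeline the reflection step also weighs readability, and the selected letter may still be routed to a clinician for a final check; the sketch only illustrates the extract-draft-reflect structure.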

Monday, January 13, 2025

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024).
Mass Communication & Society, 1–25.
https://doi.org/10.1080/15205436.2024.2382776

Abstract

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design where U.S. participants rated the accuracy of true and false news, we manipulated the proportions of false news headlines participants were exposed to (17%, 33%, 50%, 66%, and 83%). We found that exposure to higher proportions of false news decreased trust in the news but did not affect participants’ perceived accuracy of news headlines. While higher proportions of false news had no effect on participants’ overall ability to discern between true and false news, they made participants more overconfident in their discernment ability. Therefore, exposure to false news may have deleterious effects not by increasing belief in falsehoods, but by fueling overconfidence and eroding trust in the news. Although we are only able to shed light on one causal pathway, from news environment to attitudes, this can help us better understand the effects of external or supply-side changes in news quality.


Here are some thoughts:

The study investigates the impact of increased exposure to false news on individuals' trust in media, their ability to discern truth from falsehood, and their confidence in their evaluation skills. The research involved two online experiments with a total of 2,735 participants, who rated the accuracy of news headlines after being exposed to varying proportions of false content. The findings reveal that higher rates of misinformation significantly decrease general media trust, independent of individual factors such as ideology or cognitive reflectiveness. This decline in trust may lead individuals to turn away from credible news sources in favor of less reliable alternatives, even when their ability to evaluate individual news items remains intact.

Interestingly, while participants displayed overconfidence in their evaluations after exposure to predominantly false content, their actual accuracy judgments did not significantly vary with the proportion of true and false news. This suggests that personal traits like discernment skills play a more substantial role than environmental cues in determining how individuals assess news accuracy. The study also highlights a disconnection between changes in media trust and evaluations of specific news items, indicating that attitudes toward media are often more malleable than actual behavior.
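A small worked example may help keep discernment and overconfidence apart. The operationalizations below are a common convention in this literature (true-headline accuracy minus false-headline acceptance for discernment; self-estimated minus actual performance for overconfidence), not necessarily the exact measures used by Altay and colleagues; the data are made up.

```python
# Toy illustration of discernment vs. overconfidence with made-up ratings.
ratings = [  # (headline_is_true, rated_accurate) for 6 hypothetical headlines
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]
self_estimate_correct = 6  # participant believes they classified all 6 correctly

n_true = sum(1 for is_true, _ in ratings if is_true)
n_false = len(ratings) - n_true
true_hits = sum(rated for is_true, rated in ratings if is_true) / n_true       # 2/3
false_alarms = sum(rated for is_true, rated in ratings if not is_true) / n_false  # 1/3
discernment = true_hits - false_alarms                                          # 0.33

actual_correct = sum(is_true == rated for is_true, rated in ratings)            # 4 of 6
overconfidence = self_estimate_correct - actual_correct                         # 2

print(f"discernment={discernment:.2f}, overconfidence={overconfidence}")
```

On these measures, the study's pattern corresponds to discernment staying roughly flat across false-news proportions while the gap between self-estimated and actual performance grows.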

The research underscores the importance of understanding the psychological mechanisms at play when individuals encounter misinformation. It points out that interventions aimed at improving news discernment should consider the potential for increased skepticism rather than enhanced accuracy. Moreover, the findings suggest that exposure to high levels of false news can lead to overconfidence in one's ability to judge news quality, which may result in the rejection of accurate information.

Overall, the study provides credible evidence that exposure to predominantly false news can have harmful effects by eroding trust in media institutions and fostering overconfidence in personal judgment abilities. These insights are crucial for developing effective strategies to combat misinformation and promote healthy media consumption habits among the public.

Sunday, January 12, 2025

Large language models can outperform humans in social situational judgments

Mittelstädt, J. M.,  et al. (2024).
Scientific Reports, 14(1).

Abstract

Large language models (LLM) have been a catalyst for the public interest in artificial intelligence (AI). These technologies perform some knowledge-based tasks better and faster than human beings. However, whether AIs can correctly assess social situations and devise socially appropriate behavior, is still unclear. We conducted an established Situational Judgment Test (SJT) with five different chatbots and compared their results with responses of human participants (N = 276). Claude, Copilot and you.com’s smart assistant performed significantly better than humans in proposing suitable behaviors in social situations. Moreover, their effectiveness rating of different behavior options aligned well with expert ratings. These results indicate that LLMs are capable of producing adept social judgments. While this constitutes an important requirement for the use as virtual social assistants, challenges and risks are still associated with their wide-spread use in social contexts.

Here are some thoughts:

This research assesses the social judgment capabilities of large language models (LLMs) by administering a Situational Judgment Test (SJT), a standardized measure that asks respondents to judge appropriate behavior in challenging social and workplace scenarios, to five popular chatbots and comparing their performance to a human control group. The study found that several LLMs significantly outperformed humans in identifying appropriate behaviors in complex social scenarios. While LLMs demonstrated high consistency in their responses and agreement with expert ratings, the study notes limitations, including potential biases and the need for further investigation into real-world application and the underlying mechanisms of their social judgment. The results suggest LLMs possess considerable potential as social assistants, but also highlight ethical considerations surrounding their use.

Saturday, January 11, 2025

LLM-based agentic systems in medicine and healthcare

Qiu, J., Lam, K., Li, G. et al.
Nat Mach Intell (2024).

Large language model-based agentic systems can process input information, plan and decide, recall and reflect, interact and collaborate, leverage various tools and act. This opens up a wealth of opportunities within medicine and healthcare, ranging from clinical workflow automation to multi-agent-aided diagnosis.

Large language models (LLMs) exhibit generalist intelligence in following instructions and providing information. In medicine, they have been employed in tasks from writing discharge summaries to clinical note-taking. LLMs are typically created via a three-stage process: first, pre-training using vast web-scale data to obtain a base model; second, fine-tuning the base model using high-quality question-and-answer data to generate a conversational assistant model; and third, reinforcement learning from human feedback to align the assistant model with human values and improve responses. LLMs are essentially text-completion models that provide responses by predicting words following the prompt. Although this next-word prediction mechanism allows LLMs to respond rapidly, it does not guarantee depth or accuracy of their outputs. LLMs are currently limited by the recency, validity and breadth of their training data, and their outputs are dependent on prompt quality. They also lack persistent memory, owing to their intrinsically limited context window, which leads to difficulties in maintaining continuity across longer interactions or across sessions; this, in turn, leads to challenges in providing personalized responses based on past interactions. Furthermore, LLMs are inherently unimodal. These limitations restrict their applications in medicine and healthcare, which often require problem-solving skills beyond linguistic proficiency alone.


Here are some thoughts:

Large language model (LLM)-based agentic systems are emerging as powerful tools in medicine and healthcare, offering capabilities that go beyond simple text generation. These systems can process information, make decisions, and interact with various tools, leading to advancements in clinical workflows and diagnostics. LLM agents are created through a three-stage process involving pre-training, fine-tuning, and reinforcement learning. They overcome limitations of standalone LLMs by incorporating external modules for perception, memory, and action, enabling them to handle complex tasks and collaborate with other agents. Four key opportunities for LLM agents in healthcare include clinical workflow automation, trustworthy medical AI, multi-agent-aided diagnosis, and health digital twins. Despite their potential, these systems also pose challenges such as safety concerns, bias amplification, and the need for new regulatory frameworks.
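To make the "agentic" framing concrete, here is a deliberately minimal sketch of the perceive-plan-act loop with tool use and a persistent memory buffer. The tool names and the `llm()` helper are hypothetical illustrations of the general pattern, not an API from the paper.

```python
# Minimal sketch of an LLM agent loop: the model sees the task plus its memory,
# decides whether to call a tool or give a final answer, and each observation
# is appended to memory so later steps keep context across the interaction.
# llm() and the tools are hypothetical stand-ins.
from typing import Callable, Dict

def llm(prompt: str) -> str:
    raise NotImplementedError  # substitute any chat-completion client

TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_record": lambda patient_id: f"(stub) record for {patient_id}",
    "schedule_followup": lambda when: f"(stub) follow-up booked for {when}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    memory: list[str] = []  # persistent scratchpad across steps
    for _ in range(max_steps):
        prompt = (
            "You are a clinical workflow assistant.\n"
            f"Task: {task}\nMemory so far:\n" + "\n".join(memory) +
            "\nReply with either 'CALL <tool> <argument>' or 'FINAL <answer>'."
        )
        decision = llm(prompt).strip()
        if decision.startswith("FINAL"):
            return decision.removeprefix("FINAL").strip()
        # Expect e.g. "CALL lookup_record 12345"; a production agent would validate this.
        _, tool, arg = decision.split(maxsplit=2)
        observation = TOOLS[tool](arg)
        memory.append(f"{tool}({arg}) -> {observation}")
    return "Step limit reached without a final answer."
```

The external memory list and tool registry are what distinguish this loop from a bare text-completion call: they give the system the persistence and the ability to act that the paper argues standalone LLMs lack.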

This development is important to psychologists for several reasons. First, LLM agents could revolutionize mental health care by providing personalized, round-the-clock support to patients, potentially improving treatment outcomes and accessibility. Second, these systems could assist psychologists in analyzing complex patient data, leading to more accurate diagnoses and tailored treatment plans. Third, LLM agents could automate administrative tasks, allowing psychologists to focus more on direct patient care. Fourth, the multi-agent collaboration feature could facilitate interdisciplinary approaches in mental health, bringing together insights from various specialties. Finally, the ethical implications and potential biases of these systems present new areas of study for psychologists, particularly in understanding how AI-human interactions may impact mental health and therapeutic relationships.

Friday, January 10, 2025

The Danger Of Superhuman AI Is Not What You Think

Shannon Vallor
Noema Magazine
Originally posted 23 May 24

Today’s generative AI systems like ChatGPT and Gemini are routinely described as heralding the imminent arrival of “superhuman” artificial intelligence. Far from a harmless bit of marketing spin, the headlines and quotes trumpeting our triumph or doom in an era of superhuman AI are the refrain of a fast-growing, dangerous and powerful ideology. Whether used to get us to embrace AI with unquestioning enthusiasm or to paint a picture of AI as a terrifying specter before which we must tremble, the underlying ideology of “superhuman” AI fosters the growing devaluation of human agency and autonomy and collapses the distinction between our conscious minds and the mechanical tools we’ve built to mirror them.

Today’s powerful AI systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love. Nor do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an AI tool is dark inside.


Here are some thoughts:

This essay critiques the prevalent notion of superhuman AI, arguing that this rhetoric diminishes the unique qualities of human intelligence. The author challenges the idea that surpassing humans in task completion equates to superior intelligence, emphasizing the irreplaceable aspects of human consciousness, emotion, and creativity. The essay contrasts the narrow definition of intelligence used by some AI researchers with a broader understanding that encompasses human experience and values. Ultimately, the author proposes a future where AI complements rather than replaces human capabilities, fostering a more humane and sustainable society.

Thursday, January 9, 2025

Moral resilience and moral injury of nurse leaders during crisis situations: A qualitative descriptive analysis

Bergman, A., Nelson, K., et al. (2024).
Nursing Management, 55(12), 16–26.

Nurse leaders are a heterogeneous group encompassing a variety of roles, settings, and specialties. What ties these diverse professionals together is a common code of ethics. Nurse leaders apply the provisions of their code of ethics not only to patient scenarios, but also to their interactions with nursing colleagues who rely on their leaders as advocates for ethical nursing practice. Successful nurse leaders embody principles of professionalism, utilize effective communication and interpersonal skills, have a broad familiarity with the healthcare system and its nuances and complexity, and demonstrate skillful business acumen.

Despite their extensive training, nurse leaders have long been an underappreciated and largely unseen force maintaining the health of the healthcare systems and functioning as a safety net for both patients and nursing staff. However, nurse leaders are under more scrutiny and subject to extraordinary stressors related to the COVID-19 pandemic. Some of these stressors occurred due to the ethical challenges placed on leaders navigating an unprecedented pandemic. Others reflect long-standing patterns within healthcare and nursing that were exacerbated during the pandemic.

Unresolved ethical issues combined with unrelenting stress can lead to escalating degrees of moral suffering that undermines integrity and well-being. Moral injury (MI) occurs when an individual compromises personal or professional values, violating the individual's sense of right and wrong and causing this person to question their ability to navigate ethical concerns with integrity. Conversely, moral resilience (MR) is the capacity to restore or sustain integrity in response to ethical or moral adversity. MR includes six pillars: personal integrity, relational integrity, buoyancy, self-regulation/self-awareness, moral efficacy, and self-stewardship.

This research substudy aimed to explore the experiences and scenarios that exposed nurse leaders to MI during the COVID-19 pandemic and the strategies and solutions that nurse leaders employ to bolster their MR and integrity. This research strives to amplify their stories in the hopes of developing practical solutions to organizational, professional, and individual concerns rooted in their ethical values as nurse leaders.

The article is linked above.

This qualitative study examines the moral injury (MI) and moral resilience (MR) of nurse leaders during the COVID-19 pandemic. Researchers surveyed US nurse leaders, analyzing both quantitative MI and MR scores and qualitative responses exploring their experiences. Five key themes emerged: absent nursing voice, unsustainable workload, lack of leadership support, need for leadership capacity building, and prioritization of finances over patient care. The Reina Trust & Betrayal Model framed the analysis, revealing widespread broken trust impacting all three dimensions of trust: communication, character, and capability. The study concludes with recommendations to rebuild trust and address nurse leader well-being.

Wednesday, January 8, 2025

The Unpaid Toll: Quantifying the Public Health Impact of AI

Han, Y. et al.
arXiv:2412.06288 [cs.CY]

Abstract

The surging demand for AI has led to a rapid expansion of energy-intensive data centers, impacting the environment through escalating carbon emissions and water consumption. While significant attention has been paid to AI's growing environmental footprint, the public health burden, a hidden toll of AI, has been largely overlooked. Specifically, AI's lifecycle, from chip manufacturing to data center operation, significantly degrades air quality through emissions of criteria air pollutants such as fine particulate matter, substantially impacting public health. This paper introduces a methodology to model pollutant emissions across AI's lifecycle, quantifying the public health impacts. Our findings reveal that training an AI model of the Llama3.1 scale can produce air pollutants equivalent to more than 10,000 round trips by car between Los Angeles and New York City. The total public health burden of U.S. data centers in 2030 is valued at up to more than $20 billion per year, double that of U.S. coal-based steelmaking and comparable to that of on-road emissions of California. Further, the public health costs unevenly impact economically disadvantaged communities, where the per-household health burden could be 200x more than that in less-impacted communities. We recommend adopting a standard reporting protocol for criteria air pollutants and the public health costs of AI, paying attention to all impacted communities, and implementing health-informed AI to mitigate adverse effects while promoting public health equity.

The research is linked above.

This research paper quantifies the previously overlooked public health consequences of artificial intelligence (AI), focusing on the air pollution generated throughout its lifecycle—from chip manufacturing to data center operation. The authors present a methodology for modeling pollutant emissions and their resulting health impacts, finding that AI's environmental footprint translates to substantial health costs, potentially exceeding $20 billion annually in the US by 2030 and disproportionately affecting low-income communities. This "hidden toll" of AI, the paper argues, necessitates standardized reporting protocols for air pollutants and health impacts, the development of "health-informed AI" to mitigate adverse effects, and a focus on achieving public health equity.

Psychologists could find the information in the sources valuable as it highlights the potential mental health consequences of socioeconomic disparities exacerbated by AI's environmental impact. The sources reveal that the health burden of AI, particularly from data centers, is unevenly distributed and disproportionately affects low-income communities. This raises concerns about increased stress, anxiety, and depression in these communities due to factors like higher exposure to air pollution, reduced access to healthcare, and financial strain from increased health costs. Understanding these psychological impacts could inform interventions and policies aimed at mitigating the negative mental health consequences of AI's growth, particularly for vulnerable populations.

Tuesday, January 7, 2025

Are Large Language Models More Empathetic than Humans?

Welivita, A., and Pu, P. (2024, June 7).
arXiv.org.

Abstract

With the emergence of large language models (LLMs), investigating if they can surpass humans in areas such as emotion recognition and empathetic responding has become a focal point of research. This paper presents a comprehensive study exploring the empathetic responding capabilities of four state-of-the-art LLMs: GPT-4, LLaMA-2-70B-Chat, Gemini-1.0-Pro, and Mixtral-8x7B-Instruct in comparison to a human baseline. We engaged 1,000 participants in a between-subjects user study, assessing the empathetic quality of responses generated by humans and the four LLMs to 2,000 emotional dialogue prompts meticulously selected to cover a broad spectrum of 32 distinct positive and negative emotions. Our findings reveal a statistically significant superiority of the empathetic responding capability of LLMs over humans. GPT-4 emerged as the most empathetic, marking ≈31% increase in responses rated as Good compared to the human benchmark. It was followed by LLaMA-2, Mixtral-8x7B, and Gemini-Pro, which showed increases of approximately 24%, 21%, and 10% in Good ratings, respectively. We further analyzed the response ratings at a finer granularity and discovered that some LLMs are significantly better at responding to specific emotions compared to others. The suggested evaluation framework offers a scalable and adaptable approach for assessing the empathy of new LLMs, avoiding the need to replicate this study’s findings in future research.


Here are some thoughts:

The research presents a groundbreaking study exploring the empathetic responding capabilities of large language models (LLMs), specifically comparing GPT-4, LLaMA-2-70B-Chat, Gemini-1.0-Pro, and Mixtral-8x7B-Instruct against human responses. The researchers designed a comprehensive between-subjects user study involving 1,000 participants who evaluated responses to 2,000 emotional dialogue prompts covering 32 distinct emotions.

By utilizing the EmpatheticDialogues dataset, the study meticulously selected dialogue prompts to ensure equal distribution across positive and negative emotional spectrums. The researchers developed a nuanced approach to evaluating empathy, defining it through cognitive, affective, and compassionate components. They provided LLMs with specific instructions emphasizing the multifaceted nature of empathetic communication, which went beyond traditional linguistic proficiency to capture deeper emotional understanding.

The findings revealed statistically significant superiority in LLMs' empathetic responding capabilities. GPT-4 emerged as the most empathetic, demonstrating approximately a 31% increase in responses rated as "Good" compared to the human baseline. Other models like LLaMA-2, Mixtral-8x7B, and Gemini-Pro showed increases of 24%, 21%, and 10% respectively. Notably, the study also discovered that different LLMs exhibited varying capabilities in responding to specific emotions, highlighting the complexity of artificial empathy.

This research represents a significant advancement in understanding AI's potential for nuanced emotional communication, offering a scalable and adaptable framework for assessing empathy in emerging language models.

Monday, January 6, 2025

Moral agency under oppression

Hirji, S. (2024).
Philosophy and Phenomenological Research.

Abstract

In Huckleberry Finn, a thirteen-year old white boy in antebellum Missouri escapes from his abusive father and befriends a runaway slave named Jim. On a familiar reading of the novel, both Huck and Jim are, in their own ways, morally impressive, transcending the unjust circumstances in which they find themselves in to treat each other as equals. Huck saves Jim's life from two men looking for runaway slaves, and later Jim risks his chance at freedom to save Huck's friend Tom. I want to complicate the idea that Huck and Jim are morally commendable for what they do. More generally, I want to explore how oppression undermines the moral agency of the oppressed, and to some degree, the oppressor. In §1 I take a careful look at Jim's choice, arguing that his enslavement compromises his moral agency. In §2 I show how Jim's oppression also shapes the extent to which Huck can be praiseworthy for his action. In §3, I consider the consequences for thinking about the moral agency of the oppressed, and in §4 I explore the limitations of the concept of moral worth for theorizing in cases of oppression.

The article is here.

Here are some thoughts: 

This article examines moral agency within the context of oppression, using Mark Twain's Huckleberry Finn as a case study. The author challenges the conventional interpretation of Huck and Jim's actions as morally commendable, arguing that Jim's enslavement fundamentally restricts his agency, regardless of his choices. This limitation, the author contends, also impacts the assessment of Huck's actions, suggesting his seemingly virtuous choices are inadvertently shaped by the system of oppression. The article further explores how established moral philosophical concepts inadequately address the complexities of moral agency under oppression, proposing a nuanced understanding that considers both capacity and the ability to fully express that capacity in action. Finally, the article broadens its scope to consider contemporary instances of oppression, demonstrating the persistent challenges to moral agency in various social contexts.

Sunday, January 5, 2025

Incompetence & Losing Capacity: Answers to 8 FAQs

Leslie Kernisan
(2024, November 30). Better Health While Aging.

Perhaps your elderly father insists he has no difficulties driving, even though he’s gotten into some fender benders and you find yourself a bit uncomfortable when you ride in the car with him.

Or you’ve worried about your aging aunt giving an alarming amount of money to people who call her on the phone.

Or maybe it’s your older spouse, who has started refusing to take his medication, claiming that it’s poisoned because the neighbor is out to get him.

These situations are certainly concerning, and they often prompt families to ask me if they should be worried about an older adult becoming “incompetent.”

In response, I usually answer that we need to do at least two things:

  • We should assess whether the person has “capacity” to make the decision in question.
  • If there are signs concerning for memory or thinking problems, we should evaluate to determine what might be causing them.
If you’ve been concerned about an older person’s mental wellbeing or ability to make decisions, understanding what clinicians — and lawyers — mean by capacity is hugely important.


The website addresses concerns related to the decision-making capacity of older adults, particularly in light of cognitive impairments such as dementia. It emphasizes the importance of understanding "capacity," which refers to an individual's ability to make informed decisions about specific matters. Dr. Leslie Kernisan outlines that capacity is not a binary state; instead, it is decision-specific and can fluctuate based on health conditions. For example, an older adult may retain the capacity to make simple decisions but struggle with more complex ones, especially if they are experiencing health issues or cognitive decline.

Dr. Kernisan distinguishes between incapacity and incompetence, noting that capacity is typically assessed in clinical settings by healthcare professionals, while competence is a legal determination made by courts. The document explains that various types of decisions—such as medical consent, financial matters, and driving—require different capacities, and the legal standards for these capacities can vary by state.

The article also highlights the impact of Alzheimer's disease and other dementias on decision-making abilities. In early stages, individuals may still have the capacity for many decisions, but as the disease progresses, their ability to make even simple choices may diminish. Therefore, it is crucial for families to seek clinical assessments of capacity when there are concerns about an older adult's decision-making abilities.

Moreover, the document advises that legal determinations of incapacity may be necessary before overriding an older person's decisions, especially in matters concerning safety or financial well-being. Families are encouraged to consult with legal professionals when navigating these issues to ensure they are acting within legal and ethical boundaries.

Overall, the article serves as a practical guide for caregivers and family members dealing with the complexities of aging and cognitive decline, stressing the need for respectful communication and proactive measures to protect the autonomy and safety of older adults.

Saturday, January 4, 2025

Artificial Intelligence (AI) Psychotherapy: Coming Soon to a Consultation Room Near You?

Swartz, H. A. (2023).
American Journal of Psychotherapy,
76(2), 55–56.


News stories about generative artificial intelligence (AI) abound, alternately predicting that AI will cause humanity’s demise and touting it as the solution to everything from climate change to health care scarcity. In 2022, OpenAI’s release of ChatGPT, an easy-to-use AI interface that produces fluent text in response to prompts, intensified highly ambivalent discussions about AI. Pundits and knowledgeable experts alike marveled at ChatGPT’s ability to produce text that resembles human writing while acknowledging that the speed of its evolution was outstripping our ability to understand its risks. This technology performed better on law school and medical school entrance exams than many human applicants, leading to predictions that AI will eventually render some professions obsolete. Psychotherapists, too, seem at risk for being replaced by AI. Are fears of AI supplanting psychotherapists reasonable?

To explore how AI might respond to a patient in distress, I typed this prompt into ChatGPT: “I feel sad. Can you help me?” It quickly generated the following text:
I am sorry to hear that you are feeling sad. I’ll do my best to help you. It’s important to remember that I’m an AI language model and may not fully understand or empathize with your emotions, but I can offer some general suggestions and support. . . . Remember, everyone experiences sadness from time to time, and it’s okay to seek help when you need it. You don’t have to face it alone.
Despite being impressed by the speed and conversational tone of ChatGPT’s text generation and the convenience of receiving an immediate response during hours when a human therapist would likely be unavailable, the formulaic exchange and canned feedback provided by ChatGPT left me with doubts about its ability to provide genuine soothing to humans experiencing depression. 


Here are some thoughts:

This editorial examines the potential of artificial intelligence (AI) in psychotherapy. While AI chatbots offer increased accessibility and convenience, providing self-help tools and improving symptom management, studies reveal limitations, including a lack of genuine human connection and potential risks like increased self-harm. The author concludes that AI is a useful supplementary tool, particularly in low-resource settings, but cannot replace human therapists for complex emotional and interpersonal issues. Ultimately, a blended approach incorporating both AI and human interaction is suggested for optimal therapeutic outcomes.

Friday, January 3, 2025

Assessing Empathy in Large Language Models with Real-World Physician-Patient Interactions

Luo, M., et al. (2024, May 26).
arXiv.org.

Abstract

The integration of Large Language Models (LLMs) into the healthcare domain has the potential to significantly enhance patient care and support through the development of empathetic, patient-facing chatbots. This study investigates an intriguing question: Can ChatGPT respond with a greater degree of empathy than those typically offered by physicians? To answer this question, we collect a de-identified dataset of patient messages and physician responses from Mayo Clinic and generate alternative replies using ChatGPT. Our analyses incorporate novel empathy ranking evaluation (EMRank) involving both automated metrics and human assessments to gauge the empathy level of responses. Our findings indicate that LLM-powered chatbots have the potential to surpass human physicians in delivering empathetic communication, suggesting a promising avenue for enhancing patient care and reducing professional burnout. The study not only highlights the importance of empathy in patient interactions but also proposes a set of effective automatic empathy ranking metrics, paving the way for the broader adoption of LLMs in healthcare.


Here are some thoughts:

The research explores an innovative approach to assessing empathy in healthcare communication by comparing responses from physicians and ChatGPT, a large language model (LLM). The study focuses on prostate cancer patient interactions, utilizing a real-world dataset from Mayo Clinic to investigate whether AI-powered chatbots can potentially deliver more empathetic responses than human physicians.

The researchers developed a novel methodology called EMRank, which employs multiple evaluation techniques to measure empathy. This approach includes both automated metrics using LLaMA (another language model) and human assessments. By using zero-shot, one-shot, and few-shot learning strategies, they created a flexible framework for ranking empathetic communication that could be generalized across different healthcare domains.
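The ranking idea itself is simple to sketch: show an evaluator model the patient message and the candidate replies, ask it to pick the more empathetic one, and optionally prepend a few scored examples (the one-shot and few-shot cases). The snippet below is a generic illustration of LLM-as-ranker under those assumptions, not the authors' EMRank code; `llm()` is a hypothetical chat-completion wrapper.

```python
# Generic sketch of LLM-based empathy ranking: present the patient message and
# two candidate replies (e.g., physician vs. chatbot) in shuffled order and ask
# an evaluator model which is more empathetic. Few-shot examples, if any, are
# prepended to the prompt. llm() is a hypothetical chat-completion wrapper.
import random

def llm(prompt: str) -> str:
    raise NotImplementedError

def rank_empathy(patient_msg: str, reply_a: str, reply_b: str,
                 few_shot_examples: str = "") -> str:
    candidates = [("A", reply_a), ("B", reply_b)]
    random.shuffle(candidates)  # guard against position bias in the evaluator
    labeled = "\n".join(f"Reply {tag}: {text}" for tag, text in candidates)
    prompt = (
        few_shot_examples +
        "Decide which reply shows more empathy (acknowledging feelings, "
        "offering support, avoiding dismissiveness).\n"
        f"Patient message: {patient_msg}\n{labeled}\n"
        "Answer with exactly 'A' or 'B'."
    )
    choice = llm(prompt).strip().upper()[:1]
    return dict(candidates).get(choice, reply_a)  # map shuffled label back to a reply
```

Aggregating many such pairwise judgments, and checking them against human raters, is what turns this into a scalable evaluation framework of the kind the paper proposes.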

Key findings suggest that LLM-powered chatbots like ChatGPT have significant potential to surpass human physicians in delivering empathetic communication. The study's unique contributions include using real patient data, developing innovative automatic empathy ranking metrics, and incorporating patient evaluations to validate the assessment methods. By demonstrating the capability of AI to generate compassionate responses, the research opens new avenues for enhancing patient care and potentially reducing professional burnout among healthcare providers.

The methodology carefully addressed privacy concerns by de-identifying patient and physician information, and controlled for response length to ensure a fair comparison. Ultimately, the study represents a promising step towards integrating artificial intelligence into healthcare communication, highlighting the potential of LLMs to provide supportive, empathetic interactions in medical contexts.

Thursday, January 2, 2025

Negative economic shocks and the compliance to social norms

Bogliacino, F.,  et al. (2024).
Judgment and Decision Making, 19.

Abstract

We study why suffering a negative economic shock, i.e., a significant loss, may trigger a change in other-regarding behavior. We conjecture that people trade off concern for money with a conditional preference to follow social norms and that suffering a shock makes extrinsic motivation more salient, leading to more norm violation. This hypothesis is grounded on the premise that preferences are norm-dependent. We study this question experimentally: after administering losses on the earnings from a real-effort task, we analyze choices in prosocial and antisocial settings. To derive our predictions, we elicit social norms for each context analyzed in the experiments. We find evidence that shock increases deviations from norms.

The research is linked above.

Here are some thoughts.

The research indicates another way in which moral norms shift based on context. The study investigates how experiencing significant financial losses, termed negative economic shocks (NES), influences individuals' adherence to social norms. The authors hypothesize that when individuals face NES, they become more focused on monetary concerns, leading to a higher likelihood of violating social norms. This hypothesis is grounded in the concept of norm-dependent utility, where individuals weigh the psychological costs of deviating from norms against their financial needs. The researchers conducted three experiments where participants experienced an 80% loss in earnings from a real-effort task and subsequently engaged in various tasks measuring norm compliance, including stealing, cheating, and cooperation.
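One common way to write down norm-dependent utility of this kind (a sketch of the general idea in the spirit of this literature, not necessarily the authors' exact specification) is:

```latex
% Norm-dependent utility: material payoff plus a weighted concern for the
% social appropriateness of the chosen action.
U_i(a_k) = \pi_i(a_k) + \gamma_i \, N(a_k)
```

Here π_i(a_k) is the monetary payoff individual i receives from action a_k, N(a_k) is the social appropriateness of that action (elicited separately in the experiments), and γ_i ≥ 0 is the weight placed on norm compliance. Read this way, the hypothesis is that a negative shock makes the monetary term more salient relative to the norm term, so lucrative but norm-violating actions become more attractive.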

The findings reveal that participants who experienced NES exhibited increased norm violations across several contexts. Specifically, there was a notable rise in stealing behaviors and a significant increase in cheating during the "die-under-the-cup" task. Additionally, the study found that retaliation behaviors decreased markedly in "joy of destruction" scenarios. Importantly, the results suggest that the effects of NES on social behavior are distinct from mere wealth effects, indicating that experiencing a shock alters individuals' motivations and decision-making processes. Overall, this research contributes valuable insights into the complex interplay between economic stressors and social behavior, highlighting how financial adversity can lead to deviations from established social norms.

Wednesday, January 1, 2025

Personalized progression modelling and prediction in Parkinson’s disease with a novel multi-modal graph approach

Lian, J., Luo, X., et al. (2024).
npj Parkinson's Disease, 10(1).
https://doi.org/10.1038/s41531-024-00832-w

Abstract

Parkinson’s disease (PD) is a complex neurological disorder characterized by dopaminergic neuron degeneration, leading to diverse motor and non-motor impairments. This variability complicates accurate progression modelling and early-stage prediction. Traditional classification methods based on clinical symptoms are often limited by disease heterogeneity. This study introduces a graph-based, interpretable, personalized progression method, utilizing data from the Parkinson’s Progression Markers Initiative (PPMI) and the Parkinson’s Disease Biomarker Program (PDBP). Our approach integrates multimodal inter-individual and intra-individual data, including clinical assessments, MRI, and genetic information to make multi-dimension predictions. Validated using the PDBP dataset from 12 to 36 months, our AdaMedGraph method demonstrated strong performance, achieving AUC values of 0.748 and 0.714 for the 12-month Hoehn and Yahr Scale and Movement Disorder Society-Sponsored Revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) III on the PPMI test set. Ablation analysis reveals the importance of baseline clinical assessment predictors. This novel framework improves personalized care and offers insights into unique disease trajectories in PD patients.

The research is here.

Here are some thoughts:

The research introduces AdaMedGraph, an innovative tool designed to model and predict the progression of Parkinson’s Disease (PD) using personalized, multimodal graph-based methods. This approach integrates clinical assessments, MRI imaging, and genetic data to create individualized disease trajectories. AdaMedGraph demonstrates superior predictive performance compared to traditional machine learning methods, achieving high accuracy for progression markers like the Hoehn and Yahr Scale (e.g., AUC 0.748) and effectively addressing the challenges of disease heterogeneity. Its ability to incorporate diverse data sources allows for detailed predictions across motor (e.g., rigidity, tremors) and non-motor symptoms (e.g., cognition, sleep patterns), which are critical for comprehensive PD management.
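As a rough illustration of the general idea behind multimodal patient-similarity graphs (a toy sketch only, with synthetic data, and not the authors' AdaMedGraph implementation), one can standardize each modality, connect each patient to their nearest neighbors in the combined feature space, and let predicted progression borrow strength from similar patients:

```python
# Toy sketch of a multimodal patient-similarity graph: standardize each
# modality, concatenate features, connect each patient to their k nearest
# neighbors, and predict a progression label by neighbor majority vote.
# Synthetic data; illustrates the general idea, not the AdaMedGraph method.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
n = 200
clinical = rng.normal(size=(n, 5))   # e.g., baseline clinical sub-scores (synthetic)
imaging = rng.normal(size=(n, 8))    # e.g., MRI-derived regional measures (synthetic)
genetic = rng.integers(0, 3, size=(n, 4)).astype(float)  # e.g., risk-allele counts (synthetic)
y = rng.integers(0, 2, size=n)       # synthetic "progressed within 12 months" label

# Standardize each modality separately, then concatenate into one feature matrix
X = np.hstack([StandardScaler().fit_transform(m) for m in (clinical, imaging, genetic)])

# k-nearest-neighbor similarity graph over patients
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)

# Predict each patient's label by majority vote among graph neighbors
votes = A @ y                       # number of "progressed" neighbors
pred = (votes > 5).astype(int)      # majority of 10 neighbors
print("toy agreement with synthetic labels:", (pred == y).mean())
```

With random synthetic labels the agreement hovers around chance, which is expected; the point is the mechanics of encoding inter-individual similarity as a graph so that related patients inform one another's predictions, the intuition the paper builds on with far richer modelling.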

For psychologists, this research has significant implications. The integration of cognitive and behavioral assessments into the predictive framework underscores the importance of psychological evaluations in understanding PD progression. The model’s ability to identify personalized trajectories enables psychologists to tailor interventions for both motor and non-motor symptoms, enhancing patient-centered care. Furthermore, the study’s exploration of medication effects on symptoms provides valuable insights into the interaction between treatments and behavioral outcomes, informing therapeutic adjustments. By identifying critical predictive features and offering interpretable patient similarity analyses, AdaMedGraph aligns well with the emphasis on individualized care in psychological practice. This tool has the potential to advance multidisciplinary collaboration, enabling psychologists to combine behavioral insights with advanced predictive analytics to improve outcomes for individuals with Parkinson’s Disease.