Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, January 20, 2025

The Human Core of AI: Navigating Our Anxieties and Ethics in the Digital Age

Jesse Hirsh
medium.com
Originally posted 25 FEB 24


Artificial Intelligence (AI) serves as a profound mirror reflecting not just our technological ambitions, but the complex tapestry of human anxieties, ethical dilemmas, and societal challenges. As we navigate the burgeoning landscape of AI, the discourse surrounding it often reveals more about us as a society and as individuals than it does about the technology itself. This is fundamentally about the human condition, our fears, our hopes, and our ethical compass.

AI as a Reflection of Human Anxieties

When we talk about controlling AI, at its core, this discussion encapsulates our fears of losing control — not over machines, but over humans. Control over AI becomes a metaphor for our collective anxiety about unchecked power, the erosion of privacy, and the potential for new forms of exploitation. It’s an echo of our deeper concerns about how power is distributed and exercised in society.

Guardrails for AI as Guardrails for Humanity

The debate on implementing guardrails for AI is indeed a debate on setting boundaries for human behavior. It’s about creating a framework that ensures AI technologies are used ethically, responsibly, and for the greater good. These conversations underscore a pressing need to manage not just how machines operate, but how people use these tools — in ways that align with societal values and norms. Or perhaps guardrails are the wrong approach, as they limit what humans can do, not what machines can do.


Here are some thoughts:

The essay explores the relationship between Artificial Intelligence (AI) and humanity, arguing that AI reflects human anxieties, ethics, and societal challenges. It emphasizes that the discourse surrounding AI is more about human concerns than the technology itself. The author highlights the need to focus on human ethics, trust, and responsibility when developing and using AI, rather than viewing AI as a separate entity or threat.

This essay is important for psychologists for several reasons. First, psychologists need to understand the anxieties clients may experience in relation to AI and technology. Second, the essay's emphasis on human ethics and responsibility in developing and using AI is essential for psychologists to consider when using AI-powered tools in their own practice.

Furthermore, the text's focus on trust and human connection in the context of AI is critical for psychologists to understand when building therapeutic relationships with clients who may be impacted by AI-related issues. By recognizing the interconnectedness of human trust and AI, psychologists can foster deeper and more meaningful relationships with their clients.

Lastly, the author's suggestion to use AI as a tool to reconnect with humanity resonates with psychologists' goals of promoting emotional connection, empathy, and understanding in their clients. By leveraging AI in a way that promotes human connection, clinical psychologists can help their clients develop more authentic and meaningful relationships with others.

Sunday, January 19, 2025

Institutional Betrayal in Inpatient Psychiatry: Effects on Trust and Engagement With Care

Lewis, A., Lee, H. S., Zabelski, S., & Shields, M. C. (2024).
Psychiatric Services.

Abstract

Objective:

Patients’ experiences of inpatient psychiatry have received limited empirical scrutiny. The authors examined patients’ likelihood of experiencing institutional betrayal (harmful actions or inactions toward patients) at facilities with for-profit, nonprofit, or government ownership; patient-level characteristics associated with experiencing institutional betrayal; associations between betrayal and patients’ trust in mental health providers; and associations between betrayal and patients’ willingness to engage in care postdischarge.

Methods:

Former psychiatric inpatients (N=814 adults) responded to an online survey. Data were collected on patients’ demographic characteristics; experiences of institutional betrayal; and the impact of psychiatric hospitalization on patients’ trust in providers, willingness to engage in care, and attendance at 30-day follow-up visits. Participants’ responses were linked to secondary data on facility ownership type.

Results:

Experiencing institutional betrayal was associated with less trust in mental health providers (25-percentage-point increase in reporting less trust, 95% CI=17–32), reduced willingness (by 45 percentage points, 95% CI=39–52) to voluntarily undergo hospitalization, reduced willingness (by 30 percentage points, 95% CI=23–37) to report distressing thoughts to mental health providers, and lower probability of reporting attendance at a 30-day follow-up visit (11-percentage-point decrease, 95% CI=5–18). Participants treated at a for-profit facility were significantly more likely (by 14 percentage points) to report experiencing institutional betrayal than were those treated at a nonprofit facility (p=0.01).

Conclusions:

Institutional betrayal is one mechanism through which inpatient psychiatric facilities may cause iatrogenic harm, and the potential for betrayal was larger at for-profit facilities. Further research is needed to identify the determinants of institutional betrayal and strategies to support improvement in care quality.


Here are some thoughts:

The study examined institutional betrayal, defined as harmful actions or inactions toward patients by the facilities they depend on for care, among former psychiatric inpatients.

Key findings of the study include:
  1. Patients who experienced institutional betrayal during their inpatient psychiatric stay reported decreased trust in healthcare providers and organizations.
  2. Institutional betrayal was associated with reduced engagement with care following discharge from inpatient psychiatry.
  3. The period following discharge from inpatient psychiatry is characterized by elevated suicide risk, unplanned readmissions, and lack of outpatient follow-up care.
  4. The study highlights the importance of addressing institutional betrayal in psychiatric care settings to improve patient outcomes and trust in the healthcare system.
These findings suggest that institutional betrayal in inpatient psychiatric care can have significant negative effects on patients' trust in healthcare providers and their willingness to engage with follow-up care. Addressing these issues may be crucial for improving patient outcomes and reducing risks associated with the post-discharge period.
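
A methodological side note: percentage-point differences with 95% confidence intervals, like those reported in the abstract, are commonly obtained from a linear probability model (OLS on a binary outcome). The sketch below only illustrates that general approach on simulated data; it is an assumption about the analysis, not the authors' code.

```python
# Illustration only: estimating a percentage-point difference with a 95% CI
# via a linear probability model on simulated data (not the study's data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 814  # sample size matching the survey; data otherwise made up
betrayal = rng.integers(0, 2, n)                 # experienced betrayal (0/1)
p_less_trust = 0.30 + 0.25 * betrayal            # build in a ~25-point gap
less_trust = rng.binomial(1, p_less_trust)       # reported less trust (0/1)
df = pd.DataFrame({"betrayal": betrayal, "less_trust": less_trust})

# OLS on a binary outcome = linear probability model; the coefficient on
# `betrayal` is the percentage-point difference, with a robust 95% CI.
model = smf.ols("less_trust ~ betrayal", data=df).fit(cov_type="HC1")
est = model.params["betrayal"]
lo, hi = model.conf_int().loc["betrayal"]
print(f"{est*100:.0f} percentage points (95% CI {lo*100:.0f} to {hi*100:.0f})")
```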

Saturday, January 18, 2025

Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Ayers, J. W., et al. (2023).
JAMA Internal Medicine, 183(6), 589–596.

Abstract

Importance
The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective
To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants
In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit’s r/AskDocs) was used to randomly draw 195 exchanges from October 2022 where a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose “which response was better” and judged both “the quality of information provided” (very poor, poor, acceptable, good, or very good) and “the empathy or bedside manner provided” (not empathetic, slightly empathetic, moderately empathetic, empathetic, and very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between chatbot and physicians.

Results
Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for the chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for the chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions
In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbot to draft responses that physicians could then edit. Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.

Here are some thoughts:

This study examined whether an AI chatbot could answer patient questions posted to a public social media forum as well as physicians do. Evaluators preferred the chatbot responses and rated them significantly higher for both quality and empathy. This research is important for psychologists because chatbots may soon be able to answer routine questions about a practice, such as informed consent, accepted insurance, and the types of services offered. AI agents may help psychologists streamline these kinds of administrative tasks.
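
As an aside, the "3.6 times" and "9.8 times" figures in the abstract are simply prevalence ratios of the proportions quoted there; a minimal sketch of the arithmetic:

```python
# Reproduce the prevalence ratios quoted in the abstract from the published
# proportions of responses rated >= 4 (good/very good, empathetic/very empathetic).
quality_good = {"chatbot": 0.785, "physician": 0.221}
empathy_high = {"chatbot": 0.451, "physician": 0.046}

print(f"Quality ratio: {quality_good['chatbot'] / quality_good['physician']:.1f}x")  # ~3.6x
print(f"Empathy ratio: {empathy_high['chatbot'] / empathy_high['physician']:.1f}x")  # ~9.8x
```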

Friday, January 17, 2025

Men's Suicidal thoughts and behaviors and conformity to masculine norms: A person-centered, latent profile approach

Eggenberger, L., et al. (2024).
Heliyon, 10(20), e39094.

Abstract

Background

Men are up to four times more likely to die by suicide than women. At the same time, men are less likely to disclose suicidal ideation and transition more rapidly from ideation to attempt. Recently, socialized gender norms and particularly conformity to masculine norms (CMN) have been discussed as driving factors for men's increased risk for suicidal thoughts and behaviors (STBs). This study aims to examine the individual interplay between CMN dimensions and their association with depression symptoms, help-seeking, and STBs.

Methods

Using data from an anonymous online survey of 488 cisgender men, latent profile analysis was performed to identify CMN subgroups. Multigroup comparisons and hierarchical regression analyses were used to estimate differences in sociodemographic characteristics, depression symptoms, psychotherapy use, and STBs.

Results

Three latent CMN subgroups were identified: Egalitarians (58.6%; characterized by overall low CMN), Players (16.0%; characterized by patriarchal beliefs, endorsement of sexual promiscuity, and heterosexual self-presentation), and Stoics (25.4%; characterized by restrictive emotionality, self-reliance, and engagement in risky behavior). Stoics showed a 2.32 times higher risk for a lifetime suicide attempt and were characterized by younger age, stronger somatization of depression symptoms, and stronger unbearability beliefs.

Conclusion

The interplay between the CMN dimensions restrictive emotionality, self-reliance, and willingness to engage in risky behavior, paired with suicidal beliefs about the unbearability of emotional pain, may create a suicidogenic psychosocial system. Acknowledging this high-risk subgroup of men conforming to restrictive masculine norms may aid the development of tailored intervention programs, ultimately mitigating the risk for a suicide attempt.

Here are some thoughts:

Overall, the study underscores the critical role of social norms in shaping men's mental health and suicide risk. It provides valuable insights for developing targeted interventions and promoting healthier expressions of masculinity to prevent suicide in men.

This research article investigates the link between conformity to masculine norms (CMN) and suicidal thoughts and behaviors (STBs) in cisgender men. Using data from an online survey, the study employs latent profile analysis to identify distinct CMN subgroups, revealing three profiles: Egalitarians (low CMN), Players (patriarchal beliefs and promiscuity), and Stoics (restrictive emotionality, self-reliance, and risk-taking). Stoics demonstrated a significantly higher risk of lifetime suicide attempts, attributable to their CMN profile combined with beliefs about the unbearability of emotional pain. The study concludes that understanding CMN dimensions is crucial for developing targeted suicide prevention strategies for men.
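
For readers unfamiliar with latent profile analysis, one way to picture it is fitting a finite mixture model to standardized CMN subscale scores and letting a fit index (such as the BIC) choose the number of profiles. The sketch below uses a Gaussian mixture model as a stand-in for LPA on made-up data; the variable layout is an assumption, not the authors' code.

```python
# Rough illustration of latent-profile-style clustering using a Gaussian
# mixture model over standardized CMN subscale scores (hypothetical data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(488, 9))            # 488 respondents x 9 CMN subscales (made up)
X_std = StandardScaler().fit_transform(X)

# Fit 1- to 5-profile solutions and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(X_std) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(X_std))

profiles = best.predict(X_std)           # profile membership for each respondent
print("Number of profiles:", best.n_components)
print("Profile sizes:", np.bincount(profiles))
```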

Thursday, January 16, 2025

Faculty Must Protect Their Labor from AI Replacement

John Warner
Inside Higher Ed
Originally posted 11 Dec 24

Here is an excerpt:

A PR release from the UCLA Newsroom about a comparative lit class that is using a “UCLA-developed AI system” to substitute for labor that was previously done by faculty or teaching assistants lays out the whole deal. The course textbook has been generated from the professor’s previous course materials. Students will interact with the AI-driven courseware. A professor and teaching assistants will remain, for now, but for how long?

The professor argues—I would say rationalizes—that this is good for students because “Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically.”

(Note: Whenever I see someone touting the benefit of an AI-driven practice as good pedagogy, I wonder what is stopping them from doing it without the AI component, and the answer is usually nothing.)

An additional apparent benefit is “that the platform can help professors ensure consistent delivery of course material. Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching—and offer students a very similar experience.”


This article argues that the survival of college faculty in an AI-driven world depends on recognizing themselves as laborers and resisting trends that devalue their work. The rise of adjunctification—prioritizing cheaper, non-tenured faculty over tenured ones—offers a cautionary tale. Similarly, the adoption of generative AI in teaching risks diminishing the human role in education. Examples like UCLA’s AI-powered courseware illustrate how faculty labor becomes interchangeable, paving the way for automation and eroding the value of teaching. Faculty must push back against policies, such as shifts in copyright, that enable these trends, emphasizing the irreplaceable value of their labor and resisting practices that jeopardize the future of academic teaching and learning.

Wednesday, January 15, 2025

AI Licensing for Authors: Who Owns the Rights and What’s a Fair Split?

The Authors Guild. (2024, December 13). 
The Authors Guild. 
Originally published 12 Dec 24

The Authors Guild believes it is crucial that authors, not publishers or tech companies, have control over the licensing of AI rights. Authors must be able to choose whether they want to allow their works to be used by AI and under what terms.

AI Training Is Not Covered Under Standard Publishing Agreements

A trade publishing agreement grants just that: a license to publish. AI training is not publishing, and a publishing contract does not in any way grant that right. AI training is not a new book format, it is not a new market, it is not a new distribution mechanism. Licensing for AI training is a right entirely unrelated to publishing, and is not a right that can simply be tacked onto a subsidiary-rights clause. It is a right reserved by authors, a right that must be negotiated individually for each publishing contract, and only if the author chooses to license that right at all.

Subsidiary Rights Do Not Include AI Rights

The contractual rights that authors do grant to publishers include the right to publish the book in print, electronic, and often audio formats (though many older contracts do not provide for electronic or audio rights). They also grant the publisher “subsidiary rights” authorizing it to license the book or excerpts to third parties in readable formats, such as foreign language editions, serials, abridgements or condensations, and readable digital or electronic editions. AI training rights to date have not been included as a subsidiary right in any contract we have been made aware of. Subsidiary rights have a range of “splits”—percentages of revenues that the publisher keeps and pays to the author. For certain subsidiary rights, such as “other digital” or “other electronic” rights (which some publishers have, we believe erroneously, argued gives them AI training rights), the publisher is typically required to consult with the author or get their approval before granting any subsidiary licenses.


Here are some thoughts:

The Authors Guild emphasizes that authors, not publishers or tech companies, should control AI licensing for their works. Standard publishing contracts don’t cover AI training, as it’s unrelated to traditional publishing rights. Authors retain copyright for AI uses and must negotiate these rights separately, ensuring they can approve or reject licensing deals. Publishers, if involved, should be fairly compensated based on their role, but authors should receive the majority—75-85%—of AI licensing revenues. The Guild also continues legal action against companies for past AI-related copyright violations, advocating for fair practices and author autonomy in this emerging market.

Tuesday, January 14, 2025

Agentic LLMs for Patient-Friendly Medical Reports

Sudarshan, M., Shih, S., et al. (2024).
arXiv.org

Abstract

The application of Large Language Models (LLMs) in healthcare is expanding rapidly, with one potential use case being the translation of formal medical reports into patient-legible equivalents. Currently, LLM outputs often need to be edited and evaluated by a human to ensure both factual accuracy and comprehensibility, and this is true for the above use case. We aim to minimize this step by proposing an agentic workflow with the Reflexion framework, which uses iterative self-reflection to correct outputs from an LLM. This pipeline was tested and compared to zero-shot prompting on 16 randomized radiology reports. In our multi-agent approach, reports had an accuracy rate of 94.94% when looking at verification of ICD-10 codes, compared to zero-shot prompted reports, which had an accuracy rate of 68.23%. Additionally, 81.25% of the final reflected reports required no corrections for accuracy or readability, while only 25% of zero-shot prompted reports met these criteria without needing modifications. These results indicate that our approach presents a feasible method for communicating clinical findings to patients in a quick, efficient and coherent manner whilst also retaining medical accuracy. The codebase is available for viewing at http://github.com/malavikhasudarshan/Multi-Agent-Patient-Letter-Generation.


Here are some thoughts:

The article focuses on using Large Language Models (LLMs) in healthcare to create patient-friendly versions of medical reports, specifically in the field of radiology. The authors present a new multi-agent workflow that aims to improve the accuracy and readability of these reports compared to traditional methods like zero-shot prompting. This workflow involves multiple steps: extracting ICD-10 codes from the original report, generating multiple patient-friendly reports, and using a reflection model to select the optimal version.
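
A hypothetical sketch of that pipeline is below; `call_llm` is a placeholder to be wired to whatever chat-completion client is actually used, and the prompts and helper names are illustrative rather than taken from the paper's codebase.

```python
# Hypothetical sketch of the described flow: extract ICD-10 codes, draft
# several patient-friendly versions, then use a reflection step to pick the
# best one. `call_llm` is a placeholder, not a real client.
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; replace with a real client."""
    raise NotImplementedError

def extract_icd10_codes(report: str) -> List[str]:
    response = call_llm(
        "List the ICD-10 codes supported by this radiology report, one per line:\n\n" + report
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def draft_patient_letters(report: str, codes: List[str], n: int = 3) -> List[str]:
    prompt = (
        "Rewrite this radiology report for a patient at an 8th-grade reading level. "
        f"Preserve all findings related to these ICD-10 codes: {codes}.\n\n" + report
    )
    return [call_llm(prompt) for _ in range(n)]

def reflect_and_select(report: str, codes: List[str], drafts: List[str]) -> str:
    numbered = "\n\n".join(f"[{i}] {d}" for i, d in enumerate(drafts))
    choice = call_llm(
        "Compare each draft against the original report and these ICD-10 codes "
        f"({codes}) for accuracy and readability. Reply with only the index of the "
        f"best draft.\n\nOriginal:\n{report}\n\nDrafts:\n{numbered}"
    )
    return drafts[int(choice.strip())]

def patient_friendly_report(report: str) -> str:
    codes = extract_icd10_codes(report)
    drafts = draft_patient_letters(report, codes)
    return reflect_and_select(report, codes, drafts)
```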

The study highlights the success of this multi-agent approach, demonstrating that it leads to higher accuracy in terms of including correct ICD-10 codes and produces reports that are more concise, structured, and formal compared to zero-shot prompting. The authors acknowledge that while their system significantly reduces the need for human review and editing, it doesn't completely eliminate it. The article emphasizes the importance of clear and accessible medical information for patients, especially as they increasingly gain access to their own records. The goal is to reduce patient anxiety and confusion, ultimately enhancing their understanding of their health conditions.

Monday, January 13, 2025

Exposure to Higher Rates of False News Erodes Media Trust and Fuels Overconfidence

Altay, S., Lyons, B. A., & Modirrousta-Galian, A. (2024).
Mass Communication & Society, 1–25.
https://doi.org/10.1080/15205436.2024.2382776

Abstract

In two online experiments (N = 2,735), we investigated whether forced exposure to high proportions of false news could have deleterious effects by sowing confusion and fueling distrust in news. In a between-subjects design where U.S. participants rated the accuracy of true and false news, we manipulated the proportions of false news headlines participants were exposed to (17%, 33%, 50%, 66%, and 83%). We found that exposure to higher proportions of false news decreased trust in the news but did not affect participants’ perceived accuracy of news headlines. While higher proportions of false news had no effect on participants’ overall ability to discern between true and false news, they made participants more overconfident in their discernment ability. Therefore, exposure to false news may have deleterious effects not by increasing belief in falsehoods, but by fueling overconfidence and eroding trust in the news. Although we are only able to shed light on one causal pathway, from news environment to attitudes, this can help us better understand the effects of external or supply-side changes in news quality.


Here are some thoughts:

The study investigates the impact of increased exposure to false news on individuals' trust in media, their ability to discern truth from falsehood, and their confidence in their evaluation skills. The research involved two online experiments with a total of 2,735 participants, who rated the accuracy of news headlines after being exposed to varying proportions of false content. The findings reveal that higher rates of misinformation significantly decrease general media trust, independent of individual factors such as ideology or cognitive reflectiveness. This decline in trust may lead individuals to turn away from credible news sources in favor of less reliable alternatives, even when their ability to evaluate individual news items remains intact.

Interestingly, while participants displayed overconfidence in their evaluations after exposure to predominantly false content, their actual accuracy judgments did not significantly vary with the proportion of true and false news. This suggests that personal traits like discernment skills play a more substantial role than environmental cues in determining how individuals assess news accuracy. The study also highlights a disconnection between changes in media trust and evaluations of specific news items, indicating that attitudes toward media are often more malleable than actual behavior.
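
To make these constructs concrete: discernment is typically scored as the gap between how often true headlines and false headlines are rated accurate, and overconfidence as the gap between self-rated and actual discernment. The sketch below illustrates that scoring on made-up responses and is an assumption about the operationalization, not the authors' exact measure.

```python
# Illustrative scoring of discernment and overconfidence (made-up data).
import numpy as np

ratings_true_news = np.array([1, 1, 0, 1, 1, 1])    # 1 = headline rated accurate
ratings_false_news = np.array([0, 1, 0, 0, 1, 0])

hit_rate = ratings_true_news.mean()                 # true headlines rated accurate
false_alarm_rate = ratings_false_news.mean()        # false headlines rated accurate
discernment = hit_rate - false_alarm_rate           # higher = better discernment

self_rated_ability = 0.9                            # rescaled confidence item (0-1)
overconfidence = self_rated_ability - discernment   # positive = overconfident

print(f"Discernment: {discernment:.2f}, Overconfidence: {overconfidence:.2f}")
```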

The research underscores the importance of understanding the psychological mechanisms at play when individuals encounter misinformation. It points out that interventions aimed at improving news discernment should consider the potential for increased skepticism rather than enhanced accuracy. Moreover, the findings suggest that exposure to high levels of false news can lead to overconfidence in one's ability to judge news quality, which may result in the rejection of accurate information.

Overall, the study provides credible evidence that exposure to predominantly false news can have harmful effects by eroding trust in media institutions and fostering overconfidence in personal judgment abilities. These insights are crucial for developing effective strategies to combat misinformation and promote healthy media consumption habits among the public.

Sunday, January 12, 2025

Large language models can outperform humans in social situational judgments

Mittelstädt, J. M.,  et al. (2024).
Scientific Reports, 14(1).

Abstract

Large language models (LLM) have been a catalyst for the public interest in artificial intelligence (AI). These technologies perform some knowledge-based tasks better and faster than human beings. However, whether AIs can correctly assess social situations and devise socially appropriate behavior, is still unclear. We conducted an established Situational Judgment Test (SJT) with five different chatbots and compared their results with responses of human participants (N = 276). Claude, Copilot and you.com’s smart assistant performed significantly better than humans in proposing suitable behaviors in social situations. Moreover, their effectiveness rating of different behavior options aligned well with expert ratings. These results indicate that LLMs are capable of producing adept social judgments. While this constitutes an important requirement for the use as virtual social assistants, challenges and risks are still associated with their widespread use in social contexts.

Here are some thoughts:

This research assesses the social judgment capabilities of large language models (LLMs) by administering a Situational Judgment Test (SJT), a standardized measure of how well a person can identify appropriate behavior in workplace and other social situations, to five popular chatbots and comparing their performance to a human control group. The study found that several LLMs significantly outperformed humans in identifying appropriate behaviors in complex social scenarios. While LLMs demonstrated high consistency in their responses and agreement with expert ratings, the study notes limitations, including potential biases and the need for further investigation into real-world application and the underlying mechanisms of their social judgment. The results suggest LLMs possess considerable potential as social assistants but also highlight ethical considerations surrounding their use.