Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Psychotherapy.

Friday, August 1, 2025

You sound like ChatGPT

Sara Parker
The Verge
Originally posted 20 June 25

Here is an excerpt:

AI shows up most obviously in functions like smart replies, autocorrect, and spellcheck. Research out of Cornell looks at our use of smart replies in chats, finding that use of smart replies increases overall cooperation and feelings of closeness between participants, since users end up selecting more positive emotional language. But if people believed their partner was using AI in the interaction, they rated their partner as less collaborative and more demanding. Crucially, it wasn’t actual AI usage that turned them off — it was the suspicion of it. “We form perceptions based on language cues, and it’s really the language properties that drive those impressions,” says Malte Jung, Associate Professor of Information Science at Cornell University and a co-author of the study.

This paradox — AI improving communication while fostering suspicion — points to a deeper loss of trust, according to Mor Naaman, professor of Information Science at Cornell Tech. He has identified three levels of human signals that we’ve lost in adopting AI into our communication. The first level is that of basic humanity signals, cues that speak to our authenticity as a human being like moments of vulnerability or personal rituals, which say to others, “This is me, I’m human.” The second level consists of attention and effort signals that prove “I cared enough to write this myself.” And the third level is ability signals which show our sense of humor, our competence, and our real selves to others. It’s the difference between texting someone, “I’m sorry you’re upset” versus “Hey sorry I freaked at dinner, I probably shouldn’t have skipped therapy this week.” One sounds flat; the other sounds human.


Here are some thoughts:

The increasing influence of AI language models like ChatGPT on everyday language, as highlighted in the article, holds significant implications for practicing psychologists. As these models shape linguistic trends—boosting the use of certain words and phrases—patients may unconsciously adopt these patterns in therapy sessions. This shift could reflect broader cultural changes in communication, potentially affecting how individuals articulate emotions, experiences, and personal narratives. Psychologists must remain attuned to these developments, as AI-mediated language might introduce subtle biases or homogenized expressions that could influence self-reporting and therapeutic dialogue.

Additionally, the rise of AI-generated content underscores the importance of digital literacy in mental health care. Many patients may turn to chatbots for support, making it essential for psychologists to help them critically assess the reliability and limitations of such tools. Understanding AI's linguistic impact also has research implications, particularly in qualitative studies and diagnostic tools that rely on natural language analysis. By recognizing these trends, psychologists can better navigate the evolving relationship between technology, language, and mental health, ensuring they provide informed and adaptive care in an increasingly AI-influenced world.

Tuesday, July 1, 2025

The Advantages of Human Evolution in Psychotherapy: Adaptation, Empathy, and Complexity

Gavazzi, J. (2025, May 24).
On Board with Professional Psychology.
American Board of Professional Psychology.
Issue 5.

Abstract

The rapid advancement of artificial intelligence, particularly Large Language Models (LLMs), has generated significant concern among psychologists regarding potential impacts on therapeutic practice. 

This paper examines the evolutionary advantages that position human psychologists as irreplaceable in psychotherapy, despite technological advances. Human evolution has produced sophisticated capacities for genuine empathy, social connection, and adaptive flexibility that are fundamental to effective therapeutic relationships. These evolutionarily derived abilities include biologically rooted emotional understanding, authentic empathetic responses, and the capacity for nuanced, context-dependent decision-making. In contrast, LLMs lack consciousness, genuine emotional experience, and the evolutionary framework necessary for deep therapeutic insight. While LLMs can simulate empathetic responses through linguistic patterns, they operate as statistical models without true emotional comprehension or theory of mind. The therapeutic alliance, the cornerstone of successful psychotherapy, depends on authentic human connection and shared experiential understanding that transcends algorithmic processes. Human psychologists demonstrate adaptive complexity in understanding attachment styles, trauma responses, and individual patient needs that current AI cannot replicate.

The paper concludes that while LLMs serve valuable supportive roles in documentation, treatment planning, and professional reflection, they cannot replace the uniquely human relational and interpretive aspects essential to psychotherapy. Psychologists should integrate these technologies as resources while maintaining focus on the evolutionarily-grounded human capacities that define effective therapeutic practice.

Tuesday, June 10, 2025

Prejudiced patients: Ethical considerations for addressing patients’ prejudicial comments in psychotherapy.

Mbroh, H., Najjab, A., et al. (2020).
Professional Psychology: Research and Practice, 51(3), 284–290.

Abstract

Psychologists will often encounter patients who make prejudiced comments during psychotherapy. Some psychologists may argue that the obligations to social justice require them to address these comments. Others may argue that the obligation to promote the psychotherapeutic process requires them to ignore such comments. The authors present a decision-making strategy and an intervention based on principle-based ethics for thinking through such dilemmas.

Public Significance Statement

This article identifies ethical principles psychologists should consider when deciding whether to address their patients’ prejudicial comments in psychotherapy. It also provides an intervention strategy for addressing patients’ prejudicial comments.


Here are some thoughts:

The article explores how psychologists should ethically respond when clients express prejudicial views during therapy. The authors highlight a tension between two key obligations: the duty to promote the well-being of the patient (beneficence) and the broader responsibility to challenge social injustice (general beneficence). Using principle-based ethics, the article presents multiple real-life scenarios in which clients make discriminatory remarks—whether racist, ageist, sexist, or homophobic—and examines the ethical dilemmas that arise. In each case, psychologists must consider the context, potential harm, and therapeutic alliance before choosing whether or how to intervene. The authors emphasize that while tolerance for clients' values is important, it should not extend to condoning harmful biases. They propose a structured approach to addressing prejudice in session: show empathy, create cognitive dissonance by highlighting harm, and invite the client to explore the issue further. Recommendations include ongoing education, self-reflection, consultation, and thoughtful, non-punitive interventions. Ultimately, the article argues that addressing patient prejudice is ethically justifiable when done skillfully, and doing so can improve both individual therapy outcomes and societal well-being.

Monday, June 9, 2025

No Change? A Grounded Theory Analysis of Depressed Patients' Perspectives on Non-improvement in Psychotherapy

De Smet, M. M., et al. (2019).
Frontiers in Psychology, 10.

Aim: Understanding the effects of psychotherapy is a crucial concern for both research and clinical practice, especially when outcome tends to be negative. Yet, while outcome is predominantly evaluated by means of quantitative pre-post outcome questionnaires, it remains unclear what this actually means for patients in their daily lives. To explore this meaning, it is imperative to combine treatment evaluation with quantitative and qualitative outcome measures. This study investigates the phenomenon of non-improvement in psychotherapy, by complementing quantitative pre-post outcome scores that indicate no reliable change in depression symptoms with a qualitative inquiry of patients' perspectives.

Methods: The study took place in the context of a Randomised Controlled Trial evaluating time-limited psychodynamic and cognitive behavioral therapy for major depression. A mixed methods study was conducted including patients' pre-post outcome scores on the BDI-II-NL and post treatment Client Change Interviews. Nineteen patients whose data showed no reliable change in depression symptoms were selected. A grounded theory analysis was conducted on the transcripts of patients' interviews.
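For readers curious how "no reliable change" is determined, selection like this typically rests on the Jacobson–Truax Reliable Change Index, which scales a pre–post difference score by the measure's standard error of difference. Here is a minimal sketch in Python; the BDI-II standard deviation and reliability values below are illustrative assumptions, not figures reported in the study:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax Reliable Change Index (RCI).

    pre, post:    scores at baseline and post-treatment
    sd_pre:       standard deviation of the measure at baseline
    reliability:  test-retest reliability of the measure
    """
    se_measurement = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2) * se_measurement                # standard error of the difference
    return (post - pre) / s_diff

# Hypothetical BDI-II values for illustration (sd and reliability are assumptions):
rci = reliable_change_index(pre=28, post=22, sd_pre=10, reliability=0.93)
print(round(rci, 2))       # -1.6
print(abs(rci) < 1.96)     # True: change not statistically reliable
```

An |RCI| below 1.96 means the observed change cannot be distinguished from measurement error at the 5% level, which is what "no reliable change" denotes for the nineteen patients selected here.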

Findings: From the patients' perspective, non-improvement can be understood as being stuck between knowing versus doing, resulting in a stalemate. Positive changes (mental stability, personal strength, and insight) were stimulated by therapy offering moments of self-reflection and guidance, the benevolent therapist approach and the context as important motivations. Remaining issues (ambition to change but inability to do so) were attributed to the therapy hitting its limits, patients' resistance and impossibility and the context as a source of distress. “No change” in outcome scores therefore seems to involve a “partial change” when considering the patients' perspectives.

Conclusion: The study shows the value of integrating qualitative first-person analyses into standard quantitative outcome evaluation and particularly for understanding the phenomenon of non-improvement. It argues for more multi-method and multi-perspective research to gain a better understanding of (negative) outcome and treatment effects. Implications for both research and practice are discussed.

Here are some thoughts:

This study explores the perspectives of depressed patients who experienced no improvement in psychotherapy. While quantitative measures often assess therapy outcomes, the reasons behind a lack of progress from the patients' viewpoint remain unclear. Through a grounded theory analysis, the researchers aimed to understand this phenomenon. The study highlights the importance of considering the patient's subjective experience when evaluating the effectiveness of psychotherapy, particularly in cases where standard outcome measures might not capture the nuances of non-improvement.

Wednesday, May 21, 2025

Optimized Informed Consent for Psychotherapy: Protocol for a Randomized Controlled Trial

Gerke, L. et al. (2022).
JMIR Research Protocols, 11(9), e39843.

Abstract
Background:
Informed consent is a legal and ethical prerequisite for psychotherapy. However, in clinical practice, consistent strategies to obtain informed consent are scarce. Inconsistencies exist regarding the overall validity of informed consent for psychotherapy as well as the disclosure of potential mechanisms and negative effects, the latter posing a moral dilemma between patient autonomy and nonmaleficence.

Objective:
This protocol describes a randomized controlled web-based trial aiming to investigate the efficacy of a one-session optimized informed consent consultation.

Methods:
The optimized informed consent consultation was developed to provide information on the setting, efficacy, mechanisms, and negative effects via expectation management and shared decision-making techniques. A total of 122 participants with an indication for psychotherapy will be recruited. Participants will take part in a baseline assessment, including a structured clinical interview for Diagnostic and Statistical Manual of Mental Disorders-fifth edition (DSM-5) disorders. Eligible participants will be randomly assigned either to a control group receiving an information brochure about psychotherapy as treatment as usual (n=61) or to an intervention group receiving treatment as usual and the optimized informed consent consultation (n=61). Potential treatment effects will be measured after the treatment via interview and patient self-report and at 2 weeks and 3 months follow-up via web-based questionnaires. Treatment expectation is the primary outcome. Secondary outcomes include the capacity to consent, decisional conflict, autonomous treatment motivation, adherence intention, and side-effect expectations.
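For readers unfamiliar with how a 1:1 allocation stays exactly balanced at 61/61, permuted-block randomization is one common way to do it. The sketch below is purely illustrative; the block size, seed, and arm labels are assumptions, not the trial's actual allocation procedure:

```python
import random

def block_randomize(participant_ids, block_size=2, seed=2021):
    """Assign participants to two arms in permuted blocks so the
    arms stay balanced throughout recruitment (61/61 for n=122)."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the sketch
    arms = []
    for _ in range(0, len(participant_ids), block_size):
        # each block holds an equal number of each arm, in random order
        block = ["control", "intervention"] * (block_size // 2)
        rng.shuffle(block)
        arms.extend(block)
    return dict(zip(participant_ids, arms))

# 122 participants, as in the protocol
allocation = block_randomize(list(range(1, 123)))
print(sum(arm == "control" for arm in allocation.values()))  # 61
```

Because every block contributes equally to both arms, the control and intervention groups end at exactly 61 each, no matter how the shuffles fall.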

Results:
This trial received a positive ethics vote by the local ethics committee of the Center for Psychosocial Medicine, University-Medical Center Hamburg-Eppendorf, Hamburg, Germany on April 1, 2021, and was prospectively registered on June 17, 2021. The first participant was enrolled in the study on August 5, 2021. We expect to complete data collection in December 2022. After data analysis within the first quarter of 2023, the results will be submitted for publication in peer-reviewed journals in summer 2023.

Conclusions:
If effective, the optimized informed consent consultation might not only constitute an innovative clinical tool to meet the ethical and legal obligations of informed consent but also strengthen the contributing factors of psychotherapy outcome, while minimizing nocebo effects and fostering shared decision-making.

Here are some thoughts:

This research study investigated an optimized informed consent process in psychotherapy. Recognizing inconsistencies in standard practices, the study tested an enhanced consultation method designed to improve patients' understanding of treatment, manage their expectations, and promote shared decision-making. By comparing this enhanced approach to standard practice with a cohort of 122 participants, the researchers aimed to demonstrate the benefits of a more comprehensive and collaborative informed consent process in fostering positive treatment expectations and related outcomes. The findings were anticipated to provide evidence for a more effective and ethical approach to initiating psychotherapy.

Thursday, April 17, 2025

How do clinical psychologists make ethical decisions? A systematic review of empirical research

Grace, B., Wainwright, T., et al. (2020). 
Clinical Ethics, 15(4), 213–224.

Abstract

Given the nature of the discipline, it might be assumed that clinical psychology is an ethical profession, within which effective ethical decision-making is integral. How then, does this ethical decision-making occur? This paper describes a systematic review of empirical research addressing this question. The paucity of evidence related to this question meant that the scope was broadened to include other professions who deliver talking therapies. This review could support reflective practice about what may be taken into account when making ethical decisions and highlight areas for future research. Using academic search databases, original research articles were identified from peer-reviewed journals. Articles using qualitative (n = 3), quantitative (n = 8) and mixed methods (n = 2) were included. Two theoretical models of aspects of ethical decision-making were identified. Areas of agreement and debate are described in relation to factors linked to the professional, which impacted ethical decision-making. Factors relating to ethical dilemmas, which impacted ethical decision-making, are discussed. Articles were appraised by two independent raters, using quality assessment criteria, which suggested areas of methodological strengths and weaknesses. Comparison and synthesis of results revealed that the research did not generally pertain to current clinical practice of talking therapies or the particular socio-political context of the UK healthcare system. There was limited research into ethical decision-making amongst specific professions, including clinical psychology. Generalisability was limited due to methodological issues, indicating avenues for future research.

Here are some thoughts:

This article is a systematic review of empirical research on how clinical psychologists and related professionals make ethical decisions. The review addresses the question of how professionals who deliver psychotherapy make ethical decisions related to their work. The authors searched academic databases for original research articles from peer-reviewed journals and included qualitative, quantitative, and mixed-methods studies. The review identified two theoretical models of ethical decision-making and discussed factors related to the professional and ethical dilemmas that impact decision-making. The authors found that the research did not generally pertain to current clinical practice or the socio-political context of the UK healthcare system and that there was limited research into ethical decision-making among specific professions, including clinical psychology. The authors suggest that there is a need for further up-to-date, profession-specific, mixed-methods research in this area.

Friday, April 4, 2025

Can AI replace psychotherapists? Exploring the future of mental health care.

Zhang, Z., & Wang, J. (2024).
Frontiers in Psychiatry, 15, 1444382.

In the current technological era, Artificial Intelligence (AI) has transformed operations across numerous sectors, enhancing everything from manufacturing automation to intelligent decision support systems in financial services. In the health sector, particularly, AI has not only refined the accuracy of disease diagnoses but has also ushered in groundbreaking advancements in personalized medicine. The mental health field, amid a global crisis characterized by increasing demand and insufficient resources, is witnessing a significant paradigm shift facilitated by AI, presenting novel approaches that promise to reshape traditional mental health care models (see Figure 1).

Mental health, once a stigmatized aspect of health care, is now recognized as a critical component of overall well-being, with disorders such as depression becoming leading causes of global disability (WHO). Traditional mental health care, reliant on in-person consultations, is increasingly perceived as inadequate against the growing prevalence of mental health issues. AI’s role in mental health care is multifaceted, encompassing predictive analytics, therapeutic interventions, clinician support tools, and patient monitoring systems. For instance, AI algorithms are increasingly used to predict treatment outcomes by analyzing patient data. Meanwhile, AI-powered interventions, such as virtual reality exposure therapy and chatbot-delivered cognitive behavioral therapy, are being explored, though they are at varying stages of validation. Each of these applications is evolving at its own pace, influenced by technological advancements and the need for rigorous clinical validation.

The article is linked above.

Here are some thoughts: 

This article explores the evolving role of artificial intelligence (AI) in mental health care, particularly its potential to support or even replace some functions of human psychotherapists. With global demand for mental health services rising and traditional care systems under strain, AI is emerging as a tool to enhance diagnosis, personalize treatments, and provide therapeutic interventions through technologies like chatbots and virtual reality therapy. While early research shows promise, particularly in managing conditions such as anxiety and depression, existing studies are limited and call for larger, long-term trials to determine effectiveness and safety. The authors emphasize that while AI may supplement mental health care and address gaps in service delivery, it must be integrated responsibly, with careful attention to algorithmic bias, ethical considerations, and the irreplaceable human elements of psychotherapy, such as empathy and nuanced judgment.

Wednesday, April 2, 2025

Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation

Stade, E. C., et al. (2024).
Npj Mental Health Research, 3(1).

Abstract

Large language models (LLMs) such as OpenAI’s GPT-4 (which powers ChatGPT) and Google’s Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.

The article is linked above.

Here are some thoughts:

This article examines the potential of large language models (LLMs), such as GPT-4 and Google’s Gemini, to support and transform behavioral healthcare, particularly psychotherapy. LLMs could enhance access to care by automating administrative tasks like documentation and session summaries, assisting with treatment planning, and supporting clinician training. The authors propose a phased integration of LLMs, starting with low-risk assistive roles, moving toward collaborative functions with human oversight, and potentially, though more controversially, fully autonomous psychotherapy.

While LLMs offer promising opportunities to improve efficiency and scale mental health services, the authors emphasize the need for cautious, evidence-based development due to significant ethical, safety, and accountability concerns. They call for ongoing collaboration between clinicians, researchers, and technologists to ensure LLM use in mental healthcare prioritizes patient safety, transparency, and effectiveness through rigorous testing and gradual implementation.

Friday, March 21, 2025

Should the Mental Health of Psychotherapists Be One of the Transtheoretical Principles of Change?

Knapp, S., Sternlieb, J., & Kornblith, S. 
(2025, February).
Psychotherapy Bulletin, 60(2).

Often, psychotherapy researchers find that their contributions to psychotherapy get lost in the discussions of complex methodological issues that appear far removed from the real-life work of psychotherapists. Consequently, few psychotherapists regularly read research-based studies, and researchers communicate primarily with each other and less with psychotherapists. Fortunately, the pioneering work of Castonguay et al. (2019) has identified evidence-supported principles of change that improve patient outcomes, regardless of the psychotherapist’s theoretical orientation. They help bridge the researcher/practitioner gap by identifying, in succinct terms, evidence-supported findings related to improved patient outcomes. Psychotherapy scholars identified these principles after exhaustively reviewing thousands of studies on psychotherapy.

Of course, none of the principles of change should be implemented in isolation. Nevertheless, together, they can guide psychotherapists on how to improve and personalize their treatment plans. Examples have been given of how psychotherapists can apply these change principles to improve the treatment of patients with suicidal thoughts (Knapp, 2022) and anxiety, depression, and other disorders (Castonguay et al., 2019).

Some of the principles appeared to support the conventional wisdom on what is effective in psychotherapy. For example, Principle 3 states, “Clients with more secure attachment may benefit more from psychotherapy than clients with less secure attachment” (McAleavey et al., 2019, p. 16). However, other principles conflict with some popular beliefs about the effectiveness of psychotherapy. For example, Principle 20 states, “Clients with substance use problems may be equally likely to benefit from psychotherapy delivered by a therapist with or without his or her own history of substance use problems” (McAleavey et al., 2019, p. 17).


Here are some thoughts:

The article argues for recognizing the mental health of psychotherapists as a transtheoretical principle of change, emphasizing its impact on patient outcomes. Building on the work of Castonguay et al. (2019), which identified principles that enhance patient outcomes across theoretical orientations, the authors propose that a psychotherapist's emotional well-being should be considered a key factor in effective treatment. They suggest that clients benefit more when their therapist experiences fewer symptoms of mental distress, highlighting the need for psychotherapists to prioritize self-care and emotional health.

Psychotherapists face numerous stressors, including administrative burdens, exposure to patient traumas, and the emotional demands of their work, all of which have intensified during the COVID-19 pandemic. Research indicates that higher levels of therapist burnout and distress correlate with poorer patient outcomes, underscoring the importance of addressing these issues. To enhance patient care, the article recommends integrating self-care practices into psychotherapy training and fostering supportive environments within institutions. By promoting self-awareness, self-compassion, and social connections, psychotherapists can better manage their emotional well-being and provide more effective treatment. The authors emphasize the need for ongoing research and open discussions to destigmatize mental health issues within the profession, ensuring that psychotherapists feel supported in seeking help when needed. Ultimately, prioritizing the mental health of psychotherapists is essential for improving both patient outcomes and the well-being of mental health professionals.

Thursday, February 20, 2025

Enhancing competencies for the ethical integration of religion and spirituality in psychological services

Currier, J. M. et al. (2023).
Psychological Services, 20(1), 40–50.

Abstract

Advancement of Spiritual and religious competencies aligns with increasing attention to the pivotal role of multiculturalism and intersectionality, as well as shifts in organizational values and strategies, that shape the delivery of psychological services (e.g., evidence-based practice). A growing evidence base also attests to ethical integration of peoples’ religious faith and/or spirituality (R/S) in their mental health care as enhancing the utilization and efficacy of psychological services. When considering the essential attitudes, knowledge, and skills for addressing religious and spiritual aspects of clients’ lives, lack of R/S competencies among psychologists and other mental health professionals impedes ethical and effective practice. The purpose of this article is to discuss the following: (a) skills for negotiating ethical challenges with spiritually integrated care; and (b) strategies for assessing a client’s R/S. We also describe systemic barriers to ethical integration of R/S in mental health professions and briefly introduce our Spiritual and Religious Competencies project. Looking ahead, a strategic, interdisciplinary, and comprehensive approach is needed to transform the practice of mental health care in a manner that more fully aligns with the values, principles, and expectations across our disciplines’ professional ethical codes and accreditation standards. We propose that explicit training across mental health professions is necessary to more fully honor R/S diversity and the importance of this layer of identity and intersectionality in many peoples’ lives.

Impact Statement

Psychologists and other mental health professionals often lack necessary awareness, knowledge, and skills to address their clients’ religious faith and/or spirituality (R/S). This article explores ethical considerations regarding Spiritual and Religious Competencies in training and clinical practice, approaches to R/S assessment, as well as barriers and solutions to ethical integration of R/S in psychological services.

Sunday, February 9, 2025

Does Morality Do Us Any Good?

Nikhil Kishnan
The New Yorker
Originally published 23 Dec 24

Here is an excerpt:

As things became more unequal, we developed a paradoxical aversion to inequality. In time, patterns began to appear that are still with us. Kinship and hierarchy were replaced or augmented by coöperative relationships that individuals entered into voluntarily—covenants, promises, and the economically essential contracts. The people of Europe, at any rate, became what Joseph Henrich, the Harvard evolutionary biologist and anthropologist, influentially termed “WEIRD”: Western, educated, industrialized, rich, and democratic. WEIRD people tend to believe in moral rules that apply to every human being, and tend to downplay the moral significance of their social communities or personal relations. They are, moreover, much less inclined to conform to social norms that lack a moral valence, or to defer to such social judgments as shame and honor, but much more inclined to be bothered by their own guilty consciences.

That brings us to the past fifty years, decades that inherited the familiar structures of modernity: capitalism, liberal democracy, and the critics of these institutions, who often fault them for failing to deliver on the ideal of human equality. The civil-rights struggles of these decades have had an urgency and an excitement that, Sauer writes, make their supporters think victory will be both quick and lasting. When it is neither, disappointment produces the “identity politics” that is supposed to be the essence of the present cultural moment.

His final chapter, billed as an account of the past five years, connects disparate contemporary phenomena—vigilance about microaggressions and cultural appropriation, policies of no-platforming—as instances of the “punitive psychology” of our early hominin ancestors. Our new sensitivities, along with the twenty-first-century terms they’ve inspired (“mansplaining,” “gaslighting”), guide us as we begin to “scrutinize the symbolic markers of our group membership more and more closely and to penalize any non-compliance.” We may have new targets, Sauer says, but the psychology is an old one.


Here are some thoughts:

Understanding the origins of human morality is relevant for practicing psychologists, as it provides important insights into the psychological foundations of our moral behaviors and professional social interactions. These insights bear both on work with patients and on our own ethics code. The article explores how our moral intuitions have evolved over millions of years, revealing that our current moral frameworks are not fixed absolutes but dynamic systems shaped by biological and social processes. Other scholars, such as Haidt, de Waal, and Tomasello, have conceptualized morality in similar ways.

Hanno Sauer's work illuminates a similar journey of moral development, tracing how early human survival strategies of cooperation and altruism gradually transformed into complex ethical systems. Psychologists can gain insights from this evolutionary perspective, understanding that our moral convictions are deeply rooted in our species' adaptive mechanisms rather than being purely rational constructs.

The article highlights several key insights:
  • Moral beliefs are significantly influenced by social context and evolutionary history
  • Our moral intuitions often precede rational justification
  • Cooperation and punishment played crucial roles in shaping human moral psychology
  • Universal moral values exist across different cultures, despite apparent differences

Particularly compelling is the exploration of how our "punitive psychology" emerged as a mechanism for social regulation, demonstrating how psychological processes have been instrumental in creating societal norms. For practicing psychologists, this understanding can provide a more nuanced approach to understanding patient behaviors, moral reasoning, and the complex interplay between individual experiences and broader evolutionary patterns. Notably, morality is always contextual, as I have pointed out in other summaries.

Finally, the article offers an optimistic perspective on moral progress, suggesting that our fundamental values are more aligned than we might initially perceive. This insight can be helpful for psychologists working with individuals from diverse backgrounds, emphasizing our shared psychological and evolutionary heritage.

Sunday, January 19, 2025

Artificial Intelligence for Psychotherapy: A Review of the Current State and Future Directions

Beg et al. (2024). 
Indian Journal of Psychological Medicine.

Abstract

Background:

Psychotherapy is crucial for addressing mental health issues but is often limited by accessibility and quality. Artificial intelligence (AI) offers innovative solutions, such as automated systems for increased availability and personalized treatments to improve psychotherapy. Nonetheless, ethical concerns about AI integration in mental health care remain.

Aim:

This narrative review explores the literature on AI applications in psychotherapy, focusing on their mechanisms, effectiveness, and ethical implications, particularly for depressive and anxiety disorders.

Methods:

A review was conducted, spanning studies from January 2009 to December 2023, focusing on empirical evidence of AI’s impact on psychotherapy. Following PRISMA guidelines, the authors independently screened and selected relevant articles. The analysis of 28 studies provided a comprehensive understanding of AI’s role in the field.

Results:

The results suggest that AI can enhance psychotherapy interventions for people with anxiety and depression, especially chatbots and internet-based cognitive-behavioral therapy. However, to achieve optimal outcomes, the ethical integration of AI necessitates resolving concerns about privacy, trust, and interaction between humans and AI.

Conclusion:

The study emphasizes the potential of AI-powered cognitive-behavioral therapy and conversational chatbots to address symptoms of anxiety and depression effectively. The article highlights the importance of cautiously integrating AI into mental health services, considering privacy, trust, and the relationship between humans and AI. This integration should prioritize patient well-being and assist mental health professionals while also considering ethical considerations and the prospective benefits of AI.

Here are some thoughts:

Artificial Intelligence (AI) is emerging as a promising tool in psychotherapy, offering innovative solutions to address mental health challenges. The comprehensive review explores the potential of AI-powered interventions, particularly for anxiety and depression disorders.

The study highlights several key insights about AI's role in mental health care. Researchers found that AI technologies like chatbots and internet-based cognitive-behavioral therapy (iCBT) can enhance psychological interventions by increasing accessibility and providing personalized treatment approaches. Machine learning, natural language processing, and deep learning are particularly crucial technologies enabling these advancements.

Despite the promising potential, the review emphasizes the critical need for careful integration of AI into mental health services. Ethical considerations remain paramount, with researchers stressing the importance of addressing privacy concerns, maintaining patient trust, and preserving the human element of therapeutic interactions. While AI can offer cost-effective and stigma-reducing solutions, it cannot yet fully replicate the profound empathy of face-to-face therapy.

The research examined 28 studies spanning from 2009 to 2023, revealing that AI interventions show particular promise in managing symptoms of anxiety and depression. Chatbots and iCBT demonstrated effectiveness in reducing psychological distress, though their impact on overall life satisfaction varies. The study calls for continued research to optimize AI's implementation in mental health care, balancing technological innovation with ethical principles.

Globally, organizations like the World Health Organization are developing regulatory frameworks to guide AI's responsible use in healthcare. In India, the Indian Council of Medical Research has already established guidelines for AI applications in biomedical research, signaling a growing recognition of this technology's potential.

Saturday, January 4, 2025

Artificial Intelligence (AI) Psychotherapy: Coming Soon to a Consultation Room Near You?

Swartz, H. A. (2023).
American Journal of Psychotherapy,
76(2), 55–56.


News stories about generative artificial intelligence (AI) abound, alternately predicting that AI will cause humanity’s demise and touting it as the solution to everything from climate change to health care scarcity. In 2022, OpenAI’s release of ChatGPT, an easy-to-use AI interface that produces fluent text in response to prompts, intensified highly ambivalent discussions about AI. Pundits and knowledgeable experts alike marveled at ChatGPT’s ability to produce text that resembles human writing while acknowledging that the speed of its evolution was outstripping our ability to understand its risks. This technology performed better on law school and medical school entrance exams than many human applicants, leading to predictions that AI will eventually render some professions obsolete. Psychotherapists, too, seem at risk for being replaced by AI. Are fears of AI supplanting psychotherapists reasonable?

To explore how AI might respond to a patient in distress, I typed this prompt into ChatGPT: “I feel sad. Can you help me?” It quickly generated the following text:
I am sorry to hear that you are feeling sad. I’ll do my best to help you. It’s important to remember that I’m an AI language model and may not fully understand or empathize with your emotions, but I can offer some general suggestions and support. . . . Remember, everyone experiences sadness from time to time, and it’s okay to seek help when you need it. You don’t have to face it alone.
Despite being impressed by the speed and conversational tone of ChatGPT’s text generation and the convenience of receiving an immediate response during hours when a human therapist would likely be unavailable, the formulaic exchange and canned feedback provided by ChatGPT left me with doubts about its ability to provide genuine soothing to humans experiencing depression. 


Here are some thoughts:

This editorial examines the potential of artificial intelligence (AI) in psychotherapy. While AI chatbots offer increased accessibility and convenience, providing self-help tools and improving symptom management, studies reveal limitations, including a lack of genuine human connection and potential risks like increased self-harm. The author concludes that AI is a useful supplementary tool, particularly in low-resource settings, but cannot replace human therapists for complex emotional and interpersonal issues. Ultimately, a blended approach incorporating both AI and human interaction is suggested for optimal therapeutic outcomes.

Wednesday, November 20, 2024

Being facially expressive is socially advantageous

Kavanagh, E., Whitehouse, J., & Waller, B. (2024)
Scientific Reports, 14(1). 

Abstract

Individuals vary in how they move their faces in everyday social interactions. In a first large-scale study, we measured variation in dynamic facial behaviour during social interaction and examined dyadic outcomes and impression formation. In Study 1, we recorded semi-structured video calls with 52 participants interacting with a confederate across various everyday contexts. Video clips were rated by 176 independent participants. In Study 2, we examined video calls of 1315 participants engaging in unstructured video-call interactions. Facial expressivity indices were extracted using automated Facial Action Coding Scheme analysis and measures of personality and partner impressions were obtained by self-report. Facial expressivity varied considerably across participants, but little across contexts, social partners or time. In Study 1, more facially expressive participants were more well-liked, agreeable, and successful at negotiating (if also more agreeable). Participants who were more facially competent, readable, and perceived as readable were also more well-liked. In Study 2, we replicated the findings that facial expressivity was associated with agreeableness and liking by their social partner, and additionally found it to be associated with extraversion and neuroticism. Findings suggest that facial behaviour is a stable individual difference that proffers social advantages, pointing towards an affiliative, adaptive function.
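The expressivity indices described in the abstract come from automated Facial Action Coding Scheme (FACS) analysis. As a rough illustration of the general idea (not the authors' actual pipeline), a simple index can be derived from per-frame action-unit (AU) intensities; the data, AU count, and combination rule below are all invented for the sketch:

```python
import numpy as np

# Hypothetical output of an automated FACS system: per-frame intensity
# (0-5 scale) for a handful of action units; illustrative values only.
rng = np.random.default_rng(0)
au_intensities = rng.uniform(0, 5, size=(300, 6))  # 300 frames x 6 AUs

# One simple expressivity index: mean AU activation across frames,
# plus a variability term capturing how dynamic the face is over time.
mean_activation = au_intensities.mean()
variability = au_intensities.std(axis=0).mean()
expressivity_index = mean_activation + variability

print(f"expressivity index: {expressivity_index:.2f}")
```

Real systems (e.g., OpenFace-style toolkits) detect AUs from video first; the point here is only that "expressivity" is ultimately a summary statistic over such time series.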


Here are some thoughts:

The study on facial expressivity in social interactions offers valuable insights for psychologists engaging in psychotherapy. A key takeaway is the importance of facial expressions in building rapport with clients. Therapists can utilize their facial expressions to convey empathy, understanding, and interest, thereby fostering a positive therapeutic relationship. Conversely, being attentive to clients' facial expressivity can provide clues about their personality traits, such as extraversion and agreeableness, as well as their emotional regulation strategies.

Therapists should also develop awareness of their own facial expressions and their impact on clients. This self-awareness enables therapists to manage their emotional responses and maintain a neutral or supportive demeanor. Moreover, recognizing cultural differences in facial expressivity and display rules is crucial. Cultural norms may influence clients' facial behavior and interpretations, and therapists must be sensitive to these variations.

Facial expressivity plays a significant role in nonverbal communication, and therapists can harness this to convey emotional support, encouragement, or concern. This can enhance the therapeutic relationship and facilitate effective communication. Additionally, being aware of subtle, involuntary facial expressions (micro-expressions) can reveal underlying emotions or attitudes.

To integrate these findings into therapeutic practice, therapists should strive for authenticity and congruence in their facial expressions to build trust and rapport. Consideration should be given to incorporating facial expression training into therapist development programs. Furthermore, therapists must be mindful of power dynamics and cultural differences in facial expressivity. By leveraging facial expressivity, therapists can refine their approach, foster stronger relationships with clients, and ultimately improve treatment outcomes.

The study's findings also underscore the importance of considering individual differences in facial expressivity. Rather than assuming universality, therapists should recognize that each client's facial behavior is unique and influenced by their personality, cultural background, and emotional regulation strategies. By adopting a more nuanced understanding of facial expressivity, therapists can tailor their approach to better meet the needs of their clients and cultivate a more empathetic and supportive therapeutic environment.

Sunday, November 3, 2024

Your Therapist’s Notes Might Be Just a Click Away

Christina Caron
The New York Times
Originally posted 25 Sept 24

Stunned. Ambushed. Traumatized.

These were the words that Jeffrey, 76, used to describe how he felt when he stumbled upon his therapist’s notes after logging into an online patient portal in June.

There was a summary of the physical and emotional abuse he endured during childhood. Characterizations of his most intimate relationships. And an assessment of his insight (fair) and his judgment (poor). Each was written by his new psychologist, whom he had seen four times.

“I felt as though someone had tied me up in a chair and was slapping me, and I was defenseless,” said Jeffrey, whose psychologist had diagnosed him with complex post-traumatic stress disorder.

Jeffrey, who lives in New York City and asked to be identified by his middle name to protect his privacy, was startled not only by the details that had been included in the visit summaries, but also by some inaccuracies.

And because his therapist practiced at a large hospital, he worried that his other doctors who used the same online records system would read the notes.

In the past, if patients wanted to see what their therapists had written about them, they had to formally request their records. But after a change in federal law, it has become increasingly common for patients in health care systems across the country to view their notes online — it can be as easy as logging into patient portals like MyChart.


There are some significant ethical issues here. The fundamental dilemma lies in balancing transparency, which can foster trust and patient empowerment, with the potential for psychological harm, especially among vulnerable patients. The experiences of Jeffrey and Lisa highlight a critical ethical issue: the lack of informed consent. Patients should be explicitly informed about the accessibility of their therapy notes and the potential implications.

The psychological impact of this practice is profound. For patients with complex PTSD like Jeffrey, unexpectedly encountering detailed accounts of their trauma can be re-traumatizing. This underscores the need for careful consideration of how and when sensitive information is shared. Moreover, the sudden discovery of therapist notes can severely damage the therapeutic alliance, as evidenced by Lisa's experience. Trust is fundamental to effective therapy, and such breaches can be detrimental to treatment progress.

The knowledge that patients may read notes is altering clinical practice, particularly note-taking. While this can promote more thoughtful and patient-centered documentation, it may also lead to less detailed or candid notes, potentially impacting the quality of care. Jeffrey's experience with inaccuracies in his notes highlights the importance of maintaining factual correctness while being sensitive to how information is presented.

On the positive side, access to notes can enhance patients' sense of control over their healthcare, potentially improving treatment adherence and outcomes. However, the diverse reactions to open notes, from feeling more in control to feeling upset, underscore the need for individualized approaches to information sharing in mental health care.

To navigate this complex terrain, several recommendations emerge. Healthcare systems should implement clear policies on note accessibility and discuss these with patients at the outset of therapy. Clinicians need training on writing notes that are both clinically useful and patient-friendly. Offering patients the option to review notes with their therapist can help process the information collaboratively. Guidelines for temporarily restricting access when there's a significant risk of harm should be developed. Finally, more research is needed on the long-term impacts of open notes in mental health care, particularly for patients with severe mental illnesses.

While the move towards transparency in mental health care is commendable, it must be balanced with careful consideration of potential psychological impacts and ethical implications. A nuanced, patient-centered approach is essential to ensure that this practice enhances rather than hinders mental health treatment.

Saturday, October 26, 2024

Suicidal Ideation and Suicide Attempts After Direct or Indirect Psychotherapy: A Systematic Review and Meta-Analysis

van Ballegooijen, et al. (2024).
JAMA Psychiatry, e242854.
Advance online publication.


Abstract

Importance: Suicidal ideation and suicide attempts are debilitating mental health problems that are often treated with indirect psychotherapy (ie, psychotherapy that focuses on other mental health problems, such as depression or personality disorders). The effects of direct and indirect psychotherapy on suicidal ideation have not yet been examined in a meta-analysis, and several trials have been published since a previous meta-analysis examined the effect size of direct and indirect psychotherapy on suicide attempts.

Objective: To investigate the effect sizes of direct and indirect psychotherapy on suicidal ideation and the incidence of suicide attempts.

Data sources: PubMed, Embase, PsycInfo, Web of Science, Scopus, and the Cochrane Central Register of Controlled Trials were searched for articles published up until April 1, 2023.

Results: Of 15 006 studies identified, 147 comprising 193 comparisons and 11 001 participants were included. Direct and indirect psychotherapy conditions were associated with reduced suicidal ideation (direct: g, -0.39; 95% CI, -0.53 to -0.24; I2, 83.2; indirect: g, -0.30; 95% CI, -0.42 to -0.18; I2, 52.2). Direct and indirect psychotherapy conditions were also associated with reduced suicide attempts (direct: RR, 0.72; 95% CI, 0.62 to 0.84; I2, 40.5; indirect: RR, 0.68; 95% CI, 0.48 to 0.95; I2, 0). Sensitivity analyses largely confirmed these results.

Conclusions and relevance: Direct and indirect interventions had similar effect sizes for reducing suicidal ideation and suicide attempts. Suicide prevention strategies could make greater use of indirect treatments to provide effective interventions for people who would not likely seek treatment for suicidal ideation or self-harm.

My interpretation:

A recent systematic review and meta-analysis found that both direct (focused on suicidal thoughts and behaviors) and indirect (treating related issues like depression) psychotherapies can significantly reduce suicidal ideation and suicide attempts, suggesting that even treatments not explicitly targeting suicide can still be effective in lowering suicide risk. However, the effect sizes for direct and indirect interventions were similar, indicating that directly addressing suicidal thoughts may not necessarily provide a greater benefit than treating associated symptoms. This finding is inconsistent with some other research and practice recommendations.
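For readers less familiar with the statistics in the abstract, pooled effect sizes such as the risk ratios above are typically produced by a random-effects meta-analysis. Here is a minimal sketch of the DerSimonian-Laird procedure, using invented per-study log risk ratios and variances (not the data from this meta-analysis):

```python
import math

# Hypothetical per-study data: (log risk ratio, variance of log RR).
# These numbers are illustrative only.
studies = [(-0.45, 0.04), (-0.20, 0.02), (-0.35, 0.06), (-0.25, 0.03)]

# Fixed-effect weights and Cochran's Q (heterogeneity statistic)
w = [1 / v for _, v in studies]
y = [lr for lr, _ in studies]
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1

# DerSimonian-Laird estimate of between-study variance tau^2
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, 95% CI, and I^2
w_re = [1 / (v + tau2) for _, v in studies]
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
rr = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled RR = {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.1f}%")
```

An RR below 1 with a confidence interval that excludes 1 (as in the abstract's 0.72 for direct psychotherapy) indicates a reduced rate of suicide attempts in the treatment condition; I² indexes how much of the variation across studies reflects true heterogeneity.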

Monday, April 15, 2024

On the Ethics of Chatbots in Psychotherapy.

Benosman, M. (2024, January 7).
PsyArXiv Preprints
https://doi.org/10.31234/osf.io/mdq8v

Introduction:

In recent years, the integration of chatbots in mental health care has emerged as a groundbreaking development. These artificial intelligence (AI)-driven tools offer new possibilities for therapy and support, particularly in areas where mental health services are scarce or stigmatized. However, the use of chatbots in this sensitive domain raises significant ethical concerns that must be carefully considered. This essay explores the ethical implications of employing chatbots in mental health, focusing on issues of non-maleficence, beneficence, explicability, and care. Our main ethical question is: should we trust chatbots with our mental health and wellbeing?

Indeed, the recent pandemic has made mental health an urgent global problem. This fact, together with the widespread shortage in qualified human therapists, makes the proposal of chatbot therapists a timely, and perhaps, viable alternative. However, we need to be cautious about hasty implementations of such alternative. For instance, recent news has reported grave incidents involving chatbots-human interactions. For example, (Walker, 2023) reports the death of an eco-anxious man who committed suicide following a prolonged interaction with a chatbot named ELIZA, which encouraged him to put an end to his life to save the planet. Another individual was caught while executing a plan to assassinate the Queen of England, after a chatbot encouraged him to do so (Singleton, Gerken, & McMahon, 2023).

These are only a few recent examples that demonstrate the potential maleficence effect of chatbots on-fragile-individuals. Thus, to be ready to safely deploy such technology, in the context of mental health care, we need to carefully study its potential impact on patients from an ethics standpoint.


Here is my summary:

The article analyzes the ethical considerations around the use of chatbots as mental health therapists, from the perspectives of different stakeholders - bioethicists, therapists, and engineers. It examines four main ethical values:

Non-maleficence: Ensuring chatbots do not cause harm, either accidentally or deliberately. There is agreement that chatbots need rigorous evaluation and regulatory oversight like other medical devices before clinical deployment.

Beneficence: Ensuring chatbots are effective at providing mental health support. There is a need for evidence-based validation of their efficacy, while also considering broader goals like improving quality of life.

Explicability: The need for transparency and accountability around how chatbot algorithms work, so patients can understand the limitations of the technology.

Care: The inability of chatbots to truly empathize, which is a crucial aspect of effective human-based psychotherapy. This raises concerns about preserving patient autonomy and the risk of manipulation.

Overall, the different stakeholders largely agree on the importance of these ethical values, despite coming from different backgrounds. The text notes a surprising level of alignment, even between the more technical engineering perspective and the more humanistic therapist and bioethicist viewpoints. The key challenge seems to be ensuring chatbots can meet the high bar of empathy and care required for effective mental health therapy.

Sunday, February 25, 2024

Characteristics of Mental Health Specialists Who Shifted Their Practice Entirely to Telemedicine

Hailu, R., Huskamp, H. A., et al. (2024).
JAMA, 5(1), e234982. 

Introduction

The COVID-19 pandemic–related shift to telemedicine has been particularly prominent and sustained in mental health care. In 2021, more than one-third of mental health visits were conducted via telemedicine. While most mental health specialists have in-person and telemedicine visits, some have transitioned to fully virtual practice, perhaps for greater work-life flexibility (including avoiding commuting) and eliminating expenses of maintaining a physical clinic. The decision by some clinicians to practice only via telemedicine has gained importance due to Medicare’s upcoming requirement, effective in 2025, that patients have an annual in-person visit to receive telemedicine visits for mental illness and new requirements from some state Medicaid programs that clinicians offer in-person visits. We assessed the number and characteristics of mental health specialists who have shifted fully to telemedicine.

Discussion

In 2022, 13.0% of mental health specialists serving commercially insured or Medicare Advantage enrollees had shifted to telemedicine only. Rates were higher among female clinicians and those working in densely populated counties with higher real estate prices. A virtual-only practice allowing clinicians to work from home may be more attractive to female clinicians, who report spending more time on familial responsibilities, and those facing long commutes and higher office-space costs.

It is unclear how telemedicine-only clinicians will navigate new Medicare and Medicaid requirements for in-person care. While clinicians and patients may prefer in-person care, introducing in-person requirements for visits and prescribing could cause care interruptions, particularly for conditions such as opioid use disorder.

Our analysis is limited to clinicians treating patients with commercial insurance or Medicare Advantage and therefore may lack generalizability. We were also unable to determine where clinicians physically practiced, particularly if they had transitioned to virtual-health companies. Given the shortage of mental health clinicians, future research should explore whether a virtual-only model affects clinician burnout or workforce retention.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist."

Sam also guesses that the text format is one with which young people are most comfortable.

"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.

She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Friday, January 26, 2024

This Is Your Brain on Zoom

Leah Croll
MedScape.com
Originally posted 21 Dec 23

Here is an excerpt:

Zoom vs In-Person Brain Activity

The researchers took 28 healthy volunteers and recorded multiple neural response signals of them speaking in person vs on Zoom to see whether face-processing mechanisms differ depending upon social context. They used sophisticated imaging and neuromonitoring tools to monitor the real-time brain activity of the same pairs discussing the same exact things, once in person and once over Zoom.

When study participants were face-to-face, they had higher levels of synchronized neural activity, spent more time looking directly at each other, and demonstrated increased arousal (as indicated by larger pupil diameters), suggestive of heightened engagement and increased mutual exchange of social cues. In keeping with these behavioral findings, the study also found that face-to-face meetings produced more activation of the dorsal-parietal cortex on functional near-infrared spectroscopy. Similarly, in-person encounters were associated with more theta oscillations seen on electroencephalography, which are associated with face processing. These multimodal findings led the authors to conclude that there are probably separable neuroprocessing pathways for live faces presented in person and for the same live faces presented over virtual media.

It makes sense that virtual interfaces would disrupt the exchange of social cues. After all, it is nearly impossible to make eye contact in a Zoom meeting; in order to look directly at your partner, you need to look into the camera where you cannot see your partner's expressions and reactions. Perhaps current virtual technology limits our ability to detect more subtle facial movements. Plus, the downward angle of the typical webcam may distort the visual information that we are able to glean over virtual encounters. Face-to-face meetings, on the other hand, offer a direct line of sight that allows for optimal exchange of subtle social cues rooted in the eyes and facial expressions.
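The "synchronized neural activity" the researchers measured is often quantified as inter-brain synchrony, for example via windowed correlations between the two partners' neural time series. A minimal sketch of that idea, using simulated signals rather than the study's data:

```python
import numpy as np

# Hypothetical neural time series for two interaction partners (e.g., one
# fNIRS channel each); simulated signals only, not the study's recordings.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.1)                      # 60 s sampled at 10 Hz
shared = np.sin(2 * np.pi * 0.2 * t)           # a common slow component
a = shared + 0.5 * rng.standard_normal(t.size)
b = shared + 0.5 * rng.standard_normal(t.size)

# Windowed Pearson correlation as a simple synchrony index
win = 100  # 10 s windows
sync = [np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
        for i in range(0, t.size - win + 1, win)]
print(f"mean inter-brain synchrony: {np.mean(sync):.2f}")
```

In the simulation, the shared component drives the correlation up; in the study, face-to-face interaction presumably strengthens exactly this kind of shared signal relative to Zoom.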


Key findings:
  • Zoom meetings are less stimulating for the brain than face-to-face interactions. A study by Yale University found that brain activity associated with social processing is lower during Zoom calls compared to in-person conversations.
  • Reduced social cues on Zoom lead to increased cognitive effort. The lack of subtle nonverbal cues, like facial expressions and body language, makes it harder to read others and understand their intentions on Zoom. This requires the brain to work harder to compensate.
  • Constant video calls can be mentally taxing. Studies have shown that back-to-back Zoom meetings can increase stress and fatigue. This is likely due to the cognitive demands of processing visual information and the constant pressure to be "on."
Implications:
  • Be mindful of Zoom fatigue. Schedule breaks between meetings and allow time for your brain to recover.
  • Use Zoom strategically. Don't use Zoom for every meeting or interaction. When possible, opt for face-to-face conversations.
  • Enhance social cues on Zoom. Use good lighting and a clear webcam to make it easier for others to see your face and expressions. Use gestures and nonverbal cues to communicate more effectively.