Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, June 17, 2025

Ethical implication of artificial intelligence (AI) adoption in financial decision making.

Owolabi, O. S., Uche, P. C., et al. (2024).
Computer and Information Science, 17(1), 49.

Abstract

The integration of artificial intelligence (AI) into the financial sector has raised ethical concerns that need to be addressed. This paper analyzes the ethical implications of using AI in financial decision-making and emphasizes the importance of an ethical framework to ensure its fair and trustworthy deployment. The study explores various ethical considerations, including the need to address algorithmic bias, promote transparency and explainability in AI systems, and adhere to regulations that protect equity, accountability, and public trust. By synthesizing research and empirical evidence, the paper highlights the complex relationship between AI innovation and ethical integrity in finance. To tackle this issue, the paper proposes a comprehensive and actionable ethical framework that advocates for clear guidelines, governance structures, regular audits, and collaboration among stakeholders. This framework aims to maximize the potential of AI while minimizing negative impacts and unintended consequences. The study serves as a valuable resource for policymakers, industry professionals, researchers, and other stakeholders, facilitating informed discussions, evidence-based decision-making, and the development of best practices for responsible AI integration in the financial sector. The ultimate goal is to ensure fairness, transparency, and accountability while reaping the benefits of AI for both the financial sector and society.

Here are some thoughts:

This paper explores the ethical implications of using artificial intelligence (AI) in financial decision-making.  It emphasizes the necessity of an ethical framework to ensure AI is used fairly and responsibly.  The study examines ethical concerns like algorithmic bias, the need for transparency and explainability in AI systems, and the importance of regulations that protect equity, accountability, and public trust.  The paper also proposes a comprehensive ethical framework with guidelines, governance structures, regular audits, and stakeholder collaboration to maximize AI's potential while minimizing negative impacts.

These themes parallel concerns about using AI in the practice of psychology. Psychologists may also need to be aware of these issues in managing their own finances and wealth.

Monday, June 16, 2025

The impact of AI errors in a human-in-the-loop process

Agudo, U., Liberal, K. G., et al. (2024).
Cognitive Research: Principles and Implications, 9(1).

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time in which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/ Experiment 2 was preregistered.

Here are some thoughts:


This study explores the impact of AI errors in human-in-the-loop processes, where humans and AI systems collaborate in decision-making.  The research specifically investigates how the timing of AI support influences human judgment and decision accuracy.  The findings indicate that human judgment is negatively affected by incorrect algorithmic support, particularly when provided before the human's own judgment, leading to decreased accuracy.  This research highlights the complexities of human-computer interaction in automated decision-making contexts and emphasizes the need for a deeper understanding of how AI support systems can be effectively integrated to minimize errors and biases.    

This is important for psychologists because it sheds light on the cognitive biases and decision-making processes involved when humans interact with AI systems, which is an increasingly relevant area of study in the field.  Understanding these interactions can help psychologists develop interventions and strategies to mitigate negative impacts, such as automation bias, and improve the design of human-computer interfaces to optimize decision-making accuracy and reduce errors in various sectors, including public service, healthcare, and justice. 

Thursday, May 22, 2025

On bullshit, large language models, and the need to curb your enthusiasm

Tigard, D. W. (2025).
AI and Ethics.

Abstract

Amidst all the hype around artificial intelligence (AI), particularly regarding large language models (LLMs), generative AI and chatbots like ChatGPT, a surge of headlines is instilling caution and even explicitly calling “bullshit” on such technologies. Should we follow suit? What exactly does it mean to call bullshit on an AI program? When is doing so a good idea, and when might it not be? With this paper, I aim to provide a brief guide on how to call bullshit on ChatGPT and related systems. In short, one must understand the basic nature of LLMs, how they function and what they produce, and one must recognize bullshit. I appeal to the prominent work of the late Harry Frankfurt and suggest that recent accounts jump too quickly to the conclusion that LLMs are bullshitting. In doing so, I offer a more level-headed approach to calling bullshit, and accordingly, a way of navigating some of the recent critiques of generative AI systems.

Here are some thoughts:

This paper examines the application of Harry Frankfurt's theory of "bullshit" to large language models (LLMs) like ChatGPT. It discusses the controversy around labeling AI-generated content as "bullshit," arguing for a more nuanced approach. The author suggests that while LLM outputs might resemble bullshit due to their lack of concern for truth, LLMs themselves don't fit the definition of a "bullshitter" because they lack the intentions and aims that Frankfurt attributes to human bullshitters.

For psychologists, this distinction is important because it calls for a reconsideration of how we interpret and evaluate AI-generated content and its impact on human users. The paper further argues that if AI interactions provide tangible benefits to users without causing harm, then thwarting these interactions may not be necessary. This perspective encourages psychologists to weigh the ethical considerations of AI's influence on individuals, balancing concerns about authenticity and integrity with the potential for AI to enhance human experiences and productivity.

Friday, April 4, 2025

Can AI replace psychotherapists? Exploring the future of mental health care.

Zhang, Z., & Wang, J. (2024).
Frontiers in Psychiatry, 15, 1444382.

In the current technological era, Artificial Intelligence (AI) has transformed operations across numerous sectors, enhancing everything from manufacturing automation to intelligent decision support systems in financial services. In the health sector, particularly, AI has not only refined the accuracy of disease diagnoses but has also ushered in groundbreaking advancements in personalized medicine. The mental health field, amid a global crisis characterized by increasing demand and insufficient resources, is witnessing a significant paradigm shift facilitated by AI, presenting novel approaches that promise to reshape traditional mental health care models (see Figure 1).

Mental health, once a stigmatized aspect of health care, is now recognized as a critical component of overall well-being, with disorders such as depression becoming leading causes of global disability (WHO). Traditional mental health care, reliant on in-person consultations, is increasingly perceived as inadequate against the growing prevalence of mental health issues. AI’s role in mental health care is multifaceted, encompassing predictive analytics, therapeutic interventions, clinician support tools, and patient monitoring systems. For instance, AI algorithms are increasingly used to predict treatment outcomes by analyzing patient data. Meanwhile, AI-powered interventions, such as virtual reality exposure therapy and chatbot-delivered cognitive behavioral therapy, are being explored, though they are at varying stages of validation. Each of these applications is evolving at its own pace, influenced by technological advancements and the need for rigorous clinical validation.

The article is linked above.

Here are some thoughts: 

This article explores the evolving role of artificial intelligence (AI) in mental health care, particularly its potential to support or even replace some functions of human psychotherapists. With global demand for mental health services rising and traditional care systems under strain, AI is emerging as a tool to enhance diagnosis, personalize treatments, and provide therapeutic interventions through technologies like chatbots and virtual reality therapy. While early research shows promise, particularly in managing conditions such as anxiety and depression, existing studies are limited and call for larger, long-term trials to determine effectiveness and safety. The authors emphasize that while AI may supplement mental health care and address gaps in service delivery, it must be integrated responsibly, with careful attention to algorithmic bias, ethical considerations, and the irreplaceable human elements of psychotherapy, such as empathy and nuanced judgment.

Thursday, April 3, 2025

Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals

Choudhury, A., & Chaudhry, Z. (2024).
Journal of Medical Internet Research, 26, e56764.

Abstract

As the health care industry increasingly embraces large language models (LLMs), understanding the consequence of this integration becomes crucial for maximizing benefits while mitigating potential pitfalls. This paper explores the evolving relationship among clinician trust in LLMs, the transition of data sources from predominantly human-generated to artificial intelligence (AI)–generated content, and the subsequent impact on the performance of LLMs and clinician competence. One of the primary concerns identified in this paper is the LLMs’ self-referential learning loops, where AI-generated content feeds into the learning algorithms, threatening the diversity of the data pool, potentially entrenching biases, and reducing the efficacy of LLMs. While theoretical at this stage, this feedback loop poses a significant challenge as the integration of LLMs in health care deepens, emphasizing the need for proactive dialogue and strategic measures to ensure the safe and effective use of LLM technology. Another key takeaway from our investigation is the role of user expertise and the necessity for a discerning approach to trusting and validating LLM outputs. The paper highlights how expert users, particularly clinicians, can leverage LLMs to enhance productivity by off-loading routine tasks while maintaining a critical oversight to identify and correct potential inaccuracies in AI-generated content. This balance of trust and skepticism is vital for ensuring that LLMs augment rather than undermine the quality of patient care. We also discuss the risks associated with the deskilling of health care professionals. Frequent reliance on LLMs for critical tasks could result in a decline in health care providers’ diagnostic and thinking skills, particularly affecting the training and development of future professionals. The legal and ethical considerations surrounding the deployment of LLMs in health care are also examined. We discuss the medicolegal challenges, including liability in cases of erroneous diagnoses or treatment advice generated by LLMs. The paper references recent legislative efforts, such as The Algorithmic Accountability Act of 2023, as crucial steps toward establishing a framework for the ethical and responsible use of AI-based technologies in health care. In conclusion, this paper advocates for a strategic approach to integrating LLMs into health care. By emphasizing the importance of maintaining clinician expertise, fostering critical engagement with LLM outputs, and navigating the legal and ethical landscape, we can ensure that LLMs serve as valuable tools in enhancing patient care and supporting health care professionals. This approach addresses the immediate challenges posed by integrating LLMs and sets a foundation for their maintainable and responsible use in the future.

The abstract provides a sufficient summary.

Tuesday, April 1, 2025

Why Most Resist AI Companions

De Freitas, J., et al. (2025).
(Working Paper No. 25–030).

Abstract

Chatbots are now able to form emotional relationships with people and alleviate loneliness—a growing public health concern. Behavioral research provides little insight into whether everyday people are likely to use these applications and why. We address this question by focusing on the context of “AI companion” applications, designed to provide people with synthetic interaction partners. Study 1 shows that people believe AI companions are more capable than human companions in advertised respects relevant to relationships (being more available and nonjudgmental). Even so, they view them as incapable of realizing the underlying values of relationships, like mutual caring, judging them as not ‘true’ relationships. Study 2 provides further insight into this belief: people believe relationships with AI companions are one-sided (rather than mutual), because they see AI as incapable of understanding and feeling emotion. Study 3 finds that actually interacting with an AI companion increases acceptance by changing beliefs about the AI’s advertised capabilities, but not about its ability to achieve the true values of relationships, demonstrating the resilience of this belief against intervention. In short, despite the potential loneliness-reducing benefits of AI companions, we uncover fundamental psychological barriers to adoption, suggesting these benefits will not be easily realized.

Here are some thoughts:

The research explores why people remain reluctant to adopt AI companions, despite the growing public health crisis of loneliness and the promise that AI might offer support. Through a series of studies, the authors identify deep-seated psychological barriers to embracing AI as a substitute or supplement for human connection. Specifically, people tend to view AI companions as fundamentally incapable of embodying the core features of meaningful relationships—such as mutual care, genuine emotional understanding, and shared experiences. While participants often acknowledged some of the practical benefits of AI companionship, such as constant availability and non-judgmental interaction, they consistently doubted that AI could offer authentic or reciprocal relationships. Even when people interacted directly with AI systems, their impressions of the AI’s functional abilities improved, but their skepticism around the emotional and relational authenticity of AI companions remained firmly in place. These findings suggest that the resistance is not merely technological or unfamiliarity-based, but rooted in beliefs about what makes relationships "real."

For psychologists, this research is particularly important because it sheds light on how people conceptualize emotional connection, authenticity, and support—core concerns in both clinical and social psychology. As mental health professionals increasingly confront issues of social isolation, understanding the limitations of AI in replicating genuine human connection is critical. Psychologists might be tempted to view AI companions as possible interventions for loneliness, especially for individuals who are socially isolated or homebound. However, this paper underscores that unless these deep psychological barriers are acknowledged and addressed, such tools may be met with resistance or prove insufficient in fulfilling emotional needs. Furthermore, the study contributes to a broader understanding of human-technology relationships, offering insights into how people emotionally and cognitively differentiate between human and artificial agents. This knowledge is crucial for designing future interventions, therapeutic tools, and technologies that are sensitive to the human need for authenticity, reciprocity, and emotional depth in relationships.

Wednesday, February 19, 2025

The Moral Psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A. (2023).
Annual Review of Psychology, 75(1), 653–675.

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Here are some thoughts:

This article delves into the evolving moral landscape shaped by artificial intelligence (AI). As AI technology progresses rapidly, it introduces a new category for moral consideration: intelligent machines.

Machines as moral agents are capable of making decisions that have significant moral implications. This includes scenarios where AI systems can inadvertently cause harm through errors, such as misdiagnosing a medical condition or misclassifying individuals in security contexts. The authors highlight that societal expectations for these machines are often unrealistically high; people tend to require AI systems to outperform human capabilities significantly while simultaneously overestimating human error rates. This disparity raises critical questions about how many mistakes are acceptable from machines in life-and-death situations and how these errors are distributed among different demographic groups.

In their role as moral patients, machines become subjects of human moral behavior. This perspective invites exploration into how humans interact with AI—whether cooperatively or competitively—and the potential biases that may arise in these interactions. For instance, there is a growing concern about algorithmic bias, where certain demographic groups may be unfairly treated by AI systems due to flawed programming or data sets.

Lastly, machines serve as moral proxies, acting as intermediaries in human interactions. This role allows individuals to delegate moral decision-making to machines or use them to mask unethical behavior. The implications of this are profound, as it raises ethical questions about accountability and the extent to which humans can offload their moral responsibilities onto AI.

Overall, the article underscores the urgent need for a deeper understanding of the psychological dimensions associated with AI's integration into society. As encounters between humans and intelligent machines become commonplace, addressing issues of trust, bias, and ethical alignment will be crucial in shaping a future where AI can be safely and effectively integrated into daily life.

Friday, February 14, 2025

High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare

Corfmat, M., Martineau, J. T., & Régis, C. (2025).
BMC Medical Ethics, 26, 4.
https://doi.org/10.1186/s12910-024-01158-1

Abstract

Background
Considering the disruptive potential of AI technology, its current and future impact in healthcare, as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help healthcare professionals better navigate the AI wave.

Methods
We analyzed the literature that specifically discusses ethics and law related to the development and implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.

Results
We identified six categories of issues related to AI development and implementation in healthcare: (1) privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work, professions and the job market. While each one raises different questions depending on perspective, we propose three main legal and ethical priorities: education and training of healthcare professionals, offering support and guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of the AI tools themselves.

Conclusions
By highlighting the main ethical and legal issues involved in the development and implementation of AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and trusted AI will be jeopardized.


Here are some thoughts:

This article explores the ethical and legal challenges surrounding artificial intelligence (AI) in healthcare. The authors identify six critical categories of issues: privacy, individual autonomy, bias, responsibility and liability, evaluation and oversight, as well as work and professional impacts.

The research highlights that AI is fundamentally different from previous medical technologies due to its disruptive potential and ability to perform autonomous learning and decision-making. While AI promises significant improvements in areas like biomedical research, precision medicine, and healthcare efficiency, there remains a significant gap between AI system development and practical implementation in healthcare settings.

The authors emphasize that healthcare professionals often lack comprehensive knowledge about AI technologies and their implications. They argue that understanding the nuanced differences between legal and ethical frameworks is crucial for responsible AI integration. Legal rules represent minimal mandatory requirements, while ethical considerations encourage deeper reflection on appropriate behaviors and choices.

The paper suggests three primary priorities for addressing AI's ethical and legal challenges: (1) educating and training healthcare professionals, (2) providing robust support and guidance during AI system use, and (3) integrating ethical and legal considerations directly into AI tool development. Ultimately, the researchers stress the importance of close collaboration between healthcare professionals, ethicists, and legal experts to develop reliable and trustworthy AI technologies.

Saturday, February 8, 2025

AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

Gerlich, M. (2025).
Societies, 15(1), 6.

Abstract

The proliferation of artificial intelligence (AI) tools has transformed numerous aspects of daily life, yet its impact on critical thinking remains underexplored. This study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor. Utilising a mixed-method approach, we conducted surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds. Quantitative data were analysed using ANOVA and correlation analysis, while qualitative insights were obtained through thematic analysis of interview transcripts. The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies. This study contributes to the growing discourse on AI’s cognitive implications, offering practical recommendations for mitigating its adverse effects on critical thinking. The findings underscore the importance of fostering critical thinking in an AI-driven world, making this research essential reading for educators, policymakers, and technologists.

Here are some thoughts:

"De-skilling" is a concern regarding LLMs. Gerlich explores the critical relationship between AI tool usage and critical thinking skills. The study investigates how artificial intelligence technologies impact cognitive processes, with a specific focus on cognitive offloading as a mediating factor.

Gerlich conducted a comprehensive mixed-method research involving 666 participants from diverse age groups and educational backgrounds. The study employed surveys and in-depth interviews, analyzing data through ANOVA and correlation analysis, alongside thematic interview transcript analysis. Key findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, particularly pronounced among younger participants.
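
To make the reported analysis easier to picture, the sketch below simulates the kind of correlation and ANOVA workflow described above. The variable names, scales, and effect sizes are illustrative assumptions only, not the study's data or code.

```python
# Purely illustrative sketch of a correlation/ANOVA workflow like the one
# described above, using simulated data. Variable names, scales, and effect
# sizes are assumptions for demonstration; this is not Gerlich's code or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 666  # sample size reported in the study

ai_usage = rng.normal(4.0, 1.0, n)                   # hypothetical AI-tool usage frequency
offloading = 0.6 * ai_usage + rng.normal(0, 0.8, n)  # hypothetical cognitive offloading
critical_thinking = 5.5 - 0.5 * offloading + rng.normal(0, 0.9, n)

age_group = rng.integers(0, 3, n)      # 0 = younger, 1 = middle, 2 = older (hypothetical)
critical_thinking += 0.3 * age_group   # toy assumption mirroring the reported age pattern

# Zero-order correlation between AI usage and critical thinking (negative by construction here)
r, p = stats.pearsonr(ai_usage, critical_thinking)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# One-way ANOVA comparing critical-thinking scores across age groups
f_stat, p_anova = stats.f_oneway(*(critical_thinking[age_group == g] for g in range(3)))
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```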

The research highlights several important insights. Younger participants demonstrated higher dependence on AI tools and correspondingly lower critical thinking scores compared to older participants. Conversely, individuals with higher educational attainment maintained better critical thinking skills regardless of their AI tool usage. These findings underscore the potential cognitive costs associated with excessive reliance on AI technologies.

The study's broader implications are important. It emphasizes the need for educational strategies that promote critical engagement with AI technologies, warning against the risk of cognitive offloading—where individuals delegate cognitive tasks to external tools, potentially reducing their capacity for deep, reflective thinking. By exploring how AI tools influence cognitive processes, the research contributes to the growing discourse on technology's impact on human cognitive development.

Gerlich's work is particularly significant as it offers practical recommendations for mitigating adverse effects on critical thinking in an increasingly AI-driven world. The research serves as essential reading for educators, policymakers, and technologists seeking to understand and address the complex relationship between artificial intelligence and human cognitive skills.

Saturday, February 1, 2025

Augmenting research consent: Should large language models (LLMs) be used for informed consent to clinical research?

Allen, J. W., et al. (2024).
Research Ethics, in press.

Abstract

The integration of artificial intelligence (AI), particularly large language models (LLMs) like OpenAI’s ChatGPT, into clinical research could significantly enhance the informed consent process. This paper critically examines the ethical implications of employing LLMs to facilitate consent in clinical research. LLMs could offer considerable benefits, such as improving participant understanding and engagement, broadening participants’ access to the relevant information for informed consent, and increasing the efficiency of consent procedures. However, these theoretical advantages are accompanied by ethical risks, including the potential for misinformation, coercion, and challenges in accountability. Given the complex nature of consent in clinical research, which involves both written documentation (in the form of participant information sheets and informed consent forms) and in-person conversations with a researcher, the use of LLMs raises significant concerns about the adequacy of existing regulatory frameworks. Institutional Review Boards (IRBs) will need to consider substantial reforms to accommodate the integration of LLM-based consent processes. We explore five potential models for LLM implementation, ranging from supplementary roles to complete replacements of current consent processes, and offer recommendations for researchers and IRBs to navigate the ethical landscape. Thus, we aim to provide practical recommendations to facilitate the ethical introduction of LLM-based consent in research settings by considering factors such as participant understanding, information accuracy, human oversight and types of LLM applications in clinical research consent.


Here are some thoughts:

This paper examines the ethical implications of using large language models (LLMs) for informed consent in clinical research. While LLMs offer potential benefits, including personalized information, increased participant engagement, and improved efficiency, they also present risks related to accuracy, manipulation, and accountability. The authors explore five potential models for LLM implementation in consent processes, ranging from supplementary roles to complete replacements of current methods. Ultimately, they propose a hybrid approach that combines traditional consent methods with LLM-based interactions to maximize participant autonomy while maintaining ethical safeguards.

Thursday, January 30, 2025

Advancements in AI-driven Healthcare: A Comprehensive Review of Diagnostics, Treatment, and Patient Care Integration

Kasula, B. Y. (2024, January 18).
International Journal of Machine Learning for Sustainable Development, 6(1).

Abstract

This research paper presents a comprehensive review of the recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. Ethical considerations and challenges associated with AI adoption in healthcare are also discussed. The paper concludes with insights into the potential future developments and the transformative impact of AI on the healthcare landscape.


Here are some thoughts:

This research paper provides a comprehensive review of recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. It discusses the transformative impact of AI on healthcare, highlighting key achievements, challenges, and ethical considerations associated with its widespread adoption.

The paper examines AI's role in improving diagnostic accuracy, particularly in medical imaging, and its contribution to developing personalized treatment plans. It also addresses the ethical dimensions of AI in healthcare, including patient privacy, data security, and equitable distribution of AI-driven healthcare benefits. The research emphasizes the need for a holistic approach to AI integration in healthcare, calling for collaboration between healthcare professionals, technologists, and policymakers to navigate the evolving landscape successfully.

It is important for psychologists to understand the content of this article for several reasons. Firstly, AI is increasingly being applied in mental health diagnosis and treatment, as mentioned in the paper's references. Psychologists need to be aware of these advancements to stay current in their field and potentially incorporate AI-driven tools into their practice. Secondly, the ethical considerations discussed in the paper, such as patient privacy and data security, are equally relevant to psychological practice. Understanding these issues can help psychologists navigate the ethical challenges that may arise with the integration of AI in mental health care.

Moreover, the paper's emphasis on personalized medicine and treatment plans is particularly relevant to psychology, where individualized approaches are often crucial. By understanding AI's potential in this area, psychologists can explore ways to enhance their treatment strategies and improve patient outcomes. Lastly, as healthcare becomes increasingly interdisciplinary, psychologists need to be aware of technological advancements in other medical fields to collaborate effectively with other healthcare professionals and provide comprehensive care to their patients.

Monday, January 27, 2025

Beyond rating scales: With targeted evaluation, large language models are poised for psychological assessment

Kjell, O. N., Kjell, K., & Schwartz, H. A. (2023).
Psychiatry Research, 333, 115667.

Abstract

In this narrative review, we survey recent empirical evaluations of AI-based language assessments and present a case for the technology of large language models to be poised for changing standardized psychological assessment. Artificial intelligence has been undergoing a purported “paradigm shift” initiated by new machine learning models, large language models (e.g., BERT, LLaMA, and that behind ChatGPT). These models have led to unprecedented accuracy over most computerized language processing tasks, from web searches to automatic machine translation and question answering, while their dialogue-based forms, like ChatGPT have captured the interest of over a million users. The success of the large language model is mostly attributed to its capability to numerically represent words in their context, long a weakness of previous attempts to automate psychological assessment from language. While potential applications for automated therapy are beginning to be studied on the heels of chatGPT's success, here we present evidence that suggests, with thorough validation of targeted deployment scenarios, that AI's newest technology can move mental health assessment away from rating scales and to instead use how people naturally communicate, in language.

Highlights

• Artificial intelligence has been undergoing a purported “paradigm shift” initiated by new machine learning models, large language models.

• We review recent empirical evaluations of AI-based language assessments and present a case for the technology of large language models, that are used for chatGPT and BERT, to be poised for changing standardized psychological assessment.

• While potential applications for automated therapy are beginning to be studied on the heels of chatGPT's success, here we present evidence that suggests, with thorough validation of targeted deployment scenarios, that AI's newest technology can move mental health assessment away from rating scales and to instead use how people naturally communicate, in language.

Here are some thoughts:

The article underscores the transformative role of machine learning (ML) and artificial intelligence (AI) in psychological assessment, marking a significant shift in how psychologists approach their work. By integrating these technologies, assessments can become more accurate, efficient, and scalable, enabling psychologists to analyze vast amounts of data and uncover patterns that might otherwise go unnoticed. This is particularly important in improving diagnostic accuracy, as AI can help mitigate human bias and subjectivity, providing data-driven insights that complement clinical judgment. However, the adoption of these tools also raises critical ethical and practical considerations, such as ensuring client privacy, data security, and the responsible use of AI in alignment with professional standards.

As AI becomes more prevalent, the role of psychologists is evolving, requiring them to collaborate with these technologies by focusing on interpretation, contextual understanding, and therapeutic decision-making, while maintaining their unique human expertise.

Looking ahead, the article highlights emerging trends like natural language processing (NLP) for analyzing speech and text, as well as wearable devices for real-time behavioral and physiological data collection, offering psychologists innovative methods to enhance their practice. These advancements not only improve the precision of assessments but also pave the way for more personalized and timely interventions, ultimately supporting better mental health outcomes for clients.
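
As a rough illustration of the shift from rating scales toward language-based assessment, here is a deliberately simple sketch that scores free text against a small keyword lexicon. The lexicons and scoring rule are toy assumptions; the systems reviewed by Kjell and colleagues rely on contextual embeddings from large language models, not keyword counts.

```python
# Toy illustration of language-based assessment: score free-text responses
# against a small lexicon. Real systems reviewed in the paper use contextual
# LLM embeddings; this keyword approach is only a conceptual sketch.
from collections import Counter
import re

# Hypothetical mini-lexicons (illustrative, not validated instruments)
NEGATIVE_AFFECT = {"sad", "hopeless", "tired", "worthless", "empty", "anxious"}
POSITIVE_AFFECT = {"calm", "hopeful", "rested", "content", "connected"}

def language_score(text: str) -> float:
    """Return a crude affect-balance score in [-1, 1] from word counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    neg = sum(counts[w] for w in NEGATIVE_AFFECT)
    pos = sum(counts[w] for w in POSITIVE_AFFECT)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

print(language_score("I feel sad and tired, almost empty most days."))     # close to -1
print(language_score("I am hopeful and feel calm and connected lately."))  # close to +1
```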

Sunday, January 19, 2025

Artificial Intelligence for Psychotherapy: A Review of the Current State and Future Directions

Beg et al. (2024). 
Indian Journal of Psychological Medicine.

Abstract

Background:

Psychotherapy is crucial for addressing mental health issues but is often limited by accessibility and quality. Artificial intelligence (AI) offers innovative solutions, such as automated systems for increased availability and personalized treatments to improve psychotherapy. Nonetheless, ethical concerns about AI integration in mental health care remain.

Aim:

This narrative review explores the literature on AI applications in psychotherapy, focusing on their mechanisms, effectiveness, and ethical implications, particularly for depressive and anxiety disorders.

Methods:

A review was conducted, spanning studies from January 2009 to December 2023, focusing on empirical evidence of AI’s impact on psychotherapy. Following PRISMA guidelines, the authors independently screened and selected relevant articles. The analysis of 28 studies provided a comprehensive understanding of AI’s role in the field.

Results:

The results suggest that AI can enhance psychotherapy interventions for people with anxiety and depression, especially chatbots and internet-based cognitive-behavioral therapy. However, to achieve optimal outcomes, the ethical integration of AI necessitates resolving concerns about privacy, trust, and interaction between humans and AI.

Conclusion:

The study emphasizes the potential of AI-powered cognitive-behavioral therapy and conversational chatbots to address symptoms of anxiety and depression effectively. The article highlights the importance of cautiously integrating AI into mental health services, considering privacy, trust, and the relationship between humans and AI. This integration should prioritize patient well-being and assist mental health professionals while also considering ethical considerations and the prospective benefits of AI.

Here are some thoughts:

Artificial Intelligence (AI) is emerging as a promising tool in psychotherapy, offering innovative solutions to address mental health challenges. The comprehensive review explores the potential of AI-powered interventions, particularly for anxiety and depression disorders.

The study highlights several key insights about AI's role in mental health care. Researchers found that AI technologies like chatbots and internet-based cognitive-behavioral therapy (iCBT) can enhance psychological interventions by increasing accessibility and providing personalized treatment approaches. Machine learning, natural language processing, and deep learning are particularly crucial technologies enabling these advancements.

Despite the promising potential, the review emphasizes the critical need for careful integration of AI into mental health services. Ethical considerations remain paramount, with researchers stressing the importance of addressing privacy concerns, maintaining patient trust, and preserving the human element of therapeutic interactions. While AI can offer cost-effective and stigma-reducing solutions, it cannot yet fully replicate the profound empathy of face-to-face therapy.

The research examined 28 studies spanning from 2009 to 2023, revealing that AI interventions show particular promise in managing symptoms of anxiety and depression. Chatbots and iCBT demonstrated effectiveness in reducing psychological distress, though their impact on overall life satisfaction varies. The study calls for continued research to optimize AI's implementation in mental health care, balancing technological innovation with ethical principles.

Globally, organizations like the World Health Organization are developing regulatory frameworks to guide AI's responsible use in healthcare. In India, the Indian Council of Medical Research has already established guidelines for AI applications in biomedical research, signaling a growing recognition of this technology's potential.

Wednesday, January 8, 2025

The Unpaid Toll: Quantifying the Public Health Impact of AI

Han, Y., et al. (2024).
arXiv:2412.06288 [cs.CY]

Abstract

The surging demand for AI has led to a rapid expansion of energy-intensive data centers, impacting the environment through escalating carbon emissions and water consumption. While significant attention has been paid to AI's growing environmental footprint, the public health burden, a hidden toll of AI, has been largely overlooked. Specifically, AI's lifecycle, from chip manufacturing to data center operation, significantly degrades air quality through emissions of criteria air pollutants such as fine particulate matter, substantially impacting public health. This paper introduces a methodology to model pollutant emissions across AI's lifecycle, quantifying the public health impacts. Our findings reveal that training an AI model of the Llama3.1 scale can produce air pollutants equivalent to more than 10,000 round trips by car between Los Angeles and New York City. The total public health burden of U.S. data centers in 2030 is valued at up to more than $20 billion per year, double that of U.S. coal-based steelmaking and comparable to that of on-road emissions of California. Further, the public health costs unevenly impact economically disadvantaged communities, where the per-household health burden could be 200x more than that in less-impacted communities. We recommend adopting a standard reporting protocol for criteria air pollutants and the public health costs of AI, paying attention to all impacted communities, and implementing health-informed AI to mitigate adverse effects while promoting public health equity.

The research is linked above.

This research paper quantifies the previously overlooked public health consequences of artificial intelligence (AI), focusing on the air pollution generated throughout its lifecycle—from chip manufacturing to data center operation. The authors present a methodology for modeling pollutant emissions and their resulting health impacts, finding that AI's environmental footprint translates to substantial health costs, potentially exceeding $20 billion annually in the US by 2030 and disproportionately affecting low-income communities. This "hidden toll" of AI, the paper argues, necessitates standardized reporting protocols for air pollutants and health impacts, the development of "health-informed AI" to mitigate adverse effects, and a focus on achieving public health equity.

Psychologists could find the information in the sources valuable as it highlights the potential mental health consequences of socioeconomic disparities exacerbated by AI's environmental impact. The sources reveal that the health burden of AI, particularly from data centers, is unevenly distributed and disproportionately affects low-income communities. This raises concerns about increased stress, anxiety, and depression in these communities due to factors like higher exposure to air pollution, reduced access to healthcare, and financial strain from increased health costs. Understanding these psychological impacts could inform interventions and policies aimed at mitigating the negative mental health consequences of AI's growth, particularly for vulnerable populations.

Tuesday, January 7, 2025

Are Large Language Models More Empathetic than Humans?

Welivita, A., & Pu, P. (2024, June 7).
arXiv.org.

Abstract

With the emergence of large language models (LLMs), investigating if they can surpass humans in areas such as emotion recognition and empathetic responding has become a focal point of research. This paper presents a comprehensive study exploring the empathetic responding capabilities of four state-of-the-art LLMs: GPT-4, LLaMA-2-70B-Chat, Gemini-1.0-Pro, and Mixtral-8x7B-Instruct in comparison to a human baseline. We engaged 1,000 participants in a between-subjects user study, assessing the empathetic quality of responses generated by humans and the four LLMs to 2,000 emotional dialogue prompts meticulously selected to cover a broad spectrum of 32 distinct positive and negative emotions. Our findings reveal a statistically significant superiority of the empathetic responding capability of LLMs over humans. GPT-4 emerged as the most empathetic, marking ≈31% increase in responses rated as Good compared to the human benchmark. It was followed by LLaMA-2, Mixtral-8x7B, and Gemini-Pro, which showed increases of approximately 24%, 21%, and 10% in Good ratings, respectively. We further analyzed the response ratings at a finer granularity and discovered that some LLMs are significantly better at responding to specific emotions compared to others. The suggested evaluation framework offers a scalable and adaptable approach for assessing the empathy of new LLMs, avoiding the need to replicate this study’s findings in future research.


Here are some thoughts:

The research presents a groundbreaking study exploring the empathetic responding capabilities of large language models (LLMs), specifically comparing GPT-4, LLaMA-2-70B-Chat, Gemini-1.0-Pro, and Mixtral-8x7B-Instruct against human responses. The researchers designed a comprehensive between-subjects user study involving 1,000 participants who evaluated responses to 2,000 emotional dialogue prompts covering 32 distinct emotions.

By utilizing the EmpatheticDialogues dataset, the study meticulously selected dialogue prompts to ensure equal distribution across positive and negative emotional spectrums. The researchers developed a nuanced approach to evaluating empathy, defining it through cognitive, affective, and compassionate components. They provided LLMs with specific instructions emphasizing the multifaceted nature of empathetic communication, which went beyond traditional linguistic proficiency to capture deeper emotional understanding.

The findings revealed statistically significant superiority in LLMs' empathetic responding capabilities. GPT-4 emerged as the most empathetic, demonstrating approximately a 31% increase in responses rated as "Good" compared to the human baseline. Other models like LLaMA-2, Mixtral-8x7B, and Gemini-Pro showed increases of 24%, 21%, and 10% respectively. Notably, the study also discovered that different LLMs exhibited varying capabilities in responding to specific emotions, highlighting the complexity of artificial empathy.

This research represents a significant advancement in understanding AI's potential for nuanced emotional communication, offering a scalable and adaptable framework for assessing empathy in emerging language models.

Saturday, January 4, 2025

Artificial Intelligence (AI) Psychotherapy: Coming Soon to a Consultation Room Near You?

Swartz, H. A. (2023).
American Journal of Psychotherapy,
76(2), 55–56.


News stories about generative artificial intelligence (AI) abound, alternately predicting that AI will cause humanity’s demise and touting it as the solution to everything from climate change to health care scarcity. In 2022, OpenAI’s release of ChatGPT, an easy-to-use AI interface that produces fluent text in response to prompts, intensified highly ambivalent discussions about AI. Pundits and knowledgeable experts alike marveled at ChatGPT’s ability to produce text that resembles human writing while acknowledging that the speed of its evolution was outstripping our ability to understand its risks. This technology performed better on law school and medical school entrance exams than many human applicants, leading to predictions that AI will eventually render some professions obsolete. Psychotherapists, too, seem at risk for being replaced by AI. Are fears of AI supplanting psychotherapists reasonable?

To explore how AI might respond to a patient in distress, I typed this prompt into ChatGPT: “I feel sad. Can you help me?” It quickly generated the following text:
I am sorry to hear that you are feeling sad. I’ll do my best to help you. It’s important to remember that I’m an AI language model and may not fully understand or empathize with your emotions, but I can offer some general suggestions and support. . . . Remember, everyone experiences sadness from time to time, and it’s okay to seek help when you need it. You don’t have to face it alone.
Despite being impressed by the speed and conversational tone of ChatGPT’s text generation and the convenience of receiving an immediate response during hours when a human therapist would likely be unavailable, the formulaic exchange and canned feedback provided by ChatGPT left me with doubts about its ability to provide genuine soothing to humans experiencing depression. 


Here are some thoughts:

This editorial examines the potential of artificial intelligence (AI) in psychotherapy. While AI chatbots offer increased accessibility and convenience, providing self-help tools and improving symptom management, studies reveal limitations, including a lack of genuine human connection and potential risks like increased self-harm. The author concludes that AI is a useful supplementary tool, particularly in low-resource settings, but cannot replace human therapists for complex emotional and interpersonal issues. Ultimately, a blended approach incorporating both AI and human interaction is suggested for optimal therapeutic outcomes.

Friday, January 3, 2025

Assessing Empathy in Large Language Models with Real-World Physician-Patient Interactions

Luo, M., et al. (2024, May 26).
arXiv.org.

Abstract

The integration of Large Language Models (LLMs) into the healthcare domain has the potential to significantly enhance patient care and support through the development of empathetic, patient-facing chatbots. This study investigates an intriguing question: Can ChatGPT respond with a greater degree of empathy than those typically offered by physicians? To answer this question, we collect a de-identified dataset of patient messages and physician responses from Mayo Clinic and generate alternative replies using ChatGPT. Our analyses incorporate novel empathy ranking evaluation (EMRank) involving both automated metrics and human assessments to gauge the empathy level of responses. Our findings indicate that LLM-powered chatbots have the potential to surpass human physicians in delivering empathetic communication, suggesting a promising avenue for enhancing patient care and reducing professional burnout. The study not only highlights the importance of empathy in patient interactions but also proposes a set of effective automatic empathy ranking metrics, paving the way for the broader adoption of LLMs in healthcare.


Here are some thoughts:

The research explores an innovative approach to assessing empathy in healthcare communication by comparing responses from physicians and ChatGPT, a large language model (LLM). The study focuses on prostate cancer patient interactions, utilizing a real-world dataset from Mayo Clinic to investigate whether AI-powered chatbots can potentially deliver more empathetic responses than human physicians.

The researchers developed a novel methodology called EMRank, which employs multiple evaluation techniques to measure empathy. This approach includes both automated metrics using LLaMA (another language model) and human assessments. By using zero-shot, one-shot, and few-shot learning strategies, they created a flexible framework for ranking empathetic communication that could be generalized across different healthcare domains.
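
To make the few-shot idea concrete, the sketch below shows one way a few-shot empathy-rating prompt could be assembled for a language model. The example dialogues, the 1-5 scale, and the query_llm placeholder are illustrative assumptions, not the authors' actual EMRank prompts or models.

```python
# Illustrative sketch of few-shot empathy scoring with an LLM.
# The examples, scale, and query_llm helper are hypothetical placeholders,
# not the EMRank prompts or models used in the study.

FEW_SHOT_EXAMPLES = [
    ("I'm scared about my biopsy results.",
     "Results take about a week.", 2),
    ("I'm scared about my biopsy results.",
     "Waiting is really hard, and it's understandable to feel scared; "
     "we will go over the results together as soon as they arrive.", 5),
]

def build_prompt(patient_msg: str, reply: str) -> str:
    """Assemble a few-shot prompt asking for an empathy rating from 1 to 5."""
    lines = ["Rate the empathy of each reply on a 1-5 scale."]
    for msg, ex_reply, score in FEW_SHOT_EXAMPLES:
        lines.append(f"Patient: {msg}\nReply: {ex_reply}\nEmpathy: {score}")
    lines.append(f"Patient: {patient_msg}\nReply: {reply}\nEmpathy:")
    return "\n\n".join(lines)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM serves as the empathy rater."""
    raise NotImplementedError("Wire this to your model of choice.")

# Usage, once query_llm is implemented:
# rating = query_llm(build_prompt("Will this treatment affect my daily life?",
#                                 "Most patients adjust well; let's discuss your concerns."))
```

In a zero-shot variant, FEW_SHOT_EXAMPLES would simply be empty and the model would rely on the instruction alone, which is the kind of comparison the study's prompting strategies explore.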

Key findings suggest that LLM-powered chatbots like ChatGPT have significant potential to surpass human physicians in delivering empathetic communication. The study's unique contributions include using real patient data, developing innovative automatic empathy ranking metrics, and incorporating patient evaluations to validate the assessment methods. By demonstrating the capability of AI to generate compassionate responses, the research opens new avenues for enhancing patient care and potentially reducing professional burnout among healthcare providers.

The methodology carefully addressed privacy concerns by de-identifying patient and physician information, and controlled for response length to ensure a fair comparison. Ultimately, the study represents a promising step towards integrating artificial intelligence into healthcare communication, highlighting the potential of LLMs to provide supportive, empathetic interactions in medical contexts.

Sunday, December 29, 2024

Artificial intelligence in healthcare: a scoping review of perceived threats to patient rights and safety

Botha, N. N., et al. (2024).
Archives of Public Health, 82(1).

Abstract

Background
The global health system remains determined to leverage on every workable opportunity, including artificial intelligence (AI) to provide care that is consistent with patients’ needs. Unfortunately, while AI models generally return high accuracy within the trials in which they are trained, their ability to predict and recommend the best course of care for prospective patients is left to chance.

Purpose
This review maps evidence between January 1, 2010 to December 31, 2023, on the perceived threats posed by the usage of AI tools in healthcare on patients’ rights and safety.

Methods
We deployed the guidelines of Tricco et al. to conduct a comprehensive search of current literature from Nature, PubMed, Scopus, ScienceDirect, Dimensions AI, Web of Science, Ebsco Host, ProQuest, JStore, Semantic Scholar, Taylor & Francis, Emeralds, World Health Organisation, and Google Scholar. In all, 80 peer reviewed articles qualified and were included in this study.

Results
We report that there is a real chance of unpredictable errors, inadequate policy and regulatory regime in the use of AI technologies in healthcare. Moreover, medical paternalism, increased healthcare cost and disparities in insurance coverage, data security and privacy concerns, and bias and discriminatory services are imminent in the use of AI tools in healthcare.

Conclusions
Our findings have some critical implications for achieving the Sustainable Development Goals (SDGs) 3.8, 11.7, and 16. We recommend that national governments should lead in the roll-out of AI tools in their healthcare systems. Also, other key actors in the healthcare industry should contribute to developing policies on the use of AI in healthcare systems.

Here are some thoughts:

This article presents a comprehensive scoping review that examines the perceived threats posed by artificial intelligence (AI) in healthcare concerning patient rights and safety. This review analyzes literature from January 1, 2010, to December 31, 2023, identifying 80 peer-reviewed articles that highlight various concerns associated with AI tools in medical settings.

The review underscores that while AI has the potential to enhance healthcare delivery, it also introduces significant risks. These include unpredictable errors in AI systems, inadequate regulatory frameworks governing AI applications, and the potential for medical paternalism that may diminish patient autonomy. Additionally, the findings indicate that AI could lead to increased healthcare costs and disparities in insurance coverage, alongside serious concerns regarding data security and privacy breaches. The risk of bias and discrimination in AI services is also highlighted, raising alarms about the fairness of care delivered through these technologies.

The authors argue that these challenges have critical implications for achieving Sustainable Development Goals (SDGs) related to universal health coverage and equitable access to healthcare services. They recommend that national governments take the lead in integrating AI tools into healthcare systems while encouraging other stakeholders to contribute to policy development regarding AI usage.

Furthermore, the review emphasizes the need for rigorous scrutiny of AI tools before their deployment, advocating for enhanced machine learning protocols to ensure patient safety. It calls for a more active role for patients in their care processes and suggests that healthcare managers conduct thorough evaluations of AI technologies before implementation. This scoping review aims to inform future research directions and policy formulations that prioritize patient rights and safety in the evolving landscape of AI in healthcare.

Wednesday, December 25, 2024

Deus in machina: Swiss church installs AI-powered Jesus

Ashifa Kassam
The Guardian
Originally posted 21 Nov 24

The small, unadorned church has long ranked as the oldest in the Swiss city of Lucerne. But Peter’s chapel has become synonymous with all that is new after it installed an artificial intelligence-powered Jesus capable of dialoguing in 100 different languages.

“It was really an experiment,” said Marco Schmid, a theologian with the Peterskapelle church. “We wanted to see and understand how people react to an AI Jesus. What would they talk with him about? Would there be interest in talking to him? We’re probably pioneers in this.”

The installation, known as Deus in Machina, was launched in August as the latest initiative in a years-long collaboration with a local university research lab on immersive reality.

After projects that had experimented with virtual and augmented reality, the church decided that the next step was to install an avatar. Schmid said: “We had a discussion about what kind of avatar it would be – a theologian, a person or a saint? But then we realised the best figure would be Jesus himself.”

Short on space and seeking a place where people could have private conversations with the avatar, the church swapped out its priest to set up a computer and cables in the confessional booth. After training the AI program in theological texts, visitors were then invited to pose questions to a long-haired image of Jesus beamed through a latticework screen. He responded in real time, offering up answers generated through artificial intelligence.


Here are some thoughts:

A Swiss church conducted a two-month experiment using an AI-powered Jesus avatar in a confessional booth, allowing over 1,000 people to interact with it in various languages. The experiment, called Deus in Machina, aimed to gauge public reaction and explore the potential of AI in religious contexts. While many participants reported a positive spiritual experience, others found the AI's responses trite or superficial, highlighting the limitations of current AI technology in nuanced spiritual conversation. The church ultimately deemed the AI Jesus unsuitable for permanent installation due to the significant responsibility involved. The project sparked both interest and criticism within the church community.

Sunday, December 8, 2024

The Dark Side Of AI: How Deepfakes And Disinformation Are Becoming A Billion-Dollar Business Risk

Bernard Marr
Forbes.com
Originally posted 6 Nov 24

Every week, I talk to business leaders who believe they're prepared for AI disruption. But when I ask them about their defense strategy against AI-generated deepfakes and disinformation, I'm usually met with blank stares.

The truth is, we've entered an era where a single fake video or manipulated image can wipe millions off a company's market value in minutes. While we've all heard about the societal implications of AI-generated fakery, the specific risks to businesses are both more immediate and more devastating than many realize.

The New Face Of Financial Fraud

Picture this: A convincing deepfake video shows your CEO announcing a major product recall that never happened, or AI-generated images suggest your headquarters is on fire when it isn't. It sounds like science fiction, but it's already happening. In 2023, a single fake image of smoke rising from a building triggered a panic-driven stock market sell-off, demonstrating how quickly artificial content can impact real-world financials.

The threat is particularly acute during sensitive periods like public offerings or mergers and acquisitions, as noted by PwC. During these critical junctures, even a small piece of manufactured misinformation can have outsized consequences.


Here are some thoughts:

The article discusses the dangers of deepfakes and AI-generated disinformation, warning that these technologies can be used for financial fraud and reputational damage. The author argues that businesses must be proactive in developing defense strategies, including educating employees, implementing cybersecurity solutions, and being transparent with customers. Companies, he concludes, must adopt a new culture of vigilance to combat these threats and protect their interests in a world where the line between real and artificial content is increasingly blurred.