Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Natural language processing.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of Clinical Medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. 

Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.
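
The abstract above mentions quantifiable trustworthiness metrics and bias mitigation strategies without spelling them out. As a purely illustrative sketch, not the paper's own method, here is how one standard algorithmic-fairness measure, the demographic parity difference, can be computed for a clinical model's outputs; the function name and the triage data below are invented for this example.

```python
# Illustrative only: demographic parity difference, one common fairness
# metric; not the specific trustworthiness metrics proposed in the paper.

def demographic_parity_difference(predictions, groups, positive=1):
    """Absolute gap in positive-prediction rates between two groups."""
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for g in labels:
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(1 for p in group_preds if p == positive) / len(group_preds))
    return abs(rates[0] - rates[1])

# Invented example: a triage model's intervention flags for two patient groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

A gap near zero means the model flags both groups at similar rates; a large gap is one signal, among many, that bias mitigation may be needed before clinical deployment.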

Saturday, June 21, 2025

A Framework for Language Technologies in Behavioral Research and Clinical Applications: Ethical Challenges, Implications, and Solutions

Diaz-Asper, C., Hauglid, M. K., et al. (2024).
American Psychologist, 79(1), 79–91.

Abstract

Technological advances in the assessment and understanding of speech and language within the domains of automatic speech recognition, natural language processing, and machine learning present a remarkable opportunity for psychologists to learn more about human thought and communication, evaluate a variety of clinical conditions, and predict cognitive and psychological states. These innovations can be leveraged to automate traditionally time-intensive assessment tasks (e.g., educational assessment), provide psychological information and care (e.g., chatbots), and when delivered remotely (e.g., by mobile phone or wearable sensors) promise underserved communities greater access to health care. Indeed, the automatic analysis of speech provides a wealth of information that can be used for patient care in a wide range of settings (e.g., mHealth applications) and for diverse purposes (e.g., behavioral and clinical research, medical tools that are implemented into practice) and patient types (e.g., numerous psychological disorders and in psychiatry and neurology). However, automation of speech analysis is a complex task that requires the integration of several different technologies within a large distributed process with numerous stakeholders. Many organizations have raised awareness about the need for robust systems for ensuring transparency, oversight, and regulation of technologies utilizing artificial intelligence. Since there is limited knowledge about the ethical and legal implications of these applications in psychological science, we provide a balanced view of both the optimism that is widely published on and also the challenges and risks of use, including discrimination and exacerbation of structural inequalities.

Public Significance Statement

Computational advances in the domains of automatic speech recognition, natural language processing, and machine learning allow for the rapid and accurate assessment of a person’s speech for numerous purposes. The widespread adoption of these technologies affords psychologists an opportunity to learn more about psychological function, interact in new ways with research participants and patients, and aid in the diagnosis and management of various cognitive and mental health conditions. However, we argue that the current scope of the APA’s Ethical Principles of Psychologists and Code of Conduct is insufficient to address the ethical issues surrounding the application of artificial intelligence. Such a gap in guidance results in the onus falling directly on psychologists to educate themselves about the ethical and legal implications of these emerging technologies, potentially exacerbating the risk of their use in both research and practice.
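
The abstract describes automated speech analysis as a pipeline that integrates several technologies. Downstream of automatic speech recognition, such systems typically compute transcript-level language features before any clinical inference is made. The sketch below shows two commonly studied lexical features; the specific feature set is an assumption for illustration, not drawn from the paper.

```python
# Minimal sketch: transcript-level lexical features of the kind computed
# after ASR. The feature choice here is illustrative only.
import re

def lexical_features(transcript: str) -> dict:
    # Split into utterances on sentence-final punctuation.
    utterances = [u for u in re.split(r"[.!?]+", transcript) if u.strip()]
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return {
        "n_tokens": len(tokens),
        # Vocabulary diversity, often examined as a coarse cognitive marker.
        "type_token_ratio": len(set(tokens)) / len(tokens) if tokens else 0.0,
        "mean_utterance_length": len(tokens) / len(utterances) if utterances else 0.0,
    }

print(lexical_features("I went to the store. The store was closed, so I went home."))
```

In a real system these features would feed a validated model, and, as the authors stress, their collection and use would need the same ethical and legal scrutiny as any other clinical data.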

Thursday, June 19, 2025

Large Language Model (LLM) Algorithms in Reshaping Decision-Making and Cognitive Biases in the AI-Leading World: An Experimental Study.

Khatoon, H., Khan, M. L., & Irshad, A. (2025, January 22).
PsyArXiv.

Abstract

The rise of artificial intelligence (AI) has accelerated decision-making, since AI algorithmic recommendations may help reduce human limitations while increasing decision accuracy and efficiency. Large language model (LLM) algorithms are designed to enhance human decision-making competencies and remove possible cognitive biases. However, these algorithms can themselves be biased and lead to poor decision-making. Building on existing LLMs (i.e., ChatGPT and Perplexity.ai), this study examines whether users who receive AI assistance during task-based decision-making show greater decision-making ability than peers who rely on their own cognitive processes. Using domain-independent LLMs, incentives, and scenario-based decision tasks, we find that the advice these AIs offered in decisive situations was biased and wrong, resulting in poor decision outcomes. Using public-access LLMs in crucial situations can produce both ineffective outcomes for the advisee and inadvertent consequences for third parties. The findings highlight the need for ethical AI algorithms and for accurately calibrated trust in order to deploy these systems effectively, and they raise concerns about relying on AI assistance in decision-making.

Here are some thoughts:

This research is important to psychologists because it examines how collaboration with large language models (LLMs) like ChatGPT affects human decision-making, particularly in relation to cognitive biases. By using a modified Adult Decision-Making Competence battery, the study offers empirical data on whether AI assistance improves or impairs judgment. It highlights the psychological dynamics of trust in AI, the risk of overreliance, and the ethical implications of using AI in decisions that impact others. These findings are especially relevant for psychologists interested in cognitive bias, human-technology interaction, and the integration of AI into clinical, organizational, and educational settings.
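
For readers who want a concrete picture of the comparison the study describes, the sketch below contrasts decision accuracy in a hypothetical AI-assisted group against an unassisted control using a two-proportion z-test. The counts are invented, and the paper's actual analyses may differ.

```python
# Hypothetical analysis sketch: compare decision accuracy between an
# AI-assisted group and an unassisted control. Counts are invented.
from math import erf, sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)          # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# e.g., 24/60 correct decisions with AI advice vs. 36/60 without it.
z, p = two_proportion_z(24, 60, 36, 60)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ -2.19, p ≈ 0.028
```

A significantly negative z in this direction would mean the AI-assisted group decided less accurately, which is the pattern the abstract reports.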

Friday, May 17, 2024

Moral universals: A machine-reading analysis of 256 societies

Alfano, M., Cheong, M., & Curry, O. S. (2024).
Heliyon, 10(6).
https://doi.org/10.1016/j.heliyon.2024.e25940

Abstract

What is the cross-cultural prevalence of the seven moral values posited by the theory of “morality-as-cooperation”? Previous research, using laborious hand-coding of ethnographic accounts of ethics from 60 societies, found examples of most of the seven morals in most societies, and observed these morals with equal frequency across cultural regions. Here we replicate and extend this analysis by developing a new Morality-as-Cooperation Dictionary (MAC-D) and using Linguistic Inquiry and Word Count (LIWC) to machine-code ethnographic accounts of morality from an additional 196 societies (the entire Human Relations Area Files, or HRAF, corpus). Again, we find evidence of most of the seven morals in most societies, across all cultural regions. The new method allows us to detect minor variations in morals across region and subsistence strategy. And we successfully validate the new machine-coding against the previous hand-coding. In light of these findings, MAC-D emerges as a theoretically motivated, comprehensive, and validated tool for machine-reading moral corpora. We conclude by discussing the limitations of the current study, as well as prospects for future research.

Significance statement

The empirical study of morality has hitherto been conducted primarily in WEIRD contexts and with living participants. This paper addresses both of these shortcomings by examining the global anthropological record. In addition, we develop a novel methodological tool, the morality-as-cooperation dictionary, which makes it possible to use natural language processing to extract a moral signal from text. We find compelling evidence that the seven moral elements posited by the morality-as-cooperation hypothesis are documented in the anthropological record in all regions of the world and among all subsistence strategies. Furthermore, differences in moral emphasis between different types of cultures tend to be non-significant and small when significant. This is evidence for moral universalism.
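
Methodologically, the study is dictionary-based word counting: LIWC tallies how often words from each MAC-D category appear in a text, yielding a per-category moral signal. The miniature sketch below illustrates the idea with invented word lists; the real MAC-D dictionary is far larger, and LIWC dictionaries typically match wildcard stems (e.g., share*), which this toy version omits.

```python
# LIWC-style dictionary coding in miniature. The word lists below are
# invented stand-ins, not entries from the actual MAC-D dictionary.
import re

MINI_MAC_DICT = {
    "family":      {"kin", "mother", "father", "inherit"},
    "group":       {"loyal", "ally", "unity", "tribe"},
    "reciprocity": {"repay", "debt", "exchange", "favor"},
    "heroism":     {"brave", "courage", "hero", "daring"},
    "deference":   {"respect", "obey", "elder", "defer"},
    "fairness":    {"fair", "share", "equal", "divide"},
    "property":    {"own", "property", "theft", "possess"},
}

def moral_signal(text: str) -> dict:
    """Per-category hit rate: matched tokens / total tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    total = len(tokens) or 1
    return {cat: sum(tok in words for tok in tokens) / total
            for cat, words in MINI_MAC_DICT.items()}

print(moral_signal("They repay every debt and divide the harvest in equal shares."))
```

Applied across the HRAF corpus, per-category rates like these are what allow moral emphasis to be compared across regions and subsistence strategies.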


Here is my summary:

The study aimed to investigate potential moral universals across human societies by analyzing a large dataset of ethnographic texts describing the norms and practices of 256 societies from around the world. The researchers used a dictionary-based natural language processing approach (LIWC with the new MAC-D dictionary) to identify recurring moral themes across the texts.

Some key findings:

1. All seven moral values posited by morality-as-cooperation were identified as widespread across societies:
            Family values (helping kin)
            Group loyalty
            Reciprocity
            Heroism (bravery)
            Deference to authority/respect
            Fairness (dividing disputed resources)
            Property rights (respecting prior possession)

2. However, there was also some variation in how these values were emphasized and prioritized across regions and subsistence strategies, though the detected differences were mostly minor.

3. Violations of certain values, such as harm/care and fairness, were condemned more strongly when they affected one's own group than when they affected other groups.

4. Societies' mobility, population density, and reliance on agriculture or animal husbandry seemed to influence the relative importance placed on different moral principles.

The authors argue that while some common moral foundations appear widespread across societies, there is also cultural variation in how these are expressed and prioritized, although such differences tend to be small. They suggest morality emerges from an interaction of innate psychological foundations and cultural evolutionary processes.