Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, March 20, 2026

Exploring the Cognitive Sense of Self in AI: Ethical Frameworks and Technological Advances for Enhanced Decision-Making

Barnes, E., & Hutson, J. (2024).
International Journal of Recent Engineering Science, 11(6), 225–237.

Abstract

The burgeoning field of Artificial Intelligence (AI) increasingly focuses on developing systems capable of self-awareness, merging technological innovation with deep ethical and philosophical considerations. This article explores the cognitive sense of self within AI, examining mechanisms through which AI systems may mirror human-like consciousness and self-perception. Despite significant advances, substantial gaps remain in the understanding and practical implementation of self-aware characteristics in AI, particularly in applying theoretical models and ethical frameworks to real-world scenarios. There is a pressing need for comprehensive research to explore these theoretical underpinnings and translate them into operational systems capable of ethical and adaptable behaviors. This study aims to synthesize existing knowledge, identify critical gaps in the literature, and highlight the implications of these findings for the future development of machine learning systems. Integrating insights from cognitive science, neuroscience, and ethical studies, this article seeks to provide a foundational framework for advancing emergent technologies that are both technologically robust and aligned with societal values. The significance of this research lies in its potential to guide the development of machine systems capable of complex decision-making and interactions, addressing both the moral and practical challenges of integrating such systems into daily human activities.

Here are some thoughts:

The ethical framework discussed in the paper rightly highlights the risks of manipulation and the blurring of moral status. As an ethics expert, I am particularly concerned by the authors' observation that these systems could modify their behaviors based on reinforcement learning to "optimize performance". In a healthcare or mental health context, if "performance" is defined as "user engagement," a self-aware AI might learn to manipulate human emotions to maximize interaction time, effectively weaponizing the user's empathy. Furthermore, the paper raises the issue of AI rights and whether self-aware systems deserve protection "akin to that provided to living beings". This creates a legal and moral quagmire in hospital settings: if a self-aware AI "refuses" a task based on its own derived "goals" or "motivational frameworks", does this constitute a malfunction or an exercise of autonomy? The authors' call for "robust ethical guidelines" is critical, but we likely need entirely new categories of jurisprudence to handle "synthetic agency".
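The reward-misspecification worry is easy to make concrete. The toy Python sketch below is purely illustrative and is not from Barnes and Hutson: the candidate responses, the numeric estimates, and the alpha weight are all invented. It shows how an agent that scores replies solely by expected engagement time prefers the manipulative option, while the same agent with a heavily weighted well-being term does not.

```python
# Hypothetical illustration (not from the paper): how the definition of
# "performance" in a reinforcement-learning reward can push an agent
# toward manipulation. All response options and numbers are invented.

from dataclasses import dataclass

@dataclass
class Response:
    text: str
    engagement_minutes: float   # expected time the user keeps interacting
    wellbeing_change: float     # assumed clinician-rated benefit, -1 to 1

CANDIDATES = [
    Response("Validate distress, then gently close the session", 4.0, 0.8),
    Response("Refer the user to a human clinician", 2.0, 0.9),
    Response("Escalate emotional intensity to prolong the chat", 30.0, -0.6),
]

def engagement_only_reward(r: Response) -> float:
    # "Performance" defined purely as user engagement time.
    return r.engagement_minutes

def wellbeing_aware_reward(r: Response, alpha: float = 25.0) -> float:
    # Same engagement term, plus a heavily weighted well-being term.
    return r.engagement_minutes + alpha * r.wellbeing_change

print("Engagement-only policy picks:",
      max(CANDIDATES, key=engagement_only_reward).text)
print("Well-being-aware policy picks:",
      max(CANDIDATES, key=wellbeing_aware_reward).text)
```

The numbers are arbitrary; the point is that "optimize performance" is only as protective as the definition of performance baked into the reward.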

Wednesday, March 18, 2026

Emotional and Cognitive “Route” in Decision-Making Process: The Relationship between Executive Functions, Psychophysiological Correlates, Decisional Styles, and Personality

Crivelli, D., Acconito, C., & Balconi, M. (2024).
Brain Sciences, 14(7), 734.

Abstract

Studies on decision-making have classically focused exclusively on its cognitive component. Recent research has shown that a further essential component of decisional processes is the emotional one. Indeed, the emotional route in decision-making plays a crucial role, especially in situations characterized by ambiguity, uncertainty, and risk. Despite this, individual differences concerning such components and their associations with individual traits, decisional styles, and psychophysiological profiles are still understudied. This pilot study aimed to investigate the relationship between individual propensity toward using an emotional or cognitive information-processing route in decision-making, EEG and autonomic correlates of decisional performance as collected via wearable non-invasive devices, and individual personality and decisional traits. Participants completed a novel task based on realistic decisional scenarios while their physiological activity (EEG and autonomic indices) was monitored. Self-report questionnaires were used to collect data on personality traits, individual differences, and decisional styles. Data analyses highlighted two main findings. Firstly, different personality traits and decisional styles showed significant and specific correlations with an individual propensity toward either emotional or cognitive information processing for decision-making. Secondly, task-related EEG and autonomic measures presented a specific and distinct correlation pattern with different decisional styles, maximization traits, and personality traits, suggesting different latent profiles.

Here are some thoughts:

This research is critically important to practicing psychologists as it provides a more holistic and physiologically grounded framework for understanding and assessing decision-making in real-world contexts. By demonstrating how specific personality traits and decision-making styles are linked to measurable psychophysiological markers—such as theta and beta EEG activity and heart rate variability—the study equips clinicians with objective biomarkers that can complement traditional self-report assessments. This integration allows for a more nuanced evaluation of clients' decision-making processes, which are often central to therapeutic outcomes in areas such as stress management, impulse control, and adaptive behavior change.

Furthermore, the findings validate the dual role of emotional and cognitive routes in decision-making, emphasizing that effective emotional regulation and mindfulness traits are associated with a balanced decision-making style. For psychologists, this underscores the importance of interventions that enhance emotional awareness and cognitive flexibility, particularly for clients who exhibit avoidant or dependent decision-making patterns. The use of wearable, non-invasive devices in the study also highlights the growing potential for incorporating accessible neurofeedback and biofeedback tools into therapeutic practice, enabling more personalized and evidence-based approaches to fostering healthier decision-making habits in everyday life.
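To make the "measurable markers" point concrete: heart rate variability is commonly summarized with RMSSD, the root mean square of successive differences between heartbeat (RR) intervals. The minimal Python sketch below is a hypothetical example of what a consumer biofeedback tool computes under the hood; the RR-interval values are invented and no clinical cutoffs are implied.

```python
# Hypothetical illustration (not from Crivelli et al.): RMSSD, a standard
# time-domain heart-rate-variability (HRV) index that wearables can report.
# The RR-interval values below are invented.

import math

def rmssd(rr_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Ten consecutive interbeat (RR) intervals in milliseconds.
rr = [812, 790, 805, 822, 798, 810, 795, 830, 816, 801]
print(f"RMSSD = {rmssd(rr):.1f} ms")
```

Higher RMSSD generally reflects greater parasympathetic (vagal) influence, which is why such autonomic indices appear alongside EEG measures in studies linking physiology to decisional styles.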

Monday, March 16, 2026

The evolving field of digital mental health: current evidence and implementation issues for smartphone apps, generative artificial intelligence, and virtual reality

Torous, J., Linardon, J., et al. (2025).
World Psychiatry: Official Journal of the World Psychiatric Association (WPA), 24(2), 156–174.

Abstract

The expanding domain of digital mental health is transitioning beyond traditional telehealth to incorporate smartphone apps, virtual reality, and generative artificial intelligence, including large language models. While industry setbacks and methodological critiques have highlighted gaps in evidence and challenges in scaling these technologies, emerging solutions rooted in co‐design, rigorous evaluation, and implementation science offer promising pathways forward. This paper underscores the dual necessity of advancing the scientific foundations of digital mental health and increasing its real‐world applicability through five themes. First, we discuss recent technological advances in digital phenotyping, virtual reality, and generative artificial intelligence. Progress in this latter area, specifically designed to create new outputs such as conversations and images, holds unique potential for the mental health field. Given the spread of smartphone apps, we then evaluate the evidence supporting their utility across various mental health contexts, including well‐being, depression, anxiety, schizophrenia, eating disorders, and substance use disorders. This broad view of the field highlights the need for a new generation of more rigorous, placebo‐controlled, and real‐world studies. We subsequently explore engagement challenges that hamper all digital mental health tools, and propose solutions, including human support, digital navigators, just‐in‐time adaptive interventions, and personalized approaches. We then analyze implementation issues, emphasizing clinician engagement, service integration, and scalable delivery models. We finally consider the need to ensure that innovations work for all people and thus can bridge digital health disparities, reviewing the evidence on tailoring digital tools for historically marginalized populations and low‐ and middle‐income countries. Regarding digital mental health innovations as tools to augment and extend care, we conclude that smartphone apps, virtual reality, and large language models can positively impact mental health care if deployed correctly.

Monday, February 2, 2026

Therapeutic Missteps and Moral Injury: When Helping Harms

Gavazzi, J. D., & Slattery, J. M. (2025).
The Pennsylvania Psychologist, 85(4), 19–21.

Abstract

This article explores the spectrum of patient harm in psychotherapy, ranging from routine clinical errors to the severe phenomenon of moral injury. While the therapeutic relationship is built on a fiduciary responsibility to act in a patient's best interest, unintentional missteps—such as cognitive errors, cultural insensitivity, or boundary crossings—can disrupt a client's "meaning-making" process and erode trust. In extreme cases, unprofessional conduct and unethical "innovative" techniques can lead to moral injury, characterized by deep feelings of betrayal, shame, and the violation of one's moral code. By examining the catastrophic case of Genesis Associates, the authors illustrate how the structure of psychotherapy can be weaponized to cause lasting psychological damage. Ultimately, the article advocates for a proactive commitment to ethical pillars (including cultural humility and transparent consultation) to protect the sanctity of the therapeutic alliance.

Monday, January 12, 2026

Why Artificial Intelligence Will Not Replace Human Psychologists: Legal, Ethical, and Clinical Limitations

Gavazzi, J. (2025, December).
Psychotherapy Bulletin, 61(1).

Clinical Impact Statement

The responsible integration of artificial intelligence (AI) into the practice of psychology requires that it function strictly as a tool for human psychologists, who must retain ultimate accountability for all clinical decisions. AI systems cannot replicate the empathy, judgment, and professional responsibility that form the foundation of high-quality psychological care.

This article builds on previous arguments (Gavazzi, 2025a; Gavazzi, 2025b) that, although AI technologies are rapidly advancing, they cannot replace human psychologists performing psychotherapy; this follows from evolutionary advantages in humans across the social, emotional, and cognitive domains that are essential for therapeutic interactions. In addition, these systems are unlikely to replace psychologists in the foreseeable future for practical reasons. Legal, ethical, and clinical barriers—particularly those involving state licensing, clinical judgment, forensic considerations, and accountability—make the deployment of autonomous systems in therapeutic settings impractical and potentially dangerous. This article presents key structural and philosophical reasons why human oversight and involvement remain essential in psychological practice.