Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, July 3, 2025

Mindfulness, Moral Reasoning and Responsibility: Towards Virtue in Ethical Decision-Making

Small, C., & Lew, C. (2019).
Journal of Business Ethics, 169(1),
103–117.

Abstract

Ethical decision-making is a multi-faceted phenomenon, and our understanding of ethics rests on diverse perspectives. While considering how leaders ought to act, scholars have created integrated models of moral reasoning processes that encompass diverse influences on ethical choice. With this, there has been a call to continually develop an understanding of the micro-level factors that determine moral decisions. Both rationalist factors, such as moral processing, and non-rationalist factors, such as virtue and humanity, shape ethical decision-making. Focusing on the role of moral judgement and moral intent in moral reasoning, this study asks what bearing a trait of mindfulness and a sense of moral responsibility may have on this process. A survey measuring mindfulness, moral responsibility and moral judgement, completed by 171 respondents, was used to test four hypotheses on moral judgement and intent in relation to moral responsibility and mindfulness. The results indicate that mindfulness predicts moral responsibility but not moral judgement. Moral responsibility does not predict moral judgement, but moral judgement predicts moral intent. The findings give further insight into the outcomes of mindfulness and expand insights into models of ethical decision-making. We offer suggestions for further research on the role of mindfulness and moral responsibility in ethical decision-making.

Here are some thoughts:

This research explores the interplay between mindfulness, moral reasoning, and moral responsibility in ethical decision-making. Drawing on Rest’s model of moral reasoning—which outlines four phases (awareness, judgment, intent, and behavior)—the study investigates how mindfulness as a virtue influences these stages, particularly moral judgment and intent, and how it relates to a sense of moral responsibility. Regression analyses revealed that while mindfulness did not directly predict moral judgment, it significantly predicted moral responsibility. Additionally, moral judgment was found to strongly predict moral intent.

For practicing psychologists, this study is important for several reasons. First, it highlights the potential role of mindfulness as a trait linked to moral responsibility, suggesting that cultivating mindfulness may enhance ethical decision-making by fostering a greater sense of accountability toward others. This has implications for ethics training and professional development in psychology, especially in fields where practitioners face complex moral dilemmas. Second, the findings underscore the importance of integrating non-rationalist factors—such as virtues and emotional awareness—into traditional models of moral reasoning, offering a more holistic understanding of ethical behavior. Third, the research supports the use of scenario-based approaches in training professionals to navigate real-world ethical challenges, emphasizing the contextual nature of moral reasoning. Finally, the paper contributes to the broader literature on mindfulness by linking it to prosocial behaviors and ethical outcomes, which can inform therapeutic practices aimed at enhancing clients’ moral self-awareness and responsible decision-making.

Wednesday, July 2, 2025

Realization of Empathy Capability for the Evolution of Artificial Intelligence Using an MXene(Ti3C2)-Based Memristor

Wang, Y., Zhang, Y., et al. (2024).
Electronics, 13(9), 1632.

Abstract

Empathy is the emotional capacity to feel and understand the emotions experienced by other human beings from within their frame of reference. As a unique psychological faculty, empathy is an important source of motivation to behave altruistically and cooperatively. Although human-like emotion should be a critical component in the construction of artificial intelligence (AI), the discovery of emotional elements such as empathy is subject to complexity and uncertainty. In this work, we demonstrated an interesting electrical device (i.e., an MXene (Ti3C2) memristor) and successfully exploited the device to emulate a psychological model of “empathic blame”. To emulate this affective reaction, MXene was introduced into memristive devices because of its interesting structure and ionic capacity. Additionally, over several rehearsal repetitions, the self-adaptive characteristic of the memristive weights corresponded to different levels of empathy. Moreover, an artificial neural system was designed to analogously realize a moral judgment with empathy. This work may indicate a breakthrough in making cool machines manifest real voltage-motivated feelings at the level of the hardware rather than the algorithm.

Here are some thoughts:

This research represents a critical step toward endowing machines with human-like emotional capabilities, particularly empathy. Traditionally, AI has been limited to algorithmic decision-making and pattern recognition, lacking the nuanced ability to understand or simulate human emotions. By using an MXene-based memristor to emulate "empathic blame," researchers have demonstrated a hardware-level mechanism that mimics how humans adjust their moral judgments based on repeated exposure to similar situations—an essential component of empathetic reasoning. This breakthrough suggests that future AI systems could be designed not just to recognize emotions but to adaptively respond to them in real time, potentially leading to more socially intelligent machines.

For psychologists, this research raises profound questions about the nature of empathy, its role in moral judgment, and whether artificially created systems can truly embody these traits or merely imitate them. The ability to program empathy into AI could change how we conceptualize machine sentience and emotional intelligence, blurring the lines between biological and artificial cognition. Furthermore, as AI becomes more integrated into social, therapeutic, and even judicial contexts, understanding how machines might "feel" or interpret human suffering becomes increasingly relevant. The study also opens up new interdisciplinary dialogues between neuroscience, ethics, and AI development, emphasizing the importance of considering psychological principles in the design of emotionally responsive technologies. Ultimately, this work signals a shift from purely functional AI toward systems capable of engaging with humans on a deeper, more emotionally resonant level.

Tuesday, July 1, 2025

The Advantages of Human Evolution in Psychotherapy: Adaptation, Empathy, and Complexity

Gavazzi, J. (2025, May 24).
On Board with Professional Psychology.
American Board of Professional Psychology.
Issue 5.

Abstract

The rapid advancement of artificial intelligence, particularly Large Language Models (LLMs), has generated significant concern among psychologists regarding potential impacts on therapeutic practice. 

This paper examines the evolutionary advantages that position human psychologists as irreplaceable in psychotherapy, despite technological advances. Human evolution has produced sophisticated capacities for genuine empathy, social connection, and adaptive flexibility that are fundamental to effective therapeutic relationships. These evolutionarily derived abilities include biologically rooted emotional understanding, authentic empathetic responses, and the capacity for nuanced, context-dependent decision-making. In contrast, LLMs lack consciousness, genuine emotional experience, and the evolutionary framework necessary for deep therapeutic insight. While LLMs can simulate empathetic responses through linguistic patterns, they operate as statistical models without true emotional comprehension or theory of mind. The therapeutic alliance, the cornerstone of successful psychotherapy, depends on authentic human connection and shared experiential understanding that transcends algorithmic processes. Human psychologists demonstrate adaptive complexity in understanding attachment styles, trauma responses, and individual patient needs that current AI cannot replicate.

The paper concludes that while LLMs serve valuable supportive roles in documentation, treatment planning, and professional reflection, they cannot replace the uniquely human relational and interpretive aspects essential to psychotherapy. Psychologists should integrate these technologies as resources while maintaining focus on the evolutionarily grounded human capacities that define effective therapeutic practice.