Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, June 18, 2025

The Role of Emotion Dysregulation in Understanding Suicide Risk: A Systematic Review of the Literature

Rogante, E., et al. (2024).
Healthcare, 12(2), 169.

Abstract
Suicide prevention represents a global imperative, and efforts to identify potential risk factors are intensifying. Among these, emotional regulation abilities represent a transdiagnostic component that may have an impactful influence on suicidal ideation and behavior. Therefore, the present systematic review aimed to investigate the association between emotion dysregulation and suicidal ideation and/or behavior in adult participants. The review followed PRISMA guidelines, and the research was performed through four major electronic databases (PubMed/MEDLINE, Scopus, PsycInfo, and Web of Science) for relevant titles/abstracts published from January 2013 to September 2023. The review included original studies published in peer-reviewed journals and in English that assessed the relationship between emotional regulation, as measured by the Difficulties in Emotional Regulation Scale (DERS), and suicidal ideation and/or behavior. In total, 44 studies were considered eligible, and the results mostly revealed significant positive associations between emotion dysregulation and suicidal ideation, while the findings on suicide attempts were more inconsistent. Furthermore, the findings also confirmed the role of emotion dysregulation as a mediator between suicide and other variables. Given these results, it is important to continue investigating these constructs and conduct accurate assessments to implement effective person-centered interventions.

Here are some thoughts. I used this research in a recent article.

This systematic review explores the role of emotion dysregulation in understanding suicide risk among adults, analyzing 44 studies that assess the association between emotional regulation difficulties—measured primarily by the Difficulties in Emotion Regulation Scale (DERS)—and suicidal ideation and behavior. The findings largely support a significant positive correlation between emotion dysregulation and suicidal ideation across both clinical and nonclinical populations. Specific dimensions of emotion dysregulation, such as impulsivity, lack of emotional clarity, and ineffective use of regulatory strategies, were particularly linked to increased suicidal thoughts. However, results regarding suicide attempts were more inconsistent, with some studies showing a strong link while others found no significant associations.

The review also highlights the mediating role of emotion dysregulation between various risk factors (e.g., childhood trauma, psychopathology, depression) and suicidal outcomes. Emotion dysregulation appears to amplify suicide risk by influencing how individuals cope with psychological pain and stress. Despite methodological limitations—including reliance on self-report measures, sample heterogeneity, and limited longitudinal data—the evidence suggests that improving emotional regulation could be a valuable target for suicide prevention strategies. The authors recommend further research using robust statistical methods and comprehensive assessments to better understand causal pathways and enhance intervention effectiveness.
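
Mediation findings like these make a specific statistical claim: part of a risk factor's effect on suicidal outcomes flows through emotion dysregulation. As a purely illustrative aside, here is a minimal Python sketch of a regression-based (product-of-coefficients) mediation test on simulated data; the variable names are hypothetical, and nothing here reflects the reviewed studies' data or models.

    # Illustrative mediation sketch (simulated data, hypothetical names).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    trauma = rng.normal(size=n)                                  # predictor X
    dysreg = 0.5 * trauma + rng.normal(size=n)                   # mediator M (e.g., a DERS total)
    ideation = 0.4 * dysreg + 0.1 * trauma + rng.normal(size=n)  # outcome Y

    # Path a: X -> M
    a = sm.OLS(dysreg, sm.add_constant(trauma)).fit().params[1]
    # Path b: M -> Y, controlling for X
    b = sm.OLS(ideation, sm.add_constant(np.column_stack([dysreg, trauma]))).fit().params[1]

    # The indirect (mediated) effect; in practice it is tested with
    # bootstrapped confidence intervals rather than a point estimate alone.
    print(f"indirect effect (a * b) = {a * b:.3f}")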

Tuesday, June 17, 2025

Ethical implication of artificial intelligence (AI) adoption in financial decision making.

Owolabi, O. S., Uche, P. C., et al. (2024).
Computer and Information Science, 17(1), 49.

Abstract

The integration of artificial intelligence (AI) into the financial sector has raised ethical concerns that need to be addressed. This paper analyzes the ethical implications of using AI in financial decision-making and emphasizes the importance of an ethical framework to ensure its fair and trustworthy deployment. The study explores various ethical considerations, including the need to address algorithmic bias, promote transparency and explainability in AI systems, and adhere to regulations that protect equity, accountability, and public trust. By synthesizing research and empirical evidence, the paper highlights the complex relationship between AI innovation and ethical integrity in finance. To tackle this issue, the paper proposes a comprehensive and actionable ethical framework that advocates for clear guidelines, governance structures, regular audits, and collaboration among stakeholders. This framework aims to maximize the potential of AI while minimizing negative impacts and unintended consequences. The study serves as a valuable resource for policymakers, industry professionals, researchers, and other stakeholders, facilitating informed discussions, evidence-based decision-making, and the development of best practices for responsible AI integration in the financial sector. The ultimate goal is to ensure fairness, transparency, and accountability while reaping the benefits of AI for both the financial sector and society.

Here are some thoughts:

This paper explores the ethical implications of using artificial intelligence (AI) in financial decision-making. It emphasizes the necessity of an ethical framework to ensure AI is used fairly and responsibly. The study examines ethical concerns like algorithmic bias, the need for transparency and explainability in AI systems, and the importance of regulations that protect equity, accountability, and public trust. The paper also proposes a comprehensive ethical framework with guidelines, governance structures, regular audits, and stakeholder collaboration to maximize AI's potential while minimizing negative impacts.

These themes parallel concerns about using AI in the practice of psychology. Psychologists may also need to be aware of these issues for their own financial and wealth management.

Monday, June 16, 2025

The impact of AI errors in a human-in-the-loop process

Agudo, U., Liberal, K. G., et al. (2024).
Cognitive Research: Principles and Implications, 9(1).

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human–computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time in which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.

Here are some thoughts:

This study explores the impact of AI errors in human-in-the-loop processes, where humans and AI systems collaborate in decision-making. The research specifically investigates how the timing of AI support influences human judgment and decision accuracy. The findings indicate that human judgment is negatively affected by incorrect algorithmic support, particularly when provided before the human's own judgment, leading to decreased accuracy. This research highlights the complexities of human-computer interaction in automated decision-making contexts and emphasizes the need for a deeper understanding of how AI support systems can be effectively integrated to minimize errors and biases.
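
The anchoring mechanism implied by the timing effect can be illustrated with a toy Monte Carlo sketch. This is my construction, not the authors' paradigm, and every parameter is invented; it simply models advice shown before judgment as a stronger anchor than advice shown after, which reproduces the qualitative pattern that early, partly incorrect AI support hurts accuracy more.

    # Toy simulation: does the timing of (sometimes wrong) AI advice matter?
    import numpy as np

    rng = np.random.default_rng(42)
    trials = 10_000
    truth = rng.uniform(0, 100, trials)        # ground-truth judgment score
    own = truth + rng.normal(0, 10, trials)    # unaided human judgment
    ai_wrong = rng.random(trials) < 0.3        # 30% of AI suggestions are wrong
    ai = np.where(ai_wrong, 100 - truth, truth)

    # Advice BEFORE judgment: strong anchoring (weight 0.5 on the suggestion).
    before = 0.5 * own + 0.5 * ai
    # Advice AFTER judgment: mild revision (weight 0.2 on the suggestion).
    after = 0.8 * own + 0.2 * ai

    for label, est in (("before", before), ("after", after)):
        print(f"advice {label}: mean absolute error = {np.abs(est - truth).mean():.1f}")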

This is important for psychologists because it sheds light on the cognitive biases and decision-making processes involved when humans interact with AI systems, which is an increasingly relevant area of study in the field. Understanding these interactions can help psychologists develop interventions and strategies to mitigate negative impacts, such as automation bias, and improve the design of human-computer interfaces to optimize decision-making accuracy and reduce errors in various sectors, including public service, healthcare, and justice.

Sunday, June 15, 2025

Relationship between Personal Ethics and Burnout: The Unexpected Influence of Affective Commitment

Santiago-Torner, C., et al. (2024).
Administrative Sciences, 14(6), 123.

Abstract

Objective: Ethical climates and their influence on emotional health have been the subject of intense debates. However, Personal Ethics as a potential resource that can mitigate Burnout syndrome has gone unnoticed. Therefore, the main objective of this study is to examine the effect of Personal Ethics on the three dimensions that constitute Burnout, considering the moderating influence of Affective Commitment. 

Design/methodology: A model consisting of three simple moderations is used to solve this question. The sample includes 448 professionals from the Colombian electricity sector with university-qualified education. 

Findings: Personal Ethics mitigates Emotional Exhaustion and Depersonalization, but it is not related to Personal Realization. Affective Commitment, unexpectedly, has an inverse moderating effect. In other words, as this type of commitment intensifies, the positive impact of Personal Ethics on Burnout and Depersonalization decreases until it disappears. Furthermore, Affective Commitment does not influence the dynamic between Personal Ethics and self-realization. 

Research limitations/implications: A longitudinal study would strengthen the causal relationships established in this research.

Practical implications: Alignment of values between the individual and the organization is crucial. In fact, integration between the organization and its personnel through organic, open and connected structures increases psychological well-being through values linked to benevolence and understanding.

Social implications: Employees’ emotional health is transcendental beyond the organizational level, as it has a significant impact on personal and family interactions beyond the workplace.

Originality/value: The potential adverse repercussion of Affective Commitment has been barely examined. Additionally, Personal Ethics, when intensified by high Affective Commitment, can lead to extra-role behaviors that transform what is voluntary into a moral imperative. This situation could generate emotional fractures and a decrease in achievement. This perspective, compared to previous research, introduces an innovative element.

Here are some thoughts:

This study investigates the relationship between personal ethics and burnout, highlighting the unexpected moderating influence of affective commitment. While ethical climates have been extensively studied for their impact on emotional well-being, this research focuses on personal ethics as a potential resource for mitigating burnout across its three dimensions. In a sample of 448 university-educated professionals from the Colombian electricity sector, personal ethics mitigated emotional exhaustion and depersonalization, though it was unrelated to personal realization. Unexpectedly, affective commitment moderated these relationships inversely: as employees' emotional attachment to the organization intensified, the protective effect of personal ethics weakened until it disappeared, possibly because strong commitment turns voluntary extra-role behaviors into felt moral imperatives. This research contributes to the understanding of burnout by identifying personal ethics and affective commitment as significant, interacting factors in employee well-being.
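
For readers less familiar with moderation models, the reported pattern, a protective main effect that shrinks as the moderator rises, corresponds to a positive interaction term in a regression. The sketch below uses simulated data and hypothetical variable names, not the authors' dataset or exact model; it only shows how such a simple moderation is typically tested.

    # Illustrative moderation (interaction) test on simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 448  # sample size reported in the abstract

    ethics = rng.normal(size=n)
    commitment = rng.normal(size=n)
    # Simulate an inverse moderation: the protective (negative) effect of
    # personal ethics on exhaustion fades as affective commitment rises.
    exhaustion = -0.4 * ethics + 0.3 * ethics * commitment + rng.normal(size=n)

    df = pd.DataFrame({"ethics": ethics, "commitment": commitment,
                       "exhaustion": exhaustion})

    # 'ethics * commitment' expands to both main effects plus the interaction;
    # a significant positive interaction indicates the buffering effect fades.
    fit = smf.ols("exhaustion ~ ethics * commitment", data=df).fit()
    print(fit.summary().tables[1])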

Saturday, June 14, 2025

Ethical decision-making models: a taxonomy of models and review of issues

Johnson, M. K., Weeks, S. N., et al. (2021).
Ethics & Behavior, 32(3), 195–209.

Abstract

A discussion of ethical decision-making literature is overdue. In this article, we summarize the current literature of ethical decision-making models used in mental health professions. Of 1,520 articles published between 2001 and 2020 that met initial search criteria, 38 articles were included. We report on the status of empirical evidence for the use of these models along with comparisons, limitations, and considerations. Ethical decision-making models were synthesized into eight core procedural components and presented based on the composition of steps present in each model. This taxonomy provides practitioners, trainers, students, and supervisors relevant information regarding ethical decision-making models.


Here are some thoughts:

This article reviews ethical decision-making models used in mental health professions and introduces a taxonomy of these models, defined by eight core procedural components. The study analyzed 38 articles published between 2001 and 2020 to identify these components. The eight core components are:   
  1. Framing the Dilemma: This involves identifying and describing the ethical dilemma.
  2. Considering Codes: This includes reviewing relevant ethical codes and legal standards.
  3. Consultation: Seeking advice from supervisors, colleagues, or ethics experts.
  4. Identifying Stakeholders: Recognizing all individuals and parties affected by the decision.
  5. Generating Alternatives: Developing various potential courses of action.
  6. Assessing Consequences: Evaluating the potential outcomes of each alternative.
  7. Making a Decision: Choosing the best course of action.
  8. Evaluating the Outcome: Reflecting on the decision-making process and its results.    
The paper discusses the empirical evidence for the use of these models, their limitations, and other important considerations for practitioners, trainers, students, and supervisors. 
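
As a purely illustrative aside, the taxonomy's sequential structure lends itself to a simple training checklist. The toy Python sketch below paraphrases the eight components; the paper itself prescribes no software.

    # Toy checklist for a supervision or training exercise.
    STEPS = [
        "Frame the dilemma",
        "Consider relevant ethical codes and legal standards",
        "Consult supervisors, colleagues, or ethics experts",
        "Identify stakeholders affected by the decision",
        "Generate alternative courses of action",
        "Assess the consequences of each alternative",
        "Make a decision",
        "Evaluate the outcome",
    ]

    def walk_through(case_notes: str) -> None:
        """Prompt a trainee to document each component for a given case."""
        print(f"Case: {case_notes}")
        for i, step in enumerate(STEPS, start=1):
            entry = input(f"{i}. {step}: ")
            print(f"   recorded: {entry[:60]}")  # log for supervision review

    if __name__ == "__main__":
        walk_through("Client discloses a potential dual relationship")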

Friday, June 13, 2025

AI Anxiety: a comprehensive analysis of psychological factors and interventions

Kim, J. J. H., Soh, J., et al. (2025).
AI and Ethics.

Abstract

The rapid advancement of artificial intelligence (AI) has raised significant concerns regarding its impact on human psychology, leading to a phenomenon termed AI Anxiety—feelings of apprehension or fear stemming from the accelerated development of AI technologies. Although AI Anxiety is a critical concern, the current literature lacks a comprehensive analysis addressing this issue. This paper aims to fill that gap by thoroughly examining the psychological factors underlying AI Anxiety and proposing effective solutions to tackle the problem. We begin by comparing AI Anxiety with Automation Anxiety, highlighting the distinct psychological impacts associated with AI-specific advancements. We delve into the primary contributor to AI Anxiety—the fear of replacement by AI—and explore secondary causes such as uncontrolled AI growth, privacy concerns, AI-generated misinformation, and AI biases. To address these challenges, we propose multidisciplinary solutions, offering insights into educational, technological, regulatory, and ethical guidelines. Understanding the root causes of AI Anxiety and implementing strategic interventions are critical steps for mitigating its rise as society enters the era of pervasive AI.


Here are some thoughts:

The rapid advancement of artificial intelligence (AI) has led to a growing concern termed "AI Anxiety," which is the apprehension or fear individuals experience due to the fast-paced development of AI technologies. This anxiety is multifaceted, encompassing fears about job security, privacy infringements, the loss of control over AI systems, and the potential for AI to generate misinformation and exhibit biases. While AI Anxiety shares similarities with Automation Anxiety, which arose during the Industrial Revolution with the introduction of machinery, it presents unique challenges. Unlike Automation Anxiety, which was primarily focused on the replacement of manual labor, AI Anxiety extends to the replacement of cognitive and creative skills across various sectors, including healthcare, finance, and education. The pervasive nature of AI, its integration into personal lives, and the ethical dilemmas it raises contribute to a deeper and more complex form of anxiety.

Thursday, June 12, 2025

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Shojaee, P., et al. (2025).
Apple.

Abstract

Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established mathematical and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from data contamination and does not provide insights into the reasoning traces’ structure and quality. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of compositional complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs “think”. Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.

The paper can be located here.

Here are some thoughts:

This paper is important to psychologists because it explores how Large Reasoning Models (LRMs) generate reasoning processes that appear human-like but may lack true understanding—an illusion that mirrors aspects of human cognition. By analyzing LRMs’ step-by-step reasoning traces, the study reveals striking parallels to human reasoning heuristics, biases, and limitations, such as inconsistent logic, computational failures under complexity, and a collapse in effort beyond a certain threshold. These findings offer psychologists a novel framework to compare AI and human reasoning, particularly in domains like problem-solving, metacognition, and cognitive overload. Additionally, the paper raises urgent questions about human-AI interaction: if people overtrust AI-generated reasoning (despite its flaws), this could influence reliance on AI in therapeutic, educational, or decision-making contexts. The study’s methods—using controlled puzzles to dissect reasoning—also provide psychologists with tools to test human cognition with similar precision. Ultimately, this work challenges assumptions about what constitutes "genuine" reasoning, bridging AI research and psychological theories of intelligence, bias, and the boundaries of human and artificial thought.
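
To make the paper's method concrete: controllable puzzles give experimenters a single knob that scales the required reasoning chain while leaving the rules untouched. The sketch below uses Tower of Hanoi, one of the puzzle families used in evaluations of this kind, to show how each one-unit increase in problem size doubles the optimal solution a model must produce; it illustrates the evaluation idea and is not the paper's code.

    # Tower of Hanoi as a controllable puzzle environment: one knob
    # (number of disks) scales compositional complexity exponentially.
    def hanoi(n, src="A", aux="B", dst="C", moves=None):
        """Return the optimal move sequence for n disks."""
        if moves is None:
            moves = []
        if n == 0:
            return moves
        hanoi(n - 1, src, dst, aux, moves)  # clear the way
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)  # restack on top of it
        return moves

    for n in range(1, 11):
        assert len(hanoi(n)) == 2 ** n - 1  # optimal length is 2^n - 1
        print(f"{n} disks -> {2 ** n - 1} moves")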

Wednesday, June 11, 2025

Communitarianism revisited

Etzioni, A. (2014).
Journal of Political Ideologies, 19(3), 241–260.

Abstract

This article provides a retrospective account and analysis of communitarianism. Drawing upon the author's involvement with the political branch of communitarianism, it attempts to summarize both the history of the school of thought as well as its most prominent ideas. These include the communitarian emphasis on the common good; the effort to find an acceptable balance between individual rights and social responsibilities; the basis of social order; and the need to engage in substantive moral dialogues. The article closes with a discussion of cultural relativism, according to which communities ought to be the ultimate arbitrators of the good, and a universalistic position.


Here are some thoughts:

This article offers a comprehensive overview and critical reflection on the evolution of communitarian thought, particularly as it relates to political philosophy and public life. Etzioni traces the historical roots of communitarianism, highlighting its emphasis on the common good, the balance between individual rights and social responsibilities, and the necessity of substantive moral dialogue within communities. He notes that while communitarianism is a relatively small school in academic philosophy, its core ideas, such as prioritizing the welfare of the community alongside individual freedoms, are deeply embedded in various religious, political, and cultural traditions across the world.

The article explores the resurgence of communitarian ideas in the 1980s and 1990s as a response to the perceived excesses of individualism promoted by liberalism and laissez-faire conservatism. Etzioni discusses the tension between individual autonomy and communal obligations, arguing for a nuanced approach that seeks equilibrium between these often competing values, adapting as societal conditions change. He also addresses critiques of communitarianism, including concerns about its potential association with authoritarianism and the vagueness of the concept of "community."

For practicing psychologists, this article is significant because it underscores the importance of considering both individual and collective dimensions in understanding human behavior, ethical decision-making, and therapeutic practice. Recognizing the interplay between personal autonomy and social context can enhance psychologists’ ability to support clients in navigating moral dilemmas, fostering social connectedness, and promoting well-being within diverse communities.

Tuesday, June 10, 2025

Prejudiced patients: Ethical considerations for addressing patients’ prejudicial comments in psychotherapy.

Mbroh, H., Najjab, A., et al. (2020).
Professional Psychology: Research and Practice, 51(3), 284–290.

Abstract

Psychologists will often encounter patients who make prejudiced comments during psychotherapy. Some psychologists may argue that the obligations to social justice require them to address these comments. Others may argue that the obligation to promote the psychotherapeutic process requires them to ignore such comments. The authors present a decision-making strategy and an intervention based on principle-based ethics for thinking through such dilemmas.

Public Significance Statement—

This article identifies ethical principles psychologists should consider when deciding whether to address their patients’ prejudicial comments in psychotherapy. It also provides an intervention strategy for addressing patients’ prejudicial comments.


Here are some thoughts:

The article explores how psychologists should ethically respond when clients express prejudicial views during therapy. The authors highlight a tension between two key obligations: the duty to promote the well-being of the patient (beneficence) and the broader responsibility to challenge social injustice (general beneficence). Using principle-based ethics, the article presents multiple real-life scenarios in which clients make discriminatory remarks—whether racist, ageist, sexist, or homophobic—and examines the ethical dilemmas that arise. In each case, psychologists must consider the context, potential harm, and therapeutic alliance before choosing whether or how to intervene. The authors emphasize that while tolerance for clients' values is important, it should not extend to condoning harmful biases. They propose a structured approach to addressing prejudice in session: show empathy, create cognitive dissonance by highlighting harm, and invite the client to explore the issue further. Recommendations include ongoing education, self-reflection, consultation, and thoughtful, non-punitive interventions. Ultimately, the article argues that addressing patient prejudice is ethically justifiable when done skillfully, and doing so can improve both individual therapy outcomes and societal well-being.