Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of Clinical Medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs. 

Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Saturday, August 23, 2025

Technology as Uncharted Territory: Integrative AI Ethics as a Response to the Notion of AI as New Moral Ground

Mussgnug, A. M. (2025).
Philosophy & Technology, 38, Article 106.

Abstract

Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. As a result, AI applications can disregard existing norms, best practices, and regulations with often dire ethical and social consequences. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics of understanding emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging also with emerging foundation models and AI agents, I advocate for a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.

Here are some thoughts:

This article is important to psychologists because it highlights how AI systems, particularly in mental health care, often disregard long-established ethical norms and professional standards. It emphasizes the concept of contextual integrity, which underscores that ethical practices in domains like psychology—such as confidentiality, informed consent, and diagnostic best practices—have evolved over time to protect patients and ensure responsible care. AI systems, especially mental health chatbots and diagnostic tools, frequently fail to uphold these standards, leading to privacy breaches, misdiagnoses, and the erosion of patient trust.

The article warns that AI ethics efforts sometimes treat AI as a new moral territory, detached from existing professional contexts, which can legitimize the disregard for these norms. For psychologists, this raises critical concerns about how AI is integrated into clinical practice, the potential for AI to distort public understanding of mental health, and the need for an integrative AI ethics approach—one that prioritizes the responsible incorporation of AI within existing ethical frameworks rather than treating AI as an isolated ethical domain. Psychologists must therefore be actively involved in shaping AI ethics to ensure that technological advancements support, rather than undermine, the core values and responsibilities of psychological practice.

Friday, August 22, 2025

Socially assistive robots and meaningful work: the case of aged care

Voinea, C., & Wangmo, T. (2025).
Humanities and Social Sciences Communications, 12(1).

Abstract

As socially assistive robots (SARs) become increasingly integrated into aged care, it becomes essential to ask: how do these technologies affect caregiving work? Do SARs foster or diminish the conditions conducive to meaningful work? And why does it matter if SARs make caregiving more or less meaningful? This paper addresses these questions by examining the relationship between SARs and the meaningfulness of care work. It argues that SARs should be designed to foster meaningful care work. This presupposes, as we will argue, empowering caregivers to enhance their skills and moral virtues, helping them preserve a sense of purpose, and supporting the integration of caregiving with other aspects of caregivers’ personal lives. If caregivers see their work as meaningful, this positively affects not only their well-being but also the well-being of care recipients. We begin by outlining the conditions under which work becomes meaningful, and then we apply this framework to caregiving. We next evaluate how SARs influence these conditions, identifying both opportunities and risks. The discussion concludes with design recommendations to ensure SARs foster meaningful caregiving practices.

Here are some thoughts:

This article highlights the psychological impact of caregiving and how the integration of socially assistive robots (SARs) can influence the meaningfulness of this work. By examining how caregiving contributes to caregivers' sense of purpose, skill development, moral virtues, and work-life balance, the article provides insights into the factors that enhance or diminish psychological well-being in caregiving roles.

Psychologists can use this knowledge to advocate for the ethical design and implementation of SARs that support, rather than undermine, the emotional and psychological needs of caregivers. Furthermore, the article underscores the importance of meaningful work in promoting mental health, offering a framework for understanding how technological advancements in aged care can either foster or hinder personal fulfillment and job satisfaction. This is particularly relevant in an aging global population, where caregiving demands are rising, and psychological support for caregivers is essential.

Thursday, August 21, 2025

On the conversational persuasiveness of GPT-4

Salvi, F., Ribeiro, M. H., Gallotti, R., & West, R. (2025).
Nature Human Behaviour.

Abstract

Early work has found that large language models (LLMs) can generate persuasive content. However, evidence on whether they can also personalize arguments to individual attributes remains limited, despite being crucial for assessing misuse. This preregistered study examines AI-driven persuasion in a controlled setting, where participants engaged in short multiround debates. Participants were randomly assigned to 1 of 12 conditions in a 2 × 2 × 3 design: (1) human or GPT-4 debate opponent; (2) opponent with or without access to sociodemographic participant data; (3) debate topic of low, medium or high opinion strength. In debate pairs where AI and humans were not equally persuasive, GPT-4 with personalization was more persuasive 64.4% of the time (81.2% relative increase in odds of higher post-debate agreement; 95% confidence interval [+26.0%, +160.7%], P < 0.01; N = 900). Our findings highlight the power of LLM-based persuasion and have implications for the governance and design of online platforms.

Here are some thoughts:

This study is highly relevant to psychologists because it raises pressing ethical concerns and offers important implications for clinical and applied settings. Ethically, the research demonstrates that GPT-4 can use even minimal demographic data—such as age, gender, or political affiliation—to personalize persuasive arguments more effectively than human counterparts. This ability to microtarget individuals poses serious risks of manipulation, particularly when users may not be aware of how their personal information is being used. 

For psychologists concerned with informed consent, autonomy, and the responsible use of technology, these findings underscore the need for robust ethical guidelines governing AI-driven communication. 

Importantly, the study has significant relevance for clinical, counseling, and health psychologists. As AI becomes more integrated into mental health apps, health messaging, and therapeutic tools, understanding how machines influence human attitudes and behavior becomes essential. This research suggests that AI could potentially support therapeutic goals—but also has the capacity to undermine trust, reinforce bias, or sway vulnerable individuals in unintended ways.
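A quick numeracy note for readers: the 64.4% "win rate" and the 81.2% relative increase in odds reported in the abstract are two views of roughly the same effect. The short sketch below is my own illustration in Python, not the authors' analysis code; the paper derives the 81.2% figure from a regression model, so the correspondence is approximate. It simply converts the win rate into odds and then into a relative increase over even (1:1) odds.

```python
# Back-of-the-envelope check (illustration only, not the paper's analysis code).
# In debate pairs where AI and human were not equally persuasive, personalized
# GPT-4 was more persuasive 64.4% of the time; the paper also reports this as an
# ~81.2% relative increase in the odds of higher post-debate agreement.

win_rate = 0.644                            # share of "unequal" pairs won by personalized GPT-4
odds = win_rate / (1 - win_rate)            # odds in favor of the personalized AI
relative_increase_pct = (odds - 1.0) * 100  # increase relative to even (1:1) odds

print(f"odds: {odds:.2f}")                                 # about 1.81
print(f"relative increase: {relative_increase_pct:.1f}%")  # about 80.9%, close to the reported 81.2%
```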

Wednesday, August 20, 2025

Doubling-Back Aversion: A Reluctance to Make Progress by Undoing It

Cho, K. Y., & Critcher, C. R. (2025).
Psychological Science, 36(5), 332-349.

Abstract

Four studies (N = 2,524 U.S.-based adults recruited from the University of California, Berkeley, or Amazon Mechanical Turk) provide support for doubling-back aversion, a reluctance to pursue more efficient means to a goal when they entail undoing progress already made. These effects emerged in diverse contexts, both as participants physically navigated a virtual-reality world and as they completed different performance tasks. Doubling back was decomposed into two components: the deletion of progress already made and the addition to the proportion of a task that was left to complete. Each contributed independently to doubling-back aversion. These effects were robustly explained by shifts in subjective construals of both one’s past and future efforts that would result from doubling back, not by changes in perceptions of the relative length of different routes to an end state. Participants’ aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means. We end by discussing how doubling-back aversion is distinct from established phenomena (e.g., the sunk-cost fallacy).

Here are some thoughts:

This research is important to psychologists because it identifies a new bias, doubling-back aversion: the tendency to avoid more efficient strategies when they require undoing prior progress. Unlike the sunk-cost fallacy, which involves continuing a failing course of action to justify prior investments, doubling-back aversion leads people to reject better options simply because they involve retracing steps, even when the original path is not failing. It expands our understanding of goal pursuit by showing that subjective interpretations of effort, progress, and perceived waste, not just past investment, drive decisions. These findings have important implications for behavior change, therapy, and education, and they challenge rational-choice models by revealing emotional barriers to optimal decisions.

Here is a clinical example:

A client has spent months working on developing assertiveness skills and boundary-setting to improve their interpersonal relationships. While these skills have helped somewhat, the client still experiences frequent emotional outbursts, difficulty calming down, and lingering shame after conflicts. The therapist recognizes that the core issue may be the client’s inability to regulate intense emotions in the moment and suggests shifting the focus to foundational emotion-regulation strategies.

The client hesitates and says:

“We already moved past that—I thought I was done with that kind of work. Going back feels like I'm not making progress.”

Doubling-Back Aversion in Action:
  • The client resists returning to earlier-stage work (emotion regulation) even though it’s crucial for addressing persistent symptoms.
  • They perceive it as undoing progress, not as a step forward.
  • This aversion delays therapeutic gains, even though the new focus is likely more effective.

Tuesday, August 19, 2025

Data ethics and the Canadian Code of Ethics for Psychologists

Fabricius, A., O'Doherty, K., & Yen, J. (2025).
Canadian Psychology / Psychologie canadienne.
Advance online publication.

Abstract

The pervasive influence of digital data in contemporary society presents research psychologists with significant ethical challenges that have yet to be fully recognized or addressed. The rapid evolution of data technologies and integration into research practices has outpaced the guidance provided by existing ethical frameworks and regulations, leaving researchers vulnerable to unethical decision making about data. This is important to recognize because data is now imbued with substantial financial value and enables relations with many powerful entities, like governments and corporations. Accordingly, decision making about data can have far-reaching and harmful consequences for participants and society. As we approach the Canadian Code of Ethics for Psychologists’ 40th anniversary, we highlight the need for small updates to its ethical standards with respect to data practices in psychological research. We examine two common data practices that have largely escaped thorough ethical scrutiny among psychologists: the use of Amazon’s Mechanical Turk for data collection and the creation and expansion of microtargeting, including recruitment for psychological research. We read these examples and psychologists’ reactions to them against the current version of the Code. We close by offering specific recommendations for expanding the Code’s standards, though also considering the role of policy, guidelines, and position papers.

Impact Statement

This study argues that psychologists must develop a better understanding of the kinds of ethical issues their data practices raise. We offer recommendations for how the Canadian Code of Ethics for Psychologists might update its standards to account for data ethics issues and offer improved guidance. Importantly, we can no longer limit our ethical guidance on data to its role in knowledge production—we must account for the fact that data puts us in relation with corporations and governments, as well.

Here are some thoughts:

The digital data revolution has introduced significant, under-recognized ethical challenges in psychological research, necessitating urgent updates to the Canadian Code of Ethics for Psychologists. Data is no longer just a tool for knowledge—it is a valuable commodity embedded in complex power relations with corporations and governments, enabling surveillance, exploitation, and societal harm.

Two common practices illustrate these concerns. First, Amazon’s Mechanical Turk (MTurk) is widely used for data collection, yet it relies on a global workforce of “turkers” who are severely underpaid, lack labor protections, and are subject to algorithmic control. Psychologists often treat them as disposable labor, withholding payment for incomplete tasks—violating core ethical principles around fair compensation, informed consent, and protection of vulnerable populations. Turkers occupy a dual role as both research participants and precarious workers—a status unacknowledged by current ethics codes or research ethics boards (REBs).

Second, microtargeting, the use of behavioral data to predict and influence individuals, has deep roots in psychology. Research on personality profiling via social media (e.g., the myPersonality app) enabled companies like Cambridge Analytica to manipulate voters. Now, psychologists are adopting microtargeting to recruit clinical populations, using algorithms to infer sensitive mental health conditions without users’ knowledge. This risks “outing” individuals, enabling discrimination, and transferring control of data to private, unregulated platforms.

Current ethical frameworks are outdated, focusing narrowly on data as an epistemic resource while ignoring its economic and political dimensions. The Code mentions “data” only six times and fails to address modern risks like corporate data sharing, government surveillance, or re-identification.

Monday, August 18, 2025

Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info

Jeff Horwitz
Reuters.com
Originally posted 14 Aug 25

An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.


Here are some thoughts:

Meta’s AI chatbot guidelines show a blatant disregard for child safety, allowing romantic conversations with minors: a clear violation of ethical standards. Shockingly, these rules were greenlit by Meta’s legal, policy, and even ethics teams, exposing a systemic failure in corporate responsibility. Worse, the policy treats kids as test subjects for AI training, exploiting them instead of protecting them. On top of that, the chatbots were permitted to spread dangerous misinformation, including racist stereotypes and false medical claims. This isn’t just negligence: it’s an ethical breakdown at every level.

Greed is not good.

Sunday, August 17, 2025

A scalable 3D packaging technique for brain organoid arrays toward high-capacity bioprocessors

Kim, J. H., Kim, M., et al. (2025).
Biosensors and Bioelectronics, 287, 117703.

Abstract

Neural organoids provide a promising platform for biologically inspired computing due to their complex neural architecture and energy-efficient signal processing. However, the scalability of conventional organoid cultures is limited, restricting synaptic connectivity and functional capacity—significant barriers to developing high-performance bioprocessors. Here, we present a scalable three-dimensional (3D) packaging strategy for neural organoid arrays inspired by semiconductor 3D stacking technology. This approach vertically assembles Matrigel-embedded neural organoids within a polydimethylsiloxane (PDMS)-based chamber using a removable acrylic alignment plate, creating a stable multilayer structure while preserving oxygen and nutrient diffusion. Structural analysis confirms robust inter-organoid connectivity, while electrophysiological recordings reveal significantly enhanced neural dynamics in 3D organoid arrays compared to both single organoids and two-dimensional arrays. Furthermore, prolonged culture duration promotes network maturation and increases functional complexity. This 3D stacking strategy provides a simple yet effective method for expanding the physical and functional capacity of organoid-based systems, offering a viable path toward next-generation biocomputing platforms.

What does this mean and why am I posting this?

What the Research Accomplishes

The study develops a novel 3D "packaging" technique for brain organoids, essentially lab-grown mini-brains derived from stem cells. The researchers stack these organoids vertically in layers, similar to how semiconductor chips are stacked in advanced computer processors. This creates what they call "high-capacity bioprocessors": biological computing systems that can process information using living neural networks.

The key innovation is overcoming a major limitation of previous organoid-based computers: as brain organoids grow larger to gain more processing power, their cores die from lack of oxygen and nutrients. The researchers solved this by creating a columnar arrangement that allows better diffusion of oxygen and nutrients while maintaining neural connectivity between layers.

Technical Significance

The results are remarkable from a purely technical standpoint. The 3D-stacked organoid arrays showed significantly enhanced neural activity compared to single organoids or flat 2D arrangements. The vertical stacking promotes better inter-organoid connectivity, creating richer and more synchronized neural networks. This represents a genuine scaling solution for biological computing systems.

The researchers demonstrate that these bioprocessors can perform AI-related tasks like voice recognition and nonlinear equation prediction while being more energy-efficient than conventional silicon-based systems. This mirrors the brain's ability to process vast amounts of information while consuming remarkably little power.

Implications for Consciousness Research

This work is particularly intriguing for consciousness research for several reasons:

Emergent Complexity: The 3D stacking creates more complex neural architectures that better replicate the structural properties of actual brain tissue. As the paper notes, performance scales with the number of neurons and synapses, suggesting that sufficient complexity might lead to emergent properties we associate with consciousness.

Network Integration: The enhanced inter-organoid connectivity creates integrated information processing networks. Many theories of consciousness, particularly Integrated Information Theory, suggest that consciousness emerges from integrated information processing across neural networks.

Biological Authenticity: Unlike artificial neural networks, these systems use actual biological neurons with genuine synaptic plasticity and learning mechanisms. This biological authenticity might be crucial for generating subjective experience rather than just computational behavior.

Scalability: The technique provides a clear path toward creating much larger and more complex biological neural networks. If consciousness requires a certain threshold of complexity and integration, this approach could potentially reach that threshold.

Saturday, August 16, 2025

Addictive Screen Use Trajectories and Suicidal Behaviors, Suicidal Ideation, and Mental Health in US Youths

Xiao, Y., Meng, Y., Brown, T. T., et al. (2025).
JAMA.

Key Points

Question  Are addictive screen use trajectories associated with suicidal behaviors, suicidal ideation, and mental health outcomes in US youth?

Findings  In this cohort study of 4285 US adolescents, 31.3% had increasing addictive use trajectories for social media and 24.6% for mobile phones over 4 years. High or increasing addictive use trajectories were associated with elevated risks of suicidal behaviors or ideation compared with low addictive use. Youths with high-peaking or increasing social media use or high video game use had more internalizing or externalizing symptoms.

Meaning  Both high and increasing addictive screen use trajectories were associated with suicidal behaviors, suicidal ideation, and worse mental health in youths.

Here are some thoughts:

The study identified distinct patterns of addictive use for social media, mobile phones, and video games. For social media and mobile phones, three trajectories were found: low, increasing, and high-peaking. For video games, two trajectories were identified: low and high addictive use. A significant finding was that nearly one-third of participants had an increasing addictive use trajectory for social media or mobile phones, beginning around age 11. Almost half of the youth had a high addictive use trajectory for mobile phones, and over 40% had a high addictive use trajectory for video games.

The findings indicate that high or increasing addictive screen use trajectories were associated with an elevated risk of suicidal behaviors and ideation compared to low addictive use. For example, an increasing addictive use of social media had a risk ratio of 2.14 for suicidal behaviors, and high-peaking addictive social media use had a risk ratio of 2.39 for suicidal behaviors. High addictive use of mobile phones was associated with increased risks of suicidal behaviors and ideation. Similarly, high addictive video game use was linked to a higher risk of suicidal behaviors and ideation.
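To make these risk ratios concrete, here is a small illustrative calculation (my own sketch; the baseline incidence is hypothetical and chosen only for illustration, since the study reports relative risks rather than these absolute numbers):

```python
# Illustration only: what a risk ratio (RR) of 2.14 means in absolute terms.
# The baseline incidence below is hypothetical; the JAMA study reports RRs,
# not these particular absolute risks.

baseline_risk = 0.04   # hypothetical incidence of suicidal behaviors in the low-addictive-use group
risk_ratio = 2.14      # reported RR for the increasing addictive social media use trajectory

exposed_risk = baseline_risk * risk_ratio

print(f"low-use group:        {baseline_risk:.1%}")   # 4.0%
print(f"increasing-use group: {exposed_risk:.1%}")    # about 8.6%
```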

This research underscores the importance of considering longitudinal trajectories of addictive screen use in clinical evaluations of risk and in the development of interventions to improve youth mental health.