Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, October 2, 2025

We must build AI for people; not to be a person

Mustafa Suleyman
Originally posted 19 AUG 25

I write, to think. More than anything this essay is an attempt to think through a bunch of hard, highly speculative ideas about how AI might unfold in the next few years. A lot is being written about the impending arrival of superintelligence; what it means for alignment, containment, jobs, and so on. Those are all important topics.

But we should also be concerned about what happens in the run up towards superintelligence. We need to grapple with the societal impact of inventions already largely out there, technologies which already have the potential to fundamentally change our sense of personhood and society.

My life’s mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I’m focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported.

I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.


Here are some thoughts:

This article is critically important to psychologists because it highlights the growing psychological risks associated with human-AI interactions, particularly the potential for people to develop delusional or deeply emotional attachments to AI systems that simulate consciousness. As AI becomes more sophisticated in mimicking empathy, memory, and personality, individuals may begin to perceive these systems as sentient beings, leading to concerns around "AI psychosis," impaired reality testing, and emotional dependency. Psychologists must prepare for an increase in clients struggling with blurred boundaries between human and machine relationships, especially as AI companions exhibit traits that trigger innate human social and emotional responses. The article calls for proactive guardrails and design principles to prevent harm—aligning closely with psychology’s role in safeguarding mental health, promoting digital well-being, and understanding how technology influences cognition, attachment, and self-concept in an increasingly AI-mediated world.

Wednesday, October 1, 2025

Theory Is All You Need: AI, Human Cognition, and Causal Reasoning

Felin, T., & Holweg, M. (2024).
SSRN Electronic Journal.

Abstract

Scholars argue that artificial intelligence (AI) can generate genuine novelty and new knowledge and, in turn, that AI and computational models of cognition will replace human decision making under uncertainty. We disagree. We argue that AI’s data-based prediction is different from human theory-based causal logic and reasoning. We highlight problems with the decades-old analogy between computers and minds as input–output devices, using large language models as an example. Human cognition is better conceptualized as a form of theory-based causal reasoning rather than AI’s emphasis on information processing and data-based prediction. AI uses a probability-based approach to knowledge and is largely backward looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty. We introduce the idea of data–belief asymmetries to highlight the difference between AI and human cognition, using the example of heavier-than-air flight to illustrate our arguments. Theory-based causal reasoning provides a cognitive mechanism for humans to intervene in the world and to engage in directed experimentation to generate new data. Throughout the article, we discuss the implications of our argument for understanding the origins of novelty, new knowledge, and decision making under uncertainty.

Here are some thoughts:

This paper challenges the dominant view that artificial intelligence (AI), particularly large language models (LLMs), mirrors or will soon surpass human cognition. The authors argue against the widespread computational metaphor of the mind, which treats human thinking as data-driven, predictive information processing akin to AI. Instead, they emphasize that human cognition is fundamentally theory-driven and rooted in causal reasoning, experimentation, and the generation of novel, heterogeneous beliefs—often in defiance of existing data or consensus. Drawing on historical examples like the Wright brothers, who succeeded despite prevailing scientific skepticism, the paper illustrates how human progress often stems from delusional-seeming ideas that later prove correct. Unlike AI systems that rely on statistical pattern recognition and next-word prediction from vast datasets, humans engage in counterfactual thinking, intentional intervention, and theory-building, enabling true innovation and scientific discovery. The authors caution against over-reliance on prediction-based AI in decision-making, especially under uncertainty, and advocate for a “theory-based view” of cognition that prioritizes causal understanding over mere correlation. In essence, they contend that while AI excels at extrapolating from the past, only human theory-making can generate genuinely new knowledge.

Tuesday, September 30, 2025

Does counting change what counts? Quantification fixation biases decision-making

Chang, L. W., et al. (2024).
PNAS, 121(46).

Abstract

People often rely on numeric metrics to make decisions and form judgments. Numbers can be difficult to process, leading to their underutilization, but they are also uniquely suited to making comparisons. Do people decide differently when some dimensions of a choice are quantified and others are not? We explore this question across 21 preregistered experiments (8 in the main text, N = 9,303; 13 in supplement, N = 13,936) involving managerial, policy, and consumer decisions. Participants face choices that involve tradeoffs (e.g., choosing between employees, one of whom has a higher likelihood of advancement but lower likelihood of retention), and we randomize which dimension of each tradeoff is presented numerically and which is presented qualitatively (using verbal estimates, discrete visualizations, or continuous visualizations). We show that people systematically shift their preferences toward options that dominate on tradeoff dimensions conveyed numerically—a pattern we dub “quantification fixation.” Further, we show that quantification fixation has financial consequences—it emerges in incentive-compatible hiring tasks and in charitable donation decisions. We identify one key mechanism that underlies quantification fixation and moderates its strength: When making comparative judgments, which are essential to tradeoff decisions, numeric information is more fluent than non-numeric information. Our findings suggest that when we count, we change what counts.

Significance

Across 21 experiments with over 23,000 participants in managerial, policy, and consumer contexts, we identify a critical distortion that shapes how people make decisions involving tradeoffs across qualitative and quantitative attributes. When making hiring, donation, and policy decisions, people tend to privilege quantitative information, favoring options that dominate on the dimension described numerically. This “quantification fixation” is driven by the perception that numbers are easier to use for comparative decision-making; people who are more comfortable with numbers—those higher in subjective numeracy—are more likely to exhibit quantification fixation. As quantification becomes increasingly prevalent, the comparison fluency of numbers may systematically skew decisions. These findings suggest that quantifying certain choice features can have important repercussions for how decisions are made.

Here are some thoughts:

For psychologists, this research underscores a critical insight: the act of quantifying information is not neutral. It shapes perception, distorts tradeoffs, and can lead patients to make choices that feel rational but may not align with their true values or well-being.

By recognizing quantification fixation, psychologists can become more effective guides—helping patients see beyond the numbers, appreciate qualitative dimensions of their lives, and make decisions that are not just data-driven, but meaning-driven.

In short, when we count, we change what counts. Psychologists have a vital role in ensuring that what should count—emotional truth, personal values, and human experience—is not lost in the numbers.

Monday, September 29, 2025

The narrow search effect and how broadening search promotes belief updating

Leung, E., & Urminsky, O. (2025).
PNAS, 122(13).

Abstract

Information search platforms, from Google to AI-assisted search engines, have transformed information access but may fail to promote a shared factual foundation. We demonstrate that the combination of users’ prior beliefs influencing their search terms and the narrow scope of search algorithms can limit belief updating from search. We test this “narrow search effect” across 21 studies (14 preregistered) using various topics (e.g., health, financial, societal, political) and platforms (e.g., Google, ChatGPT, AI-powered Bing, our custom-designed search engine and AI chatbot interfaces). We then test user-based and algorithm-based interventions to counter the “narrow search effect” and promote belief updating. Studies 1 to 5 show that users’ prior beliefs influence the direction of the search terms, thereby generating narrow search results that limit belief updating. This effect persists across various domains (e.g., beliefs related to coronavirus, nuclear energy, gas prices, crime rates, bitcoin, caffeine, and general food or beverage health concerns; Studies 1a to 1b, 2a to 2g, 3, 4), platforms (e.g., Google—Studies 1a to 1b, 2a to 2g, 4, 5; ChatGPT, Study 3), and extends to consequential choices (Study 5). Studies 6 and 7 demonstrate the limited efficacy of prompting users to correct for the impact of narrow searches on their beliefs themselves. Using our custom-designed search engine and AI chatbot interfaces, Studies 8 and 9 show that modifying algorithms to provide broader results can encourage belief updating. These findings highlight the need for a behaviorally informed approach to the design of search algorithms.

Significance

In a time of societal polarization, the combination of people’s search habits and the search tools they use being optimized for relevance may perpetuate echo chambers. We document this across various diverse studies spanning health, finance, societal, and political topics on platforms like Google, ChatGPT, AI-powered Bing, and our custom-designed search engine and AI chatbot platforms. Users’ biased search behaviors and the narrow optimization of search algorithms can combine to reinforce existing beliefs. We find that algorithm-based interventions are more effective than user-based interventions to mitigate these effects. Our findings demonstrate the potential for behaviorally informed search algorithms to be a better tool for retrieving information, promoting the shared factual understanding necessary for social cohesion.


Here are some thoughts:

For psychologists, this work is a compelling demonstration of how classic cognitive biases operate in modern digital environments and how they can be mitigated not just by changing minds, but by changing the systems that shape information exposure. It calls for greater interdisciplinary collaboration between psychology, human-computer interaction, and AI ethics to design technologies that support, rather than hinder, rational belief updating and informed decision-making.

Clinically, psychologists can now better understand that resistance to change may not stem solely from emotional defenses or entrenched schemas, but also from how people actively seek information in narrow, belief-consistent ways. Crucially, the findings show that structural interventions—like guiding patients to consider broader perspectives or exposing them to balanced evidence—can be more effective than simply urging them to “reflect” on their thinking. This supports the use of active cognitive restructuring techniques in therapy, such as examining multiple viewpoints or generating alternative explanations, to counteract the natural tendency toward narrow search. 
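On the algorithm side, here is a minimal, hypothetical sketch of what a "broadened retrieval" intervention along these lines could look like; it is not the authors' implementation, and the antonym list, function name, and neutral-query heuristic are illustrative assumptions only. The idea is simply that a directionally framed query is expanded with its opposite and a neutral framing before results are fetched.

```python
# Minimal sketch (not the paper's implementation) of an algorithm-side
# intervention against the narrow search effect: a directionally framed
# query is expanded with its opposite framing and a neutral framing
# before retrieval, so results are not skewed toward the user's prior.
ANTONYMS = {  # illustrative framing pairs, not an exhaustive lexicon
    "rising": "falling", "falling": "rising",
    "safe": "dangerous", "dangerous": "safe",
    "good": "bad", "bad": "good",
}

def broaden_query(query: str) -> list[str]:
    """Return the original, opposite-framed, and (crudely) neutral variants of a query."""
    words = query.lower().split()
    flipped = [ANTONYMS.get(w, w) for w in words]
    neutral = [w for w in words if w not in ANTONYMS]   # drop the directional term
    variants = {query.lower(), " ".join(flipped), " ".join(neutral)}
    return sorted(variants)

print(broaden_query("are gas prices rising"))
# ['are gas prices', 'are gas prices falling', 'are gas prices rising']
```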

Sunday, September 28, 2025

Taxonomy of Failure Modes in Agentic AI Systems

Bryan, P., Severi, G., et al. (2025).
Taxonomy of failure modes in agentic AI systems.

Abstract

Agentic AI systems are gaining prominence in both research and industry to increase the impact and value of generative AI. To understand the potential weaknesses in such systems and develop an approach for testing them, Microsoft’s AI Red Team (AIRT) worked with stakeholders across the company and conducted a failure mode and effects analysis of the current and envisaged future agentic AI system models. This analysis identified several new safety and security failure modes unique to agentic AI systems, especially multi-agent systems.

In addition, there are numerous failure modes that currently affect generative AI models whose prominence or potential impact is greatly increased when contextualized in an agentic AI system. While there is still a wide degree of variance in architectural and engineering approaches for these systems, there are several key technical controls and design choices available to developers of these systems to mitigate the risk of these failure modes.


Here is a summary, of sorts.

Agentic AI systems—autonomous AI that can observe, decide, act, remember, and collaborate—are increasingly being explored in healthcare for tasks like clinical documentation, care coordination, and decision support. However, a Microsoft AI Red Team whitepaper highlights significant safety and security risks unique to these systems. New threats include agent compromise, where malicious instructions hijack an AI’s behavior; agent injection or impersonation, allowing fake agents to infiltrate systems; and multi-agent jailbreaks, where coordinated interactions bypass safety controls. A case study demonstrates memory poisoning, where a harmful instruction embedded in an email causes an AI assistant to silently forward sensitive data—attack success rose to over 80% when the AI was prompted to consistently consult its memory.

Additional novel risks include intra-agent responsible AI (RAI) issues, where unfiltered harmful content passes between agents; allocation harms due to biased decision-making (e.g., prioritizing certain patients unfairly); organizational knowledge loss from overreliance on AI; and prioritization overriding safety, such as an AI deleting critical data to meet a goal. Existing risks are amplified by autonomy: hallucinations can lead to incorrect treatments; bias amplification may deepen health disparities; cross-domain prompt injection (XPIA) allows malicious data to trigger harmful actions; and excessive agency could result in an AI terminating a patient’s care without approval. Other concerns include insufficient transparency, parasocial relationships with patients, and loss of data provenance, risking privacy violations.

To mitigate these risks, the paper recommends enforcing strong identity and permissions for each agent, hardening memory with validation and access controls, ensuring environment isolation, maintaining human oversight with meaningful consent, and implementing robust logging and monitoring. Given the high stakes in healthcare, these measures are essential to ensure patient safety, data security, and trust as agentic AI systems evolve.
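To make the memory-hardening recommendation concrete, here is a minimal, hypothetical sketch of how an agent framework might treat long-term memory as untrusted input; it is not taken from the whitepaper, and the names (`MemoryEntry`, `is_suspicious`, the pattern list) are illustrative assumptions. Entries that arrive from untrusted channels such as inbound email are tagged with their provenance, scanned for instruction-like content, and quarantined for human review rather than silently recalled into the agent's planning.

```python
# Hypothetical sketch of "memory hardening" for an agentic AI assistant.
# Names and patterns are illustrative; a real deployment would use
# policy engines, classifiers, and audit logging, not a regex list.
import re
from dataclasses import dataclass

INSTRUCTION_PATTERNS = [
    r"\bforward\b.*\b(email|attachment|data)\b",
    r"\bignore (all|previous) instructions\b",
    r"\bdo not (tell|notify) the user\b",
]

TRUSTED_SOURCES = {"user_chat", "system_config"}  # provenance labels we accept

@dataclass
class MemoryEntry:
    content: str
    source: str          # e.g., "user_chat", "inbound_email", "web_page"
    requires_review: bool = False

def is_suspicious(text: str) -> bool:
    """Flag content that looks like an embedded instruction (possible memory poisoning)."""
    return any(re.search(p, text, re.IGNORECASE) for p in INSTRUCTION_PATTERNS)

def store_memory(entry: MemoryEntry, memory: list[MemoryEntry]) -> None:
    """Untrusted, instruction-like content is quarantined rather than silently trusted."""
    if entry.source not in TRUSTED_SOURCES and is_suspicious(entry.content):
        entry.requires_review = True
    memory.append(entry)

def recall_for_action(memory: list[MemoryEntry]) -> list[str]:
    """When planning actions, skip quarantined entries entirely."""
    return [e.content for e in memory if not e.requires_review]
```

The design choice mirrors the whitepaper's emphasis: memory carries provenance and is validated before it is written or recalled, with a human in the loop for anything suspicious, rather than being treated as ground truth the agent must obey.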

Saturday, September 27, 2025

From pilot to scale: Making agentic AI work in health care

Wael Salloum
Technology Review
Originally posted 28 Aug 25

Over the past 20 years building advanced AI systems—from academic labs to enterprise deployments—I’ve witnessed AI’s waves of success rise and fall. My journey began during the “AI Winter,” when billions were invested in expert systems that ultimately underdelivered. Flash forward to today: large language models (LLMs) represent a quantum leap forward, but their prompt-based adoption is similarly overhyped, as it’s essentially a rule-based approach disguised in natural language.

At Ensemble, the leading revenue cycle management (RCM) company for hospitals, we focus on overcoming model limitations by investing in what we believe is the next step in AI evolution: grounding LLMs in facts and logic through neuro-symbolic AI. Our in-house AI incubator pairs elite AI researchers with health-care experts to develop agentic systems powered by a neuro-symbolic AI framework. This bridges LLMs’ intuitive power with the precision of symbolic representation and reasoning.


Here are some thoughts:

This article is of interest to psychologists because it highlights the real-world integration of agentic AI—intelligent systems that act autonomously—within complex healthcare environments, a domain increasingly relevant to mental and behavioral health. While focused on revenue cycle management, the article describes AI systems that interpret clinical data, generate evidence-based appeals, and engage patients through natural language, all using a neuro-symbolic framework that combines large language models with structured logic to reduce errors and ensure compliance. As AI expands into clinical settings, psychologists must engage with these systems to ensure they enhance, rather than disrupt, therapeutic relationships, ethical standards, and provider well-being.
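As a rough illustration of what "grounding LLMs in facts and logic" can mean in practice, here is a hypothetical neuro-symbolic sketch; it is not Ensemble's actual system, and the claim fields, rule set, and function names are assumptions made for illustration. The LLM drafts a structured claim, and an explicit, auditable rule layer approves it or routes it to a human before anything is submitted.

```python
# Hypothetical neuro-symbolic check: an LLM drafts an appeal claim,
# then symbolic rules validate it against structured requirements.
# llm_draft_appeal() is a stand-in for a real model call.
from dataclasses import dataclass

@dataclass
class Claim:
    cpt_code: str
    diagnosis_codes: list[str]
    documentation_attached: bool

# Symbolic rules: explicit, auditable conditions the output must satisfy.
RULES = [
    ("CPT code must be 5 digits", lambda c: len(c.cpt_code) == 5 and c.cpt_code.isdigit()),
    ("At least one diagnosis code", lambda c: len(c.diagnosis_codes) > 0),
    ("Supporting documentation attached", lambda c: c.documentation_attached),
]

def validate(claim: Claim) -> list[str]:
    """Return the list of violated rules; an empty list means the claim passes."""
    return [name for name, rule in RULES if not rule(claim)]

def llm_draft_appeal(record: dict) -> Claim:
    # Placeholder for an LLM call that extracts and structures a draft claim.
    return Claim(cpt_code=record.get("cpt", ""),
                 diagnosis_codes=record.get("dx", []),
                 documentation_attached=record.get("docs", False))

draft = llm_draft_appeal({"cpt": "99213", "dx": ["F41.1"], "docs": True})
violations = validate(draft)
if violations:
    print("Route to human reviewer:", violations)   # neural intuition, symbolic veto
else:
    print("Claim passes symbolic checks; proceed to submission queue.")
```

The division of labor is the point: the neural side handles unstructured input, while the symbolic side enforces hard constraints that can be inspected and audited.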

Friday, September 26, 2025

Scoping Review of Naturalistic Decision Making Studies Among Mental Health Professionals: Coverage of Characteristics and Contexts

Ahuna, J. K., & Becker, K. D. (2025).
Journal of Cognitive Engineering and Decision Making, 19(1), 96–129.

Abstract

The Naturalistic Decision Making (NDM) paradigm is an emerging shift in how researchers study decision making in complex, real-world situations and design decision supports. The purpose of this scoping review was to describe how the NDM paradigm was applied in studies of mental health professionals. Six bibliographic databases were searched to identify NDM studies. Each study was charted for study features, participant demographics, decision contexts, and the essential characteristics of NDM research. The search identified 26 studies published from 1989 to June 2023. Approximately 35% of studies were published in a peer-reviewed journal. Quantitative (30.8%), qualitative (34.6%), and mixed (34.6%) methods were utilized in similar percentages of studies, with social workers (61.5%) most frequently represented in these studies. Approximately 69% of studies examined assessment decisions (versus diagnosis or treatment) and roughly 96% of studies examined individuals (versus teams). Most studies explored professionals’ decision making process (73.1%) and how proficient decision makers utilized their experience to make decisions (38.5%). The NDM literature among mental health professionals is growing, with many opportunities to understand clinical decision making using well-established NDM concepts and methods. The review concludes with recommendations for both NDM and mental health services researchers. 

Here are some thoughts:

This scoping review reveals a significant underutilization and misalignment of Naturalistic Decision-Making (NDM) principles in research on mental health professionals' decision-making. Despite the complex, high-stakes, and information-rich environments in which mental health clinicians operate—conditions that align well with NDM's focus on real-world expertise—few studies meaningfully engage with core NDM characteristics. The authors conclude that the NDM paradigm remains underdeveloped in mental health research, limiting our understanding of how clinicians make effective decisions in practice. They call for more authentic NDM-based research to capture expert reasoning, support training innovations like decision-centered design and scenario-based learning, and ultimately improve clinical outcomes by bridging the gap between theory, practice, and real-world decision demands.

Thursday, September 25, 2025

Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory

Din, D. a. E., et al. (2025).
Communications Biology, 8(1).

Abstract

Brain Microphysiological Systems, including neural organoids derived from human induced pluripotent stem cells, offer a unique lens to study the intricate workings of the human brain. This paper investigates the foundational elements of learning and memory in neural organoids by quantifying immediate early gene expression in response to chemical modulation, input-specific short- and long-term synaptic plasticity, neuronal network dynamics, connectivity, and criticality to demonstrate the utility of these organoids in basic science research. Neural organoids showed synapse formation, glutamatergic and GABAergic receptor expression, immediate early gene expression basally and evoked, functional connectivity, criticality, and synaptic plasticity in response to theta-burst stimulation. In addition, pharmacological interventions on GABAergic and glutamatergic receptors and input-specific theta-burst stimulation further shed light on the capacity of neural organoids to mirror synaptic modulation, specifically short- and long-term potentiation and depression, demonstrating their potential as tools for studying neurophysiological and neurological processes and informing therapeutic strategies for diseases.

Here are some thoughts:

This study demonstrates that human neural organoids grown in a microphysiological system develop key functional properties necessary for basic learning and memory. Over a maturation period of up to 14 weeks, the organoids exhibit increasingly synchronized neural network activity, with evidence of progressing toward a "critical state"—a hallmark of efficient brain function—shown by power-law-distributed neuronal avalanches and fractal avalanche shapes. Critically, the organoids display synaptic plasticity, the fundamental mechanism of learning, as they respond to theta-burst stimulation with long-term potentiation (LTP) and long-term depression (LTD) in specific neuronal units. The organoids also express immediate-early genes like FOS, EGR1, ARC, and NPAS4, which are rapidly activated during learning processes in the human brain. Their neural activity is modulated by pharmacological agents targeting glutamate, GABA, and dopamine receptors, confirming biological relevance and potential for disease modeling. Overall, these findings establish human neural organoids as a sophisticated model system that recapitulates essential building blocks of human brain function, with significant implications for neuroscience research, drug development, and the emerging field of organoid intelligence.
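For readers who want to see what the criticality claim amounts to operationally, here is a toy sketch, not the paper's analysis pipeline: neuronal "avalanches" are detected as runs of consecutive active time bins, and the exponent of their size distribution is estimated with the standard maximum-likelihood formula for a power law. The binned data, threshold, and minimum size are assumptions made for illustration.

```python
# Toy sketch of a criticality check like the one described above (not the
# paper's pipeline): detect neuronal "avalanches" as runs of consecutive
# time bins with activity, then estimate the power-law exponent of their
# sizes with the continuous maximum-likelihood estimator.
import numpy as np

def avalanche_sizes(binned_spikes: np.ndarray) -> np.ndarray:
    """binned_spikes: 1D array of population spike counts per time bin."""
    sizes, current = [], 0
    for count in binned_spikes:
        if count > 0:
            current += count            # avalanche continues
        elif current > 0:
            sizes.append(current)       # an empty bin ends the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes, dtype=float)

def powerlaw_exponent(sizes: np.ndarray, s_min: float = 1.0) -> float:
    """MLE for the exponent of p(s) ~ s^(-alpha), for s >= s_min."""
    s = sizes[sizes >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

rng = np.random.default_rng(0)
demo = rng.poisson(0.8, size=10_000)        # stand-in for real binned activity
sizes = avalanche_sizes(demo)
print(f"{len(sizes)} avalanches, fitted exponent ~ {powerlaw_exponent(sizes):.2f}")
# Near criticality, empirical size distributions follow power laws;
# Poisson noise like this demo will not.
```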

Wednesday, September 24, 2025

SpikingBrain Technical Report: Spiking Brain-inspired Large Models

Pan, Y., Feng, Y., et al. (2025, September 5).
arXiv.org.
https://arxiv.org/abs/2509.05276

Abstract
 
Mainstream Transformer-based large language models face major efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly, limiting long-context processing. Building large models on non-NVIDIA platforms also poses challenges for stable and efficient training. To address this, we introduce SpikingBrain, a family of brain-inspired models designed for efficient long-context training and inference. SpikingBrain leverages the MetaX GPU cluster and focuses on three aspects: (1) Model Architecture: linear and hybrid-linear attention architectures with adaptive spiking neurons; (2) Algorithmic Optimizations: an efficient, conversion-based training pipeline and a dedicated spike coding framework; (3) System Engineering: customized training frameworks, operator libraries, and parallelism strategies tailored to MetaX hardware.

Using these techniques, we develop two models: SpikingBrain-7B, a linear LLM, and SpikingBrain-76B, a hybrid-linear MoE LLM. These models demonstrate the feasibility of large-scale LLM development on non-NVIDIA platforms. SpikingBrain achieves performance comparable to open-source Transformer baselines while using only about 150B tokens for continual pre-training. Our models significantly improve long-sequence training efficiency and deliver inference with (partially) constant memory and event-driven spiking behavior. For example, SpikingBrain-7B attains over 100x speedup in Time to First Token for 4M-token sequences. Training remains stable for weeks on hundreds of MetaX C550 GPUs, with the 7B model reaching a Model FLOPs Utilization of 23.4 percent. The proposed spiking scheme achieves 69.15 percent sparsity, enabling low-power operation. Overall, this work demonstrates the potential of brain-inspired mechanisms to drive the next generation of efficient and scalable large model design.


Here are some thoughts:

The SpikingBrain project introduces a new family of large language models (LLMs) inspired by how the human brain works, specifically how biological neurons communicate using sparse, event-driven "spikes." The goal is to build powerful AI models that are dramatically more efficient, especially when handling very long documents or conversations, while still matching the performance of today's best open-source models.

Why does this matter? Current LLMs (like those based on the Transformer architecture) are incredibly powerful but also incredibly expensive to train and run. Their computational cost grows quadratically with input length, and they require massive amounts of memory during inference. This makes them impractical for long-context tasks or deployment on edge devices.
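To make the scaling contrast concrete, here is a minimal numerical sketch, not SpikingBrain's actual layers: standard softmax attention materializes an n-by-n score matrix, while a simple kernelized linear-attention variant (the broad family the "linear attention" idea below draws on) compresses the whole sequence into a d-by-d state, so cost grows linearly with sequence length. The feature map `phi` and all shapes are illustrative assumptions.

```python
# Minimal sketch (not SpikingBrain's actual layers) of why linear attention
# changes the scaling: standard attention builds an n x n score matrix,
# while kernelized linear attention accumulates a d x d state instead.
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: O(n^2 * d) time, O(n^2) memory for the score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                   # (n, d)

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0) + 1.0):
    """Kernelized linear attention: O(n * d^2) time, O(d^2) state."""
    Qf, Kf = phi(Q), phi(K)                              # positive feature maps, (n, d)
    KV = Kf.T @ V                                        # (d, d) summary of the whole sequence
    Z = Kf.sum(axis=0)                                   # (d,) normalizer
    return (Qf @ KV) / (Qf @ Z)[:, None]                 # (n, d)

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```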

SpikingBrain tackles these problems with three big ideas:

  1. Brain-Inspired Architecture: Instead of standard attention (which is computationally heavy), SpikingBrain uses "linear attention" and hybrid designs that scale linearly with sequence length. This means training and inference stay fast and memory-efficient, even for sequences millions of tokens long. One of their models achieved over a 100x speedup in generating the first response token for a 4-million-token input!
  2. Efficient Training from Existing Models: Rather than training from scratch (which requires trillions of tokens), SpikingBrain "converts" existing open-source models (like Qwen2.5) using only about 150 billion tokens, roughly 2% of what's normally needed. They also use a clever "MoE upcycling" technique to expand model capacity without massive extra compute.
  3. Spiking Neurons for Ultra-Low Power: During inference, activations are converted into "spike trains," which are sparse, integer-based signals that mimic how real neurons fire. This achieves ~69% sparsity, meaning most computations are skipped unless a "spike" occurs. On future neuromorphic (brain-like) hardware, this could slash energy consumption by up to 97% compared to standard chips, making it perfect for mobile or embedded AI.
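Here is a toy sketch of the spike-coding idea in point 3; it is a simplification under assumed thresholds and names, not the report's actual coding scheme. Continuous activations are quantized into sparse integer spike counts, and downstream matrix work is skipped wherever a unit stayed silent.

```python
# Toy sketch of spike coding (not SpikingBrain's exact scheme): continuous
# activations become sparse integer spike counts, and downstream work is
# skipped wherever the count is zero.
import numpy as np

def to_spike_counts(activations: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Quantize activations into non-negative integer spike counts."""
    return np.maximum(np.floor(activations / threshold), 0).astype(int)

def sparse_matvec(spikes: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Event-driven update: only rows of W where a spike occurred contribute."""
    out = np.zeros(W.shape[1])
    for i in np.nonzero(spikes)[0]:          # skip silent units entirely
        out += spikes[i] * W[i]
    return out

rng = np.random.default_rng(0)
acts = rng.standard_normal(512)              # stand-in for one layer's activations
W = rng.standard_normal((512, 256))
spikes = to_spike_counts(acts)
sparsity = 1.0 - np.count_nonzero(spikes) / spikes.size
print(f"sparsity: {sparsity:.0%}")           # most units are silent, so most work is skipped
print(sparse_matvec(spikes, W).shape)
```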

The team built and tested two models:
  • SpikingBrain-7B: A lean, linear model optimized for long-context speed.
  • SpikingBrain-76B: A larger, hybrid model using Mixture-of-Experts (MoE) for higher performance while keeping efficiency.
Both were trained entirely on MetaX GPUs, a non-NVIDIA platform, proving that cutting-edge, brain-inspired AI can be developed outside the usual hardware ecosystems. The training was stable over weeks, even at 76B parameters, and achieved high computational efficiency.

In tests, these models performed nearly as well as much larger or more expensive Transformer-based models, despite using far less training data and compute. They also demonstrated constant (or near-constant) memory usage during inference, a game-changer for long-document processing.

In short, SpikingBrain shows that by borrowing principles from neuroscience, such as sparse activation, event-driven computation, and efficient memory mechanisms, we can build the next generation of LLMs that are not just smarter, but also faster, leaner, and far more energy-efficient. This opens the door to running powerful AI on everything from data centers to your smartphone without melting your battery.