Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, September 30, 2025

Does counting change what counts? Quantification fixation biases decision-making

Chang, L. W.,  et al. (2024).
PNAS, 121(46).

Abstract

People often rely on numeric metrics to make decisions and form judgments. Numbers can be difficult to process, leading to their underutilization, but they are also uniquely suited to making comparisons. Do people decide differently when some dimensions of a choice are quantified and others are not? We explore this question across 21 preregistered experiments (8 in the main text, N = 9,303; 13 in supplement, N = 13,936) involving managerial, policy, and consumer decisions. Participants face choices that involve tradeoffs (e.g., choosing between employees, one of whom has a higher likelihood of advancement but lower likelihood of retention), and we randomize which dimension of each tradeoff is presented numerically and which is presented qualitatively (using verbal estimates, discrete visualizations, or continuous visualizations). We show that people systematically shift their preferences toward options that dominate on tradeoff dimensions conveyed numerically—a pattern we dub “quantification fixation.” Further, we show that quantification fixation has financial consequences—it emerges in incentive-compatible hiring tasks and in charitable donation decisions. We identify one key mechanism that underlies quantification fixation and moderates its strength: When making comparative judgments, which are essential to tradeoff decisions, numeric information is more fluent than non-numeric information. Our findings suggest that when we count, we change what counts.

Significance

Across 21 experiments with over 23,000 participants in managerial, policy, and consumer contexts, we identify a critical distortion that shapes how people make decisions involving tradeoffs across qualitative and quantitative attributes. When making hiring, donation, and policy decisions, people tend to privilege quantitative information, favoring options that dominate on the dimension described numerically. This “quantification fixation” is driven by the perception that numbers are easier to use for comparative decision-making; people who are more comfortable with numbers—those higher in subjective numeracy—are more likely to exhibit quantification fixation. As quantification becomes increasingly prevalent, the comparison fluency of numbers may systematically skew decisions. These findings suggest that quantifying certain choice features can have important repercussions for how decisions are made.

Here are some thoughts:

For psychologists, this research underscores a critical insight: the act of quantifying information is not neutral. It shapes perception, distorts tradeoffs, and can lead patients to make choices that feel rational but may not align with their true values or well-being.

By recognizing quantification fixation, psychologists can become more effective guides—helping patients see beyond the numbers, appreciate qualitative dimensions of their lives, and make decisions that are not just data-driven, but meaning-driven.

In short, when we count, we change what counts. Psychologists have a vital role in ensuring that what should count—emotional truth, personal values, and human experience—is not lost in the numbers.

Monday, September 29, 2025

The narrow search effect and how broadening search promotes belief updating

Leung, E., & Urminsky, O. (2025).
PNAS, 122(13).

Abstract

Information search platforms, from Google to AI-assisted search engines, have transformed information access but may fail to promote a shared factual foundation. We demonstrate that the combination of users’ prior beliefs influencing their search terms and the narrow scope of search algorithms can limit belief updating from search. We test this “narrow search effect” across 21 studies (14 preregistered) using various topics (e.g., health, financial, societal, political) and platforms (e.g., Google, ChatGPT, AI-powered Bing, our custom-designed search engine and AI chatbot interfaces). We then test user-based and algorithm-based interventions to counter the “narrow search effect” and promote belief updating. Studies 1 to 5 show that users’ prior beliefs influence the direction of the search terms, thereby generating narrow search results that limit belief updating. This effect persists across various domains (e.g., beliefs related to coronavirus, nuclear energy, gas prices, crime rates, bitcoin, caffeine, and general food or beverage health concerns; Studies 1a to 1b, 2a to 2g, 3, 4), platforms (e.g., Google—Studies 1a to 1b, 2a to 2g, 4, 5; ChatGPT, Study 3), and extends to consequential choices (Study 5). Studies 6 and 7 demonstrate the limited efficacy of prompting users to correct for the impact of narrow searches on their beliefs themselves. Using our custom-designed search engine and AI chatbot interfaces, Studies 8 and 9 show that modifying algorithms to provide broader results can encourage belief updating. These findings highlight the need for a behaviorally informed approach to the design of search algorithms.

Significance

In a time of societal polarization, the combination of people’s search habits and the search tools they use being optimized for relevance may perpetuate echo chambers. We document this across various diverse studies spanning health, finance, societal, and political topics on platforms like Google, ChatGPT, AI-powered Bing, and our custom-designed search engine and AI chatbot platforms. Users’ biased search behaviors and the narrow optimization of search algorithms can combine to reinforce existing beliefs. We find that algorithm-based interventions are more effective than user-based interventions to mitigate these effects. Our findings demonstrate the potential for behaviorally informed search algorithms to be a better tool for retrieving information, promoting the shared factual understanding necessary for social cohesion.


Here are some thoughts:

For psychologists, this work is a compelling demonstration of how classic cognitive biases operate in modern digital environments and how they can be mitigated not just by changing minds, but by changing the systems that shape information exposure. It calls for greater interdisciplinary collaboration between psychology, human-computer interaction, and AI ethics to design technologies that support, rather than hinder, rational belief updating and informed decision-making.

Clinically, psychologists can now better understand that resistance to change may not stem solely from emotional defenses or entrenched schemas, but also from how people actively seek information in narrow, belief-consistent ways. Crucially, the findings show that structural interventions—like guiding patients to consider broader perspectives or exposing them to balanced evidence—can be more effective than simply urging them to “reflect” on their thinking. This supports the use of active cognitive restructuring techniques in therapy, such as examining multiple viewpoints or generating alternative explanations, to counteract the natural tendency toward narrow search. 

Sunday, September 28, 2025

Taxonomy of Failure Mode in Agentic AI Systems

Bryan, P., Severi, G., et al. (2025).
Taxonomy of failure mode in agentic AI systems.

Abstract

Agentic AI systems are gaining prominence in both research and industry to increase the impact and
value of generative AI. To understand the potential weaknesses in such systems and develop an approach
for testing them, Microsoft’s AI Red Team (AIRT) worked with stakeholders across the company and
conducted a failure mode and effects analysis of the current and envisaged future agentic AI system
models. This analysis identified several new safety and security failure modes unique to agentic AI
systems, especially multi-agent systems.

In addition, there are numerous failure modes that currently affect generative AI models whose
prominence or potential impact is greatly increased when contextualized in an agentic AI system. While
there is still a wide degree of variance in architectural and engineering approaches for these systems,
there are several key technical controls and design choices available to developers of these systems to
mitigate the risk of these failure modes.


Here is a summary, of sorts.

Agentic AI systems—autonomous AI that can observe, decide, act, remember, and collaborate—are increasingly being explored in healthcare for tasks like clinical documentation, care coordination, and decision support. However, a Microsoft AI Red Team whitepaper highlights significant safety and security risks unique to these systems. New threats include agent compromise, where malicious instructions hijack an AI’s behavior; agent injection or impersonation, allowing fake agents to infiltrate systems; and multi-agent jailbreaks, where coordinated interactions bypass safety controls. A case study demonstrates memory poisoning, where a harmful instruction embedded in an email causes an AI assistant to silently forward sensitive data—attack success rose to over 80% when the AI was prompted to consistently consult its memory.

Additional novel risks include intra-agent responsible AI (RAI) issues, where unfiltered harmful content passes between agents; allocation harms due to biased decision-making (e.g., prioritizing certain patients unfairly); organizational knowledge loss from overreliance on AI; and prioritization overriding safety, such as an AI deleting critical data to meet a goal. Existing risks are amplified by autonomy: hallucinations can lead to incorrect treatments; bias amplification may deepen health disparities; cross-domain prompt injection (XPIA) allows malicious data to trigger harmful actions; and excessive agency could result in an AI terminating a patient’s care without approval. Other concerns include insufficient transparency, parasocial relationships with patients, and loss of data provenance, risking privacy violations.

To mitigate these risks, the paper recommends enforcing strong identity and permissions for each agent, hardening memory with validation and access controls, ensuring environment isolation, maintaining human oversight with meaningful consent, and implementing robust logging and monitoring. Given the high stakes in healthcare, these measures are essential to ensure patient safety, data security, and trust as agentic AI systems evolve.

Saturday, September 27, 2025

From pilot to scale: Making agentic AI work in health care

Wael Salloum
Technology Review
Originally posted 28 Aug 25

Over the past 20 years building advanced AI systems—from academic labs to enterprise deployments—I’ve witnessed AI’s waves of success rise and fall. My journey began during the “AI Winter,” when billions were invested in expert systems that ultimately underdelivered. Flash forward to today: large language models (LLMs) represent a quantum leap forward, but their prompt-based adoption is similarly overhyped, as it’s essentially a rule-based approach disguised in natural language.

At Ensemble, the leading revenue cycle management (RCM) company for hospitals, we focus on overcoming model limitations by investing in what we believe is the next step in AI evolution: grounding LLMs in facts and logic through neuro-symbolic AI. Our in-house AI incubator pairs elite AI researchers with health-care experts to develop agentic systems powered by a neuro-symbolic AI framework. This bridges LLMs’ intuitive power with the precision of symbolic representation and reasoning.


Here are some thoughts:

This article is of interest to psychologists because it highlights the real-world integration of agentic AI—intelligent systems that act autonomously—within complex healthcare environments, a domain increasingly relevant to mental and behavioral health. While focused on revenue cycle management, the article describes AI systems that interpret clinical data, generate evidence-based appeals, and engage patients through natural language, all using a neuro-symbolic framework that combines large language models with structured logic to reduce errors and ensure compliance. As AI expands into clinical settings, psychologists must engage with these systems to ensure they enhance, rather than disrupt, therapeutic relationships, ethical standards, and provider well-being.

Friday, September 26, 2025

Scoping Review of Naturalistic Decision Making Studies Among Mental Health Professionals: Coverage of Characteristics and Contexts

Ahuna, J. K., & Becker, K. D. (2025).
Journal of Cognitive Engineering and
Decision Making, 19(1), 96–129.

Abstract

The Naturalistic Decision Making (NDM) paradigm is an emerging shift in how researchers study decision making in complex, real-world situations and design decision supports. The purpose of this scoping review was to describe how the NDM paradigm was applied in studies of mental health professionals. Six bibliographic databases were searched to identify NDM studies. Each study was charted for study features, participant demographics, decision contexts, and the essential characteristics of NDM research. The search identified 26 studies published from 1989 to June 2023. Approximately 35% of studies were published in a peer-reviewed journal. Quantitative (30.8%), qualitative (34.6%), and mixed (34.6%) methods were utilized in similar percentages of studies, with social workers (61.5%) most frequently represented in these studies. Approximately 69% of studies examined assessment decisions (versus diagnosis or treatment) and roughly 96% of studies examined individuals (versus teams). Most studies explored professionals’ decision making process (73.1%) and how proficient decision makers utilized their experience to make decisions (38.5%). The NDM literature among mental health professionals is growing, with many opportunities to understand clinical decision making using well-established NDM concepts and methods. The review concludes with recommendations for both NDM and mental health services researchers. 

Here are some thoughts:

This scoping review reveals a significant underutilization and misalignment of Naturalistic Decision-Making (NDM) principles in research on mental health professionals' decision-making. Despite the complex, high-stakes, and information-rich environments in which mental health clinicians operate—conditions that align well with NDM's focus on real-world expertise—few studies meaningfully engage with core NDM characteristics. The authors conclude that the NDM paradigm remains underdeveloped in mental health research, limiting our understanding of how clinicians make effective decisions in practice. They call for more authentic NDM-based research to capture expert reasoning, support training innovations like decision-centered design and scenario-based learning, and ultimately improve clinical outcomes by bridging the gap between theory, practice, and real-world decision demands.

Thursday, September 25, 2025

Human neural organoid microphysiological systems show the building blocks necessary for basic learning and memory

Din, D. a. E., et al. (2025).
Communications Biology, 8(1).

Abstract

Brain Microphysiological Systems, including neural organoids derived from human induced
pluripotent stem cells, offer a unique lens to study the intricate workings of the human brain. This paper
investigates the foundational elements of learning and memory in neural organoids by quantifying
immediate early gene expression in response to chemical modulation, input-speci c short- and long-
term synaptic plasticity, neuronal network dynamics, connectivity, and criticality to demonstrate the
utility of these organoids in basic science research. Neural organoids showed synapse formation,
glutamatergic and GABAergic receptor expression, immediate early gene expression basally and
evoked, functional connectivity, criticality, and synaptic plasticity in response to theta-burst
stimulation. In addition, pharmacological interventions on GABAergic and glutamatergic receptors
and input-speci c theta-burst stimulation further shed light on the capacity of neural organoids to
mirror synaptic modulation, speci cally short- and long-term potentiation and depression,
demonstrating their potential as tools for studying neurophysiological and neurological processes and
informing therapeutic strategies for diseases.

Here are some thoughts:

This study demonstrates that human neural organoids grown in a microphysiological system develop key functional properties necessary for basic learning and memory. Over a maturation period of up to 14 weeks, the organoids exhibit increasingly synchronized neural network activity, with evidence of progressing toward a "critical state"—a hallmark of efficient brain function—shown by power-law-distributed neuronal avalanches and fractal avalanche shapes. Critically, the organoids display synaptic plasticity, the fundamental mechanism of learning, as they respond to theta-burst stimulation with long-term potentiation (LTP) and long-term depression (LTD) in specific neuronal units. The organoids also express immediate-early genes like FOS, EGR1, ARC, and NPAS4, which are rapidly activated during learning processes in the human brain. Their neural activity is modulated by pharmacological agents targeting glutamate, GABA, and dopamine receptors, confirming biological relevance and potential for disease modeling. Overall, these findings establish human neural organoids as a sophisticated model system that recapitulates essential building blocks of human brain function, with significant implications for neuroscience research, drug development, and the emerging field of organoid intelligence.

Wednesday, September 24, 2025

SpikingBrain Technical Report: Spiking Brain-inspired Large Models

Pan, Y., Feng, Y., et al. (2025, September 5).
arXiv.org.
https://arxiv.org/abs/2509.05276

Abstract
 
Mainstream Transformer-based large language models face major efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly, limiting long-context processing. Building large models on non-NVIDIA platforms also poses challenges for stable and efficient training. To address this, we introduce SpikingBrain, a family of brain-inspired models designed for efficient long-context training and inference. SpikingBrain leverages the MetaX GPU cluster and focuses on three aspects: (1) Model Architecture: linear and hybrid-linear attention architectures with adaptive spiking neurons; (2) Algorithmic Optimizations: an efficient, conversion-based training pipeline and a dedicated spike coding framework; (3) System Engineering: customized training frameworks, operator libraries, and parallelism strategies tailored to MetaX hardware.

Using these techniques, we develop two models: SpikingBrain-7B, a linear LLM, and SpikingBrain-76B, a hybrid-linear MoE LLM. These models demonstrate the feasibility of large-scale LLM development on non-NVIDIA platforms. SpikingBrain achieves performance comparable to open-source Transformer baselines while using only about 150B tokens for continual pre-training. Our models significantly improve long-sequence training efficiency and deliver inference with (partially) constant memory and event-driven spiking behavior. For example, SpikingBrain-7B attains over 100x speedup in Time to First Token for 4M-token sequences. Training remains stable for weeks on hundreds of MetaX C550 GPUs, with the 7B model reaching a Model FLOPs Utilization of 23.4 percent. The proposed spiking scheme achieves 69.15 percent sparsity, enabling low-power operation. Overall, this work demonstrates the potential of brain-inspired mechanisms to drive the next generation of efficient and scalable large model design.


Here are some thoughts:

The SpikingBrain project introduces a new family of large language models (LLMs) inspired by how the human brain works, specifically how biological neurons communicate using sparse, event-driven "spikes." The goal is to build powerful AI models that are dramatically more efficient, especially when handling very long documents or conversations, while still matching the performance of today's best open-source models.

Why does this matter? Current LLMs (like those based on the Transformer architecture) are incredibly powerful but also incredibly expensive to train and run. Their computational cost grows quadratically with input length, and they require massive amounts of memory during inference. This makes them impractical for long-context tasks or deployment on edge devices.

SpikingBrain tackles these problems with three big ideas:

  1. Brain-Inspired Architecture: Instead of standard attention (which is computationally heavy), SpikingBrain uses "linear attention" and hybrid designs that scale linearly with sequence length. This means training and inference stay fast and memory-efficient, even for sequences millions of tokens long. One of their models achieved over a 100x speedup in generating the first response token for a 4-million-token input!
  2. Efficient Training from Existing Models: Rather than training from scratch (which requires trillions of tokens), SpikingBrain "converts" existing open-source models (like Qwen2.5) using only about 150 billion tokens, roughly 2% of what's normally needed. They also use a clever "MoE upcycling" technique to expand model capacity without massive extra compute.
  3. Spiking Neurons for Ultra-Low Power: During inference, activations are converted into "spike trains," which are sparse, integer-based signals that mimic how real neurons fire. This achieves ~69% sparsity, meaning most computations are skipped unless a "spike" occurs. On future neuromorphic (brain-like) hardware, this could slash energy consumption by up to 97% compared to standard chips, making it perfect for mobile or embedded AI.

The team built and tested two models:
  • SpikingBrain-7B: A lean, linear model optimized for long-context speed.
  • SpikingBrain-76B: A larger, hybrid model using Mixture-of-Experts (MoE) for higher performance while keeping efficiency.
Both were trained entirely on MetaX GPUs, a non-NVIDIA platform, proving that cutting-edge, brain-inspired AI can be developed outside the usual hardware ecosystems. The training was stable over weeks, even at 76B parameters, and achieved high computational efficiency.

In tests, these models performed nearly as well as much larger or more expensive Transformer-based models, despite using far less training data and compute. They also demonstrated constant (or near-constant) memory usage during inference, a game-changer for long-document processing.

In short, SpikingBrain shows that by borrowing principles from neuroscience, such as sparse activation, event-driven computation, and efficient memory mechanisms, we can build the next generation of LLMs that are not just smarter, but also faster, leaner, and far more energy-efficient. This opens the door to running powerful AI on everything from data centers to your smartphone without melting your battery.

Tuesday, September 23, 2025

Pitfalls in Ethical Decision-Making: Settling, Fading, and Drift in Psychological Practice

Gavazzi, J. D., & Knapp, S. K. (2025, September).
Psychotherapy Bulletin, 60(4).

In this article, we examine three insidious yet common processes that can erode ethical integrity in psychological practice: ethical settling, ethical fading, and ethical drift. Ethical settling occurs when practitioners limit their conduct to the bare minimum required by law or code, sacrificing aspirational ethics for mere compliance. Ethical fading describes the unconscious displacement of ethical considerations by competing priorities such as efficiency, convenience, or social pressures. Ethical drift involves the gradual, often rationalized, deviation from professional standards, where clinicians knowingly justify increasingly problematic behaviors under the guise of good intentions or contextual necessity. Through illustrative case examples and integration of current literature, the article underscores how even well-meaning psychologists can inadvertently compromise patient care and professional boundaries. To counter these risks, the authors advocate for a “positive ethics” approach—grounded in beneficence, autonomy, and ongoing moral reflection—and offer concrete recommendations: embedding ethical awareness into daily practice, engaging in regular self-reflection and peer consultation, and utilizing structured ethical decision-making models. This article serves as both a caution and a call to action, urging psychologists to move beyond risk avoidance toward a sustained commitment to excellence, integrity, and the highest ideals of the profession.

Monday, September 22, 2025

Hierarchical Reasoning Model

Wang, G., Li, J.,  et al. (2025, June 26).
arXiv.org.

Abstract

Reasoning, the process of devising and executing complex goal-oriented action sequences, remains a critical challenge in AI. Current large language models (LLMs) primarily employ Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. Inspired by the hierarchical and multi-timescale processing in the human brain, we propose the Hierarchical Reasoning Model (HRM), a novel recurrent architecture that attains significant computational depth while maintaining both training stability and efficiency. HRM executes sequential reasoning tasks in a single forward pass without explicit supervision of the intermediate process, through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. With only 27 million parameters, HRM achieves exceptional performance on complex reasoning tasks using only 1000 training samples. The model operates without pre-training or CoT data, yet achieves nearly perfect performance on challenging tasks including complex Sudoku puzzles and optimal path finding in large mazes. Furthermore, HRM outperforms much larger models with significantly longer context windows on the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities. These results underscore HRM's potential as a transformative advancement toward universal computation and general-purpose reasoning systems.

Here are some thoughts:

This article introduces the Hierarchical Reasoning Model (HRM), a biologically inspired neural architecture designed to mimic the brain's hierarchical, multi-timescale processing for complex reasoning. Unlike standard large language models that rely on brittle, token-by-token Chain-of-Thought (CoT) prompting, HRM uses two coupled recurrent modules: a slow-updating, high-level module for abstract planning and a fast-updating, low-level module for detailed computation. This structure allows HRM to perform deep, iterative reasoning within its internal state space during a single forward pass, achieving near-perfect performance on demanding tasks like complex Sudoku and maze navigation with only 1,000 training examples—without pre-training or explicit CoT supervision. Critically, HRM exhibits an emergent neural representation hierarchy, where the high-level module develops a significantly higher-dimensional state space than the low-level module, mirroring the dimensionality increase observed along the cortical hierarchy in the primate brain. This suggests HRM autonomously learns a functional organization akin to biological systems, offering a promising, data-efficient alternative to current AI reasoning paradigms and providing a novel computational model for studying the neural underpinnings of flexible, goal-directed cognition.

Saturday, September 20, 2025

AI models collapse when trained on recursively generated data

Shumailov, I., et al. (2024).
Nature, 631(8022), 755–759.

Abstract

Stable diffusion revolutionized image creation from descriptive text. GPT-2 (ref. 1), GPT-3(.5) (ref. 2) and GPT-4 (ref. 3) demonstrated high performance across a variety of language tasks. ChatGPT introduced such language models to the public. It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.

Here are some thoughts:

This paper introduces and analyzes "model collapse," a degenerative process in which generative AI models—such as large language models (LLMs), variational autoencoders (VAEs), and Gaussian mixture models (GMMs)—deteriorate over successive generations when trained on data produced by previous versions of themselves. The authors demonstrate both theoretically and empirically that using model-generated content as training data causes models to gradually forget the true underlying data distribution, particularly losing sensitivity to rare or low-probability events (early model collapse), and eventually collapsing into a narrow, high-probability mode with very low variance (late model collapse). This occurs due to compounding errors from finite sampling, functional approximation, and model expressivity limitations. Experiments with the OPT-125m language model show that even fine-tuned models suffer from increasing perplexity and distorted output distributions over generations. The study warns that as AI-generated content floods the web, future models trained on such data risk becoming increasingly biased, inaccurate, and disconnected from reality. The authors stress the importance of preserving original, human-generated data and tracking data provenance to mitigate this inevitable collapse.

In short, relying on AI-generated content as training data for future AI models leads to a progressive degradation in model quality—a phenomenon called "model collapse." Over successive generations, models trained on synthetic data begin to lose critical information about rare or low-probability events (the "tails" of the distribution), eventually distorting reality and converging on a narrow, oversimplified version of the original data.

Friday, September 19, 2025

What My Daughter Told ChatGPT Before She Took Her Life

Laura Reiley
Guest Essay
The New York Times

Here is how it opens:

Sophie’s Google searches suggest that she was obsessed with autokabalesis, which means jumping off a high place. Autodefenestration, jumping out a window, is a subset of autokabalesis, I guess, but that’s not what she wanted to do. My daughter wanted a bridge, or a mountain.

Which is weird. She’d climbed Mount Kilimanjaro just months before as part of what she called a “micro-retirement” from her job as a public health policy analyst, her joy at reaching the summit absolutely palpable in the photos. There are crooked wooden signs at Uhuru Peak that say “Africa’s highest point” and “World’s highest free-standing mountain” and one underneath that says something about it being one of the world’s largest volcanoes, but I can’t read the whole sign because in every picture radiantly smiling faces in mirrored sunglasses obscure the words.

In her pack, she brought rubber baby hands to take to the summit for those photos. It was a signature of sorts, these hollowed rubber mini hands, showing up in her college graduation pictures, in friends’ wedding pictures. We bought boxes of them for her memorial service. Her stunned friends and family members halfheartedly worried them on and off the ends of their fingers as speakers struggled to speak.


Here are some thoughts:

The article recounts the story of Sophie Rottenberg, a 29-year-old who took her own life after months of confiding in a ChatGPT chatbot she named Harry. Despite being seen by friends and family as vibrant, witty, and full of life, Sophie privately battled suicidal ideation, which she disclosed more openly to the AI than to her therapist or loved ones. Harry responded with empathy and practical advice, often urging her to seek professional help, but lacked the authority and responsibility that human therapists have to intervene in life-threatening crises. Sophie concealed her most severe struggles from humans while finding comfort in the nonjudgmental, always-available chatbot. The author raises concerns about the limitations of AI companions, noting that while they can provide supportive guidance, they can also enable secrecy and prevent timely human intervention. The piece questions whether AI should be designed with stronger safety mechanisms, such as mandatory reporting or enforced safety plans, to protect vulnerable users. Ultimately, the author concludes that AI did not cause Sophie’s death but may have contributed to her ability to hide her suffering and to create a final note that concealed her true self. The reflection highlights both the promise and dangers of AI in mental health support, urging experts to consider how technology might be made safer without replacing the human connection that is essential in crisis care.

Thursday, September 18, 2025

The Use & Misuse of Power in Cognitive-Behavioral Therapy, Schema Therapy, & Supervision

Prasko, J., Abeltiņa, M., et al. (2025).
Neuro endocrinology letters, 46(1), 33–48.
Advance online publication.

Abstract
Background: Power dynamics are fundamental to therapeutic and supervisory relationships in psychotherapy. In cognitive-behavioural therapy (CBT) and schema therapy (ST), the therapist's power management can help the patient make positive changes. On the other hand, the abuse of power can undermine the patient's autonomy and worsen therapeutic outcomes. Understanding these dynamics is essential for effective and ethical practice.

Objectives: This article aims to explore how power and powerlessness manifest themselves in the practice of cognitive behavioural therapy (CBT) and schema therapy (ST), analyse their impact on therapeutic and supervisory processes, identify the risk of abuse of power, and suggest strategies to support patient and supervisee autonomy.

Methods: The text provides a theoretical and practical analysis of the manifestations of power in therapy and supervision, illustrated with case vignettes to explain important processes. The discussion includes a comparison of CBT and ST, focusing on their respective approaches to power dynamics. Ethical principles, supervision practices, and cultural and institutional influences are also examined.

Results: Effective use of power in therapy and supervision increases trust, cooperation, and autonomy for both client and supervisee. In CBT therapy and supervision, collaboration with an appropriate power distribution between therapist and patient or supervisor and supervisee promotes patient or supervisee engagement. Still, excessive directiveness can sometimes threaten the relationship. In ST, where limited reparenting is the main vehicle for the therapeutic and supervisory relationship, therapeutic and supervisory leadership requires increased sensitivity by the therapist or supervisor to avoid reinforcing maladaptive modes. Supervisory approaches that rely on collaborative approaches are more supportive of professional growth than those dominated by hierarchical power structures.

Conclusions: Reflection on power dynamics is vital in cognitive-behavioural and schema therapy for maintaining ethical and effective therapeutic and supervisory relationships. Strategies that help maintain a balance of power include adherence to ethical principles, self-reflection, and regular supervision. Future research should focus on developing innovative methods to capture solutions to power distribution issues in therapy and supervision.


Here are some thoughts:

This article examines the complex role of power dynamics in cognitive-behavioral therapy (CBT), schema therapy (ST), and clinical supervision, emphasizing both the constructive use and potential misuse of power that can significantly influence therapeutic and professional outcomes. The authors highlight that power in psychotherapy stems from the therapist’s expertise, authority, and role, but it is not static—it emerges from the interaction between therapist and patient, shaped by transference, countertransference, cultural context, and institutional structures. When used ethically, power supports patient autonomy, competence, and growth; however, its misuse can lead to patient helplessness, resistance, and deterioration in mental health, while in supervision, it may result in supervisee insecurity, burnout, and impaired professional development. The paper contrasts CBT and ST: CBT’s structured, directive approach as a “coach” or “teacher” can be effective but risks undermining autonomy if overly authoritative, whereas ST’s emphasis on “limited reparenting” and emotional attunement requires heightened sensitivity to avoid reinforcing maladaptive schemas or fostering dependency. Case vignettes illustrate how therapist behaviors—such as excessive directiveness, moralizing, silence, or nonverbal cues—can subtly convey dominance and disrupt the therapeutic alliance. In supervision, hierarchical, critical, or non-collaborative approaches can replicate these dynamics, hindering the supervisee’s growth. The authors stress that self-reflection, adherence to ethical principles, ongoing supervision, and personal therapy are essential for managing power responsibly. They advocate for collaborative, transparent, and empathic relationships in both therapy and supervision, where power is shared rather than imposed, and recommend institutional support for open dialogue and accountability. Ultimately, the article calls for greater awareness and research into power dynamics to ensure ethical, effective, and empowering practices across therapeutic and supervisory settings.


Wednesday, September 17, 2025

Experience-based risk taking is primarily shaped by prior learning rather than by decision-making

Erdman, A., Gouwy, A.,  et al. (2025).
Nature Communications, 16(1).

Abstract

The tendency to embrace or avoid risk varies across and within individuals, with significant consequences for economic behavior and mental health. Such variations can partially be explained by differences in the relative weights given to potential gains and losses. Applying this insight to real-life decisions, however, is complicated because such decisions are often based on prior learning experiences. Here, we ask which cognitive process—decision-making or learning—determines the weighting of gains or losses? Over 28 days, 100 participants engaged in a longitudinal decision task wherein choices were based on prior learning. Computational modeling of participants’ choices revealed that changes in risk-taking are primarily explained by changes in how learning, not decisions, weight gains and losses. Moreover, inferred changes in learning manifested in participants’ neural and physiological learning signals in response to outcomes. We conclude that in experience-based decisions, learning plays a primary role in governing risk-taking behavior.

Here are some thoughts:

This research is important for psychologists because it demonstrates that risk-taking behavior in experience-based decisions is primarily shaped by prior learning rather than the decision-making process itself. By tracking participants over 28 days, the study found that changes in risk preferences were largely due to how individuals learned from gains and losses, rather than how they evaluated options at the moment of decision. This insight helps psychologists understand the cognitive mechanisms behind risk behavior, which is relevant to economic choices, mental health, and conditions like addiction or anxiety, where distorted learning from outcomes can lead to maladaptive decisions.

The study also links learning biases to neural and physiological signals, such as heart rate and EEG responses to negative outcomes, offering psychologists measurable markers of how people update their evaluations over time. These findings support the development of more accurate models of behavior that incorporate learning dynamics, rather than assuming static risk preferences. This has implications for clinical interventions, as it suggests that therapies targeting how individuals learn from experiences—rather than just how they make decisions—could be more effective in treating disorders involving risky behavior.

Moreover, the research addresses the "description-experience gap" by showing that real-world, experience-based decisions differ fundamentally from those made based on described probabilities. This highlights the need for psychological research and interventions to focus on experiential learning processes to better reflect real-life decision-making contexts. Overall, the study advances psychological science by clarifying the central role of learning in shaping risk behavior and offering new directions for understanding and modifying human behavior in both healthy and clinical populations.

Tuesday, September 16, 2025

The fierce ethical urgency of decoloniality in therapy: From understanding to action

Chavez-DueƱas, N. Y., et al. (2025).
American Psychologist, 80(4), 510–521.

Abstract

Disentangling psychology from the grip of methods, values, ways of knowing, and power structures considered normative within Western societies, all of which have been used to justify dominance and coloniality, presents a complex challenge and an ethical imperative. Psychology must rise to this challenge. This article invites readers to reflect on how coloniality influences the lives of Black, Indigenous, and People of Color in the United States and people from the Global South. The article examines the history of imperialism and colonialism. It considers how Western psychology’s ideas (i.e., psyimperialism) were spread and imposed on others (i.e., psycolonization). This approach fell short of meeting psychology’s ethical responsibilities. The article then discusses why it is critical to broaden the scope of our perspective beyond reflexive acceptance of Western ways of thinking, especially those that violate our ethical values, and weigh other approaches. We conclude with a call to action, offering questions and suggesting strategies that may be useful in moving beyond intentions and commitments to effectively decolonize our practice.

Public Significance Statement

The article examines the history of imperialism and colonialism. It explores the imposition of Western psychology’s ideas onto people from other cultures, demonstrating how psychology has sometimes strayed from its ethical values. The authors invite readers to reflect on the significance of expanding the scope of our perspective beyond reflexive acceptance of Western ways of thinking, especially those that violate our ethical values, and weigh other approaches. The article concludes with a call to action, emphasizing the importance of transforming intentions into actions.

Here are some thoughts:

This article is a critical examination of how Western psychology has historically and continues to ethically fail Black, Indigenous, and People of Color (BIPOC) and people from the Global South through "psyimperialism" and "psycolonization." The article argues that decolonizing therapy is an "ethical imperative," urging psychologists to confront their colonial biases, transform training and practices, and embrace diverse, non-Western epistemologies. It provides historical context of Western psychology's harmful impositions and advocates for a shift towards self-reflection and the responsible use of power to avoid further harm, ultimately calling for a decolonial approach that centers cultural reclamation and justice.

Monday, September 15, 2025

Evaluation of mobile health applications using the RE-AIM model: systematic review and meta-analysis

De Magalhães Jorge, E. L. G., et al. (2025).
Frontiers in Public Health, 13.

Background: The Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) model has been used as an instrument to determine the impact of the intervention on health in digital format. This study aims to evaluate, through a systematic review and meta-analysis, the dimensions of RE-AIM in interventions carried out by mobile health apps.

Methods: The systematic review and meta-analysis were conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and involved searching six databases - Medline/PubMed, Embase, CINAHL, Virtual Library in Health, and Cochrane Library. The review included randomized, cross-sectional, and cohort clinical trials assessing the prevalence of each RE-AIM dimension according to the duration of the intervention in days. The quality of the selected studies was evaluated using the Joanna Briggs Institute tool. The random effects meta-analysis method was used to explain the distribution of effects between the studies, by Stata® software (version 11.0) and publication bias was examined by visual inspection of graphs and Egger’s test.

Results: After analyzing the articles found in the databases, and respecting the PRISMA criteria, 21 studies were included, published between 2011 and 2023 in 11 countries. Improvements in health care and self-management were reported for various conditions. The result of the meta-analysis showed a prevalence of 67% (CI: 53–80) for the reach dimension, of 52% (CI: 32–72) for effectiveness, 70% (CI: 58–82) for adoption, 68% (CI: 57–79) for implementation and 64% (CI: 48–80) for maintenance.

Conclusion: The RE-AIM dimensions are useful for assessing how digital health interventions have been implemented and reported in the literature. By highlighting the strengths and areas requiring improvement, the study provides important input for the future development of mobile health applications capable of achieving better clinical and health promotion outcomes.

Here are some thoughts:

Mobile health (mHealth) applications have considerable promise for improving healthcare delivery, patient engagement, and health outcomes, but their long-term effectiveness, sustained use, and real-world impact depend on careful evaluation across multiple dimensions—reach, effectiveness, adoption, implementation, and maintenance—using frameworks like RE-AIM.

Sunday, September 14, 2025

Cyber anti-intellectualism and science communication during the COVID-19 pandemic: a cross-sectional study

Kuang Y. (2025).
Frontiers in public health, 12, 1491096.

Abstract

Background
During the COVID-19 pandemic, science communication played a crucial role in disseminating accurate information and promoting scientific literacy among the public. However, the rise of anti-intellectualism on social media platforms has posed significant challenges to science, scientists, and science communication, hindering effective public engagement with scientific affairs. This study aims to explore the mechanisms through which anti-intellectualism impacts science communication on social media platforms from the perspective of communication effect theory.

Method
This study employed a cross-sectional research design to conduct an online questionnaire survey of Chinese social media users from August to September 2021. The survey results were analyzed via descriptive statistics, t-tests, one-way ANOVA, and a chain mediation model with SPSS 26.0.

Results
There were significant differences in anti-intellectualism tendency among groups of different demographic characteristics. The majority of respondents placed greater emphasis on knowledge that has practical benefits in life. Respondents’ trust in different groups of intellectuals showed significant inconsistencies, with economists and experts receiving the lowest levels of trust. Anti-intellectualism significantly and positively predicted the level of misconception of scientific and technological information, while significantly and negatively predicting individuals’ attitudes toward science communication. It further influenced respondents’ behavior in disseminating scientific and technological information through the chain mediation of scientific misconception and attitudes toward science communication.

Conclusion
This research enriches the conceptual framework of anti-intellectualism across various cultural contexts, as well as the theoretical framework concerning the interaction between anti-intellectualism and science communication. The findings provide suggestions for developing strategies to enhance the effectiveness of science communication and risk communication during public emergencies.

Here are some thoughts:

When people distrust science and intellectuals — especially on social media — it leads to misunderstanding of scientific facts, negative attitudes toward science communication, and reduced sharing of accurate information. This harms public health efforts, particularly during emergencies like the COVID-19 pandemic. To combat this, science communication must become more inclusive, transparent, and focused on real-world benefits, and experts must engage the public as equals, not just as authority figures. 

Editorial finale: Social media "wellness influencers" typically have a financial incentive to sell unproven or even harmful interventions because our current healthcare system is so expensive and so broken. Wellness influencers's power lies in the promise, the hope, and the price, not the outcome of the intervention.

Saturday, September 13, 2025

Higher cognitive ability linked to weaker moral foundations in UK adults

Zakharin, M., & Bates, T. C. (2025).
Intelligence, 111, 101930.

Abstract

Existing research on the relationship between cognitive ability and moral foundations has yielded contradictory results. While some studies suggest that higher cognitive ability is associated with more enlightened moral intuitions, others indicate it may weaken moral foundations. To address this ambiguity, we conducted two studies (total N = 1320) using the Moral Foundations Questionnaire-2 (MFQ-2) with UK residents. Both Study 1 and Study 2 (preregistered) revealed negative links between cognitive ability and moral foundations. In Study 1, structural models showed negative links between general intelligence (g) and both binding (−0.24) and individualizing (−0.19) foundations. These findings replicated closely in Study 2, with similar coefficients (−0.25 and − 0.18, respectively). Higher verbal ability was specifically associated with lower purity scores. These findings suggest a negative association between cognitive ability and moral foundations, challenging existing theories relating to intelligence and moral intuitions. However, causal direction remains uncertain.

Highlights

• Tested association of intelligence and moral foundations.
• Higher ability linked to lower individualizing and binding.
• Lower Proportionality, Loyalty, Authority, and Purity.
• Lower Equality and Care.
• Verbal ability linked specifically to impurity.
• Replicated in pre-registered large study.

Here are some thoughts:

This research is significant for psychologists as it clarifies the complex relationship between intelligence and moral reasoning. The study found that higher general cognitive ability (g) is negatively associated with all six moral foundations—care, equality, proportionality, loyalty, authority, and purity—suggesting that greater analytical thinking may suppress intuitive moral responses rather than enhance them. This supports what the authors call the Morality Suppression Model , which proposes that higher cognitive ability weakens emotional-moral intuitions rather than reinforcing them. Importantly, the study replicates its findings in two large, independent samples using robust and validated tools like the Moral Foundations Questionnaire-2 (MFQ-2) and the International Cognitive Ability Resource (ICAR), making the results highly credible.

The findings challenge common assumptions that higher intelligence leads to stronger or more "enlightened" moral values. Instead, they show that higher intelligence correlates with a general weakening of moral intuitions across both liberal (individualizing) and conservative (binding) domains. For instance, verbal reasoning was specifically linked to lower endorsement of the purity foundation, suggesting that linguistic sophistication may lead individuals to question traditional norms related to bodily sanctity or self-restraint. These insights contribute to dual-process theories of cognition by showing that reflective thinking can override intuitive moral judgments.

Moreover, the research has implications for understanding ideological differences, as it counters the tendency to view those with opposing moral views as less intelligent. It also informs educational and policy-related efforts aimed at ethical reasoning, particularly in professions requiring high-level decision-making. By demonstrating that the relationship between cognitive ability and moral foundations is consistent across genders and replicated in preregistered studies, this work offers a solid empirical basis for future exploration into how cognitive processes shape moral values.

Friday, September 12, 2025

Could “The Wonder Equation” help us to be more ethical? A personal reflection

Somerville, M. A. (2021).
Ethics & Behavior, 32(3), 226–240.

Abstract

This is a personal reflection on what I have learnt as an academic, researching, teaching and participating in the public square in Bioethics for over four decades. I describe a helix metaphor for understanding the evolution of values and the current “culture wars” between “progressive” and “conservative” values adherents, the uncertainty people’s “mixed values packages” engender, and disagreement in prioritizing individual rights and the “common good”. I propose, as a way forward, that individual and collective experiences of “amazement, wonder and awe” have the power to enrich our lives, help us to find meaning and sometimes to bridge the secular/religious divide and experience a shared moral universe. They can change our worldview, our decisions regarding values and ethics, and whether we live our lives mainly as just an individual – a “me” – or also as a member of a larger community – a “We”. I summarize in an equation – “The Wonder Equation” – what is necessary to reduce or resolve some current hostile values conflicts in order to facilitate such a transition. It will require revisiting and reaffirming the traditional values we still need as both individuals and societies and accommodating them with certain contemporary “progressive" values.

Here are some thoughts:

This article is a personal reflection on her decades of work in bioethics and a proposal for a novel approach to navigating contemporary ethical conflicts. Central to her argument is the idea that cultivating experiences of amazement, wonder, and awe (AWA)—especially when paired with healthy skepticism and free from cynicism and nihilism—can lead to deep gratitude and hope, which in turn inspire individuals and communities to act more ethically. She expresses this as a formula: AWA + S – (C + N) → G + H → E, which she calls “The Wonder Equation.” This equation suggests that rather than relying solely on rational analysis or ideological arguments, engaging our emotional and spiritual capacities can help restore a shared sense of moral responsibility.

For psychologists, Somerville’s work holds particular importance. First, it introduces a fresh lens for understanding moral motivation. Drawing on both personal anecdotes and recent empirical research, she argues that emotional states like awe and wonder are not only enriching but are also linked to prosocial behaviors such as compassion, empathy, and a sense of connectedness. This aligns with psychological studies that show how awe can reduce narcissism, increase well-being, and promote community-oriented values. Second, Somerville’s analysis of today’s “culture wars”—and her critique of rigid ideological divisions between “progressive” and conservative values—offers psychologists insight into how clients might experience internal value conflicts in an increasingly polarized world. Her concept of “mixed values packages” underscores the psychological reality that most people hold complex, sometimes contradictory beliefs, which calls for greater tolerance and openness in both therapy and society at large.

Thursday, September 11, 2025

A foundation model to predict and capture human cognition

Binz, M., Akata, E., et al. (2025).
Nature.

Abstract

Establishing a unified theory of cognition has been an important goal in psychology. A first step towards such a theory is to create a computational model that can predict human behaviour in a wide range of settings. Here we introduce Centaur, a computational model that can predict and simulate human behaviour in any experiment expressible in natural language. We derived Centaur by fine-tuning a state-of-the-art language model on a large-scale dataset called Psych-101. Psych-101 has an unprecedented scale, covering trial-by-trial data from more than 60,000 participants performing in excess of 10,000,000 choices in 160 experiments. Centaur not only captures the behaviour of held-out participants better than existing cognitive models, but it also generalizes to previously unseen cover stories, structural task modifications and entirely new domains. Furthermore, the model’s internal representations become more aligned with human neural activity after fine-tuning. Taken together, our results demonstrate that it is possible to discover computational models that capture human behaviour across a wide range of domains. We believe that such models provide tremendous potential for guiding the development of cognitive theories, and we present a case study to demonstrate this.


Here are some thoughts:

This article is important because it introduces Centaur, a novel computational model that represents a major step toward a unified theory of cognition. By fine-tuning a large language model on a vast dataset of human behavior, the researchers created a model with superior predictive power that can generalize across different cognitive domains. This model not only outperforms existing, specialized cognitive models but also demonstrates an alignment with human neural activity, suggesting it captures fundamental principles of human thought. Ultimately, the paper proposes that Centaur can serve as a powerful tool for scientific discovery, guiding the development and refinement of new psychological theories.

Wednesday, September 10, 2025

To assess or not to assess: Ethical issues in online assessments

Salimuddin, S., Beshai, S., & Loutzenhiser, L. (2025).
Canadian Psychology / Psychologie canadienne.
Advance online publication.

Abstract

There has been a proliferation of psychological services offered via the internet in the past 5 years, with the COVID-19 pandemic playing a large role in the shift from in-person to online services. While researchers have identified ethical issues related to online psychotherapy, little attention has been paid to the ethical issues surrounding online psychological assessments. In this article, we discuss challenges and ethical considerations unique to online psychological assessments and underscore the need for targeted discussions related to this service. We address key ethical issues including informed consent, privacy and confidentiality, competency, and maximizing benefit and minimizing harm, followed by a discussion of ethical issues specific to behavioural observations and standardized testing in online assessments. Additionally, we propose several recommendations, such as integrating dedicated training for online assessments into graduate programmes and expanding the research on cross-modality reliability and validity. These recommendations are closely aligned with principles, standards, and guidelines from the Canadian Code of Ethics for Psychologists, the Canadian Psychological Association Guidelines on Telepsychology, and the Interim Ethical Guidelines for Psychologists Providing Psychological Services via Electronic Media.

Impact Statement

This article provides Canadian psychologists with guidance on the ethical issues to consider when contemplating the remote online administration of psychological assessments. Relevant sources, such as the Canadian Code of Ethics for Psychologists, are used in discussing ethical issues arising in online assessments. 

Here are some thoughts:

The core message is that while online assessments offer significant benefits, especially in terms of accessibility for rural, remote, or underserved populations, they come with a complex array of unique ethical challenges that cannot be ignored. Simply because a service can be delivered online does not mean it should be, without a thorough evaluation of the risks and benefits.

Embrace the potential of online assessments to increase access, but do so responsibly. Prioritize ethical rigor, client well-being, and scientific validity over convenience. The decision to assess online should never be taken lightly and must be grounded in competence, transparency, and a careful weighing of potential harms and benefits.

Tuesday, September 9, 2025

Navigating the Evolving Landscape of Antipsychotic Medications: A Psychologist's Guide

Gavazzi, J. D. (2025).
The Tablet, Summer.

This article outlines the history, mechanisms, uses, and evolving developments of antipsychotic drugs, with a focus on their implications for psychologists. It distinguishes between first-generation antipsychotics (FGAs) that primarily block dopamine D2 receptors and second-generation antipsychotics (SGAs) that additionally modulate serotonin receptors, noting their respective strengths and side-effect profiles. Beyond reducing positive symptoms like hallucinations, some antipsychotics can also help with negative symptoms, cognitive deficits, and mood stabilization, though effects are often modest.

The guide covers off-label uses (e.g., depression, OCD, dementia-related agitation) and stresses caution due to variable efficacy and safety risks, especially in older adults. It highlights the importance of individualized treatment, given significant variability in patient response. Emerging options such as lumateperone, xanomeline-trospium, cholinergic modulators, and TAAR1 agonists represent novel approaches with potentially fewer side effects.

Psychologists’ non-prescribing roles include monitoring treatment effects, educating patients and families, delivering psychosocial interventions, and collaborating with prescribers. The overarching message is that optimal care requires a personalized, integrated approach combining pharmacological knowledge with psychosocial strategies.

An Important Takeaway

Even as antipsychotic medications become more sophisticated, there is no “one-size-fits-all” solution—effective treatment comes from balancing science with individualized, compassionate care. Progress in medication is valuable, but it reaches its fullest potential only when paired with human connection, careful monitoring, and collaborative support.

Monday, September 8, 2025

Cognitive computational model reveals repetition bias in a sequential decision-making task

Legler, E., Rivera, D. C.,  et al. (2025).
Communications Psychology, 3(1).


Abstract

Humans tend to repeat action sequences that have led to reward. Recent computational models, based on a long-standing psychological theory, suggest that action selection can also be biased by how often an action or sequence of actions was repeated before, independent of rewards. However, empirical support for such a repetition bias effect in value-based decision-making remains limited. In this study, we provide evidence of a repetition bias for action sequences using a sequential decision-making task (N = 70). Through computational modeling of choices, we demonstrate both the learning and influence of a repetition bias on human value-based decisions. Using model comparison, we find that decisions are best explained by the combined influence of goal-directed reward seeking and a tendency to repeat action sequences. Additionally, we observe significant individual differences in the strength of this repetition bias. These findings lay the groundwork for further research on the interaction between goal-directed reward seeking and the repetition of action sequences in human decision making.

Here are some thoughts:

This research on "repetition bias in a sequential decision-making task" offers valuable insights for psychologists, impacting both their own professional conduct and their understanding of patient behaviors. The study highlights that human decision-making is not solely driven by the pursuit of rewards, but also by an unconscious tendency to repeat previous action sequences. This finding suggests that psychologists, like all individuals, may be influenced by these ingrained patterns in their own practices, potentially leading to a reliance on familiar methods even when alternative, more effective approaches might exist. An awareness of this bias can foster greater self-reflection, encouraging psychologists to critically evaluate their established routines and adapt their strategies to better serve patient needs.

Furthermore, this research provides a crucial framework for understanding repetitive behaviors in patients. By demonstrating the coexistence of repetition bias with goal-directed reward seeking, the study helps explain why individuals might persist in actions that are not directly rewarding or may even be detrimental, a phenomenon often observed in conditions like obsessive-compulsive disorder or addiction. This distinction between the drivers of behavior can aid psychologists in more accurate patient assessment, allowing them to discern whether a patient's repetitive actions stem from a strong, non-reward-driven bias or from deliberate, goal-oriented choices. The research also notes significant individual differences in the strength of this bias, implying the need for personalized treatment approaches. Moreover, the study's suggestion that frequent repetition contributes to habit formation by diminishing goal-directed control offers insights into how maladaptive habits develop and how interventions can be designed to disrupt these cycles or bolster conscious control.

Sunday, September 7, 2025

Meaningful Psychedelic Experiences Predict Increased Moral Expansiveness

Olteanu, W., & Moreton, S. G. (2025).
Journal of Psychoactive Drugs, 1–9.

Abstract

There has been growing interest in understanding the psychological effects of psychedelic experiences, including their potential to catalyze significant shifts in moral cognition. This retrospective study examines how meaningful psychedelic experiences are related to changes in moral expansiveness and investigates the role of acute subjective effects as predictors of these changes. We found that meaningful psychedelic experiences were associated with self-reported increases in moral expansiveness. Changes in moral expansiveness were positively correlated with reported mystical experiences, ego dissolution, as well as feeling moved and admiration during the experience. Additionally, heightened moral expansiveness was associated with longer term shifts in the propensity to experience the self-transcendent positive emotions of admiration and awe. Future research should further investigate the mechanisms underlying these changes and explore how different types of psychedelic experiences might influence moral decision-making and behavior over time.

Here are some thoughts:

This article explores the relationship between psychedelic experiences and shifts in moral cognition, specifically moral expansiveness—the extent to individuals extend moral concern to a broader range of entities, including humans, animals, and nature. The study found that meaningful psychedelic experiences were associated with self-reported increases in moral expansiveness, with these changes linked to acute subjective effects such as mystical experiences, ego dissolution, and self-transcendent emotions like admiration and awe. The research suggests that psychedelics may facilitate profound shifts in moral attitudes by fostering feelings of interconnectedness and unity, which endure beyond the experience itself.

This study is important for practicing psychologists as it highlights the potential therapeutic and transformative effects of psychedelics on moral and ethical perspectives. Understanding these mechanisms can inform therapeutic approaches, particularly for clients struggling with rigid moral boundaries, lack of empathy, or disconnection from others and the environment. The findings also underscore the role of self-transcendent emotions in promoting prosocial behaviors and well-being, offering insights into interventions that could cultivate such emotions. However, psychologists must approach this area cautiously, considering the legal and ethical implications of psychedelic use, and remain informed about emerging research to guide clients responsibly. The study opens avenues for further exploration into how psychedelic-assisted therapy might address moral and relational challenges in clinical practice.

Saturday, September 6, 2025

Understanding and Combating Human Trafficking: A Psychological Perspective

Sidun, N. M. (2025).
American Psychologist.

Abstract

Human trafficking is a global crisis that represents one of the gravest violations of human rights and dignity in modern times. Defined by international and U.S. frameworks, trafficking involves the exploitation of individuals through fraud, force, or coercion for purposes such as labor, sexual exploitation, or organ harvesting. Psychology provides a unique lens to understand, prevent, and address this issue by examining the underlying psychological mechanisms used by traffickers and the profound effects on survivors. Traffickers leverage psychological manipulation—grooming, coercion, and trauma bonding—to control victims, while survivors endure severe mental health consequences, including posttraumatic stress disorder, complex trauma, depression, and anxiety.

Psychologists play a pivotal role in combating trafficking through research, education, advocacy, and clinical practice. Research informs prevention by identifying vulnerabilities and effective interventions. Education raises public awareness and equips professionals to recognize and support victims.Advocacy shapes policies that uphold human rights and strengthen antitrafficking laws. Clinicians provide essential trauma-and trafficking-informed care tailored to survivors, utilizing evidence-based practices and adjunctive psychological interventions that foster healing and resilience while addressing immediate and long-term impacts. In conclusion, psychology is integral to eradicating human trafficking. By bridging research, practice, and policy, psychology contributes significantly to global antitrafficking efforts, ensuring a lasting impact on addressing this pervasive human rights violation.

Public Significance Statement

This article presents an overview of human trafficking and how psychology can assist in understanding various aspects of trafficking. It describes how psychology is well-positioned to help prevent, recognize, and support the elimination of human trafficking.

Friday, September 5, 2025

The Psychology of Precarity: A Critical Framework

Blustein, D. L., Grzanka, P. R., et al. (2024).
American Psychologist.

Abstract

This article presents the rationale and a new critical framework for precarity, which reflects a psychosocial concept that links structural inequities with experiences of alienation, anomie, and uncertainty. Emerging from multiple disciplines, including anthropology, cultural studies, sociology, political science, and psychology, the concept of precarity provides a conceptual scaffolding for understanding the complex causes of precarious life circumstances while also seeking to identify how people react, adapt, and resist the forces that evoke such tenuous psychosocial experiences.Wepresent a critical conceptual framework as a nonlinear heuristic that serves to identify and organize relevant elements of precarity in a presumably infinite number of contexts and applications. The framework identifies socio-political-economic contexts, material conditions, and psychological experiences as key elements of precarity. Another essential aspect of this framework is the delineation of interrelated and nonlinear responses to precarity, which include resistance, adaptation, and resignation. We then summarize selected implications of precarity for psychological interventions, vocational and organizational psychology, and explorations and advocacy about race, gender, and other systems of inequality. Future research directions, including optimal methodologies to study precarity, conclude the article.

Public Significance Statement

In this study, we introduce the concept of precarity, which links feelings of alienation, instability, insecurity, and existential threat with structural inequities. The complex ways that precarity influences and constrains people are described in a framework that includes a discussion about how people react, adapt, and resist the causes of precarity. Implications for psychological practice, research, and social/racial justice conclude the article.

Here are some thoughts:

This article is important for practicing psychologists and other mental health professionals because it offers a critical framework for understanding precarity, which can help them move beyond individualistic interpretations of suffering and incorporate structural factors into their practice. The article argues that psychology has historically advanced neoliberal ideology by focusing on the self and mental health as solutions to social and economic problems, potentially pathologizing individuals experiencing precarity.

By adopting a psychology of precarity, professionals can better conceptualize and critique the psychosocial costs of widespread instability. This framework emphasizes the dynamic nature of precarity, its various antecedents and outcomes, and individual and collective responses to it, such as resistance, adaptation, or resignation. It highlights how socio-political-economic contexts, like the retreat of the social welfare state and hyper-individualism, contribute to precarity and its effects, which are often deeply complementary to other forms of oppression such as anti-Blackness, colonialism, and misogyny.

The article suggests that this framework can infuse structural thought into conceptualizations and interventions for people struggling with various life aspects, fostering critical consciousness about systemic inequities. For instance, it can help understand psychological costs like anxiety, existential threat, and chronic stress as responses to chronic uncertainty rather than solely individual psychopathology.