Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, November 4, 2025

Moral trauma, moral distress, moral injury, and moral injury disorder: definitions and assessments

VanderWeele, T. J., Wortham, et al. (2025).
Frontiers in Psychology, 16, 1422441.

Abstract

We propose new definitions for moral injury and moral distress, encompassing many prior definitions, but broadening moral injury to more general classes of victims, in addition to perpetrators and witnesses, and broadening moral distress to include settings not involving institutional constraints. We relate these notions of moral distress and moral injury to each other, and locate them on a “moral trauma spectrum” that includes considerations of both persistence and severity. Instances in which moral distress is particularly severe and persistent, and extends beyond cultural and religious norms, might be considered to constitute “moral injury disorder.” We propose a general assessment to evaluate various aspects of this proposed moral trauma spectrum, and one that can be used both within and outside of military contexts, and for perpetrators, witnesses, victims, or more generally.

Here are some thoughts:

This article proposes updated, broader definitions of moral injury and moral distress, expanding moral injury to include victims (not just perpetrators or witnesses) and moral distress to include non-institutional contexts. The authors introduce a unified concept called the “moral trauma spectrum,” which ranges from temporary moral distress to persistent moral injury—and in severe, functionally impairing cases, possibly a “moral injury disorder.” They distinguish moral trauma from PTSD, noting different causes (moral transgressions or worldview disruptions vs. fear-based trauma) and treatment needs. The paper also presents a new assessment tool with definitional and symptom items applicable across military, healthcare, and civilian settings. Finally, it notes the recent inclusion of “Moral Problems” in the DSM-5-TR as a significant step toward clinical recognition.

Monday, November 3, 2025

Scaling Laws Are Unreliable for Downstream Tasks: A Reality Check

Lourie, N., Hu, M. Y., & Cho, K. (2025).
arXiv preprint.

Abstract

Downstream scaling laws aim to predict task performance at larger scales from pretraining losses at smaller scales. Whether this prediction should be possible is unclear: some works demonstrate that task performance follows clear linear scaling trends under transformation, whereas others point out fundamental challenges to downstream scaling laws, such as emergence and inverse scaling. In this work, we conduct a meta-analysis of existing data on downstream scaling laws, finding that close fit to linear scaling laws only occurs in a minority of cases: 39% of the time. Furthermore, seemingly benign changes to the experimental setting can completely change the scaling trend. Our analysis underscores the need to understand the conditions under which scaling laws succeed. To fully model the relationship between pretraining loss and downstream task performance, we must embrace the cases in which scaling behavior deviates from linear trends.

Here is a summary:

This paper challenges the reliability of downstream scaling laws—the idea that you can predict how well a large language model will perform on specific tasks (like question answering or reasoning) based on its pretraining loss at smaller scales. While some prior work claims a consistent, often linear relationship between pretraining loss and downstream performance, this study shows that such predictable scaling is actually the exception, not the rule.
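
To make the setup concrete, here is a minimal sketch of what fitting a downstream scaling law typically looks like, assuming the common recipe of regressing logit-transformed task accuracy linearly on pretraining loss; the numbers are invented for illustration and this is not the authors' code.

    import numpy as np

    # Hypothetical sweep of small models: pretraining loss (nats/token)
    # paired with accuracy on some downstream task.
    pretrain_loss = np.array([3.2, 3.0, 2.8, 2.6, 2.4])
    task_accuracy = np.array([0.31, 0.38, 0.47, 0.55, 0.64])

    # Logit-transform accuracy so that a "clean" scaling trend
    # becomes a straight line in pretraining loss.
    logit_acc = np.log(task_accuracy / (1.0 - task_accuracy))

    # Fit the linear law: logit(accuracy) ~ a * loss + b.
    a, b = np.polyfit(pretrain_loss, logit_acc, deg=1)

    # Extrapolate to the (lower) pretraining loss of a larger model.
    target_loss = 2.0
    pred_accuracy = 1.0 / (1.0 + np.exp(-(a * target_loss + b)))
    print(f"predicted accuracy at loss {target_loss}: {pred_accuracy:.3f}")

The paper's point is that this tidy picture held in only a minority (39%) of the task/setting combinations examined.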

Key findings:
  • Only 39% of 46 evaluated tasks showed smooth, predictable (linear-like) scaling.
  • The rest exhibited irregular behaviors: inverse scaling (performance gets worse as models grow), nonmonotonic trends, high noise, no trend, or sudden “breakthrough” improvements (emergence).
  • Validation dataset choice matters: switching the corpus used to compute pretraining perplexity can flip conclusions about which model or pretraining data is better (see the sketch after this list).
  • Experimental details matter: even with the same task and data, small changes in setup (e.g., prompt format, number of answer choices) can qualitatively change scaling behavior.
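
On the validation-corpus point above, here is a toy illustration of how the ranking of two models by held-out perplexity can flip depending on which corpus the perplexity is computed on; all loss values are invented for illustration.

    import math

    # Hypothetical mean per-token losses for two models on two corpora.
    losses = {
        ("model_A", "web_text"): 2.61,
        ("model_B", "web_text"): 2.55,   # B wins on web text...
        ("model_A", "code"):     1.94,   # ...but A wins on code.
        ("model_B", "code"):     2.07,
    }

    for corpus in ("web_text", "code"):
        best = min(("model_A", "model_B"),
                   key=lambda m: losses[(m, corpus)])
        ppl = math.exp(losses[(best, corpus)])
        print(f"{corpus}: best model is {best} (perplexity {ppl:.2f})")

Any scaling-law conclusion built on top of those perplexities inherits the flip.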
Conclusion: Downstream scaling laws are context-dependent and fragile. Researchers and practitioners should not assume linear scaling holds universally; they must validate scaling behavior in their own settings before relying on extrapolations. One simple check is sketched below.
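
One simple way to follow that advice is a held-out extrapolation check: fit the law on all but the largest scale, then measure how far the prediction lands from the measured value before trusting any further extrapolation. A minimal sketch, reusing the logit-linear form from above, with invented numbers:

    import numpy as np

    def extrapolation_check(loss, acc, n_holdout=1):
        # Sort from highest loss (smallest model) to lowest, then hold
        # out the largest scale(s) as the test point(s).
        order = np.argsort(-loss)
        loss, acc = loss[order], acc[order]
        fit_l, fit_a = loss[:-n_holdout], acc[:-n_holdout]
        test_l, test_a = loss[-n_holdout:], acc[-n_holdout:]

        # Fit the logit-linear law on the smaller scales only.
        a, b = np.polyfit(fit_l, np.log(fit_a / (1 - fit_a)), deg=1)
        pred = 1.0 / (1.0 + np.exp(-(a * test_l + b)))
        return pred, test_a

    # Hypothetical sweep whose largest model flattens off the trend.
    loss = np.array([3.2, 3.0, 2.8, 2.6, 2.4, 2.1])
    acc = np.array([0.31, 0.38, 0.47, 0.55, 0.64, 0.66])

    pred, actual = extrapolation_check(loss, acc)
    print(f"predicted {pred[0]:.3f} vs. measured {actual[0]:.3f}")

A large gap between the predicted and measured values, as in this example, is exactly the kind of deviation the paper documents, and a signal not to extrapolate further.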