Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, April 10, 2026

Why AI systems don’t learn and what to do about it

Dupoux, E., LeCun, Y., & Malik, J. (2026).

Introduction

We critically examine the limitations of current AI models in achieving autonomous learning and
propose a learning architecture inspired by human and animal cognition. The proposed framework
integrates learning from observation (System A) and learning from active behavior (System B) while
flexibly switching between these learning modes as a function of internally generated meta-control
signals (System M). We discuss how this could be built by taking inspiration from how organisms adapt
to real-world, dynamic environments across evolutionary and developmental timescales.
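The abstract's three-part architecture can be illustrated with a toy sketch. Only the labels System A, System B, and System M come from the paper; every class name, signal, and threshold below is a hypothetical stand-in for whatever mechanisms the authors actually propose.

```python
import random

class SystemA:
    """Passive observational learner: absorbs whatever data arrives."""
    def __init__(self):
        self.observations = []

    def update(self, datum):
        self.observations.append(datum)

class SystemB:
    """Active learner: chooses an action, then learns from its outcome."""
    def __init__(self):
        self.outcomes = []

    def act_and_update(self, env):
        # The action space here is arbitrary; it only illustrates the
        # act-then-observe loop that distinguishes active learning.
        action = random.choice(("probe_left", "probe_right"))
        self.outcomes.append((action, env(action)))

def system_m(uncertainty, threshold=0.5):
    """Meta-controller: a toy internally generated signal (here, a scalar
    'uncertainty') decides which learning mode to engage."""
    return "B" if uncertainty > threshold else "A"

def run(stream, env):
    """Route a stream of (datum, uncertainty) pairs through the two modes."""
    a, b = SystemA(), SystemB()
    for datum, uncertainty in stream:
        if system_m(uncertainty) == "A":
            a.update(datum)
        else:
            b.act_and_update(env)
    return a, b
```

The design point is the separation of concerns: neither learner decides when it runs; switching is delegated entirely to the meta-control signal, mirroring the flexible mode-switching the abstract attributes to System M.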


Here are some thoughts:

This paper draws heavily on cognitive science and developmental psychology in ways that should resonate with practicing psychologists. The authors lean on foundational developmental psychology, including Piaget, Vygotsky, infant perceptual learning, critical periods, and social learning theory, as the blueprint for next-generation AI. For psychologists, this is a meaningful acknowledgment that decades of careful empirical work on human cognition are not just descriptively interesting but architecturally prescriptive for building intelligent systems.

By cataloguing what current AI cannot do, the paper implicitly maps the distinctive features of human cognition: flexible switching between learning modes, active data selection, embodied grounding, and lifelong adaptation. For clinical or educational psychologists, this reinforces the irreplaceable value of understanding genuine human learning.

The ethical sections of the paper are also directly clinically relevant, as the authors raise concerns about anthropomorphization, over-trust in AI agents, and the possibility that AI systems processing somatic-like signals may have uncertain moral status. These are questions psychologists will increasingly face as clients interact with AI systems in therapeutic and educational contexts.

Perhaps most importantly, the paper suggests that the gap between AI and human intelligence is not primarily about raw computation but about the architecture of learning itself, which has been psychology's domain all along.