Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, June 20, 2025

Artificial intelligence and free will: generative agents utilizing large language models have functional free will

Martela, F. (2025). AI and Ethics.

Abstract

Combining large language models (LLMs) with memory, planning, and execution units has made possible almost human-like agentic behavior, where the artificial intelligence creates goals for itself, breaks them into concrete plans, and refines the tactics based on sensory feedback. Do such generative LLM agents possess free will? Free will requires that an entity exhibits intentional agency, has genuine alternatives, and can control its actions. Building on Dennett’s intentional stance and List’s theory of free will, I will focus on functional free will, where we observe an entity to determine whether we need to postulate free will to understand and predict its behavior. Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does not entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will.
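The agentic loop the abstract describes (the system proposes goals, breaks them into concrete plans, executes them, and refines its tactics from feedback) can be sketched in a few lines of Python. This is a minimal illustrative sketch only; every class, method, and string below is a hypothetical stand-in for LLM calls and environment actions, not Voyager's actual interface.

```python
class GenerativeAgent:
    """Toy model of the goal -> plan -> execute -> feedback loop.

    All names here are illustrative assumptions; a real agent would
    replace propose_goal/plan with LLM calls and execute with actions
    in an environment such as Minecraft.
    """

    def __init__(self):
        self.memory = []      # stores past outcomes ("sensory feedback")
        self.goal_count = 0

    def propose_goal(self):
        # Stand-in for an LLM call that proposes a new goal,
        # conditioned on what is already in memory.
        self.goal_count += 1
        return f"goal-{self.goal_count}"

    def plan(self, goal):
        # Stand-in for an LLM call that breaks a goal into concrete steps.
        return [f"{goal}/step-{i}" for i in range(3)]

    def execute(self, step):
        # Stand-in for acting in the environment and observing the result.
        return {"step": step, "success": True}

    def run(self, iterations=2):
        for _ in range(iterations):
            goal = self.propose_goal()
            for step in self.plan(goal):
                feedback = self.execute(step)
                self.memory.append(feedback)  # feedback refines later goals
        return self.memory

agent = GenerativeAgent()
history = agent.run()
```

The philosophical point rides on this structure: to predict what such a loop will do next, an observer must talk about its goals and the alternatives it faces, which is exactly the "intentional stance" the paper invokes.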

Here are some thoughts:

This article explores whether advanced AI systems, particularly generative agents built on large language models (LLMs), possess free will. The author argues that while these agents may lack “physical free will,” meaning the ability to alter physical causal chains, they do exhibit “functional free will”: the capacity to display intentional agency, recognize genuine alternatives, and control their actions based on internal intentions. The article uses two running examples, Voyager, an LLM-powered Minecraft agent, and Spitenik, a hypothetical autonomous assassin drone, to illustrate how such systems meet these criteria.

This research is important for psychologists because it challenges traditional views of free will, which often center on human consciousness and metaphysical considerations. It compels psychologists to reconsider how we attribute agency and decision-making to various entities, including AI, and how that attribution shapes our understanding of behavior.