Bello, P., & Malle, B. F. (2023). In R. Sun (Ed.), Cambridge Handbook of Computational Cognitive Sciences (pp. 1037–1063). Cambridge University Press.
Introduction
Morality regulates individual behavior so that it complies with community interests (Curry et al., 2019; Haidt, 2001; Hechter & Opp, 2001). Humans achieve this regulation by motivating and deterring certain behaviors through the imposition of norms – instructions for how one should or should not act in a particular context (Fehr & Fischbacher, 2004; Sripada & Stich, 2006) – and, if a norm is violated, by levying sanctions (Alexander, 1987; Bicchieri, 2006). This chapter examines the mental and behavioral processes that enable humans to live in moral communities and how these processes might be represented computationally and ultimately engineered in embodied agents.
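To make the idea of computational representation concrete, consider a minimal sketch of a norm as a data structure pairing a context with a deontic status for an action, together with a check that flags violations that could trigger sanctions. This is purely illustrative; the names (Norm, detect_violations) and the queue-jumping example are assumptions, not an implementation from the literature.

```python
# A minimal, illustrative sketch of norms as computational objects:
# a norm pairs a context with a deontic status ("obligatory" or
# "prohibited") for an action; a violation occurs when behavior
# conflicts with an applicable norm. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    context: str   # situation in which the norm applies
    action: str    # the regulated behavior
    deontic: str   # "obligatory" or "prohibited"

def detect_violations(context, performed_actions, norms):
    """Return the norms violated by the performed actions in this context."""
    violations = []
    for norm in norms:
        if norm.context != context:
            continue  # norm does not apply in this context
        done = norm.action in performed_actions
        if norm.deontic == "prohibited" and done:
            violations.append(norm)
        elif norm.deontic == "obligatory" and not done:
            violations.append(norm)
    return violations

# Example: a community norm against queue-jumping.
norms = [Norm("waiting_in_line", "cut_in_line", "prohibited"),
         Norm("waiting_in_line", "wait_your_turn", "obligatory")]
print(detect_violations("waiting_in_line", {"cut_in_line"}, norms))
# -> both norms are violated, which could warrant a sanction
```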
Computational work on morality arises from two major sources. One is empirical moral science, which accumulates knowledge about a variety of phenomena of human morality, such as moral decision making, judgment, and emotions. Resulting computational work tries to model and explain these human phenomena. The second source is philosophical ethics, which has for millennia discussed moral principles by which humans should live. Resulting computational work is often labeled machine ethics: the attempt to create artificial agents with moral capacities reflecting one or more ethical theories. A brief review of these two sources will ground the subsequent discussion of computational morality.
The chapter first maps the key moral phenomena – behavior, judgment, emotion, sanction, and communication – and argues that they are shaped by social norms, community instructions specifying acceptable and unacceptable behavior, rather than by innate brain circuits. It then turns to philosophical ethics, contrasting deontology (duty-based ethics, exemplified by Kant, Rawls, and Ross) with consequentialism (outcome-based ethics, particularly utilitarianism), and addresses computational challenges such as scaling, conflicting preferences, and the framing of moral problems. Finally, it surveys rule-based approaches, case-based reasoning, reinforcement learning, and cognitive science perspectives on modeling moral decision making.
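The contrast between the two ethical theories can be sketched computationally as well: a deontological agent checks an action against duties, while a utilitarian agent ranks actions by aggregate outcome value. The sketch below is a toy illustration under assumed predicate names and invented utility numbers, not a method from the chapter.

```python
# Illustrative contrast between duty-based and outcome-based evaluation
# of the same candidate action. Predicate names and utility values are
# hypothetical, chosen only to show where the two theories can diverge.

def deontological_permissible(action, duties):
    """Duty-based check: an action is impermissible if it violates any duty."""
    return not any(duty_violated(action) for duty_violated in duties)

def utilitarian_best(actions, utility):
    """Outcome-based choice: pick the action maximizing aggregate utility."""
    return max(actions, key=utility)

# Hypothetical example: lying to spare someone's feelings.
violates_honesty = lambda a: a == "lie"
duties = [violates_honesty]
utility = {"lie": 5, "tell_truth": 2}.get  # assumed aggregate welfare values

print(deontological_permissible("lie", duties))          # False: duty forbids it
print(utilitarian_best(["lie", "tell_truth"], utility))  # "lie": higher welfare
```

The example shows the familiar divergence: the deontological check forbids the lie regardless of outcome, while the utilitarian ranking selects it because of its (assumed) higher aggregate welfare.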