Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 11, 2022

Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality

André, J., Debove, S., Fitouchi, L., & Baumard, N. (2022, May 24). https://doi.org/10.31234/osf.io/2hxgu

Abstract

Our goal in this paper is to use an evolutionary approach to explain the existence and design features of human moral cognition. Our approach is based on the premise that human beings are under selection to appear as good cooperative investments. Hence, they face a trade-off between maximizing the immediate gains of each social interaction and maximizing its long-term reputational effects. In a simple 2-player model, we show that this trade-off leads individuals to maximize the generalized Nash product at evolutionary equilibrium, i.e., to behave according to the generalized Nash bargaining solution. We infer from this result the theoretical proposition that morality is a domain-general calculator of this bargaining solution. We then proceed to describe the generic consequences of this approach: (i) everyone in a social interaction deserves to receive a net benefit, (ii) people ought to act in ways that would maximize social welfare if everyone were acting in the same way, (iii) all domains of social behavior can be moralized, (iv) moral duties can seem both principled and non-contractual, and (v) morality depends on the context. Next, we apply the approach to some of the main areas of social life and show that it allows us to explain, with a single logic, the entire set of what are generally considered to be distinct moral domains. Lastly, we discuss the relationship between this account of morality and other evolutionary accounts of morality and cooperation.
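For readers less familiar with bargaining theory, the "generalized Nash product" referred to above can be written, in standard notation (ours, not necessarily the paper's), as the quantity being maximized in

\max_x \; \prod_i \big( u_i(x) - d_i \big)^{\gamma_i}

where u_i(x) is individual i's payoff under outcome x, d_i is the payoff i would obtain if the interaction broke down (the disagreement point), and \gamma_i is i's bargaining power. In the paper's 2-player setting this reduces to choosing the outcome that maximizes (u_1 - d_1)^{\gamma} (u_2 - d_2)^{1-\gamma}, and the abstract's claim is that selection for reputation leads individuals to behave as if they were computing this solution.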

From the section "The psychological signature of morality: the right, the wrong and the duty":

Cooperating for the sake of reputation always entails that, at some point in a social interaction, one is in a position to access benefits but decides to give them up, not for a short-term instrumental purpose, but for the long-term aim of having a good reputation. And by this we mean, precisely, the long-term aim of being considered someone with whom cooperation ends up bringing a net benefit rather than a net cost, not only in the eyes of a particular partner, but in the eyes of any potential future partner. This specific and universal property of reputation-based cooperation explains the specific and universal phenomenology of moral decisions.

To understand this, one must distinguish between what people do in practice and what they think is right to do. In practice, people may sometimes cheat, i.e., not respect the contract. They may do so conditionally on the specific circumstances, if they judge that the actual reputational benefits of doing their duty are lower than its immediate cost (e.g., if their cheating has a chance of going unnoticed). This should not, and in fact does not (Knoch et al., 2009; Kogut, 2012; Sheskin et al., 2014; Smith et al., 2013), change their assessment of what would have been the right thing to do. This assessment can only be absolute, in the sense that it depends solely on what one needs to do to ensure that the interaction ends up bringing a net benefit rather than a cost to one's partner, i.e., to respect the contract, and is not affected by the actual reputational stake of the specific interaction. Or, to put it another way, people must calculate their moral duty by thinking "If someone were looking at me, what would they think?", regardless of whether anyone is actually looking at them.
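One loose way to formalize the distinction drawn in this passage (our sketch, not a model taken from the paper): in a given interaction, the temptation to cheat follows a comparison such as

cheat if \; c > p \cdot B

where c is the immediate cost of respecting the contract, p is the perceived probability that a transgression would be noticed, and B is the long-term reputational value of being seen as a profitable cooperation partner. The judgment of what is right, by contrast, is computed as if p = 1, i.e., as if someone were always watching, which is why it does not vary with the actual reputational stakes of the particular interaction.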