Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Contractualism.

Thursday, November 21, 2024

Moral Judgment Is Sensitive to Bargaining Power

Le Pargneux, A., & Cushman, F. (2024).
Journal of Experimental Psychology: General.
Advance online publication.

Abstract

For contractualist accounts of morality, actions are moral if they correspond to what rational or reasonable agents would agree to do, were they to negotiate explicitly. This, in turn, often depends on each party’s bargaining power, which varies with each party’s stakes in the potential agreement and available alternatives in case of disagreement. If there is an asymmetry, with one party enjoying higher bargaining power than another, this party can usually get a better deal, as often happens in real negotiations. A strong test of contractualist accounts of morality, then, is whether moral judgments do take bargaining power into account. We explore this in five preregistered experiments (n = 3,025; U.S.-based Prolific participants). We construct scenarios depicting everyday social interactions between two parties in which one of them can perform a mutually beneficial but unpleasant action. We find that the same actions (asking the other to perform the unpleasant action or explicitly refusing to do it) are perceived as less morally appropriate when performed by the party with lower bargaining power, as compared to the party with higher bargaining power. In other words, participants tend to give more moral leeway to parties with better bargaining positions and to hold disadvantaged parties to stricter moral standards. This effect appears to depend only on the relative bargaining power of each party but not on the magnitude of the bargaining power asymmetry between them. We discuss implications for contractualist theories of moral cognition and the emergence and persistence of unfair norms and inequality.

Public Significance Statement

Many social interactions involve opportunities for mutual benefit. By engaging in negotiation—sometimes explicitly, but often tacitly—we decide what each party should do and enter arrangements that we anticipate will be advantageous for everyone involved. Contractualist theories of morality insist on the fundamental role played by such bargaining procedures in determining what constitutes appropriate and inappropriate behavior. But the outcome of a negotiation often depends on each party’s bargaining power and their relative positions if an agreement cannot be reached. And situations in which each party enjoys equal bargaining power are rare. Here, we investigate the influence of bargaining power on our moral judgments. Consistent with contractualist accounts, we find that moral judgments take bargaining power considerations into account, to the benefit of the powerful party, and that parties with lower bargaining power are held to stricter moral standards.

Here are some thoughts:

First, this research provides insights into how people perceive fairness and morality in social interactions, which is fundamental to understanding human behavior and relationships. Mental health professionals often deal with clients struggling with interpersonal conflicts, and recognizing the role of bargaining power in these situations can help them better analyze and address these issues.

Second, the findings suggest that people tend to give more moral leeway to those with higher bargaining power and hold disadvantaged individuals to stricter moral standards. This knowledge is essential for therapists working with clients from diverse socioeconomic backgrounds, as it can help them recognize and address potential biases in their own judgments and those of their clients.

Furthermore, the research implications regarding the emergence and persistence of inequality are particularly relevant for mental health professionals. Understanding how moral intuitions may contribute to the perpetuation of unfair norms and outcomes can help therapists develop more effective strategies for addressing issues related to social inequality and its impact on mental health.

Lastly, the findings highlight the complexity of moral cognition and decision-making processes. This knowledge can enhance therapists' ability to help clients explore their own moral reasoning and decision-making patterns, potentially leading to more insightful and effective therapeutic interventions.

Saturday, August 3, 2024

Moral agents as relational systems: The Contract-Based Model of Moral Cognition for AI

Vidal, L. M., Marchesi, S., Wykowska, A., & Pretus, C.
(2024, July 3)

Abstract

As artificial systems become more prevalent in our daily lives, we should ensure that they make decisions aligned with human values. Utilitarian algorithms, which aim to maximize benefits and minimize harm, fall short when it comes to human autonomy and fairness, since they are insensitive to other-centered human preferences and to how burdens and benefits are distributed, as long as the majority benefits. We propose a Contract-Based model of moral cognition that regards artificial systems as relational systems subject to a social contract. To articulate this social contract, we draw from contractualism, an impartial ethical framework that evaluates the appropriateness of behaviors based on whether they can be justified to others. In its current form, the Contract-Based model characterizes artificial systems as moral agents bound by obligations towards humans. Specifically, this model allows artificial systems to make moral evaluations by estimating the relevance each affected individual assigns to the norms transgressed by an action. It can also learn from human feedback, which is used to generate new norms and update the relevance of different norms across social groups and types of relationships. The model's ability to justify its choices to humans, together with the central role of human feedback in moral evaluation and learning, makes this model suitable for supporting human autonomy and fairness in human-robot interactions. As human relationships with artificial agents evolve, the Contract-Based model could also incorporate new terms into the social contract between humans and machines, including terms that confer on artificial agents a status as moral patients.
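The norm-relevance mechanism the abstract describes can be sketched as a toy computation. Everything below (the class name, the methods, the feedback update rule) is my own illustrative assumption, not the authors' implementation: an action's moral cost is the summed relevance that affected individuals assign to the transgressed norms, and human feedback nudges the stored relevance weights.

```python
class ContractEvaluator:
    """Toy sketch of a contractualist moral evaluator.

    Illustrative only: the structure is invented, not taken from the
    paper. Each affected individual assigns a relevance weight to each
    norm; an action's moral cost is the total relevance of the norms it
    transgresses, summed across affected individuals.
    """

    def __init__(self):
        # relevance[individual][norm] -> weight in [0, 1]
        self.relevance = {}

    def set_relevance(self, individual, norm, weight):
        self.relevance.setdefault(individual, {})[norm] = weight

    def moral_cost(self, transgressed_norms, affected):
        # Sum each affected individual's relevance for each transgressed norm.
        return sum(self.relevance.get(ind, {}).get(n, 0.0)
                   for ind in affected for n in transgressed_norms)

    def update_from_feedback(self, individual, norm, feedback, lr=0.1):
        # Nudge the stored relevance toward the human feedback signal.
        cur = self.relevance.get(individual, {}).get(norm, 0.5)
        self.set_relevance(individual, norm, cur + lr * (feedback - cur))


# Toy usage: one norm, two affected individuals.
ev = ContractEvaluator()
ev.set_relevance("alice", "privacy", 0.8)
ev.set_relevance("bob", "privacy", 0.4)
cost = ev.moral_cost(["privacy"], ["alice", "bob"])  # 0.8 + 0.4
```

The point of the sketch is only that "justifiability to others" becomes a quantity the system can compute and revise per individual and per relationship, which is what lets the model explain its choices and learn from feedback.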


Here are some thoughts:

The article proposes a Contract-Based model of moral cognition for artificial intelligence (AI) systems, drawing from the ethical framework of contractualism, which evaluates actions based on their justifiability to others. This model views AI systems as relational entities bound by a social contract with humans, allowing them to make moral evaluations by estimating the relevance of norms to affected individuals and learning from human feedback to generate and update these norms. The model is designed to support human autonomy and fairness in human-robot interactions and can also function as moral enhancers to assist humans in moral decision-making in human-human interactions. However, the use of moral enhancers raises ethical concerns about autonomy, responsibility, and potential unintended consequences. Additionally, the article suggests that as human relationships with AI evolve, the model could incorporate new terms in the social contract, potentially recognizing AI systems as moral patients. This forward-thinking approach anticipates the complex ethical questions that may arise as AI becomes more integrated into daily life.

Monday, July 11, 2022

Moral cognition as a Nash product maximizer: An evolutionary contractualist account of morality

André, J., Debove, S., Fitouchi, L., & Baumard, N.
(2022, May 24). https://doi.org/10.31234/osf.io/2hxgu

Abstract

Our goal in this paper is to use an evolutionary approach to explain the existence and design features of human moral cognition. Our approach is based on the premise that human beings are under selection to appear as good cooperative investments. Hence, they face a trade-off between maximizing the immediate gains of each social interaction and maximizing its long-term reputational effects. In a simple 2-player model, we show that this trade-off leads individuals to maximize the generalized Nash product at evolutionary equilibrium, i.e., to behave according to the generalized Nash bargaining solution. We infer from this result the theoretical proposition that morality is a domain-general calculator of this bargaining solution. We then proceed to describe the generic consequences of this approach: (i) everyone in a social interaction deserves to receive a net benefit, (ii) people ought to act in ways that would maximize social welfare if everyone was acting in the same way, (iii) all domains of social behavior can be moralized, (iv) moral duties can seem both principled and non-contractual, and (v) morality shall depend on the context. Next, we apply the approach to some of the main areas of social life and show that it allows us to explain, with a single logic, the entire set of what are generally considered to be different moral domains. Lastly, we discuss the relationship between this account of morality and other evolutionary accounts of morality and cooperation.
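The "generalized Nash product" at the heart of the model can be made concrete with a small numerical sketch (mine, not the authors'): in a two-player split of a fixed surplus, the generalized Nash bargaining solution maximizes (u1 − d1)^α (u2 − d2)^β, where d1 and d2 are the disagreement payoffs and α and β encode bargaining power. A brute-force grid search recovers the textbook answers:

```python
def nash_split(alpha, beta, surplus=1.0, d1=0.0, d2=0.0, steps=100_000):
    """Grid-search the generalized Nash bargaining solution for a
    two-player split of a fixed surplus (illustrative sketch only).

    Returns player 1's share x that maximizes the generalized Nash
    product (x - d1)**alpha * ((surplus - x) - d2)**beta.
    """
    best_x, best_val = 0.0, float("-inf")
    for i in range(steps + 1):
        x = surplus * i / steps
        g1, g2 = x - d1, (surplus - x) - d2
        if g1 <= 0 or g2 <= 0:
            continue  # both players must gain relative to disagreement
        val = (g1 ** alpha) * (g2 ** beta)  # generalized Nash product
        if val > best_val:
            best_x, best_val = x, val
    return best_x
```

With equal weights the surplus splits 50/50; doubling one side's weight (α = 2, β = 1) shifts the solution to 2/3 for that side, the analytic maximizer of x²(1 − x). This is the sense in which asymmetric bargaining power, in such models, translates directly into asymmetric entitlements.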

From the section "The psychological signature of morality: the right, the wrong and the duty":

Cooperating for the sake of reputation always entails that, at some point along social interactions, one is in a position to access benefits, but one decides to give them up, not for a short-term instrumental purpose, but for the long-term aim of having a good reputation. And, by this, we mean precisely: the long-term aim of being considered someone with whom cooperation ends up bringing a net benefit rather than a net cost, not only in the eyes of a particular partner, but in the eyes of any potential future partner. This specific and universal property of reputation-based cooperation explains the specific and universal phenomenology of moral decisions.

To understand, one must distinguish what people do in practice from what they think is right to do. In practice, people may sometimes cheat, i.e., not respect the contract. They may do so conditionally on the specific circumstances, if they evaluate that the actual reputational benefits of doing their duty are lower than the immediate cost (e.g., if their cheating has a chance to go unnoticed). This should not – and in fact does not (Knoch et al., 2009; Kogut, 2012; Sheskin et al., 2014; Smith et al., 2013) – change their assessment of what would have been the right thing to do. This assessment can only be absolute, in the sense that it depends only on what one needs to do to ensure that the interaction ends up bringing a net benefit to one's partner rather than a cost, i.e., to respect the contract, and is not affected by the actual reputational stake of the specific interaction. Or, to put it another way, people must calculate their moral duty by thinking "If someone was looking at me, what would they think?", regardless of whether anyone is actually looking at them.