Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, September 10, 2023

Seeing and sanctioning structural unfairness

Flores-Robles, G., & Gantman, A. P. (2023, June 28). Seeing and sanctioning structural unfairness. PsyArXiv.

Abstract

People tend to explain wrongdoing as the result of a bad actor or a bad system. In five studies (four U.S. online convenience samples, one U.S. representative sample), we tested whether the way people understand unfairness affects how they sanction it. In Pilot 1A (N = 40), people interpreted unfair offers in an economic game as the result of a bad actor (vs. unfair rules) unless incentivized (Pilot 1B, N = 40), which, in Study 1 (N = 370), predicted costly punishment of individuals (vs. changing unfair rules). In Studies 2 (N = 500) and 3 (N = 470; representative of age, gender, and ethnicity in the U.S.), we found that people paid to change the rules for the final round of the game (vs. punished individuals) when they were randomly assigned a bad-system (vs. bad-actor) explanation for prior identical unfair offers. Explanations for unfairness affect how people sanction it.

Statement of Relevance

Humans are facing massive problems, including economic and social inequality. These problems are often framed in the media, and by friends and experts, as a problem either of individual action (e.g., racist beliefs) or of structures (e.g., discriminatory housing laws). The current research uses a context-free economic game to ask whether these explanations have any effect on what people think should happen next. We find that people tend to explain unfair offers in the game in terms of bad actors (unless incentivized), which is related to punishing individuals over changing the game itself. When people are told that the unfairness they witnessed was the result of a bad actor, they prefer to punish that actor; when they are told that the same unfair behavior is the result of unfair rules, they prefer to change the rules. Our understanding of the mechanisms of inequality affects how we want to sanction it.

My summary:

The article discusses how people tend to explain wrongdoing as the result of either a bad actor or a bad system. In essence, this is a human decision-making process. The authors conducted five studies to test whether the way people understand unfairness affects how they sanction it. They found that people are more likely to punish individuals for unfair behavior when they believe that the behavior is the result of a bad actor. However, they are more likely to try to change the system (or the rules) when they believe that the behavior is the result of a bad system.

The authors argue that these findings have important implications for ethics, morality, and values. They suggest that we need to be more aware of the way we explain unfairness, because our explanations can influence how we respond to it. How an individual frames the issue is key to identifying possible solutions, as well as potential biases. They also suggest that we need to be more critical of the systems we live in, because these systems can create unfairness.

The article raises a number of ethical, moral, and value-related questions. For example, what is the responsibility of individuals to challenge unfair systems? What is the role of government in addressing structural unfairness? And what are the limits of individual and collective action in addressing unfairness?

The article does not provide easy answers to these questions. However, it does provide a valuable framework for thinking about unfairness and how we can respond to it.

Monday, March 18, 2019

The college admissions scandal is a morality play

Elaine Ayala
San Antonio Express-News
Originally posted March 16, 2019

The college admission cheating scandal that raced through social media and dominated news cycles this week wasn’t exactly shocking: Wealthy parents rigged the system for their underachieving children.

It’s an ancient morality play set at elite universities with an unseemly cast of characters: spoiled teens and shameless parents; corrupt test proctors and paid test takers; college sports officials willing to be bribed; and a ringleader who ultimately turned on all of them.

William “Rick” Singer, who went to college in San Antonio, wore a wire to cooperate with FBI investigators.

(cut)

Yet even though they were arrested, the 50 people involved managed to secure the best possible outcome under the circumstances. Unlike many people caught shoplifting or possessing small amounts of marijuana, who lack the lawyers and resources to navigate the legal system, the accused parents and coaches quickly posted bond and were promptly released without spending much time in custody.

The info is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.