Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Procedural Justice.

Saturday, March 27, 2021

Veil-of-ignorance reasoning mitigates self-serving bias in resource allocation during the COVID-19 crisis

Huang, K. et al.
Judgment and Decision Making
Vol. 16, No. 1, pp. 1-19.

Abstract

The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save. Furthermore, the crisis has foregrounded the influence of self-serving bias in debates on how to allocate scarce resources. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by reducing decision-makers’ use of potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they were younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young, thereby eliminating self-serving bias. These findings provide guidance on how to remove self-serving biases for healthcare policymakers and frontline personnel charged with allocating scarce medical resources during times of crisis.
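
To make the design concrete, here is a minimal sketch in Python of the kind of condition-level comparison such studies report: the share of participants endorsing the save-younger policy under a veil-of-ignorance prompt versus a control prompt. All counts below are hypothetical placeholders, not the authors' data.

```python
# A minimal sketch of a veil vs. control condition comparison.
# The counts are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

# rows: condition (veil, control); columns: (approve, disapprove)
table = [
    [260, 140],  # veil-of-ignorance condition (hypothetical)
    [180, 220],  # control condition (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(table)
approve_veil = table[0][0] / sum(table[0])
approve_control = table[1][0] / sum(table[1])

print(f"approval under veil:    {approve_veil:.0%}")
print(f"approval under control: {approve_control:.0%}")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```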

Friday, October 16, 2020

When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions

Newman, D., Fast, N. and Harmon, D.
Organizational Behavior and 
Human Decision Processes
Volume 160, September 2020, Pages 149-167

Abstract

The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias in decision making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization is not being taken into account. We argue that this can undermine their beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1654) confirm this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.
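
As a rough illustration of the mediation logic the abstract describes (decision-maker type → perceived reductionism → perceived fairness), the sketch below simulates data and bootstraps the indirect effect. It is not the authors' analysis code; all variable names and effect sizes are invented for illustration.

```python
# Sketch of a simple mediation test: algorithm (vs. human) -> perceived
# reductionism -> perceived fairness. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 800
algorithm = rng.integers(0, 2, n)                          # condition dummy
reductionism = 3 + 0.8 * algorithm + rng.normal(0, 1, n)   # mediator
fairness = 5 - 0.6 * reductionism + rng.normal(0, 1, n)    # outcome

# Path a: condition -> mediator
a = sm.OLS(reductionism, sm.add_constant(algorithm)).fit().params[1]
# Path b: mediator -> outcome, controlling for condition
X = sm.add_constant(np.column_stack([algorithm, reductionism]))
b = sm.OLS(fairness, X).fit().params[2]

# Bootstrap the indirect effect a*b
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_i = sm.OLS(reductionism[idx], sm.add_constant(algorithm[idx])).fit().params[1]
    Xi = sm.add_constant(np.column_stack([algorithm[idx], reductionism[idx]]))
    b_i = sm.OLS(fairness[idx], Xi).fit().params[2]
    boot.append(a_i * b_i)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {a * b:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```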

Highlights

• Algorithmic decisions are perceived as less fair than identical decisions by humans.

• Perceptions of reductionism mediate the adverse effect of algorithms on fairness.

• Algorithmic reductionism comes in two forms: quantification and decontextualization.

• Employees voice lower organizational commitment when evaluated by algorithms.

• Perceptions of unfairness mediate the adverse effect of algorithms on commitment.

Conclusion

Perceived unfairness notwithstanding, algorithms continue to gain influence in human affairs, not only in organizational settings but throughout our social and personal lives. How this influence plays out against our sense of fairness remains to be seen but should undoubtedly be of central interest to justice scholars in the years ahead. Will the compilers of analytics and writers of algorithms adapt their methods to comport with intuitive notions of morality? Or will our understanding of fairness adjust to the changing times, becoming inured to dehumanization in an ever more impersonal world? Questions such as these will be asked more and more frequently as technology reshapes modes of interaction and organization that have held sway for generations. We have sought to contribute answers to these questions, and we hope that our work will encourage others to continue studying these and related topics.

Sunday, May 17, 2020

Veil-of-Ignorance Reasoning Favors Allocating Resources to Younger Patients During the COVID-19 Crisis

Huang, K., Bernhard, R., et al.
(2020, April 22).
https://doi.org/10.31234/osf.io/npm4v

Abstract

The COVID-19 crisis has forced healthcare professionals to make tragic decisions concerning which patients to save. A utilitarian principle favors allocating scarce resources such as ventilators toward younger patients, as this is expected to save more years of life. Some view this as ageist, instead favoring age-neutral principles, such as “first come, first served”. Which approach is fairer? Veil-of-ignorance reasoning is a decision procedure designed to produce fair outcomes. Here we apply veil-of-ignorance reasoning to the COVID-19 ventilator dilemma, asking participants which policy they would prefer if they did not know whether they were younger or older. Two studies (pre-registered; online samples; Study 1, N=414; Study 2 replication, N=1,276) show that veil-of-ignorance reasoning shifts preferences toward saving younger patients. The effect on older participants is dramatic, reversing their opposition toward favoring the young. These findings provide concrete guidance to healthcare policymakers and frontline personnel charged with allocating scarce medical resources during times of crisis.

From the General Discussion

In two pre-registered studies, we show that veil-of-ignorance reasoning favors allocating scarce medical resources to younger patients during the COVID-19 crisis. A strong majority of participants who engaged in veil-of-ignorance reasoning concluded that a policy of maximizing the number of life-years saved is what they would want for themselves if they did not know whom they were going to be. Importantly, engaging in veil-of-ignorance reasoning subsequently produced increased moral approval of this utilitarian policy. These findings, though predicted based on prior research (Huang, Greene, & Bazerman, 2019), make three new contributions. First, they apply directly to an ongoing crisis in which competing claims to fairness must be resolved. While the ventilator shortage in the developed world has been less acute than many feared, it may reemerge in greater force as the COVID-19 crisis spreads to the developing world (Woodyatt, 2020). Second, the dilemma considered here differs from those considered previously because it concerns maximizing the number of life-years saved, rather than the number of lives saved. Finally, the results show the power of the veil to eliminate self-serving bias. In the control condition, only a minority of older participants (33%) favored prioritizing younger patients. But after engaging in veil-of-ignorance reasoning, most older participants (62%) favored doing so, just like younger participants.
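
To see how sharp that reversal among older participants is, the sketch below runs a two-proportion z-test on the reported 33% versus 62% figures. The older-subgroup sizes are hypothetical assumptions, since the excerpt does not state them.

```python
# Two-proportion z-test on the reported shift among older participants.
# Subgroup sizes are hypothetical; the excerpt does not report them.
from statsmodels.stats.proportion import proportions_ztest

n_veil, n_control = 200, 200  # hypothetical older-subgroup sizes
favor = [round(0.62 * n_veil), round(0.33 * n_control)]  # counts favoring the young
nobs = [n_veil, n_control]

z, p = proportions_ztest(favor, nobs)
print(f"62% vs 33%: z = {z:.2f}, p = {p:.4f}")
```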

The research is here.

Monday, November 4, 2019

Ethical Algorithms: Promise, Pitfalls and a Path Forward

Jay Van Bavel, Tessa West, Enrico Bertini, and Julia Stoyanovich
PsyArXiv Preprints
Originally posted October 21, 2019

Abstract

Fairness in machine-assisted decision making is critical to consider, since a lack of fairness can harm individuals or groups, erode trust in institutions and systems, and reinforce structural discrimination. To avoid making ethical mistakes, or amplifying them, it is important to ensure that the algorithms we develop are fair and promote trust. We argue that the marriage of techniques from behavioral science and computer science is essential to develop algorithms that make ethical decisions and ensure the welfare of society. Specifically, we focus on the role of procedural justice, moral cognition, and social identity in promoting trust in algorithms and offer a road map for future research on the topic.

--

The increasing role of machine learning and algorithms in decision making has revolutionized areas ranging from the media to medicine to education to industry. As the recent One Hundred Year Study on Artificial Intelligence (Stone et al., 2016) reported: “AI technologies already pervade our lives. As they become a central force in society, the field is shifting from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.” Therefore, the effective development and widespread adoption of algorithms will hinge not only on the sophistication of engineers and computer scientists, but also on the expertise of behavioral scientists.

These algorithms hold enormous promise for solving complex problems, increasing efficiency, reducing bias, and even making decision-making transparent. However, the last few decades of behavioral science have established that humans hold a number of biases and shortcomings that impact virtually every sphere of human life (Banaji & Greenwald, 2013), and discrimination can become entrenched, amplified, or even obscured when decisions are implemented by algorithms (Kleinberg, Ludwig, Mullainathan, & Sunstein, 2018). While there has been a growing awareness that programmers and organizations should pay greater attention to discrimination and other ethical considerations (Dignum, 2018), very little behavioral research has directly examined these issues. In this paper, we describe how behavioral science will play a critical role in the development of ethical algorithms and outline a roadmap for behavioral scientists and computer scientists to ensure that these algorithms are as ethical as possible.
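
As one concrete example of the kind of check such a roadmap might call for, the sketch below computes a demographic-parity gap, i.e., the difference in positive-decision rates across groups, on simulated model scores. The data, decision threshold, and resulting gap are illustrative only.

```python
# Sketch of a demographic-parity audit: compare a model's positive-decision
# rates across two groups. Scores and threshold are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)             # 0/1 group membership
score = rng.normal(0.5 + 0.1 * group, 0.2)   # model scores, with a built-in skew
decision = score > 0.6                       # positive decision (e.g., shortlist)

rate_0 = decision[group == 0].mean()
rate_1 = decision[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2%}")
print(f"positive rate, group 1: {rate_1:.2%}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2%}")
```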

The paper is here.