Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, October 16, 2020

When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions

Newman, D., Fast, N., & Harmon, D.
Organizational Behavior and Human Decision Processes
Volume 160, September 2020, Pages 149-167

Abstract

The perceived fairness of decision-making procedures is a key concern for organizations, particularly when evaluating employees and determining personnel outcomes. Algorithms have created opportunities for increasing fairness by overcoming biases commonly displayed by human decision makers. However, while HR algorithms may remove human bias in decision making, we argue that those being evaluated may perceive the process as reductionistic, leading them to think that certain qualitative information or contextualization is not being taken into account. We argue that this can undermine their beliefs about the procedural fairness of using HR algorithms to evaluate performance by promoting the assumption that decisions made by algorithms are based on less accurate information than identical decisions made by humans. Results from four laboratory experiments (N = 798) and a large-scale randomized experiment in an organizational setting (N = 1654) confirm this hypothesis. Theoretical and practical implications for organizations using algorithms and data analytics are discussed.

Highlights

• Algorithmic decisions are perceived as less fair than identical decisions by humans.

• Perceptions of reductionism mediate the adverse effect of algorithms on fairness.

• Algorithmic reductionism comes in two forms: quantification and decontextualization.

• Employees voice lower organizational commitment when evaluated by algorithms.

• Perceptions of unfairness mediate the adverse effect of algorithms on commitment.

Conclusion

Perceived unfairness notwithstanding, algorithms continue to gain increasing influence in human affairs, not only in organizational settings but throughout our social and personal lives. How this influence plays out against our sense of fairness remains to be seen but should undoubtedly be of central interest to justice scholars in the years ahead. Will the compilers of analytics and writers of algorithms adapt their methods to comport with intuitive notions of morality? Or will our understanding of fairness adjust to the changing times, becoming inured to dehumanization in an ever more impersonal world? Questions such as these will be asked more and more frequently as technology reshapes modes of interaction and organization that have held sway for generations. We have sought to contribute answers to these questions, and we hope that our work will encourage others to continue studying these and related topics.