Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, April 13, 2018

Computer Says "No": Part 1 - Algorithm Bias

Jasmine Leonard
www.thersa.org
Originally published March 14, 2018

From the courtroom to the workplace, important decisions are increasingly being made by so-called "automated decision systems". Critics claim that these decisions are less scrutable than those made by humans alone, but is this really the case? In the first of a three-part series, Jasmine Leonard considers the issue of algorithmic bias and how it might be avoided.

Recent advances in AI have a lot of people worried about the impact of automation. One automatable task that’s received a lot of attention of late is decision-making. So-called “automated decision systems” are already being used to decide whether or not individuals are given jobs, loans or even bail. But there’s a lack of understanding about how these systems work, and as a result, a lot of unwarranted concern. In this three-part series I attempt to allay three of the most widely discussed fears surrounding automated decision systems: that they’re prone to bias, that they’re impossible to explain, and that they diminish accountability.

Before we begin, it’s important to be clear about just what we’re talking about, as the term “automated decision” is incredibly misleading. It suggests that a computer is making a decision, when in reality this is rarely the case. What actually happens in most examples of “automated decisions” is that a human makes a decision based on information generated by a computer. In the case of AI systems, the information generated is typically a prediction about the likelihood of something happening; for instance, the likelihood that a defendant will reoffend, or the likelihood that an individual will default on a loan. A human then uses this prediction to decide whether or not to grant the defendant bail or give the individual a credit card. Described like this, it seems somewhat absurd to say that these systems are making decisions. I therefore suggest that we call them what they actually are: prediction engines.
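
To make this distinction concrete, here is a minimal sketch in Python of how such a system typically divides the labour. Everything in it is illustrative: the function names, features and thresholds are hypothetical inventions for this post, not any real lender’s model. The point is simply that the software produces a likelihood, while a person makes the actual decision.

# A minimal sketch of the prediction-engine / human-decision split
# described above. All names, features and thresholds are hypothetical.

def predict_default_risk(applicant: dict) -> float:
    """The 'prediction engine': it outputs a likelihood, never a decision.
    A real engine would be a trained model; this stand-in scores two
    features purely for illustration."""
    score = 0.5
    if applicant.get("missed_payments", 0) > 2:
        score += 0.3  # a history of missed payments raises estimated risk
    if applicant.get("income", 0) > 50_000:
        score -= 0.2  # higher income lowers estimated risk
    return min(max(score, 0.0), 1.0)  # clamp to a valid probability

def decide_application(applicant: dict) -> str:
    """The human step: a person weighs the prediction and decides.
    Here that judgment is caricatured as a single threshold."""
    risk = predict_default_risk(applicant)
    if risk > 0.7:
        return "refer to a human underwriter"  # the engine never rejects anyone
    return "approve"

print(decide_application({"missed_payments": 3, "income": 42_000}))
# -> refer to a human underwriter

Notice that the only thing the “engine” returns is a number between 0 and 1; whether that number leads to approval, rejection or further review remains a human choice.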