Cossette-Lefebvre, H., Maclure, J.
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00233-w
Abstract
The use of predictive machine learning algorithms is increasingly common to guide or even make decisions in both public and private settings. Some tout their use as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral and objective, and can be evaluated in ways no human decision can. Outsourcing a decision process (fully or partly) to an algorithm should, in principle, allow organizations to clearly define the parameters of the decision and to remove human biases. Yet, in practice, the use of algorithms can still produce wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can replicate human biases, their automaticity and predictive design can lead them to rely on wrongful generalizations, and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms, and their significance for discrimination, are too often overlooked in the contemporary literature. Though not all of these problems are insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and a decision reached using an algorithm should always be explainable and justifiable.
From the Conclusion
ML algorithms are potentially harmful: they can compound and reproduce social inequalities, and they rely on generalizations that disregard individual autonomy. Yet these potential problems do not entail that ML algorithms should never be used, at least from the perspective of anti-discrimination law. Rather, they lead to the conclusion that their use should be carefully and strictly regulated. Before identifying the principles that could guide such regulation, however, it is important to highlight two things. First, the context and potential impact associated with the use of a particular algorithm should be considered. Anti-discrimination laws do not aim to protect against every instance of differential treatment or impact, but rather to protect and balance the rights of the parties involved when those rights conflict [18, 19]. Hence, using ML algorithms in situations where no rights are threatened would presumably be either acceptable or, at least, beyond the purview of anti-discrimination regulations.
Second, one also needs to take into account how the algorithm is used and what place it occupies in the decision-making process. More precisely, it is clear from what was argued above that fully automated decisions, where an ML algorithm decides with minimal or no human intervention in ethically high-stakes situations (i.e., where individual rights are potentially threatened), are presumably illegitimate because they fail to treat individuals as separate and unique moral agents. To avoid objectionable generalizations and to respect our democratic obligations towards each other, a human agent should make the final decision, in a meaningful way that goes beyond rubber-stamping, or should at least be in a position to explain and justify the decision if a person affected by it asks for a revision. While this does not necessarily preclude the use of ML algorithms, it suggests that their use should be embedded in a larger, human-centric, democratic process.