Tom Douglas
theconversation.com
Originally posted January 21, 2019
Here is an excerpt:
What’s fair?
AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair – fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?
There is usually a tension between procedural fairness and accuracy – attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm’s overall error rate. Take the COMPAS algorithm for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tend to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend or not to re-offend.
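To make this tension concrete, here is a minimal Python sketch on synthetic data (a hypothetical two-group population and risk score, not the actual COMPAS model or data). When base rates of reoffending differ between groups, a single accuracy-maximising threshold yields unequal false positive rates, and forcing those rates to match by shifting one group's threshold lowers overall accuracy. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data (not real defendants): two groups with different base
# rates of reoffending, scored by a well-calibrated risk model.
n = 50_000
group = rng.integers(0, 2, n)                          # 0 and 1 label the groups
logit = rng.normal(0.0, 1.5, n) + 0.8 * (group == 0)   # group 0 skews higher risk
truth = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # who actually reoffends
score = logit                                          # the model's risk score

def fpr(pred, mask):
    """Share of a subgroup's true negatives wrongly flagged as high risk."""
    neg = mask & ~truth
    return pred[neg].mean()

# A single accuracy-optimal threshold (flag when predicted probability > 0.5)
# produces unequal false positive rates because the base rates differ.
shared = score > 0.0
print(f"shared threshold:     acc={np.mean(shared == truth):.3f}  "
      f"FPR group 0={fpr(shared, group == 0):.3f}  "
      f"FPR group 1={fpr(shared, group == 1):.3f}")

# Equalise the rates procedurally: raise group 0's threshold until its FPR
# matches group 1's, then score the two groups with different cut-offs.
target = fpr(shared, group == 1)
neg0_scores = score[(group == 0) & ~truth]
thr0 = np.quantile(neg0_scores, 1.0 - target)
equalised = np.where(group == 0, score > thr0, shared)
print(f"equal-FPR thresholds: acc={np.mean(equalised == truth):.3f}  "
      f"FPR group 0={fpr(equalised, group == 0):.3f}  "
      f"FPR group 1={fpr(equalised, group == 1):.3f}")
```

Under this construction the second printout shows matched false positive rates at the cost of a drop in overall accuracy, since group 0's threshold has been moved away from the accuracy-optimal cut-off – the trade-off the excerpt describes.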
We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have. For example, instead of trying to ensure that crime prediction errors affect different racial groups equally – a goal that may in any case be unattainable – we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.
The full article is at theconversation.com.