Orlando Torres
www.towardsdatascience.com
Originally posted April 4, 2018
Here is an excerpt:
2. Transparency of Algorithms
Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized is the fact that some algorithms are opaque even to their creators.
Deep learning is a rapidly growing machine learning technique that produces very accurate predictions, but its models cannot really explain why they made any particular prediction.
For example, some algorithms have been used to fire teachers, without anyone being able to explain to them why the model indicated they should be fired.
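To make the opacity concrete, here is a minimal sketch in Python (using scikit-learn and entirely synthetic data; it is not the actual teacher-evaluation model, just an assumption-laden illustration): a small neural network can classify well, yet its internals offer no human-readable reason for any single decision.

```python
# Minimal sketch of algorithmic opacity (synthetic data, hypothetical
# scenario -- NOT the real teacher-evaluation model).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Fabricated dataset standing in for performance records.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                      random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))               # a confident yes/no decision...
print(sum(w.size for w in model.coefs_))  # ...justified only by thousands
                                          # of raw, uninterpretable weights
```

The model can be accurate, but the only "explanation" it carries is a pile of weight matrices, which is precisely the problem.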
How can we balance the need for more accurate algorithms with the need for transparency towards the people affected by them? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may require? If it’s true that humans are largely unaware of their own motives for acting, should we demand that machines explain themselves better than we actually can?
3. Supremacy of Algorithms
A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?
For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was heavily biased against black defendants. To produce a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be admissible as traditional evidence.
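The kind of disparity ProPublica measured can be illustrated with a toy audit (the numbers below are synthetic, not the real COMPAS data): among defendants who did not reoffend, compare how often the model labeled each group high risk, i.e., the false positive rate per group.

```python
# Toy illustration of a group-wise false-positive-rate audit
# (synthetic numbers, NOT the real COMPAS data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # 0 / 1: two demographic groups
reoffended = rng.random(n) < 0.35      # ground-truth outcome

# Hypothetical risk model whose errors skew against group 1.
high_risk = rng.random(n) < np.where(group == 1, 0.55, 0.30)

for g in (0, 1):
    mask = (group == g) & ~reoffended  # people who did NOT reoffend
    fpr = high_risk[mask].mean()       # false positive rate for this group
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

A real audit would use actual outcomes rather than simulated ones, but this false-positive-rate comparison is the core of ProPublica’s analysis: non-reoffenders in one group were flagged as high risk far more often than in the other.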