Steven Tiell
Harvard Business Review
Originally posted 15 Nov 19
Here is an excerpt:
Establishing this level of ethical governance is critical to helping executives mitigate downside risks, because addressing AI bias can be extremely complex. Data scientists and software engineers have biases just like everyone else, and when those biases creep into algorithms or the data sets used to train them, however unintentionally, the people subjected to the AI can be left feeling they have been treated unfairly. But eliminating bias to make fair decisions is not straightforward.
While many colloquial definitions of “bias” involve “fairness,” there is an important distinction between the two. Bias is a property of statistical models, while fairness is a judgment against the values of a community, and shared understandings of fairness vary across cultures. The most critical thing to understand is their relationship: intuition suggests that fairness requires a lack of bias, but in practice data scientists must often introduce bias in order to achieve fairness.
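For reference, the statistical sense of “bias” invoked here has a precise definition; the formula below is standard background, not something from the excerpt. The bias of an estimator \hat{\theta} of a true quantity \theta is

    \mathrm{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta

that is, the gap between the estimator’s expected value and the truth. A model can be unbiased in this sense and still produce outcomes a community judges unfair, which is exactly the distinction the paragraph draws.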
Consider a model built to streamline hiring or promotions. If the algorithm learns from historic data in which women have been under-represented in the workforce, myriad biases against women will emerge in the model. To correct for this, data scientists might deliberately introduce bias: balancing gender representation in the historic data, creating synthetic data to fill in gaps, or correcting for balanced treatment (fairness) in the application of data-informed decisions (a minimal sketch of the first approach follows below). In many cases, there is no way to be both unbiased and fair.
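To make the rebalancing idea concrete, here is a minimal Python sketch of two common ways to balance gender representation in training data: inverse-frequency sample weights and oversampling the under-represented group. The toy records, column names ("gender", "hired"), and the weighting heuristic are illustrative assumptions, not details from the article.

import pandas as pd

# Toy historic hiring data in which women are under-represented (assumption).
df = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "M", "M", "F", "F"],
    "hired":  [1,   0,   1,   1,   0,   1,   0,   1],
})

# Option 1: inverse-frequency weights. Each group's weights sum to the same
# total, so a model trained with them no longer treats the minority group's
# outcomes as rare exceptions.
group_counts = df["gender"].value_counts()
n_groups = len(group_counts)
df["sample_weight"] = df["gender"].map(
    lambda g: len(df) / (n_groups * group_counts[g])
)

# Option 2: oversample the minority group (with replacement) until every
# group appears as often as the largest one.
target = group_counts.max()
balanced = pd.concat(
    [grp.sample(target, replace=True, random_state=0)
     for _, grp in df.groupby("gender")],
    ignore_index=True,
)

print(df)                                  # weights: M rows 0.67, F rows 2.0
print(balanced["gender"].value_counts())   # M and F now equally represented

Either approach deliberately skews the training distribution away from the historic record, which is precisely the trade-off the excerpt describes: introducing statistical bias in pursuit of fairness.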
An Ethics Committee can not only help maintain an organization’s values-based intentions but also increase transparency into how the organization uses AI. Even when it is addressed, AI bias can still be maddening and frustrating for end users, and most companies deploying AI today subject people to it without giving them much agency in the process. Consider the experience of using a mapping app: when travelers are simply told which route to take, the experience is stripped of agency, but when users are offered a set of alternate routes, they feel more confident in the selected route because they enjoyed more agency, or self-determination, in choosing it. Maximizing agency when AI is in use is another safeguard that strong governance can help ensure.
The info is here.