Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, February 20, 2023

Definition drives design: Disability models and mechanisms of bias in AI technologies

Newman-Griffis, D., et al. (2023).
First Monday, 28(1).
https://doi.org/10.5210/fm.v28i1.12903

Abstract

The increasing deployment of artificial intelligence (AI) tools to inform decision-making across diverse areas, including healthcare, employment, social benefits, and government policy, presents a serious risk for disabled people, who have been shown to face bias in AI implementations. While there has been significant work on analysing and mitigating algorithmic bias, the broader mechanisms by which bias emerges in AI applications are not well understood, hampering efforts to address bias where it begins. In this article, we illustrate how bias in AI-assisted decision-making can arise from a range of specific design decisions, each of which may seem self-contained and non-biasing when considered separately. These design decisions include basic problem formulation, the data chosen for analysis, the use to which the AI technology is put, and operational design elements in addition to the core algorithmic design. We draw on three historical models of disability common to different decision-making settings to demonstrate how differences in the definition of disability can lead to highly distinct decisions on each of these aspects of design, leading in turn to AI technologies with a variety of biases and downstream effects. We further show that the potential harms arising from inappropriate definitions of disability in fundamental design stages are amplified by a lack of transparency and disabled participation throughout the AI design process. Our analysis provides a framework for critically examining AI technologies in decision-making contexts and for guiding the development of a design praxis for disability-related AI analytics. We put forth this article to provide key questions that facilitate disability-led design and participatory development, toward fairer and more equitable AI technologies in disability-related contexts.
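To make this mechanism concrete, consider a minimal sketch (my own illustration, not drawn from the paper) of how the choice of disability definition fixes the problem formulation and the data analysed before any algorithm is trained. All column names, records, and framings below are hypothetical.

```python
# Hypothetical sketch: the same high-level goal ("flag people who need
# support") yields different analytic pipelines depending on which model
# of disability drives the problem formulation. The data and columns are
# illustrative assumptions, not taken from the paper.

import pandas as pd

records = pd.DataFrame({
    "diagnosis_code":    ["F84.0", "M54.5", None],
    "functional_score":  [42, 67, 55],
    "workplace_access":  [0, 1, 0],   # 1 = accommodations in place
    "support_requested": [1, 0, 1],
})

# Medical-model framing: disability is located in the individual, so the
# features and target are clinical attributes of the person.
medical_features = records[["diagnosis_code", "functional_score"]]
medical_target = records["diagnosis_code"].notna().astype(int)

# Social-model framing: disability arises from environmental barriers, so
# the analysis asks about the person's context rather than their body.
social_features = records[["workplace_access"]]
social_target = records["support_requested"]

# The two formulations analyse different information and predict different
# outcomes -- any downstream model inherits that definitional choice.
print(medical_target.tolist())  # [1, 1, 0]
print(social_target.tolist())   # [1, 0, 1]
```

Note that neither pipeline involves any modelling yet: the divergence is baked in at problem definition, exactly the stage the authors argue deserves scrutiny.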

Conclusion

The proliferation of artificial intelligence (AI) technologies as behind-the-scenes tools to support decision-making processes presents significant risks of harm for disabled people. The unspoken assumptions and unquestioned preconceptions that inform AI technology development can serve as mechanisms of bias, building the basic problem formulation that guides a technology on reductive and harmful conceptualisations of disability. As we have shown, even when developing AI technologies to address the same overall goal, different definitions of disability can yield highly distinct analytic technologies that reflect contrasting, frequently incompatible decisions about what information to analyse, what analytic process to use, and what the end product of analysis will be.

Here we have presented an initial framework to support critical examination of specific design elements in the formulation of AI technologies for data analytics, as a tool for examining the definitions of disability used in their design and the resulting impacts on the technology. We drew on three important historical models of disability that form common foundations for policy, practice, and personal experience today (the medical, social, and relational models) and on two use cases, in healthcare and government benefits, to illustrate how different ways of conceiving of disability can yield technologies that contrast and conflict with one another, creating distinct risks of harm.
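As a companion to that framework, here is a hypothetical sketch (my own construction, not the paper's framework verbatim) of how the design elements named in the abstract could be encoded as an audit checklist for a proposed AI technology. The field names and the helper function are illustrative assumptions.

```python
# Hypothetical audit checklist built from the design elements the authors
# name: problem formulation, data selection, intended use, operational
# design, and the core algorithm, plus the transparency and participation
# concerns raised in the abstract. Field names are illustrative.

from dataclasses import dataclass, fields

@dataclass
class DisabilityAIDesignAudit:
    disability_definition: str      # which model of disability (medical, social, relational)?
    problem_formulation: str        # what question is the technology built to answer?
    data_analysed: str              # what information is collected, and why?
    intended_use: str               # how will the output inform decisions?
    operational_design: str         # who operates it, with what oversight?
    algorithmic_design: str         # what analytic process is used?
    disabled_participation: bool    # were disabled people involved throughout design?
    design_transparency: bool       # are the above choices documented and public?

def unresolved_questions(audit: DisabilityAIDesignAudit) -> list[str]:
    """Return design elements left unanswered or answered in the negative."""
    return [f.name for f in fields(audit)
            if getattr(audit, f.name) in ("", None, False)]

audit = DisabilityAIDesignAudit(
    disability_definition="social model",
    problem_formulation="estimate workplace barriers to employment",
    data_analysed="",            # not yet decided
    intended_use="prioritise accommodation funding",
    operational_design="",       # not yet decided
    algorithmic_design="logistic regression over barrier indicators",
    disabled_participation=False,
    design_transparency=True,
)
print(unresolved_questions(audit))
# ['data_analysed', 'operational_design', 'disabled_participation']
```

A checklist like this cannot substitute for disability-led, participatory design, but it makes the definitional choices explicit rather than leaving them as unspoken assumptions.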