Lindsey Jarrett
Center for Practical Bioethics
For several decades now, we have been having conversations about the impact that technology, from the voyage into space to the devices in our pockets, will have on society. The force with which technology alters our lives at times feels abrupt. It has us feeling excited one day and fearful the next.
If your experiences in life do not depend on the use of technology, especially if your work still allows you to disconnect from the virtual world, it may feel like technology is moving at a manageable pace. However, many of us require some form of technology to work, to communicate with others, to build relationships, and to share ideas with the world. Increasingly, we also need technology to help us make decisions. These decisions vary in complexity, from auto-correcting our messages to matching with someone on a dating app, and it is becoming ever harder to rely on anything but technology to make them.
Is the use of technology for decision making a problem in and of itself, given how entrenched it has become across our lives, or are there particular components and contexts that need attention? Your answer may depend on what you want to use it for, how you want others to use it to know you, and why the technology is needed over other tools. These considerations are already widely discussed in criminal justice, finance, security, and hiring, and conversations are developing in other sectors as issues of inequity, injustice, and power differentials begin to emerge.
Issues emerging in the healthcare sector are of particular interest to many, especially since the coronavirus pandemic. As these conversations unfold, people have started to unpack the various dilemmas that sit at the intersection of technology and healthcare. Scholars have engaged in theoretical debate to examine the ethical implications, researchers have worked to evaluate the decision-making processes of the data scientists who build clinical algorithms, and healthcare executives have tried to stay ahead of the regulation looming over their hospital systems.
However, recommendations tend to focus exclusively on those involved in algorithm creation and offer little support to other stakeholders across the healthcare industry. While this guidance is being put into practice across the data science teams that build algorithms, especially those building machine learning-based tools, the Ethical AI Initiative sees opportunities to examine the decisions made about these tools before they reach a data scientist's queue and after they are ready for production. These opportunities are where systemic change can occur; without that level of change, we will continue to build products to put on the shelf, and more products to fill the shelf when those fail.
Healthcare is not unique in facing these types of challenges. Below, I outline a few recommendations for how an adapted, augmented system of healthcare technology can operate as the industry prepares for more forceful regulation of machine learning-based tools in healthcare practice.