Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, January 26, 2023

The AI Ethicist's Dirty Hands Problem

H. S. Sætra, M. Coeckelbergh, & J. Danaher
Communications of the ACM, January 2023, Vol. 66, No. 1, pp. 39-41

Assume an AI ethicist uncovers objectionable effects related to the increased use of AI. What should they do about it? One option is to seek alliances with Big Tech in order to "borrow" its power to change things for the better. Another option is to pursue opportunities for change that actively avoid reliance on Big Tech.

The choice between these two strategies gives rise to an ethical dilemma. For example, if the ethicist's research emphasized the grave and unfortunate consequences of Twitter and Facebook, should they promote this research by building communities on said networks? Should they take funding from Big Tech to promote the reform of Big Tech? Should they seek opportunities at Google or OpenAI if they are deeply concerned about the negative implications of large-scale language models?

The AI ethicist’s dilemma emerges when an ethicist must consider how their success in communicating an identified challenge is associated with a high risk of decreasing the chances of successfully addressing the challenge. This dilemma occurs in situations in which one’s goals are seemingly best achieved by supporting that which one wishes to correct and/or practicing the opposite of that which one preaches.

(cut)

The Need for More than AI Ethics

Our analysis of the ethicist’s dilemma shows why close ties with Big Tech can be detrimental for the ethicist seeking remedies for AI-related problems. It is important for ethicists, and computer scientists in general, to be aware of their links to the sources of ethical challenges related to AI. One useful exercise would be to carefully examine what could happen if they attempted to challenge the actors with whom they are aligned. Such actions could include attempts to report unfortunate implications of the company’s activities internally, but also publicly, as Gebru did. Would such actions be met with active resistance, with inaction, or even with straightforward sanctions? This exercise will reveal whether the ethicist feels free to openly and honestly express concerns about the technology with which they work. The exercise could be valuable, but as we have argued, these individuals are not necessarily positioned to achieve fundamental change in this system.

In response, we suggest that the role of government is key to balancing the power the tech companies have through employment, funding, and their control of modern digital infrastructure. Some will rightly argue that political power is also dangerous. But the dangers of technology and unbridled innovation are real as well, and private corporations are central sources of these dangers. We therefore argue that private power must be effectively bridled by the power of government. This is not a new argument, and it is in fact widely accepted.