Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Friday, January 28, 2022

The AI ethicist’s dilemma: fighting Big Tech by supporting Big Tech

Sætra, H.S., Coeckelbergh, M. & Danaher, J. 
AI Ethics (2021). 


Assume that a researcher uncovers a major problem with how social media are currently used. What sort of challenges arise when they must subsequently decide whether or not to use social media to create awareness of this problem? This situation routinely occurs as ethicists navigate choices about how to effect change and potentially remedy the problems they uncover. In this article, we emphasize challenges related to new technologies and what is often referred to as 'Big Tech'. We present what we call the AI ethicist's dilemma, which emerges when an AI ethicist must consider how their own success in communicating an identified problem carries a high risk of decreasing the chances of successfully remedying it. We examine how the ethicist can resolve the dilemma and arrive at ethically sound paths of action by combining three ethical theories: virtue ethics, deontological ethics, and consequentialist ethics. The article concludes that attempting to change the world of Big Tech using only the technologies and tools they provide will at times prove counterproductive, and that political and other more disruptive avenues of action should also be seriously considered by ethicists who want to effect long-term change. Both strategies have advantages and disadvantages, and a combination might be desirable to achieve those advantages and mitigate some of the disadvantages discussed.

From the Discussion

The ethicist's dilemma arises as soon as the desire to effect change seems most easily satisfied using the very systems that need changing. This article shows that the dilemma involves either strengthening the system by attempting to harness its powers, or potentially achieving nothing by relinquishing the means of using technology to spread one's message. An environmental ethicist who is sincerely concerned about the effects of climate change could start working as an ethics officer for Big Oil, but doing so may 'trap' them in both a logic and an incentive structure that make real change hard to achieve. An AI ethicist contemplating the dangers of new technologies faces a similar problem when, for example, they are offered a lucrative job at a Big Tech company, with an uncertain future outside the mainstream as the only alternative.

Turning to the practicalities of change, some rightly argue that political power is dangerous [61]. Furthermore, they might argue that private initiative and innovation are the key to the good life and human welfare. However, the dangers of technology and unbridled innovation are also real, at least according to the ethicists. And if the ethicists are serious about these dangers, it may be necessary to emphasise the political domain and its power to disrupt the technological system. The dangers of private power must be bridled by the power of government, and this is in a sense a liberal argument in favour of more active use of government power [1]. Private companies generate a range of problems, and when these are understood as resulting from an overly free market, government intervention for the sake of correcting market failure is normally acceptable to those on both the left and the right of politics.