Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, December 18, 2024

Artificial Intelligence, Existential Risk and Equity: The Need for Multigenerational Bioethics

Law, K. F., Syropoulos, S., & Earp, B. D. (2024).
Journal of Medical Ethics, in press.

“Future people count. There could be a lot of them. We can make their lives better.”
––William MacAskill, What We Owe The Future

“[Longtermism is] quite possibly the most dangerous secular belief system in the world today.”
––Émile P. Torres, Against Longtermism

Philosophers, psychologists, politicians, and even some tech billionaires have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. Some believe it poses an existential risk (X-Risk) to our species, potentially causing our extinction or bringing about the collapse of human civilization as we know it.

The above quote from philosopher Will MacAskill captures the key tenets of “longtermism,” an ethical standpoint that places the onus on current generations to prevent AI-related—and other—X-Risks for the sake of people living in the future. Developing from an adjacent social movement commonly associated with utilitarian philosophy, “effective altruism,” longtermism has amassed a following of its own. Its supporters argue that preventing X-Risks is at least as morally significant as addressing current challenges like global poverty.

However, critics are concerned that such a distant-future focus will sideline efforts to tackle the many pressing moral issues facing humanity now. Indeed, according to “strong” longtermism, future needs arguably should take precedence over present ones. In essence, the claim is that there is greater expected utility to allocating available resources to prevent human extinction in the future than there is to focusing on present lives, since doing so stands to benefit the incalculably large number of people in later generations who will far outweigh existing populations. Taken to the extreme, this view suggests it would be morally permissible, or even required, to actively neglect, harm, or destroy large swathes of humanity as it exists today if this would benefit or enable the existence of a sufficiently large number of future—that is, hypothetical or potential—people, a conclusion that strikes many critics as dangerous and absurd.


Here are some thoughts:

This article explores the ethical implications of artificial intelligence (AI), particularly focusing on the concept of longtermism. Longtermism argues for prioritizing the well-being of future generations, potentially even at the expense of present-day needs, to prevent existential risks (X-Risks) such as the collapse of human civilization. The paper examines the arguments for and against longtermism, discussing the potential harms of prioritizing future populations over current ones and highlighting the importance of addressing present-day social justice issues. The authors propose a multigenerational bioethics approach, advocating for a balanced perspective that considers both future risks and present needs while incorporating diverse ethical frameworks. Ultimately, the article argues that the future of AI development should be guided by an inclusive and equitable framework that prioritizes the welfare of both present and future generations.