Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label longtermism. Show all posts

Thursday, November 9, 2023

Moral Future-Thinking: Does the Moral Circle Stand the Test of Time?

Law, K. F., Syropoulos, S., et al. (2023, August 10). PsyArXiv.

Abstract

The long-term collective welfare of humanity may lie in the hands of those who are presently living. But do people normatively include future generations in their moral circles? Across four studies conducted on Prolific Academic (total N = 823), we find evidence for a progressive decline in the subjective moral standing of future generations, demonstrating decreasing perceived moral obligation, moral concern, and prosocial intentions towards other people with increasing temporal distance. While participants generally tend to display present-oriented moral preferences, we also reveal individual differences that mitigate this tendency and predict pro-future outcomes, including individual variation in longtermism beliefs and the vividness of one's imagination. Our studies reconcile conflicting evidence in the extant literature on moral judgment and future-thinking, shed light on the role of temporal distance in moral circle expansion, and offer practical implications for better valuing and safeguarding the shared future of humanity.

Here's my summary:

This research investigates whether people normatively include future generations in their moral circles. Across four studies with a total of 823 participants, the authors found evidence for a progressive decline in the subjective moral standing of future generations as temporal distance increases, suggesting that people generally display present-oriented moral preferences.

However, the authors also found individual differences that mitigate this tendency and predict pro-future outcomes. These factors include individual variation in longtermism beliefs and the vividness of one's imagination. The authors also found that people are more likely to include future generations in their moral circles when they are primed to think about them or when they are asked to consider the long-term consequences of their actions.

The authors' findings reconcile conflicting evidence in the extant literature on moral judgment and future-thinking. They also shed light on the role of temporal distance in moral circle expansion and offer practical implications for better valuing and safeguarding the shared future of humanity.

Overall, the research paper provides evidence that people generally tend to prioritize the present over the future when making moral judgments. However, the authors also identify individual factors and contextual conditions that can promote moral future-thinking. These findings could be used to develop interventions that encourage people to consider the long-term consequences of their actions and to take steps to protect the well-being of future generations.

Sunday, September 25, 2022

Understanding "longtermism": Why this suddenly influential philosophy is so toxic

Émile P. Torres
Salon.com
Originally posted 20 AUG 22

Here is an excerpt:

But what is longtermism? I have tried to answer that in other articles, and will continue to do so in future ones. A brief description here will have to suffice: Longtermism is a quasi-religious worldview, influenced by transhumanism and utilitarian ethics, which asserts that there could be so many digital people living in vast computer simulations millions or billions of years in the future that one of our most important moral obligations today is to take actions that ensure as many of these digital people come into existence as possible.

In practical terms, that means we must do whatever it takes to survive long enough to colonize space, convert planets into giant computer simulations and create unfathomable numbers of simulated beings. How many simulated beings could there be? According to Nick Bostrom — the father of longtermism and director of the Future of Humanity Institute — there could be at least 10^58 digital people in the future, or a 1 followed by 58 zeros. Others have put forward similar estimates, although as Bostrom wrote in 2003, "what matters … is not the exact numbers but the fact that they are huge."

In this article, however, I don't want to focus on how bizarre and dangerous this ideology is and could be. Instead, I think it would be useful to take a look at the community out of which longtermism emerged, focusing on the ideas of several individuals who helped shape the worldview that MacAskill and others are now vigorously promoting. The most obvious place to start is with Bostrom, whose publications in the early 2000s — such as his paper "Astronomical Waste," which was recently retweeted by Musk — planted the seeds that have grown into the kudzu vine crawling over the tech sector, world governments and major media outlets like the New York Times and TIME.

Nick Bostrom is, first of all, one of the most prominent transhumanists of the 21st century so far. Transhumanism is an ideology that sees humanity as a work in progress, as something that we can and should actively reengineer, using advanced technologies like brain implants, which could connect our brains to the Internet, and genetic engineering, which could enable us to create super-smart designer babies. We might also gain immortality through life-extension technologies, and indeed many transhumanists have signed up with Alcor to have their bodies (or just their heads and necks, which is cheaper) frozen after they die so that they can be revived later on, in a hypothetical future where that's possible. Bostrom himself wears a metal buckle around his ankle with instructions for Alcor to "take custody of his body and maintain it in a giant steel bottle flooded with liquid nitrogen" after he dies.