Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, October 13, 2023

Humans Have Crossed 6 of 9 ‘Planetary Boundaries’

Meghan Bartels
Scientific American
Originally posted 13 September 23

Here is an excerpt:

The new study marks the second update since the 2009 paper and the first time scientists have included numerical guideposts for each boundary—a very significant development. “What is novel about this paper is: it’s the first time that all nine boundaries have been quantified,” says Rak Kim, an environmental social scientist at Utrecht University in the Netherlands, who wasn’t involved in the new study.

Since its initial presentation, the planetary boundaries model has drawn praise for presenting the various intertwined factors—beyond climate change alone—that influence Earth’s habitability. Carbon dioxide levels are included in the framework, of course, but so are biodiversity loss, chemical pollution, changes in the use of land and fresh water and the presence of the crucial elements nitrogen and phosphorus. None of these boundaries stands in isolation; for example, land use changes can affect biodiversity, and carbon dioxide affects ocean acidification, among other connections.

“It’s very easy to think about: there are eight, nine boundaries—but I think it’s a challenge to explain to people how these things interact,” says political scientist Victor Galaz of the Stockholm Resilience Center, a joint initiative of Stockholm University and the Beijer Institute of Ecological Economics at the Royal Swedish Academy of Sciences, who focuses on climate governance and wasn’t involved in the new research. “You pull on one end, and actually you’re affecting something else. And I don’t think people really understand that.”

Although the nine overall factors themselves are the same as those first identified in the 2009 paper, researchers on the projects have fine-tuned some of these boundaries’ details. “This most recent iteration has done a very nice job of fleshing out more and more data—and, more and more quantitatively, where we sit with respect to those boundaries,” says Jonathan Foley, executive director of Project Drawdown, a nonprofit organization that develops roadmaps for climate solutions. Foley was a co-author on the original 2009 paper but was not involved in the new research.

Still, the overall verdict remains the same as it was nearly 15 years ago. “It’s pretty alarming: We’re living on a planet unlike anything any humans have seen before,” Foley says. (Humans are also struggling to meet the United Nations’ 17 Sustainable Development Goals, which are designed to address environmental and societal challenges, such as hunger and gender inequality, in tandem.)


Here is my summary:

Planetary boundaries are the limits within which humanity can operate without causing irreversible damage to the Earth's ecosystems. The six boundaries that have been crossed are:
  • Climate change
  • Biosphere integrity
  • Land-system change
  • Nitrogen and phosphorus flows
  • Freshwater change
  • Novel entities (synthetic chemicals, plastics, and other human-made substances)
The study found that these boundaries have been crossed due to a combination of factors, including population growth, economic development, and unsustainable consumption patterns. The authors of the study warn that crossing these planetary boundaries could have serious consequences for human health and well-being.

The article also discusses the implications of the study's findings for policymakers and businesses. The authors argue that we need a fundamental shift in the way we live and produce goods and services in order to stay within the planetary boundaries. This will require investment in renewable energy, sustainable agriculture, and other technologies that can help decouple economic growth from environmental damage.

Overall, the article provides a sobering assessment of the state of the planet. It is clear that we need to take urgent action to address the environmental challenges that we face.

Monday, July 3, 2023

Is Avoiding Extinction from AI Really an Urgent Priority?

S. Lazar, J. Howard, & A. Narayanan
fast.ai
Originally posted 30 May 23

Here is an excerpt:

And why focus on extinction in particular? Bad as it would be, as the preamble to the statement notes, AI poses other serious societal-scale risks. And global priorities should be not only important, but urgent. We’re still in the middle of a global pandemic, and Russian aggression in Ukraine has made nuclear war an imminent threat. Catastrophic climate change, not mentioned in the statement, has very likely already begun. Is the threat of extinction from AI equally pressing? Do the signatories believe that existing AI systems or their immediate successors might wipe us all out? If they do, then the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments. The researchers should stop trying to make existing AI systems safe, and instead call for their elimination.

We think that, in fact, most signatories to the statement believe that runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate, even if we are confident that it will someday occur. If this is so, then at least two things follow.

First, we should give more weight to serious risks from AI that are more urgent. Even if existing AI systems and their plausible extensions won’t wipe us out, they are already causing much more concentrated harm, they are sure to exacerbate inequality and, in the hands of power-hungry governments and unscrupulous corporations, will undermine individual and collective freedom. We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress. They should be our priority. After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?

Second, instead of alarming the public with ambiguous projections about the future of AI, we should focus less on what we should worry about, and more on what we should do. The possibly extreme risks from future AI systems should be part of that conversation, but they should not dominate it. We should start by acknowledging that the future of AI—perhaps more so than of pandemics, nuclear war, and climate change—is fundamentally within our collective control. We need to ask, now, what kind of future we want that to be. This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.