S-Risks: Fates Worse Than Extinction

post by aggliu, Writer · 2024-05-04T15:30:36.666Z

This is a link post for https://youtu.be/fqnJcZiDMDo

Cross-posted to the EA Forum.

In this Rational Animations video, we discuss s-risks (risks of astronomical suffering): scenarios in which an astronomical number of beings suffer terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance.

The script for this video was a winning submission to the Rational Animations Script Writing contest (https://www.lesswrong.com/posts/RH8nGG5vnuXc4eKu5/rational-animations-script-writing-contest). The first author of this post, Allen Liu, was the primary script writer, with the second author (Writer) and other members of the Rational Animations writing team giving significant feedback. Outside reviewers, including authors of several of the cited sources, provided input as well. Production credits are at the end of the video. You can find the script of the video below.


Is there anything worse than humanity being driven extinct? When considering the long-term future, we often come across the concept of "existential risks" or "x-risks": dangers that could effectively end humanity's future with all its potential. But these are not the worst possible dangers that we could face. Risks of astronomical suffering, or "s-risks", encompass outcomes even worse than extinction, such as the creation of an incredibly large number of beings suffering terribly. Some researchers argue that taking action today to avoid these most extreme dangers may turn out to be crucial for the future of the universe.

Before we dive into s-risks, let's make sure we understand risks in general. As Swedish philosopher Nick Bostrom explains in his 2013 paper "Existential Risk Prevention as Global Priority",[1] one way of categorizing risks is to classify them according to their "scope" and their "severity". A risk's "scope" refers to how large a population the risk affects, while its "severity" refers to how badly that population is affected. To use Bostrom's examples, a car crash may be fatal to the victims themselves and devastating to their friends and family, yet go unnoticed by most of the world. So the scope of the car crash is small, though its severity is high for those few people. Conversely, some tragedies could have a wide scope but be comparatively less severe. If a famous painting were destroyed in a fire, it could negatively affect millions or billions of people in the present and future who would have wanted to see that painting in person, but the impact on each of their lives would be much smaller.

In his paper, Bostrom analyzes risks which have both a wide scope and an extreme severity, including so-called "existential risks" or "x-risks". Human extinction would be such a risk: affecting the lives of everyone who would have otherwise existed from that point on and forever preventing all the joy, value and fulfillment they ever could have produced or experienced. Some other such risks might include humanity's scientific and moral progress permanently stalling or reversing, or us squandering some resource that could have helped us immensely in the future.

S-risk researchers take Bostrom's categories a step further. If x-risks are catastrophic because they affect everyone who would otherwise exist and prevent all their value from being realized, then an even more harmful type of risk would be one that affects more beings than would otherwise exist and that makes their lives worse than non-existence: in other words, a risk with an even broader scope and even higher severity than a typical existential risk, or a fate worse than extinction.

David Althaus and Lukas Gloor, in their article from 2016 titled "Reducing Risks of Astronomical Suffering: A Neglected Priority",[2] claim that such a terrible future is a possibility worth paying attention to, and that we might be able to prevent it or make it less likely. Specifically, they define s-risks as "risks of events that bring about suffering in cosmically significant amounts", relative to how much preventable suffering we expect in the future on average.

Because s-risks involve larger scopes and higher severities than anything we've experienced before, any examples of s-risks we could come up with are necessarily speculative and can sound like science fiction. Remember, though, that some risks that we take very seriously today, such as the risk of nuclear war, were first imagined in science fiction stories. For instance, in "The World Set Free", written by H.G. Wells in 1914,[3] bombs powered by artificial radioactive elements destroy hundreds of cities in a globe-spanning war.

So, we shouldn't ignore s-risks purely because they seem speculative. We should also keep in mind that s-risks are a very broad category - any specific s-risk story might sound unlikely to materialize, but together they can still form a worrying picture. With that in mind, we can do some informed speculation ourselves.

Some s-risk scenarios are like current examples of terrible suffering, but on a much larger scale: imagine galaxies ruled by tyrannical dictators in constant, brutal war, devastating their populations; or an industry as cruel as today's worst factory farms being replicated across innumerable planets in billions of galaxies. Other possible s-risk scenarios involve suffering of a type we don't see today, like a sentient computer program somehow accidentally being placed in a state of terrible suffering and being copied onto billions of computers with no way to communicate to anyone to ease its pain.

These specific scenarios are inspired by "S-risks: An introduction" by Tobias Baumann,[4] a researcher at the Center for Reducing Suffering. The scenarios have some common elements that illustrate Althaus and Gloor's arguments as to why s-risks are a possibility that we should address. In particular, they involve many more beings than currently exist, they involve risks brought on by technological advancement, they involve suffering coming about as part of a larger process, and they are all preventable with enough foresight.

Here are some arguments for why s-risks might have a significant chance of occurring, and why we can probably lower that chance.

First, if humanity or our descendants expand into the cosmos, then the number of future beings capable of suffering could be vast: maybe even trillions of times more than today. This could come about simply by increasing the number of inhabited locations in the universe, or by creating many artificial or simulated beings advanced enough to be capable of suffering.

Second, as technology continues to advance, so does the capability to cause tremendous and avoidable suffering to such beings. We see this already happening with weapons of mass destruction or factory farming. Technology that has increased the power of humanity in the past, from fire to farming to flight, has almost always allowed for both great good and great ill. This trend will likely continue.

Third, if such suffering is not deliberately avoided, it could easily come about. While this could be from sadists promoting suffering for its own sake, it doesn't have to be. It could be by accident or neglect, for example if there are beings that we don't realize are capable of suffering. It could come about in the process of achieving some other goal: today's factory farms aren't explicitly for the purpose of causing animals to suffer, but they do create enormous suffering as part of the process of producing meat to feed humans. Suffering could also come about as part of a conflict, like a war on a much grander scale than anything in humanity's past, or in the course of beings trying to force others to do something against their will.

Finally, actions that we take today can reduce the probability of future suffering occurring. One possibility is expanding our moral circle.[5] By making sure to take as many beings as possible into account when making decisions about the future, we can avoid causing astronomical suffering simply because a class of morally relevant beings was ignored. We particularly want to prevent the idea of caring for other beings from becoming ignored or controversial. Another example, proposed by David Althaus, is reducing the influence of people he describes as "malevolent": those with traits shared by history's worst dictators. Additionally, some kinds of work that prevent suffering today will also help prevent suffering in the future, like curing or eliminating painful diseases and strengthening organizations and norms that promote peace.

Maybe you're not yet convinced that s-risks are likely enough that we should take them seriously. Tobias Baumann, in his book "Avoiding the Worst: How to Prevent a Moral Catastrophe", argues that the total odds of an s-risk scenario are quite significant; but even small risks are worth our attention if there's something we can do about them. The risk of dying in a car accident within the next year is around 1 in 10,000,[6] yet we still wear seatbelts because they're an easy intervention that meaningfully reduces that risk.

Or perhaps you think that even a life with a very large amount of suffering is still preferable to non-existence. This is a reasonable objection, but remember that the alternative to astronomical suffering doesn't have to be extinction. Consider a universe with many galaxies of people living happy and fulfilled lives, but one galaxy filled with people in extreme suffering. All else being equal, it would still be a tremendous good to free the people of that galaxy and allow them to live as they see fit.[7] Just how much attention we should pay to preventing this galaxy of suffering depends on your personal morals and ethics, but almost everyone would agree that it is at least something worth doing if we can. And suffering doesn't need to be our only moral concern for s-risks to be important. It's very reasonable to care about s-risks while also believing that the best possible futures are full of potential and worth fighting for.

Althaus and Gloor further argue that s-risks could be even more important to focus on than existential risks. They picture humanity's possible futures as a lottery. Avoiding extinction risks is like buying more tickets in that lottery, giving humanity a better chance of living to see one of those futures; but if the bad outweighs the good in the average future each ticket represents, then buying more of them isn't worth it. Avoiding s-risks increases the average value of the tickets we already have, and makes buying more of them all the more valuable. Additionally, if an s-risk scenario comes about, it may be extremely difficult to escape, so we need to act ahead of time. Being aware and prepared today could be critical for stopping astronomical suffering before it can begin.
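To make the lottery analogy a little more concrete, here is a minimal expected-value sketch. It is our illustration only, not a formula from Althaus and Gloor's article: the symbols p and V are assumptions introduced here, and extinction is assigned a value of zero for simplicity.

```latex
% Illustrative sketch only; p and V are assumed symbols, not notation
% from the cited sources, and extinction is given value 0 for simplicity.
% Let p = probability that humanity survives to reach a long-term future,
%     V = the (possibly negative) value of that future if it is reached.
\[
  \text{Expected future value} \;=\; p \cdot E[V \mid \text{survival}]
\]
% Reducing extinction risk raises p (buying more lottery tickets);
% reducing s-risks raises E[V | survival] (the value of each ticket).
% If E[V | survival] <= 0, raising p alone does not improve the expectation,
% whereas raising E[V | survival] helps directly and also makes further
% increases in p worthwhile.
```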

There are many causes in the world today demanding our limited resources, and s-risks can seem too far off and abstract to be worth caring about. But we've presented arguments that s-risks have a significant chance of occurring, that we can lower that chance, and that doing so will help make the future better for everyone. If we can alter humanity's distant future at all, it's well worth putting in time and effort now to prevent these worst-case scenarios, while we still have the chance.

1. ^ Bostrom, Nick (2013). "Existential Risk Prevention as Global Priority." https://existential-risk.com/concept.pdf

2. ^ Althaus, David and Lukas Gloor (2016). "Reducing Risks of Astronomical Suffering: A Neglected Priority." https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/

3. ^ Wells, Herbert George (1914). The World Set Free. E. P. Dutton & Company.

4. ^ Baumann, Tobias (2017). "S-risks: An introduction." https://centerforreducingsuffering.org/research/intro/

5. ^

6. ^ National Highway Traffic Safety Administration, US Department of Transportation. "Overview of Motor Vehicle Crashes in 2020."

7. ^ Example from: Sotala, Kaj and Lukas Gloor (2017). "Superintelligence as a Cause or Cure for Risks of Astronomical Suffering." https://longtermrisk.org/files/Sotala-Gloor-Superintelligent-AI-and-Suffering-Risks.pdf

2 comments


comment by Dagon · 2024-05-04T21:35:19.020Z

Most of these kinds of posts should start with Woody Allen's 1979 quote:

More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.

comment by RedMan · 2024-05-04T21:18:01.502Z

https://www.lesswrong.com/posts/BSo7PLHQhLWbobvet/unethical-human-behavior-incentivised-by-existence-of-agi

I wrote about this, but didn't use the s-risk term.  I'm fine with exposing future me to s-risk, please don't pulp my brain.