Reducing x-risk might be actively harmful

post by MountainPath · 2024-11-18T14:25:07.127Z · LW · GW · 5 comments


Great. Another crucial consideration I missed. I was convinced that working on reducing existential risk for humanity should be a global priority.

Fulfilling our potential and ensuring that we can create a truly just future seems so wonderful.

Well, recently I was introduced to the idea that this might actually not be the case. 

The argument is rooted in suffering-focused ethics [EA · GW] and the concept of complex cluelessness [? · GW]. If we step back and think critically though, what predicts suffering more than the mere existence of sentient beings—humans in particular? Our history is littered with pain and exploitation: factory farming, systemic injustices, and wars, to name just a few examples. Even with our best intentions, humanity has perpetuated vast amounts of suffering.

So here’s the kicker: what if reducing existential risks isn’t inherently good? What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict?

I was also struck by the critique of the “time of perils” assumption. The idea that now is a uniquely critical juncture in history, where we can reduce existential risks significantly and set humanity on a stable trajectory, sounds compelling. But the evidence supporting this claim is shaky at best. Why should we believe that reducing risks now will have lasting, positive effects over millennia—or even that we can reduce these risks at all, given the vast uncertainties?

This isn’t to say existential risk reduction is definitively bad—just that our confidence in it being good might be misplaced. A truly suffering-focused view might lean toward seeing existential risk reduction as neutral at best, and possibly harmful at worst.

It’s humbling, honestly. And frustrating. Because I want to believe that by focusing on existential risks, we’re steering humanity toward a better future. But the more I dig, the more I realize how little we truly understand about the long-term consequences of our actions.

So, what now? I’m not sure. 

I am sick of missing crucial considerations. All I want to do is to make a positive impact. But no. Radical uncertainty it is.

I know that this will potentially cost me hundreds of hours to fully think through, and a lot of energy if I pursue it.

Right now I am just considering pursuing earning to give instead and donating a large chunk of my money across different worldviews and cause areas.

Would love to get your thoughts.

5 comments

Comments sorted by top scores.

comment by ZY (AliceZ) · 2024-11-19T02:03:58.200Z · LW(p) · GW(p)

I personally agree with your reflection on suffering risks (including factory farming, systemic injustices, and wars) and the approach of donating to different cause areas. My (maybe unpopular under a "prioritize only one" type of mindset) thought is: maybe we should avoid prioritizing only a single area (especially collectively), and instead recognize that in reality there are always multiple issues we need to fight for and solve. Personally, we could each focus professionally on one issue and volunteer for or donate to another cause area, depending on our knowledge, interests, and ability; additionally, we could donate to multiple cause areas. Meanwhile, a big step is to be aware of and open our ears to the various issues we may be facing as a society, and that will (I hope) translate into multiple types of action. After all, some of these suffering risks involve human actions, and each of us doing something differently could help reduce them in both the short and long term. But there are also many things that I do not know how best to balance.

A side note - I also hope you are not too sad about "missing crucial considerations" (and I appreciate that you are trying to gather more information and learn quickly; we all should do more of this too)! The key to me might be an open mind and the ability to consider different aspects of things; hopefully we will then be on the path towards something "more complete". Proactively, one approach I often try is talking to people who are passionate about different areas, who are different from me, and learning more from there. I also sometimes refer to https://www.un.org/en/global-issues for ideas.

comment by Richard_Kennaway · 2024-11-19T12:20:09.295Z · LW(p) · GW(p)

> What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict?

It's up to those future people to solve their own problems. It is enough that we make a future for them to use as they please. Parents must let their children go, or what was the point of creating them?

comment by quila · 2024-11-18T21:41:52.339Z · LW(p) · GW(p)

(was this written by chatgpt?)

another crucial consideration here is that a benevolent ASI could do acausal trade to reduce suffering in the unreachable universe.[1] (comparing the EV of that possibility against the EV of human-caused long-term suffering is complex / involves speculation about the many variables going into each side)

  1. ^

    there's writing about this somewhere, i'm here just telling you that the possibility / topic exists

    i wrote this about it but i don't think it's comprehensive enough https://quila.ink/posts/ev-of-alignment-for-negative-utilitarians/

comment by hmys (the-cactus) · 2024-11-18T18:25:50.211Z · LW(p) · GW(p)

Seems unlikely to me. I think, in large part due to factory farming, that the current immediate existence of humanity, and also its history, are net negatives. The reason I'm not a full-blown antinatalist is that these issues are likely to be remedied in the future, and the goodness of the future will astronomically dwarf the negativity humanity has brought about and is bringing about (assuming we survive and realize a non-negligible fraction of our cosmic endowment).

The reason I think this is that, the way I view it, it's an immediate corollary of the standard Yudkowsky/Bostrom AI arguments. Animals existing and suffering is an extremely specific state of affairs, just like humans existing and being happy is an extremely specific state of affairs. This means that if you optimize hard enough for anything that's not exactly that (happy humans, or suffering animals), you're not gonna get it.

And maybe this is me being too optimistic (but I really hope not, and I really don't think so), but I don't think many humans want animals to suffer for its own sake. They'd eat lab-grown meat if it were cheaper and better tasting than animal-grown meat. Lab-grown meat is a good example of the general principle I'm talking about. Suffering of sentient minds is a complex thing. If you have a powerful optimizer going about optimizing the universe, you're virtually never gonna get suffering sentient minds unless that is what the optimizer is deliberately aiming for.

Replies from: the-cactus
comment by hmys (the-cactus) · 2024-11-20T22:57:05.975Z · LW(p) · GW(p)

People seem to disagree with this comment. There are two statements and one argument in it:

  1. Humanity's current and historical existence are net negatives.
  2. The future, assuming humans survive, will have massive positive utility.
    1. The argument for why this is the case, based on something something optimization.

What are people disagreeing with? Is it mostly the former? I think the latter is rather clear; I'm very confident it is true, both the argument and the conclusion. The former I'm quite confident is true as well (~90%ish?), but only for my set of values.