Collapse Might Not Be Desirable

post by Dzoldzaya · 2023-01-05T17:29:43.886Z


Epistemic Status: Wisdom gleaned from years of meditation.

There are many ways we could create societal collapse: engineering a pandemic, triggering a nuclear catastrophe, developing a computer virus to permanently crash the global economy, and so on.

Now you may be thinking about trying out one of these. Most value is in the far future and, in a collapse scenario, we would probably be more likely to sustain moral progress than technological progress (because 'moral infrastructure' is more likely to stay intact than technological and global-trade infrastructure). Creating a moderate collapse scenario, with a high level of depopulation, would definitely give us more time, and probably more societal wisdom, to focus on getting past the age of perils, in particular solving AI alignment. And in the short term, it might seem morally desirable as well, because the net harm caused by humans, such as animal suffering on factory farms, would be greatly reduced. You might also be an antinatalist, and (reasonably) believe that most of our lives are net negative.

But there are a few reasons that this strategy might not be a good idea:

1. You might get it wrong and kill everybody. Fine, so developing a contagious virus that kills 95% of the population might seem like a win-win: it would slow everything down, end most factory farming, and give us another century to rebuild and focus on alignment. But what if the virus mutates and accidentally kills 99.9% of the population, leaving only the most remote and uncontacted tribes (who aren't party to our moral progress) alive? Or what if it kills too few, and just makes certain actors angrier or more desperate to develop violent, weaponised AGI?


2. Collapse might lock in worse institutions or forms of governance. Of course, we would probably be able to build on our moral progress after a collapse, but institutions are sticky. A collapse scenario is likely to produce varied outcomes across today's political units, and if any of them survive, it is hard to predict which will come out of the collapse stronger. If your planned collapse scenario ends up destroying every political unit except an elite bunch of the CCP with bunker access, this could be even riskier than the current predicament!
 

3. Reputational damage. If you try to create an engineered pandemic, trigger a nuclear catastrophe, or otherwise strive for societal collapse and succeed, it's not too much of an issue: your reputation will be the least of your worries. But if you fail, the reputational costs to yourself and your movement could be so serious that much of the good work you do would likely be undone by your recklessness.
 

4. We might have locked in low population growth. Birth rates have dropped as a result of attitudinal changes (greater female labour force participation, weaker preference for large families, etc.). Unless collapse really takes us back to pre-modern values, it's likely that we won't have the societal incentives to repopulate and rebuild a more enlightened society.


5. Do-no-harm principle. All else equal, if we can manage to stop misaligned TAI and end factory farming without causing extreme pain and suffering around the world through deliberate collapse, we probably should. It might be more difficult, but we should be very sure that solving the problem directly is unlikely before we choose the nuclear option. Some forms of collapse could also lead to worse human and animal suffering, and the outcomes are highly unpredictable.

 

So, all in all, I think deliberately striving for societal collapse is probably (60%) not a good idea, so I don't recommend it as a plan/career path.

9 comments


comment by Dagon · 2023-01-05T23:02:22.588Z

I'd love to see the posts in favor of pursuing/accelerating collapse.  It sounds like you're arguing against someone, but I don't actually know of any (at least not any who this would rebut).

Though I guess "60%" is a bit of an endorsement; it's much lower than my estimate, even if I substitute the much-more-reasonable "claim to pursue collapse, but actually just take money/status from idiots rather than doing anything about it".

comment by Dzoldzaya · 2023-01-08T00:54:40.122Z

Of course there are no such posts, and I hoped that people would read it in this spirit! I'm in fact arguing against that elephant in many a room where people are discussing collapse and x-risk.

I'm sure many people have thought:

Let a = the probability of existential catastrophe this century, b = the chance of non-recovery post-collapse, and c = the likelihood that a post-collapse society reaches modernity better organised to address the age of perils than our current one.

If a > b and c > 0.5, and we accept longtermism, then collapse seems desirable. If we add fairly pessimistic views about ongoing moral tragedies, or ideas like antinatalism and negative utilitarianism, the balance tips further towards collapse.
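A minimal sketch of that decision rule, treating a, b, and c as the probabilities defined above. The function name and every number below are my own illustrative placeholders, not estimates anyone here has endorsed:

```python
# Toy formalisation of the a/b/c argument above; purely illustrative.

def collapse_seems_desirable(a: float, b: float, c: float) -> bool:
    """Apply the (loose) decision rule from this comment.

    a: probability of existential catastrophe this century on the status quo
    b: probability that civilisation never recovers after a deliberate collapse
    c: probability that a recovered society is better organised than ours
       for navigating the age of perils
    """
    return a > b and c > 0.5

# Placeholder numbers: pessimistic about x-risk, optimistic about recovery
# and post-collapse wisdom...
print(collapse_seems_desirable(a=0.3, b=0.1, c=0.6))  # True
# ...versus more conventional estimates.
print(collapse_seems_desirable(a=0.1, b=0.4, c=0.3))  # False
```

Note that the rule compares only probabilities, not magnitudes of value or of suffering along the way, which is part of what makes it loose.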

This post is an expression of this dilemma; I feel it captures the tone I was hoping for... but no-one else seems to like it, unfortunately.

comment by niplav · 2023-01-05T21:39:27.878Z

Your 60% estimate that deliberately inducing societal collapse would be bad strikes me as underconfident: when I try to imagine 1000 worlds in which that happens, maybe 5 or so turn out well.

comment by Dzoldzaya · 2023-01-06T01:43:41.516Z

Sorry, I think the logic of my 60% figure was imprecise. 

If we're just talking about 1 person attempting to deliberately induce societal collapse, the chances of any kind of impact, either way, would be relatively low (depending on the person), so the 60% would seem a bit meaningless there. 

If we're talking about whether it's worth developing a more serious movement to initiate societal collapse (potentially in an optimal way) or to plan for it as an option, I think there are arguments both ways, but I'd lean against it being a good idea because of the risks laid out in the post. This is what my 60% figure was aiming at. 

If we're talking about a world where we have successfully induced collapse (in a way that allows society to rebuild in some way), would this be a better or worse world, in expectation? This is the question I was really hinting at with this post, and I would definitely dispute your 5 in 1000 claim if this was the question you were thinking of. 

If we're Eliezer-level pessimistic about TAI timelines, serious about the horrors of factory farming (and perhaps antinatalism), and optimistic about moral progress in the absence of technological progress, I think this question gets very interesting. 

comment by Ofer (ofer) · 2023-01-06T12:04:37.995Z

Relevant & important: The unilateralist's curse.

comment by Dzoldzaya · 2023-01-06T17:13:13.534Z

Definitely relevant, but I'm not sure it supports my argument that we shouldn't try to induce collapse.

This post is about unilaterally taking a very radical action to avert a potentially worse scenario that inertia and race dynamics are pushing us towards. That looks like classic 'unilateralist's benediction' territory.

comment by Noosphere89 (sharmake-farah) · 2023-01-06T19:45:19.449Z

What ofer is saying is that collapse plans probably fall under the unilateralist's curse: a single agent takes a dangerous decision on their own because they perceive the benefit to be high relative to the risk.

comment by Richard_Kennaway · 2023-01-06T10:35:11.740Z

Dzoldzaya, meet LVSN.

comment by Dzoldzaya · 2023-01-06T16:53:29.869Z

Thanks! I'm actually a more serious EA type in everyday life, but my LessWrong alter ego is proudly Kakistocurious.