Second-Order Existential Risk

post by Ideopunk · 2020-07-01T18:46:52.140Z · LW · GW · 1 comments

Cross-posted.

[Epistemic status: Low confidence]

[I haven’t seen this discussed elsewhere, though there might be overlap with Bostrom’s “crunches” and “shrieks”]

How important is creating the conditions to fix existential risks versus actually fixing existential risks? 

We can somewhat disentangle these. Let's say there are two levels to "solving existential risk." The first level includes the elements deliberately 'aimed' at solving existential risk: researchers, their assistants, their funding. The second level consists of the social factors that come together to produce humans and institutions with the knowledge and skills to contribute to existential risk reduction in the first place. This second level includes things like "a society that encourages curiosity," "continuity of knowledge," or "a shared philosophy that lends itself to thinking in terms of things like existential risk (humanism?)." All of these have numerous other benefits to society, and they could perhaps be summarized as "create enough surplus to enable long-term thinking."

Another way to frame this second level: these are all conditions that allow us to tackle existential risk at all. Here are a few more such conditions:

If any of these were reversed, it seems conceivable that our capacity to deal with existential risk would be severely impaired. Is there a non-negligible risk of these conditions reversing? If so, then perhaps research should go into this "second-order" existential risk (population collapse, civilizational collapse) the same way it goes into "first-order" existential risk (nuclear war, AI alignment).

Reasons why second-order x-risk might be a real concern:

Reasons why second-order x-risk might not be a real concern:

When considering whether or not second-order x-risk is worth researching, it’s also worth looking at where second-order existential risk falls in terms of effective altruist criteria: 

My suspicion is that second-order x-risk is not as important as first-order x-risk. It might not even be a thing! However, I think the tractability is still worth exploring. Perhaps there are cheap, high-impact measures that would maximize our future ability to deal with existential risk. It's possible that these measures could also align with other EA values. Even decreasing the disease burden in developing countries slightly increases the chances of a future innovator not dying of a preventable disease.

I am also personally interested in the exploration of second-order x-risk because there is a lot of overlap with conservative concerns about social and moral collapse. I think those fears are overblown but they are shared by a huge chunk of the population (and are probably the norm outside of WEIRD countries). I’m curious to see robust analyses of how much we realistically should worry about institutional decay, population collapse, and technological upheaval. It’s a ‘big question’ the same way religion is: if its claims are true, it would be a big deal, and enough people consider it a big deal that it’s worth checking. However, if it is rational to not worry about such things, then we could convince at least a few people with those concerns to worry about our long-term prospects instead.

1 comment

comment by Slider · 2020-07-01T19:02:58.683Z · LW(p) · GW(p)

Stable career paths strike me as a very surprising condition. The implication is that if we don't have scientists, we can get screwed? But what if science gets done not by professionals but by citizens or hobbyists?

A shift from a negative freedom of thought toward a right to be stupid could lead to idiocracy, or to a situation where scientists exist but do politics or a kind of orthodoxy production instead of science. A European-style positive right to universal education could keep the voting populace science-literate and keep important science informing political decision-making.