Second-Order Existential Risk
post by Ideopunk · 2020-07-01T18:46:52.140Z · LW · GW · 1 comment
[Epistemic status: Low confidence]
[I haven’t seen this discussed elsewhere, though there might be overlap with Bostrom’s “crunches” and “shrieks”]
How important is creating the conditions to fix existential risks versus actually fixing existential risks?
We can somewhat disentangle these. Let's say there are two levels to "solving existential risk." The first level includes the elements deliberately 'aimed' at solving existential risk: researchers, their assistants, their funding. On the second level are the social factors that come together to produce humans and institutions with the knowledge and skills to contribute to reducing existential risk at all. This second level includes things like "a society that encourages curiosity," "continuity of knowledge," or "a shared philosophy that lends itself to thinking in terms of things like existential risk (humanism?)." All of these have numerous other benefits to society, and they could maybe be summarized as "create enough surplus to enable long-term thinking."
Another attribute of this second level is that these are all conditions that allow us to tackle existential risk. Here are a few more of these conditions:
- Humans continue to reproduce.
- Humans tend to see a stable career as their preferred life-path.
- Research institutions exist.
- Status is allocated to researchers and their institutions.
If any of these were reversed, it seems conceivable that our capacity to deal with existential risk would be heavily impacted. Is there a non-negligible risk of these conditions reversing? If so, then perhaps research should be put into dealing with this “second-order” existential risk (population collapse, civilization collapse) the same way it’s put into dealing with “first-order” existential risk (nuclear war, AI alignment).
Reasons why second-order x-risk might be a real concern:
- The above conditions are not universal and thus can’t be taken for granted.
- Some of these conditions are historical innovations and thus can’t be taken for granted.
- The continued survival of our institutions could be based more on inertia than any real strength.
- Civilizations do collapse. Due to our increased interconnectivity, a global collapse seems possible.
- A shift away from ‘American values’ over the coming decades could lead to greater conformity and less innovation.
- Technology could advance faster than humanity’s ability to adapt, significantly impacting our ability to reproduce ourselves.
Reasons why second-order x-risk might not be a real concern:
- Civilization keeps on trucking through all the disruptions of modernity. Whether or not the kids are alright, they grow up to complain about the next kids.
- Whatever poor adaptations people have to new technology will be selected against. Future humans might develop good attention habits and self-control.
- The bottleneck could really only be in funding. You don't need that many talented people to pluck all the significant x-risk fruit. They're out there and will be for years to come; they just need funding once found.
When considering whether or not second-order x-risk is worth researching, it’s also worth looking at where second-order existential risk falls in terms of effective altruist criteria:
- Scale: Impaired ability to deal with existential risk would, by definition, affect everybody.
- Neglectedness: Many people are already working on their version of preserving civilization.
- Tractability: It is unclear what the impact of additional resources would be.
My suspicion is that second-order x-risk is not as important as first-order x-risk. It might not even be a thing! However, I think the tractability is still worth exploring. Perhaps there are cheap, high-impact measures that maximize our future ability to deal with existential risk. It's possible that these measures could also align with other EA values. Even decreasing disease burden in developing countries slightly increases the chances of a future innovator not dying of starvation.
I am also personally interested in the exploration of second-order x-risk because there is a lot of overlap with conservative concerns about social and moral collapse. I think those fears are overblown but they are shared by a huge chunk of the population (and are probably the norm outside of WEIRD countries). I’m curious to see robust analyses of how much we realistically should worry about institutional decay, population collapse, and technological upheaval. It’s a ‘big question’ the same way religion is: if its claims are true, it would be a big deal, and enough people consider it a big deal that it’s worth checking. However, if it is rational to not worry about such things, then we could convince at least a few people with those concerns to worry about our long-term prospects instead.
Comments
comment by Slider · 2020-07-01T19:02:58.683Z · LW(p) · GW(p)
Stable career paths read to me as a very surprising condition to include. The implication is that if we don't have scientists, we can get screwed? But what if science gets done not by professionals but by citizens or hobbyists?
A shift from a negative freedom of thought toward a right to be stupid could lead to idiocracy, or to scientists who exist but do politics or a kind of orthodoxy production instead of science. A European-style positive right to universal education could keep the voting populace science-literate and keep important science informing political decision-making.