Is There a Valley of Bad Civilizational Adequacy?
post by lbThingrb · 2022-03-11T19:49:49.049Z
This post is my attempt to think through an idea I have been mulling over since this discussion on Twitter last November, prompted by a question from Matthew Barnett, and which I was reminded of while reading the section on energy in Zvi’s recent post on the war in Ukraine. The title, “valley of bad civilizational adequacy,” refers to the idea that as one relaxes the constraints on the bounded rationality of hypothetical future collective human policy decisions, the expected utility captured by humans may initially decrease, due to increased existential risk from unaligned AGI, before beginning the ascent to the peak of optimal rationality.
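To make the shape of that claim a bit more concrete, here is a toy sketch of my own (the symbols below are illustrative assumptions, not anything from the discussions linked above): let $a$ denote civilizational adequacy, let $C(a)$ and $S(a)$ be the rates of AI capabilities and AI alignment progress (both presumably increasing in $a$), and let $P(a)$ be the probability that alignment is solved before AGI arrives. Then, very roughly,

$$\mathbb{E}[U](a) \approx P(a)\,V_{\text{aligned}} + \bigl(1 - P(a)\bigr)\,V_{\text{doom}},$$

and if $C(a)$ rises faster than $S(a)$ over some intermediate range of $a$, then $P(a)$, and with it expected utility, dips over that range before recovering as $a$ approaches full rationality. That dip is the valley.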
Preliminary definition: By pro-growth policy I mean a set of public policy proposals that aren’t directly about AI but could shorten the timeline to AGI: fewer immigration restrictions, particularly for high-skill workers; cheaper, denser housing, especially in the SF Bay Area; and cheap energy, via a large build-out of nuclear and renewable generation capacity. (Is there anything else that fits this category?)
Main argument
1. Pro-growth policy can be expected to accelerate AI capabilities research and therefore shorten the timeline to AGI, via the agglomeration effects of more smart people in dense urban tech hubs, decreased energy cost of running ML experiments, and overall economic growth leading to more lucrative investment opportunities and therefore more research funding.
2. The increase in AI X-risk from having less time to solve AI safety would outweigh any decrease in AI X-risk from pro-growth policy also accelerating AI-alignment research.
3. AI X-risk dominates all other considerations.
4. Therefore pro-growth policy is bad, and AI-risk-alarmist rationalists should not support it, and perhaps should actively oppose it.
Possible counterarguments
(Other than against step 3; the intended audience of this post is people who already accept that.)
The main argument depends on
- the degree to which AI-risk alarmists can influence pro-growth policy,
- the effect of such changes in pro-growth policy on the timeline to AGI,
- and the effect of such changes in the AGI timeline on our chances of solving alignment.
One or more of these could be small enough that the AI-risk community’s stance on pro-growth policy is of negligible consequence.
Perhaps pro-growth policy won’t matter because the AGI timeline will be very short, not allowing time for any major political changes and their downstream consequences to play out before the singularity.
Perhaps it’s bad to oppose pro-growth policy because the AGI timeline will be very long: If we have plenty of time, there’s no need to suffer from economic stagnation in the meantime. Furthermore, sufficiently severe stagnation could lead to technological regress, political destabilization that sharply increases and prolongs unnecessary pre-singularity misery, or even the failure of human civilization to ever escape earth.
Even without a very long AGI timeline, perhaps the annual risk that tech stagnation triggers cascading economic and political instability, leading to permanent civilizational decline, is high enough to outweigh the increased AI X-risk from shortening the AGI timeline.
Perhaps there is no valley of bad civilizational adequacy, or at most a very small valley: A civilization adequate enough to get the rational pursuit of growth right may be likely enough to also get AI risk right that pro-growth policy is positive-EV. E.g. more smart people in dense urban tech hubs might accelerate AI-safety research enough to outweigh the increased risk from also accelerating capabilities research. (This seems less implausible w.r.t. housing and immigration policy than energy policy, since running lots of expensive large-scale ML experiments seems to me to be particularly likely to advance capabilities more than safety.)
I find the cumulative weight of these counterarguments underwhelming, but I also find the conclusion of the main argument very distasteful, and it certainly seems to run counter to the prevailing wisdom of the AI-risk community. Perhaps I am missing something?
1 comment
comment by Dagon · 2022-03-11T23:06:35.593Z
To the extent that the "valley" is just the intersection of the curves of the sanity waterline and the capability curve, I think every major new technology has had this concern. Machine guns were too horrific to contemplate in 1910. Nukes were clearly more power than humans were ready for.
I wouldn't argue that it's not true or that the valley doesn't exist. I'd argue that it isn't a crux for any decision one can make. The people making the tools are different from the people who are worried about it, and stopping them isn't possible. All we can do is to try to raise the sanity waterline fast enough that we can handle it when they succeed.
Honestly, I still suspect we'll destroy ourselves through conventional means before uncontrollable super-powered AI does it. It may well be tool-AI-enhanced human stupidity, but it'll still be humans at the root of the decision tree.