Against Explosive Growth
post by c.trout (ctrout) · 2024-09-04T21:45:03.120Z · LW · GW
Epistemic status: hot take, the core of which was written in ~10 min.
Assumptions/Definitions
I'm conditioning on AGI arriving sometime within the next 3 to 40 years.
"Explosive growth" ≝ the kind of crazy 1yr or 6 month GDP (or GWP) doubling times Shulman talks about here, happening within this century.
Setup
I was just listening to Carl Shulman on the 80k podcast talk about why he thinks explosive growth is very likely. One of the premises in his model is that people will just want it – they'll want to be billionaires, have all this incredible, effectively free entertainment, and so on.
(Side note: somewhat in tension with his claim that humans are unlikely to have any comparative advantage post-AGI, he claims that parents will prefer AI-robot nannies/tutors over human nannies/tutors because, among other things, they produce better educational outcomes for children. But presumably there is little pressure toward exceptional educational attainment in a world where most cognitive labor is outsourced to AGI.)
I actually think, if business continues as usual, explosive growth is fairly likely. But I also think this would probably be a calamity. Here's a quick and dirty argument for why:
Argument
I expect that if we successfully aligned AI to CEV, or pulled off a long reflection (before anything crazy happened – e.g. if we paused right now and did a long reflection) and then aligned AGI to the outputs of that reflection, we would not see explosive growth this century. Some claims as to why:
- Humans don't like shocks. Explosive growth would definitely be a shock. We tend to like very gradual changes, or brief flirts with big change. We do like variety, of a certain scale and tempo – e.g. seasonality. The kind of explosive growth we're considering here is anything but a gentle change of season, though.
- In our heart of hearts, we value genuine trust (built on the psychology of reciprocity, not just unthinking accustomedness to a process that "just works"), dignity, community recognition, fellow-feeling, belonging, feeling useful, and overcoming genuine adversity. In other words, we would choose to create an environment in which we sacrifice some convenience, accept some friction, have to earn things, help each other, and genuinely co-depend to some extent. Basically, I think we'd find that we need to need each other to flourish.
- Speaking of "genuine" things, I think many people value authenticity – we do discount simulacra (at least, again, in our heart of hearts). Even if what counts as simulacra is to some extent culturally defined, there will be a general privileging of "natural" and "traditional" things – things much more like they were found/done in the ancestral environment – since those things have an ancient echo of home in them. So I expect few would choose to live in a VR world full-time, and we would erect some barriers to doing so. (Yes, even if our minds were wiped on entering, since we would interpret this as delusion/abandonment of proper ties).
- We value wilderness, and more generally, otherness.
- A large enough majority will understand themselves as being of a particular functional kind, Homo sapiens, and our flourishing as being attached to that functional kind. In other words, we wouldn't go transhumanist anytime soon (though we might keep that door open for future generations).
Some implications
If you think this is true, then you should expect a scenario in which we see explosive growth to be a scenario in which we failed to align AI to these ideals. I suspect it would mean we merely aligned AI to profits, power, consumer surplus, instant gratification, short-term individualistic coddling, and national-security-under-current-geopolitical-conditions – all at the notable expense of (among other things) the goods that can currently only be had by having consumers/citizens collectively deliberate and coordinate in ways markets fail to allow, or even actively suppress (see e.g. how "market instincts" or market priming might increase selfishness and mute pro-social cooperative instincts).
Even if I'm wrong about the positive outputs of our idealized alignment targets (and I'm indeed only low-to-medium confidence about those), I'm pretty confident that those outputs will not place high intrinsic value on the abstractions of profits, power, consumer surplus, instant gratification, short-term individualistic coddling, or national-security-under-current-geopolitical-conditions. So I expect that, from the perspective of these idealized alignment targets, explosive growth within our century would look like a pretty serious calamity, especially if it results in fairly bad value lock-in. Sure, not as bad as extinction, but still very, very bad (and arguably, this is the most likely default outcome).
Afterthought on motivation
I guess part of what I wanted to convey here is: since it's increasingly unlikely we'll get the chance to align to these idealized targets, maybe we should start at least trying to align ourselves to whatever we think the outputs of those targets are, or at the very least, some more basic democratic targets. And I think Rationalists/EAs tend to underestimate just how odd their values are.
I think I also just want more Rationalists/EAs to be thinking about this "WALL-E" failure mode (assuming you see it as a failure mode). Of course that should tell you something about my values.
1 comment, sorted by top scores.
comment by Vladimir_Nesov · 2024-09-04T22:49:08.485Z · LW(p) · GW(p)
The Shulman world is exploratory engineering: it rests on the assumption of AGI sufficient for explosive growth that somehow doesn't cause an intelligence explosion for many years. It neither forecasts actuality nor endorses a possibility; instead, it explores the consequences of a specific magical assumption. Being aware of these consequences helps set a lower bound on the expected scale of change in actuality.
Aligned growth can in principle coexist with undisturbed slow development. Superintelligence doesn't make an elephant too large to notice ants; it makes it capable of observing minute distinctions. A sudden arrival of the solved world causes many issues, but that doesn't seem like a compelling reason to preserve death – and ending death asks for immediate access to some infrastructure from the distant technological future, even if it does little else for a while.