Economic Post-ASI Transition
post by Joel Burget (joel-burget) · 2025-01-01T22:37:31.722Z · LW · GW
Who's done high-quality work / can tell a convincing story about managing the economic transition to a world where machines can do every job better than humans?
Some common tropes and why I don't think they're good enough:
- "We've always managed in the past. Take the industrial revolution for example. People stop doing the work that's been automated and find new, usually better-compensated work to do." This is true, and I think it will probably be an important component of the transition. But it's clearly not sufficient if machines are better than humans at everything.
- "Tax AI (developers?) to pay for UBI." Again, something in this vein will probably be part of the solution. But:
- (a) UBI hasn't been well-tested.
- (b) I don't think the math works out if / when AI companies dominate the economy: they'll capture a growing share of it unless tax rates are high enough that everyone else receives more back through UBI than they pay to the AI companies (see the toy simulation after this list).
- (c) It doesn't have enough detail.
- Worldcoin. I think the idea is similar to the UBI story, but again it needs more detail.
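To make (b) concrete, here is a minimal toy sketch. Every parameter and the function name below are my own illustrative assumptions, not an established model: households receive UBI funded by a tax on AI-company profit, then spend most of it back at AI companies, so their share of total wealth can shrink even while their absolute wealth grows.

```python
# Toy model of the flow in (b): UBI funded by a tax on AI profit, with
# households spending most of their income back at AI companies.
# All parameters are illustrative assumptions, not estimates.

def household_share(periods, tax_rate, spend_back=0.9, ai_profit=100.0):
    """Track household vs AI-owner wealth under a simple tax-and-spend loop."""
    households, owners = 100.0, 100.0
    for _ in range(periods):
        ubi = tax_rate * ai_profit           # tax on AI profit funds the UBI
        spending = spend_back * ubi          # most UBI flows back to AI companies
        households += ubi - spending
        owners += ai_profit - ubi + spending
    return households / (households + owners)

for t in (0.2, 0.5, 0.9):
    print(f"tax {t:.0%}: household wealth share after 50 periods = "
          f"{household_share(50, t):.1%}")
# Households' absolute wealth grows, but their share of total wealth shrinks
# at every tax rate shown: with 90% of UBI spent back, no tax rate below 100%
# stops the AI owners from capturing most of the economy.
```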
Who has thought about this really deeply / well?
Note that for the purpose of this question, assume a world where alignment basically works (we can debate that question elsewhere).
Comments sorted by top scores.
comment by Satron · 2025-01-02T10:19:54.930Z · LW(p) · GW(p)
Here is my metastrategy for transitioning to a post-ASI economy:
If we have an aligned ASI (which you've granted), we can just ask it for the best way for humanity to transition to a post-ASI economy. Given that an ASI will be vastly smarter than any human alive at the time, it could probably come up with a better solution than anything we currently have.
↑ comment by Joel Burget (joel-burget) · 2025-01-02T16:59:09.262Z · LW(p) · GW(p)
I should have included this in my list from the start. I basically agree with @Seth Herd [LW · GW] that this is a promising direction but I'm concerned about the damage that could occur during takeoff, which could be a years-long period.
↑ comment by Satron · 2025-01-02T17:24:58.410Z · LW(p) · GW(p)
That's certainly a fair concern. The worst-case scenario is one where we have AGI that can displace human labour but can't solve economics, combined with a slow takeoff.
Here are some of the things that work in our favor in that scenario:
- Companies have turned out to replace human workers much more slowly than I expected. This is purely anecdotal, but there are low-level jobs at my workplace that could be almost fully automated with just the technology we have now. They still haven't been, mostly, I suspect, because of the convenience of relying on humans.
- Under slow takeoff, jobs would mostly be replaced in groups, not all at once. For example, ChatGPT put heavy pressure on copywriters. After no longer being able to work as copywriters, some of them relocated to other jobs. So far the effect has been local, and under slow takeoff, chances are the trend will continue.
- Robotics is advancing much more slowly and much less dramatically than LLMs. If you are a former copywriter who is now jobless, fields that require physical work should be safe for at least some time.
- "We've always managed in the past. Take the industrial revolution for example. People stopped doing the work that was automated and found new, usually better-compensated work to do." This argument works again here, because we are talking about an AI that, for the time being, is clearly not better than humans at everything.
- Even an AI that can't solve economics by itself can help economists with their job. By the time it becomes relevant, AI will be better than what we have now. I am especially excited about its use as a quick lookup tool for specific information that's tricky to Google.
- Slow takeoff means economists and people on LessWrong have more time to think about solving post-ASI economics. We've come a long way since 2022 (when it all arguably blew up), and that's been just two years.
- Slow takeoff also means that governments have more time to wake up to the potential economic problems we might face as AI gets better and better.
↑ comment by Seth Herd · 2025-01-02T11:10:19.600Z · LW(p) · GW(p)
I think this is a reasonable direction for hope. But details matter. In a lot of likely-looking medium takeoff scenarios, you don't have an aligned ASI; you've got many aligned AGIs around and increasingly above the human level of intelligence. If the damage is done by the time they're superintelligent, we may not get the help we need in time.
My own hope is that we do not get massive proliferation of AGI, because some sufficiently powerful coalition of governments steps in and says "hey um perhaps we shouldn't replace all of human labor all at once, not to mention maybe not keep making and distributing AGIs until we get a misaligned recursively self-improving one" - possibly because their weakly superhuman AGI suggested they might want to do that.
Or that a bunch of smart people on LW and elsewhere have thoroughly debated the issue before then and determined that proliferating AGIs leads to short-term economic doom even more certainly than long-term existential doom for humanity....
↑ comment by Satron · 2025-01-02T12:10:48.963Z · LW(p) · GW(p)
Alright, that's fair enough.
In the medium-takeoff scenario, my plan would be to achieve basically what I wanted to achieve in my original scenario, but with quantity instead of quality. As soon as we get weakly superhuman AGI, we can try throwing 1000 instances of it at this task. Assuming they are better than humans at intellectual tasks, having 1000 genius researchers work on an issue with very good coordination, 24/7, gives us pretty good chances. The main bottleneck here is how much energy each instance consumes. I am decently confident that we can afford at least 1000, and probably many more, which boosts our chances further (a rough sanity check of the energy numbers is sketched below).
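As a rough sanity check on the energy question, here is a minimal sketch; every number below is a guess of mine, not a sourced figure, and "one instance ≈ one multi-GPU server node" is a pure assumption:

```python
# Back-of-envelope energy cost for ~1000 weakly superhuman AGI instances.
# All numbers are hypothetical assumptions for illustration.

KW_PER_INSTANCE = 10.0    # assume one instance ~ one 8-GPU server node (~10 kW)
NUM_INSTANCES = 1000
USD_PER_KWH = 0.10        # assumed industrial electricity price
HOURS_PER_YEAR = 24 * 365

total_mw = KW_PER_INSTANCE * NUM_INSTANCES / 1000
annual_cost_musd = (KW_PER_INSTANCE * NUM_INSTANCES
                    * USD_PER_KWH * HOURS_PER_YEAR) / 1e6

print(f"Total draw: {total_mw:.0f} MW")                 # 10 MW
print(f"Electricity: ${annual_cost_musd:.1f}M / year")  # ~$8.8M
# For scale: large AI datacenters today draw hundreds of MW, so under these
# assumptions 1000 instances (or many more) look affordable on energy alone.
```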
comment by FlorianH (florian-habermacher) · 2025-01-03T11:44:37.363Z · LW(p) · GW(p)
I cannot say I've thought about it deeply enough, but I've thought and written a bit about UBI, taxation/tax competition and so on. My picture so far is:
A. Taxation & UBI would really be natural and workable if we chose the right policies (though I have limited hope that our policy-making and modern democracy are up to the task, especially given the international coordination required). A few subtleties that come to mind:
- Simply tax high revenues or profits.
- No need to tax "AI (developers?)"/"bots" specifically.
- In fact, if AIs remain fairly replicable / if we have many competing instances: scarcity rents will accrue to raw factors (e.g. ores and/or land) rather than to the algorithms used to process them
- UBI to the people.
- International tax (and migration) coordination as essential.
- Otherwise, especially if perfectly mobile AIs earn the scarcity rents, we end up with one or a few tax havens that amass & keep the wealth to themselves
- If you have good international coordination, and can track revenues well, you may use very high tax rates, and correspondingly share a very high fraction of global value added with the population.
- If the world economy specifically comes to be dominated by platform economies, make sure we deal with that properly, ensuring there's competition instead of lock-in monopoly
- I.e. if, say, we'd all want to live in metaverses, avoid everyone being forced to live in Meta's instead of choosing freely among competing metaverses.
Risks include:
- Expect the geographic distribution of revenue to look foreign to us today, and potentially more unequal, with entire lands making zero net contribution in terms of revenue-earning value added
- Maybe ores (and/or some types of land) will capture the dominant share of value added, no longer the educated populations
- Maybe instead it's a monopoly or oligopoly, say with huge shares in Silicon Valley and/or its Chinese counterpart or what have you
- Inequality might exceed today's: today, poor people can make themselves economically attractive by offering cheap labor. Tomorrow, people deprived of (i) valuable ores or the like, or (ii) specific, scarcity-rent-earning AI capabilities, may be able to contribute nothing, and so have zero raw earnings
- Rent-seeking economic lobbies, who successfully install their agents among top policy-makers and lead us to vote for antisocial things, will have an ever stronger incentive to keep rents for themselves. Stylized example: we'll elect the supposedly anti-immigration populist whose main deed is to make sure firms don't pay high enough taxes
- Land is easier to grab by force than people, so expect military land conquest to become more of a thing than in the post-war decades, when minds seemed the most valuable resource
- Human psychology: who knows what happens to societies with no work (though I guess we're more malleable, more able to evolve into a society that can cope with it, than some people think, tbc)
- Trade unions and the like trying to keep their jobs somehow, and finding pseudo-justifications for it, so the rest of society lets them do that.
B. Specifically to your following point:
I don't think the math works out if / when AI companies dominate the economy, since they'll capture more and more of the economy unless tax rates are high enough that everyone else receives more through UBI than they're paying the AI companies.
Imagine it's really the AI companies where the scarcity rents, i.e. profits, occur (as mentioned, that's not at all clear). Imagine for simplicity that all humans still want TVs and cars, maybe plus metaverses, and that AI requires Nvidia cards. By scenario definition, AI produces everything; since in this example we assume it's not the ores that earn the scarcity rents, and the AIs are powerful at producing stuff from raw earth, we don't explicitly track intermediate goods other than the Nvidia cards, which the AIs also produce. Output would thus be:
AI output = 100 TVs, 100 cars, 100 Nvidia cards, 100 digital metaverses, say in $bn.
Taxes = profit tax = 50% (one could instead call it an income tax on AI owners; in reality it would all be a bit more complex, but overall that doesn't matter much).
AI profit = 300 (= all output minus the Nvidia cards)
People thus get $150bn; AI owners get $150bn as distributed AI profit after taxes.
People consume 50 TVs, 50 cars, 50 digital metaverses
AI owners also consume 50 TVs, 50 cars, 50 digital metaverses
So you have a 'normal' circular economy that works. Not entirely normal, of course: we have simplified so that AI requires not only no labor but also no raw resources (or none whose scarcity rent is captured by somebody else). You can easily extend it to more complex cases (the sketch below checks the arithmetic).
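A minimal sketch checking the arithmetic of the example above; the variable names are mine, and the figures are the example's, in $bn:

```python
# Check the toy circular economy: a 50% profit tax funds the people's income,
# and total income exactly buys the final goods. All figures in $bn.

output = {"TVs": 100, "cars": 100, "nvidia_cards": 100, "metaverses": 100}

gross_output = sum(output.values())            # 400
intermediate = output["nvidia_cards"]          # cards are an intermediate good
ai_profit = gross_output - intermediate        # 300

tax_rate = 0.5
people_income = tax_rate * ai_profit           # 150, paid out as UBI
owners_income = ai_profit - people_income      # 150, distributed after-tax profit

final_goods = gross_output - intermediate      # TVs + cars + metaverses = 300
assert people_income + owners_income == final_goods

print(f"People: ${people_income:.0f}bn -> 50 TVs, 50 cars, 50 metaverses")
print(f"Owners: ${owners_income:.0f}bn -> the other half")
# Incomes sum to the value of final goods, so the circular flow closes.
```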
In reality, of course, output will be adjusted, e.g. toward different goods the rich like to consume instead of thousands of TVs per rich person, as already happens today in many forms; what the rich will want to do with their wealth remains to be seen. Maybe fly around (real) space. Maybe get better metaverses. Or employ lots of machines to improve their body cells.
C. By the way, the "we'll just find other jobs" argument is imho indeed overrated, and I think the bias, especially among economists, is easily explained: they look at history (where economists have been spot on) without registering that in the future, machines will no longer augment brains but replace them.
comment by Hzn · 2025-01-02T09:20:40.804Z · LW(p) · GW(p)
It's not clear to me that the system wouldn't collapse. The number of demand-side, supply-side, cultural & political changes may be beyond the adaptive capacity of the system.
Some jobs would be maintained b/c of human preference. Human preference has many aspects, like customer preference, distrust of AI, networking, regulation, etc., so it is potentially quite substantial. (Efficiency is maybe also a factor; even if AI is superhumanly intelligent, the energy consumption & size of the hardware may still be an issue, especially for AI embodied in a robot.) But that still seems like huge job loss.
So as we head in that direction there's going to be job loss plus the fear of job loss; that's likely to pull down demand, leading to even more job loss. But it's not a typical demand-driven recession b/c 1) jobs are not expected to return, 2) there may be supply-side issues from the transition to AI, 3) there's a paradoxical disinclination to work b/c jobs are expected to disappear soon or b/c of 'UBI', & 4) there's cultural shock from AI & the ensuing events.
How bad could this be? A vicious cycle of culture, economics & politics can be quite vicious. The number of people who quit critical jobs before those jobs are properly automated is an important variable, and 'UBI' is not obviously helpful in that regard. (A toy simulation of the demand feedback is sketched below.)
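A toy simulation of that feedback, purely illustrative: the shock size and demand sensitivity are numbers I made up, and the point is only the shape of the dynamic, not its magnitude.

```python
# Toy job-loss feedback: automation removes jobs directly, and the resulting
# demand gap removes more. All parameters are made-up illustrations.

automation_shock = 0.05    # assume 5% of remaining jobs automated per period
demand_sensitivity = 0.5   # assumed extra job loss per unit of demand gap

employment = 1.0           # employment share, starting at 100%
for period in range(1, 11):
    employment *= 1 - automation_shock            # direct automation losses
    demand_gap = 1.0 - employment                 # lost income -> lost demand
    employment *= 1 - demand_sensitivity * demand_gap * automation_shock
    print(f"period {period:2d}: employment {employment:.1%}")
# The indirect channel compounds the direct shock; how vicious the cycle gets
# depends entirely on parameters nobody knows yet.
```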
Addendum
The comment concerns the transition to a post-ASI economy & possible failures along the way. Assuming that ASI already exists, as Satron has done, removes most of the interesting & relevant aspects of the question.
↑ comment by Seth Herd · 2025-01-02T10:58:59.407Z · LW(p) · GW(p)
Wait now, why do you think people will quit jobs because of fear of job loss? This was all too sensible until that turn.
I wonder if your comment is being downvoted over that puzzling turn, or because people just don't like the rest of the grim logic.
comment by Seth Herd · 2025-01-02T11:14:21.987Z · LW(p) · GW(p)
I second this question.
I have looked and found only the partial solution directions mentioned in the question, plus Satron's suggestion of having an ASI solve it, which also seems unlikely to work, for reasons I gave there.
Everything I've found looks disturbingly like copium and vague wishful thinking.