The case for multi-decade AI timelines [Linkpost]
post by Noosphere89 (sharmake-farah) · 2025-04-27T15:31:47.902Z · LW · GW · 9 comments
This is a link post for https://epoch.ai/gradient-updates/the-case-for-multi-decade-ai-timelines
This post argues that multi-decade timelines are reasonable, and that the key cruxes Ege Erdil has with most AI safety people who believe in short timelines come down to the following beliefs:
- Ege Erdil doesn't see any trends which, when extrapolated, imply AI automating everything in only 2-3 years.
- Ege Erdil doesn't believe the software-only singularity is likely to happen, and this is perhaps his most important crux with people like @Daniel Kokotajlo [LW · GW] who do think it's likely.
- Ege Erdil expects Moravec's paradox to bite hard once AI agents are deployed in a big way.
This is a pretty important crux: if it's true, more serial research agendas like Infra-Bayes research, Natural Abstractions work, and human intelligence augmentation can work, and political modeling (e.g. whether the US economy will be stable long-term) matters a great deal more than the LW/EA community recognizes.
Here's a quote from the article:
- I don’t see the trends that one would extrapolate in order to arrive at very short timelines on the order of a few years. The obvious trend extrapolations for AI’s economic impact give timelines to full remote work automation of around a decade, and I expect these trends to slow down by default.
- I don’t buy the software-only singularity as a plausible mechanism for how existing rates of growth in AI’s real-world impact could suddenly and dramatically accelerate by an order of magnitude, mostly because I put much more weight on bottlenecks coming from experimental compute and real-world data. This kind of speedup is essential to popular accounts of why we should expect timelines much shorter than 10 years to remote work automation.
- I think intuitions for how fast AI systems would be able to think and how many of them we would be able to deploy that come from narrow writing, coding, or reasoning tasks are very misguided due to Moravec’s paradox. In practice, I expect AI systems to become slower and more expensive as we ask them to perform agentic, multimodal, and long-context tasks. This has already been happening with the rise of AI agents, and I expect this trend to continue in the future.
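As a rough illustration of the kind of trend extrapolation the first bullet in the quote above is talking about, here is a minimal sketch in Python. The starting revenue, target revenue, and growth multipliers are placeholder assumptions of mine, not figures from the article:

```python
# Minimal sketch of a "no slowdown" revenue extrapolation (illustrative assumptions only).
import math

current_revenue = 20e9   # assumed current annual AI revenue, in USD
target_revenue = 10e12   # assumed annual value of fully automated remote work, in USD

for annual_growth in (2.0, 3.0):   # assumed revenue multiplier per year
    years = math.log(target_revenue / current_revenue) / math.log(annual_growth)
    print(f"At {annual_growth:.0f}x/year growth: ~{years:.0f} years to the target")
```

Under assumptions like these, the straight-line extrapolation lands somewhere around a decade; the disagreement is then about whether the trend holds or slows down.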
9 comments
Comments sorted by top scores.
comment by elifland · 2025-04-27T18:59:17.624Z · LW(p) · GW(p)
I still think full automation of remote work in 10 years is plausible, because it’s what we would predict if we straightforwardly extrapolate current rates of revenue growth and assume no slowdown. However, I would only give this outcome around 30% chance.
In an important sense I feel like Ege and I are not actually far off here. I'm at more like 65-70% on this. I think this overall recommends quite similar actions. Perhaps we have a more important disagreement regarding something like P(AGI within 3 years), for which I'm at approx. 25-30% and Ege might be very low (my probability mass is somewhat concentrated in the next 3 years due to an expectation that compute and algorithmic effort scaling will slow down around 2029 if AGI or close isn't achieved).
My guess is that this disagreement is less important to make progress on than disagreements regarding takeoff speeds/dynamics and alignment difficulty.
↑ comment by Noosphere89 (sharmake-farah) · 2025-04-27T19:05:50.247Z · LW(p) · GW(p)
I do think the difference between an AGI timeline median of 5 years and one of 20 years matters, because politics starts to affect whether we get AGI much more if we have to wait 20 years instead of 5, and serial alignment agendas make more sense if a 20-year median is reasonable.
Also, he argues against very fast takeoffs and the software-only singularity in the linked post.
comment by Vladimir_Nesov · 2025-04-27T16:00:38.452Z · LW(p) · GW(p)
I think the main crux with AI 2027 is the very possibility of a software-only singularity, rather than the details of the human economy or the details of the software-only singularity. For people who feel skeptical about it, it's hard to find legible arguments that it can happen, so they tend to lose interest in discussing its details and instead discuss other systems, such as the datacenter-relevant parts of the economy, which doesn't chip away at this crux of the disagreement. I talk more about this in another comment [LW(p) · GW(p)] in response to Ege Erdil's post.
A well-known example where a system different from human industry [LW(p) · GW(p)] matters more for the dynamics of AI takeoff is nanotech, which makes the inputs derived from human industry irrelevant for scaling compute.
Many people are skeptical of nanotech, but alternatively there's macroscopic biotech, designing animal robots assembled with metamorphosis from small mobile things analogous to fruit flies, which can double biomass every 1-3 days (an impossible level of growth compared to the human economy). This only depends on procuring power and raw materials (rather than human technology) and provides manipulation in the physical world to assemble mines and factories and such (possibly even directly growing biological compute). There is much more of a proof of concept for this kind of thing than for nanotech.
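As a toy illustration of why a 1-3 day doubling time is such an extreme growth rate, here is a quick sketch; the starting mass, the 60-day window, and the ~20-year doubling time for the human economy are illustrative assumptions, not numbers from the comment:

```python
# Toy comparison of doubling times (illustrative assumptions only).
initial_mass_kg = 1.0
doubling_time_days = 2            # middle of the 1-3 day range mentioned above
horizon_days = 60                 # roughly two months

doublings = horizon_days / doubling_time_days
final_mass_kg = initial_mass_kg * 2 ** doublings
print(f"{doublings:.0f} doublings in {horizon_days} days -> ~{final_mass_kg:.1e} kg from 1 kg")

# For contrast, assume the human economy doubles roughly every 20 years.
economy_doubling_days = 20 * 365
economy_growth = 2 ** (horizon_days / economy_doubling_days)
print(f"Economy-style growth over the same window: ~{economy_growth:.4f}x")
```

At a 2-day doubling time, 1 kg becomes on the order of a billion kilograms in two months, while an economy-style doubling time gives well under 1% growth over the same window.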
↑ comment by Noosphere89 (sharmake-farah) · 2025-04-27T16:36:51.931Z · LW(p) · GW(p)
Basically agree with this, but the caveat here is that fruit flies are pretty much pure instinct, while a lot of the nanotech that is imagined is more general-purpose than that.
But yeah, fruit flies are an interesting case where biotech has managed to get pretty insane doubling times, and if we can pack large amounts of effective compute into very small spaces, this would hugely support something like a software-only singularity.
Though I did highlight the possibility of a software-only singularity as the main crux in my post.
Many people are skeptical of nanotech
The best (in my opinion) cases for nanotech skepticism are from @Muireall [LW · GW] and @bhauth [LW · GW] below:
https://forum.effectivealtruism.org/posts/oqBJk2Ae3RBegtFfn/my-thoughts-on-nanotechnology-strategy-research-as-an-ea?commentId=WQn4nEH24oFuY7pZy [EA(p) · GW(p)]
https://muireall.space/nanosystems/
https://muireall.space/pdf/considerations.pdf#page=17
https://www.lesswrong.com/posts/FijbeqdovkgAusGgz/grey-goo-is-unlikely [LW · GW]
↑ comment by Vladimir_Nesov · 2025-04-27T17:13:52.960Z · LW(p) · GW(p)
Macroscopic biotech is not the software-only singularity (and doesn't depend on it being possible); it's a separate example of a system other than human industry that seems sufficient to create a rate of scaling completely implausible for the human economy (illustrating the crux with Ege Erdil's position: which system is relevant for timelines). A software-only singularity operates within an unchanging amount of raw compute, while a macroscopic biotech takeoff manufactures new compute much faster than human industry could be instructed to.
The "fruit flies" are the material substrate, packed with DNA that can specialize the cells to do their part in a larger organism reassembled from these fast-growing and mobile packages of cells, and so precise individual control over the "fruit flies" is unnecessary, only the assembled large biorobots will need to be controllable. The main issue is that macroscopic biotech probably won't be able to create efficient compute or fusion plants directly, but it can create integrated factories that manufacture these things (or non-biological robots) in more standard ways by reproducing the whole necessary supply chains starting with raw materials (or whatever is most convenient to transport) on site, plausibly extremely quickly. With enough robots, city-sized factories could be assembled overnight, and then reassembled again and again wholesale, as feedback from trying it out in the physical world reveals flaws in design.
↑ comment by faul_sname · 2025-04-27T21:49:17.461Z · LW(p) · GW(p)
- How long do you expect it would take to assemble a "biorobot"?
- How many serial attempts before the kinks in the process are worked out? Keep in mind that with biological components, things will try to eat and parasitize the things you build, so you have a moving target.
- When do you expect the first attempt?
- Does this timeline work for a parallel human-economy-level system spun up within the next 10 years?
comment by elifland · 2025-04-27T19:17:42.626Z · LW(p) · GW(p)
- It underrates the difficulty of automating the job of a researcher. Real world work environments are messy and contain lots of detail that are neglected in an abstraction purely focused on writing code and reasoning about the results of experiments. As a result, we shouldn’t expect automating AI R&D to be much easier than automating remote work in general.
I basically agree. The reason I expect AI R&D automation to happen before the rest of remote work isn't that I think it's fundamentally much easier, but that (a) companies will try to automate it before other remote-work tasks, and relatedly (b) companies have access to more data and expertise for AI R&D than for other fields.
comment by ryan_greenblatt · 2025-04-28T02:52:47.776Z · LW(p) · GW(p)
Another potential crux is that Ege's worldview seemingly doesn't depend at all on AIs which are much faster and smarter than any human. As far as I can tell, it doesn't enter into his modeling of takeoff (or of timelines to full automation of remote work, which partially depends on something more like takeoff).
On my views this makes a huge difference, because a large number of domains would go much faster with much more (serial and smarter) intelligence. My sense is that a civilization where the smartest human was today's median human, and where everyone's brain operated 50x slower[1], would in fact make technological progress much slower. Similarly, if AIs were as much smarter than the smartest humans as the smartest human is smarter than the median human, and also ran 50x faster than humans (and operated at greater scale than the smartest humans, with hundreds of thousands of copies all at 50x speed for over 10 million parallel worker equivalents, putting aside the advantages of serial work and intelligence), then we'd see lots of sectors go much faster.
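For concreteness, the parallel-worker-equivalent figure above is just copy count times speedup; the 200,000-copy count here is an assumed stand-in for "hundreds of thousands":

$$2 \times 10^{5}\ \text{copies} \times 50\ \text{(speedup)} = 10^{7}\ \text{parallel worker-equivalents.}$$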
My sense is that Ege bites the bullet on this and thinks that slowing everyone down wouldn't make a big difference, but I find this surprising. Or maybe his view is that parallelism is nearly as good as speed and intelligence, and that sectors naturally scale up parallel worker equivalents to match other inputs, so we're bottlenecked on some other inputs in the important cases.
Putting aside cases like construction etc where human reaction time being close enough to nature is important. ↩︎
comment by Noosphere89 (sharmake-farah) · 2025-04-27T23:40:37.896Z · LW(p) · GW(p)
@Vladimir_Nesov [LW · GW] I republished the post; you may reply to @faul_sname [LW · GW].