A breakdown of AI capability levels focused on AI R&D labor acceleration

post by ryan_greenblatt · 2024-12-22T20:56:00.298Z · LW · GW · 5 comments

Contents

  What do I think these levels of capability look like?
  Alternative capability breakdowns
    t-AGI
    Anthropic's ASL levels
    Purely qualitative breakdowns
    Total AI progress speedup or total AI software progress (including compute bottlenecks)
  Will all these levels be passed at once?
  Conclusion

In a variety of conversations about AI misalignment risks, I find that it is important to be able to clearly point at different levels of AI capability. My current favorite approach is to talk about how much the AI accelerates AI R&D[1] labor.

I define acceleration of AI R&D labor by Y times as "the level of acceleration which is as useful (for making more powerful AIs) for an AI company as having its employees run Y times faster[2] (when you allow the total inference compute budget for AI assistance to be equal to total salaries)". Importantly, a 5x AI R&D labor acceleration won't necessarily mean that research into making AI systems more powerful happens 5x faster, as this just refers to increasing the labor part of the production function, and compute might also be an important input.[3] This doesn't include acceleration of hardware R&D (as a pragmatic simplification).
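
As a rough illustration of how labor acceleration and overall research acceleration can come apart, here is a minimal sketch assuming a Cobb-Douglas production function for AI software progress (both the functional form and the compute share are illustrative assumptions, not claims about the true production function):

```python
# Illustrative sketch: why 5x AI R&D labor acceleration need not mean 5x faster progress.
# The Cobb-Douglas form and the compute share of 0.4 are assumptions for illustration only.

def progress_rate(labor: float, compute: float, compute_share: float = 0.4) -> float:
    """Rate of AI software progress as a function of labor and compute inputs."""
    return labor ** (1 - compute_share) * compute ** compute_share

baseline = progress_rate(labor=1.0, compute=1.0)
accelerated = progress_rate(labor=5.0, compute=1.0)  # 5x labor, compute held fixed

print(accelerated / baseline)  # ~2.6x overall, not 5x, because compute is a bottleneck
```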

Further, when I talk about AIs that can accelerate AI R&D labor by some factor, that means after being given some reasonable amount of time for human integration (e.g., 6 months) and given broad usage (but keeping fine-tuning and elicitation fixed during this integration time).

Why might this be a good approach? Because ultimately what we're worried about is AIs which can greatly accelerate R&D in general, and AI R&D in particular is worth focusing on as it could yield much faster AI progress [LW · GW], quickly bringing us to much greater levels of capability.

Why not just talk about the overall acceleration of AI progress (i.e., increases in the rate of effective compute increases as discussed in the Anthropic RSP) rather than just the labor input into AI R&D? Because for most misalignment-related discussions, I'd prefer to talk about capability levels mostly independent of exogenous factors that determine how useful that level of capability actually ends up being (i.e., independent from the extent to which compute is a bottleneck to AI research or the fraction of progress driven by scaling up hardware rather than algorithms). Rather than talking about overall AI progress or software progress labor acceleration, we could talk about the overall acceleration of just AI software progress (just algorithms, not compute increases)[4], but this just adds the potential for compute bottlenecks without much benefit in discussions related to technical measures of misalignment. AI R&D labor acceleration doesn't fully avoid exogenous factors, but it avoids many such factors while still getting at a relevant and specific task.

I'll compare this approach to several alternatives later.

So, now we can talk about levels of capability like "3x AI R&D labor AIs". I'll call such systems "3x AIs" as shorthand.

Beyond discussing AI R&D labor acceleration, I think it is often useful to talk about the point when human cognitive labor is totally obsolete. Thus, I think it also makes sense to separately talk about Top-human-Expert-Dominating AI (TEDAI): AIs which strictly dominate top human experts[5] in virtually all cognitive tasks (i.e., doable via remote work) while being at least 2x faster[6] and within a factor of 5 on cost[7]. It is very unclear what level of AI R&D labor acceleration would occur with such systems, and this would be heavily dependent on factors like cost, speed, and the parallelizability of research[8]. Sometimes the term AGI is defined such that TEDAI is equivalent to AGI, but I think defining a different precise term is useful for clarity.
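
For concreteness, here is a minimal sketch of the TEDAI criteria as stated above (the field names are hypothetical, and treating "virtually all" as a check over every listed task is a simplification):

```python
from dataclasses import dataclass

@dataclass
class TaskComparison:
    """AI vs. a top human expert (with access only to Jan 2023 AIs) on one cognitive task."""
    ai_dominates_quality: bool  # AI strictly dominates the expert's output quality
    ai_speed_multiplier: float  # how many times faster the AI completes the task
    ai_cost_multiplier: float   # AI cost divided by the expert's cost for the task

def is_tedai(tasks: list[TaskComparison]) -> bool:
    """Check the TEDAI criteria: dominate top experts on (virtually) all cognitive tasks
    while being at least 2x faster and within a factor of 5 on cost."""
    return all(
        t.ai_dominates_quality and t.ai_speed_multiplier >= 2.0 and t.ai_cost_multiplier <= 5.0
        for t in tasks
    )
```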

Beyond the level of TEDAI, it can be worth pointing at very generally superhuman AIs: AIs which are generally qualitatively much more capable than humans and greatly dominate humans in virtually all cognitive tasks (while being faster). This level of capability is much less precise, and it is very hard to say much at all about such systems.

Now, we can talk about the following levels of capability:

- Nx AI R&D labor AIs (e.g., 3x AIs, 10x AIs): AIs which accelerate AI R&D labor by the given factor.
- Top-human-Expert-Dominating AI (TEDAI), as defined above.
- Very generally superhuman AIs, as defined above.

(Thanks to Ajeya Cotra, Cody Rushing, Eli Lifland, Nate Thomas, Zach Stein-Perlman, Buck Shlegeris, and Claude 3.5 Sonnet for feedback on this post.)

What do I think these levels of capability look like?

Now that I've outlined these levels of capability, we can discuss what they might look like and what the rough conversion into other frameworks (like t-AGI) might be. I'll make some rough guesses here.

My sense is:

My qualitative guesses are focused on something like a nearcast, with more weight on timelines where AI approaches haven't massively changed from where current approaches appear to be heading. This is because other cases are much harder to say anything about (and probably involve longer timelines).

Alternative capability breakdowns

t-AGI

I have two main problems with t-AGI:

Anthropic's ASL levels

These aren't defined above ASL-3, and the intention is that they will be defined with respect to the necessary level of mitigations (which in my opinion seems likely to focus on security). I've run into some cases where confusion about how ASL levels will end up being defined has caused issues with communication.

Purely qualitative breakdowns

Above, I describe qualitative intelligence of different systems. I expect that people will disagree radically about this (and already do). This is certainly hard to operationalize regardless. So, while this is often worth referencing, I don't think it should be the default approach to discussing capability levels.

Total AI progress speedup or total AI software progress (including compute bottlenecks)

As discussed above, I'm worried that total AI progress speedup pulls in a bunch of exogenous factors that people often disagree about. A similar issue related to compute bottlenecks applies if you consider the overall AI software progress speedup (rather than merely the labor input into it).

Will all these levels be passed at once?

I expect takeoff to be slow enough that we see 3x AIs more than a year before very generally superhuman AIs, but it is unclear how slowly/smoothly we'll progress through levels of AI R&D labor acceleration by default. Additionally, adoption delays make the picture more complex. Nonetheless, to the extent you are interested in discussing whether various mitigations would work at different levels of capability, I think AI R&D labor acceleration can be useful for this.

Conclusion

The AI R&D labor acceleration framework seems like a good approach for measuring and discussing AI capabilities, particularly when discussing misalignment risk and mitigations. It compromises between a focus on the downstream implications of a capability level and a more qualitative measurement of capability, while still being relatively precisely defined.


  1. I use AI R&D, but I expect these numbers would probably transfer fine to any sort of R&D that can be done digitally (in software), which is as measurable as AI R&D, and which the AIs are optimized for as much as AI R&D. ↩︎

  2. Relative to only having access to AI systems publicly available in January 2023. ↩︎

  3. You can also think about this as roughly being: "consider the subset of tasks that aren't bottlenecked by delays/costs in the environment (e.g., not bottlenecked by compute); how much can AIs accelerate people on average?". ↩︎

  4. Sometimes "software progress overall acceleration" is referred to as "software progress productivity acceleration", but I find "overall" clearer than "productivity". ↩︎

  5. That is, top human experts with access only to AIs available by January 2023. This is done to avoid the edge case where the human mostly or fully defers to an AI system, such that comparing to humans is just comparing the AI to itself. It also avoids comparing to future humans who are substantially augmented by AIs, which could be misleading when thinking about the capability threshold and would overall make this harder to reason about. ↩︎

  6. That is, 2x faster at accomplishing the tasks. ↩︎

  7. This post originally said 2x cheaper, but I realized this operationalization has an issue: once AIs dominate top human experts, we would eventually expect human wages to drop and compute costs to rise until employers are close to indifferent (at least for usages that don't require trust, and putting aside wage stickiness). One alternative way to operationalize this would be to fix compute prices and wages at the levels we would expect putting aside the effect of the AI automating labor (e.g., extrapolating out compute costs and wages based on earlier trends) and then say "2x cheaper". ↩︎

  8. Beyond human obsolescence, I think it generally becomes less helpful to talk about AI R&D labor acceleration when trying to point at different levels of capability for discussion about misalignment risks and mitigations. Partially this is because our understanding of what the systems will look like gets even worse after human obsolescence. ↩︎

5 comments


comment by Wis3man · 2024-12-28T15:00:44.352Z · LW(p) · GW(p)

I think the compute bottleneck tax is probably around 25% with this acceleration (really, we should think about this in terms of tax brackets).


Please could you expand on why we should think in terms of tax brackets? 

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2024-12-29T00:59:08.954Z · LW(p) · GW(p)

I expect that as you increase AI R&D labor acceleration, compute becomes a larger and larger bottleneck. So, the first doubling of acceleration has less of a compute bottleneck tax than the 4th doubling.

This can be equivalently thought of in terms of "tax brackets" though perhaps this was a confusing way to put it.
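
As a minimal sketch of this framing (the per-doubling tax rates below are made-up numbers for illustration):

```python
# Illustrative "tax brackets" for the compute bottleneck: each successive doubling of AI R&D
# labor acceleration pays a higher marginal tax, so overall progress grows by less than labor.
# The tax rates here are made up for illustration.

def overall_speedup(labor_doublings: int, taxes: list[float]) -> float:
    """Overall progress speedup after n doublings of labor acceleration, where the i-th
    doubling's contribution is reduced by that doubling's marginal tax rate."""
    speedup = 1.0
    for i in range(labor_doublings):
        tax = taxes[min(i, len(taxes) - 1)]
        speedup *= 2 ** (1 - tax)  # a taxed doubling contributes less than a full 2x
    return speedup

print(overall_speedup(1, [0.25, 0.4, 0.55, 0.7]))  # ~1.7x progress from 2x labor
print(overall_speedup(4, [0.25, 0.4, 0.55, 0.7]))  # ~4.3x progress from 16x labor
```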

comment by Martín Soto (martinsq) · 2024-12-23T17:40:24.753Z · LW(p) · GW(p)

I have two main problems with t-AGI:

A third one is a definitional problem exacerbated by test-time compute: what does it mean for an AI to succeed at task T (which takes humans X hours)? Maybe it only succeeds when an obscene amount of test-time compute is poured in. It seems unavoidable to define things in terms of resources, as you do.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2024-12-28T23:33:45.504Z · LW(p) · GW(p)

The range of capabilities between what can be gained at a reasonable test-time cost and at an absurd cost (but in reasonable time) can remain small, with most improvements to the system exceeding this range and thus moving what previously could only be obtained at an absurd cost into the reasonable range. This is true right now (for general intelligence), and it could well remain true until the intelligence explosion.

comment by Tao Lin (tao-lin) · 2024-12-23T00:08:24.733Z · LW(p) · GW(p)

The AI generally feels as smart as a pretty junior engineer (bottom 25% of new Google junior hires)

I expect it to be smarter than that. Plausibly o3 now generally feels as smart as 60th percentile Google junior hires.