Slowing AI: Crunch time

post by Zach Stein-Perlman · 2023-05-03T15:00:12.495Z · LW · GW · 1 comment

Contents

  What is more time good for?
  What is different later?
  How does buying time now trade off with buying time later?

It is often asserted that time will be more valuable and higher leverage in "crunch time," shortly before critical models are deployed.

This brief post provides a starting point for consideration of three questions:

  1. What is more time (before dangerous AI) good for?
  2. What is different near the end? That is, what makes actions higher (or lower) leverage, or makes adding time more (or less) valuable?
  3. How does buying time now trade off with buying time later?

What is more time good for?

What is different later?

Many differences depend on whether actors realize that we're near the end.

Roughly, crunch time is more valuable and higher leverage than the time before it. But this depends on the particular goal at issue, and the exact difference is unclear even for particular goals.

How does buying time now trade off with buying time later?

Largely it doesn't. For example, if policy regimes that slow dangerous AI existed now, that would mostly make similar regimes more likely to exist in the future.

In some cases, you can choose to prioritize slowing AI now or preparing to slow it later, both in your direct work and in how you exert influence.

Insofar as leading labs can choose to burn their lead now or later, burning it now prevents them from burning it later.

Slowing leading labs is roughly necessary and sufficient for slowing AI, but some possible ways of slowing leading labs would not slow other labs, or would slow them less.[1] Preserving lead time among labs (or equivalently, avoiding increasing multipolarity) is good because it helps leading labs slow down later, makes coordination among leading labs easier, and may decrease racing (though how "racing" works is unclear). So if you're slowing AI now, try to preserve lead time, to avoid trading off with slowing AI later.[2]
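
To make this concrete, here is a minimal toy sketch (purely illustrative, not from the post; the LeadingLab class and all numbers are hypothetical), treating a lab's lead as a fixed budget of months that can be burned now or later, but only once:

    # Toy model (illustrative only): a leading lab's lead over the
    # next-fastest lab, in months, is a fixed budget. Slowing down
    # "burns" lead; lead burned now is unavailable to burn later.
    from dataclasses import dataclass

    @dataclass
    class LeadingLab:
        lead_months: float  # current lead over the next-fastest lab

        def burn_lead(self, months: float) -> float:
            """Slow down by spending lead; returns months actually burned."""
            burned = min(months, self.lead_months)
            self.lead_months -= burned
            return burned

    lab = LeadingLab(lead_months=12.0)
    print(lab.burn_lead(8.0))  # burning 8 months of lead now -> 8.0
    print(lab.burn_lead(8.0))  # only 4 months remain for later -> 4.0
    print(lab.lead_months)     # 0.0: the budget is exhausted

Under this (admittedly crude) assumption, slowing now and slowing later draw on the same budget, which is the sense in which burning lead now prevents burning it later.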


This post expands on "Time is more valuable near the end" in "Slowing AI: Foundations." [LW · GW]

Thanks to Olivia Jimenez for discussion.

  1. ^

    For example, temporarily banning training runs larger than 10^25 FLOP would let non-leading labs catch up, so it would differentially slow leading labs. On the other hand, increasing the cost of compute would slow all labs roughly similarly, and decreasing the diffusion of ideas would differentially slow non-leading labs.

  2. ^

    Sidenote: for a policy proposal like a "mandatory pause on large training runs for some time," it's not obvious how much it would slow dangerous AI progress, nor how much it would differentially slow the leading labs (burning their lead time).

1 comment

comment by trevor (TrevorWiesinger) · 2023-05-03T17:55:03.090Z · LW(p) · GW(p)

Roughly, crunch time is more valuable and higher leverage than the time before it. But this depends on the particular goal at issue, and the exact difference is unclear even for particular goals.

This would potentially imply that [finding ways to just have more crunch time] would also be worth researching ahead of time. It's a difficult kind of future to forecast, but it could be incredibly valuable if someone, right now, successfully thinks of a reasonable way to keep crunch time in a stable state for a very long time.