"Slow" takeoff is a terrible term for "maybe even faster takeoff, actually"

post by Raemon · 2024-09-28T23:38:25.512Z · LW · GW · 15 comments


For a long time, when I heard "slow takeoff", I assumed it meant "takeoff that takes longer calendar time than fast takeoff" (i.e. what is now more often referred to as "short timelines" vs "long timelines"). I think Paul Christiano popularized the term, and it so happened that he expected both longer timelines and a smoother/continuous takeoff.

I think it's at least somewhat confusing to use the term "slow" to mean "smooth/continuous", because that's not what "slow" particularly means most of the time.

I think it's even more actively confusing because "smooth/continuous" takeoff not only could be faster in calendar time, but, I'd weakly expect this on average, since smooth takeoff means that AI resources at a given time are feeding into more AI resources, whereas sharp/discontinuous takeoff would tend to mean "AI tech doesn't get seriously applied towards AI development until towards the end."
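The claim that "smooth" can be faster in calendar time is easy to see with a toy model (my own illustration, not from the post; all numbers are arbitrary): a smooth trajectory compounds AI-driven gains every year, while a sharp trajectory sits near baseline until a late jump.

```python
# Toy model: "smooth" takeoff compounds continuously (AI resources feed
# back into AI development), while "sharp" takeoff stays flat until a
# late discontinuous jump. The smooth curve can cross a capability
# threshold in *less* calendar time, despite the "slow" label.

def smooth_takeoff(threshold, rate=0.5):
    """Years until capability crosses threshold, compounding yearly."""
    capability, years = 1.0, 0
    while capability < threshold:
        capability *= (1 + rate)
        years += 1
    return years

def sharp_takeoff(threshold, baseline_years=15, jump_factor=1000.0):
    """Years until threshold, with capability flat until a late jump."""
    capability, years = 1.0, 0
    while capability < threshold:
        years += 1
        capability = 1.0 if years < baseline_years else jump_factor
    return years

THRESHOLD = 100.0  # arbitrary "transformative" capability level
print(smooth_takeoff(THRESHOLD))  # crosses sooner (12 years here)...
print(sharp_takeoff(THRESHOLD))   # ...than the flat-then-jump path (15)
```

With these (made-up) parameters the "slow/smooth" path reaches the threshold three years earlier than the "fast/sharp" one, which is exactly the confusion the terminology invites.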

I don't think this is academic[1].

I think this has wasted a ton of time arguing past each other on LessWrong, and if "slow/fast" is the terminology that policy makers are hearing as they start to tune into the AI situation, it is predictably going to cause them confusion, at least waste their time, and quite likely lead many of them to approach the situation through misleading strategic frames that conflate smoothness and timelines.

Way back in Arguments about fast takeoff [LW(p) · GW(p)], I argued that this was a bad term, and proposed "smooth" and "sharp" takeoff as better terms. I'd also be fine with "hard" and "soft" takeoff. I think "hard/soft" has somewhat more historical use, and is maybe less likely to get misheard as "short", so maybe use those.[2]

I am annoyed that 7 years later people are still using "slow" to mean "maybe faster than 'fast'." This is stupid. Please stop. I think smooth/sharp and hard/soft are both fairly intuitive (at the very least, more intuitive than slow/fast, and people who are already familiar with the technical meaning of slow/fast will figure it out).

I would be fine with "continuous" and "discontinuous", but, realistically, I do not expect people to stick to those because they are too many syllables. 

Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.

Curves
The graph I posted in response to Arguments about fast takeoff [LW · GW]:

  1. ^

    a term that ironically means "pointlessly pedantic."

  2. ^

    the last time I tried to write this post, 3 years ago, I got stuck on whether to argue for smooth/sharp or hard/soft and then I didn't end up posting it at all and I regret that. 

15 comments

Comments sorted by top scores.

comment by Raemon · 2024-09-28T23:51:05.561Z · LW(p) · GW(p)

Okay, since I didn't successfully get buy-in for a particular term before writing this post, here's a poll to agree/disagree-vote on. (I'm not including Fast/Slow as an option by default, but you can submit other options here, and if you really want to fight for preserving it, seems fine.)


Replies from: Raemon, Benito, Raemon, ryan_greenblatt, habryka4, Raemon, Mathieu Putz, shankar-sivarajan
comment by Raemon · 2024-09-28T23:51:17.907Z · LW(p) · GW(p)

Smooth/Sharp takeoff

comment by Ben Pace (Benito) · 2024-09-29T03:20:38.627Z · LW(p) · GW(p)

Predictable/Unpredictable takeoff

comment by Raemon · 2024-09-29T03:02:48.638Z · LW(p) · GW(p)

Long/short takeoff

comment by ryan_greenblatt · 2024-09-29T02:41:00.314Z · LW(p) · GW(p)

Long duration/short duration takeoff

comment by habryka (habryka4) · 2024-09-29T02:13:26.020Z · LW(p) · GW(p)

Continuous/Discontinuous takeoff

comment by Raemon · 2024-09-28T23:51:26.229Z · LW(p) · GW(p)

Soft/Hard Takeoff

comment by Mathieu Putz · 2024-09-29T01:46:26.374Z · LW(p) · GW(p)

Gradual/hard

comment by ryan_greenblatt · 2024-09-29T02:36:02.441Z · LW(p) · GW(p)

I don't love "smooth" vs "sharp" because these words don't naturally point at what seem to me to be the key concept: the duration from the first AI capable of being transformatively useful [LW · GW] to the first system which is very qualitatively generally superhuman[1]. You can have "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I also care a lot about the duration from AIs which are capable enough to 3x R&D labor to AIs which are capable enough to strictly dominate (and thus obsolete) top human scientists but which aren't necessarily much smarter. (I also care some about the durations between a bunch of different milestones, and I'm not sure my operationalizations of the milestones are the best ones.)

Paul originally operationalized [LW · GW] this as seeing an economic doubling over 4 years prior to a doubling within a year, but I'd prefer for now to talk about the qualitative level of capabilities rather than also entangling questions about how AI will affect the world[2].
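Paul's criterion can be sketched as a check over an annual output series (my own illustrative sketch, with made-up numbers, not a forecast or anyone's actual model): "slow takeoff" predicts that a 4-year doubling completes before any 1-year doubling does.

```python
# Hypothetical sketch of Paul's operationalization: "slow takeoff" means
# world output completes a doubling over some 4-year window before it
# ever doubles within a single year. `gdp` is an annual series.

def first_doubling_end(gdp, window):
    """Index of the first year GDP reaches 2x its value `window` years prior."""
    for t in range(window, len(gdp)):
        if gdp[t] >= 2 * gdp[t - window]:
            return t
    return None

def is_slow_takeoff(gdp):
    """Paul's criterion: a 4-year doubling completes before any 1-year doubling."""
    four_year = first_doubling_end(gdp, window=4)
    one_year = first_doubling_end(gdp, window=1)
    if one_year is None:
        return True  # never doubles within a year: trivially "slow"
    return four_year is not None and four_year < one_year

# Illustrative series: steady ~20% growth, then an explosive jump.
gdp = [100 * 1.2**t for t in range(8)] + [1000, 3000]
print(is_slow_takeoff(gdp))  # the 4-year doubling lands first
```

Under steady 20% growth, the 4-year doubling completes (1.2^4 ≈ 2.07) well before any single year doubles output, so the criterion fires even though the series ends in an explosion.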

So, I'm tempted by "long duration" vs "short duration" takeoff, though this is pretty clumsy.


Really, there are a bunch of different distinctions we care about with respect to takeoff and the progress of AI capabilities:

  • As discussed above, the duration from the first transformatively useful AIs to AIs which are generally superhuman. (And between very useful AIs to top human scientist level AIs.)
  • The duration from huge impacts in the world from AI (e.g. much higher GDP growth) to very superhuman AIs. This is like the above, but also folding in economic effects and other effects on the world at large which could come apart from AI capabilities even if there is a long duration takeoff in terms of capabilities.
  • Software only singularity. How much the singularity is downstream of AIs working on hardware (and energy) vs just software. (Or if something well described as a singularity even happens.)
  • Smoothness of AI progress vs. jumpiness. As in, is progress driven by a larger number of smaller innovations and/or continuous scale-ups, rather than being substantially driven by a small number of innovations and/or large phase changes that emerge with scale?
  • Predictability of AI progress. Even if AI progress is smooth in the sense of the prior bullet, it may not follow a very predictable trend if the rate of innovations or scaling varies a lot.
  • Tunability of AI capability. Is it possible to get a full sweep of models which continuously interpolates over a range of capabilities?[3]

Of course, these properties are quite correlated. For instance, if the relevant durations for the first bullet are very short, then I also don't expect economic impacts until AIs are much smarter. And, if the singularity requires AIs working on increasing available hardware (software only doesn't work or doesn't go very far), then you expect more economic impact and more delay.


  1. One could think that there will be no delay between these points, though I personally think this is unlikely. ↩︎

  2. In short timelines, with a software only intelligence explosion, and with relevant actors not intentionally slowing down, I think I don't expect huge global GDP growth (e.g. 25% annualized global GDP growth rate) prior to very superhuman AI. I'm not very confident in this, but I think both inference availability and takeoff duration point to this. ↩︎

  3. This is a very weak property, though I think some people are skeptical of this. ↩︎

Replies from: Raemon
comment by Raemon · 2024-09-29T03:11:01.877Z · LW(p) · GW(p)

I think "long duration" is way too many syllables, and I have similar problems with this naming schema as with Fast/Slow. But if you were going to go with this naming schema, I think just saying "short takeoff" and "long takeoff" seems about as clear ("duration" comes implied, IMO).

I don't love "smooth" vs "sharp" because these words don't naturally point at what seem to me to be the key concept: the duration from the first AI capable of being transformatively useful [LW · GW] to the first system which is very qualitatively generally superhuman[1] [LW · GW]. You can have "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I'm not sure I buy the distinction mattering?

Here's a few worlds:

  1. Smooth takeoff to superintelligence via scaling the whole way, no RSI
  2. Smooth takeoff to superintelligence via a mix of scaling, algorithmic advance, RSI, etc
  3. smoothish-looking takeoff via scaling (like we currently see), but then suddenly the shape of the curve changes dramatically due to RSI or similar
  4. smoothish-looking takeoff via scaling like we see, and then RSI is the mechanism by which the curve continues, but not very quickly (maybe this implies the curve actively levels off S-curve style before eventually picking up again)
  5. alt-world where we weren't even seeing similar types of smoothly advancing AI, and then there's abrupt RSI takeoff in days or months
  6. alt-world where we weren't seeing similar smooth scaling AI, and then RSI is the thing that initiates our current level of growth

At least with the previous way I'd been thinking about things, I think the worlds above that look smooth, I feel like "yep, that was a smooth takeoff."

Or, okay, I thought about it a bit more, and maybe I agree that "time between first transformatively-useful AI and superintelligence" is a key variable. But I also think that variable is captured by saying "smooth takeoff/long timelines" (which is approximately what people are currently saying?).

Hmm, I updated towards being less confident while thinking about this.

comment by Thane Ruthenis · 2024-09-29T01:26:50.843Z · LW(p) · GW(p)

IMO, soft/smooth/gradual still convey the wrong impression. They still sound like "slow takeoff": they sound like progress would be steady enough that normal people would have time to orient to what's happening, keep track, and exert control. As you point out, that's not necessarily the case at all: from a normal person's perspective, this scenario may still look very sharp and abrupt.

The main difference in this classification seems to be whether AI progress occurs "externally", as part of economic and R&D ecosystems, or "internally", as part of an opaque self-improvement process within a (set of) AI system(s). (Though IMO there's a mostly smooth continuum of scenarios, and I don't know that there's a meaningful distinction/clustering at all.)

From this perspective, even continuous vs. discontinuous don't really cleave reality at the joints. The self-improvement is still "continuous" (or, more accurately, incremental) in the hard-takeoff/RSI case, from the AI's own perspective. It's just that ~nothing besides the AI itself is relevant to the process.

Just "external" vs. "internal" takeoff, maybe? "Economic" vs. "unilateral"?

Replies from: Raemon
comment by Raemon · 2024-09-29T01:44:18.400Z · LW(p) · GW(p)

I do agree with that, although I don't know that I feel the need to micromanage the implicature of the term that much. 

I think it's good to try to find terms that don't have misleading connotations, but also good not to fight too hard to control the exact political implications of a term, partly because there's not a clear cutoff between being clear and being actively manipulative (and not obvious to other people which you're being, esp. if they disagree with you about the implications), and partly because there's a bit of a red queen race of trying to get terms into common parlance that benefit your agenda, and, like, let's just not.

Fast/slow just felt actively misleading.

I think the terms you propose here are interesting but a bit too opinionated about the mechanism involved. I'm not that confident those particular mechanisms will turn out to be decisive, and don't think the mechanism is actually that cruxy for what the term implies in terms of strategy.

If I did want to try to give it the connotations that actually feel right to me, I might say "rolling*" as the "smooth" option. I don't have a great "fast" one.

*although someone just said they found "rolling" unintuitive so shrug.

comment by Cole Wyeth (Amyr) · 2024-09-28T23:47:07.486Z · LW(p) · GW(p)

If you’re trying to change the vocabulary you should have settled on an option.

Replies from: Raemon
comment by Raemon · 2024-09-28T23:48:19.403Z · LW(p) · GW(p)

I know, but the last time I worried about that I ended up not writing the post at all, and it seemed better to make sure I published anything at all.

(edit: made a poll so as not to fully abdicate responsibility for this problem tho)