More on disambiguating "discontinuity"
post by Aryeh Englander (alenglander) · 2020-06-09T15:16:34.432Z · LW · GW · 1 comment
There have already been numerous posts and discussions related to disambiguating the term "discontinuity". Here is my attempt.
For the purposes of the following discussion I’m going to distinguish between three properties of AI research progress:
- (a) continuous vs. discontinuous progress, where discontinuity refers specifically to a sharp jump or change in the AI research progress curve relative to the previous curve;
- (b) slow vs. fast rate of progress, referring to the steepness of the progress curve's slope, regardless of whether or not it’s discontinuous; and
- (c) long vs. short clock time – i.e., whether progress takes a long or short time in absolute terms rather than relative to previous trend lines.

What exactly counts as discontinuous / fast / short will depend on what purpose we are using these distinctions for, as below.
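To make these distinctions concrete, here is a minimal toy sketch with arbitrary, made-up numbers and dates (nothing here is meant as an actual forecast), showing that (a), (b), and (c) can vary independently:

```python
import numpy as np

years = np.arange(2020, 2040)          # absolute (clock) time, in years

# (b) rate of progress: the growth rate of a continuous trend.
# (c) clock time: how many calendar years the run-up spans.
continuous_progress = 1.05 ** (years - 2020)

# (a) discontinuity: the same trend with a one-time 3x jump in 2030 --
# a break relative to the previous curve, even though the growth rate
# before and after the jump is unchanged.
discontinuous_progress = continuous_progress * np.where(years >= 2030, 3.0, 1.0)

# A curve can be steep (fast) yet perfectly continuous...
fast_continuous = 1.5 ** (years - 2020)

# ...and a jump can sit on top of an otherwise slow, drawn-out trend.
slow_with_jump = 1.01 ** (years - 2020) * np.where(years >= 2035, 2.0, 1.0)

# Year-over-year ratios: the discontinuity shows up as a single spike
# (3.15 at 2030) on top of an otherwise constant 1.05 growth factor.
print(np.round(discontinuous_progress[1:] / discontinuous_progress[:-1], 2))
```

The numbers are purely illustrative; the point is only that the three axes are independent – a curve can be steep but continuous, and a jump can occur on an otherwise slow, long timeline.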
There seem to be three primary AI-risk-related issues (the first of which splits into two sub-questions) that depend on whether or not there will be a discontinuity / fast takeoff:
- Will we see AGI (or CAIS or TAI or whatever you want to call it) coming far enough ahead of time that we will be able to respond appropriately at that point? This question in turn breaks down into two sub-questions: (a) Will we see AGI coming before it arrives? (I.e., will there be a “fire alarm for AGI,” as Eliezer Yudkowsky calls it?) (b) If we do see it coming, will we have enough time to react before it’s too late?
- Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go?
- Is it likely that one company / government / other entity could gain enough first-mover advantage such that it will not be controllable or stoppable by other entities?
Let’s deal with each of these individually:
- Question 1/a: Will we see AGI coming before it arrives? This seems to depend on all three types of discontinuity:
- If there’s discontinuous progress relative to the previous curve, then presumably that jump will act as a fire alarm (although it might be too late to do anything about it by then).
- If there’s a continuous but sufficiently fast rate of progress and/or a sufficiently short clock time in the lead-up to AGI, then that might act as a fire alarm, in the sense that people will see the world start going crazy due to sufficiently advanced AI and that will be a wake-up call.
- If progress is continuous AND sufficiently slow AND takes a sufficiently long time, then it seems quite plausible that people will get used to all the changes as they come, and they might not notice the progress that AI is making until it is too late.
- Question 1/b: If we do see it coming, will we have enough time to react before it’s too late?
- If the absolute (clock) time between the “fire alarm” and the first potentially-dangerous AGI is too short, then we will likely not be able to react in time, whereas if it’s long enough then we will probably be able to react in time.
- However, if progress is sufficiently continuous and/or slow that there are other very advanced AIs available to assist our research, then we could perhaps use those almost-AGIs to do a great deal of research in a short amount of absolute time.
- Question 2: Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go?
- If absolute clock time is very short, then humans will probably not be able to react fast enough.
- However, once again if progress is sufficiently continuous and/or slow, then there will likely be powerful almost-AGIs which could plausibly allow us to correct course very quickly.
- Question 3: Is it likely that one company / government / other entity could gain enough first-mover advantage that it will not be controllable or stoppable by other entities?
- If AI progress is discontinuous in the jump to AGI or immediately preceding that, and/or if it is discontinuous from AGI in the sense that the first AGI might recursively self-improve and go FOOM, then presumably yes. (However, if the discontinuity is a bit earlier in the lead-up to AGI, then that’s mostly irrelevant to this question.)
- If AI progress is continuous both to and from AGI but short in an absolute sense, then the answer is maybe, since companies or governments can presumably keep things secret for at least a few months and thereby gain a sufficient head start.
- If AI progress is continuous and takes long enough, then probably not, because there will be other nearly-as-powerful AIs that can help stop it.
Thanks especially to Sammy Martin and Issa Rice for discussions of this post and for helping me to clarify my thinking on this.
1 comment
comment by Rohin Shah (rohinmshah) · 2020-06-18T02:03:32.898Z · LW(p) · GW(p)
Planned summary for the Alignment Newsletter:
This post considers three different kinds of “discontinuity” that we might imagine with AI development. First, there could be a sharp change in progress or the rate of progress that breaks with the previous trendline (this is the sort of thing examined by AI Impacts in “Discontinuous progress in history: an update”). Second, the rate of progress could either be slow or fast, regardless of whether there is a discontinuity in it. Finally, the calendar time could either be short or long, regardless of the rate of progress.
The post then applies these categories to three questions. Will we see AGI coming before it arrives? Will we be able to “course correct” if there are problems? Is it likely that a single actor obtains a decisive strategic advantage?