More on disambiguating "discontinuity"

post by Aryeh Englander (alenglander) · 2020-06-09T15:16:34.432Z


There have already been numerous posts and discussions related to disambiguating the term "discontinuity". Here is my attempt.

For the purposes of the following discussion I’m going to distinguish between:

(a) continuous vs. discontinuous progress in AI research, where "discontinuity" refers specifically to a sharp jump or change in the AI research progress curve relative to the previous trend;

(b) slow vs. fast rate of progress, referring to the steepness of the progress curve's slope, regardless of whether or not it is discontinuous; and

(c) long vs. short clock time, i.e., whether progress takes a long or short time in absolute terms rather than relative to previous trend lines.

What exactly counts as discontinuous / fast / short will depend on the purpose we are using these distinctions for, as discussed below.

There seem to be three or four primary AI-risk-related issues that depend on whether there will be a discontinuity / fast takeoff:

  1. Will we see AGI (or CAIS or TAI or whatever you want to call it) coming far enough ahead of time that we will be able to respond appropriately? This question in turn breaks down into two sub-questions: (a) Will we see AGI coming before it arrives? (i.e., will there be a “fire alarm for AGI,” as Eliezer calls it?) (b) If we do see it coming, will we have enough time to react before it’s too late?
  2. Will the feedback loops during the development of AGI be long enough that we will be able to correct course as we go?
  3. Is it likely that one company / government / other entity could gain enough first-mover advantage such that it will not be controllable or stoppable by other entities?

Let’s deal with each of these individually:

Thanks especially to Sammy Martin and Issa Rice for discussions of this post and for helping me to clarify my thinking on this.

1 comment


comment by Rohin Shah (rohinmshah) · 2020-06-18T02:03:32.898Z

Planned summary for the Alignment Newsletter:

This post considers three different kinds of “discontinuity” that we might imagine with AI development. First, there could be a sharp change in progress or the rate of progress that breaks with the previous trendline (this is the sort of thing examined by AI Impacts in “Discontinuous progress in history: an update”). Second, the rate of progress could either be slow or fast, regardless of whether there is a discontinuity in it. Finally, the calendar time could either be short or long, regardless of the rate of progress.

The post then applies these categories to three questions. Will we see AGI coming before it arrives? Will we be able to “course correct” if there are problems? Is it likely that a single actor obtains a decisive strategic advantage?