Distinguishing definitions of takeoff
post by Matthew Barnett (matthew-barnett) · 2020-02-14T00:16:34.329Z · LW · GW
I find discussions about AI takeoff to be very confusing. Often, people will argue for "slow takeoff" or "fast takeoff," and when I ask them to operationalize what those terms mean, they end up saying something quite different from what I thought those terms meant.
To help alleviate this problem, I aim to compile the definitions of AI takeoff that I'm currently aware of, with an emphasis on definitions that have clear specifications. I will continue updating the post as long as I think it serves as a useful reference for others.
In this post, an AI takeoff can be roughly construed as "the dynamics of the world associated with the development of powerful artificial intelligence." These definitions characterize different ways that the world can evolve as transformative AI is developed.
Foom/Hard takeoff
The traditional hard takeoff position, or "Foom" position (these appear to be equivalent terms), was characterized in this post [LW · GW] from Eliezer Yudkowsky. It contrasts with Hanson's takeoff scenario by emphasizing local dynamics: rather than a population of artificial intelligences coming into existence, there would be a single intelligence that quickly reaches a level of competence outstripping the world's capability to control it. The proposed mechanism for such a dynamic is recursive self-improvement [LW · GW], though Yudkowsky later suggested that this wasn't necessary.
Yudkowsky defended the ability of recursive self-improvement to induce a hard takeoff in Intelligence Explosion Microeconomics, and argued against Robin Hanson in the AI Foom debates. Watch this video to see the live debate.
Given the word "hard" in this notion of takeoff, a "soft" takeoff could simply be defined as the negation of a hard takeoff.
Hansonian "slow" takeoff
Robin Hanson objected to hard takeoff by predicting that growth in AI capabilities would not be extremely uneven between projects. In other words, there is unlikely to be one AI project, or even a small set of AI projects, that produces a system outstripping the abilities of the rest of the world. While he rejects Yudkowsky's argument, it would be inaccurate to say that Hanson expected growth in AI capabilities to be slow.
In Economic Growth Given Machine Intelligence, Hanson argues that AI-induced growth could cause GDP to double on a timescale of months. Very high economic growth would mark a radical transition to a faster mode of technological progress and capabilities, something that Hanson argues is entirely precedented in human history.
The technology that Hanson envisions inducing fast economic growth is whole brain emulation, which he wrote a book about. In general, Hanson rejects the framework in which AGI is seen as an invention that occurs at a particular moment in time: instead, AI should be viewed as an input to the economy (like electricity, though the considerations may be different).
Bostromian takeoffs
Nick Bostrom appeared to set aside much of the terminology from the AI Foom debate in order to invent his own. In Superintelligence he characterizes three types of AI capability growth modes, defined by the clock-time (real physical time) from when a system is roughly human-level to when it is strongly superintelligent, defined as having "a level of intelligence vastly greater than contemporary humanity's combined intellectual wherewithal."
Some have objected to Bostrom's use of clock-time to define takeoff, instead arguing that work required to align systems [LW · GW] is a better metric (though harder to measure).
Slow
A slow takeoff is one that occurs over a timescale of decades or centuries. Bostrom predicted that this timescale would allow institutions, such as governments, to react to new AI developments. It would also allow incrementally more powerful technologies to be tested without incurring the existential risks associated with testing.
Fast
A fast takeoff is one that occurs over a timescale of minutes, hours, or days. Given such a short time to react, Bostrom believes that the local dynamics of the takeoff become relevant, as was the case in Yudkowsky's foom scenario.
Moderate
A moderate takeoff is situated between slow and fast, and occurs on the timescale of months or years.
Continuous takeoff
Continuous takeoff was defined, and partially defended, in my post [LW · GW]. Its meaning primarily derives from Katja Grace's post on discontinuous progress around the development of AGI. In that post, Grace characterizes discontinuities as follows:
We say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.
In my post, I extrapolate this concept and invert it, using terminology that I saw Rohin use in this Alignment Newsletter edition [LW · GW], and define continuous takeoff as
A scenario where the development of competent, powerful AI follows a trajectory that is roughly in line with what we would have expected by extrapolating from past progress.
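To make this concrete, here is a minimal sketch of how Grace's discontinuity measure could be computed. This is my own illustration, not AI Impacts' code: it assumes past progress is roughly linear (the actual methodology uses judgment about which trend to extrapolate), and the data are made up. Under continuous takeoff as defined above, this number should stay small for the relevant progress metrics.

```python
import numpy as np

def discontinuity_size_in_years(years, metric, new_year, new_value):
    """How many years of past progress (at the extrapolated past rate) would
    have been needed to produce the jump to `new_value`? A rough stand-in for
    the AI Impacts measure; assumes past progress is roughly linear."""
    slope, intercept = np.polyfit(years, metric, deg=1)  # past rate of progress
    expected = slope * new_year + intercept              # value extrapolated from the trend
    surplus = max(new_value - expected, 0.0)             # progress above the trend
    return surplus / slope                               # surplus expressed in years of past progress

# Hypothetical data: 1 unit of progress per year for 20 years, then a new
# result that lands 12 units above the extrapolated trend.
past_years = np.arange(2000, 2020)
past_metric = (past_years - 2000).astype(float)
print(discontinuity_size_in_years(past_years, past_metric, 2020, 32.0))  # ~12.0
```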
Gradual/incremental takeoff?
Some people objected [LW(p) · GW(p)] to my use of the word continuous, as they found the words gradual or incremental to be more descriptive and mathematically accurate. After all, a function can be continuous without being at all gradual.
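For example, a logistic curve with a very large steepness parameter is continuous everywhere, yet nearly all of its change happens within a tiny interval around zero:

$$f(x) = \frac{1}{1 + e^{-100x}}$$

There is no mathematical discontinuity in f, but describing its transition from roughly 0 to roughly 1 as "gradual" would be a stretch.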
Additionally, if you agree with Hanson's thesis that history can be seen as a series of economic growth modes, each faster than the last, then continuous takeoff as plainly defined is in trouble. That's because technological progress from 1800-1900 was much faster than technological progress from 1700-1800. Therefore, "extrapolating from past progress" would have provided an incorrect estimate of progress for someone who did not foresee the Industrial Revolution. In general, extrapolating from past progress is hard because it depends on the reference class [LW · GW] you are using to forecast.
Paul slow takeoff
Paul Christiano argues that we should characterize takeoff in terms of economic growth rates (similar to Hanson) but uses a definition that emphasizes how quickly the economy transitions into a period of higher growth. He defines slow takeoff as
There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)
and defines fast takeoff as the negation of the above statement. Note that this definition leaves open a third possibility: you could believe that world output will never double during a 1-year interval, a position I would refer to as "no takeoff" and which I explain next.
Paul's outline of slow takeoff shares some of its meaning with continuous takeoff, because under a slow transition to a higher growth mode, change won't be sudden.
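As a concrete, purely illustrative sketch of the criterion, the following function classifies a hypothetical annual world-output series as Paul-slow, Paul-fast, or no takeoff. The function name and trajectory data are made up for the example; the definition itself, of course, concerns the actual future trajectory of the world economy.

```python
def classify_takeoff(output):
    """Classify an annual world-output series under Paul Christiano's criterion:
    'slow' if a complete 4-year doubling finishes before the first 1-year
    doubling, 'fast' if a 1-year doubling comes first, and 'none' if output
    never doubles within a single year."""
    def first_doubling_end(span):
        # Index of the first year at which output has doubled over `span` years.
        for t in range(span, len(output)):
            if output[t] >= 2 * output[t - span]:
                return t
        return None

    four_year = first_doubling_end(4)
    one_year = first_doubling_end(1)
    if one_year is None:
        return "none"  # output never doubles in a year: "no takeoff"
    if four_year is not None and four_year < one_year:
        return "slow"  # a full 4-year doubling completed first
    return "fast"

# Hypothetical trajectory: growth accelerates from 20%/yr to 120%/yr, so a
# 4-year doubling (years 0-4) completes before the first 1-year doubling.
trajectory = [100.0]
for growth in [1.20] * 8 + [2.20] * 2:
    trajectory.append(trajectory[-1] * growth)
print(classify_takeoff(trajectory))  # 'slow'
```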
No takeoff
"No takeoff" is essentially my term for the belief that world economic growth rates won't accelerate to a very high level (perhaps >30% real GDP growth rate in one year) following the development of AI. William Macaskill is a notable skeptic [EA(p) · GW(p)] of AI takeoff. I have created this Metaculus question to operationalize the thesis.
The Effective Altruism Foundation wrote this post suggesting that peak economic growth rates may lie in the past. From an outside-view perspective, this position may be reasonable: economic growth rates have slowed since the 1960s despite the rise of personal computers and the internet, technologies that, ahead of time, we might naively have predicted would be transformative.
This position should not be confused with the idea that humanity will never develop superintelligent computers, though that scenario is compatible with no takeoff.
Drexler's takeoff
Eric Drexler argues in Comprehensive AI Services (CAIS) that future AI will be modular: there is unlikely to be a single system that can perform a diverse set of tasks all at once before there are individual systems that can perform each of those tasks more competently than the single system could. This idea shares groundwork with Hanson's objection to a local takeoff. The reverse of this scenario is what Hanson calls "lumpy" AI, in which single agentic systems outcompete a collection of services.
Drexler uses the CAIS model to argue against a binary characterization of self-improvement. Just as technology already feeds into itself, so that the world can already be seen as recursively self-improving, future AI research could feed into itself as recursive technological improvement [LW · GW], without any necessary focus on single systems improving themselves.
In other words, rather than viewing AIs as either self-improving or not, self-improvement can be seen as a continuum ranging from "the entire world works to improve a system" at one end to "a single local system improves only itself, with outside forces providing minimal benefit to growth in capabilities" at the other.
Baumann's soft takeoff
In this post, Tobias Baumann argues that we should operationalize soft takeoff in terms of how quickly the fraction of global economic activity attributable to autonomous AI systems rises. "Time" here is not necessarily clock-time, as it was in Bostrom's takeoff. It can also refer to economic time, a measure of time that adjusts for the rate of economic growth, or political time, a measure that adjusts for the rate of social change.
He explains that this operationalization avoids the pitfalls of definitions that rely on moments in time when AI reaches thresholds such as "human-level" or "superintelligent." He argues that AI is likely to surpass human abilities in some domains but not in others, rather than surpass us in all ways all at once.
Robin Hanson appears to agree [LW(p) · GW(p)] with a similar measure for AI progress.
Less common definitions
Event Horizon/Epistemic Horizon
In 2007, Yudkowsky outlined the three schools of singularity, which was perhaps the state of the art for takeoff discussions at the time. In it he included his own scenario (Foom), along with the Event Horizon and Accelerating Change.
The Event Horizon hypothesis can be seen as an extrapolation of Vernor Vinge's definition of the technological singularity. It is defined as a point in time after which current models of future progress break down, which is essentially the opposite of the definition of continuous takeoff.
An epistemic horizon would be relevant for decision making because it implies that AI progress could come suddenly, without warning. If this were true, then safety guarantees that assume a continuous takeoff scenario would fail. Furthermore, even if we could predict rapid change ahead of time, social pressures might keep people from acting until it's too late, a position argued for in There’s No Fire Alarm for Artificial General Intelligence.
(Note, I see a lot of people interpreting the Fire Alarm essay as merely arguing that we can't predict rapid progress before it's too late. The essay itself dispels this interpretation, "When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.")
Accelerating change
Continuing the discussion from the three schools of singularity, this version of AI takeoff is most closely associated with Ray Kurzweil. Accelerating change is characterized by AI capability trajectories that follow smooth exponential curves. It shares with continuous takeoff the predictability of AI developments, but is narrower and makes much more specific predictions.
Individual vs. collective takeoff
Kaj Sotala has used the words "individual takeoff" vs. "collective takeoff" which I think are roughly synonymous with the local vs. global distinction provided by the Foom debate. Other words that often come up are "distributed" and "diffuse", "unipolar" vs "multipolar", and "decisive strategic advantage."
Goertzel's semihard takeoff
I can't say much about this one except that it falls somewhere between soft and hard takeoff.
Further reading
A Contra Foom Reading List and Reflections on Intelligence from Magnus Vinding
Self-improving AI: an Analysis, from John Storrs Hall
How sure are we about this AI stuff? [? · GW], from Ben Garfinkel
Can We Avoid a Hard Takeoff from Vernor Vinge
6 comments
comment by Mati_Roy (MathieuRoy) · 2020-02-19T04:23:51.949Z · LW(p) · GW(p)
trying to put this in my own words to remember it
so different axes for take-off dynamics include:
- time span: physical time, economic time, political time, AI time (development speed of front runners over others)
- shape of the take-off curve, ex.: exponential, S-curves, linear, etc.
- monopolistic effect: do front runners become less likely to be outcompeted as they grow? how many large players will there be? also: how will this change? ex.: it could be that AI doesn't have strong monopolistic effect until you reach a certain level
measurement to quantify:
- AI progress: GDP, decisive strategic advantage
- AI progress speed: time until AI having more power than the rest of humanity / time until solving the control problem
related:
- impact on forecasting capabilities
comment by Davidmanheim · 2020-02-18T19:13:30.601Z · LW(p) · GW(p)
This is a fantastic set of definitions, and it is definitely useful. That said, I want to add something to what you said near the end. I think the penultimate point needs further elaboration. I've spoken about "multi-agent Goodhart" in other contexts, and discussed why I think it's a fundamentally hard problem, but I don't think I've really clarified how I think this relates to alignment and takeoff. I'll try to do that below.
Essentially, I think that the question of multipolarity versus individual or collective takeoff is critical, as (to me) it is the most worrying scenario for alignment.
Individual or collective vs. Multipolar takeoff
Individual takeoff implies that a coherent, agentic system is being improved or is accelerating, where takeoff could be defined either by economic growth (a single company or system accounting for a majority of humanity's economic output) or by a foom or similar scenario. Collective takeoff would imply that a set of agentic systems is accelerating in ways that are (in the short term) non-competitive. If humanity as a whole benefits widely from greatly increased economic growth, at some point even doubling in a year, yet there is no single dominant system, this would be a collective takeoff.
Multipolar takeoff, however, is a scenario where systems are actively competing in some domain. It seems plausible that competition of this sort would provide incentives for rapid improvement that could affect even non-agentic systems like Drexler's CAIS. Alternatively, or additionally, improvement could be enabled by feedback from competition with peer or near-peer systems. (This seems to be the way humans developed intelligence, and so it seems a priori worrying.) In either case, this type of takeoff could involve zero- or negative-sum interaction between systems.

If a single winner emerged quickly enough to prevent destructive competition, it would be the "evolutionary" winner, with goals aligned with success. For that reason, it seems implausible to me that it would be aligned with humanity's interests as a whole. If no winner emerged, it seems that convergent instrumental goals combined with rapidly increasing capabilities would lead to, at best, a Hansonian Em-scenario, where systems respect property and other rights, but all available resources are directed towards competition, and systems expand to take over resources until the marginal cost of expansion equals the marginal benefit. It seems implausible that, in a takeoff scenario, competition reaching this point would leave significant resources for the remainder of humanity, likely at least wasting our cosmic endowment. If the competition turned negative-sum, there could be even faster races to the bottom, leading to worse consequences.
comment by Rohin Shah (rohinmshah) · 2020-03-17T18:40:31.802Z · LW(p) · GW(p)
Planned summary for the Alignment Newsletter:
This post lists and explains several different "types" of AI takeoff that people talk about. Rather than summarize all the definitions (which would only be slightly shorter than the post itself), I'll try to name the main axes that definitions vary on (but as a result this is less of a summary and more of an analysis):
1. _Locality_. It could be the case that a single AI project far outpaces the rest of the world (e.g. via recursive self-improvement), or that there will never be extreme variations amongst AI projects across all tasks, in which case the "cognitive effort" will be distributed across multiple actors. This roughly corresponds to the Yudkowsky-Hanson FOOM debate, and the latter position also seems to be that taken by <@CAIS@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@).
2. _Wall clock time_. In Superintelligence, takeoffs are defined based on how long it takes for a human-level AI system to become strongly superintelligent, with "slow" being decades to centuries, and "fast" being minutes to days.
3. _GDP trend extrapolation_. Here, a continuation of an exponential trend would mean there is no takeoff (even if we some day get superintelligent AI), a hyperbolic trend where the doubling time of GDP decreases in a relatively continuous / gradual manner counts as continuous / gradual / slow takeoff, and a curve which shows a discontinuity would be a discontinuous / hard takeoff.
Planned opinion:
I found this post useful for clarifying exactly which axes of takeoff people disagree about, and also for introducing me to some notions of takeoff I hadn't seen before (though I haven't summarized them here).
comment by Pattern · 2020-02-14T17:45:10.829Z · LW(p) · GW(p)
The Event Horizon hypothesis could be seen as an extrapolation of Vernor Vinge's definition of the technological singularity. It is defined as a point in time after which current models of future progress break down, which is essentially the opposite definition of continuous takeoff.
This might be interesting to compare against how models of the stock market have changed over time. (Its particular relationship with statistics may be illuminating.)
comment by Mati_Roy (MathieuRoy) · 2020-02-19T03:54:11.914Z · LW(p) · GW(p)
the fraction of global economic activity attributable to autonomous AI systems will rise
I thought a bit about this, but haven't figured it out: how can this be measured? if AI is commoditized, AI companies won't make a profit from it. AI researchers might make more money, but likely not more than however much it would cost to train more AI researchers (or something like that). maybe we can see which industries have their price reduced because of AI, and count this as a lower bound for the consumer surplus created by AI. what else?