Takeoff Speed: Simple Asymptotics in a Toy Model.

post by Aaron Roth (aaron-roth) · 2018-03-05T17:07:12.866Z · LW · GW · 21 comments

This is a link post for http://aaronsadventures.blogspot.com/2018/03/how-unlikely-is-intelligence-explosion.html

Contents

  First, Some Background.
  A Toy Model for Rates of Self Improvement
  Thoughts
  Postscript

I've been having fun recently reading about "AI Risk". There is lots of eloquent writing out there about this topic: I especially recommend Scott Alexander's Superintelligence FAQ for those looking for a fun read. The subject has reached the public consciousness, with high profile people like Stephen Hawking and Elon Musk speaking publicly about it. There is also an increasing amount of funding and research effort being devoted to understanding AI risk. See for example the Future of Humanity Institute at Oxford, the Future of Life Institute at MIT, and the Machine Intelligence Research Institute in Berkeley, among others. These groups seem to be doing lots of interesting research, which I am mostly ignorant of. In this post I just want to talk about a simple exercise in asymptotics.

First, Some Background.

A "superintelligent" AI is loosely defined to be an entity that is much better than we are at essentially any cognitive/learning/planning task. Perhaps, by analogy, a superintelligent AI is to human beings as human beings are to Bengal tigers, in terms of general intelligence. It shouldn't be hard to convince yourself that if we were in the company of a superintelligence, then we would be very right to be worried: after all, it is intelligence that allows human beings to totally dominate the world and drive Bengal tigers to near extinction, despite the fact that tigers physiologically dominate humans in most other respects. This is the case even if the superintelligence doesn't have the destruction of humanity as a goal per-se (after all, we don't have it out for tigers), and even if the superintelligence is just an unconscious but super-powerful optimization algorithm. I won't rehash the arguments here (Scott does it better) but it essentially boils down to the fact that it is quite hard to anticipate what the results of optimizing an objective function will be, if the optimization is done over a sufficiently rich space of strategies. And if we get it wrong, and the optimization has some severely unpleasant side-effects? It is tempting to suggest that at that point, we just unplug the computer and start over. The problem is that if we unplug the intelligence, it won't do as well at optimizing its objective function compared to  if it took steps to prevent us from unplugging it. So if it's strategy space is rich enough so that it is able to take steps to defend itself, it will. Lots of the most interesting research in this field seems to be about how to align optimization objectives with our own desires, or simply how to write down objective functions that don't induce the optimization algorithm to try and prevent us from unplugging it, while also not incentivizing the algorithm to unplug itself (the corrigibility problem).

Ok. It seems uncontroversial that a hypothetical superintelligence would be something we should take very seriously as a danger. But isn't it premature to worry about this, given how far off it seems to be? We aren't even that good at making product recommendations, let alone optimization algorithms so powerful that they might inadvertently destroy all of humanity. Even if superintelligence will ultimately be something to take very seriously, are we even in a position to productively think about it now, given how little we know about how such a thing might work at a technical level? This seems to be the position that Andrew Ng was taking, in his much quoted statement that (paraphrasing) worrying about the dangers of super-intelligence right now is like worrying about overpopulation on Mars. Not that it might not eventually be a serious concern, but that we will get a higher return investing our intellectual efforts right now on more immediate problems.

The standard counter to this is that super-intelligence might always seem like it is well beyond our current capabilities -- maybe centuries in the future -- until, all of a sudden, it appears as the result of an uncontrollable chain reaction known as an "intelligence explosion", or "singularity". (As far as I can tell, very few people actually think that intelligence growth would exhibit an actual mathematical singularity --- this seems instead to be a metaphor for exponential growth.) If this is what we expect, then now might very well be the time to worry about super-intelligence. The first argument of this form was put forth by British mathematician I.J. Good (of Good-Turing Frequency Estimation!):

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Scott Alexander summarizes the same argument a bit more quantitatively. In this passage, he is imagining the starting point being a full-brain simulation of Einstein --- except run on faster hardware, so that our simulated Einstein operates at a much faster clock-speed than his historical namesake:

It might, like the historical Einstein, contemplate physics. Or it might contemplate an area very relevant to its own interests: artificial intelligence. In that case, instead of making a revolutionary physics breakthrough every few hours, it will make a revolutionary AI breakthrough every few hours. Each AI breakthrough it makes, it will have the opportunity to reprogram itself to take advantage of its discovery, becoming more intelligent, thus speeding up its breakthroughs further. The cycle will stop only when it reaches some physical limit – some technical challenge to further improvements that even an entity far smarter than Einstein cannot discover a way around. 
To human programmers, such a cycle would look like a “critical mass”. Before the critical level, any AI advance delivers only modest benefits. But any tiny improvement that pushes an AI above the critical level would result in a feedback loop of inexorable self-improvement all the way up to some stratospheric limit of possible computing power. 
This feedback loop would be exponential; relatively slow in the beginning, but blindingly fast as it approaches an asymptote. Consider the AI which starts off making forty breakthroughs per year – one every nine days. Now suppose it gains on average a 10% speed improvement with each breakthrough. It starts on January 1. Its first breakthrough comes January 10 or so. Its second comes a little faster, January 18. Its third is a little faster still, January 25. By the beginning of February, it’s sped up to producing one breakthrough every seven days, more or less. By the beginning of March, it’s making about one breakthrough every three days or so. But by March 20, it’s up to one breakthrough a day. By late on the night of March 29, it’s making a breakthrough every second.

As far as I can tell, this possibility of an exponentially-paced intelligence explosion is the main argument for folks devoting time to worrying about super-intelligent AI now, even though current technology doesn't give us anything even close. So in the rest of this post, I want to push a little bit on the claim that the feedback loop induced by a self-improving AI would lead to exponential growth, and see what assumptions underlie it.

A Toy Model for Rates of Self Improvement

Let's write down an extremely simple toy model for how quickly the intelligence of a self-improving system would grow, as a function of time. And I want to emphasize that the model I will propose is clearly a toy: it abstracts away everything that is interesting about the problem of designing an AI. But it should be sufficient to focus on a simple question of asymptotics, and the degree to which growth rates depend on the extent to which AI research exhibits diminishing marginal returns on investment. In the model, AI research accumulates with time: at time t, R(t) units of AI research have been conducted. Perhaps think of this as a quantification of the number of AI "breakthroughs" that have been made in Scott Alexander's telling of the intelligence explosion argument. The intelligence of the system at time t, denoted I(t), will be some function of the accumulated research R(t). The model will make two assumptions:

  1. The rate at which research is conducted is directly proportional to the current intelligence of the system. We can think about this either as a discrete dynamics, or as a differential equation. In the discrete case, we have: R(t+1) = R(t) + I(t), and in the continuous case: dR(t)/dt = I(t).
  2. The relationship between the current intelligence of the system and the currently accumulated quantity of research is governed by some function f: I(t) = f(R(t)).

The function f can be thought of as capturing the marginal rate of return of additional research on the actual intelligence of an AI. For example, if we think AI research is something like pumping water from a well --- a task for which doubling the work doubles the return --- then we would model f as linear: f(R) = R. In this case, AI research does not exhibit any diminishing marginal returns: a unit of research gives us just as much benefit in terms of increased intelligence, no matter how much we already understand about intelligence. On the other hand, if we think that AI research should exhibit diminishing marginal returns --- as many creative endeavors seem to --- then we would model f as an increasing concave function. For example, we might let f(R) = R^{2/3}, or f(R) = R^{1/2}, or f(R) = R^{1/3}, etc. If we are really pessimistic about the difficulty of AI, we might even model f(R) = log(R). In these cases, intelligence is still increasing in research effort, but the rate of increase as a function of research effort is diminishing as we understand more and more about AI. Note however that the rate at which research is being conducted is increasing, which might still lead us to exponential growth in intelligence, if it increases fast enough.
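A minimal sketch of the discrete dynamics, in Python, for readers who want to reproduce the qualitative behavior themselves (this is an illustrative reimplementation rather than the code behind the plots below; the starting value R(0) = 1 and the particular candidate functions are choices made here to match the cases discussed in the text):

    import math

    def simulate(f, T, R0=1.0):
        """Run the discrete dynamics R(t+1) = R(t) + f(R(t)) for T steps.

        Returns the trajectory of intelligence values I(t) = f(R(t)).
        """
        R = R0
        trajectory = []
        for _ in range(T):
            I = f(R)       # intelligence is determined by accumulated research
            trajectory.append(I)
            R = R + I      # research accumulates at a rate equal to current intelligence
        return trajectory

    # Candidate return functions f discussed in the text.
    linear      = lambda R: R                 # no diminishing returns
    two_thirds  = lambda R: R ** (2.0 / 3.0)  # mild diminishing returns
    square_root = lambda R: math.sqrt(R)      # moderate diminishing returns
    cube_root   = lambda R: R ** (1.0 / 3.0)  # stronger diminishing returns
    log         = lambda R: math.log(R)       # severe diminishing returns (needs R0 > 1)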

So how does our choice of f affect intelligence growth rates? First, let's consider the case in which f(R) = R – the case of no diminishing marginal returns on research investment. Here is a plot of the growth over 1000 time steps in the discrete model:

Here, we see exponential growth in intelligence. (It isn't hard to directly work out that in this case, in the discrete model, we have I(t) = 2^t, and in the continuous model, we have I(t) = e^t.) And the plot illustrates the argument for worrying about AI risk now. Viewed at this scale, progress in AI appears to plod along at unimpressive levels before suddenly shooting up to an unimaginable level: in this case, a quantity that, written out as a decimal, would have more than 300 digits.
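(A quick check with the sketch above, for anyone following along; the exact values depend on the assumed starting value R(0) = 1.)

    traj = simulate(linear, 1000)
    print(traj[:5])                  # 1.0, 2.0, 4.0, 8.0, 16.0: doubling each step, i.e. I(t) = 2^t
    print(len(str(int(traj[-1]))))   # roughly 300 digits after 1000 steps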

It isn't surprising that if we were to model severely diminishing returns – say, f(R) = log(R) – then this would not occur. Below, we plot what happens when f(R) = log(R), with time taken out all the way to 1,000,000 rather than merely 1000 as in the above plot:

Intelligence growth is not very impressive here. At time 1,000,000 we haven't even reached 17. If you wanted to reach (say) an intelligence level of 30 you'd have to wait an unimaginably long time. In this case, we definitely don't need to worry about an "intelligence explosion", and probably not even about ever reaching anything that could be called a super-intelligence. 
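(Checking this with the sketch above: with f = log the starting value has to exceed 1, say R(0) = 3, and the exact height of the curve depends mildly on that choice.)

    traj = simulate(log, 1_000_000, R0=3.0)
    print(traj[-1])   # about 16.4: after a million steps, intelligence is still short of 17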

But what about moderate (polynomial) levels of diminishing marginal returns? What if we take f(R) = R^{1/3}? Let's see:

Ok – now we are making more progress, but even though intelligence now has a polynomial relationship to research (and research speed is increasing, in a chain reaction!) the rate of growth in intelligence is still decreasing. What about if f(R) = R^{1/2}? Let's see:

At least now the rate of growth doesn't seem to be decreasing: but it is growing only linearly with time. Hardly an explosion. Maybe we just need to get more aggressive in our modeling. What if f(R) = R^{2/3}?

Ok, now we've got something! At least now the rate of intelligence gains is increasing with time. But it is increasing more slowly than a quadratic function – a far cry from the exponential growth that characterizes an intelligence explosion. 
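(The same sketch reproduces the qualitative behavior of the three polynomial cases; again, precise values depend on the assumed R(0) = 1.)

    for name, f in [("R^(1/3)", cube_root), ("R^(1/2)", square_root), ("R^(2/3)", two_thirds)]:
        traj = simulate(f, 1000)
        # Comparing t = 500 to t = 1000 shows sublinear, linear, and superlinear growth respectively.
        print(name, round(traj[500], 1), round(traj[999], 1))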

Let's take a break from all of this plotting. The model we wrote down is simple enough that we can just go and solve the differential equation. Suppose we have f(R) = R^{1-ε} for some ε > 0. Then the differential equation solves to give us: I(t) = Θ(t^{(1-ε)/ε}). What this means is that for any positive value of ε, in this model, intelligence grows at only a polynomial rate. The only way this model gives us exponential growth is if we take ε = 0, and insist that f(R) = R – i.e. that the intelligence design problem does not exhibit any diminishing marginal returns at all.
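(For completeness, a quick derivation of that claim in the continuous model; the constant of integration is set by the starting value R(0).)

    \frac{dR}{dt} = I = R^{1-\varepsilon}
    \;\Longrightarrow\;
    d\!\left(\tfrac{R^{\varepsilon}}{\varepsilon}\right) = dt
    \;\Longrightarrow\;
    R(t) = \left(\varepsilon t + R(0)^{\varepsilon}\right)^{1/\varepsilon},
    \qquad
    I(t) = R(t)^{1-\varepsilon} = \Theta\!\left(t^{(1-\varepsilon)/\varepsilon}\right).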

Thoughts

So what do we learn from this exercise? Of course one can quibble with the details of the model, and one can believe different things about what form for the function f best approximates reality. But for me, this model helps crystallize the extent to which the "exponential intelligence explosion" story crucially relies on intelligence design being one of those rare tasks that doesn't exhibit any decreasing marginal returns on effort at all. This seems unlikely to me, and counter to experience.

Of course, there are technological processes out there that do appear to exhibit exponential growth, at least for a little while. Moore's law is the most salient example. But it is important to remember that even exponential growth for a little while need not seem explosive at human time scales. Doubling every day corresponds to exponential growth, but so does increasing by 1% a year. To paraphrase Ed Felten: our retirement plans extend beyond depositing a few dollars into a savings account, and waiting for the inevitable "wealth explosion" that will make us unimaginably rich. 

Postscript

I don't claim that anything in this post is either novel or surprising to folks who spend their time thinking about this sort of thing. There is at least one paper that writes down a model including diminishing marginal returns, which yields a linear rate of intelligence growth.

It is also interesting to note that in the model we wrote down, exponential growth is really a knife edge phenomenon. We already observed that we get exponential growth if f(R) = R, but not if f(R) = R^{1-ε} for any ε > 0. But what if we have f(R) = R^{1+ε} for some ε > 0? In that case, we don't get exponential growth either! As Hadi Elzayn pointed out to me, Osgood's Test tells us that in this case, the function I(t) contains an actual mathematical singularity – it approaches an infinite value in finite time.
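(The calculation behind that last claim, again in the continuous model:)

    \frac{dR}{dt} = R^{1+\varepsilon}
    \;\Longrightarrow\;
    d\!\left(-\tfrac{R^{-\varepsilon}}{\varepsilon}\right) = dt
    \;\Longrightarrow\;
    R(t) = \left(R(0)^{-\varepsilon} - \varepsilon t\right)^{-1/\varepsilon},

which blows up, along with I(t) = R(t)^{1+ε}, at the finite time t* = R(0)^{-ε}/ε.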

21 comments

Comments sorted by top scores.

comment by jsteinhardt · 2018-03-06T13:54:41.342Z · LW(p) · GW(p)

Thanks for writing this Aaron! (And for engaging with some of the common arguments for/against AI safety work.)

I personally am very uncertain about whether to expect a singularity/fast take-off (I think it is plausible but far from certain). Some reasons that I am still very interested in AI safety are the following:

  • I think AI safety likely involves solving a number of difficult conceptual problems, such that it would take >5 years (I would guess something like 10-30 years, with very wide error bars) of research to have solutions that we are happy with. Moreover, many of the relevant problems have short-term analogues that can be worked on today. (Indeed, some of these align with your own research interests, e.g. imputing value functions of agents from actions/decisions; although I am particularly interested in the agnostic case where the value function might lie outside of the given model family, which I think makes things much harder.)
  • I suppose the summary point of the above is that even if you think AI is a ways off (my median estimate is ~50 years, again with high error bars) research is not something that can happen instantaneously, and conceptual research in particular can move slowly due to being harder to work on / parallelize.
  • While I have uncertainty about fast take-off, that still leaves some probability that fast take-off will happen, and in that world it is an important enough problem that it is worth thinking about. (It is also very worthwhile to think about the probability of fast take-off, as better estimates would help to better direct resources even within the AI safety space.)
  • Finally, I think there are a number of important safety problems even from sub-human AI systems. Tech-driven unemployment is I guess the standard one here, although I spend more time thinking about cyber-warfare/autonomous weapons, as well as changes in the balance of power between nation-states and corporations. These are not as clearly an existential risk as unfriendly AI, but I think in some forms would qualify as a global catastrophic risk; on the other hand I would guess that most people who care about AI safety (at least on this website) do not care about it for this reason, so this is more idiosyncratic to me.

Happy to expand on/discuss any of the above points if you are interested.

Best,

Jacob

Replies from: aaron-roth
comment by Aaron Roth (aaron-roth) · 2018-03-06T15:00:44.417Z · LW(p) · GW(p)

Good points all; these are good reasons to work on AI safety (and of course as a theorist I'm very happy to think about interesting problems even if they don't have immediate impact :-) I'm definitely interested in the short-term issues, and have been spending a lot of my research time lately thinking about fairness/privacy in ML. Inverse-RL/revealed preferences learning is also quite interesting, and I'd love to see some more theory results in the agnostic case.

comment by paulfchristiano · 2018-03-06T02:33:45.913Z · LW(p) · GW(p)
As far as I can tell, this possibility of an exponentially-paced intelligence explosion is the main argument for folks devoting time to worrying about super-intelligent AI now, even though current technology doesn't give us anything even close. So in the rest of this post, I want to push a little bit on the claim that the feedback loop induced by a self-improving AI would lead to exponential growth, and see what assumptions underlie it.

I think few AI safety advocates believe this. It's much more common to expect growth to be faster than exponential. As you point out, exponential growth is a knife-edge phenomenon.

As far as I can tell, very few people actually think that intelligence growth would exhibit an actual mathematical singularity

This is actually a pretty common view---not a literal singularity, but rapid technological acceleration until natural resource limitations (e.g. on total available solar energy and raw minerals) start binding. If you look at the history of technological progress, it looks a whole lot more like a hyperbola than like an exponential curve, so the hyperbolic growth forecast isn't so insane. It's the person arguing that growth rates are going to stop at 3% who is arguing against the bulk of historical precedent (and whose predecessors would have been wrong if they'd expected growth to stop at 0.3% or 0.03% or 0.003%...).

this seems instead to be a metaphor for exponential growth.

I think "singularity" usually either follows Vinge's use (as the point beyond which you can't predict what will happen, because the future is guided by actors smarter than you are) or as a reference to the dynamic that would produce a mathematical singularity if left unchecked.

comment by paulfchristiano · 2018-03-06T02:26:17.204Z · LW(p) · GW(p)

In a more typical endogenous growth model, output is the product of physical capital (e.g. how many computers you have) and a technology factor (e.g. how smart you are). You can either invest in producing more capital (building more computers) or doing research (becoming smarter). On these models, even returns of f(R) = R^{1-ε} with ε > 0 still lead to a mathematical singularity (while constant technology leads to exponential growth).

From this perspective, you are investigating whether there is an intelligence explosion with finite capital. If productivity grows sublinearly with inputs, you need to build more machines (and ultimately extract more resources from nature) in order to grow really fast. This might suggest that getting to a singularity would take years rather than weeks, but doesn't much change the qualitative conclusion or substantially change the urgency (especially given that the early phase of takeoff would be driven by moving resources over from lower productivity areas into higher productivity areas).
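(One concrete way to see the point about capital; the specific functional forms here are illustrative assumptions rather than anything claimed above: suppose output is A·K and is fully reinvested, research accumulates at a rate proportional to output, and the technology factor is A = R^{1-ε}. Then)

    \frac{dK}{dt} = A K = R^{1-\varepsilon} K,
    \qquad
    \frac{dR}{dt} = A K = R^{1-\varepsilon} K
    \;\Longrightarrow\;
    \frac{dK}{dR} = 1
    \;\Longrightarrow\;
    K = R + \text{const},

so for large R the research equation behaves like dR/dt ≈ R^{2-ε}, which hits a true singularity in finite time for any ε < 1, while freezing the technology factor gives dK/dt = AK, i.e. ordinary exponential growth.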

comment by paulfchristiano · 2018-03-06T02:23:01.914Z · LW(p) · GW(p)

I think it's a mistake to think of "productivity is linear in effort" as the "no diminishing returns" model, and to consider it a degenerate extreme case. Linear returns is the model where doubling inputs leads to doubled outputs. A priori, it's nearly as natural for constant additional effort leads to doubling of efficiency, so we need to actually look at the data to distinguish.

(It seems more theoretically natural---and more common in practice---for each clever trick to lead to a 10% increase in efficiency, then for each clever trick to lead to an absolute increase of 1 unit of efficiency.)

In semiconductors, as you point out, output has increased exponentially over time. Research investment has also increased exponentially, but with a significantly smaller exponent. So on your model the curve appears to be f(R) = R^{1-ε} for some ε < 0.

The performance curves database contains many interesting time series, and you'll note that the y-axis is typically exponential. They don't track inputs, so it's a bit hard to draw conclusions, but comparing to overall increases in R&D investment it looks like superlinear returns are probably quite common.

A few years ago Katja looked into the rate of algorithmic progress, and found that it was very approximately comparable to the rate of progress in hardware (though it's hard to know how much of that comes from realizing increasing economies of scale w.r.t. compute), across a range of domains. Algorithms seem like a particularly relevant domain to the current discussion.

comment by Aaron Roth (aaron-roth) · 2018-03-06T14:49:32.343Z · LW(p) · GW(p)

Hi all,

Thanks for the very thoughtful comments; lots to chew on. As I hope was clear, I'm just an interested outside observer, and have not spent very long thinking about these issues, and don't know much of the literature. (My blog post ended up as a cross post here because I posted it to facebook, and asked if anyone could point me to more serious literature thinking about this problem, and a commenter suggested that I should crosspost here for feedback)

I agree that linear feedback is more plausible if we think of research breakthroughs as producing multiplicative gains, a simple point that I hadn't thought about.

comment by Qiaochu_Yuan · 2018-03-06T22:39:07.934Z · LW(p) · GW(p)

Eliezer did exactly this calculation in an old LW post. Unfortunately I have no idea how to find it. Fortunately the calculation comes out the same no matter who does it!

comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-05T20:13:56.921Z · LW(p) · GW(p)
As far as I can tell, this possibility of an exponentially-paced intelligence explosion is the main argument for folks devoting time to worrying about super-intelligent AI now, even though current technology doesn't give us anything even close.

Not at all. The reasons we should work on AI alignment now are:

  • AI alignment is a hard problem
  • We don't know how long it will take us to solve it
  • We don't know how long it will be until superintelligent AI becomes possible
  • There is no strong reason to believe we will know superintelligent AI is coming far in advance

"Current technology doesn't give us anything even close" is not extremely informative since we don't know the metric w.r.t. which "close" should be measured. Heavier than air flight was believed impossible by many, until the Wright brothers did it. The technology of 1929 didn't give anything close to an atom bomb or a moon landing, and yet the atom bomb was made 16 years later, and the moon landing 40 years later.

Regarding the differential equations, I don't think it's a very meaningful analysis if you haven't even defined the scale on which you measure intelligence. If I(x) is some measure of intelligence that grows exponentially, then log I(x) is another measure of intelligence which grows linearly, and if I(x) grows linearly then exp I(x) grows exponentially.

Also, you might be interested in this paper by Yudkowsky.

Replies from: paulfchristiano
comment by paulfchristiano · 2018-03-06T02:43:48.800Z · LW(p) · GW(p)
if you do want to analyze the plausibility of an intelligence explosion then it seems worthwhile to respond in detail to previous work

If you replace "analyze the plausibility" with "convincingly demonstrate to skeptics" then this seems right.

The OP seems to be written more in the spirit of exploration rather than conclusive argument though, which seems valuable and doesn't necessarily require responding in detail to prior work (in this case ~100 pages). Seems like kind of a soul-crushing way to respond to curiosity :)

(I hope my own comments didn't come across harshly.)

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-06T06:48:46.181Z · LW(p) · GW(p)

You're right, sorry. Edited.

comment by daozaich · 2018-03-07T01:54:56.786Z · LW(p) · GW(p)

(1) As Paul noted, the question of the exponent alpha is just the question of diminishing returns vs returns-to-scale.

Especially if you believe that the rate is a product of multiple terms (like e.g. Paul's suggestion with one exponent for computer tech advances and another for algorithmic advances) then you get returns-to-scale type dynamics (over certain regimes, i.e. until all fruit are picked) with finite-time blow-up.

(2) Also, an imho crucial aspect is the separation of time-scales between human-driven research and computation done by machines (transistors are faster than neurons and buying more hardware scales better than training a new person up to the bleeding edge of research, especially considering Scott's amusing parable of the alchemists).

Let's add a little flourish to your model: You had the rate of research dR/dt and the cumulative research R(t); let's give a name C(t) to the capability of the AI system. Then, we can model dR/dt = C(t) * f(R(t)). This is your model, just splitting terms into f(R), which tells us how hard AI progress is, and C(t), which tells us how good we are at producing research.

Now denote by q the fraction of work that absolutely has to be done by humans, and by S the speed-up factor for silicon over biology. Amdahl's law gives you C = 1 / (q + (1-q)/S), or somewhat simplified C ≈ 1/q. This predicts a rate of progress that first looks like dR/dt ≈ f(R)/q, as long as human researcher input is the limiting factor, then becomes dR/dt ≈ S * f(R) when we have AIs designing AIs (recursive self-improvement, aka explosion), and then probably saturates at something (when the AI approaches optimality).

The crucial argument for fast take-off (as far as I understood it) is that we can expect to hit q = 0 at some cross-over point R*, and we can expect this to happen with a nonzero derivative dq/dR. This is just the claim that human-level AI is possible, and that the intelligence of the human parts of the AI research project is not sitting at a magical point (aka: this is generic, you would need to fine-tune your model to get something else).

The change of the rate of research output from the C ≈ 1/q regime to the C ≈ S regime sure looks like a hard-take-off singularity to me! And I would like to note that the function f, i.e. the hardness of AI research and the diminishing-returns vs returns-to-scale debate, does not enter this discussion at any point.

In other words: If you model AI research as done by a team of humans and proto-AIs assisting the humans; and if you assert non-fungibility of humans vs proto-AI assistants (even if you buy a thousand times more hardware, you still need the generally intelligent human researchers for some parts); and if you assert that better proto-AI assistants can do a larger proportion of the work (at all); and if you assert that computers are faster than humans; then you get a possibly quite wild change at q = 0.

I'd like to note that the cross-over is not "human-level AI", but rather "q = 0", i.e. an AI that needs (almost) no human assistance to progress the field of AI research.

On the opposing side (that's what Robin Hanson would probably say) you have the empirical argument that q should decay like a power law long before we hit q = 0 ("the last 10% take 90% of the work" is a folk formulation for "percentile 90-99 takes nine times as much work as percentile 0-89", aka a power law, and is borne out quite well, empirically).

This does not have any impact on whether we cross q = 0 with non-vanishing derivative, but would support Paul's view that the world will be unrecognizably crazy long before we get there.
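(A toy simulation of this kind of two-regime model, in Python; the specific forms chosen for f, q(R), and the speed-up S are assumptions for illustration only, not something taken from the comment above.)

    import math

    def amdahl_rate(R, f, q_of_R, S):
        """Research rate when a fraction q(R) of the work must still be done by humans.

        Amdahl-style combination: the machine part of the work runs S times faster than biology.
        """
        q = q_of_R(R)
        return f(R) / (q + (1.0 - q) / S)

    # Illustrative assumptions:
    S = 1000.0                                    # silicon-over-biology speed-up factor
    f = lambda R: math.sqrt(R)                    # moderate diminishing returns on research
    q_of_R = lambda R: max(0.0, 1.0 - R / 1e6)    # human-only fraction falls linearly to zero

    R, rates = 1.0, []
    for t in range(2000):
        rate = amdahl_rate(R, f, q_of_R, S)
        rates.append(rate)
        R += rate

    # While humans are the bottleneck (q near 1) the rate is roughly f(R);
    # once q reaches 0 it becomes S * f(R), regardless of the shape of f.
    print(rates[0], rates[-1])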

PS. I am currently agnostic about the hard vs soft take-off debate. Yeah, I know, cowardly cop-out.

edit: In the above, C kinda encodes how fast / good our AI is and q encodes how general it is compared to humans. All AI singularity stuff tacitly assumes that human intelligence (assisted by stupid proto-AI) is sufficiently general to design an AI that exceeds or matches the generality of human intelligence. I consider this likely. The counterfactual world would have our AI capabilities saturate at some subhuman level for a long time, using terribly bad randomized/evolutionary algorithms, until it either stumbles onto an AI design that has better generality or we suffer unrelated extinction/heat-death. I consider it likely that human intelligence (assisted by proto-AI) is sufficiently general for a take-off. Heat-death is not an exaggeration: algorithms with exponentially bad run-time are effectively useless.

Conversely, I consider it very well possible that human intelligence is insufficiently general to understand how human intelligence works! (We are really, really bad at understanding evolution/gradient-descent-optimized anything, and that's what we are.)

comment by ESRogs · 2018-03-05T20:38:31.884Z · LW(p) · GW(p)
the Machine Intelligence Research Institute at Berkeley

Just wanted to clarify that MIRI is in Berkeley (the city), but is not affiliated with UC Berkeley (the university).

Replies from: jsteinhardt, aaron-roth
comment by jsteinhardt · 2018-03-06T13:32:48.176Z · LW(p) · GW(p)

Very minor nitpick, but just to add, FLI is as far as I know not formally affiliated with MIT. (FHI is in fact a formal institute at Oxford.)

comment by Aaron Roth (aaron-roth) · 2018-03-06T15:04:30.227Z · LW(p) · GW(p)

Thanks for the corrections. I changed the text to "in Berkeley". How should FLI be described? (I was just cribbing from Scott's FAQ when claiming it was at MIT)

Replies from: ESRogs
comment by ESRogs · 2018-03-07T00:41:20.330Z · LW(p) · GW(p)

You could say that it's in Cambridge, MA...

See more here: https://en.wikipedia.org/wiki/Future_of_Life_Institute

comment by abramdemski · 2021-02-16T17:37:42.598Z · LW(p) · GW(p)

One thing which confused me momentarily -- I looked at your differential equations and mentally substituted I with f(R), to get something just in terms of R, for convenience. Then I was temporarily confused by your graphs in terms of I, because I was getting very different graphs (graphs in R) working things out in my head.

This pointed me at the question: should we be graphing in terms of I or R?

A large part of the analysis is to pinpoint where we get sublinear growth vs superlinear, and subexponential vs superexponential. This quantifies different meanings of explosive growth (ie the "explosion" in "intelligence explosion"). But perhaps we should be looking at the growth of R, instead of I.

It seems like, in this model, R represents capabilities -- if you know a lot of concrete things, you can do a lot. I represents the pace at which capabilities increase. An explosion in capabilities could be alarming despite a rather modest graph of intelligence increase.

Put simply, you're graphing the derivative of capabilities. What happens when we graph capabilities?

Considering the cases you look at:

  • f(R) = R: This is the one case where the two graphs are just the same anyway. R grows exponentially, just like I.
  • f(R) = log(R): capabilities see very nearly linear growth (since the derivative is very nearly constant).
  • f(R) = R^{1/3}: capabilities grow like t^{3/2}.
  • f(R) = R^{1/2}: capabilities grow like t^2.
  • f(R) = R^{2/3}: capabilities grow like t^3.
  • f(R) = R^{1-ε}: capabilities grow polynomially for ε > 0, exponentially at ε = 0, and hyperbolically at ε < 0.

This gives a very different picture: some sort of superlinear growth seems almost inevitable. We get an explosion unless returns are extremely diminishing. On the other hand, the crossover from subexponential to superexponential happens at exactly the same point.

Of course, "cababilities" is a rather ambiguous notion. What does it really entail? Perhaps the salient feature of the world ends up being the log of capabilities.

comment by habryka (habryka4) · 2018-03-05T18:27:57.961Z · LW(p) · GW(p)

Are you open to me copying over the complete content of the post? This makes it easier for people to reference and read over here.

Replies from: aaron-roth
comment by Aaron Roth (aaron-roth) · 2018-03-05T18:33:28.519Z · LW(p) · GW(p)

Sure

Replies from: habryka4
comment by habryka (habryka4) · 2018-03-05T18:56:10.542Z · LW(p) · GW(p)

Done! (with proper LaTeX rendering!)

Replies from: aaron-roth
comment by Charlie Steiner · 2018-03-06T00:30:35.218Z · LW(p) · GW(p)

I agree with this, but I think you have to remember that many things with diminishing returns also have accelerating returns earlier on.

That is to say, logistic curves are all over the place. Business growth, practicing a new instrument, functionality of a software project over time, learning a language through immersion...

It's absolutely plausible for intelligence self-improvement to work for a few IQ points and then peter out, for some architecture. Humans, for example, are horrible at improving their own brains - but also see EURISKO. But I'm skeptical that returns are always going to be so sharply diminishing, and if everyone else is improving slowly, whatever system "goes critical" first is going to be the one that matters.