Superexponential Historic Growth, by David Roodman

post by Ben Pace (Benito) · 2020-06-15T21:49:00.188Z · LW · GW · 6 comments

This is a link post for https://www.openphilanthropy.org/blog/modeling-human-trajectory

This is research that attempts an analysis similar to Hanson's paper Long-Term Growth as a Sequence of Exponential Modes, coming to different conclusions in some areas and the same conclusions in others. It also discusses Scott's post 1960: The Year the Singularity Was Cancelled [LW · GW]. The post/paper is titled "Modeling the Human Trajectory".

From the summary:

One strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is here. (Comments welcome.) In this post, I’ll explain in less technical language what I learned.

It’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the growth rate (100% increase per week) holds fixed. The human economy has grown super-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000. Such a quick doubling was unthinkable in the Middle Ages and ancient times. Perhaps our earliest doublings took millennia.

If global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be some plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, perhaps an economic upwelling that eclipses the industrial revolution as thoroughly as it eclipsed the agricultural revolution. Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.
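To make Roodman's contrast between exponential and super-exponential growth concrete, here is a minimal numerical sketch (my own illustration, not code from the paper; `doubling_times` and its parameters are hypothetical names). It integrates dy/dt = r·y^(1+s) with small Euler steps and reports how long each successive doubling takes: with s = 0 the doubling time is constant (exponential growth), while with s > 0 each doubling completes faster than the last (super-exponential growth).

```python
# Illustrative only: integrate dy/dt = r * y**(1+s) and time each doubling.
def doubling_times(y0, step, n_doublings, s, r=1.0):
    """Return the time each successive doubling takes under dy/dt = r*y**(1+s),
    integrated with crude Euler steps (fine for illustration, not accuracy)."""
    y, t, target, times = y0, 0.0, 2 * y0, []
    while len(times) < n_doublings:
        y += step * r * y ** (1 + s)  # Euler update
        t += step
        if y >= target:               # a doubling just completed
            times.append(round(t, 3))
            t, target = 0.0, 2 * y    # start timing the next doubling
    return times

print(doubling_times(1.0, 1e-4, 5, s=0.0))  # exponential: ~0.693 every time
print(doubling_times(1.0, 1e-4, 5, s=0.5))  # super-exponential: 0.586, 0.414, ...
```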
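The instability Roodman points to can also be seen analytically. For the simplest deterministic version of such a model (a standard calculation in my notation, not the paper's full stochastic specification):

```latex
\[
\frac{dy}{dt} = r\,y^{1+s}, \quad s > 0
\quad\Longrightarrow\quad
y(t) = y_0 \left(1 - s\,r\,y_0^{s}\,t\right)^{-1/s},
\]
% which diverges at the finite time
\[
t^{*} = \frac{1}{s\,r\,y_0^{s}}.
\]
```

With s = 0 this reduces to ordinary exponential growth, which never diverges; any s > 0, however small, produces a finite-time singularity. That is the sense in which good superexponential fits to the historical data "predict infinity."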

And from the conclusion:

I do not know whether most of the history of technological advance on Earth lies behind us or ahead of us. I do know that it is far easier to imagine what has happened than what hasn’t. I think it would be a mistake to laugh off or dismiss the predictions of infinity emerging from good models of the past. Better to take them as stimulants to our imaginations. I believe the predictions of infinity tell us two key things. First, if the patterns of history continue, then some sort of economic explosion will take place again, the most plausible channel being AI. It wouldn’t reach infinity, but it could be big. Second, and more generally, I take the propensity for explosion as a sign of instability in the human trajectory. Gross world product, as a rough proxy for the scale of the human enterprise, might someday spike or plunge or follow complicated paths in between. The projections of explosion should be taken as indicators of the long-run tendency of the human system to diverge. They are hinting that realistic models of long-term development are unstable, and stable models of long-term development unrealistic. The credible range of future paths is indeed wide.

Data and code for the paper and for this post are on GitHub.

Holden Karnofsky also gives his thoughts on the piece, which I found quite interesting:

Some personal reactions to this piece:

First, a note on how it came about and what I think its relevance to our work is. I asked David to evaluate Robin Hanson's work on long-term growth as a sequence of exponential growth modes. I found it interesting that an attempt to extrapolate future economic growth from the past (with little reasoning beyond straightforward trend extrapolation) implied a strong chance of explosive growth in the next few decades, but I wasn't convinced that Hanson's approach was the best method of doing such trend extrapolation. I asked David how he would extrapolate future growth based on the past, and this is the result. The model is very different from Hanson's (and I prefer it), but it too has an implication of explosive growth in the next few decades.

On its own, seeing trend extrapolation exercises with this implication doesn’t necessarily mean much. However, I independently have a view (based on other reasoning) that transformative AI could plausibly be developed in the next couple of decades. I think one of the best reasons to be skeptical of this view about transformative AI is that it seemingly implies a major “trend break”: it seems that it would, one way or another, have to mean world economic growth well outside the 1-3% range that it’s been pretty steady in for the last couple of centuries. However, Hanson’s and Roodman’s work both imply that a broader look at economic history demonstrates accelerating growth, and that in this sense, expecting that “the future will be like the past” could be entirely consistent with expecting radically world-changing technology to be developed in the coming decades.

Like David, I wouldn’t take the model discussed in this piece literally, but I tentatively agree with what I see as the central themes: that a sufficiently broad view of history shows accelerating growth, that this dynamic is inherently unstable, and that we therefore have little reason to “rule out” explosive growth in the coming decades.

We are working on a number of other analyses regarding the likelihood of transformative AI being developed in the coming decades. One topic we're exploring is a potential follow-up to this piece in which we would try to understand the degree to which growth economists find this piece's central themes reasonable, and what objections are most common.

Now a few comments on ways in which I see things differently from how David sees them. I should start by saying that any model makes simplifications, and this is a case where extreme simplifications are particularly called for. However, if I’d written this post I would’ve called out the following non-modeled dynamics as particularly worth noting.

1 - this post's multivariate model does not match the way I intuitively model what I call the "technological landscape." Some discoveries and technological developments enable others, so there is in some sense an "order" in which we've developed new technologies that might be fairly stable across multiple possible versions of history. And some technologies are more impactful than others. There may thus be important natural structure that leads to inevitable (as opposed to stochastic) acceleration and deceleration, as the world hits phases where more vs. less impactful technologies are being discovered. The most obvious way in which I expect the "technological landscape" to matter is that at some point, I think the world could "run out of new findings" - at which point technology could stop improving. I see this as a likely way that real-world growth could avoid going to infinity in finite time, without needing to invoke natural resource limits.

2 - it seems to me that a more realistic multivariate model would have natural resource shortages leading to growth “leveling off” rather than spiking and imploding. E.g., at the point where natural resources are foreseeably going to be a bottleneck on growth, I expect them to become more expensive and hence more carefully conserved. I’m not sure whether this would apply to a long enough time frame to make a big visual difference to the charts in this post, but I still thought it was worth mentioning.

3 - I’m interested in the hypothesis that the recent “stagnation” this model sees is largely driven by the fact that population growth has slowed, which in turn limits the rate of technological advance. Advances in AI could later lead to a dynamic in which capital can more efficiently substitute for labor’s (and/or human capital’s) role in technological advance. This is an example of how the shape of the “technological landscape” could explain some of the “surprises” seen in David’s tests of the model.

4 - regarding this statement in David’s piece:

The scenario is, one hopes, unrealistic. Its realism will depend on whether human enterprise ultimately undermines itself by depleting a natural endowment such as safe water supplies or the greenhouse gas absorptive capacity of the atmosphere; or whether we skirt such limits by, for example, switching to climate-safe energy sources and using them to clean the water and store the carbon.

In worlds where explosive growth of the kind predicted by David's model occurred, I'd anticipate radical changes to the way the world looks (for example, civilization expanding outside of Earth), which could significantly change the picture of what resources are scarce.
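A toy way to see the difference Holden draws in point 2 between "spike and implode" and "level off" (my own sketch, not a model from the post or the paper; K, r, and s are hypothetical parameters): damp the same hyperbolic growth law by a logistic factor representing a resource ceiling K.

```python
# Illustrative only: hyperbolic growth with and without a resource ceiling K.
def simulate(y0, r, s, K=None, step=1e-3, t_max=3.0):
    """Euler-integrate dy/dt = r*y**(1+s) * (1 - y/K), or undamped if K is None."""
    y, t = y0, 0.0
    while t < t_max and y < 1e9:
        damping = max(1 - y / K, 0.0) if K else 1.0  # resources slow growth near K
        y += step * r * y ** (1 + s) * damping
        t += step
    return t, y

print(simulate(1.0, r=1.0, s=0.5))           # stops near t = 2 with y huge: a spike
print(simulate(1.0, r=1.0, s=0.5, K=100.0))  # runs to t_max with y ~ 100: a plateau
```

Whether the real economy would approach its ceiling this smoothly is exactly the open question; the sketch only shows that adding a gradually binding constraint changes the qualitative shape from divergence to saturation.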
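Holden's point 3, that slowing population growth limits technological advance and that AI could relax the constraint, has a conventional formalization in semi-endogenous growth theory (my gloss, not a model appearing in the post): write technological advance as an ideas production function in research effort L,

```latex
\[
\dot{A} = \delta\, L^{\lambda} A^{\phi}, \qquad \lambda > 0,\ \phi < 1.
\]
```

With phi < 1, sustained growth in A requires sustained growth in research effort, so a population plateau shows up as stagnation. If AI allows accumulable capital to substitute for L in this function, the loop from output back into research effort reopens, and acceleration can resume.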

6 comments

comment by steven0461 · 2020-06-17T00:10:43.523Z · LW(p) · GW(p)

The main part I disagree with is the claim that resource shortages may halt or reverse growth at sub-Dyson-sphere scales. I don't know of any (post)human need that seems like it might require something other than matter, energy, and ingenuity to fulfill. There's a huge amount of matter and energy in the solar system and a huge amount of room to get more value out of any fixed amount.

(If "resource" is interpreted broadly enough to include "freedom from the side effects of unaligned superintelligence", then sure.)

comment by AnthonyC · 2020-06-17T23:23:21.222Z · LW(p) · GW(p)

I am inclined to agree, but it seems plausible to me that the transition from planet-wide economy to Dyson sphere may be slower than the earthbound economic boom that makes it possible. I don't really see a plausible way to disassemble the asteroid belt and maybe a few planets within days of Earth figuring out how to start expanding into space at scale.

comment by ryan_b · 2020-06-18T14:58:50.126Z · LW(p) · GW(p)

The problem is that access to the entire store of matter and energy runs through the single thread of successfully scaling space travel. So the logic appears to run similarly to Dissolving the Fermi Paradox: the question largely reduces to whether one or more of the critical choke points fail.

Space travel successful -> almost certain growth

Space travel fails -> almost certain doom

comment by Sammy Martin (SDM) · 2020-06-18T13:26:50.999Z · LW(p) · GW(p)

The increase is so monotonic that either the data's wrong, we're going to experience a major break with the past in the mid-2040s, or it's galactic time when I'm in early middle age. One thing this post led me to consider is that when we bring everything together, the evidence for 'things will go insane in the next century' is stronger than the evidence for any specific scenario as to how. And this post isn't the only evidence for the broad thesis of 'things are going to go crazy over the next decades', where crazy is defined as more rapid change than we saw over the previous century.

Treat this like a detective story - bring in disparate clues. We're probably alone in the universe, and anthropic arguments tend to imply we're living at an incredibly unusual time in history. Isn't that what you'd expect to see in the same world where there is a totally plausible mechanism that could carry us a long way up this line, in the form of AGI and eternity in six hours? It's like all the pieces are already there, and they only need to be approximately right for our lifetimes to be far weirder than those of people who were, e.g., born in 1896 and lived to 1947 - which was weird enough, but that should be your minimum expectation.

EDIT: on the point about AI, I just checked to see if there were any recent updates and now we have Image GPT. Heck.

comment by Sammy Martin (SDM) · 2020-06-20T12:53:31.977Z · LW(p) · GW(p)

Further to this point - there is something a little strange about calling a fast takeoff from AGI and whatever was driving superexponential growth throughout all of history the same phenomenon. If that were true, there would be some huge cosmic coincidence that causes there always to be superexponential growth - as soon as whatever drove it until now (population growth plus growth in wealth per capita, or similar) runs out in the great stagnation (visible as a tiny blip on the RHS of the double-log plot), AGI takes over and pushes us up the same trend line. That's clearly not possible, so there would have to be some factor responsible for both if AGI is what takes us up the rest of that trend line.

In general, there are three categories of evidence that things are likely to become very weird over the next century, or that we live at the hinge of history [EA · GW] in some sense:

1) Specific mechanisms around AGI - possibility of rapid capability gain

2) Economic [LW · GW] and technological trend-fitting predicting a singularity around 2050

3) Anthropic [EA · GW] and Fermi arguments suggesting that we live at some extremely unusual time

All of these are also arguments for the more general claim that we live at the hinge of history. 1) is because a superintelligent AGI takeoff is just a specific example of how the hinge occurs, and it is plausible for much more specific reasons. 3) is already directly arguing for that, but how does 2) fit in with 1) and 3)? For AGI to be the driver of the rest of that growth curve, there has to be a single causal mechanism that keeps us on the same trend and includes AGI as its final step - if we say we are agnostic about what that mechanism is, we can still call 2) evidence that we live at the hinge point, though we have to note that there is a huge blank spot in need of explanation: what phenomenon causes the right technologies to appear to continue the superexponential trend all the way from 10,000 BCE to the arrival of AGI?