Could Advanced AI Drive Explosive Economic Growth?

post by Matthew Barnett (matthew-barnett) · 2021-06-30T22:17:23.875Z · LW · GW · 4 comments

This is a link post for https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth


Tom Davidson from Open Philanthropy recently released a post about economic growth and AI. I was somewhat surprised to see that no one had made a linkpost on LessWrong yet, so that's what this is. Here's the linkpost [EA · GW] on the Effective Altruism Forum.

This report evaluates the likelihood of ‘explosive growth’, meaning > 30% annual growth of gross world product (GWP), occurring by 2100. Although frontier GDP/capita growth has been constant for 150 years, over the last 10,000 years GWP growth has accelerated significantly. Endogenous growth theory, together with the empirical fact of the demographic transition, can explain both trends. Labor, capital and technology were accumulable over the last 10,000 years, meaning that their stocks all increased as a result of rising output. Increasing returns to these accumulable factors accelerated GWP growth. But in the late 19th century, the demographic transition broke the causal link from output to the quantity of labor. There were not increasing returns to capital and technology alone and so growth did not accelerate; instead frontier economies settled into an equilibrium growth path defined by a balance between a growing number of researchers and diminishing returns to research.

This theory implies that explosive growth could occur by 2100. If automation proceeded sufficiently rapidly (e.g. due to progress in AI) there would be increasing returns to capital and technology alone. I assess this theory and consider counter-arguments stemming from alternative theories; expert opinion; the fact that 30% annual growth is wholly unprecedented; evidence of diminishing returns to R&D; the possibility that a few non-automated tasks bottleneck growth; and others. Ultimately, I find that explosive growth by 2100 is plausible but far from certain.
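To make the mechanism in the quoted summary a bit more concrete, here is a rough sketch of the growth logic in my own notation (not the report's):

```latex
% Rough sketch of the mechanism described in the report's summary (my notation).
% Output is produced from technology A, capital K, and labor L:
\[
  Y = A^{\sigma} K^{\alpha} L^{\beta}
\]
% Over most of history, all three factors were accumulable: higher output raised
% the growth of L, K, and A. If sigma + alpha + beta > 1, output growth feeds
% back into faster input growth, so the growth rate of Y itself rises over time,
% matching the acceleration seen over the last 10,000 years.
% After the demographic transition, L grows at an exogenous rate n, leaving only
% K and A accumulable. With sigma + alpha <= 1 (diminishing returns to research),
% the feedback is too weak to accelerate, and the economy converges to a balanced
% growth path pinned down by n and the degree of diminishing returns to research.
```

On this reading, sufficiently broad automation matters because it would let capital substitute for labor in production and research, restoring increasing returns to the accumulable factors alone.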

Rohin Shah has also written a summary, which you can read here [LW · GW].

Overall I was surprised by some parts of the report, and intrigued by other parts.

Despite arguing for the possibility of explosive growth, Tom seems more skeptical of the thesis than most people at Open Philanthropy, writing,

my colleague Ajeya Cotra’s draft [AF · GW] report [AF · GW] estimates when we’ll develop human-level AI; she finds we’re 80% likely to do so by 2100. In a previous report I took a different approach to the question, drawing on analogies between developing human-level AI and various historical technological developments. My central estimate was that there’s a ~20% probability of developing human-level AI by 2100. These probabilities are consistent with the predictions of AI practitioners. 

Overall, I place at least 10% probability on advanced AI driving explosive growth this century.

He may also be deferring somewhat to background priors, such as our ignorance about the fundamental determinants of the growth process and the fact that economics experts are generally very skeptical of transformative growth by 2100.

In one section, Tom addresses an objection which roughly says that, if AI is going to drive explosive growth later this century, we should already see early signs of it in the current economy. The reasoning is that AI will likely accelerate some parts of the economy before it accelerates the whole thing; since no sector is currently growing at transformative rates, the objection goes, we shouldn't expect transformative growth any time soon.

In response, Tom says that this objection is plausible, but thinks it mostly gives us reason to believe that transformative growth will not happen by 2050, leaving the possibility of a longer timeline on the table. He cites Nordhaus (2021), who examined specific economic indicators in the tech sector and found the evidence for an approaching acceleration to be somewhat lacking.

My own opinion is that this sort of economic analysis is likely very useful for forecasting timelines; to that end, I'm interested in learning which economic indicators would be most informative for timing AI development.

My current understanding is that the primary ingredient in explosive AI-driven growth models is the absence of diminishing returns to computer capital, in contrast to other forms of capital. We would probably start to see signs of entering that regime if new companies could grow rapidly merely by acquiring lots of computing resources. It would therefore be useful to watch for these sorts of returns in the future, as a way of giving us a heads-up about what's to come.
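To illustrate the distinction, here's a toy simulation (my own sketch, not a model from the report) in which output is produced from computer capital and a fixed share of output is reinvested in more compute; the parameter alpha controls whether returns to compute diminish:

```python
# Toy sketch: output Y is produced from "computer capital" K with elasticity
# alpha, and a fixed share s of output is reinvested in more compute:
#
#     Y_t     = A * K_t ** alpha
#     K_{t+1} = K_t + s * Y_t
#
# With alpha < 1 (diminishing returns) the growth rate of output falls over
# time; with alpha >= 1 it keeps rising -- the "explosive" regime.

def growth_path(alpha, A=1.0, s=0.2, K0=1.0, years=30):
    K = K0
    prev_Y = A * K ** alpha
    rates = []
    for _ in range(years):
        K += s * prev_Y                 # reinvest a share of output into compute
        Y = A * K ** alpha              # produce next year's output
        rates.append(Y / prev_Y - 1.0)  # annual growth rate of output
        prev_Y = Y
    return rates

for alpha in (0.5, 1.0, 1.2):
    path = growth_path(alpha)
    print(f"alpha={alpha}: growth in year 1 = {path[0]:.1%}, year 30 = {path[-1]:.1%}")
```

With alpha below one, growth tapers off no matter how much output is reinvested; at or above one, the growth rate itself keeps climbing, which is the kind of regime the report associates with explosive growth. The empirical question is which regime real firms' returns to compute more closely resemble.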

4 comments


comment by [deleted] · 2021-07-01T03:07:53.406Z · LW(p) · GW(p)

One comment: nuclear fission generated explosive bursts of energy and enormous increases in the amount of energy humans could release (destructively). Very likely the "megatons per year" growth rate was 30 percent in some years in the 60s and 70s.

Yet if you moved the plot back to 1880 and asked the most credible scientists alive whether we would find a way to do this, most would have been skeptical, and might have argued that the year-over-year increase in dynamite production didn't show 30 percent growth.

comment by Matthew Barnett (matthew-barnett) · 2021-07-01T04:36:35.431Z · LW(p) · GW(p)

Nuclear fission is often compared to AI development, but I think it's probably a bad comparison.

Nuclear bombs weren't built by successively making bombs more efficient or bigger. Rather, they were the result of a particular quirk of how physics works at the subatomic level. Once that quirk was understood, people quickly realized that such bombs were possible, and governments began enriching uranium.

By contrast, machine learning and AI research builds on itself. Intelligence looks more like a continuum, and will be built from a culmination of ideas and knowledge. There are rarely large theoretical leaps in the AI field; at least, none which lend themselves rapidly to practical applications. The best ideas in AI usually have ancestry in precursor ideas that were almost as good. For example, almost all of the ideas in modern ML are downstream of probability theory, learning theory, and backpropagation, which were developed many decades ago.

comment by [deleted] · 2021-07-02T04:04:02.865Z · LW(p) · GW(p)

The reason to compare it to fission is self-gain. For a fission reaction, that quirk of physics is called criticality, where the neutrons produced self-amplify, leading to exponential gain. Until sufficient fissionable material was concentrated (the Chicago Pile), there was zero fission gain, and you could say fission was only a theoretical possibility.

Today human beings design and participate in building AI software, AI computer chips, and robots for AI to drive. They also gather the resources to make these things.

The 'quirk' we expect to exploit here is that human minds are very limited in I/O and lifespan, and have many inefficiencies and biases. They are also millions of times slower than computer chips that already exist, at least for individual subsystems. They were designed by nature to handle far more limited problem domains than the ones we are faced with now, and thus we are bad at them.

The 'quirk', therefore, is that if you can build a mind superior to a human's but just as robust and broad in its capabilities, and the physical materials required are a small amount of refined silicon or carbon with small energy requirements (say, a 10 cm cube requiring 1 kW), you can order those machines to self-replicate, getting the equivalent of adding trillions of workers to our population without any of the needs or desires of those trillions of people.

This will obviously cause explosive economic growth.  Will it be over 30% in a single year?  No idea.

comment by Donald Hobson (donald-hobson) · 2021-07-03T23:06:44.761Z · LW(p) · GW(p)

I am not confident that GDP is a useful abstraction over the whole region of potential futures.

Suppose someone uses GPT5 to generate code, and then throws lots of compute at the generated code. GPT5 has generalized from the specific AI techniques humans have invented, seeing them as just a random sample from the broader space of AI techniques. When it samples from that space, it sometimes happens to pull out a technique more powerful than anything humans have invented. Given plenty of compute, it rapidly self-improves. The humans are happy to keep throwing compute at it. (Maybe the AI is doing some moderately useful task for them; maybe they think it's still training.) Neither the AI's actions nor the amount of compute used are economically significant. (The AI can't yet gain much more compute without revealing how smart it is, and having humans try to stop it.) After a month of this, the AI hacks some lab equipment over the internet and sends a few carefully chosen emails to a biotech company. A week later, nanobots escape the lab. A week after that, the grey goo has extinguished all life on Earth.

Alternatively: the AI thinks its most reliable route to takeover involves economic power. It makes loads of money performing various services (like 50%-of-GDP money). It uses this money to buy up all the compute, and to pay people to make nanobots. Grey goo as before.

(Does grey goo count as GDP? What about various techs that the AI develops that would be ever so valuable if they were under meaningful human control?)

So in this set of circumstances, whether there is explosive economic growth depends on whether "do everything and make loads of money" or "stay quiet and hack lab equipment" offers the faster / more reliable path to nanobots.