Using the Copernican mediocrity principle to estimate the timing of AI arrival

post by turchin · 2015-11-04T11:42:44.952Z · LW · GW · Legacy · 17 comments

Gott famously estimated the future duration of the Berlin Wall's existence:

“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race.” (https://en.wikipedia.org/wiki/J._Richard_Gott)

The most interesting unknown in the future is the time of creation of Strong AI. Creating Strong AI is such a unique task that our priors are insufficient to predict its timing, so it is reasonable to apply Gott’s method.

AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during the period of AI research, then there is a 50% probability that AI will be created within the next 65 years, i.e. by 2080. Not very optimistic. Further, the probability of its creation within the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next 1300 years, and few people would disagree with that.
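A minimal sketch of this calculation, assuming only that the present moment is uniformly distributed over the total duration of AI research (the function name is just for illustration):

```python
# A minimal sketch of Gott's delta-t argument, assuming the present moment
# is uniformly distributed over the total duration of AI research.
def gott_remaining_years(elapsed_years, confidence):
    # With probability `confidence`, at least a fraction (1 - confidence) of the
    # total duration has already elapsed, so the remaining duration is at most
    # elapsed * confidence / (1 - confidence).
    return elapsed_years * confidence / (1.0 - confidence)

elapsed = 2015 - 1950  # 65 years of AI research so far
for c in (0.50, 0.95):
    remaining = gott_remaining_years(elapsed, c)
    print("%.0f%% confidence: AI within %.0f years (by %.0f)" % (100 * c, remaining, 2015 + remaining))
# 50% confidence: AI within 65 years (by 2080)
# 95% confidence: AI within 1235 years (roughly the 1300 above)
```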

But if we include the exponential growth of AI research in this reasoning (the same way as in the Doomsday argument, where we use birth rank instead of time and thus take population growth into account), we get a much earlier predicted date.

We can get data on the growth of AI research from Luke’s post:

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”

From this we can conclude that the doubling time in AI research is five to ten years (updating on the recent boom in neural networks suggests it is again around five years).

This means that during the next five years more AI research will be conducted than in all the previous years combined (with a five-year doubling time, each new doubling period produces roughly as much research as all previous periods together).

If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
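A minimal sketch of this adjusted calculation, on my reading of the argument: the uniform assumption is applied to cumulative research volume rather than to calendar time, and the answer is then converted back into years using the assumed doubling time:

```python
import math

# A minimal sketch of the growth-adjusted argument, assuming the present moment is
# uniformly distributed over cumulative research volume (not calendar time), and
# that research output keeps doubling every `doubling_years`.
def adjusted_remaining_years(doubling_years, confidence):
    # Remaining research volume is at most k = confidence / (1 - confidence) times
    # the volume produced so far; producing k times the past volume takes about
    # doubling_years * log2(1 + k) calendar years under continued doubling.
    k = confidence / (1.0 - confidence)
    return doubling_years * math.log2(1.0 + k)

for c in (0.50, 0.95):
    print("%.0f%% confidence: AI within ~%.1f years" % (100 * c, adjusted_remaining_years(5, c)))
# 50% confidence: AI within ~5.0 years
# 95% confidence: AI within ~21.6 years (with a 5-year doubling time; close to
# the 15-20 year range above)
```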

This conclusion itself depends on several assumptions: 

•   AI is possible

•   The exponential growth of AI research will continue 

•   The Copernican principle has been applied correctly.

 

Interestingly, this coincides with other methods of predicting AI timing: 

•   Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)

•   Surveys of experts in the field

•   Predictions of the Singularity based on extrapolation of the acceleration of history (Forrester – 2026, Panov-Skuns – 2015-2020)

•   Brain emulation roadmap

•   Predictions of when computer power will reach brain equivalence

•   Plans of major companies

 

It is clear that this implementation of the Copernican principle may have many flaws:

1. One possible counterargument here is something akin to Murphy's law, specifically the observation that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to many competing projects, but the field of AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, unlike in the Doomsday argument Gott created in 1993, and I probably cannot apply the method to a time before the argument became known to me.

3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.

 

Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting how long Broadway shows would keep running. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?

 

 

17 comments

Comments sorted by top scores.

comment by gjm · 2015-11-04T12:06:09.678Z · LW(p) · GW(p)

I suggest that rather than putting "AI is possible" and "exponential growth of research will continue" in as assumptions, it would be better to adjust the conclusion: 95% probability that by 2035 the exponential growth of human AI research will have stopped. This could be (1) because it produced a strongly superhuman AI and declared its job complete, or (2) because we found good reason to believe that AI is actually impossible, or (3) because we found other more exciting things to work on, or (4) because there weren't enough resources to keep the exponential growth going, or (etc.).

I think this framing is better because it emphasizes that there are lots of ways for exponential growth in AI research to stop [EDITED to add: or to slow substantially] other than achieving all the goals of such research.

Replies from: passive_fist, turchin
comment by passive_fist · 2015-11-04T22:44:42.944Z · LW(p) · GW(p)

The exponential growth of ML research may already be slowing.

Here's the number of paper hits from the keyword "machine learning":

http://i.imgur.com/jezwBhV.png

And here's the number of paper hits from the keyword "pattern recognition":

http://i.imgur.com/Sor5seJ.png

(Don't mind the tiny value for 2016; these are papers that are due to be published next year, and obviously that year's data has not been collected yet!)

Source: scopus, plotted with Gadfly

If I had to guess, I'd say we've already reached the limit of diminishing returns when it comes to the ratio of amount of material you have to learn / amount you can contribute. Research is hard.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-11-08T16:36:23.984Z · LW(p) · GW(p)

This is interesting, but I wonder how much of it is just shifts in naming.

What does the graph for say deep learning look like, or neural nets?

Replies from: passive_fist
comment by passive_fist · 2015-11-08T20:27:29.373Z · LW(p) · GW(p)

I don't know. You can plot the data for yourself.

comment by turchin · 2015-11-04T12:14:02.591Z · LW(p) · GW(p)

Yes, but we need to add "Humanity goes extinct before this date", which is also possible. ((( A sufficiently large catastrophe, like a supervirus or nuclear war, could prevent AI creation.

Replies from: gjm
comment by gjm · 2015-11-04T13:34:59.467Z · LW(p) · GW(p)

That would be another way for exponential growth in human AI research to stop, yes. You can think of it as one of the options under "(etc.)", or as a special case of "not enough resources".

comment by freyley · 2015-11-06T17:01:14.520Z · LW(p) · GW(p)

75% probability that the following things will be gone by:

LessWrong: 2020
Email: 2135
The web: 2095
Y Combinator: 2045
Google: 2069
Microsoft: 2135
USA: 2732
Britain: 4862

These don't seem unreasonable.

I'm not sure that this method works with something that doesn't exist coming into existence. Would we say that we expect a 75% chance that someone will solve the problems of the EmDrive by 2057? That we'll have seasteading by 2117?
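A minimal sketch of the heuristic that seems to generate these dates (the founding years below are assumptions, not stated in the comment; with them it reproduces the Y Combinator, Microsoft and USA figures above):

```python
# A minimal sketch of the 75% rule: with 75% confidence we are past the first
# quarter of a thing's lifetime, so it should be gone by founding + 4 * current_age.
# The founding years used below are assumptions, not taken from the comment.
def gone_by(founded, now=2015, confidence=0.75):
    age = now - founded
    return founded + age / (1.0 - confidence)

for name, founded in [("Y Combinator", 2005), ("Microsoft", 1975), ("USA", 1776)]:
    print("%s: %.0f" % (name, gone_by(founded)))
# Y Combinator: 2045, Microsoft: 2135, USA: 2732
```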

Replies from: V_V, Good_Burning_Plastic, turchin
comment by V_V · 2015-11-07T14:49:29.401Z · LW(p) · GW(p)

I can't see any plausible reason to predict that Microsoft will last longer than Google or that Britain will last longer than the USA.

In general, I tend to assume that recent history is more relevant to future prediction than older history, a sort of generalized informal Markov assumption if you wish; therefore, trying to predict how long things will last based only on their age is likely to yield incorrect results.

comment by Good_Burning_Plastic · 2015-11-07T13:00:34.040Z · LW(p) · GW(p)

These don't seem unreasonable.

I'd give Less Wrong and e-mail substantially more than 25% chance of surviving to 2020 and 2135 respectively in some form, and the US a bit less than 25% chance of surviving to 2732. (But still within the same ballpark -- not bad for such a crude heuristic.)

comment by turchin · 2015-11-06T21:36:56.446Z · LW(p) · GW(p)

I think it should work if we see a clear effort to create something physically possible. In the case of the EmDrive it may be proved that it is impossible. (But NASA just claimed that its new version of the EmDrive seems to work :) In the case of seasteading, I think it is quite possible and most likely will be created during the 21st century.

We could also use this logic to estimate the next time nuclear weapons will be used in war, based on the 1945 date. It gives 75 per cent for the next 105 years.

But if we use a 75 per cent interval, it also means that 1 in 4 predictions will be false. So LessWrong may survive )))

comment by HungryHobo · 2015-11-04T12:38:41.028Z · LW(p) · GW(p)

Exponential growth? There are more publications, but they're on less revolutionary things.

AI is hard. The field can go years without any really massive revolutionary breakthroughs.

When the field was young and there was a lot of low-hanging fruit, there were lots of breakthroughs in quick succession from a small number of people, which made people more optimistic than was warranted.

comment by [deleted] · 2015-11-04T15:42:56.404Z · LW(p) · GW(p)

AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during the period of AI research, then there is a 50% probability that AI will be created within the next 65 years, i.e. by 2080. Not very optimistic. Further, the probability of its creation within the next 1300 years is 95 per cent. So we get a rather vague prediction that AI will almost certainly be created within the next 1300 years, and few people would disagree with that.

This does not necessarily suggest that AI will exist with 95% certainty by that time, but rather that the period of AI research like what has gone on since 1950 will have stopped by then, success or not.

Replies from: turchin
comment by turchin · 2015-11-04T15:44:54.017Z · LW(p) · GW(p)

True. Several possible reasons for that were discussed in the comment below.

comment by JoshuaZ · 2015-11-06T19:35:38.472Z · LW(p) · GW(p)

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.

I'm not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.

Replies from: turchin
comment by turchin · 2015-11-06T21:30:03.729Z · LW(p) · GW(p)

Many of them may be predicted using the same logic. For example, we may try to estimate the next time nuclear weapons will be used in war, based on the fact that they were last used in 1945. It results in a 75 per cent probability for the next 105 years. See also the comment below.

comment by turchin · 2015-11-04T12:19:08.996Z · LW(p) · GW(p)

Also, one more idea for applying the Copernican principle to AI: I live at a (random moment of the) period of time during which AI is technically possible (we have powerful computers) but before it has actually been created. If we assume that the technical possibility of AI was reached around 2000, we come to a similar conclusion as in the OP. But in fact we can't know in advance how much computing power AI needs, so this logic is more circular.

comment by gurugeorge · 2015-11-13T18:33:54.187Z · LW(p) · GW(p)

I dunno, isn't this just a nerdy version of numerology?