Greatest Lower Bound for AGI

post by Michaël Trazzi (mtrazzi) · 2019-02-05T20:17:24.675Z · score: 8 (4 votes) · LW · GW · 14 comments

This is a question post.

(Note: I assume that the time between AGI and an intelligence explosion is an order of magnitude shorter than the time between now and the first AGI. Therefore, I may refer to AGI and an intelligence explosion interchangeably.)

Take a grad student deciding whether to start a PhD (~3-5 years). The prospect of an intelligence explosion within 10 years might make him change his mind.

More generally, estimating a scientifically sound infimum (greatest lower bound) for AGI timelines would favor coordination and clear thinking.

My baselines for lower bounds on AGI have been optimists' estimates. I first stumbled upon the concept of the singularity through this documentary, where Ben Goertzel asserts in 2009 that we could have a positive singularity in 10 years "if the right amount of effort is expended in the right direction. If we really really try" (I later realized that he had made a similar statement in 2006).

Ten years after Goertzel's statements, I'm still confused about how long it would take humanity to reach AGI given global coordination. This leads to this post's question:

According to your model, in which year will the probability of AGI arriving within that year (between January and December) first reach 1%, and why?

I'm especially curious about arguments that don't (only) rely on compute trends.


EDIT: The first answers seem to converge on a value between 2019 and 2021. This surprises me: I think that outside of the AI Safety bubble, AI researchers would assign less than a 1% chance to AGI arriving in under 10 years.

I think my confusion about short timelines comes from the dissonance between estimates in AI Alignment research and the intuition of top AI researchers. In particular, I vividly remember a thread with Yann Le Cun where he confidently dismissed short timelines, comment after comment.

My follow-up question would therefore be:

"What is an important part of your model you think top ML researchers (such as Le Cun) are missing?"

answer by rohinmshah · 2019-02-05T22:22:34.096Z · score: 13 (6 votes)

Given the sheer amount of effort DeepMind and OpenAI are putting into the problem, and the fact that what they are working on need not be clear to us, and the fact that forecasting is hard, I think it's hard to place less than 1% on short timelines. You could justify less than 1% on 2019, maybe even 2020, but you should probably put at least 1% on 2021.

(This is assuming you have no information about DeepMind or OpenAI besides what they publish publicly.)

14 comments

Comments sorted by top scores.

comment by avturchin · 2019-02-05T21:29:15.970Z · score: 6 (4 votes) · LW · GW

2019, based on anthropic reasoning. We are randomly located between the beginning of AI research in 1956 and the moment of AGI. 1956 was 62 years ago, which implies a 50 per cent probability of creating AGI within the next 62 years, according to Gott's equation. This is roughly equal to 50/62 = 0.81 per cent yearly probability.
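
A minimal sketch of this arithmetic, assuming 2018 as the reference year and the uniform-position assumption behind Gott's argument (both assumptions are for illustration only):

```python
# Gott-style estimate: if we sit at a uniformly random point between the start
# of AI research (1956) and the arrival of AGI, then with probability 1/2 the
# remaining time is at most the time already elapsed.
elapsed = 2018 - 1956          # 62 years of AI research so far (assumed "now" = 2018)
p_within_elapsed = 0.5         # P(AGI within the next `elapsed` years)

# Spreading that 50% evenly over the next 62 years gives a crude per-year
# probability -- the comment's 50/62 figure.
yearly_probability = p_within_elapsed / elapsed
print(f"~{yearly_probability:.2%} per year")   # ~0.81% per year
```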

comment by Unnamed · 2019-02-06T09:02:52.496Z · score: 3 (2 votes) · LW · GW

I think you mean 50/62 = 0.81?

comment by avturchin · 2019-02-06T10:25:08.770Z · score: 2 (2 votes) · LW · GW

Oops, yes.

comment by Michaël Trazzi (mtrazzi) · 2019-02-05T23:06:19.435Z · score: 3 (2 votes) · LW · GW

Could you explain Gott's equation in a bit more detail? I'm not familiar with it.

Also, do you think those 62 years are still meaningful once we account for AI winters or exponential technological progress?

PS: I think you commented instead of giving an answer (different things in question posts)

comment by avturchin · 2019-02-06T11:44:38.093Z · score: 1 (1 votes) · LW · GW

Gott's equation can be found on Wikipedia. The main idea is that if I am observing some external process at a random moment, its current age can be used to estimate how much longer it will exist, since I am most likely observing it somewhere in the middle of its existence. Gott himself used this logic as a student to predict the fall of the Berlin Wall, and it did fall within the predicted time, by which point Gott was already a prominent scientist; his article about the method was later published in Nature.

If we account for the exponential growth of the AI field, and assume that I am randomly selected from the set of all AI researchers, the end will be much nearer - but all of this becomes more speculative, as accounting for AI winters will dilute the prediction, etc.
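
For reference, a short formalization of the underlying delta-t argument, assuming the standard uniform-position setup (an illustrative sketch, not part of the original comment):

```latex
% Let $t_p$ be the observed age of a process and $t_f$ its remaining lifetime.
% The Copernican assumption is that our vantage point is uniform over the total lifetime:
\[
  f = \frac{t_p}{t_p + t_f} \sim \mathrm{Uniform}(0, 1).
\]
% Then
\[
  P(t_f \le t_p) = P\!\left(f \ge \tfrac{1}{2}\right) = \tfrac{1}{2},
  \qquad
  \frac{t_p}{39} \le t_f \le 39\, t_p \ \text{with 95\% confidence}.
\]
% With $t_p = 62$ years of AI research, this reproduces the "50 per cent within
% the next 62 years" estimate above.
```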

comment by Gurkenglas · 2019-02-06T12:58:30.989Z · score: 1 (1 votes) · LW · GW

This anthropic evidence gives you a likelihood function. If you want a probability distribution, you additionally need a prior probability distribution.

comment by avturchin · 2019-02-06T13:57:08.147Z · score: 1 (1 votes) · LW · GW

Here we use the assumption that the probability of AI creation is distributed uniformly along the interval of AI research - which is obviously false, as it should grow towards the end, maybe exponentially. If we instead assume that the field is doubling, say, every 5 years, Copernican reasoning tells us that if I am randomly selected from the members of this field, the field will end within the next doubling with something like 50 per cent probability, and within 2 doublings with 75 per cent probability.

TL;DR: anthropics + exponential growth = AGI by 2030.
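
A minimal sketch of that back-of-the-envelope calculation, assuming the comment's 5-year doubling time and 2020 as a reference year (the latter is an illustrative assumption):

```python
import math

doubling_time = 5   # assumed doubling time of the AI field, in years (from the comment)
now = 2020          # reference year; an assumption for this sketch

# Copernican step: if I am a uniformly random member of everyone who will ever
# work in the field, then with probability p the field's final size is at most
# 1/(1-p) times its current size, i.e. log2(1/(1-p)) more doublings remain.
for p in (0.5, 0.75):
    doublings_left = math.log2(1 / (1 - p))   # 1 doubling for 50%, 2 for 75%
    year = now + doublings_left * doubling_time
    print(f"{p:.0%} probability the field ends (AGI arrives) by ~{year:.0f}")
# -> 50% by ~2025, 75% by ~2030
```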

comment by Gurkenglas · 2019-02-06T02:31:07.327Z · score: 1 (1 votes) · LW · GW

Proves too much: This would give ~the same answer for any other future event that marks the end of some duration that started in the last century.

comment by Vaniver · 2019-02-06T02:55:43.436Z · score: 3 (1 votes) · LW · GW

It's a straightforward application of the Copernican principle. Of course, that is not always the best approach.

comment by avturchin · 2019-02-06T10:23:30.965Z · score: 1 (1 votes) · LW · GW

BTW, this particular paper is wrong, as can be seen from his bet about predicting the ages of OLD dogs. The doomsday argument is statistical, so it can't be refuted by nitpicking an example with a specific age.

comment by Charlie Steiner · 2019-02-05T23:24:41.192Z · score: 1 (2 votes) · LW · GW

2016.


Quantum immortality!

(jk)