Greatest Lower Bound for AGI

post by Michaël Trazzi (mtrazzi) · 2019-02-05T20:17:24.675Z · score: 9 (5 votes) · LW · GW · 17 comments

This is a question post.



(Note: I assume that the timeline between AGI and an intelligence explosion is an order of magnitude shorter than the timeline between now and the first AGI. Therefore, I might refer to AGI and intelligence explosion interchangeably.)

Take a grad student deciding whether to start a PhD (~3-5 years). The promise of an intelligence explosion within 10 years might change their mind.

More generally, estimating a scientifically sound infimum for AGI would favor coordination and clear thinking.

My baselines for lower bounds on AGI have been optimists' estimates. I actually stumbled upon the concept of the singularity through this documentary, where Ben Goertzel asserts in 2009 that we can have a positive singularity in 10 years "if the right amount of effort is expended in the right direction. If we really really try" (I later realized that he made a similar statement in 2006).

Ten years after Goertzel's statements, I'm still confused about how long it would take humanity to reach AGI in a context of global coordination. This leads me to this post's question:

According to your model, in which year will we reach a 1% probability of AGI (between January and December), and why?

I'm especially curious about arguments that don't (only) rely on compute trends.

EDIT: The first answers seem to agree on some value between 2019 and 2021. This surprises me, as I think that outside the AI Safety bubble, AI researchers would assign less than a 1% chance to AGI arriving within 10 years.

I think my confusion about short timelines comes from the dissonance between estimates in AI Alignment research and the intuitions of top AI researchers. In particular, I vividly remember a thread with Yann LeCun where he confidently dismissed short timelines, comment after comment.

My follow-up question would therefore be:

"What is an important part of your model that you think top ML researchers (such as LeCun) are missing?"


answer by rohinmshah · 2019-02-05T22:22:34.096Z · score: 13 (6 votes) · LW · GW

Given the sheer amount of effort DeepMind and OpenAI are putting into the problem, and the fact that what they are working on need not be clear to us, and the fact that forecasting is hard, I think it's hard to place less than 1% on short timelines. You could justify less than 1% on 2019, maybe even 2020, but you should probably put at least 1% on 2021.

(This is assuming you have no information about DeepMind or OpenAI besides what they publish publicly.)

comment by Michaël Trazzi (mtrazzi) · 2019-02-05T23:14:48.666Z · score: 7 (3 votes) · LW · GW

I intuitively agree with your answer. Avturchin also commented with something similar (he said 2019, but for different reasons). Therefore, I think I might not be communicating my confusion clearly.

I don't remember exactly when, but there were some debates between Yann LeCun and AI Alignment folks in a Fb group (maybe the "open" AI Safety discussion group, a few months ago). What struck me was how confident LeCun was about long timelines. I think, for him, the 1% would be at least 10 years away. How do you explain that someone with access to private information (e.g. at FAIR) might have timelines so different from yours?

Meta: Thanks for expressing your confidence levels clearly through your writing with "hard", "maybe" and "should": it's very effective.

EDIT: Le Cun thread:

comment by rohinmshah · 2019-02-06T08:08:49.423Z · score: 10 (6 votes) · LW · GW

Either he's not trying to be calibrated, or he's not good at being calibrated, probably the former. Like, my inside view also screams fairly loudly that AGI in 2020 is never going to happen -- but assigning 99% confidence to my inside view is far too much confidence. I expect LeCun is mostly trying to communicate what his inside view is confident about.

There are lots of good non-alignment ML researchers whose timelines are much much shorter (including many working at DeepMind and OpenAI). Of course, it could be that they are the ones who are wrong and LeCun is right, but I don't see a particularly compelling reason to make that judgment.


Comments sorted by top scores.

comment by avturchin · 2019-02-05T21:29:15.970Z · score: 7 (5 votes) · LW · GW

2019, based on anthropic reasoning. We are randomly located between the beginning of AI research in 1956 and the moment of AGI. 1956 was 62 years ago, which implies a 50 per cent probability of creating AGI in the next 62 years, according to Gott's equation. This is roughly equal to a 50/62 = 0.81 per cent yearly probability.
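The arithmetic in this answer can be sketched directly (a minimal illustration of the numbers above; the 2018 reference year matches the "62 years ago" in the comment):

```python
# Sketch of the Copernican estimate above: observed at a random point in its
# lifetime, AI research (started 1956) has a ~50% chance of ending (i.e. AGI)
# within another interval equal to its elapsed age.
elapsed_years = 2018 - 1956            # 62 years of AI research so far
p_within_elapsed = 0.5                 # Gott: 50% chance of AGI in the next 62 years
yearly_probability = p_within_elapsed / elapsed_years
print(f"{yearly_probability:.2%} per year")  # ~0.81% per year
```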

comment by Unnamed · 2019-02-06T09:02:52.496Z · score: 4 (3 votes) · LW · GW

I think you mean 50/62 = 0.81?

comment by avturchin · 2019-02-06T10:25:08.770Z · score: 3 (3 votes) · LW · GW

ups, yes.

comment by Michaël Trazzi (mtrazzi) · 2019-02-05T23:06:19.435Z · score: 4 (3 votes) · LW · GW

Could you explain Gott's equation in a bit more detail? I'm not familiar with it.

Also, do you think those 62 years are meaningful if we account for AI winters or exponential technological progress?

PS: I think you commented instead of giving an answer (different things in question posts)

comment by avturchin · 2019-02-06T11:44:38.093Z · score: 2 (2 votes) · LW · GW

Gott's equation can be found on Wikipedia. The main idea is that if I randomly observe some external process, its age can be used to estimate its remaining lifetime, as I most likely observe it somewhere in the middle of its existence. Gott himself used this logic as a student to predict the fall of the Berlin Wall, and the Wall did fall within the predicted timing, by which point Gott was already a prominent scientist and his article about the method had been published in Nature.
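For reference, Gott's delta-t argument can be written out numerically. The sketch below is my own illustration, not code from the thread; the 50% interval (t/3 to 3t) is the one usually cited for the Berlin Wall, which Gott visited in 1969 when it was 8 years old:

```python
def gott_interval(age, confidence=0.5):
    # With probability `confidence` we observe the process in the middle
    # `confidence` fraction of its lifetime; solving for the remaining
    # duration gives bounds of age*(1-c)/(1+c) and age*(1+c)/(1-c).
    low = age * (1 - confidence) / (1 + confidence)
    high = age * (1 + confidence) / (1 - confidence)
    return low, high

# Berlin Wall, 8 years old when Gott saw it in 1969: the 50% interval is
# roughly 2.7 to 24 more years; it stood another 20, inside the interval.
low, high = gott_interval(8)
print(f"{low:.1f} to {high:.0f} years")  # 2.7 to 24 years
```

With `confidence=0.95` the same formula gives the familiar t/39 to 39t interval.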

If we account for the exponential growth of AI and assume that I am randomly selected from all AI researchers, the end will be much nearer - but it all becomes more speculative, as accounting for AI winters dilutes the prediction, etc.

comment by Gurkenglas · 2019-02-06T12:58:30.989Z · score: 1 (1 votes) · LW · GW

This anthropic evidence gives you a likelihood function. If you want a probability distribution, you additionally need a prior probability distribution.

comment by avturchin · 2019-02-06T13:57:08.147Z · score: 1 (1 votes) · LW · GW

Here we used the assumption that the probability of AI creation is distributed uniformly along the interval of AI research - which is obviously false, as it should grow towards the end, maybe exponentially. If we assume that the field is doubling, say, every 5 years, Copernican reasoning tells us that if I am randomly selected from the members of this field, the field will end after the next doubling with something like 50 per cent probability, and with 75 per cent probability after 2 doublings.

TL;DR: anthropic + exponential growth = AGI by 2030.
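The doubling arithmetic in this comment can be sketched as follows (my own illustration; the 5-year doubling time is the comment's assumption):

```python
# Sketch of the comment's reasoning: if the field doubles every T years, about
# half of all researcher-years so far fall in the most recent doubling period.
# Copernican self-sampling then puts the field's end within n more doublings
# with probability 1 - 0.5**n.
T = 5  # assumed doubling time (years), per the comment
for n in (1, 2, 3):
    p_end = 1 - 0.5 ** n
    print(f"P(end within {n * T} years) ≈ {p_end:.0%}")
```

Writing in 2019, two doublings (75%) lands around 2029, hence "AGI by 2030".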

comment by Gurkenglas · 2019-02-06T02:31:07.327Z · score: 1 (1 votes) · LW · GW

Proves too much: This would give ~the same answer for any other future event that marks the end of some duration that started in the last century.

comment by Vaniver · 2019-02-06T02:55:43.436Z · score: 4 (2 votes) · LW · GW

It's a straightforward application of the Copernican principle. Of course, that is not always the best approach.

comment by avturchin · 2019-02-06T10:23:30.965Z · score: 0 (2 votes) · LW · GW

BTW, this particular paper is wrong, as can be seen from his bet about predicting the longevity of OLD dogs. The doomsday argument is statistical, so it can't be refuted by nitpicking an example with a specific age.

comment by TheWakalix · 2019-02-19T21:42:30.392Z · score: 0 (2 votes) · LW · GW

The old dogs example illustrates that you can do far better than Gott's equation with further information, such as "this dog is one of the oldest dogs in the sample".

If you want to say "our ultimate prior should be Copernican," that's fine, but that prior should be adjusted heavily by any available evidence.

comment by avturchin · 2019-02-20T08:33:48.441Z · score: 1 (1 votes) · LW · GW

What you said is true, but it seems to me that that isn't what Caves meant. His bet was intended to demonstrate the general weakness of Copernican logic.

comment by Charlie Steiner · 2019-02-05T23:24:41.192Z · score: 2 (3 votes) · LW · GW


Quantum immortality!


comment by TheWakalix · 2019-02-19T21:34:58.091Z · score: 1 (1 votes) · LW · GW

Something to keep in mind: in many models, AGI will be surprising. To everyone outside the successful team, it will probably seem impossible right up until it is done. If you think this is true, then the Outside View recommends assigning a small probability to AGI happening in the next few years - even if that seems impossible, because "seeming impossible" isn't reliable evidence that it's not imminent.

What I would say ML researchers are missing is that we don't have a good enough model of AGI to know how far we have left with high confidence. We know we're missing something, but not how much we're missing.