post by [deleted]

Comments sorted by top scores.

comment by avturchin · 2019-02-05T21:29:15.970Z · LW(p) · GW(p)

2080, based on anthropic reasoning. We are randomly located between the beginning of AI research in 1956 and the moment of AGI. 1956 was 62 years ago, which implies a 50 per cent probability of creating AGI in the next 62 years, according to Gott's equation. This is roughly equal to a yearly probability of 50/62 = 0.81 per cent.
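
A minimal sketch of this arithmetic, assuming only that our observation point is uniformly distributed over the interval from 1956 to AGI (the variable names and the simulation itself are illustrative additions, not part of the comment):

```python
import numpy as np

# Gott's delta-t argument as a toy simulation: if the elapsed fraction f of the
# total interval is uniform on (0, 1), the remaining time is t_past * (1 - f) / f.
rng = np.random.default_rng(0)
t_past = 62                               # years of AI research so far (since 1956)

f = rng.uniform(0.0, 1.0, 1_000_000)      # elapsed fraction of the total interval
remaining = t_past * (1.0 - f) / f        # implied years still to go

print(np.mean(remaining <= t_past))       # ~0.50: 50 per cent chance of AGI within another 62 years
print(0.5 / t_past)                       # ~0.0081: the rough "0.81 per cent per year" figure
```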

Replies from: Unnamed, mtrazzi, Gurkenglas
comment by Unnamed · 2019-02-06T09:02:52.496Z · LW(p) · GW(p)

I think you mean 50/62 = 0.81?

Replies from: avturchin
comment by avturchin · 2019-02-06T10:25:08.770Z · LW(p) · GW(p)

Oops, yes.

comment by Michaël Trazzi (mtrazzi) · 2019-02-05T23:06:19.435Z · LW(p) · GW(p)

Could you explain Gott's equation in a bit more detail? I'm not familiar with it.

Also, do you think that those 62 years are meaningful if we think about AI winters or exponential technological progress?

PS: I think you commented instead of giving an answer (different things in question posts)

Replies from: avturchin
comment by avturchin · 2019-02-06T11:44:38.093Z · LW(p) · GW(p)

Gott's equation can be found on Wikipedia. The main idea is that if I am randomly observing some external process, its age can be used to estimate its remaining time of existence, since most likely I am observing it somewhere in the middle of its existence. Gott himself used this logic to predict the fall of the Berlin Wall when he was a student, and the wall did fall within the predicted time frame, by which point Gott was already a prominent scientist and his article about the method had been published in Nature.
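
For reference, the formula being referred to (this is the standard delta-t argument; the notation is mine, not avturchin's): if a process of unknown total lifetime $T$ is observed at a uniformly random moment of that lifetime, and its age so far is $t_{\text{past}}$, then

$$f = \frac{t_{\text{past}}}{T} \sim \mathrm{Uniform}(0,1), \qquad t_{\text{future}} = T - t_{\text{past}} = t_{\text{past}}\,\frac{1-f}{f},$$

so the median remaining lifetime equals the age observed so far, and with 95 per cent confidence $t_{\text{past}}/39 \le t_{\text{future}} \le 39\,t_{\text{past}}$.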

If we account for the exponential growth of AI research, and assume that I am randomly chosen from all AI researchers, the end will be much nearer - but this all becomes more speculative, as accounting for AI winters will dilute the prediction, etc.

Replies from: Gurkenglas
comment by Gurkenglas · 2019-02-06T12:58:30.989Z · LW(p) · GW(p)

This anthropic evidence gives you a likelihood function. If you want a probability distribution, you additionally need a prior probability distribution.
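
A small sketch of what that means here, under the usual uniform-sampling assumption (the grid and the two priors below are examples I picked, not anything Gurkenglas specified): the anthropic observation fixes the likelihood p(data | T) ∝ 1/T for total durations T ≥ 62 years, and the resulting distribution over T depends heavily on the prior.

```python
import numpy as np

# Likelihood of observing AI research at age 62, given that we sit at a
# uniformly random point of its total lifetime T, is 1/T for T >= 62.
# The posterior over T then depends on the prior we multiply it by.
t_past = 62.0
T = np.linspace(t_past, 10_000.0, 200_000)   # grid of possible total durations (years)
likelihood = 1.0 / T

def prob_T_below(prior, cutoff):
    post = likelihood * prior
    post /= post.sum()                       # normalize on the uniform grid
    return post[T <= cutoff].sum()

# A log-uniform prior p(T) ~ 1/T reproduces Gott's "50% within another 62 years".
print(prob_T_below(1.0 / T, 2 * t_past))          # ~0.5

# A flat prior on [62, 10000] years gives a noticeably smaller probability.
print(prob_T_below(np.ones_like(T), 2 * t_past))  # ~0.14
```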

Replies from: avturchin
comment by avturchin · 2019-02-06T13:57:08.147Z · LW(p) · GW(p)

Here we use the assumption that the probability of AI creation is distributed uniformly along the interval of AI research - which is obviously false, as it should grow toward the end, maybe exponentially. If we assume that the field is doubling, say, every 5 years, Copernican reasoning tells us that if I am randomly selected from the members of this field, the field will end within the next doubling with something like 50 per cent probability, and within 2 doublings with 75 per cent probability.

TL;DR: anthropics + exponential growth = AGI by 2030.
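
A toy version of that calculation, with a made-up total length for the field (the result barely depends on it) and 5-year doubling periods as in the comment:

```python
import numpy as np

# Assume the field lasts N doubling periods of ~5 years, with the number of
# researchers doubling each period, and that "I" am a uniformly random member
# of everyone who ever works in it.
rng = np.random.default_rng(0)
N = 20                                       # total number of doubling periods (arbitrary)
cohort = 2.0 ** np.arange(N + 1)             # researchers added in each period
p = cohort / cohort.sum()

k = rng.choice(N + 1, size=1_000_000, p=p)   # which period a random researcher lands in
periods_left = N - k

print(np.mean(periods_left == 0))            # ~0.5: the field ends within the current doubling (~5 years)
print(np.mean(periods_left <= 1))            # ~0.75: within two doublings (~10 years), i.e. roughly by 2030
```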

comment by Gurkenglas · 2019-02-06T02:31:07.327Z · LW(p) · GW(p)

Proves too much: This would give ~the same answer for any other future event that marks the end of some duration that started in the last century.

Replies from: Vaniver
comment by Vaniver · 2019-02-06T02:55:43.436Z · LW(p) · GW(p)

It's a straightforward application of the Copernican principle. Of course, that is not always the best approach.

Replies from: avturchin
comment by avturchin · 2019-02-06T10:23:30.965Z · LW(p) · GW(p)

BTW, this particular paper is wrong, as can be seen from his bet about predicting the remaining lifespan of OLD dogs. The doomsday argument is statistical, so it can't be refuted by a nitpicked example with a specific age.

Replies from: TheWakalix
comment by TheWakalix · 2019-02-19T21:42:30.392Z · LW(p) · GW(p)

The old dogs example illustrates that you can do far better than Gott's equation with further information, such as "this dog is one of the oldest dogs in the sample".

If you want to say "our ultimate prior should be Copernican," that's fine, but that prior should be adjusted heavily by any available evidence.

Replies from: avturchin
comment by avturchin · 2019-02-20T08:33:48.441Z · LW(p) · GW(p)

What you said is true, but it seems to me that this isn't what Caves meant: his bet was intended to demonstrate the general weakness of Copernican logic.

comment by TheWakalix · 2019-02-19T21:34:58.091Z · LW(p) · GW(p)

Something to keep in mind: in many models, AGI will be surprising. To everyone outside the successful team, it will probably seem impossible right up until it is done. If you think this is true, then the Outside View recommends assigning a small probability to AGI happening in the next few years - even if that seems impossible, because "seeming impossible" isn't reliable information that it's not imminent.

What I would say ML researchers are missing is that we don't have a good enough model of AGI to know how far we have left with high confidence. We know we're missing something, but not how much we're missing.

comment by Charlie Steiner · 2019-02-05T23:24:41.192Z · LW(p) · GW(p)

2016.


Quantum immortality!

(jk)