To what extent is your AGI timeline bimodal or otherwise "bumpy"?

post by jchan · 2022-05-16T17:42:54.281Z · LW · GW · 1 comment

This is a question post.

For example, you might think:

It's likely that AGI will be invented before 2050; however, if it isn't, then that must mean either that AGI is impossible, or that it requires much more advanced technology than I currently think it does, or else that there was some kind of large-scale civilizational collapse in the meantime.

For that matter, any non-exponential distribution has this property: the exponential is the only memoryless continuous distribution, so under anything else the non-occurrence of the event by a certain time will change your expectation of when it happens going forward. I'm curious whether people think this is the case for AGI, and if so, why. (Also curious whether this question has been asked before.)
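
As a quick numerical check of that claim, here is a minimal sketch (the 30-year scale and the lognormal alternative are arbitrary stand-ins, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Exponential waiting time with a 30-year mean: memoryless, so having
# already waited 30 years leaves the expected remaining wait unchanged.
exp_wait = rng.exponential(scale=30.0, size=n)
print(exp_wait.mean())                        # ~30
print(exp_wait[exp_wait > 30].mean() - 30)    # still ~30: no update

# A lognormal alternative (median 30 years): conditioning on "no event
# by year 30" shifts the expected remaining wait.
logn_wait = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)
print(logn_wait.mean())                       # ~34
print(logn_wait[logn_wait > 30].mean() - 30)  # ~17: the forecast moved
```

In the exponential case the conditional remaining wait matches the unconditional mean exactly; any other shape forces an update in one direction or the other.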

Answers

answer by p.b. · 2022-05-17T03:47:10.571Z · LW(p) · GW(p)

It's bumpy because either "normal" Deep Learning progress will get us there, or there is a big roadblock ahead that will require a major scientific breakthrough.

The Deep Learning scenario creates a bump within the next two decades, I would say.

Whole brain simulation could create another bump, but I don't know where.

The "major scientific breakthrough" scenario doesn't create a bump, because such a breakthrough could come at any time. It could've happened yesterday.

1 comment

comment by Shmi (shminux) · 2022-05-16T20:16:15.526Z · LW(p) · GW(p)

It's hard to come up with a reasonable probability distribution for a one-off event; it's not clear what the reference class might be. But my guess is that it would be some form of power law, because power laws are universal and scale-independent. No idea about the exponent, though.
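
A minimal sketch of what that scale independence looks like, assuming a classical Pareto form (the exponent 1.5 and the time units are arbitrary placeholders; the comment deliberately leaves the exponent open):

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Pareto: P(T > t) = (t_min / t) ** alpha for t >= t_min.
alpha, t_min = 1.5, 1.0
samples = t_min * (1.0 + rng.pareto(alpha, size=2_000_000))

# Scale independence: the chance of the wait stretching by another
# factor of 2 is the same whether you've already waited 10 units or 100.
for t in (10.0, 100.0):
    survived = samples[samples > t]
    print(t, (survived > 2 * t).mean())   # both ~ 2**-1.5 ≈ 0.354
```

One consequence ties back to the question: for alpha > 1 the expected remaining wait under a power law grows in proportion to the time already elapsed, so the longer the event hasn't occurred, the longer you should expect to keep waiting.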