What if AGI is near?

post by Wulky Wilkinsen · 2021-04-14T00:05:44.337Z · LW · GW · 5 comments

Consider the following observations:

Questions:

This deserves more serious thought: we should soberly consider near-term AGI as a looming possibility.

5 comments

comment by GeneSmith · 2021-04-14T01:47:11.249Z · LW(p) · GW(p)

I honestly don't know, and thinking about this fills me with despair. Every good solution requires time, and that seems to be the main thing we're short of.

Has anyone done serious research into what it would take to slow down progress in the field of AI? Could we just ban hardware improvements for a couple of decades and place a worldwide cap on total compute? I realize this would be incredibly unpopular and would require a majority of the world's population to understand how dangerous powerful AI will be. But one would think FLI or someone else would have started doing research into this area.

comment by Shmi (shminux) · 2021-04-14T01:54:52.929Z · LW(p) · GW(p)

Consider that, on Copernican grounds, "AGI is very near" probably means it has already happened (or, equivalently, that we are past the point of no return): the odds of living in the very special moment where timelines are short but it is not yet too late are very low. Not seeing an obvious AGI around likely means either that it is not very near, or that the take-off is slow rather than fast.
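As a minimal sketch of the arithmetic here (the window w and era length T below are purely illustrative assumptions, not claims): if observers are spread over an era of length T around the development of AGI, and the special window "timelines are short but it is not yet too late" lasts w, then under self-sampling a random observer lands in that window with probability roughly

\[
P(\text{in window}) \approx \frac{w}{T} = \frac{2\,\text{yr}}{100\,\text{yr}} = 0.02
\]

On numbers like these, being in the narrow pre-AGI window is much less likely than being elsewhere in the era, which is the sense in which "very near" most plausibly cashes out as "already past the threshold".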

Ironically, it's not Roko's basilisk that is an infohazard; it's the "AGI go foom!" idea that is.

comment by niplav · 2021-04-14T07:33:50.991Z · LW(p) · GW(p)

I want to clarify that "AGI go foom!" is not really a claim about how near AGI is, but about whether an AGI's development undergoes a discontinuity that accelerates the growth of its intelligence over time.

comment by Wulky Wilkinsen · 2021-04-14T03:31:56.082Z · LW(p) · GW(p)

I don't understand how the Copernican argument works. Being alive in the moment just before the first AGI exists is very unlikely, but surely being alive at any particular moment around the development of AGI is equally unlikely. If anything, you could argue that it's more likely we are in some kind of simulation than in base reality right before AGI takeoff. If that's not the point you're making, could you restate the argument?

comment by avturchin · 2021-04-14T08:43:34.367Z · LW(p) · GW(p)

Yes, it is a reversed doomsday argument: it is unlikely that the end is nigh.
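As a minimal formal rendering, under the usual self-sampling assumption (the fraction f is introduced here purely for illustration): if your birth rank n among the N observers who will ever live is uniformly distributed, then

\[
P\big(n > (1-f)N\big) = f
\]

so "the end is nigh", read as being among the last small fraction f of observers, is correspondingly unlikely. The standard doomsday argument runs the same calculation in the other direction, to conclude that you are probably not among the very first observers either.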