What if AGI is near?
post by Wulky Wilkinsen · 2021-04-14T00:05:44.337Z · LW · GW · 5 comments
Consider the following observations:
- The scaling hypothesis is probably true: we will likely continue to see great improvements in AI capabilities as model sizes increase.
- Sutskever has mentioned that models currently under development already show dramatic, qualitative improvements (e.g. becoming more multimodal) over those built in 2020.
- AI model sizes are likely to increase very quickly over the short term.
- NVIDIA’s CEO: “We expect to see models with greater than 100 trillion parameters by 2023.” A model several hundred times the size of GPT-3 (see the rough calculation after this list), given GPT-3’s already shocking performance, is scary to imagine, to say the least.
- Even if OpenAI is cautious and will not go ahead with potentially catastrophic projects (dubious), the wide availability and rapidly decreasing cost of efficient hardware, along with publicly available information about how to train GPT-like architectures, mean that some organization will achieve this scaling.
- We will likely not be able to solve AI alignment within the next few years, even if AI safety research were to speed up dramatically.
- There is no plausible way for near-term deep learning AIs to be anything other than badly misaligned.
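As a rough sanity check on the parameter comparison above, and assuming GPT-3's widely reported size of 175 billion parameters, a 100-trillion-parameter model would be roughly 570 times larger, i.e. a few hundred rather than a full thousand times GPT-3's size:

$$\frac{10^{14}}{1.75 \times 10^{11}} \approx 571$$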
Questions:
- What takeoff speeds are likely with large deep learning models, if they attain AGI? Are medium takeoffs more plausible because deep-learning-style AIs have fewer obvious “tweaks” or ways to recursively self-improve by altering their own code?
- The key extension of that question: how will society react to the presence of AGI? Will it shut the system down, halt development, and wait for safety?
- Perhaps takeoff speeds are moot beyond human level, because even a human-level AGI would have the wits to copy itself over the internet to computers all over the world, making its eradication impossible once it has come into existence. Would its creation therefore mean a slow but certain, inexorable death?
- Given near-term AGI, what options are available to reduce the risk of existential catastrophe? Would global nuclear war, or a similarly devastating event that halts technological progress, be the only thing that could stave off AGI annihilation in the near term?
More serious thought needs to be given to this scenario, treating it soberly as a looming possibility.
5 comments
comment by GeneSmith · 2021-04-14T01:47:11.249Z · LW(p) · GW(p)
I honestly don't know, and thinking about this fills me with despair. Every good solution requires time, and that seems to be the main thing we're short of.
Has anyone done serious research into what it would take to slow down progress in the field of AI? Could we just ban hardware improvements for a couple of decades and place a worldwide cap on total compute? I realize this would be incredibly unpopular and would require a majority of the world's population to understand how dangerous powerful AI will be. But one would think FLI or someone else would have started doing research into this area.
comment by Shmi (shminux) · 2021-04-14T01:54:52.929Z · LW(p) · GW(p)
Consider that "if AGI is very near" probably means that it's already happened (or, equivalently, that we are past the point of no return) on Copernican grounds, since the odds of living in a very special moment where the timelines are short but it's not too late yet are very low. Not seeing an obvious AGI around likely means that either it's not very near, or that the take-off is slow, not fast.
Ironically, it's not Roko's basilisk that is an infohazard; it's the "AGI go foom!" idea that is.
comment by Wulky Wilkinsen · 2021-04-14T03:31:56.082Z · LW(p) · GW(p)
I don't understand how the Copernican argument works. Being alive in the moment just before the first AGI exists is very unlikely, but surely it is also very unlikely to be alive at any moment around the development of AGI. If anything, you could argue that it's more likely we are in some kind of simulation than in base reality right before AGI takeoff. If that's not the point you're making, could you restate the argument?