What if AGI is near?
post by Wulky Wilkinsen · 2021-04-14
Consider the following observations:
- The scaling hypothesis is probably true: we will likely continue to see great improvements in AI capabilities as model sizes increase.
- Ilya Sutskever has said that models currently under development show dramatic qualitative improvements (e.g., becoming more multimodal) over those built in 2020.
- AI model sizes are likely to increase very quickly over the short term.
- NVIDIA’s CEO: “We expect to see models with greater than 100 trillion parameters by 2023.” A model more than 500 times the size of GPT-3, given GPT-3’s already shocking performance, is scary to imagine, to say the least.
- Even if OpenAI is cautious and refrains from potentially catastrophic projects (dubious), some organization will achieve scaling: efficient hardware is widely available and rapidly getting cheaper, and the information needed to train GPT-like architectures is public.
- We will likely not be able to solve AI alignment within the next few years, even if AI safety research were to speed up dramatically.
- There is no plausible path by which near-term deep learning AIs end up even remotely aligned.
- What takeoff speeds are likely if large deep learning models attain AGI? Are medium-speed takeoffs more plausible because deep learning systems offer fewer obvious “tweaks,” or fewer ways to recursively self-improve by editing their own code?
- The key extension of that question: how will society react to the presence of AGI? Will it even shut down, stop development, and wait for safety?
- Perhaps takeoff speeds are moot beyond human level: even a human-level AGI would have the wits to copy itself over the internet to computers all over the world, making its eradication impossible once it exists. Its creation would then mean a slow but certain, inexorable death.
- Given near-term AGI, what options are available to reduce the risk of existential catastrophe? Would global nuclear war, or a similarly devastating event that halts technological progress, be the only thing that could stave off AGI annihilation in the near term?
This deserves more serious thought, and solemn consideration as a looming possibility.