Dave Kasten's AGI-by-2027 vignette
post by davekasten · 2024-11-26T23:20:47.212Z · LW · GW · 8 comments
(Epistemic status: Very loosely held and generated in a 90-minute workshop led by @Daniel Kokotajlo [LW · GW], @Thomas Larsen [LW · GW], @elifland [LW · GW], and Jonas Vollmer at The Curve Conference; explores how it might happen, if it happens soon. I expect at least one "duh, that makes no sense" flaw to be discovered with any significant level of attention, which would require me to rethink some of this.)
Recently, at The Curve conference, I participated in a session that helped facilitate us writing AGI vignettes -- narratives of how we get to AGI. [1] I got feedback that folks thought it was a good vignette, so I'm publishing it here.
My scenario for Stage 1: runup to AGI
AGI is created in early- to mid-2027. It just works the first time.
Anthropic does it. In many worlds, it could have been OpenAI or Google or Meta or even small startups that figured out One Weird Trick, but in this world, it’s Anthropic. It’s a close-run race; in the months ahead of it, everyone knew each of the frontier labs was closing in on the finish line, and some of them released models or new features that were damn close, but Anthropic just got there first. Ultimately, the writer of this narrative is a recovering org practice consultant from McKinsey who thinks org culture matters a lot, and so in this world the crucial factor was that Anthropic staff just had the psychological safety [2] to navigate the terrifying go/no-go decisions on the last few development and deployment phases a little bit better and faster than the others. (In retrospect, that’s why Anthropic started pulling ahead in late 2024, though the lead was far too narrow to call definitive or clear at that point.)
Anthropic calls it Shannon.
The others were close, though. Google was probably 9-12 months away, despite all of their constant organizational chaos, because they got their shit together after their first down quarter led to a legendary screaming match at the all-hands. Meta had slowed down after the Trump administration started asking hard questions about open model weights getting shared with China, and the organizational tax of spinning up all of the APIs and hosting for their newly-closed models, as well as stapling CoT on top of Llama (they’d not wanted to take the inference-time hit back when the model was still open, especially since their users were building a bunch of different approaches and they weren’t sure which was best), meant they were 6 months behind. And OpenAI was just about 3 months behind.
All of the labs had ultimately followed the same set of insights:
- Scaling laws had briefly hit a wall, which in retrospect was seen as the moment things accelerated.
- OpenAI's o1-preview's use of greater inference-time compute and CoT had been quickly copied by everyone and iterated on. The scaffolding believers were just correct; the brief few months in which everyone assumed naive scaling had slowed led to a rapid grabbing of all sorts of low-hanging fruit on CoT that folks just hadn’t wanted to spend the time developing. In fact, in 2025 and 2026, every lab had gotten really good at using primitive, semi-AGI-ish automated AI researchers to try many, many different approaches to what to stick on top of a model to make it more effective, allowing maximum parallelization of automated AI researcher effort without burning a lot of training run time.
- That only lasted for a bit; the scaffolding effort was met with a renewed, parallel focus on training better with the available data, as well as on generating synthetic data to further expand the pool of training data. Their lawyers also cut a bunch of deals for high-uniqueness, high-value content held by various private enterprises, and the US government packaged up a bunch of new datasets really nicely as well.
Everyone finds out pretty quickly; Anthropic doesn’t try to hide the ball from USG or the public. Besides, they’re worried that someone else is very close and will spin up something misaligned – the first task they gave a cohort of Shannon instances after confirming it was safe-enough-to-risk running was, “write a list of all the ways you could have been misaligned that we didn’t think of”; it’s a concerning list.
My scenario for Stage 2: after the AGI
Welcome to the AGI division, and no, it’s not your first day.
Things start going crazy, immediately. Anthropic faces the horns of a dilemma: either let the Shannon instances be used for economic tasks, or set them to starting ASI research. The former means that lots of jobs go away real soon now in both the private and public sectors (the Trump admin is still looking for ways to cut federal headcount after early efforts failed to achieve the desired impact). The latter means that things accelerate. (Technically, “have the AGI do research to build out its multimodal capabilities to run robots” is an excluded middle, but practically this is just a way of saying “do first the latter leg, then the former leg of the dilemma.”)
Anthropic ultimately chooses to slowly let the economic task capabilities out into the world, while burning most of their capabilities on research.
On the economic side, it’s not API access at first; it’s more like a SCIF model, where tasks get one-way moved into a protected environment, and very paranoidly-inspected results from Shannon are eventually moved outwards, in printout only at first. (Ironically, this means that the first industries Shannon fully disrupts are print fiction and comic books.) Of course, some of this is also being used to stand up government capabilities, including a USG contract via Palantir to use Shannon-as-defensive-intelligence-analyst. (They understand how terrifyingly bad this would be if Shannon is feigning alignment, but don’t see a choice; there are very real worries that World War 3 is about to kick off, and the Trump admin has produced way too many national security strategy documents that say “yes, let’s race China” – it’s either cooperate, or risk nationalization.)
As this gets more comfortable, Shannon variants (further armored up by automated AI research) start being made available via SaaS-like arrangements. It’s not a true API; you have to have an enterprise deal, in large part because the Shannon instances need access to lots of multifactor authentication for the companies they work for, so you end up having a more entangled relationship.
On the research side, it’s not just capability or safety or alignment or what-have-you, though that’s a huge chunk of it. A small but significant minority of it is burned on research to try to understand how to detect an unaligned AGI elsewhere in the world, or to check whether another AGI is unaligned – Anthropic knows how short their lead is over the others, and they’re frantically trying to shore up the floodwalls before all those just behind them catch up. In fact, they get their board to authorize them to start braindumping previously-proprietary safety measures on all their competitors.
ASI turns out to take longer than you might think; it doesn’t arrive until 2037 or so. [3] It’s not because someone wants to stop the race to ASI, or that there was a treaty, or anything like that. Nation-states just want to race, and unfortunately, A Narrow Path wasn’t able to convince them otherwise, nor was anyone else. This is the default timeline that your author is predicting, not a better one. Rather, it’s that the effort requires truly massive scale-ups. Everyone’s convinced that ASI could in theory be done with a much less energy- and compute-expensive training run, but it’s just way faster in practice to spam compute farms across the landscape.
This is actually bad news for racing, but good news for global stability. An all-costs espionage push is underway against every AI lab and the USG simultaneously, and China works the cracks really effectively; they grab some stuff from Anthropic, but most of what they grab is actually from the other labs collectively; together, it’s enough to cobble together AGI and start trying to catch up. But as a result, Xi Jinping decides in 2029 – great news! – that it doesn’t make sense yet to nuke San Francisco, and that buys enough time for everyone to buy more time.
The period between 2027 and 2037 is, bluntly, insane. The entire Western population is in intense, irrevocable future shock, and most of the Chinese population is as well. The economy’s gone hyperbolic. And here is where I have to draw the curtain, dear reader, because it is beyond our ability to predict. The people in these circumstances just aren’t much like us any more.
(Disclaimers: Not an endorsement of any org mentioned, and not the opinion of anyone I work for. Views may not even be my own, much less anyone else's. I want to be clear that this is my effort to explore a possibility space as best I can, and, e.g., picking Anthropic as the lab that gets there first isn't intended to be a bet-the-farm prediction, an endorsement, or any other adversarial interpretation you might pick.)
[1] Meaning, here, "An AI system as skilled at every cognitive task as the best humans, while being faster and cheaper than human labor."
[2] Note that psychological safety != AI safety.
[3] So far, this is the part of the scenario that's gotten the most pushback. I have a bunch of intuitions here about what an ecosystem of better-than-human intelligences looks like that makes it hard to truly break out to ASI, which I should probably write out in more detail at some point.
8 comments
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-26T23:55:55.303Z · LW(p) · GW(p)
> ASI turns out to take longer than you might think; it doesn’t arrive until 2037 or so. So far, this is the part of the scenario that's gotten the most pushback.
Uhhh, yeah. 10 years between highly profitable and capitalized-upon AGI, with lots of hardware and compute put towards it, and geopolitical and economic reasons for racing...
I can't fathom it. I don't see what barrier at near-human intelligence is holding back further advancement.
I'm quite confident that if we had the ability to scale up arbitrary portions of a human brain (e.g. the math area and its most closely associated parietal and prefrontal cortex areas), we'd create a smarter human than had ever before existed basically overnight. Why wouldn't this be the case for a human-equivalent AGI system? Bandwidth bottlenecks? Nearly no returns to further scaling for some arbitrary reason?
Seems like you should prioritize making a post about how this could be a non-trivial possibility, because I just feel confused at the concept.
Replies from: Seth Herd, davekasten
↑ comment by Seth Herd · 2024-11-28T01:11:40.210Z · LW(p) · GW(p)
I largely agree that ASI will follow AGI faster than that, but with a couple of caveats.
The road from AGI to superintelligence will very likely be fairly continuous. You could slap the term "superintelligence" almost wherever you want after it passes human level.
I do see some reasons that the road will go a little slower than we might think. Scaling laws are logarithmic; making more and better chips requires physical technology that the AGI can help with but can't do until it gets better with robotics, possibly including new hardware (although humanoid robotics will be close to adequate for most things by then, with new control networks rapidly trained by the AGI).
If the architecture is similar to current LLMs, it's enough like human thought that I expect the progression to remain logarithmic; you're still using the same clumsy basic algorithm of using your knowledge to come up with ideas, then going through long chains of thought and ultimately experiments to test the validity of different ideas.
It's completely dependent on what we mean by superintelligence, but creating new technologies in a day will take maybe five years after the first clearly human-level, fully general AGI on this path, in my rough estimate.
Of course that's scaled by how hard people are actually trying for it.
↑ comment by davekasten · 2024-11-27T00:22:27.067Z · LW(p) · GW(p)
Oh, it very possibly is the wrongest part of the piece! I put it in the original workshop draft as I was running out of time and wanted to provoke debate.
A brief gesture at a sketch of the intuition: imagine a different, crueler world, where there were orders of magnitude more nation-states, but at the start only a few nuclear powers, like in our world, with a 1950s-level tech base. If the few nuclear powers want to keep control, they'll have to divert huge chunks of their breeder reactors' output to pre-emptively nuking any site in the many, many non-nuclear-club states that could host an arms program, to prevent breakouts; as a result, any of the nuclear powers would have to wait a fairly long time to assemble an arms stockpile sufficient to launch a Project Orion into space.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-11-27T03:47:59.457Z · LW(p) · GW(p)
Interesting! You should definitely think more about this and write it up sometime, either you'll change your mind about timelines till superintelligence or you'll have found an interesting novel argument that may change other people's minds (such as mine).
Replies from: davekasten
↑ comment by davekasten · 2024-11-27T15:23:30.651Z · LW(p) · GW(p)
I think I'm also learning that people are way more interested in this detail than I expected!
I debated changing it to "203X" when posting to avoid this becoming the focus of the discussion but figured, "eh, keep it as I actually wrote it in the workshop" for good epistemic hygiene.
comment by Mitchell_Porter · 2024-11-27T11:46:12.737Z · LW(p) · GW(p)
Started promisingly, but like everyone else, I don't believe in the ten-year gap from AGI to ASI. If anything, we got a kind of AGI in 2022 (with ChatGPT), and we'll get ASI by 2027, from something like your "cohort of Shannon instances".
Replies from: anders-lindstroem, red75prime
↑ comment by Anders Lindström (anders-lindstroem) · 2024-11-28T12:49:42.990Z · LW(p) · GW(p)
Yes, the soon-to-be-here "human level" AGI people talk about is for all intents and purposes ASI. Show me one person who is at the highest expert level on thousands of subjects, has the content of all human knowledge memorized, and can draw the most complex inferences on that knowledge across multiple domains in seconds.
↑ comment by red75prime · 2024-11-27T12:56:50.149Z · LW(p) · GW(p)
comment by Seth Herd · 2024-11-28T01:16:05.923Z · LW(p) · GW(p)
Bravo and big upvote! Spinning out concrete scenarios like this is going to sharpen our collective thinking. Everyone should do this. I'll take a shot at it soon.
I find the timeline highly plausible. In this world, though, what happened to the rest that were months behind? Since it took a while to reach ASI, now we have to hope those are all aligned and well-used too - unless somebody halts those projects. Leading to the big question:
What is the government doing once AGI is achieved? Surely Trump isn't keeping his little fingers off it.
These questions are for everyone as much as for Dave. After we've got some good scenarios-to-AGI in our collective minds, we should be better able to push past them to scenarios for impacts.