"We know how to build AGI" - Sam Altman

post by Nikola Jurkovic (nikolaisalreadytaken) · 2025-01-06T02:05:05.134Z · LW · GW · 5 comments

This is a link post for https://blog.samaltman.com/reflections


We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.


More context in this Bloomberg piece.

What’s the threshold where you’re going to say, “OK, we’ve achieved AGI now”?
The very rough way I try to think about it is when an AI system can do what very skilled humans in important jobs can do—I’d call that AGI. There’s then a bunch of follow-on questions like, well, is it the full job or only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field can do or the 98th percentile? How autonomous is it? I don’t have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, “OK, that’s AGI-ish.”
Now we’re going to move the goalposts, always, which is why this is hard, but I’ll stick with that as an answer. And then when I think about superintelligence, the key thing to me is, can this system rapidly increase the rate of scientific discovery that happens on planet Earth?

5 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2025-01-06T04:59:51.381Z · LW(p) · GW(p)

This narrative (on timing) promotes building $150bn training systems in 2026-2027. AGI is nigh, therefore it makes sense to build them; and if they aren't getting built, that might be the reason AGI hasn't arrived yet, so build them already (the narrative implies).

It doesn't seem likely that anyone actually knows this last step of scaling is just enough to be relevant. This step of scaling seems to be beyond what happens by default, so a last push might be necessary to get it done, and the step after it won't be achievable with mere narrative. While funding keeps scaling, the probability of triggering an intelligence explosion is higher; once it stops scaling, the probability (per year) goes down (if intelligence hasn't exploded by then). In this sense the narrative has a point.

comment by BrianTan · 2025-01-06T04:11:55.593Z · LW(p) · GW(p)

Thanks for linking these! I also want to highlight that Sam shared his AGI timeline in the Bloomberg interview: "I think AGI will probably get developed during this president’s term, and getting that right seems really important."

comment by Kaj_Sotala · 2025-01-07T03:40:02.000Z · LW(p) · GW(p)

Worth keeping in mind that OpenAI is burning through crazy amounts of money and is constantly in need of more:

OpenAI raised $6.6 billion [in October], the largest venture capital round in history. But it plans to lose $5 billion [2024] alone. And by 2026, it could be losing $14 billion per year. That’s head exploding territory. If OpenAI keeps burning money at this rate, it will have to raise another round soon. Perhaps as early as 2025.

As a result, Altman has a significant financial incentive to believe/say that OpenAI is on the verge of a breakthrough and that it's worth it for their investors to continue giving them money.

comment by LawrenceC (LawChan) · 2025-01-06T20:31:38.794Z · LW(p) · GW(p)

I think the title greatly undersells the importance of these statements/beliefs. (I would've preferred either part of your quote or a call to action.) 

I'm glad that Sam is putting in writing what many people talk about. People should read it and take it seriously.

Replies from: nikolaisalreadytaken
comment by Nikola Jurkovic (nikolaisalreadytaken) · 2025-01-06T22:46:44.589Z · LW(p) · GW(p)

I have edited the title in response to this comment.

comment by greylag · 2025-01-06T08:10:39.293Z · LW(p) · GW(p)