Altman blog on post-AGI world
post by Julian Bradshaw · 2025-02-09T21:52:30.631Z · LW · GW · 10 comments
This is a link post for https://blog.samaltman.com/three-observations
First part just talks about scaling laws, nothing really new. Second part is apparently his latest thoughts on a post-AGI world. Key part:
While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.
Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.
In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.
Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.
Edit to add commentary:
That last part sounds like he thinks everyone should be on speaking terms with an ASI by 2035? If you just assume alignment succeeds, I think this is a directionally reasonable goal - no permanent authoritarian rule, ASI helps you as little or as much as you desire.
10 comments
Comments sorted by top scores.
comment by JuliaHP · 2025-02-10T02:00:55.881Z · LW(p) · GW(p)
I do believe that if Altman does manage to create his superAIs, the first such AI eats Altman and makes squiggles. But if I were to engage in the hypothetical where nice corrigible superassistants are just magically created, Altman does not appear to treat this future he claims to be steering towards seriously.
The world where "everyone has a superassistant" is inherently incredibly volatile/unstable/dangerous, due to an incredibly large offence-defence asymmetry: superassistants attacking fragile-fleshbags (with optimized viruses, bacteria, molecules, nanobots, etc.) or hijacking fragile minds with supermemes.
Avoiding this kind of outcome seems difficult to me. Non-systematic "patches" can always be worked around.
If OpenAI's superassistant refuses your request to destroy the world, use it to build your own superassistant, or use it for subtasks, etc. Humans are fragile-fleshbags, and if strong optimization is ever pointed in their direction, they die.
There are ways to make such a world stable, but all of them that I can see look incredibly authoritarian, something Altman says he's not aiming for. But Altman does not appear to be proposing any alternatives for how this will turn out fine, and I am not aware of any research agenda at OpenAI trying to figure out how "giving everyone a superoptimizer" will result in a stable world with humans doing human things.
I know only three coherent ways to interpret what Altman is saying, and none of them take the writing seriously at the object level:
1) I wanted to have the stock go up and wrote words which do that
2) I didn't really think about it, oops
3) I'm actually gonna keep the superassistants all to myself and rule, and this nicecore writing will make people support me as I approach the finish line
This is less meant to be critical of the writing, and more me asking for help in actually making sense of what Altman says.
↑ comment by cousin_it · 2025-02-10T09:57:30.008Z · LW(p) · GW(p)
I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will anyway get outcompeted by AIs that focus on growth. That's the real danger.
↑ comment by Viliam · 2025-02-10T08:53:45.618Z · LW(p) · GW(p)
I don't see a reason why we should trust Altman's words on this topic more than his previous words on making OpenAI a non-profit.
Before Singularity, I think it just means that OpenAI would like to have everyone as a customer, not just the rich (although the rich will get higher quality), which makes perfect sense economically. Even if governments paid you billions, it would still make sense to also collect $20 from each person on the planet individually.
After Singularity... this just doesn't make much sense, for the reasons you wrote.
I was trying to steelman the plan -- I think the nearest possible option that would work is having one superintelligence that keeps everyone safe and tries to keep the world "normal" to whatever degree people in general want it kept normal; and to give every human an individual assistant which will do exactly as much as the human wants it to do.
But even this doesn't make much sense, because people interact with each other, e.g. on the market, so the ones who choose to do it slowly will be hopelessly outcompeted by the ones who choose to do it fast, so there won't be much of a choice.
I imagine we could fix this by e.g. splitting the planet into "zones" with different levels of AI assistants allowed (but the superintelligence making sure all zones are safe), and people could choose which zone they want to live in, and would only compete with other people within the same zone. But these are just my fantasies inspired by reading Yudkowsky; they have little to do with Altman's statements and shouldn't be projected onto them.
↑ comment by Thane Ruthenis · 2025-02-10T11:40:57.358Z · LW(p) · GW(p)
I think "enforce NAP then give everyone a giant pile of resources to do whatever they want with" is a reasonable first-approximation idea regarding what to do with ASI, and it sounds consistent with Altman's words.
But I don't believe that he's actually going to do that, so I think it's just (3).
↑ comment by james oofou (james-oofou) · 2025-02-10T11:59:00.809Z · LW(p) · GW(p)
There are ways to make such a world stable, but all of them that I can see look incredibly authoritarian, something Altman says he's not aiming for.
If he were aiming for an authoritarian outcome, would it make any sense for him to say so? I don't think so. Outlining such a plan would quite probably lead to him being ousted, and would have little upside.
The reason I think it would lead to his ouster is that most Americans' reaction to the idea of an authoritarian AI regime would be strongly negative rather than positive.
So, I think his current actions align with his plan being something authoritarian.
↑ comment by rvnnt · 2025-02-10T10:53:28.539Z · LW(p) · GW(p)
Out of (1)-(3), I think (3)[1] is clearly most probable:
- I think (2) would require Altman to be deeply un-strategic/un-agentic, which seems in stark conflict with all the skillful playing-of-power-games he has displayed.
- (3) seems strongly in-character with the kind of manipulative/deceitful maneuvering-into-power he has displayed thus far.
- I suppose (1) is plausible; but for that to be his only motive, he would have to be rather deeply un-strategic (which does not seem to be the case).
(Of course one could also come up with other possibilities besides (1)-(3).)[2]
or some combination of (1) and (3) ↩︎
E.g. maybe he plans to keep ASI to himself, but use it to implement all-of-humanity's CEV, or something. OTOH, I think the kind of person who would do that, would not exhibit so much lying, manipulation, exacerbating-arms-races, and gambling-with-everyone's-lives. Or maybe he doesn't believe ASI will be particularly impactful; but that seems even less plausible. ↩︎
comment by cousin_it · 2025-02-10T01:15:06.910Z · LW(p) · GW(p)
I don't quite understand the plan. What if I get access to cheap friendly AI, but there's also another much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for these resources, maybe by entirely legal means? Or is the idea that somehow the AIs in public access are always the strongest possible? That isn't true even now.
↑ comment by ozziegooen · 2025-02-10T02:40:46.674Z · LW(p) · GW(p)
This might be obvious, but I don't think we have evidence to support the idea that there really is anything like a concrete plan. All of the statements I've seen from Sam on this issue so far are incredibly basic and hand-wavy.
I suspect that any concrete plan would be fairly controversial, so it's easiest to speak in generalities. And I doubt there's anything like an internal team with some great secret macrostrategy - instead I assume that they haven't felt pressured to think through it much.
↑ comment by Julian Bradshaw · 2025-02-10T03:26:47.623Z · LW(p) · GW(p)
The only sane version of this I can imagine is where there's either one aligned ASI, or a coalition of aligned ASIs, and everyone has equal access. Because the AI(s) are aligned, they won't design bioweapons for misanthropes and such, and hopefully they also won't make all human effort meaningless by just doing everything for us and seizing the lightcone, etc.
↑ comment by Davidmanheim · 2025-02-10T14:12:18.265Z · LW(p) · GW(p)
Seems bad to posit that there must be a sane version.