The Mask Comes Off: A Trio of Tales

post by Zvi · 2025-02-14

This post covers three recent shenanigans involving OpenAI.

In each of them, OpenAI or Sam Altman attempts to hide the central thing going on.

First, in Three Observations, Sam Altman’s essay pitches our glorious AI future, in some places pretending the downsides and dangers don’t exist, and in others admitting we’re not going to like those downsides and dangers, but making clear he’s not about to let that stop him. He’s going to transform the world whether we like it or not.

Second, we have Frog and Toad, or There Is No Plan, where OpenAI reveals that its plan for ensuring AIs complement humans, rather than substitute for them, is to treat this as a ‘design choice.’ They can simply not design AIs that will be substitutes. Except of course this is Obvious Nonsense in context, with all the talk of remote workers, and because every company and lab will rush to do the substituting, since that’s where the money will be. OpenAI couldn’t follow this path even if it wanted to, not without international coordination. Which I’d be all for, but then you have to actually call for that.

Third, A Trade Offer Has Arrived. Sam Altman was planning to buy off the OpenAI nonprofit for about $40 billion, even as the for-profit’s valuation surged to $260 billion. Elon Musk has now offered $97.4 billion for the nonprofit, on a completely insane platform of returning OpenAI to a focus on open models. I don’t actually believe him – do you see Grok’s weights running around the internet? – and obviously his bid is intended as a giant monkey wrench, to try to up the price and stop the greatest theft in human history. There was also an emergency 80k Hours podcast on that.

Table of Contents

  1. Three Observations.
  2. Frog and Toad (or There Is No Plan).
  3. A Trade Offer Has Arrived.

Three Observations

Altman used to understand that creating things smarter than us was very different from other forms of technology. That it posed an existential risk to humanity. He now pretends not to, in order to promise us physically impossible wondrous futures with no dangers in sight, while warning that if we take any safety precautions then the authoritarians will take over.

His post, ‘Three Observations,’ is a cartoon villain speech, if you are actually paying attention to it.

Even when he says ‘this time is different,’ he’s now saying this time is just better.

Sam Altman: In some sense, AGI is just another tool in this ever-taller scaffolding of human progress we are building together.

In another sense, it is the beginning of something for which it’s hard not to say “this time it’s different”; the economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families, and can fully realize our creative potential.

In a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today.

Yes, there’s that sense. And then there’s the third sense: at least by default, AI is already rapidly moving from ‘tool’ to ‘agent,’ and to entities in competition with us that are smarter, faster, more capable, and ultimately more competitive at everything other than ‘literally be a human.’

It’s not possible for everyone on Earth to be ‘capable of accomplishing more than the most impactful person today.’ The atoms for it are simply not locally available. I know what he is presumably trying to say, but no.

Altman then lays out three principles.

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
  2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
  3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.

Even if we fully accept point one, that doesn’t tell us as much as you might think.

  1. It doesn’t tell us how many OOMs (orders of magnitude) are available to us, or how we can make them available, or how much they cost.
  2. It doesn’t tell us in what other ways we could also scale the intelligence of the system, for example via algorithmic efficiency. He covers this in point #2, but we should expect this law to break to the upside (go faster) once AIs smarter than us are doing the work.
  3. It doesn’t tell us what the scale of this ‘intelligence’ is, which is a matter of much debate. What does it mean to be ‘twice as smart’ as the average (let’s simplify and say IQ 100) person? It doesn’t mean ‘IQ 200’; that’s not how that scale works. Indeed, much of the debate is people essentially saying that this wouldn’t mean anything, if it were even possible.
  4. It doesn’t tell us what that intelligence actually enables, which is also a matter of heated debate. Many claim, essentially, that even ‘a country of geniuses in a data center,’ to use Dario’s term, would only add e.g. 0.5% to RGDP growth, and would not threaten our lifestyles much, let alone our survival. The fact that this does not make any sense does not seem to dissuade them. And the ‘final form’ likely goes far beyond ‘genius’ in that data center.

Then there is point two, which, as I noted, we should expect to break to the upside if capabilities continue to increase, and to largely continue for a while in terms of cost even if capabilities mostly stall out.
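
To put the quoted rates on a common footing – a quick back-of-the-envelope check, using only the figures from Altman’s second observation and treating the GPT-4 to GPT-4o window as roughly 18 months – here is the arithmetic:

```python
# Back-of-the-envelope comparison of the cost-decline rates quoted above.
# Inputs are only the figures Altman cites; the "150x over ~18 months"
# GPT-4 -> GPT-4o window is an approximation, not an exact figure.

def annualized(factor: float, months: float) -> float:
    """Convert 'factor-x improvement every `months` months' into a per-year rate."""
    return factor ** (12.0 / months)

ai_cost_decline = annualized(10, 12)    # Altman's claim: 10x cheaper per year
moores_law = annualized(2, 18)          # Moore's law: 2x per 18 months, ~1.59x per year
gpt4_to_4o = annualized(150, 18)        # quoted token-price drop: ~150x in ~18 months

print(f"AI cost decline:  {ai_cost_decline:.1f}x per year")
print(f"Moore's law:      {moores_law:.2f}x per year")
print(f"GPT-4 to GPT-4o:  {gpt4_to_4o:.1f}x per year (faster than the stated 10x)")
print(f"Over a decade:    {ai_cost_decline**10:.0e}x vs {moores_law**10:.0f}x")
```

Taken at face value, Moore’s law compounds to roughly 100x per decade while the quoted cost curve compounds to ten-billion-fold, so ‘unbelievably stronger’ is, if anything, an understatement of the stated numbers.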

Point three may or may not be correct, since defining ‘linearly increasing intelligence’ is difficult. And there are many purposes for which all you need is ‘enough’ intelligence – as we can observe with many human jobs, where being a genius provides at most a marginal efficiency benefit. But there are other things for which, once you hit the necessary thresholds, there are dramatic super-exponential returns to relevant skills and intelligence by any reasonable measure.

Altman frames the impact of superintelligence as a matter of ‘socioeconomic value,’ ignoring other things this might have an impact upon?

If these three observations continue to hold true, the impacts on society will be significant.

Um, no shit, Sherlock. This is like saying dropping a nuclear bomb would have a significant impact on an area’s thriving nightlife. I suppose Senator Blumenthal was right, by ‘existential’ you did mean the effect on jobs.

Speaking of which, if you want to use the minimal amount of imagination, you can think of virtual coworkers, while leaving everything else the same.

Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

Then comes the part where he assures us that timelines are only so short.

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.

But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.

Yes, everything will change. But why all this optimism, stated as fact? Why not frame that as an aspiration, a possibility, an ideal we can and must seek out? Instead he blindly talks like Derek on Shrinking and says it will all be fine.

And oh, it gets worse.

Technically speaking, the road in front of us looks fairly clear.

No it bloody does not. Do not come to us and pretend that your technical problems are solved. You are lying. Period. About the most important question ever. Stop it!

But don’t worry, he mentions AI Safety! As in, he warns us not to worry about it, or else the future will be terrible – right after otherwise assuring us that the future will definitely be Amazingly Great.

While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important; the other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy.

That’s right. Altman is saying: We know pushing forward to AGI and beyond as much as possible might appear to be unsafe, and what we’re going to do is going to be super unpopular, and we’re going to transform the world and put the entire species and planet at risk, directly against the overwhelming preferences of the people, in America and around the world. But we have to override the people and do it anyway. If we don’t push forward as quickly as possible then China Wins.

Oh, and all without even acknowledging the possibility that there might be a loss of control or other existential risk in the room. At all. Not even to dismiss it, let alone argue against it or that the risk is worthwhile.

Seriously. This is so obscene.

Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine.

Let’s say, somehow, you could pull that off without already having gotten everyone killed or disempowered along the way. Have you stopped, sir, for five minutes, to ask how that could possibly work even in theory? How the humans could possibly stay in control of such a scenario, how anyone could ever dare make any meaningful decision rather than handing it off to their unlimited geniuses? What happens when people direct their unlimited geniuses to fight with each other in various ways?

This is not a serious vision of the future.

Or more to the point: How many people do you think this ‘anyone’ consists of in 2035?

As we will see later, there is no plan. No vision. Except to build it, and have faith.

Now that Altman has made his intentions clear: What are you going to do about it?

Frog and Toad (or There Is No Plan)

Don’t make me tap the sign, hope is not a strategy, solve for the equilibrium, etc.

Garry Tan: We are very lucky that for now that frontier AI models are very smart toasters instead of Skynet (personally I hope it stays that way)

This means *agency* is now the most important trait to teach our kids and will be a mega multiplier on any given person’s life outcome.

Agency is important. By all means teach everyone agency.

Also don’t pretend that the frontier AI models will effectively be ‘very smart toasters.’

The first thing many people do, the moment they know how, is make one an agent.

Similarly, what type of agent will you build?

Oh, OpenAI said at the summit, we’ll simply only build the kind that complements humans, not the kind that substitutes for humans. It’ll be fine.

Wait, what? How? Huh?

This was the discussion about it on Twitter.

The OpenAI plan here makes no sense. Or rather, it is not a plan, and no one believes you when you call it a plan, or claim it is your intention to do this.

Connor Axiotes: I was invited to the @OpenAI AI Economics event and they said their AIs will just be used as tools so we won’t see any real unemployment, as they will be complements not substitutes.

When I said that they’d be competing with human labour if Sama gets his AGI – I was told it was just a “design choice” and not to worry. From 2 professional economists!

Also in the *whole* event there was no mention of Sama’s UBI experiment or any mention of what post AGI wage distribution might look like. Even when I asked.

Sandro Gianella (OpenAI): hey! glad you could make to our event

– the point was not that it was “just a design choice” but that we have agency on how we build and deploy these systems so they are complementing

– we’re happy to chat about UBI or wage distribution but you can’t fit everything into 1.5h

Connor Axiotes: I appreciate you getting me in! It was very informative and you were very hospitable.

And I wish I didn’t have to say anything but many in that room will have left, gone back to their respective agencies and governments, and said “OpenAI does not think there will be job losses from AGI” and i just think it shouldn’t have been made out to be that black and white.

Regarding your second point, it also seems Sama has just spoken less about UBI for a while. What is OpenAI’s plans to spread the rent? UBI? World coin? If there is no unemployment why would we need that?

Zvi Mowshowitz (replying to Sandro, got no response so far): Serious question on the first point. We do have such agency in theory, but how collectively do we get to effectively preserve this agency in practice?

The way any given agent works is a design choice, but those choices are dictated by the market/competition/utility if allowed.

All the same concerns about the ‘race to AGI’ apply to a ‘race to agency’ except now with the tools generally available, you have a very large number of participants. So what to do?

Steven Adler (ex-OpenAI): Politely, I don’t think it is at all possible for OpenAI to ‘have AGI+ only complement humans rather than replace them’; I can’t imagine any way this could be done. Nor do I believe that OpenAI’s incentives would permit this even if possible.

David Manheim: Seems very possible to do, with a pretty minimal performance penalty as long as you only compare to humans, instead of comparing to inarguably superior unassisted and unmonitorable agentic AI systems.

Steven Adler: In a market economy, I think those non-replacing firms just eventually get vastly outcompeted by those who do replacement. Also, in either case I still don’t see how OAI could enforce that its customers may only complement not replace

David Manheim: Yes, it’s trivially incorrect. It’s idiotic. It’s completely unworkable because it makes AI into a hindrance rather than an aide.

But it’s *also* the only approach I can imagine which would mean you could actually do the thing that was claimed to be the goal.

OpenAI can enforce it the same way they plan to solve superalignment; assert an incoherent or impossible goal and then insist that they can defer solving the resulting problem until they have superintelligence do it for them.

Yes, this is idiocy, but it’s also their plan!

sma: > we have agency on how we build and deploy these systems so they are complementing

Given the current race dynamics this seems… very false.

I don’t think it is their plan. I don’t even think it is a plan at all. The plan is to tell people that this is the plan. That’s the whole plan.

Is it a design choice for any individual which way to build their AGI agent? Yes, provided they remain in control of their AGI. But how much choice will they have, competing against many others? If you not only keep the human ‘in the loop’ but only ‘complement’ them, you are going to get absolutely destroyed by anyone who takes the other path, whether the ‘you’ is a person, a company or a nation.

Once again, I ask, is Sam Altman proposing that he take over the world to prevent anyone else from creating AI agents that substitute for humans? If not, how does he intend to prevent others from building such agents?

The things I do strongly agree with:

  1. We collectively have agency over how we create and deploy AI.
  2. Some ways of doing that work out better for humans than others.
  3. We should coordinate to do the ones that work out better, and to not do the ones that work out worse.

The problem is, you have to then figure out how to do that, in practice, and solve for the equilibrium, not only for you or your company but for everyone. Otherwise, It’s Not Me, It’s the Incentives. And in this case, it’s not a subtle effect, and you won’t last five minutes.

You can also say ‘oh, any effective form of coordination would mean tyranny and that is actually the worst risk from AI’ and then watch as everyone closes their eyes and runs straight into the (technically metaphorical, but kind of also not so metaphorical) whirling blades of death. I suppose that’s another option. It seems popular.

A Trade Offer Has Arrived

Remember when I said that OpenAI’s intention to buy off its nonprofit arm for ~$40 billion drastically undervalued the nonprofit, and was potentially the largest theft in human history?

Confirmed.

Jessica Toonkel and Berber Jin: “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was,” Musk said in a statement provided by Toberoff. “We will make sure that happens.”

One piece of good news is that this intention – to take OpenAI actually open source – will not happen. It would be complete insanity as an actual intention. There is no such thing as OpenAI as an ‘open-source, safety-focused force for good’ unless they intend to actively dismantle all of their frontier models.

Indeed I would outright say: OpenAI releasing the weights of its models would present a clear and present danger to the national security of the United States.

(Also it would dramatically raise the risk of Earth not containing humans for long, but alas I’m trying to make a point about what actually motivates people these days.)

Not that any of that has a substantial chance of actually happening. This is not a bid that anyone involved is ever going to accept, or believes might be accepted.

Getting it accepted was never the point. This offer is designed to be rejected.

The point is that if OpenAI still wants to transition to a for-profit, it now has to pay the nonprofit far closer to what the nonprofit is actually worth – a form of Harberger tax.

It also illustrates the key problem with a Harberger tax. If someone else really does not like you, and would greatly enjoy ruining your day, or simply wants to extort money, then they can threaten to buy something you’re depending on simply to blow your whole operation up.
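
For those who haven’t run into the term: under a Harberger tax you self-assess your asset’s value, pay tax on that self-assessment, and must sell to anyone who offers at least that price, which is what keeps the declared value honest. A minimal sketch of that logic, with the names, tax rate and figures purely illustrative:

```python
# Minimal sketch of Harberger-tax mechanics (illustrative; names and figures hypothetical).
# The owner self-assesses a value, pays tax proportional to it, and must sell to any
# bidder who meets that value. Lowballing the assessment saves tax but invites exactly
# the hostile "buy it out from under you" bid described above.

from dataclasses import dataclass

@dataclass
class Asset:
    owner: str
    declared_value: float  # owner's self-assessed valuation, in $ billions

    def annual_tax(self, rate: float) -> float:
        return self.declared_value * rate

    def receive_bid(self, bidder: str, bid: float) -> None:
        # A bid at or above the declared value forces a sale under the mechanism.
        if bid >= self.declared_value:
            print(f"{bidder} buys from {self.owner} at ${bid:.1f}B (forced sale)")
            self.owner = bidder
            self.declared_value = bid
        else:
            print(f"Bid of ${bid:.1f}B rejected; declared value is ${self.declared_value:.1f}B")

# Hypothetical numbers: declare low to minimize the tax bill...
stake = Asset(owner="Nonprofit", declared_value=40.0)
print(f"Annual tax at a hypothetical 7%: ${stake.annual_tax(0.07):.1f}B")
# ...and an unfriendly bidder can then take the asset at that low declared price.
stake.receive_bid(bidder="Rival", bid=97.4)
```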

Altman is of course happy to say the pro-OpenAI half of the quiet part out loud.

Sam Altman: I think he is probably just trying to slow us down. He obviously is a competitor. I wish he would just compete by building a better product, but I think there’s been a lot of tactics, many, many lawsuits, all sorts of other crazy stuff, now this.

Charles Capel and Tom MacKenzie: In the interview on Tuesday, Altman chided Musk, saying: “Probably his whole life is from a position of insecurity — I feel for the guy.” Altman added that he doesn’t think Musk is “a happy person.”

Garrison Lovely explains all this here, that it’s all about driving up the price that OpenAI is going to have to pay.

Nathan Young also has a thread where he angrily explains Altman’s plan to steal OpenAI, in the context of Musk’s attempt to disrupt this.

Sam Altman: no thank you but we will buy twitter for $9.74 billion if you want.

Elon Musk (reply to Altman): Swindler.

Kelsey Piper: Elon’s offer to purchase the OpenAI nonprofit for $97.4 billion isn’t going to happen, but it may seriously complicate OpenAI’s efforts to claim the nonprofit is fairly valued at $40 billion. If you won’t sell it for $97.4 billion, that means you think it’s worth more than that.

I wrote back in October that OpenAI was floating valuations of its nonprofit that seemed way, way too low.

Jungwon has some experience with such transfers, and offers thoughts, saying this absolutely presents a serious problem for Altman’s attempt to value the nonprofit at a fraction of its true worth. Anticipated arguments include ‘OpenAI is nothing without its people’ and that everyone would quit if Elon bought the company, which is likely true. And that Elon’s plan would violate the charter and be terrible for humanity, which is definitely true.

And that Altman could essentially dissolve OpenAI and start again if he needed to, as he essentially threatened to do last time. In this case, it’s a credible threat. Indeed, one (unlikely but possible) danger of the $97 billion bid is if Altman accepts it, takes the $97 billion and then destroys the company on the way out the door and starts again. Whoops. I don’t think this is enough to make that worth considering, but there’s a zone where things get interesting, at least in theory.

80k Hours had an emergency podcast on this (also listed under The Week in Audio). Another note is that technically, any board member can now sue if they think the nonprofit is not getting fair value in compensation.

Finally, there’s this.

Bret Taylor (Chairman of the Board): “OpenAI is not for sale” because they have a “mission of ensuring AGI benefits humanity and I have a hard time seeing how this would.”

That is all.

 

1 comment

comment by Daniel Kokotajlo · 2025-02-14

> major decisions and limitations related to AGI safety

What he's alluding to here, I think, is things like refusals and non-transparency. Making models refuse stuff, and refusing to release the latest models or share information about them with the public (not to mention, refusing to open-source them) will be sold to the public as an AGI safety measure. In this manner Altman gets the public angry at the idea of AGI safety instead of at him.