We’re not prepared for an AI market crash

post by Remmelt (remmelt-ellen) · 2025-04-01T04:33:55.040Z · LW · GW · 11 comments

Our community is not prepared for an AI crash. We're good at tracking new capability developments, but much less so the companies' financials. Currently, both OpenAI and Anthropic are losing over $5 billion a year, while under threat of losing users to cheap LLMs.

A crash will weaken the labs. Funding-deprived and distracted, execs will struggle to counter coordinated efforts to restrict their reckless actions. Journalists will turn on the tech darlings. Optimism will make way for mass outrage over all the wasted money and reckless harms.

You may not think a crash is likely [LW(p) · GW(p)]. But if it happens, we can turn the tide.

Preparing for a crash is our best bet [LW · GW].[1] But our community is poorly positioned to respond. Core people positioned themselves inside institutions – to advise on how to maybe make AI 'safe', under the assumption that models rapidly become generally useful.

After a crash, this no longer works, for at least four reasons:

  1. The 'inside game' approach is already failing. To give examples: OpenAI ended its superalignment team, and Anthropic is releasing agents. The US is demolishing the AI Safety Institute, and its UK counterpart was renamed the AI Security Institute. The AI Safety Summit is now called the AI Action Summit. Need we go on?
  2. In the economic trough, skepticism of AI will reach its peak. People will dismiss and ridicule us for talking about risks of powerful AI. I'd say that promoting the “powerful AI” framing to an audience that contains power-hungry entrepreneurs and politicians never was a winning strategy. But it sure was believable when ChatGPT took off. Once OpenAI loses more money [LW · GW] than it can recoup through VC rounds and its new compute provider goes bankrupt [LW · GW], the message just falls flat.
  3. Even if we change our messaging, it won't be enough to reach broad-based public agreement. To create lasting institutional reforms (that powerful tech lobbies cannot undermine), various civic groups that often oppose each other need to reach consensus. Unfortunately, AI Safety is rather insular, and lacks experienced bridgebuilders and facilitators who can listen to the concerns of different communities, and support coordinated action between them.
  4. To overhaul institutions that are failing us, more confrontational tactics like civil disobedience may be needed. Such actions are often seen as radical in their time (e.g. as civil rights marches were). The AI Safety community lacks the training and mindset to lead such actions, and may not even want to associate itself with people taking such actions. Conversely, many of the people taking such actions may not want to associate with AI Safety. The reasons are various: safety researchers and funders collaborated with the labs, while neglecting already harmed communities, and ignoring the value of religious worldviews.
     

As things stand, we’ll get caught flat-footed.

One way to prepare is to fund a counter-movement outside of AI Safety. I'm assisting experienced organisers who are making plans. I hope to share details before a crash happens.[2]
 

  1. ^

    Preparing for a warning shot is another option. This is dicey, though, given that: (1) we don’t know when or how it will happen; (2) a convincing enough warning shot implies that models are already gaining the capacity for huge impacts, making it even harder to prepare for the changed world that results; (3) in a world with such resourceful AI, the industry could still garner political and financial backing to continue developing supposedly safer versions; and (4) we should not rely on rational action following a (near-)catastrophe, given that even tech with little upside has continued to be developed after being traced back to possibly having caused a catastrophe (e.g. virus gain-of-function research).

    Overall, I’d prefer to not wait until the point that lots of people might die before trying to restrict AI corporations. I think campaigning in an early period of industry weakness is a better moment than campaigning when the industry gains AI with autonomous capabilities. Maybe I'm missing other options (please share), but this is why I think preparing for a market crash is our best bet.

  2. ^

    We’re starting to see signs that investments cannot swell much further. E.g. OpenAI’s latest VC round is led by a disreputable firm that must borrow money to invest at a staggering valuation of $300 billion. Also, OpenAI buys compute from CoreWeave, a debt-ridden company that recently had a disappointing IPO. I think we're in the late stage of the bubble, which is most likely to pop by 2027.

11 comments

Comments sorted by top scores.

comment by Seth Herd · 2025-04-01T12:22:20.839Z · LW(p) · GW(p)

Huh? Yes we're unprepared to capitalize on a crash because how would we? This post doesn't say how one might do that. It seems you've got ideas but why write this if you weren't going to say what they are or what you want us to do or think about?

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2025-04-02T03:43:57.462Z · LW(p) · GW(p)

Yes, I get you don’t just want to read about the problem but a potential solution. 

The next post in this sequence will summarise the plan by those experienced organisers.

These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement. 

So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself have rushed out writing on new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That backfired because I hadn’t addressed obvious concerns. This time, I drafted a summary that the organisers liked but still want to refine. So they will run sessions with me and a facilitator, to map out stakeholders and their perspectives, before going public with plans.

Check back here in a month. We should have a summary ready by then.

comment by Vladimir_Nesov · 2025-04-01T14:24:03.120Z · LW(p) · GW(p)

The scale of training and R&D spending by AI companies can be reduced on short notice, while global inference buildout costs much more and needs years of use to pay for itself. So an AI slowdown mostly hurts clouds and makes compute cheap due to oversupply, which might be a wash for AI companies. Confusingly, major AI companies are closely tied to cloud providers, but OpenAI is distancing itself from Microsoft, and Meta and xAI are not cloud providers, so they wouldn't suffer as much. In any case, the tech giants will survive; it's losing their favor that seems more likely to damage AI companies, making them no longer able to invest as much in R&D.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2025-04-02T05:29:54.007Z · LW(p) · GW(p)

This is a solid point that I forgot to take into account here. 

What happens to the GPU clusters inside the data centers built out before the market crash? 

If user demand slips and/or various companies stop training, compute prices will slump. As a result, cheap compute will be available to the remaining R&D teams, for at least the three or so years that the GPUs last. 

I find that concerning. Because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures that could end up being more compute- and data-efficient at encoding functionality. 

~ ~ ~

I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built? 

Or more precisely: 

  • that since data centers were built out before the crash, compute prices end up converging on mostly just the cost of the energy and operations needed to run the GPU clusters inside,
  • which in turn acts as a financial cushion for companies like OpenAI and Anthropic, for whom inference costs are now lower,
  • where those companies can quickly scale back expensive training and R&D, while offering their existing products to remaining users at lower cost,
  • as a result of which, those companies can continue to operate during the period that funding has dried up, waiting out the 'AI winter' until investors and consumers are willing to commit their money again.

That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can borrow enough money, etc.). 
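The cushion mechanism can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to illustrate the mechanism (compute being over half of costs, its price slumping after a crash, and training/R&D being scaled back), not estimates of any company's real financials:

```python
# Toy model of the post-crash cost cushion. All figures are hypothetical
# illustrations in $B/year, not real company financials.
revenue = 4.0
compute_cost = 6.0      # assumed: compute is over half of total costs
other_cost = 4.0        # assumed: staff, training R&D, etc.

loss_before = revenue - (compute_cost + other_cost)  # -6.0

# After a crash: compute prices slump (say, halve) and expensive
# training/R&D is scaled back (say, other costs drop by a quarter).
compute_after = compute_cost * 0.5
other_after = other_cost * 0.75

loss_after = revenue - (compute_after + other_after)  # -2.0
print(loss_before, loss_after)  # -6.0 -2.0
```

On these made-up numbers the annual burn drops by two-thirds even at flat revenue, which is the sense in which cheap post-crash compute plus a large VC round could let a lab wait out a funding winter.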

Just realised that your point seems similar to Sequoia Capital’s
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”

~ ~ ~

A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.

I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that. 

Replies from: eggsyntax
comment by eggsyntax · 2025-04-02T16:46:27.356Z · LW(p) · GW(p)

E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc). 

VC money, in my experience, doesn't typically mean that the VC writes a check and then the startup has it to do with as they want; it's typically given out in chunks and often there are provisions for the VC to change their mind if they don't think it's going well. This may be different for loans, and it's possible that a sufficiently hot startup can get the money irrevocably; I don't know.

comment by Knight Lee (Max Lee) · 2025-04-01T05:16:09.981Z · LW(p) · GW(p)

I agree that AI company finances aren't that good, but my personal opinion is that there won't be a dramatic collapse which significantly affects how people and policymakers perceive AI and AI companies.

  • AI companies which fail will be bought up by companies which want an AI team. E.g. if OpenAI fails, it might be folded into Microsoft.
  • AI companies probably won't fail at the same time.
  • Even if many AI companies fail, actual usage of AI won't drop. Whatever brand name the most popular AI has will be perceived as the winning AI company (even if it isn't an AI company).
  • Many companies do interesting research for long times without expecting much profit, e.g. Bell Labs and Google. Jeff Bezos funded his space company without expecting profit, purely driven by space-travel mania. Now that AI is as cool as space travel, Elon Musk may fund xAI in the same way.
  • The only "outrage" will come from investors who lost their money, but they are too money-driven to do anything with their outrage. They are used to losing money from failed bets, and they had always expected AI companies to be high risk high reward.

This is just my current vague opinion, I'm not saying that you're wrong and I'm right.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2025-04-01T06:12:20.243Z · LW(p) · GW(p)

Thanks for your takes!  Some thoughts on your points:

  • Yes, OpenAI has useful infrastructure and brands. It's hard to imagine a scenario where they wouldn't just downsize and/or be acquired by e.g. Microsoft.
  • If OpenAI or Anthropic goes down like that, I'd be surprised if some other AI companies don't go down with them. This is an industry that very much relies on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out with an industry leader, the common awareness of that failure will cascade into people dropping their commitments throughout the industry.
  • AI companies may fail in part because people stop using their products. For example, if a US recession happens, paid users may switch to cheaper alternatives like DeepSeek's, or stop using the tools altogether. Also, ChatGPT started as a flashy product that relied on novelty and future promises to get people excited to use it. After a while, people get bored of a product that isn't changing much anymore, and is not actually delivering on OpenAI's proclamations of how AI will rapidly improve.
  • Sure, companies fund interesting research. At the same time, do you know other examples of $600 billion+ being invested yearly into interesting research without expectations of much profit?
  • Other communities I'm in touch with are already outraged about the AI thing. This includes creative professionals, tech privacy advocates, families targeted by deepfakes, tech-aware environmentalists, some Christians, and so forth. More broadly, there has been growing public frustration about tech oligarchs extracting wealth while taking over the government, about a 'rot economy' that pushes failing products, about algorithmic intermediaries creating a sense of disconnection, and about a lack of stable dignified jobs. 'AI' sits at the intersection of all of those problems, and has therefore become a salient symbol for communities to target. An AI market crash, alongside other correlated events, can bring their frustrations to the surface and magnify them.


Those are my takes. Curious if this raises new thoughts.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2025-04-01T08:44:33.742Z · LW(p) · GW(p)

:) thank you for saying thanks and replying.

You're right, $600 billion/year sounds pretty unsustainable. That's like 60 OpenAIs, and more than half the US military budget. Maybe the investors pouring in that money will eventually run out of money that they're willing to invest, and it will shrink. I think there is a 50% chance that at some point before we build AGI/ASI, the amount of spending on AI research will be halved (compared to where it is now).
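As a rough sanity check on those magnitudes (a sketch only: the $600 billion/year AI-spend figure is the number discussed above, and the ~$850 billion US defense budget and ~$10 billion OpenAI annual spend are approximate assumptions, not sourced data):

```python
# Back-of-envelope check of the comparison above.
# All figures are rough assumptions in billions of USD per year.
ai_spend = 600            # annual AI investment, per the discussion above
openai_burn = 10          # assumed OpenAI-scale annual spend
us_military_budget = 850  # approximate recent US defense budget

openai_equivalents = ai_spend / openai_burn
share_of_military = ai_spend / us_military_budget

print(f"{openai_equivalents:.0f} OpenAI-sized budgets")   # 60
print(f"{share_of_military:.0%} of the military budget")  # 71%
```

On these assumptions the "60 OpenAIs, more than half the military budget" comparison holds up.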

It's also a good point how the failure might cascade. I'm reminded about people discussing whether something like the "dot-com bubble" will happen to AI, which I somehow didn't think of when writing my comment.

Right now my opinion is that there's a 25% chance of a cascading market crash when OpenAI et al. finally run out of money. A lot of seemingly stable things have unexpectedly crashed, and AI companies don't look more stable than them. It's one possible future.

I still think the possible future where this doesn't happen is more likely, because one company failing does not dramatically reduce the expected value of future profits from AI, it just moves it elsewhere.

I agree that "AI Notkilleveryoneism" should be friends with these other communities who aren't happy about AI.

I still think the movement should work with AI companies and lobby the government. Even if AI companies go bankrupt, AI researchers will move elsewhere and continue to have influence.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2025-04-02T04:15:07.415Z · LW(p) · GW(p)

Glad to read your thoughts!

Agreed on being friends with communities who are not happy about AI. 

I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends [LW · GW].

comment by Mitchell_Porter · 2025-04-01T08:28:59.834Z · LW(p) · GW(p)

I don't follow the economics of AI at all, but my model is that Google (Gemini) has oceans of money and would therefore be less vulnerable in a crash, and that OpenAI and Anthropic have rich patrons (Microsoft and Amazon respectively) who would have the power to bail them out. xAI is probably safe for the same reason, the patron being Elon Musk. China is a similar story, with the AI contenders either being their biggest tech companies (e.g. Baidu) or sponsored by them (Alibaba and Tencent being big investors in "AI 2.0"). 

comment by Polar · 2025-04-01T13:18:21.912Z · LW(p) · GW(p)

There is the possibility of a self-reinforcing negative cycle: models don't show rapid capability improvements -> investors halt pouring money into the AI sector -> AI labs focus on cutting costs -> models don't show rapid capability improvements.