Crash scenario 1: Rapidly mobilise for a 2025 AI crash
post by Remmelt (remmelt-ellen) · 2025-04-11T06:54:47.974Z · LW · GW · 4 comments
Large movement organising takes time. It takes listening deeply to many communities' concerns, finding consensus around a campaign, ramping up training of organisers, etc.
But what if the AI crash is about to happen? What if US tariffs[1] trigger a recession that makes consumers and enterprises cut their luxury subscriptions? What if even the sucker VCs stop investing in companies that, after years of billion-dollar losses on compute, now compete with cheap alternatives to their not-much-improving LLMs [LW · GW]?
Then there is little time to organise and we must jump to mobilisation. But AI Safety has been playing the inside game, and is poorly positioned [LW · GW] to mobilise the resistance.
So we need groups that can:
- Scale the outside game, meaning a movement pushing for change from the outside.
- Promote robust messages, e.g. affirm concerns about tech oligarchs seizing power.
- Bridge-build with other groups to start campaigns around connected concerns.
- Legitimately pressure and negotiate with institutions to enforce restrictions.
Each group could mobilise a network of supporters fast, but they need money to cover their hours. We have money: some safety researchers advise tech billionaires, and you might have a high-earning tech job. If you won't push for reforms yourself, you can fund groups that will.
You can donate to organisations already resisting AI, so more staff can go full-time.
Some examples:
- Data rights (NA Voice Actors, Algorithmic Justice League)
- Workers (Tech Workers Coalition, Turkopticon)
- Investigations (FoxGlove, Disruption Network Lab)
- Christians (World Pause Coalition, Singularity Weekly)
- Extinction risk (Stop AI, PauseAI)
Their ideologies vary widely, and some are controversial to the others. By supporting many groups to stand up for their own concerns, you can preempt the ‘left-right’ polarisation we saw around climate change. A broad-based movement needs many different groups.
At the early signs of a crash, groups need funding to ratchet up actions against weakened AI companies. If you wait, the groups lose their effectiveness. In this scenario, it is better to seed-fund many proactive groups than to hold off.[2]
Plus you can fund coaches for the groups to build capacity. The people I have in mind led one of the largest grassroots movements in the last decade. I'll introduce them in the next post.
There is also room for large campaigns grounded in citizens' concerns. These can target illegal and dehumanising activities by leading AI companies. That's also for the next post.
Want to discuss more? Join me on Sunday the 20th. Add this session to your calendar.
[1]
The high tariffs seem partly temporary, meant to pressure countries into better trade deals. Still, AI's hardware supply chains span 3+ continents. So remaining tariffs on goods can put a lasting damper on GPU data center construction.
Chaotic tit-for-tat tariffs also further erode people’s trust in and willingness to rely on the US economy, fueling civil unrest and weakening its international ties. The relative decline of the US makes it and its allies vulnerable to land grabs, which may prompt militaries to ramp up contracts for autonomous weapons. State leaders may also react to civil unrest by procuring tools for automated surveillance. So surveillance and autonomous weapons are "growth" opportunities that we can already see AI companies pivot to.
[2]
Supporting other communities unconditionally also builds healthier relations. Leaders working to stop AI's increasing harms are suspicious of us buddying up with and soliciting outsized funds from tech leaders. Those connections and funds give us a position of power, and they do not trust us to wield that power to enable their work. If it even looks like we use our money to selectively influence their communities to do our bidding, that will confirm their suspicions. While in my experience, longtermist grants are unusually hands-off, it only takes one incident. This already happened – last year, a fund suddenly cancelled an already committed grant, for political reasons they didn't clarify. The recipient runs professional activities and has a stellar network. They could have gone public, but instead decided to no longer have anything to do with our community.
4 comments
comment by Vladimir_Nesov · 2025-04-11T15:05:04.131Z · LW(p) · GW(p)
I think it's overdetermined by Blackwell NVL72/NVL36 and long reasoning training that there will be no AI-specific "crash" until at least late 2026. Reasoning models want a lot of tokens, but their current use is constrained by cost and speed, and these issues will be going away to a significant extent. Already Google has Gemini 2.5 Pro (taking advantage of TPUs), and within a few months OpenAI and Anthropic will make reasoning variants of their largest models practical to use as well (those pretrained at the scale of 100K H100s / ~3e26 FLOPs, meaning GPT-4.5 for OpenAI).
The same practical limitations (as well as novelty of the technique) mean that long reasoning models aren't using as many reasoning tokens as they could in principle, everyone is still at the stage of getting long reasoning traces to work at all vs. not yet, rather than scaling things like the context length they can effectively use (in products rather than only internal research). It's plausible that contexts with millions of reasoning tokens can be put to good use, where other training methods failed to make contexts at that scale work well.
So later in 2025 there's better speed and cost, driving demand in terms of the number of prompts/requests, and for early to mid-2026 potentially longer reasoning traces, driving demand in terms of token count. After that, it depends on whether capabilities get much better than Gemini 2.5 Pro. Pretraining scale in deployed models will only advance 2x-5x by mid-2026 compared to now (using 100K-200K Blackwell chip training systems built in 2025), which is not a large enough change to be very noticeable, so it's not by itself sufficient to prevent a return of late 2024 vaguely pessimistic sentiment, and other considerations might get more sway with funding outcomes.
But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away, and in 2027 or slightly earlier there will be models pretrained for ~4e27 FLOPs using the training systems built in 2025-2026 (400K-600K Blackwell chips, 0.8-1.4 GW, $22-35bn), which as a 10x-15x change (compared to the models currently or soon-to-be deployed in 2025) is significant enough to get noticeably better across the board, even if nothing substantially game-changing gets unlocked. So the "crash" might be about revenue no longer growing 3x per year, and so the next generation training systems built in 2027-2028 not getting to the $150bn scale they otherwise might've aspired to.
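For concreteness, a minimal sketch of that scale-up arithmetic, taking the quoted FLOP figures (~3e26 for current-scale pretraining, ~4e27 for models trained on the 2025-2026 systems) as given rather than independently estimated:

```python
# Rough check of the pretraining scale-up, using the figures quoted above.
current_scale_flops = 3e26   # ~100K H100s: scale of the largest currently deployed models
next_gen_flops = 4e27        # 400K-600K Blackwell chips: training systems built in 2025-2026

scale_up = next_gen_flops / current_scale_flops
print(f"Scale-up over currently deployed models: ~{scale_up:.0f}x")  # ~13x, within the stated 10x-15x range
```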
↑ comment by Remmelt (remmelt-ellen) · 2025-04-12T02:27:31.094Z · LW(p) · GW(p)
Thanks, I might be underestimating the impact of new Blackwell chips with improved computation.
I’m skeptical that offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong – especially if new model architectures come out as well.
And if corporations throw enough cheap compute behind it, plus widespread personal data collection, they could reach commercially very useful model functionality. My hope is that a market crash happens before that, so that we can enable other concerned communities to restrict the development and release of dangerously unscoped models.
> But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away
What is this revenue estimate assuming?
↑ comment by Vladimir_Nesov · 2025-04-12T03:12:39.684Z · LW(p) · GW(p)
> the impact of new Blackwell chips with improved computation
It's about world size, not computation [LW(p) · GW(p)], and has a startling effect that probably won't occur again with future chips, since Blackwell sufficiently catches up to models at the current scale.
> But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away
> What is this revenue estimate assuming?
The projection for 2025 is $12bn at 3x/year growth (1.1x per month, so $1.7bn per month at the end of 2025, $3bn per month in mid-2026), and my pessimistic timeline above assumes that this continues up to either end of 2025 or mid-2026 and then stops growing after the hypothetical "crash", which gives $20-36bn per year.
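For concreteness, a minimal sketch of that arithmetic, assuming the $12bn projection corresponds to an average run rate of roughly $1bn/month around mid-2025 (an assumption about how the figures line up, not a verified reconstruction):

```python
# Sketch of the revenue arithmetic above; the ~$1bn/month mid-2025 anchor is an assumption.
monthly_growth = 3 ** (1 / 12)                              # 3x/year growth ~= 1.1x per month

mid_2025_monthly = 12.0 / 12                                # $12bn over 2025 -> ~$1bn/month around mid-year
end_2025_monthly = mid_2025_monthly * monthly_growth ** 6   # ~6 more months of growth
mid_2026_monthly = end_2025_monthly * monthly_growth ** 6   # a further ~6 months

print(f"End of 2025: ~${end_2025_monthly:.1f}bn/month (~${end_2025_monthly * 12:.0f}bn annualized)")
print(f"Mid-2026:    ~${mid_2026_monthly:.1f}bn/month (~${mid_2026_monthly * 12:.0f}bn annualized)")
# -> roughly $1.7bn and $3bn per month, i.e. the ~$20-36bn per year range above
```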
↑ comment by Remmelt (remmelt-ellen) · 2025-04-17T05:34:04.765Z · LW(p) · GW(p)
> It's about world size, not computation, and has a startling effect that probably won't occur again with future chips
Thanks, I’ve got to say I’m a total amateur when it comes to GPU performance, so I’ll take the time to read your linked comment to understand it better.