PauseAI and E/Acc Should Switch Sides

post by WillPetillo · 2025-04-01T23:25:51.265Z · LW · GW · 4 comments

In the debate over AI development, two movements stand as opposites: PauseAI calls for slowing down AI progress, and e/acc (effective accelerationism) calls for rapid advancement.  But what if both sides are working against their own stated interests?  What if the most rational strategy for each would be to adopt the other's tactics—if not their ultimate goals?

AI development speed ultimately comes down to policy decisions, which are themselves downstream of public opinion.  No matter how compelling technical arguments might be on either side, widespread sentiment will determine what regulations are politically viable.

Public opinion is most powerfully mobilized against technologies following visible disasters.  Consider nuclear power: despite being statistically safer than fossil fuels, its development has been stagnant for decades.  Why?  Not because of environmental activists, but because of Chernobyl, Three Mile Island, and Fukushima.  These disasters produce visceral public reactions that statistics cannot overcome.  Just as people fear flying more than driving despite the latter being far more dangerous, catastrophic events shape policy regardless of their statistical rarity.

Any e/acc advocate with a time horizon extending beyond the next fiscal quarter should recognize that the most robust path to sustained, long-term AI acceleration requires implementing reasonable safety measures immediately.  By accepting measured caution now, accelerationists could prevent a post-catastrophe scenario in which public fear triggers an open-ended, comprehensive slowdown lasting decades.  Rushing headlong into development without guardrails virtually guarantees the major "warning shot" that would permanently turn public sentiment against rapid AI advancement, just as accidents like Chernobyl turned public sentiment against nuclear power.

Meanwhile, the biggest dangers from superintelligent AI—proxy gaming, deception, and recursive self-improvement—won't show clear evidence until it's too late.  AI safety work focused on current harms (hallucination, complicity with malicious use, saying politically incorrect things, etc.) fails to address the fundamental alignment problems posed by ASI.  These problems may take decades to solve—if they're solvable at all.  This becomes even more concerning when we consider that "successful" alignment might create dystopian concentrations of power.

Near-term AI safety efforts, both technical and policy-based, might succeed at preventing minor catastrophes while allowing development to continue unabated toward existential risks.  They are like equipping a car to withstand rough terrain so that it can drive more smoothly off a cliff.

If any of that sounded like a good idea, note the date of posting and consider this your periodic reminder that AI safety is not a game.  Trying to play 3D Chess with complex systems is a recipe for unintended, potentially irreversible consequences.

…But if you’re on break and just want a moment to blow off steam, feel free to have fun in the comments.

4 comments

comment by Seth Herd · 2025-04-02T01:37:13.325Z · LW(p) · GW(p)

I'm on board! We needed people going fast to get seatbelts!

AI safety isn't a game, which means you'll be disappointed in yourself (if only very briefly) if you fail to play your best to win. The choice between risky 3D chess moves and virtue ethics is not obvious.

Replies from: AprilSR
comment by AprilSR · 2025-04-02T01:45:13.223Z · LW(p) · GW(p)

I think it's obvious that you should not pursue 3D chess without investing serious effort in making sure that you play 3D chess correctly. I think there is something to be said for ignoring the shiny clever ideas and playing simple virtue ethics.

But if a clever scheme is in fact better, and you have accounted for all of the problems inherent to clever schemery, of which there are very many, then... the burden of proof isn't literally insurmountable, you're just unlikely to end up surmounting it in practice.

(Unless it's 3D chess where the only thing you might end up wasting is your own time. That has a lower burden of proof. Though still probably don't waste all your time.)

comment by Adebayo Mubarak (adebayo-mubarak) · 2025-04-02T20:52:50.348Z · LW(p) · GW(p)

One thing is clear for sure... If both accelerationists and safety advocates are susceptible to self-defeating strategies, what does an actually effective approach look like?

comment by YonatanK (jonathan-kallay) · 2025-04-02T16:43:56.240Z · LW(p) · GW(p)

There's a gap in the Three Mile Island/Chernobyl/Fukushima analogy, because those disasters were all in the peaceful uses of nuclear power. I'm not saying that they didn't also impact the nuclear arms race, only that, for completeness, the arms race dynamics have to be considered as well.