It is (probably) time for a Butlerian Jihad

post by waterlubber · 2025-01-20T05:55:17.156Z · LW · GW · 2 comments

Contents

  What Lies Ahead
  Best Course of Action
  Appeal to Urgency
  Downsides
    AI Pause
    Backfiring
  Battle Plan
  Conclusions & Clarifications

We find ourselves at the precipice of a great upset: a mode switch of society. "Hinge of history" might not be the best description - after all, many decisions made in the past could have prevented the present day - but nonetheless we are uniquely poised to affect the outcome of AI now, and the influence of the common man will only diminish in the future. A tangle of paths lies ahead; it is up to us to assess these outcomes and steer clear of the maelstroms and the future torment nexus.

What Lies Ahead

Generally, outcomes from AGI development might be grouped into three broad categories: "good," "ambivalent," and "clearly awful."

Good outcomes might be:

Ambivalent outcomes include:

and of course the bad outcomes, which include:

The overwhelming consensus among AI & alignment researchers, experts, and average Joes is that the "aligned AGI" outcome is extremely unlikely and that the bad outcomes are significantly more likely (estimates start at 1%+ but usually range from 10-90%). The most optimistic predictions of "good" outcomes usually come from groups that stand to benefit from public support for AI (e.g. OpenAI employees & friends); even these groups, however, still publicly profess very significant credence (10%+!) in bad outcomes.

Ultimately, the cost-benefit analysis weighs strongly against AI development: we see a significant chance of an overwhelmingly bad outcome, solid odds of ambivalent and mundane outcomes, and a razor-thin gamble on an extraordinarily good outcome. The timescale for this decision is ever shrinking; Metaculus puts AGI at about six years out. However, humanity is not otherwise beholden to a six-year clock: should we not develop AI further, we likely have ~hundreds of years to sort out our current problems before facing similar extinction-level threats. As humanity develops, we will find ourselves better equipped to revisit artificial intelligence in the future, and possibly to approach a utopian aligned ASI in a safer and more cautious way.
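
To make the shape of that argument concrete, here is a minimal expected-value sketch in Python. The probabilities and payoffs are purely illustrative assumptions of mine, not figures from any survey or forecaster; the point is only that a modest chance of a catastrophic outcome dominates the sum.

```python
# Toy expected-value sketch of the cost-benefit claim above.
# All numbers are illustrative assumptions, not sourced estimates.
outcomes = {
    "extraordinarily good": (0.05,   100.0),   # razor-thin gamble
    "ambivalent / mundane": (0.65,     1.0),   # solid odds
    "overwhelmingly bad":   (0.30, -1000.0),   # significant chance
}

expected_value = sum(p * u for p, u in outcomes.values())
print(f"Expected value of pushing ahead: {expected_value:+.1f}")
# Even with generous odds on the good outcome, the small-but-real chance of
# catastrophe drives the total far below zero.
```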

Best Course of Action

Personally, I'd prefer to see humanity develop itself rather than outsource its very soul to thinking machines. Nonetheless, many technologists may wish to see some of the fruits and mundane improvements brought about by AI. In either case, the best course of action remains to halt its development now - at a near-human level, with humans in control.

"AI pause" critics bring up the risks of a compute overhang or development by competing nations (e.g China). These are real concerns, but can absolutely be addressed. Progress on compute, much like other progress, can be halted and stalled: the research and development takes place at an extremely limited number of organizations (NVIDIA, TSMC, ASML, etc.), requires large capital & human experience investments, and is easily disrupted by government bodies. Should the US decide to halt or slow semiconductor development, it is highly likely that it could do so (the rest of the world would likely collaborate, as they are currently behind in the AI race and would reasonably fear the same outcomes we do).

Likewise, risks from competing nation-states (e.g. China) could be mitigated via existing international collaboration strategies - nuclear-proliferation management techniques, like inspections and intelligence agencies keeping each other in check, could feasibly serve as a means for the world to prevent the development of AI.

I am by no means an expert in international law, and likely solutions in this regard will differ significantly from what I have suggested. Nonetheless, I think there is a strong chance of international collaboration against AI, should the political will exist.

Appeal to Urgency

Right now, public opinion of AI is at an all-time low.

A sudden outcry from the public against AI is likely to stall or help pause AI research; a united political front or movement could plausibly delay it by several years or perhaps decades, buying us time to solve technical and social alignment challenges or to develop a long-term solution. Generally, political will is strongly swayed by positive feedback loops.

The same loops that make it easier for an anti-AI movement to propagate make it dangerously easy for AI to become a "locked-in" technology that cannot be stopped. Once a third of the population is hooked on character.ai, $100B+ in revenue is rolling in, or the US/China arms race takes off, it's going to be very, very difficult to argue for a pause or halt to AI development.

Downsides

Potential downsides to a present-day attack on AI can be broken into two groups: downsides from AI pauses in general, and the effects of a concerted effort "backfiring". 

AI Pause

"AI-pause-derived" issues are largely based on the idea of an AI arms race, mostly with China. China is not run by clueless people; most users on this website went from unconcerned with AI to highly concerned with nothing but well-reasoned arguments. If China thinks it is in her best interest to not develop the Torment Nexus, she will not develop the Torment Nexus - and the best way to convince China of this is to make strides towards halting it here in the United States. 

Compute-overhang issues can be resolved by simply not developing semiconductors further. Numerical computing research for medicine, aerospace, and other useful fields lags behind hardware by several years; even if we stopped developing semiconductors today, improvement in these areas would be slowed, but not eliminated, for a decade. Any software developer knows about the many, many orders of magnitude of inefficiency in the vast majority of computer programs we use today.
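
As a toy illustration of that inefficiency claim, the snippet below (a sketch, assuming NumPy is available) times the same sum of squares as an interpreted Python loop and as a vectorized library call on identical hardware; the gap is typically one to two orders of magnitude, and real workloads will vary.

```python
import time
import numpy as np

# Same computation, two ways: an interpreted loop vs. a vectorized call.
N = 1_000_000
xs = list(range(N))
arr = np.arange(N, dtype=np.int64)

t0 = time.perf_counter()
total = 0
for x in xs:          # interpreted, one element at a time
    total += x * x
t1 = time.perf_counter()

t2 = time.perf_counter()
total_vec = int(np.dot(arr, arr))   # compiled, vectorized
t3 = time.perf_counter()

assert total == total_vec
print(f"python loop: {t1 - t0:.4f} s")
print(f"numpy:       {t3 - t2:.4f} s  (~{(t1 - t0) / (t3 - t2):.0f}x faster)")
```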

Backfiring

The largest hazard here is likely the formation of a partisan divide on AI (i.e. right-wing vs. left-wing US politics), which would allow the issue to be co-opted and defeated much like prior grassroots movements in the US. The appeals to different political groups are likely to be wildly different and potentially mutually exclusive - nonetheless, a concerted effort to bring up this point ahead of time could help prevent such a split from happening.

A massive public outcry premised on "AI will be incredibly powerful" might also drive further investment (this is a key part of OpenAI's pitch); arguments against AI should likely focus on "AI will either flop or cause damage; neither is good" and similar points.

Battle Plan

I am too far removed from the land of SF, alignment, and US politics to effectively formulate a battle plan for coordinated anti-AI support. Nonetheless, I believe this forum contains many capable readers who, collectively, could produce significant results.

My suggestions, to be taken independently in whatever plans you all cook up, are as follows:

Conclusions & Clarifications

It's only going to get more difficult to argue for an AI pause. Humanity can solve its own problems without AI; we don't need to build it. Arguments against an AI pause or halt are not great, and they certainly aren't strong enough to justify continued sacrifice to Moloch. You, the humble LW reader, can make an impact on the ~hundreds of people that you know; even priming them against whatever shenanigans develop in the near future might help sway the outcome of the world.

As a clarifying note: I use the terms AI/AGI/ASI to mean silicon-based intelligence intended to mimic, replace, or supersede the decision-making, planning, and agency of the human brain. Software tools such as image upscaling algorithms, speech recognition, or linear regression are not included in this definition, and while they may have their own pros & cons I don't wish to involve them in the discussion of this post.

I also don't intend this post to be an ironclad argument in favor of AI pause. Rather, I think it presents a sentiment that others on this site approximately share, and would serve as a useful jumping-off point for similar-minded users to plot and scheme ways to make sure AGI doesn't get built. 

As a final note: the term "Butlerian Jihad" is taken from Dune and describes the shunning of "thinking machines" by mankind. It does not mean, in this context, terrorism or similar violent means of preventing the development of AI. I do not think these would be effective measures; right now, the best strategy lies firmly in the court of public opinion.

2 comments

Comments sorted by top scores.

comment by Foyle (robert-lynn) · 2025-01-20T07:26:17.498Z · LW(p) · GW(p)

The phone seems to be off the hook for most of the public on AI danger, perhaps a symptom of burnout from numerous other scientific millennialist scares - people have been hearing of imminent dangers of catastrophe for decades that have failed to impact the lives of 95%+ of the population in any significant way, and now just write it all off as more of the same.

I am sure that most LW readers find little in the way of positive reception for our concerns amongst less technologically engaged family members and acquaintances. There are just too many comforting techno-utopian narratives that we are still having to compete with, informed by the superficially positive representations in movies and TV, and most people bias towards optimism in relatively good/comfortable times like these. We are dealing with emotional reactions and the sheep-like 'vibe' of the population rather than thoughtfully considered positions.

But I am pretty sure that will all change as soon as we see significant AI competition/undercutting for white-collar professions. Those affected will quickly start baying for blood, and the electorally dominant empathic response to those sad stories of the economically impacted will rapidly swing the more emotively governed wider population against AI. OECD democratic governments will inevitably then move to ban AI from taking the jobs of particularly politically protected classes of people (this might still leave some niches vulnerable - like medicine, where there is always a shortage of service supply causing ridiculously high prices, and perhaps tech, for vengeful reasons). It will be a Butlerian Jihad lite, aimed at symptoms rather than causes, and will likely buy us a few years of relative normalcy as more dangerous ASI is developed in government-approved labs and by despotic regimes.

I doubt it will save us on a 50-year time frame, but it will perhaps make the economic disruption less severe for 5-10 years.

 

The way to have a bigger impact in the shorter term would be to buy AI-danger editorial support from influencers, particularly those with the young female audiences that form the core of environmental and other popular protest movements. They are by temperament the easiest to bring on board, and have outsized political influence.

comment by Robert Cousineau (robert-cousineau) · 2025-01-20T07:07:35.674Z · LW(p) · GW(p)

I mostly agree with the body of this post, and think your calls to action make sense. 

On your title and final note: Butlerian Jihad feels out of place.  It’s catchy, but it seems like you are recommending AI concerned people more or less do what AI concerned people already do.  I feel like we should save our ability to use words that are a call to arms for a time when that is what we are doing.