The Failed Strategy of Artificial Intelligence Doomers
post by Ben Pace (Benito) · 2025-01-31T18:56:06.784Z · 3 comments
This is a link post for https://www.palladiummag.com/2025/01/31/the-failed-strategy-of-artificial-intelligence-doomers/
This is the best sociological account of the AI x-risk reduction efforts of the last ~decade that I've seen. I encourage folks to engage with its critique and propose better strategies going forward.
Here's the opening ~20% of the article. I encourage reading it all.
In recent decades, a growing coalition has emerged to oppose the development of artificial intelligence technology, for fear that the imminent development of smarter-than-human machines could doom humanity to extinction. The now-influential form of these ideas began as debates among academics and internet denizens, which eventually took form—especially within the Rationalist and Effective Altruist movements—and grew in intellectual influence over time, along the way collecting legible endorsements from authoritative scientists like Stephen Hawking and Geoffrey Hinton.
Ironically, by spreading the belief that superintelligent AI is achievable and supremely powerful, these “AI Doomers,” as they came to be called, inspired the creation of OpenAI and other leading artificial intelligence labs whose technology they argue will destroy us all. Despite this, they have continued nearly the same advocacy strategy, and are now in the process of persuading Western governments that superintelligent AI is achievable and supremely powerful. To this end, they have created organized and well-funded movements to lobby for regulation, and their members are staffing key positions in the U.S. and British governments.
Their basic argument is that more intelligent beings can outcompete less intelligent beings, just as humans outcompeted mastodons or saber-toothed tigers or Neanderthals. Computers are already ahead of humans in some narrow areas, and we are on track to create a superintelligent artificial general intelligence (AGI) which can think as broadly and creatively in any domain as the smartest humans. “Artificial general intelligence” is not a technical term, and is used differently by different groups to mean everything from “an effectively omniscient computer which can act independently, invent unthinkably powerful new technologies, and outwit the combined brainpower of humanity” to “software which can substitute for most white-collar workers” to “chatbots which usually don’t hallucinate.”
AI Doomers are concerned with the former scenario, where computer systems outreason, outcompete, and doom humanity to extinction. The AI Doomers are only one of several factions that oppose AI and seek to cripple it via weaponized regulation. There are also factions concerned about “misinformation” and “algorithmic bias,” which in practice means they think chatbots must be censored to prevent them from saying anything politically inconvenient. Hollywood unions oppose generative AI for the same reason that the longshoremen’s union opposes automating American ports and insists on requiring as much inefficient human labor as possible. Many moralists seek to limit “AI slop” for the same reasons that moralists opposed previous new media like video games, television, comic books, and novels—and I can at least empathize with this last group’s motives, as I wasted much of my teenage years reading indistinguishable novels in exactly the way that 19th century moralists warned against. In any case, the AI Doomers vary in their attitudes towards these factions. Some AI Doomers denounce them as Luddites, some favor alliances of convenience, and many stand in between.
Most members of the “AI Doomer” coalition initially called themselves by the name of “AI safety” advocates. However, this name was soon co-opted by these other factions with concerns smaller than human extinction. The AI Doomer coalition has far more intellectual authority than AI’s other opponents, with the most sophisticated arguments and endorsements from socially-recognized scientific and intellectual elites, so these other coalitions continually try to appropriate and wield the intellectual authority gathered by the AI Doomer coalition. Rather than risk being misunderstood, or fighting a public battle over the name, the AI Doomer coalition abandoned the name “AI safety” and rebranded itself to “AI alignment.” Once again, this name was co-opted by outsiders and abandoned by its original membership. Eliezer Yudkowsky coined the term “AI Notkilleveryoneism” in an attempt to establish a name that could not be co-opted, but unsurprisingly it failed to catch on among those it was intended to describe.
Today, the coalition’s members do not agree on any name for themselves. “AI Doomers,” the only widely understood name for them, was coined by their rhetorical opponents and is considered somewhat offensive by many of those it refers to, although some have adopted it themselves for lack of a better alternative. While I regret being rude, this essay will refer to them as “AI Doomers” in the absence of any other clear, short name.
Whatever name they go by, the AI Doomers believe the day computers take over is not far off, perhaps as soon as three to five years from now, and probably not longer than a few decades. When it happens, the superintelligence will achieve whatever goals have been programmed into it. If those goals are aligned exactly to human values, then it can build a flourishing world beyond our most optimistic hopes. But such goal alignment does not happen by default, and will be extremely difficult to achieve, if its creators even bother to try. If the computer’s goals are unaligned, as is far more likely, then it will eliminate humanity in the course of remaking the world as its programming demands. This is a rough sketch, and the argument is described more fully in works like Eliezer Yudkowsky’s essays and Nick Bostrom’s Superintelligence.
This argument relies on several premises: that superintelligent artificial general intelligence is philosophically possible, and practical to build; that a superintelligence would be more or less all-powerful from a mere human perspective; that superintelligence would be “unfriendly” to humanity by default; that superintelligence can be “aligned” to human values by a very difficult engineering program; that superintelligence can be built by current research and development methods; and that recent chatbot-style AI technologies are a major step forward on the path to superintelligence. Whether those premises are true has been debated extensively, and I don’t have anything useful to add to that discussion which I haven’t said before. My own opinion is that these various premises range from “pretty likely but not proven” to “very unlikely but not disproven.”
Even assuming all of this, the political strategy of the AI Doomer coalition is hopelessly confused and cannot possibly work. They seek to establish onerous regulations on for-profit AI companies in order to slow down AI research—or forcibly halt research entirely, euphemized as “Pause AI,” although most of the coalition sees the latter policy as desirable but impractical to achieve. They imagine that slowing or halting development will necessarily lead to “prioritizing a lot of care over moving at maximal speed” and wiser decisions about technology being made. This is false, and frankly very silly, and it’s always left extremely vague because the proponents of this view cannot articulate any mechanism or reason why going slower would result in more “care” and better decisions, with the sole exception of Yudkowsky’s plan to wait indefinitely for unrelated breakthroughs in human intelligence enhancement.
But more immediately than that, if AI Doomer lobbyists and activists like the Center for AI Safety, the Institute for AI Policy and Strategy, Americans for Responsible Innovation, Palisade Research, the Safe AI Forum, Pause AI, and many similar organizations succeed in convincing the U.S. government that AI is the key to the future of all humanity and is too dangerous to be left to private companies, the U.S. government will not simply regulate AI to a halt. Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes. This is the same mistake the AI Doomers made a decade ago, when they convinced software entrepreneurs that AI is the key to the future and so inspired them to make the greatest breakthroughs in AI of my lifetime. The AI Doomers make these mistakes because their worldview includes many assumptions, sometimes articulated and sometimes tacit, which don’t hold up to scrutiny.
Continue reading here.
3 comments
comment by StefanHex (Stefan42) · 2025-01-31T20:40:47.941Z
I’ve just read the article and found it very thought-provoking; I will be thinking more about it in the days to come.
One thing I kept thinking, though: why doesn’t the article mention AI safety research much?
In the passage
The only policy that AI Doomers mostly agree on is that AI development should be slowed down somehow, in order to “buy time.”
I was thinking: surely most people would agree on policies like “Do more research into AI alignment” / “Spend more money on AI Notkilleveryoneism research”?
In general, the article frames the “buy time” policy as waiting for more competent governments or humans, while I find it plausible that progress in AI alignment research could outweigh that effect.
—
I suppose the article is primarily concerned with AGI and ASI, and in that matter I see much less research progress than in more prosaic fields.
That being said, I believe that research into questions like “When do Chatbots scheme?”, “Do models have internal goals?”, “How can we understand the computation inside a neural network?” will make us less likely to die in the next decades.
Then, current rationalist / EA policy goals (including but not limited to pauses and slowdowns of capabilities research) could have a positive impact via the “do more (selective) research” path as well.
comment by Nathan Helm-Burger (nathan-helm-burger) · 2025-01-31T22:59:17.797Z
But more immediately than that, if AI Doomer lobbyists and activists ... succeed in convincing the U.S. government that AI is the key to the future of all humanity and is too dangerous to be left to private companies, the U.S. government will not simply regulate AI to a halt. Instead, the U.S. government will do what it has done every time it’s been convinced of the importance of a powerful new technology in the past hundred years: it will drive research and development for military purposes
I said exactly this in the comments on Max Tegmark's post...
"If you are in the camp that assumes that you will be able to safely create potent AGI in a contained lab scenario, and then you'd want to test it before deploying it in the larger world... Then there's a number of reasons you might want to race and not believe that the race is a suicide race.
Some possible beliefs downstream of this:
My team will evaluate it in the lab, and decide exactly how dangerous it is, without experiencing much risk (other than leakage risk).
We will test various control methods, and won't deploy the model on real tasks until we feel confident that we have it sufficiently controlled. We are confident we won't make a mistake at this step and kill ourselves.
We want to see empirical evidence in the lab of exactly how dangerous it is. If we had this evidence, and knew that other people we didn't trust were getting close to creating a similarly powerful AI, this would guide our policy decisions about how to interact with these other parties. (E.g. what treaties to make, what enforcement procedures would be needed, what red lines would need to be drawn).
"
https://www.lesswrong.com/posts/oJQnRDbgSS8i6DwNu/the-hopium-wars-the-agi-entente-delusion?commentId=ssDnhYeHJCLCcCk3x
comment by romeostevensit · 2025-01-31T22:35:27.008Z
Notkilleveryoneism: why not Omnicidal AI? As in, we oppose OAI.