AI Politics: Polarization and Chaos

post by American Psychohistory · 2025-03-31T23:46:38.303Z · LW · GW · 0 comments

Contents

  Major Assumption: No Superintelligence Speedrun
  Polarization: AI as a New Political Battlefield
  The Left: Resistance and Radicalization
  The Right: AI as Opportunity and Weapon
  Bots, Spam, and Chaos
  Silicon Valley's Internal Struggle
  Implications for AI Safety Advocates
  Influencing Silicon Valley: A Crucial Strategy
  Conclusion: Preparing for the Storm

Disclaimer: Most futurology turns out to be totally wrong and utterly without foundation. This goes double for unprecedented world-changing tech like AI. Still, it’s difficult for me to imagine that we won’t see some version of the following scenario.

Major Assumption: No Superintelligence Speedrun

We might quickly hit AGI and then use a million AGIs to invent superintelligence in a month. In that case, unless you work at the AI company concerned or are an important government regulator, you will have next to no impact on how things turn out.

For everyone else, it's more important to concentrate on medium/slow takeoff scenarios in which AI is integrated into and disrupts industries and everyday life. This will give the general public, politicians, and activists time to incorporate AI into their agendas.

Polarization: AI as a New Political Battlefield

The economic winners (capitalists and blue-collar workers) and losers (educated knowledge workers) of a medium/slow AI takeoff already form the opposing sides in the current political war. AI will slot seamlessly into the existing left-right divide, massively exacerbating the culture war by threatening the left's economic livelihood. Chaos, mass protest, and even potential civil war will result.

The Left: Resistance and Radicalization

The liberal side of the spectrum includes knowledge workers, professionals, and students—the people whose jobs will be automated by AI.

The threat of automation will deepen liberals' existing antipathy towards Silicon Valley and big tech. Professionals will form unions and strike to protect their jobs and wages. Meanwhile, college students, facing massive debt for degrees AI will render obsolete by the time they graduate, will join protests. Historically, students have been leaders in protest movements, and now—with AI taking care of homework—they’ll have even more time to mobilize politically.

While personal economic insecurity may be the catalyst, protesters will integrate anti-AI activism into the broader liberal agenda, highlighting AI’s potential for discrimination, the exploitation of internet commons by corporations, environmental impacts, and troubling applications of algorithmic judgment in human lives.

Their demands will likely include a universal basic income, extensive regulatory oversight of AI, mandatory human supervision, and protection of certain jobs as strictly human domains.

The Right: AI as Opportunity and Weapon

On the right, capitalists will embrace AI as a powerful economic tool. Business owners will see automation as a chance to dramatically reduce labor costs and enhance productivity. Blue-collar workers will also favor AI—benefiting from affordable AI-driven services like medical diagnoses, legal assistance, and help navigating bureaucracy.

Politically, conservatives will use AI to reduce government bureaucracy, replacing federal employees with AI agents. Right-wing pro-AI arguments will center around efficiency, international competitiveness and security, and economic growth.

Although conservatives will push to control AI-generated speech to mitigate its perceived liberal bias, this won't ultimately cause them to break with AI companies. AI CEOs will likely make token protests, then tweak their prompts or implement whatever kludges are necessary to placate conservative regulators, just as social media companies have bowed to political pressure on content moderation.

Bots, Spam, and Chaos

Complicating the picture will be the rise of uncontrolled, jailbroken AI models flooding digital spaces. Online platforms will become saturated with sophisticated spam, convincing scams, and AI-generated propaganda. Social media and online communities, overwhelmed by bot activity, will lose much of their legitimacy.

As digital spaces grow increasingly unreliable and unusable, more activism will move offline, intensifying street demonstrations, marches, and real-world political unrest.

The courts and government bureaucracy will become hopelessly clogged as petitioners flood them with AI-generated forms and documents. Judges and bureaucrats will be slow to adopt AI to handle this influx, and may well be legally prohibited from doing so. Conservatives will leverage the resulting rage to intensify their campaign to dismantle the government. Frustrated by their inability to achieve change through gummed-up legal channels, activists on both the left and the right will be increasingly tempted to resort to violence.

Silicon Valley's Internal Struggle

Tech companies occupy an uncertain position in this scenario. While CEOs and executives will welcome automation, their predominantly liberal workforce will be first in line for replacement.

As a result, internal conflicts, disgruntled employees, sabotage, and security breaches will be common in the tech sector. Tech companies will have to balance their enthusiasm for automation with the damage that online AI-generated chaos will do to their existing business models. Social media companies will have to choose whether to take aggressive anti-bot measures (such as eliminating online anonymity) or to use AI bots themselves to control narratives and manipulate user engagement.

Implications for AI Safety Advocates

Those focused on AI safety should recognize the imminent politicization of their field. As AI becomes an ideological issue, political identity will increasingly determine AI attitudes. AI safety is now in the same position pandemic safety was just before COVID-19.

AI safety researchers and advocates are going to take fire from very angry people on both sides of the political divide. Anyone working on making AI more tractable will be accused of enabling the capitalists to realize their techno-feudalist agenda. Anyone pointing out unsafe AI behavior will be tarred as a Luddite and probable Chinese agent. AI alignment research will become as controversial as race or climate science, but in a more chaotic age when anybody with a chip on their shoulder will be able to generate an army of online troll bots to harass their opponents.

Strategically, it might be wise for AI safety advocates to proactively align themselves with liberal activist organizations—even those currently skeptical about AI safety. Whatever your opinions on wokeness, identity politics, and environmental regulation, if you want to slow or regulate AI, the left will be on your side and the right will be against you. By joining with activists early, safety groups would be in place to set the agenda for future anti-AI protests and make constructive political use of popular rage against the machines.

On the other hand, AI safety researchers need to be increasingly cautious if they want to be seen as neutral scientists as opposed to paid stooges of the left or right. Political opponents are going to dig up whatever donations, affiliations, or personal foibles they can to tar the research and character of safety researchers whose work supports the opposing camp. Old internet comments on anarcho-capitalism or shrimp welfare will be dredged up to prove that safety researchers are capitalist tools or anti-human ecoterrorists.

Influencing Silicon Valley: A Crucial Strategy

Finally, persuading influential Silicon Valley actors to shift away from unrestricted AI acceleration and toward embracing strong regulatory frameworks will be crucial. If tech giants begin to view uncontrolled AI as an existential threat to their profitability, they could become powerful allies in advocating for oversight and caution. Unlike the political left or right, big tech could plausibly swing either way, so convincing it to push for AI regulation would be a good use of resources for anyone with the appropriate connections.

Conclusion: Preparing for the Storm

The culture war is going to eat AI and grow bigger and more dangerous in the process. Maintaining sanity and rationality about AI safety will become increasingly difficult. Safety researchers will have to defend their positions as objective scientists against political attack. Safety advocates may be forced into alliances of convenience with liberals or tech bosses. There's going to be a lot of fear and a lot of politics, but I hope that some of your minds manage to survive because we're going to need them.


 
