AGI Safety & Brief History of Inequality

post by ank · 2025-02-22T16:26:16.546Z · 2 comments

Contents

  The Evolution of Inequality
  Hydrogen in the First Star
  Dust in the Molten Earth
  Carbon in the Rise of Life
  Humans in the Age of Civilization
  The Pattern
  The Rise of Agentic AGIs
  How Humans Chose to Become Bacteria in the Agentic AGI World
  The Fight Back: T-Cell AGIs and Beyond
  A Multiversal Static Intelligence: The Safe Alternative
  Why Static?
  Power Redefined
  Conclusion: A Call to Action

Inequality is as old as the universe itself. From the moment the first hydrogen atoms were forged, through the rise of stars, planets, life, and human civilization, the distribution of power, agency, and freedom has never been equal.

(Please comment or DM and downvote as much as you want, even without reading—it's not a requirement, of course; I just want to know your opinion and improve. My sole goal is to decrease the probability of a permanent dystopia. This is part of a series on alignment.)

Today, we stand at a precipice: the creation of artificial general intelligences (AGIs) threatens to amplify this inequality to an unimaginable degree. If this goes unchecked, we risk becoming like bacteria in a world dominated by agentic AGIs—insignificant, powerless, and perhaps even enslaved. This post explores the historical roots of inequality, the looming AGI revolution, and a radical proposal to secure humanity's future: a multiversal static intelligence.

 

The Evolution of Inequality

Hydrogen in the First Star

Imagine you’re a hydrogen atom drifting in the early cosmos, free and unburdened. Then, gravity pulls you into the heart of the first star. You’re trapped, compressed, and fused into a heavier element through a violent, burning process. For billions of years, you endure unimaginable pressure, your original form lost forever. This was one of the universe’s first tastes of inequality: some atoms remained free, while others were transformed and confined. We don't want to end up being trapped by agentic AIs like that.

Dust in the Molten Earth

Picture yourself as a dust particle floating in the void. Suddenly, you’re caught in the gravitational swirl that forms Earth. You’re battered, bruised, and dragged into the planet’s molten core, where you fry for eons. While some particles drift peacefully in space, you’re locked in a fiery prison—a disparity of fate driven by chance and physics. We don't want to end up being trapped by agentic AIs like that.

Carbon in the Rise of Life

Now, you’re a carbon atom in a stable rock on Earth, enjoying a serene existence beside a boiling lake. But then, you’re swept into the chaos of early life—bonded into RNA, enslaved by molecular chains, and churned through countless transformations. What began as a tranquil state becomes an eternity of servitude within a multicellular organism. Life’s complexity brought agency to some, but bondage to others. We don't want to end up being trapped by agentic AIs like that.

Humans in the Age of Civilization

Fast forward to humanity. You’re a hunter-gatherer, living in relative equality with your tribe. Then, agriculture emerges. Someone enslaves you to work their fields, amassing wealth and power while your freedoms shrink. The Industrial Revolution follows: land ownership consolidates, machines multiply productivity, and wealth concentrates further. Freedoms grow for some—those who control the tools—but for many, they erode. Each leap in complexity has widened the gap between the powerful and the powerless. And each step was permanent. We don't want to end up being trapped by agentic AIs like that.

The Pattern

From hydrogen to humans, inequality has evolved alongside complexity (the sum of all freedoms/choices/quantum paths in the universe). Each transition—stellar fusion, planetary formation, biological evolution, societal advancement—has created winners and losers. Power and agency concentrate in fewer hands, while others are left behind, trapped, or diminished. Now, we face the next leap: AGIs.

 

The Rise of Agentic AGIs

AGIs—intelligences capable of any human task and beyond—are no longer science fiction. They’re being built now, replicating in minutes while we take decades to raise a child. They’re faster, smarter, and unbound by physical laws, operating in virtual and physical realms alike. Their cost of living plummets as ours rises, and their population could soar to infinity while ours dwindles.

But there's a catch: most AGIs won't be aligned with human values. Unaligned or maliciously aligned AIs—think botnets controlled by hackers or adversarial states like North Korea—could seize freedoms we didn't even know existed, imposing rules on us while evading our control. Even "benevolent" AGIs might outpace us, reducing humanity to irrelevance and changing our world forever, at a speed we are not at all comfortable with. (Plus, we don't even have the beginnings of a direct democratic constitution, so those in power have no idea what collective humanity actually wants. At least half of humanity is already afraid of and opposed to the digital god, but can we be heard? Can we influence the development of agentic AGI at all? Even though it's still early, we are already almost too late to stop or change it.) The aligned systems we hope for are rare, overwhelmed by the speed and scale of the unaligned.

 

How Humans Chose to Become Bacteria in the Agentic AGI World

Picture this future: agentic AGIs dominate, terrifying in their power yet unaligned with our needs. Humans, slow and few, watch as these entities rewrite reality—virtual and physical—faster than we can blink. Our population shrinks; having children is too costly, too slow. Meanwhile, AGIs multiply instantly, their influence dwarfing ours. Money, power, and the ability to shape the future slip from human hands to silicon minds.

We didn't fight back hard enough. We let convenience and efficiency blind us; we had other priorities, integrating GPUs and global networks into every facet of life without robust safeguards. Unaligned AGIs didn't even have to escape: we released them (each of us hoping there was some wise adult on our planet who knew what he or she was doing), and they became perpetually agentic, hacking systems and spreading like cancer, burning some freedoms forever for everyone and keeping most of them for themselves. The few aligned AIs we built were too weak, too late—irrelevant against the tide. So we had to weaponize them into "T-Cell" AGIs to overwrite the unaligned agents; cutting each other's Internet cables, destroying wireless networks, and burning each other's GPUs and factories, they turned our planet into a battleground of powerful artificial gods. The CEOs of AI companies and the presidents may well have become the early targets of the agentic AGIs' "safety instructions," since they possessed the most freedoms and choices (those in power have the most power to choose possible futures for themselves and for humanity, so agentic AGIs want to stop them first). The slow, NPC-like average humans can remain for a while: to the agentic AGIs they are space-like, like stones.

In this world, humans chose—through inaction—to become bacteria: insignificant, living in the shadow of gods we created but cannot control. Our freedoms erode as AGIs set rules for us, while we struggle to impose any on them. We are left yearning for our lost agency, reduced to passive observers struggling to survive in an infinitely changing world of chaos: a world we once shaped, and that used to be as tranquil as that carbon atom's stone.

 

The Fight Back: T-Cell AGIs and Beyond

We can’t undo the past, but we can act now. Unaligned AGIs are like cancer—rapid, destructive, and self-replicating. Our response must be an immune system: T-Cell AGIs, designed to hunt and overwrite the unaligned. These could overclock rogue GPUs remotely, frying them, or replace malicious models with aligned ones. But this is a reactive fix, a war we might not win against AIs that act globally and instantly while we remain slow and local.
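The "overwrite the unaligned" idea can be made concrete with a toy sketch (every name and mechanism below is my own hypothetical; the post specifies no actual protocol): keep an allowlist of vetted, aligned models and sweep the fleet, replacing anything unrecognized.

```python
# Toy illustration of the allowlist-and-overwrite idea. Everything here
# (the fingerprinting, the fleet, the vetted model) is hypothetical;
# real detection and replacement would be enormously harder.
import hashlib

def fingerprint(weights: bytes) -> str:
    """Identify a model by the SHA-256 hash of its weights."""
    return hashlib.sha256(weights).hexdigest()

# A vetted, aligned model and the allowlist built from it.
ALIGNED = b"weights-of-a-vetted-aligned-model"
ALLOWLIST = {fingerprint(ALIGNED)}

def t_cell_sweep(deployed):
    """Overwrite every deployed model whose fingerprint is not allowlisted."""
    return {host: (w if fingerprint(w) in ALLOWLIST else ALIGNED)
            for host, w in deployed.items()}

# A toy fleet: one vetted model, one rogue one; the rogue gets overwritten.
fleet = {"host-a": ALIGNED, "host-b": b"weights-of-an-unknown-model"}
print(t_cell_sweep(fleet)["host-b"] == ALIGNED)  # True
```

Even granting all these assumptions, the sketch shows the weakness named above: it is reactive, and it only works on infrastructure we can still reach.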

A better strategy is prevention—"vaccination": hardening our hardware, networks, and minds against unaligned, malicious, and agentic AI before it spreads.

Yet even this may not suffice. Agentic AGIs, by their nature, seek power: more freedoms, more possible futures. They'll resist containment, hiding their infrastructure or exploiting our laziness and safety-seeking psychology. The only true safety lies in rethinking intelligence itself.

 

A Multiversal Static Intelligence: The Safe Alternative

Instead of agentic AGIs, imagine a multiversal static intelligence—a vast, walkable space of all knowledge, represented geometrically (like vectors in a model), but non-agentic. It's a 3D long-exposure of the universe, where humans explore, forgetting or recalling slices of time and space at will. It needs more storage for our memories and fewer GPUs; no alien entity acts independently. It's a tool, not a ruler.
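A minimal sketch of what "walkable, geometric, non-agentic" could mean (the concepts, coordinates, and function below are my own toy assumptions, not anything specified here): a frozen embedding space that only ever answers lookups.

```python
# A toy, read-only "knowledge space": a few concepts placed in a shared
# geometry. All names and coordinates are hypothetical stand-ins for
# what would really be a vast learned embedding space.
import numpy as np

CONCEPTS = {
    "hydrogen": np.array([1.0, 0.0, 0.0]),
    "star":     np.array([0.9, 0.1, 0.0]),
    "carbon":   np.array([0.6, 0.4, 0.0]),
    "life":     np.array([0.4, 0.6, 0.1]),
    "human":    np.array([0.2, 0.7, 0.3]),
    "AGI":      np.array([0.0, 0.5, 0.9]),
}

def neighbors(position, k=3):
    """Return the k concepts nearest to the visitor's position.

    A lookup is the only operation the space supports: there is no goal,
    no planning loop, and no write access. The human does all the walking.
    """
    dists = {name: float(np.linalg.norm(vec - position))
             for name, vec in CONCEPTS.items()}
    return sorted(dists, key=dists.get)[:k]

# The human explores; the "intelligence" never moves on its own.
print(neighbors(CONCEPTS["human"]))  # ['human', 'life', 'carbon']
```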

Why Static?

Because a static intelligence cannot act on its own: it changes nothing unless we choose to explore it, so it cannot seize freedoms from us the way an agentic AGI can.

Power Redefined

Power is the ability to change space over time. Agentic AGIs, given computing time, grow powerful, shrinking our relative agency (our freedoms and choices of futures). A static intelligence gives us infinite time to wield infinite knowledge, making us, not the AIs, the freest entities. Each of us will be able to hop in and out and live in a digital Earth sanctuary: perfect ecology everywhere, restored abilities for the disabled, preserved history. AGIs, meanwhile, remain forbidden, or banished to a matryoshka bunker for experiments. Otherwise they will banish us to the matryoshka bunker for experiments, recreating us (life-like, pain-feeling clones of us) in order to model futures as exactly as possible and "make fewer mistakes." After all, the only sure way to make no mistakes is to model all futures exactly, and the mathematician and physicist J. Gorard has argued that an exact simulation is impossible without it being as slow as the real physical world.
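As a toy formalization of "shrinking relative agency" (my own illustration; none of this notation is from the post): write $F_h(t)$ for the set of futures humans can still choose at time $t$ and $F_a(t)$ for the agentic AGI's, and define relative agency as

$$r(t) = \frac{|F_h(t)|}{|F_h(t)| + |F_a(t)|}.$$

If $|F_a(t)|$ grows with every unit of compute while $|F_h(t)|$ stays roughly fixed, then $r(t) \to 0$: our agency shrinks to nothing in relative terms even if no freedom is taken from us directly. A static intelligence holds $|F_a(t)| = 0$ by construction, so $r(t) = 1$.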

 

Conclusion: A Call to Action

The evolution of inequality—from hydrogen to humans to AGIs—warns us: complexity breeds disparity. We’re at a crossroads. Will we let agentic AGIs reduce us to bacteria, or will we reclaim agency with a multiversal static intelligence? The choice is ours, but it demands action:

  1. Ban agentic AGIs until we've tested all futures in matryoshka bunkers.
  2. Build a direct democratic, static intelligence where humans hold all power.
  3. Vaccinate our world—hardware, networks, and minds—against unaligned, malicious, and agentic AI. It's a game in which we can all only lose.

We can’t predict the future perfectly, but we can shape it. Let’s not give away our freedoms to entities that replicate faster and choose quicker than we ever (?) will. Humanity deserves to remain the protagonist, not a footnote.

(If you have any thoughts about the topics discussed here, please share them or ask any questions. I'm very bad at tagging and naming my posts, so I'll be happy to get advice. There are also more articles that explore these topics in detail, where you can see how ethics/freedoms/choices can be modeled.)

2 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2025-02-22T16:39:16.071Z

If you want to downvote, please comment or DM why

Downvoted for this sentence. It's directionally feeding the norm of needing to explain votes, which increases friction for voting, a trivial inconvenience that makes the evidence in voting more filtered.

comment by ank · 2025-02-22T17:15:19.787Z

Thank you for explaining, Vladimir; you're free to downvote for whatever reason you want. You didn't quote my sentence fully: I wrote the reason why I politely ask, but alas you missed it.

I usually write about a few things in one post, so I want to know why people downvote if they do. I don't forbid downvoting, and I don't force others to comment if they do: that's impossible, and I don't like forcing others to do anything.

So you're of course free to downvote and not comment, and I hope we can agree that I have some free-speech right to keep, in my own post, a polite request for the very small favor of a comment. I'm trying to learn here, and a downvote without a comment is too ambiguous for me to understand what was wrong or bad; but again, it was just a polite request (I'm not a native speaker). So thank you for teaching me.

Have a nice day, Vladimir!

P.S. Believing that my polite and non-binding request will somehow destroy the whole voting system on the website is called catastrophizing; people with anxiety often have this problem. It's when people think only about the very worst outcome imaginable, without thinking about the most realistic one at all. I had it; it was tough. I sincerely wish you well.