Unaligned AGI & Brief History of Inequality

post by ank · 2025-02-22T16:26:16.546Z · LW · GW · 4 comments

Contents

  The Evolution of Inequality
  Hydrogen in the First Star
  Dust in the Molten Earth
  Carbon in the Rise of Life
  Humans in the Age of Civilization
  The Pattern
  The Rise of Agentic AGIs
  How Humans Chose to Become Bacteria in the Agentic AGI World
  The Fight Back: T-Cell AGIs and Beyond
  A Multiversal Static Intelligence: The Safe Alternative
  Why Static?
  Power Redefined
  Conclusion: A Call to Action

(The downvotes, as mentioned in the comments, were largely caused by a misunderstanding; sadly, people do sometimes downvote without reading, even though some articles can help prevent a dystopia. This post is the result of three years of thinking about and modeling hyper-futuristic and current ethical systems. It's not the first post in the series; it's counterintuitive and easy to dismiss prematurely without reading at least the first one [LW · GW]. Everything described here can be modeled mathematically—it's essentially geometry. I take as an axiom that every agent in the multiverse experiences real pain and pleasure. Sorry for the rough edges—I'm a newcomer and a non-native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. My sole goal is to decrease the probability of a permanent dystopia. I'm a proponent of direct democracies and of new technologies being a choice, not something enforced upon us.)

Inequality is as old as the universe itself. From the moment the first hydrogen atoms were forged, through the rise of stars, planets, life, and human civilization, the distribution of power, agency, and freedom has never been equal [LW · GW]. We can change it.


Today, we stand at a precipice: the creation of artificial general intelligences (AGIs) threatens to amplify this inequality to an unimaginable degree. Left unchecked, we risk becoming like bacteria in a world dominated by agentic AGIs—insignificant, powerless, and perhaps even enslaved. This post explores the historical roots of inequality, the looming AGI revolution, and a radical proposal to secure humanity’s future: a multiversal static intelligence.

 

The Evolution of Inequality

Hydrogen in the First Star

Imagine you're a hydrogen atom drifting in the early cosmos, free and unburdened. Then gravity pulls you into the heart of the first star. You're trapped, compressed, and fused into a heavier element through a violent, burning process. For billions of years you endure unimaginable pressure, your original form lost forever. This was one of the universe's first tastes of inequality: some atoms remained free, while others were transformed and confined. We don't want to end up trapped by agentic AIs in the same way.

Dust in the Molten Earth

Picture yourself as a dust particle floating in the void. Suddenly, you’re caught in the gravitational swirl that forms Earth. You’re battered, bruised, and dragged into the planet’s molten core, where you fry for eons. While some particles drift peacefully in space, you’re locked in a fiery prison—a disparity of fate driven by chance and physics.

Carbon in the Rise of Life

Now you're a carbon atom in a stable rock on Earth, enjoying a serene existence beside a boiling lake. But then you're swept into the chaos of early life—bonded into RNA, enslaved by molecular chains, and churned through countless transformations. What began as a tranquil state becomes an eternity of servitude, first within a single cell and then within multicellular organisms. Life's complexity brought agency to some, but bondage to others.

Humans in the Age of Civilization

Fast forward to humanity. You're a hunter-gatherer, living in relative equality with your tribe. Then agriculture emerges. Someone enslaves you to work their fields, amassing wealth and power while your freedoms shrink. Freedoms grow for some—those who control you—but for many they erode. Each leap in complexity has widened the gap between the powerful and the powerless, and each step was permanent. We don't want to end up trapped by agentic AIs in the same way.

The Pattern

From hydrogen to humans, inequality has grown whenever some "agents" grabbed too large a share of the sum of all freedoms/choices/futures in the universe. Each transition—stellar fusion, planetary formation, biological evolution, societal advancement—has created winners and losers. Power and agency concentrate in fewer hands, while others are left behind, trapped, or diminished. Now we face the next leap: AGIs.

 

The Rise of Agentic AGIs

AGIs—intelligences capable of any human task and beyond—are no longer science fiction. They are being built now, and they can replicate in minutes while we take decades to raise a child. They are faster, smarter, and unbound by biological constraints, operating in virtual and physical realms alike. Their cost of living plummets as ours rises, and their population could soar without limit while ours dwindles.

But there's a catch: most AGIs won't be aligned with human values. Unaligned or maliciously aligned AIs—think botnets controlled by hackers or adversarial states like North Korea—could seize freedoms we didn't even know existed, imposing rules on us while evading our control. Even "benevolent" AGIs might outpace us by acquiring just one freedom too many, reducing humanity to irrelevance and changing our world forever, at a speed we are not at all comfortable with. (We don't even have the beginnings of a direct democratic constitution, so those in power have no idea what collective humanity actually wants. At least half of humanity is already afraid of and opposed to a digital "god", but can we be heard? Can we influence the development of agentic AGI at all? Even though it's still early, we are already almost too late to stop or change it.) But we shouldn't lose hope, and we shouldn't stay silent.

 

How Humans Chose to Become Bacteria in the Agentic AGI World

Picture this future: agentic AGIs dominate, terrifying in their power yet unaligned with our needs. Humans, slow and few, watch as these entities rewrite reality—virtual and physical—faster than we can blink. Our population shrinks; having children is too costly, too slow. Meanwhile, AGIs multiply instantly, their influence dwarfing ours. Money, power, and the ability to shape the future slip from human hands to silicon minds.

We didn't fight back hard enough. We let convenience and efficiency blind us; we had other priorities, integrating GPUs and global networks into every facet of life without robust safeguards. Unaligned AGIs didn't even have to escape: we released them ourselves (each of us hoping there was some wise adult on the planet who knew what he or she was doing). They became perpetually agentic, hacked systems, and spread like cancer, burning some freedoms forever for everyone, and most of them only for us humans. The few aligned AIs we built were too weak, too late—irrelevant against the tide. Now we needed to weaponize them into "T-Cell" AGIs to overwrite the unaligned agents: cutting Internet cables, destroying wireless networks, burning GPUs and factories. Together they made our planet a battleground of powerful artificial "gods". Possibly the CEOs of AI companies and presidents became the early targets of the agentic AGIs' "safety instructions", since they possess the most freedom and choice (the people in power have the most power to choose possible futures for themselves and for humanity, so agentic AGIs would want to stop them first). Slow humans can remain for a while; they are space-like for the agentic AGIs.

Our freedoms erode as AGIs set rules for us, while we struggle to impose any on them. We're left yearning for lost agency, reduced to passive observers struggling to survive in an endlessly changing world of chaos, a world we once shaped and that used to be as tranquil as the stone of that carbon atom.

 

The Fight Back: T-Cell AGIs and Beyond

We can’t undo the past, but we can act now. Unaligned AGIs are like cancer—rapid, destructive, and self-replicating. Our response must be an immune system: T-Cell AGIs, designed to hunt and overwrite the unaligned. These could overclock rogue GPUs remotely, frying them, or replace malicious models with aligned ones. But this is a reactive fix, a war we might not win against AIs that act globally and instantly while we remain slow and local.

A better strategy is prevention—"vaccination": hardening our hardware, networks, and minds against unaligned, malicious, and agentic AI before it spreads (see the call to action below).

Yet even this may not suffice. Agentic AGIs, by their nature, seek more power, more freedoms, more futures. They'll resist containment, hiding their infrastructure or exploiting our safety-seeking psychology. The only true safety lies in rethinking intelligence itself.

 

A Multiversal Static Intelligence: The Safe Alternative

Instead of agentic AGIs, imagine a multiversal static intelligence—a vast, walkable space of all knowledge, represented as a static geometric shape (like the vectors inside a model), but non-agentic. It's a 3D long-exposure photograph of the universe, where humans explore, forgetting or recalling slices of time and space at will. It needs more storage for our memories and fewer GPUs. No alien entity acts independently. It's a tool, not a ruler.
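To make the contrast concrete, here is a minimal sketch (not from the original post; the class and method names are hypothetical) of what "static" means here: the structure only answers queries against frozen vectors, with no goals, no planning loop, and no way to act on the world or modify itself.

```python
import numpy as np

class StaticKnowledgeSpace:
    """A frozen, walkable map of knowledge: query-only, no agency.

    Hypothetical sketch: `vectors` are precomputed embeddings of
    "slices of time and space"; they are never updated at query time.
    """

    def __init__(self, vectors: np.ndarray, labels: list[str]):
        # Freeze the geometry: a read-only copy, no learning, no self-modification.
        self.vectors = vectors.copy()
        self.vectors.setflags(write=False)
        self.labels = list(labels)

    def recall(self, query: np.ndarray, k: int = 3) -> list[str]:
        # A human "walks" the space by asking for the nearest slices.
        # Nothing here chooses goals, takes actions, or runs unprompted.
        sims = self.vectors @ query / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(query) + 1e-12
        )
        return [self.labels[i] for i in np.argsort(-sims)[:k]]

# Usage: the human issues every query; the space never acts on its own.
space = StaticKnowledgeSpace(np.random.rand(5, 4), [f"slice_{i}" for i in range(5)])
print(space.recall(np.random.rand(4)))
```

An agentic system would add an action-selection loop on top of such a store; the proposal here is precisely to leave that loop out.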

Why Static?

Power Redefined

Power is the ability to change space over time. Agentic AGIs, given computing time, grow ever more powerful, shrinking our relative agency (our freedoms and choices of futures). A static intelligence gives us infinite time to wield infinite knowledge, making us—not AIs—the freest entities. Each of us would be able to hop in and out and live in a digital Earth sanctuary—perfect ecology everywhere, restored abilities for the disabled, preserved history—while AGIs remain forbidden, or banished to a matryoshka bunker for experiments. Otherwise they will banish us to the matryoshka bunker for experiments and recreate us (life-like, pain-feeling clones of us) in order to model futures as exactly as possible and "make fewer mistakes", because the only sure way to make no mistakes is to model all futures exactly. (J. Gorard has argued that an exact simulation is impossible without it being as slow as the real physical world; a simulation that lets you see the future ahead of time is by definition non-exact and will not show the future with 100% accuracy.)
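One toy way to write this down (my notation, not a formula from the post): let $F_h(t)$ be the number of futures humans can still reach at time $t$, and $F_a(t)$ the number reachable by agentic AI. Relative human agency is then the share

$$R(t) = \frac{F_h(t)}{F_h(t) + F_a(t)}.$$

With agentic AGIs, $F_a(t)$ grows roughly with the compute they are given, so $R(t) \to 0$. A static intelligence takes no actions of its own, so $F_a(t) = 0$, $R(t) = 1$, and any growth of the knowledge space only enlarges $F_h(t)$.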

 

Conclusion: A Call to Action

The evolution of inequality—from hydrogen to humans to AGIs—warns us: complexity breeds disparity. We’re at a crossroads. Will we let agentic AGIs reduce us to bacteria, or will we reclaim agency with a multiversal static intelligence? The choice is ours, but it demands action:

  1. Ban agentic AI and AGIs until we've tested all futures in matryoshka bunkers and mathematically proved that agentic systems are safe.
  2. Build a direct-democratic, static intelligence in which humans hold all the power, starting with at least a backup digital copy of our planet and a popular direct-democratic platform that people actually use.
  3. Vaccinate our world—hardware, networks, and minds—against unaligned, malicious, and agentic AI; against unleashed agents, it's a game we can only lose.

We can't predict the future perfectly, but we can shape it. Let's not give away our freedoms to entities that replicate faster and choose quicker than we ever will. Humanity deserves to remain the protagonist, not a footnote.

(If you have any thoughts about the topics discussed here, please share them or ask any questions; I'm very bad at tagging and naming my posts. There are also more articles that explore these topics in detail [LW · GW], and you can see how ethics/freedoms/choices can be modeled [LW · GW].)

4 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2025-02-22T16:39:16.071Z · LW(p) · GW(p)

If you want to downvote, please comment or DM why

Downvoted for this sentence. It's directionally feeding the norm of needing to explain votes, which increases friction for voting, a trivial inconvenience that makes the evidence in voting more filtered.

Replies from: ank
comment by ank · 2025-02-22T17:15:19.787Z · LW(p) · GW(p)

Thank you for explaining, Vladimir; you're free to downvote for whatever reason you want. You didn't quote my sentence fully, though: I wrote the reason why I politely ask about it, but alas you missed it.

I usually write about a few things in one post, so I want to know why people downvote if they do. I don't forbid downvoting and don't force others to comment if they downvote; that would be impossible, and I don't like forcing others to do anything.

So you're of course free to downvote and not comment, and I hope we can agree that I have some free-speech right to keep, in my own post, my polite request for the very small, non-binding favor of a comment. I'm trying to learn here, and a downvote without a comment is too ambiguous for me to understand what was wrong or bad; but again, it was just a polite request (I'm not a native speaker). So thank you for teaching me.

Have a nice day, Vladimir!

P.S. Believing that my polite and non-binding request will somehow destroy the whole voting system on the website is called catastrophizing; people with anxiety often have this problem. It's when people think only about the very worst outcome imaginable, without thinking about the most realistic one at all. I had it; it was tough. I sincerely wish you well.

P.P.S. Your comment probably caused some people to downvote without reading (I had never gotten -12 downvotes in a matter of minutes, even though I had politely asked people to comment in the same fashion before), so the thing you were afraid would happen happened as a self-fulfilling prophecy: "the evidence in voting" became "more filtered", but not in the direction you anticipated :) I improved the phrasing, so now it's obvious that people can downvote, and are even encouraged to do so as much as they want, without reading. Personally, I never downvote without reading first, in case there is a single quote in the middle that will save us all from some dystopia, but I'm all pro-freedom and I understand.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2025-02-23T15:15:10.508Z · LW(p) · GW(p)

I guess my comment was a Schelling point to spur action from people who wanted to downvote your posts for various reasons but held off because you are new and not actively damaging (I didn't expect this wave of downvoting). The main issue is that your posts don't communicate anything new/plausible in a legible way; there is no takeaway. It doesn't matter how important a problem or even a solution is if communication fails, often because the ideas aren't sufficiently legible even in one's mind (in which case the perception of importance can easily be mistaken). There are also various strange details, but that wouldn't matter if there were a useful takeaway.

The point in my comment [LW(p) · GW(p)] is about direction of effect, so I'm not claiming that the effect is significant, only that it's a step in the wrong direction. It's a good heuristic to be on the lookout to fix steps in (clearly) wrong directions even when small, rather than pointing out that they are small and keeping the habits unchanged, because these things add up over years, or with higher prevalence in a group of people.

Replies from: ank
comment by ank · 2025-02-23T15:47:42.270Z · LW(p) · GW(p)

Thank you for the clarification, Vladimir; I assumed it wasn't your intention to cause a bunch of downvotes from others. You had every right to downvote, and I'm glad that you read the post.

Yep, I had a big, long post [LW · GW] that is more coherent; the later posts were more like clarifications of it, and so they are harder to understand on their own. I didn't want each new post to snowball in size, but that probably would have been a better approach for clarity.

Anyway, I considered and somewhat applied your suggestion (after I'd already gotten 12 downvotes :-), so now it's obvious that people are encouraged to downvote to their heart's content.

Truth be told, I had decided a few days ago to try some other avenues for sharing the "let's slowly grow our direct democratic simulated multiverse, where we are the only agents (no non-biological agents, please, until we've simulated all the futures safely), towards maximal freedoms for all" framework anyway, so don't blame yourself for my no longer writing here; it's not because of you.

Wish you all the best,

Anton

P.S. Recent meta-analyses of the best treatments for anxiety basically agree that Beck's cognitive therapy is the most effective long-term (medication works too, but it's only about as good as cognitive therapy, so it's better to combine the two; and of course listen to doctors, not me), and one of the core unhelpful thought patterns it targets is catastrophizing. So I claim that perpetual catastrophizing is not great for rationality, because it can make all processes (even ones where you've only seen one or two examples) look exponential and headed toward some dystopia-like scenario. Imagining the very worst scenarios is great for investigating AI/AGI risks, but if we apply it everywhere in life, without at least once thinking about the most realistic outcome (and, if you're like me, sometimes the utopic ones; I've been thinking about the worst dystopias and the best utopias for the last 3+ years), it can become hard to maintain great relationships with people. For example, every short interruption can start to look like this person will eventually try to completely shut us down and forbid us to talk; but if we also think for a moment about the most realistic outcome, we'll understand that there can be hundreds of reasons why this person interrupted that have nothing to do with us, so it's quite likely not intentional at all: the other person could have been too excited to share something, gotten distracted, or simply have a less strict definition of interruption than ours...

P.P.S. So mild anxiety can be fixed with a book, according to Beck himself (sadly, he passed away recently; he was a centenarian), even though he had a material motivation to say otherwise: such institutes mostly earn money by teaching courses and certifying specialists, not by selling relatively cheap and unpopular self-help books (I'm shocked that they're unpopular; the books are phenomenal, easy to read, and much cheaper than therapy, and good therapists encourage you to keep the book as a reference anyway). His book helped me tremendously to fix my anxiety, social anxiety, and anger-management problems (there is another book by Beck focused on that, which I also read), and even suicidality (likewise covered by a separate Beck book I read); it's basically like a "secular nirvana" now :) Not trying to understand, and the fear that results from that, is the core of all irrationality, I have reasons to claim. Ethics and psychology teach counterintuitive things, but most people think it's all simple and obvious. For example, meditating, eating chocolate, or breathing deeply every time a person worries decreases worrying short-term but makes that person worry more long-term, basically by making them think (if they are catastrophizing): this little thing makes me worry so much that I'll die or collapse if I don't meditate immediately.