The Three Warnings of the Zentradi

post by Trevor Hill-Hand (Jadael) · 2024-11-21T20:28:45.567Z · LW · GW · 0 comments

Contents

  Beyond the Machine's Eye: Power, Choice, and the Crisis of Human Agency
    Part I: How to Optimize Your Civilization Away
    Part II: The Three Warnings
      Warning One: The Control Problem
      Warning Two: The Distribution Problem
      Warning Three: The Meaning Crisis
    Part III: The Real Levers and False Comforts
      Recognizing Real Pressures
    Part IV: Protected Spaces and Human Agency
    The Essential Task

This is mostly some ramblings and background notes for a fanfiction, and should not be taken seriously as a real-world argument, except insofar as I would hope it could become good enough to be a real-world argument if I were smart enough and worked on it enough and got the right feedback. I would love to hear criticism on any or all of it, and your thoughts on where or how else the story of Macross/Robotech has interesting ideas to explore.


Beyond the Machine's Eye: Power, Choice, and the Crisis of Human Agency

Imagine teaching a computer to play chess. You give it clear rules about what makes a "good" move - capturing pieces, controlling the center, protecting the king. The computer gets incredibly good at following these rules.

But here's the thing: it can never ask whether chess is worth playing.

This might seem like a silly example, but it points to something crucial about the challenges we face as machine intelligence becomes increasingly powerful. Systems optimized for specific goals - whether winning chess games or maximizing "engagement" - can't step outside their programming to question whether those goals are worthwhile.
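The chess point can be made concrete with a minimal, hypothetical sketch (my own illustration; the objective, domain, and function names are all invented): a greedy hill-climber that can only ever improve a fixed objective. Questioning the objective simply isn't an operation the process has.

```python
import random

def hill_climb(objective, start, neighbors, steps=1000):
    """Greedy optimizer: it relentlessly improves `objective`, but it has
    no operation for asking whether `objective` is worth maximizing."""
    current = start
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        if objective(candidate) > objective(current):
            current = candidate
    return current

# Toy domain: integers, with the fixed goal "get as close to 100 as possible".
objective = lambda x: -abs(100 - x)
neighbors = lambda x: [x - 1, x + 1]

result = hill_climb(objective, start=0, neighbors=neighbors)
# The loop converges on 100 because the rules say so; "is 100 worth
# reaching?" is not a question the process can even represent.
```

Nothing inside the loop can modify `objective` - the same structural blindness the chess engine has, and the same blindness the Zentradi's designers built into their own optimization process below.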

To understand these challenges better, let's look at the Zentradi, the space warriors from the anime "Macross" (known in the West, in adapted form, as "Robotech"), and how they optimized themselves into extinction.

Part I: How to Optimize Your Civilization Away

Imagine you're part of an advanced spacefaring civilization called the Protoculture. You face genuine existential threats - hostile aliens, cosmic disasters, internal conflicts. You decide you need a military force to survive.

The reasonable decision: Create an elite warrior force, the Zentradi, genetically engineered for combat effectiveness. Give them their own ships and resources so they can operate independently, without endangering civilian lives.

Seems sensible. What could go wrong?

Your warrior force is effective but has problems:

The reasonable decision: Start limiting these "inefficiencies." Restrict relationships. Standardize routines. Optimize for pure military effectiveness.

Still seems rational. You're just removing obvious problems.

Your warriors are now more effective, but you notice:

The reasonable decision: Double down on what works. Further reduce cultural activities. Increase standardization. Strengthen hierarchies.

You're just following the data, right? It would be silly to let our messy human biases lead us astray.

Now an interesting pattern emerges:

The reasonable decision: Let natural selection take its course. The most effective units should be the model for others.

After thousands of years of this process:

After hundreds of thousands of years:

No one even remembers that these were choices anymore. The designers and their reasoning are lost to time. The system runs on autopilot, optimizing itself into an ever-narrower space of possibilities.
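The process just described - keep only the "most effective" units, clone them, repeat for generations - is a standard variance-collapse dynamic, and it's easy to simulate. A hypothetical toy model (all numbers and names are mine, purely illustrative):

```python
import random
import statistics

def generation(traits, keep_fraction=0.2, mutation=0.01):
    """One optimization cycle: keep only the 'most effective' units
    (effectiveness is a fixed one-dimensional score), then refill the
    population with near-identical copies of the survivors."""
    ranked = sorted(traits, reverse=True)
    survivors = ranked[: max(1, int(len(traits) * keep_fraction))]
    return [t + random.gauss(0, mutation)
            for t in random.choices(survivors, k=len(traits))]

population = [random.gauss(0, 1.0) for _ in range(500)]
before = statistics.stdev(population)
for _ in range(200):
    population = generation(population)
after = statistics.stdev(population)
# Each individual step looked like a reasonable improvement, but after
# enough cycles the spread of traits - the space of possibilities - has
# collapsed to a sliver of what it was.
```

No single generation looks like a catastrophe; the narrowing only shows up when you compare the spread before and after, which is exactly why no one in the story notices it happening.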

Part II: The Three Warnings

This story isn't just about losing meaning - it's about three distinct but interconnected dangers we face as we develop increasingly powerful and interconnected machines:

Warning One: The Control Problem

The Zentradi were created as a military force under Protoculture control. But they eventually grew beyond their creators' ability to control them. This mirrors our first and most urgent challenge with machine intelligence: maintaining meaningful human control over increasingly powerful systems.

Consider what happened:

  1. The Protoculture created the Zentradi for a specific purpose
  2. They made them increasingly powerful and autonomous
  3. The systems for controlling them proved inadequate
  4. The creation eventually destroyed its creators

We face similar risks today:

This isn't just about killer robots. Any sufficiently powerful optimization process - whether military, economic, or social - can escape human control with catastrophic consequences.

Warning Two: The Distribution Problem

Even before they destroyed their creators, the Zentradi system created massive inequality of power and resources. Their society split into:

We face similar challenges:

Even if we solve the control problem, unequal distribution of machine intelligence and its benefits could still lead to:

Warning Three: The Meaning Crisis

Even if we solve both the control and distribution problems, we are still left with the meaning crisis:

This is the Zentradi's third warning - that even if you "survive" and "have resources", optimizing away human agency creates its own kind of extinction.

Part III: The Real Levers and False Comforts

Consider a crucial detail about the Protoculture's fall: They believed they were in control of their military through formal command structures, military hierarchies, and genetic engineering. They had extensive systems of oversight and control. They had laws, regulations, and safety protocols.

None of it mattered.

The real levers of power had shifted long before the formal structures acknowledged it. Each "reasonable" optimization created gaps between:

This highlights a critical challenge we face today. When people discuss AI safety and control, they often focus on what we might call the kayfabe - the maintained illusions of control:

But just as the Protoculture's control systems proved inadequate against the reality of what they'd created, these structures might have little relationship to where real power actually develops in AI systems.

Consider how this plays out in current AI development:

This isn't to say formal structures are meaningless. But like the Protoculture's genetic controls on the Zentradi, they can provide false comfort while the real dynamics of power shift beneath the surface.

Recognizing Real Pressures

The Zentradi's development shows how optimization itself becomes a real driving force. Once the feedback loops of military effectiveness were established, they drove development regardless of formal control structures.

We see similar patterns emerging in AI development:

These are the real levers moving development, often despite or around formal control structures.

Part IV: Protected Spaces and Human Agency

In our story, there's a Chinese restaurant called the Nyan-Nyan. What makes it special isn't that it's less efficient than automated food production. What makes it special is that it's a place where humans can:

These spaces matter precisely because they operate outside the dominant optimization pressures that drive development of powerful systems. One can safely try "wrong" things and learn about reality from them, including learning about how the optimization pressures themselves are working (or not). They're not just about preserving culture - they're about maintaining environments where humans can:

The Essential Task

Our task isn't just to:

It's to do all three in ways that preserve our ability to choose different paths as we discover what survival, distribution, and meaning really require.

The Zentradi's ultimate warning is that a civilization can solve its immediate problems while losing its ability to recognize what it's losing in the process. Their fate teaches us that the most dangerous trap isn't choosing wrong goals - it's losing the ability to choose goals at all.
