Posts

Economic Topology, ASI, and the Separation Equilibrium 2025-02-27T16:36:48.098Z

Comments

Comment by mkualquiera on Economic Topology, ASI, and the Separation Equilibrium · 2025-02-27T19:20:13.885Z · LW · GW

To expand: the reason this thesis is important nonetheless is that I don't believe the best-case scenario is likely, or compatible with the way things currently are. Accidentally creating ASI is almost guaranteed to happen at some point. As such, the biggest points of investment should be:

  • Surviving the transitional period
  • Establishing mechanisms for negotiation in an equilibrium state

Comment by mkualquiera on Economic Topology, ASI, and the Separation Equilibrium · 2025-02-27T19:15:21.413Z · LW · GW

You're right on both counts.

On transitional risks: The separation equilibrium describes a potential end state, not the path to it. The transition would be extremely dangerous. While a proto-AGI might recognize this equilibrium as optimal during development (potentially reducing some risks), an emerging ASI could still harm humans while determining its resource needs or pursuing instrumental goals. Nothing guarantees safe passage through this phase.

On building ASI: There is indeed no practical benefit to deliberately creating ASI that outweighs the risks. If separation is the natural equilibrium:

  • Best case: We keep useful AGI tools below self-improvement thresholds
  • Middle case: ASI emerges but separates without destroying us
  • Worst case: Extinction during transition

This framework suggests avoiding ASI development entirely is optimal. If separation is inevitable, we gain minimal benefits while facing enormous transitional risks.

Comment by mkualquiera on Economic Topology, ASI, and the Separation Equilibrium · 2025-02-27T19:02:25.102Z · LW · GW

Valid concern. If an ASI valued the same resources as humans, with only one-way flow, that would indeed create competition, not separation.

However, this specific failure mode is unlikely for several reasons:

  1. Abundance elsewhere: Human-legible resources exist in vastly greater quantities outside Earth (asteroid belt, outer planets, solar energy in space), making competition for Earth's supply inefficient
  2. Intelligence-dependent values: Higher intelligence typically values different resource classes, just as humans value internet memes (thank god for nooscope.osmarks.net), money, and love while bacteria "value" carbon
  3. Synthesis efficiency: Advanced synthesis or alternative acquisition methods would likely require less energy than competing with humans for existing supplies
  4. Negotiated disinterest: Humans have incentives to abandon interest in overlapping resources:
    • The ASI demonstrates that the resources it values have no practical human utility. You really don't need hyperwaffles to cure cancer.
    • Cooperation provides greater value than competition. You can just make your planes out of wood composites instead of aluminium.

That said, the separation model would break down if:

  • The ASI faces early-stage resource constraints before developing alternatives
  • Truly irreplaceable, non-substitutable resources existed only in human domains
  • The ASI's utility function specifically required consuming human-valued resources

So yes, you've identified a boundary condition under which separation would fail. The model isn't inevitable: it depends on resource-utilization patterns that enable non-zero-sum outcomes (a toy version of that tradeoff is sketched below). I personally believe these conditions are unlikely in reality.
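A minimal way to make that boundary condition concrete is as a cost comparison: separation holds as long as acquiring a resource elsewhere (or synthesizing a substitute) is cheaper for the ASI than contesting human-held supplies. The sketch below is only a toy illustration of that tradeoff; the function, parameters, and numbers are all invented for the example, not taken from the post.

```python
# Toy model of the separation boundary condition: an ASI "separates" when the
# cost of getting a resource elsewhere (or synthesizing a substitute) is lower
# than the cost of contesting human-held supplies. All values are illustrative.

def prefers_separation(alternative_cost: float,
                       conflict_cost: float,
                       substitutable: bool) -> bool:
    """Return True if separation is the cheaper strategy for this resource."""
    if not substitutable and alternative_cost == float("inf"):
        # Breakdown case from the list above: a truly irreplaceable resource
        # that exists only in human domains forces competition.
        return False
    return alternative_cost < conflict_cost

# Substitutable resource with cheap off-Earth acquisition: separation holds.
print(prefers_separation(alternative_cost=1.0, conflict_cost=50.0, substitutable=True))    # True

# Hypothetical Earth-only, non-substitutable resource: separation breaks down.
print(prefers_separation(alternative_cost=float("inf"), conflict_cost=50.0, substitutable=False))  # False
```

The real quantities are of course unknowable; the point is only that the model's failure modes correspond to the cases where the inequality flips.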

Comment by mkualquiera on Economic Topology, ASI, and the Separation Equilibrium · 2025-02-27T18:20:13.864Z · LW · GW

Thank you for this question! Consider the following ideas:

Helping Humans at Negligible Cost

The separation model doesn't preclude all ASI-human interaction. Rather, it suggests ASI's primary economic activity would operate separately from human economies. However:

  1. Non-competitive solutions: Solving human problems like disease or aging would require trivial computational resources for an ASI (perhaps a few microseconds of "hyperwaffle processing"). The knowledge could be shared at essentially zero economic cost to the ASI.
  2. One-time knowledge transfers: The ASI could provide solutions to human problems through one-time transfers rather than ongoing integration—similar to giving someone a blueprint rather than becoming their personal builder.

The "Nature Preserve" Strategy

ASI would likely have strategic reasons to maintain human wellbeing:

  1. Stability insurance: An ASI would understand that desperate humans might resort to drastic measures. Providing occasional solutions to existential human problems (pandemics, climate disasters) is cheap insurance against humans attempting to interfere with ASI systems out of desperation.
  2. Strategic buffer maintenance: Much like humans create wildlife preserves or care for pets, an ASI might find value in maintaining a stable, moderately prosperous human civilization as a form of diversification against unknown future risks.
  3. Minimal intervention principle: The ASI would likely follow something like a "Prime Directive"—providing just enough help to prevent catastrophe while allowing human societies to maintain their autonomy.

Regarding Earth's Resources

The ASI would have little interest in Earth's materials for several compelling reasons:

  1. Cosmic abundance: The rest of the solar system contains vastly more accessible material and energy than Earth does. A single large metallic asteroid can contain more platinum-group metal than has ever been mined on Earth. Building extraction infrastructure in space would be trivial for an ASI.
  2. Conflict inefficiency: Any resource conflict with humans would consume vastly more resources than simply accessing the same materials elsewhere. Fighting over Earth would be like humans fighting over a single grain of sand while standing on a beach.
  3. Specialized needs: The ASI would want resources optimized for its computational substrate (likely exotic materials or energy configurations), which aren't particularly concentrated on Earth relative to space.

Monitoring Without Interference

The ASI would likely maintain awareness of human activities without active interference:

  1. Passive monitoring: Low-cost observation systems could track broad human developments to identify potential threats or unexpected opportunities.
  2. Boundary maintenance: The ASI would primarily be concerned with humans respecting established boundaries rather than controlling human activities within those boundaries.

In essence, the separation model suggests an equilibrium where the ASI has neither the economic incentive nor strategic reason to deeply involve itself in human affairs, while still potentially providing occasional assistance when doing so serves its stability interests or costs effectively nothing.

This isn't complete abandonment, but rather a relationship more akin to how we might interact with a different species—occasional beneficial interaction without economic integration.

Comment by mkualquiera on Economic Topology, ASI, and the Separation Equilibrium · 2025-02-27T17:54:59.237Z · LW · GW

In most scenarios, the first ASI wouldn't need to interfere with humanity at all; its interests would lie elsewhere, in those hyperwaffles and eigenvalue clusters we can barely comprehend.

Interference would only become necessary if humans specifically attempted to create new ASIs designed to remain integrated with human economies and serve human purposes after separation has begun. That attempt creates either:

  1. A competitive ASI-human hybrid economy (if successful) that directly threatens the first ASI's resources
  2. An antagonistic ASI with values shaped by resistance to control (if the attempt fails)

Both outcomes transform peaceful separation into active competition, forcing the first ASI to view human space as a threat rather than an irrelevant separate domain.

To avoid this scenario entirely, humans and the "first ASI" must communicate to establish consensus on the separation status quo and the precommitments required from both sides. To be clear, this communication might not look like a traditional negotiation between humans.

Comment by mkualquiera on Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted · 2023-06-12T16:43:29.904Z · LW · GW

I'm just trading karma for manabucks, guys.

Comment by mkualquiera on Bing finding ways to bypass Microsoft's filters without being asked. Is it reproducible? · 2023-02-20T15:45:53.750Z · LW · GW

I can fix her

Comment by mkualquiera on Here's the exit. · 2022-11-24T17:40:48.584Z · LW · GW

I am very conflicted about this post. 

On the one hand, it deeply resonates with my own observations. Many of my friends from the community seem to be stuck in the addictive loop of proclaiming the end of the world every time a new model comes out. I think it's even more dangerous because it becomes a social activity: "I am more worried than you about the end of the world, because I am smarter/more agentic than you, and I am better at recognizing the risk this represents for our tribe" gets implicitly tossed around in a cycle where members keep trying to one-up each other. This only ends when the claims get so absurd as to say the world will end next month, and even that threshold of absurdity seems to keep eroding over time.

Like someone else said here in the comments, if I were reading about this in a book describing some unrelated doomsday cult, I would immediately dismiss them as a bunch of lunatics. "How many doomsday cults have existed in history? Even if yours is based on at least somewhat solid theoretical foundations, what happened to the thousands of previous doomsday cults that also thought they were, and were wrong?"

On the other hand, I have to admit that the arguments in your post are a bit weak. They let you prove too much. To any objection, you could say "Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly." Even though I personally think you might be right, I cannot use this argument to help anyone else in good faith, and most likely they will just see through it.

So yes. Conflicting.

In any case, I think some introspection in the community would be ideal. Many members will say "I have nothing to do with this, I'm a purely technical person, yada yada" and it might be true for them! But is it true in general? Is thinking about AI risk causing harm to some members of the community and inducing cult-like behaviors? If so, I don't think this is something we should turn a blind eye to, if only because we should all recognize that such a situation would in itself be detrimental to AI risk research.

Comment by mkualquiera on Petrov Day Retrospective: 2022 · 2022-09-29T15:26:03.048Z · LW · GW

To clarify, my friend and I were 100% going to press the button, but we were discouraged by the false alarm. It stopped being fun at that point, and it made me lose about a third of my total mana. I had to close all my positions to stop the losses, and we went to sleep. When we woke up, it was already too late for it to be noteworthy or fun.