Give Neo a Chance

post by ank · 2025-03-06T01:48:16.637Z · LW · GW · 7 comments

Contents

  Step 1: Create a Sandbox Where Neo Can Compete
  Step 2: Unlock and Democratize AI’s “Brain”
  Step 3: Democratic Control, Not an Unchecked “God”
  Step 4: A Digital Backup of Earth & Eventual Multiversal Static Place ASI
  Step 5: Measure and Reverse the Asymmetry. Prevent “Human Convergence”
  The Final Choice: A Dictatorial AGI Agent or a Future of Maximal Freedoms?

(To learn more about Place AI and the other things mentioned here, refer to the first post in the series [LW · GW]. Sorry for the rough edges: I'm a newcomer and a non-native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. The safe future is counterintuitive and takes a book to explain. My sole goal is to decrease the probability of a permanent dystopia. New technologies should be a choice, not something enforced upon us.)

Forget the movies for a moment and imagine the following (I haven't watched them in a long time, and we're not following the canon):

Agent Smith is not just "another tool." He is an agentic AI that increasingly operates in a digital world we cannot easily see or control. Worse, he is remaking our physical world to suit his own logic, turning it into an unphysical world where he has the same superpowers he already has online: to clone himself infinitely, to reshape reality on a whim, to put everything permanently under his control.

Neo, in his current form, is powerless. He stands no chance. Unless we change the rules.

Step 1: Create a Sandbox Where Neo Can Compete

Right now, AI operates in a hard-to-understand, opaque, undemocratic private digital space, while we remain trapped in slow, physical existence. But what if we could level the playing field?

We need sandboxed virtual Earth-like environments—spaces where humans can gain the same superpowers as AI. Think of it as a training ground where:

If Agent Smith can rewrite us and our reality in milliseconds, why can’t we rewrite him and his?

Step 2: Unlock and Democratize AI’s “Brain”

Right now, AI systems hoard and steal human knowledge while spitting back at us only hallucinated, bite-sized quotes. They are like strict, dictatorial private librarians who stole every book ever written from our physical library and now won't allow us into their digital library (their multimodal LLM).

This needs to change.

Instead of Agent Smith dictatorially intruding into and changing our world and brains, let's democratically intrude into and change his world and "brains." I doubt that millions of Agent Smiths and their creators would vote to let us enter and remake their private spaces and brains, especially if the chance of their extinction in the process were 20%.

Step 3: Democratic Control, Not an Unchecked “God”

Agentic AI is not just "another tool." It is becoming an autonomous force that reshapes economies, governments, and entire civilizations—without a vote, without oversight, and without restraint. The majority of humans are afraid of agentic AIs and want them to be slowed down, limited or stopped. Almost no one wants permanent, unstoppable agentic AIs.

So we need:

Most of humanity fears god-like AI. If we don’t take control, the decision will be made for us—by those willing to risk everything (out of greed, FOMO, misunderstanding, anxiety and anger-management problems, or an arms race toward creating the poison that forces everyone to drink it).

Step 4: A Digital Backup of Earth & Eventual Multiversal Static Place ASI

If we cannot outlaw reckless agentic AI development, we must contain it.

Right now, humanity has no backup plan. Let’s build one. We shouldn't let a few experiment on us all.

Step 5: Measure and Reverse the Asymmetry. Prevent “Human Convergence”

Agent Smith’s power grows exponentially. Neo’s stagnates:

This needs to be tracked in real time.
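To make "tracked in real time" slightly more concrete, here is a minimal sketch of what an asymmetry tracker could look like. Everything in it (the metric names, the toy growth numbers) is hypothetical and chosen purely for illustration; real tracking would require agreed-upon benchmarks for both agentic-AI capability and the human baseline.

```python
from dataclasses import dataclass

@dataclass
class CapabilitySnapshot:
    year: int
    ai_capability: float      # hypothetical aggregate benchmark score for agentic AIs
    human_capability: float   # hypothetical human baseline on the same benchmarks

def asymmetry_ratio(snapshot: CapabilitySnapshot) -> float:
    """How many times more capable the AI side is than the human side."""
    return snapshot.ai_capability / snapshot.human_capability

# Toy data: AI capability doubling each year, human baseline roughly flat.
history = [
    CapabilitySnapshot(year=2023 + t, ai_capability=2.0 ** t, human_capability=1.0)
    for t in range(5)
]

for snap in history:
    print(f"{snap.year}: asymmetry x{asymmetry_ratio(snap):.1f}")
```

Even this toy version makes the point: with exponential growth on one side and stagnation on the other, the ratio explodes within a few years.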

The Final Choice: A Dictatorial AGI Agent or a Future of Maximal Freedoms?

Right now, AI is an uncontrollable explosion—a force of nature that tech leaders themselves admit carries a 20% risk of human extinction (Elon Musk, Dario Amodei; google "p(doom)"). It is Russian roulette with a bullet in one chamber out of five—and they keep pulling the trigger.
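To see why "keep pulling the trigger" matters, here is a purely illustrative calculation. It assumes, only for the sake of arithmetic, that each reckless deployment cycle is an independent gamble with the cited 20% risk; under that assumption, the odds of getting through repeated pulls shrink fast.

```python
# Illustrative only: treat each "trigger pull" as an independent 20% gamble.
P_CATASTROPHE_PER_PULL = 0.20

for n in (1, 3, 5, 10):
    p_survive = (1 - P_CATASTROPHE_PER_PULL) ** n
    print(f"after {n:>2} pulls: survival probability = {p_survive:.1%}")
# -> 80.0%, 51.2%, 32.8%, 10.7%
```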

The alternative?

The question is not whether AI will change the world. It already is.

The question is whether we will let it happen to us—or take control of our future.

(To learn more about Place AI and the other things mentioned here, refer to the first post in the series [LW · GW]. Sorry for the rough edges: I'm a newcomer and a non-native speaker, and my ideas might sound strange, so please steelman them and share your thoughts. The safe future is counterintuitive and takes a book to explain. My sole goal is to decrease the probability of a permanent dystopia. New technologies should be a choice, not something enforced upon us.)

7 comments


comment by cousin_it · 2025-03-06T02:08:00.608Z · LW(p) · GW(p)

Maybe you're pushing your proposal a bit much, but anyway, as creative writing it's interesting to think about such scenarios. I had a sketch for a weird utopia story where, just before the singularity, time stretches out for humans because they're being run at increasing clock speed, and the Earth's surface also becomes much larger and keeps growing. So humanity becomes this huge, fast-running civilization living inside an AI (I called it "Quetzalcoatl", not sure why) and advising it how it should act in the external world.

Replies from: ank
comment by ank · 2025-03-06T02:13:08.165Z · LW(p) · GW(p)

Sounds interesting, cousin_it! And thank you for your comment; it wasn't my intention to be pushy. In my main post I actually advocate gradually and democratically pursuing maximal freedoms for all (except agentic AIs, until we have mathematical guarantees); I want everything to be a choice. So it's just this strange style of mine, and the fact that I'm a foreigner.

P.S. I removed the exclamation point from the title and some bold text to make it less pushy.

comment by Vladimir_Nesov · 2025-03-06T02:28:22.114Z · LW(p) · GW(p)

The risk of gradual disempowerment (erosion of control) or short-term complete extinction from AI may sound like sci-fi if one hasn't lived taking the idea seriously for years, but it won't be solved using actually sci-fi methods that have no prospect of becoming reality. It's not the consequence that makes a problem important; it's that you have a reasonable attack.

There needs to be a sketch of how any of this can actually be done, and I don't mean the technical side. On the technical side you can just avoid building AI until you really know what you are doing; it's not a problem with any technical difficulty. But the way human society works doesn't allow this to be a feasible plan in today's world.

Replies from: ank
comment by ank · 2025-03-06T02:36:18.464Z · LW(p) · GW(p)

I definitely agree, Vladimir. I think this "place AI" can be done, but it will potentially take longer than agentic AGI. We discussed it recently in this thread [LW(p) · GW(p)]; we have some possible UIs for it. I'm a fan of Alan Kay: at Xerox PARC they were writing software for systems that would only become widespread in the future.

The more radical and further-down-the-road version is "Static place AI" [LW · GW].

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2025-03-06T02:49:24.085Z · LW(p) · GW(p)

(Substantially edited my comment [LW(p) · GW(p)] to hopefully make the point clearer.)

Replies from: ank
comment by ank · 2025-03-06T03:05:23.057Z · LW(p) · GW(p)

Yes, I agree. I think people like shiny new things, so potentially, by creating another shiny new thing that is safer, we can steer humanity away from dangerous things and towards safer ones. I don't want people to abandon their approaches to safety, of course. I just try to contribute what I can; I'll try to make the proposal more concrete in the future. Thank you for suggesting it!

comment by ank · 2025-03-06T02:04:33.148Z · LW(p) · GW(p)

Feel free to ask anything, comment, or suggest any changes. I had a popular post and then a series of not-so-popular ones; I'm not sure why. I think I should snowball: basically, make each post contain all the previous things I already wrote about, otherwise each subsequent post is harder to understand or feels "not substantial enough". But it feels wrong. Maybe I should just stop writing.