How to not build a dystopia

post by ank · 2025-01-29T14:16:09.862Z · LW · GW · 3 comments

PS. I've just edited this first post of mine; it should now be obvious that I'm not proposing to build any dystopias. I'm actually trying to avoid building any.

I consider thinking about the extremely long-term future important if we don't want to end up in a dystopia. The question is: if we eventually have almost infinite compute, what should we build?

I think it's hard or impossible to answer this question 100% right, so it's better to build something that gives us as many choices as possible (including choices to undo): something close to a multiverse (possibly virtual). This is the only way to make sure we don't permanently censor ourselves into the corner of some dystopia.

A mechanism for instant, cost-free switching between universes/observers in this future human-made multiverse is essential to get us as close as possible to a perfect utopia. It will allow us to debug the future.

With instant switching, we won't need to be afraid to build infinitely many worlds in search of a utopia: if some world we're building starts to look like a dystopia, we instantly switch away from it. It's more realistic to build many worlds and find a good one among them than to try to reach a good one on the single first attempt and risk ending up in a single permanent dystopia.

ank

3 comments

Comments sorted by top scores.

comment by JBlack · 2025-01-30T03:14:24.537Z · LW(p) · GW(p)

Building every possible universe seems like a very direct way of purposefully creating one of the biggest possible S-risks. There are almost certainly vastly more dystopias of unimaginable suffering than there are of anything like a utopia.

So to me this seems like not just "a bad idea" but actively evil.

Replies from: ank
comment by ank · 2025-01-31T14:26:33.973Z · LW(p) · GW(p)

Fair enough, my writing was confusing, sorry. I didn't mean that we should purposefully create dystopias. I just think it's highly likely they will be created unintentionally, and that the best solution is an instant switching mechanism between observers/verses: basically an AI that really likes to be turned off or switched to a different model. I'll edit the post to make it obvious that I don't want anyone to create dystopias.

comment by ank · 2025-01-29T14:19:37.208Z · LW(p) · GW(p)

Any criticism is welcome; it's my first post. I'll post next on the implications for current and future AI systems. There are some obvious implications for political systems, too. Thank you for reading.