A Good Future (rough draft)
post by Michael Soareverix (michael-soareverix) · 2022-10-24T20:45:45.029Z · LW · GW · 5 comments
It is the year 2500.
Humanity has overcome its challenges. AI alignment has been solved and a benevolent god watches over us, allowing us to create and share what we desire. But it no longer interferes much; at least, not in this realm.
There are a lot of augmented humans. They have essentially uploaded their minds via nanotechnology. Every night for 5 years, tiny nanomachines would replace their neurons with synthetic ones until they were fully synthetic. With these synthetic neurons in place, they could think up to 10,000x faster and could add different dimensions to their consciousness, like the ability to see infrared, to have an intuitive understanding of quantum mechanics, or to control hundreds of different bodies at once.
I am one of those augmented humans. My consciousness is digital, composed of photonic signals instead of neurotransmitters. Right now, I am independent and not connected to the sea of computing infrastructure that surrounds me. I'm human, except that I can vary how fast I think. At the moment, I am thinking just 1.5 times faster than a biological human. I could be thinking more than a thousand times faster (light is so much faster than sound, and our biological nervous system transmits at the speed of sound), but I like the way the world moves at this speed.
I am not on Earth. I am about 100 light-years away from Earth, in a gigantic solar system that belongs to me. Within this space, I control every atom. Well, not all of them. Those atoms that belong to others are still protected by god. I cannot inflict pain on others without their permission, and even then, there are strict limits on the magnitude of suffering that is allowed to exist.
But functionally, since I don't want to cause anyone pain, I can do everything I want to do.
Nanomachines and swarms of self-replicating probes are currently disassembling the planets in orbit of this star. We’re building a Dyson swarm / Matrioshka brain. Basically, a swarm of orbiting computers, powered by the sun. Within this massive computing infrastructure, I have built a universe.
I've been building this universe for ~490 years. I originally intended to be a writer, then a VR developer, then an AI researcher. Now... I am a worldbuilder in the most literal sense. So are millions of others, and I hope someday I'll experience some of their simulations as well. But for now, this is the one I want.
I sit back into my cozy chair, close my eyes, and enter the universe I've built.
The wind ripples across my skin. To build anticipation, I haven’t used a simulation at all for the last few months. I’ve been catching up on fantasy stories, drinking hot chocolate, and letting time go by at fifty times normal speed as the spaceship nears this solar system under construction.
It is marvelous how realistic this feels.
My old memories fade. This world is best experienced by a new consciousness. But my identity remains, and my memories are still stored on a server somewhere, ready to be re-downloaded as soon as I hop out of simulation.
This simulation is a world with purpose. As a biological human, I dreamed of being a hero. I wanted to use magic, to fly through the air, and to protect those I loved.
That desire burns within me now. I snap my fingers and a spark flies into the air. My hair ripples in the wind and I set out through the desert with a sword across my back.
There are others here, real humans. In fact, every human I meet is real and has rights. My actions have consequences.
As the last of my memories fade, I think of the world I'm leaving behind (but only temporarily).
Humanity is spreading across the stars. If I ever grow bored of this world I've made, then there are a quadrillion other worlds I can explore that are just as detailed. The universe has become an absurdly diverse place.
Some worlds still use biology, playing out their ideas in the real physical world. There are comparatively few of these, maybe 1 in a million, but the mass they take up equals a thousand solar systems, or about 1 trillion Earths. A trillion physical worlds, filled with different ecosystems, totally alien environments, and all sorts of terrifying and mysterious creatures.
A lot of worlds are copies of Old Earth, either through simulation or physicality. Many of these worlds even have biological humans with no augmentations at all. One of my friends chose to stay biological and now lives on one of those planets, reliving his college experience and making the best of it. I think that's a little unoriginal, but I guess I'm not one to speak.
Because on the other end of the spectrum, there are giant simulations where minds experience games in hundreds of different spatial dimensions and examine the universe at a level of detail I cannot even begin to comprehend. There are also worlds where your pleasure settings are pushed all the way up and minds float in eternity, experiencing incredible rapture as they examine the world around them.
I'm not particularly interested in those. I like where I'm at. I want to have a diversity of experience, have an impact on others, and continue to develop my identity as a person in a natural-seeming way. I want to try to be a hero and experience everything that comes with it.
This is the world we want. A world of security, fairness, and love. A world of choice, diversity, and infinite potential.
It is the end goal enabled by every technology, and especially by AI alignment research.
(This is my intuition of what a good world with AGI might look like, inspired by the ideas of Isaac Arthur. It sat in my drafts for a long time before I decided 'screw it, someone might find this helpful and I'd like to see what they make of it and how their ideal world differs from mine'. I'll reply to every comment!)
5 comments
comment by Vakus Drake (vakus-drake) · 2022-12-22T20:31:50.487Z · LW(p) · GW(p)
I've had similar ideas but my conception of such a utopia would differ slightly in that:
- This early on (at least given how long the OC has been subjectively experiencing things), I wouldn't expect one to want to spend most of one's time experiencing simulations stripped of one's memory. I'd expect a perfectly accurate simulation to be, if anything, easier to enjoy at first if you could relax knowing it wasn't actually real (plus people will want simulations where they can kill simulated villains guilt-free).
- I personally could never be comfortable being totally at the mercy of the machinations of superintelligences and the protection of the singleton AGI. So I would get the singleton AI to make me a lesser superintelligence to specifically look out for my values/interests, which it should have no problem with if it's actually aligned. Similarly, I'd expect such an aligned singleton to allow the creation of "guardian angel" AGIs for countless other people, provided said AIs have stable values which are compatible with its aligned values.
- I would expect most simulations to entail people's guardian angel AI simply acting out the roles of all NPCs with perfect verisimilitude, while obviously never suffering when they act out pain and the like. I'd also expect that many NPCs one formed positive relationships with would at some point be seamlessly swapped with a newly created mind, provided the singleton AI considered their creation to be positive utility and they wouldn't have issues with how they were created. I expect this to be a major source of new minds, such that the distant future will have many thousands of minds who were created as approximations of fictional characters, from all the people living out their fantasies in, say, Hogwarts, and then taking a bunch of its characters out of it.
PS: If I were working on a story like this (I've actually seriously considered it, and I get the sense we read and watch a lot of the same stuff, like Isaac Arthur), I'd make mention of how many (most?) people don't like reverting their level of intelligence, for similar reasons to why people would find the idea of being reverted to a young child's intelligence level existentially terrifying.
This is important because it means that one should view adult human-level intelligence as a sort of "childhood" for +X% of human-level superintelligence. So, to maximize the amount of novel fun one can experience (without forgetting things and repeating the same experiences like a loop immortal), one should wait until one gets bored of all there is to appreciate at one's current intelligence level (for the range of variance in mind design one is comfortable with) before improving it slightly. This also means that unless you are willing to become a loop immortal, the speed you run your mind at will determine, maybe within an order of magnitude or so, how quickly you progress along the process of "maturing" into a superintelligence, unless you're deliberately "growing up" faster than is generally advised.
comment by Michael Soareverix (michael-soareverix) · 2023-02-26T00:48:35.516Z · LW(p) · GW(p)
Yeah, this makes sense. However, I can honestly see myself reverting my intelligence a bit at different junctures, the same way I like to replay video games at greater difficulty. The main reason I am scared of reverting my intelligence now is that I have no guarantee that something awful won't happen to me. With my current abilities, I can be pretty confident that no one is going to take advantage of me. If I were a child again, with no protection or with less intelligence, I could easily imagine coming to harm because of my naivete.
I also think singleton AI is inevitable (and desirable). This is simply because it is stable. There's no conflict between superintelligences. I do agree with the idea of a Guardian Angel type AI, but I think it would still be an offshoot of that greater singleton entity. For the most part, I think most people would forget about the singleton AI and just perceive it as part of the universe the same way gravity is part of the universe. Guardian Angels could be a useful construct, but I don't see why they wouldn't be part of the central system.
Finally, I do think you're right about not wanting to erase memories for entering a simulation. I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I appreciate the comment. You've made me think a lot. The key idea behind this utopia is the idea of choice. You can basically go anywhere, do anything. Everyone will have different levels of comfort with the idea of altering their identity, experience, or impact. If you'd want to live exactly in the year 2023 again, there would be a physical, earth-like planet where you could do that! I think this sets a good baseline so that no one is unhappy.
comment by Vakus Drake (vakus-drake) · 2023-02-28T08:18:14.694Z · LW(p) · GW(p)
I think the whole point of a guardian angel AI only really makes sense if it isn't an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you will want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don't really see the point of a guardian angel AI.
>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I also disagree with this, insofar as I don't think that people "deciding on some place to stay" is a stable state of affairs under an aligned superintelligence, since I don't think people will want to be loop immortals if they know that's where they're heading. Similarly, I don't even know if I would consider an AGI aligned if it didn't try to ensure people understood the danger of becoming a loop immortal and try to nudge people away from it.
Though I really want to see some surveys of normal people to confirm my suspicions that most people find the idea of being an infinitely repeating loop immortal existentially horrifying.
comment by Shamash · 2022-10-24T21:53:26.113Z · LW(p) · GW(p)
As a whole, I find your intuition of a good future similar to my intuition of a good future, but I do think that once it is examined more closely there are a few holes worth considering. I'll start by listing the details I strongly agree with, then the ones I am unsure of, and then the ones I strongly disagree with.
Strongly Agree
- It makes sense for humans to modify their memories and potentially even their cognitive abilities depending on the circumstance. The example provided of a worldbuilder sealing off their memories to properly enjoy their world from an inhabitant's perspective seems plausible.
- The majority of human experience is dominated by virtual/simulated worlds
Unsure
- It seems inefficient for this person to be disconnected from the rest of humanity and especially from "god". In fact, the AI seems like it's too small of an influence on the viewpoint character's life.
- The worlds with maximized pleasure settings sound a little dangerous and potentially wirehead-y. A properly aligned AGI probably would frown on wireheading.
Strongly Disagree
- If you create a simulated world where simulated beings are real and have rights, that simulation becomes either less ethical or less optimized for your utility. Simulated beings should either be props without qualia or granted just as much power as the "real" beings if the universe is to be truly fair.
- Inefficiency like creating a planet where a simulation would do the same thing but better seems like an untenable waste of resources that could be used on more simulations.
- When simulated worlds are an option to this degree, it seems ridiculous to believe that abstaining from simulations altogether would be an optimal action to take in any circumstance. Couldn't you go to a simulation optimized for reading, a simulation optimized for hot chocolate, etc.? Partaking of such things in the real world also seems to be a waste of resources.
I might update this comment if anything else comes to mind.
By the way, if you haven't already, I would recommend you read the Fun Theory sequence by Eliezer Yudkowsky. One of the ways you can access it is through this post:
https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence [LW · GW]
"Seduced by Imagination" might be particularly relevant, if this sort of thing has been on your mind for a while.
comment by Michael Soareverix (michael-soareverix) · 2022-10-25T08:11:43.955Z · LW(p) · GW(p)
Thanks! I think I can address a few of your points with my thoughts.
(Also, I don't know how to format a quote so I'll just use quotation marks)
"It seems inefficient for this person to be disconnected from the rest of humanity and especially from "god". In fact, the AI seems like it's too small of an influence on the viewpoint character's life."
The character has chosen to partially disconnect themselves from the AI superintelligence because they want to have a sense of agency, which the AI respects. It's definitely inefficient, but that is kind of the point. The AI has a very subtle presence that isn't noticeable, but it will intervene if a threshold is going to be crossed. Some people, including myself, instinctively dislike the idea of an AI controlling all of our actions and would like to operate as independently as possible from it.
"The worlds with maximized pleasure settings sound a little dangerous and potentially wirehead-y. A properly aligned AGI probably would frown on wireheading."
I agree. I imagine that these worlds have some boundary conditions. Notably, the pleasure isn't addictive (once you're removed from it, you remember it being amazing but don't feel an urge to necessarily go back) and there are predefined limits, either set by the people in them or by the AI. I imagine a lot of variation in these worlds, like a world where your sense of touch is extremely heightened and turned into pleasure and you can wander through feeling all sorts of ecstatic textures.
"If you create a simulated world where simulated beings are real and have rights, that simulation becomes either less ethical or less optimized for your utility. Simulated beings should either be props without qualia or granted just as much power as the "real" beings if the universe is to be truly fair."
The simulation that the character has built (the one I intend to build) has a lot of real people in it. When those people 'die', they go back to the real world and can choose to be reborn into the simulation again later. In a sense, this simulated world is like Earth, and the physical world is like Heaven. There is meaning in the simulation because of how you interact with others.
There is also simulated life, but it is all an offshoot of the AI. Basically, there's this giant pool of consciousness from the AI, and little bits of it are split off to create 'life', like a pet animal. When that pet dies, the consciousness is reabsorbed into the whole and then new life can emerge once again.
Humans can also choose to merge with this pool of simulated consciousness, and theoretically, parts of this consciousness can also decide to enter the real world. There is no true 'death' or suffering in the way that there is today, except for those like the human players who open themselves to it.
"Inefficiency like creating a planet where a simulation would do the same thing but better seems like an untenable waste of resources that could be used on more simulations."
This is definitely true! But the AI allows people to choose what to do and prevents others from over-optimizing. Some people genuinely just want to live in a purely physical world, even if they can't tell the difference, and there is definitely something special about physical reality, given that we started out here. It is their right, even if it is inefficient. We are not optimizing for efficiency, just choice. Besides, there is so much other simulation power that it isn't really needed. In the same sense, the superminds playing 100-dimensional chess are inefficient, even if it's super cool. The key here is choice.
"When simulated worlds are an option to this degree, it seems ridiculous to believe that abstaining from simulations altogether would be an optimal action to take in any circumstance. Couldn't you go to a simulation optimized for reading, a simulation optimized for hot chocolate, etc.? Partaking of such things in the real world also seems to be a waste of resources"
Another good point! The point is that you have so many resources that you don't need to optimize if you don't want to. Sure, you could have a million tastier simulated hot chocolates for every real one, but you might have the real one just because you can. Given the choice, I'd probably choose the real option, even knowing the inefficiency, just because it's comfortable. And the AI supermind won't attempt to persuade me otherwise, even if it knows my choice is suboptimal.
The important keys of this future are its diversity (endless different types of worlds) and the importance of choice in almost every situation except when there is undesired suffering. In my eyes, there are three nice things to optimize toward in life: Identity, Experience, and Impact. Optimizing purely for an experience like pleasure seems dangerous. It really seems to me that there can be meaning in suffering, like when I work out to become stronger (improving identity) or to help others (impact).
I'll read through the Fun Theory sequence and see if it updates my beliefs. I appreciate the comment!