Sanctuary for Humans
post by Nikola Jurkovic (nikolaisalreadytaken) · 2023-10-27T18:08:22.389Z · LW · GW · 9 comments
TL;DR: If we succeed at AI safety, humans will probably decide the future of the universe, and we currently have strong self-preservation incentives to choose a future populated by humans. If we committed to providing sanctuary for all currently alive humans, this would make our decision-making process less biased at the cost of one planet or solar system, which I think is a good trade.
I think these two scenarios are the main ways for the future to go:
- The universe is populated by biological humans very similar to the ones alive today and not much else (a "Star Trek"-type future)
- The universe is populated by something very weird (digital humans, posthumans, computronium, hedonium, “paperclips”)
It is possible that a specific version of the second scenario is "better" (whatever that means) than the first scenario. Getting this right is extremely important, as the entire universe literally depends on it.
I think we, the currently alive humans, are unlikely to let ourselves be replaced (or modified) even if on some level we think (or think we should think) that this would be better, because we have strong incentives to converge to value systems that lead to our survival. It’s hard to reason with a gun pointed at your head. Additionally, the people who get to shape the future might be too eager to choose a future without humans for no good reason, which makes us averse to that type of reasoning.
One way to remove most of the self-preservation constraint is to commit to providing sanctuary to currently alive humans indefinitely. In a post-AGI world, we might wall off the Earth, or the Solar System, and reserve it forever for humans and humans only. This way, we could reason about how to shape the rest of the universe without our self-preservation mechanisms kicking in as much. Without the metaphorical gun pointed at humanity’s head, we are likely to reason more clearly about how to shape the future.
In the worst case, preserving the Earth or the Solar System as a sanctuary would cause an extremely small fraction of the universe to be suboptimal according to the values that shape the rest of the universe. I think this is a worthy tradeoff, as being “closer to the bullseye” with regard to what happens to the rest of the universe is worth far more than one planet or star system. The lightcone could contain a septillion star systems; one star system reserved for humans is a small sacrifice, as the arithmetic below suggests.
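As a rough back-of-the-envelope sketch (assuming the reachable lightcone contains on the order of $10^{24}$ star systems, an order-of-magnitude guess rather than a measured number):

$$\frac{\text{resources reserved}}{\text{total resources}} \approx \frac{1\ \text{star system}}{10^{24}\ \text{star systems}} = 10^{-24}$$

So any improvement larger than about one part in $10^{24}$ in how the rest of the lightcone is used already outweighs the cost of the sanctuary.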
Appendix: Biological human future vs weird future
We can imagine a world very similar to the human-inhabited world, but with digital human minds in a simulated environment, where the humans live equally good or better lives (assuming the lives are net-positive in both worlds).
If the following are true:
- Substrate-independence (a human in a virtual world is roughly as valuable as a human in the physical world)
- Humans housed in virtual worlds are less resource-intensive per person
- The "goodness of the universe" is sensitive to the number of people in it
Then the future inhabited by virtual humans could be thousands or millions of times better than the one inhabited by biological humans, depending on how much “cheaper” it is for a human to live digitally than biologically; a rough version of this arithmetic is sketched below. It's extremely important to pick the right one.
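As a toy model of the “thousands or millions of times better” claim (assuming, per the list above, that total goodness scales roughly linearly with population):

$$\frac{V_{\text{digital}}}{V_{\text{bio}}} \approx \frac{N_{\text{digital}}}{N_{\text{bio}}} = \frac{R/c_{\text{digital}}}{R/c_{\text{bio}}} = \frac{c_{\text{bio}}}{c_{\text{digital}}}$$

where $R$ is the total resource budget, $c$ is the resource cost per person, and $N = R/c$ is the population that budget supports. If a digital human costs $10^{-3}$ as much as a biological one, this model makes the digital future about a thousand times better; at $10^{-6}$, about a million times.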
9 comments
comment by Donald Hobson (donald-hobson) · 2023-10-28T22:24:33.075Z · LW(p) · GW(p)
There are reasons to prefer virtual humans that are not "virtual humans are resource cheap".
One of them is life expectancy. There is probably some limit to how reliable bio humans can be.
Another is sheer cool stuff. If you want magic that works like your favorite fiction, then that's often trivial in a virtual world. (A few exceptions around time travel, logical contradictions, etc.) But something like a box that's bigger on the inside is much harder to make in the physical world. A wider possibility space allows more extreme optimization for fun. (Although that level of optimization is already pretty high with nanotech. And quite a lot higher in parts of today's world than anything in the environment of evolutionary adaptedness.)
Now this doesn't distinguish between uploaded minds and brains in vats.
Another advantage of mind uploading is enhanceability. Suppose we want to be able to fit 1000 years of memories into our minds. Not right now; we have >900 years before most people get to needing that. But we will need it eventually. With digital minds, you have total freedom in how to do the enhancements in a minimally invasive way. With physical brains, you start getting physics problems.
Also, the amount of resources needed for a physical mind is not some fixed number. We (in a sense) use a bunch more resources today than our ancestors did, and have better lives because of it. There is a minimal level of resources needed to keep a person alive at all, and then a diminishing-returns tradeoff curve where more resources = better quality of life. Virtual worlds mostly short-circuit that, I think. I strongly suspect it should be possible to make nearly any virtual thing, to a quality sufficient to fool humans, with an amount of compute comparable to that needed to run the virtual human. I.e. if you have a virtual human on a virtual beach, you can spend unlimited compute tracking every molecule in that ocean, but if it only needs to look close enough to the human, I doubt you need orders of magnitude more compute on the ocean than you do on the human. (The human knows their world is virtual, but thinks pixelated waves are ugly.) As opposed to a physical society, where regular rocket trips to Mars could take millions of times the minimum energy needed to sustain a human.
There are a few exceptions to this. If some mathematician wants to know whether the Riemann hypothesis is true, they don't want an answer that can fool a human; a coin toss can give them that. They want the real answer, which could take huge amounts of compute in a real or virtual world.
Other advantages of virtuality include the ability to run different minds at different speeds, and the ability to copy people.
This gives lots of reasons to go virtual, even if it is more expensive than the minimal subsistence human.
comment by Vladimir_Nesov · 2023-10-28T01:18:42.981Z · LW(p) · GW(p)
If we succeed at AI safety, humans will probably decide the future of the universe
There are two levels of success: humanity gets to live, and humanity keeps control of the future. An AGI as aligned as a human has a decent chance of giving the currently alive humans either boon. A pseudokind [LW(p) · GW(p)] AGI that's not otherwise very aligned only allows humanity to live, but keeps the future for itself.
(I have Yudkowskian doom levels for losing control of the future, assuming there is no decades-long pause to figure things out, but significantly less for everyone ending up dead.)
comment by Donald Hobson (donald-hobson) · 2023-10-28T22:28:20.328Z · LW(p) · GW(p)
Personally, I think that making the whole future a place I would want to live sounds like a good idea.
Well, given that other human minds exist, some room can be made for things that they would enjoy and that I wouldn't.
comment by Vladimir_Nesov · 2023-10-28T01:24:08.092Z · LW(p) · GW(p)
two scenarios [...] biological humans [...] digital humans, posthumans, computronium, hedonium, “paperclips”
It is possible that a specific version of the second scenario is "better" (whatever that means) than the first scenario. Getting this right is extremely important, as the entire universe literally depends on it.
Individual humans should have a say in shaping their share of the universe, especially their own mind.
↑ comment by Donald Hobson (donald-hobson) · 2023-10-28T22:53:07.617Z · LW(p) · GW(p)
Good luck figuring out what you are doing with your share; I have no idea. Well, I have some ideas, but it's not like I could design a fully functioning utopia by myself. And most people do not think about how to build a cosmic utopia.
Remember how stupid the average person is, and then remember that half of them are stupider than that.
If you hand a smart, skilled person the controls of something, say a car to be specific, it doesn't matter exactly how the controls are arranged. They can learn to use them, and use them to get where they want to go. However the controls are set up, the person will get to the same place.
Hand the controls to a sufficiently stupid mind, and they pull the controls at random, or in some arbitrary pattern like pressing the biggest button. Thus where they end up is sensitively dependent on exactly how the controls are structured. And if most random actions end in a crash, they are likely to crash.
So, how do you hand control of most of a galaxy of resources to some flat earth space denier who thinks "stars are fake"? Any plan to turn incoherent gibbering into actions will produce actions that no one really wants, and which are sensitively dependent on the control scheme.
The same applies to "control of our own minds". Suppose that control came in the form of some intricate and arcane system, like writing assembly for the brain. Three genius neurologists carefully read through the 6000 pages of dense technical manual, spend years carefully designing and double-checking some enhancement, and actually make the improvement they aimed for. A million idiots bang keyboards and give themselves new mental illnesses.
Or suppose the interface was friendlier, in the sense of giving people what they were asking for. And loads of religious people ask to be given 100% certain faith in god. And they get it.
The "everyone gets their share" could work in a world where everyone had written, or could write, long coherent descriptions of what they planned to do with their share. But that isn't this world.
↑ comment by Vladimir_Nesov · 2023-10-29T04:42:10.377Z · LW(p) · GW(p)
Hence "gets a say", not unreservedly "determines". Becoming smarter should certainly be a high-salience early option. And the option to eventually get to determine things in detail shouldn't be lost because of an initial lack of competence. There is an unimaginable amount of time for people to get their act together at some point.
↑ comment by Donald Hobson (donald-hobson) · 2023-10-30T12:24:21.420Z · LW(p) · GW(p)
An "everyone gets a share" system has the downside that if 0.1% of people want X to exist, and 95% of people strongly want X not to exist, then the 0.1% can make X in their share.
Where X might be torturing copies of a controversial political figure. Or violent video games with arguably sentient AI opponents getting killed.
Also, I think you are passing the buck a lot here. Instead of deciding what to do with the universe, you now need to decide how to massively upgrade a bunch of humans into the sort of beings who can decide that.
Also, some people just dislike responsibility.
And the modifications needed to make a person remotely trustworthy at that level are likely substantial. Perhaps. How much do you need to overwrite everyone's mind with an FAI? I don't know.
↑ comment by Vladimir_Nesov · 2023-10-30T14:15:39.600Z · LW(p) · GW(p)
Some general laws seem appropriate, the same as with competence. This is different from imposing strong optimization pressure. People who have no use for compute could rent it out, until they have a personal need for it at the end of time. Still getting to decide what happens then is what it means to keep control of the future.
↑ comment by M. Y. Zuo · 2023-10-28T01:45:00.436Z · LW(p) · GW(p)
Seems a bit of a paradox, as 'their share of the universe' is not a fixed quantity, nor did such a concept exist before humans. So how could the 'share' even have been decided on beforehand in order for the first 'individual humans' to have a say?