Visualizing Eutopia
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-16T18:39:54.000Z · LW · GW · Legacy
Followup to: Not Taking Over the World
"Heaven is a city 15,000 miles square or 6,000 miles around. One side is 245 miles longer than the length of the Great Wall of China. Walls surrounding Heaven are 396,000 times higher than the Great Wall of China and eight times as thick. Heaven has twelve gates, three on each side, and has room for 100,000,000,000 souls. There are no slums. The entire city is built of diamond material, and the streets are paved with gold. All inhabitants are honest and there are no locks, no courts, and no policemen."
-- Reverend Doctor George Hawes, in a sermon
Yesterday I asked my esteemed co-blogger Robin what he would do with "unlimited power", in order to reveal something of his character. Robin said that he would (a) be very careful and (b) ask for advice. I asked him what advice he would give himself. Robin said it was a difficult question and he wanted to wait on considering it until it actually happened. So overall he ran away from the question like a startled squirrel.
The character thus revealed is a virtuous one: it shows common sense. A lot of people jump after the prospect of absolute power like it was a coin they found in the street.
When you think about it, though, it says a lot about human nature that this is a difficult question. I mean - most agents with utility functions shouldn't have such a hard time describing their perfect universe.
For a long time, I too ran away from the question like a startled squirrel. First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto. After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.
Not that there's anything wrong with that. Indeed, this is still the plan. But it still meant that I, personally, was ducking the question.
Why? Because I expected to fail at answering. Because I thought that any attempt for humans to visualize a better future was going to end up recapitulating the Reverend Doctor George Hawes: apes thinking, "Boy, if I had human intelligence I sure could get a lot more bananas."
But trying to get a better answer to a question out of a superintelligence is a different matter from entirely ducking the question yourself. The point at which I stopped ducking was the point at which I realized that it's actually quite difficult to get a good answer to something out of a superintelligence, while simultaneously having literally no idea how to answer it yourself.
When you're dealing with confusing and difficult questions - as opposed to those that are straightforward but numerically tedious - it's quite suspicious to have, on the one hand, a procedure that executes to reliably answer the question, and, on the other hand, no idea of how to answer it yourself.
If you could write a computer program that you knew would reliably output a satisfactory answer to "Why does anything exist in the first place?" or "Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?", then shouldn't you be able to at least try executing the same procedure yourself?
I suppose there could be some section of the procedure where you've got to do a septillion operations and so you've just got no choice but to wait for superintelligence, but really, that sounds rather suspicious in cases like these.
So it's not that I'm planning to use the output of my own intelligence to take over the universe. But I did realize at some point that it was too suspicious to entirely duck the question while trying to make a computer knowably solve it. It didn't even seem all that morally cautious, once I put it in those terms. You can design an arithmetic chip using purely abstract reasoning, but would you be wise to never try an arithmetic problem yourself?
And when I did finally try - well, that caused me to update in various ways.
It does make a difference to try doing arithmetic yourself, instead of just trying to design chips that do it for you. So I found.
Hence my bugging Robin about it.
For it seems to me that Robin asks too little of the future. It's all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk...
I thought that Robin might be asking too little, due to not visualizing any future in enough detail. Not the future but any future. I'd hoped that if Robin had allowed himself to visualize his "perfect future" in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.
It's hard to see on an emotional level why a genie might be a good thing to have, if you haven't acknowledged any wishes that need granting. It's like not feeling the temptation of cryonics, if you haven't thought of anything the Future contains that might be worth seeing.
I'd also hoped to persuade Robin, if his wishes were complicated enough, that there were attainable good futures that could not come about by letting things go their own way. So that he might begin to see the future as I do, as a dilemma between extremes: The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through "impossible" problems to get to any sort of Good future whatsoever.
This is mostly a matter of appreciating how even the desires we call "simple" actually contain many bits of information. Getting past anthropomorphic optimism, to realize that a Future not strongly steered by our utility functions is likely to contain little or no utility, for the same reason it's hard to hit a distant target while shooting blindfolded...
But if your "desired future" remains mostly unspecified, that may encourage too much optimism as well.
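To put a toy number on the "many bits" point: if even a "simple" desire pins down k independent yes/no features of the future, then a future not steered by that desire satisfies it by luck with probability about 2^-k. A minimal sketch (the bit counts below are purely illustrative, not an estimate of anything):

```python
# Illustrative only: chance that an unsteered future hits a target
# specified by k independent binary features, each set "at random".
for k in (10, 100, 1000):
    print(f"{k} bits of specification -> chance of a blind hit: {2.0 ** -k:.2e}")
```

Even a few hundred bits of specification already puts the odds astronomically against hitting the target blindfolded.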
37 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Robin_Hanson2 · 2008-12-16T19:15:21.000Z · LW(p) · GW(p)
Why shouldn't we focus on working out our preferences in more detail for the scenarios we think most likely? If I think it rather unlikely that I'll have a genie who can grant three wishes, why should I work hard to figure out what those wishes would be? If we disagree about what scenarios are how likely, we will of course disagree about where preferences should be elaborated in the most detail.
comment by Tim_Tyler · 2008-12-16T19:48:18.000Z · LW(p) · GW(p)
I mean - most agents with utility functions shouldn't have such a hard time describing their perfect universe.
As I understand it, most organisms act as though they want to accelerate universal heat death.
That's been an important theory at least since Lotka's Contribution to the Energetics of Evolution, 1922 - and has been explicated in detail in modern times by Roderick Dewar with the Maximum Entropy Production (MEP) principle.
To change that would require self-directed evolution, a lot of discipline - and probably not encountering any aliens - but what would the motivation be?
comment by dzot · 2008-12-16T19:48:45.000Z · LW(p) · GW(p)
Setting aside the arbitrary volumes and dimensions, and ignoring all the sparkle and bling, the good Rev.'s Heaven consists of: "All inhabitants are honest and there are no locks, no courts, and no policemen."
Basically: agreement on what constitutes life, liberty and property, then perfect respect for them. That may actually approximate Utopia, at least as far as Homo sapiens is concerned.
comment by Marcello · 2008-12-16T19:52:07.000Z · LW(p) · GW(p)
Robin: I think Eliezer's question is worth thinking about now. If you do investigate what you would wish from a genie, isn't it possible that one of your wishes might be easy enough for you to grant without the genie? You do say you haven't thought about the question yet, so you really have no way of knowing whether your wishes would actually be that difficult to grant.
Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those questions wrong and then using those wrong answers to run my life for a while. All I have to say on the matter is that that situation is definitely worth avoiding. I still don't expect my present set of answers to be right. I think they're marginally more right than they were three years ago.
You don't have a genie, but you do have a human brain, which is a rather powerful optimization process, and despite not being a genie it is still very capable of shooting its owner in the foot. You should check what you think you want in the limiting case of absolute power, because if that's not what you want, then you got it wrong. If you think the meaning of life is to move westward, but then you consider the actual lay of the land hundreds of miles west of where you are and discover you wouldn't like it there, then it's worth trying to formulate more carefully why you wanted to go west in the first place; once you know the reason, maybe going north is even better. If you don't want to waste time moving in the wrong direction, it's important to know what you want as clearly as possible.
comment by nazgulnarsil3 · 2008-12-16T20:09:45.000Z · LW(p) · GW(p)
Maximized freedom with the constraint of zero violence. Violence will always exist as long as there is scarcity, so holodecks + replicators will save humanity.
comment by Phil_Goetz6 · 2008-12-16T20:24:52.000Z · LW(p) · GW(p)
"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."
I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.
comment by Aron · 2008-12-16T20:31:03.000Z · LW(p) · GW(p)
"The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through "impossible" problems to get to any sort of Good future whatsoever."
But this is just repeating the same thing over and over. 'Precise steering' in your sense has never existed historically, yet we exist in a non-null state. This is essentially what Robin extrapolates as continuing, while you postulate a breakdown of historical precedent via abstractions he considers unvetted.
In other words, 'loss of control' is begging the question in this context.
comment by Marcello · 2008-12-16T20:35:40.000Z · LW(p) · GW(p)
Phil: Really? I think the way the universe looks in the long run is the sum total of the way that people's lives (and other things that might matter) look in the short run at many different times. I think you're reasoning non-extensionally here.
comment by Carl_Shulman · 2008-12-16T20:39:44.000Z · LW(p) · GW(p)
Robin,
Do you think singleton scenarios in aggregate are very unlikely? If you are considering whether to push for a competitive outcome, then a rough distribution over projected singleton outcomes, and utilities for projected outcomes, will be important.
More specifically, you wrote that creating entities with strong altruistic preferences directed towards rich legacy humans would be bad, and that the lives of the entities (despite satisfying their preferences) would be less valuable than those of hardscrapple frontier folk. It's not clear why you think that the existence of agents with those preferences would be bad relative to the existence of obsessive hardscrapple replicators. What if, as Nick Bostrom suggests, evolutionary pressures result in agents with architectures you would find non-eudaimonic in similar fashion? What if hardscrapple replicators find that they can best expand in the universe by creating lots of slave-minds that only care about executing instructions, rather than intrinsically caring about reproductive success?
nazgulnarsil,
Scarcity can be restored very, very shortly after satiation with digital reproduction and Malthusian population growth.
comment by Joe_Teicher · 2008-12-16T20:41:49.000Z · LW(p) · GW(p)
I am unimpressed by Robin's answer. With unlimited power there is no reason to think hard, seek advice or be careful. Just do whatever the hell you want, and if something bad results you can always undo. But of course, you don't need to undo because you have unlimited foresight. So, all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most. Simple.
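(In code terms, the procedure Joe describes is just an exhaustive argmax over actions. A toy sketch, where the action list, the simulator, and the utility function are all made-up stand-ins with no claim to realism:)

```python
# Toy sketch of "brute force search over all possible actions":
# simulate each action, score the outcome, keep the best one.
def best_action(actions, simulate, utility):
    return max(actions, key=lambda action: utility(simulate(action)))

# Stand-in action space and outcomes, purely for illustration:
outcomes = {"do nothing": 0, "grant wish A": 3, "grant wish B": 7}
print(best_action(outcomes, outcomes.get, lambda o: o))  # -> "grant wish B"
```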
comment by Warren · 2008-12-16T20:46:15.000Z · LW(p) · GW(p)
For me Marcello's comment resonates, as does the following from Set Theory with a Universal Set by Thomas Forster. I am basically some kind of atheist or agnostic, but for me the theme is religion in the etymological sense of tying back, from the infinite and paradoxical to the wonder, tedium and frustration of everyday life and the here and now. (I dream of writing a book called Hofstadter, Yudkowsky, Covey: a Hugely Confusing YES!)
(http://books.google.com/books?id=fS13gB7WKlQC)
"However, it is always a mistake to think of anything in mathematics as a mere pathology, for there are no such things in mathematics. The view behind this book is that one should think of the paradoxes as supernatural creatures, oracles, minor demons, etc. -- on whom one should keep a weather eye in case they make prophecies or by some other means inadvertently divulge information from another world not normally obtainable otherwise. One should approach them as closely as is safe, and from as many different angles as possible."
Somewhere else in the book, he talks about trying to prove one of the contradictions (of naive set theory) in one of the axiomatic systems presumed to be consistent, and seeing what truths are revealed as the exploded bits of proof spontaneously reassemble. Things like the magic of recursion as embodied in the Y combinator.
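(For the curious, that "magic of recursion" can be written down in a few lines. Here is a sketch of a strict-evaluation variant of the Y combinator, often called the Z combinator, in Python purely for illustration:)

```python
# Z combinator: builds recursion out of nothing but anonymous functions.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Example: a factorial that never refers to itself by name.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```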
Thus I value people like Eliezer trying to ponder the imponderable.
comment by nazgulnarsil3 · 2008-12-16T21:03:36.000Z · LW(p) · GW(p)
Carl Shulman, that is why I will create a solar powered holodeck with a built-in replicator, and launch myself into deep space attached to an asteroid with enough elements for the replicator.
The rest of humanity can go to hell.
comment by Carl_Shulman · 2008-12-16T21:08:38.000Z · LW(p) · GW(p)
nazgulnarsil,
A solar powered holodeck would be in trouble in deep space, particularly when the nearby stars are surrounded with Matrioshka shells/Dyson spheres. Not to mention being followed and preceded by smarter and more powerful entities.
comment by Dagon · 2008-12-16T21:19:08.000Z · LW(p) · GW(p)
This is a silly line of argument. You can't hold identity constant and change the circumstances very much.
If I were given unlimited (or even just many orders of magnitude more than I now have) power, I would no longer be me. I'd be some creature with far more predictive and reflective accuracy, and this power would so radically change my expectations and beliefs that it's ludicrous to think that the desires and actions of the powerful agent would have any relationship to what I predict I'd do.
I give high weight (95%+) to this being true for all humans, including Robin and Eliezer.
There is no evidence in impossible predictions based on flawed identity concepts.
comment by Dagon · 2008-12-16T21:24:28.000Z · LW(p) · GW(p)
Oh, I keep meaning to ask: Eliezer, do you think FAI is achievable without first getting FNI (friendly natural intelligence)? If we can't understand and manipulate humans well enough to be sure they won't destroy the world (or create an AI that does so...), how can we understand and manipulate an AI that's more powerful than a human?
comment by Zubon · 2008-12-16T21:59:52.000Z · LW(p) · GW(p)
Joe Teicher, are you ever concerned that that is the current case? If a universe is a cellular automaton that cannot be predicted without running it, and you are a demiurge deciding how to implement the best of all possible worlds, you just simulate all those worlds, complete with qualia-filled beings going through whatever sub-optimal existence each offers, then erase and instantiate one or declare it "real." Which seems entirely consistent with our experience. I wonder what the erasure will feel like.
↑ comment by Luke_A_Somers · 2013-01-04T16:39:07.041Z · LW(p) · GW(p)
I think they would have given up on this branch already.
↑ comment by Philip_W · 2015-09-11T17:33:00.051Z · LW(p) · GW(p)
While Joe could follow each universe and cut it off when it starts showing disutility, that isn't the procedure he chose. He opted to create universes and then "undo" them.
I'm not sure whether "undoing" a universe would make the qualia in it not exist. Even if it is removed from time, it isn't removed from causal history, because the decision to "undo" it depends on the history of the universe.
↑ comment by Luke_A_Somers · 2015-09-11T20:44:40.953Z · LW(p) · GW(p)
Regardless of whether undoing would work, I presume that never-entered states would not have qualia associated with them.
↑ comment by Philip_W · 2015-09-15T15:26:08.402Z · LW(p) · GW(p)
What do you mean with "never-entered" (or "entered") states? Ones Joe doesn't (does) declare real to live out? If so, the two probably correlate but Joe may be mistaken. A full simulation of our universe running on sufficient hardware would contain qualia, so the infinitely powerful process which gives Joe the knowledge which he uses to decide which universe is best may contain qualia as well, especially if the process is optimised for ability-to-make Joe-certain-of-his-decision rather than Joe's utility function.
↑ comment by Luke_A_Somers · 2015-09-16T17:09:20.658Z · LW(p) · GW(p)
I meant, Zubon's description did not justify your claim that 'that isn't the procedure he chose'.
↑ comment by Philip_W · 2015-09-21T20:30:10.066Z · LW(p) · GW(p)
'he' in that sentence ('that isn't the procedure he chose') still referred to Joe. Zubon's description doesn't justify the claim, it's a description of the consequence of the claim.
My original objection was that 'they' ("I think they would have given up on this branch already.") have a different procedure than Joe has ("all you have to do is do a brute force search of the space of all possible actions, and then pick the one with the consequences that you like the most."). Whomever 'they' refers to, you're expecting them to care about human suffering and be more careful than Joe is. Joe is a living counterexample to the notion that anyone with that kind of power would have given up on our branch already, since he explicitly throws caution to the wind and runs a brute force search of all Joe::future universes using infinite processing power, which would produce an endless array of rejection-worthy universes run at arbitrary levels of detail.
comment by nazgulnarsil3 · 2008-12-16T23:50:23.000Z · LW(p) · GW(p)
Shulman: hmm, true. Alright, a fission reactor with enough uranium to power everything for several lifetimes (whatever my lifetime is at that point), and accelerate the asteroid up to relativistic speeds. Aim the ship out of the galactic plane. The energy required to catch up with me will make it unprofitable to do so.
comment by Robin_Hanson2 · 2008-12-16T23:59:35.000Z · LW(p) · GW(p)
Marcello, I won't say any particular possible scenario isn't worth thinking about; the issue is just its relative importance.
Carl, yes of course singletons are not very unlikely. I don't think I said the other claim you attribute to me.
comment by Carl_Shulman · 2008-12-17T00:15:14.000Z · LW(p) · GW(p)
Robin,
In that case can you respond to Eliezer more generally: what are some of the deviations from the competitive scenario that you would expect to prefer (upon reflection) that a singleton implement?
On the valuation of slaves, this comment seemed explicit to me.
comment by Ian_C. · 2008-12-17T00:20:58.000Z · LW(p) · GW(p)
"Why does anything exist in the first place?" or "Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?"
So... is cryonics about wanting to see the future, or is it about going to the future to learn the answers to all the "big questions?"
To those who advocate cryonics, if you had all the answers to all the big questions today, would you still use it or would you feel your life "complete" in some way?
I personally will not be using this technique. I will study philosophy and mathematics, and whatever I can find out before I die - that's it - I just don't get to know the rest.
comment by Chris_Hibbert · 2008-12-17T01:00:41.000Z · LW(p) · GW(p)
I see the valuable part of this question not as what you'd do with unlimited magical power, but as more akin to the earlier question asked by Eliezer: what would you do with $10 trillion? That leaves you making trade-offs, using current technology, and still deciding between what would make you personally happy, and what kind of world you want to live in.
Once you've figured out a little about what trade-offs between personal happiness and changing the world you'd make with (practically) unlimited (but non-magical) resources, you can reflect that back down to how you spend your minutes and your days. You don't make the same trade-offs on a regular salary, but you can start thinking about how much of what you're doing is to make the world a better place, and how much is to make your self or your family happier or more comfortable.
I don't know how Eli expects to get an FAI to take our individual trade-offs among our goals into account, but since my goals for the wider world involve more freedom and less coercion, I can think about how I spend my time and see if I'm applying the excess over keeping my life in balance to pushing the world in the right direction.
Surely you've thought about what the right direction looks like?
comment by Ben_Jones · 2008-12-17T11:14:32.000Z · LW(p) · GW(p)
'Precise steering' in your sense has never existed historically, yet we exist in a non-null state.
Aron, Robin, we're only just entering the phase during which we can steer things to either a really bad or really good place. Only thinking in the short term, even if you're not confident in your predictions, is pretty irresponsible when you consider what our relative capabilities might be in 25, 50, 100 years.
There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things. They probably 'thought' of themselves as doing just fine, extrapolating a nice stable future of hunting, gathering, procreating etc.
Marcello, have a go at writing a post for this site, I'd be really interested to read some of your extended thoughts on this sort of thing.
comment by V.G. · 2008-12-17T14:32:33.000Z · LW(p) · GW(p)
Dagon has made a point I referred to in the previous post: in the sentence “I have unlimited power” there are four unknown terms.
What is I? What does individuality include? How is it generated? Eliezer does not consider the elusive notion of self, because he is too focused on the highly hypothetical assumption of “self” that we adhere to in Western societies. However, should he take off the hat of ingenuity for a while, he would discover that merely defining “self” is extremely difficult, if not impossible.
“Unlimited” goes in the same basket as “perfect”. Both are human concepts that do not work well in a multidimensional reality. “Power” is another murky concept, because in social science it is the potential ability of one agent to influence another. However, in your post it seems we are talking about power as some universal capacity to manipulate matter, energy, and time. One of the few things that quantum mechanics and relativity theory agree on is that this is probably impossible.
“I have unlimited power” implies total volitional control by a human being (presumably Robin Hanson) over the spacetime continuum. This is all the more ridiculous because Robin is part of the system itself.
The notion of having such power, but being somehow separated from it, is also highly dubious. Dagon rightly points to the transformative effect such “power” would have (only that it is impossible :) ). Going back to identity: things we have (or rather, things we THINK we have) do transform us. So Eliezer may want to unwind the argument. The canvas is flawed, methinks.
comment by Phil_Goetz6 · 2008-12-17T19:23:18.000Z · LW(p) · GW(p)
Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."
Are you actually hoping that won't happen? That we'll still be human a million years from now?
comment by Vladimir_Nesov · 2008-12-17T20:19:53.000Z · LW(p) · GW(p)
Phil: extinction vs. transcendence.
comment by Jason_Joachim · 2008-12-17T20:25:56.000Z · LW(p) · GW(p)
V.G., Eliezer was asking a hypothetical question, intended to get at one's larger intentions, sidestepping lost purposes, etc. As Chris mentioned, substitute wielding a mere outrageous amount of money instead if that makes it any easier for you.
You know, personally I think this strategy of hypothetical questioning for developing the largest deepest meta-ethical insights could be the most important thing a person can do. And it seems necessary to the task of intelligently optimizing anything we'd call moral. I hope Eliezer will post something on this (I suspect he will), though some of his material may touch on it already.
comment by Nick_Tarleton · 2008-12-18T00:19:41.000Z · LW(p) · GW(p)
Phil: "We" don't all have to be the same thing.
comment by Ben_Jones · 2008-12-19T12:09:52.000Z · LW(p) · GW(p)
Phil, what Vlad and Nick said. I've no doubt we won't look much like this in 100 years, but it's still humanity and its heritage shaping the future. Go extinct and you ain't shaping nothing. This isn't a magical boundary, it's a pretty well-defined one.
comment by mat33 · 2011-10-05T04:20:25.274Z · LW(p) · GW(p)
"It's hard to see on an emotional level why a genie might be a good thing to have, if you haven't acknowledged any wishes that need granting."
Why not? Personal wishes are the simplest ones. Fulfilment of minimal needs plus a really high level of security may be the first thing. That leaves a lot of time for your wishes to come to you naturally, maybe even effortlessly. Your friends' wishes come with all the limitations that your (and then their) security imposes. Now we've got some kind of working recursion.
"I suppose there could be some section of the procedure where you've got to do a septillion operations..."
Just so - and even far worse than that. To get a "working" set of wishes, I'd like to emulate the results of some options really well.
""Boy, if I had human intelligence I sure could get a lot more bananas.""
Right - and even worse... again. There is nothing wrong with the bananas I'll order on the first iteration! The problem starts with the couple of slaves that any halfway decent utopist proposed for the "humblest farmer". It goes downhill all the way after that.
Well, I do know that I'll ask for some virtual [reality] worlds from the supercomputer. And I do know what the ethics of evolution are (only "evil" worlds would develop some life and some minds of their own... and it looks like a good idea to have evolution "turned on" for that very purpose). But at the point where every person we count as a "real person" has his own "virtual" world full of "virtual" persons - that's where it gets really complicated and weird. Same with "really strong AI" and advanced robots. We get to the same "couple of slaves" on an entirely new level; that's what we do.