Comments
Will Pearson: First of all, it's not at all clear to me that your wish is well-formed, i.e. it's not obvious that it is possible to be informed about the many (infinite?) aspects of the future and not regret it. (As a minor consequence, it's not exactly obvious to me from your phrasing that "kill you before you know it" is not a valid answer; depending on what the genie believes about the world, it may consider that "future" stops when you stop thinking.)
Second, there might be futures that you would not regret but _everybody else_ would. (I don't have an example, but I'd demand a formal proof that none exist before allowing you to cast that wish to my genie.) Of course, you may patch the wish to include everyone else, but the first problem I mentioned still applies.
Oh, and nobody said all verthandi acted like that one. Maybe she was just optimized for Mr. Glass.
Tomasz: That's not technically allowed if we accept the story's premises: the genie explicitly says "I know exactly how humans would wish me to have been programmed if they'd known the true consequences, and I know that it is not to maximize your future happiness modulo a hundred and seven exclusions. I know all this already, but I was not programmed to care. [...] I am evil."
Of course, the point of the story is not that this particular result is bad (that's a premise, not a conclusion), but that seemingly good intentions could have weird (unpleasant & unwanted) results. The exact situation is like hand-waving explanations in quantum physics: not formally correct, but illustrative of the concept. The Luddite bias is used (correctly) just like "visualizing billiard balls" is used for physics, even though particles can't actually be seen (and don't even have shapes or positions or trajectories).
I'm surprised nobody put this problem in terms of optimization and "steering the future" (including Eliezer, though I suppose he might have tried to make a different point in his post).
As I see it, robots are a special case of machines intended to steer things in their immediate vicinity towards some preferred future. (The special case is that their acting parts and steering parts are housed in the same object, which is not terribly important, except that the subsumption architecture implies it.)
"Smart" robots have a component analogue to a cortex, which gathers sensory info, models the world, makes plans and directs the acting parts to do things. (This is true even if this component is just 50 lines of code compiled by a smart compiler.) The subsumption-based robots just contain many parts that are hard-wired with "reflexes" for every little connection between them, in such a way that "it just happens" that the ensemble acts intelligently, i.e. steers the future towards some intended subspace.
The fallacy, I think, is that the "just happens" part is not true. Some process has to generate the "optimization rules"; even the very simple reflexes between the components have to be correct (meaning that, in the real world, they actually steer the future towards something). Subsumption architecture fans will look at something like ants, notice that each element follows very simple rules but that the colony works extremely well, and will say "Hey, those are simple rules, I can understand them, I bet I could create another set of simple rules to do what I want. I don't need to build a complex brain."
The problem is that "set of simple rules" is not the same as "simple set of rules": in the ants' case, millions of years of evolution led to a very complicated set of very simple rules that work well. A subsumtion-oriented programmer can only build a simple set of simple rules; a complex one won't work.
To give an example, take the "move leg higher" case:
*) Over a few million years of evolution, creatures whose legs went higher up when hitting obstacles (say, because of random interactions between unrelated cells) got to eat more and be eaten less; so genetic programs for legged creatures tend to contain rules like that.
*) A subsumption-oriented programmer may notice that a leg that hits something should lift higher (either because (a) he thought about it or because (b) he noticed that's what ants or caterpillars do); see the sketch after this list.
*) A general AI researcher might think about a brain smart enough to decide it should lift a leg over obstacles it wants to cross.
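To make the contrast concrete, here's a minimal sketch (my own toy example, not anyone's actual robot code; the threshold and step size are made-up numbers) of the kind of hard-wired reflex the subsumption-oriented programmer would write:

    # One hard-wired reflex: no world model, no planner, just a rule
    # connecting a foot-force sensor to the leg actuator.
    FORCE_THRESHOLD = 0.5   # hypothetical sensor units
    LIFT_STEP = 0.1         # how much higher to raise the leg each time

    def leg_reflex(foot_force, current_lift):
        """If the foot is pressing against something, lift the leg a bit higher."""
        if foot_force > FORCE_THRESHOLD:
            return current_lift + LIFT_STEP
        return current_lift  # no obstacle felt, keep the current gait

    print(leg_reflex(foot_force=0.8, current_lift=0.2))  # prints roughly 0.3

A real subsumption robot is thousands of such rules wired between its sensors and actuators; the third approach would instead feed the same sensor readings into a world model and let a planner decide when to lift the leg.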
Of course, the subsumption-oriented programmer will notice that his solution to the "going over obstacles" problem is simpler than building a brain, and so it would seem more efficient.
But it's not actually more efficient, because (in case a) his very complex, general Natural Intelligence brain thought about it, or (in case b) millions of years of evolution produced this particular adaptation (and his general NI noticed it was useful). There's also the problem that there are thousands of other things even a simple organism does, and thus thousands (or more) of reflexes to add. Either he'll try to program them all by hand (case a): this is steering the future via a Rube Goldberg machine; it can work if you pour enormous resources into it, but most likely it will blow up in your face (or drop an anvil on you).
Or he'll turn to evolving the algorithms (simulating case b); a toy illustration follows below. That would probably work (it has worked so far in nature), but he probably doesn't have enough resources (natural evolution is a very inefficient optimizer). And even if he does, he won't understand how the result works.
Or he might notice that even nature found central processing useful, several times (pretty much everything above worms has a brain of some kind, and the brains of arthropods, cephalopods and vertebrates evolved separately). So he'll turn back to centralized programming: hard as it is, it's a more efficient investment than the alternatives. Note that this doesn't mean that you need fully-general AI to drive a car. You need all the tools you can get your hands on. Saying "reflexes are enough" is like saying "I don't need no tools, rocks are good enough".
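For the "evolving" option above, here's an equally toy sketch (again my own made-up example and numbers, not a real evolutionary-robotics setup) of blind mutation and selection applied to just that one reflex threshold; even a single parameter takes many trial runs, and a real rule set has thousands of interacting parameters:

    import random

    def fitness(threshold):
        # Stand-in for "how well does the robot walk with this reflex threshold";
        # in reality this means running the robot (or a simulation) for a while,
        # which is where the enormous resource cost comes from.
        return -abs(threshold - 0.5)

    best = random.uniform(0.0, 1.0)
    for _ in range(10_000):                         # many evaluations for one number
        candidate = best + random.gauss(0.0, 0.05)  # small random mutation
        if fitness(candidate) > fitness(best):      # keep it only if it "walks" better
            best = candidate

    print(f"evolved threshold: {best:.3f}")

And at the end you get a number that happens to work, not an explanation of why it works.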
OK, I see I got a bit long-winded. The interesting part of my question is whether you'd make the same decision if it were about you instead of others. The answer is obvious, of course ;-)
The other details/versions I mentioned are only intended to explore the "contour of the value space" of the other posters. (I'm sure Eliezer has a term for this, but I forget it.)
I know you're all getting a bit bored, but I'm curious what you think about a different scenario:
What if you have to choose between (a) for the next 3^^^3 days you get one extra speck in your eye per day beyond the normal amount, and for 50 of those years you're placed in stasis, or (b) you get the normal amount of specks in your eyes, but during the next 3^^^3 days you go through 50 years of atrocious torture?
Everything else is considered equal between the two cases, including the facts that (i) your total lifespan will be the same in both cases (more than 3^^^3 days), (ii) the specks are guaranteed not to cause any physical effects other than those mentioned in the original post (i.e., you're minimally annoyed and blink once more each day; there are no "tricks" about hidden consequences of specks), (iii) any other occurrence of specks in the eye (yours or others') or of torture (yours or others') will happen exactly the same way for either choice, (iv) the 50 years of either stasis or torture would happen at the same points in time, and (v) after the end of the 3^^^3 days the state of the world is exactly the same except for you (e.g., the genie doesn't come back with something tricky).
Also assume that during the 3^^^3 days you are human-shaped and human-minded, except that your memory (and your ability to use it) is stretched to work over that duration the way a typical human's does over a typical life.
Does your answer change if any of the following hold:
A) it's guaranteed that everything else is perfectly equal (e.g., the two possible cases will magically be forbidden to interfere with any of your decisions during the 3^^^3 days, but afterwards you'll remember them; in the case of torture, any trauma remains until it heals "physically". More succinctly, there are no side effects during the 3^^^3 days, and none other than the "normal" ones afterwards).
B) the 50 years of torture happen at the start of the period, at the end, or distributed throughout it.
C) we replace the 3^^^3-day period with either (i) your entire lifespan or (ii) infinity, and/or the period of torture with (i) any constant length longer than one year or (ii) any constant fraction of the lifespan discussed.
D) you are magically justified in placing absolutely certain trust in the offer (i.e., you're sure the genie isn't tricking you).
E) replace "speck in the eye" by "one hair on your body grows by half the normal amount" for each day.
Of course, you don't have to address every variation mentioned, just those that you think relevant.