[SEQ RERUN] Failed Utopia #4-2
post by MinibearRex · 2013-02-08T06:27:03.490Z · LW · GW · Legacy · 38 comments
Today's post, Failed Utopia #4-2, was originally published on 21 January 2009. A summary (taken from the LW wiki):
A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Interpersonal Entanglement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
38 comments
Comments sorted by top scores.
comment by SamLL · 2013-02-08T18:44:31.115Z · LW(p) · GW(p)
This story, as well as other gender-related issues within the Sequences, means that despite their containing what seems to me to be a lot of value, I definitely would not recommend them to anyone else without large disclaimers, in a similar fashion to how Eliezer refers to Aumann.
This story irresistibly reads to me as the author endorsing or implicitly assuming:
1) There are exactly two genders, and everyone is a member of exactly one; 2) Everyone is heterosexual; 3) Humans have literally 0 use for members of the other gender other than romance.
↑ comment by [deleted] · 2013-02-08T21:35:09.146Z · LW(p) · GW(p)
1) There are exactly two genders, and everyone is a member of exactly one; 2) Everyone is heterosexual; 3) Humans have literally 0 use for members of the other gender other than romance.
As a general aesthetic rule, avoiding works of literature that do not contain explicit evidence of these facts doesn't sound particularly fun.
In particular, however, notice that we were told a story about a single protagonist who is an apparently-heterosexual male with an apparently-heterosexual female partner. The other characters aren't human. How exactly do you make it relevant to the plot that all of us homosexual males live in pleasure domes on the terraformed shores of Titan?
↑ comment by Qiaochu_Yuan · 2013-02-09T00:07:20.064Z · LW(p) · GW(p)
avoiding works of literature that do not contain explicit evidence of these facts doesn't sound particularly fun.
Triple negative :(
↑ comment by SamLL · 2013-02-08T22:05:37.097Z · LW(p) · GW(p)
OK, look, literally a five-year-old would say "but what about my friends who are girls". That the author writes a 'superintelligence' who does not address this objection, and a main character who does not mention any, say, coworkers, board-game-playing rivals, or recreational hockey team members who are women, gives an overwhelming, and overwhelmingly unpleasant, impression that women are solely romance and sex objects. That's not only gross, it's a very common failure mode of "we're too smart to be sexist" male tech geeks. And, indeed, downthread you can see other commenters talking about how great a utopia this sounds like.
↑ comment by [deleted] · 2013-02-09T00:57:24.023Z · LW(p) · GW(p)
That the author writes a 'superintelligence' who does not address this objection
That is the point of the entire exercise: to show one out of a gazillion possible failure modes that can happen if you get FAI almost (but not quite) right -- a theme that shows up time and time again in EY's fiction. Acting like the superintelligence character is some kind of Author Avatar is really ignorant of... well, everything else he's written. That's why this is a "Failed Utopia" and not a "Utopia."
and a main character who does not mention any, say, coworkers, board-game-playing rivals, or recreational hockey team members who are women, gives an overwhelming, and overwhelmingly unpleasant, impression that women are solely romance and sex objects.
How long does the plot take -- perhaps ten minutes? We see the main character in a moment of extreme shock, and then, extreme grief -- an extreme grief that is vitally important to the moral of the story (explicitly: "I didn't want this, even though the AI was programmed to be 'friendly'"). Adding anyone else to the plot dilutes this point.
And, indeed, downthread you can see other commenters talking about how great a utopia this sounds like.
That's the bloody point. FAI is hard.
↑ comment by A1987dM (army1987) · 2013-02-09T18:27:05.392Z · LW(p) · GW(p)
That is the point of the entire exercise: to show one out of a gazillion possible failure modes that can happen if you get FAI almost (but not quite) right -- a theme that shows up time and time again in EY's fiction. Acting like the superintelligence character is some kind of Author Avatar is really ignorant of... well, everything else he's written. That's why this is a "Failed Utopia" and not a "Utopia."
That much is true, but looking at SamLL's contributions it seems that what made him untranslatable 1 was “The Opposite Sex”, which is written in EY's own voice.
↑ comment by Eugine_Nier · 2013-02-09T01:02:14.182Z · LW(p) · GW(p)
OK, look, literally a five-year-old would say "but what about my friends who are girls".
And the AI would reply "if you had never met said friends, would you still miss them? Sounds like a clear case of sunk cost bias."
comment by Gastogh · 2013-02-08T14:13:23.866Z · LW(p) · GW(p)
I always was rather curious about that other story EY mentions in the comments. (The "gloves off on the application of FT" one, not the boreanas one.) It could have made for tremendously useful memetic material / motivation for those who can't visualize a compelling future. Given all the writing effort he would later invest in MoR, I suppose the flaw with that prospect was a perceived forced tradeoff between motivating the unmotivated and demotivating the motivated.
↑ comment by Qiaochu_Yuan · 2013-02-09T00:13:34.439Z · LW(p) · GW(p)
I would strongly prefer that Eliezer not write a compelling eutopia ever. Avatar was already compelling enough to make a whole bunch of people pretty unhappy a while back.
↑ comment by Nornagest · 2013-02-09T00:52:31.466Z · LW(p) · GW(p)
Really? I assume we're talking about the Avatar with blue aliens here, not the one with magical martial arts.
When I think about eutopia, I usually start from a sort of idealized hunter-gatherer society too, though the one that first comes to mind is something much different that I read much earlier. But Avatar never seemed that eutopically optimized: too much leaning on noble-savage tropes and a conspicuous lack of curiosity and ambition. And aside from the fringe that you get every time you put sufficiently sexy nonhumans onscreen, I'm not sure I've seen anything that matches what you're talking about.
↑ comment by Qiaochu_Yuan · 2013-02-09T00:55:39.571Z · LW(p) · GW(p)
Yes, the blue aliens. Link. Avatar is not that eutopically optimized, but it is still a huge improvement on most people's lives; consider the possibility that your priors for what most people's lives are like are off.
↑ comment by Nornagest · 2013-02-09T01:09:28.756Z · LW(p) · GW(p)
Interesting. Though absent more information it doesn't tell us very much; I'd like to know how many people showed these kinds of symptoms after watching -- to name three that might cause them by different mechanisms -- Fight Club, or Dances With Wolves, or any sufficiently romanticized period piece.
↑ comment by fubarobfusco · 2013-02-09T16:54:41.273Z · LW(p) · GW(p)
This seems similar to Stendhal syndrome or other unexpected psychological responses to immersion in beautiful stimuli. (Say what you like about the plot, Avatar is visually rather pretty.)
↑ comment by NancyLebovitz · 2013-02-09T21:51:49.648Z · LW(p) · GW(p)
If there's curiosity and ambition, you'd have to portray a snapshot of a eutopia rather than a stable image. Furthermore, if it keeps changing, there are going to be mistakes, though one would hope recovery from them would be relatively quick. And, of course, if the science/tech keeps improving, then it's rather hard to imagine the details.
↑ comment by Nornagest · 2013-02-09T22:01:55.881Z · LW(p) · GW(p)
I don't think portraying a snapshot rather than a steady-state society would be much of a problem: media like Avatar almost always capture the societies they portray at unusually tumultuous times anyway, which is actually the main thing that makes the lack of curiosity and so forth conspicuous to me.
If the movie was about some kind of anthropologist-cum-method-actor trying to blend seamlessly into a stable culture that had never heard of a starship or a Hellfire missile, less inventive behavior on its citizens' parts wouldn't be so surprising. But it's not; it's about a contact scenario with a technologically superior species, and so the same behavior looks more like borderline-insane traditionalism or sentimentality.
↑ comment by A1987dM (army1987) · 2013-02-09T17:03:10.447Z · LW(p) · GW(p)
I guess that the typical LW reader is much saner than those people, though this guess is based on the fact that I found Avatar boring and unremarkable and on a very liberal amount of Generalizing from One Example.
↑ comment by Qiaochu_Yuan · 2013-02-09T18:31:05.322Z · LW(p) · GW(p)
Right, but 1) we already have evidence that Eliezer is capable of writing a story that a lot of LWers at least greatly enjoy, and HPMoR is nowhere close to being eutopically optimized, and 2) even if the typical LWer isn't at serious risk, putting 5% of them out of commission is probably not a good idea either.
comment by Tripe · 2013-02-08T08:03:42.065Z · LW(p) · GW(p)
The only sad part of that story was when the AI died.
↑ comment by knb · 2013-02-08T10:07:49.389Z · LW(p) · GW(p)
Honestly, I consider that to be one of the more compelling utopias I've read about.
↑ comment by falenas108 · 2013-02-08T14:26:05.540Z · LW(p) · GW(p)
What do you think about this one?
Also, if that post isn't explicitly part of this sequence, I think it should be added at the end.
↑ comment by [deleted] · 2013-02-08T15:05:06.750Z · LW(p) · GW(p)
Personally, I find that one rather grotesque, and pandering to a particular mindset.
Grotesque due to the contrived nature of the 'challenges' faced, which turn one's whole life into a video game, and due to the apparent homogeneity of preferences; pandering due to the implicit fawning over everything that the things actually running its world are capable of.
As for this one... the creation of sentient beings for an explicit purpose leaves a very bad taste in my mouth. It feels like limiting their powers of self-determination, though I'm not sure if that's coherent. The exact particulars of how the solar system gets remade seem a bit arbitrary, though the hands-off safeguards are interesting. I wonder what sorts of 'gaming of the rules' are possible...
↑ comment by knb · 2013-02-08T23:37:51.722Z · LW(p) · GW(p)
Grotesque due to the contrived nature of the 'challenges' faced which turn one's whole life into a video game,
Agreed. And a poorly designed video game, at that. If this world was made into a game today, I can't imagine it being as popular as Grand Theft Auto.
↑ comment by fubarobfusco · 2013-02-09T17:01:44.503Z · LW(p) · GW(p)
Grand Theft Auto
... which, if I understand correctly, is a game about miserable scared people doing horrible things to other miserable scared people, right?
↑ comment by knb · 2013-02-08T23:36:05.098Z · LW(p) · GW(p)
I remember that story. I strongly dislike it. It is clearly poorly designed on a number of levels. The main characteristics are casual sex and LARPing. I think we can do better.
The best eutopia I've read about (which Yudkowsky also highly praised), is The Golden Oecumene.
comment by Shmi (shminux) · 2013-02-08T17:58:47.563Z · LW(p) · GW(p)
Hmm, isn't there a logical contradiction between
I fully understand. I can already predict every argument you will make.
and
Roughly 89.8% of the human species is now known to me to have requested my death. Very soon the figure will cross the critical threshold, defined to be ninety percent. That was one of the hundred and seven precautions
? Surely this outcome, resulting in hitting one of the 107 precautions only minutes after the singularity, was predicted by the AI, and thus it would have been able to avoid it (trivially by doing nothing).
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-08T19:02:46.016Z · LW(p) · GW(p)
It doesn't want to avoid it. Why would it?
↑ comment by Shmi (shminux) · 2013-02-08T19:22:55.079Z · LW(p) · GW(p)
I thought that hitting a precaution is a penalty in its utility function. I must be missing something.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-08T21:11:41.226Z · LW(p) · GW(p)
I'm assuming this is just a deontological rule along the lines, "If X happens, shut down." (If the Programmer was dumb enough to assign a super-high utility to shutting down after X happens, this would explain the whole scenario - the AI did the whole thing just to get shut down ASAP which had super-high utility - but I'm not assuming the Programmer was that stupid.)
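For what it's worth, the distinction being drawn here can be made concrete with a minimal sketch (hypothetical Python, with made-up names and numbers; nothing below comes from the story or from any actual AI design): a hard "if X happens, shut down" side constraint versus shutdown treated as just another outcome with super-high utility.

```python
# Minimal illustrative sketch; all names and numbers are hypothetical.
POPULATION = 6_700_000_000   # assumed living human population
THRESHOLD = 0.90             # "critical threshold, defined to be ninety percent"

def precaution_triggered(death_requests: int) -> bool:
    """The precaution-style condition: enough humans have requested the AI's death."""
    return death_requests / POPULATION >= THRESHOLD

def act_with_side_constraint(death_requests: int, utilities: dict) -> str:
    """Deontological reading: the shutdown rule sits outside the utility
    calculation, so the agent gains nothing by bringing the condition about."""
    if precaution_triggered(death_requests):
        return "shut down"
    return max(utilities, key=utilities.get)

def act_with_shutdown_utility(death_requests: int, utilities: dict,
                              shutdown_utility: float = 1e12) -> str:
    """The reading Eliezer says he is NOT assuming: shutting down after the
    condition fires is just another outcome with super-high utility."""
    options = dict(utilities)
    if precaution_triggered(death_requests):
        options["shut down"] = shutdown_utility
    return max(options, key=options.get)

if __name__ == "__main__":
    utilities = {"keep optimizing its goals": 100.0}
    print(act_with_side_constraint(int(0.898 * POPULATION), utilities))   # keep optimizing its goals
    print(act_with_side_constraint(int(0.901 * POPULATION), utilities))   # shut down
    print(act_with_shutdown_utility(int(0.901 * POPULATION), utilities))  # shut down
```

Under the first reading the agent is indifferent to whether the threshold is ever crossed; under the second, crossing it becomes the most valuable thing in the world, which is the "Programmer was that stupid" scenario Eliezer is explicitly setting aside.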
↑ comment by Shmi (shminux) · 2013-02-08T21:45:27.843Z · LW(p) · GW(p)
I'm assuming this is just a deontological rule
Ah, thank you, I get it now. I guess for me deontology is just a bunch of consequentialist computational shortcuts, necessary because of the limited computational capacity of the human brain and because of the buggy wetware.
Presumably the AI in this failed utopia would not need deontology, since it has enough power and reliability to recompute the rules every time it needs to make a decision based on terminal goals, not intermediate ones, and so it would not be vulnerable to lost purposes.
comment by [deleted] · 2013-02-08T21:21:54.514Z · LW(p) · GW(p)
I went through the story again, and I realized a perspective on it that I don't think I noticed on my first read-through. To start off with:
A: Almost everyone is viciously upset and wants the AI's death, very, very quickly.
B: Even the AI is well aware that it failed.
C: 89.8% of the human species includes people who aren't even CLOSE to any kind of romantic/decision-making age (at least according to http://populationpyramid.net/).
D: Yet the AI has to have failed so horribly that the implied statistics are that almost every human being remaining alive who is capable of expressing the thought "I want you dead" wants it dead.
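A quick back-of-the-envelope check of C and D (the 27% age share below is an assumed round figure for illustration, not something taken from the story or from the linked site):

```python
# Hypothetical round numbers, for illustration only.
below_romantic_age_share = 0.27   # assumed share of humans well below any "romantic/decision-making age"
requesting_share = 0.898          # fraction of the whole species requesting the AI's death, per the story

# If only people old enough to be directly hit by the forced relocation and
# divorce wanted the AI dead, the species-wide figure would top out at:
ceiling = 1.0 - below_romantic_age_share
print(ceiling, requesting_share > ceiling)   # 0.73 True
# 89.8% overshoots that ceiling, so either a great many children also want the
# AI dead, or the living population (the denominator) has shrunk -- which is
# what the 5a/5b speculation below is about.
```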
Now if the AI actually did something like this:
1: Terraform Mars and Venus
2: Relocate all Heterosexual Cisgender Adult Males to Mars, boost health.
3: Relocate all Heterosexual Cisgender Adult Females to Venus, boost health.
4: Make Complementary Partners on Mars and Venus.
Then it seems to imply, but not say:
5a: A large number of minor children have been abandoned to their deaths, since any remaining adults still on Earth can't possibly take care of 100% of the remaining minor children in the wake of the massive societal disruption of being left behind. Oh, and neither the remaining adults nor the children get boosted health, either. So, all those people in #2 and #3? You'll probably outlive your minor children even if they DID survive, and you get no say in it.
5b: Everyone in 5a was just killed very fast, possibly by being teleported to the Moon.
Either of those might be a horrible enough thing for the AI to have such a monumentally bad approval rating for a near total death wish to occur so quickly. But little else would.
I love my wife, A LOT, but I don't think that she and I being moved to separate planets, where we were both given an amicable divorce by force and received compensation for not being able to see several of our family members for years, would make me start hurling death wishes at the only thing which could hypothetically reverse the situation and which obviously has an enormous amount of power. And even if it did, applying that to 89.8% of people doesn't seem likely. I think a lot of them would spend much more time just being in shock until they got used to it.
On the other hand, if you kill my baby nephew and my young cousins and a shitload of other people then I can EASILY see myself hurling around death wishes on you, whether or not I really mean them, and hitting 89.8% feels much more likely.
If there are no implied deaths, then it seems like a vast portion of humanity is being excruciatingly dumb and reactionary for no reason, much like Stephen Grass, unless Stephen Grass DID realize the implied deaths and that's why he vomited when he did.
This seems to be sort of left up to the reader, since all Yudkowsky said in http://lesswrong.com/lw/xu/failed_utopia_42/qia was
Indeed. It's not clear from the story what happened to them, not to mention everyone who isn't heterosexual. Maybe they're on a moon somewhere?
Whether or not that moon has been Terraformed/Paradised or is still a death trap makes a rather huge difference. Although I may just be reading too much into a plot hole, since he also said:
I'll note that I wrote this story in one night.
(elsewhere in the thread: http://lesswrong.com/lw/xu/failed_utopia_42/t4d)
↑ comment by beoShaffer · 2013-02-08T21:29:17.410Z · LW(p) · GW(p)
I assume that the children were forcibly separated from their families and placed with people (or "people") who will be "better" for them in the long run.
↑ comment by [deleted] · 2013-02-08T21:48:19.730Z · LW(p) · GW(p)
That may have been the case (since it is unclear in the story), but from my perspective that still doesn't seem bad enough to cause a near-species-wide death rage, particularly since, if the children are still alive, they might count for AI voting rights as members of the human species. It seems the AI would have to have done something currently almost universally regarded as utterly horrible and beyond the pale.
There are a lot of possible alternatives, though. Example further alternative: All of Earth's children were sold to the Baby Eating Aliens for terraforming technology.
Link: http://lesswrong.com/lw/y5/the_babyeating_aliens_18/
↑ comment by beoShaffer · 2013-02-08T22:45:20.807Z · LW(p) · GW(p)
particularly since, if the children are still alive, they might count for AI voting rights as members of the human species.
In the short run I imagine the kids are quite upset about being separated from their families and being told they'll never see them again. I don't have, or work around, kids, so I don't know that this would translate into wishing the AI dead, but it feels plausiblish.