The Sword of Good

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T00:53:55.237Z · LW · GW · Legacy · 303 comments


...fragments of a book that would never be written...

*      *      *

Captain Selena, late of the pirate ship Nemesis, quietly extended the very tip of her blade around the corner, staring at the tiny reflection on the metal.  At once, but still silently, she pulled back the sword; and with her other hand made a complex gesture.

The translation spell told Hirou that the handsigns meant:  "Orcs.  Seven."

Dolf looked at Hirou.  "My Prince," the wizard signed, "do not waste yourself against mundane opponents.  Do not draw the Sword of Good as yet.  Leave these to Selena."

Hirou's mouth was very dry.  He didn't know if the translation spell could understand the difference between wanting to talk and wanting to make gestures; and so Hirou simply nodded.

Not for the first time, the thought occurred to Hirou that if he'd actually known he was going to be transported into a magical universe, informed he was the long-lost heir to the Throne of Bronze, handed the legendary Sword of Good, and told to fight evil, he would have spent less time reading fantasy novels.  Joined the army, maybe.  Taken fencing lessons, at least.  If there was one thing that didn't prepare you for fantasy real life, it was sitting at home reading fantasy fiction.

Dolf and Selena were looking at Hirou, as if waiting for something more.

Oh.  That's right.  I'm the prince.

Hirou raised a finger and pointed it around the corner, trying to indicate that they should go ahead -

With a sudden burst of motion Selena plunged around the corner, Dolf following hard on her heels, and Hirou, startled and hardly thinking, moving after.

(This story ended up too long for a single LW post, so I put it on yudkowsky.net.
Do read the rest of the story there, before continuing to the Acknowledgments below.)


Acknowledgments:

I had the idea for this story during a conversation with Nick Bostrom and Robin Hanson about an awful little facet of human nature I call "suspension of moral disbelief".  The archetypal case in my mind will always be the Passover Seder, watching my parents and family and sometimes friends reciting the Ten Plagues that God is supposed to have visited on Egypt.  You take drops from the wine glass - or grape juice in my case - and drip them onto the plate, to symbolize your sadness at God slaughtering the first-born male children of the Egyptians.  So the Seder actually points out the awfulness, and yet no one says:  "This is wrong; God should not have done that to innocent families in retaliation for the actions of an unelected Pharaoh."  I forget when I first realized how horrible that was - the real horror being not the Plagues, of course, since they never happened; the real horror is watching your family not notice that they're swearing allegiance to an evil God in a happy wholesome family Cthulhu-worshiping ceremony.  Arbitrarily hideous evils can be wholly concealed by a social atmosphere in which no one is expected to point them out and it would seem awkward and out-of-place to do so.

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective.  And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic.  Genocide, mind-rape, eternal torture, anything.

Explicit inspiration was taken from this XKCD (warning: spoilers for The Princess Bride), this Boat Crime, and this Monty Python, not to mention that essay by David Brin and the entire Goblins webcomic.  This Looking For Group helped inspire the story's title, and everything else flowed downhill from there.

303 comments


comment by Kaj_Sotala · 2009-09-03T07:15:40.466Z · LW(p) · GW(p)

Why not put a copy of those acknowledgements at the bottom of the story itself, as well?

I suspect lots of people are going to see the story and only think of it as a neat story. Explicitly bringing out the lesson would help, plus I thought the Passover example was really interesting. It wouldn't hurt to make more people see it.

comment by AllanCrossman · 2009-09-03T14:11:34.196Z · LW(p) · GW(p)

I wonder if you might have seen this essay by David Brin...

Now ponder something that comes through even the party-line demonization of a crushed enemy -- this clear-cut and undeniable fact: Sauron's army was the one that included every species and race on Middle Earth, including all the despised colors of humanity, and all the lower classes.

Hmm. Did they all leave their homes and march to war thinking, "Oh, goody, let's go serve an evil Dark Lord"?

Or might they instead have thought they were the "good guys," with a justifiable grievance worth fighting for, rebelling against an ancient, rigid, pyramid-shaped, feudal hierarchy topped by invader-alien elfs and their Numenorean-colonialist human lackeys?

Picture, for a moment, Sauron the Eternal Rebel, relentlessly maligned by the victors of the War of the Ring -- the royalists who control the bards and scribes (and moviemakers). Sauron, champion of the common Middle Earthling! Vanquished but still revered by the innumerable poor and oppressed who sit in their squalid huts, wary of the royal secret police with their magical spy-eyes, yet continuing to whisper stories, secretly dreaming and hoping that someday he will return ... bringing more rings.

Replies from: sketerpot, Eliezer_Yudkowsky, Douglas_Knight, cousin_it, TuviaDulin
comment by sketerpot · 2009-09-03T18:17:37.051Z · LW(p) · GW(p)

My guess would be that Mordor is a totalitarian communist state, formed on promises of empowerment of the People, and then turned into a horrible labor camp with collective farms by Lake Nurnen and armies of expendable mooks kept in line by harsh superiors (think Commissars), along with heavy racist and nationalistic propaganda so they hate their enemies more than they hate their own rulers. Remember the communist revolution that happened in the Shire while our heroes were out destroying the One Ring? It started out as a ham-fisted attempt at social justice, and before long people were disappearing for being enemies of the state. Imagine that, but on a larger scale, and many times worse, and festering for generations.

The orcs (et al) don't have to be inherently evil for Sauronland to be an evil nation.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:54:43.536Z · LW(p) · GW(p)

Right! I forgot this. Will add to acknowledgments.

comment by Douglas_Knight · 2009-09-03T19:43:25.505Z · LW(p) · GW(p)

When Brin (via RH) invoked his article on Overcoming Bias, Brian Moore (and Eliezer) invoked Jacqueline Carey's "Sundering". I'm surprised that Carey didn't show up in the acknowledgements. Brin & Carey are mentioned in another (ex-)OB thread.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:55:17.143Z · LW(p) · GW(p)

Carey's book is even more powerful but it sends a totally different message - the idea that both sides have their reasons.

comment by cousin_it · 2009-09-03T14:51:41.841Z · LW(p) · GW(p)

Nick Perumov wrote a huge fan-sequel to LOTR in exactly this vein. In the end the new rebel leader (who started out pretty good and gathered races with legitimate grievances) zbecuf vagb n zbafgre orpnhfr ur'q hfrq gur anmthyf' yrsgbire evatf gb tnva fgeratgu, naq hcba ernyvmvat gung ur fheeraqref gb gur cebgntbavfg gb trg xvyyrq.

EDIT: rot13'd the spoilers. Which doesn't mean I recommend reading the book!

Replies from: Eliezer_Yudkowsky, dclayh
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:54:26.163Z · LW(p) · GW(p)

Please edit this to rot13 the spoilers. You don't say: "X wrote a wonderful story and here's the ending", just "X wrote a wonderful story and here's the link".

comment by dclayh · 2009-09-03T18:36:23.079Z · LW(p) · GW(p)

Where can I read this?

Replies from: cousin_it
comment by cousin_it · 2009-09-03T19:40:04.786Z · LW(p) · GW(p)

I don't advise you to, and anyway who'd translate a Russian fanfic into English and put it online?

Replies from: jaimeastorga2000
comment by jaimeastorga2000 · 2010-10-23T15:19:12.885Z · LW(p) · GW(p)

Google Translate? Assuming there was a digital copy, anyways.

comment by TuviaDulin · 2012-04-11T19:07:44.330Z · LW(p) · GW(p)

I think Sauron did enough explicitly evil stuff to make himself the bad guy. Tricking the Numenoreans into destroying themselves out of spite is pretty hard to justify.

There's also the fact that orcs don't have free will. They were created from tortured elves and mindraped into obedience. The fact that Sauron was willing to use them as cannon fodder rather than trying to find a way to reverse what Melkor did to them speaks volumes about his moral virtue.

Finally, the rings. Using mind control to turn foreign leaders into your obedient thralls, consoling them with the promise that they will be able to crush others under their heel as you crushed them. Real nice of Sauron.

Middle Earth was a flawed world filled with the same evils and injustices as our own, but Sauron was almost definitely the worst thing in it. I'll give David Brin some credit, though, as Tolkien did a pretty bad job of explaining the situation in Lord of the Rings. You have to read the Silmarillion (or, as in my case, talk to another guy who has read the Silmarillion, as I lacked the patience to wade through another gazillion pages of archaic English) to understand what's going on, which is a major failing of LotR.

Replies from: Vaniver
comment by Vaniver · 2012-04-11T19:29:05.415Z · LW(p) · GW(p)

They were created from tortured elves and mindraped into obedience.

Did you learn this from an unbiased source?

Finally, the rings. Using mind control to turn foreign leaders into your obedient thralls, consoling them with the promise that they will be able to crush others under their heel as you crushed them. Real nice of Sauron.

Suppose you're the prime minister of a parliamentary republic, and the neighboring country is ruled by hereditary nobility that mostly hate each other, and wars between the barons ruin a lot of the land and kill a lot of the peasants. You, being a genius engineer, have figured out a way to control people, but it requires they wear the device for an extended period of time, the effects are obvious, and they can take it off before the process is complete if they feel like it.

This hereditary nobility situation is obviously not going to fix itself - and you figure that the easiest way to fix it is to corrupt all the nobility, playing on their hatred of each other to get them to wear the devices long enough for them to work, and then have them give you power in a bloodless coup. As a bonus, you now have fanatically loyal assassins / spec ops forces, and an eternity of servitude seems like a fitting punishment for their misconduct as rulers.

Replies from: TuviaDulin
comment by TuviaDulin · 2012-04-11T20:10:41.446Z · LW(p) · GW(p)

"Did you learn this from an unbiased source?"

I'm pretty sure it was in Tolkien's notes.

"Suppose you're the prime minister of a parliamentary republic, and the neighboring country is ruled by hereditary nobility that mostly hate each other, and wars between the barons ruin a lot of the land and kill a lot of the peasants. You, being a genius engineer, have figured out a way to control people, but it requires they wear the device for an extended period of time, the effects are obvious, and they can take it off before the process is complete if they feel like it."

Except that's exactly what Sauron DIDN'T do. Mordor was not a parliamentary republic; more like a military dictatorship with semi-mindless orc drones enforcing Sauron's commands over his human subjects. The monarchs who were given the rings - however just or unjust their rule might have been, and however flawed the notion of monarchy as a political system - were lied to about what the rings did, and the rings' effects were very subtle at first.

It's also worth noting that the human kings didn't become any kinder or more democratic in their sensibilities once they fell under Sauron's influence. The Witch King was still a king, and a much more murderous one than he was in life. Unleashing barrow-wights on a partially civilian population, torturing Gollum for information, and stabbing an innocent (if possibly misguided) hobbit when he didn't have to are all things that the Witch King did in person.

"This hereditary nobility situation is obviously not going to fix itself- and you figure that the easiest way to fix it is to corrupt all the nobility, playing on their hatred of each other to get them to wear the devices long enough for them to work, and then have them give you power in a bloodless coup. As a bonus, you now have fanatically loyal assassins / spec ops forces, and an eternity of servitude seems like a fitting punishment for their misconduct as rulers."

In other words, the only way to improve the world is to become just as bad as the people currently running it? The best solution to dictatorships is to make slaves of your own, and for all eternity no less?

I think you're going out of your way to defend Brin's essay rather than actually using your own moral judgement. You can easily say that the "good guys" in Lord of the Rings weren't all that good, but Sauron was very obviously worse.

Replies from: Vaniver
comment by Vaniver · 2012-04-11T20:19:02.283Z · LW(p) · GW(p)

I'm pretty sure it was in Tolkien's notes.

Right, and Brin's premise is that Tolkien is a biased source.

In other words, the only way to improve the world is to become just as bad as the people currently running it? The best solution to dictatorships is to make slaves of your own, and for all eternity no less?

If those slaves were the dictators of the old era? Seems suitably karmic.

I think you're going out of your way to defend Brin's essay rather than actually using your own moral judgement.

"My own moral judgment" is a tricky thing in this situation, as it depends on which situation we're describing.

If I have first-hand experience of the events of LotR, and everything is as Tolkien describes it, then yeah, it's pretty obvious that Sauron is the bad guy.

If I have third-hand experience of the events of LotR, think that at most 90% of the description is accurate, and I think that the philosophies of the modern day are present in the LotR world, then it seems plausible that Sauron is the good guy, for the reasons Brin describes.

You might be interested in The Sword of Good, if you haven't read it. [edit] It looks like you commented there today, but I'll leave the recommendation here for any spectators to the conversation.

Replies from: thomblake
comment by thomblake · 2012-04-11T20:20:33.158Z · LW(p) · GW(p)

You might be interested in The Sword of Good, if you haven't read it.

Amusing because you linked to this very post.

Replies from: Vaniver
comment by Vaniver · 2012-04-11T20:34:45.979Z · LW(p) · GW(p)

That is amusing, and what I get for jumping into conversations from the Recent Comments link and not thinking to check where the conversation is happening. I'm tempted to edit it out, but might as well leave it for posterity.

comment by JulianMorrison · 2009-09-07T11:16:43.419Z · LW(p) · GW(p)

Extend this beyond fiction. What misdeeds are we shrugging off because they're normal?

Replies from: rwallace, Richard_Kennaway, taryneast, betterthanwell, bruno-mailly, lmm, taryneast
comment by rwallace · 2009-09-07T23:25:56.712Z · LW(p) · GW(p)

Apartheid based on age that replaces the previous versions based on race and sex.

The morally indefensible and insanely self-destructive attempt to mitigate drug addiction by banning drugs.

comment by Richard_Kennaway · 2009-09-07T12:23:31.855Z · LW(p) · GW(p)

What misdeeds are we shrugging off because they're normal?

Religion. Schools. Television. Not caring about people remote from you. Spending effort on trifles. Akrasia. Irrationality.

Some would say, having political beliefs different from mine.

Replies from: JulianMorrison, thomblake, Psy-Kosh
comment by JulianMorrison · 2009-09-07T12:27:26.468Z · LW(p) · GW(p)

Burial/cremation.

Loss of time to work. Loss of utility to unemployment.

The way children get so few civil rights they're used as excuses for removing rights from adults.

Replies from: Alicorn
comment by Alicorn · 2009-09-07T13:26:52.130Z · LW(p) · GW(p)

The way children get so few civil rights they're used as excuses for removing rights from adults.

I am aware of the poor state of affairs re: children's rights, but I'm not sure what you're getting at by citing consequences for adults. Can you elaborate?

Replies from: JulianMorrison, cousin_it
comment by JulianMorrison · 2009-09-07T13:41:29.204Z · LW(p) · GW(p)

Just think how much legislation that restricts adults has been sold on the premise that it "protects children", especially from non-harmful things like porn and homosexuality.

comment by thomblake · 2009-09-09T14:00:05.480Z · LW(p) · GW(p)

Schools.

Thanks for saying it

comment by Psy-Kosh · 2009-09-09T15:19:30.623Z · LW(p) · GW(p)

As far as schools, do you mean something about the specific way that we have schooling set up currently (and do you include universities in that?) or do you mean more generally?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-09-09T15:41:56.547Z · LW(p) · GW(p)

As far as schools, do you mean something about the specific way that we have schooling set up currently (and do you include universities in that?) or do you mean more generally?

I had in mind the education of children in school, as done in, I think, all of the developed world and a lot of the rest, and critiques like this one.

Universities may also have their faults, but not on the scale of misdeeds being considered, and, anyway, the people in them chose to go there.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-09T15:58:50.265Z · LW(p) · GW(p)

Aaaah, okay. Yeah, I agree that that's a nasty aspect of our system.

comment by taryneast · 2011-01-10T12:14:00.173Z · LW(p) · GW(p)

Having to kowtow/kiss up to bullies because they're part of a hierarchy (e.g. in business), rather than being treated with respect as a human being. Further, being expected to also treat your subordinates badly and thus perpetuate the hierarchy.

comment by betterthanwell · 2009-09-07T20:52:35.216Z · LW(p) · GW(p)

What misdeeds are we shrugging off because they're normal?

Eating mammals. More generally: non-vegetarianism.

Replies from: army1987, rwallace, AdShea
comment by A1987dM (army1987) · 2012-04-12T14:39:10.116Z · LW(p) · GW(p)

Who says it's a misdeed?

Replies from: shokwave
comment by shokwave · 2012-04-12T15:53:29.771Z · LW(p) · GW(p)

User:betterthanwell, I presume.

comment by rwallace · 2009-09-07T23:23:39.800Z · LW(p) · GW(p)

Specifically, most people assert that animals are sentient; yet most people are not vegetarians, even though eating meat is no longer necessary for survival. There is an inconsistency between these positions.

Replies from: eirenicon, wedrifid
comment by eirenicon · 2009-09-08T19:24:12.522Z · LW(p) · GW(p)

You missed the step where you assert that most people assert it is wrong to eat sentient animals, which is what would create the inconsistency, were most people to assert that.

Replies from: army1987, rwallace
comment by A1987dM (army1987) · 2012-04-12T14:42:36.261Z · LW(p) · GW(p)

You also need the word sentient to mean the same thing in both premises, otherwise it's like “feathers are light, what's light is not dark, therefore feathers are not dark”.

comment by rwallace · 2009-09-08T23:11:13.339Z · LW(p) · GW(p)

Okay, but if offered the opportunity to kill and eat a human, or an elf, or a Wookie, most people would recoil in moral revulsion, and if you asked them "is that because you think it's wrong to kill and eat sentient beings" would probably say yes, so I think most people do assert that.

Replies from: Eliezer_Yudkowsky, eirenicon, Mass_Driver, Armok_GoB, Johnicholas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-09T00:30:52.851Z · LW(p) · GW(p)

Actually, "Would you eat a Wookie?" is probably a helpful distinguishing question here. For me the answer is obviously "No!" and occurs with the same fleeting nausea as "Would you eat a human being?" But I grew up reading SF books like Little Fuzzy that teach personhood theory in a very visceral way. Other readers claimed they weren't bothered by the Babyeaters because the children eaten weren't human!

Replies from: thomblake
comment by thomblake · 2009-09-09T14:02:52.514Z · LW(p) · GW(p)

because the children eaten weren't human!

Indeed, one thing that surprises ethicists the first time they teach is that in ordinary English, 'person' and 'human' mean the same thing - so most intro students, when asked 'is Yoda a person', will answer 'no', even though they'd answer 'yes' to 'is Luke Skywalker a person'.

Replies from: Eliezer_Yudkowsky, Alicorn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-09T18:06:54.989Z · LW(p) · GW(p)

Maybe you need to ask, "Would you eat Yoda if his species were tasty?"

Replies from: Desrtopa
comment by Desrtopa · 2011-05-30T06:59:42.235Z · LW(p) · GW(p)

I was just the other day lamenting how many people, even largely intelligent and conscientious seeming individuals, answer "Yes" to the OkCupid match question "If you landed on an alien planet where the local intelligent life form tasted unbelievably good, would you eat them?"

comment by Alicorn · 2009-09-09T14:34:07.137Z · LW(p) · GW(p)

I'm TAing discussion sections for the first time today, and based on some of the nonsense the students spouted in lecture yesterday, I'm going to need to cover what those words mean.

Replies from: Alicorn
comment by Alicorn · 2009-09-09T20:37:03.355Z · LW(p) · GW(p)

Update: I had one person say she would be fine with barbecuing Yoda because he wasn't human. I used this to segue into my explanation of what it means to bite the bullet.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-09T20:41:08.124Z · LW(p) · GW(p)

I begin to wonder if your students are people.

comment by eirenicon · 2009-09-09T00:09:36.318Z · LW(p) · GW(p)

I imagine that would be because most people don't understand that sentient beings includes chickens, lobsters[1], and unborn fetuses (not that many people would agree with eating fetuses). If you asked "is that because you think it's wrong to kill and eat beings that are capable of perceiving stimuli" most would probably disagree with you. Now, if you asked "is that because you think it's wrong to kill and eat beings that are capable of doing algebra," you'd probably get a different response.

The reason people wouldn't eat an elf isn't because it's a sentient being, it's because it's a human equivalent sentient being. So you need to reach beyond sentience to find your inconsistency.

And of course, the reason people wouldn't eat a Wookie is because it probably would taste like an old boot.

[1]Research in recent years suggests that crustaceans may be capable of feeling pain and stress.

Replies from: betterthanwell
comment by betterthanwell · 2009-09-10T15:53:08.876Z · LW(p) · GW(p)

Research in recent years suggests that crustaceans may be capable of feeling pain and stress.

Pain and stress in crustaceans? Source: Applied Animal Behaviour Science.

We consider evidence that crustaceans might experience pain and stress in ways that are analogous to those of vertebrates. Various criteria are applied that might indicate a potential for pain experience: (1) a suitable central nervous system and receptors, (2) avoidance learning, (3) protective motor reactions that might include reduced use of the affected area, limping, rubbing, holding or autotomy, (4) physiological changes, (5) trade-offs between stimulus avoidance and other motivational requirements, (6) opioid receptors and evidence of reduced pain experience if treated with local anaesthetics or analgesics, and (7) high cognitive ability and sentience. For stress, we examine hormonal responses that have similar function to glucocorticoids in vertebrates. We conclude that there is considerable similarity of function, although different systems are used, and thus there might be a similar experience in terms of suffering. The treatment of these animals in the food industry and elsewhere might thus pose welfare problems.

No more prawn cocktails or shrimp sandwiches for me.

Replies from: thomblake
comment by thomblake · 2009-09-10T17:53:01.708Z · LW(p) · GW(p)

Is there really a place where both 'prawn' and 'shrimp' are used? What's the difference?

Replies from: Baughn
comment by Baughn · 2010-04-30T12:34:55.253Z · LW(p) · GW(p)

You probably looked it up a long time ago, but for any future readers: They're different groups of species. Both are soft-shelled crustaceans, but that's where the similarity ends.

Any morphological similarities are probably down to convergent evolution.

Replies from: thomblake
comment by thomblake · 2010-04-30T13:15:57.745Z · LW(p) · GW(p)

Ha... actually, I didn't look it up at all. According to Wikipedia, you're right, but 'shrimp' is the common name for a lot of things that get called 'prawns' outside of the US.

comment by Mass_Driver · 2010-07-12T14:53:26.623Z · LW(p) · GW(p)

Part of what bothers me about the idea of eating an elf or Wookie is that they don't feel like prey -- they feel like peers. When I see a fox, e.g., it doesn't make me hungry -- the fox doesn't seem like it's below me on the food chain. When I see a rabbit or a pigeon, it does make me hungry -- I can imagine what it would be like to hunt, clean, roast, and gnaw on it.

On the other hand, I wouldn't hesitate to kill 5 foxes to save one elf or human or Wookie.

I would not, however, hunt a rabbit or a fox for sport; that seems unnecessarily cruel.

One way of accounting for all these moral intuitions is that rabbits, foxes, and Wookies are all sentient; one should not cause pain to sentient creatures for amusement. Foxes and Wookies are ecological peers; one should not eat ecological peers. Wookies are people; one should not trade off the lives of people against roughly comparable numbers of lives of non-people.

comment by Armok_GoB · 2011-01-09T23:38:06.712Z · LW(p) · GW(p)

I think I'm among the few who, after realizing this - as well as how icky the sources of most foods are when you think about them, and that most of the danger from eating stuff comes from things that don't seem disgusting - decided that food revulsion is not a part of me and that I should be perfectly fine with eating human flesh or drinking [self-censored]. I haven't actually tested any of this, so I'm not sure if my brain would go along with it.

comment by Johnicholas · 2009-09-08T23:28:06.513Z · LW(p) · GW(p)

I think Joshua Greene, among others, has investigated these sort of things (moral intuitions, and the justifications people typically give, which may be a sort of confabulation).

http://www.wjh.harvard.edu/~jgreene/

comment by wedrifid · 2009-09-08T18:27:08.250Z · LW(p) · GW(p)

Specifically, most people assert that animals are sentient; yet most people are not vegetarians, even though eating meat is no longer necessary for survival. There is an inconsistency between these positions.

No there isn't. It implies that they violate another norm that you value but it is not inconsistent.

comment by AdShea · 2010-12-02T22:57:53.847Z · LW(p) · GW(p)

I think being non-vegetarian is less evil than being a morally inconsistent non-vegetarian. If you would have moral trouble being introduced to your food (or raising it) then you shouldn't be eating it.

Replies from: Swimmy, lmm
comment by Swimmy · 2012-04-12T06:52:48.694Z · LW(p) · GW(p)

I don't see why. For clarity, since we probably agree it's wrong, imagine you're making the same argument for cannibalism instead. One person says, "I'm fine with eating and farming humans but if I get to know one first, doing it would make me feel bad." Another says, "Screw that, I'll eat anyone, even if I know them and their children!"

The second person is more morally consistent and also more callous. Even if there's no difference in the way they live their lives, trying to end the holocaust of humans for food would be easier in a world full of the first type than the second.

Just as I would prefer the opposite of rule of law when the law is uniformly terrible, I prefer the opposite of moral consistency when a morality is terrible.

Replies from: wedrifid
comment by wedrifid · 2012-04-12T08:19:22.553Z · LW(p) · GW(p)

I don't see why. For clarity, since we probably agree it's wrong, imagine you're making the same argument for cannibalism instead. One person says, "I'm fine with eating and farming humans but if I get to know one first, doing it would make me feel bad." Another says, "Screw that, I'll eat anyone, even if I know them and their children!"

The second person is more morally consistent and also more callous.

The consistency difference seems minimal. The most obvious moral rule in play is "Don't do harmful things to those people who are socially near" combined with a moral indifference to cannibalistic farming but acknowledgement that it is undesirable to be so farmed and eaten. This isn't a complex or unusual morality system (where arbitrary complexity seems to be what we mean when we say 'inconsistent').

comment by lmm · 2013-09-16T22:00:38.235Z · LW(p) · GW(p)

I've never understood this argument. I have a visceral reaction against surgery (even the sight of blood can set me off); I certainly couldn't stand to be in the same room in which surgery was being performed. Does this mean that for consistency I'm required to morally oppose surgery?

comment by Bruno Mailly (bruno-mailly) · 2021-12-28T14:20:57.621Z · LW(p) · GW(p)

Advertisement.

AKA parasitic manipulation so normalized it invades every medium and pollutes our minds by hogging our attention, numbing our moral sense of honesty, and preventing a factual information system from forming.

comment by lmm · 2013-09-16T22:04:07.104Z · LW(p) · GW(p)

It's striking how different our cultural response seems to be to political assassination by knife or political assassination by airstrike.

Replies from: Nornagest
comment by Nornagest · 2013-09-16T22:34:59.892Z · LW(p) · GW(p)

I don't think this is the right distinction. Osama bin Laden for example was killed in person by American special forces, which probably isn't that unusual a type of targeted killing but rarely makes it into the news, and the method didn't seem to attract much mainstream comment.

I think we're looking at more of a tribal distinction, or possibly a cultural feeling that different rules apply in engagements among states than between states and non-state actors. (Compare the death of Chris Dorner, the shooter targeting LAPD officers a few months ago.)

comment by taryneast · 2011-01-09T21:22:38.787Z · LW(p) · GW(p)

People who deliberately brainwash their children into beliefs about the world for which the evidence overwhelmingly points to their being false, simply because it's what has been taught for >2000 years.

Edit: technically not provably-false...

comment by swestrup · 2009-09-03T15:44:32.976Z · LW(p) · GW(p)

My first impression of this story was very positive, but as it asks us to ask moral questions about the situation, I find myself doing so and having serious doubts about the moral choices offered.

First of all, it appears to be a choice between two evils, not evil and good. On one hand is a repressive king-based classist society that is undeniably based on socially evil underpinnings. On the other hand we have an absolute, unquestionable tyranny that plans to do good. Does no one else have trouble deciding which is the lesser problem?

Secondly, we know for a fact that, in our world, kingdoms and repressive regimes sometimes give way to more enlightened states, and we don't know enough about the world to even know how many different kingdoms there are or what states of enlightenment exist elsewhere. For all we know things are on the verge of a (natural) revolution. We can't say much about rule by an infinite power, having no examples to hand, but there is the statement that "power corrupts". Now, I'm not going to say that this is inevitable, but I have at least to wonder if an integration over total sentient happiness going forward is higher in the old regime and its successors, or in the Infinite Doom regime.

Finally, the hero is big into democracy. Where in either of these choices does the will of the peasants fit in anywhere?

EDIT: One more point I wanted to add: since it's clearly not a Choice Between Good and Evil as the prophecy states, why assume there is a choice, or that there are only two options? Would not a truly moral person look for a third alternative?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2009-09-03T18:26:50.290Z · LW(p) · GW(p)

Does no one else have trouble deciding which is the lesser problem?

I gathered that the choice being a difficult one was the whole point. It's not a genuine choice if the right choice is obvious, that much was explicitly stated.

You say it "clearly" wasn't a Choice Between Good and Evil, but I don't think that's clear. One choice might still have a good outcome and the other an evil one. It's just that we don't know which one is which.

Replies from: swestrup
comment by swestrup · 2009-09-03T21:21:34.078Z · LW(p) · GW(p)

I would say that the likelihood is overwhelming that BOTH choices will lead to bad ends. The only question is which is worse. That's why I was saying it was between two evils.

Besides, it's hard to reconcile the concept of 'Good' with a single flawed individual deciding the fate of the world, possibly for an infinite duration. The entire situation is inherently evil.

Replies from: Aurini
comment by Aurini · 2009-09-04T03:01:33.645Z · LW(p) · GW(p)

Though it wasn't explicitly said, it was heavily implied that either choice would be for a potentially infinite duration. This is a world of fantasy and prophecy, after all: I got the impression that the current social order was stable, and given that there was magic (not psychic ability but magic) it's also fair to assume that the scientific method doesn't work (not that this makes any sense, but you have to suspend that disbelief for magic to work [gnomes are still allowed to build complex machines, they're just not allowed to build useful machines]).

The way I interpreted it was that he had a choice between the status quo for 1000 years, or an unknown change, guided by good intentions, for 1000 years.

Besides, the Big Bad was Marty Stu. How could I not side with him?

(Another great work, Yudkowsky - you really should send one of these to Asimov's SciFi.)

Replies from: swestrup
comment by swestrup · 2009-09-04T04:47:28.539Z · LW(p) · GW(p)

Interesting. It's hard to reconstruct my reasoning exactly, but I think that I assumed that things I didn't know were simply things I didn't know, and based my answer on the range of possibilities -- good and bad.

Replies from: Aurini
comment by Aurini · 2009-09-05T09:17:41.217Z · LW(p) · GW(p)

Huh; I thought my browser had failed, and this post hadn't appeared. Anyway...

There's an old army saying: "Being in the army ruins action movies for you." I feel the same way about 'scifi' - Aside from season 3, every episode of Torchwood (that I've recently started watching, now that I finished Sopranos) is driving me up the wall. I propose a corollary saying:

"Understanding philosophical materialism and the implications thereof ruins 99% of Science Fiction... and don't get me started on Fantasy!"

In my opinion, there are three essential rules to Fantasy:

  1. The protagonist is a priori important; by their very nature they have metaphysical relevance (even though they don't know it yet!). All other characters are living their rightful and deserved life, unless they are below their means with a Heart of Gold.

  2. The scientific method (hypothesis, experiment, conclusion, theory) only works in the immediate sense, not the broad sense; your immediate world will be logical, but the world as a whole is incomprehensible. You can only build machines if a) they already exist; or b) they serve no practical purpose. Magic, on the other hand, generally works as intended; the human will guides it, and can only be contravened by another magical authority (a navigation spell will not require knowledge of the local plant life, nor will it require accurate grid coordinates given a non-simultaneous Relativistic geometry). If magic doesn't work as the protagonists intend, it will be working under a higher moral power.

  3. There is an abstract and absolute division between Right and Wrong; somebody is keeping score, and no actions are hidden. Your evil acts might escape the notice of the local authorities, but they will show through in your bearing, your beauty, or your image.

Heh, this might be worth a top level post except tvtropes has covered it all already.

Replies from: Eliezer_Yudkowsky, swestrup
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-05T19:05:46.210Z · LW(p) · GW(p)

The most rationalist-relevant TV Tropes would easily be worth a top post or three.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-09-07T11:21:10.618Z · LW(p) · GW(p)

You'd lose your whole crop of rationalists. They would never come back out.

comment by swestrup · 2009-09-07T10:51:51.507Z · LW(p) · GW(p)

I agree, which is why I tend to shy away from performing a moral analysis of Fantasy stories in the first place. That way lies a bottomless morass.

Replies from: Aurini
comment by Aurini · 2009-09-10T04:22:28.853Z · LW(p) · GW(p)

Fantasy stories, and ninety percent of science fiction nowadays...

comment by Furcas · 2009-09-04T06:58:44.635Z · LW(p) · GW(p)

Good story. I love Yudkowskian fiction!

That said, I don't see why Vhazhar would want to touch the Sword of Good if all it does is "test good intentions". Isn't that just another way of saying that if the holder survives, he knows that his terminal values correspond to the Sword's?

It makes sense that Hirou would want Vhazhar to touch the Sword, because since Hirou can touch it, if Vhazhar can touch it too Hirou will know that Vhazhar's terminal values are similar to his own. But why does Vhazhar give a crap about the Sword's terminal values?

Replies from: RobinZ
comment by RobinZ · 2009-09-27T19:32:10.411Z · LW(p) · GW(p)

If the stories of previous wielders of the Sword were public and reasonably accurate, he presumably already evaluated whether the Sword's terminal values match the terminal values he wished to uphold.

Replies from: Furcas
comment by Furcas · 2009-09-27T21:27:09.256Z · LW(p) · GW(p)

Good answer. I guess it depends on what is meant by "good intentions". If subconscious intentions are included, then it would be possible to hold false beliefs about one's own intentions, and being able to touch the Sword would be evidence that these beliefs are mostly correct.

It wouldn't be extremely strong evidence, though. All Vhazhar could know by studying historical records is that previous owners of the Sword acted in accordance with the values Vhazhar believes he has. However, these owners could have been deluded about their true terminal values their entire lives, and the Sword could therefore have been selecting for individuals with terminal values that don't accord with their actions, which means it would be a waste of time for Vhazhar to touch it, at best, or a fatal mistake at worst.

And obviously, if "good intentions" means conscious intentions, then Vhazhar already knows he has the terminal values he believes he has.

Replies from: AdShea
comment by AdShea · 2010-12-02T22:49:56.256Z · LW(p) · GW(p)

As the sword killed 90% of those who touched it, Vhazhar could have, upon reading the records, discovered that the sword allowed only those who would help increase the CEV of sentient life to survive (thus slaughtering a ridiculous number of Cohen-esque "heroes").

comment by cousin_it · 2009-09-03T11:53:43.008Z · LW(p) · GW(p)

No, the Spell of Infinite Doom destroys the Equilibrium. Light and dark, summer and winter, luck and misfortune - the great Balance of Nature will be, not upset, but annihilated utterly; and in it, set in place a single will, the will of the Lord of Dark. And he shall rule, not only the people, but the very fabric of the World itself, until the end of days.

No matter how good a person the Lord may be, if he's human, I'd have tried to stop the spell.

Replies from: Psy-Kosh, Vladimir_Nesov
comment by Psy-Kosh · 2009-09-03T18:01:34.648Z · LW(p) · GW(p)

Something that occurred to me along these lines. (not directly the same, but "close enough" that some of the moral judgments would be equivalent)

Let's say, next week, someone actually solved the mind uploading problem. They have a decision to make: go for it themselves, find someone as trustworthy as possible, forget about the plan and simply wait however long for the FAI math to be solved, etc...

What would you advise? Should they go for it themselves, try to then work out how to incrementally upgrade themselves without absolute disaster, forget it, etc etc etc...? (If nothing else, assume they already have the raw computing power to run a human at a vast speedup)

It's not an identical problem, but it's probably the closest thing.

Replies from: Eliezer_Yudkowsky, cousin_it
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:22:11.328Z · LW(p) · GW(p)

go for it themselves

What, you mean try to self-modify? Oh hell no. Human brain not designed for that. But you would have a longer time to try to solve FAI. You could maybe try a few non-self-modifications if you could find volunteers, but uploading and upload-driven-upgrading is fundamentally a race between how smart you get and how insane you get.

Replies from: MichaelVassar, matt, Vladimir_Nesov, Psy-Kosh, pjeby
comment by MichaelVassar · 2009-09-05T03:55:18.440Z · LW(p) · GW(p)

The modified people can be quite a bit smarter than you are too, so long as you can see their minds and modify them. Groves et al managed to mostly control the Manhattan project despite dozens of its scientists being smarter than any of their supervisors and many having communist sympathies. If he actually shared their earlier memories and could look inside their heads... There's a limit to control, you still won't control an adversarial super intelligence this way, but a friendly human who appreciates your need for power over them? I bet they can have a >50 IQ point advantage, maybe even >70. Schoolteachers control children who have 70 IQ points on them with the help of institutions.

Replies from: Douglas_Knight, Document
comment by Douglas_Knight · 2009-09-05T06:03:37.070Z · LW(p) · GW(p)

Schoolteachers control children who have 70 IQ points on them with the help of institutions.

Is it relevant that IQ is correlated with obedience to authority?

And how dumb do you think schoolteachers are? Bottom of those with BAs. I'd guess 100. And correlated with their pupils.

Replies from: Desrtopa
comment by Desrtopa · 2011-06-01T18:29:00.425Z · LW(p) · GW(p)

Estimations from SAT scores imply that the IQ of teachers and education majors is below average. Conscientious, hardworking students can graduate from most high schools and colleges with good grades, even if they are fairly stupid, as long as they stay away from courses which demand too much of them, and there are services available for those who are neither hardworking nor conscientious.

Education major courses are somewhat notorious for demanding little of students, and it is a stereotypically common choice for students seeking MRS degrees.

I'd like to imagine that the system would at least filter out individuals who are borderline retarded or below, but experience suggests to me that even this is too optimistic.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-06-02T03:01:53.288Z · LW(p) · GW(p)

I don't buy the conversion in the first link, which is also a dead link. That Ed majors have an SAT score of 950 sounds right. That is 37th percentile among "college-bound seniors." If this population, which I assume means people taking the SAT, were representative of the general population, that would be an IQ of 95, but they aren't. I stand by my estimate of 100.
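
For concreteness, here is the arithmetic behind that percentile-to-IQ conversion, as a minimal Python sketch. It assumes IQ is normal with mean 100 and SD 15, and (counterfactually, which is the point under dispute) that SAT takers are representative of the general population:

    from scipy.stats import norm

    percentile = 0.37               # Ed majors' rank among SAT takers
    z = norm.ppf(percentile)        # ~ -0.33 standard deviations
    iq = 100 + 15 * z               # ~ 95 under the stated assumptions
    print(f"implied IQ: {iq:.0f}")  # -> implied IQ: 95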

I doubt you have much experience with people with an IQ of 85, let alone the borderline retarded.

Replies from: Desrtopa
comment by Desrtopa · 2011-06-02T06:42:55.147Z · LW(p) · GW(p)

What makes you doubt I have much experience with either? IQ 85 is one standard deviation below average; close to 14 percent of the population has an IQ at least that low. The lower limit of borderline retardation, that is, the least intelligent you can be before you are no longer borderline, is two standard deviations below the mean, meaning that about one person in fifty is lower than that.

As it happens, I've spent a considerable amount of time with special needs students, some of whom suffer from learning disabilities which do not affect their reasoning abilities, but some of whom are significantly below borderline retarded.

At the public high school I attended, more than 95% of the students in my graduating year went on to college. While the most mentally challenged students in the area were not mainstreamed and didn't attend the same school, there was no shortage of <80 IQ students.

An average IQ of 100 for education majors would be within the error bars for the aforementioned projection, but some individuals are going to be considerably lower.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-06-02T07:27:23.825Z · LW(p) · GW(p)

At the public high school I attended, more than 95% of the students in my graduating year went on to college. While the most mentally challenged students in the area were not mainstreamed and didn't attend the same school, there was no shortage of <80 IQ students.

Those two sentences are not very compatible.

Replies from: Desrtopa
comment by Desrtopa · 2011-06-02T07:41:10.117Z · LW(p) · GW(p)

The rates at which students progress to college have a lot more to do with parental expectations, funding, and the school environment than the intelligence of the students in question. My school had very good resources to support students in the admissions process, and students who didn't take it for granted that they were college bound were few and far between.

comment by Document · 2010-12-02T23:55:54.542Z · LW(p) · GW(p)

It seems unrealistic to assume that we'll be able to literally read the intentions of the first upload; I'd think that we'd start out not knowing any more about them than we would about an organic person through external scanning.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-12-03T03:44:49.307Z · LW(p) · GW(p)

You won't be able to evaluate their thoughts exactly, but there's a LOT that you should be able to tell about what a person is thinking if you can perfectly record all of their physiological reactions and every pattern of neural activation with perfect resolution, even with today's knowledge. Koch and Crick even found grandmother neurons, more or less.

Replies from: Document
comment by Document · 2010-12-03T07:34:55.341Z · LW(p) · GW(p)

I'd still expect it to be hard to tell the difference between someone thinking about or wanting to kill someone / take over the world and someone actually intending to. But I can imagine at least being able to reliably detect lies with that kind of information, so I'll defer to your knowledge of the subject.

comment by matt · 2009-09-06T04:11:36.255Z · LW(p) · GW(p)

Eliezer, I'm with you that a properly designed mind will be great, but mere uploads will still be much more awesome than normal humans on fast forward.

Without hacking on how your mind fundamentally works, it seems pretty likely that being software would allow a better interface with other software than mouse, keyboard and display does now. Hacking on just the interface would (it seems to me) lead to improvements in mental capability beyond mere speed. This sounds like mind hacking to me (software enhancing a software mind will likely lead to blurry edges around which part we call "the mind"), and seems pretty safe.

Some (pretty safe*) cognitive enhancements:

  • Unmodified humans using larger displays are better at many tasks than humans using small displays (somewhat fluffy pdf research). It'll be pretty surprising if being software doesn't allow a better visual interface than a 30" screen.
  • Unmodified humans who can touch-type spend less time and attention on the mechanics of human machine interface and can be more productive (no research close to hand). Who thinks that uploaded humans are not going to be able to figure better interfaces than virtual keyboards?
  • Argument maps improve critical thinking, but the interfaces are currently clumsy enough to discourage use (lots of clicking and dragging). Who thinks that being software won't provide a better way to quickly generate argument maps?
  • In front of a computer loaded up with my keyboard shortcuts and browser plugins I have easy access to very fast lookup on various web reference sites. At the moment the lookup delay is still long enough that short term memory management (stack overflow after a mere 7±2 pushes) is a problem (when I need a reference I push my current task onto a mental stack; it takes time and attention to pop that task when the reference has been found). Who thinks I couldn't be smarter with a reference interface better than a keyboard?

All of which is just to say that I don't think you've tried very hard to think of safe self-modifications. I'm pretty confident that you could come up with more, and better, and safer than I have.

* Where "pretty safe" means "safe enough to propose to the LW community, but not safe enough to try before submitting for public ridicule"

comment by Vladimir_Nesov · 2009-09-03T19:36:38.326Z · LW(p) · GW(p)

You can make volunteers out of your own copies. As long as the modified people aren't too smart, it's safe to keep them in a sandbox and look through the theoretical work they produce on overdrive.

Replies from: matt
comment by matt · 2009-09-06T04:17:17.071Z · LW(p) · GW(p)

AI boxes are pretty dangerous.

(I agree that "as long as the modified people aren't too smart" you're safe, but we are hacking on minds that will probably be able to hack on themselves, and possibly recursively self-improve if they decide, for instance, that they don't want to be shut down and deleted at the end of the experiment. I'm pretty strongly motivated not to risk insanity by trying dangerous mind-hacking experiments, but I'm not going to be deleted in a few minutes.)

comment by Psy-Kosh · 2009-09-03T19:35:40.063Z · LW(p) · GW(p)

*blinks* I understand your "oh hell no" reaction to self modification and "use the speedup to buy extra time to solve FAI" suggestion.

However, I don't quite understand why you think "attempted upgrading of others" is all that much better. If you get that one wrong, in a "result is super smart but insane (or, more precisely, very sane but with the goal architecture all screwed up)" way, doesn't one end up with the same potential paths to disaster? At that point, if nothing else, what would stop the target from then going down the self-modification path?

Replies from: Eliezer_Yudkowsky, Nick_Tarleton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T23:02:01.862Z · LW(p) · GW(p)

Non-self-modification is by no means safe, but it's slightly less insanely dangerous than self-modification.

Replies from: Psy-Kosh, pjeby
comment by Psy-Kosh · 2009-09-04T00:35:20.129Z · LW(p) · GW(p)

Ooooh, okay then. That makes sense.

Hrm... given though your suggested scenario, why the need to start with looking for other volunteers? ie, if the initial person is willing to be modified under the relevant constraints, why not just, well, spawn off another instance of themselves, one the modifier and one the modifiee?

EDIT: whoops, just noticed that Vladimir suggested the same thing too.

comment by pjeby · 2009-09-04T04:26:19.823Z · LW(p) · GW(p)

Non-self-modification is by no means safe, but it's slightly less insanely dangerous than self-modification.

I think I see where you're confused now. You think there's only one of you. ;-)

But if you think about it, akrasia is an ample demonstration that there is more than one of you: the one who acts and chooses, and the one who reflects upon the acts and choices of the former.

And the one who acts and chooses also modifies itself all the frickin' time, whether you like it or not. So if the one who reflects then refrains from modifying the one who acts, well... the results are going to be kind of random. Better directed self-modification than undirected, IMO.

(I don't pretend to be an expert on what would happen with this stuff in brain simulation; I'm talking strictly about the behavior of embodied humans here, and my own experiences with self-modification.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-04T04:58:23.156Z · LW(p) · GW(p)

We're talking about direct brain editing here. People who insist on comparing direct brain editing to various forms of internal rewiring carried out autonomously by opaque algorithms... or choice over deliberate procedures to follow deliberatively... well, don't be surprised if you're downvoted, because you did, in fact, say something stupid.

Replies from: pjeby, rhollerith_dot_com
comment by pjeby · 2009-09-05T01:03:56.549Z · LW(p) · GW(p)

We're talking about direct brain editing here.

If by "direct" here you mean changing the underlying system - metaprogramming as it were, then I have to say that that's the idea that's stupid. If you have a system that's perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?

On that, I believe we're actually in agreement.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-09-05T08:11:45.428Z · LW(p) · GW(p)

If you have a system that's perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?

To do better?

Replies from: pjeby
comment by pjeby · 2009-09-05T13:30:54.801Z · LW(p) · GW(p)

To do better?

You don't need to bypass the safeties to do better. What you need is not a bigger hammer with which to change the brain, but a better idea of what to change, and what to change it to.

That's the thing that annoys me the most about brain-mod discussions here -- it's like talking about opening up the case on your computer with a screwdriver, when you've never even looked at the screen or tried typing anything in -- and then arguing that all modifications to computers are therefore difficult and dangerous.

Replies from: CronoDAS, Vladimir_Nesov
comment by CronoDAS · 2009-09-05T22:43:47.016Z · LW(p) · GW(p)

To use an analogy, the kind of brain modifications we're talking about would be the kind of modifications you'd have to do to a 286 in order to play Crysis (a very high-end game) on it.

Replies from: None
comment by [deleted] · 2009-09-06T02:01:56.006Z · LW(p) · GW(p)

If I'm not mistaken, as far as raw computing power goes, the human brain is more powerful than a 286. The question is--and this is something I'm honestly wondering--whether it's feasible, given today's technology, to turn the brain into something that can actually use that power in a fashion that isn't horribly indirect. Every brain is powerful enough to play dual 35-back perfectly (if I had access to brain-making tools, I imagine I could make a dual 35-back player using a mere 70,000 neurons); it's simply not sufficiently well-organized.

If your answer to the above is "no way José", please say why. "It's not designed for that" is not sufficient; things do things they weren't designed to do all the time.
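
To make the "dual 35-back" claim concrete: a perfect player needs nothing deeper than two length-35 FIFO buffers, one per modality. A minimal Python sketch (the function and its interface are illustrative, not from the comment):

    from collections import deque

    def dual_nback_player(n=35):
        # A perfect dual n-back player: remember the last n stimuli
        # in each modality and compare. Nothing deeper is required.
        positions, sounds = deque(maxlen=n), deque(maxlen=n)

        def step(position, sound):
            # Once a buffer holds n items, its oldest element is the
            # stimulus from exactly n steps back.
            pos_match = len(positions) == n and positions[0] == position
            snd_match = len(sounds) == n and sounds[0] == sound
            positions.append(position)  # evicts the (n+1)-back item
            sounds.append(sound)
            return pos_match, snd_match

        return step

    step = dual_nback_player(n=35)  # call step(position, sound) per stimulus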

comment by Vladimir_Nesov · 2009-09-05T13:52:07.562Z · LW(p) · GW(p)

You don't need to bypass the safeties to do better. What you need is not a bigger hammer with which to change the brain, but a better idea of what to change, and what to change it to.

But you do need a bigger hammer as well. And that bigger hammer is dangerous.

Replies from: pjeby
comment by pjeby · 2009-09-05T17:25:04.838Z · LW(p) · GW(p)

But you do need a bigger hammer as well.

For what, specifically?

Replies from: JGWeissman, Eliezer_Yudkowsky
comment by JGWeissman · 2009-09-05T19:10:57.963Z · LW(p) · GW(p)

A brain emulation may want to modify itself so that when it multiplies numbers together, instead of its hardware emulating all the neurons involved, it performs the multiplication on a standard computer processor.

This would be far faster, more accurate, and less memory intensive.

Implementation would involve figuring out how to recognize the intention to perform a multiplication, represent the numbers digitally, and then present the answer back to the emulated neurons. This is outside the scope of any mechanism we might have to make changes within our brains, which would not be able to modify the emulator.
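A sketch of the structure being described, with an entirely hypothetical emulator interface (nothing like this exists; the genuinely hard step, decoding the intention from neural activity, is stubbed out):

```python
# Hypothetical emulator-level arithmetic hook. The interface and the
# intention-recognition step are both assumptions; only the structure
# matters: detect intent, compute natively, inject the result back.

class BrainEmulator:
    def __init__(self):
        self.hooks = []

    def register_hook(self, recognizer, handler):
        # recognizer: neural state -> parsed intent, or None
        # handler: intent -> result to present back to the emulated neurons
        self.hooks.append((recognizer, handler))

    def step(self, neural_state):
        for recognizer, handler in self.hooks:
            intent = recognizer(neural_state)
            if intent is not None:
                self.inject(neural_state, handler(intent))
        # ... then run the ordinary neuron-level update (omitted)

    def inject(self, neural_state, result):
        # Stub: encode the digital result back into emulated activity.
        neural_state['answer'] = result


def recognize_multiplication(neural_state):
    # Stub for the hard part: decoding "I intend to multiply a by b"
    # from patterns of neural activity.
    intent = neural_state.get('intent')
    if intent and intent[0] == 'multiply':
        return intent
    return None


def native_multiply(intent):
    _, a, b = intent
    return a * b  # exact and fast on a standard CPU, unlike emulated neurons


emu = BrainEmulator()
emu.register_hook(recognize_multiplication, native_multiply)

state = {'intent': ('multiply', 12345, 6789)}
emu.step(state)
print(state['answer'])  # 83810205, computed natively rather than by neurons
```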

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-05T19:10:38.950Z · LW(p) · GW(p)

Cracking the protein folding problem, building nanotechnology, and reviving a cryonics patient at the highest possible fidelity. Redesigning the spaghetti code of the brain so as to permit it to live a flourishing and growing life rather than e.g. overloading with old memories at age 200.

I suppose you make a remarkable illustration of how people with no cosmic ambitions and brainwashed by the self-help industry don't even have any goals in life that require direct brain editing, and aren't much willing to imagine them because it implies that their own brains are (gasp!) inadequate.

Replies from: Steve_Rayhawk, pjeby, DS3618
comment by Steve_Rayhawk · 2009-09-09T05:08:51.862Z · LW(p) · GW(p)

people with no cosmic ambitions and brainwashed by the self-help industry don't even have any goals in life that require direct brain editing, and aren't much willing to imagine them because it implies that their own brains are (gasp!) inadequate.

Is this your causal theory? Literally, that pjeby considered a goal that would have required direct brain editing, noticed that the goal would have implied that his brain was inadequate, felt negative self-image associations, and only then dropped the goal from consideration, and for no other reason? And further, that this is why he asked: "If you have a system that's perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?"

I think that, where you are imagining direct brain editing done only with a formal, philosophically cross-validated theory of brain editing safety and only after a long enough delay to develop that theory, and where you imagine pjeby to be imagining direct brain editing done only with a formal, philosophically cross-validated theory of brain editing safety and only after a long enough delay to develop that theory, pjeby may be actually imagining someone who already has a brain-editing device and no safetiness theory, and who is faced with a short-range practical decision problem about whether to use the device when the option of introspective self-modification is available. pjeby probably has a lot of experience with people who have simple technical tools and are not reflective like you about whether they are safe to use. That is the kind of person he might be thinking of when he is deciding whether it would be better advice to tell the person to introspect or to use the brain editor.

(Also, someone other than me should have diagnosed this potential communication failure already! Do you guys prefer strife and ad-hominems and ill will or something?)

The x you get from

argmax_x U(x, y)

for fixed y is, in general, different from the x you get from

argmax_{x, y} U(x, y).

But this doesn't mean you can conclude that the first argmax calculated U() wrong.
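A toy numeric version of this point, with made-up utilities (nothing here comes from the thread; it only shows that the two argmaxes can disagree about x while consulting the same, correct U):

```python
import itertools

# Made-up utility table over an action x and a self-modification choice y.
U = {
    ('work', 'unmodified'): 2, ('rest', 'unmodified'): 3,
    ('work', 'modified'):   5, ('rest', 'modified'):   1,
}
xs = ['work', 'rest']
ys = ['unmodified', 'modified']

# argmax over x alone, with y held fixed at 'unmodified'
best_x_given_y = max(xs, key=lambda x: U[(x, 'unmodified')])

# joint argmax over (x, y)
best_x, best_y = max(itertools.product(xs, ys), key=lambda xy: U[xy])

print(best_x_given_y)   # rest -- correct for the fixed y
print(best_x, best_y)   # work modified -- a different x, same U
```

Both maximizations use the same U; they return different x only because they optimize over different domains.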

comment by pjeby · 2009-09-09T03:24:25.738Z · LW(p) · GW(p)

I suppose you make a remarkable illustration of how people with no cosmic ambitions and brainwashed by the self-help industry don't even have any goals in life that require direct brain editing, and aren't much willing to imagine them because it implies that their own brains are (gasp!) inadequate.

Wow, somebody's cranky today. (I could equally note that you're an illustration of what happens when people try to build a technical solution to a human problem... while largely ignoring the human side of the problem.)

Solving cooler technical problems or having more brain horsepower sure would be nice. But as I already know from personal experience, just being smarter than other people doesn't help, if it just means you execute your biases and misconceptions with greater speed and an increased illusion of certainty.

Hence, I consider the sort of self-modification that removes biases, misconceptions, and motivated reasoning to be both vastly more important and incredibly more urgent than the sort that would let me think faster, while retaining the exact same blindspots.

But if you insist on hacking brain hardware directly or in emulation, please do start with debugging support: the ability to see in real-time what belief structures are being engaged in reaching a decision or conclusion, with nice tracing readouts of all their backing assumptions. That would be really, really useful, even if you never made any modifications outside the ones that would take place by merely observing the debugger output.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-09T04:09:59.781Z · LW(p) · GW(p)

you're an illustration of what happens when people try to build a technical solution to a human problem

If there were a motivator captioned "TECHNICAL SOLUTIONS TO HUMAN PROBLEMS", I would be honored to have my picture appear on it, so thank you very much.

Replies from: pjeby
comment by pjeby · 2009-09-09T04:36:45.553Z · LW(p) · GW(p)

If there were a motivator captioned "TECHNICAL SOLUTIONS TO HUMAN PROBLEMS", I would be honored to have my picture appear on it, so thank you very much.

You left out the "ignoring the human part of the problem" part.

The best technical solutions to human problems are the ones that leverage and use the natural behaviors of humans, rather than trying to replace those behaviors with a perfect technical process or system, or trying to force the humans to conform to expectations.

(I'd draw an analogy with Nelson's Xanadu vs. the web-as-we-know-it, but that could be mistaken for a pure Worse Is Better argument, and I certainly don't want any motivated superintelligences being built on a worse-is-better basis.)

comment by DS3618 · 2009-09-05T20:01:33.376Z · LW(p) · GW(p)

Wow, what hubris: the "brain is inadequate spaghetti code". Tell me, have you ever actually studied neuroscience? Where do you think modern science came from? This inadequate spaghetti code has given us the computer, modern physics, and plenty of other things. For being inadequate spaghetti code (this is really a misnomer, because we don't actually understand the brain well enough to make that judgement) it does pretty well.

If the brain is as bad as you make it out to be, then I challenge you to make a better one. In fact I challenge you to make a computer capable of as many operations as the brain, running on as little power as the brain does. If you can't do better, then you are no better than the people who go around bashing General Relativity without being able to propose something better.

Replies from: Eliezer_Yudkowsky, Z_M_Davis, wedrifid, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-05T23:10:17.353Z · LW(p) · GW(p)

If the brain is as bad as you make it out to be then I challenge you to make a better one.

I accept your challenge. See you in a while.

Replies from: Furcas, DS3618
comment by Furcas · 2009-09-05T23:25:32.421Z · LW(p) · GW(p)

Awesome.

comment by DS3618 · 2009-09-06T18:37:03.786Z · LW(p) · GW(p)

I look forward to it (though I doubt I will ever see it, considering how long you've been saying you were going to make an FAI and how little progress you have actually made). But maybe you're pulling a Wolfram and going to work alone for 10 years to dazzle everyone with your theory.

comment by Z_M_Davis · 2009-09-05T21:34:23.098Z · LW(p) · GW(p)

I don't think there's actually any substantive disagreement here. "Good," "bad," "adequate," "inadequate"--these are all just words. The empirical facts are what they are, and we can only call them good or bad relative to some specific standard. Part of Eliezer's endearing writing style is holding things to ridiculously impossibly high standards, and so he has a tendency to mouth off about how the human brain is poorly designed, human lifespans are ridiculously short and poor, evolutions are stupid, and so forth. But it's just a cute way of talking about things; we can easily imagine someone with the same anticipations of experience but less ambition (or less hubris, if you prefer to say that) who says, "The human brain is amazing; human lives are long and rich; evolution is a wonder!" It's not a disagreement in the rationalist's sense, because it's not about the facts. It's not about neuroscience; it's about attitude.

comment by wedrifid · 2009-09-11T14:53:11.361Z · LW(p) · GW(p)

While my sample size is limited I have noticed a distinct correlation between engaging in hubris and levelling the charge at others. Curious.

comment by Vladimir_Nesov · 2009-09-05T20:29:41.712Z · LW(p) · GW(p)

For calibration, see The Power of Intelligence.

Replies from: DS3618
comment by DS3618 · 2009-09-05T20:41:47.425Z · LW(p) · GW(p)

"The Power of Intelligence"

Derivative drivel...

The post shows the exact same lack of familiarity with neuroscience as the comment I responded to. Examine closely how a single neuron functions and the operations that it can perform. Examine closely the abilities of savants (things like memory, counting in primes, calendar math...), and after a few years of reading the current neuroscience research, come back and we might have something to discuss.

comment by RHollerith (rhollerith_dot_com) · 2009-09-04T06:33:53.514Z · LW(p) · GW(p)

Eliezer, replying to a comment by pjeby: "you did, in fact, say something stupid."

Word.

comment by Nick_Tarleton · 2009-09-03T20:10:40.995Z · LW(p) · GW(p)

If insane happens before super-smart, you can stop upgrading the other.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-03T20:12:19.136Z · LW(p) · GW(p)

Well, fair enough, there is that.

comment by pjeby · 2009-09-04T04:16:15.684Z · LW(p) · GW(p)

What, you mean try to self-modify? Oh hell no. Human brain not designed for that

Perhaps you mean to say that we're not particularly trustworthy in our choices of what we modify ourselves to do or prefer?

Human brains, after all, are most exquisitely designed for modifying themselves, and can do it quite autonomously. They're just not very good at predicting the broader implications of those modifications, or at finding the right things to modify.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-04T06:26:35.113Z · LW(p) · GW(p)

We're talking about direct explicit low level self modification. ie, uploading, then using that more convenient form to directly study one's own internal workings until one decides to go "hrm... I think I'll reroute these neural connections to... that, add a few more of this other kind of neuron over here and..."

Recall that the thing doing all that reasoning is the thing that's being affected by these modifications.

Replies from: pjeby
comment by pjeby · 2009-09-05T01:10:04.570Z · LW(p) · GW(p)

We're talking about direct explicit low level self modification. ie, uploading, then using that more convenient form to directly study one's own internal workings until one decides to go "hrm... I think I'll reroute these neural connections to... that, add a few more of this other kind of neuron over here and..."

Yes, but that would be the stupidest possible way of doing it, when there are already systems in place to do structured modification at a higher level of abstraction. Doing it at an individual neuron level would be like trying to... well, I would've said "write a property management program in Z-80 assembly," except I know a guy who actually did that. So, let's say, something about 1000 times harder. ;-)

What I find extremely irritating is when people talk about brain modification as if it's some sort of 1) terribly dangerous thing that 2) only happens post-uploading and 3) can only be done by direct hardware (or simulated hardware) modification. The correct answer is, "none of the above".

Replies from: Douglas_Knight, CronoDAS
comment by Douglas_Knight · 2009-09-05T06:05:40.949Z · LW(p) · GW(p)

What I find extremely irritating is when people talk about brain modification as if it's some sort of 1) terribly dangerous thing that 2) only happens post-uploading and 3) can only be done by direct hardware (or simulated hardware) modification. The correct answer is, "none of the above".

Lists like that have a good chance of canceling out. That is, there are a bunch of ways people disagree with you because they're talking about something else.

comment by CronoDAS · 2009-09-06T03:17:12.742Z · LW(p) · GW(p)

Well, we're talking about the kind of modifications that ordinary, non-invasive, high-level methods, acting through the usual sensory channels, don't allow. For example, no amount of ordinary self-help could make someone unable to feel physical pain, or let you multiply large numbers extremely quickly in the manner of a savant. Changing someone's sexual orientation is also, at best, extremely difficult and at worst impossible. We can't seem to get rid of confirmation bias, or cure schizophrenia, or change an autistic brain into a neurotypical brain (or vice versa). There are lots of things that one might want to do to a brain that simply don't happen as long as that brain is sitting inside a skull only receiving input through normal human senses.

comment by cousin_it · 2009-09-03T19:01:33.702Z · LW(p) · GW(p)

Difficult question. I believe those links are relevant, but your formulation also implies the threat of an arms race.

My best shot for now would be this: avoid self-modification. The top priority right now is defending people from the potential harmful effects of this thing you created, because someone less benevolent might stumble upon it soon. Find people who share this sentiment and use the speedup together to think hard about the problem of defense.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-03T19:28:40.795Z · LW(p) · GW(p)

Perhaps an "anti arms race" would be a more accurate notion. ie, in once sense, waiting for the mathematics of FAI to be solved would be preferable. Would be safer to get to a point that we can mathematically ensure that the thing will be well behaved.

On the other hand, while waiting, how many will suffer and die irretrievably? If the cost for waiting was much smaller, then the answer of "wait for the math and construct the FAI rather than trying to patchwork update a spaghetti coded human mind" would be, to me, the clearly preferable choice.

Even given avoiding self modification, massive speedup would still correspond to a significant amount of power. We already know how easily humans... change... with power. And when sped up, obviously people not sped up would seem different, "lesser"... helping to reinforce the "I am above them" sense. One might try to solve this by figuring out how to self modify enough to, well, not do that. But self modification itself being a starting point for, if one does not do it absolutely perfectly, potential disaster, well...

Anyways, so your suggestion would basically be "only use the power to, well, defend against the power" rather than use it to actually try to fix some of the annoying little problems in the world (like... death and and and and and... ?)

Replies from: cousin_it
comment by cousin_it · 2009-09-03T19:50:12.396Z · LW(p) · GW(p)

FAI is one possible means of defense, there might be others.

You shouldn't just wait for FAI, you should speed up FAI developers too because it's a race.

I think the strategy of developing a means of defense first has higher expected utility than fixing death first, because in the latter case someone else who develops uploading can destroy/enslave the world while you're busy fixing it.

comment by Vladimir_Nesov · 2009-09-03T13:03:21.390Z · LW(p) · GW(p)

Given how misrepresented the official story is supposed to be, the part about personally ruling the fabric of the World can be assumed to be twisted as well.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T13:26:52.271Z · LW(p) · GW(p)

Nope, they didn't get that part wrong.

Look, you should know me well enough by now to know that I don't keep my stories on nice safe moral territory.

A happy ending here is not guaranteed. But think about this very carefully. Are you sure you'd have turned the Sword on Vhazhar? They don't have the same options we do.

Replies from: bgrah449, cousin_it, Vladimir_Nesov
comment by bgrah449 · 2009-09-03T16:05:18.565Z · LW(p) · GW(p)

He's going to be the emperor. He could implement Parliament, he could create jury trials. He could even put Dolf and Selena on trial for their crimes.

It's interesting that Hirou holds the world accountable to his own moral code, which assumes power corrupts. Then, at the last moment, he grants absolute power to Vhazhar. So in the middle of choosing to use our world's morality, which is built upon centuries of learning to doubt human nature, in the middle of that - Vhazhar's good intentions are so good that they justify granting him absolute power. Lesson not learned.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2009-09-03T16:33:22.077Z · LW(p) · GW(p)

his own moral code, which assumes power corrupts

Hold on. How can a moral code say anything about questions of fact, such as whether or not power corrupts?

Replies from: SilasBarta
comment by SilasBarta · 2009-09-03T17:26:15.534Z · LW(p) · GW(p)

Because "corrupt" is a morally-loaded term.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2009-09-03T17:38:30.468Z · LW(p) · GW(p)

It seems to me that "power corrupts" means "power changes goal content," and that's a purely factual claim.

Replies from: SilasBarta
comment by SilasBarta · 2009-09-03T17:45:06.279Z · LW(p) · GW(p)

It doesn't mean that. It means something more like "power changes the empowered's utility function in a way others deem immoral". (ETA simplified)

ETA: Just to make the point clearer, there are many things that change an individual's goal content but are not considered corrupting. For example, trying new foods will generally make you divert more effort to finding one kind of food (that you didn't know you liked). Having children of your own makes you more favorable to children in general. But we don't say, and people generally don't believe, "having children corrupts" or "trying new foods corrupts".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:16:33.333Z · LW(p) · GW(p)

Okay, but that's still a factual claim underneath the moral one.

It's a bit of argumentum ad webcomicum, but http://www.agirlandherfed.com/comic/?375 is not something I find particularly implausible. There was Marcus Aurelius.

Replies from: kpreid, PlatypusNinja, SilasBarta, Wei_Dai, thomblake, PlatypusNinja
comment by kpreid · 2012-06-01T14:15:00.720Z · LW(p) · GW(p)

Link's broken. Is this guess the page in question?

Replies from: Eliezer_Yudkowsky
comment by PlatypusNinja · 2009-09-04T01:32:23.232Z · LW(p) · GW(p)

Also: it seems like a really poor plan, in the long term, for the fate of the entire plane to rest on the sanity of one dude. If Hirou kept the sword, he could maybe try to work with the wizards -- ask them to spend one day per week healing people, make sure the crops do okay, etc. Things maybe wouldn't be perfect, but at least he wouldn't be running the risk of everybody-dies.

comment by SilasBarta · 2009-09-03T19:38:55.746Z · LW(p) · GW(p)

Okay, but in any case, regarding the issue at hand, "power corrupts" is not a purely factual claim. (And I thought that hybrid claims get counted as moral by default, since that's the most useful for discussion, but I could be wrong.)

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2009-09-03T21:03:02.870Z · LW(p) · GW(p)

Then you need to separate the factual claim and the moral claim, and discuss them separately. The factual claim would be, "power changes goal content in this particular way", and the moral claim is, "...and this is bad."

Replies from: SilasBarta
comment by SilasBarta · 2009-09-06T22:03:10.045Z · LW(p) · GW(p)

Is this fair though? Let's say the passage had been, "... his position that it is immoral to possess nuclear weapons". That too breaks down into a factual and moral claim.

Moral: "it is wrong to possess a weapon with massive, unfocused destructive power"

Factual: "The devices we currently call nuclear weapons inflict massive, unfocused destruction."

Would you object to "his position that it is immoral to possess nuclear weapons" on the grounds that "you need to separate the factual and moral claims"?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-06T22:42:43.172Z · LW(p) · GW(p)

Well, in fact it would be highly helpful to separate the claims here, even though the factual part is uncontroversial, because it makes it clear what argument is being made, exactly.

And in this case it's uncertain/controversial how much power actually changes behavior, who it changes, how reliably; and this is the key issue, whereas the moral concept that "the behavior of killing everyone who disagrees with you is wrong" is relatively uncontroversial among us. So calling this a moral claim when the key disputed part is actually a factual claim is a bad idea.

comment by Wei Dai (Wei_Dai) · 2009-09-03T19:41:24.666Z · LW(p) · GW(p)

What's the evolutionary explanation for power not corrupting?

Replies from: MichaelVassar, SilasBarta
comment by MichaelVassar · 2009-09-05T04:28:18.821Z · LW(p) · GW(p)

Evolution doesn't do most things. Doing things requires oceans of blood for every little adaptation, and humans haven't had power for all that long.
Toddlers need to learn how to hide. How's that for failing to evolve knowledge of the obvious (to a human brain) and absurdly useful?

comment by SilasBarta · 2009-09-03T19:53:33.112Z · LW(p) · GW(p)

Be careful you don't end up explaining two contradictory outcomes equally well, thus proving you have zero knowledge of evolution's effect on power and corruption!

comment by thomblake · 2009-09-04T13:12:12.430Z · LW(p) · GW(p)

And then there are those of us who take moral claims to be factual claims.

comment by PlatypusNinja · 2009-09-04T01:26:45.133Z · LW(p) · GW(p)

I think my concern about "power corrupts" is this: humans have a strong drive to improve things. We need projects, we need challenges. When this guy gets unlimited power, he's going to take two or three passes over everything and make sure everybody's happy, and then I'm worried he's going to get very, very bored. With an infinite lifespan and unlimited power, it's sort of inevitable.

What do you do, when you're omnipotent and undying, and you realize you're going mad with boredom?

Does "unlimited power" include the power to make yourself not bored?

comment by cousin_it · 2009-09-03T14:15:53.130Z · LW(p) · GW(p)

If Vhazhar has the option of editing the nasty bits out of reality and then stepping down from power, I'd help him. If he must personally become a ruler for all eternity, I'd kill him, then smash the goddamn device, then try to somehow ensure that future aspiring Dark Lords also get killed in time.

Replies from: thomblake
comment by thomblake · 2009-09-04T13:23:22.300Z · LW(p) · GW(p)

This could be how the 'balance' mythology and the prophecy got started. Perhaps the hero decided long ago that it wasn't worth the risk, and wanted to make sure future heroes kill the Dark Lord.

comment by Vladimir_Nesov · 2009-09-03T14:34:53.515Z · LW(p) · GW(p)

I assume that the sword tests the correspondence of a person's intentions (plan) to their preference. If the sword instead uses a static concept of preference that comes with the sword, why would Vhazhar be interested in the sword's standard of preference? Thus, given that Vhazhar's plan involves control over the fabric of the World, the plan must be sound and result in the correct installation of Vhazhar's preference in the rules of the world. This excludes the technical worries about the failure modes of the human mind in wielding too much power (which is how I initially interpreted "personal control" -- as a recipe for failure modes).

I'm not sure what it means for other people's preferences (and specifically mine). I can't exclude the possibility that it's worse than the do-nothing option, but it doesn't seem obviously so either, given the psychological unity of humans. From what I know, on the spot I'd favor Vhazhar's personal preference, if a better alternative is unlikely, given that this choice instantly wards off existential risk and lack of progress.

Replies from: Eliezer_Yudkowsky, cousin_it
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:17:46.701Z · LW(p) · GW(p)

I assume that the sword tests the correspondence of a person's intentions (plan) to their preference.

No, it's the Sword of GOOD. It tests whether you're GOOD, not any of this other stuff.

It should be obvious that the sword doesn't test how well your plans correspond to what you think you want! Otherwise Hirou would have been vaporized.

Replies from: thomblake, Vladimir_Nesov, eirenicon
comment by thomblake · 2009-09-04T13:13:51.536Z · LW(p) · GW(p)

No, it's the Sword of GOOD. It tests whether you're GOOD, not any of this other stuff.

Wasn't it established that this world's conceptions of "good" and "evil" are messed up? Why should he trust that the sword really works exactly as advertised?

comment by Vladimir_Nesov · 2009-09-03T19:34:41.679Z · LW(p) · GW(p)

It should be obvious that the sword doesn't test how well your plans correspond to what you think you want! Otherwise Hirou would have been vaporized.

Only assuming that the sword is impulsive. If you take into account Hirou's overall role in the events, this role could be judged good, if only by the final decision.

If the sword judges not plans, but preference, then failing 9 out of 10 people means that it's pretty selective among humans, and probably the people it selects and their values aren't representative of (don't act in the interests of) humanity as a whole.

comment by eirenicon · 2009-09-03T19:45:29.536Z · LW(p) · GW(p)

If the Sword of Good tested whether you're good, Hirou would have been vapourized, because he was obviously not good. He was at the very least an accomplice to murderers, a racist, and a killer. The Sword of Good may not have vapourized Charles Manson, Richard Nixon, Hitler, or most suicide bombers, either. The Sword of Good tests whether you think you are good, not whether your actions are good.

Strangely, the sword kills nine out of ten people who try to wield it. However, if you knew the sword could only be wielded by a good person, you'd only try to pick it up if you thought you were good, which happens to be the criterion you must fulfil in order to pick up the sword. Essentially, if you think you can wield the Sword of Good, you can.

Replies from: CronoDAS, MugaSofer, thomblake
comment by CronoDAS · 2009-09-03T20:32:06.892Z · LW(p) · GW(p)

If the Sword of Good tested whether you're good, Hirou would have been vapourized, because he was obviously not good. He was at the very least an accomplice to murderers, a racist, and a killer.

Well, he was clearly redeemable, at least. It didn't take very much for him to let go of his assumptions, just a few words from someone he thought was an enemy. Making dumb mistakes, even ones with dire consequences, doesn't necessarily make you not Good.

Replies from: eirenicon
comment by eirenicon · 2009-09-03T20:49:54.933Z · LW(p) · GW(p)

What, realistically, does it mean to be irredeemable? Was Dolf irredeemable? Selena? Is the difference between them and Hirou simply the fact that Hirou realized he was doing bad, and they didn't? Why should that be sufficient to redeem him? Mistakes are not accidents; mistakenly killing someone is still murder.

Surely if awareness and repentance of the immoral nature of your actions makes you Good, the reverse - lack of awareness - means animals that kill other animals without regret are more evil than people who kill other people and regret it.

Replies from: CronoDAS
comment by CronoDAS · 2009-09-03T20:56:53.189Z · LW(p) · GW(p)

Mistakes are not accidents; mistakenly killing someone is still murder

No, it's manslaughter.

Replies from: eirenicon
comment by eirenicon · 2009-09-03T21:27:32.678Z · LW(p) · GW(p)

If you believe someone is evil, hunt them down and kill them, and afterward realize they weren't, it was a mistake. It was also murder. It's not as though you killed in self defense or accidentally dropped an air conditioner on them. Manslaughter is not a defense that can be employed simply because you changed your mind.

Perhaps I should clarify: I don't mean "mistake" in that "he mistook his wife for a burglar and killed her". That's manslaughter. I mean "mistake" in that "he mistakenly murdered a good person instead of a bad one". Ba gur bgure unaq, jura Uvebh xvyyrq Qbys ng gur raq, ur jnfa'g znxvat n zvfgnxr (ubjrire, V fgvyy guvax vg jnf zheqre).

Replies from: wedrifid, CronoDAS, Eliezer_Yudkowsky
comment by wedrifid · 2013-09-17T01:27:08.813Z · LW(p) · GW(p)

If you believe someone is evil, hunt them down and kill them, and afterward realize they weren't, it was a mistake. It was also murder.

You present a compelling argument that murder can be a morally blameless - even praiseworthy - act. I do not believe this was your intention.

Replies from: MugaSofer, CronoDAS
comment by MugaSofer · 2013-09-17T08:19:37.046Z · LW(p) · GW(p)

To be clear, you believe that, right wedrifid? I came this close to downvoting before I deduced the context.

Replies from: wedrifid
comment by wedrifid · 2013-09-17T11:10:07.505Z · LW(p) · GW(p)

To be clear, you believe that, right wedrifid? I came this close to downvoting before I deduced the context.

I believe that there are times where the described behaviour is morally acceptable. I don't think it is helpful to label that behaviour 'murder' but if someone were to define that as murder it would mean that murder (of that particular kind) was ok.

To be clear, there are stringent standards on the behaviour which preceded the mistake. This is something that should happen very infrequently. Both epistemic rationality standards and instrumental rationality standards apply. For example, sincerely believing that the person had committed a crime because you happen to be bigoted and irrational leaves you morally culpable, and failing to take actions that provide more evidence where the VoI is high and cost is low also leaves you morally culpable. The 'excuse' for hunting down and killing an innocent that you mistakenly believed was sufficiently evil is not "I was mistaken" but rather "any acceptably rational and competent individual in this circumstance would have believed that the target was sufficiently evil".

comment by CronoDAS · 2013-09-17T05:22:23.999Z · LW(p) · GW(p)

It's not too hard to imagine a scenario in which hunting down and killing someone is indeed the right thing to do... the obvious example is that, given perfect hindsight, it would have been much better if one of the many early attempts to assassinate Hitler had in fact succeeded.

Bonus question: Which one of the failed attempts was most likely to have been made by a time traveler? ;)

comment by CronoDAS · 2009-09-04T03:03:58.573Z · LW(p) · GW(p)

If you believe someone is evil, hunt them down and kill them, and afterward realize they weren't, it was a mistake. It was also murder.

Suppose you're a police officer trying to arrest someone for a crime, and there is ample evidence that the person you are trying to arrest is indeed guilty of that crime. The person resists arrest, and you end up killing the person instead of making a successful capture. Are you a murderer?

Does it matter if the evidence against this person turns out to have been forged (by someone else)?

Replies from: eirenicon
comment by eirenicon · 2009-09-04T04:58:11.227Z · LW(p) · GW(p)

If you have no intention of killing them and they die as a side effect of your actions, it's an accident, and manslaughter. If you kill them because you realize you can't arrest them, it's murder, complete with intention of malice. However, the fact that your actions are sanctioned by the state is obviously not a defense (a la Nuremberg), and so there's no point in adding "police officer" to the example.

You could ask if I thought executing someone who was framed would be considered murder, but since I view all manner of execution as murder, guilty or no, there's no use.

Replies from: CronoDAS
comment by CronoDAS · 2009-09-04T19:41:03.799Z · LW(p) · GW(p)

However, the fact that your actions are sanctioned by the state is obviously not a defense (a la Nuremberg), and so there's no point in adding "police officer" to the example.

Actually, I think there is. If you kill someone without "state sanction", as you put it, it's almost certainly Evil. If you kill someone that the local laws allow you to kill, it's much less likely to be Evil, because non-Evil reasons for killing, such as self-defense, tend to be accounted for in most legal systems. Anyway, I think I'm getting off the subject. Let me try rephrasing the general scenario:

You are a police officer. You have an arrest warrant for a suspected criminal. If you try to arrest the suspect, he is willing to use lethal force against you in order to prevent being captured. You also believe that, once the suspect has attempted to use lethal force against you, non-lethal force will prove to be insufficient to complete the arrest.

The way I see it, this could end in several ways:

1) Don't try to make an arrest attempt at all.

2) Attempt to make an arrest. The suspect responds by attempting to use lethal force against you. (He shoots at you with a low-caliber pistol, but you are protected by your bulletproof vest.) You believe that non-lethal force will most likely fail to subdue the suspect. Not willing to use lethal force and kill the suspect, you retreat, failing to make the arrest.

3) Attempt to make an arrest. The suspected criminal responds by attempting to use lethal force against you. (He shoots at you with a low-caliber pistol, but you are protected by your bulletproof vest.) You believe that non-lethal force will most likely fail to subdue the suspected criminal, but try anyway. (You start running at him, intending to wrestle the gun away from him with your bare hands.) The suspected criminal kills you. (He shoots you in the head.)

4) Attempt to make an arrest. The suspected criminal responds by attempting to use lethal force against you. (He shoots at you with a low-caliber pistol, but you are protected by your bulletproof vest.) You believe that non-lethal force will most likely fail to subdue the suspected criminal, so you resort to lethal force. (You shoot him with your own gun.) The suspected criminal is killed, and, when you are questioned about your actions, your lawyer says that you killed the suspect in self-defense. (Under U.S. law, this would indeed be the case - you would not be guilty of murder.)

Obviously Scenario 2 is a better outcome than Scenario 3, because in Scenario 3, you end up dead. However, if you know that you're not willing to use lethal force to begin with, and that non-lethal force is going to be insufficient, you're probably better off not making the arrest attempt at all, which is Scenario 1. Therefore Scenario 1 is better than Scenario 3. If you're going to make an arrest attempt at all, you are expecting Scenario 4 to occur. If you go through with Scenario 4, does that make you Evil? You initiated the use of force by making the arrest attempt, but the suspect could have chosen to submit to arrest rather than to fight against you - and he did, indeed, use lethal force before you did.

Replies from: wedrifid, wedrifid, lmm
comment by wedrifid · 2013-09-17T01:23:32.910Z · LW(p) · GW(p)

The way I see it, this could end in several ways:

I notice that you left off an outcome that if anything allows you to make your point stronger.

5) Attempt to make an arrest. You see that the suspected criminal has the capacity to use lethal force against you (he is armed) and you suspect that he will use it against you. You shoot the suspect. His use of lethal force against you is never more than counterfactual (ie. a valid suspicion).

For consistency some "6)" may be required in which the first "attempt to use lethal force against you" is successful. I suggest that this action is not necessarily Evil, for similar reasons that you describe for scenario 4. Obviously this is less clear cut and has more scope for failure modes like "black suspect reaches for ID" so we want more caution in this instance and (ought to) grant police officers less discretion.

comment by wedrifid · 2013-09-17T01:08:04.058Z · LW(p) · GW(p)

If you kill someone without "state sanction", as you put it, it's almost certainly Evil.

I think 'almost certain' may be something of an overstatement. The states that we personally live in are not a representative sample of states, and killing tyrants is not something we can call 'almost certainly' Evil. The same consideration applies to self defence laws. Self defence laws in an average state, selected from all states across time, were not sufficiently fair to support claims of almost certain Evil.

comment by lmm · 2013-09-16T22:20:19.689Z · LW(p) · GW(p)

Once he uses lethal force against you, your use of lethal force would be self-defense, not murder.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:46:52.594Z · LW(p) · GW(p)

Ba gur bgure unaq, jura Uvebh xvyyrq Qbys ng gur raq, ur jnfa'g znxvat n zvfgnxr (ubjrire, V fgvyy guvax vg jnf zheqre).

I perceive that you have not yet learned to use the logic of the Phoenix.

Replies from: eirenicon, thomblake
comment by eirenicon · 2009-09-04T00:42:17.401Z · LW(p) · GW(p)

Care to elaborate on that rather cryptic remark?

Replies from: Cyan
comment by Cyan · 2009-09-04T16:00:09.360Z · LW(p) · GW(p)

"After I complete the Spell of Ultimate Power, I'll have the ability to bring Alek back. And I will. ... I'm not asking anything from you. Just telling you that if I win, I'll bring Alek back. That's a promise."

...the moment of the Sword touching Dolf's skin, the wizard stopped, ceased to exist... as something seemed to flow away from the corpse toward the gears above the altar.

...he closed his eyes to sleep until the end of the world.

The logic of the Phoenix is that the Lord of Dark will resurrect everyone he can, including Dolf, so it isn't murder.

comment by thomblake · 2009-09-04T13:18:11.383Z · LW(p) · GW(p)

logic of the phoenix?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-04T20:11:41.700Z · LW(p) · GW(p)

No, this logic of the Phoenix. What makes you think cutting off someone's head is murder?

"He died, but you have taught me a new meaning for 'is dead'." (From the same book.)

Replies from: Douglas_Knight, TobyBartels
comment by Douglas_Knight · 2009-09-04T23:05:37.316Z · LW(p) · GW(p)

What makes you think cutting off someone's head is murder?

Not every decapitation is murder, but "the wizard stopped, ceased to exist...as something seemed to flow away" is suggestive.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-05T00:41:48.784Z · LW(p) · GW(p)

I was thinking the same thing. The way Eliezer wrote that bit seemed to make it clear that something rather more than mere decapitation occurred there.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-05T03:04:13.789Z · LW(p) · GW(p)

Hm, so it does. Well, if Hirou had no way of knowing that, then it's manslaughter at worst.

Replies from: Psy-Kosh, Eliezer_Yudkowsky
comment by Psy-Kosh · 2009-09-05T05:33:25.724Z · LW(p) · GW(p)

Though, actually spelling it out directly does end up sounding funny. "Well... I don't know that cutting off his head with this sword would kill him... I mean, is it really reasonable for me to have expected that?" :)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-05T05:34:52.729Z · LW(p) · GW(p)

(Actually, I thought I'd deleted the "ceased to exist" phrase. I'll go ahead and take it out.)

comment by TobyBartels · 2011-01-09T09:28:18.050Z · LW(p) · GW(p)

I figured that Vhazhar really wouldn't be able to save Dolf. That's why it's a sacrifice.

comment by MugaSofer · 2013-09-17T08:32:54.252Z · LW(p) · GW(p)

You are using two definitions of "good" - how much good your actions cause, and how good you believe yourself to be. Neither of those is used by the sword; rather, some sort of virtue-ethics definition - I suspect motive.

comment by thomblake · 2009-09-04T13:43:44.562Z · LW(p) · GW(p)

If the Sword of Good tested whether you're good, Hirou would have been vapourized, because he was obviously not good. He was at the very least an accomplice to murderers, a racist, and a killer.

Doing a bad thing does not necessarily make one a bad person. Though it helps.

comment by cousin_it · 2009-09-03T14:49:32.317Z · LW(p) · GW(p)

I assume that the sword tests the correspondence of a person's intentions (plan) to their preference.

So a sincerely evil person would pass with flying colors?

I assumed the sword tested compliance with the current CEV of the human race.

Replies from: dclayh, Vladimir_Nesov
comment by dclayh · 2009-09-03T18:32:39.778Z · LW(p) · GW(p)

I assumed the sword tested compliance with the current CEV of the human race.

Why just the human race? Orcs are people too (at least in this story).

Replies from: cousin_it
comment by cousin_it · 2009-09-03T18:35:52.865Z · LW(p) · GW(p)

Good catch. Yes, of course.

comment by Vladimir_Nesov · 2009-09-03T16:08:42.305Z · LW(p) · GW(p)

Presumably, actual mutants are unlikely, with most "evil" people actually just holding mistaken (about their actual preference) moral beliefs. If the sword is an external moral authority, it's harder to see why one would consult it.

On the other hand, the sword checks the soundness of the plan against some preference, which is an important step that is absent if one doesn't consult the sword; this can justify accepting a somewhat mismatched preference if that allows one to use the test.

This passes the choice of mismatching preferences to a different situation. If the sword tests the person's own preference, then the protagonist's choice is between lack of progress (or an unlikely good outcome) and, if Vhazhar's plan is sound, the verified installation of Vhazhar's preference, with the latter presumably close to others' preference, and thus a moderately good option. If the sword tests some kind of standard preference, this standard preference is presumably also close to Vhazhar's preference; thus Vhazhar faces a choice between trying to install his own preference through an unverified process, which can go through all kinds of failure modes, and using the sword to test the reliability of his plan.

The fact that Vhazhar is willing to use the sword to test the soundness of his plan, when a failed test means his death, shows that he prefers leaving the rest of the world be to incorrectly changing it. This is a strong signal that should have been part of the information given to the protagonist for making the decision.

comment by LucasSloan · 2009-09-04T00:23:15.852Z · LW(p) · GW(p)

After reading the whole thing, I'm appalled that my only thought against the enforced morality was approximately "they're just worms..." And then immediately accepting the characters' disgust.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-04T01:56:33.419Z · LW(p) · GW(p)

FYI, part of the inspiration for this was reading the referenced XKCD and realizing I hadn't gotten that - albeit I first watched The Princess Bride as a child, which may have something to do with it. But yeah, although I seem more resistant to moral dissonance than average - probably more because my mind generally tries to visualize things as real, than out of any innate superethics - I'm still vulnerable to it, and that's part of the horror.

So of course I wanted to share that horror with the rest of you!

Replies from: Alicorn
comment by Alicorn · 2009-09-04T02:11:07.221Z · LW(p) · GW(p)

The inability to suspend moral disbelief is one of many things that can interfere with the enjoyment of basically good fiction. When I am screaming at characters that they FAIL ETHICS FOREVER, I'm rarely having fun.

Replies from: CronoDAS, LucasSloan
comment by CronoDAS · 2009-09-04T05:34:13.927Z · LW(p) · GW(p)

Characters who FAIL ETHICS FOREVER can still be entertaining. For example, plenty of villains clearly have little regard for ethics. Authors who FAIL ETHICS FOREVER are usually less desirable. For example, Terry Goodkind. I've rarely felt personally insulted by a work of fiction, but, well, Naked Empire somehow managed to contain the purest, unadulterated essence of Ethics Fail I've ever encountered - it even managed to contradict the explicit moral lessons of the earlier books in the series!

comment by LucasSloan · 2009-09-04T05:23:18.542Z · LW(p) · GW(p)

I agree that screaming at characters that they FAIL ETHICS FOREVER can interrupt enjoyment of a story, but it is far worse to never realize that their actions are, in fact, contemptible.

Replies from: Alicorn, MichaelVassar
comment by Alicorn · 2009-09-04T16:47:06.240Z · LW(p) · GW(p)

Oh, I agree - but I try to postpone this contemplation until after I've finished the story, if I can.

comment by MichaelVassar · 2009-09-05T03:47:26.813Z · LW(p) · GW(p)

No, maybe disgusting, definitely enraging, but usually not contemptible. Agamemnon is an exception, but he's pretty much the villain in a story without clear villains. Odysseus is heroic in the extreme, not contemptible, but his heroism has nothing to do with good intentions or outcomes, only with displaying his desirability as an ally.

comment by Tyrrell_McAllister · 2009-09-03T02:42:30.442Z · LW(p) · GW(p)

Vfa'g Uvebh'f gehfgvat bs gur Ybeq bs Qnex yvxr gehfgvat gung nal Fvathynevgl jbhyq znxr guvatf orggre? Jung nffhenapr qbrf Uvebh unir gung gur YbQ vf Sevraqyl?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T03:39:15.572Z · LW(p) · GW(p)

Because the Sword of Good didn't kill him; also he seems to be quite an excellent moral philosopher - someone who actually perceives morality. And if not him, then who else on the next try? (Of course there's going to be a next try eventually, given that it's possible in the first place.)

Replies from: Tyrrell_McAllister, kpreid
comment by Tyrrell_McAllister · 2009-09-03T18:37:40.931Z · LW(p) · GW(p)

Because the Sword of Good didn't kill him;

Why does Hirou trust the Sword of Good? How does he know that it's Friendly?

also he seems to be quite an excellent moral philosopher - someone who actually perceives morality.

I didn't get that from the story. All those fantasy books he's read, and he only now ponders whether something is good just because the author labeled it "Good"? He only now considers how immoral the actions of many fantasy heroes would be were they real? I remember being bothered by Aragorn's divine right to lead when I was eight and my Dad was reading Lord of the Rings to me.

As your acknowledgments show, pondering whether it could really be moral to kill "bad guys" so willy-nilly is common in fantasy circles. One of the Austin Powers movies used this to humorous effect with a little vignette about how one of the henchmen killed by Powers had a loving family and had just celebrated his retirement surrounded by loving friends.

Maybe these thoughts never occur to many fantasy readers, but I don't think that we're talking about some vanishingly rare perspicacity here.

And if not him, then who else on the next try?

Maybe someone who's developed a rigorous theory of friendliness :).

I guess I'm just surprised to see an allegory from you in which someone solves Friendliness by applying thirty seconds of his at-best-slightly-above-average moral intuition. I did not get the impression that Hirou was any kind of moral savant. And I had thought that even a moral savant, on your view, couldn't reliably make such a decision in thirty seconds.

Replies from: gwern, Eliezer_Yudkowsky
comment by gwern · 2009-09-04T11:06:59.856Z · LW(p) · GW(p)

I didn't get that from the story. All those fantasy books he's read, and he only now ponders whether something is good just because the author labeled it "Good"?

I think you're being a little optimistic here in thinking your skepticism is at all general.

Why was Norman Spinrad's _The Iron Dream_ so critically well-received and still read? (If you haven't read it, it's much like Eliezer's story except without the sane hero.) Because it demonstrated that most readers weren't critical, that they'd been reading fantasy stories for literally decades without cottoning onto how well the same stories justified genocide and fascism!

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-09-18T07:32:20.099Z · LW(p) · GW(p)

I thought the point of The Iron Dream was that Hitler's novel (the story is set in an alternate world where Hitler became a pulp writer) was the nastiest sort of inappropriate fantasy.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T19:11:56.705Z · LW(p) · GW(p)

also he seems to be quite an excellent moral philosopher - someone who actually perceives morality.

I didn't get that from the story. All those fantasy books he's read

Not Hirou, Vhazhar. For some reason, even as a very young child facing religious indoctrination, I couldn't quite accept that Abraham had made the right choice in trying to sacrifice Isaac upon God's command. That was one of my first moral breaks with Judaism. The Lord of Dark is - almost necessarily - actually visualizing situations and reacting to them as if seen, rather than processing words however the people around him expect to process them; there's no other way he could reject the values of his society to that extent, and even then, the amount of convergence he exhibits with our own civilization is implausible barring extremely optimistic assumptions about (a) the amount of absolute coherence (b) our own society's intelligence and (c) the Lord of Dark's intelligence; but of course the story wouldn't have worked otherwise.

I guess I'm just surprised to see an allegory from you in which someone solves Friendliness by applying thirty seconds of his at-best-slightly-above-average moral intuition.

Vhazhar's been working on it for some unknown number of years, having successfully realized that sucking the life from worms may be icky but doesn't actually hurt any sentient beings. (Though I wasn't assuming Vhazhar was ancient, he very well could be, and that would make a number of things more plausible, really.) Hirou has a whole civilization behind him and just needed to wake up and actually think.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-09-03T19:54:28.095Z · LW(p) · GW(p)

Okay, Hirou has evidence that Vhazhar is a moral savant. But the reader, and Hirou, sees little evidence that Vhazhar has worked out a formal, rigorous theory of Friendliness. I thought that anything less than that, on your view, virtually guaranteed the obliteration of almost everything valuable.

But I draw a weaker inference from Vhazhar's ability to overcome indoctrination. Yes, it implies that he probably had a high native aptitude for correct moral reasoning. But the very fact that he was subjected to the indoctrination means that he's probably damaged anyways. If someone survives a disease that's usually deadly, you should expect that she went into the disease with an uncommonly strong constitution. But, given that she's had the disease, you should expect that she's now less healthy than average.

Replies from: Eliezer_Yudkowsky, CronoDAS
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:50:45.779Z · LW(p) · GW(p)

But the reader, and Hirou, sees little evidence that Vhazhar has worked out a formal, rigorous theory of Friendliness. I thought that anything less than that, on your view, virtually guaranteed the obliteration of almost everything valuable.

Only by AIs. Human uploads would be a whole different story. Not necessarily a good story, but a different story, and one in which - whatever the objective frequency of winning - I'd have to say that, relative to my subjective knowledge, there's a pretty sizable chunk of chance.

If Vhazhar was literally casting a spell to run the world directly, and he wasn't able to take advantage of moral magic like that embodied in the Sword of Good itself (which, conceivably, could be a lot less sophisticated than its name implies) then it's a full-fledged Friendly AI problem.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-09-03T23:34:41.338Z · LW(p) · GW(p)

What are the justifiable expectations one could have about the Sword of Good? In particular, why suppose that it's a Sword of Good in anything other than name only? Why suppose that it's any protection against evil?

I also didn't consider the possibility that Vhazhar was planning to run the world himself directly. A human just doesn't have the computational capacity to run the world. If a human tried to run the world, there would still be both fortune and misfortune.

For that reason, I assumed that his plan was for some extrapolated version of his volition to run the world. But if he's created something that will implement his CEV accurately, hasn't he solved FAI?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-04T01:58:27.006Z · LW(p) · GW(p)

I also didn't consider the possibility that Vhazhar was planning to run the world himself directly. A human just doesn't have the computational capacity to run the world. If a human tried to run the world, there would still be both fortune and misfortune.

There could be less misfortune. A cautious human god who wasn't corrupted by power could plausibly accomplish a lot of good with a few minimal actions. Of course the shaky part is the "cautious" and "not corrupted" part.

Replies from: Vladimir_Nesov, Tyrrell_McAllister
comment by Vladimir_Nesov · 2009-09-04T02:16:45.008Z · LW(p) · GW(p)

Where does the ability to specify complex wishes become distinct from the ability to implement them, though? What are the capabilities of a god with a human mind? If there is a lot of automation for implementing the wishes, how much of the person's preference does this automation anticipate? In what sense does the limitation of a god's mind to being merely human affect the god's capacity to control the world? There doesn't seem to be a natural concept that captures this.

comment by Tyrrell_McAllister · 2009-09-04T14:02:08.884Z · LW(p) · GW(p)

There could be less misfortune.

Okay. I had taken the Prophecy of Doom to be saying that there would no longer be both "luck and misfortune". I can see that it could be read otherwise, though.

comment by CronoDAS · 2009-09-03T20:37:41.131Z · LW(p) · GW(p)

Well, there are at least several obvious fixes that we humans would want to make to the world we live in, but are unable to. For example, we would like to wipe out the malaria parasite that infects humans. The dragon is bad, the world is full of really, really horrible things, and I'd rather just make it stop rather than worry too much about being corrupted by power.

comment by kpreid · 2009-09-03T13:33:31.225Z · LW(p) · GW(p)

I wrote a comment to the effect of “The Sword of Good didn't kill him, and the Sword appears to be a judge of good intentions = Friendliness (though not good reasoning)”, then deleted it on consideration that unfriendliness-through-failures-of-reasoning might be worse than the current state of the world. But "there's going to be a next try" indeed outweighs that. I think.

comment by Emile · 2009-09-03T14:11:08.798Z · LW(p) · GW(p)

Great story!

By coincidence, today I just read The Case for the Empire.

comment by Nubulous · 2009-09-03T11:54:04.821Z · LW(p) · GW(p)

My metaphor lobes appear to be on fire.

comment by roland · 2009-09-03T05:22:03.381Z · LW(p) · GW(p)

Required reading for everyone serving in the army of whatever nation.

Replies from: cabalamat
comment by cabalamat · 2009-09-03T09:45:28.947Z · LW(p) · GW(p)

Depends on whether they want soldiers who think. But yeah, expand the whole thing into a book and it would make a great moral story.

comment by dclayh · 2009-09-03T07:50:34.609Z · LW(p) · GW(p)

I found the ending to be highly telegraphed. No doubt this is partly because I know how the author is likely to think about things, but having the idea of an untrustworthy translation spell introduced in the fourth paragraph, combined with the Excessively Straightforward Names, certainly didn't help. Not to mention the bit of "Oh, let's think about what heroism really entails."

comment by bellisaurius · 2009-09-04T22:00:50.786Z · LW(p) · GW(p)

If you meet the buddha on the road, you must kill him.

The koan really strikes me in this situation. The character in the end accepts that he alone gets to make the final moral decisions for himself, regardless of what the labels are and what his teachings were. Many religious ceremonies are about abject submission, but many are also about the idea of "I freely give myself, and accept the consequences of that submission" and so on.

Although I will add that I completely disagree with the hero. If he really was undecided, the current balance was his best bet. The dark lord's spell would have forced the balance into one that might not be corrected. Kill him, and the war continues, and he can learn which side is truly good by his definition. This seems especially true since the hero kind of accepted the idea of balance early on.

Replies from: RobinHanson, TuviaDulin, Vladimir_Nesov
comment by RobinHanson · 2009-09-06T01:43:15.860Z · LW(p) · GW(p)

I thought his conversion was too quick to be believable - he needed to ask more questions, to have more back and forth in a random walk of opinion change.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-09-06T18:50:24.599Z · LW(p) · GW(p)

Random walks are for agents who have thought through the possibilities and are responding to new information. Hirou's response is far, far more realistic for a human, though perhaps too quick.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-10T18:39:37.339Z · LW(p) · GW(p)

I've had similar experiences myself, and try not to have them again. Evidence builds up behind a wall of denial, and when the dam breaks the flood is loosed.

comment by TuviaDulin · 2012-04-11T18:52:40.241Z · LW(p) · GW(p)

Indeed. His willingness to kill Dolf without asking any questions or making any attempts to verify the Dark Lord's statements just shows that Hirou still hasn't learned anything.

comment by Vladimir_Nesov · 2009-09-05T07:32:04.484Z · LW(p) · GW(p)

There is nothing about the status quo that makes it a preferable option in times of uncertainty; what decides is simply whether the expected value of the intervention falls below or above that of the status quo.

Replies from: AdShea
comment by AdShea · 2010-12-02T22:51:42.316Z · LW(p) · GW(p)

The status quo is preferable when the other option is of unknown goodness and irrevocable.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-12-04T16:28:56.402Z · LW(p) · GW(p)

Value is associated with states of knowledge (about consequences), not with precise outcomes. What you are saying is that uncertainty confers low value, and so is generally less preferable than the (well-known) status quo. This is not generally correct.

Replies from: FAWS
comment by FAWS · 2011-01-09T12:55:08.337Z · LW(p) · GW(p)

But easily changeable outcomes are preferable when there is uncertainty.

comment by TuviaDulin · 2012-04-11T19:29:32.365Z · LW(p) · GW(p)

I'd like to think I would have noticed the moral problems with what the "good" guys were doing on my own, and without the benefit of knowing who the author was. I think I would have, but I'm not totally confident in my Milgram Resistance.

The ending did bother me, though. Why was Hirou willing to believe everything Vhazhar told him without trying to verify it? Why did he kill Dolf instead of accepting that Dolf was simply limited by the moral myopia of his own society, which he clearly was? Maybe exceptionally good people like Vhazhar could see the problems with the status quo, but it wouldn't take an exceptionally evil person to NOT see them, so Dolf wasn't necessarily a bad guy. Couldn't Hirou have looked for another wizard who was willing to volunteer for the process? Or, hell, found some other trustworthy person to become the new god, and let Vhazhar prove his virtue by sacrificing his OWN wizardly ass to fuel the spell? He didn't even ask Vhazhar what his new world would look like; he just decided that Vhazhar's ideas were probably good, and that he could be trusted to not become corrupt.

I guess that ending was the best we could expect from someone like Hirou.

comment by nick012000 · 2010-10-23T16:36:14.650Z · LW(p) · GW(p)

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective. And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic. Genocide, mind-rape, eternal torture, anything.

Not true. If you've got some time to kill, read this thread on The Fanfiction Forum; long story short, a guy who's quite possibly psychopathic writes a story wherein Naruto is turned into a self-centered, hypocritical bastard who happily mindrapes every woman around him, and the people on the forum spend 60-odd pages lambasting him.

Replies from: Nornagest, Eliezer_Yudkowsky
comment by Nornagest · 2011-01-09T10:07:54.595Z · LW(p) · GW(p)

People are a lot more willing to criticize the morality of the story if they didn't find the story itself to be competently written. Notice the amount of social criticism that's been leveled at Twilight.

Seems to work the other way if the story's written to convince people of a moral point, though.

Replies from: Eliezer_Yudkowsky, ikrase
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-09T12:11:42.100Z · LW(p) · GW(p)

I.e., agree with the morals -> don't notice the bad writing?

Replies from: Nornagest
comment by Nornagest · 2011-01-09T20:56:35.271Z · LW(p) · GW(p)

Agree with the morals -> enjoy reading crude stereotypes of your moral opponents. Get enough enjoyment from that and the story's a net positive even if it has no other redeeming qualities.

comment by ikrase · 2013-10-26T14:34:49.126Z · LW(p) · GW(p)

I think proximity also matters. There are no modern romantic heroes, but there are modern heartthrobs with questionable gender politics.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-09T12:10:49.500Z · LW(p) · GW(p)

I don't have permission to view that, says the board. But, just taking a wild guess here, that wouldn't be a Perfect Lionheart fic would it? Because unless the same forumgoers are also lambasting the Bible and David Eddings, one can't help but suspect that it's not the content so much as the writing which triggers the hate.

Replies from: nick012000
comment by nick012000 · 2011-01-09T12:33:16.552Z · LW(p) · GW(p)

Yeah, you have to register to view the board, and yeah, it's the Perfect Lionheart fic. The reason that thread's gotten so many posts, and the story so much negative feeling, though, is that it started off looking good, was well-written (as far as the technical aspects of writing like spelling, grammar, and so on go), and had occasional teases in a scene here and there that it might manage to redeem itself.

If it was simply poorly written it would have been dismissed as just another piece of the sea of shit that makes up 90% of ff.net.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-09T13:00:09.098Z · LW(p) · GW(p)

So Chuunin Exam Day, then? I've never read it, but I've heard of it.

Considering that I was able to identify the author and possibly the exact fic from the information that the morality was being heavily lambasted, may I suggest that readers noticing nonlampshaded evil doesn't actually happen all that often? TV Tropes is good at noticing Moral Dissonance, but literally nowhere else that I've ever heard of. It took a critic on the order of David Brin to point out that Aragorn wasn't democratically elected.

Replies from: ciphergoth, Eugine_Nier
comment by Paul Crowley (ciphergoth) · 2011-01-09T15:13:19.428Z · LW(p) · GW(p)

I think people just treat "it's not evil to be a dictator" as part of the fantasy setting. I'd be more moved by an example in an everyday setting.

Replies from: taryneast
comment by taryneast · 2011-01-09T21:17:48.727Z · LW(p) · GW(p)

Have you read: http://en.wikipedia.org/wiki/Bio_of_a_Space_Tyrant ?

Replies from: ciphergoth, spriteless
comment by Paul Crowley (ciphergoth) · 2011-01-09T22:30:03.836Z · LW(p) · GW(p)

Nope, sorry!

comment by spriteless · 2011-01-10T03:25:53.587Z · LW(p) · GW(p)

Wah, but... how can people not see that Tyrant Hope Hubris becomes evil?

Gur tubfg bs Qernzre cerqvpgf uvf snyy! Gur glenag uvzfrys cbvagf vg bhg uvzfrys juvyr vapbtavgb! Uvf rfgenatrq jvsr gur fnzr, nsgrejneqf! Cvref Nagubal rira anzrq uvz Ubcr Uhoevf!

Anyways, if you can stand Piers Anthony it is an OK read.

Replies from: taryneast
comment by taryneast · 2011-01-10T12:10:28.425Z · LW(p) · GW(p)

Yah agreed. It definitely plays with the theme... which is kinda fun.

I was mainly saying it's an example not in the fantasy setting ;)

It's more along the lines of "If I were king, what would I do... and how would I become king anyways?"

comment by Eugine_Nier · 2011-01-12T01:20:16.121Z · LW(p) · GW(p)

Unfortunately, half the examples of Unfortunate Implications on TV Tropes are places where the work's universe has rules that create problems for currently popular systems of ethics (the implication being that it's wrong to imply such rules might be true), or that otherwise violate prevailing moral fashions.

comment by spriteless · 2009-09-04T18:12:22.236Z · LW(p) · GW(p)

The only fantasy book I've read where something similar happened is King Rat by China Miéville, and it doesn't hit you over the head with it quite so hard. :P

Replies from: CronoDAS
comment by CronoDAS · 2009-09-05T07:39:33.278Z · LW(p) · GW(p)

I can think of a few vaguely similar situations, actually.

Near the end of Final Fantasy X, the characters decisively reject the quest they had been on, and end up using "forbidden" technology to permanently destroy the evil sea monster instead of merely winning a ten-year respite.

The second big twist in Ender's Game also is a bit like this, when the surviving aliens finally figure out how to communicate with humans...

Replies from: Document
comment by Document · 2011-01-26T01:00:24.006Z · LW(p) · GW(p)

Absolution Gap by Alastair Reynolds also had a "quest finally judged a bad idea and abandoned after years of work and sacrifice" ending.

comment by roland · 2009-09-03T05:32:07.660Z · LW(p) · GW(p)

I found two misspellings: 1) Serena 2) Hiro

Replies from: Douglas_Knight, cabalamat, Eliezer_Yudkowsky, gwern, steven0461
comment by Douglas_Knight · 2009-09-03T05:42:41.872Z · LW(p) · GW(p)

Does anyone include, as a step in copyediting, creating a concordance? E.g., running "sort | uniq -c"?
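A minimal sketch of such a pipeline, assuming a plain-text draft in a file called story.txt (the filename and the head cutoff are illustrative, not anyone's actual workflow):

    # One word per line, lowercased, then counted; sorting by count puts
    # the rare words first, which is where one-off misspellings like
    # "Serena" or "Hiro" tend to surface.
    tr -cs "[:alpha:]'" '\n' < story.txt \
      | tr '[:upper:]' '[:lower:]' \
      | sort \
      | uniq -c \
      | sort -n \
      | head -40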

Replies from: thomblake
comment by thomblake · 2009-09-03T13:28:00.078Z · LW(p) · GW(p)

That's brilliant and I shall do it nine times.

Replies from: thomblake
comment by thomblake · 2011-09-12T22:33:47.449Z · LW(p) · GW(p)

update: haven't done it nine times yet. It's still brilliant though.

comment by cabalamat · 2009-09-03T09:44:18.879Z · LW(p) · GW(p)

Also, "rainment" should be "raiment".

comment by gwern · 2009-09-04T11:01:53.030Z · LW(p) · GW(p)

2) Hiro

Personally, I think Eliezer stole the name Hiro from Snow Crash (seriously, he's named 'Hiro Protagonist', so it fits the story perfectly...) and forgot to run the search-and-replace.

comment by steven0461 · 2009-09-03T12:17:03.240Z · LW(p) · GW(p)

Also "Selene" twice.

comment by jaimeastorga2000 · 2012-01-07T07:36:37.543Z · LW(p) · GW(p)

Forgive me if this is a stupid question, but is the opening line just a framing device for a short story with abrupt transitions, or does it mean that this is an actual draft of a book that won't be finished for whatever reason?

Replies from: Anubhav, MBlume
comment by Anubhav · 2012-01-07T08:05:27.573Z · LW(p) · GW(p)

I'd say it means "This thing is more of a story outline than a story, but I can't be bothered to write the book it'd take to tell the whole story." If you'd categorise that as a 'draft'.... well, go ahead.

comment by MBlume · 2012-01-08T06:51:15.306Z · LW(p) · GW(p)

I think it's kind of a "have your cake and eat it too" way to transform the latter into the former.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T01:20:19.612Z · LW(p) · GW(p)

...and, amazingly enough, FictionPress doesn't allow me to include double spaces in my writing. Deal-breaker in my book, so I'm giving up and hosting on yudkowsky.net instead. Can anyone suggest a better place to post in the future?

Replies from: John_Maxwell_IV, eirenicon, thomblake
comment by John_Maxwell (John_Maxwell_IV) · 2009-09-04T04:24:48.613Z · LW(p) · GW(p)

I believe that &nbsp; is the HTML character code for a non-breaking space. It wouldn't be hard to replace all occurrences of a period followed by two spaces with ".&nbsp; " before copying and pasting into FictionPress (see the sketch below). Of course it's possible that FictionPress renders HTML character codes literally (as LessWrong apparently does).

Edit: This might also work.
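For concreteness, a hedged sketch of that replacement as a shell one-liner (draft.txt is a stand-in filename, and this assumes the target site passes HTML entities through rather than escaping them):

    # Turn "period + two spaces" into "period + &nbsp; + normal space" so
    # the wide sentence gap survives HTML's whitespace collapsing. In
    # sed's replacement text a bare & means "the whole match", so the
    # literal ampersand is written as \&.
    sed 's/\.  /.\&nbsp; /g' draft.txt > draft-nbsp.txt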

comment by eirenicon · 2009-09-03T04:26:44.563Z · LW(p) · GW(p)

The Chicago Manual of Style recommends against double spacing. Do you have a particular attachment to it?

Replies from: Kaj_Sotala, Kevin
comment by Kaj_Sotala · 2009-09-03T10:12:00.415Z · LW(p) · GW(p)

I usually despise double spacing. It bloats the length of the text unnecessarily. (Though I do admit that I didn't even notice it in this case.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T13:27:58.854Z · LW(p) · GW(p)

Let me amplify: By "double spaces" I mean two spaces after a period, not double spaces between lines.

Replies from: CronoDAS, eirenicon, thomblake, Kaj_Sotala
comment by CronoDAS · 2009-09-03T18:46:47.991Z · LW(p) · GW(p)

Web browsers automatically condense double spaces to single spaces...

comment by eirenicon · 2009-09-03T14:52:55.737Z · LW(p) · GW(p)

That is also what I meant, and what the CMS discourages. See double spacing at the end of sentences. While it does come down to personal preference, if there is any standard web convention it is toward single spacing.

Replies from: byrnema, Douglas_Knight, billswift, AllanCrossman
comment by byrnema · 2009-09-03T15:42:04.232Z · LW(p) · GW(p)

Regarding double spacing:

In the Old Days, typewriters (and even the first word processors) did not add an extra half space after the period to separate the end of one sentence from the beginning of the next aesthetically. It became convention to leave two spaces after a period, and this was the proper thing to do.

But now that proportional fonts leave the aesthetically "correct amount" of space after a period (something between 1 and 2 spaces), it is incorrect to try to force two spaces.

When I use the words "correct" and "incorrect" I mean in the context of conventional writing. It's up to each person if their writing is a little bit more like a poem than prose, in which case they can bend convention as they wish.

As an expert on what is aesthetic - like everyone else - I'll comment that the FictionPress font does not provide enough of a gap. I judge the font is going for an old-timey typing-in-the-attic-on-the-back-of-scratch-paper aesthetic; not easy to read, but something typists over a certain age might feel nostalgic about.

comment by Douglas_Knight · 2009-09-03T19:14:28.498Z · LW(p) · GW(p)

While it does come down to personal preference

It has consequences. Double spaces after periods cause readers to skim. That is good for many types of text, but I doubt most authors want the effect in their fiction.

(and double line-spacing causes readers to read slowly, but not to read well.)

comment by billswift · 2009-09-03T16:28:36.845Z · LW(p) · GW(p)

No, it does not come down to personal preference, except that the writer's proper preference is to produce more readable writing. In fact, one thing I particularly dislike about HTML is that it (usually) automatically collapses two spaces to one. And conventions are only good when they are better than the alternatives - two spaces help set off a sentence, just as capitalization does, and make text more readable. Web "usability" is also strongly against long blocks of text, which tends to suggest (to me at least) that non-readers (or even anti-readers - witness the popularity of videoblogging and podcasts) have too much influence over web conventions.

Replies from: eirenicon
comment by eirenicon · 2009-09-03T16:38:11.205Z · LW(p) · GW(p)

It does come down to personal preference in choosing what style to follow. For example, while the CMS says you shouldn't double space, the MLA says it's okay. I was taught to double space in high school, but gave it up afterwards, first because I felt it was unaesthetic, and second because I prefer to follow the CMS in most respects.

comment by AllanCrossman · 2009-09-03T15:34:13.249Z · LW(p) · GW(p)

One suspects this is mainly because all extra whitespace is simply ignored in HTML...

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:54:05.695Z · LW(p) · GW(p)

So you use &nbsp; - unless your silly little editor won't let you.

Replies from: dfranke, taw
comment by dfranke · 2009-09-04T01:06:58.702Z · LW(p) · GW(p)

&nbsp; is non-breaking. It'll prevent the browser from breaking the line at what ought to be a good place to break it. If you want to force a wider space after a period than the renderer's default, then use &emsp;.

Replies from: TobyBartels, thomblake
comment by TobyBartels · 2011-09-27T20:16:45.053Z · LW(p) · GW(p)

&nbsp; is non-breaking. It'll prevent the browser from breaking the line at what ought to be a good place to break it.

Not if you only use it once. The second of the two spaces should be normal. I agree that &emsp; is better, however.

comment by thomblake · 2009-09-04T13:22:05.524Z · LW(p) · GW(p)

About time someone said it

comment by taw · 2009-09-04T03:07:45.782Z · LW(p) · GW(p)

It sounds like one of those small quirks that might or might not have some value, but it's probably too small to bother fighting over it. All geeks have a few of those.

comment by thomblake · 2009-09-03T13:33:07.024Z · LW(p) · GW(p)

Ah. I'd missed that as well. I automatically include two spaces after a period, but have been trying to stop it. It's not preferred, especially on the web.

comment by Kaj_Sotala · 2009-09-03T18:31:13.197Z · LW(p) · GW(p)

Ahhh, alright. That's interesting: I suspect it's an English-language convention, as this is the first time that I've heard the term used in such a context. I've never heard anyone even mention the possibility of inserting an extra space after a period, and this includes my Finnish and Swedish teachers back in school.

comment by Kevin · 2009-09-03T07:30:43.225Z · LW(p) · GW(p)

The Chicago Manual of Style isn't for writing intended to be read onscreen. Double space between paragraphs is the accepted convention because it's the easiest to read.

comment by thomblake · 2009-09-03T13:31:35.319Z · LW(p) · GW(p)

Somehow, I'm missing the distinction. What's the stylistic difference between the two?

comment by WanderingHero · 2012-04-28T12:33:51.251Z · LW(p) · GW(p)

Hey, I know you posted this a long time ago, but I found it a few weeks ago on tvtropes and found it very clever. But the ending confuses me, so I'm hoping you'll reply to my message:

Are we supposed to agree with the antagonist? When I first read it I thought "ahah, I get it, it's about people blindly accepting what's put in front of them, so Hirou, representing that, blindly accepted what he said about remaking the world, and we're supposed to think about it and realise he was too gullible" - then reading some of the comments made me wonder if he was supposed to be right.

That disturbed me, but then I wondered: if one had god-like power, would it be a good idea to try to remake the world, or would there be too great a risk of people **ing it up?

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-28T13:31:39.080Z · LW(p) · GW(p)

If the power to remake the world exists and you know how to get it, then the responsibility is already upon you. From then on, if you refuse to act, every evil and wrong thing that happens in the world is your fault.

Replies from: chaosmosis, TheOtherDave
comment by chaosmosis · 2012-04-28T17:33:55.973Z · LW(p) · GW(p)

If you act, and you screw up and destroy the universe or send everyone into everlasting torment, then that is also your fault. WanderingHero was asking whether the expected benefits would outweigh the risks. I think that if god-like power wasn't accompanied by god-like knowledge, it would probably be a very good idea to give up that power.

Replies from: Dolores1984
comment by Dolores1984 · 2012-04-28T17:55:52.076Z · LW(p) · GW(p)

I think I disagree. The arbitrary and unfeeling processes of the universe can probably be outperformed by anything with a shred of empathy and intellect. You'd just want to be really, really careful, and try to create something better than yourself to hand off control to.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-28T18:23:53.661Z · LW(p) · GW(p)

I don't think I'm capable of having that much power and not being tempted to use it recklessly.

I would need to think about it.

comment by TheOtherDave · 2012-04-28T14:25:39.393Z · LW(p) · GW(p)

...though it's worth keeping in mind that the usual connotations of "my fault" don't necessarily apply. For example, if lots of other people also know how to get that power, then it's also equally lots of other people's fault.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2012-04-28T14:45:33.618Z · LW(p) · GW(p)

Yeah, of course. Not to mention the actual direct perpetrators of the evils themselves.

Replies from: TimS
comment by TimS · 2012-04-28T16:37:56.623Z · LW(p) · GW(p)

Law makes a distinction between but-for cause and proximate cause. All proximate causes are but-for causes, but not all but-for causes are proximate causes. The distinction exists to differentiate the effects of one's acts that one is responsible for from the effects that are not one's responsibility.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-28T17:29:56.500Z · LW(p) · GW(p)

Unless you're talking about the act-omission distinction, I don't see how this doesn't blatantly contradict what MarkusRamikin was saying: "every evil and wrong thing that happens in the world is your fault" vs. "effects that are not one's responsibility". But you don't make an argument that the law or the act-omission distinction is justified, so I don't understand what your comment was trying to do. Are you just criticizing the way the legal system works?

Replies from: TimS
comment by TimS · 2012-04-28T17:51:51.373Z · LW(p) · GW(p)

Yes, American law disagrees with the position MarcusRamikin appears to be articulating. Under American law, Alice can do something wrong, that act can harm Bob, and Alice will not be responsible for the harm if her act was not a proximate cause of Bob's injury.

The wikipedia article lays it out pretty well. In the cases the article cites, X erred in operating a boat, damaging a bridge and therefore disrupting the commerce along the river. X was held liable for the damage to the bridge, but not the losses from disruption of the commerce. Even though X was not held (financially) responsible, no one thinks that X did not cause the disruption of the river traffic.

comment by dfranke · 2009-09-03T20:06:15.015Z · LW(p) · GW(p)

I saw what was coming when I got to the bit about the wormarium and Dolf's hyperbolic reaction to it. Are my moral instincts just really warped relative to the norm, or do others agree with me that this was way too obvious?

Replies from: dclayh
comment by dclayh · 2009-09-03T20:31:53.107Z · LW(p) · GW(p)

Yeah, I already wrote above that the ending was telegraphed: I pretty much concluded that the "Lord of Dark" was going to be a nice guy about the time they murdered the first wizard with absolutely no explanation of why he deserved it.

Replies from: dfranke
comment by dfranke · 2009-09-03T20:53:42.718Z · LW(p) · GW(p)

It's a convention of fantasy and science fiction that there can exist sentient races which are, by their very nature, inimical to mankind, and can therefore be justifiably killed on sight. In principle, there's no reason why such creatures can't actually exist. So that scene didn't set off any alarm bells for me. The first thing that made me look askew was that Hirou's company included both a pirate and a thief, and the wormarium was my confirmation.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:48:13.227Z · LW(p) · GW(p)

It's a convention of fantasy and science fiction that there can exist sentient races which are, by their very nature, inimical to mankind, and can therefore be justifiably killed on sight

And by the way, I'm willing to buy that. Hirou's sin is that he didn't actually buy it, as in, pay for the conclusion, if you see what I mean.

Replies from: dfranke
comment by dfranke · 2009-09-03T22:53:59.230Z · LW(p) · GW(p)

No, I don't. Can you clarify?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:59:06.248Z · LW(p) · GW(p)

Hirou didn't bother to verify that the orcs actually were irredeemably evil, which is possible, but you would have to see evidence. Hirou just saw something physically ugly, and his social surroundings expected him to kill them. But the view which he acted-as-if-believed was possible, it simply wasn't true. Arguably my Orthodox Jewish parents committed more blatant mistakes - if far more theoretical mistakes - in endorsing God's murder of the Egyptian firstborn.

comment by thomblake · 2009-09-03T13:43:40.092Z · LW(p) · GW(p)

Reading it at first I was sad because I thought it did not belong on LW, but did not want to downvote it. I was happy that the ending justified its inclusion.

comment by kpreid · 2009-09-03T02:33:43.411Z · LW(p) · GW(p)

Vs Iunmune pna naq jvyy erfgber Nyrx, jul abg Uvebh nf jryy?

[will de-rot13 on request; I don't know what spoiler policy to apply]

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T03:37:27.026Z · LW(p) · GW(p)

Wasn't assuming he was dead - but sure, if he was, then of course.

Replies from: kpreid
comment by kpreid · 2009-09-03T13:27:45.114Z · LW(p) · GW(p)

Well, not dead, but “sleep[ing] until the end of the world” rather suggests that he’s not going to do any more interacting with anyone else, which is (under a principle I've attempted to develop to deal with future personhood/identity/instantiating-people problems) equivalent to him being dead (unless he can make something out of the end of the world).

[I've thought about writing up said principle, but I'm not good at the sort of discussion it would likely prompt, and it's probably too simplistic. Anyone want to see it anyway?]

Replies from: Psy-Kosh
comment by Psy-Kosh · 2009-09-03T19:45:27.544Z · LW(p) · GW(p)

I thought "end of the world" meant "end of the world as it was"... ie, "by the time he wakes back up, the spell will have completed and our friendly neighborhood dark lord will have already started fixing the place up"

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-03T22:52:28.077Z · LW(p) · GW(p)

Yup, that was the intended interpretation.

comment by ArthurRainbow · 2022-01-25T02:25:24.559Z · LW(p) · GW(p)

Please note that the youtube link and Brin's essay are broken links.

comment by [deleted] · 2009-09-04T08:24:11.831Z · LW(p) · GW(p)

In writing it's even simpler - the author gets to create the whole social universe, and the readers are immersed in the hero's own internal perspective. And so anything the heroes do, which no character notices as wrong, won't be noticed by the readers as unheroic. Genocide, mind-rape, eternal torture, anything.

I don't think you give readers enough credit. The author has some influence, but not that much. Some of what appears to be acceptance of the social norms depicted is really just acceptance that the characters live within those norms.

For the influence that does exist, there's a whole body of criticism, controversy, and alternative versions taking on various uses of it. It's so well known, I didn't even realize you were trying to call attention to it. I read the story as straightforward propaganda for your work on an artificial BDFL.

comment by Nominull · 2009-09-04T04:25:10.379Z · LW(p) · GW(p)

I figured out the game that was afoot about a quarter of the way through, but I credit that to the fact that I was trying to write a similar story. Which I may as well abandon now. ;_;

comment by MrCheeze · 2010-12-02T21:21:48.874Z · LW(p) · GW(p)

Loved the story - also the first time I took your strong atheism completely seriously - but I think that one bit where they stab those three sleeping guys went a bit too strongly to the "no, this definitely isn't right" side of things. Although I didn't think about that scene at all when I was trying to figure out which side was the Good side (I took the death of Alek as my main piece of evidence for the "Lord of Dark is Bad" possibility), so that's something.

comment by Unnamed · 2009-12-02T04:00:23.409Z · LW(p) · GW(p)

This matches one interpretation of Jack and the Beanstalk, which has been emphasized in some versions of the tale.

comment by Bindbreaker · 2009-09-07T10:03:01.464Z · LW(p) · GW(p)

I liked the story; that said, the ending seemed obvious to me. This may be a good sign.

comment by cousin_it · 2009-09-03T10:43:42.147Z · LW(p) · GW(p)

At the sentence and paragraph level the story is very well written, but overall I think you'd have made your point better (and it would've fared better as sci-fi) if the ending were more ambiguous. For now it just implies that the fantasy world is American party politics in funny suits: yay to racial equality, boo to the corrupt establishment! The acknowledgements do dispel this impression somewhat, so you could just move them into the story page as Kaj Sotala suggested.

That said, IAWYC completely, and upvoted.

comment by CronoDAS · 2009-09-03T05:02:05.421Z · LW(p) · GW(p)

The ending of your story reminds me of the ending of a certain fantasy series, in which the hero manages to successfully gain near-infinite power by using the Extremely Dangerous MacGuffin of Lots of Power, and uses it to create a parallel universe and banish the Bad Guys into it, so they won't be able to go around being Evil at good people any more. It's a damn shame the guy who got to use the MacGuffin was an Objectivist, though. :P

Yeah, Hirou didn't notice the Moral Dissonance...

Replies from: Psychohistorian
comment by Psychohistorian · 2009-09-03T08:43:15.565Z · LW(p) · GW(p)

I thought of the same series. The "evil" in the two is quite different, but it raises interesting questions as to what degree certain characters would have been justified if they had actually been right.

comment by Randomredditor12345 · 2022-04-24T05:39:47.241Z · LW(p) · GW(p)

So, I'm noticing what you say about the Seder, and having just celebrated Passover myself, I'd like to offer some perspective that you seem not to have been exposed to.

The Egyptians were punished because they were equally complicit in our suffering. There is even a commentary that goes so far as to say that there was something of a revolt, and that Pharaoh was originally opposed to enslaving and breaking us but was ousted from the palace until he capitulated. Further, any officer carrying out his orders was blatantly culpable. They had the option of imitating the Jewish taskmasters and bearing the brunt of the punishment, but they chose either to be prison guards from Stanford or to simply defect as prisoners. Either way, a choice to do the wrong thing.

Also, while all the plagues affected almost all the Egyptians to some degree (and yes, I do mean not all), those who were more culpable for inflicting suffering on the Jews suffered more, and the reverse was true as well. In fact, even according to the opinion that Moshe's adoptive mother remained a gentile and thus an Egyptian, she did not suffer from any of the plagues, because she went out of her way to save a suffering Jewish child. Meanwhile, other Egyptian overseers were forcing parents to cement their children into walls when they fell short of their brick-making quota; later on in the enslavement our status was expanded from that of slaves of the palace to slaves of the common citizen, and it became somewhat commonplace for Egyptians to even hook Jews up to plows and other farming equipment when they wished to give their animals a break. In fact, they were so entrenched that even when the warning of the last plague was taken seriously enough by many of the firstborn to start a civil war to free us (out of self-preservation rather than recognition that they were wrong for how they treated us - otherwise your inevitable follow-up question of "why did they get killed then?" would be a great point), their own fathers fought against them to keep us enslaved.

You might say, "But all that didn't happen, because the exodus didn't happen." To which I would respond: if you want to dismiss that part of the story as fiction, so must you dismiss your classification of God based on that story. If you will judge the God to whom we pledge ourselves based on the exodus story, know that this is also part of that story, partially based upon which we pledge Him our allegiance.

Edit - I'd really hoped that, unlike reddit, people here would explain why they disagree with something rather than just downvote and move on. Whoever has already downvoted me, please explain why; and for anyone who has just read this for the first time, if you disagree or don't like what I'm saying, please explain why. Thank you.