Epilogue: Atonement (8/8)

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-06T11:52:42.000Z · LW · GW · Legacy · 195 comments

(Part 8 of 8 in "Three Worlds Collide")

Fire came to Huygens.

The star erupted.

Stranded ships, filled with children doomed by a second's last delay, still milled around the former Earth transit point.  Too many doomed ships, far too many doomed ships.  They should have left a minute early, just to be sure; but the temptation to load in that one last child must have been irresistible.  To do the warm and fuzzy thing just this one time, instead of being cold and calculating.  You couldn't blame them, could you...?

Yes, actually, you could.

The Lady Sensory switched off the display.  It was too painful.

On the Huygens market, the price of a certain contract spiked to 100%.  They were all rich in completely worthless assets for the next nine minutes, until the supernova blast front arrived.

"So," the Lord Pilot finally said.  "What kind of asset retains its value in a market with nine minutes to live?"

"Booze for immediate delivery," the Master of Fandom said promptly.  "That's what you call a -"

"Liquidity preference," the others chorused.

The Master laughed.  "All right, that was too obvious.  Well... chocolate, sex -"

"Not necessarily," said the Lord Pilot.  "If you can use up the whole supply of chocolate at once, does demand outstrip supply?  Same with sex - the value could actually drop if everyone's suddenly willing.  Not to mention:  Nine minutes?"

"All right then, expert oral sex from experienced providers.  And hard drugs with dangerous side effects; the demand would rise hugely relative to supply -"

"This is inane," the Ship's Engineer commented.

The Master of Fandom shrugged.  "What do you say in the unrecorded last minutes of your life that is not inane?"

"It doesn't matter," said the Lady Sensory.  Her face was strangely tranquil.  "Nothing that we do now matters.  We won't have to live with the consequences.  No one will.  All this time will be obliterated when the blast front hits.  The role I've always played, the picture that I have of me... it doesn't matter.  There's... a peace... in not having to be Dalia Ancromein any more."

The others looked at her.  Talk about killing the mood.

"Well," the Master of Fandom said, "since you raise the subject, I suppose it would be peaceful if not for the screaming terror."

"You don't have to feel the screaming terror," the Lady Sensory said.  "That's just a picture you have in your head of how it should be.  The role of someone facing imminent death.  But I don't have to play any more roles.  I don't have to feel screaming terror.  I don't have to frantically pack in a few last moments of fun.  There are no more obligations."

"Ah," the Master of Fandom said, "so I guess this is when we find out who we really are."  He paused for a moment, then shrugged.  "I don't seem to be anyone in particular.  Oh well."

The Lady Sensory stood up, and walked across the room to where the Lord Pilot stood looking at the viewscreen.

"My Lord Pilot," the Lady Sensory said.

"Yes?" the Lord Pilot said.  His face was expectant.

The Lady Sensory smiled.  It was bizarre, but not frightening.  "Do you know, my Lord Pilot, that I had often thought how wonderful it would be to kick you very hard in the testicles?"

"Um," the Lord Pilot said.  His arms and legs suddenly tensed, preparing to block.

"But now that I could do it," the Lady Sensory said, "I find that I don't really want to.  It seems... that I'm not as awful a person as I thought."  She gave a brief sigh.  "I wish that I had realized it earlier."

The Lord Pilot's hand swiftly darted out and groped the Lady Sensory's breast.  It was so unexpected that no one had time to react, least of all her.  "Well, what do you know," the Pilot said, "I'm just as much of a pervert as I thought.  My self-estimate was more accurate than yours, nyah nyah -"

The Lady Sensory kneed him in the groin, hard enough to drop him moaning to the floor, but not hard enough to require medical attention.

"Okay," the Master of Fandom said, "can we please not go down this road?  I'd like to die with at least some dignity."

There was a long, awkward silence, broken only by a quiet "Ow ow ow ow..."

"Would you like to hear something amusing?" asked the Kiritsugu, who had once been a Confessor.

"If you're going to ask that question," said the Master of Fandom, "when the answer is obviously yes, thus wasting a few more seconds -"

"Back in the ancient days that none of you can imagine, when I was seventeen years old - which was underage even then - I stalked an underage girl through the streets, slashed her with a knife until she couldn't stand up, and then had sex with her before she died.  It was probably even worse than you're imagining.  And deep down, in my very core, I enjoyed every minute."

Silence.

"I don't think of it often, mind you.  It's been a long time, and I've taken a lot of intelligence-enhancing drugs since then.  But still - I was just thinking that maybe what I'm doing now finally makes up for that."

"Um," said the Ship's Engineer.  "What we just did, in fact, was kill fifteen billion people."

"Yes," said the Kiritsugu, "that's the amusing part."

Silence.

"It seems to me," mused the Master of Fandom, "that I should feel a lot worse about that than I actually do."

"We're in shock," the Lady Sensory observed distantly.  "It'll hit us in about half an hour, I expect."

"I think it's starting to hit me," the Ship's Engineer said.  His face was twisted.  "I - I was so worried I wouldn't be able to destroy my home planet, that I didn't get around to feeling unhappy about succeeding until now.  It... hurts."

"I'm mostly just numb," the Lord Pilot said from the floor.  "Well, except down there, unfortunately."  He slowly sat up, wincing.  "But there was this absolute unalterable thing inside me, screaming so loud that it overrode everything.  I never knew there was a place like that within me.  There wasn't room for anything else until humanity was safe.  And now my brain is worn out.  So I'm just numb."

"Once upon a time," said the Kiritsugu, "there were people who dropped a U-235 fission bomb, on a place called Hiroshima.  They killed perhaps seventy thousand people, and ended a war.  And if the good and decent officer who pressed that button had needed to walk up to a man, a woman, a child, and slit their throats one at a time, he would have broken long before he killed seventy thousand people."

Someone made a choking noise, as if trying to cough out something that had suddenly lodged deep in their throat.

"But pressing a button is different," the Kiritsugu said.  "You don't see the results, then.  Stabbing someone with a knife has an impact on you.  The first time, anyway.  Shooting someone with a gun is easier.  Being a few meters further away makes a surprising difference.  Only needing to pull a trigger changes it a lot.  As for pressing a button on a spaceship - that's the easiest of all.  Then the part about 'fifteen billion' just gets flushed away.  And more importantly - you think it was the right thing to do.  The noble, the moral, the honorable thing to do.  For the safety of your tribe.  You're proud of it -"

"Are you saying," the Lord Pilot said, "that it was not the right thing to do?"

"No," the Kiritsugu said.  "I'm saying that, right or wrong, the belief is all it takes."

"I see," said the Master of Fandom.  "So you can kill billions of people without feeling much, so long as you do it by pressing a button, and you're sure it's the right thing to do.  That's human nature."  The Master of Fandom nodded.  "What a valuable and important lesson.  I shall remember it all the rest of my life."

"Why are you saying all these things?" the Lord Pilot asked the Kiritsugu.

The Kiritsugu shrugged.  "When I have no reason left to do anything, I am someone who tells the truth."

"It's wrong," said the Ship's Engineer in a small, hoarse voice, "I know it's wrong, but - I keep wishing the supernova would hurry up and get here."

"There's no reason for you to hurt," said the Lady Sensory in a strange calm voice.  "Just ask the Kiritsugu to stun you.  You'll never wake up."

"...no."

"Why not?" asked the Lady Sensory, in a tone of purely abstract curiosity.

The Ship's Engineer clenched his hands into fists.  "Because if hurting is that much of a crime, then the Superhappies are right."  He looked at the Lady Sensory.  "You're wrong, my lady.  These moments are as real as every other moment of our lives.  The supernova can't make them not exist."  His voice lowered.  "That's what my cortex says.  My diencephalon wishes we'd been closer to the sun."

"It could be worse," observed the Lord Pilot.  "You could not hurt."

"For myself," the Kiritsugu said quietly, "I had already visualized and accepted this, and then it was just a question of watching it play out."  He sighed.  "The most dangerous truth a Confessor knows is that the rules of society are just consensual hallucinations.  Choosing to wake up from the dream means choosing to end your life.  I knew that when I stunned Akon, even apart from the supernova."

"Okay, look," said the Master of Fandom, "call me a gloomy moomy, but does anyone have something uplifting to say?"

The Lord Pilot jerked a thumb at the expanding supernova blast front, a hundred seconds away.  "What, about that?"

"Yeah," the Master of Fandom said.  "I'd like to end my life on an up note."

"We saved the human species," offered the Lord Pilot.  "Man, that's the sort of thing you could just repeat to yourself over and over and over again -"

"Besides that."

"Besides WHAT?"

The Master managed to hold a straight face for a few seconds, and then had to laugh.

"You know," the Kiritsugu said, "I don't think there's anyone in modern-day humanity, who would regard my past self as anything but a poor, abused victim.  I'm pretty sure my mother drank during pregnancy, which, back then, would give your child something called Fetal Alcohol Syndrome.  I was poor, uneducated, and in an environment so entrepreneurially hostile you can't even imagine it -"

"This is not sounding uplifting," the Master said.

"But somehow," the Kiritsugu said, "all those wonderful excuses - I could never quite believe in them myself, afterward.  Maybe because I'd also thought of some of the same excuses before.  It's the part about not doing anything that got to me.  Others fought the war to save the world, far over my head.  Lightning flickering in the clouds high above me, while I hid in the basement and suffered out the storm.  And by the time I was rescued and healed and educated, in any shape to help others - the battle was essentially over.  Knowing that I'd been a victim for someone else to save, one more point in someone else's high score - that just stuck in my craw, all those years..."

"...anyway," the Kiritsugu said, and there was a small, slight smile on that ancient face, "I feel better now."

"So does that mean," asked the Master, "that now your life is finally complete, and you can die without any regrets?"

The Kiritsugu looked startled for a moment.  Then he threw back his head and laughed.  True, pure, honest laughter.  The others began to laugh as well, and their shared hilarity echoed across the room, as the supernova blast front approached at almost exactly the speed of light.

Finally the Kiritsugu stopped laughing, and said:

"Don't be ridicu-"

 

 

 

 

 

 

195 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by CannibalSmith · 2009-02-06T12:10:46.000Z · LW(p) · GW(p)

By the way, how big is a supernova? Does it blast Earth-sized planets to dust in an instant?

Replies from: spriteless
comment by spriteless · 2009-06-09T16:42:36.596Z · LW(p) · GW(p)

Depends on your definition of instant, but a plasma wave traveling through you at the speed of light will kill you before the nerves get the message to you, so yes as far as we're concerned.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-06-09T17:21:56.750Z · LW(p) · GW(p)

I'm thinking about the people on the dark side of a planet.

Replies from: wedrifid, Luke_A_Somers
comment by wedrifid · 2010-12-31T05:25:32.353Z · LW(p) · GW(p)

Is there anyone around now who would care to speculate on that? I'm curious.

comment by Luke_A_Somers · 2011-11-29T23:12:07.384Z · LW(p) · GW(p)

This post was obsolete years before writing. Replacing it with a link to a better answer:

http://lesswrong.com/lw/yc/epilogue_atonement_88/t0d

comment by Thomas · 2009-02-06T12:16:49.000Z · LW(p) · GW(p)

SH are too advanced to be tricked by humans this way. Several hundred years difference wouldn't allow the underdog to "win".

Replies from: None, staticIP
comment by [deleted] · 2009-07-31T21:51:26.028Z · LW(p) · GW(p)

Could people from the Renaissance trick us?

Replies from: MatthewBaker, rkyeun
comment by MatthewBaker · 2011-06-07T07:57:56.441Z · LW(p) · GW(p)

Given a basic knowledge of our civilization... Uninverted is completely correct. A devious human of the renaissance could use mysticism to deceive us.

comment by rkyeun · 2015-03-26T14:06:06.304Z · LW(p) · GW(p)

Religion still exists, so we can be tricked from far further back than the Renaissance.

comment by staticIP · 2012-03-18T02:15:31.760Z · LW(p) · GW(p)

They had blinders as far as lying goes. Their species was simply incapable of it.

comment by Kaj_Sotala · 2009-02-06T12:16:55.000Z · LW(p) · GW(p)

I think the dramatic impact would be stronger without the "Fin".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-06-08T05:20:11.673Z · LW(p) · GW(p)

Upon due consideration: You know, you're right.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-06T12:19:26.000Z · LW(p) · GW(p)

Actually, I think the neutrinos might be enough to kill you. Not sure about this, but Wikipedia alleged that in a supernova, the outer parts of the star are blown away by the neutrinos because that's what gets there first. I don't quite understand how this can be, but even leaving that aside...

I would presume that if the initial light-blast didn't actually eat all the way through the planet, then the ashes following behind it only slightly slower, once they got in front of the night side, would be emitting light at intensity sufficient to vaporize the night side too. So, yeah, everyone dies pretty damn fast, I think. Don't know if the planet's core stays intact for another minute or whatever.

Replies from: snog toddgrass
comment by snog toddgrass · 2020-07-02T06:13:34.821Z · LW(p) · GW(p)

The matter of the star exerts a downward pressure on the electromagnetically interacting particles. The neutrinos at first are just held in by the random walk they do from collisions in the very dense part of the star. The trigger for the release is neutrinos leaving the super-dense random-walk region, allowing it to cool down crazy fast. So when the neutrinos start to emerge, they leave the core of the star at near light speed, the same way photons leave the edge of the star. IIRC, they do not need to be accelerated from collapsing to expanding, but the mass does. (Took physics an eternity ago, so do not quote me.)

comment by Patrick · 2009-02-06T12:21:20.000Z · LW(p) · GW(p)

Nice Dark Knight reference there, I wonder if the confessor ever ran around in clown makeup?

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-10-28T23:15:36.462Z · LW(p) · GW(p)

I just re-read the relevant parts twice, and I'm not seeing it. Anyone?

Replies from: shokwave
comment by shokwave · 2013-12-25T01:39:58.804Z · LW(p) · GW(p)

Was not going to reply until I saw this is actually a month old and not more than three years, so you're in luck.

The Confessor claims to have been a violent criminal, and in Interlude with the Confessor we see the Confessor say this to Akon:

And faster than you imagine possible, people would adjust to that state of affairs. It would no longer sound quite so shocking as it did at first. Babyeater children are dying horrible, agonizing deaths in their parents' stomachs? Deplorable, of course, but things have always been that way. It would no longer be news. It would all be part of the plan."

Contrast with the Joker in Dark Knight:

You know what I noticed? Nobody panics when things go according to plan. Even when the plan is horrifying. If tomorrow I told the press that, like, a gang-banger would get shot, or a truckload of soldiers will be blown up, nobody panics. Because it's all part of the plan. But when I say that one little old mayor will die, well then everybody loses their minds!

One hundred chapters of HPMoR have taught us that Eliezer is totally okay with throwing these references in. I think it's pretty clear (also hilarious, because all of the Joker's plots in Dark Knight were game-theory-esque, tying in with this gigantic Prisoner's Dilemma story).

comment by Andrew_Ducker · 2009-02-06T12:32:55.000Z · LW(p) · GW(p)

Now it's finished, any chance of getting it into EPUB or PDF format?

comment by Anonymous_Coward4 · 2009-02-06T12:52:47.000Z · LW(p) · GW(p)

Still puzzled by the 'player of games' ship name reference earlier in the story... I keep thinking, surely Excession is a closer match?

comment by Russell_Wallace · 2009-02-06T12:54:38.000Z · LW(p) · GW(p)

A type 2 supernova emits most of its energy in the form of neutrinos; these interact with the extremely dense inner layers that didn't quite manage to accrete onto the neutron star, depositing energy that creates a shockwave that blows off the rest of the material. I've seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn't get the chance to actually die of radiation poisoning.

A planet the size and distance of Earth would intercept enough photons and plasma to exceed its gravitational binding energy, though I'm skeptical about whether it would actually vaporize; my guess, for what it's worth, is that most of the energy would be radiated away again. Wouldn't make any difference to anyone on the planet at the time of course.

Well-chosen chapter title, and good wrapup!
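
(A rough back-of-envelope check of the binding-energy claim above, in Python. The ~10^44 J figure for a supernova's combined photon and ejecta output and the textbook Earth values are assumptions of this sketch, not numbers from the story or the comment.)

```python
import math

# Assumed inputs (illustrative only):
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24    # kg
R_earth = 6.371e6    # m
AU = 1.496e11        # m
E_sn = 1e44          # J, rough photon + ejecta output of a core-collapse supernova

# Gravitational binding energy of a uniform-density Earth: 3GM^2 / 5R
E_bind = 3 * G * M_earth**2 / (5 * R_earth)

# Fraction of an isotropic blast intercepted by the planet's cross-section
frac = (math.pi * R_earth**2) / (4 * math.pi * AU**2)
E_intercepted = frac * E_sn

print(f"binding energy  ~ {E_bind:.1e} J")         # ~2.2e32 J
print(f"intercepted     ~ {E_intercepted:.1e} J")  # ~4.5e34 J
print(f"ratio           ~ {E_intercepted / E_bind:.0f}x")
```

On these assumptions the intercepted energy exceeds the binding energy by a couple of hundred times, consistent with the comment's claim, though, as the comment notes, exceeding the binding energy is not the same as actually vaporizing the planet.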

Replies from: gwern
comment by gwern · 2013-11-26T23:53:34.770Z · LW(p) · GW(p)

I've seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn't get the chance to actually die of radiation poisoning.

Randall Munroe just posted on the topic: http://what-if.xkcd.com/73/ "Lethal Neutrinos".

comment by Aleksei_Riikonen · 2009-02-06T13:04:04.000Z · LW(p) · GW(p)

It's somehow depressing that in this story, a former rapist dirtbag saves the world. Such a high score he gets in the end, perhaps making us currently-rather-lazy but not-worse-than-ordinary folks feel we're worse than some such dirtbags can end up being. (It's fair and truthful, though.)

I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag.

Replies from: Alsadius, wedrifid, araneae, AndHisHorse
comment by Alsadius · 2009-10-23T06:55:19.532Z · LW(p) · GW(p)

Why would I possibly feel that way? He was a rapist and a murderer. Crappy circumstances or not, he made that decision. That is not the mark of a victim.

comment by wedrifid · 2009-11-13T23:27:17.288Z · LW(p) · GW(p)

I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag.

He's both. It is what it is, I feel no particular inclination to judge him better or worse on some cosmological scale. I would be glad that he was there when it mattered. Splintering the starline network is a far better solution than just cutting off one link.

comment by araneae · 2010-09-27T18:21:00.842Z · LW(p) · GW(p)

It's not depressing at all.

It's only right that the sort of vile person who would kill and rape a young girl would do the dirty work of killing billions of people to save humanity. It's only depressing if you think of him as a victim.

comment by AndHisHorse · 2013-12-16T09:26:18.743Z · LW(p) · GW(p)

Regardless of his past, if he's been around since before this modern era - a period so long that the tradition of a century being a prerequisite for mastery is well-established - he's had enough time to rewrite his brain many times over with natural processes alone, never mind psychiatric medication, psychological counseling, or other forms of modification. He's had more time for personal growth than any other human that has existed (in the real world).

"People change" is an understatement here. I would go so far as to say that, through incremental change, the Confessor is an entirely different person from the younger man in his history, connected by a long causal chain and a continued possession of the same body, but nothing else.

comment by Aleksei_Riikonen · 2009-02-06T13:08:00.000Z · LW(p) · GW(p)

Speaking of Culture-style ship names (ref. Anonymous Coward above), this story btw inspires good new ones:

"Untranslatable 2" "Big Angelic Power" "We Wish To Subscribe To Your Newsletter" "Big Fucking Edward"

comment by Russell_Wallace · 2009-02-06T13:16:11.000Z · LW(p) · GW(p)
I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag.

Of course not. The victim was the girl he murdered.

That's the point of the chapter title - he had something to atone for. It's what tvtropes.org calls a Heel Face Turn.

Replies from: Articulator
comment by Articulator · 2016-12-15T07:26:30.880Z · LW(p) · GW(p)

And at the same time, they were both victims, as are we all, of human nature. Never let it be said that if you are a victim, you are only a victim.

comment by CannibalSmith · 2009-02-06T13:21:44.000Z · LW(p) · GW(p)

Anyone want to try defining Untranslatable 2?

Replies from: army1987, paradoxius
comment by A1987dM (army1987) · 2012-02-05T00:32:49.801Z · LW(p) · GW(p)

Fuckmunication?

Replies from: rkyeun
comment by rkyeun · 2012-08-27T20:46:23.558Z · LW(p) · GW(p)

Cummunion.

comment by paradoxius · 2013-07-16T04:55:15.202Z · LW(p) · GW(p)

"Mental bonding" is a close approximation. In science fiction, similar systems are referred to as "mind-melding" in Star Trek, although this interaction is not sexual in nature, albeit platonically intimate, and simply "the bond" in Mass Effect, which is a much closer analogue, since the beings who use it do use it for reproduction.

comment by spuckblase · 2009-02-06T13:48:47.000Z · LW(p) · GW(p)

The first installments were pure genius. Then it got kinda lame. The Kiritsugu's words about button pushing et al. have been common knowledge for decades now, and the characters on the ship are surprised??? Come on. I thought you'd think of something better!?

Replies from: rntz
comment by rntz · 2010-07-16T23:38:22.192Z · LW(p) · GW(p)

Today's common knowledge may fade to psychological trivia in a thousand years. If humanity has systematically got its shit together, will this be because everyone understands the biases that let us do terrible (if, arguably, occasionally necessary) things, or because we are simply systematically denied the opportunities to enact them? I hope and expect the former (insofar as I expect humanity's future to be bright at all, which is to say, not much), but I cannot rule out the latter.

comment by AnnaSalamon · 2009-02-06T13:54:05.000Z · LW(p) · GW(p)

Aleksei, I don't know what you think about the current existential risks situation, but that situation changed me in the direction of your comment. I used to think that to have a good impact on the world, you had to be an intrinsically good person. I used to think that the day to day manner in which I treated the people around me, the details of my motives and self-knowledge, etc. just naturally served as an indicator for the positive impact I did or didn't have on global goodness.

(It was a dumb thing to think, maintained by an elaborate network of rationalizations that I thought of as virtuous, much the way many people think of their political "beliefs"/clothes as virtuous. My beliefs were also maintained by not bothering to take an actually careful look either at global catastrophic risks or even at the details of e.g. global poverty. But my impression is that it's fairly common to just suppose that our intuitive moral self-evaluations (or others' evaluations of how good of people we are) map tolerably well onto actual good consequences.)

Anyhow: now, it looks to me as though most of those "good people", living intrinsically worthwhile lives, aren't contributing squat to global goodness compared to what they could contribute if they spent even a small fraction of their time/money on a serious attempt to shut up and multiply. The network of moral intuitions I grew up in is... not exactly worthless; it does help with intrinsically worthwhile lives, and, more to the point, with the details of how to actually build the kinds of reasonable human relationships that you need for parts of the "shut up and multiply"-motivated efforts to work... but, for most people, it's basically not very connected to how much good they do or don't do in the world. If you like, this is good news: for a ridiculously small sum of effort (e.g., a $500 donation to SIAI; the earning power of seven ten-thousandths of your life if you earn the US minimum wage), you can do more expected-good than perhaps 99.9% of Earth's population. (You may be able to do still more expected-good by taking that time and thinking carefully about what most impacts global goodness and whether anyone's doing it.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-06T13:57:38.000Z · LW(p) · GW(p)

Andrew, check back at Three Worlds Collide for PDF version, hope it works for you.

AC, I can't stand Banks's Excession.

Aleksei, you were an SIAI donor - and early in SIAI's history, too. If SIAI succeeds, you will have no right to complain relative to almost anyone else on the planet. If SIAI fails, at least you tried.

comment by spriteless · 2009-02-06T14:05:15.000Z · LW(p) · GW(p)

Untranslatable 2 is the thought sharing sex.

Untranslatable 1 is confusion or distress. Untranslatable 3 is intelligence-enhancing drugs. Untranslatable 4 is forced happiness via Untranslatable 2, possibly happy drugs refined from the chemical process of it. Untranslatable 5 is wisdom inherited from gene-thoughts.

Replies from: wiresnips
comment by wiresnips · 2010-09-30T01:13:55.314Z · LW(p) · GW(p)

if you can translate them, they're hardly untranslatable

Replies from: Elliott, DaFranker
comment by Elliott · 2010-11-14T09:11:42.686Z · LW(p) · GW(p)

Really? In that case, please translate the word "naches" from Yiddish to English in one word.

Replies from: Alicorn, jkaufman, assaf, JackAttack1024
comment by Alicorn · 2010-11-14T14:48:38.452Z · LW(p) · GW(p)

How about "naches"? English: "Why translate when you can steal?"

comment by jefftk (jkaufman) · 2011-10-30T13:01:20.609Z · LW(p) · GW(p)

"pleasure"

The level of translation they were using wasn't all that fancy. They certainly had worse translations than that.

comment by assaf · 2014-03-23T19:16:09.636Z · LW(p) · GW(p)

Easy: contentment.

Replies from: Vaniver
comment by Vaniver · 2014-03-24T01:13:59.273Z · LW(p) · GW(p)

Contentment is insufficient, because it's a specific flavor of contentment, isn't it?

comment by JackAttack1024 · 2015-05-28T03:51:07.322Z · LW(p) · GW(p)

There are many examples of this scenario, both in fact and fiction; an untranslatable word so laden with connotation that it cannot effectively be replaced. Usually, these words represent some core value of their society of origin (reference: the Dwarves' Super-Honor in Eragon). In a way, the fact that they cannot be translated helps convey their meaning, showing their importance and giving them a quality of both simpleness and complexity, as if your brain was meant to have a word for them, as if they were simply a basic part of the universe falling into place. It's a beautiful thing, really.

comment by DaFranker · 2012-08-29T18:04:55.101Z · LW(p) · GW(p)

The problem isn't quite so much "they can be translated" as... to translate them, you need to pause and first explain the concept. There is no existing conceptual token, phrase, meme, word or other sort of direct translation of the message for these Untranslatables, at least in the fiction, because their speaker did not explain them, they simply used the token.

The translation software (presumably) does not understand this stuff, and will not create new explanations where none was given by the speaker it is attempting to translate (again, presumably). For this, you would (presumably) require some form of powerful AI rather than complex algorithms - and the story preamble explicitly declares that AI never worked. I'm assuming this remained true for the other species.

comment by Aleksei_Riikonen · 2009-02-06T14:27:24.000Z · LW(p) · GW(p)

Eliezer, I've indeed been a hard-working Good Guy at earlier times in my life (though probably most of my effort was on relatively useless rather ordinary do-gooder projects), but from this it doesn't follow that my current self would be a Good Guy.

Currently I'm happily (yes, happily) wasting a huge chunk of my time away on useless fun stuff, while I easily could be more productive. It's not that I would be resting after a burn-out either, I've just become more selfish, and don't feel bad about it, except perhaps very rarely and briefly in a mild manner. I like myself and my life a lot, even though I don't currently classify myself as a Good Guy. I won't even feel particularly bad if we run into an existential risk, I think.

(Though being a Good Guy is fun also, and I might make more of a move in that direction again soon -- or not.)

comment by Vizikahn2 · 2009-02-06T14:41:07.000Z · LW(p) · GW(p)

What I actually got from this story is that we shouldn't be selfish as a species. If the good of all species requires that we sacrifice our core humanity, then we should become non-human and be superhappy about it.

comment by AnnaSalamon · 2009-02-06T14:58:00.000Z · LW(p) · GW(p)

"though probably most of my effort was on relatively useless rather ordinary do-gooder projects"

Aleksei, "ordinary do-gooder projects" are relatively useless. That is, they are multiple orders of magnitude less efficient at global expected-goodness production than well thought out efforts to reduce existential risks. If you somehow ignore existential risks, "ordinary do-gooder projects" are even orders of magnitude less efficient than the better charities working on current human welfare, as analyzed by Givewell or by the Copenhagen Consensus.

Enjoy your life, and don't feel guilty if you don't feel guilty; but if you do want to increase the odds that anything of value survives in this corner of the universe, don't focus on managing to give up more of your current pleasure. Focus on how efficiently you use whatever time/money/influence you are putting toward global goodness. Someone who spends seven ten-thousandths of their time earning money to donate to SIAI does ridiculously more good than someone who spends 90% of their time being a Good Person at an ordinary charity. (They have more time left-over to enjoy themselves, too.)

Replies from: Benquo
comment by Benquo · 2013-11-08T20:49:37.429Z · LW(p) · GW(p)

Aagh! How did I never notice this comment until now?! I would have loved to have started internalizing this back in 2009. As it is, I got to a similar place at last weekend's CFAR NY workshop.

comment by Tiiba2 · 2009-02-06T14:58:52.000Z · LW(p) · GW(p)

1) Who the hell is Master of Fandom? A guy who maintains the climate control system, or the crew's pet Gundam nerd?

2) Do you really think the aliens' deal is so horrifying? Or are you just overdramatizing?

Replies from: Auroch, player_03
comment by Auroch · 2010-12-31T04:23:10.122Z · LW(p) · GW(p)

1) The master of the ship's internet-equivalent, probably

comment by player_03 · 2011-10-28T04:50:10.593Z · LW(p) · GW(p)

2) Honestly, I would have been happy with the aliens' deal (even before it was implemented), and I think there is a ~60% chance that Eliezer agrees.

I'm of the opinion that pain is a bad thing, except insofar as it prevents you from damaging yourself. People argue that pain is necessary to provide contrast to happiness, and that pleasure wouldn't be meaningful without pain, but I would say that boredom and slight discomfort provide more than enough contrast.

However, this future society disagrees. The idea that "pain is important" is ingrained in these people's minds, in much the same way that "rape is bad" is ingrained in ours. I think one of the main points Eliezer is trying to make is that we would disagree with future humans almost as much as we would disagree with the baby-eaters or superhappies.

(Edit 1.5 years later: I was exaggerating in that second paragraph. I suspect I was trying too hard to sound insightful. The claims may or may not have merit, but I would no longer word them as forcefully.)

Replies from: Multiheaded, Hul-Gil, JackAttack1024
comment by Multiheaded · 2012-02-04T20:44:33.608Z · LW(p) · GW(p)

I think one of the main points Elizier is trying to make is that we would disagree with future humans almost as much as we would disagree with the baby-eaters or superhappies.

I never had this impression; if anything, I thought that all the things Eliezer mentioned in any detail - changes in gender and sexuality, the arcane libertarian framework that replaces the state and generally all the differences that seem important by the measure of our own history - are still intended to underscore how humanity still operates against a scale recognizable to its past. The aliens are simply unrelated, and that's why dialogue fails.

When faced with such a catastrophic choice, we humans argue whether to use consequentialist or non-consequentialist ethics, whether a utilitarian model should value billions of lives more than the permanent extinction of some of our deepest emotions, etc, etc. To the Superhappies all of this is simply an incomprehensible hellish nightmare; if a human asked them to go ahead with the transformation but leave a fork of the species as a control group (like in Joe Haldeman's Forever War), this would sound to them like a Holocaust survivor asking us to set up a new camp with intensified torture, so it can "fix" something that's been wrong with the human condition.

(This does not imply some deep xenophobia in me; indeed, after thorough thinking, I say that I would've risked waiting the full eight hours for the evacuation - not because I like the alien mode of being so much, but because I find myself unable to form any judgment about it strong enough to outweigh the cost in lives. My utility function here simply runs into a boundary, with 16 billion ~ infinity to a factor of X, where I don't quite understand what value to assign to X)

comment by Hul-Gil · 2012-03-30T05:05:26.275Z · LW(p) · GW(p)

I think that point would make more sense than the point he is apparently actually making... which is that we must keep negative aspects of ourselves (such as pain) to remain "human" (as defined by current specimens, I suppose), which is apparently something important. Either that or, as you say, Yudkowsky believes that suffering is required to appreciate happiness.

I too would have been happy to take the SH deal; or, if not happy, at least happier than with any of the alternatives.

comment by JackAttack1024 · 2015-05-28T04:01:34.242Z · LW(p) · GW(p)

People argue that pain is necessary to provide contrast to happiness

You should read this: http://www.nickbostrom.com/fable/dragon.html

It makes your point well. This is also touched on in HPMOR.

comment by AnnaSalamon · 2009-02-06T14:59:59.000Z · LW(p) · GW(p)

Never mind, I'm an idiot. I somehow read "relatively useless rather than ordinary", even though Aleksei wrote "relatively useless rather ordinary".

comment by Aleksei_Riikonen · 2009-02-06T15:08:50.000Z · LW(p) · GW(p)

Anna, but good that you raised those very important points for the benefit of those readers to whom they might not be familiar :)

comment by billswift · 2009-02-06T15:18:16.000Z · LW(p) · GW(p)

They should have blown up the nova. Hopefully they could have found some way to warn the human race, but that isn't too important given the way Alderson lines run. They not only would have saved humanity with minimal cost of life, they would have saved Babykiller lives too. Sacrificing human lives for baby Babykiller lives is not good.

Replies from: rkyeun
comment by rkyeun · 2012-08-27T20:57:43.242Z · LW(p) · GW(p)

Your solution is to abandon every Babyeater baby who exists now, and every one which will exist in the future, the population of which will grow at an exponential rate as the Babyeaters explore their home starline -- possibly an actual infinite number of babies -- to a hellishly slow death by gradual digestion, for the sake of the however many mere billions of humans currently exist?

Replies from: JackAttack1024
comment by JackAttack1024 · 2015-05-28T04:04:44.865Z · LW(p) · GW(p)

I know that Eliezer Yudkowsky is creating a more optimal world when I see someone on the Internet use the words "mere billions of humans".

comment by Kevin_Reid · 2009-02-06T15:33:44.000Z · LW(p) · GW(p)

What concept in the Babyeaters' language is the humans' "good" translated to? We have been given a concrete concept for their terminal value, but what is theirs for ours, if any?

Replies from: rkyeun, AndHisHorse
comment by rkyeun · 2012-08-27T21:01:19.555Z · LW(p) · GW(p)

Betrayal. Perjury. Theft. Cancer.

comment by AndHisHorse · 2013-12-16T09:33:51.979Z · LW(p) · GW(p)

Probably something ambivalent to them, such as "honor-but-with-possible-mercy", or "science-and-peace-but-with-excessive-compassion".

comment by Ben_Jones · 2009-02-06T15:43:34.000Z · LW(p) · GW(p)

Untranslatable 2 is the thought sharing sex.

Sprite, you are, by definition, wrong.

Replies from: moshez
comment by moshez · 2012-10-24T23:23:43.319Z · LW(p) · GW(p)

"By definition" argument detected in a discussion not about math.

The software was using "untranslatable" as a short hand for "the current version of the software cannot translate a term and so is giving it a numeric designation so you will be able to see if we use it again", probably not even saying "no future version of the software will be able to translate it", not to mention a human who spent non-trivial amount of thought on the topic (in TWC future, there's no AI, which means human thought will do some things no software can do).

comment by Anonymous53 · 2009-02-06T15:49:37.000Z · LW(p) · GW(p)

If you put your ear up to the monitor you can almost hear the wooshing sound from the hundreds of people unsubscribing to this blog...

comment by Alex_Martelli · 2009-02-06T16:01:50.000Z · LW(p) · GW(p)

There is, of course, a rather large random/unknown component in the amortized present value of the amount of good any action of mine is going to do. Maybe my little contributions to Python and other open-source projects will be of some fractional help one day to somebody writing some really important programs -- though more likely they won't. Maybe my buying and bringing food to hungry children will enhance the diet, and thus facilitate the brain development, of somebody who one day will do something really important -- though more likely it won't. Landsburg in http://www.slate.com/id/2034/ argued for assessing the expected values of one's charitable giving conditional on each feasible charity action, then focusing all available resources on that one action -- no matter how uncertain the assessment. However, this optimizes only for the peak of the a posteriori distribution, ignores the big issue of radical (Knightian) uncertainty, etc, etc -- so I don't really buy it (though pondering and debating these issues HAS led me to focus my charitable activities more, as have other lines of reasoning).

comment by Nominull3 · 2009-02-06T16:08:41.000Z · LW(p) · GW(p)

Anonymous: The blog is shutting down anyway, or at least receding to a diminished state. The threat of death holds no power over a suicidal man...

Replies from: Articulator
comment by Articulator · 2016-12-15T07:30:50.361Z · LW(p) · GW(p)

This comment, archaeologically excavated in the future, amuses me.

comment by Thom_Blake · 2009-02-06T16:18:02.000Z · LW(p) · GW(p)

Anonymous, that sound you hear is probably people rushing to subscribe. http://www.rifters.com/crawl/?p=266 - note the comments.

comment by Tiiba2 · 2009-02-06T17:30:06.000Z · LW(p) · GW(p)

Untranslatable 2: The frothy mixture of lube and fecal matter that is sometimes the byproduct of anal sex.

comment by Billy_Brown · 2009-02-06T17:32:22.000Z · LW(p) · GW(p)

Good story, and a nice illustration of many of the points you’ve previously made about cross-species morality. I do find it a bit disturbing that so many people think the SH offer doesn’t sound so bad – not sure if that’s a weakness in the story, commenter contrarianism, or a measure of just how diverse human psychology already is.

The human society looks like a patched-up version of Star Trek’s bland liberal utopianism, which I realize is probably for the convenience of the story. But it’s worth pointing out that any real society with personal freedom and even primitive biotech is going to see an explosion of experimentation with both physical and mental modifications – enforcing a single collective decision about what to do with this technology would require a massive police state or universal mind control. Give the furries, vampire-lovers and other assorted xenophiles a few generations to chase their dreams, and you’re going to start seeing groups with distinctly non-human psychology. So even if we never meet “real” aliens, it’s quite likely that we’ll have to deal with equally strange human-descended races at some point.

I’ll also note that, as is usually the case with groups that ‘give up war’, the human response is crippled by their lack of anything resembling military preparedness. A rational but non-pacifist society would be a lot less naïve in their initial approach, and a lot more prepared for unpleasant outcomes - at a minimum they’d use courier boats to keep the exploration vessel in contact with higher command, which would let them start precautionary evacuations a lot sooner and lose far fewer people. But the tech in the story massively favors the defense, to the point that a defender who is already prepared to fracture his starline network if attacked is almost impossible to conquer (you’d need to advance faster than the defender can send warnings of your attack while maintaining perfect control over every system you’ve captured). So an armed society would have a good chance of being able to cut itself off from even massively superior aliens, while pacifists are vulnerable to surprise attacks from even fairly inferior ones.

comment by a_soulless_automaton · 2009-02-06T17:50:04.000Z · LW(p) · GW(p)

Tiiba: Somewhere between a Gundam nerd and a literature professor, I expect. Since the main real differences between the two in our current world are 1) lit profs get more cultural respect 2) people actually enjoy Gundam, the combination makes a fair amount of sense.

comment by ad2 · 2009-02-06T18:02:56.000Z · LW(p) · GW(p)

It's somehow depressing that in this story, a former rapist dirtbag saves the world.

Why is that depressing?

And if the good and decent officer who pressed that button had needed to walk up to a man, a woman, a child, and slit their throats one at a time, he would have broken long before he killed seventy thousand people.

I have my doubts about that. If he could do it seven times, he could do it seventy thousand times. Since when was it harder for a killer to kill again?

Replies from: TruthOrWar, AndHisHorse
comment by TruthOrWar · 2010-09-26T06:01:43.551Z · LW(p) · GW(p)

When the man would think that 7 deaths are worth it, but perhaps 70,000 are too many.

comment by AndHisHorse · 2013-12-16T09:43:54.443Z · LW(p) · GW(p)

I think that the relationship between (number of deaths) and (amount of despair/psychological impact) isn't linear, and differs depending on the psychological proximity that one has to the act of killing. For a very abstract example, let's say that killing in person has a square-root relationship with psychological impact; killing 10,000 people is about ten times as psyche-breaking as killing 100 people. Even that is probably inexact for small numbers; the multiplicative difference between killing 1 person vs 100 people, and killing 100 people vs 10,000 people, might well be different. Killing by button, however, may have a logarithmic relationship: it's only twice as bad to kill 1,000,000 people as it is to kill 1,000.

Additionally, consider why such a good and decent officer might kill: because in the moment, he is convinced of the righteousness of his cause. He begins full of fervor, but as the act continues, he may grow weary, or the hormones which contributed to his enthusiasm may wear off, as the killing stretches long into the night. He may question if killing this next person is strictly necessary, or if maybe, just maybe, he could stop at 69,000, or let that child live while killing everyone after him.

I don't doubt that there are killers for whom killing again is easier - psychopaths, certainly, and relatively psychologically normal people who are convinced of the inhumanity of their enemies - but we are talking about a good and decent officer, killing civilians for the greater good. There are some similarities, but there are also a great many differences.

Replies from: JackAttack1024
comment by JackAttack1024 · 2015-05-28T04:30:52.733Z · LW(p) · GW(p)

It could actually be the other way around-- an exponential decay. You would be horrified by killing one person, but as the numbers grow, the killings get more impersonal and therefore easier. However, killing a billion people one at a time would still hurt as much as killing one person times a billion.

Actually, it's probably more of a twisted, jumbled mess of a correlation that no one has the time, resources, or heart to untangle.

Actually, it's probably more of a... hold on.

[EDIT: I had originally made a very detailed graph out of characters, but it didn't format correctly when I posted, so...]

There! A skewed S-curve with a negative exponential progression!

Replies from: Lumifer, VoiceOfRa
comment by Lumifer · 2015-05-28T15:05:52.923Z · LW(p) · GW(p)

You would be horrified by killing one person, but as the numbers grow, the killings get more impersonal and therefore easier.

"A single death is a tragedy; a million deaths is a statistic" -- usually attributed to Stalin.

comment by VoiceOfRa · 2015-06-02T04:20:38.070Z · LW(p) · GW(p)

However, killing a billion people one at a time would still hurt as much as killing one person times a billion.

Probably not, after the first ten you probably stop feeling anything. After the first twenty you start comparing their "performance" dying. By a hundred you're probably coming up with creative means of execution.

comment by Nominull3 · 2009-02-06T18:52:23.000Z · LW(p) · GW(p)

So I guess Lord Administrator Akon remains anesthetized until the sun roasts him to death? I can't decide if that's tragic or merciful, that he never found out how the story ended.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-12-16T09:45:27.478Z · LW(p) · GW(p)

For some reason, on my first reading, I had assumed that he was given a fatal dose of anesthesia, making it a painless but swift execution.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-06T19:27:02.000Z · LW(p) · GW(p)

Nominull, neither Akon, the Lord Programmer, nor the Xenopsychologist seems to be appearing in this section.

Billy Brown:

Give the furries, vampire-lovers and other assorted xenophiles a few generations to chase their dreams, and you're going to start seeing groups with distinctly non-human psychology.

WHY HAVEN'T I READ THIS STORY?

Replies from: garethrees, player_03, Catnip
comment by garethrees · 2010-05-13T09:07:35.016Z · LW(p) · GW(p)

Stanislaw Lem, "The Twenty-First Voyage of Ijon Tichy", collected in "The Star Diaries".

Replies from: Tamfang
comment by Tamfang · 2010-08-14T04:55:22.076Z · LW(p) · GW(p)

hm, is "Tichy" the Polish word for 'peaceful'?

Replies from: MaDeR, IlyaShpitser
comment by MaDeR · 2011-04-05T20:17:54.991Z · LW(p) · GW(p)

Nah. "Tichy" is not existing word in Polish. Hovever, there is very similar word "cichy" that means "silent", low volume sound. Probably not accidental.

Replies from: StaplerGuy
comment by StaplerGuy · 2012-04-01T00:25:47.754Z · LW(p) · GW(p)

Lem actually mentions that at some point... while I don't have my copy of Peace on Earth with me, there's that scene where Tichy's picking out postcards and drawing sketches on them. He draws a mouse on a postcard with an owl and says something along the lines of "And how does a mouse behave around an owl? It is quiet, quiet as a mouse." and then comments on his name.

comment by IlyaShpitser · 2014-03-24T10:24:13.894Z · LW(p) · GW(p)

Tichy is Russian for quiet. "Tichy Okean" is Russian for "Pacific Ocean."

comment by player_03 · 2011-10-28T05:10:56.464Z · LW(p) · GW(p)

Give the furries, vampire-lovers and other assorted xenophiles a few generations to chase their dreams, and you're going to start seeing groups with distinctly non-human psychology.

WHY HAVEN'T I READ THIS STORY?

Because you haven't had time to read all the Orion's Arm stories, probably. (Details)

comment by Catnip · 2011-12-17T11:54:54.151Z · LW(p) · GW(p)

I would recommend "Down and Out in the Magic Kingdom" by Cory Doctorow. A wonderful insight into transhumanism. Furries, a social structure based on Facebook, etc. Also, there is a Cure for Death. The book is available for free on the author's site.

comment by Wei_Dai2 · 2009-02-06T19:54:54.000Z · LW(p) · GW(p)

But the tech in the story massively favors the defense, to the point that a defender who is already prepared to fracture his starline network if attacked is almost impossible to conquer (you’d need to advance faster than the defender can send warnings of your attack while maintaining perfect control over every system you’ve captured). So an armed society would have a good chance of being able to cut itself off from even massively superior aliens, while pacifists are vulnerable to surprise attacks from even fairly inferior ones.

I agree, and that's why in my ending humans conquer the Babyeaters only after we develop a defense against the supernova weapon. The fact that the humans can see the defensive potential of this weapon, but the Babyeaters and the Superhappies can't, is a big flaw in the story. The humans sacrificed billions in order to allow the Superhappies to conquer the Babyeaters, but that makes sense only if the Babyeaters can't figure out the same defense that the humans used. Why not?

Also, the Superhappies' approach to negotiation made no game theoretic sense. What they did was, offer a deal to the other side. If they don't accept, impose the deal on them anyway by force. If they do accept, trust that they will carry out the deal without trying to cheat. Given these incentives, why would anyone facing a Superhappy in negotiation not accept and then cheat? I don't see any plausible way in which this morality/negotiation strategy could have become a common one in Superhappy society.

Lastly, I note that the Epilogue of the original ending could be named Atonement as well. After being modified by the Superhappies (like how the Confessor was "rescued"?), the humans would now be atoning for having forced their children suffer pain. What does this symmetry tell us, if anything?

Replies from: accolade
comment by accolade · 2016-01-21T22:56:45.340Z · LW(p) · GW(p)

why would anyone facing a Superhappy in negotiation not accept and then cheat?

The SH cannot lie. So they also cannot claim to follow through on a contract while plotting to cheat instead.

They may have developed their negotiation habits only facing honest, trustworthy members of their own kind. (For all we know, this was the first Alien encounter the SH faced.)

comment by Random_Passerby · 2009-02-06T20:51:49.000Z · LW(p) · GW(p)

Tiiba said:

Untranslatable 2: The frothy mixture of lube and fecal matter that is sometimes the byproduct of anal sex.

Then why didn't it just translate it as "santorum"?

comment by Cabalamat2 · 2009-02-06T22:37:27.000Z · LW(p) · GW(p)

Then the part about 'sixteen billion' just gets flushed away. And more importantly - you think it was the right thing to do. The noble, the moral, the honorable thing to do.

Like eating babies, then.

Aleksei: I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag.

He was who he was. Labelling him "victim" or "dirtbag" or whatever says nothing about what he was, but a lot about the person doing the labelling.

Russell: Of course not. The victim was the girl he murdered.

If one person is a victim, it doesn't follow that another person was not.

comment by Nikhil · 2009-02-06T23:22:58.000Z · LW(p) · GW(p)

Super! I read only as the installments came (even though I desperately wanted to download the pdf) so I could think about it longer.

Wouldn't it be fun to get 3(+) groups together, have each create value systems and traditions for itself subject to certain universals, and stage three-way negotiations of this sort? Planning, participating, and evaluating afterward could be fascinating.

comment by Nickolai_Leschov · 2009-02-06T23:48:34.000Z · LW(p) · GW(p)

Nikhil: Good idea. I've been also thinking about the best way to utilise computer technology for arranging that sort of role-playing game.

comment by Nicholas2 · 2009-02-07T02:19:59.000Z · LW(p) · GW(p)

I enjoyed reading this story, but I would like to point out what I see as a grim possible future for humanity even after shutting down the Huygens starline. As I understand it, the Superhappies have a very accelerated reproduction rate, among other things, which in certain circumstances could be as low as a 20-hour doubling time, ship crew and all. It's hard to pinpoint the doubling time for solar system/starline colonization, though it is likely related to the reproduction doubling time; but with the conservative estimate that the Superhappies have colonized/explored at least 8 systems in the 20 years (as the stars count time) they have been in space, that would give them about a 6-year doubling time.

There are about 400 billion stars in the galaxy. While this may be a lot, it is only a mere 39 doubling cycles to full colonization, and an additional 39 (78 total) to full colonization of the next 400 billion galaxies. We have a range of somewhere between 780 hours (based on the 20-hour doubling speed), or about 32 and a half days, and a more respectable yet still short 234 years (based on the conservative 6-year doubling time estimate) until the whole galaxy has been explored by the Superhappies, and only double that time if we are speaking of an area much, much larger than the observable universe.

It is safe to say that this is a strong upper bound on the amount of time it would take the Superhappies to rediscover humanity, and that time decreases significantly due to anything that would raise the chance of discovery above pure random chance, such as better understanding of starline topography, better inter-starline scanning, and additional novas that humanity chooses to investigate. So I think the sad fact of the matter is that this victory buys only a little time, and considering the advancements in immortality, I think the vast majority of humans would be around to see the return of the Superhappies to bestow upon them their gift.
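
(The doubling arithmetic above is easy to check; here is a minimal sketch in Python, using the commenter's assumed figures of roughly 400 billion stars per galaxy and 20-hour versus 6-year doubling times, none of which come from the story itself.)

```python
import math

stars_per_galaxy = 4e11
doublings_to_cover_galaxy = math.ceil(math.log2(stars_per_galaxy))  # 39

# 20-hour doubling time (converted to years) vs. the conservative 6-year estimate
for doubling_time_years in (20 / (24 * 365), 6):
    total_years = doublings_to_cover_galaxy * doubling_time_years
    print(f"doubling time {doubling_time_years:.4g} yr "
          f"-> ~{total_years:.3g} years for ~{2**doublings_to_cover_galaxy:.2e} systems")
```

This reproduces the ~32.5 days (about 0.089 years) and ~234 years quoted in the comment.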

Replies from: PetjaY
comment by PetjaY · 2016-10-09T09:57:32.583Z · LW(p) · GW(p)

True, but on the other hand humanity has been left alone for millions of years, so the odds of some species conquering the universe just after humans accidentally happen to meet them (while they are still very limited in size) seem low. If there were nothing stopping such expansions, I would've expected to see some species conquer the universe millions or billions of years ago.

comment by Nicholas2 · 2009-02-07T02:24:48.000Z · LW(p) · GW(p)

Please excuse the spacing on my previous post, Never quite sure how line breaks end up in various comment systems.

comment by Wei_Dai2 · 2009-02-07T02:55:35.000Z · LW(p) · GW(p)

Nicholas, suppose Eliezer's fictional universe contains a total of 2^(10^20) star systems, and each starline connects two randomly selected star systems. With a 20-hour doubling speed, the Superhappies, starting with one ship, can explore 2^(t*365*24/20) random star systems after t years. Let's say the humans are expanding at the same pace. How long will it take, before humans and Superhappies will meet again?

According to the birthday paradox, they will likely meet after each having explored about sqrt(2^(10^20)) = 2^(5×10^19) star systems, which will take 5×10^19/(365×24/20) or approximately 10^17 years to accomplish. That should be enough time to get over our attachment to "bodily pain, embarrassment, and romantic troubles", I imagine.
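
A minimal sketch of that estimate, worked in log2 space since 2^(10^20) is far too large to represent directly (the star count and 20-hour doubling time are the assumptions used in this thread):

```python
total_systems_log2 = 1e20                  # universe holds 2^(10^20) star systems
doublings_per_year = 365 * 24 / 20         # 20-hour doubling -> 438 doublings/year

# Birthday paradox: two random explorers likely collide once each has
# visited about sqrt(N) systems, i.e. 2^(total_systems_log2 / 2) of them.
explored_log2 = total_systems_log2 / 2     # 5e19 doublings' worth of exploration

years_until_meeting = explored_log2 / doublings_per_year
print(f"~{years_until_meeting:.1e} years") # ~1.1e17 years
```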

comment by JamesAndrix · 2009-02-07T06:31:54.000Z · LW(p) · GW(p)

Wei Dai: Given these incentives, why would anyone facing a Superhappy in negotiation not accept and then cheat? I don't see any plausible way in which this morality/negotiation strategy could have become a common one in Superhappy society.

Perhaps they evolved to honestly accept, and then find overwhelming reasons to cheat later on, and this is their substitute for deception. Maybe they also evolved to give not-quite acceptable options, which the other party will accept, and then cheat on to some degree.

So the true ending might be a typical superhappy negotiation, if their real solution were something we would represent as "We absolutely have to deal with the babyeaters. You child-abusers are awful, but we'll let you exist unmodified as long as you never, ever sextalk us again."

And because they evolved expecting a certain amount of cheating, this consciously manifested itself as the unacceptable offer they made.

Replies from: magnus-anderson
comment by Magnus Anderson (magnus-anderson) · 2024-04-22T17:45:54.656Z · LW(p) · GW(p)

I think that they possibly evolved to have Untranslatable 2 whenever they disagreed, which would perhaps resolve this.

Being unable to do so with another species, they expected a species which cannot have Untranslatable 2 to be unable to come to a rational agreement.

comment by Abigail · 2009-02-07T12:29:02.000Z · LW(p) · GW(p)

"So does that mean," asked the Master, "that now your life is finally complete, and you can die without any regrets?"

Well. That is indeed ridiculous. It fails (I think) to realise the Lady Sensory's lesson that she no longer needs to be the person she always thought she needed to be. I am moved to quote the Quaker Isaac Penington: "The end of words is to bring men to the knowledge of things beyond what words can utter". The Master of Fandom's words are meaningless: an ideal of what people imagine they should be, rather than how things actually are.

comment by Robin_Hanson2 · 2009-02-07T16:03:36.000Z · LW(p) · GW(p)

Clearly, Eliezer should seriously consider devoting himself more to writing fiction. But it is not clear to me how this helps us overcome biases any more than any fictional moral dilemma. Since people are inconsistent but reluctant to admit that fact, their moral beliefs can be influenced by which moral dilemmas they consider in what order, especially when written by a good writer. I expect Eliezer chose his dilemmas in order to move readers toward his preferred moral beliefs, but why should I expect those are better moral beliefs than those of all the other authors of fictional moral dilemmas? If I'm going to read a literature that might influence my moral beliefs, I'd rather read professional philosophers and other academics making more explicit arguments. In general, I better trust explicit academic argument over implicit fictional "argument."

Replies from: staticIP
comment by staticIP · 2012-03-18T02:21:40.493Z · LW(p) · GW(p)

Morals are axioms. They're ultimately arbitrary. Relying on arguments with logic and reason for deciding the axioms of your morals is silly; go with what feels right. Then use logic and reason to best actualize those beliefs. Try to trace morality too far down and you'll realize it's all ultimately pointless, or at least there's no single truth to the matter.

Replies from: Incorrect, TheOtherDave, wnoise
comment by Incorrect · 2012-03-18T02:23:00.532Z · LW(p) · GW(p)

Try to trace morality too far down and you'll realize it's all ultimately pointless, or at least there's no single truth to the matter.

Why care? It all adds up to normalcy.

If there is a physical law preventing me from caring about things once I realize they are arbitrary in certain conceptual frameworks please enlighten me on it.

Replies from: staticIP
comment by staticIP · 2012-03-19T00:59:03.453Z · LW(p) · GW(p)

I'm not suggesting that any emotion should be attached to the lack of a great truth or true indisputable morals; I'm simply stating the obvious.

comment by TheOtherDave · 2012-03-18T02:45:30.306Z · LW(p) · GW(p)

Morals can be axioms, I suppose, but IME what many of us have as object-level "morals" are instead the sorts of cached results that could in principle be derived from axioms. Often, those morals are inconsistent with one another; in those cases using logic and reason to actualize them leads at best to tradeoffs, more often to self-defeating cycles as I switch from one so-called "axiom" to another, sometimes to utter paralysis as these "axioms" come into conflict.

An alternative to that is to analyze my own morality and edit it (insofar as possible) for consistency.

You're welcome to treat your moral instincts as ineluctable primitives if you wish, of course, but it's not clear to me that I ought to.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2012-08-14T18:45:41.670Z · LW(p) · GW(p)

it's not clear to me that I ought to.

The desire to have a set of morals that derive from consistent axioms can be considered an "ineluctable" as well. It's simply that your preference to have consistent morals in some cases overrides your other ineluctable preferences...and this conflict is another instance of the paralysis you mentioned.

The morals are indeed cached results...they are the best approximation of the morals that would have been most useful for propagating your genes in the ancestral environment that the combination of evolution and natural selection could come up with.

comment by wnoise · 2012-03-18T04:52:03.728Z · LW(p) · GW(p)

Morals are modeled as axioms in certain formulations.

comment by a_soulless_automaton · 2009-02-07T16:49:08.000Z · LW(p) · GW(p)

Robin, fiction also has the benefit of being more directly accessible; i.e., people who would not or could not read explicit academic argument can read a short story that grapples with moral issues and get a better sense of the conflict than they would otherwise. Even with the extremely self-selected audience of this blog, compare the comments the story got vs. many other posts.

And while of course the story was influenced by Eliezer's beliefs, the amount of arguing about the endings suggests that it was not so cut and dried as simply "moving readers toward his beliefs".

comment by Maglick2 · 2009-02-07T17:05:46.000Z · LW(p) · GW(p)

I thought the story worked very well as a parable and broke down as it expanded into more conventional fiction. But I found the scenario a very vivid way to imagine some of EY's issues. A gift for metaphor is the surest sign of intelligence I know. Based on this story and the starblinker, I'd agree his best chance of hitting the bestseller list as he plans may be on the fiction side.

As for academic work, I'd love to hear about the last piece RH read that made him question his views or change his mind about something. The links here all seem to reinforce them.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-07T17:21:49.000Z · LW(p) · GW(p)

Robin, that's one reason I first wrote up my abstract views on the proposition of eliminating pain, and then I put up a fictional story that addressed the same issue.

But what first got me thinking along the same lines was watching a certain movie - all the Far arguments I'd read up until that point hadn't moved me; but watching it play out in an imaginary Near situation altered my thinking. Pretty sure it's got something to do with there being fewer degrees of emotional freedom in concrete Near thinking versus abstract propositional Far thinking.

I think that so long as I lay my cards plainly on the table outside the story, writing the story should fall more along the lines of public service to understanding, and less along the lines of sneaky covert hidden arguments. Do I need to remark on how desperately important it is, with important ideas, to have some versions that are as accessible as possible? The difficulty is to do this without simply flushing away the real idea and substituting one that's easier to explain. But I do think - I do hope - that I am generally pretty damned careful on that score.

Maglick, the last piece RH wrote that changed my mind (as in causing me to alter my beliefs on questions I had previously considered) was the "Near vs. Far" one.

comment by Anonymous_Coward4 · 2009-02-07T17:45:10.000Z · LW(p) · GW(p)

AC, I can't stand Banks's Excession.

Interesting, and I must admit I am surprised.

Regardless of personal preferences though... it seems the closest match for the topic at hand. But hey, it's your story...

"Excession; something excessive. Excessively aggressive, excessively powerful, excessively expansionist; whatever. Such things turned up or were created now and again. Encountering an example was one of the risks you ran when you went a-wandering..."

comment by RobinHanson · 2009-02-07T17:57:00.000Z · LW(p) · GW(p)

Eliezer, I didn't find your explicit arguments persuasive, nor even clear enough to be worth an explicit response. The fact that you yourself were persuaded of your conclusion by fiction does not raise my estimate of its quality. I don't think readers should much let down their guard against communication modes where sneaky persuasion is more feasible simply because the author has made some more explicit arguments elsewhere. I understand your temptation to use such means to persuade given that there are readers who have let down their guard. But I can only approve of that if I think your conclusions are worth such pushing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-07T18:12:00.000Z · LW(p) · GW(p)

Robin, it looks to me like we diverged at an earlier point in the argument than that. As far as I can tell, you're still working in something like a mode of moral realism/externalism (you asked whether the goodness of human values was "luck"). If this is the case, then the basic rules of argument I adopted will sound to you like mere appeals to intuition. I'm not sure what I could do about this - except, maybe, trying to rewrite and condense and simplify my writing on metaethics. It is not clear to me what other mode of argument you thought I could have adopted. So far as I know, trying to get people to see for themselves what their implicit rightness-function returns on various scenarios, is all there is and all there can possibly be.

comment by Psy-Kosh · 2009-02-07T18:13:00.000Z · LW(p) · GW(p)

Robin: Well, to be fair, it was written well enough that lots of us are arguing about who was actually right.

i.e., several of us seem to be taking the side that the Normal Ending was the better one, at least compared to the True Ending.

So it's not as if we were manipulated into taking the position that Eliezer seemed to be advocating.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-07T18:27:00.000Z · LW(p) · GW(p)

Also - explaining complex beliefs is a fantastically difficult enterprise. To consider fiction as just a way of "sneaking things past the unwary reader's guard" is selling it far short. It verges, I fear, on too much respect for respectability.

I tried to make the Superhappy position look as appealing as possible - show just how strange our position would look to someone who didn't have it, ask whether human children might be innocent victims, depict the human objections as overcomplicated departures from rationality. Of course I wanted my readers to feel Akon's helplessness. But to call that "sneaking things past someone's guard" is a little unfair to the possibilities of fiction as a vehicle for philosophy, I should think.

I'm still conflicted and worried about the ethics of writing fiction as a way of persuading people of anything, but conflict implies almost-balance; you don't seem to think that there's much in the way of benefit.

comment by Nicholas_"Indy"_Ray · 2009-02-07T20:04:00.000Z · LW(p) · GW(p)

Wei Dai, sure, I too could invent numbers large enough to make any calculation give me the result I want. But as it stands, I think that 2^(10^20) is impractically huge for any universe, especially one that seems to be based on our own. Also, I think it's hard to imagine the starline topography being completely random across the whole universe. So I stand by my stance that it is difficult to imagine a universe where the super happies do not come across the humans again in a relatively short period of time. And I figure the point of fiction is that you're trying to convince your readers of a consistent world where the story takes place.

comment by Thom_Blake · 2009-02-07T20:08:00.000Z · LW(p) · GW(p)

EY, but you are a moral realist (or at least a moral objectivist, which ought to refer to the same thing). There's a fact about what's right, just like there's a fact about what's prime or what's baby-eating. It's a fact about the universe, independent of what anyone has to say about it. If we were human' we'd be moral' realists talking about what's right'. ne?

comment by RobinHanson · 2009-02-07T20:41:00.000Z · LW(p) · GW(p)

Eliezer, academic philosophy offers exemplary formats and styles for low-sneak ways to argue about values.

comment by Anonymous53 · 2009-02-07T22:19:00.000Z · LW(p) · GW(p)

Robin:
Eliezer's philosophy of fun, and how it relates to the human value system, was not grounded enough to be intelligible?

Eliezer:
According to which criterion is it balanced? How likely is it that extremely positive and strong feedback balances the negative side-effect of manipulating people? Did you just justify the use of fiction by feeling conflicted about it?
As a side note, I'll comment that so far I have no reason to expect your story to be an excellent introduction to your ideas. It does show several ideas well, but readers not familiar with your writing will easily miss a lot, and notice that you've used a lot of insider-speak. I have no reason to expect it to be terrible either.

Thom Blake:
Whatever is true of human rightness might not look much like anything individual humans value, but it ought to address it all somehow. I'd like to hear if there is a reason to expect human rightness to be in some sense coherent, and if there is, I'd like to understand in what sense. I don't remember off the top of my head any posts addressing this.

comment by Wei_Dai2 · 2009-02-07T22:43:00.000Z · LW(p) · GW(p)

Robin, what is your favorite piece of academic philosophy that argues about values?

Nicholas, our own universe may have an infinite volume, and it's only the speed of light that limits the size of the observable universe. Given that infinite universes are not considered implausible, and starlines are not considered implausible (at least as a fictional device), I find it surprising that you consider starlines that randomly connect a region of size 2^(10^20) to be implausible.
Starlines have to have an average distance of something, right? Why not 2^(10^20)?

comment by frelkins · 2009-02-08T00:31:00.000Z · LW(p) · GW(p)

@Anon.

"if there is a reason to expect human rightness to be in some sense coherent"

Alas there probably is not. Sir Isaiah Berlin speaks powerfully and beautifully of this so-called values pluralism in his book Liberty.

There are several ironies - if not outright tragedies - of life and this is one: that we don't want what we want to want, and that the things we think we ought to want often conflict with each other as well as our underlying motives. We are not in charge of ourselves and we are mysterious to our own hearts. Men and women are conflicted and, due to evolution, conflict.

comment by Z._M._Davis · 2009-02-08T02:06:00.000Z · LW(p) · GW(p)

I don't think I see how moral-philosophy fiction is problematic at all. When you have a beautiful moral sentiment that you need to offer to the world, of course you bind it up in a glorious work of high art, and let the work stand as your offering. That makes sense. When you have some info you want to share with the world about some dull ordinary thing that actually exists, that's when you write a journal article. When you've got something to protect, something you need to say, some set of notions that you really are entitled to, then you write a novel.

Just as it is dishonest to fail to be objective in matters of fact, so it is dishonest to feign objectivity where there simply is no fact. Why pretend to make arguments when what you really want to write is a hymn?

comment by Nicholas · 2009-02-09T00:41:00.000Z · LW(p) · GW(p)

Wei Dai, except that, traditionally speaking, an infinitely massive universe is generally considered implausible by the greater scientific community.

But I think the greater matter is that even if it were physically possible, it's impossible to mentally reason about as a reader of good fiction, and thus it has the ability to break the internal consistency of an otherwise good story in the mind of the reader.

Thanks,
Indy

comment by Doug_S. · 2009-02-09T05:28:00.000Z · LW(p) · GW(p)

The story specifically asks a question that none of the commenters have addressed yet.

"So," the Lord Pilot finally said. "What kind of asset retains its value in a market with nine minutes to live?"

My answer: Music.

If your world is going to end in nine minutes, you might as well play some music while you wait for the inevitable.

Short story collections, perhaps? If you've never read, say, "The Last Question", it would be your last chance. (And if you're reading this now, and you haven't read "The Last Question" yet, then something has gone seriously wrong in your life.)

Replies from: Tamfang
comment by Tamfang · 2010-08-14T04:59:54.492Z · LW(p) · GW(p)

Eh? Perhaps I was too young to get (or remember) what's so great about it. (I haven't read much Asimov since my early teens.)

comment by denis_bider · 2009-02-12T19:54:00.000Z · LW(p) · GW(p)

Neh. Eliezer, I'm kind of disappointed by how you write the tragic ending ("saving" humans) as if it's the happy one, and the happy ending (civilization melting pot) as if it's the tragic one. I'm not sure what to make of that.

Do you really, actually believe that, in this fictional scenario, the human race is better off sacrificing a part of itself in order to avoid blending with the super-happies?

It just blows my mind that you can write an intriguing story like this, and yet draw that kind of conclusion.

Replies from: Hul-Gil, ikrase
comment by Hul-Gil · 2012-03-30T04:59:07.593Z · LW(p) · GW(p)

Agreed. I was very surprised that Mr. Yudkowsky went with the very ending I, myself, thought would be the "traditional" and irrational ending - where suffering and death are allowed to go on, and even caused, because... um... because humans are special, and pain is good because it's part of our identity!

Yes, and the appendix is useful because it's part of our body.

Replies from: player_03
comment by player_03 · 2013-02-22T23:33:18.831Z · LW(p) · GW(p)

Perhaps the fact that it's the "traditional and irrational" ending is the reason Eliezer went with it as the "real" one. (Note that he didn't actually label them as "good" and "bad" endings.)

comment by ikrase · 2012-09-30T08:23:31.165Z · LW(p) · GW(p)

Death does not go on. The humans are immortal.

The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters? And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

Replies from: Snowyowl
comment by Snowyowl · 2012-11-27T16:32:49.723Z · LW(p) · GW(p)

The flaw I see is: why could the super happies not make separate decisions for humanity and the baby eaters?

I don't follow. They waged a genocidal war against the babyeaters and signed an alliance with humanity. That looks like separate decisions to me.

And why meld the cultures? Humans didn't seem to care about the existence of shockingly ugly super happies.

For one, because they're symmetrists. They asked something of humanity, so it was only fair that they should give something of equal value in return. (They're annoyingly ethical in that regard.) And I do mean equal value - humans became partly superhappy, and superhappies became partly human. For two, because shared culture and psychology makes it possible to have meaningful dialogue between species: even with the Cultural Translator, everyone got headaches after five minutes. Remember that to the superhappies, meaningful communication is literally as good as sex.

Replies from: DaFranker, ikrase
comment by DaFranker · 2012-11-27T16:40:15.380Z · LW(p) · GW(p)

Remember that to the superhappies, meaningful communication is literally as good as sex.

More accurately:

Remember that to the superhappies, meaningful communication is literally good sex.

comment by ikrase · 2012-12-12T23:14:37.716Z · LW(p) · GW(p)

Yeah... I guess I just didn't quite pick up on the whole symmetry thing. It seems like they could have, for example, immediately waged war on the baby eaters (I think it was not actually genocide but rather cultural imperialism, or forced modification so that the baby eaters would no longer cause disutility) and THEN made the decision for the humans.

comment by Zargon · 2009-02-12T21:13:00.000Z · LW(p) · GW(p)

This was terribly interesting, I'll be re-reading it in a few days to see what I can pick up that I missed the first time through.

I'm not so sure we can so easily label these two endings the good and bad endings. In one, humanity (or at least what humanity evolved into) goes along with the superhappies, and in the other, they do not. Certainly, going along with the superhappies is not a good solution. We give up much of what we consider to be vital to our identity, and in return, the superhappies make their spaceships look nice. Now, the superhappies are also modifying themselves, arguably just as much as humanity is, but even if they lose (from their perspective) as much as we lose (from our perspective), we don't gain (our perspective) as much as we lose.

The true ending is about resisting the transformation. But they seem to accept an... unintuitive tradeoff while doing so. They trade the lives of a few billion humans for the ability to allow the superhappies to do to the babyeaters exactly what they intended to do to the humans. In fact, unless I missed something, I don't think taking this trade was ever even questioned; it just seemed that taking the trade, and sacrificing the people so that the babyeaters would be transformed exactly as humanity would have been, was just common sense to them. Now, I can see how the characters in the story could perceive this choice as righteous and moral, but it seems to me to be just another tragic ending, in a different flavor. A tragedy due to a massive failure of humanity's morals, rather than a tragedy due to the loss of pain & suffering for humanity.

As an aside, the construction of the two (three?) alien species, with their traits, culture, and thought processes was superb.

comment by Supernova_Blast_Front_Rider · 2009-02-17T20:53:00.000Z · LW(p) · GW(p)

We all have our personal approaching supernova blast fronts in T minus...

In the intervening time, everyone with a powerful enough mind, please consider engaging in scientific research that has the potential to change the human condition. Don't waste your time on the human culture. It's not worth it - yet.

comment by MichaelHoward · 2009-02-18T00:06:00.000Z · LW(p) · GW(p)

Sorry I'm late... is no-one curious given the age of the universe why the 3 races are so close technologically? (Great filter? Super-advanced races have prime directive? Simulated experiment? Novas only happen if 3 connecting stars recently had their first gates be gates out? ...)

Eliezer, if you're reading this: amazing story. I'm worried, though, about your responses to so many commenters (generally smarter and more rational than most humans) with widely different preferences and values from what you see as right in this story and the fun sequence. I'm not saying your values are wrong; I'm saying you seem to have very optimistic models/estimates of where many human value systems go when fed lots of knowledge/rationality/good arguments. If so, I hope CEV doesn't depend on it.

If your model is causing you to be constantly surprised, then...

Replies from: player_03
comment by player_03 · 2013-02-22T23:45:55.892Z · LW(p) · GW(p)

Sorry I'm late... is no-one curious given the age of the universe why the 3 races are so close technologically?

Sorry I'm late in replying to this, but I'd guess the answer is that this is "the past's future." He would not have been able to tell this story with one species being that advanced, so he postulated a universe in which such a species doesn't exist (or at least isn't nearby).

Your in-universe explanations work as well, of course.

comment by Zargon · 2009-02-24T23:31:00.000Z · LW(p) · GW(p)

Well, I re-read it, and now neither ending seems so tragic anymore. I now think that there is utility in transforming the babyeaters that I didn't see before.

That said, the way they went about their supernova operation seems illogical, particularly the part about giving them 3 hours and 41 minutes. I would imagine they decided on that amount of time by estimating the chance of the superhappies showing up as more time passes, multiplied by the disutility of them stopping the operation, versus the number of humans who would be killed by the supernova as more time passes, and choosing the optimal time.

It seems like relatively few humans are able to escape before around the 8-hour mark, and, given that the superhappies gave no indication of when, if ever, they would follow (before their operation with the babyeaters was finished), the best times to blow up the star would be either immediately (if the chance of the superhappies showing up is judged to be high, or the disutility of transforming all the humans is relatively high), or after waiting 8 hours to save most of the people on the planet. Waiting about half that time means accepting a significant risk that the superhappies show up, for not much gain, while waiting another 4 hours adds about the same risk again, for a much larger gain.

Still though, a very good story. I expect I'll continue to stretch my mind now and then contemplating it.

Replies from: Thinkchronous
comment by Thinkchronous · 2013-12-07T15:57:46.387Z · LW(p) · GW(p)

Eliezer stated elsewhere that the 3h 41m was the time the physical process needed - so they blew up the star as fast as they could. (A hint is that ships are still evacuating when the process in the star starts, and no one pushes a button after the time limit.) Your analysis still holds, but the decision in the story is not to blow up at half time; rather, it is to blow up as early as possible.

comment by a_soulless_automaton · 2009-02-25T00:28:00.000Z · LW(p) · GW(p)

Zargon, I think the time given was how long it would take from beginning the feedback loop to the actual supernova, and they began the process the moment they arrived. If they could have destroyed the star immediately, they would have done so, but with the delay they encouraged as many people as possible to flee.

At least, that's how it sounded to me.

comment by astrophysicsgeek · 2009-03-06T23:22:00.000Z · LW(p) · GW(p)

I know I'm way late, but I did once work out what kills you if your star goes supernova (type II anyway) while doing my dissertation on supernova physics. It's the neutrinos, as previously mentioned. They emerge from the center of the star several hours before there is any other outward sign of a problem. Any planet in a roughly 1 AU orbit will absorb enough energy from the neutrino blast to melt the entire planet into liquid rock, and this will happen pretty much instantly, everywhere. Needless to say, when the light hits, the planet absorbs photons much more efficiently, and the whole thing turns to vapor, but everyone is very very very dead long before then.

comment by Haakon · 2009-03-15T13:43:00.000Z · LW(p) · GW(p)

I really liked your story.
I know it's not a really insightful comment, or some philosophical masterpiece of reasoning, but it was fun and interesting.

comment by Psychohistorian4 · 2009-04-04T23:20:00.000Z · LW(p) · GW(p)

This is, perhaps, obsolete by now.

That said, there seems to be a serious reasoning problem in assuming that this is a permanent solution. A species capable of progressing from Galileo to FTL travel in, what, 30 years seems like it would, given another few centuries (if not much, much less), easily be able to track down the remainder of both alien civilizations via some alternate route.

Consequently it seems like a massive sacrifice to delay the inevitable, or, at least, a sacrifice with highly uncertain probability of preventing that which it seeks to prevent. Not to mention the aliens would be rather unlikely to give humans any say in what happened after this interaction. The point about them being harder to fool is also probably true.

Perhaps I fail to understand the science, though I doubt that is the issue. Perhaps flawed reasoning was intended by the author? I have to admit I was rather in agreement with the administrator.

Replies from: ikrase
comment by ikrase · 2012-09-30T08:26:17.862Z · LW(p) · GW(p)

One issue: the highly self-destructive method of preventing Baby Eater incursion will send a strong message to the Baby Eaters, and could also possibly serve as a method of coercion a la hunger strike.

comment by Meagen · 2009-07-06T12:52:00.941Z · LW(p) · GW(p)

What gets me the most about the story is how vague the Future!Human ideology is. Eating babies is bad... especially if they suffer... but destroying a whole culture to remove their suffering is bad... and we'd never want anyone to do that to us... 'cause... y'know, just 'cause.

At first they seem to be all humanitarians and inclusivists. They also seem to be highly rational, attempting to describe and account for every emotional and cultural bias in their decision-making. They're authoritarian... because that's what works best... but the leaders always try to get a feel for what the majority thinks... and the people who are there to make sure things stay sane and rational are not actually leaders... but still have extensive moral authority.

People raise objections to humans reforming the Baby-eaters, and they raise objections to having us reformed by the Superhappies, but (in stark contrast to the excessive self-analysis everywhere else) nobody really gives a clear reason other than "it's just bad". It's like they're afraid to even put their own beliefs on the table, in the form of "we hold these truths to be self-evident, and the majority of us will act on them. we are willing to accept that some or all of them may be wrong, but you will have to convince each of us separately". Or possibly the author is afraid of making the humans as dogmatic as the aliens, so instead makes them all wishy-washy.

The words "freedom of self-determiantion" never appear (except in the comments). The mentions of non-consensual sex and of races being mixed together gives an impression that the Future!Humans value peace and getting along higher than personal freedom, in which case it's hard to see why the Superhappy proposal is so bad for them.

So the moral of the story is, "sure, maybe humans are capable of killing millions of their own kind in the name of vague ideology, but at least we're not baby-eaters or writhing masses of tentacles that do nothing but reproduce". Uplifting, I guess.

Replies from: Hul-Gil, TuviaDulin
comment by Hul-Gil · 2012-03-30T04:55:10.981Z · LW(p) · GW(p)

This is a good comment. I don't understand why this choice was made, or why Mr. Yudkowsky - presumably - thinks that it was the right choice to make. The Superhappy proposal was reasonable, and only instinctive disgust at the idea of eating babies - even non-sentient ones - prevented the acceptance of their offer. (Or was it that humans want to keep suffering, because it's somehow intrinsic to "being human", which is important because... uh... because...) For that small sacrifice, a paradise could have been created. Instead, billions die and humans remain unhappy sufferers.

Very disappointing.

Replies from: Philip_W, Isaac_the_K
comment by Philip_W · 2013-04-12T14:37:38.762Z · LW(p) · GW(p)

See the matter of the pie. Xannon's proposal is identical to the Super Happies': meta-ethical "fairness". Regardless of how you might feel about a Super Happy future, the Suffering Rapists hated it. Rational agents will choose the option which maximizes expected utility; accepting the deal didn't maximise expected utility, so the deal was rejected. That's all there is to it.

comment by Isaac_the_K · 2013-08-29T21:03:37.380Z · LW(p) · GW(p)

Though you seem to implicitly disagree, vanishingly few "moral systems" hold that pleasure is the ultimate "good," and most people engaged in the pursuit of pleasure would not introspect on their own behavior as "good/moral." At best, they would tend to consider themselves selfishly (chaotic?) neutral.

If you question why, in this exploration of Blue/Orange morality, Future Humanity resists the concept of removal of pain, then you missed THE very critical concept of the piece - "For that terrible winnowing was the central truth of life, after all."

The fundamental essence of the Babyeaters is tied to their evolutionary origins as a people that eat babies. The whole story is a fantastic exploration of how the biological and ecological underpinnings of a species distinctly shape its social thought processes and therefore the evolution of its moral and social systems. It's explained very clearly how the BE's view the self-sacrifice in their behavior as THE ultimate good.

Similarly, the unified thought/data/DNA system (there are bacteria and viruses that seem to be taking this path) of the SH's makes their path to complete empathy and their resulting behavior entirely logical. It's an application of "Pure Harmony" - as beings of complete empathy, they would rather alter their racial essence than experience/perceive the suffering en masse of others. Similarly, any species that does NOT eliminate suffering in any and all forms is barbaric – if for nothing else because they inherently do not sense the pain of others.

What isn't explained is that pain is as essential to our understanding of the world, and therefore our humanity, as the winnowing/sacrifice is to the BE's. Since we possess this dichotomy between neurological and DNA structures, empathy is only possible through mirroring - the PERSONAL experience of pain is what allows for comprehension, even though it may be considered "uncomfortable." The natural world makes for an uncaring universe, and having a keen sense of pain has enabled us to advance through our challenges. Removing pain is effectively removing our alignment and awareness of the indifference of the universe around us - a fatal mistake. At the same time, individuality is a prized value of humanity, in contrast to BOTH the BE's and SH's. The idea that I should be forced to abandon all sense of individuality (the essence of complete empathy) is anathema to my very humanity. I don't exist to experience pleasure or pain; I exist to be myself.

comment by TuviaDulin · 2012-04-03T10:05:48.290Z · LW(p) · GW(p)

Does the notion of future humans being irrational, self-deluded hypocrites really strike you as so implausible? Just because they think they're smarter than our generation doesn't mean they actually are.

comment by Pavitra · 2009-09-22T08:37:35.330Z · LW(p) · GW(p)

Excellent story.

The one thing that felt really out of place to me, literarily, was the passage "You couldn't blame them, could you...? Yes, actually, you could."

In a story about Hard Choices, it seems to me that the narrator shouldn't tell the reader what to feel. It would be better to put these sentiments in the mouths of characters, perhaps the Kiritsugu for the reply, and leave the reader to choose who, if either, they agreed with.

Replies from: wedrifid, JGWeissman
comment by wedrifid · 2009-11-13T23:55:19.160Z · LW(p) · GW(p)

The one thing that felt really out of place to me, literarily, was the passage "You couldn't blame them, could you...? Yes, actually, you could."

"You could" is different to "you should". In this context "you could" actually serves to reject an absolute normative claim ("you couldn't blame") and leaves the reader more freedom.

Replies from: Pavitra
comment by Pavitra · 2009-11-19T02:25:14.051Z · LW(p) · GW(p)

If this were a nonfiction essay, where the denotative meaning of the words would take precedence (as you seem to be reading it), then I would agree with you.

But here, the "could" in "yes, actually, you could" is a parallel phrasing, a response to the set-up "You couldn't blame them, could you...?"

The tone of "You couldn't blame them, could you...?" is one of moral comfort, of reassuring oneself that the late ships weren't morally culpable. The "Yes, actually, you could", a direct contradiction of that statement, therefore winds up meaning that the late ships were morally culpable -- that they, objectively, did do the wrong thing.

Yes, it's a false dichotomy, but fiction operates by the rules of emotion, not logic. In literary interpretation and in writing, the laws of rationality don't directly apply in the naive sense. Trying to read a novel by the literal denotative meaning of the words is like trying to do economics or sociology on the assumption that all humans are perfect Bayesians.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-19T02:41:50.164Z · LW(p) · GW(p)

Yes, I did indeed mean to assign some definite blame there. Just by way of inverting the usual story "logic" where the heroic idiots always get away with it. TV Tropes probably has a term for this but I'm not looking it up.

Replies from: RobinZ, Pavitra
comment by RobinZ · 2009-11-19T02:49:21.214Z · LW(p) · GW(p)

TV Tropes probably has a term for this but I'm not looking it up.

Probably "Just In Time", although the page seems to have suffered a bit of decay.

Replies from: CronoDAS
comment by CronoDAS · 2009-11-19T03:01:44.772Z · LW(p) · GW(p)

"Reality Ensues" is the trope Eliezer used. I'm not sure what the Defied Trope was, though.

Replies from: RobinZ
comment by RobinZ · 2009-11-19T03:05:04.923Z · LW(p) · GW(p)

I said Just In Time because that's the trope Eliezer was subverting by having the ships miss the deadline.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-11-19T03:10:55.151Z · LW(p) · GW(p)

well, it's not in http://tvtropes.org/pmwiki/pmwiki.php/Main/ThreeWorldsCollide so you know what to do...

(I haven't ever edited this page myself for obvious reasons)

Replies from: RobinZ, CronoDAS, wedrifid
comment by RobinZ · 2009-11-19T03:15:56.644Z · LW(p) · GW(p)

Done. (On reflection, it's less subversion than deconstruction in TV Tropes vocabulary.)

comment by CronoDAS · 2009-11-19T03:53:21.684Z · LW(p) · GW(p)

There's no rule against it on the TV Tropes Wiki. In fact, there's a whole category for Troper Works. However, if you just don't want to have TV Tropes Ruin Your Life, I won't blame you for staying out.

Replies from: Alicorn, RobinZ
comment by Alicorn · 2009-11-19T03:58:29.194Z · LW(p) · GW(p)

I thought it was considered tacky to mess with entries for your own things on wikis? (If it's not, time to go spam TV Tropes with links to my stuff... traffic, sweet traffic...)

Replies from: kpreid, CronoDAS, CannibalSmith
comment by kpreid · 2009-11-19T04:23:51.431Z · LW(p) · GW(p)

wikis ≠ Wikipedia

comment by CronoDAS · 2009-11-19T04:24:36.914Z · LW(p) · GW(p)

TV Tropes is a buttload more informal than That Other Wiki. ;)

Go ahead and link away!

comment by CannibalSmith · 2009-11-19T05:58:05.905Z · LW(p) · GW(p)

I'd say it's kind of an achievement to have something written about you in a wiki by someone who isn't you.

Replies from: Pavitra
comment by Pavitra · 2009-11-19T06:25:15.333Z · LW(p) · GW(p)

Strictly speaking, it's a signal of an achievement. It provides lots of warm fuzzies, but basically no utilons (beyond those intrinsic to the achievement it signals).

Replies from: wedrifid
comment by wedrifid · 2009-11-19T07:56:27.806Z · LW(p) · GW(p)

Strictly speaking, it's a signal of an achievement. It provides lots of warm fuzzies, but basically no utilons (beyond those intrinsic to the achievement it signals).

I think this 'strict' use is a distortion of the concept of achievement. This kind of achievement is very similar in nature to other achievements, and for the most part, yes, the part we call an 'achievement' is primarily signal, with any utility beyond that just a bonus.

Replies from: Pavitra
comment by Pavitra · 2009-11-21T09:29:34.673Z · LW(p) · GW(p)

If you're using 'achievement' in the video game sense, sure. I assumed that 'achievement' meant achieving something that mattered; that is, utility.

It's probably good cognitive hygiene to keep the two as clearly distinct as feasible.

Replies from: wedrifid
comment by wedrifid · 2009-11-21T09:37:49.673Z · LW(p) · GW(p)

If you're using 'achievement' in the video game sense

No, I'm using the human sense. The one all linked up to 'success' in ways people don't tend to explicitly understand.

I assumed that 'achievement' meant achieving something that mattered; that is, utility.

I haven't met many people whose utility functions appear restricted to things that matter.

Replies from: Pavitra
comment by Pavitra · 2009-11-21T19:51:13.322Z · LW(p) · GW(p)

No, I'm using the human sense. The one all linked up to 'success' in ways people don't tend to explicitly understand.

I think that you're talking about near-mode feel-good, while I'm talking about far-mode feel-good.

I haven't met many people whose utility functions appear restricted to things that matter.

To things that matter to you, perhaps. And I haven't met many people that have utility functions; that is, that behave as rational optimizers. But a utility function by definition is restricted to things that matter to the mind that has it.

Replies from: wedrifid
comment by wedrifid · 2009-11-22T06:33:01.111Z · LW(p) · GW(p)

I think that you're talking about near-mode feel-good, while I'm talking about far-mode feel-good.

I think you are right.

comment by RobinZ · 2009-11-19T03:59:40.870Z · LW(p) · GW(p)

Is EY a troper? I haven't been looking.

Replies from: Blueberry
comment by Blueberry · 2009-11-19T04:19:28.024Z · LW(p) · GW(p)

Tropers are equal to awesomeness, and EY is equal to awesomeness. So yes.

comment by wedrifid · 2009-11-19T07:52:18.292Z · LW(p) · GW(p)

Here I was assuming that tvtropes was about, well, TV.

comment by Pavitra · 2009-11-19T03:00:50.593Z · LW(p) · GW(p)

Warning: TVTropes links in this post. Do not click.

WhatTheHellHero, maybe.

I still think the passage would be more effective presented less directly, particularly considering the relatively high intelligence level that the rest of the story seems to be writing for.

Replies from: RobinZ
comment by RobinZ · 2009-11-19T03:08:45.000Z · LW(p) · GW(p)

I still think the passage would be more effective presented less directly, particularly considering the relatively high intelligence level that the rest of the story seems to be writing for.

For the record, I don't agree - it wasn't subtle, but there's no dwelling on the point, either. It's just a small thing that happened, in the context of the story.

comment by JGWeissman · 2009-11-19T20:12:41.462Z · LW(p) · GW(p)

I think the point is that it does not matter whether you blame them or not. They made a serious mistake, and have to face the natural consequences.

comment by Jesus_Christ_is_Lord · 2010-11-04T22:41:46.253Z · LW(p) · GW(p)

Your story's really good, and I hope you get it published. But I suspect you might have to change the part where Samuel L. Jackson appears out of nowhere and saves humanity. You called him masculine twice. Seriously, you fail creative writing forever.

comment by UnclGhost · 2010-11-28T21:59:56.937Z · LW(p) · GW(p)

Something else that humans generally value is autonomy. Why not just make an optional colony of superhappiness?

Replies from: DrRobertStadler
comment by DrRobertStadler · 2011-09-14T00:49:59.832Z · LW(p) · GW(p)

At what point do children get to choose it?

comment by rkyeun · 2012-08-27T21:26:24.566Z · LW(p) · GW(p)

If you know there are aliens and that there are ways to collapse starlines... you move all colonies such that the node length between them is two. And on the odd-numbered nodes you leave a drone which watches for aliens, is sufficiently powerful to detonate the star before any ship could cross the space to the next starline, and pops the star at first contact. No matter what any invasion fleet of any size attempts, you only ever lose one world. And you may broadcast this fact as leverage when negotiating with the super-happy people. You will not attempt to forcibly change us, or our star will pop and cut off from you any possibility of interfering or rescuing anybody on our side. And so if you do not desire our eternal suffering as you view it, you must let us be while you attempt a diplomatic approach where we actually convince and understand and come to terms with each other. And if our values are entirely and intractably incompatible, then we just have to accept that as an unfortunate fact about the universe.

Replies from: rkyeun
comment by rkyeun · 2012-08-27T22:23:29.677Z · LW(p) · GW(p)

That idea doesn't work, because stars going off attract aliens from two nodes away. This plan calls aliens to all your colonies: when they see the adjacent star go off and investigate the flash, they jump through to your colony worlds.

Replies from: rkyeun
comment by rkyeun · 2012-08-27T22:27:38.202Z · LW(p) · GW(p)

OH CRAP. That means the TRUE ENDING DOESN'T WORK. You just set off a beacon that attracted every alien to EARTH. Humanity is doomed.

comment by rkyeun · 2012-08-27T22:39:54.450Z · LW(p) · GW(p)

If you know there are aliens and that there are ways to collapse starlines... you move all colonies such that the node length between them is two. And on the odd-numbered nodes you leave a drone which watches for aliens, is sufficiently powerful to detonate the star before any ship could cross the space to the next starline, and pops the star at first contact. No matter what any invasion fleet of any size attempts, you only ever lose one world, and you only lose that one if aliens happen upon it, or can tell which way you came from when you happen upon them. And you may broadcast this fact as leverage when negotiating with the super-happy people. You will not attempt to forcibly change us, or our star will pop and cut off from you any possibility of interfering or rescuing anybody on our side. And so if you do not desire our eternal suffering as you view it, you must let us be while you attempt a diplomatic approach where we actually convince and understand and come to terms with each other. And if our values are entirely and intractably incompatible, then we just have to accept that as an unfortunate fact about the universe.

comment by Articulator · 2013-11-12T07:55:00.145Z · LW(p) · GW(p)

The most enjoyable part of reading through these comments is that everyone is in a combined state of ethically relaxed and mentally aware. Makes for stimulating conversation.

comment by ahuff44 · 2014-02-03T06:48:04.064Z · LW(p) · GW(p)

Did anyone else think the baby-eaters were created by the superhappies to demonstrate their point to the humans? The way the humans thought of the baby-eaters clearly paralleled how the superhappies viewed the humans. The end of chapter 3 really drove the point home:

They're a lot like our own children, really.

"- they somewhat resemble the earlier life stages of our own kind." [said Lady 3rd]

It seems to me that the superhappies had observed the humans beforehand and found their acceptance of pain abhorrent. They then fabricated the baby-eaters and arranged their contact with the humans to help drive home their point. Admittedly, fabricating petabytes of baby-eater history is very impressive, but I think this could be explained by the superhappies' advanced technology. Sure, this explanation is a little elaborate, but I find it much more plausible than humans meeting two new alien species at once. Yes, I read the explanation that the nova attracted the attention of nearby star systems (found in chapter 7, I believe), but I don't find that explanation extraordinary enough to fit the claim.

comment by wobster109 · 2014-03-13T02:37:33.159Z · LW(p) · GW(p)

This is still disappointing, all these years later. Everything was set up so carefully, and then, Whoops! Look, a way out! Strikes me as very deus ex machina.

And anyhow, why didn't they forcibly sedate every human until after the change? Then if they decided it wasn't worthwhile they could choose to die then.

And anyway, what person would choose pain, embarrassment, and romantic angst over Untranslatable 2 anyhow?

Edit: that came out very negative sounding, sorry. Three Worlds Collide (with the normal ending) is my second-favorite story, second only to The Fable of the Dragon Tyrant. It teaches me so much.

Replies from: Mestroyer
comment by Mestroyer · 2014-03-13T03:08:34.854Z · LW(p) · GW(p)

And anyhow, why didn't they forcibly sedate every human until after the change? Then if they decided it wasn't worthwhile they could choose to die then.

It wouldn't be their own value system making the decision. It would be the modified version after the change.

Unrelatedly, you like Eliezer Yudkowsky's writing, huh? You should read HPMOR.

comment by RMcD · 2016-01-13T16:59:05.713Z · LW(p) · GW(p)

What was the purpose in tasering the Captain? It seemed a needless mutiny, since all he had to do was let the Captain know of the possibility; he obviously hadn't thought of it, after all.

Why the hell did the SuperHappy or whatever people not predict that? Unlike the humans, they know everyone is capable of destroying a sun and the starlines, so without question they should have predicted that in the face of total domination from both the BabyEaters and from the humans. Why did the BabyEaters destroy the star (and so the links)?

This seems a uselessly temporary measure, since stellar events are going to happen, and over long enough time scales regular travel between stars will become possible.

comment by siIver · 2017-01-05T14:38:13.610Z · LW(p) · GW(p)

Well, I don't think this is even complicated. The super happies are right... it is normal for them to forcefully reform us, and it is moral for us to erase the babyeater species.

Suffice to say I preferred the normal ending.

comment by Shelby Lynn (shelby-lynn) · 2019-06-29T08:19:07.695Z · LW(p) · GW(p)

"The rules of society are just consensual hallucinations." Sounds a lot like the sociological definition of "societal norms." Hi, I'm a sociologist and a n00b at most aspects of the rationalist corner of the internet! I very much enjoyed this story!

comment by ImmConCon (ImmemorConsultrixContrarie) · 2020-07-11T22:56:20.502Z · LW(p) · GW(p)

Me: "Come on! Only sociopath could instantly think of killing whateverllions of humans to save humanity (or so I, as a sociopath, think from figuring out the True ending the moment the author asked me to think about either stunning Lord Pilot or doing some other thing; while many guessing comments didn't suppose that possibility). Yet, no highly intelligent sociopath would ever be scared of turning into an alien instead of becoming an hero…"

Confessor: "I killed a girl and liked it."

Me: "… Oh, so he's a psychopath and likes killing Homo Sapiens. Well, THAT makes sense."

comment by Kotomine · 2020-08-09T22:40:56.342Z · LW(p) · GW(p)

It's pretty obvious that ANY way is better than having your entire species raped by something Untranslatable. The three worlds are meant to be separate. If the Superhappies had met us now, in 2020, they would have pressed the destroy button immediately. There's no way they could have discussed anything with present-day humans, who know rape and prefer violence in every aspect of their existence. The later humankind is more likely to suffice, but even Akon is unwilling to be EVEN virtually raped. That is a strong indicator that humans will not be changed by any force (other than maybe time itself).

The aliens should've talked to great teacher Onizuka or Sherlock or doctor House rather than those cowards who were too afraid to discuss other options.

As for the endings... The first one seems like you prefer to surrender to the will of the culprit, while he feeds you drugs to keep you always happy and rapes you all the while (until you destroy your pain and suffering issues, abandon your resistance skills, and become unfeeling). There's some parallel to "Future" by Glukhovsky or "Brave New World", where they eat happiness pills all the time. And as for the children you produce, their destiny will be decided by the culprit, not you.

The second ending is the one where you find a rusty knife and cut off your leg with it to escape the mantrap. At the same time it means the cycle of violence in human society will continue, and more billions will die eventually. The pattern hasn't been broken. There is no need to count losses when any loss counts as A HYMN OF LOGIC PURE LIKE STONES AND SACRIFICE.

comment by Malik Endsley (malik-endsley) · 2024-08-21T00:24:49.510Z · LW(p) · GW(p)

I have stumbled upon this many years later and have never read anything quite like it. I think this is my new favorite set of pieces.