Leave a Line of Retreat

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-25T23:57:58.000Z · LW · GW · Legacy · 73 comments

When you surround the enemy

Always allow them an escape route.

They must see that there is

An alternative to death.

—Sun Tzu, The Art of War

Don’t raise the pressure, lower the wall.

—Lois McMaster Bujold, Komarr

I recently happened into a conversation with a nonrationalist who had somehow wandered into a local rationalists’ gathering. She had just declared (a) her belief in souls and (b) that she didn’t believe in cryonics because she believed the soul wouldn’t stay with the frozen body. I asked, “But how do you know that?”

From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. I don’t say this in a bad way—she seemed like a nice person without any applied rationality training, just like most of the rest of the human species.

Most of the ensuing conversation was on items already covered on Overcoming Bias—if you’re really curious about something, you probably can figure out a good way to test it, try to attain accurate beliefs first and then let your emotions flow from that, that sort of thing. But the conversation reminded me of one notion I haven’t covered here yet:

“Make sure,” I suggested to her, “that you visualize what the world would be like if there are no souls, and what you would do about that. Don’t think about all the reasons that it can’t be that way; just accept it as a premise and then visualize the consequences. So that you’ll think, ‘Well, if there are no souls, I can just sign up for cryonics,’ or ‘If there is no God, I can just go on being moral anyway,’ rather than it being too horrifying to face. As a matter of self-respect, you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it.”

The principle behind the technique is simple: as Sun Tzu advises you to do with your enemies, you must do with yourself—leave yourself a line of retreat, so that you will have less trouble retreating. The prospect of losing your job, for example, may seem a lot more scary when you can’t even bear to think about it than after you have calculated exactly how long your savings will last, and checked the job market in your area, and otherwise planned out exactly what to do next. Only then will you be ready to fairly assess the probability of keeping your job in the planned layoffs next month. Be a true coward, and plan out your retreat in detail—visualize every step—preferably before you first come to the battlefield.

The hope is that it takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But then after you do the former, it becomes easier to do the latter.

Remember that Bayesianism is precise—even if a scary proposition really should seem unlikely, it’s still important to count up all the evidence, for and against, exactly fairly, to arrive at the rational quantitative probability. Visualizing a scary belief does not mean admitting that you think, deep down, it’s probably true. You can visualize a scary belief on general principles of good mental housekeeping. “The thought you cannot think controls you more than thoughts you speak aloud”—this happens even if the unthinkable thought is false!
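The "count up all the evidence, for and against, exactly fairly" step can be made concrete with a toy Bayesian update. This is an illustrative sketch only, not from the post; the prior and likelihoods are made-up numbers:

```python
# A minimal Bayesian update. The point: a scary hypothesis gets the same
# quantitative treatment as any other -- prior times likelihood, normalized.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) via Bayes' theorem, for a binary hypothesis H."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# A hypothesis starts at 5%; the observed evidence is twice as likely if it
# is true (0.8 vs 0.4). The arithmetic is indifferent to how scary H is.
p = posterior(0.05, 0.8, 0.4)
print(round(p, 3))  # 0.095
```

The update nudges the probability up from 0.05 to about 0.095, whether the hypothesis is comfortable or not.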

The leave-a-line-of-retreat technique does require a certain minimum of self-honesty to use correctly.

For a start: You must at least be able to admit to yourself which ideas scare you, and which ideas you are attached to. But this is a substantially less difficult test than fairly counting the evidence for an idea that scares you. Does it help if I say that I have occasion to use this technique myself? A rationalist does not reject all emotion, after all. There are ideas which scare me, yet I still believe to be false. There are ideas to which I know I am attached, yet I still believe to be true. But I still plan my retreats, not because I’m planning to retreat, but because planning my retreat in advance helps me think about the problem without attachment.

But the greater test of self-honesty is to really accept the uncomfortable proposition as a premise, and figure out how you would really deal with it. When we’re faced with an uncomfortable idea, our first impulse is naturally to think of all the reasons why it can’t possibly be so. And so you will encounter a certain amount of psychological resistance in yourself, if you try to visualize exactly how the world would be, and what you would do about it, if My-Most-Precious-Belief were false, or My-Most-Feared-Belief were true.

Think of all the people who say that without God, morality is impossible.1 If theists could visualize their real reaction to believing as a fact that God did not exist, they could realize that, no, they wouldn’t go around slaughtering babies. They could realize that atheists are reacting to the nonexistence of God in pretty much the way they themselves would, if they came to believe that. I say this, to show that it is a considerable challenge to visualize the way you really would react, to believing the opposite of a tightly held belief.

Plus it’s always counterintuitive to realize that, yes, people do get over things. Newly minted quadriplegics are not as sad, six months later, as they expect to be, etc. It can be equally counterintuitive to realize that if the scary belief turned out to be true, you would come to terms with it somehow. Quadriplegics deal, and so would you.

See also the Litany of Gendlin and the Litany of Tarski. What is true is already so; owning up to it doesn’t make it worse. You shouldn’t be afraid to just visualize a world you fear. If that world is already actual, visualizing it won’t make it worse; and if it is not actual, visualizing it will do no harm. And remember, as you visualize, that if the scary things you’re imagining really are true—which they may not be!—then you would, indeed, want to believe it, and you should visualize that too; not believing wouldn’t help you.

How many religious people would retain their belief in God if they could accurately visualize that hypothetical world in which there was no God and they themselves have become atheists?

Leaving a line of retreat is a powerful technique, but it’s not easy. Honest visualization doesn’t take as much effort as admitting outright that God doesn’t exist, but it does take an effort.

1And yes, this topic did come up in the conversation; I’m not offering a strawman.


Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by L._Zoel · 2008-02-26T00:20:21.000Z · LW(p) · GW(p)

How many rationalists would retain their belief in reason, if they could accurately visualize that hypothetical world in which there was no rationality and they themselves have become irrational?

Replies from: None, faul_sname, Jotto999, Yosarian2, Idan Arye
comment by [deleted] · 2012-11-05T21:34:28.690Z · LW(p) · GW(p)

I don't know. But I would. Irrationality is caused by ignorance, so there will always be tangent worlds (while regarding this current one as prime) in which I give up. There will always be a world where anything that is physically possible occurs (and probably many where even that requirement doesn't hold).

To put it another way, there has been a moment in time when I was not rational. Is that reason to give up rationality forever? Time could be just another dimension, its manipulation as far out of our grasp as that of other possible worlds.

comment by faul_sname · 2012-11-12T23:59:25.889Z · LW(p) · GW(p)

if they could accurately visualize that hypothetical world in which there was no rationality and they themselves have become irrational?

I just attempted to visualize such a world, and my mind ran into a brick wall. I can easily imagine a world in which I am not perfectly rational (and in fact am barely rational at all), and that world looks a lot like this world. But I can't imagine a world in which rationality doesn't exist, except as a world in which no decision-making entities exist. Because in any world in which there exist better and worse options and an entity that can model those options and choose between them with better than random chance, there exists a certain amount of rationality.

Replies from: Benito, Voltairina
comment by Ben Pace (Benito) · 2013-08-02T20:42:39.284Z · LW(p) · GW(p)

I suppose I'd just think back to before I found LessWrong. I wouldn't choose that world.

comment by Voltairina · 2014-10-14T14:13:04.828Z · LW(p) · GW(p)

Well, a world that lacked rationality might be one in which all events were a sequence of non sequiturs. A car drives down the street. Then disappears. We are in a movie theater with a tyrannosaurus. Now we are a snail on the moon. Then there's just this poster of rocks. Then I can't remember what sight was like, but there's jazz music. Now I fondly remember fighting in World War II, while evading the Empire with Han Solo. Oh! I think I might be boiling water, but with a sense of smell somehow... That's a poor job of describing it (too much familiar stuff), but you get the idea. If there was no connection between one state of affairs and the next, talking about what strategy to take might be impossible, or a brief possibility that disappears when you forget what you are doing and you're back in the movie theater again with the tyrannosaurus. That's if "you" is even a meaningful way to describe a brief moment of awareness bubbling into being in that universe. Then again, if at any moment "you" happen to exist and "you" happen to understand what rationality means... I guess, now that I think about it, any situation where you can understand what the word rationality means is probably one in which it exists (however briefly) and is potentially helpful to you: even if there is little useful to do about whatever situation you are in, there might be some useful thing to do about the troubling thoughts in your mind.

Replies from: CCC
comment by CCC · 2014-10-14T14:34:32.997Z · LW(p) · GW(p)

While that is a world without rationality, it seems a fairly extreme case.

Another example of a world without rationality is a world in which, the more you work towards achieving a goal, the longer it takes to reach that goal; so an elderly man might wander distractedly up Mount Everest to look for his false teeth with no trouble, but a team of experienced mountaineers won't be able to climb a small hill. Even if they try to follow the old man looking for his teeth, the universe notices their intent and conspires against them. And anyone who notices this tendency and tries to take advantage of it gets struck by lightning (even if they're in a submarine at the time) and killed instantly.

Replies from: Voltairina, JustinMElms
comment by Voltairina · 2014-10-15T00:46:08.004Z · LW(p) · GW(p)

That reminds me of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

comment by JustinMElms · 2016-07-22T23:02:37.244Z · LW(p) · GW(p)

I like both Voltairina's and your takes on the non-rational world. I was having a lot of trouble working one out myself.

That said, while Voltairina's world is a bit more horrifyingly extreme than yours, it seems to me more probable: a world in which cause and effect simply do not exist. I can envision a structure of elementary physics that simply changes, functionally at random, far more easily than one in which causality exists but operates in inverse. I have more trouble envisioning the elementary physics that would bring that about without an observational intellect directly upsetting motivated plans.

All that is to say, might not your case be the more extreme one?

Replies from: CCC
comment by CCC · 2016-08-17T15:02:59.036Z · LW(p) · GW(p)

...it's possible. There are many differences between our proposed worlds, and it really depends on what you mean by "more extreme". Voltairina's world is "more extreme" in the sense that there are no rules, no patterns to take advantage of. My world is "more extreme" in that the rules actively punish rationality.

My world requires that elementary physics somehow takes account of intent, and then actively subverts it. This means that it reacts in some way to something as nebulous as intent, which implies some level of understanding of the concept of intent. This, in turn, implies (as you state) an observational intellect, and worse, a directly malevolent one. Voltairina's world can exist without a directly malevolent intelligence directing things.

So it really comes down to what you mean by "extreme", I guess. Both proposed worlds are extreme cases, in their own way.

Replies from: JustinMElms
comment by JustinMElms · 2016-08-18T16:30:17.531Z · LW(p) · GW(p)

Fair point.

comment by Jotto999 · 2012-11-27T11:33:06.200Z · LW(p) · GW(p)

I'm not sure what "no rationality" would mean. Evolutionarily relevant kinds of rationality can still be expected, like a preference for sexually fertile mates or a fear of spiders/snakes/heights; and if we're still talking about something at all similar to Homo sapiens, language and cultural learning and such, which require some amount of rationality to use.

I wonder if you might be imagining rationality in the form of essentialism, allowing you to universally turn the attribute off; but in reality there is no such off switch that is compatible with having decision-making agents.

comment by Yosarian2 · 2014-01-21T02:26:00.478Z · LW(p) · GW(p)

That's not the idea that really scares Less Wrong people.

Here's a more disturbing one: try to picture a world where all the rationality skills you're learning on Less Wrong are actually somehow flawed, and actually make it less likely that you'll discover the truth, or make you correct less often, for whatever reason. What would that look like? Would you be able to tell the difference?

I must say, I have trouble picturing that, but I can't prove it's not true (we are basically tinkering with the way our mind works without a software manual, after all).

comment by Idan Arye · 2020-08-31T15:49:56.284Z · LW(p) · GW(p)

No rationality, or no Bayesianism? Rationality is a general term for reasoning about reality. Bayesianism is the specific school of rationality advocated on LessWrong.

A "world in which there was no rationality" is not even meaningful, just like "world in which there was no physics" is meaningless. Even if energy and matter behaves in a way that's completely alien to us, there are still laws that govern how it works and you can call these laws "physics". Similarly, even if we'd live in some hypothetical world where the rules of reasoning are not derived from Bayes' theorem, there are still rules that can be thought of as that reality's rationalism.

A world without Bayesianism is easy to visualize, because we have all seen such worlds in fiction. Cartoons take this to the extreme: Wile E. Coyote paints a tunnel and expects Road Runner to crash into it, but Road Runner manages to go through. Then he expects that if Road Runner could go through, he could go through as well, but he crashes into it when he tries.

Coyote's problem is that his rationalism could have worked in our world - but he is not living in our world. He is living in a cartoon world with cartoon logic, and needs a different kind of rationalism.

Like... the one Bugs Bunny uses.

Bugs Bunny plugs Elmer Fudd's rifle with his finger. In our world, this could not stop the bullet. But Bugs Bunny is not living in our world - he lives in cartoon world. He correctly predicts that the rifle will explode without harming him, and his belief in that prediction is strong enough to bet his life on it.

Now, one may claim that it is not rationality that gets messed up here, merely physics. But in the examples I picked it is not just the laws of nature that don't work like real-world dwellers would expect; it is consistency itself that fails. Let us compare with superhero comics, where the limitations of physics are but a suggestion, but at least some effort is made to maintain consistency.

When Mirror Master jumps into a mirror, he uses his technology/powers to temporarily turn the mirror into a portal. If the Flash is fast enough, he can jump into the mirror after him, before the mirror turns back to normal. The rules are simple: when the portal is open you can pass; when it's closed you can't. Even if it doesn't make sense scientifically, it makes sense logically. But there are no similar rules that can tell Coyote whether or not it's safe to pass.

Superman can also plug his finger into criminals' guns to stop them from shooting, just like Bugs Bunny. But Superman can stop the bullets with any part of his body, before or after they leave the barrel. So him successfully plugging the guns is consistent. Bugs Bunny, however, is not invulnerable to bullets. When Elmer Fudd chases after him, rifle blazing, Bugs Bunny runs for his life, because he knows the bullets will pierce him. They are stronger than his body can handle. Except... when he sticks his finger into the barrel. Not consistent.

Still, there are laws that govern cartoon reality. Like the law of funny. Bugs Bunny is aware of them: his actions may seem chaotic when judged by our world's rationality, but they make perfect sense in cartoon world. Wile E. Coyote's actions make sense in our world's rationality, but are doomed to fail when executed under cartoon-world logic.

Had I lived in cartoon world, I'd rather be like Bugs Bunny than like Wile E. Coyote. Not to insist on Bayesianism even though it wouldn't work, but try to figure out how reasoning in that reality really works and rely on that.

Then again, wouldn't Bayesianism itself deter me from relying on things that don't work? Is Wile E. Coyote even Bayesian if he doesn't update his beliefs every time his predictions fail?

I'm no longer sure I can imagine a world where there is no Bayesianism...

comment by SnappyCrunch · 2008-02-26T00:32:58.000Z · LW(p) · GW(p)

I enjoy the non-mathy posts. I believe Overcoming Bias is a worthy endeavor, and as a relatively new field of study, the math-oriented posts are important. They are often the most succinct and accurate way to convey concepts. With that said, I find the math posts to be dense with information, perhaps overly so. I find myself unconsciously starting to skim instead of read, and I find it difficult to force myself to pay attention.

The mathy posts appeal to people who are serious about moving this burgeoning field forward, and the non-mathy posts appeal to people who are more casually interested in the concepts, and allow you to have a wider audience. You will have a balance between the two no matter what you attempt, the only question is what your intended audience is, and the best way to reach those people.

Replies from: Odinn
comment by Odinn · 2015-08-03T18:28:29.120Z · LW(p) · GW(p)

Not sure why you got a downvote. Displaying, or worse still obstinately defending, poor reasoning is a valid reason for getting one (I got a big stack of them from a sloppy article and from rushed comments [working on making it better]), but admitting that you aren't a mathematically focused person and providing feedback on Eliezer's communication style is no cause for it. Got my upvote.

comment by Kriti · 2008-02-26T00:39:56.000Z · LW(p) · GW(p)

I enjoy all the posts here, but would love a post on what it means to be rational. Something introductory, something you can link to when you talk with people who think "if you can justify what someone did, no matter what the justification is, the action becomes rational".

comment by Thermopyle · 2008-02-26T00:48:48.000Z · LW(p) · GW(p)

then I am interested in hearing from you in the comments.

While I appreciate the mathy posts as well as I can, as someone without much training in mathematics I really enjoy these types of posts (I've got a large backlog of your more mathy posts bookmarked for me to work through, whereas your non-mathy posts I read as soon as they show up in my feed reader).

Let us have both!

comment by Caledonian2 · 2008-02-26T01:00:08.000Z · LW(p) · GW(p)

The ability to endure cognitive dissonance long enough to find the resolution to the dissonance, rather than just short-circuiting to something that makes no sense but offers relief from the strain, is a necessary precondition for rational thought.

I don't think it can be cultivated, and I don't think there's a substitute. Either you pass through the gauntlet, or you don't.

Replies from: SecondWind
comment by SecondWind · 2013-05-02T02:27:37.354Z · LW(p) · GW(p)

Couldn't you start with easier cognitive dissonances, and work your way up?

comment by Tiiba2 · 2008-02-26T01:01:27.000Z · LW(p) · GW(p)

I just want you to get to that "revelation" of yours already. I thought you were approaching it, if you're talking about neural nets and arithmetic coding. Where does it rank in your schedule? Or is this blog for human reasoning only?

comment by Kellopyy · 2008-02-26T01:04:02.000Z · LW(p) · GW(p)

I was expecting to read yet another mathy post tonight, but I was disappointed. Less mathy stuff is OK, but shouldn't really come at the cost of anything interesting.

I agree with Kriti - introductory essay, post, etc would be useful.

comment by O2 · 2008-02-26T01:21:53.000Z · LW(p) · GW(p)

I prefer the less mathy.

comment by brent · 2008-02-26T01:26:46.000Z · LW(p) · GW(p)

I too prefer less mathy - well, to be precise I'll actually read the less mathy stuff in the first place.

More to the point, I've stopped listening to news reports about global warming, and this is harming my ability to think rationally about it. I'll change the channel rather than hear someone say "You know how we all thought we've got 50 years to live? Turns out it's only 30/25/20."

comment by Frank_Hirsch · 2008-02-26T02:10:46.000Z · LW(p) · GW(p)

[Without having read the comments]

WTF? You say: [...] I was actually advised to post something "fun", but I'd rather not [...]

I think it was fun!

BTW could we increase the probability of people being honest by basing reward not on individual choices, but on the log-likelihood over a sample of similar choices? (For a given meaning of similar.)

comment by Roland2 · 2008-02-26T02:26:21.000Z · LW(p) · GW(p)

As a mathematician I like your mathy posts, but this is also very welcome for a reason: it contains practical advice. Some posts are of little direct practical use but this one certainly is.

Keep on the good work!

comment by Roland2 · 2008-02-26T02:28:46.000Z · LW(p) · GW(p)

"this is also very welcome" I'm refering to this post.

comment by Frank_Hirsch · 2008-02-26T02:42:04.000Z · LW(p) · GW(p)

[having read the comments]

Kriti et al: I'd recommend this and this to anybody who hasn't already read it. Otherwise I have not much idea for introductory texts right now.

comment by denis_bider · 2008-02-26T03:47:28.000Z · LW(p) · GW(p)

I think you should go with the advice and post something fun. Especially so if you have "much important material" to cover in following months. No need for a big hurry to lose readers. ;)

comment by denis_bider · 2008-02-26T03:55:17.000Z · LW(p) · GW(p)

I should however note that one of the last mathy posts (Mutual Information) struck a chord with me and caused an "Aha!" moment for which I am grateful.

Specifically, it was this:

I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y. I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x)->Black(x) doesn't mean you're allowed to reason Black(x)->Raven(x). How different seem the symmetrical probability flows of the Bayesian, from the sharp lurches of logic - even though the latter is just a degenerate case of the former.
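The symmetry remarked on in the quoted passage, that Y tells us as much about Z as Z tells us about Y, can be checked numerically. A toy illustration (not part of the original comment; the joint distribution is arbitrary):

```python
import math

# A made-up joint distribution P(y, z) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def mutual_information(joint):
    """I(Y;Z) = sum over (y,z) of p(y,z) * log2( p(y,z) / (p(y) p(z)) )."""
    py, pz = {}, {}
    for (y, z), p in joint.items():
        py[y] = py.get(y, 0.0) + p  # marginal P(y)
        pz[z] = pz.get(z, 0.0) + p  # marginal P(z)
    return sum(p * math.log2(p / (py[y] * pz[z]))
               for (y, z), p in joint.items() if p > 0)

# Swapping the roles of Y and Z leaves the value unchanged: the formula
# treats the two variables symmetrically, unlike logical implication.
swapped = {(z, y): p for (y, z), p in joint.items()}
assert abs(mutual_information(joint) - mutual_information(swapped)) < 1e-12
```

The asymmetric Raven(x)->Black(x) inference has no counterpart here; the information measure is the same in both directions by construction.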


comment by Benquo · 2008-02-26T04:14:52.000Z · LW(p) · GW(p)

I agree with SnappyCrunch.

comment by Ben_L. · 2008-02-26T08:49:53.000Z · LW(p) · GW(p)

I like non-mathy posts. I particularly enjoyed this one, as it seems to have a clear practical application.

comment by Ulrik · 2008-02-26T08:58:03.000Z · LW(p) · GW(p)

I liked this post, but then again, I like all your posts Eliezer! (I've just been hiding behind my feedreader, and so not commenting about it before.)

My opinion about mathy/non-mathy is that you should do what you think is most natural. Most days, you'll probably want to get on with the mathy exposition (and I am very much looking forward to the more advanced mathy posts), and then sprinkle in something lighter when the occasions to do so arise. For instance, I like that you based today's post on a recent discussion you had.

I believe this approach would be most conducive to interesting reading.

comment by Ben_Jones · 2008-02-26T09:23:19.000Z · LW(p) · GW(p)

'Newly minted quadriplegics'? What's more fun than that?

Don't worry too much about who wants what when. Like you say, it's all important stuff, and at a post a day no-one's going to complain about the odd vignette. Just keep up the good work.

comment by CarlShulman · 2008-02-26T13:45:30.000Z · LW(p) · GW(p)

When I saw the title I thought you were responding to this: http://www.overcomingbias.com/2008/02/more-moral-wigg.html

comment by The_Darkness · 2008-02-26T13:55:50.000Z · LW(p) · GW(p)

Thank GOD for non-mathy posts ;-)

comment by LG · 2008-02-26T14:47:36.000Z · LW(p) · GW(p)

There's a common literary technique used in most storytelling in which the author writes alternating "up" and "down" scenes -- it provides pacing and context; it also allows us time to digest the "up" scenes.

It seems to me that the technique is appropriate here -- it might be worth making it a goal for yourself to write a mathy post, then to follow up with a post on the same topic but without any math in it at all, except maybe references to the previous post. That would be an interesting exercise for you, I think. It's supposed to be accessible work -- how accessible can you make it? Can you write about these mathy topics without numbers?

I don't know, but if you never try to do impossible things...

comment by Will_Pearson · 2008-02-26T14:47:52.000Z · LW(p) · GW(p)

There hasn't been much evidence of atheists forming groups that have the positive aspects that a church/synagogue/mosque holds in the social life of some humans. So you might forgive a theist pretending to be a rationalist for not assigning a high probability to this happening, and for fearing that the world would lack said institutions and be a worse place.

If rationalists truly want to get rid of religions without getting rid of humans, we would have to ask ourselves, "What do humans get out of being part of a religion?" And then provide that through organisations.

And please no strawmen of the comfort of ignorance, I am talking about reassurance of being with people who are trying to hold the same goal system.

comment by Gordon_Worley · 2008-02-26T15:13:25.000Z · LW(p) · GW(p)


You know that you can't succeed without the math, and slowing down for posts like this is taking away 24 hours that might have been better used to save humanity. Not that this was a bad post, but I think you would be better off letting others write the fun posts unless you need to write a fun post to recover from teaching.

comment by randy · 2008-02-26T16:23:30.000Z · LW(p) · GW(p)

Eliezer, this was a welcome relief from the long series of mathy posts.

comment by Unknown · 2008-02-26T19:14:19.000Z · LW(p) · GW(p)

Eliezer, suppose it turned out to be the case that:

1) God exists.

2) At some time in the future, tomorrow, for example, God comes to Eliezer Yudkowsky in order to announce His existence.

3) Not only does He announce His existence, but He is willing to have His existence and power tested, and passes every test.

4) He also asserts that according to Eliezer's CEV, although not according to his present knowledge, God's way of acting in the world is perfectly moral, even according to Eliezer's values.

How would you react to these events? Would you write a post about them on OB?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-26T19:54:27.000Z · LW(p) · GW(p)

Thanks for feedback, all! The consensus appears to favor leavening mathy posts with less mathy ones. I'll bear that in mind, though I make no promises - I do have my own agenda here.

Unknown, can't say I've ever thought of that one. I've considered how to kill or rewrite a Judeo-Christian type God, but not that particular scenario you've just described.

I think I would simply reply to number 4, "I don't believe that without an explanation." After all, just because an entity displays great power doesn't mean it will always tell you the truth.

You can't necessarily force me to consider believing number 4 because it involves a moral question and those are not subject to forced visualization (by this rule) in the way that factual scenarios are.

You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can't necessarily force me to visualize the case where I accept their statement that killing babies is a good idea - not unless you can argue it well enough to create a real moral doubt in my mind.

If I myself am in actual doubt on a moral question, then I can visualize it both ways without confusing myself; and then you can demand that I visualize it. But when I am not in doubt, trying to visualize the contrary has the same quality as trying to concretely visualize 2 + 2 = 3, only more so.

I can visualize a mind constructed so as to possess a different morality, of course; but that is not the same as identifying myself with that mind.

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do:

If you circled "yes", how right would it have to be, for how many babies?

comment by conchis · 2008-02-26T20:05:18.000Z · LW(p) · GW(p)

Alternatively, if you want something super scary, try 1), 2), and 3) without 4).

comment by Nick_Tarleton · 2008-02-26T20:22:07.000Z · LW(p) · GW(p)

I've considered how to kill or rewrite a Judeo-Christian type God

Please make this your next "fun" post. (Speaking of which, I enjoy the digression.)

You can't necessarily force me to consider believing number 4 because it involves a moral question and those are not subject to forced visualization (by this rule) in the way that factual scenarios are.

But "my CEV judges killing babies as good" (unlike "killing babies is good") is a factual proposition. You know what your current moral judgments are, but you can't be certain what the idealized Eliezer would think. You might justifiably judge repugnant volition too unlikely to bother imagining it, but exempt?

comment by PK · 2008-02-26T20:35:44.000Z · LW(p) · GW(p)

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do:

If you circled "yes", how right would it have to be, for how many babies? ___

What a horrible, horrible question. My answer is... what do you mean when you say "intrinsically the right thing to do"? The "right thing" according to whom? If it was the right thing according to an authority figure but I disagreed, I probably would not do it. If the circumstances were so extreme that I truly believed it was the right thing (e.g., not killing a baby results in the baby's death anyway, plus a million more babies) then I would kill babies (assuming I could overcome my aversion to killing).

Actually, I don't really know how I would react. This is how I wish I would act. Calmly theorising in front of the computer, never having experienced circumstances remotely as awful, is not the same as being in those circumstances when the fear and dread overtake you. There would probably be a significant shift from what I consider and feel is "me" right now to the "me" I would become in that hypothetical situation.

comment by Tom_McCabe2 · 2008-02-26T20:52:44.000Z · LW(p) · GW(p)

"This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:"

Could you post these?

comment by Psy-Kosh · 2008-02-26T21:46:22.000Z · LW(p) · GW(p)

"I've considered how to kill or rewrite a Judeo-Christian type God"

Okay, now I'm curious what you've concluded with regards to that. :)

Probably not worth doing more than just talking about it in the comments, if that, unless you feel like doing a post on it just for fun.

But as for this post, I also liked it. It's useful to have concrete suggestions for mental practices that help one debias oneself.

comment by Joe_Marier · 2008-02-27T00:05:39.000Z · LW(p) · GW(p)

Why do the work of hypothesizing the world without God? It's not as if Nietzsche, Sartre, Camus, Marx, Shaw, Derrida, etc. haven't done a much better job of it than I could, because they were better philosophers than I am. However, I also consider Aquinas a better philosopher than any of the aforementioned. Is that so unreasonable?

comment by Maribel_Hawkins · 2008-02-27T01:21:50.000Z · LW(p) · GW(p)

Thanks for reminding me of The Art of War with your quote. You might be interested in this great translation - http://www.sonshi.com/huynh.html

comment by Wendy_Collings · 2008-02-27T02:27:16.000Z · LW(p) · GW(p)

"The mathy posts appeal to people who are serious about moving this burgeoning field forward, and the non-mathy posts appeal to people who are more casually interested in the concepts" - (Snappycrunch)

Beware of mistaking mathematical thinking for rational thinking; math is a tool like any other, to be used rationally or irrationally. Nassim Taleb demonstrates this very well in his book "Fooled by Randomness".

There's nothing casual about being interested in the concepts of rational thinking; even the mathematically minded (who will naturally be more interested in the mathy posts) need the concepts to understand what framework to put the math into.

comment by Mike5 · 2008-02-27T04:20:14.000Z · LW(p) · GW(p)

How does one go about visualizing a world without souls? Or rather, a world in which nobody believes in souls? And how would this visualization have any bearing on "reality"? It seems like the thought experiment is really: what would I do if everything were the same except I didn't have a soul?

comment by Anna · 2008-02-27T07:21:59.000Z · LW(p) · GW(p)

Regardless of all previous posts.

I think you write better when you are expressing your beliefs and inner thoughts as opposed to the mathematical equation that leads you there.

“Do not dwell in the past, do not dream of the future, concentrate the mind on the present moment.”

Just a thought. Anna

comment by Ben_Jones · 2008-02-27T09:56:14.000Z · LW(p) · GW(p)

slowing down for posts like this is taking away 24 hours that might have been better used to save humanity.

Sarcasm? Humour? Sincerity?

I've considered how to kill or rewrite a Judeo-Christian type God

Please make this your next "fun" post.


comment by steven · 2008-02-27T14:05:43.000Z · LW(p) · GW(p)

I've considered how to kill or rewrite a Judeo-Christian type God

Obligatory Pascal: Ah, but what if there's a tiny chance that He's reading along to figure out our tactics?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-27T16:48:57.000Z · LW(p) · GW(p)

Steven: To kill or rewrite a Judeo-Christian God, obviously, the technique has to work even if the God can read your planning thoughts. It's a lot easier than dealing with a UFAI, though, because the Judeo-Christian God has anthropomorphic cognitive vulnerabilities and a considerable response time delay. ("You ate the apple?")

Naturally you prefer to rewrite the God if possible - shame to waste all that power.

comment by steven · 2008-02-27T17:17:52.000Z · LW(p) · GW(p)

Heh, so how do you know that it is not the case that this hypothetical JCG reads overcomingbias but not people's private thoughts?

comment by steven · 2008-02-27T17:38:33.000Z · LW(p) · GW(p)

(Of course as long as we're under these weird assumptions then not discussing tactics could be a fatal mistake too, etc etc)

comment by Paul_Gowder · 2008-02-27T22:59:48.000Z · LW(p) · GW(p)

I'm skeptical about the possibility of really carrying out this kind of visualization (or, more broadly, imaginary leap). Here's why.

I might be able to say that I can imagine the existence of a god, and what the world would be like if, say, it were the Christian one. But I can't imagine myself in that world -- in that world, I'm a different person. For in that world, either I hold the counterfactually true belief that there is such a god, or I don't. If I don't hold that belief, then my response to that world is the same as my response to this world. If I do hold it, well, how can I model that?

This point is related to a point that Eliezer made in the comments, that I think just absolutely nails the problem, for a narrower class of the true set of states for which the problem exists:

You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can't necessarily force me to visualize the case where I accept their statement that killing babies is a good idea - not unless you can argue it well enough to create a real moral doubt in my mind.

But I maintain that you can't model the existence of a God with the right properties (including omnipotence, omniscience, and omnibenevolence) without being able to model that acceptance.

And likewise, the woman who believed in the soul couldn't model her reaction to a world without a soul without being able to experience herself as a person who genuinely doesn't believe in a soul. But she can only have that experience by becoming such a person.

I think this is just a limitation of human psychology. Cf. Thomas Nagel's great article, "What Is It Like to Be a Bat?" The argument doesn't directly apply, but the intuition does.

comment by Mike_Blume · 2008-07-27T21:00:32.000Z · LW(p) · GW(p)

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do:

I assume by "intrinsically right thing to do", you do not intend something straightforward like "here are five babies carrying a virus which, if left unchecked, will wipe out half the population of the planet. There is no means by which they can be quarantined; the virus can cross even the cold reaches of space. The only way to save us is to kill them". I assume, rather, that you, Eliezer Yudkowsky, hand me a booklet, possibly hundreds of pages long. On page 0 are listed my most cherished moral truths, and on page N is written: "thus, it is right and decent to kill as many babies as possible, whenever the opportunity arises. Any man who walks past a mother pushing a stroller, and does not immediately throttle the infant where it lies, is nothing more than a moral coward." For all n between 1 and N inclusive, the statements on page n seem to me to follow naturally and self-evidently from my acceptance of the statements on page n-1. As I look up, astonishment etched on my face, I see you standing before me, grinning broadly. You hand me a long, curved blade, and tell me the staff of the SIAI are taking the afternoon off to raid the local nursery, and would I like to join?

Under these circumstances I would assign high probability to the idea that you are morally ill, and wish to murder infants for your own enjoyment. That somewhere in the proof you have given me is a logical error - the moral equivalent of dividing by zero. I would imagine, not that morality led me astray, but that my incomplete knowledge of morality led me not to spot this error. I would show the proof to as many moral philosophers as I could, ones whose intelligence and expertise in the field I respected, and held to be above my own, and who were initially as unenthusiastic as I am at the prospect of infanticide. I would ask them if they could point me to an error in the proof, and explain to me clearly and fully why this step, which had seemed so simple to me, is not a legal move in the dance at that point. If they could not explain this to me to my satisfaction, I would devote much of my time from then on to the study of morality so that I could better understand it, and until I could, would distrust any moral conclusions I came to on my own. If none of them could find an error, I would still assign high probability to the notion that somewhere in the proof is an error which we humans have not advanced sufficiently in the study of metamorality to discover. I would consider it one of the most important outstanding problems in the field, and would, again, distrust any major moral decisions which did not clearly add up to normality until it was solved.

Just as the mathematical "proof" that 2=1 would, if accepted, destroy the foundations of mathematics itself, and must therefore be doubted until we can discover its error, so your proof that killing babies is good, would, if accepted, destroy the foundations of my morality, and so I must doubt it until I can find an error.

I am well aware that a fundamentalist could take my previous paragraph, replace "killing babies" with "oral sex" and thus make his prudery unassailable by argument. So much the worse for him, I say. If he considers the prohibition of a mutually beneficial and joyful act to be at the foundation of his morality, then he is a miserable creature and all my rationality will not save him from himself.

I have tried indirectly to answer your question. To answer it directly I will have to resort to what seems a paradox. I would not do "the right thing to do" if I know, at bottom, that it simply is not the right thing to do.

If you circled "yes", how right would it have to be, for how many babies? N/A

So, would I get the job?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-27T21:44:38.000Z · LW(p) · GW(p)

I would show the proof to as many moral philosophers as I could

Boy, I sure wouldn't. Ever read Cherniak's "The Riddle of the Universe and Its Solution"?

I am well aware that a fundamentalist could take my previous paragraph, replace "killing babies" with "oral sex" and thus make his prudery unassailable by argument. So much the worse for him, I say.

I sympathize, but I don't think that really solves the dilemma.

comment by Hopefully_Anonymous · 2008-07-27T22:25:46.000Z · LW(p) · GW(p)

Post what you want to post most. The advice that you should go against your own instincts and pander is bad, in my opinion. The only things you should force yourself to do are: (1) try to post something every day, and (2) try to edit and delete comments as little as possible. I believe the result will be an excellent and authentic blog with the types of readers you want most (and that are most useful to you).

comment by CarlShulman · 2008-07-28T00:47:33.000Z · LW(p) · GW(p)


I think there is pretty overwhelming evidence that moral philosophers are almost never moved to do anything nearly so onerous and dangerous as killing babies by their moral views. See Unger, Singer, Parfit, etc.

comment by Raw_Power · 2010-10-31T11:01:38.021Z · LW(p) · GW(p)

That title confused me. I expected an article on how, when debating, it was better to leave the opponent a line of retreat so that they would not feel dialectically cornered and start panicking. Of course, along that line of retreat, your arguments would be waiting for them. Socrates apparently was a true master of this little dance. This is especially useful if you have a lot of time and you are trying to actually change the way your opponent thinks, rather than changing that of an audience.

Replies from: timtyler
comment by timtyler · 2011-04-23T16:23:53.052Z · LW(p) · GW(p)

I am pretty sure that is what the term "leaving a line of retreat" in the context of an argument or disagreement should be used to refer to.

The meaning being proposed in this post is counter-intuitive. I classify it as being undesirable terminology.

comment by TheStevenator · 2011-12-13T05:06:00.063Z · LW(p) · GW(p)

Great post!

I think the greatest test of self-honesty (maybe it ties with honestly imagining the world you wish weren't real) would be admitting to yourself that the world looks an awful lot like the hypothetical world you just vividly imagined. I think if anyone who believes in god or homeopathy or what-have-you honestly imagined what the world would look like if their belief were wrong, and they had enough courage, they'd admit to themselves that the world already looks a lot like that.

comment by [deleted] · 2012-11-05T21:30:54.407Z · LW(p) · GW(p)

You really should write a book. Seriously. I could probably propose teaching Rationality as a first-year course (as a follow-up to Logic) instead of useless "password" classes like the ones I've received at my college. Having a book I could wave around to convince people that maybe being rational is important when you're a scientist would help a lot. At least I'd start printing and distributing it.

You could also just put the primary sequences of this website into a (e)book format, and release it. You might reach a wider audience that way, which would of course be Winning.

Replies from: thomblake, Nornagest, None
comment by thomblake · 2012-11-05T21:44:49.163Z · LW(p) · GW(p)

A serious book on Rationality has been in the works for some time.

comment by Nornagest · 2012-11-05T21:49:57.218Z · LW(p) · GW(p)

There are a couple of ebook versions of the Sequences floating around. I believe an official release is still in the works, but links to several unofficial ones may be found here.

comment by [deleted] · 2012-11-09T22:11:57.109Z · LW(p) · GW(p)

The trouble with the sequences is that each was written in the course of a day, and most were unrevised since then. They're obviously rich and interesting, but far from publishable material. The sequences meet every standard you could want for being insightful, but they fall far short of most standards of factual accuracy, organization, contact with contemporary discussions, etc.

comment by [deleted] · 2012-11-05T21:48:51.447Z · LW(p) · GW(p)

The hope is that it takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But then after you do the former, it becomes easier to do the latter.

And again you manage to condense a wise life lesson to two sentences. I should really write them down.

comment by Unknowns · 2014-12-06T02:49:09.647Z · LW(p) · GW(p)

"How many religious people would retain their belief in God, if they could accurately visualize that hypothetical world in which there was no God and they themselves have become atheists?"

More than a few. For example, if you are a Muslim in some places, accurately visualizing the world where you become an atheist means visualizing a world in which you get killed for apostasy.

Replies from: shminux
comment by shminux · 2014-12-06T03:40:07.622Z · LW(p) · GW(p)

I don't think that's quite it. For many, the world where there is no God is like the world where you have no parents.