Planning the Enemy's Retreat
post by Gram_Stone · 2017-01-11T05:44:56.471Z
Related: Leave a Line of Retreat
When I was smaller, I was sitting at home watching The Mummy with my mother, ironically enough. There's a character by the name of Bernard Burns, and you only need to know two things about him. The first thing you need to know is that the titular antagonist steals his eyes and tongue because, hey, eyes and tongues spoil after a while, you know, and it's been three thousand years.
The second thing is that Bernard Burns was the spitting image of my father. I was terrified! I imagined my father, lost and alone, certain that he would die, unable to see, unable even to properly scream!
After this frightening ordeal, I had the conversation in which it is revealed that fiction is not reality, that actions in movies don't really have consequences, that apparent consequences are merely imagined and portrayed.
Of course I knew this on some level. I think the difference between the way children and adults experience fiction is a matter of degree and not kind. And when you're an adult, suppressing those automatic responses to fiction has itself become so automatic that you experience fiction as a thing compartmentalized. You always know that the descriptions of consequences in the fiction will not by magic have fire breathed into them, that Imhotep cannot gently step out of the frame and really remove your real father's real eyes.
So even though we often use fiction to engage, to make things feel more real, I think that in another way, once we grow, fiction gives us the chance to entertain formidable ideas at a comfortable distance.
A great user once said, "Vague anxieties are powerful anxieties." Related to this is the simple rationality technique of Leaving a Line of Retreat: before evaluating the plausibility of a highly cherished or deeply frightening belief, one visualizes the consequences of the highly cherished belief being false, or of the deeply frightening belief being true. We hope that it will thereby become just a little easier to evaluate the plausibility of that belief, for if we are wrong, at least we know what we'd do about it. Sometimes, if not often, what you'd really do about it isn't as bad as your intuitions would have you think.
If I had to put my finger on the source of that technique's power, I would name its ability to reduce the perceived hedonic costs of truthseeking. It's hard to estimate the plausibility of a charged idea because you expect the undesired outcome to feel very bad, and you naturally avoid that feeling. The trick is in realizing that, in any given situation, you have almost certainly overestimated how bad it would really feel.
But Sun Tzu didn't just plan his own retreats; he also planned his enemies' retreats. What if your interlocutor has not practiced the rationality technique of Leaving a Line of Retreat? Well, Sun Tzu might say, "Leave one for them."
As I noted in the beginning, adults automatically compartmentalize fiction away from reality. It is simply easier for me to watch The Mummy than it was when I was eight. The formidable idea of my father having his eyes and tongue removed is easier to hold at a distance.
Thus, I hypothesize, truth in fiction is hedonically cheap to seek.
When you recite the Litany of Gendlin, you do so because it makes seemingly bad things seem less bad. I propose that the idea generalizes: when you're experiencing fiction, everything seems less bad than its conceivably real counterpart, because it's stuck inside the book, and any ideas within will then seem less formidable. The idea is that you can use fiction as an implicit line of retreat, that you can use it to make anything seem less bad by making it make-believe, and thus safe. The key, though, is that not everything inside of fiction is stuck inside of fiction forever. Sometimes conclusions that are valid in fiction also turn out to be valid in reality.
This is hard to use on yourself, because you can't make a real scary idea into fiction, or shoehorn your scary idea into existing fiction, and then make it feel far away. You'll know where the fiction came from. But I think it works well on others.
I don't think I can really get the point across in the way that I'd like without an example. This proposed technique was an accidental discovery, like popsicles or the Slinky:
A history student friend of mine was playing Fallout: New Vegas, and he wanted to talk to me about which ending he should choose. The conversation seemed mostly optimized for entertaining one another, and, hoping not to disappoint, I tried to intertwine my fictional ramblings with bona fide insights. The student was considering giving power to a democratic government, but he didn't feel very good about it, mostly because this fictional democracy was meant to represent everything that anyone has ever said is wrong with at least one democracy, plausible or not.
"The question you have to ask yourself," I proposed to the student, "is 'Do I value democracy because it is a good system, or do I value democracy per se?' A lot of people will admit that they value democracy per se. But that seems wrong to me. That means that if someone showed you a better system that you could verify was better, you would say 'This is good governance, but the purpose of government is not good governance, the purpose of government is democracy.' I do, however, understand democracy as a 'current best bet' or local maximum."
I have in fact gotten wide-eyed stares for saying things like that, even granting the closing ethical injunction on democracy as a local maximum. I find that unusual, because not conflating democracy with good governance seems like one of the first steps you would take toward thinking about politics clearly. If you were further in the past, when the fashionable political system was not democracy but monarchy, and you valued the fashionable system per se, the way many people now value democracy per se, then upon a future human revealing to you the notion of a modern democracy, you would find yourself saying, regrettably, "This is good governance, but the purpose of government is not good governance, the purpose of government is monarchy."
But because we were arguing for fictional governments, our autocracies, or monarchies, or whatever non-democratic governments heretofore unseen, could not by magic have fire breathed into them. For me to entertain the idea of a non-democratic government in reality would have elicited incredulous stares. For me to entertain the idea in fiction is good conversation.
The student is one of two people with whom I've had this precise conversation, and I do mean precise in the particular sense of "Which Fallout ending do I pick?" I snuck this opinion into both conversations, and both people came back weeks later to tell me that they had spent a lot of time thinking about that particular part of the conversation, and that the opinion I shared seemed deep.
Also, one of them told me that they had recently received some incredulous stares.
So I think this works, at least sometimes. It looks like you can sneak scary ideas into fiction, and make them seem just non-scary enough for someone to arrive at an accurate belief about that scary idea.
I do wonder, though, if you could generalize this even more. How else could you reduce the perceived hedonic costs of truthseeking?
12 comments
comment by TiffanyAching · 2017-01-12T05:39:48.358Z
I like this post. Sneaking "scary" ideas into fiction, where they can be faced in a context that feels safer - that makes a lot of sense to me. And while I think you're right that it's tricky to consciously use the technique on yourself, I've certainly had it happen that way for me accidentally. (Though I think it's worth mentioning that the moment of realization - the moment it hit me that the logical or moral conclusion I had accepted in a fictional context was also valid/applicable in real life - was still sometimes painful or at least jarring.)
You asked about other ways to "reduce the perceived hedonic costs of truthseeking". I have an example of my own that might be relevant, especially to the word "perceived". Have you ever seen that trick where someone pulls a tablecloth off a table quickly and smoothly enough that all the plates and glasses and things stay right where they were?
I was speaking to a friend-of-a-friend to whom I had just been introduced - call her Jenny. In casual conversation, Jenny brought up her belief in crystal healing and asked me directly what I thought of it. Our mutual friend winced in horror because she knows how I feel about woo and anticipated a scathing response, or at least a condescending lecture about evidence-based medicine.
I'm not completely tactless, and Jenny was nice. I didn't want to ruin her evening over some stupid crystals. I had an idea. I said, as near as I can recall, this:
"Oh, yes, I think crystal healing is amazing! Gosh, when you think that just by looking at a little piece of quartz or hematite or topaz and thinking about things like mental clarity or relaxation, we have the power to lower our anxiety levels, lessen our feelings of fatigue, even reduce our own blood pressure - I mean it's such a beautiful example of the power of the human mind, isn't it?"
And more in the same vein. Basically I gushed for five minutes about how cool the placebo effect is (without once using the term "placebo effect") and how cool the natural world is, and how cool it is that we're constantly learning more about things that used to be mysteries, and so on.
My friend was relieved and Jenny was nodding - a little hesitantly, like she was slightly bewildered by something she couldn't quite put her finger on - but she was listening, she wasn't upset or defensive or annoyed, and the party proceeded without awkwardness or rancor.
I didn't tell any lies. Crystal healing does work, in the sense that it's better than nothing. Of course almost anything that doesn't do active harm or negate the effects of real treatments works better than nothing - that's the beauty of the placebo. Doesn't really matter if it's administered via sugar pill or amethyst or homeopathic milkshake, if the belief is there (and I've seen some intriguing evidence to suggest that even true belief isn't necessary, by the way - you might only need hope).
See what I mean about the tablecloth trick? I was able to introduce Jenny to a less-wrong way of thinking about crystals without the hedonic cost of totally dismantling her beliefs. Now, I don't think I convinced her that crystals aren't filled with mysterious healing energy, and we never got near the fact that real medicine should work better than a placebo, but it still felt like a win - because I slipped a line of retreat into her head without setting off her intruder-alert. I gave her the plans for a model where her beloved crystals are cool and interesting and not-useless and not-lame that doesn't rely on them being magic. I showed her that you could take away the tablecloth and leave her good china in place.
It's a small example but I think there's an argument for minimizing perceived hedonic cost by demonstrating to someone that the absence of one cherished belief does not necessarily mean that every cherished belief or value that apparently rests upon it must come crashing down. Relinquishing belief in the magic of crystals doesn't mean Jenny has to throw out her collection of pretty rocks. Relinquishing belief in God doesn't mean a life without joy or meaning or domestic felicity and I think that's the kind of thing a lot of people are really afraid of losing, not the abstract idea of God's existence itself. They need to know there's a table under there.
↑ comment by Pimgd · 2017-01-12T16:35:01.050Z
I get the feeling that if you told Jenny all this, she'd get angry at you at some point in your explanation. It feels kinda manipulative. I don't get this "manipulative" feeling from the example in the post. The end result seems good, though.
↑ comment by TiffanyAching · 2017-01-12T17:08:47.347Z
You're absolutely right, it was totally, consciously manipulative and I'm not going to try to justify that with a bunch of utility boilerplate - but I'd claim that the manipulation element lay only in my tacit implication that Jenny must, as a matter of course, see the question as I did. The "as I'm sure you already know" stuff. Is there a name for that? It's like begging the question on purpose, treating an important assumption as though it's settled and barreling ahead before they can get a word in.
It's embarrassing - socially difficult - for someone to interrupt in order to correct you, to say "wait a minute, back up, that's all fine and dandy but I actually believe that the crystals themselves are magic." Especially someone with only a shaky ability to articulate their belief in the first place, like Jenny. It was intellectual bullying in a small way and I don't really know why I felt I had to do it like that. Petty fun, maybe? The devil is strong in me where crystals are concerned.
But I believe the rest of my idea still stands - there goes the tablecloth again - if you remove that unnecessarily manipulative element. If I had just simply and honestly discussed all the reasons I find crystals and their - ahem - healing properties fascinating, which again are all true, I would still have been making available to Jenny a model which preserved most of what she valued about her belief while jettisoning the belief itself.
↑ comment by Gram_Stone · 2017-01-14T06:23:42.521Z
(Upvoted.) Just wanted to say, "Welcome to LessWrong."
comment by Jiro · 2017-01-12T05:19:10.872Z
Entertaining such ideas in reality may be subject to the problem that you are a fallible human and incapable of the certainty needed to justify making the choice. It may be that fictional dictatorships could be better than fictional democracies, but in a real-life situation I could never know that the dictatorship is better with enough certainty to be able to choose the dictatorship.
↑ comment by Gram_Stone · 2017-01-14T06:12:07.710Z
I think this is worth pointing out because it seems like an easy mistake to use my reasoning to justify dictatorship. I also think this is an example of two ships passing in the night. Eliezer was talking about a meta-level/domain-general ethical injunction. When I was talking to the student, I was talking about how to avoid screwing up the object-level/domain-specific operationalization of the phrase 'good governance'.
My argument was that if you're asking yourself the question, "What does the best government look like?", assuming that that is indeed a right question, then you should be suspicious if you find yourself confidently proposing the answer, "My democracy." The first reason is that 'democracy' can function as a semantic stopsign, which would stop you dead in your tracks if you didn't have the motivation to grill yourself and ask, "Why does the best government look like my democracy?" The second reason is that the complement of the set containing the best government would be much larger than the set containing the best government, so if you use the mediocrity heuristic, then you should conclude that any given government in your hypothesis space is plausibly not the best government. If you consider it highly plausible that your democracy is the end state of political progress, then you're probably underestimating the plausibility of the true hypothesis. And lastly, we hope that we have thereby permitted ourselves to one day generate an answer that we expect to be better than what we have now, but that does not require the seizure of power by any individual or group.
If, in the course of your political-philosophical investigations, you find yourself attempting to determine your preference ordering over the governments in your hypothesis space, and, through further argumentation, you come to the separate and additional conclusion that dictatorship is preferable to democracy, then the ethical injunction, "Do not seize power for the good of the tribe," should kick in, because no domain is supposed to be exempt from an ethical injunction. It just so happens that you should also be suspicious of that conclusion on epistemic grounds, because the particular moral error that that particular ethical injunction is intended to prevent may often be caused by an act of self-deception. And if you add a new government to your hypothesis space, and this government somehow doesn't fit into the category 'dictatorship', but also involves the seizure of power for the good of the tribe, then the ethical injunction should kick in then too, and you should once more be suspicious on epistemic grounds as well.
What do you think about all of that?
↑ comment by Jiro · 2017-01-14T18:00:31.902Z
My point is that using fiction to sneak ideas about the real world past people is a cheat. It is possible to be certain about something fictional in a way in which one cannot be certain about the real world.
↑ comment by Gram_Stone · 2017-01-14T23:41:37.470Z
Stipulation is obviously sometimes a cheat. I would be surprised if it was always one.
comment by chaosmage · 2017-01-11T12:06:00.635Z
What I do is consciously decide, right in the beginning, that "I'd rather know if this is true than believe that it is true."
For the short intervals that I expend conscious attention on a belief, I find it not too hard to value "finding out this is actually false" more highly than "continuing to believe I'm right".
It seems important, whenever I change my mind about something, to sustain conscious attention on the subject for a while. This gives my more automatic, system-1-ish thoughts the time they need to learn to emulate the new thought and keep thinking it when my conscious thinking has moved elsewhere.
comment by [deleted] · 2017-01-12T06:25:09.858Z
[Comment related to the title. Post itself is insightful and I enjoyed it]:
I read this thinking that it was going to be about charitably treating interlocutors in the spirit of leaving a line of retreat, à la Yudkowsky. While related, the switch into using fiction as a way to have more serious truthseeking discussions was unexpected.
And it does seem like the fiction part is pretty core to this essay.
Any chance of changing the title to something a little more direct (e.g., Fiction Allows Better Truthseeking)?
comment by Flinter · 2017-01-15T22:04:21.571Z
"But because we were arguing for fictional governments, our autocracies, or monarchies, or whatever non-democratic governments heretofore unseen, could not by magic have fire breathed into them. For me to entertain the idea of a non-democratic government in reality would have solicited incredulous stares. For me to entertain the idea in fiction is good conversation.
I do wonder though, if you could generalize this even more. How else could you reduce the perceived hedonic costs of truthseeking?"
I don't know if I'm perfectly speaking to your point, but I think the insight I mean to raise might be very relevant and significant: a conceptual argument, discussion (a real discussion, but on a conceptual subject or idea), or device can be useful and valuable in its own right.
It's a very important point to make, I think, and I think it can be observed that, as a society, we do not properly use that insight.
I think costs can be reduced further by exploring the observation that we generally don't allow the introduction of "heaven" into conversation, debate, proofs, etc.
Is anyone worried I might come from a religious perspective that won't fit into the community when I say this?
I simply mean that we need to think about the ultimate, game-theoretically optimal endgames of the social problems we describe and try to solve.
And it seems to me that this is a taboo concept to introduce, and not in the useful sense of the word.