A Dialogue On Doublethink

post by LoganStrohl (BrienneYudkowsky) · 2014-05-11T19:38:47.821Z · LW · GW · Legacy · 108 comments

Contents

  Doublethink
  Against Doublethink
  Against Against Doublethink
  Against Against Against Doublethink

Followup to: Against Doublethink (sequence), Dark Arts of Rationality, Your Strength as a Rationalist


Doublethink

It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Book IV of Plato's Republic

Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?

To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.

But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".

When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.

Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.

It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.

Against Doublethink

Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.

There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of non-contradiction. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)

What are beliefs for? Please pause for ten seconds and come up with your own answer.

Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.

My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led to the action of researching the ticking noise, planning some simple tests, and will probably lead to cleaning the sticky lifters.

If it's true that solving the ticking noise will keep my car running, then my beliefs will cash out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.

Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.

Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.

Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.

Against Against Doublethink

What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?

It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.

I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was basically just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.

Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates

If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.

Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.

For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.

Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."

At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.

I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix; in important cases, I started receiving Boomeranged notes from past-me around the time present-me expected things to start that said, "Surprise! You've still got ten minutes!"

This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.

Doublethink, and the Dark Arts generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.

Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?

Against Against Against Doublethink

There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if it doesn't get you what you want. This has always been very obvious to me, and it remains so.

There is a bigger picture.

Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury.

As soon as I finally decided to abandon the Dark Arts, I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.

The first one I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was probably going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.

Shortly after I gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."

It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.

I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.

Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"

Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because there are no disruptions, acting on confusion is becoming automatic.

I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"

(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.

That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.

It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.

Excellent epistemology is not the natural state of a human brain. Rationality is hard. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.

This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging.

And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.

So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.

But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.

108 comments

Comments sorted by top scores.

comment by So8res · 2014-05-08T17:07:44.019Z · LW(p) · GW(p)

Thanks for writing this!

I remain unconvinced. I agree with most of your points, and I think most of my disagreement stems from modeling my mind, the world, and/or 'dark techniques' in a different way than you do. I'd be happy to get together and try to converge sometime.

I do have one direct disagreement with the text, which is somewhat indicative of my more general disagreements.

Your instrumental rationality will always be limited by your epistemic rationality.

In my experience, many rationalists are motivation-limited, not accuracy-limited. I have met many people who are smarter than I am, who think faster than I do, who are better epistemic rationalists than I am---and who suffer greatly from akrasia or other stamina issues.

I seem to be quite good at achieving my goals. I am by no means convinced that this is due to some excess of willpower: my successes could alternatively be attributed to chance, genetics, self-delusion, or other factors. Even conditioned upon the assumption that my ability to avoid akrasia is a large part of my success, I am not convinced that my motivational techniques are the source of this ability.

However, I do see many "light-side" epistemic rationalists suffering from more akrasia than I do. In the real world, I am not convinced that epistemic rationality is enough. As such, I am cautious about removing motivational techniques in the name of the light.

(I also am under the impression that I can use my motivational techniques in such a way as to avoid many of the adverse effects you mention, which gets back to us modeling things differently. This is, of course, exactly what my brain would tell me, and the objection should largely be disregarded until we have a chance to converge.)

There is, of course, some degree to which the above argument only indicates my ability to find self-protecting arguments that I myself find convincing. This topic is somewhat emotionally laden for me, so next time I find a few spare hours I will spend them strongly considering whether I am wrong. However, after cursory examination, I don't expect any particular update.

Replies from: ChristianKl, None
comment by ChristianKl · 2014-05-09T11:27:01.360Z · LW(p) · GW(p)

Which particular motivation techniques do you use?

Replies from: So8res
comment by So8res · 2014-05-09T16:19:19.649Z · LW(p) · GW(p)

There are many. I was particularly referring to the ones I discussed in the dark arts post, to which the above post is a followup.

Replies from: ChristianKl
comment by ChristianKl · 2014-05-09T16:42:32.942Z · LW(p) · GW(p)

Okay, I didn't remember that you were the person who wrote that post.

comment by [deleted] · 2014-05-08T22:25:59.846Z · LW(p) · GW(p)

It seems somewhat absurd to say that your ability to achieve goals is limited by the thingspace cluster we refer to as epistemic rationality. After all, caring too much about epistemic rationality leads to needing things like this.

Epistemic rationality seems like it should be something that you care about when it matters to care about, and don't care about when it doesn't matter. Like any other investment, your capital invested should be proportional to your rate and belief of return. Similarly, you should always be willing to sacrifice some epistemic cleanliness if it means winning. You can clean up the dark nasty corners of your mind on top of your pile of utility.

Replies from: So8res
comment by So8res · 2014-05-09T00:08:24.381Z · LW(p) · GW(p)

I think the point Brienne made is that seemingly small tradeoffs of epistemic accuracy for instrumental power actually cost much more than you might expect. You can't pay a little epistemic accuracy for a lot of instrumental power, because epistemic rationality requires that you leave yourself no outs. If you sanction even one tiny exception, you lose the benefits of purity that you didn't even know were available.

Replies from: None, Lumifer
comment by [deleted] · 2014-05-09T03:38:21.896Z · LW(p) · GW(p)

You're definitely paying for epistemic rationality with instrumental power if you spend all of your time contemplating metaethics so that you have a pure epistemic notion of what your goals are.

Humans start with effectively zero rationality. At some point, it becomes less winning to spend time gaining epistemic rationality than to spend time effecting your goals.

So, it seems like you can spend potential epistemic rationality for instrumental power by using time to effect change rather than becoming epistemically pure.

To respond to some of your later points:

Take a programming language like OCaml. OCaml supports mutable and immutable state, and you could write an analysis over OCaml that would indeed barf and die on the first instances of mutation. Mutable state does make it incredibly difficult, sometimes to the point of impossibility, to use conventional analyses on modern computers to prove facts about programs.

But this doesn't mean that a single ref cell destroys the benefits of purity in an OCaml program. To a human reading the program (which is really the most important use case), a single ref cell can be the best way to solve a problem, and the language's modularity can easily abstract it away so that the interface on the whole is pure.
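
To make the modularity point concrete, here is a minimal OCaml sketch of the kind of thing this paragraph describes, with memoization as an illustrative example (the module name and code are assumptions of mine, not taken from the discussion): a single ref cell holds a cache, but the signature exposes only fib : int -> int, so the interface as a whole behaves as a pure function.

    (* Memoized Fibonacci: the one piece of mutable state is a ref cell
       holding an association list, hidden behind the module signature. *)
    module Memo_fib : sig
      val fib : int -> int
    end = struct
      let cache : (int * int) list ref = ref []

      let rec fib n =
        if n < 2 then n
        else
          match List.assoc_opt n !cache with
          | Some v -> v                        (* reuse a stored result *)
          | None ->
              let v = fib (n - 1) + fib (n - 2) in
              cache := (n, v) :: !cache;       (* mutation, invisible to callers *)
              v
    end

    let () =
      (* Same input, same output, whatever the cache currently contains. *)
      Printf.printf "fib 30 = %d\n" (Memo_fib.fib 30)

A reader (or an analysis) working against the signature never has to reason about the mutation; only code inside the module does.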

Similarly, I agree that it's important to strive for perfection, in all cases. But striving for perfection doesn't mean taking every available sacrifice for perfection. I can strive for epistemic perfection while still choosing locally to not improve my epistemic state. An AI might have a strict total ordering of terminal goals, but a human never will. So as a human, I can simultaneously strive for epistemic perfection and instrumental usefulness.

In any case, I still think there's a limit where the return on investment into epistemic rationality diminishes into nothingness, and I think that limit is much closer than most less wrongers think, primarily because what matters most isn't absolute rationality, but relative rationality in your particular social setting. You only need to be more able to win than everyone you compete with; becoming more able to win without actually winning is not only a waste of time, but actively harmful. It's better to win two battles than to waste time overpreparing. Overfocusing on epistemic rationality ignores the opportunity cost of neglecting to use your arts for something outside themselves.

comment by Lumifer · 2014-05-09T00:56:09.294Z · LW(p) · GW(p)

If you sanction even one tiny exception, you lose the benefits of purity

What is that "purity" you're talking about? I didn't realize humans could achieve epistemic perfection.

Replies from: So8res
comment by So8res · 2014-05-09T01:12:23.910Z · LW(p) · GW(p)

Keep in mind here that I'm steelmanning someone else's argument, perhaps improperly. I don't want to put words in anyone else's mouth. That said, I used the term 'purity' in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Continuing the steelmanning, however, I'd say that while no human can achieve epistemic perfection, there's a large class of epistemic failures that you only recognize if you're striving for perfection. Striving for purity, not purity itself, is what gets you the gains.

Replies from: BrienneYudkowsky, Eugine_Nier, BrienneYudkowsky, Lumifer
comment by LoganStrohl (BrienneYudkowsky) · 2014-05-09T02:34:10.257Z · LW(p) · GW(p)

So8res, you're completely accurate in your interpretation of my argument. I'm going to read some more of your previous posts before responding much to your first comment here.

comment by Eugine_Nier · 2014-05-09T01:38:07.976Z · LW(p) · GW(p)

Yes, as Eliezer put it somewhat dramatically here:

If you once tell a lie, the truth is ever after your enemy.

To expand on this in context, as long as you are striving for the truth any evidence you come across helps you, but once you choose to believe a lie you must forever avoid dis-confirming evidence.

Replies from: fezziwig, fezziwig
comment by fezziwig · 2014-05-09T19:47:45.477Z · LW(p) · GW(p)

You've drawn an important distinction, between believing a lie and telling one. Your formulation is correct, but Eliezer's is wrong.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-12T02:02:47.735Z · LW(p) · GW(p)

Telling a lie has its own problems, as I discuss here.

Replies from: fezziwig
comment by fezziwig · 2014-05-12T20:08:10.896Z · LW(p) · GW(p)

Yes, it's pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that's the point of lying, after all. But right now we're talking about the harm one does to oneself by lying; I submit that there needn't be any.

Replies from: Armok_GoB, Eugine_Nier
comment by Armok_GoB · 2014-05-14T00:15:17.575Z · LW(p) · GW(p)

One distinction I don't know if it matters, but many discussions fail to mention at all, is the distinction between telling a lie and maintaining it/keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear you might lie to someone, you intend to tell the truth a few weeks down the line, and if pressed or questioned you confess and tell the actual truth rather than try to cover it with further lies.

Edit: also, have some kind of oath and special circumstance where you will in fact never lie, but precommit to only use it for important things or give it a cost in some way so you won't be pressed to give it for everything.

comment by Eugine_Nier · 2014-05-12T21:57:44.767Z · LW(p) · GW(p)

Did you even read the comment I linked to? Its whole point was about the harm you do to yourself and your cause by lying.

Replies from: None
comment by [deleted] · 2014-05-13T01:15:48.369Z · LW(p) · GW(p)

I think you and fezziwig aren't disagreeing. You're saying as an empirical matter that lying can (and maybe often does) harm the liar. He's just saying that it doesn't necessarily harm the liar, and indeed it may well be that lies are often a net benefit. These are compatible claims.

comment by fezziwig · 2014-05-09T19:41:33.794Z · LW(p) · GW(p)

You've drawn an important distinction, between believing a lie and telling one. Right now we're talking about lying to ourselves so the difference isn't very great, but be very careful with that quote in general.

comment by LoganStrohl (BrienneYudkowsky) · 2014-05-09T22:25:49.368Z · LW(p) · GW(p)

I can already predict, though, that much of my response will include material from here and here.

comment by Lumifer · 2014-05-09T01:35:34.672Z · LW(p) · GW(p)

in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Could you give some examples?

there's a large class of epistemic failures that you only recognize if you're striving for perfection.

I am not sure which class you're talking about... again, can you provide some examples?

comment by A1987dM (army1987) · 2014-05-08T16:00:11.986Z · LW(p) · GW(p)

You'd better remove Scott's real last name from your post before search engines index it, because he doesn't want it to be easy to find his blog given his full name.

Replies from: BrienneYudkowsky, gjm
comment by LoganStrohl (BrienneYudkowsky) · 2014-05-08T18:15:41.036Z · LW(p) · GW(p)

done. sorry, didn't know.

comment by gjm · 2014-05-09T10:15:47.236Z · LW(p) · GW(p)

Agreed, but unless a bunch of other things that have been there for ages get removed, it's never going to take much effort to make that connection anyway. (E.g., Scott might want to look for ways to make typing in his real name not produce the top Google result that it currently does.)

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-09T15:31:19.976Z · LW(p) · GW(p)

Raikoth.net only links to his old blog. AFAICT none of the results in the first page allow someone who doesn't already know where his new blog is to find it in less than five minutes, and he isn't trying to make it impossible, only inconvenient.

(Edit: OTOH raikoth.net does show his new pen name, and someone might google it, but I don't think it'd occur to Aunt Tillie to do so.)

Replies from: gjm
comment by gjm · 2014-05-09T19:01:01.385Z · LW(p) · GW(p)

Your edit explains exactly what I had in mind.

[EDITED to fix a typo.]

comment by Decius · 2014-05-11T17:07:23.314Z · LW(p) · GW(p)

Would an apt summary be "Expertly used Dark Side techniques have a high local maximum of instrumental rationality, but there is a region of higher instrumental rationality that involves epistemic rationality techniques that are incompatible with Dark Side techniques"?

comment by Viliam_Bur · 2014-05-08T14:02:48.029Z · LW(p) · GW(p)

Being wrong about something may harm you in the long term. Being right when others are wrong can get you killed right now.

Not sure how exactly this relates to the article (maybe it doesn't), but I feel weird when this obvious part is missing from a debate about instrumental rationality. As if there is just me and the universe, and if I have the correct beliefs, the universe will reward me, and if I have incorrect beliefs, the universe will punish me, on average. Therefore, let's praise the universe and let's have correct beliefs! I agree that if I were a Robinson on an empty island, trying to have correct beliefs would probably be the best way. But most people are not in this situation.

It is a great privilege to live in the time and space when having the right beliefs doesn't get you killed immediately. It probably contributes to our epistemic rationality more than anything else. And I enjoy it, a lot! But it doesn't mean that the social punishments are gone completely. Even in the same country, different people live in different situations, so probably an important strategic move in becoming more rational is to navigate yourself into situations where the punishment for having correct beliefs is smaller. If you can't... then you play by the more complex rules; the outcomes of epistemic rationality may be smaller, and you might need some dose of Dark Arts just to survive. (And by the way, this is the situation we are optimized for by evolution.)

Uhm... not sure where I wanted to get by saying this. I guess I wanted to say that "epistemic rationality is the best way to win" depends on the environment. In theory, you could have epistemically correct beliefs and yet behave in public according to other people's wrong beliefs and expectations; but I think this is rather difficult for a human.

Replies from: ChristianKl, fezziwig, MugaSofer, blacktrance, None
comment by ChristianKl · 2014-05-08T20:48:28.884Z · LW(p) · GW(p)

Having correct beliefs and telling people about them are two separate things.

Replies from: JTHM
comment by JTHM · 2014-05-09T02:43:28.318Z · LW(p) · GW(p)

Lying constantly about what you believe is all well and good if you have Professor Quirrell-like lying skills and your conscience doesn't bother you if you lie to protect yourself from others' hostility to your views. I myself lie effortlessly, and felt not a shred of guilt when, say, I would hide my atheism to protect myself from the hostility of my very anti-anti-religious father (he's not a believer himself, he's just hostile to atheism for reasons which elude me).

Other people, however, are not so lucky. Some people are obliged to publicly profess belief of some sort or face serious reprisals, and also feel terrible when they lie. Defiance may not be feasible, so they must either use Dark Side Epistemology to convince themselves of what others demand they be convinced, or else be cursed with the retching pain of a guilty conscience.

If you've never found yourself in such a situation, lucky you. But realize that you have it easy.

Replies from: brazil84, ChristianKl, NancyLebovitz, christopherj
comment by brazil84 · 2014-05-10T12:11:22.201Z · LW(p) · GW(p)

Lying constantly about what you believe is all well and good if you have Professor Quirrell-like lying skills and your conscience doesn't bother you if you lie to protect yourself from others' hostility to your views.

Even then, it's more cognitively demanding to lie. It's like running a business with two sets of books -- the set you show to the IRS and the set you actually use to run the business. It may save you a lot in taxes but you still have to spend double the time keeping your books.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-10T15:18:18.197Z · LW(p) · GW(p)

Agreeing with the people around you isn't demanding. And most people don't need to maintain any "true" beliefs about politics, religion and philosophy. They butter no parsnips in practice, and parsnip-buttering beliefs are not varied or unpredictable enough for ingroup signalling purposes.

Replies from: brazil84
comment by brazil84 · 2014-05-10T17:01:29.261Z · LW(p) · GW(p)

Agreeing with the people around you isn't demanding.

I would say it depends on whether you really agree with them or not. If you believe X and you are surrounded by people who believe Y, and you need to conceal your belief in X, then you constantly have to be asking yourself "what would someone who believes in Y do or say?"

And most people don't need to maintain any "true" beliefs about politics, religion and philosophy.

I'm not sure what it means to "maintain 'true' beliefs." If you go through life, you will naturally develop a mental model (at least one, I suppose) of how the world works. If that model contains an Almighty Creator, then you are a theist. If it doesn't, then you are an atheist. Perhaps there is a third possibility, that your model is uncertain on this point, making you an agnostic.

If you are an atheist or an agnostic, and you are in a time or place where everyone is expected to be a theist, especially anyone who wants to get ahead in life, then that's a potential problem. Agreed?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-11T11:02:54.227Z · LW(p) · GW(p)

I believe your first point is answered by my second.

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

If you are one of the people, typical on LW but not in the population at large, who like to have beliefs on the "big" but practically unimportant questions, you will find dissimulation difficult. If not, not.

Replies from: TheAncientGeek, brazil84, ChristianKl
comment by TheAncientGeek · 2014-05-11T11:18:44.601Z · LW(p) · GW(p)

A man was telling one of his friends the secret of his contented married life: "My wife makes all the small decisions," he explained, "and I make all the big ones, so we never interfere in each other's business and never get annoyed with each other. We have no complaints and no arguments." "That sounds reasonable," answered his friend sympathetically. "And what sort of decisions does your wife make?" "Well," answered the man, "she decides what jobs I apply for, what sort of house we live in, what furniture we have, where we go for our holidays, and things like that." His friend was surprised. "Oh?" he said. "And what do you consider important decisions then?" "Well," answered the man, "I decide who should be Prime Minister, whether we should increase our help to poor countries, what we should do about the atom bomb, and things like that."

comment by brazil84 · 2014-05-11T18:59:11.258Z · LW(p) · GW(p)

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

I disagree with that. For example, suppose you are hunting in the woods and you find 10 gold coins. According to your village elders, Crom the grim gloomy unforgiving god commands that you donate any such windfall to the Village Shrine to Crom, and that to do so will guarantee you eternal paradise. And that to fail to do so will guarantee eternal damnation.

If your mental model of the universe includes Crom the grim gloomy unforgiving god, then of course you will make the donation. Otherwise you are likely to keep the windfall to yourself. Of course a decision must be made.

Of course you might object that those days are gone, that nobody is expected to follow religious precepts anymore, at least not in the United States. And I would disagree with that too. In today's United States, you must still decide where to live and whom to do business with. Does your mental model of the universe include the fact that certain groups are more prone to crime and disruptive behavior than others? If so, you would be wise to have a rationalization in mind for why you don't want to live anywhere near such groups. Or at least a few euphemisms.

Anyway, please answer my question from before:

If you are an atheist or an agnostic, and you are in a time or place where everyone is expected to be a theist, especially anyone who wants to get ahead in life, then that's a potential problem. Agreed?

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-12T11:19:56.643Z · LW(p) · GW(p)

Otherwise you are likely to keep the windfall to yourself.

Unless your model of the world includes people ostracizing you for doing so.

Replies from: brazil84
comment by brazil84 · 2014-05-12T12:36:38.166Z · LW(p) · GW(p)

Unless your model of the world includes people ostracizing you for doing so.

I completely agree, but you are kinda fighting the hypothetical here.

comment by ChristianKl · 2014-05-11T21:05:54.537Z · LW(p) · GW(p)

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

Oaths do work as a commitment device if you think that the God on whom you swear really exists and will punish you. No automatic tracking like Beeminder, but still a decent alternative.

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-12T11:21:07.461Z · LW(p) · GW(p)

But such punishment is even further away in time than staying fat or failing the exam, so if the latter can't motivate you to diet or study...

comment by ChristianKl · 2014-05-09T11:25:31.366Z · LW(p) · GW(p)

Not talking about religion, politics and sex is a position that's acceptable in many places.

Being an atheist is also an identity label. You don't need an identity label to have accurate beliefs. If you label yourself as an atheist, then you will feel uncomfortable participating in certain religious rituals, such as when your family expects you to be at church.

If you just don't believe the ritual becomes a silly game that won't make you uncomfortable.

comment by NancyLebovitz · 2014-06-03T11:25:44.873Z · LW(p) · GW(p)

How skillful you need to be at lying depends on the culture you're in and the personalities of the people you're surrounded by.

Some cultures leave a lot of room for hypocrisy.

comment by christopherj · 2014-05-16T04:32:38.453Z · LW(p) · GW(p)

I myself lie effortlessly, and felt not a shred of guilt when, say, I would hide my atheism to protect myself from the hostility of my very anti-anti-religious father (he's not a believer himself, he's just hostile to atheism for reasons which elude me).

Hm, an atheist who hides his atheism, from his father who also seems to be an atheist (aka non-believer) but acts hostile towards atheists? Just out of curiosity, do you also act hostile towards atheists when you're around him?

comment by fezziwig · 2014-05-08T19:17:56.203Z · LW(p) · GW(p)

I think you've identified a special case of a more general problem, which is that true beliefs do not have equal value, and that their values can vary wildly with your circumstances. To borrow blacktrance's example: if you're living in 6th-century Rome then it's useful to know that Jews aren't inherently evil...but it's more useful to know what happens to people who say so. And if you don't know how to profess that Jews are inherently evil without being corrupted by that lie, then it's more important to learn that than it is to believe true things about Jews.

This discipline, of predicting the value of information before you've learned it, is very difficult. For me, it's the most difficult thing. But it's also the center of the art; if it weren't, we could all level up endlessly by browsing Wikipedia.

comment by MugaSofer · 2014-05-10T18:38:04.918Z · LW(p) · GW(p)

As if there is just me and the universe, and if I have the correct beliefs, the universe will reward me, and if I have incorrect beliefs, the universe will punish me, on average. Therefore, let's praise the universe and let's have correct beliefs!

It is just you and the universe. "Other people" are a part of the universe.

(I actually kind of agree with you, though - the larger point is that your beliefs can impact outcomes directly rather than only via predictions. A non-sentient example of this would be placebo effects. This seems not to have been included in the OP's discussion.)

comment by blacktrance · 2014-05-08T16:53:50.808Z · LW(p) · GW(p)

Having correct beliefs does not mean expressing them. If I traveled back in time to medieval Rome, I would still believe that Jews aren't inherently evil and that Christ did not rise from the dead, but it would be unwise for me to be too public about those beliefs.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-08T23:17:34.417Z · LW(p) · GW(p)

Nitpick: my understanding is that even in medieval Rome a lot of people didn't consider Jews inherently evil. At least to the extent that they were willing to engage in business dealings with them.

comment by [deleted] · 2014-05-09T06:29:06.710Z · LW(p) · GW(p)

The concept is called 'ketman' -- that term was popularized by Czeslaw Miłosz, who wrote about its practice under Communism.

I'm not sure if the pressure comes from lying per se -- it's not as if the practice is recent or uncommon -- or from having no place to go where you can escape the necessity to lie. Dalrymple was on to something when he said that the purpose of forcing public profession of the official idea under Communism was to humiliate; any place to tell the truth is a blow against the regime's goal of humiliation. Underground acts of non-public defiance aren't a new concept.

Secret societies aren't a new concept either; they don't seem to be as common anymore as they once were (but then again, how would I know?), but that's because they've been replaced by open but obscure/anonymous pseudosocieties online.

But there's a problem with the act of practicing ketman and going underground. Say you get n utility from having a secret society or similar, having an outlet to assert the truth outside the watch of the authority demanding that you lie -- but you'd get n^2 utility from getting the official lies dethroned. But you'd lose a great deal of utility if you got caught not believing in the lie.

That's a difficult coordination problem, since you clearly can't dethrone the official idea yourself. Perhaps it is deserving of study.
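
One hedged way to write down the tradeoff sketched above (the symbols p and c are illustrative assumptions of mine, not from the comment) is

    \[
      \mathrm{EU}(\text{keep practicing ketman}) = n,
      \qquad
      \mathrm{EU}(\text{defy openly}) = p\,n^{2} - (1 - p)\,c
    \]

where p is the probability that open defiance actually dethrones the official lie and c is the cost of being caught not believing it. Defiance only pays when p times n squared minus (1 - p) times c exceeds n, and for a lone dissident p is close to zero, which is why this reads as a coordination problem rather than an individual choice.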

Replies from: Viliam_Bur, NancyLebovitz
comment by Viliam_Bur · 2014-05-10T19:11:04.004Z · LW(p) · GW(p)

I'm not sure if the pressure comes from lying per se -- it's not as if the practice is recent or uncommon -- or from having no place to go where you can escape the necessity to lie.

I believe it's the latter. On an emotional level, if I can't speak openly with a person, I have a feeling like they "don't belong to my tribe", they are a stranger. There is a difference between being sometimes with strangers, and being alone among strangers, all the time.

It is much easier to have clear rules about when to use my "public" face, and when to relax and be myself. Using my "public" face increases my internal pressure; I need a place to talk about it and relax. If I don't have that place, then I will lose attention in random moments, and expose my internal heresies. It is easier to keep control, if I have clear boundaries for when the hypocrisy begins and when it ends.

Having just one person to talk honestly with already helps a lot. (I am too tired to google now, but there is probably some article on LW about how the first voice of dissent is most important.) It is much easier for me to think if I can talk. Talking makes my thought processes much clearer. Not having a sane person to talk with is like not having a part of your brain, or for a more realistic analogy, like being drunk or exhausted all the time.

comment by NancyLebovitz · 2014-06-03T11:29:26.884Z · LW(p) · GW(p)

The recent history of getting homosexuality mainstreamed is an interesting example.

comment by brazil84 · 2014-05-08T09:07:40.035Z · LW(p) · GW(p)

Ultimately, I think beliefs are inputs for predictions

As Robin Hanson has pointed out, beliefs are also a way of showing something about oneself. Tribal membership, moral superiority, etc. A good Cimmerian believes in Crom, the grim gloomy unforgiving god.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong.

My impression is that most people never admit that their beliefs are contradictory, instead they either lash out at whoever is bringing the contradictions to the forefront of their mind or start ignoring him.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map.

Can you give three examples of improvements in your life since your epiphany?

Replies from: TheAncientGeek, BrienneYudkowsky
comment by TheAncientGeek · 2014-05-08T09:38:57.238Z · LW(p) · GW(p)

Caring about contradictions signals geekishness, which is generally undesirable.

Pointing out contradictions is generally seen as an attack, an attempt to lower status, rather than as something neutral or positive. Rationality and knowledge are high status as end states, for all that what you have to do to get there is seen as low status nerdishness.

The ultimate in high status is effortless omniscience, as displayed by James Bond, who always knows everything about everything from nuclear reactors to the international diamond trade without ever reading a book.

Replies from: Lumifer, brazil84
comment by Lumifer · 2014-05-08T14:57:52.039Z · LW(p) · GW(p)

signals geekishness, which is generally undesirable.

Um. Undesirable to whom and for what?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-08T15:23:07.016Z · LW(p) · GW(p)

To most people for signaling purposes.

Replies from: Lumifer
comment by Lumifer · 2014-05-08T15:34:01.104Z · LW(p) · GW(p)

I don't believe this to be true.

We might have a different concept of geekishness, though.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-08T16:12:57.282Z · LW(p) · GW(p)

Who gets laughed at...Bill Gates or Warren Buffett?

Replies from: gjm, blacktrance, None, Gunnar_Zarncke
comment by gjm · 2014-05-08T16:27:55.093Z · LW(p) · GW(p)

Neither, these days.

Replies from: None
comment by [deleted] · 2014-05-08T22:10:32.041Z · LW(p) · GW(p)

How many derogatory memes (in the internet sense, pictures with words on them) exist about Warren Buffett compared to Bill Gates?

You can't deny that one of the two is easier to laugh at. You might believe this to be morally wrong or undesirable for other reasons, but it seems to be obviously true.

Replies from: Lumifer, gjm, Eugine_Nier
comment by Lumifer · 2014-05-08T23:40:28.887Z · LW(p) · GW(p)

Warren Buffett compared to Bill Gates? You can't deny that one of the two is easier to laugh at.

Let's throw in another non-geek/geek pair: Justin Bieber and Mark Zuckerberg.

You can't deny that one of the two is easier to laugh at.

comment by gjm · 2014-05-08T23:21:04.850Z · LW(p) · GW(p)

I suppose I must just not frequent the right corners of the internet, because I can't remember the last time I saw a derogatory internet-meme about either of them.

And the things I can recall Gates getting laughed at for are mostly geeky inside-baseball. For instance: "640K should be enough for anybody" -- he got laughed at for saying that (even though, so far as anyone can tell, he didn't ever actually say it) but mostly by other geeks. I've seen him admired for being very smart, admired for being a shrewd businessman and building a hugely valuable company, excoriated for giving the world a lot of bad software, hated for shady business tactics, laughed at for things other geeks find funny -- but I really can't think of any occasion I've witnessed where he's been laughed at for being geeky. Perhaps I just have friends and colleagues who are too geeky, and all these years the Normal People have been pointing and laughing at Bill Gates for being a geek?

I'm sure there's been a lot more said and done (positive and negative) about Gates than about Buffett, because Gates was founder and CEO of Microsoft -- a company whose products just about everyone in the Western world uses daily and many people have strong feelings about, and that engaged in sufficiently, ah, colourful business practices to get it into hot water with more than one large national government -- and Buffett, well, wasn't. Can you, off the top of your head, think of three things Microsoft has done that you feel strongly about? OK, now what about Berkshire Hathaway?

So, I dunno, maybe Gates is easier to laugh at because he's geekier, but it seems to me there are other more obvious explanations for any difference in laughed-at-ness.

Replies from: gwern
comment by gwern · 2014-05-08T23:41:00.781Z · LW(p) · GW(p)

And the things I can recall Gates getting laughed at for are mostly geeky inside-baseball. For instance: "640K should be enough for anybody" -- he got laughed at for saying that (even though, so far as anyone can tell, he didn't ever actually say it) but mostly by other geeks. I've seen him admired for being very smart, admired for being a shrewd businessman and building a hugely valuable company, excoriated for giving the world a lot of bad software, hated for shady business tactics, laughed at for things other geeks find funny -- but I really can't think of any occasion I've witnessed where he's been laughed at for being geeky.

The Simpsons comes to mind as mocking Gates for being geeky, and I'd suggest that Gates gets mocked more than Buffett (I struggle to think of anyone mocking Buffett except Bitcoiners recently after he criticized it); that said, Gates gets mocked a lot less these days than he did in the '90s, and your inability to think of many examples is due to the disappearance of '90s popular media, magazines, Usenet posts, /. comments, etc., from consciousness.

Replies from: ESRogs, gjm
comment by ESRogs · 2014-05-10T09:20:11.255Z · LW(p) · GW(p)

The Simpsons comes to mind

I'm reminded of this Pinky and the Brain episode: https://www.youtube.com/watch?v=KyGY8AuS7rU.

Replies from: gwern
comment by gwern · 2014-05-10T21:14:57.672Z · LW(p) · GW(p)

Yep. Note the mockery of Gates's monotone voice, arrogance, aversion to personal contact, overuse of computers...

comment by gjm · 2014-05-09T00:47:38.984Z · LW(p) · GW(p)

a lot less these days than he did in the '90s [...] the disappearance of '90s popular media [...] etc., from consciousness.

Yes, very likely. (Recall that my original answer was: "Neither, these days." -- emphasis added.) I'm pretty sure I haven't seen the Simpsons episodes in which Gates is mocked, so of course I can't comment on them.

Again: if -- as is very likely the case -- Gates has been mocked much more than Buffett, it seems clear that there are plenty of explanations for this that have nothing to do with his being geekier. I don't see how Gates v Buffett can possibly be much use as an example of how geekiness is undesirable, given all the other factors in play there.

comment by Eugine_Nier · 2014-05-08T23:11:00.063Z · LW(p) · GW(p)

To be fair, I suspect a large number of the anti-Gates memes are by other geeks fighting the open/closed source holy war.

Replies from: gothgirl420666
comment by gothgirl420666 · 2014-05-09T21:29:36.244Z · LW(p) · GW(p)

Geeks have most likely absorbed the "geeks are lesser, should be laughed at" meme to a certain extent as well.

comment by blacktrance · 2014-05-08T23:56:47.652Z · LW(p) · GW(p)

Who's more prominent, Bill Gates or Warren Buffett? Yes, Bill Gates gets made fun of more, but he gets more attention in general.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-09T14:32:05.340Z · LW(p) · GW(p)

If he gets mocked more, he would get more attention.

comment by [deleted] · 2014-05-30T22:15:52.181Z · LW(p) · GW(p)

If Bill Gates is mocked more than Warren Buffett, there are other, arguably more plausible reasons for this. Anecdotally, the most frequent cause I've encountered for criticizing or mocking Bill Gates is a dislike of his company, its products, practices, and prevalence.

comment by Gunnar_Zarncke · 2014-05-09T05:52:22.138Z · LW(p) · GW(p)

Making fun of a high status person is a compensating action by low status people. Which person is made fun of depends more on the availability of trivia about that person than on their accomplishments (and geekiness surely is one such piece of trivia). Also at the high end of status the variance in any dimension is probably high.

Replies from: TheAncientGeek, wedrifid
comment by TheAncientGeek · 2014-05-09T14:36:12.152Z · LW(p) · GW(p)

Most trivia aren't funny. Simultaneous high and low status is funny. Dumb sports stars are another example.

comment by wedrifid · 2014-05-09T17:48:38.264Z · LW(p) · GW(p)

Making fun of a high status person is a compensating action by low status people.

(And even if it isn't you will tend to be well served by claiming that is what the behaviors mean. Because that is the side with the power.)

Replies from: army1987
comment by A1987dM (army1987) · 2014-05-11T08:39:35.818Z · LW(p) · GW(p)

Because that is the side with the power.

Which one do you mean, social power or structural power?

Replies from: Eugine_Nier, wedrifid
comment by Eugine_Nier · 2014-05-13T00:12:42.021Z · LW(p) · GW(p)

I'm not sure I agree with Yvain's post.

One issue is with the abortion example:

Moldbug later uses the example of pro-lifers protesting abortion as an example of an unsympathetic and genuinely powerless cause. Yet as far as I can tell abortion protesters and Exxon Mobile protesters are treated more or less the same.

Well, there are laws limiting the ability of pro-life activists to protest outside abortion clinics. There are no analogous laws for Exxon Mobil.

His claim that social power can't overcome structural power is dubious. Tell that to Mozilla co-founder Brendan Eich or GitHub co-founder Tom Preston-Werner. To be fair to Yvain, both of these incidents happened after the article was written, and it appears he has at least moved in the direction of updating on them.

Also Yvain says:

Social power is much easier to notice than structural power, especially if you're not the one on the wrong end of the structural power.

This is pure BS. Structural power is very easy to notice: just look at the org chart. It is social power, as Yvain defines it, that is much harder to notice.

comment by wedrifid · 2014-05-12T15:54:34.889Z · LW(p) · GW(p)

Which one do you mean, social power or structural power?

I mean power: the ability to significantly influence decision-relevant outcomes without excessive cost to self. The statement doesn't care where the power is derived from, and it would sacrifice meaning to make either substitution.

comment by brazil84 · 2014-05-08T12:15:08.328Z · LW(p) · GW(p)

The ultimate in high status is effortless omniscience, as displayed by James Bond, who always knows everything about everything from nuclear reactors to the international diamond trade without ever reading a book.

If James Bond wandered into a discussion of jewelry and started pontificating about the international diamond trade, I wonder if it would be seen as high status or low status.

comment by LoganStrohl (BrienneYudkowsky) · 2014-05-09T02:40:15.440Z · LW(p) · GW(p)

Can you give three examples of improvements in your life since your epiphany?

Sure!

1) My conversations with friends are more efficiently illuminating. 2) I learn more quickly from mistakes. 3) I prevent more mistakes before they get the chance to happen.

If I hadn't given those examples, could you have predicted positive changes resulting from having generally more accurate beliefs? It really doesn't seem that surprising to me that someone's life would improve in a zillion different ways if they weren't wrong so much.

Replies from: brazil84
comment by brazil84 · 2014-05-09T07:43:51.810Z · LW(p) · GW(p)

1) My conversations with friends are more efficiently illuminating. 2) I learn more quickly from mistakes. 3) I prevent more mistakes before they get the chance to happen.

Well can you give specific examples of mistakes you learned more quickly from and/or prevented? And can you give an example of some illumination you got more efficiently out of a conversation?

If I hadn't given those examples, could you have predicted positive changes resulting from having generally more accurate beliefs?

Not necessarily -- people's maps of reality tend to be pretty good when important personal interests are at stake. Perhaps a good Cimmerian believes, in theory, that if he dies then Crom will instantly take him to eternal paradise. But somehow that doesn't stop our good Cimmerian from expending a lot of effort trying to stay alive, possibly including breaking some of Crom's rules.

Also, it costs mental energy to make your beliefs more accurate and there is no guarantee that it will be worth the trouble to do so.

Last, as mentioned above, beliefs serve other purposes besides being inputs for predictions.

comment by Shmi (shminux) · 2014-05-08T15:06:30.353Z · LW(p) · GW(p)

Against Against Against Doublethink

What, only 3 levels-deep meta? This is like approximating e^x with only 1+x+x^2/2+x^3/6. Back to the drawing board.

Replies from: Benito
comment by Ben Pace (Benito) · 2014-05-08T18:29:45.903Z · LW(p) · GW(p)

According to this analogy, we previously thought e to be 2. Now it's 2.6 recurring. We're making progress, of a sort.

Replies from: Kawoomba, itaibn0
comment by Kawoomba · 2014-05-08T22:01:04.090Z · LW(p) · GW(p)

Reminds me of The Relativity of Wrong.

comment by itaibn0 · 2014-05-10T22:58:26.666Z · LW(p) · GW(p)

Not if what you're trying to calculate is e^(-5).
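
(A minimal numeric sketch of itaibn0's point, using the same degree-3 truncation from the joke above: for negative arguments the large alternating terms have to cancel almost exactly, so a short truncation is not just imprecise but wrong even in sign. The code below is illustrative only and not from the original thread; the helper name is made up.)

```python
import math

def taylor_exp(x, terms=4):
    """Partial sum of the Taylor series for e^x: 1 + x + x^2/2! + ... (first `terms` terms)."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

print(taylor_exp(1))   # 2.666..., the "2.6 recurring" approximation of e mentioned above
print(taylor_exp(-5))  # -12.33..., not even the right sign
print(math.exp(-5))    # 0.00673..., the true value
```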

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-05-10T10:37:36.337Z · LW(p) · GW(p)

Thoughts on this:

Obviously it's possible to want multiple things and believe multiple things. My mind, at least, is best approximated as a society of sub-agents rather than as a single unified self. I think "System 1 vs System 2" is already too much of an approximation–my System 1 definitely isn't unified, and even my System 2 doesn't agree on a single set of beliefs.

Can you simultaneously want sex and not want it?

Yes, and even large amounts of luminosity haven't made this divide go away. I used to not want sex because it was unpleasant, but want to want it because it was a way to profess love and, damn it, I wanted to do that. The not-wanting-sex happened on a more basic, less endorsed level, leading to weird mental resistance and frustration whenever I overrode it and had sex anyway because it was a thing I ought to do. I now do almost the opposite–I listen to my System 1 instincts and don't have sex, but I'm not totally happy with this state of affairs. There's good evidence that humans can't change their sexual orientations, so I've accepted it for now, but if that status quo changed, I would have some rethinking to do, and might press a button to make it different. These are different 'file formats' of belief–System 2 verbal beliefs don't automatically propagate into System 1 visceral urges–but they're nevertheless contradictory, and years of thinking about and paying a lot of attention to the issue hasn't allowed me to resolve that.

Another example: I want kids. By that, I mean that seeing a baby makes me feel all warm and fuzzy inside; that I daydream about it; that the first thought that comes when I see or learn many things is "I'm going to teach this to my kids!" I'm also fairly sure that having kids now is not the correct thing to do. It may not be the correct thing to do for a few years. In this case, System 2 rules win out, while System 1 whispers quietly in the background: why don't I have a baby already, and hey, you could put up with some unpleasantness and have a baby in nine months. I'm sure as hell not going to change my System 1, but there is or is not an instrumentally rational thing to do, and what my System 1 wants is only a small part of the calculation. So, if all the other variables push me in the other direction, I might end up not having kids for a long time–and having a mental contradiction for the same length of time.

Is this inevitable? Maybe, maybe not. But it certainly seems to be the default, even for people who spend a lot of time thinking about their beliefs.

comment by Kawoomba · 2014-05-08T17:36:16.988Z · LW(p) · GW(p)

Well, I for one am confused much of the time, and whenever I encounter someone who ostensibly isn't, I get nervous. Believing falsehoods isn't just the domain of dark artisans, it comes courtesy of having a brain. My belief that "most of my beliefs must have large error bounds" probably has among my lowest error bounds; I'm surest about being unsure.

I do wonder if convincing oneself of having given up deluding oneself isn't the greatest dark side achievement of all -- after all, how would you know it's not? Maybe you got tricked by your System 0.

But I'm being contrarian. Good post overall. I guess the metric I'd prefer in terms of belief improvement is "number of times I've noticed my confusion and bent myself to accept what I perceive, instead of bending my perceptions to myself". More of an engineering approach, still allowing for a few holy cows that don't get slaughtered (without overly compromising your overall strength as a rationalist).

comment by ChristianKl · 2014-05-08T13:26:09.149Z · LW(p) · GW(p)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion.

Noticing confusion is about noticing your feelings and reacting towards them. Acknowledging your feelings and thinking about their causes is useful whether the feeling is confusion, anger or fear.

Replies from: drethelin
comment by drethelin · 2014-06-03T20:24:41.526Z · LW(p) · GW(p)

Also joy! And happiness! Noticing what kinds of things leave me joyful has been helpful for me.

comment by CronoDAS · 2014-05-08T08:42:17.013Z · LW(p) · GW(p)

It's not that hard to accidentally believe a contradiction, since we're not logically omniscient and "consistency checking" is a computationally intractable problem except in simple cases. Proving that an arbitrary sentence of propositional logic isn't a contradiction is an NP-complete problem, and human beliefs are more complicated than statements in propositional logic.
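
(A minimal sketch of why naive consistency checking blows up, under the simplifying assumption that beliefs can be encoded as propositional formulas over n atoms: a brute-force check has to try up to 2^n truth assignments, doubling with every extra atom. The function and toy example below are illustrative only, not from the original comment.)

```python
from itertools import product

def consistent(beliefs, atoms):
    """Brute-force satisfiability check.

    `beliefs` are predicates over a truth assignment (a dict of atom -> bool).
    Tries all 2**len(atoms) assignments, so the runtime doubles per extra atom.
    """
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(belief(assignment) for belief in beliefs):
            return True  # some possible world satisfies every belief
    return False  # every assignment violates some belief: the set is contradictory

# Toy example: "it's raining", "rain implies wet streets", "the streets aren't wet".
beliefs = [
    lambda a: a["rain"],
    lambda a: (not a["rain"]) or a["wet"],
    lambda a: not a["wet"],
]
print(consistent(beliefs, ["rain", "wet"]))  # False -- these three can't all hold
```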

Replies from: brazil84
comment by brazil84 · 2014-05-09T23:45:19.201Z · LW(p) · GW(p)

It's not that hard to accidentally believe a contradiction, since we're not logically omniscient and "consistency checking" is a computationally intractable problem except in simple cases.

I agree with you in theory, but in practice there are plenty of contradictions which are pretty darned obvious. That is, the limiting factor in human rationality seems to be the human tendency toward self-deception and hypocrisy, as opposed to the computational difficulty of finding contradictions.

comment by JoshuaFox · 2014-05-10T17:47:05.637Z · LW(p) · GW(p)

cache out -> cash out.

Not that I want to nitpick spelling, but "cached thoughts" and "cashing out your beliefs" are both used for different things.

comment by [deleted] · 2014-05-13T05:05:23.240Z · LW(p) · GW(p)

So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working.

Nitpick: "ridiculous" is relative to your goals here. A slightly better wording might be "fix the root cause of the ticking".

comment by moridinamael · 2014-05-09T14:23:08.074Z · LW(p) · GW(p)

So, it seems like there's been an upswing in interest regarding meditation around here recently. I mention this because in this article Brienne advocates for several mental habits, such as catching yourself having millisecond-scale mental events and arresting or reversing them, or dispassionately watching yourself being uncomfortable and then acting on that discomfort in an effectively dissociated fashion. I have done exactly the same thing where I've suggested in a post that the solution to somebody's problem was to simply execute a highly specific mental contortion, with the "how" of it left as an exercise to the reader. Plug for MarkL's excellent meditation blog.

If I were to be honored with a seat on the Less Wrong High Council, I would probably lobby for some kind of short daily meditative practice to be incorporated into our dogma. Aside from various peer-reviewed health benefits, I can anecdotally report that mindfulness meditation trains exactly the type of command-and-control abilities Brienne is describing.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-06-03T11:48:31.168Z · LW(p) · GW(p)

I've believed that some thoughts are hard to notice because they happen quickly, but now I'm wondering whether it's not so much that the thoughts are fast as that blanking the thoughts out of consciousness is what happens quickly.

Your "mental events" would include at least both the thoughts and the blanking out process. Have you noticed a blanking out process, and if so, what did you notice about it?

comment by drethelin · 2014-06-03T20:20:00.216Z · LW(p) · GW(p)

This reminds me of the studies that found that "releasing stress" via punching pillows and screaming only trained you to respond to stressful situations in violent ways, rather than actually having beneficial effects. Training is a question of learning to do a conscious activity unconsciously, and training yourself in dark side methods makes your reliance on and use of falsities unconscious.

comment by Gunnar_Zarncke · 2014-05-29T17:00:00.146Z · LW(p) · GW(p)

Finally got around to reading this completely. Great exposition.

[T]oward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques [...] that I double-thought myself into believing accuracy was not so great. But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive.

It reminds me of Wittgenstein's ladder: you seem to have stepped up the ladder through practical rationality and the dark arts, and you no longer need them explicitly. You have unconscious competence.

I wonder if it is just coincidence that there are also Four Stages of Competence.

comment by TobyBartels · 2014-05-12T05:53:16.541Z · LW(p) · GW(p)

Would you advocate never using Dark Side techniques, or are these techniques reasonable in some situations, even though, once you become a real master at rationality, they have to be left behind?

(Before I studied the Dark Arts, truth was truth and lies were lies. While I studied the Dark Arts, truth was not truth and lies were not lies. After I studied the Dark Arts, truth was truth and lies were lies.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-12T08:07:30.264Z · LW(p) · GW(p)

Just a sidenote: studying is not the same as using. It is worth studying Dark Arts techniques so you recognize when other people are trying to use them against you.

Replies from: TobyBartels
comment by TobyBartels · 2014-05-14T22:41:37.690Z · LW(p) · GW(p)

True indeed. Although in this case, even the using would be using them on yourself.

comment by A1987dM (army1987) · 2014-05-08T15:44:03.806Z · LW(p) · GW(p)

You'd better remove Scott's real last name from your post before search engines index it, because he doesn't want it to be easy to find his blog given his full name.

comment by Fraveilve · 2017-01-24T05:14:55.969Z · LW(p) · GW(p)

In many contexts the law of the excluded middle can certainly be brought to bear, but I would argue that reasoning about doublethink about driving ability is not one of them. A driver who is realistically pessimistic about their driving will drive less confidently, more hesitantly, with more constant vigilance, and will be a hazard to other drivers' ability to model what is going on. The same is true of irrationally over-optimistic drivers, but modelling a driver who seems to think they know what they are doing is more straightforward. So mutual modelling among drivers is best supported when almost all drivers drive with somewhat more confidence than is warranted, with the proviso that drivers in general not only learn to drive better with experience, but also become more accurate about how well they drive -- and still drive with more confidence than is fully warranted, even taking that into account. A truly good experienced driver will be modelling how experienced the drivers around them are AND how far those drivers are driving within their ability, without having to think about it. That is a lot of shades-of-gray estimating going on in real time, and the models will not be improved by a driver treating any continuum estimation, including that of their own relative ability, as a dichotomy. Nor will too much detail improve them. The truth is, every driver becomes aware at times that they hadn't known what they hadn't known about driving safely in particular situations, but keeping that in mind constantly will not make them a better driver.

And when the consequences of modelling-mismatch can easily be multi-fatal and are more usually instructive, keeping modelling updates tractable has great value, and simplifying one's own understanding of one's own driving ability supports that.

So far this may sound like another way of talking about what Against Against Doublethink is saying, but I think there is more going on.

"Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions."

It looks to me like the second sentence doesn't follow from the first in the way stated. The phrase "you'll fail as often as you succeed" sounds to me like a crapshoot centered around 50:50, but often minimal checking is enough to get a sense of whether naive estimations in a domain where Doublethink is known to be active are actually much less accurate than that, or are nonetheless pretty accurate -- maybe more than good enough for real-time use. Quick: that aggressive driver weaving fast up behind you -- is that driver going to wait for a principled model update?

So in practice even someone doing their best to eschew Doublethink is going to benefit from some meta-estimating of its effects, if only in other actors present. To eschew that modelling too on principle invites much the same Doublethink about modelling in through the backdoor.

There's no kicking Doublethink to the curb, there is only managing it.

comment by tristanhaze · 2014-10-01T12:09:41.328Z · LW(p) · GW(p)

I just want to say that the title of this post is fantastic, and in a deep sort of mathy way, beautiful. It's probably usually not possible, but I love it when an appropriate title - especially a nice not-too-long one - manages to contain, by itself, so much intellectual interest. Even just seeing that title listed somewhere could plant an important seed in someone's mind.

comment by christopherj · 2014-05-16T05:33:27.732Z · LW(p) · GW(p)

It is pretty much a necessity that humans will believe contradictory things, if only because consistency-checking each new belief against each of your current beliefs is impossibly difficult. Cognitive dissonance won't occur if the contradiction is so obscure that you haven't noticed it, or perhaps wouldn't even understand exactly how it contradicts a set of 136 other beliefs even if it were explained to you. Even if you could check for contradictions, your values change drastically from one hour to the next (how much you value food, water, company, solitude, leisure, etc.), and that will change all your beliefs that start with "I want ...". Most likely you actually have different bits of brain with different values vying for dominance.

Moreover, many times a belief is part of a group membership (e.g. "I support [cause]") or simply feels good (e.g. "I am a good person"). People will not appreciate it if you point out contradictions in these things, possibly because they are instrumental and not epistemic beliefs. There is no doubt that professing contradictory beliefs can be highly beneficial (e.g. "Republicans are fiscally conservative, want small government, cut taxes, more money for the military, and enforcing morality" -- if you reject any of that, you're not a viable candidate).

comment by TheAncientGeek · 2014-05-08T09:15:23.199Z · LW(p) · GW(p)

Belief is for many things, including signaling.

Instrumental rationality and epistemic rationality aren't the same. Epistemic rationality seeks to maximise knowledge, truth and consistency. Instrumental rationality seeks to maximise efficiency, gain and personal utility. One area where they come apart is signalling, the implicit and explicit ways we tell others what kind of person we are. The instrumentally rational way to signal is to maximise your utility by sending out agreeable signals to whichever individual or group you happen to need something from. This Vicar-of-Bray style behaviour will lead to your making highly inconsistent statements in the limit. If you want to signal sincerity, you will need to believe them too. So you will end up with inconsistent beliefs. So IR + signalling is inconsistent with ER.

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2014-05-08T23:15:00.644Z · LW(p) · GW(p)

Of course the more times you switch sides, the harder it becomes for anyone to take your sincerity seriously.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-10T15:08:45.373Z · LW(p) · GW(p)

Assuming you're found out.

If you are scrutinised, in different situations, by someone who cares about consistency, the benefit of inconsistent signalling vanishes. And no one is scrutinised more than a politician in a healthy democracy. People read reports of politicians contradicting themselves and being inconsistent, and infer that politicians are unusually hypocritical.

But absence of evidence is not evidence of absence. The ordinary person's hypocrisy is not publicised because the ordinary person does not have reporters following them round. The ordinary person typically moves in a number of fairly disjoint circles -- the workplace, family, same-sex friends and so on -- signalling different loyalties to each. The existence of Chinese walls is even humorously acknowledged: "what happens in X stays in X".

Inconsistency reaches a peak when communicating with completely unconnected individuals and groups. My go-to example is a telesales operative who would ring various people during the course of a day and agree with every word they said. Her customers were of course unknown to each other and in no position to compare notes.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-05-11T03:40:00.763Z · LW(p) · GW(p)

Well, in the example you cited, the Vicar of Bray, one is dealing with the kind of religious fanatics who are likely to have low tolerance for hypocrisy and may very well do some investigation into one's history.

comment by [deleted] · 2014-05-09T06:33:37.156Z · LW(p) · GW(p)

This Vicar-of-Bray style behaviour will lead to your making highly inconsistent statements in the limit.

Will it?

Consider the regime of the official idea. Under certain regime structures, its direction of development is as obvious as its current state, and its current state is obvious. That is, there's one group that you consistently need something from, and the only inconsistencies arise from its idea-drift over time -- which can be predicted with a good deal of accuracy.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-09T12:38:00.224Z · LW(p) · GW(p)

Where is this regime...?

Replies from: None
comment by [deleted] · 2014-05-14T18:15:14.963Z · LW(p) · GW(p)

It's a type, a pattern. I don't mean to single out any particular regime. I suspect there are instances of this type, but I'll leave that as an exercise for the reader.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-14T18:18:02.298Z · LW(p) · GW(p)

I suspect that there aren't instances, hence my question.