Skill: The Map is Not the Territory

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T09:59:25.358Z · LW · GW · Legacy · 179 comments

Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:

"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly?  How exactly does it help, on what sort of problem?

...

...

...

Skill 1: The conceivability of being wrong.

In the story, Gilbert Gosseyn is most liable to be reminded of this proverb when some belief is uncertain; "Your belief in that does not make it so." It might sound basic, but this is where some of the earliest rationalist training starts - making the jump from living in a world where the sky just is blue, the grass just is green, and people from the Other Political Party just are possessed by demonic spirits of pure evil, to a world where it's possible that reality is going to be different from these beliefs and come back and surprise you. You might assign low probability to that in the grass-is-green case, but in a world where there's a territory separate from the map it is at least conceivable that reality turns out to disagree with you. There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble, first in a world like X, then in a world like not-X, in cases where they are tempted to entirely neglect the possibility that they might be wrong. "He hates me!" and other beliefs about other people's motives seem to be a domain in which "I believe that he hates me" or "I hypothesize that he hates me" might work a lot better.

Probabilistic reasoning is also a remedy for similar reasons: Implicit in a 75% probability of X is a 25% probability of not-X, so you're hopefully automatically considering more than one world. Assigning a probability also inherently reminds you that you're occupying an epistemic state, since only beliefs can be probabilistic, while reality itself is either one way or another.
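As a toy illustration (the numbers and payoffs here are my own, not from the post), assigning a probability mechanically commits you to the complementary world as well - any decision computed from the belief has to weigh both X and not-X:

```python
# A 75% belief in X implies a 25% belief in not-X,
# so a decision computed from it automatically spans both worlds.
p_x = 0.75
p_not_x = 1.0 - p_x  # the probability axioms force this complement: 0.25

# Hypothetical payoffs for acting as if X is true.
payoff_if_x = 10.0       # gain when X turns out true
payoff_if_not_x = -20.0  # loss when X turns out false

expected_value = p_x * payoff_if_x + p_not_x * payoff_if_not_x
print(p_not_x)         # 0.25
print(expected_value)  # 2.5 -- the 25% world dragged the answer down
```

The point is that the not-X world shows up in the arithmetic whether or not you were inclined to visualize it.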

Skill 2: Perspective-taking on beliefs.

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you. They aren't disagreeing with you because they're obstinate, they're disagreeing because the world feels different to them - even if the two of you are in fact embedded in the same reality.

This is one of the secret writing rules behind Harry Potter and the Methods of Rationality. When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most books are told from only one character's viewpoint; when a book does shift viewpoints, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil. Since I'm not trying to show off postmodernism, everyone is still recognizably living in the same underlying reality, and the justifications of the Death Eaters only sound reasonable to Draco, rather than having been optimized to persuade the reader. It's not like the characters literally have their own universes, nor is morality handed out in equal portions to all parties regardless of what they do. But different elements of reality have different meanings and different importances to different characters.

Joshua Greene has observed - I think this is in his Terrible, Horrible, No Good, Very Bad paper - that most political discourse rarely gets beyond the point of lecturing naughty children who are just refusing to acknowledge the evident truth. As a special case, one may also appreciate internally that being wrong feels just like being right, unless you can actually perform some sort of experimental check.

Skill 3: You are less bamboozleable by anti-epistemology or motivated neutrality which explicitly claims that there's no truth.

This is a negative skill - avoiding one more wrong way to do it - and mostly about quoted arguments rather than positive reasoning you'd want to conduct yourself. Hence the sort of thing we want to put less emphasis on in training. Nonetheless, it's easier not to fall for somebody's line about the absence of objective truth, if you've previously spent a bit of time visualizing Sally and Anne with different beliefs, and separately, a marble for those beliefs to be compared-to. Sally and Anne have different beliefs, but there's only one way-things-are, the actual state of the marble, to which the beliefs can be compared; so no, they don't have 'different truths'.  A real belief (as opposed to a belief-in-belief) will feel true, yes, so the two have different feelings-of-truth, but the feeling-of-truth is not the territory.

To rehearse this, I suppose, you'd try to notice this kind of anti-epistemology when you ran across it, and maybe respond internally by actually visualizing two figures with thought bubbles and their single environment. Though I don't think most people who understood the core insight would require any further persuasion or rehearsal to avoid contamination by the fallacy.

Skill 4: World-first reasoning about decisions, a.k.a. the Tarski Method, a.k.a. the Litany of Tarski.

Suppose you're considering whether to wash your white athletic socks with a dark load of laundry, and you're worried the colors might bleed into the socks, but on the other hand you really don't want to have to do another load just for the white socks. You might find your brain selectively rationalizing reasons why it's not all that likely for the colors to bleed - there's no really new dark clothes in there, say - trying to persuade itself that the socks won't be ruined. At which point it may help to say:

"If my socks will stain, I want to believe my socks will stain;
If my socks won't stain, I don't want to believe my socks will stain;
Let me not become attached to beliefs I may not want."

To stop your brain trying to persuade itself, visualize that you are either already in the world where your socks will end up discolored, or already in the world where your socks will be fine, and in either case it is better for you to believe you're in the world you're actually in. Related mantras include "That which can be destroyed by the truth should be" and "Reality is that which, when we stop believing in it, doesn't go away". Appreciating that belief is not reality can help us to appreciate the primacy of reality, and either stop arguing with it and accept it, or actually become curious about it.

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.  For example, let's say that you've been driving for a while, haven't reached your hotel, and are starting to wonder if you took a wrong turn... in which case you'd have to go back and drive another 40 miles in the opposite direction, which is an unpleasant thing to think about, so your brain tries to persuade itself that it's not lost.  Anna and I use the form of the skill where we visualize the world where we are lost and keep driving.

Note that in principle, this is only one quadrant of a 2 x 2 matrix:

|   | In reality, you're heading in the right direction | In reality, you're totally lost |
| --- | --- | --- |
| You believe you're heading in the right direction | No need to change anything - just keep doing what you're doing, and you'll get to the conference hotel. | Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea. |
| You believe you're lost | Alas! You spend 5 whole minutes of your life pulling over and asking for directions you didn't need. | After spending 5 minutes getting directions, you've got to turn around and drive 40 minutes the other way. |

Michael "Valentine" Smith says that he practiced this skill by actually visualizing all four quadrants in turn; with a bit of practice he could do it very quickly, and he thinks visualizing all four quadrants helped.
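The four quadrants can be sketched as a tiny payoff table in code. The costs below (in minutes lost) are illustrative assumptions chosen to mirror the driving example, not numbers from the post:

```python
# World-first reasoning: enumerate (reality, belief) pairs and their costs.
# Costs are hypothetical, loosely matching the lost-driver scenario.
costs = {
    ("right_direction", "believe_right"): 0,    # keep driving, arrive on time
    ("right_direction", "believe_lost"):  5,    # 5 minutes of directions you didn't need
    ("lost",            "believe_right"): 999,  # drive on toward the sea
    ("lost",            "believe_lost"):  45,   # 5 minutes of directions + 40 minutes backtracking
}

def expected_cost(belief, p_lost):
    """Expected cost of holding `belief`, given your probability of being lost."""
    return (1 - p_lost) * costs[("right_direction", belief)] \
           + p_lost * costs[("lost", belief)]

# Even a modest chance of being lost makes checking worthwhile:
p = 0.2
print(expected_cost("believe_right", p))  # 199.8
print(expected_cost("believe_lost", p))   # 13.0
```

Visualizing all four quadrants amounts to filling in this whole table before deciding, rather than only the cell your brain is rationalizing toward.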

(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Rationality: Appreciating Cognitive Algorithms"

Previous post: "The Useful Idea of Truth"

179 comments

Comments sorted by top scores.

comment by thomblake · 2012-10-04T13:59:09.431Z · LW(p) · GW(p)

"grass is green" and "sky is blue" are always funny examples to me, since whenever I hear them I go check, and they're usually not true. Right now from my window, I can see brown grass and a white/gray sky.

So they're especially good examples, as people will actually use them as paradigms of indisputably true empirical propositions, and even those seem almost always to be a mismatch between the map and the territory.

Replies from: Error, Chriswaterguy
comment by Error · 2013-03-22T11:34:23.903Z · LW(p) · GW(p)

I wish I could upvote this twice, just for pointing out an obvious error that I've never previously twigged on. I shall try to keep it close to the front of memory the next time I feel really certain about something.

comment by Chriswaterguy · 2015-12-29T11:03:56.849Z · LW(p) · GW(p)

As an experiment, a couple raised their child without telling them what colour the sky was. When they eventually asked, the child... thought about it. Eventually... "white". (I'd assumed it was a clear sky. Just realised it's a pointless story if it was cloudy.)

Why Isn't the Sky Blue? - starts with colours in Homer.

comment by Morendil · 2012-10-04T09:04:19.088Z · LW(p) · GW(p)

Implicit in a 75% probability of X is a 25% probability of not-X

This may strike everyone as obvious...

My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".

Currently this number is 242,000, the trend in the past few months has been an increase of 1000 to 2000 a day, and the UNHCR have recently provided estimates that this number will eventually reach 700,000. This was clear as early as August. The kicker is that the 242K number is only the count of people who are fully processed by the UNHCR administration and officially in their database; there are tens of thousands more in the camp who only have "appointments to be registered".

It's hard for me to understand why people are not updating to, maybe not 100%, but at least 99%; these are the only answers worth considering. To state your probability as 85% or 91% (as some have quite recently) is to say, "There is a one in ten chance that the Syrian conflict will suddenly stop and all these people will go home, all in the next few days before the count goes over."

This is kind of like saying "There is a one in ten chance Santa Claus will be the one distributing the presents this year."

It's really, really weird that in a contest aimed at people who understand the notion of probability and calibration, people presumed to be would-be rationalists, you'd get this kind of "Clack".

I can only speculate as to what's going on there, but I think it must be along the following lines: queried for a probability, people are translating something like "Sure, it's gonna happen" into a biggish number, and reporting that. They are totally failing to flip the question around and visualize what would have to happen to make it true. (Perhaps, too, people have been so strongly cautioned by Tetlock's writing against being overconfident that they reflexively shy away from the extreme numbers.)

My experience there casts some doubt on the statement "Probabilistic reasoning is also a remedy (...) so you're hopefully automatically considering more than one world."

At the very least, we must make a distinction between "express your beliefs in numerical terms and label these numbers 'probabilities'" on the one hand, and "actually organize your thinking so as to respect the axioms of probability" on the other. Just because you use "75%" as a shorthand for "I'm pretty sure" doesn't mean you are thinking probabilistically; you must train the skill of seeing that for some events "25%" also counts as "I'm pretty sure".
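Morendil's distinction can be made concrete with a crude calibration check. This is a sketch of my own, under the assumption that you've logged predictions as (stated probability, outcome) pairs; the log below is made up:

```python
# Crude calibration check: group predictions by stated probability and
# compare the stated number against the observed frequency of being right.
from collections import defaultdict

# Hypothetical prediction log: (stated probability, did it happen?)
log = [(0.75, True), (0.75, True), (0.75, False), (0.75, True),
       (0.9, True), (0.9, True), (0.9, True), (0.9, False)]

buckets = defaultdict(list)
for stated, outcome in log:
    buckets[stated].append(outcome)

for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.2f} -> observed {observed:.2f}")
# stated 0.75 -> observed 0.75   (calibrated)
# stated 0.90 -> observed 0.75   (overconfident at 90%)
```

If your "75%" just means "I'm pretty sure", the observed frequencies will drift away from the stated numbers; respecting the axioms means the two should match in the long run.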

Replies from: bentarm, None
comment by bentarm · 2012-10-11T22:58:09.888Z · LW(p) · GW(p)

My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".

I am a registered participant in one of the Good Judgement Project teams. I have literally no idea what my estimates of the probabilities are for quite a few of the events for which I have 'current' predictions. Depending on what you mean by 'some people', you might just be picking up on the fact that some people just don't care as much about the accuracy of their predictions on GJP as you do.

Replies from: Morendil
comment by Morendil · 2012-10-11T23:27:18.384Z · LW(p) · GW(p)

some people just don't care as much about the accuracy of their predictions on GJP

Agreed. Insofar as GJP is a contest, and the objective is to win, my remarks should be read with the implied proviso "assuming you care about winning". In the prelude to the post where I discuss my GJP participation in more detail I used an analogy with playing Poker. I acknowledge that some people play Poker for the thrill of the game, and don't actually mind losing their money - and there are variable levels of motivation all the way up to dedicated players.

comment by [deleted] · 2012-10-05T16:25:56.448Z · LW(p) · GW(p)

I think you are entirely right, that people don't visualize.

Replies from: Omegaile
comment by Omegaile · 2012-10-07T06:48:08.171Z · LW(p) · GW(p)

I think you are 75% right.

Replies from: None
comment by [deleted] · 2012-10-08T11:45:35.815Z · LW(p) · GW(p)

Let's do 1000 trials and see if it converges, verify that p<0.05, write a paper and publish.

comment by daenerys · 2012-10-04T02:27:52.103Z · LW(p) · GW(p)

I've been enjoying the new set of Sequences. I wasn't around when the earlier Sequences were being written; it's like the difference between reading a series of books all in one go, versus being part of the culture, reading them one at a time, and engaging in discussion in between. So thanks to Eliezer for posting them!

I really liked how there was an ending koan in the last post. It prompted discussion. I tried to think of a good prompt to post for this one, but couldn't. Anyone have some good ideas?

Also, Skill #2 made me think of this optical illusion

Replies from: johnlawrenceaspden, daenerys, Maelin, DaFranker
comment by johnlawrenceaspden · 2012-10-08T13:41:25.412Z · LW(p) · GW(p)

I was planning to paint my boat today. There's already a coat of paint on it, drying. If I overpaint today, that's optimal. If I wait till tomorrow, then I'll have to sand it down first.

It looks like it might rain, but the forecast is good. I don't know what effect rain will have on newly applied paint, or indeed on the current partly dried surface.

Do I spend the afternoon painting the boat or carry on sitting in a coffee shop reading Less Wrong?

Replies from: Richard_Kennaway, CCC
comment by Richard_Kennaway · 2012-10-08T16:32:39.865Z · LW(p) · GW(p)

LessWrong will still be there tomorrow. The optimal opportunity to paint the boat won't be.

comment by CCC · 2012-10-08T13:44:38.454Z · LW(p) · GW(p)

Is it possible to protect the boat from rain in some manner, such as leaving it under a roof?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-10-08T16:48:13.529Z · LW(p) · GW(p)

Impractical, as it happens. I eventually solved the problem by going home, changing into painting clothes, cleaning brushes, arranging tools and stirring paint. At that point it started raining heavily. So I undid all that in the rain, changed back into dry clothes, went back to the coffee shop and am now reading Less Wrong again. I think I just failed rationality for ever.

Replies from: CCC
comment by CCC · 2012-10-09T12:27:43.157Z · LW(p) · GW(p)

I don't think it's possible to fail rationality "for ever", as long as you are in a state where you can make observations, record memories, formulate goals, plan and take actions. Though you do seem to have been a bit unfortunate in the timing of the precipitation.

Replies from: arundelo, wedrifid
comment by arundelo · 2012-10-09T14:00:23.782Z · LW(p) · GW(p)

You may already know this, but the phrase "fail x forever" is a thing.

comment by wedrifid · 2012-10-09T12:43:29.391Z · LW(p) · GW(p)

I don't think it's possible to fail rationality "for ever"

Merely humanly impossible. If you are a more pure agent just assign probability "1" to enough things and you'll be set.

Replies from: CCC
comment by CCC · 2012-10-10T13:31:34.297Z · LW(p) · GW(p)

Hmmm. It seems that I should add "as long as you are able to reassign all priors of 1 to priors of 0.999999999, and all priors of 0 to priors of 0.000000001" to my list of exceptions. (It won't fix the agent immediately, but it will place the agent in a situation of being able to fix itself, given sufficient observations and updates).

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-10-10T22:57:45.332Z · LW(p) · GW(p)

That's not the only problem. An agent that assigns equal probability to all possible experiences will never update.
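Both failure modes in this thread - a prior of exactly 1, and likelihoods that are identical across hypotheses - fall out of a single Bayes update. This is a sketch of my own with made-up numbers, not anything from the comments:

```python
def bayes_update(prior, likelihoods):
    """Posterior over hypotheses, given each hypothesis's likelihood of the evidence."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# A prior of exactly 1 on hypothesis A never moves, whatever the evidence:
print(bayes_update([1.0, 0.0], likelihoods=[0.01, 0.99]))  # [1.0, 0.0]

# And if every hypothesis assigns the same probability to the observation,
# the posterior always equals the prior -- the agent never updates
# (here it stays at [0.7, 0.3], up to float rounding):
print(bayes_update([0.7, 0.3], likelihoods=[0.5, 0.5]))
```

The first case is CCC's "reassign 1 to 0.999999999" problem; the second is Eugine_Nier's flat-likelihood agent, which is broken no matter how its priors are perturbed.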

Replies from: CCC
comment by CCC · 2012-10-11T07:07:27.446Z · LW(p) · GW(p)

Oh, that's sneaky.

Perhaps a perfect agent should occasionally - very occasionally - perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-10-12T00:56:37.646Z · LW(p) · GW(p)

Nice try, but random perturbations won't help here.

Replies from: CCC
comment by CCC · 2012-10-12T07:15:49.660Z · LW(p) · GW(p)

I think that this re-emphasises the importance of good priors.

comment by daenerys · 2012-10-04T19:54:55.310Z · LW(p) · GW(p)

I couldn't think of a koan-y question, but here is a discussion prompt.

Let's make a Worksheet!

Let's come up with some practice examples of the 2x2 matrix (such as the "Being Lost or Not" example in the OP), that people can fill out. The examples should be short (single paragraph) everyday type problems that people can relate to. Submit examples in the comments. I'll take the best and put them in a worksheet in Google docs, and link to it here.

That way, when people in the future come and read this post, they have an activity to help them practice it. Also, people can use them at meetups if they want. Worksheets, of course, aren't the BEST way to learn, but they're better than nothing.

Replies from: DaFranker, Alejandro1, None, shminux
comment by DaFranker · 2012-10-04T20:23:15.503Z · LW(p) · GW(p)

You're at work, and you find yourself wanting very badly to make a certain, particularly funny-but-possibly-taken-as-offensive remark to your boss. The comment feels particularly witty, quick-minded and insightful.

(trying to think of stuff that's fairly common and happens relatively often in everyday life)

comment by Alejandro1 · 2012-10-04T20:34:42.503Z · LW(p) · GW(p)

You are leaving your home in the morning, to return in the evening; your day will involve quite a bit of walking and public transport. It is now warm and sunny, but you know that a temperature drop with heavy rains is forecasted for the afternoon. Looking out at the window and thinking of the walk in the sun and the crowded bus, you don't feel like carrying around a coat and umbrella. You start thinking maybe the forecast is wrong...

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-05T19:35:14.251Z · LW(p) · GW(p)

I put a pocket umbrella and/or a foldable raincoat into my handbag. Duh.

Replies from: Alejandro1, DaFranker
comment by Alejandro1 · 2012-10-05T19:41:35.669Z · LW(p) · GW(p)

Yes, that is clearly the optimal solution. I was assuming you don't own those two items, or that you don't have a handbag the right size or don't want to use it--more plausible for a man than for a woman, I guess.

comment by DaFranker · 2012-10-05T19:41:18.220Z · LW(p) · GW(p)

Carrying around a handbag in the first place happens to be something that I find annoying and risky. I'm prone to leaving it in easy-to-notice, easy-to-steal places or outright forgetting it in some public location.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-05T20:07:29.945Z · LW(p) · GW(p)

Now that I think about that, that happened to me exactly once (as far as I can remember) with a handbag, though it happens to me very often¹ with other items such as keys, jackets, sweatshirts and sometimes my iPod. (I usually² eventually manage to recover them, but not always.) I guess that's because I'm more likely to immediately notice that I'm missing my bag than that I'm missing my keys.


  1. Around once per month on average.

  2. Around 90% of the time.

comment by [deleted] · 2012-10-06T18:53:12.423Z · LW(p) · GW(p)

What immediately comes to mind for me:

You are knitting a fitted garment. Let's say it's a sweater. You've been knitting for a while, and you're starting to get concerned it won't fit the intended recipient. You can't tell for sure, because your needle is too short to fully stretch it out, but you just have this feeling. This feeling you hope is wrong, because you don't want to rip out and re-do all the ribbing you've just knit...

Replies from: EvelynM
comment by EvelynM · 2012-10-08T01:36:23.540Z · LW(p) · GW(p)

That's time for a new set of knitting needles, and empiricism. I have 60in cables.

comment by shminux · 2012-10-05T20:27:46.655Z · LW(p) · GW(p)

You are an ex-smoker overcome with a sudden craving after a particularly bad day, and your helpful friend offers you a cigarette ("have just this one smoke!") to relieve tension. You know that anything less than complete abstinence has a chance of kickstarting the habit.

Replies from: apotheon
comment by apotheon · 2012-10-07T02:18:55.454Z · LW(p) · GW(p)

If a stressful day is enough to give you a craving difficult to resist, I think that saying "anything less than complete abstinence has a chance of kickstarting the habit" is a misleading statement of how it works. It might be more accurate to say that every cigarette you have is one cigarette closer to having a habit you need to kick. It seems, in fact, that there's sort of a gradient of average craving from abstinence all the way up to two packs a day, with variances around those averages. It seems a bit obfuscatory to suggest that "complete abstinence" is the deciding factor, especially when considering the question "When does complete abstinence start? Why doesn't it start after the next cigarette?" After all, the "real" complete abstinence has already failed, if you had to quit smoking in the first place.

. . . but that's kind of off the topic of the worksheet example.

comment by Maelin · 2012-10-04T08:52:51.756Z · LW(p) · GW(p)

Sharing this sentiment. I'm particularly impressed with the cartoon diagrams. They're visually very appealing, and they encapsulate an idea in a way that takes just enough thought to untangle that I feel like it makes me engage with the conceptual message.

comment by DaFranker · 2012-10-04T14:10:58.286Z · LW(p) · GW(p)

Same here, I'm certainly happy that this new sequence is starting. I devoured the old sequences, but being forced to stop and digest these makes them feel more impactful.

I'd be curious to see how much more powerful the sequences could be if they all had Koans, too, especially if they were wrapped up in an interactive shell and you had to answer them before the rest of the article (and/or the next one(s)) would show up. Not as good as a Bayesian Dojo, but there doesn't seem to be enough Beisusenseitachi around to really be effective on that front.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-03T22:05:58.101Z · LW(p) · GW(p)

Mainstream status:

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

Skill 3 in the form "Trust not those who claim there is no truth" is widely advocated by modern skeptics fighting anti-epistemology.

Payoff matrices as used in the grid-visualization method are ancient; using the grid-visualization method in response to a temptation to rationalize was invented on LW as far as I currently know, as was the Litany of Tarski. (Not to be confused with Alfred Tarski's original truth-schemas.)

Replies from: lukeprog, Vaniver, None, pragmatist, Unnamed, MarkL
comment by lukeprog · 2012-10-04T07:27:42.317Z · LW(p) · GW(p)

"The conceivability of being wrong" aka "Consider the opposite" is the standard recommended debiasing technique in psychology. See e.g. Larrick (2004).

comment by Vaniver · 2012-10-03T23:16:10.932Z · LW(p) · GW(p)

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

The most famous expression of this that I'm aware of originates with Lord Cromwell:

I beseech you, in the bowels of Christ, think it possible you may be mistaken.

Arguably, Socrates's claims of ignorance are a precursor, but they may stray dangerously close to anti-epistemology. I'm not a good enough classical scholar to identify anything closer.

The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.

The grid-visualization method seems like a relatively straightforward application of the normal-form game, with your beliefs as your play and the state of the world as your opponent's play. The advocacy to visualize it might come from LW, but actually applying game theory to life has a (somewhat) long and storied tradition.

[edit] I agree that doing it in response to a temptation to rationalize is probably new to LW; doing it in response to uncertainty in general isn't.

comment by [deleted] · 2012-10-03T22:48:07.446Z · LW(p) · GW(p)

The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.

I've seen it before, used in treatments of Pascal's wager: believe in god x god exists = heaven, believe in god x god doesn't exist = wasted life... etc.

Can't cite specific texts, but it was definitely pre-LW for me, from people who had not heard of LW.

Replies from: Eliezer_Yudkowsky, Manfred
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-03T22:54:51.293Z · LW(p) · GW(p)

Ah yes, sorry. Payoff matrices are ancient; the Tarski Method is visualizing one in response to a temptation to rationalize. Edited.

Replies from: MaoShan
comment by MaoShan · 2012-10-04T02:13:46.003Z · LW(p) · GW(p)

That sounds like a good idea in two ways: It gives you practice at visualizing the alternatives (which is always good if it can be honed to greater availability/reflex by practice), and by choosing those specific situations, you are automatically providing real-world examples in which to apply it; that way, it is a practical skill.

comment by Manfred · 2012-10-03T23:02:37.098Z · LW(p) · GW(p)

The intent seems different there, and that shapes the details. Pascal's wager isn't about how you act because of your beliefs - the belief is considered to be the action, and the outcomes are declared by fiat (or perhaps, fide) at the start of the problem, rather than modeled in your head as part of the purpose of the exercise.

comment by pragmatist · 2012-10-04T06:24:00.483Z · LW(p) · GW(p)

The Litany of Tarski has connections to certain versions of the direction-of-fit model of beliefs and desires. The model is usually considered a descriptive attempt at cashing out the difference between the functional roles played by beliefs and desires. Both beliefs and desires are intentional states: they have propositional content (we believe that p, we desire that p). According to the direction-of-fit model, the crucial difference between beliefs and desires is the relation between the content of these states and the world -- specifically, the direction of fit between the content and the world differs. In the case of beliefs, subjects try to fit the content to the world, whereas in the case of desires, subjects try to fit the world to the content.

However, some philosophers treat the direction-of-fit model not as descriptive but as normative. The model tells us that the representational contents of our beliefs and desires should be kept rigorously separate (don't let your conception of how the world is be contaminated by your conception of how you would like it to be) and that we should have different attitudes to the contents of these mental states. Here's Mark Platts, from his book Ways of Meaning:

Beliefs aim at being true, and their being true is their fitting the world; falsity is a decisive failing in a belief, and false beliefs should be discarded; beliefs should be changed to fit with the world, not vice versa. Desires aim at realization, and their realization is the world fitting with them; the fact that the indicative content of a desire is not realized is not yet a failing in the desire, and not yet any reason to discard the desire; the world, crudely, should be changed to fit with our desires, and not vice versa.

Also related (but not referring to the map/territory distinction as explicitly) is what Ken Binmore calls "Aesop's principle" (in reference to the fable in which a fox who cannot reach some grapes decides that the grapes must be sour). From his book Rational Decisions:

[An agent's] preferences, her beliefs, and her assessments of what is feasible should all be independent of each other.

For example, the kind of pessimism that might make [the agent] predict that it is bound to rain now that she has lost her umbrella is irrational. Equally irrational is the kind of optimism that Voltaire was mocking when he said that if God didn't exist, it would be necessary to invent Him.

I should note that Binmore is talking about terminal preferences here. Of course, instrumental preferences need not (indeed, should not) be independent of our beliefs about the world and our assessments of what is feasible.

Replies from: bryjnar
comment by bryjnar · 2012-10-04T11:13:27.493Z · LW(p) · GW(p)

As someone else engaged with mainstream philosophy, I'd like to mention that I personally think that direction of fit is one of the biggest red herrings in modern philosophy. It's pretty much just an unhelpful metaphor. Just sayin'.

Replies from: Decius, pragmatist
comment by Decius · 2012-10-06T00:13:50.343Z · LW(p) · GW(p)

I never saw it as a real 'model', just a way of clarifying definitions, and making statements such as "I believe that {anything not a matter of fact}" null. It provides a way to distinguish between "I don't believe in invisible dragons in my basement." and "I don't believe in {immoral action}". I suspect the original intention was to validate a philosopher who got fed up with someone who hid behind 'I don't believe in that' in a discussion, after which the philosopher responded with evidence that the subject under discussion was factual.

comment by pragmatist · 2012-10-04T12:44:01.598Z · LW(p) · GW(p)

It's really not my area at all, so I don't really have any well-developed opinions on this. My comment wasn't meant to be an endorsement of the model, I was just pointing out a similarity with a view in the mainstream literature. From a pretty uninformed perspective, it does seem to me that the direction-to-fit thing doesn't really get at what's important about the distinct functional roles of belief and desire, so I'm inclined to agree with your assessment.

Replies from: bryjnar
comment by bryjnar · 2012-10-04T18:46:31.117Z · LW(p) · GW(p)

Yeah, I did realise that you weren't necessarily supporting it, I just wanted to make it clear that it's not orthodoxy in mainstream philosophy! Sorry if it came off as a bit critical.

comment by Unnamed · 2012-10-04T05:18:50.615Z · LW(p) · GW(p)

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you.

In psychology, this is called construal. A person's beliefs, emotions, behaviors, etc. depend on their construal (understanding/interpretation) of the world.

comment by MarkL · 2012-10-07T22:49:43.058Z · LW(p) · GW(p)

Some versions of cognitive behavioral therapy ask you to write down the pros and cons of holding a particular belief.

comment by lukeprog · 2012-10-05T08:21:48.498Z · LW(p) · GW(p)

It's too bad that these how-to posts tend to be not as popular as the philosophical posts. Good philosophy is important but I doubt it can produce rationalists of the quality that can be produced by consistent rationalist skills-training over months and years.

Replies from: aaronsw
comment by aaronsw · 2012-10-05T20:27:40.762Z · LW(p) · GW(p)

Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T01:18:47.532Z · LW(p) · GW(p)

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

Replies from: wedrifid, chaosmosis
comment by wedrifid · 2012-10-06T09:12:03.929Z · LW(p) · GW(p)

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

Philosophy being right isn't enough to make it necessarily useful. There is a potentially unbounded space of philosophical concepts to explore and most of them are not of instrumental use at this particular time. We can't say much more than "They are useful if they are right and they are, well, in some way useful".

(I hesitate before pointing out the other side of the equation where a philosophy can be useful while actually being wrong because in such cases, and when unbounded processing capability is assumed, there is always going to be a 'right' philosophical principle that is at least as useful even if it is more complex, along the lines of randomized algorithms being not-better-than more thought out deterministic ones.)

comment by chaosmosis · 2012-10-06T03:11:25.869Z · LW(p) · GW(p)

They can also inspire tangentially related thoughts which are enjoyable or useful. This is why calculus is helpful even to people who don't do math for a living or for fun.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-06T07:52:54.850Z · LW(p) · GW(p)

...I honestly can't remember anymore what it's like to look at the world without knowing calculus. How do you figure out how any rate of change relates to anything else?

Replies from: wedrifid, Pentashagon
comment by wedrifid · 2012-10-06T09:06:23.646Z · LW(p) · GW(p)

...I honestly can't remember anymore what it's like to look at the world without knowing calculus. How do you figure out how any rate of change relates to anything else?

By, basically, intuitively grasping the most rudimentary aspects of and implications of calculus. (Or by learning the relationship explicitly or by learning one such relationship and intuitively extrapolating principles from one domain to another.)

comment by Pentashagon · 2012-10-08T18:05:37.599Z · LW(p) · GW(p)

It might be good practice to imagine maps without calculus since so many people use them. I wouldn't be surprised if beliefs in things like global warming were divided by the knows-calculus line. How could you even explain climate change to someone who didn't understand that dTemperature/dt = (dEnergy_in/dt - dEnergy_out/dt) / C?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-10-08T18:20:48.822Z · LW(p) · GW(p)

How could you even explain climate change to someone who didn't understand that dTemperature/dt = (dEnergy_in/dt - dEnergy_out/dt) / C?

I would probably start by talking about electric heaters and how they convert energy to heat, and generalize a little to talk about the atmosphere being kind of like that. The harder part is explaining that the same energy input can cause not only temperature increases, but changes to wind and precipitation patterns.
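A dimensionally consistent toy version of this energy balance is dT/dt = (P_in - P_out) / C, where C is a heat capacity. Here is a minimal numerical sketch (all numbers are illustrative, not real climate parameters):

```python
def simulate_temperature(t_initial, p_in, p_out, heat_capacity, dt, steps):
    """Euler-integrate dT/dt = (p_in - p_out) / heat_capacity."""
    t = t_initial
    for _ in range(steps):
        t += (p_in - p_out) / heat_capacity * dt
    return t

# If more energy comes in than goes out, temperature drifts upward...
warmer = simulate_temperature(t_initial=15.0, p_in=240.0, p_out=239.0,
                              heat_capacity=1000.0, dt=1.0, steps=1000)
# ...while in equilibrium (p_in == p_out) it stays flat.
steady = simulate_temperature(t_initial=15.0, p_in=240.0, p_out=240.0,
                              heat_capacity=1000.0, dt=1.0, steps=1000)
```

The point survives the toy model: even a tiny sustained imbalance between incoming and outgoing energy accumulates into warming, and you can't state "the planet retains more energy than it sheds" without an implicit rate of change.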

comment by daenerys · 2012-10-09T03:09:19.476Z · LW(p) · GW(p)

I enjoy having posts which show how to apply rational thought processes to everyday situations, so thank you.

However, there is a failure mode on the 2x2 matrix method, that I think should be mentioned-- it ignores probabilities of various options, and focuses solely on their payoff (example given below). I think when making the 2x2 matrix, there should be an explicit step where you assign probabilities to the beliefs in question, and keep those probabilities in mind when making your decision.

I think this is obvious to most long-time LWers, but worry about someone new coming across this decision method, and utilizing it without thinking it through.

Here is an example of how this can backfire, otherwise:

Your new babysitter seems perfect in every way: clean background check, and her organizational skills help offset your absent-mindedness. One day, you notice your priceless family heirloom diamond earrings aren't where you normally keep them. The probability is much higher that you accidentally misplaced them (you have a habit of doing that), but there is a small suspicion on your part that the babysitter might have taken them.

You BELIEVE she took them, in REALITY she took them- You fire the babysitter and have to find another.

You BELIEVE she took them, in REALITY you misplaced them- You fire the babysitter who was innocent after all.

You BELIEVE you misplaced them, in REALITY she took them- Your babysitter isn't as good or honest as you think she is! Not only might she continue stealing from you, but more importantly, you continue to leave your child under the care of a dishonest person. BAD THINGS MIGHT HAPPEN TO YOUR BABY!

You BELIEVE you misplaced them, in REALITY you misplaced them- You keep your nice babysitter. Perhaps you come across your earrings later.
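To make the probability-weighted version of the matrix concrete, here is a minimal sketch; the 5% suspicion and the payoff magnitudes are made-up numbers for the babysitter example:

```python
p_took = 0.05  # small suspicion; you usually misplace things yourself

# (action, reality) -> utility; illustrative magnitudes only
payoff = {
    ("fire", "took"): -10,       # lose a thief, must find a new sitter
    ("fire", "misplaced"): -50,  # fired an innocent, good sitter
    ("keep", "took"): -1000,     # dishonest person watching your baby
    ("keep", "misplaced"): 0,    # keep your nice babysitter
}

def expected_utility(action, p_took):
    return (p_took * payoff[(action, "took")]
            + (1 - p_took) * payoff[(action, "misplaced")])

eu_fire = expected_utility("fire", p_took)  # -48.0
eu_keep = expected_utility("keep", p_took)  # -50.0
```

Making the probabilities explicit lets the catastrophic-but-unlikely outcome be weighed properly, instead of either ignored entirely or allowed to dominate the decision outright.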

comment by JulianMorrison · 2012-10-03T23:03:50.586Z · LW(p) · GW(p)

Two beliefs, one world is an oversimplification and misses an important middle step.

Two beliefs, two sets of evidence that may but need not overlap, and one world, is closer.

This becomes an issue when for example, one observer is differently socially situated than the other* and so one will say "pshaw, I have no evidence of such a thing" when the other says "it is my everyday life". They disagree, and they are both making good use of the evidence reality presents to each of them differently.

(* Examples of such social situational differences omitted to minimize politics, but can be provided on request.)

Replies from: JulianMorrison
comment by JulianMorrison · 2012-10-03T23:45:30.864Z · LW(p) · GW(p)

Expanding a little on this, it's not a counter argument, but a caveat to "Trust not those who claim there is no truth". When people say things like "western imperialist science", sometimes they are talking jibber-jabber, but sometimes they are pointing out that the victors write the ontologies and in an anthropocene world, their ideas are literally made concrete.

comment by RobinZ · 2012-10-05T15:56:29.821Z · LW(p) · GW(p)

Thinking about the map-territory distinction reminds me of Knoll's Law of Media Accuracy:

Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge.

comment by Kaj_Sotala · 2012-10-04T10:06:03.666Z · LW(p) · GW(p)

When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil.

This is an awesome trick, and I'll have to use it more explicitly when writing various characters. (I already did somewhat, but I'm not sure if I've explicitly thought of it in these terms.)

Replies from: ArisKatsaris, gwern, Chriswaterguy, Pentashagon
comment by ArisKatsaris · 2012-10-04T10:37:43.479Z · LW(p) · GW(p)

I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving; though Eliezer's advice more explicitly focuses on how this affects their models of the universe.

A decade back, I was consciously attempting to use OSC's (if that's who I got it from) advice in a piece of Gargoyles fanfiction "Names and Forms" set in mythological-era Crete. In that story I had a character who saw everything through the prism of ethnic relations (Eteocretans vs Achaeans vs Lycians), and there's another who because of his partly-divine heritage couldn't help thinking about how gods and humans and gargoyles interact with each other, and Daedalus in his cameo appearance treated everything as just puzzles to be solved, whether it's a case of murder or a case of how-to-build-a-folding-chair... (Note: It's not a piece of rationalist fanfiction, nor does it involve anything particularly relevant to LessWrong-related topics.)

Replies from: Morendil
comment by Morendil · 2012-10-04T10:49:55.316Z · LW(p) · GW(p)

I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving

That's a very nice way of stating it, and in application to real life is one of my personal mantras. It helps me a lot, for instance in avoiding fundamental attribution error.

comment by gwern · 2012-10-08T18:31:45.076Z · LW(p) · GW(p)

David Weber places a lot of emphasis on this too; I wrote down what I could remember of his discussion of the topic at ICON 2012:

Then Weber went off on a tangent I really appreciated: while working 4 assistantships at a university, he would tell his class that Hitler's actions were all highly rational & understandable if one understood his world view. An important writing rule: have no simplistic villains. The villains must have good reasons for everything they do.

Weber gave an example: the Mesan genetic slavers in his Honor novels. They are breeding a master race, and during the centuries, they have blighted the lives of billions - but they are all still human. So he described a scene from a book:

The leader and his wife are preparing for dinner in their rooms. The wife - "Oh honey, don't wear that red shirt." The husband: "but that's my favorite shirt!" Wife: "I know, and hopefully the geneticists can do something about your taste. And you're not wearing the red shirt."

(Everyone laughed).

A good writer makes bad guys comprehensible; hence, some fans come to opposite conclusions about Weber's politics, based sometimes, he said, on the same exact passages from his novels.

comment by Chriswaterguy · 2015-12-29T11:16:45.840Z · LW(p) · GW(p)

The other writer who also does this extremely well is Vikram Seth, in A Suitable Boy.

comment by Pentashagon · 2012-10-08T18:14:15.958Z · LW(p) · GW(p)

It's also an awesome trick for interacting with real people who have an actual subjective world-view different from mine.

Unfortunately my mind can only effectively hold one human-size worldview at a time and so I am often confused by other people's actions or at best I second-guess my imagined cause of their behavior.

comment by Richard_Kennaway · 2012-10-04T07:04:11.520Z · LW(p) · GW(p)

There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble

Or with this teaching aid designed by Korzybski. He called the skill "consciousness of abstraction" and distinguishes more levels than "map" and "reality".

Replies from: buybuydandavis
comment by buybuydandavis · 2012-10-06T22:03:47.129Z · LW(p) · GW(p)

I've found myself pointing people to Korzybski a lot lately.

It has been troubling me for a while that EY starts with a couple of the most basic statements of Korzybski, and then busies himself reinventing the wheel, instead of at least starting from what Korzybski and the General Semantics crowd has already worked out.

EY is clearing brush through the wilderness, while there's a paved road 10 feet away, and you're the first person on the list who has seemed to notice.

There have been other smart people in the world. You can stand on the shoulders of giants, stand on the shoulders of stacks of midgets, or you can just keep on jumping in the air and flapping your arms.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-07T08:51:01.519Z · LW(p) · GW(p)

Korzybski, for all his merits, is turgid, repetitive, and full of out of date science. The last is not his fault: he was as up to date for his time as Eliezer is now, but, for example, he was writing before the Bayesian revolution in statistics and mostly before the invention of the computer. Neither topic makes any appearance in his magnum opus, "Science and Sanity". I wouldn't recommend him except for historical interest. People should know about him, which is why I referenced him, and his work did start a community that continues to this day. However, having been a member of one of the two main general semantics organisations years back, I cannot say that he and they produced anything to compare with Eliezer's work here. If Eliezer is reinventing the wheel, compared with Korzybski he's making it round instead of square, and has thought of adding axle bearings and pneumatic tyres.

Some things should be reinvented.

Replies from: buybuydandavis, Eliezer_Yudkowsky
comment by buybuydandavis · 2012-10-07T09:33:53.410Z · LW(p) · GW(p)

EY talks about things they don't, but on the Map is Not the Territory, I don't see that EY or the usual discussions here have met Korzybski's level for consciousness of abstraction, let alone surpassed it. General Semantics provides a tidy metamodel of abstracting, identifies and names important concepts within the model, and adds some basic tools and practices for semantic hygiene. I find them generally useful, and I generally recommend them.

For consciousness of abstraction, where and how has EY exceeded Korzybski? What are new and improved bits? Where was K wrong, and EY right?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-07T11:36:50.331Z · LW(p) · GW(p)

On second thoughts, when I said "[not] anything to compare with" that was wildly exaggerated. Of course they're comparable -- we are comparing them, and they are not so far apart that the result is a slam-dunk. But I don't want to get into a blue vs. green dingdong (despite having already veered in that direction in the grandparent).

Here are some brief remarks towards a comparison on the issues that occur to me. I'm sure there's a lot more to be said on this, but that would be a top-level post that would take (at least for me) weeks to write, with many hours of re-studying the source materials.

  1. Clarity of exposition. There really is no contest here: E wins hands down, and I have "Science and Sanity" in front of me.

  2. Informed by current science. Inevitably, E wins this one as well, just by being informed of another half-century of science. That doesn't just mean better examples to illustrate the same ideas, but new ideas to build on. I already mentioned Bayesian reasoning and computers, both unavailable to K.

  3. Consciousness of abstraction. Grokking, to use Heinlein's word, the map-territory distinction. Both E and K have hammered on this one. K refined it more, treating not merely of map/territory, but our capability for unlimited levels of abstraction, maps-of-maps-of-maps-of-etc to any depth. The more levels, the further removed from contact with reality, and the more scope for losing touch with it. Nested thought-bubbles have appeared in Eliezer's writings, but as far as I recall the spotlight has never been turned on the phenomenon.

  4. The "cortico-thalamic pause". The name is based on what I suspect is outdated neuroscience, but the idea is still around, with the currently fashionable name of "System 1 vs. System 2". The idea is current on LessWrong, but I don't recall if Eliezer himself has written anything on it. The technique consists of giving yourself time to respond rationally to whatever has just happened, time to perceive it clearly and consider (the "cortical" part) without emotional distraction (the "thalamic" part) what the situation is or might be and what to do about it, deploying consciousness of abstraction in order to be mindful of one's own flaws and see the emotional responses for what they are. This is in the Null-A books as well, so map ≠ territory isn't the only real-world actionable idea there.

  5. The unity of "body" and "mind", of "emotion" and "intellect", of "senses" and "thought", of "heredity" and "environment", etc. Our usual language artificially splits these apart (K uses the word "elementalistic"), when in reality they are indissoluble, and we require "non-elementalistic" language to speak accurately of them, hence his coining of the term "semantic reaction" to refer to the response of the organism-as-a-whole to an event. Not a topic that E has devoted attention to as a topic, but on the elementalistic splitting of "choice" from "physical law" there is this.

  6. Something to protect. K was motivated by the state of the world around him, seeing "the human dangers of the abuse of neuro-semantic and neuro-linguistic mechanisms", the neglect of those dangers in the democratic West, and their exploitation by totalitarian governments ("Science and Sanity", introduction to 2nd edition, 1941). "We humans after these millions of years should have learned how to utilize the 'intelligence' which we supposedly have, with some predictability, etc., and use it constructively, not destructively, as, for example, the Nazis are doing under the guidance of specialists." E was originally motivated by the Friendly AGI problem. I do not know to what extent he is motivated by the ordinary, pre-Singularity benefits that "raising the sanity waterline" would bring.

Etc., as Korzybski would say. Additions to the list welcome.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-10-08T05:16:58.450Z · LW(p) · GW(p)

Thanks for the elaboration. I agree with the comparative aspects.

For 1), I'd say that although Korzybski was a painfully tedious windbag in Science and Sanity, I've seen lots of summaries that were concise and well written, though I don't remember a comprehensive summary of Science and Sanity that fits the bill.

I was mainly getting at 3), with order of abstraction, multiordinal terms, and the concrete practices of semantic hygiene such as indexing, etc., and hyphenated non-elementalism.

I'd add to your list that Korzybski's aversion to the izzes of identity and predication, along with his intensional vs. extensional distinction, really complement Tabooing a Word and Replacing the Symbol with the Substance. AK elaborates the full evaluative response - the intensional response - of a flesh and blood creature, identifies particularly problematic semantic practices which maladaptively evoke that response, and EY gives the practical method for semantic hygiene in terms of what you should be doing instead.

AK always keeps in view the abstracting nervous system in a way that EY doesn't, and I think that added reductionism helps. A reductionist model which includes the salient points of human abstraction provides a generative method to make sense of the series of narratives that EY provides on different points of rationality.

Also, AK's insistence on a physical structural differential, and knowledge based in the structure of various sensory modalities is really a gusher of good ideas.

AK stays closer to the wetware, and whatever the relative limits of science available to him, I think that reductionist focus works to provide a deep model for thinking about abstraction. Focus on a reductionist physical reality, and all sorts of supposed conundrums for speciation, life, and mind evaporate.

I've been going off on this because there's just a ton of material from AK on semantic hygiene, which I take as a core method of getting Less Wrong, and all I usually see mentioned on this list is "The Map is not the Territory". That's maybe a country in the world of AK, and I think people should do some travelling and see the rest of his world. There's a lot more to see.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-07T09:10:22.335Z · LW(p) · GW(p)

S. I. Hayakawa was a way better writer - that's where I got all my reprocessed Korzybski as a kid, and that's where I point people: Language in Thought and Action instead of Science and Sanity. I tried once to read the latter book as a kid, after being referred to it by Null-A. I was probably about... eleven years old? Thirteen? I gave up very, very rapidly, which I did not do for physics texts with math in them.

Replies from: buybuydandavis, buybuydandavis, Richard_Kennaway
comment by buybuydandavis · 2012-10-07T11:17:50.295Z · LW(p) · GW(p)

I won't argue with the literary analysis; K was stupendously tedious. I can't think of anyone more tiresome, although I have a feeling that his style was in vogue with various systematizers in the first half of the 20th century. I remember similar pain in reading Buckminster Fuller and Ludwig von Mises, though I couldn't finish Fuller (tried him in my teens), and von Mises wasn't quite as awful. Someone in the body awareness field as well - Joseph Pilates or Alexander. Less sure on the last one.

I trudged through Science and Sanity, often gritting my teeth, and think it was worth it.

My impression of Hayakawa is that he takes the conclusions but leaves out the metamodel which generates the conclusions and ties them together. I felt that K gave me a way of thinking, while Hayakawa packaged a lot of results, but left out the way of thinking. I read K first, so Hayakawa tasted like relatively weak tea and didn't leave a big impression.

K was more meaty particularly on the Science/Mathematics side. Mathematics as an abstraction of functional relations of actions in the world - I don't know if it was literally tossing pebbles in a bucket, but it was close. It was the physical action of counting. Science as a semantic enterprise - finding new semantic structures to model the world. Space-Time as providing a static view of dynamic change. There was something good on differential equations too, something like reductionist locality turning nonlinear relations into linear relations. It's been almost 20 years now, so I'm a little hazy.

Anyway, I'd recommend at least having a serious chat with someone well versed in the mathematical and scientific side of Korzybski and Science and Sanity, as there is a lot of good stuff in there that doesn't get a lot of attention even from the General Semantics crowd, who, like Hayakawa, focus on the verbal aspects of the theory.

comment by buybuydandavis · 2012-10-08T23:50:22.453Z · LW(p) · GW(p)

Thank you for this response. This has removed a confusion I've had since I've come to the site.

You say in the article:

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it,

At least in my recollection, you refer to AK as the inventor of "The Map is not the Territory" when you bring it up, and that always gave me the impression that you had read him. But then I would be puzzled because many of the other things he said were appropriate to the conversation, and you wouldn't bring up those at all. And you didn't even mention Hayakawa in the article.

When someone mentions an author as the originator of an idea they're talking about, I assume he has read them, and bring that context to a reading of what they have written in turn. It would have been helpful to me if you had identified Hayakawa and Language in Thought and Action as where you had been exposed to the idea, distinguishing that from where Hayakawa had gotten the idea - AK. Maybe there aren't a lot of people who have actually read AK, but I think it would be a good general practice to make your sources clear to your readers.

comment by Richard_Kennaway · 2012-10-07T09:39:23.388Z · LW(p) · GW(p)

For me it was Heinlein --> Korzybski --> van Vogt in my early teens. I doggedly ploughed through Korzybski, but the curious thing is, in my early twenties I reread him, and found him, not exactly light reading, but far clearer than he had been on my first attempt.

comment by Randy_M · 2012-10-08T20:09:09.969Z · LW(p) · GW(p)

"Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea"

This works as a rhetorical device, but if one were to try to accurately weigh two options against each other, it might pay not to use reductio ad absurdum and instead have something like: "Continue on in the wrong direction until the ETA has passed or events make the incorrect direction obvious, then try a new route, having lost up to the ETA's worth of time." Which is still bad, but if no safe/available places to stop for directions presented themselves, it might not be the worst option. But of course, by using the skill in the article, it would be a considered risk, and not an unexpected occurrence.

Anyway, useful and easy to follow piece and I look forward to the next.

comment by Jonathan_Graehl · 2012-10-06T22:11:55.805Z · LW(p) · GW(p)

The "koan" prompts are nice.

But please be responsible in employing them. Whatever the prompted reader generates as their own idea, and finds also in the following text, will be believed without the usual skepticism (at least, I noticed this "of course!" feeling). So be sure to write only true responses :)

comment by AlexMennen · 2012-10-05T20:42:29.558Z · LW(p) · GW(p)

My koan answer: a map-territory distinction can help you update in response to information about cognitive biases that could be affecting you. For instance, if I learn that people tend to be biased towards thinking that people from the Other Political Party are possessed by demonic spirits of pure evil, with a map-territory distinction, I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil downwards, since I know that the cognitive bias means that my map is likely to be skewed from reality in a predictable direction.
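One way to operationalize that adjustment is to shift the estimate in log-odds space by the size of the known bias. A sketch, with the three-log-odds figure invented purely for illustration:

```python
from math import log, exp

def logit(p):
    """Convert a probability to log-odds."""
    return log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + exp(-x))

def debias(raw_confidence, bias_in_logits):
    """Shift a probability estimate down by a known bias, in log-odds space."""
    return sigmoid(logit(raw_confidence) - bias_in_logits)

# Raw gut feeling: 30%. Suppose partisanship is known to inflate such
# judgments by about 3 log-odds units; the corrected estimate lands near 2%.
corrected = debias(0.30, 3.0)
```

Working in log-odds keeps the corrected number a valid probability no matter how large the bias term is, which a naive subtraction of percentage points would not.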

Replies from: shminux
comment by shminux · 2012-10-05T20:56:57.176Z · LW(p) · GW(p)

I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil

If you assign a non-infinitesimal probability to this literal case, odds are that your map is so bad, you don't have much to update to begin with.

Replies from: AlexMennen
comment by AlexMennen · 2012-10-06T00:53:49.572Z · LW(p) · GW(p)

Yes, I was not being literal.

comment by [deleted] · 2012-10-04T22:10:45.915Z · LW(p) · GW(p)

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century.

It's less surprising once you realize that other proverbs have conveyed the same idea—I think, more aptly: "Theory is gray, but the golden tree of life is green." --Johann Wolfgang von Goethe

The Goethe quote (substitute "reality" for "tree of life" to be more prosaic) brings out that the difference between the best theory and reality is reality's greater richness.

On the other hand, there are two distinct points conflated by the "map versus territory" standard offer: 1) the map leaves things out (by design) and 2) the map gets things wrong (by error).

Because of this conflation, "map versus territory" is one of the most abusable cliches around, perhaps second only to "the exception that proves the rule."

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-05T18:29:23.661Z · LW(p) · GW(p)

My favourite one is

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

-- Hamlet Act 1, scene 5, 166–167

comment by Tyrrell_McAllister · 2012-10-03T23:37:05.248Z · LW(p) · GW(p)

(Mainstream status here.)

When I follow this link, I get the text

You aren't allowed to do that.

Replies from: Vladimir_Nesov, Vaniver
comment by Vladimir_Nesov · 2012-10-04T01:10:51.375Z · LW(p) · GW(p)

Fixed.

comment by Vaniver · 2012-10-03T23:43:47.902Z · LW(p) · GW(p)

Notice the link's text has Eliezer_Yudkowsky-drafts in it.

comment by Alicorn · 2012-10-03T22:45:07.070Z · LW(p) · GW(p)

'Luminosity' and 'Harry Potter and the Methods of Rationality'

Not the Hamlet one?

Replies from: Eliezer_Yudkowsky, RomeoStevens
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-03T22:56:31.173Z · LW(p) · GW(p)

Fair and added. Also there's a lovely new bit of Munchkin fiction called Harry Potter and the Natural 20 (the author has confirmed this was explicitly HPMOR-inspired) but I don't know if it's 'explicit rationalist fiction' yet, although it's possibly already a good fic to teach Munchkinism in particular.

Replies from: Vaniver, Armok_GoB, beoShaffer, Alicorn
comment by Vaniver · 2012-10-03T23:47:47.290Z · LW(p) · GW(p)

Harry Potter and the Natural 20

I thought it was starting poorly, but then I got to:

"Someone send for Dumbledore, this kid needs help."

"I'm right here in front of you."

"No, not you, the other Dumbledore."

"Oh," said Aberforth, slightly disappointed. "Nobody ever wants to send for me."

Replies from: chaosmosis, gwern
comment by chaosmosis · 2012-10-04T03:55:18.693Z · LW(p) · GW(p)

This means I'll try it, thanks for that quote.

comment by gwern · 2012-10-04T02:54:12.864Z · LW(p) · GW(p)

I thought there were a lot of quotable bits; fun fic.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-04T04:07:29.640Z · LW(p) · GW(p)

Oh yes.

"Er, before we, uh, um, start choosing one," Milo stammered awkwardly. "There's something I've been, ah, meaning to ask of you, Mr. Ollivander."

"Yes?" he said softly. Gods, but this guy is weird.

"Your store name – I mean, Ollivanders: Makers of Fine Wands Since 382 BCE – well, it's just that, er…"

"Yes?"

"Shouldn't – shouldn't Ollivanders have an apostrophe in it?" Milo said, and instantly regretted it.

Mr. Ollivander chuckled, slowly and irregularly. It was a disconcertingly unnatural sound.

"Not if it's plural," Ollivander said.

Milo swallowed nervously.

Replies from: gwern
comment by gwern · 2012-10-04T04:28:15.821Z · LW(p) · GW(p)

That was good, but the blood was better.

comment by Armok_GoB · 2012-10-04T00:43:35.180Z · LW(p) · GW(p)

There are also like 3 different MLP ones!

comment by beoShaffer · 2012-10-04T01:11:52.739Z · LW(p) · GW(p)

Given all the rationalist fiction that is surfacing, may I suggest the wording: "in fact the only explicitly rationalist fiction I know of that is not a result of Less Wrong."

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-04T07:56:12.376Z · LW(p) · GW(p)

Fair and edited. Also I left out "David's Sling".

comment by Alicorn · 2012-10-06T22:04:55.131Z · LW(p) · GW(p)

Now that is a lovely fic. I want more of it. Why must things be works in progress?

Replies from: gwern
comment by gwern · 2012-10-06T23:30:14.656Z · LW(p) · GW(p)

Gresham's law.

Replies from: Alicorn
comment by Alicorn · 2012-10-07T00:50:30.166Z · LW(p) · GW(p)

I don't think that's really a good response to this complaint.

Replies from: gwern
comment by gwern · 2012-10-07T01:10:43.300Z · LW(p) · GW(p)

Yeah, but 40 years ago you wouldn't be saying 'gosh what I really need is a good munchkin HP/D&D crossover!'

You'd be saying something like, 'That P.G. Wodehouse/Piers Anthony/etc., what a hilarious writer! If only he'd write his next book faster!' or 'I'm really looking forward to the new anthology of G.K. Chesterton's uncollected Father Brown tales!'

EDIT: Thanks for ninjaing your comment so my response looks like a complete non sequitur. -_-

Replies from: Alicorn, simplicio, Jonathan_Graehl
comment by Alicorn · 2012-10-07T04:34:15.643Z · LW(p) · GW(p)

Well, 40 years ago I wasn't born. I tend not to like old fiction. I would be less happy and enjoy fiction less in a world where that was all I had to read, though perhaps I wouldn't know what I was missing (there may even be some genre I haven't found that I would adore, and I am the poorer for not having located it yet).

I edited my comment because my first writing was based solely on seeing what article you linked to and then I searched for the specific law you named and decided my reply was inapt. Sorry.

Replies from: gwern
comment by gwern · 2012-10-07T22:55:33.406Z · LW(p) · GW(p)

I would be less happy and enjoy fiction less in a world where that was all I had to read, although perhaps I wouldn't know what I was missing (there may even in reality be some genre I haven't found yet that I would adore and am the poorer for not having located yet).

This is pretty much what my entire article is about: there are something like 300 million books out there, of which over 90% are 'old', with no real reason to expect an incredible quality imbalance (fantasy humor is an old genre, so old that practitioners like Robert Asprin have died). And yet the reading ratio is perhaps quite the inverse, with 90% of reading being of new books, and someone like you can tell me in all apparent seriousness, 'I don't like old fiction; I would be less happy in a world in which that was all I had!'

Replies from: katydee, Alicorn, Richard_Kennaway
comment by katydee · 2012-10-07T23:43:47.039Z · LW(p) · GW(p)

Counterargument: Old writing was written in accordance with old ideas.

The inferential distance between a modern reader and an old writer is likely to be larger than the inferential distance between a modern reader and a modern writer. For this reason, modern writing is generally both easier and more relatable for the modern reader, and we should not be surprised that most modern readers read modern writing.

The exceptions-- old works that are considered classic and revered even by modern readers-- are (nominally) those that have touched something timeless, and therefore ring true across the ages.

Replies from: gwern
comment by gwern · 2012-10-08T00:22:09.998Z · LW(p) · GW(p)

Is this distance sufficient to explain the recentism bias? Can you give an example of how a great SF novel like Dune has 'inferential distance' so severe as to explain why more people are at any point buying the (incredibly shitty) NYT-bestselling sequels by Kevin J. Anderson & Brian Herbert than the original?

Replies from: katydee
comment by katydee · 2012-10-08T00:30:17.473Z · LW(p) · GW(p)

"At any point" seems highly unlikely, since the sequels didn't exist during the same timespan as the original.

I would be surprised if the number of readers of any given Dune sequel were greater than the number of readers of Dune itself; such would indeed constitute evidence in favor of unreasonable recentism.

However, I think the fact that the sequels are bought more often now is more likely the result of sampling bias than an actual reflection of the popularity of the original relative to its sequels.

Replies from: gwern
comment by gwern · 2012-10-08T00:44:25.049Z · LW(p) · GW(p)

I would be surprised if the number of readers of any given Dune sequel were greater than the number of readers of Dune itself; such would indeed constitute evidence in favor of unreasonable recentism.

Well, that's where the sales figures come into play, and why I mentioned them. If every reader first buys Dune and only later - maybe - buys any sequel or prequel, then we would expect Dune to always outrank any of the others. To the extent that Dune does not appear on the rankings... The flow of buyers will reflect popularity.

Of course, some readers will not buy Dune and will read it a different way, but this is equally true of the sequels/prequels! Filesharing networks and libraries stock them too.

Replies from: katydee, hairyfigment
comment by katydee · 2012-10-08T00:58:10.086Z · LW(p) · GW(p)

I expect that Dune is much, much more common in libraries than any of its sequels, or at least is checked out more often.

This is supported by a quick search of my local library catalog, which reveals that the library system here has zero to two copies of any given Dune sequel, nearly all of which are currently available, but six copies of Dune, only one of which is currently available.

The other library I sometimes visit appears to have zero to one copy of each Dune sequel, nearly all of which are currently available, but four copies of Dune, zero of which are available.

Obviously, this is a limited sample, but I expect that similar trends generally prevail.

comment by hairyfigment · 2012-10-08T06:01:14.733Z · LW(p) · GW(p)

this is equally true of the sequels/prequels!

Why would you think this? Besides what katydee says about libraries, I've gotten many SF books from my parents' stash over the years. To the point where I had to stop myself from generalizing and rejecting your claim out of hand.

comment by Alicorn · 2012-10-07T23:08:48.841Z · LW(p) · GW(p)

Yes, I read your article. I just disagree with you about most of it.

I like some fiction-by-people-now-dead, but I don't like elderly "classics", and if a ban on new books had been implemented at any point in the past I would be the poorer for not having things that have come out since then, even if you grandfathered in series-in-progress. This is not ridiculous just because you think some "quality" metric is holding steady.

There are other things to like about books than your invented bullshit "quality" metric. You know what? I like books that were written originally in my language. That doesn't include Shakespeare; my language updates constantly and books don't. I like fanfiction, and active living fandoms where people will write each other presents according to specific prompts because someone really wanted something really specific that didn't exist a minute ago and riff on and respond to and parody each other in prose around a shared touchstone. That couldn't exist if there were some ban on new material and all these people spent their time quilting instead. I like books with fancy tech in them, and exactly what can get past my suspension-of-disbelief filter changes alongside real technology. I can read Heinlein even with slide rules in space, but damn, that would get old. Hell, I like writing. I like a lot of things that you see no value in and wish to slay. Please step back with the pointy objects.

Replies from: gwern
comment by gwern · 2012-10-07T23:33:47.276Z · LW(p) · GW(p)

Hell, I like writing. I like a lot of things that you see no value in and wish to slay. Please step back with the pointy objects.

Calm down, it's just an essay...

I like fanfiction, and active living fandoms where people will write each other presents according to specific prompts because someone really wanted something really specific that didn't exist a minute ago and riff on and respond to and parody each other in prose around a shared touchstone. That couldn't exist if there were some ban on new material and all these people spent their time quilting instead.

I dunno, people used to get a lot out of quilting and knitting - the phrase 'knitting circle' comes to mind. But your contempt for various subcultures aside:

So 'writing is not about writing', which is pretty much one of the major themes: whatever is justifying all this new fiction, it's not nebulous claims about slide rules in space, or new books being 'better' than old ones, or reading like Shakespeare (most of those 300m books are, uh, not from Elizabethan times -_-).

Community is as good an explanation as any I've seen.

Replies from: Alicorn
comment by Alicorn · 2012-10-07T23:48:33.199Z · LW(p) · GW(p)

Calm down, it's just an essay...

I intensely resent this as a debate tactic. Your ability to ask me to calm down is unrelated to what emotions I'm having, whether I'm expressing them appropriately, or whether they are justified; it's a fully general silencing tactic. If I resorted to abuse or similar it might be warranted, but I haven't (unless you count "bullshit", but that's not what you quoted). I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with! You did in fact suggest that! I'm a human, and you cannot necessarily poke me without getting growled at.

Do you finish every book you pick up? I don't. I put them down if they don't reach a certain threshold of engagingness &c. The bigger the pile of books next to me, the pickier I can be: I can hold out for perfect 10s instead of sitting through lots of 8s, because I can only get so many things out of the library at once. This includes pickiness for things other than "quality". If I want to go on a binge of mediocre YA paranormal romance (I did, a few months ago), I am fully equipped to find only the half-dozen most-Alicorn's-aesthetics-pleasing series about teenage vampires/werewolves/angels/banshees/half-devils/faeries/Greek deities/witches attending high school and musing about their respective love triangles. Having the freedom to go on this highly specific romp through bookspace is valuable. Having the selection available to do it as long as I want, without having to suffer through especially execrable examples in the bookspace, is valuable.

Replies from: Athrelon, wedrifid, gwern, Jonathan_Graehl
comment by Athrelon · 2012-10-08T00:14:11.169Z · LW(p) · GW(p)

I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with!

Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait you should be dissatisfied with and attempt to modify, not something to be stated as immovable fact. I note, however, that acting as though that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...

Replies from: Alicorn
comment by Alicorn · 2012-10-08T00:18:11.998Z · LW(p) · GW(p)

Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait that you should be dissatisfied with and attempt to modify

Does not follow. I prefer to feel in ways that reflect the world around me. As long as I also think this sort of thing is an attack, feeling that way is in accord with that preference whether it makes me happier or not. As long as I don't care to occupy a pushover role where I make myself okay with whatever happens to be going on so that people don't have to account for my values, drawing a line beyond which I will not self-modify makes perfect sense; and in fact I do not want to occupy that pushover role.

I note however, that acting like that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...

I derive some of my status from cultivating the ability to modify myself as I please; I'd actually sacrifice some of that if I declared this unchangeable. And I do not declare it unchangeable! I just have other values than happiness.

Replies from: Athrelon, wedrifid
comment by Athrelon · 2012-10-08T11:36:19.615Z · LW(p) · GW(p)

I prefer to feel in ways that reflect the world around me. As long as I also think this sort of thing is an attack, feeling that way is in accord with that preference whether it makes me happier or not. As long as I don't care to occupy a pushover role where I make myself okay with whatever happens to be going on

In any normal social context it would be reasonable to assume that this is an overconfident statement deliberately made without caveats in order to enhance bargaining power. Which is fine - humans are selfish.

This being LW, where there's a good chance that this was intended literally: this sort of rigidity is exactly why "learning how to lose" is a skill.

Replies from: wedrifid
comment by wedrifid · 2012-10-08T12:36:18.082Z · LW(p) · GW(p)

In any normal social context it would be reasonable to assume that this an overconfident statement deliberately made without caveats in order to enhance bargaining power. Which is fine - humans are selfish.

That isn't true. There are times when overconfidence is used to enhance bargaining power. But people really not liking it when others do things that hurt them is just considered normal, healthy human behavior.

This being LW where there's a good chance that this was intended literally - this sort of rigidity was exactly why "learning how to lose" is a skill.

No, it isn't. Learning to lose is a skill independent of knowing what 'lose' means and not liking to lose.

comment by wedrifid · 2012-10-08T02:30:37.518Z · LW(p) · GW(p)

I derive some of my status from cultivating the ability to modify myself as I please; I'd actually sacrifice some of that if I declared this unchangeable. And I do not declare it unchangeable! I just have other values than happiness.

Have 7.34 status points for not wireheading (more than you reflectively desire to wirehead). Some things you can counter-signal.

comment by wedrifid · 2012-10-08T02:28:56.365Z · LW(p) · GW(p)

I intensely resent this as a debate tactic. Your ability to ask me to calm down is unrelated to what emotions I'm having, whether I'm expressing them appropriately, or whether they are justified; it's a fully general silencing tactic.

I'd add that it is also a general discrediting tactic. It seems to have been rather effective in this case. According to my analysis of the conversation, your comments don't seem any more intemperate, mind-killed, or confrontational; in some ways they seem less so. You expressed disagreement with reasoning on something that is significantly subjective. Yet there are indications that perception has been swayed such that you are considered to have been emotional and irrational, while gwern is noble and to be honored for what seems to be just claiming the moral high ground and exploiting that advantage.

comment by gwern · 2012-10-08T00:45:14.305Z · LW(p) · GW(p)

I'm a human, and you cannot necessarily poke me without getting growled at.

I don't like arguing with angry or growling people, so I'm going to stop here.

Replies from: None
comment by [deleted] · 2012-10-08T00:50:06.937Z · LW(p) · GW(p)

You've just gained an immense amount of my respect, which an upvote alone could not properly convey.

Replies from: wedrifid, Alicorn
comment by wedrifid · 2012-10-08T01:57:52.370Z · LW(p) · GW(p)

You've just gained an immense amount of my respect, which an upvote alone could not properly convey.

Gwern would have gained more respect from me if he had withdrawn with tact rather than making an exit in a way that also scores a point and reinforces the frame that Alicorn is behaving irrationally*. This doesn't mean I am saying gwern's approach was somehow inappropriate (I'm actively saying nothing either way). Instead I'm saying that being able to withdraw without losing face or causing the other to lose face demonstrates strong social competence as well as the willingness to cooperate with others. Exiting with a pointed tap-out does demonstrate wisdom and a certain amount of restraint, but it is still crude, and neutral at best when it comes to respect for the other and their emotions.

* Standard caveat for all my comments: Unless explicitly stated I am not making any claim about sincerity or intent when I talk about what effect or social role a given action has.

comment by Alicorn · 2012-10-08T00:51:25.516Z · LW(p) · GW(p)

Tapping out is all well and good, sure. Doing it because people have emotions is worthy of immense respect? Why?

Replies from: katydee, common_law, None
comment by katydee · 2012-10-08T01:04:52.063Z · LW(p) · GW(p)

This might be a good place to point out that LessWrong's use of "tapping out" strikes me as bizarre. On LessWrong, this term is used to mean withdrawing from a discussion because you think further participation might be unproductive; in the martial arts, from which it was purportedly adopted, it typically signifies "I am about to be seriously injured/incapacitated and I concede."

I suppose an uncharitable eye might view the two in the same way, but I think the LessWrong term isn't meant to carry the attitude of surrender that the phrase "tapping out" generally does, and thus that a different term should be selected.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2012-10-08T01:12:59.121Z · LW(p) · GW(p)

Yes, that's exactly what "tapping out" means. Even dropping win/lose from the metaphor, the connotation is that the discussion is being abandoned because it's too painful. I'd rather describe it as "bowing out" if someone decides that it's wisest not to waste time or needlessly inflame another.

Replies from: katydee, army1987
comment by katydee · 2012-10-08T01:16:52.853Z · LW(p) · GW(p)

Well, the LessWrong wiki specifically says that "tapping out doesn't mean accepting defeat," which I think would generally be considered false in other contexts. If you're agreeing with this, sorry for belaboring the point, but I'm not entirely sure how to parse your post.

"Bowing out" definitely seems like an appropriate replacement.

Replies from: wedrifid, Jonathan_Graehl
comment by wedrifid · 2012-10-08T01:43:37.836Z · LW(p) · GW(p)

Well, the LessWrong wiki specifically says that "tapping out doesn't mean accepting defeat," which I think would generally be considered false in other contexts.

That's a good point. I hadn't paid much attention to the origin of the phrase (and haven't used it), but that is exactly what we do to concede when doing Jiu-Jitsu.

"Bowing out" definitely seems like an appropriate replacement.

I didn't think the connotations to that one were any less.

Replies from: katydee
comment by katydee · 2012-10-08T02:02:17.227Z · LW(p) · GW(p)

Perhaps "stepping out," then?

Replies from: None
comment by [deleted] · 2012-10-08T02:18:21.032Z · LW(p) · GW(p)

I don't think any bit of jargon is going to hide the fact that it's a little humiliating to leave a discussion having failed to move your interlocutor. Someone who isn't humiliated at having laid out all their reasons to no effect is probably arguing in bad faith.

Replies from: katydee, Athrelon
comment by katydee · 2012-10-08T02:22:31.526Z · LW(p) · GW(p)

I'm not so sure. If I have laid out all my reasons to no effect, that could simply mean my opponent is unusually obstinate rather than that my arguments are unusually poor.

Replies from: None
comment by [deleted] · 2012-10-08T02:46:35.896Z · LW(p) · GW(p)

Fair enough, but we should recognize how powerfully motivated we are to think our intractable opponent is obstinate rather than reasonably unconvinced.

comment by Athrelon · 2012-10-08T11:49:24.134Z · LW(p) · GW(p)

"Having more free time" and "being more stubborn" shouldn't win arguments, but they do in real life where arguments are mostly about status, so we translate the status dynamics online.

comment by Jonathan_Graehl · 2012-10-08T01:47:49.835Z · LW(p) · GW(p)

Yeah, I agree with you. The wiki needs correction (although sometimes technically imprecise language can adjust attitudes better than precision).

comment by A1987dM (army1987) · 2012-10-08T11:20:23.249Z · LW(p) · GW(p)

(As for me, the main reason I do that is when I suspect I am being mind-killed and as a result a large fraction of what I would be going to say if I continued the discussion would be bullshit.)

comment by common_law · 2012-10-08T01:18:40.084Z · LW(p) · GW(p)

Doing it because people have emotions is worthy of immense respect? Why?

Emotions are part of rational process, but you aren't rational in discussion when you're in the grip of a strong, immediate emotion. Since you have the advantage in an argument when you remain calm, it is worthy of respect to forgo that advantage and disengage.

comment by [deleted] · 2012-10-08T00:53:19.873Z · LW(p) · GW(p)

I hardly see this line of inquiry ending well for anyone, so I decline to participate.

comment by Jonathan_Graehl · 2012-10-08T00:57:32.451Z · LW(p) · GW(p)

To the extent that people can go on a subgenre binge and be right to do so, perhaps we can afford a few writers for relatively virgin genres. Otherwise I find gwern's argument that we'd be nearly as happy reading 20+ year old books pretty compelling (oddly, I don't buy a similar argument for movies, due only in part to advances in movie-making tech).

comment by Richard_Kennaway · 2012-10-07T23:40:04.286Z · LW(p) · GW(p)

Books, music, and all other art forms, unlike apples, are not fungible, not even items of the same "quality" (however defined).

BTW, I have that collection of the complete Bach in 160 CDs (and have listened to all of it at least twice). I'm also collecting the complete Masaaki Suzuki recordings of the Bach cantatas (which are completely different from the Leonhardt/Harnoncourt performances in the Bach 2000 set), and I might spring for the John Eliot Gardiner cantatas if he manages to issue them as a complete set. I also went to this performance yesterday of an art form dating back all of 60 years (the drums are from the long-long-ago, but this use of them is not), and I buy everything Greg Egan writes as soon as it comes out.

Yes, no-one can read/listen to/view more than the tiniest fraction of what there is, but to read nothing old, or to read nothing new, are selection rules that have only simplicity in their favour. There is no one-dimensional scale of "quality".

Replies from: gwern
comment by gwern · 2012-10-08T00:15:29.083Z · LW(p) · GW(p)

Books, music, and all other art forms, unlike apples, are not fungible, not even items of the same "quality" (however defined).

A point which applies equally to old and new. And ultimately every choice comes down to read or don't read...

Yes, no-one can read/listen to/view more than the tiniest fraction of what there is, but to read nothing old, or to read nothing new, are selection rules that have only simplicity in their favour. There is no one-dimensional scale of "quality".

I think you're deprecating them too quickly. Let's take the 90% guess at face value: if you are selecting primarily from just the most recent 10%, then, however multidimensionally you choose to define quality, you need to somehow make up for throwing out 9/10ths of all the best books, the ones which happened to be old!

It'd be like running a machine learning or statistical algorithm which starts by throwing out 90% of the data from consideration; yeah, maybe that's a good idea, but you're going to have a hard time selecting from the remaining 10% so much better that it makes up for it.
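The data-throwing-out analogy above can be made concrete with a toy simulation (my own illustrative sketch, not anything from the thread, with made-up numbers): if per-book quality were i.i.d. across eras, a reader who only ever considers the newest 10% of books would expect to find only about 10% of the all-time top 1% in their pool.

```python
import random

random.seed(0)

N = 100_000                    # toy "library" of books
quality = [random.gauss(0, 1) for _ in range(N)]  # i.i.d. quality scores
new_start = N - N // 10        # pretend the last 10% of indices are "new" books

# Indices of the all-time top 1% by quality.
top = sorted(range(N), key=lambda i: quality[i], reverse=True)[: N // 100]

# What fraction of the very best books does a new-books-only reader even see?
share_new = sum(i >= new_start for i in top) / len(top)
print(f"share of top 1% that is 'new': {share_new:.2f}")  # ~0.10 under i.i.d. quality

# And the best "new" book can never beat the best book overall (it's a subset).
print(max(quality[new_start:]) <= max(quality))
```

The point being sketched: a selection rule that discards 90% of candidates up front has to be a very good rule indeed on the remaining 10% to compensate, which is exactly the machine-learning framing in the comment.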

comment by simplicio · 2012-10-07T01:34:59.556Z · LW(p) · GW(p)

I'd STILL like Wodehouse to write a few more. Unfortunately...

comment by Jonathan_Graehl · 2012-10-12T01:41:51.228Z · LW(p) · GW(p)

Not that gwern was wrong in any way in his general point, but I also tremendously enjoyed this particular crossover and second everyone's recommendation (at least, if you've ever attempted "roleplaying" of the non-sexual type).

comment by RomeoStevens · 2012-10-04T01:36:48.535Z · LW(p) · GW(p)

Is Hamlet still available online? I don't see it.

Replies from: Alicorn, Blueberry
comment by Alicorn · 2012-10-04T01:48:13.187Z · LW(p) · GW(p)

Under normal circumstances, you have to buy it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-10T02:27:10.540Z · LW(p) · GW(p)

Deleted due to the attempt to evade the -5 penalty.

Replies from: Eugine_Nier, Risto_Saarelma
comment by Eugine_Nier · 2012-10-10T06:35:43.687Z · LW(p) · GW(p)

I thought part of the point of the -5 penalty was to keep interesting discussions from happening downstream of downvoted comments. In that case, isn't responding to heavily downvoted comments in a different thread exactly what should happen?

Replies from: wedrifid
comment by wedrifid · 2012-10-10T06:42:33.131Z · LW(p) · GW(p)

I thought part of the point of the -5 penalty was to keep interesting discussions from happening downstream of downvoted comments. In that case, isn't responding to heavily downvoted comments in a different thread exactly what should happen?

I assumed that either Eliezer just didn't like the subject or that the comment actually quoted a -5 comment. Hang on. This can be checked. We can see from Eliezer's page which author Eliezer was replying to and look at that user's page.

(From what I can tell everything the user in question has written has been downvoted.)

comment by Risto_Saarelma · 2012-10-10T07:18:24.488Z · LW(p) · GW(p)

I understood that the system actually stops the thread starter from replying to replies to their own comment if they have less than +5 total karma. Stop people from talking to the people who are talking to them, and they will go looking for a workaround.

Maybe just let people accrue more negative karma when replying to downvoted threads rather than stopping them when they hit the arbitrary zero point?

comment by Johnicholas · 2012-10-06T15:22:34.144Z · LW(p) · GW(p)

There are some aspects of maps (for example, edges, blank spots, and so on) that seem, if not necessary, then extremely convenient to keep as part of the map. However, if you use these features of a map in the way that you use most features of a map - to guide your actions - then you will not be guided well. There's something in the Sequences, like "the world is not mysterious", about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.

The slogan "the map is not the territory" might encourage focusing on the delicate corrections necessary to act upon SOME aspects of one's representation of the world, but not act on other aspects which are actually intrinsic to the representation.

comment by JackV · 2012-10-05T09:41:24.503Z · LW(p) · GW(p)

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.

I find just that description really, really useful. I knew about the Litany of Tarski (or Diax's Rake, or believing something just because you wanted it to be true) and have the habit of trying to preemptively prevent it. But that description makes it a lot easier to grok it at a gut level.

comment by beoShaffer · 2012-10-04T03:24:53.521Z · LW(p) · GW(p)

When I was trying to solve the koan I focused on a few interrelated subproblems of skill one. It seems like this sort of thinking is particularly useful for reminding yourself to consider the outside view and/or the difference between confidence levels inside and outside an argument.
Also, I think the koan left out something pretty important.
Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it hurt, on what sort of problem?

.

.

.

.

.

It looks pretty solid for describing unbounded epistemic rationality. It's slightly iffier from a bounded instrumental perspective, in that it probably imposes some mental cost to apply, and there are many circumstances where it's not noticeably helpful. There's also the matter of political situations and similar, where it's arguably good to be generally overconfident.

Replies from: Richard_Kennaway, Morendil, None
comment by Richard_Kennaway · 2012-10-04T10:21:06.970Z · LW(p) · GW(p)

Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory

If you can ever gain by being ignorant, you can gain more by better knowledge still.

Cf. E.T. Jaynes: "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought", quoted here.

comment by Morendil · 2012-10-04T09:43:00.904Z · LW(p) · GW(p)

How exactly does it hurt, on what sort of problem?

Beliefs are part of reality too. The image "thought bubble containing a belief, and a reality outside it" is a good map, but it's not itself the territory.

In particular, the mantra "Reality is that which, when we stop believing in it, doesn't go away" can be harmful in areas such as psychology and sociology, and in domains which have a large component of these, such as finance, politics or software engineering. In these domains you must account for phenomena such as self-fulfilling or self-cancelling prophecies. Concrete example: stock market crashes.

Replies from: None
comment by [deleted] · 2012-10-04T13:20:31.376Z · LW(p) · GW(p)

So you're saying that if we stop believing in stock market crashes, they go away?

I think what you mean is that if you intervened to change everyone's beliefs away from "oh shit, sell!", then stock market crashes would not happen. That is a different matter than talking about just my or your belief.

Replies from: Morendil
comment by Morendil · 2012-10-04T14:31:57.002Z · LW(p) · GW(p)

So you're saying that if we stop believing in stock market crashes, they go away?

More often it works the other way around: the fact that someone stops believing in an overinflated stock market (i.e., claims a "bubble" is about to burst) acts as a self-fulfilling prophecy, causing others to also stop believing, which, if this information cascade propagates enough, will cause a crash, thereby bringing reality in line with the original belief.

But information cascades can also cause booms, though as I understand it this is more likely with individual stocks.

The "someone" above is underspecified: it can be one particularly influential person - Nate Silver recounts how Amazon stock surged 25% after Henry Blodget hyped it up in 1998. But it can also be a larger group, who, looking at small fluctuations in the market, panic and start a stampede.

My point is that "thought bubbles" in general are part of reality. Your believing in things has causal influence on reality (another concrete example: romantic relationships, where the concept "love", which can be cashed out in terms of blood levels of various hormones, is one of those things that go away when people stop believing in them). It is generally bad epistemic practice to overstate this influence, but it can also be bad to understate it.
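The cascade dynamic described above can be sketched as a toy threshold model in the style of Granovetter: each trader sells once the visible fraction of sellers exceeds their personal panic threshold, so a small initial scare can snowball into a crash. Everything here (the trader count, the random thresholds, the update rule) is an illustrative assumption, a caricature rather than a model of real markets.

```python
import random

def simulate_cascade(n_traders=1000, initial_bears=0.02, rounds=50, seed=0):
    """Toy threshold cascade: each trader sells once the observed
    fraction of sellers meets or exceeds their personal panic
    threshold. Selling is absorbing (no one buys back in)."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_traders)]
    selling = [rng.random() < initial_bears for _ in range(n_traders)]
    for _ in range(rounds):
        frac = sum(selling) / n_traders  # the publicly visible "mood"
        selling = [s or frac >= t for s, t in zip(selling, thresholds)]
    return sum(selling) / n_traders

# No initial scare: nothing happens. A 5% scare: the belief that
# others are selling recruits ever more sellers, round by round.
calm = simulate_cascade(initial_bears=0.0)
panic = simulate_cascade(initial_bears=0.05)
```

With no seed of pessimism the market stays put, while a small one recruits most traders over enough rounds, which is the sense in which the original belief "brings reality in line" with itself.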

Replies from: None
comment by [deleted] · 2012-10-04T14:56:36.045Z · LW(p) · GW(p)

Agreed.

My point was that your examples are part of reality in a way that the idealized observer's belief invoked by the "reality is that which..." mantra isn't.

comment by [deleted] · 2012-10-04T13:17:02.381Z · LW(p) · GW(p)

There's also the matter of political situations and the like, where it's (arguably) good to be generally overconfident.

No. It may be good to talk shit like you're overconfident. Actually being overconfident is just unnecessarily shooting yourself in the foot.

comment by wedrifid · 2012-10-08T02:56:00.140Z · LW(p) · GW(p)

Then he'd probably ignore Alicorn's scornful comment

Yes. It would probably also involve expressing agreement with part of what Alicorn said (ideally part that he could sincerely agree with) and perhaps paraphrasing another part back with an elaboration. That seems to work sometimes.

I don't think gwern's required to turn the other cheek, and you obviously don't think you are so required, either.

No, I don't (where all the negatives add up to agreement with this quote). That is just what would gain him immense respect for social grace (and plain grace).

comment by Decius · 2012-10-06T00:05:57.414Z · LW(p) · GW(p)

It's important to distinguish "The map is not the territory" from "The map is not a perfect representation of the territory."

The major difference is that beliefs cannot easily be used as direct or indirect concrete objects; I cannot look inside my belief about what's in the basket and find (or not find) a marble. I cannot test my beliefs by experimentation to find whether they correspond to reality; I must test reality to find whether my beliefs correspond to it.

comment by [deleted] · 2012-10-04T03:04:08.488Z · LW(p) · GW(p)

If my socks will stain, I want to believe my socks will stain; If my socks won't stain, I don't want to believe my socks will stain; Let me not become attached to beliefs I may not want.

That was beautiful. I will definitely keep that mantra in mind.

comment by kris_buote · 2018-11-02T03:55:07.713Z · LW(p) · GW(p)
[...] while reality itself is either one way or another.

Is this true?

Replies from: SaidAchmiz, Elo
comment by Said Achmiz (SaidAchmiz) · 2018-11-02T08:19:37.307Z · LW(p) · GW(p)

Yes.

Replies from: kris_buote
comment by kris_buote · 2018-11-02T16:56:26.333Z · LW(p) · GW(p)

Quantum mechanics doesn't seem so clear-cut.

comment by Elo · 2018-11-02T05:24:20.777Z · LW(p) · GW(p)

Depends on who you ask.

comment by [deleted] · 2015-04-14T10:50:15.283Z · LW(p) · GW(p)

Sometimes it still amazes me to contemplate that this proverb was invented at some point(...) to me this phrase sounds like a sheer background axiom of existence.

Because "the map is not the territory" is applied atheism. To a theist, the map in god's mind caused the territory to happen, so that map is even more real than the territory. And every human map is accurate insofar as it approaches the primordial divine map; the fact that it also happens to predict the terrain is merely a nice bonus. Even Einstein believed this. To invent "the map is not the territory" you not only need to be an atheist, you need to be an experienced, confident atheist who can figure out what follows from it, and have balls of iron to challenge about one and a half millennia of intellectual tradition that was about looking for the primordial map.

comment by [deleted] · 2012-10-06T13:06:01.733Z · LW(p) · GW(p)

Thanks for the clear illustration

comment by thomblake · 2012-10-04T14:26:28.750Z · LW(p) · GW(p)

The illustrations are great. I wish there were one or two more in this post.

comment by Kaj_Sotala · 2012-10-04T10:05:11.173Z · LW(p) · GW(p)

This time, I wrote down my answer to the koan: the basic idea was correct, but it didn't include as many examples of subskills as Eliezer listed.

It helps to realize that there may be mistakes in the process of constructing a map, and that you may need to correct them. If there is a problem where it's important to be right, like when figuring out whether you should invest in a company, or if you are feeling bad about your life and wonder whether it's justified, you need to be able to make the map-territory distinction in order to evaluate the accuracy of your beliefs.

Though I'm somewhat pleased that I don't, at least, remember Eliezer ever explicitly making the jump from beliefs to emotions and applying "are your emotions correct?" as a special case of "the map is not the territory"; I can't claim that to be original to me (I think I might have gotten it from Jasen Murray or Michael Vassar or some book), but at least I've helped popularize it on LW somewhat.

comment by Daniel Winter (daniel-winter) · 2022-01-01T00:07:06.917Z · LW(p) · GW(p)

With apologies for being so late to the party, I'm somewhat perplexed by a post entitled "The Map is Not the Territory" that then dismisses the originator with a pithy "...some fellow named Korzybski..." Given that the site deals with AI/ML, and that Korzybski is also credited with developing General Semantics (full of implications for AI), I'm guessing this apparently pithy dismissal conceals an appreciation for Korzybski hidden elsewhere. I could be wrong, though.

comment by Error · 2013-03-22T11:58:48.586Z · LW(p) · GW(p)

Under what circumstances is it helpful to consciously think of the distinction between the map and the territory

I thought about this before reading the rest of the post, and came up with: "When I find myself surprised by something." Surprise may indicate that something improbable has happened, but may also indicate an error in my estimation of what's probable. Given that the observation appears improbable to begin with (or I wouldn't be surprised), I should suspect the map first.
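The "suspect the map first" heuristic is just Bayes' rule: an observation that is very improbable under your model is evidence against the model itself. A minimal sketch, where the prior confidence and the catch-all likelihood for "my map is wrong" are illustrative assumptions, not quantities from the comment:

```python
def posterior_map_ok(prior_ok, p_obs_if_ok, p_obs_if_broken=0.5):
    """Bayes update on 'my map is accurate' after an observation.
    p_obs_if_ok: how probable the observation was if the map is right.
    p_obs_if_broken: assumed catch-all probability if the map is wrong."""
    joint_ok = prior_ok * p_obs_if_ok
    joint_broken = (1 - prior_ok) * p_obs_if_broken
    return joint_ok / (joint_ok + joint_broken)

# A 1-in-1000 surprise, under a map held with 95% confidence,
# leaves the map itself as the prime suspect:
suspicion = posterior_map_ok(prior_ok=0.95, p_obs_if_ok=0.001)
```

An unsurprising observation (probable either way) leaves the prior essentially untouched, which matches the intuition that only surprise should send you back to check the map.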

comment by jslocum · 2013-02-27T15:39:50.229Z · LW(p) · GW(p)

I find myself to be particularly susceptible to the pitfalls avoided by skill 4. I'll have to remember to explicitly invoke the Tarski method next time I find myself in the act of attempting to fool myself.

One scenario not listed here in which I find it particularly useful to explicitly think about my own map is in cases where the map is blurry (e.g. low precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). When I bring my map's flaws explicitly into my awareness, it allows me to make plans which account for the uncertainty of my knowledge, and come up with countermeasures.
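The blurry/splotchy distinction can be made concrete in code: a low-precision belief is an interval you plan against at its pessimistic edge, while an explicit gap is a marked unknown that triggers a countermeasure (search) rather than a lookup. The sunset and cup examples follow the comment; the particular representation is my assumption.

```python
def latest_safe_departure(sunset_interval, travel_minutes):
    """Blurry belief = an interval. Plan against its pessimistic edge:
    leave early enough to arrive before the *earliest* possible sunset."""
    earliest_sunset, _latest_sunset = sunset_interval
    return earliest_sunset - travel_minutes

def items_needing_search(locations):
    """Splotchy belief = explicit None gaps in the map. Anything mapped
    to None must be searched for, not looked up."""
    return [item for item, place in locations.items() if place is None]

# Sunset "some time between 5pm and 7pm", in minutes after midnight:
leave_by = latest_safe_departure((17 * 60, 19 * 60), travel_minutes=30)
cups = {"red": "cupboard", "blue": "cupboard", "green": None}
missing = items_needing_search(cups)
```

Keeping the interval and the None in the data structure, instead of collapsing them to a best guess, is exactly the "bring the map's flaws into awareness" move the comment describes.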

comment by jsalvatier · 2012-10-07T03:22:14.596Z · LW(p) · GW(p)

In your verbal description it says 40 miles, but in the matrix it says 40 minutes.

Replies from: sboo
comment by sboo · 2012-10-14T06:13:00.187Z · LW(p) · GW(p)

60mph? (At 60 mph, 40 miles takes exactly 40 minutes, so both readings agree.)

comment by Stuart_Armstrong · 2012-10-05T13:39:37.894Z · LW(p) · GW(p)

one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler

You might consider Mark Clifton's novel "Eight Keys to Eden" (1960) as another rationalist fiction (though it's more debatable). Available from Gutenberg at http://www.gutenberg.org/ebooks/27595

Replies from: TheWakalix
comment by TheWakalix · 2018-05-11T18:24:17.831Z · LW(p) · GW(p)

Interestingly, that seems to take an opposite view on "map and territory" from Vogt.

comment by hairyfigment · 2012-10-08T06:10:44.723Z · LW(p) · GW(p)

I don't think gwern's required to turn the other cheek

Someone like you can tell me in all apparent seriousness that Alicorn slapped first, but that doesn't make it so.

Now it may be that her edited comments contained emotional attacks with nonstop profanity, starting before the linked comment. But the record only shows her apologizing and getting the contemptuous line I quoted in response.

comment by Reality_Check · 2012-10-08T13:55:49.865Z · LW(p) · GW(p)

Reality_Check: Unless one wants to risk going the way of Georg Cantor, Ludwig Boltzmann, Kurt Gödel, Alan Turing and others, I would lighten up on the philosophy and math approach to reality. These men all went insane and committed suicide pursuing infinities and other irrational ideas.

GWERN: Arguing from a few famous anecdotes? Not a good approach, especially since more systematic approaches show rates similar to what one would expect of the general population: http://blog.computationalcomplexity.org/2011/07/disproofing-myth-that-many-early.html

Mental disease, it seems, is a part of the general human condition, and not a flaw of the "philosophy and math approach to reality...pursuing infinities and other irrational ideas".


Reality_Check: Bill Gasarch picked persons from Wikipedia's list of logicians, selecting from between the years 1845 and 1912 (Cantor's and Turing's birth years). He arrived at 5 out of 48 being a few axioms short of a complete set. That is a little over 10 percent of the selected logicians being "insane."

Well, I'm not sure how systematic an approach this is, but point taken. One's occupation doesn't necessarily lead to insanity, or suicide. I did a (very) little research myself by querying your and my friend, Google.

What is the percentage of mentally insane people in our population? About 1 percent of the general population is insane, but 20 percent of adults behind bars have mental health problems. ChaCha! http://www.chacha.com/question/what-is-the-percentage-of-mentally-insane-people-in-our-population

What percent of the world population is insane? Estimated 26.2 percent of people ages 18 & older-about one in 4 adults-suffer from a diagnosable mental disorder in a given year. http://www.chacha.com/question/what-percent-of-the-world-population-is-insane

The 5 Percent Doctrine: about 5 percent of our population is and always will be totally crazy. I don't mean mentally ill. According to the National Institute for Mental Health, 26 percent of American adults suffer from a diagnosable mental disorder in any given year. http://www.nytimes.com/2010/09/09/opinion/09collins.html?_r=0

Mental Health and Illness - How Many People Are Mentally Ill? The Surgeon General's report estimated that 20% of the United States population was affected by mental disorders and that 15% use some type of mental health service every year. Community surveys estimate that as many as 30% of the adult population in the United States suffer from mental disorders.

Read more: Mental Health and Illness - How Many People Are Mentally Ill? http://www.libraryindex.com/pages/2996/Mental-Health-Illness-HOW-MANY-PEOPLE-ARE-MENTALLY-ILL.html

Mental Disorders in America

Mental disorders are common in the United States and internationally. An estimated 26.2 percent of Americans ages 18 and older or about one in four adults suffer from a diagnosable mental disorder in a given year. When applied to the 2004 U.S. Census residential population estimate for ages 18 and older, this figure translates to 57.7 million people.

Even though mental disorders are widespread in the population, the main burden of illness is concentrated in a much smaller proportion about 6 percent, or 1 in 17 who suffer from a serious mental illness. In addition, mental disorders are the leading cause of disability in the U.S. and Canada for ages 15-44. Many people suffer from more than one mental disorder at a given time. Nearly half (45 percent) of those with any mental disorder meet criteria for two or more disorders, with severity strongly related to comorbidity.

In the U.S., mental disorders are diagnosed based on the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV). http://www.thekimfoundation.org/html/about_mental_ill/statistics.html

1 in 5 Americans Suffers From Mental Illness http://abcnews.go.com/blogs/health/2012/01/19/1-in-5-americans-suffer-from-mental-illness/

In summary: 5% "totally crazy"; 26% any mental disorder; 6% serious mental illness.

Reality_Check: So if we count the selected logicians as totally crazy, or as having a serious mental disorder, then their rate is higher than the general population's. If we count them as just having a mental disorder, then they are doing pretty well.
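The back-of-envelope comparison above can be tightened with an exact binomial tail probability: how likely is seeing at least 5 "mad" logicians out of 48 if logicians simply matched a given base rate? The 5-of-48 count and the candidate base rates come from the comment; which base rate is the right comparison remains the contested assumption.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of at least k
    affected people out of n if they simply matched base rate p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# Gasarch's count: 5 of 48. Whether that is surprising depends
# entirely on which (contested) base rate you compare against:
vs_totally_crazy = binom_tail(5, 48, 0.01)  # vs. the 1% figure
vs_any_disorder = binom_tail(5, 48, 0.26)   # vs. the 26% figure
```

Against the 1% "totally crazy" rate the count would be a striking excess; against the 26% "any diagnosable disorder" rate it is entirely unremarkable, which is essentially the conclusion the comment reaches by eyeball.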

Replies from: gwern
comment by gwern · 2012-10-10T00:23:36.301Z · LW(p) · GW(p)

Unless one wants to risk going the way of Georg Cantor, Ludwig Boltzmann, Kurt Gödel, Alan Turing and others, I would lighten up on the philosophy and math approach to reality. These men all went insane and committed suicide pursuing infinities and other irrational ideas.

Arguing from a few famous anecdotes? Not a good approach, especially since more systematic approaches show rates similar to what one would expect of the general population: http://blog.computationalcomplexity.org/2011/07/disproofing-myth-that-many-early.html

Mental disease, it seems, is a part of the general human condition, and not a flaw of the "philosophy and math approach to reality...pursuing infinities and other irrational ideas".