Rationality Quotes Thread March 2015
post by Vaniver · 2015-03-02T23:38:48.068Z · LW · GW · Legacy · 235 comments
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
235 comments
Comments sorted by top scores.
comment by James_Miller · 2015-03-04T15:46:02.938Z · LW(p) · GW(p)
It seemed rather short
Misao Okawa, the world's oldest person, when asked "how she felt about living for 117 years."
comment by fortyeridania · 2015-03-02T22:15:40.301Z · LW(p) · GW(p)
One kid said to me, “See that bird? What kind of bird is that?” I said, “I haven’t the slightest idea what kind of a bird it is.” He says, “It’s a brown-throated thrush (or something). Your father doesn’t teach you anything!” But it was the opposite. My father had taught me, looking at a bird, he says, “Do you know what that bird is? It’s a brown-throated thrush. But in Portuguese, it’s a Bom da Peida; in Italian, a Chutto Lapittida." He says, "In Chinese, it’s a Chung-long-tah, and in Japanese, it’s a Katano Tekeda, et cetera." He says, "Now you know all the languages you want to know what the name of that bird is, and when you’re finished with all that," he says, "you’ll know absolutely nothing whatever about the bird. You’ll only know about humans in different places, and what they call the bird. Well," he says, "let’s look at the bird and what it’s doing."
--Richard Feynman, source. Full video (The above passage happens at about the 7:00 mark in the full version.)
N.B. The transcript provided differs slightly from the video. I have followed the video.
Related to: Replace the Symbol with the Substance
Replies from: elharo, DanielLC, dxu
↑ comment by elharo · 2015-03-13T22:35:28.728Z · LW(p) · GW(p)
Feynman knew physics but he didn't know ornithology. When you name a bird, you've actually identified a whole lot of important things about it. It doesn't matter whether we call a Passer domesticus a House Sparrow or an English Sparrow, but it is really useful to be able to know that the males and females are the same species, even though they look and sound quite different; and that these are not all the same thing as a Song Sparrow or a Savannah Sparrow. It is useful to know that Fox Sparrows are all Fox Sparrows, even though they may look extremely different depending on where you find them.
Assigning consistent names to the right groups of things is colossally important to biology and physics. Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not. Again it doesn't matter which kind of particle we call electron and which we call positron (arguably Ben Franklin screwed up the names there by guessing wrong about the direction of current flow) but it matters a lot that we always call electrons electrons and positrons positrons. Similarly it's important for a chemist to know that Helium 3 and Helium 4 are both Helium and not two different things (at least as far as chemistry and not nuclear physics is concerned).
Names are useful placeholders for important classifications and distinctions.
Replies from: Epictetus, lmm, ChristianKl, fortyeridania
↑ comment by Epictetus · 2015-03-18T18:32:22.233Z · LW(p) · GW(p)
Feynman knew physics but he didn't know ornithology. When you name a bird, you've actually identified a whole lot of important things about it.
I think Feynman's point was that a name is meaningful if you already know the other information. I can memorize a list of names of North American birds, but at the end I'll have learned next to nothing about them. I can also spend my days observing birds and learn a lot without knowing any of their names.
Assigning consistent names to the right groups of things is colossally important to biology and physics.
I don't think anyone will disagree with this. The hard part, though, is properly setting up the groups in the first place. Good classification systems took years (or centuries) of work and refinement to become the systems we take for granted today.
Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not.
Feynman has been quoted elsewhere criticizing students for parroting physics terminology without having the least idea of what they're actually talking about. There's the anecdote about students who knew all about the laws of refraction but failed to identify water as a medium with a refractive index.
Replies from: None
↑ comment by [deleted] · 2015-03-18T19:23:56.371Z · LW(p) · GW(p)
Feynman wasn't really wrong; he just failed to mention that if you want to remember anything about a certain bird that you observed, you will have to invent a name for it, because 'the traveler hath no memory'. Original names are OK if you only want the knowledge for yourself.
Replies from: Capla
↑ comment by Capla · 2015-03-24T01:29:23.750Z · LW(p) · GW(p)
I'm reminded of another Feynman anecdote: when he invented his own mathematical notation in middle school. It made more sense to him, but he soon realized that it was no good for communicating ideas to others.
Replies from: johnlawrenceaspden
↑ comment by johnlawrenceaspden · 2016-03-11T19:28:06.651Z · LW(p) · GW(p)
Every time I try to learn to sight-sing I get sidetracked by trying to invent better notation for music.
After many repeats of this process I've decided that music notation is pretty good, given the constraints under which it used to operate.
Now I'm trying to just force myself to learn to sight-sing, already.
↑ comment by lmm · 2015-03-20T09:25:07.101Z · LW(p) · GW(p)
Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not.
Did you deliberately pick this example, where Feynman speculated that they might be the same thing?
Names are useful as shorthand for a bundle of properties - but only once you know the actual bundle of properties. I sometimes think science should be taught with the examples first, with the name given only once students have identified the concept.
↑ comment by ChristianKl · 2015-03-20T11:12:00.369Z · LW(p) · GW(p)
Semantics are important. On the other hand, you don't get additional knowledge from learning the name in an additional language that draws the same semantic borders around the concept.
↑ comment by fortyeridania · 2015-03-17T00:12:13.854Z · LW(p) · GW(p)
Assigning consistent names to the right groups of things is colossally important to biology and physics.
Yes, this is true.
↑ comment by dxu · 2015-03-17T01:42:56.648Z · LW(p) · GW(p)
Also related: Guessing the Teacher's Password
comment by sixes_and_sevens · 2015-03-02T12:46:17.810Z · LW(p) · GW(p)
[Transcript from video, hence long and choppy]
I think the way the battle lines are drawn in the world we live in, the battle lines typically fall in terms of 'what are your conclusions?' Like: are you a republican; are you a democrat; are you a libertarian; are you a socialist? And the more I think about it, this strikes me as extremely odd.
Why should the battle lines be drawn in terms of conclusions? Another way of drawing the battle lines would be, say, in terms of how people think. So if I take someone like Matt [Yglesias?], who's one of the commenters - I read Matt's blog all the time. Matt, I think, would agree that he and I disagree on a lot of issues. Not on everything, but we disagree a lot. We disagree every day. We sort of write back and forth to each other and to others, and even if we don't call each other by name, we're, like, disagreeing in public every day.
But at the same time when I read Matt I have this feeling like 'if I were a progressive, this is the argument I would make'. I feel that way when I read Matt. There's other writers, like when I read Paul Krugman, I don't feel that way. I don't think if I were progressive I would argue like Paul Krugman.
So this method of thinking in common, there's this question, should I be emotionally, intellectually, whatever, more allied to people with whom I share conclusions, or with whom I share a certain method of thinking? And when I disagree with Matt, which is frequently, I feel like I can always figure out very quickly where we disagree. There's something about the framework we have in common. And that, to me, seems like a powerful commonality. So in general I'm interested in getting people to explore, or re-explore, what are our true commonalities with other people?
-- Tyler Cowen, from a talk on neurodiversity
Replies from: Vaniver
↑ comment by Vaniver · 2015-03-02T14:11:28.216Z · LW(p) · GW(p)
Why should the battle lines be drawn in terms of conclusions?
Suppose I agree with someone's conclusion, and disagree with them on the method used to reach that conclusion. Are we political allies, or enemies? That is, "politics" is of course the answer to 'why should the battle lines be drawn this way?'
Now, for Tyler as a pundit, the answer is different. Staying in an intellectual realm where he thinks like the other people around him makes it so any disagreements are interesting and intelligible.
Replies from: itslupus, Lumifer, PeterisP
↑ comment by itslupus · 2015-03-03T01:59:25.295Z · LW(p) · GW(p)
This is sort of related to what Scott argues in "In Favor Of Niceness, Community, And Civilization".
↑ comment by Lumifer · 2015-03-02T18:03:10.241Z · LW(p) · GW(p)
Now, for Tyler as a pundit, the answer is different. Staying in an intellectual realm where he thinks like the other people around him makes it so any disagreements are interesting and intelligible.
I think the reasons for Tyler's positions are deeper than that.
Don't think in terms of a single-round game, think in terms of a situation where you have to co-exist with the other party for a relatively long time and have some kind of a relationship with it.
The conclusions about a particular specific issue of today are not necessarily all that important compared to sharing a general framework of approaches to things, a similar way of analyzing them...
Replies from: Vaniver
↑ comment by Vaniver · 2015-03-02T18:47:16.571Z · LW(p) · GW(p)
Don't think in terms of a single-round game, think in terms of a situation where you have to co-exist with the other party for a relatively long time and have some kind of a relationship with it.
I also had in mind this bit of wisdom from Robin.
The conclusions about a particular specific issue of today are not necessarily all that important compared to sharing a general framework of approaches to things, a similar way of analyzing them...
As stated, this primarily matters for pundits. Notice that the methods of thinking that he's talking about don't reliably lead to the same conclusions; different values and different facts mean that two people who think very similarly (i.e. structure arguments in the same way) may end up with opposite policy preferences, able to look at each other and say "yes, I get what you think and why you think it, but I think the opposite." And so a particular part of the blogosphere will discuss policies in one way, another part another way, it'll be discussed a third way on television, and so on. But the battle lines will still be drawn in terms of conclusions, because policy conclusions are what actually get implemented, and it doesn't seem sensible to describe the boundaries between the areas where policies are discussed as "battle lines," when what they actually are is an absence of connections.
Replies from: TheOtherDave, Lumifer
↑ comment by TheOtherDave · 2015-03-31T17:53:42.816Z · LW(p) · GW(p)
When dealing with someone who comes to different conclusions than I do, but whose way of thinking I understand well, it's relatively easy for me to negotiate with them -- I can predict what offers they'll value, and roughly to what degree, and what aspects of their own negotiating position they're likely to be OK with trading off.
Whereas negotiating with someone whose way of thinking I don't understand is relatively hard, and I can expect a significant amount of effort to be expended overcoming the friction of the negotiation itself, and otherwise benefiting nobody.
Of course, I don't have to negotiate with someone who agrees with me, so in the short term that's an easy tradeoff in favor of agree-on-conclusions.
But if I'm choosing people I want to work with in the future, it's worth asking how well agreeing on conclusions now predicts agreeing on conclusions in the future, vs. how well understanding each other now predicts understanding each other in the future. For my own part, I find mutual understanding tends to be more persistent.
That said, I'm not sure whether negotiation is more a part of what you're calling "politics" here, or what you're calling "punditry," or neither, or perhaps both.
But negotiation is a huge part of what I consider politics, and not an especially significant part of what I consider punditry.
↑ comment by Lumifer · 2015-03-02T18:57:56.199Z · LW(p) · GW(p)
As stated, this primarily matters for pundits.
I continue to disagree. This matters a lot for people who are interested in maintaining the status quo and are very much against any drastic and revolutionary changes -- which often enough come from a different way of thinking.
↑ comment by PeterisP · 2015-03-03T09:48:35.894Z · LW(p) · GW(p)
"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
For example, a powerful and popular extreme radical member of the "opposite" camp, who holds conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate - that's often a prime example of a political ally, whose actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda be successful up to a point. I won't go into examples of particular politicians/parties of various countries, as that gets dirty quickly, but many strictly opposed radical groups are actually allies in this sense against the majority of moderates; and sometimes they actively coordinate and cooperate despite the ideological differences.
On the other hand, consider a public speaker who targets the same audience as you do, shares the same goals/conclusions that you do and the intended methods to achieve them, but simply does it consistently poorly - by using sloppy arguments that alienate part of the target audience, or by disgusting personal behavior that hurts the image of your organization. That's a good example of a political enemy, one that you must work to silence, to get them ignored and not heard, despite being "aligned" with your conclusions.
And of course, a political competitor who does everything that you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear to distinguish one from one's political enemies despite sharing most of the platform.
Replies from: Vaniver, Epictetus
↑ comment by Vaniver · 2015-03-03T14:39:58.373Z · LW(p) · GW(p)
your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
! That's not how other humans interpret "alliance," and using language like that is a recipe for social disaster. This is a description of convenience. Allies are people that you will sacrifice for and they will sacrifice for you. The NAACP may benefit from the existence of Stormfront, but imagine the fallout from a fundraising letter that called them the NAACP's allies!
Whether or not someone is an ally or an enemy depends on the context. As the saying goes, "I against my brother, and I and my brother against my cousins, I and my brother and my cousins against the world"--the person that has the same preferences as you, and thus competes with you for the same resources, is potentially an enemy in the local scope but is an ally in broader scopes.
↑ comment by Epictetus · 2015-03-03T15:01:30.411Z · LW(p) · GW(p)
Allies are those who agree to cooperate with you. An alliance may be temporary, limited in scope, and subject to conditions, but in the end it's all about cooperation. A stupid enemy who makes mistakes certainly benefits your cause and is a useful tool, but he's no ally.
comment by AlanCrowe · 2015-03-05T21:01:19.657Z · LW(p) · GW(p)
One problem is that most people think we are always in the short run. No matter how many times you teach students that tight money raises rates in the short run (liquidity effect) and lowers them in the long run (income and Fisher effects), when the long run actually comes around they will still see the fall in interest rates as ECB policy "easing". And this is because most people think the term "short run" is roughly synonymous with "right now." It's not. Actually "right now" we see the long run effects of policies done much earlier. We are not in an eternal short run. That's the real problem with Keynes's famous "in the long run we are all dead."
Replies from: Zubon
↑ comment by Zubon · 2015-03-14T13:44:55.938Z · LW(p) · GW(p)
In practice, the economic "long run" can happen exceedingly quickly. Keynes was probably closer to right with "Markets can remain irrational longer than you can remain solvent," but if you plan on the basis of "in the long run we are all dead," you might find out just how short that long run can be.
Replies from: D_Alex
↑ comment by D_Alex · 2015-03-23T06:33:40.227Z · LW(p) · GW(p)
If we need to look to economics for rationality quotes, we are getting towards the bottom of the barrel, Robin Hanson notwithstanding.
Replies from: Grant
↑ comment by Grant · 2015-03-26T06:36:51.078Z · LW(p) · GW(p)
Macroeconomics? Sure, it's highly politicized, so in many cases I'll agree with that. But microeconomics is in many ways the study of how to rationally deal with scarcity. IMO, traditional micro assuming homo economicus is actually more interesting (and useful, outside of politics) than the behavioral stuff for this reason.
comment by Jayson_Virissimo · 2015-03-03T18:32:49.777Z · LW(p) · GW(p)
Nothing is more dangerous than an idea if it's the only one you have.
-- Émile Auguste Chartier, Propos sur la religion, 1938
comment by Pablo (Pablo_Stafforini) · 2015-03-05T07:42:06.562Z · LW(p) · GW(p)
Because it is often easy to detect the operation of motivated belief formation in others, we tend to disbelieve the conclusions reached in this way, without pausing to see whether the evidence might in fact justify them. Until around 1990 I believed, with most of my friends, that on a scale of evil from 0 to 10 (the worst), Communism scored around 7 or 8. Since the recent revelations I believe that 10 is the appropriate number. The reason for my misperception of the evidence was not an idealistic belief that Communism was a worthy ideal that had been betrayed by actual Communists. In that case, I would simply have been victim of wishful thinking or self-deception. Rather, I was misled by the hysterical character of those who claimed all along that Communism scored 10. My ignorance of their claims was not entirely irrational. On average, it makes sense to discount the claims of the manifestly hysterical. Yet even hysterics can be right, albeit for the wrong reasons. Because I sensed and still believe that many of these fierce anti-Communists would have said the same regardless of the evidence, I could not believe that what they said did in fact correspond to the evidence. I made the mistake of thinking of them as a clock that is always one hour late rather than as a broken clock that shows the right time twice a day.
Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, Cambridge, 2007, pp. 136-137, n. 16
Replies from: seer, Jayson_Virissimo
↑ comment by seer · 2015-03-09T00:07:16.092Z · LW(p) · GW(p)
I just realized what bothers me about this quote. It seems to boil down to Elster trying to admit that he was wrong without having to give credit to those who were right.
Replies from: gjm, satt
↑ comment by gjm · 2015-03-09T09:16:55.846Z · LW(p) · GW(p)
Yup, he appears to be doing that. On the grounds that he has other reasons for thinking they don't deserve credit for it.
Rather than commenting on the credibility of that in Elster's specific case (which would depend on knowing more than I do about Elster and about the anti-communists he paid attention to), I'll remark that there certainly are cases in which most of us here would do likewise. (Not literally zero credit, but extremely little, which I think is also what Elster's doing.) For instance:
One of your friends is an avid lottery enthusiast and keeps urging you to buy a ticket "because today might be your lucky day". He disdains your statements that buying lottery tickets is a substantial loss on average and insists that he's made a profit from playing the lottery. (Maybe he actually has, maybe not.) Eventually you give in and buy one ticket. It happens to win a large prize.
Another of your friends is a fundamentalist of some sort and tells you confidently that the current scientific consensus on evolution is all bunk. Any time she reads of any scientific claim about evolution she is liable to tell you confidently that in time it'll be refuted by later research. One day, a new discovery is made that refutes something you had said to her about evolution (e.g., that X is more closely related to Y than to Z).
Another worships the ancient Roman gods and tells you with great confidence that it will rain tomorrow because he has made sacrifice to Jupiter, Neptune and the lares and penates of his household. You are expecting a dry day because that's what the weather forecasts say. It does in fact rain a bit.
↑ comment by [deleted] · 2015-03-09T12:16:30.635Z · LW(p) · GW(p)
Is being anti-lottery some kind of badge of honor amongst intelligent people? It is entertainment, not investment. It is spending money to buy a feeling of excited expectancy. It is like buying a movie ticket. Does anyone consider buying a ticket to a scary horror movie irrational? Some people just like that kind of excitement. People who buy lottery tickets just like a different kind of excitement: dream, fantasy.
As for the argument that it is a mis-investment of emotions: that is also false. People can decide to work toward a goal, and then what happens is a lot of grinding; they can still dream about something else, since it is not as if you cannot dream while you grind. Realistic goals do not need a lot of dream investment but rather time and effort, so it is safe to invest dreams in unrealistic ones.
When I read Eliezer's mis-investment-of-emotions argument, it came across to me as an elitist Bay Area upper-middle-class thing. People in slums usually need to grind until they get better schooling and job experience to escape, and this takes time investment, not dream investment - which leaves them free to dream about one day being a prince.
Replies from: Vaniver, gjm, Lumifer
↑ comment by Vaniver · 2015-03-09T14:22:58.585Z · LW(p) · GW(p)
Realistic goals do not need a lot of dream investment but rather time and effort and it is safe to invest dreams in unrealistic ones.
I think this is factually untrue. It seems to me that time and effort investment follows dream investment, for basic psychological reasons.
When I have read Eliezer's mis-investment of emotions argument it came accross to me an elitistic Bay Area upper middle class thing.
I think that's because you misread it, or you're identifying correct financial attitudes with being upper middle class and throwing in the rest of the descriptions for free. Here's the part where he talks about mechanisms:
If not for the lottery, maybe they would fantasize about going to technical school, or opening their own business, or getting a promotion at work—things they might be able to actually do, hopes that would make them want to become stronger.
Going to technical school is not an "elitistic Bay Area upper middle class thing." Yes, later he talks about dot-com startups doing IPOs, but the vast majority of new businesses started are things like barbershops and restaurants, and people go to technical school to learn how to repair air conditioning systems, not to learn how to make Yelp. A person who dreams about owning their own barbershop or being an AC repairperson or demonstrating enough responsibility at work to earn a promotion is likely to do better than someone who dreams about being a lottery prince.
That is, I think a key component of grinding successfully is dreaming about grinding.
Replies from: None
↑ comment by [deleted] · 2015-03-10T09:02:56.175Z · LW(p) · GW(p)
Basically you are saying constant grinding requires constant motivation - or discipline?
But in reality all it takes is the precommitment of shame.
Example 1: you come from a working-class or slum family and get into a university as the first one in the family. Your mom and grandma brag to the whole kin and neighborhood about what a genius you are. At that point you are strongly precommitted, not exactly through your own choice: you don't want the shame of letting down 100 people who treat you like a genius by dropping out.
Example 2: you get your first real job and it sucks, but your dad has proudly supported a family for 25 years now on a similarly sucky one, and to get his approval / not feel ashamed in his eyes you need to stick to it until you get enough experience for a better one.
I think the elitism part is precisely in the lack of this kind of shame-precommitment: elites have discretionary goals, doing what they want rather than what they must to get ahead, and thus need constant motivation. If you would quit a job once it stops being fun, you are of the elites in this sense. If you stick with it until quitting would not feel shameful, then you are not.
And this is why for the majority constant motivation is not required for constant grinding.
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-10T14:47:53.862Z · LW(p) · GW(p)
I think the elitism part is precisely in the lack of this kind of shame-precommitment: elites have discretionary goals, doing what they want, not what they must to get ahead, and thus need constant motivation.
I think it's quite the reverse: elites have strong shame-precommitment, it's only a few levels higher. All your family went to Harvard and you're going to fail?? Your ancestors have Ph.D.s three generations back and you're not enrolling in a graduate program?? X-D
Of course I mean elites not of the Kardashian kind.
↑ comment by gjm · 2015-03-09T13:54:31.391Z · LW(p) · GW(p)
It is entertainment, not investment.
I was careful to specify that your hypothetical friend enjoins you to buy lottery tickets on the grounds that it is good for you financially. I agree that if you get great enjoyment from the thought that you might win the lottery, buying lottery tickets may be worth it for you.
(But two caveats on that last point. Firstly, if you enjoy daydreaming about getting rich then you can equally daydream about unexpected legacies, spectacular success of companies in your pension/investment portfolio if you have one, eccentric billionaire arbitrarily giving you a pile of money, etc. Of course these are improbable, but so is winning much in the lottery. Secondly, "dream investment" may lead you astray by, e.g., making all the most mentally salient paths to success the terribly improbable ones involving lotteries rather than the more-probable ones involving lots of hard work, and demotivating the hard work. Whether it actually has that effect is a question for the psychologists; I don't know whether it's one that's been answered.)
Replies from: Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2015-03-09T14:47:29.530Z · LW(p) · GW(p)
I was careful to specify that your hypothetical friend enjoins you to buy lottery tickets on the grounds that it is good for you financially.
Good point; I'm retracting my comment elsethread.
Whether it actually has that effect is a question for the psychologists; I don't know whether it's one that's been answered.
I'm guessing the hard part is figuring out which way the causation goes -- maybe not having mentally salient paths to success involving lots of hard work makes people more likely to buy lottery tickets, rather than or as well as vice versa.
↑ comment by Lumifer · 2015-03-09T15:03:25.444Z · LW(p) · GW(p)
It is entertainment, not investment.
Why do you need to pay money to someone in order to daydream?
people can decide to work forward the goal then what happens is a lot of grinding, they can still dream about something else
The problem is that "dreaming" often replaces grinding.
Replies from: None
↑ comment by [deleted] · 2015-03-17T21:01:39.734Z · LW(p) · GW(p)
Don't people who go to amusement parks or Disneyland basically pay other people in order to have a daydream session? I mean, I can't imagine people walking around dreaming about winning a lottery, it would be Charlie and the Chocolate Factory. (Now that's a book about humanity outcompeted by a more profitable life form under the guidance of an omnipotent being.)
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-17T21:15:41.510Z · LW(p) · GW(p)
Don't people who go to amusement parks or Disneyland basically pay other people in order to have a daydream session?
No, they pay other people to provide experiences for them, experiences which they can't get otherwise on their own.
Replies from: None
↑ comment by [deleted] · 2015-03-17T21:36:05.843Z · LW(p) · GW(p)
How is 'you can safely put on a princess's dress when you are in certain company, and pay in some amount of social embarrassment if you are wrong about the company' different from 'you can safely pay a small amount for a chance to put on any dress you want in any company whatsoever'? Buying a ticket is an experience you can't get otherwise on your own. (I mean yes, I largely agree with you, but I am not sure what exactly I agree with, therefore the nitpicking.)
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-18T14:34:13.237Z · LW(p) · GW(p)
How is 'you can safely put on a princess's dress when you are in certain company, and pay in some amount of social embarrassment if you are wrong about the company' different from 'you can safely pay a small amount for a chance to put on any dress you want in any company whatsoever'?
Huh? I don't understand.
Replies from: None
↑ comment by [deleted] · 2015-03-18T15:35:36.997Z · LW(p) · GW(p)
Well, in what way is buying a ticket not paying other people to provide you an experience which you can't get otherwise on your own? Earning money is different, you expect to be paid a fixed sum and for many, there are multiple ways to do it.
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-18T15:38:26.147Z · LW(p) · GW(p)
Well, in what way is buying a ticket not paying other people to provide you an experience which you can't get otherwise on your own?
In the way that I can, on my own, daydream about having a million dollars. I don't need to pay other people for that.
Replies from: Epictetus
↑ comment by Epictetus · 2015-03-18T15:47:03.213Z · LW(p) · GW(p)
If you want a strictly positive chance at getting a million dollars and the thrill of looking up the lottery drawings to see if you won, then you have to pay for it. People buy lottery tickets to have a fleeting, tangible hope, not just an imagined one.
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-18T15:56:00.126Z · LW(p) · GW(p)
If you want a strictly positive chance at getting a million dollars
You have a strictly positive chance of having a rich and unknown to you relative die and leave her fortune to you.
the thrill
Ah, that's a good point. Yes, if you want the gambling thrill, then you have to pay for it, I agree. However from the expected-loss point of view, going to a casino is much better than buying lottery tickets...
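To put rough numbers on that expected-loss comparison, here is a minimal sketch. The house-edge figures are outside assumptions (typical published ballpark values, not anything cited in this thread): US state lotteries return roughly half of ticket sales as prizes, double-zero roulette keeps about 5.26%, and blackjack played with sound basic strategy keeps well under 1%.

```python
# Expected loss per $100 wagered under assumed house edges.
# The edge figures are rough outside assumptions, not data from this thread.
house_edges = {
    "state lottery": 0.50,                # ~half of sales returned as prizes
    "roulette (double zero)": 0.0526,     # standard American wheel edge
    "blackjack (basic strategy)": 0.005,  # near-optimal play assumed
}

stake = 100.0
for game, edge in house_edges.items():
    print(f"{game}: expected loss ${stake * edge:.2f} per ${stake:.0f} wagered")
```

On these assumed edges, $100 put through the lottery loses about $50 in expectation, versus roughly $5 at roulette and under $1 at blackjack - an order of magnitude or two of difference in the price of the thrill.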
Replies from: Nornagest
↑ comment by Nornagest · 2015-03-18T16:47:25.621Z · LW(p) · GW(p)
For that matter, there's a strictly positive chance that a meteor made of two tons of platinum will fall from the sky tomorrow and flatten my car in the driveway before I'm done brushing my teeth. The probability of almost anything you can think of is going to be positive, unless it's physically impossible -- and even there you have model uncertainty to take into account.
Replies from: Lumifer
↑ comment by seer · 2015-03-11T01:55:39.051Z · LW(p) · GW(p)
Rather than commenting on the credibility of that in Elster's specific case (which would depend on knowing more than I do about Elster and about the anti-communists he paid attention to), I'll remark that there certainly are cases in which most of us here would do likewise. (Not literally zero credit, but extremely little, which I think is also what Elster's doing.)
No, you give them appropriate credit for their correct predictions, and appropriate discredit for their incorrect predictions.
Replies from: gjm
↑ comment by gjm · 2015-03-11T02:11:47.780Z · LW(p) · GW(p)
You shouldn't give credit or discredit directly for correctness of predictions, if you have information about how those predictions were made. If you saw someone make their guess at tomorrow's Dow Jones figure by rolling dice, you don't then credit them with any extra stock-market expertise when it happens that their guess was on the nose; they just got lucky. (Though if they do it ten times in a row you may start to suspect that they have both stock-market expertise and skill in manipulating dice.)
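As a toy illustration of why one lucky hit and ten hits in a row deserve such different treatment - every number here (the hit rates and the prior) is invented for the example:

```python
# Toy Bayesian update: real expertise vs. lucky dice after n straight hits.
# The hit rates and the prior are invented purely for illustration.
p_hit_skilled = 0.9       # assumed hit rate for genuine expertise
p_hit_dice = 0.1          # assumed hit rate for a dice-rolled guess
prior_odds = 0.01 / 0.99  # prior odds of expertise: about 1 in 100

for n in (1, 3, 10):
    posterior_odds = prior_odds * (p_hit_skilled / p_hit_dice) ** n
    p_expert = posterior_odds / (1 + posterior_odds)
    print(f"{n:2d} hit(s) in a row -> P(expertise) ~ {p_expert:.3f}")
```

One hit barely moves a skeptical prior (to about 8% here); ten hits overwhelm it, which is why the streak starts to demand an explanation beyond luck.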
↑ comment by Good_Burning_Plastic · 2015-03-09T12:41:02.820Z · LW(p) · GW(p)
He disdains your statements that buying lottery tickets is a substantial loss on average
Substantial? The tickets of all lotteries I'm familiar with cost less than a movie ticket.
Replies from: Lumifer, gjm
↑ comment by Lumifer · 2015-03-09T15:03:28.135Z · LW(p) · GW(p)
Substantial?
Yes. Households earning less than $13,000 a year spend a shocking 9% of their money on lottery tickets.
Replies from: Vaniver
↑ comment by Vaniver · 2015-03-09T15:58:43.795Z · LW(p) · GW(p)
Someone else follows the citation trail and claims the source thinks the actual number is lower:
households with an income of less than $10 000 spend, on average, approximately 3% of their income on the lottery.
Replies from: Lumifer, Good_Burning_Plastic
↑ comment by Lumifer · 2015-03-09T16:09:21.209Z · LW(p) · GW(p)
Upvoted for checking claims :-)
The link actually says that he cannot find the original source for the 9% number, but in the process found a 3% number.
I'll dig around for better numbers if I have time, but we can also look at significance from the other end:
State lotteries have become a significant source of revenue for the states, raising $17.6 billion in profits for state budgets in the 2009 fiscal year (FY) with 11 states collecting more revenue from their state lottery than from their state corporate income tax during FY2009.
P.S. An interesting paper. Notable quotes:
The introduction of a state lottery is associated with a decline of $115 per quarter in household non-gambling consumption. This figure implies a monthly reduction of $23 in per-adult consumption, which compares to average monthly sales of $18 per lottery-state adult. The response is most pronounced for low-income households, which on average reduce non-gambling consumption by three percent. Among households in the lowest income third of the CEX sample, the data demonstrate a statistically significant reduction in expenditures on food eaten in the home (3.1 percent) and on home mortgage, rent, and other bills (6.9 percent).
And also
Based on 1998 sales data compiled by LeFleurs Inc., adults living in lottery states averaged $226 annually on lottery tickets. In contrast, CEX Diary respondents living in lottery states report an average of $0.71 for the two-week interval. Assuming smooth annual expenditures, this implies mean annual lottery expenditures of only $36. The underreporting is so severe that magnitudes implied by an analyses of this data are not reliable.
Replies from: Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2015-03-09T21:48:39.257Z · LW(p) · GW(p)
Okay, now I can see where all the people giving financial reasons why lotteries are bad are coming from.
↑ comment by Good_Burning_Plastic · 2015-03-09T21:46:18.023Z · LW(p) · GW(p)
$300/year (unless someone is a bored millionaire) is still shocking to me.
Replies from: Nornagest
↑ comment by Nornagest · 2015-03-09T21:51:11.616Z · LW(p) · GW(p)
Assume a flat distribution from 0 to 10000 and it's $150 a year, or about a lottery ticket and a half per week at $2 a ticket. Not too unreasonable. But on the other hand, you've got to figure lottery spending's unevenly distributed, probably following something along the lines of the 80/20 rule, and that brings us back to a ticket a day or higher.
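Spelling that arithmetic out - the flat income distribution, the 3% figure from upthread, the $2 ticket, and the 80/20 split are all the assumptions stated above, not independent data:

```python
# Reproducing the back-of-envelope arithmetic in the parent comment.
mean_income = 10_000 / 2           # flat distribution from $0 to $10,000
annual_spend = 0.03 * mean_income  # the ~3% figure cited upthread: $150/yr
ticket_price = 2.0

print(f"average: {annual_spend / 52 / ticket_price:.1f} tickets per week")

# 80/20 rule: 80% of the spending done by 20% of the households.
heavy_annual = annual_spend * 0.8 / 0.2  # $600/yr for the heavy spenders
print(f"heavy spenders: {heavy_annual / 365 / ticket_price:.2f} tickets per day")
```

That lands at about a ticket and a half per week on average, and just under a ticket a day for the concentrated spenders.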
Replies from: gjm, Good_Burning_Plastic
↑ comment by gjm · 2015-03-09T22:21:34.285Z · LW(p) · GW(p)
Seems plenty unreasonable to me. If your income is somewhere on "a flat distribution from 0 to $10000" then you are probably just barely getting by, and perpetually one minor financial difficulty away from disaster. If you were able to save $150/year, that could make a really substantial difference to your financial resilience.
(Though I don't much like pronouncing from my quite comfortable position on how those in poverty should spend their money. It's liable to sound like a claim of superiority, but in fact I do plenty of stupid and counterproductive things and it's entirely possible that if I were suddenly thrown into poverty I'd manage much worse than those people; I doubt I'd be buying lottery tickets, but I'd probably be making other mistakes that they don't.)
[EDITED to fix a bit of incredibly clunky writing style.]
↑ comment by Good_Burning_Plastic · 2015-03-10T08:48:54.175Z · LW(p) · GW(p)
It still breaks my formerly favourite analogy, movie tickets -- I don't think the average household making <$10k/year spends $150/year on movie tickets. (Some such households probably do, but I strongly doubt the average one does.)
Replies from: None, johnlawrenceaspden
↑ comment by johnlawrenceaspden · 2016-03-11T19:53:49.783Z · LW(p) · GW(p)
A family of four can probably blow $50 seeing one movie.
↑ comment by Jayson_Virissimo · 2015-03-05T17:11:25.410Z · LW(p) · GW(p)
Because it is often easy to detect the operation of motivated belief formation in others, we tend to disbelieve the conclusions reached in this way, without pausing to see whether the evidence might in fact justify them. Until around 2009 I believed, with most of my friends, that on a scale of danger from 0 to 10 (the most dangerous), global warming scored around 7 or 8. Since the recent revelations I believe that 10 is the appropriate number. The reason for my misperception of the evidence was not an idealistic belief that economic growth could have no downsides. In that case, I would simply have been victim of wishful thinking or self-deception. Rather, I was misled by the hysterical character of those who claimed all along that global warming scored 10. My ignorance of their claims was not entirely irrational. On average, it makes sense to discount the claims of the manifestly hysterical. Yet even hysterics can be right, albeit for the wrong reasons. Because I sensed and still believe that many of these fierce environmentalists would have said the same regardless of the evidence, I could not believe that what they said did in fact correspond to the evidence. I made the mistake of thinking of them as a clock that is always one hour late rather than as a broken clock that shows the right time twice a day.
Jane Elmer, Explaining Anti-Social Behavior: More Amps and Volts for the Social Sciences
EDIT: In case it wasn't clear, I disagree that "it is often easy to detect the operation of motivated belief formation in others". Also, when your opponents strongly believe that they are right and are trying to prevent a great harm (whether they have good arguments or not), this often feels from the inside like they are "manifestly hysterical".
Replies from: arundelo
↑ comment by arundelo · 2015-03-05T17:18:26.335Z · LW(p) · GW(p)
Or just:
Until around 1990 I believed, with most of my friends, that on a scale of evil from 0 to 10 (the worst), [$POLITICAL_BELIEF] scored around 7 or 8. Since the recent revelations I believe that 10 is the appropriate number. The reason for my misperception of the evidence was not an idealistic belief that [$POLITICAL_BELIEF] was a worthy ideal that had been betrayed by actual [proponents of $POLITICAL_BELIEF]. In that case, I would simply have been victim of wishful thinking or self-deception. Rather, I was misled by the hysterical character of those who claimed all along that [$POLITICAL_BELIEF] scored 10. My ignorance of their claims was not entirely irrational. On average, it makes sense to discount the claims of the manifestly hysterical. Yet even hysterics can be right, albeit for the wrong reasons. Because I sensed and still believe that many of these fierce [opponents of $POLITICAL_BELIEF] would have said the same regardless of the evidence, I could not believe that what they said did in fact correspond to the evidence.
Replies from: Jiro, Lumifer
↑ comment by Jiro · 2015-03-05T20:17:57.964Z · LW(p) · GW(p)
How is having a paragraph that applies to [$POLITICAL_BELIEF] not the same as making a fully general argument?
Or are you just saying that the original statement about Communism was a fully general argument?
Replies from: arundelo
↑ comment by arundelo · 2015-03-05T21:35:24.910Z · LW(p) · GW(p)
I'm saying that I think the original quote (which I did think was good) would have been improved qua Rationality Quote by removing the specific political content from it. (Much like the "Is Nixon a pacifist?" problem would have been improved by coming up with an example that didn't involve Republicans.)
Replies from: Pablo_Stafforini
↑ comment by Pablo (Pablo_Stafforini) · 2015-03-06T00:09:59.624Z · LW(p) · GW(p)
I think the problems associated with providing concrete political examples are in this case mitigated by the author's decision to criticize people on opposite sides of the political debate (Soviet communists and hysterical anti-communists), and by the author's admission that his former political beliefs were mistaken to a certain degree.
Replies from: arundelo
↑ comment by Lumifer · 2015-03-05T17:27:35.131Z · LW(p) · GW(p)
I do appreciate the correct classification of global warming as a political belief :-D
Replies from: arundelo
↑ comment by arundelo · 2015-03-05T18:09:45.895Z · LW(p) · GW(p)
I was substituting "[$POLITICAL_BELIEF]" for "Communism", which is what Pablo_Stafforini's quote referred to.
But I could also use it for "global warming" without making a statement against anthropogenic climate change, considering that even people who believe the science on climate change is mostly settled can also believe that
- climate change is political in the "Politics is the Mind-Killer" sense
- how we should respond to climate change is in large part a political question
comment by dxu · 2015-03-02T03:52:32.348Z · LW(p) · GW(p)
When you see a good move, look for a better one.
Emanuel Lasker
Replies from: bentarm, IlyaShpitser, Curiouskid
↑ comment by bentarm · 2015-03-03T17:07:35.345Z · LW(p) · GW(p)
Lasker may have said this, but it also pre-dates him: http://en.wikipedia.org/wiki/Emmanuel_Lasker#Quotations
It's also not always good advice. Sometimes you should just satisfice. Chess is often one of these times, as you have a clock. If you see something that wins a rook, and spend the rest of your time trying to win a queen, you're not going to win the game.
Replies from: dxu, None
↑ comment by dxu · 2015-03-03T22:50:00.255Z · LW(p) · GW(p)
It's also not always good advice.
Of course it isn't. But I don't think that's a very good standard to be holding most forms of advice to. Very little advice is always good advice; nearly all sayings have exceptions. The fact is, however, that Lasker's (sort of Lasker's, anyway) quotation is useful most of the time, both in chess and out of chess (since unless you're playing a blitz game, you're likely to have plenty of time to think), and for a rationality quote, that suffices.
Replies from: ChristianKl, bentarm
↑ comment by ChristianKl · 2015-03-06T15:33:21.818Z · LW(p) · GW(p)
The fact is, however, that Lasker's (sort of Lasker's, anyway) quotation is useful most of the time
I don't think that's the case. On LW, I would expect that more people suffer from perfectionism than there are people who satisfice too readily.
Replies from: dxu
↑ comment by dxu · 2015-03-06T16:12:31.112Z · LW(p) · GW(p)
On LW, certainly. In general, no.
Replies from: Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2015-03-06T18:21:41.129Z · LW(p) · GW(p)
This raises an interesting question -- What should I do with Rationality Quotes entries which I think are preaching to the choir, i.e. they are good advice for most of the general population but most of the people who will actually read them here had better reverse? Should I upvote them or downvote them?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2015-03-31T17:41:14.658Z · LW(p) · GW(p)
Would you rather see more quotes like that?
Or fewer?
Or are you not sure?
↑ comment by bentarm · 2015-03-04T13:33:40.232Z · LW(p) · GW(p)
It's not at all obvious to me that the failure mode of not looking for a better move when you've found a good one is more common than the failure mode of spending too long looking for a better move when you've found a good one - in general, I think the consensus is that people who are willing to satisfice actually end up happier with their final decisions than people who spend too long maximising, but I agree that this doesn't apply in all areas, and that there are likely times when this would be useful advice.
In the particular example I gave, if you've already found a move that wins a rook, then it's all but irrelevant if you're missing a better move that wins a queen, as winning a rook is already equivalent to winning the game, but there are obviously degrees of this (it's obviously not irrelevant if you settle for winning a pawn and miss checkmate). This suggests you should be careful how you define a "satisficing" solution, but not necessarily that satisficing is a bad strategy (in the extreme, if your "good move" is a forced checkmate, then it's obviously a waste of time to look for a "better move", whatever that might mean).
Replies from: dxu
↑ comment by dxu · 2015-03-04T17:02:29.847Z · LW(p) · GW(p)
Hm... I'm not sure you're interpreting me all that charitably. You keep on mentioning a dichotomy between satisficing and maximizing, for instance, as if you think I'm advocating maximizing as the better option, but really, that's not what I'm saying at all! I'm saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking. Good satisficing =/= stopping at the first solution you see. This is especially common in programming, I find, where you generally aren't under a time limit (or at least, not a "sensitive" time limit in the sense that fifteen extra minutes will be significant), and yet people are often willing to settle for the first "working" solution they see, even though a little extra effort could have bought them a moderate-to-large increase in efficiency. You can consciously decide "I want to satisfice here, not maximize," but if you have a policy of stopping at the first "acceptable" solution, you'll miss a lot of stuff. I'm not saying satisficing is bad, or even that satisficing isn't as good an option as maximizing; I'm saying that even when satisficing, you should still extend your search depth by a small amount to ensure you aren't missing anything. (And I'm speaking from real life experience here when I say that yes, that is a common failure mode.)
In terms of the chess analogy (which incidentally I feel is getting somewhat stretched, but whatever), I note that you only mention options that are very extreme--things like losing rooks, queens, or getting checkmated, etc. Often, chess is more complicated than that. Should you move your knight to an outpost in the center of the board, or develop your bishop to a more active square? Should you castle, moving your king to safety, or should you try and recoup a lost pawn first? These are situations in which the "right" move isn't at all obvious, and if you spot a single "good" move, you have no easy way of knowing if there's not a better move lurking somewhere out there. Contrast the situation you presented involving winning a pawn versus checkmating your opponent; the correct move is easy to see there. In short, I feel your chess examples are a bit contrived, almost cherry-picked to support your position. (I'm not saying you actually did cherry-pick them, by the way; I'm just saying that's how it sort of feels to me.)
So basically, to summarize my position: when you're stuck dealing with a complicated situation, in chess and in life, halting your search at the first "acceptable" option is not a good idea. That's my claim. Not "maximizing is better than satisficing".
Replies from: Lumifer
↑ comment by Lumifer · 2015-03-04T21:23:29.025Z · LW(p) · GW(p)
I'm saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking.
Taken literally, this is obviously and trivially true. You get more resources, your solution is likely to improve.
But in the context, the benefit is not costless. Time (in particular in a chess game) is a precious resource -- to justify spending it you need cost-benefit analysis.
Your position offers no criteria and no way to figure out when you've spent enough resources (time) and should stop -- and that is the real issue at hand.
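One way to make such a stopping criterion concrete is a marginal-cost rule: keep searching while the expected improvement from one more look exceeds the value of the clock time it burns. Everything in this sketch (uniform move qualities, the time-cost constant) is an invented modeling assumption, not a claim anyone in the thread has made.

```python
import random

# Toy stopping rule for move search: stop when the expected marginal gain
# from one more candidate falls below an assumed per-look time cost.
random.seed(1)
TIME_COST = 0.005  # invented value of one unit of clock time
best = 0.0

for looks in range(1, 201):
    best = max(best, random.random())  # consider one more candidate move
    # For candidates uniform on [0, 1]: P(next beats best) = 1 - best, and
    # the mean improvement, conditional on beating it, is (1 - best) / 2.
    expected_marginal_gain = (1 - best) ** 2 / 2
    if expected_marginal_gain < TIME_COST:
        break

print(f"stopped after {looks} looks, settling on quality {best:.3f}")
```

Raising the assumed time cost makes the rule settle earlier; lowering it buys deeper search. That trade-off, not "always look once more," is the quantity in dispute here.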
Replies from: dxu
↑ comment by dxu · 2015-03-04T23:06:49.082Z · LW(p) · GW(p)
Time (in particular in a chess game) is a precious resource -- to justify spending it you need cost-benefit analysis.
Position is also a precious resource in chess. You need to structure your play so that the trade-off between time and position is optimal, and cutting off your search the moment you think of a playable move is not that trade-off. Evidence in favor:
- I've personally competed in several mid-to-high-level chess tournaments and have an Elo rating of 1853. Every time I've ever blundered, it's been because of a failure to give the position a second look. Furthermore, I can't recall a single time the act of giving the position a second look has ever led me to time trouble, except in the (trivial) sense that every second you use is precious.
- I have personally interacted with a great deal of other high-rated players, all of whom agree that you should in general think through moves carefully and not just play the first good-looking move that you see.
- Lasker, a world-champion-level player, was the one quoted as giving this advice, and according to Wikipedia (thanks, bentarm), the saying actually predates him. If the saying has survived this long, that's evidence in favor of it being true.
Your position offers no criteria and no way to figure out when you've spent enough resources (time) and should stop -- and that is the real issue at hand.
Nor am I claiming to offer such a way. I agree that the optimal configuration is difficult to identify, and furthermore that if it weren't so, a great deal of economics would be vastly simpler. My claim is a far weaker one: that whatever the optimal configuration is, stopping after the first solution is not it. This may sound trivial, and to a regular LW reader, it very well may be, but based on my observations, very few regular (as in not explicitly interested in self-improvement) people actually apply this advice, so it does seem important enough to merit a rationality quote dedicated to it.
Replies from: Lumifer, Lumifer
↑ comment by Lumifer · 2015-03-05T01:03:12.619Z · LW(p) · GW(p)
cutting off your search the moment you think of a playable move is not that trade-off ... stopping after the first solution is not it
You're successfully demolishing a strawman. Is anyone claiming what you are arguing against?
Replies from: None, dxu
↑ comment by [deleted] · 2015-03-05T01:08:58.853Z · LW(p) · GW(p)
Perhaps lesson is that all such sayings mere wisdom-facets, not whole diamond. Appreciate the facet for its beauty, yes, but understand that there are others, including the one most opposite on the other side...perhaps should be something generally understood in thread such as this.
Do not sense real disagreement in this conversation. Thinking has benefits, all agree, and thinking has costs, all agree...doubt Lasker himself waited to move until he knew he had the most perfect move, and yet he no doubt lost and observed others losing because of a move played too rashly....
Replies from: Lumifer
↑ comment by dxu · 2015-03-05T01:05:41.415Z · LW(p) · GW(p)
Is anyone claiming what you are arguing against?
No, which is why I feel Lasker's quote is a good rationality quote. If people are constantly expressing disagreement, that's evidence that something's wrong. (A decent level of disagreement is healthy, I feel, but not too much.) What happened is this: bentarm interpreted my position differently from what I intended and disagreed with his/her interpretation of my position, so I clarified said position and (hopefully) resolved the disagreement. If there's no longer anyone arguing against me, then that means I accomplished what I aimed to do.
↑ comment by [deleted] · 2015-03-03T22:49:09.768Z · LW(p) · GW(p)
It's also not always good advice.
Of course it isn't. But I don't think that's a very good standard to be holding most forms of advice to. Very little advice is always good advice; nearly all sayings have exceptions. The fact is, however, that Lasker's (sort of) quotation is useful most of the time, both in chess and out of chess (since unless you're playing a blitz game, you're likely to have plenty of time to think), and for a rationality quote, that suffices.
↑ comment by IlyaShpitser · 2015-03-18T12:39:57.066Z · LW(p) · GW(p)
It is worth noting that Lasker often played the opponent, not the board (e.g. he was known to pick a move he knew was not optimal, but which his opponent would find most uncomfortable). He would go for tactics vs. positional players, and for slow positional play vs. tactical players. He was very annoying to play against, apparently. He was also champion for 27 years, while maintaining an academic career.
See also "nettlesomeness":
↑ comment by Curiouskid · 2015-03-06T02:30:33.207Z · LW(p) · GW(p)
See also: "The Perfect/Great is the enemy of the Good"
Replies from: parabarbarian
↑ comment by parabarbarian · 2015-03-08T14:21:25.253Z · LW(p) · GW(p)
Without the Perfect, the Good would have no standard for measurement. This is especially important when making popcorn or building airplanes.
comment by Salemicus · 2015-03-03T16:46:24.668Z · LW(p) · GW(p)
The vanity of teaching often tempteth a Man to forget he is a Blockhead.
George Savile, 1st Marquess of Halifax, Political, Moral and Miscellaneous Reflections
comment by Salemicus · 2015-03-02T14:24:14.844Z · LW(p) · GW(p)
The mistakes are there, waiting to be made.
Savielly Tartakower, on the starting position in chess. Source.
Replies from: 27chaos, slicko
↑ comment by 27chaos · 2015-03-17T17:58:15.158Z · LW(p) · GW(p)
I don't play chess, or know how to play at all well, nor am I interested in learning. But are there any books by or about chess masters that I might find interesting, for teaching good habits of thought? Or even just a list of famous chess quotations?
Replies from: macrojams, IlyaShpitser, kamerlingh, None
↑ comment by macrojams · 2015-03-19T04:27:49.913Z · LW(p) · GW(p)
"Willy Hendriks, Move First, Think Later: Sense and Nonsense in Improving Your Chess. To me, more interesting as behavioral economics and as epistemology than as a chess book. The author claims that most chess advice is bad, and that we figure out positional strategies only by trying concrete moves, not by applying general principles. You do need chess knowledge to profit from the book, but if you can manage it, it is one of the best books on how to think that I know. - See more at: http://marginalrevolution.com/marginalrevolution/2013/04/what-ive-been-reading-24.html#sthash.PdwwzDJR.dpuf"
- Tyler Cowen
↑ comment by IlyaShpitser · 2015-03-17T22:02:03.388Z · LW(p) · GW(p)
Chess Fundamentals by Capablanca. Still the best book on learning positional chess, and in general on "good taste" in position evaluation. There is a certain clarity of thought in this book. I am not sure how useful it is or whether it can "rub off."
Available for free.
I think there are some vaguely autobiographical things by Botvinnik on preparing for matches, but it's more about discipline than thought habits.
↑ comment by kamerlingh · 2015-03-20T21:10:12.825Z · LW(p) · GW(p)
The Art of Learning: A Journey in the Pursuit of Excellence by Josh Waitzkin is the memoir of a chess child prodigy who later became a Tai Chi Chuan world champion. It's organized around his advice on developing the good habits of thought that he discovered when he was training for chess. But they are applicable to many domains: he makes the argument that the habits that made him excel at chess were also what made him a world-class competitor in Tai Chi Chuan.
↑ comment by slicko · 2015-03-13T18:58:42.989Z · LW(p) · GW(p)
Luckily you only have to make fewer mistakes than your opponent to win.
Replies from: None↑ comment by [deleted] · 2015-03-19T08:38:04.950Z · LW(p) · GW(p)
Describing good play as "making few mistakes" seems like the wrong terminology to me. A mistake is not a thing, in and of itself, it's just the entire space of possible games outside the very narrow subset that lead to victory. If you give me a list of 100 chess mistakes, you've actually told me a lot less about the game than if you've given me a list of 50 good strategies -- identifying a point in the larger space of losing strategies encodes far less information than picking one in the smaller space of winning.
And the real reason I'm nitpicking here is because my advisor has always proceeded mostly by pointing out mistakes, but rarely by identifying helpful, effective strategies, and so I feel like I've failed to learn much from him for very solid information-theoretic reasons.
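A rough way to make the nitpick quantitative (a toy sketch with assumed numbers, not the commenter's): if you are hunting for one of a handful of winning moves among a huge number of candidates, being told that one particular move is a mistake barely narrows the search, while being handed a winning move ends it.

```python
import math

# Toy numbers (assumed for illustration): one winning move hiding among
# a million candidates, searched by uniform guessing.
total_moves = 1_000_000
prior_bits = math.log2(total_moves)                       # ~19.93 bits of uncertainty
# Learning "that particular move is a mistake" removes one candidate:
bits_from_mistake = prior_bits - math.log2(total_moves - 1)
# Being handed a winning move resolves the whole search:
bits_from_strategy = prior_bits

print(f"one named mistake:  {bits_from_mistake:.2e} bits")   # ~1.4e-06
print(f"one named strategy: {bits_from_strategy:.2f} bits")  # ~19.93
```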
Replies from: gjm, dxu, slicko, PlacidPlatypus↑ comment by gjm · 2015-03-25T02:23:09.487Z · LW(p) · GW(p)
my advisor [...]
Have you discussed this with him? Perhaps he hasn't noticed this and would be delighted to talk strategies. Perhaps he has a reason (good or bad) for doing as he does. (E.g., he may think that you'll learn more effectively by finding effective strategies for yourself, and that pointing them out explicitly will stunt your development in the longer run.) Perhaps his understanding of effective strategies is all implicit and he can't communicate it to you explicitly.
Replies from: None↑ comment by [deleted] · 2015-03-30T20:38:57.145Z · LW(p) · GW(p)
I've tried talking to him about it: he really does seem to possess only implicit understanding of what works and what doesn't. Well, that, and it just doesn't seem to occur to him, even upon my repeated requests, to lay out guidelines ahead of time.
↑ comment by dxu · 2015-03-20T01:33:58.762Z · LW(p) · GW(p)
it's just the entire space of possible games outside the very narrow subset that lead to victory.
Actually, most chess players define a mistake as a move that falls outside the subset of moves that either maintains equality OR leads to victory. This classification significantly reduces the size of mistake-space in chess.
Replies from: None↑ comment by PlacidPlatypus · 2015-03-24T19:52:12.363Z · LW(p) · GW(p)
A mistake is not a thing, in and of itself, it's just the entire space of possible games outside the very narrow subset that lead to victory.
Minor nitpick, surely you mean possible moves, rather than possible games? The set of games that lead to defeat is necessarily symmetrical with the set that lead to victory, aside from the differences between black and white.
comment by Vaniver · 2015-03-02T03:20:41.200Z · LW(p) · GW(p)
You cannot change and yet remain the same, though this is what most people want.
Replies from: shminux, noitanigami
↑ comment by Shmi (shminux) · 2015-03-02T03:34:55.675Z · LW(p) · GW(p)
An interesting quote, but isn't it basically the definition of identity? The part that remains the same while changing all the while?
Replies from: Vaniver↑ comment by noitanigami · 2015-03-16T18:01:20.932Z · LW(p) · GW(p)
Changing while remaining the same is what Algebra is all about. Identify the quality you wish to hold invariant, then find the transformations that do so. Changing things while leaving them the same in important ways is how problems are solved.
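A minimal sketch of the idea (my example, not the commenter's): here the invariant is an equation's solution set, and "do the same thing to both sides" is a transformation that changes the form while preserving what matters.

```python
# Collect the integer solutions of f(x) == g(x) over a small search range.
solutions = lambda f, g: {x for x in range(-100, 101) if f(x) == g(x)}

# Original equation: 2x - 3 = 7
before = solutions(lambda x: 2 * x - 3, lambda x: 7)
# Transformed equation (added 5 to both sides): 2x + 2 = 12
after = solutions(lambda x: 2 * x + 2, lambda x: 12)

assert before == after == {5}  # the form changed; the invariant did not
```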
comment by Epictetus · 2015-03-02T04:24:12.854Z · LW(p) · GW(p)
Having got on well by adopting a certain line of conduct, it is impossible to persuade men that they can get on well by acting otherwise. It thus comes about that a man's fortune changes, for she changes his circumstances but he does not change his ways.
-Niccolo Machiavelli, The Discourses
comment by ITakeBets · 2015-03-07T16:04:55.916Z · LW(p) · GW(p)
Any model makes some inaccurate predictions but models can retain utility despite significant propensities for inaccuracy. Inaccurate predictions aid the choice of models for future predictions. Because of this, the central scientific problem in the computational study of the MBH mechanism is not the inaccuracy of the predictions. Rather, it is the absence of any particular prediction at all.
--R. Erik Plata and Daniel A. Singleton, A Case Study of the Mechanism of Alcohol-Mediated Morita Baylis-Hillman Reactions. The Importance of Experimental Observations.
comment by seer · 2015-03-03T04:19:04.242Z · LW(p) · GW(p)
When writing the history, the writer is sitting outside the time, in Olympian detachment, surveying what was said and done, with the knowledge of what overhyped fads will fall by the wayside, and what ignored actions will prove to be crucial. He hasn't got that for the present era; the writer is still meshed in the circumstances that lead to the hyping and the ignoring. Not to mention that he is very likely to be a partisan in the matter -- most who write histories of a thing are passionately attached to the thing itself. Which can also lead to a shocking change in tone in the last chapter too, as the calm recitation of facts gives way to the sound of axes grinding, even if the writer manages to make interesting observations.
The irony is that anyone who's done the history of things will have read, in his research, many, many, many writers making idiots of themselves because they do not realize they are enmeshed in their era, and yet this does not stop him from doing the same thing over again.
comment by Lumifer · 2015-03-04T21:16:13.109Z · LW(p) · GW(p)
"Politics selects for people who see the world in black and white, then rage at all the darkness" -- Megan McArdle
Replies from: ChristianKl↑ comment by ChristianKl · 2015-03-06T15:10:33.912Z · LW(p) · GW(p)
Which people do you mean by that?
Politicians might talk in terms of black and white to appeal to voters but most of them don't think that way.
Replies from: Zubon, Lumifer↑ comment by Zubon · 2015-03-14T13:47:00.281Z · LW(p) · GW(p)
When you talk in terms of black and white all the time, it is very easy to forget that you don't think that way.
Replies from: ChristianKl↑ comment by ChristianKl · 2015-03-16T00:33:44.733Z · LW(p) · GW(p)
When you talk in terms of black and white all the time, it is very easy to forget that you don't think that way
This looks like a result of mind kill.
The fact that you let yourself be blinded by someone's strategy means that you fail at reasoning. It doesn't help to moralize.
comment by Salemicus · 2015-03-04T11:09:58.676Z · LW(p) · GW(p)
Anger is an evolutionary strategy that helps us deal with threats. It focuses our mind on the target, suppresses our fear and drives us to attack.
Anger is not evolution's answer to generic "threats." You don't get angry at the saber-toothed tiger charging you. Rather, it is a response to threats to social cohesion. People who break the rules make us angry even when they don't directly harm us. It's why people find themselves yelling at pedestrians who cross against the light even when the delay to the driver is a matter of seconds.
That's why politicians are angry: because they are trying to artificially create a sense of social cohesion in their coalition of voters.
Rob Lyman, in a discussion of why so many politicians have an angry persona.
Replies from: None↑ comment by [deleted] · 2015-03-04T12:59:34.671Z · LW(p) · GW(p)
You don't get angry at the saber-toothed tiger charging you.
The what? Rob never stubbed a toe in the dark and then launched an angry tirade on the offending piece of furniture?
The number of times I told my first, very bad car to eat a bag o' penises is, well, high.
And there is the saying that programmers know the language of swearing best - many bugs make one angry, not angry at anything in particular, just angry at the situation in general. Like, why the eff did this have to happen to me when I need to run this script before I can go home? Aaargh. That kind of thing.
Replies from: Salemicus↑ comment by Salemicus · 2015-03-04T13:19:19.071Z · LW(p) · GW(p)
The furniture was put there by a thoughtless person not following the social rules. The bad car was built by shoddy engineers not living up to your expectations. The bug in the code was put there by a careless programmer not following agreed practice. And so on.
Your examples merely serve to reinforce the notion that what makes us angry is people breaking the (possibly unwritten) rules or violating social cohesion.
Replies from: satt, None↑ comment by satt · 2015-03-05T01:47:35.634Z · LW(p) · GW(p)
Your examples merely serve to reinforce the notion that what makes us angry is people breaking the (possibly unwritten) rules or violating social cohesion.
That clashes with my introspection, unlike DeVliegendeHollander's account. When I stub my toe in the dark and start swearing, my thoughts are not anything to do with social rules or their violation (at least not at a conscious level); typically no one else is around, no other person enters my mind, and I'm just annoyed that I'm unnecessarily experiencing pain, and that annoyance doesn't feel like it has a moral element to it. It feels like a straightforward reaction to unexpected, benefit-free pain.
↑ comment by [deleted] · 2015-03-04T13:21:12.510Z · LW(p) · GW(p)
Sounds rather forced to me. How about a simpler hypothesis that anger is frustration, the expression of the bad feelings coming from expectations not being fulfilled?
Replies from: Salemicus, ChristianKl↑ comment by Salemicus · 2015-03-04T13:48:22.854Z · LW(p) · GW(p)
So would you get angry if a sabre-toothed tiger charged at you when you weren't expecting it? Do you get angry when a clear day gives way to rain? Do you get angry when a short story has a twist ending?
Expectations not being fulfilled doesn't necessarily cause anger. It may lead to sadness, or laughter, or fear, or disappointment, or any number of emotions. But it normally only leads to anger when the frustrated expectation is about social rules.
Replies from: Good_Burning_Plastic, Desrtopa, None↑ comment by Good_Burning_Plastic · 2015-03-05T08:55:00.073Z · LW(p) · GW(p)
FWIW, Salemicus::anger ("how dare you!") and annoyance feel slightly but not very different to my System 1, much more similar to each other than, say, the various feelings that English labels as "love", and I don't normally feel the need of using different words for the two unless I want to be pedantic.
I realize that anger is supposed to be what "They offered me a lousy offer in this Ultimatum game so I'd better turn it down even if I CDT::will be worse off otherwise people TDT::would continue to make me similarly lousy offers" feels like from the inside, but my System 1 has only a vague understanding of that, let alone of the fact that inanimate objects aren't actually playing Ultimatum with me (and I can't be alone on this last point, otherwise no-one would have ever hypothesised that lightning came from Zeus), but YMMV.
BTW, are you two native English speakers? (FTR I'm not.) This might be a case of languages labeling feeling-space differently, rather than or as well as people's feeling-spaces being different.
Replies from: None, Salemicus↑ comment by [deleted] · 2015-03-05T13:59:59.537Z · LW(p) · GW(p)
I am not, but I got convinced by Salemicus's argument. I realized that what I translate as "anger at the weather" is better translated as "being mad at the weather" or "being pissed at the weather" and anger here is not something like a short fuck-you feeling but more like the urge to launch a long rant or dressing-down.
↑ comment by Salemicus · 2015-03-05T09:25:22.920Z · LW(p) · GW(p)
I am a native speaker, yes.
I find it interesting that our intuitions clash so. I immediately found RL's account compelling on this basis, whereas others did not. This could be a case of different labelling, or even of different emotional experience.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-03-06T18:34:00.488Z · LW(p) · GW(p)
I find it interesting that our intuitions clash so. I immediately found RL's account compelling on this basis, whereas others did not. This could be a case of different labelling, or even of different emotional experience.
The weirdest thing is that I do have the intuition "corresponding" (FLOABW) to the fact that if deterring someone from doing something can work in principle it might be a good idea to try, but if it cannot possibly work it makes no sense to try (the "Sympathy or Condemnation" section of the "Diseased thinking" post makes perfect sense to me); when Mencius Moldbug pointed out that people react to the threat of anthropogenic global warming differently from the way they'd react to hypothetical global warming due to the Sun, I knew exactly what he was talking about. But Rob Lyman's example is a very poor choice of a pointer to that intuition for me, exactly because it points me to stuff like stubbing a toe in the dark instead.
Replies from: seer↑ comment by seer · 2015-03-07T02:40:02.267Z · LW(p) · GW(p)
when Mencius Moldbug pointed out that people react to the threat of anthropogenic global warming differently from the way they'd react to hypothetical global warming due to the Sun,
That's perfectly rational behavior. The two causes give different predictions about likely future warming.
Replies from: Good_Burning_Plastic↑ comment by Good_Burning_Plastic · 2015-03-07T17:53:48.164Z · LW(p) · GW(p)
He explicitly specified that the predicted increase of radiative forcing due to solar activity in his hypothetical would equal the predicted increase of radiative forcing due to greenhouse gases in the real world.
Sure, there is still a difference between the two situations akin to that described in the Diseased Thinking post I linked upthread, in that shaming people into not emitting as much CO2 might in principle work whereas shaming the Sun into not shining as much cannot possibly work (though Moldbug still has a point as the cost-effectiveness of the former is probably orders of magnitude less than most people would guess). I know you can't shame a saber-toothed tiger into not charging you either, but still Moldbug's example worked for me and Lyman's didn't for whatever reason.
EDIT: Might be because I'd think of an increase in the solar constant in Far Mode but I'd think of a saber-toothed tiger in Near Mode.
↑ comment by Desrtopa · 2015-03-27T02:54:11.717Z · LW(p) · GW(p)
So would you get angry if a sabre-toothed tiger charged at you when you weren't expecting it? Do you get angry when a clear day gives way to rain? Do you get angry when a short story has a twist ending?
For me at least, the answers are no, yes, and no respectively. We can further refine the prior hypothesis by stipulating that the bad feelings arise from expectations not being fulfilled in an unpleasant way, which would stop it from generating the third situation as an example. As for the first, perhaps one might experience anger if it were not being overridden by the more pressing reaction of fear. Or perhaps the hypothesis is off base, but it seems to generate some correct predictions of anger which the hypothesis that anger only arises from frustrated expectations about social rules fails to generate.
↑ comment by [deleted] · 2015-03-04T13:55:58.328Z · LW(p) · GW(p)
My intuitive answer would be yes, but now I am realizing that for me sadness or fear is probably much closer to anger than for you. In my mind they all are "feel bad, be unhappy and express it too".
I suppose if we define anger in a very granular and precise way - not just as a general bad feeling or "being mad at" something, but more like giving a long rant - it can only apply to humans. I will swear at the rain, but only briefly, to let off steam; I will not give it a long angry rant. I will be "mad at it", but not angry in that clearly social sense.
Halfway conceded: anger in the very granular sense only applies to humans.
But: can you think of a counter-example where 1) humans violate our expectations, 2) but it is not a social rule or cohesion violation - and do we get angry or not?
This is very tricky, because our expectations are, of course, based on social rules! Usually. Now I am searching for a case when not.
Replies from: Salemicus↑ comment by Salemicus · 2015-03-04T14:02:30.960Z · LW(p) · GW(p)
Can you think of a counter-example where 1) humans violate our expectations, 2) but it is not a social rule or cohesion violation - and do we get angry or not?
I already did give such an example - a short story with a "twist" ending. Such an ending violates our expectations (that's what makes it a "twist") but it doesn't break any social rule, so people often find these amusing, clever, etc. On the other hand, a "twist" ending in a context where there is a social rule against such endings might well make people angry - for example, if the recent movie Exodus: Gods and Kings had ended with the Israelites being drowned in the Red Sea and the Pharaoh triumphant, that would no doubt have upset many viewers.
Replies from: None↑ comment by [deleted] · 2015-03-04T14:26:50.786Z · LW(p) · GW(p)
Hmmm... most social rules generally want people to behave in predictable ways, for various reasons, so they avoid surprises. It seems almost as if surprises are only allowed in special cases...
I almost accept your point now, but one objection. A good and a weak soccer team play a match. Surprisingly, the weaker one wins. It was fair play; nobody violated a rule. Still the fans of the losing one are angry - at their own team, because how could they let a much weaker team win? Is that a social rule violation - that if you are generally better you are never allowed to lose? Or just an expectation violation? Or is it more of a bias on the side of the fans: their team must have violated the rule to try hard and not be lazy, because they cannot imagine any other explanation?
If you generally agree, I accept your point with a modification: anger is about perceived social rule violation, but people are not perfect judges of social rule violations; there are mistakes made both ways, including tendentious, bias-driven mistakes.
Thus, as in my soccer example, sometimes all you see at first is a violated expectation. You see no rule violation. Then you need to figure out why exactly other people may think it is a rule violation. This is not always easy and we don't do it that often, so we often just see a violated expectation and do not see how others perceive it as a rule violation.
Replies from: None↑ comment by [deleted] · 2015-03-05T08:36:44.185Z · LW(p) · GW(p)
I just want to say I am glad to have lost this debate, because it is working - for me. I mean, yesterday I was able to manage my anger better by asking myself questions like "what social rule do I think is broken here? Is it a real one, or just my wish? If real, is it a reasonable one?" Even when the answer was yes/yes, just being conscious of it worked.
I think I will shamelessly steal and apply this idea in discussions where it can be useful. Thanks a lot.
↑ comment by ChristianKl · 2015-03-06T15:16:33.739Z · LW(p) · GW(p)
anger is frustration
I think according to the common usage of the terms, they refer to different emotions.
Anger is a state of energy. Frustration is a rather passive state.
Anger doesn't get triggered by every unfulfilled expectation. It gets triggered when things aren't as they "should" be. If you think you don't deserve what you are expecting, you get frustrated upon not getting it, but not angry.
Replies from: dxu↑ comment by dxu · 2015-03-06T16:21:52.547Z · LW(p) · GW(p)
Anger doesn't get triggered by every unfulfilled expectation. It gets triggered when things aren't as they "should" be.
And since the concept of "should" evolved as a primarily social mechanism, it makes sense that anger would be triggered by (perceived) social affronts.
comment by csvoss (Terdragon) · 2015-03-22T19:57:11.929Z · LW(p) · GW(p)
But, above all, there is the conviction that the pursuit of truth, whether in the minute structure of the atom or in the vast system of the stars, is a bond transcending human differences.
-- Arthur Eddington, "The Future of International Science", as quoted in An Expedition to Heal the Wounds of War: the 1919 Eclipse Expedition and Eddington as Quaker Adventurer
comment by Zubon · 2015-03-14T14:08:52.286Z · LW(p) · GW(p)
Gordon [Tullock] was on my dissertation committee. After reading all 252 pages of my dissertation within twelve hours of my submitting it, Gordon caught me in the Public Choice hallway at Virginia Tech to give me his assessment: "Minimal but acceptable." To which I replied, "Optimal. Done!"
-- Richard McKenzie, quoted on Econlog
Replies from: Zubon↑ comment by Zubon · 2015-03-14T14:11:11.258Z · LW(p) · GW(p)
Related engineer joke: "anybody can build a bridge that won’t collapse–but it takes a real engineer to build a bridge that just barely avoids collapse."
comment by [deleted] · 2015-03-04T02:31:46.947Z · LW(p) · GW(p)
As to a "science" of human conduct, I have mentioned some difficulties, notably that one of the most distinctive traits of man is make-believe, hypocrisy, concealment, dissimulation, deception. He is the clothes-wearing animal, but the false exterior he gives to his body is nothing to that put on by his mind.
Frank Knight, "The Role of Principles in Economics and Politics" p.11
comment by FairWitness · 2015-10-27T01:52:08.962Z · LW(p) · GW(p)
Probably not found anywhere online: my favorite college professor, Ernest N. Roots, used to say, "Things that are simply remarkable become remarkably simple once they are understood." This has been my personal defense against arguments from ignorance ever since.
Replies from: Vaniver↑ comment by Vaniver · 2015-10-27T13:47:44.941Z · LW(p) · GW(p)
Welcome to LW! We post a new Rationality Quotes thread every month; the current one is October 2015 for a few more days, but you can find a link to the most recent one on the right sidebar if you're looking at Main (the header "Latest Rationality Quote" is a link to the page, above a link to the latest quote).
comment by alienist · 2015-03-02T06:18:10.782Z · LW(p) · GW(p)
No, science is not a set of answers; it is a procedure.
Replies from: James_Miller
↑ comment by James_Miller · 2015-03-05T02:48:25.047Z · LW(p) · GW(p)
True for scientists. But for most people, science is indeed a set of answers.
Replies from: satt, soreff↑ comment by satt · 2015-03-07T22:03:57.058Z · LW(p) · GW(p)
I am a scientist, albeit the most junior kind of scientist, and I reckon "science" can legitimately refer to a set of answers or a methodology or an institution.
I doubt anyone in this thread would object if I called a textbook compiling scientific discoveries a "science textbook". I'm not sure even Taleb would blink at that (if it were in a low-stakes context, not in the midst of a heated argument).
Replies from: Grant↑ comment by Grant · 2015-03-26T06:57:22.080Z · LW(p) · GW(p)
The information in a science textbook is (or should be) considered scientific because of the processes used to vet it. Absent this process, it's just conjecture.
I often wonder if this position is unpopular because of its implications for economics and climatology.
comment by [deleted] · 2015-03-04T09:13:40.137Z · LW(p) · GW(p)
Eric S. Raymond: "Interesting human behavior tends to be overdetermined."
Example sources:
http://esr.ibiblio.org/?p=4213
http://esr.ibiblio.org/?m=20020525
http://esr.ibiblio.org/?p=6599
Replies from: arundelo↑ comment by arundelo · 2015-03-04T14:38:08.012Z · LW(p) · GW(p)
I didn't understand this quote out of context so I followed one of the links and he explains it in this comment:
It's something I learned from animal ethology. An "overdetermined" behavior is one for which there are multiple sufficient explanations. To unpack: "For every interesting behavior of animals and humans there is more than one valid and sufficient causal theory." Evolution likes overdetermined behaviors; they serve multiple functions at once.
comment by alienist · 2015-03-02T06:16:44.667Z · LW(p) · GW(p)
Science is the belief in the ignorance of experts.
Richard Feynman, What is Science?
Replies from: gedymin
comment by hargup · 2015-03-11T08:55:43.663Z · LW(p) · GW(p)
Facts push other facts into and then out of consciousness at speeds that neither permit nor require evaluation.
Neil Postman from Amusing ourselves to Death, p 70
Replies from: JoshuaZ
comment by Vulture · 2015-03-18T14:32:21.950Z · LW(p) · GW(p)
Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is "wishful thinking." You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself.
-- C. S. Lewis
Replies from: lmm, Jiro↑ comment by lmm · 2015-03-20T21:20:19.550Z · LW(p) · GW(p)
This seems obviously false. Am I missing something?
Replies from: g_pepper↑ comment by g_pepper · 2015-03-21T23:45:34.128Z · LW(p) · GW(p)
I think that C.S. Lewis means that when a person puts forth an assertion, you should ascertain the truth or falsity of the assertion by examining the assertion alone; the mental state of the person making the assertion is irrelevant.
Presumably Lewis is arguing against the genetic fallacy, or more specifically, Bulverism.
Edit: Why the downvote? My comment was fairly non-controversial (I thought).
Replies from: Jiro↑ comment by Jiro · 2015-03-22T02:13:54.829Z · LW(p) · GW(p)
Whether a belief is wishful thinking is inherently an assertion about the mental state of a person. It is meaningless to say that you should examine the assertion instead of the mental state, since the assertion is an assertion about the mental state.
Replies from: g_pepper↑ comment by g_pepper · 2015-03-22T03:49:49.167Z · LW(p) · GW(p)
I don't know about that. Merriam Webster defines wishful thinking as:
an attitude or belief that something you want to happen will happen even though it is not likely or possible
So if my calculations are accurate, per Merriam Webster's definition, I have not engaged in wishful thinking.
↑ comment by Jiro · 2015-03-18T15:40:11.738Z · LW(p) · GW(p)
Something can be wishful thinking and true at the same time. Doing the sum wouldn't prove that it's not wishful thinking.
Of course having the sum be correct is a necessary condition for non-wishful thinking, but it does not determine the existence of non-wishful-thinking all by itself.
Replies from: DanielLC↑ comment by DanielLC · 2015-03-21T23:18:09.516Z · LW(p) · GW(p)
Of course having the sum be correct is a necessary condition for non-wishful thinking,...
No it's not. You can be wrong for reasons other than wishful thinking.
Replies from: Jiro↑ comment by Jiro · 2015-03-22T02:12:04.433Z · LW(p) · GW(p)
When A is being correct and B is wishful thinking, what I said is that A implies B, which reduces to (B || ~A). What you're saying is that ~A does not imply ~B, which reduces to (B && ~A). Of course, these two statements are compatible.
Replies from: DanielLC↑ comment by DanielLC · 2015-03-22T02:37:41.583Z · LW(p) · GW(p)
When A is being correct and B is wishful thinking, what I said is that A implies B
I think you messed up there. Being correct certainly doesn't imply wishful thinking. You were saying that non-wishful thinking implies being correct. That is ~B implies A. Or ~A implies B, which is equivalent.
If I checked my balance and due to some bank error was told that I had a large balance, I would probably have the sum be incorrect but still be using non-wishful thinking. The sum being correct is not a necessary condition for non-wishful thinking. All the other combinations are possible as well, though I don't feel like going through all the examples.
Replies from: Jiro↑ comment by Jiro · 2015-03-22T02:58:46.020Z · LW(p) · GW(p)
You're right, I meant to say that B implies A, not to say that A implies B. However, that is still equivalent to (B || ~A) so the rest, and the conclusion, still follow.
Replies from: DanielLC↑ comment by DanielLC · 2015-03-22T06:16:01.298Z · LW(p) · GW(p)
B implies A would be wishful thinking implies that you are correct. This is obviously false. You clearly intended to have a not in there somewhere. Double check your definitions.
I was giving an example of (~A && ~B). If you want an example of (A && B), it would be that I don't even look at my statements and just assume that I have tons of money because that would be awesome, but I also just happen to have lots of money.
Replies from: Jiro↑ comment by Jiro · 2015-03-22T15:06:20.810Z · LW(p) · GW(p)
B implies A would be wishful thinking implies that you are correct. This is obviously false.
It being a law of the Internet that corrections usually contain at least one error, that applies to my own corrections too. In this case the error is the definitions of A and B.
A=being correct, B=non-wishful-thinking.
"Having the sum be correct is a necessary condition for non-wishful thinking" means B implies A, which in turn is equivalent to (B || ~A).
"You can be wrong for reasons other than wishful thinking" means ~(~B implies ~A), which is equivalent to ~(~B || A), which is equivalent to B && ~A.
Same conclusions as before, and they're still not inconsistent.
Replies from: DanielLC↑ comment by DanielLC · 2015-03-22T19:16:38.404Z · LW(p) · GW(p)
A=being correct, B=non-wishful-thinking.
Now that we have that out of the way, we can start communicating.
A counterexample to (B || ~A) would be (~B && A), so wishful thinking while still being correct. As I said in my last post, you just assume you have a lot of money because it would be awesome, and by complete coincidence, you actually do have a lot of money.
Now that we have established the language correctly, and having looked through my first post again: you are correct and I misread it. I tried to go back and count through all the mistakes that led to our mutual confusion, and I just couldn't do it. We have layers of mistakes explaining each other's mistakes.
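For readers untangling the algebra above, a small truth-table check (a sketch using the final labeling, A = "the sum is correct", B = "the thinking was non-wishful") makes the relationships mechanical rather than a matter of careful reading:

```python
from itertools import product

# Enumerate all four truth assignments and evaluate the two key formulas.
for A, B in product([False, True], repeat=2):
    b_implies_a = (not B) or A    # "correctness is necessary for non-wishful thinking"
    counterexample = B and not A  # non-wishful yet wrong (the bank-error case)
    print(f"A={A!s:5} B={B!s:5}  B->A={b_implies_a!s:5}  B&~A={counterexample}")

# The single row where B->A is false is exactly the row where (B and not A)
# is true: "wrong for reasons other than wishful thinking" is the one way
# for the necessity claim to fail.
```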
comment by WalterL · 2015-03-02T21:31:53.650Z · LW(p) · GW(p)
History teaches us, gentlemen, that great generals remain generals by never underestimating their opposition.
Gen. Antonio Lopez de Santa Anna: The Alamo: Thirteen Days to Glory (1987) (TV)
Replies from: fortyeridania, DanielLC↑ comment by fortyeridania · 2015-03-02T21:58:35.809Z · LW(p) · GW(p)
Overestimating can be costly too. That's why bluffing can work, in poker as in war.
Examples/articles:
100 horsemen and the empty city (gated). Here are two articles summarizing the original paper: Miami SBA and ScienceDaily
↑ comment by PeterisP · 2015-03-03T09:52:34.110Z · LW(p) · GW(p)
The most important decisions are made before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means that you just lose out on some opportunistic conquests; but underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more - there are many nice examples of that in 20th century history.
Replies from: fortyeridania, ChristianKl↑ comment by fortyeridania · 2015-03-03T16:41:03.828Z · LW(p) · GW(p)
Peace or cold war are not the only possible outcomes. Surrender is another. An example is the conquest of the Aztecs by Cortez, discussed here, here, and here. Surrender can (but need not) have disastrous consequences too.
↑ comment by ChristianKl · 2015-03-06T15:30:41.109Z · LW(p) · GW(p)
Generals are not the people who decide whether or not a war gets fought, but the people who decide individual battles.
↑ comment by DanielLC · 2015-03-04T21:43:17.437Z · LW(p) · GW(p)
If you're unbiased then you should be underestimating your opposition about half the time.
Replies from: Lumifer, Desrtopa↑ comment by Lumifer · 2015-03-04T21:45:25.866Z · LW(p) · GW(p)
If your loss function is severely skewed, you do NOT want to be unbiased.
Replies from: DanielLC, fortyeridania↑ comment by DanielLC · 2015-03-05T02:40:01.039Z · LW(p) · GW(p)
What you want is to have a distribution. You will expect your opposition to be about as strong as it is. You will prepare for the possibility that it is stronger or weaker.
Replies from: Lumifer↑ comment by Lumifer · 2015-03-05T02:53:14.446Z · LW(p) · GW(p)
A distribution is nice but often you have to commit to a choice. In such cases you generally want to minimize your expected loss (or maximize the gain) and if the loss function is lopsided, the forecast implied by the choice can be very biased indeed.
Replies from: TheMajor↑ comment by TheMajor · 2015-03-11T09:19:29.787Z · LW(p) · GW(p)
Even with a very skewed loss function you want to have an accurate estimate of your opposition, which will be an underestimate about half of the time, and then take excessive precautions. Your loss function does not influence your beliefs, only your actions.
Replies from: Lumifer↑ comment by Lumifer · 2015-03-11T15:29:33.488Z · LW(p) · GW(p)
Your loss function does not influence your beliefs, only your actions.
Yes, but actions are what you should care about -- if those are determined, your beliefs (which in this case do not pay rent) don't matter much.
Replies from: TheMajor↑ comment by TheMajor · 2015-03-11T15:43:28.469Z · LW(p) · GW(p)
If your loss function is severely skewed, you do NOT want to be unbiased.
.
Actions are what you should care about -- if those are determined, your beliefs (which in this case do not pay rent) don't matter much.
So why would I want to bias myself after I've decided to take excessive precautions?
I think we're in agreement, btw: we care about actions, and if you have a very skewed loss function then it is rational to spend a lot of effort on improbable scenarios in which you lose heavily, which from the outside looks similar to a person with a less skewed loss function thinking that those scenarios are actually plausible. I was just trying to point out that DanielLC's reply was correct and your previous one is not - even with a skewed loss function this should not produce feedback to the actual beliefs, only to your actions. So no, you DO want to be unbiased; it's just that an unbiased estimate/posterior distribution can still lead to asymmetric behaviour (by which I mean spending an amount of time/effort to prepare for a possible future disproportionate to the actual probability of this future occurring).
Replies from: Lumifer↑ comment by Lumifer · 2015-03-11T16:02:42.653Z · LW(p) · GW(p)
Well, let me unroll what I had in mind.
Imagine that you need to estimate a single value, a real number, and your loss function is highly skewed. For me this would work as follows:
- Get a rough unbiased estimate
- Realize that I don't care about the unbiased estimate because of my loss function
- Construct a known-biased estimate that takes into account my loss function
- Take this known-biased estimate as the estimate that I'll use from now on
- Formulate a course of action on the basis of the biased estimate
The point is that on the road to deciding on the course of action it's very convenient to have a biased estimate that you will take as your working hypothesis.
Replies from: TheMajor↑ comment by TheMajor · 2015-03-11T16:36:46.429Z · LW(p) · GW(p)
Yes. My point is that this new biased estimate is not your 'real estimate' - this is simply not your best guess/posterior distribution given your information. But as I remarked above your rational actions given a skewed loss function resemble the actions of a rational agent with a less risk-averse loss function with a different estimate, so in order to determine your actions you can compute what [an agent with a less skewed loss function and your (deliberately) biased estimate] would do, and then just copy those actions.
But despite all of this, you still want to be unbiased. It's fine to use the computational shortcut mentioned above to deal with skewed loss functions, but you need your beliefs to stay as accurate as possible to not get strange future behaviour. A small, simplified example:
Suppose you are in possession of $1001 total (all your assets included), and it costs $1000 to buy a cure for a fatal disease you happen to have/a ticket to heaven/insurance for cryonics. You most definitely don't want to lose more than one dollar. Then a guy walks up to you and offers a bet - you pay $2, after which you are given a box which contains between $0 and $10, with uniform probability (this strange guy is losing money, yes). Clearly you don't take the bet - since you don't actually care much whether you have $1000 or $1001 or $1009, but would be terribly sad if you had only $999. But instead of doing the utility calculation you can also absorb this into your probability distribution over the box - you only care about scenarios where the box contains less than a dollar, so you focus most of your attention on these, and estimate that the box will contain less than a dollar. The problem now arises if you happen to find a dollar on the street - it is now a good idea to buy a box, although the agents who have started to believe the box contains at most a dollar will not buy it.
To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process 'estimates', since they in no way represent your beliefs.
P.S.: The example above isn't all that great, it was the best I could come up with right now. If it is unclear, or unclear how the example is (supposedly) related to the discussion above, I can try to find a better example.
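The toy example checks out numerically (a sketch; the dollar figures are the commenter's, the simulation is mine): the box has positive expected dollar value, yet under the step utility "I must keep at least $1000 for the cure" the bet is correctly declined - the loss function, not a biased estimate, drives the refusal.

```python
import numpy as np

rng = np.random.default_rng(1)
box = rng.uniform(0, 10, size=1_000_000)   # box contents, uniform on [0, 10]

wealth_no_bet = 1001.0
wealth_bet = 1001.0 - 2.0 + box            # pay $2, receive the box

# Step utility: 1 if you can still afford the $1000 cure, else 0.
utility = lambda w: (w >= 1000).astype(float)

print("expected $ gain from bet:", np.mean(box) - 2.0)            # ~ +3.0
print("P(can afford cure | bet):", np.mean(utility(wealth_bet)))  # ~ 0.9
print("P(can afford cure | no bet):", utility(np.array([wealth_no_bet]))[0])  # 1.0
```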
Replies from: Vaniver, Lumifer↑ comment by Vaniver · 2015-03-11T16:57:00.122Z · LW(p) · GW(p)
To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process 'estimates', since they in no way represent your beliefs.
It seems to me that it's best to use "your beliefs" to refer to the entire underlying distribution. Yes, you should not bias your beliefs--but the point of estimates is to compress the entire underlying distribution into "the useful part," and what is the useful part will depend primarily on your application's loss function, not a generalized unbiased loss function.
↑ comment by Lumifer · 2015-03-11T17:04:00.238Z · LW(p) · GW(p)
My point is that this new biased estimate is not your 'real estimate' - this is simply not your best guess/posterior distribution given your information.
Sure it is my "real" estimate -- because I take real action on its basis.
Let me make a few observations.
First, any "best" estimate narrower than a complete probability distribution implies some loss function which you are minimizing in order to figure out which estimate is "best". Let's take the plain-vanilla case of estimating the central point of a distribution which produced some sample of real numbers. The usual estimate for that is the average of the sample numbers (the sample mean) and it is indeed optimal ("the best") for a particular, quadratic, loss function. But, for example, change the loss function to absolute deviation (L1) and now the median becomes "the best estimate".
The point is that to prefer any estimate over some other estimate, you must have a loss function already. If you are calling some estimate "best", this implies a particular loss function.
Second, the usefulness of any estimate is determined by the use you intend for it. "Suitability for a purpose" is an overriding criterion for estimates you produce. Different purposes ("produce an unbiased estimate" and "select a course of action" are different purposes) often require different estimates.
Third, "unbiased" is not an unalloyed blessing. In many situations you face the bias-variance tradeoff and sometimes you do want to have some bias.
↑ comment by fortyeridania · 2015-03-11T06:35:29.216Z · LW(p) · GW(p)
This is a good point. A helpful discussion of asymmetric loss functions is here.
comment by Vaniver · 2015-03-27T13:55:22.486Z · LW(p) · GW(p)
The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table.
--Isaac Asimov, "How Do People Get New Ideas?"
comment by [deleted] · 2015-03-05T01:18:30.816Z · LW(p) · GW(p)
Social problems are not only hard but finally insoluble. Yet many of them will inevitably get some kind of "treatment"; it is a question of better or worse, or of making things better, more or less, or making them worse than before, even to downright disaster. As I remember hearing "Tommy" Adams say in a classroom, we must not call any problems insoluble which must be solved in some way and for which some solutions are better, or worse, than others.
Frank Knight, "The Role of Principles in Economics and Politics" p.19
His claim of the insolubility of social problems is not a note of hopeless despair; it should be understood in the context of his argument that free association and cooperation are the best, and really the only, way to solve social problems - but that, pushed to their limit, the results would be intolerable.
Think "Tommy" Adams refers to Thomas Sewall Adams.
comment by Shmi (shminux) · 2015-03-18T16:51:55.735Z · LW(p) · GW(p)
Scott Adams posted his "My best tweets" collection. About half of them are examples of instrumental rationality in action, and most are worth a laugh. Some of my favorites from the Arguing with Idiots section are in the replies.
Replies from: shminux, shminux, shminux, shminux↑ comment by Shmi (shminux) · 2015-03-18T16:52:54.146Z · LW(p) · GW(p)
Tip: If you are in a conversation with someone who unexpectedly asks “Why are you attacking me?” … run away. Don’t even explain.
Replies from: seer, Manfred
↑ comment by Shmi (shminux) · 2015-03-18T16:53:15.111Z · LW(p) · GW(p)
If you can’t construct a coherent argument for the other side, you probably don’t understand your own opinion.
Replies from: Jiro
↑ comment by Jiro · 2015-03-18T17:50:32.638Z · LW(p) · GW(p)
I cannot construct a coherent argument for intelligent design, depending on what you mean by "coherent". I could construct an argument which is grammatically correct and uses lies, but I don't think you meant to count that as "coherent".
Replies from: Epictetus, shminux, Lumifer, seer, 27chaos↑ comment by Epictetus · 2015-03-18T18:54:41.144Z · LW(p) · GW(p)
If you have at your disposal an intelligent being who gets to decide the laws of physics and gets to set the initial conditions, then intelligent design is an easy consequence: "God set up the universe in such a way that allowed life to evolve according to His predetermined laws".
If we ever get enough computing power to simulate intelligent life, then those simulations will have been intelligently designed and an argument very similar to the above will be true (an intelligent person wrote a program and set the initial parameters in such a way that intelligence was simulated).
You can write a number of refutations of this argument (life sucks, problem of evil, Occam's razor, etc.), but I'd still say it's coherent.
↑ comment by Shmi (shminux) · 2015-03-18T21:36:12.906Z · LW(p) · GW(p)
The quote basically describes the principle of charity 2.0: you seek to understand the logic of a position foreign to you not just to refute it or to convince the other person, or to construct a compromise. You do it to better understand your own side and any potential fallacies you ordinarily do not see in your own logic.
Replies from: Jiro↑ comment by Lumifer · 2015-03-18T18:47:43.330Z · LW(p) · GW(p)
I cannot construct a coherent argument for intelligent design
You probably can if you start with a different set of axioms.
Note that, for example, "God exists" is not a lie but a non-falsifiable proposition.
Replies from: Jiro↑ comment by Jiro · 2015-03-18T19:15:25.387Z · LW(p) · GW(p)
According to supporters of intelligent design, "intelligent design" implies not using any religious premises. So if you started with that axiom, then you're not really talking about intelligent design after all.
Replies from: Jayson_Virissimo, Lumifer↑ comment by Jayson_Virissimo · 2015-03-18T19:45:09.026Z · LW(p) · GW(p)
According to supporters of intelligent design, "intelligent design" implies not using any religious premises.
I don't think this is quite right. I think they claim that intelligent design doesn't imply using any religious premises.
~□(x)(Ix⊃Ux) rather than □(x)(Ix⊃~Ux)
In other words, there is nothing inconsistent with a theist (using religious premises) and a directed panspermia proponent (not using any religious permises) both being supporters of intelligent design.
Replies from: Jiro↑ comment by Lumifer · 2015-03-18T19:21:51.396Z · LW(p) · GW(p)
According to supporters of intelligent design, "intelligent design" implies not using any religious premises.
I don't think so, though it's possible to quibble about the definition of "religious premises". Intelligent design necessarily implies an intelligent designer who is, basically, a god, regardless of whether it's politically convenient to identify him as such.
Replies from: Jiro↑ comment by Jiro · 2015-03-19T15:26:21.851Z · LW(p) · GW(p)
Supporters of intelligent design may end up basically having a god as their conclusion, but they won't have it as one of their premises.
And they have to do it that way. If God was one of their premises, teaching it in government schools would be illegal.
Replies from: Lumifer↑ comment by Lumifer · 2015-03-19T16:08:57.127Z · LW(p) · GW(p)
I think you're confusing the idea of intelligent design with the culture wars in the US.
The question was whether you can construct "a coherent argument for intelligent design", not whether you would be willing to play political games with your congresscritters and school boards.
Replies from: Jiro↑ comment by Jiro · 2015-03-19T17:27:09.506Z · LW(p) · GW(p)
No, the question was whether the "rationality quote" makes sense. I offered intelligent design as a counterexample, a case where it doesn't. Telling me that you don't think that what I described is intelligent design is a matter of semantics; its usefulness as a counterexample is not changed depending on whether it's called "intelligent design" or "American politically expedient intelligent-design-flavored product".
Replies from: Lumifer↑ comment by Lumifer · 2015-03-19T18:15:41.765Z · LW(p) · GW(p)
I offered intelligent design as a counterexample, a case where it doesn't.
And I disagree, I think it does perfectly well.
The quote applies to actual positions, not to politically-based posturing.
Replies from: Jiro↑ comment by Jiro · 2015-03-19T18:43:05.954Z · LW(p) · GW(p)
The quote applies to actual positions, not to politically-based posturing.
That dilutes the quote to the point of uselessness. Probably most positions that people take involve posturing.
But if you really want a different example, how about homeopathy? I can't construct an argument for that which is coherent in the sense that was probably intended, although I could construct an argument for that which is grammatically correct but based on falsehoods or on obviously bad reasoning.
↑ comment by seer · 2015-03-20T03:16:40.774Z · LW(p) · GW(p)
I could construct an argument which is grammatically correct and uses lies
What lies are those? What evidence convinced you that they are in fact lies?
(That's how I would start.)
Replies from: Jiro↑ comment by Jiro · 2015-03-20T14:47:53.209Z · LW(p) · GW(p)
I said that I could construct such an argument. I think you'll agree that I am capable of constructing an argument that uses lies. It does not follow that I think all intelligent design proponents are liars, just that I could not reproduce their arguments without saying things that are (with my own level of knowledge) lies.
(If you really want an irrelevant example of intelligent design proponents lying, http://en.wikipedia.org/wiki/Wedge_strategy )
↑ comment by 27chaos · 2015-03-18T20:44:35.599Z · LW(p) · GW(p)
It's a heuristic, not an automatic rule. Excluding religion and aesthetics, I can't think of any cases where it doesn't work. There are probably some which I just haven't thought of, but there certainly aren't very many.
Replies from: Jiro↑ comment by Jiro · 2015-03-21T20:28:58.659Z · LW(p) · GW(p)
I mentioned homeopathy above.
Replies from: 27chaos↑ comment by Shmi (shminux) · 2015-03-18T16:52:46.700Z · LW(p) · GW(p)
The presence of the word “deserve” is a sure sign a conversation won’t go well.
Replies from: 27chaos
↑ comment by 27chaos · 2015-03-18T20:41:58.187Z · LW(p) · GW(p)
Better tell that to every book on negotiation ever, I guess.
The human concept of justice is fickle, but nonetheless real. Appeals to it, if done skillfully, can be very advantageous.
Replies from: shminux↑ comment by Shmi (shminux) · 2015-03-18T21:38:04.878Z · LW(p) · GW(p)
Just letting you know that I dislike your repetitive snark.
↑ comment by Shmi (shminux) · 2015-03-18T16:54:05.565Z · LW(p) · GW(p)
If you think God wants people to suffer in the last month of their illness, that’s a mental problem not a religious point of view.
Replies from: 27chaos
comment by summerstay · 2015-04-07T18:18:47.795Z · LW(p) · GW(p)
This is Hari's business. She takes innocuous ingredients and makes you afraid of them by pulling them out of context.... Hari's rule? "If a third grader can't pronounce it, don't eat it." My rule? Don't base your diet on the pronunciation skills of an eight-year-old.
From http://gawker.com/the-food-babe-blogger-is-full-of-shit-1694902226
comment by etotheipi · 2015-03-03T03:55:39.614Z · LW(p) · GW(p)
“My gripe is not with lovers of the truth but with truth herself. What succor, what consolation is there in truth, compared to a story? What good is truth, at midnight, in the dark, when the wind is roaring like a bear in the chimney? When the lightning strikes shadows on the bedroom wall and the rain taps at the window with its long fingernails? No. When fear and cold make a statue of you in your bed, don't expect hard-boned and fleshless truth to come running to your aid. What you need are the plump comforts of a story. The soothing, rocking safety of a lie.” ― Diane Setterfield
The context for this quote is a Hansonian post emphasizing that rationality has costs, and someone who wishes to seek truth must be prepared to accept them: http://lesswrong.com/lw/j/the_costs_of_rationality/
The particular example chosen in the quote is not the best, since the non-existence of ghosts is not a lie. Nonetheless, the point is well-taken. As a short-term comforting strategy (say, to comfort a five-year-old), it is preferable to say that ghosts were destroyed by Zeus or something, rather than to say that it is highly unlikely that ghosts ever existed because no ghost stories have had a reasonably credible source, etc.
Replies from: hairyfigment↑ comment by hairyfigment · 2015-03-04T06:43:24.919Z · LW(p) · GW(p)
I admit that I have no children, but even that last part seems almost wholly false to me.
Now, I might tell my hypothetical child that I'm a high Bayesian adept in the Conspiracy (passing actuarial exams/ordeals of initiation counts), that if spirits existed I'd be a mighty ceremonial magician (also probable) and therefore no ghost would dare harm my child.
comment by johnlawrenceaspden · 2016-03-11T19:13:24.875Z · LW(p) · GW(p)
The Bible says that God made the world in six days. Great Uncle Charles thinks it took longer: but we need not worry about it, for it is equally wonderful either way.
-- Margaret Vaughan Williams
comment by [deleted] · 2015-03-19T08:39:08.543Z · LW(p) · GW(p)
[F]ingertips without maps are empty; maps without fingertips are blind.
-- Paul Churchland, chapter 2 of Plato's Camera
comment by Epictetus · 2015-03-07T00:39:17.543Z · LW(p) · GW(p)
I know a man who, when I ask him what he knows, asks me for a book in order to point it out to me, and wouldn't dare tell me that he has an itchy backside unless he goes immediately and studies in his lexicon what is itchy and what is a backside.
-Montaigne, On Pedantry
comment by ike · 2015-03-29T02:48:31.637Z · LW(p) · GW(p)
This argument also relies on a ridiculous definition of rational.
Whilst rational economic actors do attempt to maximise their profit, the argument ignores that this takes place in the context of varying time windows. In effect it argues that it’s “rational” to take a tiny increase in profit today even if that destroys your business and all the potential long term profits you could obtain tomorrow and the day after. This definition is absurd and no actual business works that way.
Mike Hearn, Replace by Fee, a Counter Argument
comment by Salemicus · 2015-03-05T18:14:40.368Z · LW(p) · GW(p)
Ethics from first principles is as useless and frivolous as physics from first principles.
"Mai La Dreapta," commenting at SSC.