The Fallacy of Dressing Like a Winner
post by benelliott · 2010-12-24T15:22:10.364Z · score: 12 (13 votes) · LW · GW · Legacy · 20 comments
Imagine you are a sprinter, and your one goal in life is to win the 100m sprint in the Olympics. Naturally, you watch the 100m sprint winners of the past in the hope that you can learn something from them, and it doesn't take you long to spot a pattern.
Every one of them can be seen wearing a gold medal around their neck. Not only is there a strong correlation; when you examine the rules of the Olympics, you find that 100% of winners must wear a gold medal at some point. There is no way someone could win and never wear a gold medal. So, naturally, you go out and buy a gold medal from a shop, put it around your neck and sit back, satisfied.
For another example, imagine that you are now in charge of running a large oil rig. Unfortunately, some of the drilling equipment is old and rusty, and every few hours a siren goes off alerting the divers that they need to go down again and repair the damage. This is clearly not an acceptable state of affairs, so you start looking for solutions.
You think back to a few months ago, before things got this bad, and you remember how the siren barely ever went off at all. In fact, from your knowledge of how the equipment works, you know that when there were no problems the siren couldn't go off. Clearly the solution to the problem is to unplug the siren.
(I would like to apologise in advance for my total ignorance of how oil rigs actually work; I just wanted an analogy.)
Both these stories demonstrate a mistake which I call 'Dressing Like a Winner' (DLAW). The general form of the error is: a person has the goal of X, observes that X reliably leads to Y, attempts to achieve Y, then sits back, satisfied with their work. This mistake is so obviously wrong that it is pretty much non-existent in near mode, which is why the above stories seem utterly ridiculous. However, once we switch into the more abstract far mode, even the most ridiculous errors become dangerous. In the rest of this post I will point out three places where I think this error occurs.
Changing our minds
In a debate between two people, it is usually the case that whoever is right is unlikely to change their mind. This is not only an empirically observable correlation, it is intuitively obvious: would you change your mind if you were right?
At this point, our fallacy steps in with a simple conclusion: "refusing to change your mind will make you right". As we all know, this could not be further from the truth; changing your mind is the only way to become right, or at any rate less wrong. I do not think this realisation is unique to this community, but it is far from universal (and it is a lot harder to practice than to preach, suggesting it might still hold on in the subconscious).
At this point a lot of people will probably have noticed that what I am talking about bears a close resemblance to signalling, and some of you are probably thinking that that is all there is to it. While I will admit that DLAW and signalling are easy to confuse, I do think they are separate things, and that there is more than just ordinary signalling going on in the debate.
One piece of evidence for this is the fact that my unwillingness to change my mind extends even to opinions I have admitted to nobody. If I were only interested in signalling, surely I would want to change my mind in that case, since it would reduce the risk of being humiliated once I do state my opinion. Another reason to believe that DLAW exists is that not only do debaters rarely change their minds, those that do are often criticised, sometimes quite brutally, for 'flip-flopping', rather than being praised for becoming smarter and for demonstrating that their loyalty to truth is higher than their ego.
So I think DLAW is at work here, and since I have chosen a fairly uncontroversially bad thing to start off with, I hope you can now agree with me that it is at least slightly dangerous.
Consistency

It is an accepted fact that any map which completely fits the territory would be self-consistent. I have not seen many such maps, but I will agree with the argument that they must be consistent. What I disagree with is the claim that this means we should focus on making our maps internally consistent, and that once we have done this we can sit back because our work is done.
This idea is so widely accepted and so tempting, especially to those with a mathematical bent, that I believed it for years before noticing the fallacy that led to it. Most reasonably intelligent people have got over one half of the toxic meme, in that few of them believe consistency is sufficient (with the one exception of ethics, where it still seems to apply in full force). However, as with the gold medal, not only is it a mistake to be satisfied with consistency, it is a waste of time to aim for it in the first place.
In Robin Hanson's article [Beware Consistency](http://www.overcomingbias.com/2010/11/beware-consistency.html) we see that the consistent subjects actually do worse than the inconsistent ones, because they are consistently impatient or consistently risk-averse. I think this problem is even more general than his article suggests, and represents a serious flaw in our whole epistemology, dating back to the Ancient Greek era.
Suppose that one day I notice an inconsistency in my own beliefs. Conventional wisdom tells me that this is a serious problem, and that I should discard one of the beliefs as quickly as possible. All else being equal, the belief that gets discarded will probably be the one I am less attached to, which will probably be the one I acquired more recently, which is probably the one that is actually correct, since the other may well date back to long before I knew how to think critically about an idea.
Richard Dawkins gives a good example of this in his book 'The God Delusion'. Kurt Wise was a brilliant young geologist raised as a fundamentalist Christian. Realising the contradiction between his two sets of beliefs, he took a pair of scissors to the Bible and cut out every passage he would have to reject if he accepted the scientific world-view. After realising his Bible was left with so few pages that the poor book could barely hold itself together, he decided to abandon science entirely. Dawkins uses this to argue that religion needs to be removed entirely, and I cannot necessarily say I disagree with him, but I think a second moral can be drawn from this story.
How much better off would Kurt have been if he had just shrugged his shoulders at the contradiction and continued to believe both? How much worse off would we be if Robert Aumann had abandoned the study of rationality when he noticed it contradicted Orthodox Judaism? It's easy to say that Kurt was right to abandon one belief, he just abandoned the wrong one, but from inside Kurt's mind I'm not sure it was obvious which belief was right.
I think a better policy for dealing with contradictions is to put both beliefs 'on notice': be cautious before acting on either of them, and wait for more evidence to decide between them. If nothing else, we should admit more than two possibilities: the beliefs could actually be compatible, they could both be wrong, or one or both of them could be badly confused.
To put this in one sentence: don't strive for consistency; strive for accuracy, and consistency will follow.
Mathematical arguments about rationality
In this community, I often see mathematical proofs that a perfect Bayesian would do something. These proofs are interesting from a mathematical perspective, but since I have never met a perfect Bayesian I am sceptical of their relevance to the real world (perhaps they are useful for AI; someone more experienced than me should confirm or deny that).
The problem comes when we are told that since a perfect Bayesian would do X, we imperfect Bayesians should do X as well in order to better ourselves. A good example of this is Aumann's Agreement Theorem, which shows that never agreeing to disagree is a consequence of perfect rationality, being treated as an argument for not agreeing to disagree in our quest for better rationality. The fallacy is hopefully clear by now: we have been given no reason to believe that copying this particular by-product of success will bring us closer to our goal. Indeed, in our world of imperfect rationalists, some of whom are far more imperfect than others, an argument against disagreement seems like a very dangerous thing.
Eliezer has [already](http://lesswrong.com/lw/gr/the_modesty_argument/) argued against this specific mistake, but since he went on to [commit it](http://lesswrong.com/lw/i5/bayesian_judo) a few articles later I think it bears mentioning again.
Another example of this mistake is [this post](http://lesswrong.com/lw/26y/rationality_quotes_may_2010/36y9) (my apologies to Oscar Cunningham; this is not meant as an attack, you just provided a very good example of what I am talking about). The post gives a mathematical argument (a model rather than a proof) that we should be more sceptical of evidence against our beliefs than of evidence for them. To be more exact, it gives an argument for why a perfect Bayesian, with no human bias and mathematically precise calibration, should be more sceptical of evidence going against its beliefs than of evidence for them.
The argument is, as far as I can tell, mathematically flawless. However, it doesn't seem to apply to me at all, if for no other reason than that I already have a massive bias overdoing that job, and my role is to counteract it.
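To see the shape of the argument, here is a toy model. The numbers and the two-outcome "noisy reporter" setup are my own simplification for illustration, not the exact model from the linked post: a source that is right with some fixed probability reports evidence against a hypothesis, and we apply Bayes' rule.

```python
def posterior_given_contrary_report(prior, reliability):
    """Posterior P(H) after a source that reports correctly with
    probability `reliability` claims that H is false."""
    # P(source says "not H") = P(H) * P(source wrong) + P(not H) * P(source right)
    p_report = prior * (1 - reliability) + (1 - prior) * reliability
    # Bayes: P(H | report) = P(H) * P(report | H) / P(report)
    return prior * (1 - reliability) / p_report

# A confident believer (prior 0.9) barely budges on hearing contrary evidence...
print(round(posterior_given_contrary_report(0.9, 0.8), 3))  # 0.692
# ...while an agnostic (prior 0.5) updates sharply downward.
print(round(posterior_given_contrary_report(0.5, 0.8), 3))  # 0.2
```

The well-calibrated agent legitimately attributes contrary reports to source error when its prior is strong. That is exactly the behaviour that is dangerous to copy if, like me, your "strong prior" is partly confirmation bias rather than honest evidence.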
In fact, I would say that in general our willingness to give numerical estimates is an example of this fallacy. Cox's theorem proves that any perfect reasoning system is isomorphic to Bayesian probability, but since my reasoning system is not perfect, I get the feeling that saying "80%" instead of "reasonably confident" is just making a mockery of the whole process.
This is not to say I totally reject the relevance of mathematical models and proofs to our pursuit. All else being equal, if a perfect Bayesian does X, that is evidence that X is good for an imperfect Bayesian. It's just not overwhelmingly strong evidence, and it shouldn't be treated as if it puts a stop to all debate and decides the issue one way or the other (unlike in other fields, where mathematical arguments can do this).
How to avoid it
I don't think DLAW is particularly insidious as mistakes go, which is why I called it a fallacy rather than a bias. The only advice I would give is to be careful when operating in far mode (which you should be anyway), and always make sure the causal link between your actions and your goals points in the right direction.
Note – When I first started planning this article I was hoping for more down-to-earth examples, but I struggled to find any. My current theory is that this fallacy is too obviously stupid to be committed in near mode, but if someone has a good example of DLAW occurring in their everyday life then please point it out in the comments. Just be careful that it is actually this rather than just signalling.