Harnessing Your Biases
post by swestrup · 2009-07-02T20:45:28.821Z · LW · GW · Legacy · 14 comments
Theoretically, my 'truth' function (the amount of evidence I need before I will cache something as 'probably true and reliable') should be a constant. I find, however, that it isn't. I read a large amount of scientific literature every day, and only have time to investigate a scant fraction of it in practice. So, typically, I rely on science reporting that I've found to be accurate in the past, and only investigate the few things that have direct relevance to work I am doing (or may end up doing).
Today I noticed something about my habits. I saw an article on how string theory was making testable predictions in the realm of condensed matter physics, specifically about room-temperature superconductors. While this is a pet interest of mine, it's not an area I'm ever likely to be working in, but the article seemed sound, so I decided it was an interesting fact and moved on, not even realizing that I had cached it as probably true.
A few minutes later it occurred to me that some of my friends might also be interested in the article. I have a Google RSS feed that I use to republish occasional articles that I think are worth reading. I have a known readership of all of 2. Suddenly, I discovered that what I had been willing to accept as 'probably true' on my own behalf was no longer good enough. Now I wanted to look at the original paper itself, and to see if I could find any learnéd refutations or comments.
This seems to be because my reputation was now, however tangentially, "on the line": in my circle of friends I have a reputation as the science geek, and I would not want to damage it by steering someone wrong. Now, clearly this is wrong-headed. My theory of truth should be my theory of truth, period.
One could argue, I suppose, that information I store internally can only affect my own behaviour, while information I disseminate can affect the behaviour of an arbitrarily large group of people, and so a more stringent standard should apply to things I tell others. In fact, that was the first justification that sprang to mind when I noticed my double standard.
It's a bogus argument, though, as none of my friends are likely to repeat the article or post it on their blogs, so the dissemination has only a tiny probability of propagating by that route. However, once it's in my head and I'm treating it as true, I'm very likely to trot it out as an interesting fact when I'm talking at science fiction conventions or to groups of interested geeks. If anything, my standard for believing something should be more stringent than my standard for repeating it, not the other way around.
But the title of this post is "Harnessing Your Biases", and it seems to me that if I am going to have this strange predisposition to check more carefully when I am going to publish something, then maybe I need to set up a blog of things I have read that I think are true. It can just be an edited feed of my RSS stream, since that is simple to put together. Then I may find myself being more careful about what I accept as true. The mere fact that the feed exists and is public (although I doubt that anyone would, in fact, read it) would make me more careful. It's even possible that it will contain very few articles, as I may find I don't have time to investigate interesting claims well enough to declare them true, but this will have the positive side effect that I won't go around caching them internally as true either.
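For what it's worth, the "edited feed" really is simple to put together. Below is a minimal sketch in Python of one way to do it, assuming the third-party feedparser library; the feed URL and article link are hypothetical placeholders, not anything from the post. The idea is just to republish only the items that have been explicitly vetted.

```python
# Minimal sketch: republish only hand-vetted items from an RSS stream.
# Assumes the third-party 'feedparser' package; the URLs below are
# hypothetical placeholders.
import feedparser
from xml.sax.saxutils import escape

SOURCE_FEED = "https://example.com/my-reading-feed.rss"   # placeholder source feed
APPROVED_LINKS = {
    # Only links I have actually checked and am willing to stand behind publicly.
    "https://example.com/string-theory-superconductors",  # placeholder article
}

def build_filtered_feed(source_url: str, approved: set) -> str:
    """Parse the source feed and emit an RSS 2.0 document containing only vetted items."""
    parsed = feedparser.parse(source_url)
    items = []
    for entry in parsed.entries:
        link = entry.get("link", "")
        if link in approved:
            items.append(
                "<item><title>%s</title><link>%s</link></item>"
                % (escape(entry.get("title", "")), escape(link))
            )
    return (
        '<?xml version="1.0"?>\n<rss version="2.0"><channel>'
        "<title>Things I am willing to call probably true</title>"
        + "".join(items)
        + "</channel></rss>"
    )

if __name__ == "__main__":
    print(build_filtered_feed(SOURCE_FEED, APPROVED_LINKS))
```

The point of the manual APPROVED_LINKS step is exactly the filter described above: nothing gets republished until I have done the checking I would want my readers to be able to rely on.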
I think that, in many ways, this is why, in the software field, code reviews are universally touted as an extraordinarily cheap and efficient way of improving code design and documentation while decreasing bugs, and yet are very hard to get put into practice. The idea is that after you've written any piece of code, you give it to a coworker to critique before you put it in the code base. If they find too many things to complain about, it goes back for revision before being given to yet another coworker to check. This continues until it's deemed acceptable.
In practice, the quality of work goes way up and the speed of raw production goes down marginally. The end result is code that needs far less debugging, so the number of working lines of code produced per day goes way up. I think this is because programmers in such a regime quickly find that the testing and documenting they think is 'good enough' when their work is not going to be immediately reviewed is far less than the testing and documenting they do when they know they have to hand it to a coworker to criticize. The downside, of course, is that they are now opening themselves up to criticism on a daily basis, which is something that few folks enjoy no matter how good it is for them, and so the practice remains quite rare due to programmer resistance to the idea.
These, then, are two different ways to harness the bias that folks have toward doing better (or more careful) work when they know it is going to be examined. Can anyone else here think of other biases that can be exploited in useful ways to leverage greater productivity or reliability in projects?
14 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-02T22:04:28.390Z · LW(p) · GW(p)
You're not harnessing your biases, you're harnessing your pride. So do I, in various ways.
In all frankness... I would have voted up your post except for the title. I'm fatigued of such clevernesses.
Replies from: kpreid, thomblake
↑ comment by kpreid · 2009-07-02T23:07:03.071Z · LW(p) · GW(p)
I found more value in “maybe I need to set up a blog of things I have read that I think are true” than in the extremely broad topic of “harness your biases”. If I were editing the article I would throw out that topic and keep the particular notion of improving your knowledge by preparing it for publication.
Replies from: Eliezer_Yudkowsky, swestrup
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-02T23:34:09.204Z · LW(p) · GW(p)
Agreed (and I should have thought of that, instead of stopping upon my annoyance).
↑ comment by swestrup · 2009-07-03T09:27:54.284Z · LW(p) · GW(p)
Granted, the title was probably too flip, but I think yours is a little wordy. I'm not sure I can do better at the moment other than maybe something like "Self-Publication as a Truth Filter".
Replies from: Eliezer_Yudkowsky, kpreid, cousin_it
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-03T17:59:39.827Z · LW(p) · GW(p)
Go ahead and change it. It won't break any links.
↑ comment by cousin_it · 2009-07-03T09:47:03.206Z · LW(p) · GW(p)
I feel your post is related to Eliezer's "Say It Loud". That, by the way, is a great title; try to do no worse.
comment by anonym · 2009-07-04T01:30:06.489Z · LW(p) · GW(p)
Exploiting your own biases is a topic I've been wanting to write about for a while, although I've thought about it more in terms of using biases (and other unpleasant aspects of our personality) to at least partially counteract each other.
For example, if you know from experience that your initial estimate of how long a software project will take -- which you are often forced to give, even if informally, before you've had time to think properly -- is usually far too low (planning fallacy), make your initial estimate X times your real estimate. Even if you don't really believe it will take that long, the anchoring effect ensures that your final estimate will probably be higher (and thus more accurate) than if you had chosen your initial estimate 'more rationally' (not really, of course, but that's how it feels).
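As a toy illustration of the trick described above (the multiplier here is an assumed, illustrative number, not something from the comment): deliberately inflate the first figure you say out loud, so the anchoring effect pulls your final estimate up rather than down.

```python
# Toy sketch of using one bias (anchoring) against another (the planning fallacy).
# The factor of 3 is an assumed calibration, not a recommendation.
PLANNING_FALLACY_FACTOR = 3.0

def opening_estimate(gut_estimate_days: float) -> float:
    """Return the number to quote first: the gut estimate, deliberately inflated
    so the anchor is set high enough to offset habitual underestimation."""
    return gut_estimate_days * PLANNING_FALLACY_FACTOR

if __name__ == "__main__":
    # Gut feeling says 10 days; quote 30 and let anchoring do the rest.
    print(opening_estimate(10.0))  # -> 30.0
```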
We are all familiar with this particular example -- and I'm sure many of us in the software field have used it repeatedly -- but the general principle of using one bias to counteract another seems more widely applicable. As long as we still have these biases, why not try to put them to positive use and to use them against each other?
I don't think that every bias can be counteracted with some other specific bias, but in a more general sense, we can also often use what we consider weaknesses (e.g., intense emotions interfering with rational thought during an argument) to restructure or change how we think about decisions or tasks, so that the 'rational' decision or behavior feels like the emotional or ego-gratifying one that takes no effort. I personally have found that merely thinking about certain things as (and engaging in them as if they were) games or optimization problems can be very helpful in getting myself out of habitual patterns of behavior, switching from a state that is mostly automatic and System 1-like into a state that is more reflective, strategic, and System 2-like. What makes the alternate state so powerful is that, by viewing it as a game, my strong ego and intense sense of competition compel me to put great effort into winning the game.
comment by MineCanary · 2009-07-03T03:25:07.554Z · LW(p) · GW(p)
It's interesting to ask, though, whether talking in a group about something you read in a possibly-wrong article doesn't provide opportunities for people with more expertise than you to disseminate their knowledge. And, of course, opportunities for people with worse epistemologies to insist that they have more expertise than you and are disseminating their knowledge.
Certainly I find out a lot more about all the things I classify as "probably true" by talking to other people who have a different set of "probably trues" on the topic than I do by looking up as much information as I can find about each possibly-true. Do you gain more (or contribute more to the world) by classifying everything as "unknown" unless you have investigated it sufficiently, and therefore not speaking on it, or by treating your knowledge as something worthy of conversation, with caveats like "I read it in so-and-so, and they've been known to sensationalize or get science wrong, but..." or "I haven't followed up on this in a couple of years, but I heard there was a promising claim..."? Either way, that may spur someone who knows more (or less) to speak up, and may cause some further search for the truth later.
It seems to me that the main problem arises when someone argues bitterly for a fact they've accepted to be recognized as true even when they shouldn't feel that certain. Of course, there is damage done in miscommunication, as when someone gets the impression (we'll say it's nobody's fault) that your pure speculation or dubious source is solid fact. And the best defense against that is knowledge of how much social wisdom is BS.
Perhaps it would be better to recognize that you have another class in which to file information: "I've heard no contradictory evidence, it's useful to think about, but I wouldn't build a bridge on its blueprints." Going beyond that on a few statements may very well be quite a good thing, but I doubt it will eliminate altogether your tendency to remember and mention dubious facts.
comment by Psychohistorian · 2009-07-03T00:05:04.092Z · LW(p) · GW(p)
I'm not sure this really counts as a bias. It seems quite rational, unless you will actually suffer immediate and significant consequences if you are wrong about string theory.
The cost of privately held beliefs (especially about abstract truths) is quite low. If I believe the earth is a flat disk on the back of an infinite stack of tortoises, and if I'm, say, a car mechanic, I will not suffer at all for this belief. Unless, of course, I mention it to my friends, because they will judge me for it, unless, of course, they all believe the same thing, in which case I'll probably proclaim this belief loudly and often and possibly meet up with them to discuss it on an appointed day of the week. I may suffer because I fail at epistemology, but it doesn't seem clear how trusting the wrong source on one marginal occasion will corrupt my epistemology (doubly so if I'm refined enough to have a concept of my own epistemology). Taking epistemology as exogenous, there's really no cost to a marginal false belief (that does not affect me directly).
Having a false belief about some fact that has no direct bearing on your life is way, way, way cheaper than publicly expressing belief in such a fact and being refuted. There seems to be nothing irrational about investing more energy fact-checking in the latter scenario.
Edit: Two things.
First, the turtles example was poorly chosen, as it blurs the line between belief and epistemology too much. Better examples would include, say, wrongly believing celebrity gossip, or having incorrect beliefs about impractical science due to a lack of information or a misunderstanding of the alternatives. If the car mechanic believed Newton was totally right (because he hadn't seen evidence to the contrary), this would be a very, very low-cost false belief. Interestingly, "Barack Obama is a Muslim" probably falls under this category, though it blurs the epistemological line a bit more.
Second, it's also quite possible that you care more about misleading others than you do about being wrong. It's easy enough to change your own mind if you see contradictory evidence or reread the article and understand it better. It's rather harder to change the minds of other people who have been convinced, and you'll feel like you've let them down as an authority, since they trusted you.
Replies from: Marcello↑ comment by Marcello · 2009-07-03T00:56:54.957Z · LW(p) · GW(p)
I'm not sure the cost of privately held false beliefs is as low as you think it is. The universe is heavily Causally Entangled. Now, even if, in your example, the shape of the earth isn't causally entangled with anything our mechanic cares about, that doesn't get you off the hook. A false belief can shoot you in the foot in at least two ways. First, you might explicitly use it to reason about the value of some other variable in your causal graph. Second, your intuition might draw on it as an analogy when you are reasoning about something else.
If our car mechanic thinks his planet is a disc supported atop an infinite pile of turtles, when this is in fact not the case, then isn't he more likely to conclude that other things which he may actually come into more interaction with (such as a complex device embedded inside a car which could be understood by our mechanic, if he took it apart and then took the pieces apart about five times) might also be "turtles all the way down"? If I actually lived on a disc on top of infinitely many turtles, then I would be nowhere near as reluctant to conclude that I had a genuine fractal device on my hands. If I actually lived in a world which was turtles all the way down, I would also be much more disturbed by paradoxes involving backward supertasks.
To sum up: False beliefs don't contaminate your belief pool via the real links in the causal network in reality; they contaminate your belief pool via the associations in your mind.
Replies from: Psychohistorian, MineCanary
↑ comment by Psychohistorian · 2009-07-03T02:16:31.249Z · LW(p) · GW(p)
This is what I meant by epistemology. It's not the bad beliefs causing bad epistemology (with certain exceptions, like some instances of religion, in which people may mess up their epistemology to retain their beliefs), but the bad epistemology causing the beliefs. I picked a bit too extreme an example to illustrate my point, and made note of alternative examples in the original.
If I told the car mechanic, "Actually, the Earth revolves around the Sun, which is one star among billions in one galaxy among billions, and you should believe me because God told me so," and he changes his beliefs accordingly, he's not really any better off than he was. The problem is not his belief, it's his system for validating beliefs.
By contrast, if I actually explained why that statement was true and he said, "Well, duh, of course I was wrong! I really should have looked into that!" then I'd say he never had much of a problem to begin with, other than a lack of curiosity.
↑ comment by MineCanary · 2009-07-03T03:51:16.007Z · LW(p) · GW(p)
I'm not sure what the relationship between metaphors propagating in someone's thinking and the causal entanglement of the universe is.
I'd argue that people profit from having different ways to look at the world: even though it shares a common structure, that structure isn't always locally noticeable or important, and certainly things can look different at different scales. I'm equally unsure that it matters whether, when you see an object that is fractal at the scales relevant to you, you assume it is truly fractal or merely a repeating pattern across a few scales.
I agree with Psychohistorian that it's more important that the mechanic be willing to abandon his belief with greater knowledge of the physics of the universe. But even then, facility with fractal thinking may still offer benefits.
That is: the associations in your mind are put to constant test when it comes to encountering the real world. Certainly long-term, serious misconceptions, like seeing God in everything and missing insights into natural truth, can be quite a thing to overcome and can stifle certain important lines of thought. But for any beliefs you get from reading inadequately informed science journalism, the ways of thinking your mind is going to be contaminated with are those that are prevalent in our culture, so you probably encounter them anyway. They're also things that seem plausible to you, given your background, so you probably already think in these terms, or else their interconnectedness with all the other observations of life you've had is too small to distinguish between the two alternate explanations: the false one you've just read and the real truth, which is "out there" still. And if scientific results were really so obvious from what we already know about the universe, research would be a lot less important; rather, it is because scientific findings can offer counter-intuitive results, ways of thinking that we DON'T find useful or essential in everyday life, that we find them so intriguing.