The Most Frequently Useful Thing
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-28T18:43:56.156Z · LW · GW · Legacy · 57 comments
Followup to: The Most Important Thing You Learned
What's the most frequently useful thing you've learned on OB - not the most memorable or most valuable, but the thing you use most often? What influences your behavior and factors into more than one decision? Please give a concrete example if you can. This isn't limited to archetypally "mundane" activities: if your daily life involves difficult research or arguing with philosophers, go ahead and describe that too.
Comments sorted by top scores.
comment by Z_M_Davis · 2009-03-01T02:33:08.349Z · LW(p) · GW(p)
The most frequently useful thing I've gotten out of Overcoming Bias is not a technique or lesson so much as it is an attitude. It's the most ridiculously simple thing of all: to be in the habit of actually, seriously asking: is (this idea) really actually true? You can ask anyone if they think their beliefs are true, and they'll say yes, but it's another thing to know on a gut level that you could just be wrong, and for this to scare you, not in the sense of "O terror!--if my cherished belief were false, then I could not live!" but rather the sense of "O terror!--my cherished belief could be false, and if I'm not absurdly careful, I could live my whole life and not even know!"
comment by Ziphead · 2009-02-28T21:13:09.016Z · LW(p) · GW(p)
Expecting Short Inferential Distances
One of many posts that gave me a distinct concept for something I previously had been only vaguely aware of, and this one kept coming back to me all the time. By now, I don’t think it’s an extreme exaggeration to say that I make use of this insight every time I communicate with someone, and of all the insights I picked up from OB, this might be the one I most frequently try to explain to others. It doesn’t seem like the most important thing, but for some reason, it immediately struck me as the most frequently useful one.
Replies from: MichaelVassar, NQbass7, CarlShulman
↑ comment by MichaelVassar · 2009-03-01T02:28:35.673Z · LW(p) · GW(p)
I think I'll second that, though the attitude that there's an exact right amount to update on every piece of information is really important too.
↑ comment by NQbass7 · 2009-03-01T21:53:22.836Z · LW(p) · GW(p)
For me it's between inferential distance and cached thoughts, at least for ones I explain to other people. For ones I use myself, Line of Retreat is probably the one I actively pursue most frequently.
Though I end up using Absence of Evidence pretty often as well.
↑ comment by CarlShulman · 2009-03-01T07:50:02.403Z · LW(p) · GW(p)
Inferential distance is the most frequently useful thing I learned at OB, followed by leaving a line of retreat. However, I use other insights I had previously encountered elsewhere more frequently.
comment by AnnaSalamon · 2009-03-02T01:56:02.985Z · LW(p) · GW(p)
Taking care, every time I think, to make sure I'm following out actual uncertainty or curiosity and am actually gathering evidence on what conclusion to write in, rather than rehearsing evidence or enjoying the sound of my own thoughts.
Replies from: Roko
↑ comment by Roko · 2009-03-04T01:00:48.467Z · LW(p) · GW(p)
Yep, I'll second this.
Really, if you understand "the bottom line", you could reliably reinvent the whole of rationality for yourself. And once you have read and internalized it, you walk around the world noticing people justifying things to themselves all the time.
comment by badger · 2009-02-28T20:10:01.873Z · LW(p) · GW(p)
On the previous thread I mentioned the Mind-Projection fallacy and "the opposite of stupidity != intelligence" as being most frequently referenced, but on reflection, I think a reminder of the strictness of rationality makes the biggest difference in practice.
This passage from Technical Explanation sums it up: "But the deeper truth of Bayesianity is this: you cannot game the system. You cannot give a humble answer, nor a confident one. You must figure out exactly how much you anticipate the Sun rising tomorrow, and say that number. [...] You cannot do better except by guessing better and anticipating more precisely."
At this stage in my life, I find it easy to avoid dogmatism. False modesty, uncertainty, vagueness, and skepticism are all much more seductive. I work as a policy analyst in state government and am frequently asked to provide forecasts. The dangers of narrow prediction intervals are well known, but I am tempted to be overcautious and leave my estimates unfocused. The field I work in is notoriously uncertain, but I can't do better just by being vague. Confidence and uncertainty have to be precisely balanced.
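A minimal sketch of what "you cannot game the system" means, assuming a logarithmic scoring rule; the probabilities here are illustrative assumptions, not anything from the comment:

```python
import math

def expected_log_score(true_p, reported_p):
    """Expected log score of reporting reported_p when you truly anticipate true_p."""
    return true_p * math.log(reported_p) + (1 - true_p) * math.log(1 - reported_p)

true_p = 0.8  # how much you actually anticipate the event
for reported in (0.5, 0.7, 0.8, 0.9, 0.99):
    print(f"report {reported:.2f} -> expected score {expected_log_score(true_p, reported):.4f}")

# The honest report (0.8) maximizes the expected score; the "humble" 0.5
# and the "confident" 0.99 both do strictly worse.
```

Under any proper scoring rule the expected score peaks exactly at your true anticipation, so neither a humble nor a confident answer can beat saying the number you actually believe.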
comment by Mary · 2009-03-03T14:32:54.489Z · LW(p) · GW(p)
"...the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't." - EY, Excluding the Supernatural
Turned me into an atheist. Damn you.
comment by Marcello · 2009-03-01T20:11:07.423Z · LW(p) · GW(p)
The idea that you shouldn't internally argue for or against things or propose solutions too soon is probably the most frequently useful thing. I sometimes catch myself arguing for or against something, and then I think "No, I should really just ask the question."
comment by MBlume · 2009-02-28T21:00:19.940Z · LW(p) · GW(p)
Eliezer, I suspect you might find the answers to these questions less useful than you expect. The most useful things we've learned from you are probably going to be those things that we've already forgotten you wrote, because they've become a part of us -- because they've become background in how we live, how we think, and thus are completely invisible to us at any given time.
Replies from: Emile, NQbass7, None
↑ comment by Emile · 2009-02-28T21:35:34.164Z · LW(p) · GW(p)
I think the answers will be useful, even if they don't exactly represent the set of "most frequently useful things from OB" but instead the set of "most frequently useful among the very memorable and surprising OB posts".
Maybe Eliezer asked about the first set fully expecting to get answers from the (still useful) second set.
↑ comment by NQbass7 · 2009-03-01T21:57:14.345Z · LW(p) · GW(p)
Having particular names that aren't in common usage makes it easier for me to identify the things I've picked up from OB that are now a part of me. Cached Thoughts, Inferential Distance, Mind-Projection Fallacy - those are all terms I use now when referring to things that are a part of me, but few other people use them often.
Replies from: AnnaSalamon, badger
↑ comment by AnnaSalamon · 2009-03-02T01:50:45.631Z · LW(p) · GW(p)
Yes -- and easier to invoke the principles in social contexts. I suspect Eliezer's OB posts gain a significant fraction of their usefulness from the names and from the chunk-by-chunk usability of the named principles/methods.
↑ comment by badger · 2009-03-02T03:05:18.535Z · LW(p) · GW(p)
I agree. I find it funny that you lead your list of examples with "cached thoughts", because that's exactly what these are. Not that that's a bad thing.
If that's the case, though, maybe we need to be proactive in preventing them from becoming cached thoughts of the bad kind. Eliezer's posts serve as a good introduction, but I don't think they are the ideal reference. Maybe a rationalist dictionary would do the trick. I envision something like Urban Dictionary, where multiple definitions/explanations can be submitted and voted on.
comment by johnbr · 2009-03-02T17:31:11.972Z · LW(p) · GW(p)
Most frequently useful: that my interest in being unbiased can become a sort of bias of its own. When I hear arguments from others, I can easily spot the biases, and I've worked hard to recognize that I have built-in biases as well that I can't discount.
comment by imaxwell · 2009-03-01T03:42:20.283Z · LW(p) · GW(p)
A few ideas:
The difference between Nobly Confessing One's Limitations and actually preparing to be wrong. I was pretty guilty of the former in the past. I think I'm probably still pretty guilty of it, but I am on active watch for it.
The idea that one should update on every piece of evidence, however slightly. This is something that I "knew" without really understanding its implications. In particular, I used to habitually leave debates more sure of my position than when I went in - yet this can't possibly be right, unless my opposition were so inept as to argue against their own position. So there's one bad habit I've thrown out. I've gone from being "open-minded" enough to debate my position, to being actually capable of changing my position.
That I should go with the majority opinion unless I have some actual reason to think I can do better. To be fair, on the matters where I actually had a vested interest, I followed this advice before receiving it; so perhaps this shouldn't be under 'useful' per se, although I've improved my predictions drastically by just parroting InTrade. (I don't bet on InTrade because I've never believed I could do better.)
Sticking your neck out and making a prediction, so that you have the opportunity to say "OOPS" as soon as possible.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-28T18:16:32.858Z · LW(p) · GW(p)
I used to habitually leave debates more sure of my position than when I went in - yet this can't possibly be right, unless my opposition were so inept as to argue against their own position.
This isn't quite right - for example, the more I search and find only bad arguments against cryonics, the more evidence I have that the good arguments just aren't out there.
Replies from: MrHen
↑ comment by MrHen · 2010-01-28T18:26:08.899Z · LW(p) · GW(p)
This isn't quite right - for example, the more I search and find only bad arguments against cryonics, the more evidence I have that the good arguments just aren't out there.
If all you did was argue with stupid people, you would become erroneously self-confident. Also, two people who argued and didn't convert would both walk away feeling better about their own positions. Something seems wrong here. What am I missing? Doesn't this only make sense if there was some sort of weight attached to the argument your opponent used that was detached during the argument?
Replies from: ciphergoth, RobinZ
↑ comment by Paul Crowley (ciphergoth) · 2010-01-28T22:21:02.726Z · LW(p) · GW(p)
Apply Bayes' theorem. P(you don't find a good argument among stupid people | there is a good argument) is high. P(you don't find a good argument when you've made a true effort to scour high and low | there is one) is lower. Obviously the existence or otherwise of a good argument is only indirect information about the truth, but it still helps.
That no two people, it seems, can disagree without both coming away with the strong feeling that their own argument was clearly the stronger must of course be borne in mind when weighing this evidence, but it's evidence all the same.
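A minimal sketch of this update via Bayes' theorem; the priors and likelihoods are illustrative assumptions, not numbers stated anywhere in the thread:

```python
def posterior(prior, p_no_find_given_exists, p_no_find_given_absent=1.0):
    """P(a good argument exists | you searched and found none), via Bayes' theorem."""
    p_evidence = (p_no_find_given_exists * prior
                  + p_no_find_given_absent * (1 - prior))
    return p_no_find_given_exists * prior / p_evidence

prior = 0.5  # assumed prior that a good counter-argument exists somewhere

# Arguing only with stupid people: you'd likely miss a good argument even if it exists.
print(posterior(prior, p_no_find_given_exists=0.9))  # ~0.47 -- barely moves

# Scouring high and low: unlikely to miss a good argument if it exists.
print(posterior(prior, p_no_find_given_exists=0.2))  # ~0.17 -- a real update
```

The thorough search moves the posterior much further, which is exactly the asymmetry between the two conditional probabilities above.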
↑ comment by RobinZ · 2010-01-28T21:40:05.929Z · LW(p) · GW(p)
You're right that it's not as simple as that - if you set out to talk to idiots, you may well find that you can demolish all of them - but if you search for the strongest arguments from the best qualified and most intelligent proponents, and they're rubbish? But they still persistently get cited as the best arguments, in the face of all criticism? That's fairly strong evidence that the field might be bogus.
(Obvious example: intelligent design creationism. It's weaksauce religion and incompetent science.)
Replies from: MrHen
↑ comment by MrHen · 2010-01-28T21:54:10.389Z · LW(p) · GW(p)
(Obvious example: intelligent design creationism. It's weaksauce religion and incompetent science.)
But why does dealing with intelligent design increase the probability you assign to the alternative? Why were you assigning weight to intelligent design?
This isn't meant to be nitpicky. I suppose the question behind the question is this: When dividing up probability mass for X, how do you allot P(~X)? Do you try divvying it up amongst competing theories or do you simply assign it to ~X?
For some reason I thought that divvying it up amongst competing theories was Wrong. Was this foolish of me?
Replies from: RobinZ
↑ comment by RobinZ · 2010-01-28T21:58:20.687Z · LW(p) · GW(p)
But why does dealing with intelligent design increase the probability you assign to the alternative? Why were you assigning weight to intelligent design?
Not much - it increased slightly when I saw it proposed, and decreased precipitously when I saw it refuted.
This isn't meant to be nitpicky. I suppose the question behind the question is this: When dividing up probability mass for X, how do you allot P(~X)? Do you try divvying it up amongst competing theories or do you simply assign it to ~X?
Well, it has to be divvied up. It's just that there are so many theories encompassed in ~X that it is not easy to calculate the contribution to any specific theory except when the network is pretty clear already.
Replies from: MrHen
↑ comment by MrHen · 2010-01-28T22:02:49.016Z · LW(p) · GW(p)
Well, it has to be divvied up.
Not to be a chore, but can you explain why?
Replies from: RobinZ
↑ comment by RobinZ · 2010-01-28T22:07:55.921Z · LW(p) · GW(p)
The sum of your probabilities must add to 1. If you reduce the probability assigned to one theory, the freed probability mass must flow into other theories to preserve the sum.
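A minimal sketch of that flow of probability mass, with assumed hypothesis names and illustrative numbers:

```python
def renormalize(beliefs):
    """Rescale unnormalized weights so the probabilities sum to 1."""
    total = sum(beliefs.values())
    return {h: w / total for h, w in beliefs.items()}

beliefs = {"evolution": 0.90, "intelligent design": 0.04, "other": 0.06}

# Seeing intelligent design refuted slashes its unnormalized weight...
beliefs["intelligent design"] *= 0.05
# ...and on renormalization the freed mass flows into the remaining theories.
beliefs = renormalize(beliefs)
print(beliefs)  # evolution ~0.936, intelligent design ~0.002, other ~0.062
```

The catch-all "other" bucket is what makes the hypotheses mutually exclusive and exhaustive, so the weights can meaningfully sum to 1.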
Replies from: MrHen
↑ comment by MrHen · 2010-01-28T22:11:51.679Z · LW(p) · GW(p)
But why are we assigning probability across a spectrum of competing theories? I thought we were supposed to be assigning probability to the theories themselves.
In other words, P(X) is my best guess at X being true. P(Y) is my best guess at Y being true. In the case of two complex theories trying to explain a particular phenomenon, why does P(X) + P(Y) + P(other theories) need to equal 1?
Or am I thinking of theories that are too complex? Are you thinking of X and Y as irreducible and mutually exclusive objects?
Replies from: RobinZ
↑ comment by RobinZ · 2010-01-29T00:21:00.494Z · LW(p) · GW(p)
Or am I thinking of theories that are too complex? Are you thinking of X and Y as irreducible and mutually exclusive objects?
...yes? It's not a matter of complexity, though; the problem you might be alluding to is that the groups of theories we describe when we enunciate our thoughts can overlap.
comment by Gordon Seidoh Worley (gworley) · 2009-03-28T14:38:28.649Z · LW(p) · GW(p)
I think the most frequently useful thing I've learned is not in the content, but in the approach: the way of breaking things down, pulling together the themes, and looking at things in a way that makes the problems that seem to be there disappear. I guess that's what's been most frequently useful: when you get stuck, trying to see whether you're looking at the problem in the wrong way, since that often gets you unstuck even when you're not.
comment by SarahSrinivasan (GuySrinivasan) · 2009-03-03T08:10:27.453Z · LW(p) · GW(p)
The knowledge that communication with another brain through words and/or body language is hard. It's very lossy and almost always the source of error when I think someone has just said something absurd or incomprehensible. I may be ignorant of their train of thought, but that does not mean it's inherently random.
I use this constantly to quickly identify non-surface-level differing usages of terms, or to tell when I'm interpreting a phrase someone said differently than they mean me to. Latest concrete example: a couple hours ago, when I suggested that a D&D 4e melee was not well supported by the rules at all, and specifically that "what would the DM do?" summed up my objections. My roommate replied that he would do exactly what he always did, which didn't jibe with what I (thought I) was saying, and I immediately knew we were interpreting "melee" in different ways.
Either that's availability bias, or it comes up very frequently, since the most recent event was mere hours ago. :)
comment by Kaj_Sotala · 2009-03-01T15:07:32.562Z · LW(p) · GW(p)
Taboo Your Words and Wrong Questions together. There are numerous debates over definitions that I'd previously have happily engaged in, but now I just take a glance at them and shrug them off as trivially solvable by tabooing. I've applied this to just about everything, from philosophy and Searle's Chinese Room to politics and "is online piracy theft". I feel it has considerably clarified my thinking (and it was my second candidate for the "most useful thing" thread).
comment by JamesAndrix · 2009-03-01T04:57:56.164Z · LW(p) · GW(p)
Shut up and do the impossible, or lesser levels of "don't run away from the problem".
In particular, the AI box experiment applied to persuasion in general. I make a lot more progress in argument when I focus on changing the person's mind rather than picking apart their flawed arguments or defending mine.
comment by thomblake · 2009-03-01T03:57:57.733Z · LW(p) · GW(p)
This one comes up in philosophical discussions of ethics all the time: Sorting Pebbles into Correct Heaps
I end up referring to right vs right' a lot. Especially w.r.t. ethical naturalism. Though now that I think of it, the actual content there might have been in a follow-up post.
comment by PeteG · 2009-03-01T02:11:44.263Z · LW(p) · GW(p)
Most frequent would have to go to my avoidance of settling with cached thoughts. I notice, revise, and completely discard conclusions much more regularly and effectively when I recognize the conclusion was generated as soon as a question was asked.
comment by A1987dM (army1987) · 2013-11-17T03:19:00.406Z · LW(p) · GW(p)
Asking myself “Couldn't I get Y without X?” whenever I'm considering doing expensive/time-consuming/non-fun action X in order to get benefit Y.
comment by UnholySmoke · 2009-03-02T17:26:51.809Z · LW(p) · GW(p)
...is also the most important thing, though that wasn't necessarily the case.
I find that over the last year or so, whenever a debate ceases to be about something in the world and becomes about pure semantics, a little bulb flashes up in my head - Standard Debate! "Yes, I know it's only tiny, but the question I'm asking is, is Pluto a planet?"
comment by Cameron_Taylor · 2009-03-02T06:14:20.138Z · LW(p) · GW(p)
Inferential distances. Most frequently when calibrating the best way to explain something. More drastically on the few occasions I lacked the cached thoughts to absorb what I was hearing.
Replies from: Roko
comment by Alan · 2009-03-02T00:46:14.620Z · LW(p) · GW(p)
The most frequently useful thing I have learned from OB is to update assumptions based on new information on an ongoing basis. I think this idea ties in nicely with that of standing against maturity, if maturity is taken to mean a certain rigidity, an inflexibility of purpose and outlook.
comment by [deleted] · 2009-02-28T21:44:57.361Z · LW(p) · GW(p)
deleted
comment by Scott Alexander (Yvain) · 2009-02-28T20:57:55.117Z · LW(p) · GW(p)
How to take Joy In The Merely Real.
comment by Gleb_Tsipursky · 2014-11-03T00:27:54.845Z · LW(p) · GW(p)
The most frequently useful is map and territory. It is something that I deploy on a daily level to remind myself of the value of never believing that I know everything, and to leave myself open constantly to the possibility of making mistakes and updating my beliefs. It is also one of the most basic things I teach at Intentional Insights.
comment by Nick_Roy · 2011-04-22T03:00:51.874Z · LW(p) · GW(p)
Reminding myself that "I want to become stronger" emboldens me to make rational choices that require more courage than usual. This has caused me to take a class on entrepreneurship, for example, when otherwise I may have felt too shy to do so.
comment by [deleted] · 2010-10-05T20:21:46.040Z · LW(p) · GW(p)
Applying this skill has consistently led me to surprising and useful information. So far the gains seem to have grown with the degree of subtlety I can recognize.
comment by PhilGoetz · 2009-03-01T05:24:01.508Z · LW(p) · GW(p)
Mostly things that I've observed. For instance:
If you try to anticipate someone else's misinterpretations of what you say, you are likely to match their reply to your expectations using regular-expression-like matching and reject their response, without fully parsing it.
Often, it's better to say one insightful thing than to say one insightful thing and two less-insightful things, because the less-insightful things are easier to respond to.
Developing expertise in overcoming bias, and "trying really hard" to overcome bias, doesn't overcome bias. It can make bias worse, by becoming an excuse not to update in response to the ideas of others.
comment by kevindick · 2009-03-01T01:46:46.315Z · LW(p) · GW(p)
I had already been a Bayesian and fan of Kahneman-Tversky for 15 years before I started reading OB. So I didn't learn the basics there.
Given that, I'd have to say the most important thing I learned was to exercise unflagging discipline when thinking about values. For me, this means (a) remember to keep track of terminal versus instrumental values in any problem framing and (b) realize that most people's terminal personal values are complex and != "maximize pleasure".
comment by Vladimir_Gritsenko · 2009-03-01T00:43:48.837Z · LW(p) · GW(p)
I tend to agree with MBlume - the most frequently used principles are probably assimilated too well. But let's see... the Bayesian worldview in general made me much more interested in probability, leading me to take the most "mathy" probability course at university early on and to plan on reading Jaynes and Pearl within the next half-year. Maybe it was The Dilemma: Science or Bayes that clinched the deal?
Skimming the list - Mind Projection Fallacy, Nobody Knows What Science Doesn't Know, and Science as Attire often come to mind in contexts of what other people do wrong (I quoted the first and second principles in a few discussions). Making Beliefs Pay Rent, Mysterious Answers to Mysterious Questions, Making History Available, Cached Thoughts, Bind Yourself to Reality - these I try to apply to myself on a regular basis. In my professional capacity, I try to apply (not very successfully at the moment) the Planning Fallacy and Hold Off On Proposing Solutions.
The Robbers Cave Experiment was the post that got me hooked on OB in the first place; I have cited it many times. Finally, the posts on morality are frequently used in the sense that I return to them every time moral discussions crop up.
I also think you should include other writings. The list you gave in the previous post does not include Robin's articles (obviously), but he certainly left a mark as well. (His posts on medical spending and Politics isn't about Policy come to mind, but undoubtedly there are many more.)
But this is already a large list. Perhaps a series is in order, with the first book being "A Gentle Introduction".
comment by tim · 2009-02-28T21:18:13.577Z · LW(p) · GW(p)
For me, Guessing the Teacher's Password was a real "oh wow" moment that changed the way I went about learning and approaching my classes. It definitely awoke an awareness that I, myself, was simply guessing the password most of the time, and it illuminated that this was the way the lecture system primarily functioned in my classes. As a result, I feel I mentally "engage" more with the material presented and think critically not only about the material itself but on the meta-level of how and why the material is being presented the way it is.
This was most apparent in an abnormal psychology class where we were presented with case studies containing the story of an individual and then meant to diagnose that person according to DSM-IV criteria. You might read about an eccentric individual who dresses in eccentric fashions, has occasional muscle tremors, and rambles incoherently, jumping from topic to topic. "Aha! Schizophrenia, disorganized type!" And that was that.
There was no follow-up on exactly what that meant. There was simply a bucket of symptoms with a label on it, and lo and behold, this person had a cluster of symptoms from that bucket.
Though schizophrenia may be a poor example due to science's weak understanding of it, this process was repeated across the board regardless of the mental ailment. This may simply have been due to the broad nature of the class and time constraints, but I remember coming away with nothing but a bunch of labels and little understanding of what was actually happening.
link: http://www.overcomingbias.com/2007/08/guessing-the-te.html
comment by Marshall · 2009-02-28T19:39:12.344Z · LW(p) · GW(p)
In "teaching" situations with adults I often try to impart the modesty argument. I want my audience to become less sure of themselves, less sure of their judgements, less sure of their brains, less ready to disagree and thus more open to suggestions.This is a theme that repeats itself many times. Progress is rather uncertain so maybe this isn't very useful! Darn! Perhaps I should change course and go onto signalling and status issues instead, so we can arrive at the truth about ourselves.
Replies from: Marshall
↑ comment by Marshall · 2009-03-01T08:19:45.177Z · LW(p) · GW(p)
On second thoughts maybe the most useful and frequently used thing is the daily discipline and the daily pleasure of reading OB - it primes the mind and in a sense gives membership to an international club of people who want to think about things - and reach many different answers.