The Most Important Thing You Learned

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-27T20:15:59.430Z · LW · GW · Legacy · 99 comments

My current plan does still call for me to write a rationality book - at some point, and despite all delays - which means I have to decide what goes in the book, and what doesn't.  Obviously the vast majority of my OB content can't go into the book, because there's so much of it.

So let me ask - what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind?  If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others.  If it was striking enough that you remember the exact post where you "got it", include that information.  If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so.  To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month - on the other hand, if you can't remember it even after a year, then it's probably not the most important thing.

Please also distinguish this question from "What was the most frequently useful thing you learned, and how did you use it?" and "What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?"  I'll ask those on Saturday and Sunday.

PS:  Do please think of your answer before you read the others' comments, of course.

comment by tim · 2009-02-27T20:32:44.527Z · LW(p) · GW(p)

"the map is not the territory" has stuck in my mind as one of the over-arching principles of rationality. it reinforces the concept of self-doubt, implies one should work to make their map conform more closely to the territory, and is invaluable when one believes to have hit a cognitive wall. there are no walls, just the ones drawn on your map.

the post, "mysterious answers to mysterious questions" is my favorite post that dealt with this topic, though it has been reiterated (and rightly so) over a multitude of postings.

link: http://www.overcomingbias.com/2007/08/mysterious-answ.html

Replies from: crazypaki
comment by crazypaki · 2009-02-27T22:55:03.590Z · LW(p) · GW(p)

I second Tim's post. Mysterious Answers and the "map vs territory" analogy have had a huge influence on my thinking.

comment by caiuscamargarus · 2009-02-28T01:07:31.690Z · LW(p) · GW(p)

"Newcomb's Problem and Regret of Rationality" is one of my favorites. For all the excellent tools of rationality that stuck with me, this is the one that most globally encompassed Eliezer's general message: that rationality is about success, first and foremost, and if whatever you're doing isn't getting you the best outcome, then you're not being rational, even if you appear rational.

comment by JulianMorrison · 2009-02-28T01:21:04.857Z · LW(p) · GW(p)

"A rationalist should win". Very high-level meta-advice and almost impossible to directly apply, but it keeps me oriented.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-03-12T17:26:40.989Z · LW(p) · GW(p)

I agree that, on average, improvements in rationality lead to more winning, but I'm not convinced that every improvement in rationality does. It seems possible that a non-trivial number make winning harder.

comment by EniScien · 2022-06-05T19:43:07.762Z · LW(p) · GW(p)

I am late to vote, to put it mildly, but nonetheless... The Power of Intelligence. This is probably what impressed me the most and changed my Spock-like attitude towards intelligence. From memory: a gun is stronger than the brain - as if people were born with guns. Social skills are more important than intelligence - as if charisma resided in the kidneys. Money is more powerful than the mind - as if it grew on trees. And yet people ask how an AI could make money when it has only a mind. A million years ago, soft creatures roamed the savanna, and you would have called it absurd to claim that they, and not the lions, would come to rule the planet. The soft creatures have no armor, claws or venom. How could they work metal when they don't breathe fire? If you say they will split the atomic nucleus, that is simply nonsense - they are not even radioactive. And evolution will not have time to act here, because no one can reproduce fast enough to get all these results in just a million years. But the brain is more dangerous than nuclear weapons, because the brain produces nuclear weapons, and things more dangerous still. Look at the difference between man and monkey - and now tell me what artificial intelligence can't do. Cure all diseases and old age, invent a unified field theory, solve the millennium problems, create nanotechnology, colonize other galaxies. Do you really think it can do so little?

P.S. I chose this not only because, besides being important, it is also impressive, but because, unlike the others, it unites and conveys not one but many important thoughts at once: the false separation of social skills from intellect, and the idea that an intelligent character must lack them; the capabilities of AI, the enormous speed of its development, and a degree of influence comparable to the emergence of a new species; the correct intuition about intelligence as the difference between a man and a monkey, not between Einstein and a peasant; and the understanding that intelligence is stronger than any technology, because it is what creates technology - hence both the possibility of a good future and the dangers of AI.

comment by CatDancer · 2009-02-27T21:21:46.503Z · LW(p) · GW(p)

Your explanation / definition of intelligence as an optimization process. (Efficient Cross-Domain Optimization)

That was a major "aha" moment for me.

comment by badger · 2009-02-27T23:30:02.977Z · LW(p) · GW(p)

The most important thing I can recall is conservation of expected evidence. In particular, I'm thinking of Making Beliefs Pay Rent and Conservation of Expected Evidence. We need to see a greater commitment to deciding in advance which direction new evidence will shift our beliefs.
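
(One-line statement of the law, for reference - notation mine, not badger's: for a hypothesis H and anticipated evidence E,

    P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E)

so the prior is a weighted average of the possible posteriors. If you expect E to shift your belief upward, you must expect ¬E to shift it downward by a compensating amount; no net movement can be anticipated.)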

Most frequently referenced concepts:

  1. Mind projection fallacy and "The map is not the territory."
  2. "The opposite of stupidity is not intelligence."

comment by infotropism · 2009-02-27T21:40:16.342Z · LW(p) · GW(p)

Engines of Cognition was the final thing I needed to assimilate the idea that nothing's for free: intelligence does not magically let you do anything; it has costs and limitations, and obeys the second law of thermodynamics. Or rather, cognition and thermodynamic engines both obey the same underlying principle.

http://www.overcomingbias.com/2008/02/second-law.html

comment by Scott Alexander (Yvain) · 2009-02-28T16:09:34.884Z · LW(p) · GW(p)

The most important thing I learned from Overcoming Bias was to stop viewing the human mind as a blank slate, ideally a blank slate, an approximation to a blank slate, or anything with properties even slightly resembling blankness or slateness. The rest is just commentary - admittedly very, very good commentary.

The posts I associate with this are everything on evolutionary psychology such as Godshatter (second most important thing I learned: study evolutionary psychology!), the free will series, the "ghost in the machine" and "ideal philosopher of perfect emptiness" series, and the Mind Projection Fallacy.

comment by Emile · 2009-02-27T23:02:46.104Z · LW(p) · GW(p)

The biggest "aha" post was probably the one linking thermodynamics to beliefs ( The Second Law of Thermodynamics, and Engines of Cognition, and the following one, Perpetual Motion Beliefs ), because it linked two subjects I knew about in a surprising and interesting way, deepening my understanding of both.

Apart from that, "Tsuyoku Naritai" was the one that got me hooked, though I didn't really "learn" anything from it - I like the attitude it portrays.

Replies from: SilasBarta
comment by SilasBarta · 2009-03-01T00:10:37.688Z · LW(p) · GW(p)

I agree about Engines of Cognition. It got me really interested in the parallels between information theory and thermodynamics and led me to start reading a lot more about the former, including the classic Jaynes papers. I think it gave me a deeper understanding of why e.g. the Carnot limit holds, and led me to read about the interesting discovery that the thermodynamic availability (extractable work) of a system is equal to its Kullback-Leibler divergence (a generalization of informational entropy) from its environment.
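
(For reference, one common formulation of the result Silas mentions - notation mine: for a system whose current distribution over microstates is p, coupled to a heat bath at temperature T with equilibrium distribution q, the maximum extractable work is

    W_max = kT · D_KL(p‖q) = kT · Σ_x p(x) ln [p(x)/q(x)]

so the more your knowledge of the system's state departs from the equilibrium description, the more work you can, in principle, extract.)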

Second for me would have to be Artificial Addition, which helped me understand why attempts to "trick" a system into displaying intelligence are fundamentally misguided.

comment by MichaelGR · 2009-02-28T05:15:11.761Z · LW(p) · GW(p)

"Obviously the vast majority of my OB content can't go into the book, because there's so much of it."

I know this is not what you asked for, but I'd like to vote for a long book. I feel that the kind of people who will be interested by it (and readers of OB) probably won't be intimidated by the page count, and I know that I'd really like to have a polished paper copy of most of the OB material for future reference. The web just isn't quite the same.

In short: Something that is Godel, Escher, Bach-like in length probably wouldn't be a problem, though maybe there are good reasons to keep it shorter other than "there is too much material".

comment by Darmani · 2009-02-28T02:50:36.082Z · LW(p) · GW(p)

I'm going to have to choose "How to Convince Me That 2 + 2 = 3." It did quite a lot to illuminate the true nature of uncertainty.

http://www.overcomingbias.com/2007/09/how-to-convince.html

The ideas in it are certainly not the most important, but another really striking post for me is "Surprised by Brains." The lines "Skeptic: Yeah? Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day. / Believer: The size of a planet? (Thinks.) Um... ten percent." in particular are really helpful in fighting biases that cause me to regard conservative estimates as somehow more virtuous.

comment by whpearson · 2009-02-28T00:37:31.144Z · LW(p) · GW(p)

Taboo - very useful in discussion, I believe.

comment by Kaj_Sotala · 2009-03-01T11:18:54.546Z · LW(p) · GW(p)

A while back, I posted on my blog two lists with the posts I considered the most useful on Overcoming Bias so far.

If I just had to pick one? That's tough, but perhaps Burdensome Details. The skill of both cutting away all the useless details from predictions, and seeing the burdensome details in the predictions of others.

An example: Even though I was pretty firmly an atheist before, arguments like "people have received messages from the other side, so there might be a god" wouldn't have appeared structurally in error. I would have questioned whether or not people really had received messages from the dead, but not the implication. Now I see the mistake - "there's something after death" and "there is a supernatural entity akin to the traditional Christian god" may be hypotheses that are traditionally (in this culture) associated with the same memeplex, but as hypotheses they're entirely distinct.

Replies from: RobinZ
comment by RobinZ · 2009-07-11T12:13:11.879Z · LW(p) · GW(p)

I would vote for "Burdensome Details" as well.

comment by Z_M_Davis · 2009-02-27T23:47:37.069Z · LW(p) · GW(p)

This is to nominate "The Bottom Line" / "A Rational Argument."

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-02-28T00:12:39.331Z · LW(p) · GW(p)

I second this one, also as related to Making Beliefs Pay Rent: what you think and what you present as argument need to be valid - they need to actually have the strength as evidence that they claim to have. Failure to abide by this principle results in empty or actively stupid thoughts.

comment by Paul Crowley (ciphergoth) · 2009-02-28T08:51:04.850Z · LW(p) · GW(p)

Hard to pick a favourite, of course, but there's a warning against confirmation bias - cautioning us against standing firm, urging us to move with the evidence like grass in the wind - that has stuck with me.

On the general discussions of what sort of book I want, I want one no more than a couple of hundred pages long which I can press into the hands of as many of my friends as possible. One that speaks as straightforwardly as possible, without all the self-aggrandizing eastern-guru type language...

comment by AnnaSalamon · 2009-02-27T23:35:20.791Z · LW(p) · GW(p)

A near-tie. Either:

(1) The Bottom Line, or

(2) Realizing there's actually something at stake that, like, having accurate conclusions really matters for (largely, Eliezer's article on heuristics and biases in global catastrophic risks, which I read shortly before finding OB), or

(3) Eliezer's re-definition of humility in "12 virtues", and the notion in general that I should aim to see how far my knowledge can take me, and to infer all I can, rather than just aiming to not be wrong (by erring on the side of underconfidence).

(1) wasn't a new thought for me, but I wasn't applying it consistently, and Eliezer's meditations on it helped. (2) and (3) more or less were new to me. I've gotten the most out of some of the most basic OB content, and probably continue to get the most out of reflecting on it.

comment by Cyan · 2009-02-27T22:35:52.912Z · LW(p) · GW(p)

I'm going to go with "Knowing About Biases Can Hurt People", but only because I got the Mind Projection Fallacy straight from Jaynes.

comment by orthonormal · 2009-03-23T22:23:23.940Z · LW(p) · GW(p)

The most important thing for me, basically, was the morality sequence and in particular The Moral Void. I was worrying heavily about whether any of the morals I valued were justified in a universe that lacked Intrinsic Meaning. The Morality sequence (and Nietzsche, incidentally) helped me internalize that it's OK after all to value certain things— that it's not irrational to have a morality— that there's no Universal Judge condemning me for the crime of parochialism if I value myself, my friends, humanity, beauty, knowledge, etc— and that even my flight from value judgments was the result of a slightly more meta value judgment.

Seems probable to me that many potential readers aren't currently too worried about the Moral Void, but those who are need a pretty substantial push in this direction.

comment by CronoDAS · 2009-02-28T03:56:12.082Z · LW(p) · GW(p)

"Shut up and multiply."

comment by Vladimir_Nesov · 2009-02-27T22:26:09.135Z · LW(p) · GW(p)

I refuse to name just one thing. I can't rank a number of ideas by how important they were relative to each other, they were each important in their own right. So, to preserve the voting format, I'll just split my suggestions into several comments.

Some notes in general. For the first year I used to partially misinterpret some of your essays, but after I got a better grasp of the underlying ideas, I came to see many of the essays as not contributing any new knowledge. This is not to say that the essays were unimportant: they act as exercises, exploring the relevant ideas in excruciating detail, which makes them ideal for forming a solid intuitive understanding of those ideas - a level of ownership over habits of thought without which it hardly makes sense to bother learning them. Focusing attention on each of the explored facets of rationality allows one to think about extending and adapting them to one's own background. At the same time, I think the verbosity in your writing should be significantly reduced.

Replies from: Marshall, Vladimir_Nesov, Vladimir_Nesov, Vladimir_Nesov, Vladimir_Nesov
comment by Marshall · 2009-02-28T07:37:50.377Z · LW(p) · GW(p)

I too would like to support more brevity in your writings - but maybe that just isn't your style.

comment by Vladimir_Nesov · 2009-02-28T02:20:45.100Z · LW(p) · GW(p)

Overcoming Bias: Thou Art Godshatter: understanding how intricate human psychology is, and how one should avoid inventing simplistic Fake Utility Functions for human behavior. I used to make this mistake. Also relevant: Detached Lever Fallacy, how there's more to other mental operations than meets the eye.

comment by Vladimir_Nesov · 2009-02-28T02:19:52.009Z · LW(p) · GW(p)

Prices or Bindings? and to a lesser extent (although with a simpler formal statement) Newcomb's Problem and The True Prisoner's Dilemma: these show just how insanely alien the rational thing to do can be, even when it's directed at your own cause. You may need to conscientiously avoid preventing the world's destruction, refuse free money, and trade a billion human lives for one paperclip.

comment by Vladimir_Nesov · 2009-02-28T02:21:20.946Z · LW(p) · GW(p)

The Simple Truth followed by A Technical Explanation of Technical Explanation, given some familiarity with probability theory, formed my basic understanding of the Bayesian perspective on probability as quantity of belief. The most confusing point of Technical Explanation, involving a tentacle, was amended in the post about antiprediction on OB. It's very important to get this argument early on, as it forms the language for thinking about knowledge.

Replies from: AspiringRationalist
comment by NoSignalNoNoise (AspiringRationalist) · 2012-04-17T19:26:16.262Z · LW(p) · GW(p)

When I first read "The Simple Truth," I didn't really get it. I realized just how much I didn't get it when I re-read it after reading some of the sequences. I think it would work best as a review-of-what-you-just-learned rather than as an introduction.

comment by Vladimir_Nesov · 2009-02-28T02:18:45.985Z · LW(p) · GW(p)

Righting a Wrong Question: how everything you observe calls for understanding, how even utter confusion or a lie can communicate positive knowledge. There are always causes behind any apparent confusion, so if the situation doesn't make sense when interpreted the way it's supposed to be, you can always step back and see how it really works, even if you are not supposed to look at it that way. For example: don't trust your thought; instead, catch your own mind in the process of making a mistake.

comment by anonym · 2009-03-01T00:23:05.391Z · LW(p) · GW(p)

There are no genuine mysteries, only things that I am ignorant or confused about.

comment by rhollerith · 2009-02-28T03:26:35.277Z · LW(p) · GW(p)

The most important and useful thing I learned from your OB posts, Eliezer, is probably the mind-projection fallacy: the knowledge that the adjective "probable" and the adverb "probably" always make an implicit reference to an agent (usually the speaker).

Honorable mention: the fact that there is no learning without (inductive) bias.

comment by Nominull · 2009-02-28T01:14:15.489Z · LW(p) · GW(p)

It's hard to answer this question, given how much of your philosophy I have incorporated wholesale into my own, but I think it's the fundamental idea that there are Iron Laws of evidence, that they constrain exactly what it is reasonable to believe, and that no mere silly human conceit such as "argument" or "faith" can change them even in the millionth decimal place.

comment by leoger · 2009-02-27T23:29:50.348Z · LW(p) · GW(p)

The most important thing I learned may have been how to distinguish actual beliefs from meaningless sounds that come out of our mouths. Beliefs have to pay the rent. (http://www.overcomingbias.com/2007/07/making-beliefs-.html)

comment by GuySrinivasan · 2009-02-27T21:15:15.817Z · LW(p) · GW(p)

If my priors are right, then genuinely new evidence is a random walk. Especially: when I see something complicated that I think is new evidence, and the story behind it seems to obviously confirm my beliefs in every particular, I need to be very suspicious.
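
(A toy illustration of the "random walk" claim - my own sketch, not from the linked posts. By conservation of expected evidence, under your own prior the expected value of your next posterior equals your current belief, so honest updates are unpredictable in direction:)

    # Two hypotheses about a coin: biased towards heads (P(heads)=0.7) or fair (0.5).
    def posterior(prior_biased, heads):
        like_biased = 0.7 if heads else 0.3
        like_fair = 0.5
        joint = prior_biased * like_biased
        return joint / (joint + (1 - prior_biased) * like_fair)

    prior = 0.5
    p_heads = prior * 0.7 + (1 - prior) * 0.5  # chance of heads under the prior
    expected = p_heads * posterior(prior, True) + (1 - p_heads) * posterior(prior, False)
    print(expected)  # 0.5 - exactly the current belief: no anticipated drift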

http://www.overcomingbias.com/2007/08/conservation-of.html

http://www.overcomingbias.com/2007/09/conjunction-fal.html

http://www.overcomingbias.com/2007/09/rationalization.html

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-02-27T22:31:37.170Z · LW(p) · GW(p)

I didn't get your point here - could you elaborate (re "evidence is a random walk")?

comment by Vladimir_Golovin · 2009-02-28T13:50:21.658Z · LW(p) · GW(p)

I've been enjoying the majority of OB posts, but here's the list of ideas I consider the most important for me:

  1. Intelligence as a process steering the future into a constrained region.

  2. The map / territory distinction.

  3. The use of probability theory to quantify the degree of belief.

comment by Lawliet · 2009-02-28T11:36:31.547Z · LW(p) · GW(p)

Is this to be a book that somebody could give to their grandmother and expect the first page to convince her that the second is worth reading?

comment by PeteG · 2009-02-27T22:55:49.984Z · LW(p) · GW(p)

The Wrong Question sequence was amazing. One of the very unintuitive sequences that greatly improved my categorization methods. Especially with the 'Disguised Queries' post.

comment by dfranke · 2009-02-27T21:11:25.391Z · LW(p) · GW(p)

Your debunking of philosophical zombieism really stuck with me. I don't think I've ever done a faster 180 on my stance on a philosophical argument.

comment by Gleb_Tsipursky · 2014-11-03T00:25:03.408Z · LW(p) · GW(p)

The most important thing I learned was the high value of the outside perspective. It is something that I strive to deploy deliberately through getting into intentional friendships with other aspiring rationalists at Intentional Insights. We support each other’s ability to achieve goals in life through what we came to call a goal buddy system, providing an intentional outside perspective on each other’s thinking about life projects and priorities.

comment by dumbshow · 2009-03-26T23:24:56.967Z · LW(p) · GW(p)

definitely "materialism"...especially the idea that there are no ontologically basic mental entities.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-26T23:46:37.513Z · LW(p) · GW(p)

That whole post is good, but that idea is due to Richard Carrier.

comment by johnbr · 2009-03-02T17:33:13.055Z · LW(p) · GW(p)

The most important thing for me, is the near-far bias - even though that's a relatively recent "discovery" here, it still resonates very well with why I argue with people about things, and why people who I respect argue with each other.

comment by UnholySmoke · 2009-03-02T17:23:49.651Z · LW(p) · GW(p)

  1. The Blegg / Rube series, which I'll still list as separate from...
  2. The Map / Territory distinction
  3. An Alien God

All things that, if pushed with the right questions, I'd have come to on my own, but all three put very beautifully.

comment by fostiak · 2009-03-01T21:44:22.221Z · LW(p) · GW(p)

Every Cause Wants To Be A Cult, Science as Attire, The Simple Truth

comment by JamesAndrix · 2009-02-28T16:36:17.683Z · LW(p) · GW(p)

That clear thinking can take you from obvious but wrong to non-obvious but right, and on issues of great importance. That we frequently incur great costs just because we're not really nailing things down.

Looking over the list of posts, I suggest the ones starting with "Fake".

comment by prase · 2009-02-28T10:04:49.348Z · LW(p) · GW(p)

The series of posts about "free will". I was always a determinist but somehow refused to think about "free will" in detail, holding a belief that determinism and free will are compatible for some mysterious reason. OB helped me to see things clearly (now it all seems pretty obvious).

comment by Swimmy · 2009-02-28T01:37:21.016Z · LW(p) · GW(p)

I vote for "Conservation of Expected Evidence." The essential answer to supposed evidence from irrationalists.

Second place, either "Occam's Razor" or "Decoherence is Falsifiable and Testable" for the understandable explanation of technical definitions of Occam's Razor.

comment by jimrandomh · 2009-02-27T23:34:11.475Z · LW(p) · GW(p)

The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the "knowing biases can hurt you" problem, and while it's obvious if put in formal terms, it's counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-06-15T21:07:35.260Z · LW(p) · GW(p)

"The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P"

That sort of makes sense if what you mean is "whatever we humans think about A has no effect on the truth or falsehood of P in a Platonic sense", but surely showing that A is invalid ought to change how likely you think it is that P is true?

"and showing that P is true has no effect on the validity of A."

Similarly, if P is actually true, a random argument that concludes "P is true" is more likely to be valid than a random argument that concludes "P is false". So showing that P is true ought to make you think A is more or less likely to be valid, depending on its conclusion.

(Given that this comment was voted up to 3 and nobody gave a counterargument, I wonder if I'm missing something obvious.)
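
(A toy model of Wei Dai's point - mine, purely illustrative: suppose valid arguments always reach the true conclusion, while invalid arguments conclude at random. Then among arguments concluding "P is true", validity is more common when P actually is true:)

    import random

    random.seed(0)
    when_true = [0, 0]   # [count, valid] among "P true" arguments, when P is true
    when_false = [0, 0]  # same, when P is false

    for _ in range(100_000):
        p = random.random() < 0.5       # whether P is actually true
        valid = random.random() < 0.3   # whether the argument is valid
        concludes_p = p if valid else (random.random() < 0.5)
        if concludes_p:                 # consider only arguments concluding "P is true"
            bucket = when_true if p else when_false
            bucket[0] += 1
            bucket[1] += valid

    print(when_true[1] / when_true[0])    # ~0.46: P's truth raises P(A is valid)
    print(when_false[1] / when_false[0])  # 0.0 in this all-or-nothing model

(Real arguments aren't all-or-nothing valid, but the direction of the update survives softening the model.)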

Replies from: jimrandomh, Benquo
comment by jimrandomh · 2011-06-15T21:27:47.817Z · LW(p) · GW(p)

I wrote that two years ago, and you're right that it's imprecise in a way that makes it not literally true. In particular, if a skilled arguer gives you what they think is the best argument for a proposition, and the argument is invalid, then the proposition is likely false. What I was getting at, I think, is that my intuition used to vastly overestimate the correlation between the validity of arguments encountered and the truth of propositions they argue for, because people very often make bad arguments for true statements. This made me reject things I shouldn't have, and easily get sidetracked into dealing with arguments too many layers removed from the interesting conclusions.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-06-15T21:53:29.941Z · LW(p) · GW(p)

Ok, that makes a lot more sense. Thanks for the clarification.

comment by Benquo · 2011-06-15T21:34:51.442Z · LW(p) · GW(p)

3 is still a small number. If it were 10+ then you should worry. I'm confused by this too.

The nearest correct idea I can think of to what Jim actually said, is that if you have a proposition P with an associated credence based on the available evidence, then finding an additional but invalid argument A shouldn't affect your credence in P. The related error is assuming that if you argue with someone and are able to demolish all their arguments, that this means that you are correct, and giving too little weight to the possibility that they are a bad arguer with a true opinion. Jim, is that close to what you meant?

EDIT: Whoops, didn't see Jim's response. But it looks like I guessed right. I've also made the related error in the past, and this quote from Black Belt Bayesian was helpful in improving my truth-finding ability:

"To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

comment by jimmy · 2009-02-27T22:42:11.862Z · LW(p) · GW(p)

"You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions"

It wasn't much of an "aha!" moment - when I first read it, I thought something along the lines of "Of course higher standards are possible, but if no one can find flaws in your argument, you're doing pretty well." But the more I thought about it, the more I realized that EY had made a good point. I later stumbled upon flaws in my long-standing arguments that I had overlooked, yet no one had called me on them.

Not only was the standard lower than I had previously realized, but it is entirely possible for someone to 1) not believe you, 2) not be able to put their refutation into words, and 3) still be right.

http://www.overcomingbias.com/2008/09/refutation-prod.html

Replies from: billswift
comment by billswift · 2009-02-28T06:02:07.316Z · LW(p) · GW(p)

The big problem with relying on someone else to save you is "Why would they bother?". No one is likely to be as motivated to find mistakes in your beliefs as you are (or at least as you should be).

comment by [deleted] · 2009-02-27T21:36:59.995Z · LW(p) · GW(p)

I've been reading OB for a comparatively short time, so I haven't yet been through the vast majority of your posts. But "The Sheer Folly of Callow Youth" really puts into perspective the importance of truth-seeking and why it's necessary.

Quote: "Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can't even make "best guesses" about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won't get it right, until your confusion dissolves. Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy. Similarly, you cannot come up with clever reasons why the gaps in your model don't matter. You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it - like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula. You can't pick up the gap and manipulate it."

Link: http://www.overcomingbias.com/2008/09/youth-folly.html

comment by steven0461 · 2009-02-27T20:35:38.013Z · LW(p) · GW(p)

How to make sense out of metaethics. I would particularly name The Meaning of Right.

comment by Gordon Seidoh Worley (gworley) · 2009-03-28T03:29:19.267Z · LW(p) · GW(p)

For me this is a tough question, since I've been reading your stuff for nearly 10 years now. Thinking only of OB, I'd have to say it was the quantum physics material - but only because I had already encountered essentially everything else in one form or another, so your writing was refining the presentation of what I had already, in general, learned from you.

comment by loqi · 2009-03-05T22:46:20.278Z · LW(p) · GW(p)

Clearing up my meta-ethical confusion regarding utilitarianism. From The "Intuitions" Behind "Utilitarianism":

Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.

Realizing that the expression of any set of values must inherently "sum to 1" was quite an abrupt and obviously-true-in-retrospect revelation.

comment by Ziphead · 2009-02-28T21:36:45.153Z · LW(p) · GW(p)

This is really from times before OB, and might be all too obvious, but the most important thing I've learned from your writings (so far) is Bayesian probability. I had come in touch with the concept previously, but I didn't fully understand it, or why it was very important, until I read your early explanatory essays on the topic. When you write your book, I'm sure that you will not neglect to include really good explanations of these things, suitable for people who have never heard of them before, but since no one else has mentioned it in this thread so far, I thought I might.

comment by Marshall · 2009-02-28T07:23:53.646Z · LW(p) · GW(p)

1) I learned to reconcile my postmodernist tendencies with physical reality. Sounds cryptic? Well, let's say I learned to appreciate science a little more than I did.

2) I learned to think more "statistically" and probabilistically - though I didn't learn to multiply.

3) Winning is also a pretty good catch-word for an attitude of mind - and maybe a better title than less-wrong.

Replies from: Marshall
comment by Marshall · 2009-02-28T07:31:25.640Z · LW(p) · GW(p)

4) Oh! - And I stopped buying a lottery ticket.

5) That absence of evidence is evidence of absence - the symmetry of confirming and disconfirming evidence

Replies from: Marshall
comment by Marshall · 2009-02-28T07:41:34.094Z · LW(p) · GW(p)

6) Something I didn't learn was the long list of cognitive biases. It's ok to know about them - but I don't think they are very usable in practice. The one I like best is "overconfidence".

Replies from: Marshall
comment by Marshall · 2009-02-28T07:59:20.831Z · LW(p) · GW(p)

7) Something else I didn't learn or understand was your stance on ethics. I am going to rush and take the child off the rails as well - but all else is muddled mud to me.

comment by Nathan · 2009-02-28T03:28:54.537Z · LW(p) · GW(p)

"Thou art Godshatter" -- this was one of the first posts I read, and it made the entire heuristics and biases program feel more immediate / compelling than before

comment by Cameron_Taylor · 2009-02-28T02:00:51.904Z · LW(p) · GW(p)

Expecting Short Inferential Distances

The 'Shut up and do the impossible' sequence.

Newcomb's problem.

Godshatter.

Einstein's arrogance.

Joy in the merely real.

The Cartoon Guide to Löb's Theorem.

Science isn't strict enough.

The bottom line.

comment by John_Maxwell (John_Maxwell_IV) · 2009-02-28T01:12:40.424Z · LW(p) · GW(p)

Well, I'd say the most important thing I learned was to be less confident when taking a stand on controversial topics. So to that end, I'll nominate

  1. Twelve Virtues of Rationality
  2. Politics is the Mind-Killer

comment by thomblake · 2009-02-27T20:49:34.569Z · LW(p) · GW(p)

Thanks for the link to the list - I keep forgetting that exists. And thanks again to Andrew Hay for making it.

That said, I don't think I would say I learned anything from your OB posts, at least about rationality. I think I did learn about young Eliezer and possibly about aspiring rationalists in general. If that's a reasonable topic, then I'd have to suggest something in the "Young Eliezer" sequence, possibly My Wild and Reckless Youth.

There are several variations on the questions you're asking that I think I could find answers to:

"Which post do you think other people should read so that they will learn something?" (That might be the same as your third question) The Failures of Eld Science

"Which post did you enjoy the most?" Three Worlds Collide - if that counts as 1 post

"Which post do you recommend to people most frequently?" Zombies: the Movie

"Which post do you refer to most frequently in philosophical discussions?" Sorting Pebbles into Correct Heaps

It seems any 'favorite' type question will turn up fiction from me.

Philosophers are notably bad at following directions.

comment by Bongo · 2009-02-27T20:32:46.924Z · LW(p) · GW(p)

I liked philosophy before OB, so I knew you were supposed to question everything. OB revealed new things to question, and taught me to expect genuine answers.

Replies from: Technologos
comment by Technologos · 2009-03-28T05:16:07.844Z · LW(p) · GW(p)

In fact, I'd say that OB reinforced in a more concrete way the belief I got from Wittgenstein that not all questions are meaningful (in particular, the ones for which there cannot be "genuine answers").

comment by Nick_Roy · 2011-04-22T02:53:44.757Z · LW(p) · GW(p)

"I suspect that most existential angst is not really existential. I think that most of what is labeled 'existential angst' comes from trying to solve the wrong problem", from Existential Angst Factory.

comment by Akiyama · 2009-03-18T10:59:50.918Z · LW(p) · GW(p)

I don't know about "most important", but the one post that really stuck in my mind was Archimedes's Chronophone. I spent a while thinking about that one.

comment by kurige · 2009-03-14T11:01:49.462Z · LW(p) · GW(p)

Just did a quick search of this page and it didn't turn up... so, by far, the most memorable and referred-to post I've read on OB is Crisis of Faith.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-03-14T14:41:52.425Z · LW(p) · GW(p)

Did practicing the Crisis of Faith technique cause you to change your mind about anything?

comment by Psy-Kosh · 2009-03-02T21:44:04.812Z · LW(p) · GW(p)

I really can't think of any one single thing. Part of it is that I think I hadn't yet "de-hindsight-biased" myself (still haven't, except now sometimes I can catch myself as it's happening and say "No! I didn't know that before, stop trying to pretend that I did.").

Another part is that lots of posts helped crystallize/sharpen notions I'd been a bit fuzzy on. Part of it is just, well, the total effect.

Stuff like the Evolution sequence and so on were useful to me too.

If I had to pick one thing that stands out in my mind though, I guess I'd have to say the consciousness sequence. Specifically, making it much easier for me to imagine the day that it could be explained (REALLY explained, in the sense of "ooooh, now it really does make sense") in terms of perfectly ordinary stuff.

Bits and pieces I'd thought out on my own, but, again, you brought home the point strongly.

That's the best I can think of as far as specific things. The rest, well, it's the effect of all of it on me, rather than any single one that I can point to.

EDIT: oh, really really important thing: your definition of, well, definitions. i.e., the whole clusters in thingspace, natural boundaries around them, etc.

comment by aleiby · 2009-03-02T04:08:34.341Z · LW(p) · GW(p)

The idea that the purpose of the law is to provide structure for optimization.

I'm not sure this is the most important thing I've learned yet, but it's the only really 'aha' moment I've had in the admittedly small sample I've been able to catch up on thus far.

I find I think about this most often as I contemplate the effect traffic laws and implements have in shaping my 20 minute optimization exercise in getting to work each morning.

comment by gmweinberg · 2009-02-28T20:29:51.517Z · LW(p) · GW(p)

I'm not sure I've "learned" anything. You've largely convinced me that we don't really "know" anything but rather have varying degrees of belief, but I believed that to some degree before reading this site and am not 100% convinced of it now.

The most important thing I can think of that I would have said is almost certainly wrong before and that I'd say is probably right now is that it is legitimate to multiply the utility of a possible outcome by its probability to get the utility of the possibility.
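
(Spelled out, for readers new to it - the standard formula, not gmweinberg's wording: an action with possible outcomes o_i, each having probability p_i and utility u(o_i), is valued at

    EU = Σ_i p_i · u(o_i)

so a 10% chance of an outcome worth 100 utilons contributes 10 utilons - the same as a certainty worth 10.)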

comment by pjeby · 2009-02-27T23:57:23.435Z · LW(p) · GW(p)

Intelligence as a blind optimization process shaping the future - esp. in comparison with evolution - and how our built-in anthropomorphism makes us see intelligence as sighted, when in fact ALL intelligence is blind. Some intelligence processes are just a little less blind than others.

(Somewhat offtopic, but related: some studies show that the number of "good" ideas produced by any process is linearly proportional to the TOTAL number of ideas produced by that process... which suggests that even human intelligence searches blindly, once we go past the scope of our existing knowledge and heuristics.)

comment by gwern · 2009-02-27T22:18:44.233Z · LW(p) · GW(p)

I'm going to echo CatDancer: for me the most valuable insight was that a little information goes a very long way. From the example of the simulated beings breaking out, to the Bayescraft interludes, to the few observations and lots of cogitations in Three Worlds Collide, to GuySrinivasan's random-walk point, I've become more convinced that you can get a surprising amount of utility out of a little data; this changes other beliefs, like my assessment of how possible a rapid AI takeoff is.

comment by FiftyTwo · 2012-12-09T05:21:41.039Z · LW(p) · GW(p)

Generalising from one example.

comment by beoShaffer · 2011-06-15T21:20:13.892Z · LW(p) · GW(p)

Making Beliefs Pay Rent

comment by Madbadger · 2009-07-11T05:02:07.747Z · LW(p) · GW(p)

The explanation of Bayes Theorem and pointer to E. T. Jaynes. It gave me a statistics that is useful as well as rigorous, as opposed to the gratuitously arcane and not very useful frequentist stuff I was exposed to in grad school.

Second would be the quantum mechanics posts - finally an understandable explanation of the MW interpretation.

comment by DonGeddis · 2009-05-01T21:31:01.067Z · LW(p) · GW(p)

#1: Teacher's Password http://www.overcomingbias.com/2007/08/guessing-the-te.html

I happened to have a young child about to enter elementary school when I read that, and it crystallized my concern about rote memorization. I forced many fellow parents to read the essay as well.

I realize you mostly care about #1, but just for more data: #2 I'd probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages. For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.

comment by Vladimir_Nesov · 2009-02-28T01:49:46.431Z · LW(p) · GW(p)

Priors as Mathematical Objects: a prior is not something arbitrary, a state of lack-of-knowledge, nor can sufficient evidence turn an arbitrary prior into precise belief. A prior is the whole algorithm for what to do with evidence, and a bad prior can easily turn evidence into stupidity.

P.S. I wonder if this post was downvoted exclusively because of Eliezer's administrative remark, and not because of its content.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-02-28T02:00:51.248Z · LW(p) · GW(p)

Vlad, if you're going to do this, at least do it as replies to your original comment!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-02-28T02:15:30.286Z · LW(p) · GW(p)

Right. I moved other comments under the original one.

comment by Johnicholas · 2009-02-28T01:43:45.762Z · LW(p) · GW(p)

I'm going to break with the crowd here.

I don't think that the Overcoming Bias posts, even cleaned up, are suitable for a book on how to be rational. They are something like a sequence of diffs of a codebase as it was developed. You can get a feel of the shape of the codebase by reading the diffs, particularly if you read them steadily, but it's not a great way to communicate the shape.

A book probably needs more procedures on how to behave rationally:

  How to use likelihood ratios
  How to use utility functions
  Dutch Books: what they are and how to avoid them
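
(A sketch of the first item - my example, not Johnicholas's: Bayes' rule in odds form, which is the natural way to "use" a likelihood ratio:)

    def update_odds(prior_odds, likelihood_ratio):
        # posterior odds = prior odds * P(evidence | H) / P(evidence | not-H)
        return prior_odds * likelihood_ratio

    # A test that detects a condition 80% of the time, with a 5% false-positive
    # rate, has likelihood ratio 0.8 / 0.05 = 16. A 1:99 prior becomes 16:99.
    odds = update_odds(1 / 99, 0.8 / 0.05)
    print(odds, odds / (1 + odds))  # ~0.162, i.e. ~13.9% posterior probability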

comment by neurotron · 2009-02-28T00:29:22.996Z · LW(p) · GW(p)

The posts are amazing, well connected and very detailed. I think one of your best insights was to condense these biases into the words of your Confessor:

"[human] rationalists learn to discuss an issue as thoroughly as possible before suggesting any solutions. For humans, solutions are sticky...We would not be able to search freely through the solution space, but would be helplessly attracted toward the 'current best' point, once we named it. Also, any endorsement whatever of a solution that has negative moral features, will cause a human to feel shame - and 'best candidate' would feel like an endorsement. To avoid feeling that shame, humans must avoid saying which of two bad alternatives is better than the other."

Any understanding of what it means to be rational must come to terms with the treacherous nature of the mind: the myriad traps that hold us back, and the lack of any one true principle.

http://www.overcomingbias.com/2009/02/super-happy-people.html