Good Quality Heuristics

post by CannibalSmith · 2009-07-14T09:53:11.871Z · LW · GW · Legacy · 113 comments

We use heuristics when we don't have the time to think more, which is almost all the time. So why don't we compile a big list of good quality heuristics that we can trust? (Insert eloquent analogy with mathematical theorems and proofs.) Here are some heuristics to kick things off:

Make important decisions in a quiet, featureless room. [1]

Apply deodorant before going to bed rather than any other time. [1]

Avoid counterfactuals and thought experiments when talking to other people. [Because they don't happen in real life. Not in mine, at least (anecdotal evidence). For example, with the trolley problem, I would not push the fat man because I'd be frozen in horror. But what if you wouldn't be? But I would! And all too often the teller of a counterfactual abuses it by crafting it so that the other person has to give either an inconsistent or unsavory answer. (This proof is a stub. You can improve it by commenting.)]

If presented with a Monty Hall problem, switch. [1]

Sign up for cryonics. [There are so many. Which ones to link? Wait, didn't Eliezer promise us some cryonics articles here in LW?]

In chit-chat, ask questions and avoid assertions. [How to Win Friends and Influence People by Dale Carnegie]

When in doubt, think what your past and future selves would say. [1, also there was an LW article about a prince with multiple personality disorder chaining himself to his throne that I can't find. Also, I'm not sure if I should include this because it's almost Think More.]

I urge you to comment on my heuristics and add your own. One heuristic per comment. Hopefully this takes off and turns into a series of wiki pages. Edit: We should concentrate on heuristics that save time, effort, and thought.


comment by Drahflow · 2009-07-15T01:15:52.493Z · LW(p) · GW(p)

Decisions with a utility equivalent of less than $0.50 should be made after at most 10 seconds, by coin flip if necessary.

Your time is more valuable.

Replies from: RobinZ
comment by RobinZ · 2009-07-15T03:18:08.274Z · LW(p) · GW(p)

That corresponds to valuing a marginal increment of your time at $180/hr, which seems a bit high - the base concept makes sense, though.

Replies from: jimmy
comment by jimmy · 2009-07-16T09:45:26.612Z · LW(p) · GW(p)

Not quite. Say your time is worth $90/hr. If you spend 20 seconds thinking about the answer, you've done worse than instantly picking at random. You've done just as badly as instantly picking the wrong answer. If it's worth spending any time at all thinking about the question, it's worth spending considerably less than 20 seconds.

On a binary question, you should spend <10 seconds even if you approach certainty at t = 10s (at $90/hr). Depending on your certainty/time profile, it could be even less.
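
The arithmetic in this exchange is easy to make explicit. A minimal sketch in Python, using jimmy's assumed figures ($90/hr for time, a $0.50 gap between the options on a binary question); the numbers are illustrative, not recommendations:

```python
HOURLY_RATE = 90.0   # assumed value of your time, $/hr (jimmy's figure)
STAKES = 0.50        # assumed utility gap between the two options, $

cost_per_second = HOURLY_RATE / 3600       # $0.025 per second

# A coin flip on a binary question loses half the stakes in expectation:
expected_loss_of_coin_flip = STAKES / 2    # $0.25

# Even deliberation that guarantees the right answer only recovers that
# $0.25, so the breakeven deliberation time is:
print(expected_loss_of_coin_flip / cost_per_second)   # 10.0 seconds

# Twenty seconds of thought costs the entire stake:
print(20 * cost_per_second)                # 0.5, as bad as picking wrong

# RobinZ's $180/hr reading comes from charging the full $0.50 stake
# against the 10-second budget:
print(STAKES / 10 * 3600)                  # 180.0 $/hr
```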

comment by CronoDAS · 2009-07-14T18:45:10.340Z · LW(p) · GW(p)

In a field in which you personally are not an expert, the closest you can come to the truth is to accept the opinion of the majority of the experts in the age in which you live.

(Courtesy of my father.)

Replies from: Z_M_Davis, ciphergoth, CronoDAS, John_Maxwell_IV, CronoDAS, Vladimir_Nesov
comment by Z_M_Davis · 2009-07-15T04:23:24.934Z · LW(p) · GW(p)

The problem is that the very fact that experts are listened to and respected creates incentives to become certified as an expert or to claim to be an expert, and that these incentives are non-truth-tracking. If you lust for fame and glory, or want to seem original, or have a political agenda, or are worried about what your publisher thinks will sell---all these sorts of things might help your bid to be certified as an expert or hinder it, but they're not directly about the map that reflects the territory, and everything that's not directly about the map that reflects the territory just adds noise to the process.

In a physical science with conclusions nailed down for decades, sure, don't even think about questioning the consensus. But on an issue people actually care about (sorry, physics nerds, but you know what I mean), if you have a concept of epistemic rationality, and you know about Aumann agreement and updating your beliefs on other people's beliefs as a special case of updating your beliefs on any data you get from your environment and you take all of this dead seriously, and you've read the existing literature, and you've spent many, many hours thinking about it, and you still find yourself disagreeing with the consensus---I'm not going to say you should forfeit your vision. You can't trust the mainstream, because the mainstream is insane. The fact that you're insane too doesn't mean you can just trust the authorities; it means you have to lower your confidence in everything.

But please---don't take my word for it!

comment by Paul Crowley (ciphergoth) · 2009-07-17T09:35:22.859Z · LW(p) · GW(p)

I agree, except that a non-expert needs some rules by which they can distinguish fields which really do have experts (e.g. climate science) from those that don't (e.g. theology).

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T20:33:58.707Z · LW(p) · GW(p)

Within theology, I will accept the views of theology professors (e.g. about the exact nature of the trinity) but not on the assumptions their field depends on (e.g. whether God exists).

Replies from: FAWS
comment by FAWS · 2011-12-19T21:03:28.198Z · LW(p) · GW(p)

Weren't things like the current positions on the trinity arrived at by political processes, including persecution of dissenters as heretics? Why should such positions be expected to be more likely to be true, even assuming divine beings? Or do you mean you will accept their expert opinion on what the church positions are, their history, and so on?

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T21:25:52.458Z · LW(p) · GW(p)

I am presuming a theology professor is more likely to have a view based on the arguments (if you can call them that) and textual evidence than a random member of the public or follower of that religion. (A priest would be a different matter as they have strong investment in the doctrine.)

Analogously, I will likely trust the opinion of the head of a Harry Potter fandom group, who has likely been involved in debates on the topic, about some point of the minutiae of Harry Potter lore (how old his parents were when they died for example). But that doesn't entail accepting the premise 'Harry Potter is real.'

Edit: Upon more thought, I think the issue may be that I was working from the premise "Theology professors are not invested emotionally in the results of a debate, but argue based on theory and textual evidence" which, while it has been my experience, may not be universal and may not be a premise you share.

Replies from: FAWS
comment by FAWS · 2011-12-19T21:45:36.863Z · LW(p) · GW(p)

I'd trust the head of a Harry Potter fandom group to get questions about the fictional character Harry Potter right, but not for questions about a hypothetical culture of real wizards, even if someone were claiming the books to have been based on such.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T21:52:17.416Z · LW(p) · GW(p)

But (assuming for the sake of argument the books count as documentary evidence) would you say they had a higher probability of being right than 'someone who had read the books once' or 'someone who had never read the books'? Or would you expect them all to be equally likely to be right or wrong?

Replies from: FAWS
comment by FAWS · 2011-12-19T22:06:23.798Z · LW(p) · GW(p)

Someone who has read the books, but isn't a fan > a dedicated fan > someone who never read the books. I'd expect dedicated fans to over-count the books as evidence and to not give very different scenarios enough consideration, or fail to think of them at all.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T22:11:08.944Z · LW(p) · GW(p)

But surely they are also more likely to have inconsistent beliefs that a person who had engaged in discussion wouldn't have? (E.g. misunderstanding a section in a way that could easily be noticed in discussion.)

Analogously very few theology professors believe in the literal creation story, for obvious reasons, and are likely to have slightly more coherent conceptions of free will/sin/miracles.

comment by CronoDAS · 2009-07-15T03:12:09.661Z · LW(p) · GW(p)

An expert who disagrees with the majority opinion in his field is an iconoclast. Such experts are usually at least acknowledged as experts by other experts, and, sometimes, their opinions turn out to be right all along.

A layman who disagrees with the majority of the experts in a field is a crank, and cranks that turn out to be right are rarer than winning lottery tickets.

comment by John_Maxwell (John_Maxwell_IV) · 2009-07-14T22:35:58.061Z · LW(p) · GW(p)

I'm not convinced that the initial disclaimer is necessary. Would it make sense for a non-expert to base his opinion on that of one expert or a large group? Why does it make sense for an expert to base his opinion on his perceptions only instead of looking at his entire group?

comment by CronoDAS · 2009-07-15T17:11:23.978Z · LW(p) · GW(p)

Note that a relevant application of this heuristic would be global warming.

Replies from: SilasBarta
comment by SilasBarta · 2009-07-15T17:21:08.327Z · LW(p) · GW(p)

Warning: once you couple an argument to a current political debate, people quickly lose their ability to think rationally about it...

Replies from: RobinZ
comment by RobinZ · 2009-07-15T17:24:22.802Z · LW(p) · GW(p)

Only if they don't make their saving throw. Dawkins gets a lot of deconversion stories in his email.

comment by Vladimir_Nesov · 2009-07-14T18:56:08.437Z · LW(p) · GW(p)

Yet to understand the opinions on any nontrivial question, you have to become enough of an expert yourself to have at least some say in judging the validity of experts' opinions.

Replies from: Cyan
comment by Cyan · 2009-07-14T19:02:09.477Z · LW(p) · GW(p)

The heuristic seems to be about what to do when understanding the opinions of experts is not a practical option.

comment by CannibalSmith · 2009-07-14T11:19:49.522Z · LW(p) · GW(p)

If you don't know what you need, take power. [1, Power can be converted into almost everything else. Also, money is power.]

Replies from: FiftyTwo, John_Maxwell_IV
comment by FiftyTwo · 2011-12-19T20:58:32.769Z · LW(p) · GW(p)

Does the quote have any origin beyond "Final words?" I started there but the search only brought me here. I would be interested in more discussion around it before I adopt it as a method.

comment by John_Maxwell (John_Maxwell_IV) · 2009-07-14T22:29:39.016Z · LW(p) · GW(p)

Certain kinds of knowledge are also power. Intelligence is power; you can build intelligence through dual-n-back.

Replies from: MichaelBishop
comment by Mike Bishop (MichaelBishop) · 2009-07-14T23:45:42.785Z · LW(p) · GW(p)

you can build intelligence through dual-n-back

I believe this has some effect on some type of intelligence, but I remain unconvinced that the boost is large enough and generalizable enough that it's worth the opportunity cost.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-07-15T05:06:52.119Z · LW(p) · GW(p)

Quote from brainworkshop.sourceforge.net:

In the original study, participants showed up to 40% gains in measured fluid intelligence scores after 19 days of daily practice.

Fluid intelligence is considered one of the two types of general intelligence. The other is crystallized intelligence. See http://en.wikipedia.org/wiki/Fluid_and_crystallized_intelligence

Replies from: Z_M_Davis
comment by Z_M_Davis · 2009-07-15T06:05:56.621Z · LW(p) · GW(p)

participants showed up to 40% gains in measured fluid intelligence scores

*sputters*

What does that even mean? I know what it means for a rock to be 40% heavier than some other rock, or for a car to be travelling 40% faster than some other car, and I know what it means to go from the fiftieth percentile to the ninetieth percentile, but saying that subjects got 40% more items right on some particular test tells me nothing useful; we only care about the test insofar as it gives us evidence about this intelligence-thingy, and the raw score gives me no basis for comparison. Looking at the actual PNAS paper (hoping that I'm competent to read it), it looks like the experimental group saw a gain of 0.65 standard deviations (Cohen's d) on a test of Gf, said figure which actually tells me something---if we assume a Gaussian distribution, then a score in the fiftieth percentile among the untrained would be in the twenty-fifth percentile amongst the trained. (The control group also gained 0.25 standard deviations, probably due to a retest effect.)

Huh. d=0.65 is pretty impressive ...
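
For anyone who wants to redo the percentile arithmetic, a minimal check assuming Gaussian-distributed scores (the d values are taken from the comment above, not re-derived from the paper; uses scipy):

```python
from scipy.stats import norm

d_experimental = 0.65  # Cohen's d reported for the trained group
d_control = 0.25       # control group's retest gain

# Someone at the untrained 50th percentile sits d standard deviations
# below the trained group's mean, so among the trained they fall at:
print(f"{norm.cdf(-d_experimental):.1%}")  # ~25.8%, the 'twenty-fifth percentile'
print(f"{norm.cdf(-d_control):.1%}")       # ~40.1% relative to retested controls
```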

comment by GuySrinivasan · 2009-07-14T20:21:44.430Z · LW(p) · GW(p)

If it feels like someone won't accept your basic, obviously-true point, the culprit is a communication error.

This is as opposed to what it will feel like in the moment: they are stupid, they are obstinate, they won't listen, etc. If you have no good reason to believe the other party has stopped acting like a reasonable social being, then back up and find the communication error before proceeding. Maybe you are accidentally attaching riders to your point. Maybe they are reading too much into your point. Who knows. But it's probably not that whoever you're talking to suddenly turned into a bad human being, which points to a communication error of some sort.

Replies from: Dan_Moore
comment by Dan_Moore · 2009-07-14T20:39:01.389Z · LW(p) · GW(p)

I think this is a good heuristic.

However, another possibility is that either you or your discussant is unduly influenced by an informational cascade.

comment by Vladimir_Nesov · 2009-07-14T17:48:15.474Z · LW(p) · GW(p)

If the situation you are considering is novel, your intuition about it is probably wrong. Use more reliable, if less powerful, tools.

comment by RobinZ · 2009-07-14T16:38:01.856Z · LW(p) · GW(p)

Don't trust simple solutions to old unsolved problems.

(Optional xkcd link.)

Replies from: Blueberry, jimmy, RobinZ
comment by Blueberry · 2010-06-07T23:00:02.524Z · LW(p) · GW(p)

Chesterton said something similar about reforming social institutions that you don't understand.

I'm not a fan of that first XKCD, though. It seems to suggest that any possibility of alternative sexuality is doomed to failure, whereas many people do form alternative arrangements of various types.

Replies from: RobinZ
comment by RobinZ · 2010-06-08T02:56:32.457Z · LW(p) · GW(p)

I'll agree with you that alternative sexuality is real and works for many people - but I'm fairly sure (based on other things Randall Munroe has said about e.g. gender) that the xkcd comic is not mocking anything like that. I think it's mocking ... well, the kind of thinking that produced The Open Source Boob Project fiasco.* Something which sounds like a good idea, something which maybe even works at first ... but which has been proven, at the very least, not to be likely to scale smoothly and gracefully. And it results in drama, obviously.

* Ursula Vernon's takedown is fairly good, if you're interested in that kind of thing.

Replies from: Blueberry
comment by Blueberry · 2010-06-08T03:44:55.946Z · LW(p) · GW(p)

I've read a few different accounts about what occurred with the OSBP, and from what I understand, it was done among a very small number of women who mostly knew each other and were comfortable with each other, or who had agreed to participate by wearing a button, and everyone was very sensitive and careful about consent. So I'm reluctant to call it a "fiasco". It seems like the only people who were uncomfortable with it were the ones who misunderstood it after the fact. Though I wasn't there and don't really know for sure.

If you mean sexuality is frequently emotionally complex and often results in drama, I'd agree, but that's true whether you change the rules or not. Relationships are hard, and people have to try to make rules that work for them. It's not as if there's an official book of rules anyway.

Replies from: RobinZ
comment by RobinZ · 2010-06-08T03:52:24.505Z · LW(p) · GW(p)

I read the followup when I tracked down the link - I don't disagree with you. But, at the very least, the writeup meant that The Ferrett felt obliged to promise not to attend specific future events and to close comments, and that seems to me like more drama than most sexual relationships I've heard of. (I know nearly nothing, mind.)

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T21:16:32.597Z · LW(p) · GW(p)

Reading up on it (severely after the fact, admittedly) I found it hard to work out what the problem was. As far as I can tell no-one was involved against their will, and those involved were not put under any obligations.

If everyone involved was consenting adults, how did it become a 'fiasco'? Did people simply object aesthetically to it happening in the places they were, or were there plans to expand it in some seemingly detrimental way?

Replies from: RobinZ
comment by RobinZ · 2011-12-20T03:31:49.644Z · LW(p) · GW(p)

The latter - the drama wasn't due to the original event, but due to the suggestion that it be formalized as an Event for the next year. Which, for reasons which were elaborated in many places, would likely not have been successful.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-20T04:12:32.507Z · LW(p) · GW(p)

But even then, if all participants are consenting adults, who could grope each other informally anyway, who cares?

Replies from: Vaniver, RobinZ
comment by Vaniver · 2011-12-20T07:32:51.769Z · LW(p) · GW(p)

As for why doing the project again would have been a mistake, asking people for consent is not a cost-free thing, and many such events work far better with fewer participants for reasons both obvious and subtle.

The real mistake theferret made was posting about this on the internet. I was involved in a discussion about the OSBP on the xkcd forums when the post happened, and was amazed by the degree of misunderstanding and overreaction among people condemning it. That was the sort of reaction theferret should have seen coming, and kept the project an invite-by-referral thing rather than a public recruiting thing.

comment by RobinZ · 2011-12-20T07:10:55.609Z · LW(p) · GW(p)

The event as proposed did not control sufficiently for "consenting". (Or "adult", for that matter.) That was the exact problem, in fact.

comment by jimmy · 2009-07-16T09:55:15.266Z · LW(p) · GW(p)

I would change that to "easily conceived" instead of "simple", and make sure to distinguish between "unsolved" and "unagreed upon".

Replies from: RobinZ
comment by RobinZ · 2009-07-16T11:03:40.008Z · LW(p) · GW(p)

Technically, "easily conceived" is more accurate, but the hindsight bias might make that hard to determine.

comment by nerzhin · 2009-07-14T21:36:51.348Z · LW(p) · GW(p)

Make important decisions in a quiet, featureless room

Might this prime you to make a quiet, featureless decision?

To be more specific and a little less snarky: I tend to be too socially withdrawn and a bit of a loner. To make a decision about, say, whether or not to go to a party, in a quiet, featureless room, would be a mistake.

comment by Alicorn · 2009-07-14T18:38:15.664Z · LW(p) · GW(p)

Look things up if they are important.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-12-19T21:10:16.215Z · LW(p) · GW(p)

Corollary, test them if possible.

comment by Alicorn · 2009-07-14T17:18:25.328Z · LW(p) · GW(p)

Smile.

Also, the post you mention in your last heuristic is here.

Edit: Missed the line about one heuristic per comment.

comment by MendelSchmiedekamp · 2009-07-14T15:01:23.676Z · LW(p) · GW(p)

Don't trust heuristics, unless you can (1) re-derive them, (2) know their limits explicitly, or (3) are willing to accept the risks for the moment, but will reevaluate them later.

The limit of this heuristic is that it relies on self-knowledge, and so is vulnerable to self-deception. It breaks down when we start operating with heuristics for domains where we can no longer trust ourselves as much.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-14T15:09:02.714Z · LW(p) · GW(p)

Don't trust heuristics, unless you can (1) re-derive them, (2) know their limits explicitly, or (3) are willing to accept the risks for the moment, but will reevaluate them later.

I'm not sure that I read your point (3) correctly. One feature of heuristics is that you need to trust them to do a better job than you can do without them. As you refine your understanding of where the heuristics are appropriate, the expected effectiveness of the heuristics increases, but all the way, heuristics need to pay rent. The useful side of a heuristic needs to win over the bias part, which is more of an issue for heuristics than for declarative beliefs.

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-07-14T16:01:08.102Z · LW(p) · GW(p)

Vladimir,

(3) is the branch for when urgency prevents the use of the cognitive or data-collection resources needed to adequately trust the heuristic under normal circumstances, but that same urgency requires a decision. Loosely speaking, it is the emergency clause. So the heuristic for that branch is to use the best heuristic available with whatever resources you can muster, and schedule a reevaluation at a later time, to recover from the habit-forming nature of what you might have just done.

Of course, many heuristics are far too complex to personally derive, or even to fully and explicitly describe their limits (at least in a single evaluation), so instead we need to keep calling (3) to manage them even outside of a proper emergency. What this means is that heuristics "paying rent" is a sub-heuristic of the heuristic I propose here.

Of course the overall limit of this heuristic remains (it applies to the "rent charging" dynamic, as well as the other applications). To manage this limitation requires a higher-level check (likely itself a heuristic) to enable the aspiring rationalist to operate with greater caution in domains where self-trust is less reliable.

comment by CannibalSmith · 2009-07-14T11:30:28.543Z · LW(p) · GW(p)

If you meet Omega, take one box - the transparent one. [1, Think about it: what is the probability of you meeting an actual Omega versus it being a prank by fellow rationalists. The opaque box probably contains a spring loaded boxing glove.]

comment by steven0461 · 2009-07-14T20:08:59.902Z · LW(p) · GW(p)

YKUTWIDNTIMWYTIM ("You keep using that word. I do not think it means what you think it means.") "Heuristic" is not synonymous with "tip".

comment by Alicorn · 2009-07-14T18:38:04.991Z · LW(p) · GW(p)

Prefer food with fewer than five ingredients on the label.

Replies from: GuySrinivasan
comment by GuySrinivasan · 2009-07-14T20:09:03.631Z · LW(p) · GW(p)

This is a good heuristic? I wouldn't have guessed it, but then I don't pay a lot of attention to what exactly I eat. Why is this a good idea, do you think? For now I'll adopt your beliefs, but I'd like more evidence. :)

Replies from: Alicorn
comment by Alicorn · 2009-07-14T20:33:02.765Z · LW(p) · GW(p)

Ideally, you want food with one ingredient (e.g. "cherries" or "peas" or "olive oil" or "oregano") and then you assemble it into multi-ingredient food yourself at home (or in the case of the cherries you can eat the one ingredient by itself). If you need to buy multi-ingredient things, then the fewer ingredients they have, the less likely they are to contain weird pseudo-food like coloring agents, the distressingly vague "natural flavors", more preservatives than you really want in your lunch, etc.

This being a heuristic, not a comprehensive meal plan, it has to be simple and easy, so "fewer than five ingredients" is what I said instead of "avoid the following evil food additives". I go into a little more detail in this post on Improv Soup, 2a.

Replies from: CronoDAS, GuySrinivasan
comment by CronoDAS · 2009-07-15T17:17:28.387Z · LW(p) · GW(p)

I absolutely cannot stand cooking. :(

Replies from: astray, Alicorn
comment by astray · 2009-07-16T19:39:33.299Z · LW(p) · GW(p)

Just um... think of it as deck construction? Get your land balance right and you'll have an excellent aggro dish.

It sounded like a better suggestion in my head...

Replies from: CronoDAS
comment by CronoDAS · 2009-07-16T23:02:37.236Z · LW(p) · GW(p)

Mostly, I simply have no patience for it. Any minute spent on food preparation is a wasted minute I'll never get back. Even frying an egg is too much trouble for me to bother with, when I could just have a bowl of cold cereal instead. I do like good-tasting food, but not nearly enough to make it myself when I could just grab a slice of cheese or something and continue surfing the Internet instead.

Replies from: astray
comment by astray · 2009-07-17T17:39:51.476Z · LW(p) · GW(p)

This is a problem I often have myself. I will note that cooking for two ameliorates much of the pain, and cooking with two is even better.

comment by Alicorn · 2009-07-15T17:20:46.626Z · LW(p) · GW(p)

Why not?

comment by GuySrinivasan · 2011-06-28T04:19:18.317Z · LW(p) · GW(p)

Two years later: This is a good heuristic for cooking!

Edit: it doesn't always work, especially when trying new atomic ingredients. I'd say stick to things you've at least kind of done before if you're feeding other people.

comment by Douglas_Knight · 2009-07-15T03:03:56.689Z · LW(p) · GW(p)

You're confusing deodorant with antiperspirant.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-07-15T05:15:59.934Z · LW(p) · GW(p)

Explain the difference please.

Replies from: CronoDAS, anonym
comment by CronoDAS · 2009-07-15T05:41:52.530Z · LW(p) · GW(p)

Deodorant kills odor-causing bacteria, and often contains perfumes and such.

Antiperspirant prevents you from sweating in the area in which it is applied, preventing the bacteria from making odors.

(Or something like that.)

Most of the time, a stick of "deodorant" you buy in the store contains both.

comment by anonym · 2009-07-15T05:41:33.513Z · LW(p) · GW(p)

http://www.wisegeek.com/what-is-the-difference-between-antiperspirant-and-deodorant.htm

Executive summary: ANTIperspirant tries to eliminate sweating by blocking pores; deodorant aims to eliminate or hide the bad smell of sweat.

comment by RobinZ · 2009-07-14T16:40:44.477Z · LW(p) · GW(p)

If you previously committed to a decision for good reasons, don't reverse your choice without good reason. (Related to "When in doubt, think what your past and future selves would say", but applies in broader circumstances.)

comment by HalFinney · 2009-07-14T19:08:14.654Z · LW(p) · GW(p)

Do what other people do in your situation.

Replies from: orthonormal, Alicorn, HalFinney, John_Maxwell_IV
comment by orthonormal · 2009-07-15T01:15:44.404Z · LW(p) · GW(p)

Better: Imitate what successful people do/did in your situation.

Or perhaps: Adopt the most successfully tested strategy; if you think you've figured out something better, ask first why you don't see others doing it already.

Replies from: HalFinney
comment by HalFinney · 2009-07-15T17:29:24.795Z · LW(p) · GW(p)

The problem is that you are more likely to know how things turned out for successful people than for unsuccessful ones. A policy which has a large chance of disaster but a small chance of great success might appear to be very good under this heuristic, since it worked great for everyone you've heard of.

Replies from: orthonormal
comment by orthonormal · 2009-07-15T18:53:13.326Z · LW(p) · GW(p)

Excellent point. I was thinking more in terms of social strategies, which don't seem to have devastating black swan outcomes in the way that "guaranteed" gambling or investment strategies do. Is there a pithy way to make that distinction?

comment by Alicorn · 2009-07-14T19:17:56.571Z · LW(p) · GW(p)

I am skeptical that enough people do the best thing enough of the time to make this a good heuristic, even if you ignore the fact that "what other people do in your situation" isn't always available information.

comment by HalFinney · 2009-07-14T23:59:34.553Z · LW(p) · GW(p)

You can also replace "do" with "believe".

One interesting question is whether you should believe what the experts do, or what the majority of people do, in situations where they differ. (See CronoDAS's suggestion on this page about believing the experts.)

Replies from: Drahflow
comment by Drahflow · 2009-07-15T01:26:25.737Z · LW(p) · GW(p)

No, you should not believe what others believe unless they presented serious arguments.

Otherwise, information cascades and memes gain strength.

Doing is different here, as it is more costly than believing.

Replies from: HalFinney
comment by HalFinney · 2009-07-15T17:27:40.413Z · LW(p) · GW(p)

The fact that this policy may contribute to an information cascade is (mostly) a cost to other people rather than a cost to yourself. If your goal is the truth, the presence of this cost is not relevant.

The real question is whether the beliefs of others are a reliable guide to the truth, and if not, what is better. Judging the quality of arguments has IMO not been shown to be something that most people can successfully implement - too much opportunity for bias to creep in.

comment by John_Maxwell (John_Maxwell_IV) · 2009-07-14T22:15:19.215Z · LW(p) · GW(p)

I suggest the following revision: If you don't think it's worth your time to analyze your options, choose whatever option people seem to be choosing. Exception: situations where too many people choosing one option is bad for all of them (for example, too many people with degrees in field Y).

comment by Alicorn · 2009-07-14T18:38:21.540Z · LW(p) · GW(p)

Tolerate tolerance. [1]

Replies from: CronoDAS
comment by CronoDAS · 2009-07-15T17:25:32.814Z · LW(p) · GW(p)

Nitpick: To what extent should I tolerate tolerance of evil? (For example, I'd condemn a reporter who writes an uncritical piece about some new kind of medical quackery, such as the healing powers of magnets, or what have you.)

Replies from: Alicorn
comment by Alicorn · 2009-07-15T17:37:54.896Z · LW(p) · GW(p)

Giving something positive publicity is not just tolerating it. You might well criticize the journalist who writes an article about the practitioners of magnetic healing without ever mentioning that it doesn't work under controlled circumstances. You should probably not criticize the guy who never bothers to write about magnets because they don't seem newsworthy.

comment by jimrandomh · 2009-07-14T13:23:37.332Z · LW(p) · GW(p)

Given an important decision and unlimited time, think until your thoughts repeat themselves, and no more.

comment by Richard_Kennaway · 2009-07-14T10:49:19.841Z · LW(p) · GW(p)

Never decide what to do until you've thought of at least half a dozen alternatives beyond the ones you immediately thought of. [Sometimes the obvious thing is the best, but do it because you actually made that decision.]

Replies from: Vladimir_Nesov, SilasBarta, HughRistik, CannibalSmith
comment by SilasBarta · 2009-07-14T13:57:36.260Z · LW(p) · GW(p)

Didn't you just violate that heuristic? Don't you pretty much have to, unless you want to live your life in permanent decision paralysis?

Limit it to large, important decisions and I'd agree.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-07-14T14:19:44.520Z · LW(p) · GW(p)

Didn't you just violate that heuristic? Don't you pretty much have to, unless you want to live your life in permanent decision paralysis?

It's a heuristic. It's up to one's judgement how or whether to apply it in any situation.

Myself, I'd draw the line wider than just large, important decisions.

Replies from: SilasBarta, HughRistik
comment by SilasBarta · 2009-07-14T14:30:26.785Z · LW(p) · GW(p)

It's a heuristic. It's up to one's judgement how or whether to apply it in any situation.

Yes, it's a heuristic, but that means it needs to be usually correct. Yours is rarely correct. You make numerous decisions throughout the day, such as how to word your comment. Coming up with 6 alternatives to everything would guarantee that you would Lose.

But if you're just going to fall back on "but you apply it with your judgment", then you miss the point of a heuristic, which is to assist with your judgment. Why not have just one universal, all-encompassing heuristic:

"Use judgment."

comment by HughRistik · 2009-07-16T20:40:06.199Z · LW(p) · GW(p)

Myself, I'd draw the line wider than just large, important decisions.

Ok, this answers my question above.

Perhaps it's useful, when discussing heuristics, to describe the type of problem they are best applied to. The worth of the heuristic doesn't just lie in itself, but also lies in knowing when to apply it.

comment by HughRistik · 2009-07-14T20:01:09.544Z · LW(p) · GW(p)

What types of problems do you expect this heuristic to be successful with? If the problem is something like improvising jazz, it will fail miserably.

Replies from: Vladimir_Nesov, Richard_Kennaway
comment by Vladimir_Nesov · 2009-07-14T20:05:23.442Z · LW(p) · GW(p)

If it's easy to judge that a given heuristic fails for a certain problem, then the heuristic is not at fault: it can easily be seen not to apply there, and so won't introduce bias in that situation. The trouble lies where you think the heuristic applies but it doesn't.

comment by Richard_Kennaway · 2009-07-15T20:20:57.017Z · LW(p) · GW(p)

Problems that require decisions. I doubt that any of the heuristics mentioned here would have any relevance to jazz improvisation.

More generally, I consider heuristics to be not substitutes for thought, but pointers to get thought moving in the most promising direction first.

comment by CannibalSmith · 2009-07-14T11:08:07.894Z · LW(p) · GW(p)

But that's Think More.

comment by RobinZ · 2009-07-16T21:44:00.407Z · LW(p) · GW(p)

Distrust any impression given by fragmented quotations, be they text, audio, or video.

(The mere existence of the phrase "out of context" reflects the danger of trusting these. Note, however, that this doesn't apply merely to quotes. To give an example I personally fell for: a false impression as to who said what in a 'documentary' about a psychic detective was given by rapidly cutting between the accounts of the officers working the case and the account given by the detective.)

comment by John_Maxwell (John_Maxwell_IV) · 2009-07-14T22:34:51.765Z · LW(p) · GW(p)

At the Overcoming Bias meetup a couple days ago, Robin Hanson mentioned that the Singularity Institute should devote half its people to working on AI problems and the other half to improving the tools used by the first half. Any way we could turn this into a heuristic?

Some questions: Should the tool-improving group also split itself in half so that half of them can help with the tools used by the tool-improvers? Has there been any academic research on what the right ratio of workers to tool-improvers is? How do things change when the group consists of one person dividing their time between working on hard problems and analyzing how they can work smarter? Does it make sense for such a person to find a like-minded individual so they can take turns analyzing each other's work habits?

Replies from: anonym, rwallace
comment by anonym · 2009-08-09T17:53:51.943Z · LW(p) · GW(p)

I've always thought of this in terms of "improving the first derivative", or working not only on current knowledge but on the rate at which we are acquiring knowledge. Improved tools are a great way to increase the rate of change. Some other techniques are improving understanding of foundational topics (dependencies), inventing better representations of the problem domain (e.g., notation in mathematics and computer science), improving one's health (so as to operate at peak efficiency) through things like good diet and exercise (there are many cognitive benefits of exercise), and to the extent that fluid intelligence may be malleable, working to improve intelligence itself (e.g., dual n-back as in the 2008 Jaeggi et al. study).

comment by rwallace · 2009-07-15T02:35:49.503Z · LW(p) · GW(p)

How about this for a heuristic:

Exploiting the resources, tools, techniques etc. that you presently have, and coming up with better ones for the future, are both important and neither should be neglected. "50/50 split" obviously shouldn't be taken too literally; the point is that it shouldn't be 1/99 or 99/1.
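
One way to see why the answer should be an interior split rather than 1/99 or 99/1 is a toy model. This sketch is entirely illustrative (the parameters and the compounding assumption are made up, with no empirical basis): direct work produces output now, while tool-building compounds everyone's productivity in later periods.

```python
def total_output(tool_fraction, periods=10, tool_payoff=0.3):
    """Cumulative direct output when `tool_fraction` of effort goes to tools,
    each period multiplying productivity by 1 + tool_payoff * tool_fraction."""
    productivity, total = 1.0, 0.0
    for _ in range(periods):
        total += (1 - tool_fraction) * productivity      # direct work this period
        productivity *= 1 + tool_payoff * tool_fraction  # tools compound
    return total

best = max((f / 100 for f in range(101)), key=total_output)
print(best, round(total_output(best), 2))
# With these made-up numbers the optimum lands around 0.3: interior, as
# rwallace suggests, but exactly where depends entirely on the time
# horizon and on how much the tools actually pay off.
```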

comment by djcb · 2009-07-14T18:26:32.508Z · LW(p) · GW(p)

Good quality heuristics would indeed be useful.

But I thought heuristics were about experience-based techniques, of the type 'when X occurs, there's a pretty good chance that Y happens as well'. The example heuristics do not really follow that pattern.

'Sign up for cryonics' does not seem like a heuristic at all - how does it follow from experience? Also, for me to trust them, heuristics have to be supported by facts -- either my own experiences or those of some trusted other party. I'd only use Dale Carnegie's lessons after some experimentation of my own with them - no matter how plausible they sound. There are simply too many untrue 'heuristics' not to be a bit skeptical -- think about phrenology, for example.

Now I'll think about some heuristics...

Replies from: rwallace, Vladimir_Nesov
comment by rwallace · 2009-07-15T02:30:27.721Z · LW(p) · GW(p)

It's true that we don't have any experience telling us we will survive if we sign up for cryonics, so there's no way to even estimate its chances of success.

We do however have lots of experience making it very clear we are definitely dead if we don't.

Replies from: djcb
comment by djcb · 2009-07-15T07:46:21.655Z · LW(p) · GW(p)

My point was not so much about cryonics per se, but about the fact that most of the example 'heuristics', and many of the ones posted, are not heuristics at all - more like little 'wisdoms'.

I understand the pro-cryonics reasoning. But heuristics are not about reasoning - they are about experience. The interesting point of some heuristics is that we do not really understand this reasoning -- we just see the correlations. But if there is no experience, no correlation, there is no heuristic.

Even the examples that actually have some evidence are problematic. E.g., only by reasoning can you get from 'people are susceptible to priming' to 'Make important decisions in a quiet, featureless room' (example 1). For a real heuristic, we'd need to see a correlation between the quality of decisions and the kind of room they were made in, not a psych paper plus reasoning.

This article could have been better had it started with a clear definition of what it considers a 'heuristic' and then proceeded from there.

comment by Vladimir_Nesov · 2009-07-14T18:35:55.092Z · LW(p) · GW(p)

If a heuristic is adaptive, it takes a form depending on experience, more optimal than a fixed procedure, sometimes successful, sometimes terribly wrong. Simpler kinds may not be adaptive.

You use a heuristic because it's useful, and "proof" of usefulness may involve any connection between concepts at all; only extreme cases of such connections constitute direct experience.

comment by jimrandomh · 2009-07-14T13:27:01.215Z · LW(p) · GW(p)

For purposes of making a decision, any statement which leads to the conclusion that the decision has no effect is false.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-07-14T22:19:27.397Z · LW(p) · GW(p)

What if you're trying to figure out how much time to spend deciding?

Don't read the following sentence if you follow jimrandomh's heuristic. What is your favorite three-color combination?

comment by kpreid · 2009-07-14T11:12:06.368Z · LW(p) · GW(p)

“Apply deodorant before going to bed” lacks information. If I hadn't seen the previous discussion, I would assume the point was "Do apply deodorant", not "...rather than in the morning".

Replies from: CannibalSmith
comment by CannibalSmith · 2009-07-14T12:44:05.771Z · LW(p) · GW(p)

Fixed.

Replies from: SilasBarta
comment by SilasBarta · 2009-07-14T13:54:28.721Z · LW(p) · GW(p)

So deodorant can withstand a shower and even be stronger afterward? (I shower in the morning.)

Replies from: Alicorn, MendelSchmiedekamp
comment by Alicorn · 2009-07-14T17:14:07.673Z · LW(p) · GW(p)

I've been doing this since I read the OB article. My showering times vary widely, but when I do shower in the morning, it still seems to work fine.

comment by MendelSchmiedekamp · 2009-07-14T14:51:42.490Z · LW(p) · GW(p)

According to the linked article, yes. The critical thing seems to be that the period between application and bathing, during which you perspire less, lets the deodorant enter your pores.

So people who perspire less when they sleep than when they are awake should apply deodorant before bed.

Not being one of those people myself, I keep my own counsel on the subject.

comment by bentarm · 2009-07-15T01:47:52.450Z · LW(p) · GW(p)

I'm pretty sure that 'if presented with a Monty Hall problem, switch' is a bad heuristic: you'd need to know Monty's strategy for deciding whether or not to open any doors before you could make a sensible decision.

A better heuristic might be 'If presented with a Monty Hall problem, ask Monty why he decided to open a door and show you a goat'.

Replies from: Rune
comment by Rune · 2009-07-15T03:03:38.940Z · LW(p) · GW(p)

Why? Regardless of his strategy, you do no worse by switching.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-07-15T03:11:08.810Z · LW(p) · GW(p)

What if he only makes the offer to people whose initial choice of door was the car?

I read somewhere that on the show itself, the odds were about 50-50.

Here's an interview in which he doesn't quite say that.
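
The disagreement in this subthread is easy to settle by simulation. A minimal sketch assuming two stylized host strategies: the classic Monty, who always opens a goat door and offers the switch, versus Douglas_Knight's adversarial host, who makes the offer only when your initial pick was the car:

```python
import random

def win_rate(host, switch, trials=100_000):
    """Fraction of games won for a given host strategy and player policy."""
    wins = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        offered = True if host == "classic" else (pick == car)
        if offered and switch:
            wins += pick != car   # switching wins iff the first pick was a goat
        else:
            wins += pick == car   # staying (or never being offered a switch)
    return wins / trials

for host in ("classic", "adversarial"):
    print(host,
          f"stay: {win_rate(host, switch=False):.2f}",
          f"switch: {win_rate(host, switch=True):.2f}")
# classic:     stay ~0.33, switch ~0.67 (Rune's claim holds under these rules)
# adversarial: stay ~0.33, switch ~0.00 (always-switching hands the car back)
```

Against the classic host, switching strictly dominates; against the adversarial one it is ruinous, which is bentarm's point about needing to know the host's strategy first.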

comment by JamesAndrix · 2009-07-14T15:41:49.450Z · LW(p) · GW(p)

"Avoid counterfactuals and thought experiments." Seems inconsistent with: "If presented with a Monty Hall problem, switch."

You're probably not going to encounter an actual Monty Hall problem, but maybe something kind of similar. I think "If presented with a Monty Hall problem, Think" is a better heuristic.

Perhaps the most important heuristics are the ones that tell you when to stop using heuristics.

comment by cousin_it · 2009-07-14T10:11:40.832Z · LW(p) · GW(p)

Distrust the point with the long-winded proof. In this post, it would be #3. (Because thought experiments have been historically useful, e.g. EPR.)

Replies from: CannibalSmith
comment by CannibalSmith · 2009-07-14T10:16:58.834Z · LW(p) · GW(p)

Other heuristics link to articles that are longer than #3's proof.

comment by Richard_Kennaway · 2009-07-14T10:50:25.832Z · LW(p) · GW(p)

Win.

comment by Drahflow · 2009-07-15T01:29:44.848Z · LW(p) · GW(p)

If some talk includes obvious rhetorical tricks, flip the bozo bit on the whole talk.

The speaker probably prepared for maximum effect on human brains. Thus the arguments in the talk are likely one-sided and omit essential data.

Also, by ignoring the talk you are likely to counterbalance its undue influence over most of the rest of the audience.