The Substitution Principle
post by Kaj_Sotala · 2012-01-28T04:20:48.176Z · LW · GW · Legacy · 65 comments
Partial re-interpretation of: The Curse of Identity
Also related to: Humans Are Not Automatically Strategic, The Affect Heuristic, The Planning Fallacy, The Availability Heuristic, The Conjunction Fallacy, Urges vs. Goals, Your Inner Google, signaling, etc...
What are the best careers for making a lot of money?
Maybe you've thought about this question a lot, and have researched it enough to have a well-formed opinion. But the chances are that even if you hadn't, some sort of an answer popped into your mind right away. Doctors make a lot of money, maybe, or lawyers, or bankers. Rock stars, perhaps.
You probably realize that this is a difficult question. For one, there's the question of who we're talking about. One person's strengths and weaknesses might make them more suited for a particular career path, while for another person, another career is better. Second, the question is not clearly defined. Is a career with a small chance of making it rich and a large chance of remaining poor a better option than a career with a large chance of becoming wealthy but no chance of becoming rich? Third, whoever is asking this question probably does so because they are thinking about what to do with their lives. So you probably don't want to answer on the basis of what career lets you make a lot of money today, but on the basis of which one will do so in the near future - and that requires tricky technological and social forecasting. And so on.
Yet, despite all of these uncertainties, some sort of an answer probably came to your mind as soon as you heard the question. And if you hadn't considered the question before, your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them.
The thing is, it probably didn't.
Daniel Kahneman, in Thinking, Fast and Slow, extensively discusses what I call the Substitution Principle:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. (Kahneman, p. 97)
System 1, if you recall, is the quick, dirty and parallel part of our brains that renders instant judgements, without thinking about them in too much detail. In this case, the actual question that was asked was "what are the best careers for making a lot of money". The question that was actually answered was "what careers have I come to associate with wealth".
Here are some other examples of substitution that Kahneman gives:
- How much would you contribute to save an endangered species? becomes How much emotion do I feel when I think of dying dolphins?
- How happy are you with your life these days? becomes What is my mood right now?
- How popular will the president be six months from now? becomes How popular is the president right now?
- How should financial advisors who prey on the elderly be punished? becomes How much anger do I feel when I think of financial predators?
All things considered, this heuristic probably works pretty well most of the time. The easier questions are not meaningless: while not completely accurate, their answers are still generally correlated with the correct answer. And a lot of the time, that's good enough.
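To make the shape of this concrete, here is a toy sketch (my illustration, not Kahneman's model; the lookup table, the function names and the cost numbers are all invented). The point is only that the caller never learns that a cheaper question was answered in place of the one that was asked:

```python
# Toy illustration of the substitution pattern: if the target question is too
# expensive to answer within the "fast" budget, a cheaper related question is
# answered in its place, and its answer comes back with no warning label.

EASIER_VERSION = {
    # substitutions taken from the examples above
    "How happy are you with your life these days?":
        "What is my mood right now?",
    "How popular will the president be six months from now?":
        "How popular is the president right now?",
}

def fast_answer(question, estimated_cost, budget, answer_fn):
    """Return (question_actually_answered, answer)."""
    if estimated_cost <= budget:
        return question, answer_fn(question)         # the hard question gets answered
    easier = EASIER_VERSION.get(question, question)  # otherwise: substitute...
    return easier, answer_fn(easier)                 # ...and answer that instead

# The caller asked about life satisfaction; what comes back is an answer to a
# different question, reported as if it answered the original.
print(fast_answer("How happy are you with your life these days?",
                  estimated_cost=10, budget=3,
                  answer_fn=lambda q: f"<gut reaction to: {q}>"))
```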
But I think that the Substitution Principle is also the mechanism by which most of our biases work. In The Curse of Identity, I wrote:
In each case, I thought I was working for a particular goal (become capable of doing useful Singularity work, advance the cause of a political party, do useful Singularity work). But as soon as I set that goal, my brain automatically and invisibly re-interpreted it as the goal of doing something that gave the impression of doing prestigious work for a cause (spending all my waking time working, being the spokesman of a political party, writing papers or doing something else few others could do).
As Anna correctly pointed out, I resorted to a signaling explanation here, but a signaling explanation may not be necessary. Let me reword that previous generalization: As soon as I set a goal, my brain asked itself how that goal might be achieved, realized that this was a difficult question, and substituted it with an easier one. So "how could I advance X" became "what are the kinds of behaviors that are commonly associated with advancing X". That my brain happened to pick the most prestigious ways of advancing X might be simply because prestige is often correlated with achieving a lot.
Does this exclude the signaling explanation? Of course not. My behavior is probably still driven by signaling and status concerns. One of the mechanisms by which this works might be that such considerations get disproportionately taken into account when choosing a heuristic question. And a lot of the examples I gave in The Curse of Identity seem hard to justify without a signaling explanation. But signaling need not be the sole explanation. Our brains may just resort to poor heuristics a lot.
Some other biases and how the Substitution Principle is related to them (many of these are again borrowed from Thinking, Fast and Slow):
The Planning Fallacy: "How much time will this take" becomes something like "How much time did it take for me to get this far, and how many times should that be multiplied to get to completion." (This doesn't take into account unexpected delays and interruptions, waning interest, etc.; a toy numerical sketch of this one follows the list.)
The Availability Heuristic: "How common is this thing" or "how frequently does this happen" becomes "how easily do instances of this come to mind".
Over-estimating your own share of household chores: "What fraction of the chores have I done" becomes "how many chores do I remember doing, compared to the number of chores I remember my partner doing." (You will naturally remember more of the things you've done than of the things somebody else has done, possibly while you weren't even around.)
Being in an emotionally "cool" state and over-estimating your degree of control in an emotionally "hot" state (angry, hungry, sexually aroused, etc.): "How well could I resist doing X in that state" becomes "how easy does resisting X feel right now".
The Conjunction Fallacy: "What's the probability that Linda is a feminist" becomes "how representative is Linda of my conception of feminists".
People voting for politicians for seemingly irrelevant reasons: "How well would this person do his job as a politician" becomes "how much do I like this person." (A better heuristic than you might think, considering that we like people who like us, owe us favors, resemble us, etc. - in the ancestral environment, supporting the leader you liked the most was probably a pretty good proxy for supporting the leader who was most likely to aid you in return.)
And so on.
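Here is the toy numerical sketch promised for the planning-fallacy item above. All numbers are invented for illustration, and the 1.8x overrun factor merely stands in for whatever reference-class data you actually have:

```python
# The naive extrapolation answers "how long has it taken so far, scaled up?"
# rather than "how long will this actually take?"
days_elapsed  = 10     # time spent so far (invented)
fraction_done = 0.5    # self-assessed progress (invented)

inside_view = days_elapsed / fraction_done        # 20 days total
print("inside view:", inside_view)

# An outside-view correction answers the harder question by asking how late
# comparable projects ran; 1.8 is a placeholder for real historical data.
typical_overrun = 1.8
print("outside view:", inside_view * typical_overrun)   # 36 days total
```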
The important point is to learn to recognize the situations where you're confronting a difficult problem, and your mind gives you an answer right away. If you don't have extensive expertise with the problem – or even if you do – it's likely that the answer you got wasn't actually the answer to the question you asked. So before you act, stop to consider what heuristic question your brain might actually have used, and whether it makes sense given the situation that you're thinking about.
This involves three skills: first recognizing a problem as a difficult one, then figuring out what heuristic you might have used, and finally coming up with a better solution. I intend to develop something on how to taskify those skills, but if you have any ideas for how that might be achieved, let's hear them.
65 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2012-01-26T15:37:25.673Z · LW(p) · GW(p)
Good post, and upvoted, but I would phrase this part differently:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it.
The problem with a question like "What jobs make the most money" isn't so much that it's hard as that it's vague (or if you want to be harsh, "meaningless"). The question "How much would you contribute to save an endangered species" is even worse - if I were to actually answer it (by, for example, saying "Exactly two hundred seven dollars!"), you would be terribly confused and have no idea what I meant.
There seems to be a social norm that anyone asking an interlocutor to clarify a question is nitpicking and annoying, even though the overwhelming majority of questions people debate are meaningless as asked. People get rejected as poor conversational partners if they ask "To save all individuals in the species, or just to ensure at least one breeding pair... and are we talking per year, or pretending we have a 100% chance of saving them forever?", whereas if they say "We should pay whatever it takes!" they will be considered interesting even though that answer is clearly insane. It's no wonder that most people avoid becoming rationalists in such a situation.
↑ comment by khafra · 2012-01-31T03:01:41.640Z · LW(p) · GW(p)
Whether biases come up in making decisions, or only in making conversation, seems to be a perennial question around here. Does anybody know of a canonical list of the ones which have been demonstrated in actual-stakes-involved decision making?
comment by Vladimir_Nesov · 2012-01-26T21:37:43.664Z · LW(p) · GW(p)
This post gives what could be called an "epistemic Hansonian explanation". A normal ("instrumental") Hansonian explanation treats humans as agents that possess hidden goals, whose actions follow closely from those goals, and explains their actual actions in terms of these hypothetical goals. People don't respond to easily available information about quality of healthcare, but (hypothetically) do respond to information about how prestigious a hospital is. Which goal does this behavior optimize for? Affiliation with prestigious institutions, apparently. Therefore, humans don't really care about health, they care about prestige instead. As Anna's recent post discusses, the problem with this explanation is that human behavior doesn't closely follow any coherent goals at all, so even if we posit that humans have goals, these goals can't be found by asking "What goals does the behavior optimize?"
Similarly in this instance, when you ask humans a question, you get an answer. Answers to the question "How happy are you with your life these days?" are (hypothetically) best explained by respondents' current mood. Which question are the responses good answers for? The question about the current mood. Therefore, the respondents don't really answer the question about their average happiness, they answer the question about their current mood instead.
The problem with these explanations seems to be the same: we try to fit the behavior (actions and responses to questions both) to the idea of humans as agents, whose behavior closely optimizes the goals they really pursue, and whose answers closely answer the questions they really consider. But there seems to be no reality to the (coherent) goals and beliefs (or questions one actually considers) that fall out of a descriptive model of humans as agents, even if there are coherent goals and beliefs somewhere, too loosely connected to actions and anticipations to be apparent in them.
↑ comment by [deleted] · 2012-01-28T05:12:53.451Z · LW(p) · GW(p)
I am probably not qualified to make good guesses about this, but as an avid reader of O.B., I think Hanson would be among the first people to agree with you that humans aren't subconsciously enacting coherent goals. The agent-with-hidden-goals model, as in many situations where a Markov-model-like formalism is adopted, is just an expedient tool that might offer some correlation with what an agent will do in a given future situation. Affiliation with prestigious institutions, while probably not a coherent goal held over time by many people, does seem to correlate with certain actions (endorsing credentialed folks' predictions, trusting confident-seeming doctors, approving municipal construction projects despite being told to explicitly account for planning bias, etc.).
I guess what I'm suggesting is that you're right that people don't have these as coherent goals, but I don't see any better predictive model and would still use the hidden-goal model until a better one comes up. IMO, the 'better one' will just be a deeper-level Markov model. Maybe we don't have easily explicable hidden goals that lend themselves to English summaries, but presumably we do have cognitive principles that differ from noise and cause correlation with survival behaviors. Small-model approximations of this are of course bad and not the whole story, but they are better than anything else at the moment and oftentimes useful.
↑ comment by Vladimir_Nesov · 2012-01-28T12:05:42.508Z · LW(p) · GW(p)
Yes, these "hidden goals" and "hidden questions" of descriptive idealization have predictive power, behavior clusters around them. The (potential) error is in implying that these hold a more fundamental role, existence as actual goals/questions/beliefs, that their properties contradict those of their (more idealistic) loosely-connected counterparts in a normative idealization. People behaving in a way that ignores more direct signals of healthcare quality and instead pursuing healthcare providers' prestige doesn't easily contradict people normatively caring about quality of healthcare.
↑ comment by [deleted] · 2012-01-28T19:57:52.977Z · LW(p) · GW(p)
Sure, all it tells us is that the signal we evolved to extract from the environment when worrying about healthcare is related to credential. That was probably a great way to actually solve healthcare problems in various time periods past. If you really do care about healthcare, and the environment around you affords a low-cost signal in the form of credential that correlates to better healthcare, then you'll slowly adopt that policy or die; higher-cost signals might yield better healthcare but at the expense of putting yourself at a disadvantage compared to competitors using the low-cost signal.
When I hear the term 'hidden goal' in these models, I generally substitute "goal that would have correctly yielded the desired outcome in less data-rich environments." I agree it is misleading to tout some statement like, "look how foolish people are because they care more about credential than about the real data behind doctors' successes or treatments' survival rates." But I also don't think Hanson or Kahneman are saying anything like that. I think they are saying, "Look at how unfortunate our intrinsic, evolved signal-processing machinery is. What worked great when the best you could do was hope to live to the age of 30 as a hunter gatherer turns out not to really be that great at all when you state more explicit goals tied to the data. Gee, if we could be more aware of the residual hunter-gatherer mechanisms that produced these cognitive artifacts, maybe we could correct for them or take advantage of them in some useful way." Perhaps "vestigial goals" is a better term for what Hanson calls "hidden goals."
↑ comment by [deleted] · 2012-01-27T22:39:17.476Z · LW(p) · GW(p)
Hanson-type explanations: do they assume coherent goals? What if we disregard goals and focus on urges? So, if people respond to the prestige of the hospital rather than the health care it provides, we might then say that their urges pertain more to prestige than to health care. How does a Hansonian explanation require coherent goals?
↑ comment by Jonathan_Graehl · 2012-01-27T22:17:47.490Z · LW(p) · GW(p)
Well put.
But we can use one of these explanations (your hidden goal is to optimize status, etc.) to predict yet-unobserved behavior in other contexts.
In the case of "did I answer an easier / more accessible question than was really posed?", you may just be inventing a new just-so story in every case. So like all self-help/productivity tricks, I can use one hoping that they remind me to act more deliberately when it matters, more than they waste our energy, but I can't be sure it's more than placebo.
comment by Ezekiel · 2012-01-26T15:32:35.520Z · LW(p) · GW(p)
This seems to be something of a fake explanation. The statement is: sometimes the brain provides false information that actually answers a different question from the one asked.
That would be true no matter what answer was being given (unless it was completely random), because so long as the answer thrown up correlates with something, you can say: "Aha! The brain has substituted something for the question you thought was being asked!"
And since this explanation could be given for any answer the brain throws up, it doesn't actually give us any new information about the cognitive algorithm being used.
↑ comment by Kaj_Sotala · 2012-01-27T09:59:33.539Z · LW(p) · GW(p)
That would be true no matter what answer was being given (unless it was completely random), because so long as the answer thrown up correlates with something, you can say: "Aha! The brain has substituted something for the question you thought was being asked!"
Well... in principle, yes. But that seems a little uncharitable - do you think I actually committed that mistake in any of the examples I gave? I.e. do you actually think that it seems like a wrong explanation for any of those cases? If yes, you should just point them out. If not, well, obviously any explanation can be rationalized and overextended beyond its domain. But it being possible to misapply something doesn't make the thing itself bad.
You are right in the sense that the substitution principle provides, by itself, rather little information. It only says that an exact analysis is being replaced with an easier, heuristic analysis, but doesn't say anything about what that heuristic is. Figuring that out requires separate study.
But the thing that distinguishes a fake explanation from a real explanation is that a fake explanation isn't actually useful for anything, for it doesn't constrain your anticipations. That isn't the case here. The substitution principle provides us with the (obvious in retrospect) anticipation that "if your brain instantly returns an answer to a difficult problem, the answer is most likely a simplified one", which provides a rule of thumb which is useful for noticing when you might be mistaken. (This anticipation could be disproven if it turned out that all such answers were actually answers to the questions being asked.) While learning about specific biases/heuristics might be more effective in teaching you to notice them, such learning is also more narrow in scope. The substitution principle, in contrast, requires more thought but also makes it easier to spot heuristics you haven't learned about before.
↑ comment by Ezekiel · 2012-01-27T12:48:40.944Z · LW(p) · GW(p)
Do you think I actually committed that mistake?
I'm claiming that the substitution principle isn't so much a model as a (non-information-adding) rephrasing of the existence of heuristics, so there isn't much of a "mistake" to be made - the rephrasing isn't actually wrong, just unhelpful.
Unless of course you actually think the new question is explicitly represented in the brain in a similar way that a question read or heard would be, in which case I think you've made that mistake in every single one of your examples, unless you have data to back up that assumption.
But it being possible to misapply something doesn't make the thing itself bad.
Being possible to easily misapply an explanation does make it bad, because that means it's not anticipation-constraining.
The substitution principle provides us with the (obvious in retrospect) anticipation that "if your brain instantly returns an answer to a difficult problem, the answer is most likely a simplified one"
This is exactly what I'd expect the moment after learning of the existence of natural heuristics. If the brain is answering a question in less time than I'd expect it to take to calculate/retrieve the answer, obviously it's doing something else. What this post seems to be trying to add is that "doing something else" can be refined to "answering a different question" - but since the brain is providing output of type Answer, any output will be "answering a different question", so it's not actually a refinement.
It's possible you just wanted to explicitly state a principle that happened to be implicitly obvious to me, in which case we have no disagreement. But the length of the post and the fact that you bothered to cite Kahneman seem to me to indicate that you're trying to say something more substantial, in which case I've missed it.
↑ comment by [deleted] · 2012-01-28T05:21:27.513Z · LW(p) · GW(p)
I disagree that it fails to be a model. It predicts what types of information will be used by the agent (i.e. answers to simpler questions). Though Kahneman's book presents all this in a glossed-over, readable way, his actual research papers do combine this with anchoring effects to specifically control and test that certain answers to anchor questions are being substituted for answers to more difficult questions. It's actually quite powerful as a model.
↑ comment by Ezekiel · 2012-01-28T09:50:27.455Z · LW(p) · GW(p)
What definition does Kahneman use for "simpler"?
↑ comment by [deleted] · 2012-01-30T17:46:16.482Z · LW(p) · GW(p)
I'm no expert on this, but one main thing he uses as a proxy is pupil dilation. In arithmetical tasks, there is a strong correlation between pupil dilation, reported mental effort required to finish the task, and time taken to finish the task. So the same person, when asked to do an unnatural modular arithmetic problem, will show more dilated pupils, report having a harder time, and take longer than when solving a similar but more familiar numerical problem, like adding large numbers. Kahneman applied similar approaches to moral and ethical questions.
I don't dispute that his findings are difficult to interpret and that many people (including Kahneman) probably overstretch to make them fit a compelling story (Kahneman even admits as much when discussing the narrative bias). But the overall model that when your System 1 hits a cognitive wall it demands that System 2 give it a compelling story about why this is so seems to be well confirmed. If you're willing to accept that things like pupil dilation are a proxy for how difficult a question feels to the answerer, then the anchoring-controlled experiments show that System 2 is lazy and wants the cheapest, quickest answer for a hard problem and it will often substitute an answer for a question it was already thinking about, or a question of considerably less cognitive strain (as measured by pupil dilation, etc.)
↑ comment by Kaj_Sotala · 2012-01-27T13:19:35.624Z · LW(p) · GW(p)
It's possible you just wanted to explicitly state a principle that happened to be implicitly obvious to me, in which case we have no disagreement.
I think this is the case. The principle seems to me obvious in retrospect, but it did not feel obvious before I'd read Kahneman.
Also, I was thinking about using the principle as a tool for teaching rationality, and this post was to some extent written as an early draft "how would I explain biases and heuristics to someone who's never heard about them before" article, to be followed by concrete exercises of the Sunk Costs type, which I'm about to start designing next.
↑ comment by [deleted] · 2012-01-28T03:05:09.362Z · LW(p) · GW(p)
The "substitution principle" isn't as trivial as both of you conclude. The claim is that an easier to answer question is substituted for the actual question when System 1 can't answer the harder question. That's not the same as saying a simplifying heuristic is involved. The difference is that to accord with the substitution principle, you simplify the question, which you then use valid means to ascertain. In the case of generic heuristic substitution, you use a heuristic that may not answer any plausible question—except as an approximation. The "substitution principle" constrains the candidate heuristics further (by limiting them to exact answers to substituted questions) than do ordinary heuristics. The "substitution principle" is an elegant theory, although don't know whether it's true. (I haven't read Kahnemann's newest book.)
↑ comment by orthonormal · 2012-01-26T20:41:15.816Z · LW(p) · GW(p)
A better summary might be: if you intuit a quick answer to a complex question, it's often instead an answer to a related question about your current mental/emotional state, and as such is subject to noticeable biases.
comment by [deleted] · 2012-01-26T19:21:54.770Z · LW(p) · GW(p)
You are making many unanalyzed assumptions here.
1) You are assuming that your mind did or did not do certain things in those moments when it was quietly answering the question. In particular, you make assumptions like "....your answer probably didn't take any of the above complications into account. It's as if your brain, while generating an answer, never even considered them .....". Why are you so sure that it probably didn't take any of the above considerations into account?
To illustrate how wrong this might be, consider that when a cognitive psychologist gives someone a visual priming task, with (for example) masked cues, the subject reports that she did not take ANY account of the masked cues. And yet, she shows clear evidence that she did very much take the cue into account! The proof is right there in the reaction times (which depend on what was in the cue).
So if someone can be that wrong in their self-report assessment of what factors they are taking into account in a situation as simple as masked priming, what is the chance that a person in one of the scenarios you describe above is also doing all kinds of assessments that actually happen below the reporting threshold? At the very least this seems likely. But even if you don't accept that it is likely, you still have to give reasons why we should believe that it is not happening.
So, when Kahneman cites substitutions, does his evidence clearly distinguish substitutions from complex assessments that may merely be interpreted as substitutions, or that are correlated with substitutions? I don't buy that.
My second objection has to do with the oversimplification of the analysis:
2) You seem to be framing a lot of scenarios as if they were all instances of the same type of problem. As if the same mechanism was operating in most or all of these circumstances. Your mechanism involves a "target question", a "substituted question" (assumed to be of dubious validity) and a resulting answer that is assumed to be of sub-optimal quality. While there may be some situations where this frame neatly applies, I do not believe that it applies to all, nor do I believe that it helps to try to oversimplify all instances of "bias" so they can be squeezed into this narrow frame.
At the very least, there appear to be situations that do not fit the pattern. Chess skill, for example. The question asked by the chess player is "How do I take the opponent's King?". But rather than address this question directly (as I did, in my very first chess game, when I imagined a sequence of moves that culminated in me taking that King, then started executing my planned sequence of moves .....), the expert chess player knows that a different set of questions have to be asked: to wit, "How do I make pleasing, coherent patterns of support and strength on the board?" and "Do I recognize anything about the current pattern as similar or identical to a situation I have seen in the past?"
That particular "substitution" happens to be extremely optimal. It also happens to be not how chess computers work (by and large: let's not get sidetracked by the finer points of chess programming ... the fact is that machines rely on depth to a very large extent, whereas humans rely on pattern). So it makes no sense to talk about substitution as a "problem" in this case. Far from it, substitution seems to be why a tiny little lookahead device (human mind) can give the massive supercomputer a run for its money.
3) Finally, this analysis overall has the feel (like almost all "human bias is a problem" arguments) of fitting the data to the theory. So many people (here on LW, and in the biases community generally) want to see "biases" as a big deal that they see them everywhere. All the evidence that you see supports the idea that the concept of "biases and heuristics" is a meaningful one, that there are many instances of that concept, and that those instances have certain ramifications.
But, like people who see images of Jesus in jars of Marmite, or evidence of divine intervention whenever someone recovers from an illness, I think that you see evidence of bias (and, in this case, substitution) primarily because you have trained yourself to theorize about the world that way. Not because it is all real.
↑ comment by Dmytry · 2012-01-27T12:14:18.644Z · LW(p) · GW(p)
"It also happens to be not how chess computers work "
Not to sidetrack this with some unimportant fine point, but that is by and large a significantly invalid assertion, and insofar as this assertion is relevant it needs to be corrected.
It actually happens to be, to a significant extent, how good chess programs work. Also, for the best, most effective chess programs (e.g. Crafty), you need to download gigabyte-sized datasets before they'll play their best. Even the badly playing naive programs try to maximize the piece imbalance rather than consider just the taking of the king. It is literally impossible (on today's hardware) to play chess by considering just the taking of the king - whether you are a human or a supercomputer. Maybe an extremely powerful supercomputer could play chess by considering just the king - but that is an extreme case of playing chess perfectly.
There is no practical solution that does not involve substitution, not even for a well-defined, compact problem like chess. Computers, even now, literally cannot win against humans without relying on the human mind to perform good substitutions. Furthermore, if you run competitions between chess programs on the same hardware, the one with the best substitutions and heuristics will win.
I think that actually strengthens your point. Nothing can play chess yet without heuristics. Something that could play chess without heuristics would also have the power to play chess perfectly (it would have to see the demise of the king, or the tie, right from the first move in order to make any first move - that's a depth of, for some of the tree, 100+ moves), and it would give the answer to a long-standing unsolved problem: which side wins with perfect play? Or is it a tie?
↑ comment by PhilGoetz · 2012-01-29T21:51:28.531Z · LW(p) · GW(p)
It actually happens to be, to significant extent, how good chess programs work... Nothing can play chess yet without heuristics.
Citation needed. The best chess-playing computer AFAIK is still Deep Blue. Deep Blue evaluated 200 million positions per second. That means it could look about 7 ply ahead per move, exhaustively, with no heuristics. But I agree that this does not weaken Richard's point.
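(A back-of-the-envelope check of that figure, added for concreteness: the average branching factor of roughly 35 legal moves and the three minutes of thinking per move are assumptions; the 200 million positions per second is the number quoted above.)

```python
branching_factor  = 35       # assumed average number of legal moves per position
positions_per_sec = 200e6    # Deep Blue figure quoted above
seconds_per_move  = 180      # assumed thinking time per tournament move

searchable = positions_per_sec * seconds_per_move   # ~3.6e10 positions per move
ply = 0
while branching_factor ** (ply + 1) <= searchable:
    ply += 1
print(ply)  # -> 6: exhaustive search with no pruning reaches roughly 6-7 ply
```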
↑ comment by A1987dM (army1987) · 2012-01-29T22:31:12.016Z · LW(p) · GW(p)
Even if you can look 7 ply ahead, how do you evaluate which of the possible resulting positions is better for you, without heuristics?
↑ comment by Dmytry · 2012-01-30T08:16:28.088Z · LW(p) · GW(p)
Even if you can look 7 ply ahead, how do you evaluate which of the possible resulting positions is better for you, without heuristics?
Precisely. The king is still alive 7 moves ahead (usually).
At the very least you use heuristics like summing the piece values, with penalties for e.g. two of your pawns on the same file (and bonuses for a pawn that threatens to become a queen, etc.).
http://en.wikipedia.org/wiki/Computer_chess#Leaf_evaluation
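(For concreteness, a minimal sketch of the kind of leaf evaluation being described. The material values follow the common centipawn convention; the doubled-pawn penalty and the advancement bonus are illustrative numbers, not any particular engine's tuning.)

```python
PIECE_VALUE = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}  # centipawns; kings not scored

def evaluate(board):
    """board: dict mapping (file, rank) -> piece code such as 'wP' or 'bQ'.
    Returns a score in centipawns, positive when White is better."""
    score = 0
    pawn_files = {"w": {}, "b": {}}
    for (file, rank), piece in board.items():
        colour, kind = piece[0], piece[1]
        sign = 1 if colour == "w" else -1
        score += sign * PIECE_VALUE.get(kind, 0)            # material count
        if kind == "P":
            advance = rank - 2 if colour == "w" else 7 - rank
            score += sign * 5 * advance                     # bonus for pawns nearer promotion
            pawn_files[colour][file] = pawn_files[colour].get(file, 0) + 1
    for colour, sign in (("w", 1), ("b", -1)):
        for count in pawn_files[colour].values():           # penalty for doubled pawns
            if count > 1:
                score -= sign * 20 * (count - 1)
    return score

print(evaluate({("e", 4): "wP", ("e", 5): "bP", ("d", 1): "wQ"}))  # -> 900: White is up a queen
```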
Also, God only knows how many millions of positions per second Garry Kasparov evaluates. Possibly quite many, as they can be evaluated in parallel.
I would think that Kasparov's evaluation, in a sense, substitutes less than Deep Blue's - Kasparov can meta-strategise toward the king's death or toward a tie from any position. When you are playing for a tie you try to make the computer exchange pieces; when you are playing for a win you try to put the computer into a situation where you will be able to plan more moves ahead than the computer (the computer tends to act as if it desperately hopes that some bad-looking move wins the game).
↑ comment by A1987dM (army1987) · 2012-01-30T10:15:33.777Z · LW(p) · GW(p)
(To end a block quote, leave a blank line (i.e. two consecutive line breaks) after it.)
Precisely. The king is still alive 7 moves ahead.
In the endgame this is useful, but in the opening/early mid-game, not so much. The king is still alive 7 moves ahead in the overwhelming majority of possible legal sequences of moves.
At the very least you use heuristics like summing the piece values, with penalties for e.g. two of your pawns on the same file (and bonuses for a pawn that threatens to become a queen, etc.).
I'd bet Deep Blue does that, too.
EDIT: The “usually” you added makes it clear that I had completely missed your point. I'm retracting this.
↑ comment by Dmytry · 2012-01-30T11:50:08.598Z · LW(p) · GW(p)
ahh btw to satisfy the citation request:
I've participated in some computer contests, not chess-related, where you can't solve the problem exactly. It's generally the case that you need clever "substitutions" to get anywhere at all. Most problems that aren't outright trivial are too hard to solve from first principles.
Definitely so for real-world behaviours. A supposedly 'rational' calculation of the best move from first principles (death of the king), but without substitutions and hunch-based heuristics, done by a human over the course of centuries, will not be even remotely powerful enough to match the immediate move that Kasparov would somehow make playing 1-minute blitz (and if you actually play chess really fast, it does become apparent that you just can't match this play without massively parallel, rather deep evaluation). I think all aspiring rationalists should learn board games like Chess and Go, and perhaps some RTS (like StarCraft, though my favourite is SpringRTS; in StarCraft the AI has too much of an advantage because the human bottlenecks on output) to appreciate the difficulties. Ideally, try writing an AI that fights against other AIs (in a timed programming contest) to appreciate the issues with advance meta-strategies and heuristics.
I perform best on that sort of stuff if I am not tired and I act on feelings. If I am tired or bored and I act on feelings, that leads to a quick loss - which is also rational, because what the hell am I doing playing a game when tired?
edit: where the hell did my wikipedia link disappear? ahh nevermind.
↑ comment by Ender · 2012-01-31T02:42:20.760Z · LW(p) · GW(p)
I have a friend who is much better at StarCraft than I am; he says that he's largely better because he's worked out a lot of things, like exactly the most efficient time to start harvesting gas and the resource collection per minute of harvesters under optimal conditions, and he uses that information when he plays. It works better than playing based on feelings (by which I mean that he beats me).
If you don't have way too much time on your hands, though, it's about as much fun to not bother with all of that.
Also, I notice you cited a Wikipedia page. Naughty, naughty, naughty.
↑ comment by Dmytry · 2012-01-31T18:54:08.551Z · LW(p) · GW(p)
Well, yeah, the start timings are important... they correspond to 'book' knowledge of chess openings. I did those a fair amount when I was playing chess more seriously (which was long, long ago, when I was 10-12). You have to work those out before you start actually playing, and in RTS those openings are not so well developed that you can just read them off an old book like you do in chess.
A chess computer, btw, also uses openings. I think none of the 20 possible first moves leads to the loss of the king within 10 ply (I'd wager that none of White's possible first moves leads to an inevitable loss or victory at all, and certainly not in less than 50 ply), and the computer plays e2-e4 (or another good first move) purely by book. It doesn't figure out that e2-e4 is a better move than, say, a2-a3 from the rules alone. There's a LOT of human thought about chess that chess AI relies on to beat humans.
↑ comment by shokwave · 2012-01-31T03:17:48.955Z · LW(p) · GW(p)
though my favourite is springrts
Oh, hey, wow. I am a huge fan of Total Annihilation; this is really exciting! Which game do you recommend using with springrts?
↑ comment by Dmytry · 2012-01-31T18:43:50.905Z · LW(p) · GW(p)
I used to play balanced annihilation with springrts. I didn't play it a whole lot but I did program some lua scripts for it and contributed to lobby development. Not playing it much any more but it is very interesting to make scripts for.
↑ comment by Kaj_Sotala · 2012-01-27T10:32:30.072Z · LW(p) · GW(p)
1) You are assuming that your mind did or did not do certain things in those moments when it was quietly answering the question.
This is a fair criticism, and you're right - I can't say with definite certainty that these things were actually never considered at all. Still, if those things were considered, they don't seem to be reflected in the final output. If instead of saying "the thing is, it probably didn't", I said "the thing is, it probably didn't - or if it did, it's difficult to notice from the provided answer", would you consider that acceptable?
2) You seem to be framing a lot of scenarios as if they were all instances of the same type of problem...
I think you might be somewhat misinterpreting me here. I didn't say that substitution is necessarily a problem - I specifically said it probably works pretty well most of the time. Heck, I imagine that if I were building an AI, I would explicitly program something like a substitution heuristic into it, to be used most of the time - because difficult problems are genuinely difficult, both in the sense of being computationally expensive and requiring information that isn't usually at hand. A system that always tried to compute the exact answer for everything would never get anything done. Much better to usually employ some sort of quick heuristic that tended to at least point in the right direction, and then only spend more effort on the problem if it seemed to be important.
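(Schematically, the kind of design meant here might look like the following sketch; the function names and the stakes threshold are placeholders, not a description of any real system.)

```python
def decide(question, stakes, cheap_heuristic, exact_solver, stakes_threshold=0.8):
    """Answer cheaply by default; escalate only when the stakes justify it."""
    quick = cheap_heuristic(question)    # always fast, usually pointed roughly the right way
    if stakes < stakes_threshold:
        return quick                     # good enough; move on
    # Unlike the human case, the system knows a substitution happened, so it
    # can deliberately pay for the exact computation when it matters.
    return exact_solver(question, initial_guess=quick)
```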
For that matter, you could say that a large part of science consists of a kind of substitution. Does System 1 actually ignore all those complicated considerations when considering its answer? Well, we can't really answer that directly... but we can substitute the question with "does it seem to be ignoring them in certain kinds of experimental setups", and then reflect upon what the answers to that question seem to tell us. This is part of the reason why I felt confident in saying that the brain probably never did take all the considerations into account - because simplifying problems to easier ones is such an essential part of actually ever getting anything done that it would seem odd if the brain didn't do that.
So I agree that there are many cases (including your chess example) where substitution isn't actually a problem, but rather the optimal course of action. And I agree that there are also many cases of bias where the substitution frame isn't the best one.
The reason why I nevertheless brought it up was that, if I were building my hypothetical AI, there's still one thing that I'd do differently than how the human brain seems to do it. A lot of the time, humans seem to be completely unaware of the fact that they are actually making a substitution, and treat the answer they get as the actual answer to the question they were asking. Like I mentioned in my other comment, I think the substitution principle is useful because it gives a useful rule of thumb that we can use to notice when we might be mistaken, and might need to think about the matter a bit more before assigning our intuitive result complete confidence.
comment by Ghatanathoah · 2012-01-27T21:25:46.166Z · LW(p) · GW(p)
It seems to me like the substitution principle might be an explanation for the power of framing effects. You get a different answer for a question depending on what question you end up substituting for System 1 to answer.
That might explain the framing effect Eliezer noticed in Circular Altruism. The first time the question was asked it triggered System 1 to ask "How much would it suck if that gamble in option 2 didn't pay off?" When the question was reframed it triggered System 1 to ask "How much does it suck that 100 people die with certainty?"
That probably explains why political activists like to fight over framing effects so much. When people are asked the question "Would it be good for the government to do this?", System 1 spits out an answer to "Would it be good for my favorite metaphor for the government to do something vaguely analogous?" What specific metaphor System 1 gets fed would obviously have huge power in framing the question.
comment by JoachimSchipper · 2012-01-26T14:31:53.284Z · LW(p) · GW(p)
This involves three skills: first recognizing a problem as a difficult one, then figuring out what heuristic you might have used, and finally coming up with a better solution.
Figuring out the particular heuristic seems more interesting than useful - "don't trust immediate answers" is a good rule, but it seems easier to start again than to de-bias an immediate guess (i.e. look up average salaries instead of trying to figure out how much you need to discount your guess of doctors' average salaries.)
↑ comment by Kaj_Sotala · 2012-01-26T14:59:43.174Z · LW(p) · GW(p)
Good point. I think I mostly intended the second step to be a way for you to evaluate whether the heuristic you were using seems to be a reasonable one - there could be cases where you look at it and decide that it's probably good enough for your purposes.
↑ comment by Vaniver · 2012-01-26T15:49:08.372Z · LW(p) · GW(p)
Figuring out the particular heuristic seems more interesting than useful - "don't trust immediate answers" is a good rule
Well, except for all those times where second-guessing makes you worse off.
↑ comment by spriteless · 2012-02-07T20:11:25.408Z · LW(p) · GW(p)
That's like saying people are being too rational. Get better at second guessing. Get better at being rational.
comment by James_Miller · 2012-01-26T14:26:41.697Z · LW(p) · GW(p)
What are the best careers for making a lot of money?
Here is a fantastic chart showing the professions of the top 1%.
comment by Grognor · 2012-03-20T08:05:15.664Z · LW(p) · GW(p)
A classic example of this, which I learned in a middle school class, is that when you go grocery shopping, you ask your brain, "How much food do I need in the next two weeks?" and it returns the value for how hungry you are right then. So to prevent overspending, don't go grocery shopping while hungry.
↑ comment by wedrifid · 2012-03-20T10:57:46.174Z · LW(p) · GW(p)
"How much food do I need in the next two weeks?" and it returns the value for how hungry you are right then. So to prevent overspending, don't go grocery shopping while hungry.
The bulk that you purchase is not so much of a problem as what you purchase. If you buy extra food (and hopefully concentrate on eating the most perishable items first) you end up not having to shop again for longer. However, when grocery shopping while hungry, foods high in carbohydrates start seeming like a good idea.
You end up with food that is bad for you.
comment by lukeprog · 2012-01-26T17:08:00.758Z · LW(p) · GW(p)
Another post on attribute substitution is How You Make Judgments: The Elephant and its Rider.
comment by A1987dM (army1987) · 2012-01-26T15:07:47.030Z · LW(p) · GW(p)
The Planning Fallacy: "How much time will this take" becomes something like "How much time did it take for me to get this far, and how many times should that be multiplied to get to completion."
More like "how much stuff I have left to do divided by how much I did today"; even "How much time did it take for me to get this far, and how many times should that be multiplied to get to completion" wouldn't be that bad, because that would mean I take into account the time I have procrastinated (or been hindered) thus far and expect to procrastinate (or be hindered) for roughly the same fraction of the time.
comment by TimS · 2012-01-26T14:21:52.355Z · LW(p) · GW(p)
That's an interesting concept, with the particular advantage that it avoids the "evolved for the ancestral environment" just-so story.
I especially like the unpacking of various specific biases. How do you think the in-group bias would be unpacked?
↑ comment by Kaj_Sotala · 2012-01-26T15:03:44.053Z · LW(p) · GW(p)
How do you think the in-group bias would be unpacked?
That one doesn't seem to fit into the substitution framework very naturally: you can come up with ways to do it (I just tried), but they all feel a little tortured. (Or maybe I just didn't think about it long enough.)
comment by qvalq (qv^!q) · 2023-03-07T18:58:31.522Z · LW(p) · GW(p)
For all the examples Kahneman gives, I do not seem to substitute the questions he lists (or worse ones), even for a moment.
This is the first time in a while I've felt immune to a cognitive bias.
What am I doing wrong? Does my introspection not go deep enough?
Maybe I really have read enough trick questions on the internet that I (what part of me?) immediately tries to tackle the hard problem (at least when reading a Less Wrong article which just told me exactly which mistake I'm expected to make).
I have an impression I've fallen for problems of this type before, when they were given for instrumental rather than epistemic reasons. But I can't remember any examples, and I don't know how frequently it happens.
comment by MarkusRamikin · 2012-03-02T21:31:27.035Z · LW(p) · GW(p)
Forgive me if I'm not really getting it - which is entirely possible.
But it seems to me that "the substitution principle" is not really an explanation. All it seems to be saying is that it's possible to express biases, rewrite their descriptions, in that particular form. It doesn't describe a mechanism, but is more of a semantic trick, like you've bounced off a problem. I can't possibly imagine making useful predictions based on it.
EDIT: Ezekiel possibly said it better.
comment by Desrtopa · 2012-02-01T13:56:19.633Z · LW(p) · GW(p)
My immediate thought was "Do you mean the best for reliably becoming affluent, or the ones that have the potential to make you the richest?"
I think I have a pretty general tendency to stay in System 2 reasoning when asked hard-to-answer questions, by reflexively looking for ways to clarify rather than reframe the question. I don't know if this is something I ever went through a learning process for.
I often find myself getting stuck on the "come up with a better solution" step though. Since I didn't generate another answer in the first place, "How should financial advisers who prey on the elderly be punished?" becomes "How much do I care about working out a fair, effectively preventative punishment for financial advisers who prey on the elderly? I could be at this for a while."
At least coming up with an answer of "I don't care enough about this/have a good reason to figure this out properly" gives you useful information on your own motives and the gaps in your knowledge.
comment by Spurlock · 2012-01-30T18:02:26.343Z · LW(p) · GW(p)
The Conjunction Fallacy: "What's the probability that Linda is a feminist" becomes "how representative is Linda of my conception of feminists".
I think this is more precisely an example of the Representativeness Heuristic, though the point about substitution still stands.
comment by billswift · 2012-01-26T14:27:17.056Z · LW(p) · GW(p)
How popular will the president be six months from now? becomes How popular is the president right now?
I wish people would quit using this as an example of a fallacy. With the gross uncertainties involved in predicting presidential popularity, "current popularity" is probably the best predictor available.
↑ comment by Kaj_Sotala · 2012-01-26T14:57:35.726Z · LW(p) · GW(p)
It was mentioned as an example of a substitution, not necessarily a fallacy. Like I say in the next paragraph, the heuristic does work pretty well most of the time.
↑ comment by wedrifid · 2012-01-27T13:55:17.048Z · LW(p) · GW(p)
I wish people would quit using this as an example of a fallacy. With the gross uncertainties involved in predicting presidential popularity, "current popularity" is probably the best predictor available.
Not true. Popularity isn't an efficient futures market.
↑ comment by A1987dM (army1987) · 2012-01-27T17:44:01.284Z · LW(p) · GW(p)
OK, what would be a better predictor of “popularity six months from now”?
↑ comment by prase · 2012-02-03T00:29:07.023Z · LW(p) · GW(p)
Depends on the situation, but for example the president is reliably much more popular just after his / her election than two years later. To expect current popularity just after the election to equal the president's popularity two years later is stupid.
↑ comment by Stuart_Armstrong · 2012-01-27T08:13:51.168Z · LW(p) · GW(p)
When asked at the beginning of a president's term, when we know he'll be less popular in six months, it is a fallacy.
↑ comment by TheOtherDave · 2012-01-26T15:37:33.968Z · LW(p) · GW(p)
That's fair, as long as my confidence in that prediction is correspondingly low.
The problem arises when I treat current popularity as predicted value of future popularity with high confidence.
↑ comment by gwern · 2012-01-26T15:45:26.512Z · LW(p) · GW(p)
With the gross uncertainties involved in predicting presidential popularity, "current popularity" is probably the best predictor available.
People like Nate Silver do not agree with that at all.
↑ comment by orthonormal · 2012-01-26T20:37:04.389Z · LW(p) · GW(p)
That's a little strong. From this pair of articles, it looks like a President's approval ratings are a poor estimator of their reelection chances when considered 2 years away, but quite a good one when considered 6 months out. The volatility in opinions seems to operate on timescales of several months.
Nate will of course include factors other than current polling averages in his forecasts, but those averages are the main component, even with the uncertainty in a few months' time.
comment by JulianMorrison · 2012-02-01T12:39:39.213Z · LW(p) · GW(p)
It seems that I don't do that. Like for example with the making money thing, my immediate thought was "banking is the fastest way to silly money, but you need deep math skills, and the burnout rate is harsh". Or the share of chores question: I immediately try to divide total chores by my effort. I seem to be caching the methodology.
comment by Dmytry · 2012-01-27T09:18:58.051Z · LW(p) · GW(p)
I disagree with the notion that this 'quick and dirty answer' is even handled by a substantially different part of the brain in a substantially different fashion than a better answer. The better answer is perhaps made by several refinement steps each conducted in this exact same fashion as the quick and dirty answer.
Consider the job question with its complications - the question of who we are talking about. Absent a perfect brain scan and repeated re-simulation of that person, the personal qualities are themselves a heuristic answer to some kind of substitute question.
Likewise for the properties of the society and the like. Each single step in the chain of reasoning about that kind of item, even in a very long and detailed chain, is some sort of quick conclusion based on substitutions of this kind.
It is just a case of reasoning with unknowns. If the personal qualities of the "who" are unknown, then what is the point of reasoning as if they were known? What if the preference for a lot of money at low probability versus less money at higher probability is unknown as well? Absent statistics about the most common preference, the question IS best answered by picking the most common profession that made someone rich in the way that you (your only data point) prefer. It's hardly a case of substituting anything for anything else. It's a case of giving the best possible answer when the information is incomplete.