Is Sunk Cost Fallacy a Fallacy?

post by gwern · 2012-02-04T04:33:49.585Z · LW · GW · Legacy · 81 comments

I just finished the first draft of my essay, "Are Sunk Costs Fallacies?"; there is still material I need to go through, but the bulk of it is now there. The formatting is too gnarly to post here, so I ask everyone's forgiveness for having to click through.

To summarize:

  1. sunk costs are probably issues in big organizations
    • but maybe not ones that can be helped
  2. sunk costs are not issues in animals
  3. they appear to be in children & adults
    • but many apparent problems can be explained as part of a learning strategy
  4. there are few clear indications sunk costs are genuine problems
  5. much of what we call 'sunk cost' looks like simple carelessness & thoughtlessness

(If any of that seems unlikely or absurd to you, click through. I've worked very hard to provide multiple citations where possible, and fulltext for practically everything.)

I started this a while ago, but Luke/SIAI paid for much of the work, and that motivation plus academic library access made this essay more comprehensive than it would otherwise have been, and finished months in advance.

 

81 comments

Comments sorted by top scores.

comment by Morendil · 2012-02-04T11:03:01.616Z · LW(p) · GW(p)

There are interesting examples of this in Go, where pro play commentary often discusses tensions between "cutting your losses" and "being strategically consistent".

If things in Go aren't as clear-cut as the classic utilitarian example of "teleporting into the present situation" (which is typically the way Go programs are written, and they nevertheless lose to top human players), then maybe we can expect that they aren't clear-cut in complex life situations either.

This doesn't detract from the value of teaching people about the sunk-cost fallacy: novice Go players do things such as adding stones to an already dead group which are clearly identifiable as instances of the sunk cost fallacy, and improvement reliably follows from helping them identify this as thinking that leads to lost games. Similarly, improvement at life reliably results from improving your ability to tell it's time to cut your losses.

Replies from: Prismattic, gwern
comment by Prismattic · 2012-02-04T17:34:29.528Z · LW(p) · GW(p)

novice Go players do things such as adding stones to an already dead group which are clearly identifiable as instances of the sunk cost fallacy,

I don't think this is correct. Novice players keep adding stones because they don't realize the group is dead, not because they can't give up on it.

Replies from: Morendil
comment by Morendil · 2012-02-04T18:12:10.085Z · LW(p) · GW(p)

That's probably right at higher kyu levels, when you really have no good grasp of group status.

When you ask a novice "what is the status of this group", though, there is typically a time when they can correctly answer "dead" in exercise settings, but fail to draw the appropriate conclusion in a game by cutting their losses, and that's where I want to draw a parallel with the sunk cost fallacy.

This is similar to life situations where, if you just asked yourself the question "is this a sunk cost, and should I abandon it?", you'd answer yes in the abstract; but you fail to ask that question.

In high-pressure or blitz games this even happens to higher level novice players - you strongly suspect the group is dead, but you keep adding stones to it, playing the situation out: the underlying reasoning is that your opponent has to respond to any move that might save the group, so you're no worse off, you've played one more move and they've played one more.

This is in fact wrong - by making the situation more settled you're wasting the potential to use these plays later as ko threats.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2012-02-06T01:27:40.650Z · LW(p) · GW(p)

Any idea whether Go beginners' tendency to "throw good stones after bad" results from sunk cost fallacy in particular, or from wishful thinking in general?

Like, is the thought "I don't want my stones to have been wasted" or "I really want to have that corner of the board"?

Replies from: Morendil, gwern
comment by Morendil · 2012-02-06T09:52:28.785Z · LW(p) · GW(p)

I'd have to look at actual evidence to answer that question with any degree of authority, and that would take more time than I have right now, but I can sketch an answer...

My source of empirical evidence would be the Go Teaching Ladder, where you get a chance to see higher level players commenting on the inferred thought processes of more novice players. (And more rarely, novice players providing direct evidence of their own thought processes.)

Higher level players tend to recommend "light" play, over "heavy" play: a typical expression is "treat this stone lightly".

Unpacked, this means something like "don't treat this stone as an investment that you must then protect by playing further moves reinforcing your conception of this stone as a living group that must be defended; instead, treat this stone as bait that you gladly abandon to your opponent while you consolidate your strength elsewhere".

"Heavy" play sounds a lot like treating a sunk cost as a commitment to a less valuable course of action. It is play that overlooks the strategic value of sacrifice. See here for some discussion.

However, this is usually expressed from an outside perspective - a better player commenting on the style of a more novice player. I don't know for sure what goes on in the mind of a novice player when making a heavy play - it might well be a mixture of defending sunk costs, wishful thinking, heuristic-inspired play, etc.

comment by gwern · 2012-02-06T01:47:44.619Z · LW(p) · GW(p)

It may be an example of a different bias at play, specifically confirmation bias: they don't realize that the stones are being wasted and can't be retrieved. For example, chess masters commit confirmation bias less than weaker players.

(It's not that the players explicitly realize that there are better moves elsewhere but decide to keep playing the suboptimal moves anyway because of sunk costs - which would be sunk cost bias; it's that they don't think of what the opponent might do - which is closer to 'thoughtlessness'.)

comment by gwern · 2012-02-04T20:01:07.917Z · LW(p) · GW(p)

If things in Go aren't as clear-cut as the classic utilitarian example of "teleporting into the present situation" (which is typically the way Go programs are written, and they nevertheless lose to top human players), then maybe we can expect that they aren't clear-cut in complex life situations either.

That's more a fact about Go programs, I think; reading the Riis material recently on the Rybka case, I had the strong impression that modern top-tier chess programs do not do anything at all like building a model or examining the game history, but instead do very fine-tuned evaluations of individual board positions as they evaluate plies deep into the game tree. So you could teleport a copy of Shredder into a game against Kramnik played up to that point by Shredder, and expect the performance to be identical.

(If there were any research on sunk cost in Go, I'd expect it to follow the learning pattern: high initially followed by steady decline with feedback. I looked in Google Scholar for '("wei qi" OR "weiqi" OR "wei-chi" OR "igo" OR "baduk" OR "baeduk") "sunk cost" game' but didn't turn up anything. GS doesn't respect capitalization so "Go" is useless to search for.)

comment by kilobug · 2012-02-04T10:25:54.753Z · LW(p) · GW(p)

Two remarks:

  1. Be careful with the Concorde example. As a French citizen, I was told that the goal of the Concorde was never to be profitable as a passenger service; it served two goals: public relations/advertising, to demonstrate to the world the technical ability of French engineering and therefore sell French-made technology (civilian and military planes for example, but also, through a halo effect, trains or cars or nuclear power plants), and stimulating research and development that could then lead to other benefits (a bit like military research or the space program leads to civilian technology later on). Maybe it was just rationalization and not admitting they fell for the sunk cost fallacy, but as far as I remember, that was the official stance on the Concorde - and on that side, I don't really think it was sunk cost.

  2. I agree with your analysis that sunk cost is useful to counter other biases. I didn't think about the part about young children not committing it, but now that you pointed to studies showing it, it makes perfect sense (and is compatible with my own personal observation of young relatives). So, yes, the sunk cost fallacy is useful because it helps us lower the damage done by the planning fallacy and our tendency to be too optimistic. But I wouldn't go as far as saying it's not a bias. It's a bias; a "perfect rationalist" shouldn't have it. A bug that partially negates the effects of another bug, but sometimes creates problems of its own, is still a bug. So I wouldn't say "sunk cost is not a fallacy" but "sunk cost is a fallacy, but it does help us overcome other fallacies, so be careful".

Replies from: gwern
comment by gwern · 2012-02-04T19:51:12.783Z · LW(p) · GW(p)

IMO, the Concorde justifications are transparent rationalizations - if you want research, buy research. It'd be pretty odd if you could buy more research by not buying research but commercial products... In any case, I mention Concorde because it's such a famous example and because a bunch of papers call it the Concorde effect.

I agree with your analysis that sunk cost is useful to counter other biases.

I'm not terribly confident in that claim; it might be that one suffers them both simultaneously. I had to resort to anecdotes and speculation for that section; it's intuitively appealing, but we all know that means little without hard data.

I didn't think about the part about young children not committing it, but now that you pointed to studies showing it, it makes perfect sense (and is compatible with my own personal observation of young relatives).

Yeah. I was quite surprised when I ran into Arkes's claim - it certainly didn't match my memories of being a kid! - and kept a close eye out thenceforth for studies which might bear on it.

Replies from: Strange7, ChristianKl
comment by Strange7 · 2014-10-26T02:03:14.064Z · LW(p) · GW(p)

if you want research, buy research

Focusing money too closely on the research itself runs the risk that you'll end up paying for a lot of hot air dressed up to look like research. Cool-but-useless real-world applications are the costly signalling mechanism which demonstrates an underlying theory's validity to nonspecialists. You can't fly to the moon by tacking more and more epicycles onto the crystalline-sphere theory of celestial mechanics.

Replies from: gwern
comment by gwern · 2014-10-26T16:21:51.422Z · LW(p) · GW(p)

If you want to fly to the moon, buy flying to the moon. X-prizes etc. You still haven't shown that indirect mechanisms which happen to coincide with the status quo are the optimal way of achieving goals.

Replies from: Strange7
comment by Strange7 · 2014-10-28T13:11:05.740Z · LW(p) · GW(p)

"Modern-day best-practices industrial engineering works pretty well at it's stated goals, and motivates theoretical progress as a result of subgoals" is not a particularly controversial claim. If you think there's a way to do more with less, or somehow immunize the market for pure research against adverse selection due to frauds and crackpots, feel free to prove it.

Replies from: gwern, Lumifer
comment by gwern · 2014-10-28T15:58:26.373Z · LW(p) · GW(p)

is not a particularly controversial claim.

I disagree. I don't think there's any consensus on this. The success of prizes/contests for motivating research shows that grand follies like the Concorde or Apollo project are far from the only effective funding mechanism, and most of the arguments for grand follies come from those with highly vested interests in them or conflicts of interest - the US government and affiliated academics are certainly happy to make 'the Tang argument' but I don't see why one would trust them.

Replies from: Strange7
comment by Strange7 · 2014-11-01T11:50:01.895Z · LW(p) · GW(p)

I didn't say it was the only effective funding mechanism. I didn't say it was the best. Please respond to the argument I actually made.

Replies from: gwern
comment by gwern · 2014-11-01T16:36:52.552Z · LW(p) · GW(p)

You haven't made an argument that indirect funding is the best way to go, and you've made baseless claims. There's nothing to respond to: the burden of proof is on anyone who claims that bizarrely indirect mechanisms through flawed actors - actors with considerable incentive to overstate the efficacy of said indirect mechanisms and to keep doing them (suppose funding the Apollo Project was an almost complete waste of money compared to the normal grant process; would NASA ever under any circumstances admit this?) - are the best or even a good way to go compared to directly incentivizing the goal through contests or grants.

Replies from: Strange7, Jiro
comment by Strange7 · 2014-11-02T17:11:57.873Z · LW(p) · GW(p)

You haven't made an argument that indirect funding is the best way to go

On this point we are in agreement. I'm not making any assertions about what the absolute best way is to fund research.

and you've made baseless claims.

Please be more specific.

There's nothing to respond to: the burden of proof is on anyone who claims that bizarrely indirect mechanisms through flawed actors

All humans are flawed. Were you perhaps under the impression that research grant applications get approved or denied by a gleaming crystalline logic-engine handed down to us by the Precursors?

Here is the 'bizarrely indirect' mechanism by which I am claiming industrial engineering motivates basic research. First, somebody approaches some engineers with a set of requirements that, at a glance, to someone familiar with the current state of the art, seems impossible or at least unreasonably difficult. Money is piled up, made available to the engineers conditional on them solving the problem, until they grudgingly admit that it might be possible after all.

The problem is broken down into smaller pieces: for example, to put a man on the moon, we need some machinery to keep him alive, and a big rocket to get him and the machinery back to Earth, and an even bigger rocket to send the man and the machinery and the return rocket out there in the first place. The Tsiolkovsky rocket equation puts some heavy constraints on the design in terms of mass ratios, so minimizing the mass of the life-support machinery is important.
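
As a sketch of the constraint being invoked here, in its standard Tsiolkovsky form (with Δv the mission's required velocity change, v_e the exhaust velocity, and m_0, m_f the initial and final masses):

\[ \Delta v = v_e \ln\frac{m_0}{m_f} \quad\Longrightarrow\quad \frac{m_0}{m_f} = e^{\Delta v / v_e} \]

so the launch mass grows exponentially with the velocity budget, and every kilogram of life-support machinery that must come back multiplies the mass of every stage beneath it - which is why minimizing it matters so much.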

To minimize life-support mass while fulfilling the original requirement of actually keeping the man alive, the engineers need to understand what exactly the man might otherwise die of. No previous studies on the subject have been done, so they take a batch of laboratory-grade hamsters, pay someone to expose the hamsters to cosmic radiation in a systematic and controlled way, and carefully observe how sick or dead the hamsters become as a result. Basic research, in other words, but focused on a specific goal.

would NASA ever under any circumstances admit this?

They seem to be capable of acknowledging errors, yes. Are you?

"It turns out what we did in Apollo was probably the worst way we could have handled it operationally," says Kriss Kennedy, project leader for architecture, habitability and integration at NASA's Johnson Space Center in Houston, Texas, US.

http://www.newscientist.com/article/dn11326

comment by Jiro · 2014-11-01T18:14:40.508Z · LW(p) · GW(p)

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?" You can't just flip a bit in the world setting Homeopathy_Works to TRUE and keep everything else the same. If homeopathy worked and yet doctors still didn't accept it, that would imply that doctors are very different than they are now, and that difference would manifest itself in lots of other ways than just doctors' opinion on homeopathy.

If funding the Apollo Project was a complete waste of money compared to the normal grant process, the world would be a different place, because that would require levels of incompetency on NASA's part so great that it would get noticed.

Or for another example: if psi was real, would James Randi believe it?

Replies from: gwern, ChristianKl
comment by gwern · 2014-11-02T14:55:20.702Z · LW(p) · GW(p)

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?"

No; it's like asking "If homeopathy didn't work and all the homeopaths were wrong, would they admit it?" You can find plenty of critics of Big Science and/or government spending on prestige projects, just like you can find plenty of critics of homeopathy.

If funding the Apollo Project was a complete waste of money compared to the normal grant process, the world would be a different place, because that would require levels of incompetency on NASA's part so great that it would get noticed.

If homeopathy was a complete waste of money compared to normal medicine implying 'great' levels of incompetency on homeopaths, how would the world look different than it does?

Replies from: Jiro
comment by Jiro · 2014-11-02T17:00:00.878Z · LW(p) · GW(p)

You can find plenty of critics of Big Science and/or government spending on prestige projects,

Those people generally claim that Apollo was a waste of money period, not that Apollo was a waste of money compared to going to the moon via the normal grant process.

comment by ChristianKl · 2014-11-01T19:12:42.566Z · LW(p) · GW(p)

That's like asking "If homeopathy worked and all the doctors were wrong, would they admit it?" You can't just flip a bit in the world setting Homeopathy_Works to TRUE and keep everything else the same.

You can look at cases like chiropractors. For a long time there was a general belief that chiropractors didn't provide any benefit to patients, because the theory on which chiropractors base their practice is in substantial conflict with the theories used by Western medicine.

Suddenly, in 2008, Cochrane comes out with the claim that chiropractors actually do provide health benefits for patients with back pain comparable to conventional treatment for back pain.

A lot of the opposition to homeopathy is based on the fact that the theory base of homeopathy is in conflict with standard Western knowledge about how things are supposed to work.

People often fail to notice things for bad reasons.

Replies from: Jiro
comment by Jiro · 2014-11-02T04:29:47.244Z · LW(p) · GW(p)

There are very good reasons why finding that one set of studies shows an unusual result is not taken as proof by either doctors or scientists. (It is also routine for pseudoscientists to latch onto that one or few studies when they happen.)

In other words, chiropractic is not such a case.

the theory on which chiropractors base their practice is in substantial conflict with the theories used by Western medicine.

I hope you're not suggesting that the theories used by Western medicine are likely to be wrong here.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-02T12:14:40.459Z · LW(p) · GW(p)

There are very good reasons why finding that one set of studies shows an unusual result is not taken as proof by either doctors or scientists.

Cochrane meta studies are the gold standard. In general they do get taken as proof.

The main point is that you don't need to have a valid theory to be able to produce empirical results.

I also don't believe that the issues surrounding back pain are very well understood by today's Western medicine.

Replies from: Jiro
comment by Jiro · 2014-11-02T12:46:34.121Z · LW(p) · GW(p)

Cochrane meta studies are the gold standard. In general they do get taken as proof.

As a matter of simple Bayesianism, P(result is correct|result is unusual) depends on the frequency at which conventional wisdom is wrong, compared to the frequency at which other things (errors and statistical anomalies) exist that produce unusual results. The probability that the result of a study (or meta-study) is correct given that it produces an unusual result is not equivalent to the overall probability that studies from that source are correct, so "Cochrane meta studies are the gold standard" is not the controlling factor. (Imagine that 0.2% of their studies are erroneous, but conventional wisdom is wrong only 0.1% of the time. Then the probability that a study is right given that it produces a result contrary to conventional wisdom is only 1/3, even though the probability that studies in general are right is 99.8%.)
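
A minimal sketch of that arithmetic, under the assumption (implicit in the toy numbers above) that an unusual result can only come from conventional wisdom being wrong or from the study being in error:

\[ P(\text{right} \mid \text{unusual}) \approx \frac{0.001}{0.001 + 0.002} = \frac{1}{3} \]

even though the unconditional probability that a study is right is still 99.8%, because only about 0.3% of studies produce an unusual result at all.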

That's why we have maxims like "extraordinary claims require extraordinary evidence".

Replies from: hyporational, ChristianKl
comment by hyporational · 2014-11-08T06:20:17.176Z · LW(p) · GW(p)

FYI it isn't even clear the review he mentions says what he thinks it says, not to mention the reviewers noted most of the studies had high risk of bias. "Other therapies" as controls in the studies doesn't necessarily mean therapies that are considered to be effective.

comment by ChristianKl · 2014-11-02T13:50:44.861Z · LW(p) · GW(p)

The evidence for chiropractic intervention for lower back pain is good enough that RationalWiki, which is full of people who don't like chiropractic, writes: "There is evidence that chiropractic can help alleviate symptoms of low back pain." RationalWiki then adds that the cost and risks still suggest that it's good to stay away from chiropractors.

The conventional wisdom these days among people who care about evidence for medical treatment is that chiropractic interventions do have an effect in alleviating symptoms of low back pain.

That makes it a good test to identify people who pretend to care about evidence-based medicine but who care about medicine being motivated by orthodox theory instead of empirical evidence.

Replies from: Jiro
comment by Jiro · 2014-11-02T16:51:51.095Z · LW(p) · GW(p)

people who don't like chiropractic, writes: "There is evidence

Of course they'll write that. After all, there is evidence. You were implying that there's good evidence.

RationalWiki then adds that the cost and risks still suggest that it's good to stay away from chiropractors.

In other words, the evidence isn't all that good.

The conventional wisdom these days among people who care about evidence for medical treatment is that chiropractic interventions do have an effect in alleviating symptoms of low back pain.

This is a no true Scotsman fallacy. You're asserting that anyone who seems to be part of conventional wisdom but doesn't agree doesn't count because he doesn't care about evidence.

Replies from: ChristianKl
comment by ChristianKl · 2014-11-02T17:50:25.469Z · LW(p) · GW(p)

In other words, the evidence isn't all that good.

No. Saying that costs and side effects aren't worth something is very different than saying it doesn't work and produces no effect.

Conventional treatment is often cheaper than chiropractic. Dismissing it on those grounds is very different than dismissing it on the grounds that it produces no effect. Given that they don't like it, they need to make some argument against it ;) Not being able to argue that it doesn't work makes them go for risks and cost-effectiveness.

This is a no true Scotsman fallacy. You're asserting that anyone who seems to be part of conventional wisdom but doesn't agree doesn't count because he doesn't care about evidence.

Cochrane meta studies have a reputation that's good enough that even venues like RationalWiki accept it when it comes to conclusions that they don't like.

There's no meta-study published after the Cochrane results that argues that the Cochrane analysis gets things wrong. The convention in evidence-based medicine is then to use the Cochrane results as the best source of evidence. It's not only RationalWiki. Any good evidence-based source that has a writeup about chiropractic will these days tell you that the evidence suggests it works for back pain, for a value of "works" that means it works as well as other conventional treatments for back pain.

Replies from: Jiro
comment by Jiro · 2014-11-03T00:25:40.422Z · LW(p) · GW(p)

Saying that costs and side effects aren't worth something is very different than saying it doesn't work and produces no effect.

No, they're not very different at all. In fact they are directly related. Saying that costs and side effects are too great means that costs and side effects are too great for the benefit you get. If there is some probability that the study is bad and there is no benefit, that gets factored into this comparison; the greater the probability that the study is bad, the more the costs and side effects tip the balance against getting the treatment.

Cochrane meta studies have a reputation that's good enough that even venues like RationalWiki accept it when it comes to conclusions that they don't like.

You didn't say that everyone accepts it. You said that everyone who cares about evidence accepts it. This is equivalent to "the people who don't accept it don't count because their opinions are not really based on evidence". Likewise, now you're claiming "any good evidence-based source" will say that it works. Again, this is a No True Scotsman fallacy; you're saying that anyone who disagrees can't really be an evidence-based source.

Replies from: Strange7
comment by Strange7 · 2014-11-03T13:30:49.955Z · LW(p) · GW(p)

It's only a No True Scotsman if you can point to an actual citizen of Scotland who doesn't meet the 'true Scotsman' standard.

You are conflating two claims here. One is that chiropractic is more expensive than conventional treatments for lower back pain, and the other is that chiropractic is less effective than conventional treatments for lower back pain. What support do you have for the latter claim?

Replies from: Jiro
comment by Jiro · 2014-11-03T17:12:18.582Z · LW(p) · GW(p)

I covered that:

Saying that costs and side effects are too great means that costs and side effects are too great for the benefit you get. If there is some probability that the study is bad and there is no benefit, that gets factored into this comparison; the greater the probability that the study is bad, the more the costs and side effects tip the balance against getting the treatment.

Replies from: Strange7
comment by Strange7 · 2014-11-03T19:33:50.826Z · LW(p) · GW(p)

If there was some non-negligible probability that the study was bad, RationalWiki would, given their dislike for chiropractics, have seized upon that and discussed it explicitly, would they not?

Replies from: Jiro
comment by Jiro · 2014-11-07T21:39:59.336Z · LW(p) · GW(p)

They describe the Cochrane study as "weak evidence" that chiropractic is as effective as other therapy. This implicitly includes some non-negligible probability that the benefit is less than the study seems to say it is.

comment by Lumifer · 2014-10-28T16:18:09.326Z · LW(p) · GW(p)

"works pretty well" is not a controversial claim, but "motivates theoretical progress" is more iffy.

Offhand, I would say that it motivates incremental progress and applied aspects. I don't think it motivates attempts at breakthroughs and basic science.

Replies from: Strange7
comment by Strange7 · 2014-11-01T12:00:31.344Z · LW(p) · GW(p)

'Breakthroughs and basic science' seem to be running in to diminishing returns lately. As a policy matter, I think we (human civilization) should focus more on applying what we already know about the basics, to do what we're already doing more efficiently.

comment by ChristianKl · 2014-10-26T18:15:54.439Z · LW(p) · GW(p)

IMO, the Concorde justifications are transparent rationalizations - if you want research, buy research. It'd be pretty odd if you could buy more research by not buying research but commercial products... In any case, I mention Concorde because it's such a famous example and because a bunch of papers call it the Concorde effect.

It really depends on your view of academics. If you think that when you hand them a pile of money they just invest it in playing status games with each other, then giving them a clear measurable outcome that provides feedback, around which they have to structure their research, could be helpful.

comment by wedrifid · 2012-02-04T09:12:26.024Z · LW(p) · GW(p)

Is Sunk Cost Fallacy a Fallacy?

Yes, it is. Roughly speaking it is when you reason that you should persist in following a choice of actions that doesn't give the best expected payoff because you (mistakenly) treat already spent resources as if they are a future cost of abandoning the path. If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.

It is not clever or deep to title things as though you are overturning a basic principle when you are not. As far as I am concerned a (connotatively) false title - and the implicit conclusion conveyed thereby - significantly undermines any potential benefit the details of the essay may provide. I strongly suggest renaming it.

Replies from: gwern
comment by gwern · 2012-02-04T19:40:57.144Z · LW(p) · GW(p)

If your essay is about "Is the sunk cost fallacy a problem in humans?" then the answer is not so trivial.

And if it isn't, as I conclude (after an introduction discussing the difference between being valid in a simplified artificial model and the real world!), then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious. Sheesh. I feel as if I were discussing someone's credibility and someone said 'but that's an ad hominem!'. Yes. Yes, it is.

(Notice your Wikipedia link is full of hypotheticals and description, and not real world evidence.)

It is not clever or deep to title things as though you are overturning a basic principle when you are not.

People do not discuss sunk cost because it is a theorem in some mathematical model or a theoretical way possible agents might fail to maximize utility; they discuss it because they think it is real and serious. If I conclude that it isn't serious, then in what sense am I not trying to overturn a basic principle?

Finally, your criticism of the title or what overreaching you perceive in it aside, did you have any actual criticism like missing refs or anything?

Replies from: Sniffnoy, wedrifid
comment by Sniffnoy · 2012-02-05T00:53:10.270Z · LW(p) · GW(p)

And if it isn't, as I conclude (after an introduction discussing the difference between being valid in a simplified artificial model and the real world!), then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious.

But none of this changes the fact that the title is still misleading. Even if accusations of sunk cost fallacy are themselves often fallacious, this doesn't change the fact that you are arguing that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid. Claiming that it is not serious may indeed be overturning a basic principle, but it is not the basic principle the title claims you may be overturning. Sensationalize if you like, but there's no need to be unclear.

Replies from: None
comment by [deleted] · 2012-02-05T02:25:31.611Z · LW(p) · GW(p)

Even if accusations of sunk cost fallacy are themselves often fallacious, this doesn't change the fact that you are arguing that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is actually valid.

I don't know how you got that from the essay. To quote, with added emphasis:

We can and must do the same thing in economics. In simple models, sunk cost is clearly a valid fallacy to be avoided. But is the real world compliant enough to make the fallacy sound? Notice the assumptions we had to make: we wish away issues of risk (and risk aversion), long-delayed consequences, changes in options as a result of past investment, and so on.

Replies from: wedrifid
comment by wedrifid · 2012-02-05T06:05:18.457Z · LW(p) · GW(p)

I don't know how you got that from the essay.

I believe Sniffnoy, like myself, gave the author the benefit of the doubt and assumed that he was not actually trying to argue against a fundamental principle of logic and decision theory but rather claiming that the principle applies to humans far less than often assumed. If this interpretation is not valid then it would suggest that the body of the post is outright false (and logically incoherent) rather than merely non-sequitur with respect to the title and implied conclusion.

Replies from: None
comment by [deleted] · 2012-02-05T06:23:58.681Z · LW(p) · GW(p)

Sniffnoy claims that gwern has argued "that the sunk cost fallacy is a mode of reasoning which doesn't often occur, rather than one that is valid."

Actually, what gwern has argued is that while the sunk cost fallacy is often used as an heuristic there is little evidence that it is sound to do so in real world situations. This also seems to be what you've said, but it is not what Sniffnoy has said.

Hence my confusion.

On a side note, I don't really understand your qualms with the title, but that's less important to me.

Replies from: wedrifid
comment by wedrifid · 2012-02-05T06:35:16.094Z · LW(p) · GW(p)

On a side note, I don't really understand your qualms with the title, but that's less important to me.

The qualms are similar in nature to those I would have if I encountered an article:

"Is 7+6 = 16 not an arithmetic error?", followed by an article explaining that it doesn't matter because humans only have 10 fingers, it's not like anyone counts on their toes, and besides, sometimes it's healthier to believe the answer is 16 anyway because you were probably going to make a mistake later in the calculation and you need to cancel it out.

comment by wedrifid · 2012-02-05T06:26:04.639Z · LW(p) · GW(p)

(Notice your Wikipedia link is full of hypotheticals and description, and not real world evidence.)

Precisely. The Wikipedia article set out to explain what the Sunk Cost Fallacy is, and did it. It did not set out to answer any of the dozens of questions which would make sense as titles to your post (such as "Is the sunk cost fallacy a problem in humans?") and so real world 'evidence' wouldn't make much sense. Just like filling up the article on No True Scotsman with evidence about whether True Scotsmen actually do like haggis would be rather missing the point! (The hypothetical is built right into the name for the informal fallacy!)

then it's perfectly legitimate to ask whether accusations of sunk cost fallacy - which are endemic and received wisdom - are themselves fallacious.

And with a slight tweak that is another thing that you could make your post about that wouldn't necessitate dismissing it out of hand. Please consider renaming along these lines.

  • Are most accusations of the Sunk Cost Fallacy fallacious?
  • Fallacious thinking about Sunk Costs
  • Sunk Costs - not a big deal
  • Accusations of Sunk Cost Fallacy Often Fallacious?
  • Fallacious thinking about Sunk Costs - a problem in the real world?

Finally, your criticism of the title or what overreaching you perceive in it aside, did you have any actual criticism like missing refs or anything?

Without implicitly accepting the connotations here by responding - No, your article seems to be quite thorough with making references. In particular all the dot points in the summary seem to be supported by at least one academic source.

comment by orthonormal · 2012-02-06T17:42:43.874Z · LW(p) · GW(p)

Wait a second:

Arkes & Ayton cite 2 studies finding that committing sunk cost bias increases with age - as in, children do not commit it.

Information is worth most to those who have the least: as we previously saw, the young commit sunk cost more than the old

These are in direct contradiction with each other. What gives?

Replies from: gwern
comment by gwern · 2012-02-06T23:15:30.670Z · LW(p) · GW(p)

They are in contradiction, but the latter claim is supported by the large second paragraph in the children section (the section that 'previously saw' was linking to), where I quote the criticism of the 2 studies and then list 5 studies which find either that children do commit it on questions or that avoidance increases over lifetimes, which to me seem to override the 2 studies.

Replies from: orthonormal
comment by orthonormal · 2012-02-07T01:56:31.755Z · LW(p) · GW(p)

Ah. Can I suggest you re-write that section to make it clearer? I admit I wasn't reading closely, but I assumed that a two-line statement before a quote from a paper was going to be the conclusion of the section.

Also, given that the evidence there is far from unidirectional, I'd rather you didn't cite it as the first piece of supporting evidence for the "gaining information" hypothesis. I expect an argument to start with its strongest pieces of evidence first.

P.S. I'm not sure I agree with your argument, but thanks for putting this together!

Replies from: gwern
comment by gwern · 2012-02-07T04:20:10.104Z · LW(p) · GW(p)

I already modified it; hopefully the new version is clearer.

Also, given that the evidence there is far from unidirectional, I'd rather you didn't cite it as the first piece of supporting evidence for the "gaining information" hypothesis. I expect an argument to start with its strongest pieces of evidence first.

I was going in what I thought was logical implication order of the learning hypothesis.

comment by Morendil · 2012-02-04T11:21:42.590Z · LW(p) · GW(p)

when one engages in spring-cleaning, one may wind up throwing or giving away a great many things which one has owned for months or years but had not disposed of before; is this an instance of sunk cost where you over-valued them simply because you had held onto them for X months, or is this an instance of you simply never before devoting a few seconds to pondering whether you genuinely liked that checkered scarf?

If (during spring cleaning) you balk at throwing away something simply because it's sat so long in your basement, and you are tempted to justify holding on to it a little bit more, then that's an instance of SCF.

If you balk at even doing spring cleaning (as I personally know some of my friends do) because the outcome is going to be reconsideration of your ownership of some items that you don't really value, but that you have "invested" in by keeping them in storage - then that is again an instance of SCF.

Spring cleaning itself, when it involves throwing things away, is an instance of cutting your losses. Storing items in anticipation of future use is not SCF (though it may be an instance of fooling yourself). Ergo, that you allow months or years to pass between storing items and spring cleaning is not per se an instance of SCF, even though past use of your storage space does represent a sunk cost.

ETA: on the other hand, balking at throwing something away because of emotional attachment does not necessarily qualify as SCF. For instance, keeping kids' toys that you know your now-grown kids are never going to use again, and that putative grandchildren are unlikely to use, because you would like to retain the option of using these items later to bring back happy memories.

Replies from: Prismattic
comment by Prismattic · 2012-02-04T17:32:47.840Z · LW(p) · GW(p)

Balking at getting rid of things you own may sometimes be more about the endowment effect than the sunk cost fallacy.

comment by Unnamed · 2012-02-08T03:31:44.008Z · LW(p) · GW(p)

A few brief comments:

The study in footnote 6 seems to show the opposite of what you say about it. The study found that diffusion of responsibility reduced the effect of sunk costs while you say "responsibility is diffused, which encourages sunk cost."

In the "subtleties" section, it's unclear what is meant by saying that "trying to still prove themselves right" is "an understandable and rational choice." After someone has made a decision and it is either right or wrong, it does not seem rational to try to prove it right (unless you just mean that it can be instrumentally rational to try to persuade others that you made the right decision).

The study quoted in fn 26 doesn't seem to match your description of it ("sunk costs were supported more when subjects were given justifications about learning to make better decisions"). The studies did not vary whether or not participants were given the learn-a-lesson justification. All participants were given that justification, and the DV was how highly they rated it.

There are a few places where you downplay evidence of sunk cost effects by saying that the effects were small, but it's not clear what standard you're using for whether an effect is large or small. If an NBA player plays an extra 10-20 minutes per game based on sunk cost thinking, that seems to me like an enormous effect (superstars only play about 25 minutes per game more than backups).

comment by malthrin · 2012-02-07T16:06:28.341Z · LW(p) · GW(p)

Good point. My interpretation of what you're saying is that the error is actually failure to re-plan at all, not bad math while re-planning.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-02-08T02:28:24.497Z · LW(p) · GW(p)

I find that a very helpful formulation. I could not tell where Gwern was drawing distinctions.

comment by Unnamed · 2012-02-08T02:41:33.992Z · LW(p) · GW(p)

About the “Learning” section:

I think I understand the basic argument here: sometimes an escalation of commitment can be rational as a way to learn more from a project by continuing it for longer. But it seems like this only applies to some cases of sunk cost thinking and not others. Take Thaler's example: I don't see why a desire to learn would motivate someone to go to a football game in a blizzard (or, more specifically, how you'd learn more if you had paid for your ticket than if you hadn't).

And in some cases it seems like an escalation of commitment can hinder learning. Learning from a failed project often requires admitting that you made a mistake. One of the motivations for continuing a failing project is to avoid admitting you made a mistake (I believe that's called the “self-justification” explanation of the sunk cost fallacy). If you finish the project you can pretend that there was no mistake, but if you stop it prematurely you have to admit your error which can allow you to learn more. For example, if you throw out food from your plate then it's clear that you cooked too much (learning!), but if you eat everything on your plate to avoid wasting it then the error's less clear.

At the end of that section there's a list of findings that “snap into focus” if escalation of commitment is for learning, but with many of them I don't see a clear connection to the learning hypothesis. For instance, “in situations where participants can learn and update, we should expect sunk cost to be attenuated or disappear” seems consistent with many different theories of sunk costs, including the theory that it's a bias which people can learn to avoid as they gain more experience with a type of decision. Is there something specific about the cited studies that points to the hypothesis that escalation of commitment is for learning?

Replies from: gwern
comment by gwern · 2012-02-09T01:26:11.139Z · LW(p) · GW(p)

Take Thaler's example: I don't see why a desire to learn would motivate someone to go to a football game in a blizzard

You'd learn more what it's like to go in a blizzard - maybe it's not so bad. (Personally, I've gone to football games in non-blizzards and learned that it is bad.) If you knew in this specific instance, drawn from all the incidents in your life, that you wouldn't learn anything, then you've already learned what you can and sunk cost oughtn't enter into it. It's hard to conclude very much from answers to hypothetical questions.

seems consistent with many different theories of sunk costs, including the theory that it's a bias which people can learn to avoid as they gain more experience with a type of decision.

Any result is consistent with an indefinite number of theories, as we all know. The results fit very neatly with a learning theory, and much more uncomfortably with things like self-justification.

comment by JoachimSchipper · 2012-02-04T10:06:27.388Z · LW(p) · GW(p)

I had serious trouble understanding the paragraph "COUNTERING HYPERBOLIC DISCOUNTING?" beyond "sunk costs probably counter other biases".

Also, I'd like to point out that, if sunk costs are indeed a significant problem in large organizations, they are indeed a significant problem; large organizations are (unfortunately?) rather important to modern life.

Replies from: gwern
comment by gwern · 2012-02-04T19:47:02.858Z · LW(p) · GW(p)

What's not clear about it? That's the idea.

they are indeed a significant problem

Only if there are better equilibriums which can be moved to by attacking sunk cost - otherwise they are simply the price of doing business.

(I only found two studies bearing on it, neither of which were optimistic: the study finding sunk costs encouraged coordination and the bank study finding attacking sunk cost resulted in deception and falsification of internal metrics.)

comment by Psychohistorian · 2012-02-04T16:36:29.521Z · LW(p) · GW(p)

Content aside, you should generally avoid the first person as well as qualifiers, and you should definitely avoid combining both, e.g. "I think it is interesting." Where some qualifiers are appropriate, you often phrase them too informally, e.g. "perhaps it is more like," would read much better as, "It is possible that," or, "a possible explanation is." Some first person pronouns are acceptable, but they should really only be used when the only alternative is an awkward or passive sentence.

The beginning paragraph of each subsection should give the reader a clear idea of the ultimate point of that subsection, and you would do well to include a roadmap of everything you plan to cover at the beginning.

I don't know if this is the feedback you're searching for or if the writing style is purposeful, just my two cents.

Replies from: grouchymusicologist, gwern
comment by grouchymusicologist · 2012-02-04T20:09:02.208Z · LW(p) · GW(p)

I think how important these criticisms are depends on who the intended audience of the essay is -- which Gwern doesn't really make clear. If it's basically for SIAI's internal research use (as you might think, since they paid for it), tone probably hardly matters at all. The same is largely the case if the intended audience is LW users -- our preference for accessibly, informally written scholarly essays is revealed by our being LW readers. If it's meant as a more outward-facing thing, and meant to impress academics who aren't familiar with SIAI or LW and who judge writings based on their adherence to their own disciplinary norms, then sure. (Incidentally, I do think this would be a worthwhile thing to do, so I'm not disagreeing.) Perhaps Gwern or Luke would care to say who the intended beneficiaries of this article are.

For myself, I prefer scholarly writing that's as full of first-person statements as the writer cares to make it. I feel like this tends to provide the clearest picture of the writer's actual thought process, and makes it easier to spot where any errors in thinking actually occurred. I rarely think the accuracy of an article would be improved if the writer went back after writing it and edited out all the first-person statements to make them sound more neutral or universal.

comment by gwern · 2012-02-04T20:13:13.863Z · LW(p) · GW(p)

Well, style wasn't really what I had in mind since it's already so non-academic in style, but your points are well taken. I've fixed some of that.

comment by [deleted] · 2016-02-18T12:46:13.702Z · LW(p) · GW(p)

I prefer the way the Beeminder cofounder explains this on this page of his blog.

comment by PhilGoetz · 2012-02-27T23:07:08.845Z · LW(p) · GW(p)

I'm impressed with the thoroughness that went into this review, and with its objectivity and lack of premature commitment to an answer.

comment by NCoppedge · 2012-02-13T19:00:18.793Z · LW(p) · GW(p)

I would like to argue that it is less important to determine IF it is a fallacy, than what kind it is.

One view is that this is a "deliberation" fallacy, along the lines of a failed thought experiment; e.g. 'something went wrong because conditions weren't met.' Another view is that this fallacy, which relates, if I am correct, to "resource shortages" or "debt crises", is in fact a more serious 'systems error' such as a method fallacy involving recursivity or logic gates.

To some extent at this point I am prone to take the view that the extent of the problem is proportionistic, leading to a kind of quantitative rather than qualitative perspective, which makes me think in my own reasoning that it is not true logic, and therefore not a true logical problem.

For example, it can be argued modal-realistically that in some contingent or arbitrarily divergent context or world, debt might be a functional or conducive phenomenon that is incorporated in a functional framework.

I would be interested to know if this kind of reasoning is or is not actually helpful in determining about a debt crisis. Perhaps as might be expected, the solution lies in some kind of "technologism," and not a traditional philosophical debate per se.

comment by Eugine_Nier · 2012-02-06T02:24:42.925Z · LW(p) · GW(p)

Related: sunk cost fallacy fallacy

Replies from: gwern, Grognor
comment by gwern · 2012-02-06T02:27:11.319Z · LW(p) · GW(p)

Linked in a footnote, BTW.

comment by Grognor · 2012-02-11T19:35:06.824Z · LW(p) · GW(p)

Also related: Sunk Cost Fallacy by Zachary M. Davis

comment by adamisom · 2012-02-04T20:21:06.511Z · LW(p) · GW(p)

Well, I always thought it was obvious that "sunk cost" has one advantage going for it.

Placing a single incident of a "sunk cost" in a larger context, "sunk costs" can serve as a deterrent against abandoning projects. I wonder if the virtue of persistence isn't maligned. After all, as limited rationality machines, 1) we hardly ever can look at the full space of possible alternatives, and 2) probably underestimate the virtue of persistence. Pretty much every success story I've ever read is of someone who persisted beyond what you might call "the frustration barrier".

As I think about the error in forecasting expected payoff, it seems to me that unless we have a lot of experience with pushing projects through to the end, we're likely to underestimate the value of persistence, due to compounding effects and comparative advantage (if few people gain some skill).

Replies from: gwern
comment by gwern · 2012-02-04T20:30:10.942Z · LW(p) · GW(p)

Placing a single incident of a "sunk cost" in a larger context, "sunk costs" can serve as a deterrent against abandoning projects.

Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing. ('Why do people need sunk cost as a deterrent? Well, it's because they abandon projects too easily.' 'But why do they abandon projects too easily?' 'Heck I dunno. Same way opium produces sleep maybe, by virtue of a dormitive fallacy.')

This line of thought is why I was looking into hyperbolic discounting, which seems like a perfect candidate for causing that sort of easy-abandonment behavior.

Pretty much every success story I've ever read is of someone who persisted beyond what you might call "the frustration barrier".

Which doesn't necessarily prove anything; we could just be seeing the winner's curse writ large. To win any auction is easy, you just need to be willing to bid more than anyone else... Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.' Well, OK, but don't tell me that's something I should aspire to as a model of rationality!

Replies from: adamisom
comment by adamisom · 2012-02-05T00:58:08.056Z · LW(p) · GW(p)

"Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing."

Because we aren't psychic and can only guess expected payoffs. Why would I hypothesize that we underestimate expected payoffs for persistence rather than the reverse? Two reasons--or assumptions, I suppose. 1. Most skills compound--the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown. 2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).
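
A minimal numeric sketch of how compounding outruns linear intuition (the 20% annual rate is just an assumed figure for illustration):

\[ (1 + 0.20)^{10} \approx 6.2, \qquad t_{\text{double}} = \frac{\ln 2}{\ln 1.2} \approx 3.8 \text{ years} \]

so something growing 20% a year is more than six times larger after a decade, where linear extrapolation (10 years × 20%) suggests only about three times.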

"Persistence beyond 'the frustration barrier' may lead to outcomes like 'I am the Japanese Pog-collecting champion of the world.'" Yes, but the activity one persists in/with is a completely separate issue, so I feel you can just assume 'for activities that reasonably seem likely to yield large benefit'.

On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.

Oh, sure, if you're extra careful, you would take that into account in your utility function. You can always define your utility function to include everything relevant, but in real life estimations of utility, some things just don't occur to us.

I mean, consider morality. It's so easy to say that moral rules have plenty of exceptions and so arrive at a decision that breaks one or more of these rules (and not for simple reason of internal inconsistency). But this may be bad overall for society. You might arrive at a local maximum of overall good, but a global maximum would require strict adherence to moral rules. I believe this is the common "objection" to utilitarianism and why hardly anyone (other than a LWer) professes to be utilitarian. Because how we actually think of utility functions doesn't include the nuances that a complete function would.

Replies from: gwern
comment by gwern · 2012-02-05T01:37:04.948Z · LW(p) · GW(p)
  1. Most skills compound--the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown. 2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).

The first is not true at all; graphs of expertise follow what looks like logarithmic curves, because it's a lot easier to master the basics than to become an expert. (Question: did Kasparov's chess skill increase faster from novice to master status, or from grandmaster to world champion?) #2 may be true, but everyone can see that effect so I don't see how that could possibly cause systematic underestimation and compensating sunk cost bias.

On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.

Mentioned in essay.

I believe this is the common "objection" to utilitarianism and why hardly anyone (other than a LWer) professes to be utilitarian. Because how we actually think of utility functions doesn't include the nuances that a complete function would.

One objection, and why variants like rule utilitarianism exist and act utilitarians emphasize prudence since we are bounded rational agents and not logical omniscient utility maximizers.

Replies from: adamisom
comment by adamisom · 2012-02-05T17:30:11.617Z · LW(p) · GW(p)

Thanks

comment by Dmytry · 2012-02-13T14:37:12.812Z · LW(p) · GW(p)

I came up with an example of how the sunk cost fallacy could help increase the income of 2 competing agents.

Consider two corporations that have each sunk a considerable sum of money into two interchangeable competing IP-heavy products. Digital cameras, for example. They need to recover that cost, which they would be unable to do if they start price-cutting each other while ignoring the sunk costs. If they both act so as not to price-cut beyond the point where the sunk costs are not recovered, they settle at a price that permits them to recover the software development costs. If they ignore sunk costs they can price-cut to the point where they don't recover development expenses. Effectively the fallacy results in price-fixing behaviour.

Note: on second thought, digital cameras, being a luxury item, may be a poor choice for that example. Corporate goods, such as network hardware, may be a better choice. Luxury goods keep selling OK even if someone is price-cutting you, as luxuries derive some of their value from the price itself.
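
A minimal Bertrand-style sketch of the mechanism, under textbook assumptions and with hypothetical symbols (c marginal cost per unit, S the sunk development cost, Q expected unit sales): standard competition drives the price toward p = c, so neither firm ever recovers S. If both firms instead refuse to sell below average cost including the sunk portion,

\[ p \geq c + \frac{S}{Q} \]

they end up tacitly coordinating on a higher price - the price-fixing behaviour described above.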

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-02-13T14:48:03.044Z · LW(p) · GW(p)

There are better ways of making credible commitments than having a tendency to commit sunk cost fallacy.

Replies from: gwern, PhilGoetz, Dmytry
comment by gwern · 2012-02-13T18:42:08.702Z · LW(p) · GW(p)

For ideal agents, absolutely. For things like humans... Have you looked at the models in "Do Sunk Costs Matter?", McAfee et al 2007?

EDIT: I've incorporated all the relevant bits of McAfee now, and there are one or two other papers looking at sunk cost-like models where the behavior is useful or leads to better equilibria.

comment by PhilGoetz · 2012-02-27T23:05:01.476Z · LW(p) · GW(p)

While that may be true, I don't see how it has any consequences.

comment by Dmytry · 2012-02-13T14:54:06.442Z · LW(p) · GW(p)

Of course. But what works, works; you'd cripple an agent by dispelling its fallacies without providing alternatives.

comment by lessdazed · 2012-02-11T00:34:06.138Z · LW(p) · GW(p)

Is the sunk cost fallacy a fallacy?

I ask myself about many statements: would this have the same meaning if the word "really" were inserted? As far as my imagination can project, any sentence that can have "really" inserted into it without changing the sentence's meaning is at least somewhat a wrong question, one based on an unnatural category or an argument by definition.

If a tree falls in the forest, does it make a sound? --> If a tree falls in the forest, does it really make a sound?

Is Terry Schiavo alive? --> Is Terry Schiavo really alive?

Is the sunk cost fallacy a fallacy? --> Is the sunk cost fallacy really a fallacy?

Replies from: army1987, DSimon, thomblake
comment by A1987dM (army1987) · 2012-02-15T20:37:40.983Z · LW(p) · GW(p)

Did you really mean “that can have” rather than “that can't have”?

comment by DSimon · 2012-02-15T19:40:04.167Z · LW(p) · GW(p)

As far as I can tell you can do that with any sentence.

Replies from: gwern, Jiro
comment by gwern · 2012-02-15T20:01:53.612Z · LW(p) · GW(p)

Can you really do that with any sentence?

comment by Jiro · 2013-11-11T21:23:33.474Z · LW(p) · GW(p)

"Really" in this context means that an answer has already been provided by someone but you object to the rationale given for this provided answer, particularly because it's too shallow. In other words, it's not a description of the problem the question asks you to solve, it's a description of the context in which the problem is to be solved. So the fact that it can be done with any sentence doesn't mean that it provides no information, just like "Like I was discussing with Joe last week, is the sunk cost fallacy a fallacy?" doesn't provide no information.

comment by thomblake · 2012-02-15T20:13:22.474Z · LW(p) · GW(p)

Do you really ask yourself that about many statements?

Would this really have the same meaning if the word "really" were inserted?

Is any sentence that can have "really" inserted into it without changing the sentence's meaning really at least somewhat a wrong question, one based on an unnatural category or an argument by definition?