Visions and Mirages: The Sunk Cost Dilemma

post by OrphanWilde · 2015-05-20T20:56:51.291Z · LW · GW · Legacy · 68 comments

Contents

  Summary
  Introduction
  Solutions
  Summary
  Related Less Wrong Post Links
  ETA: Post Mortem

Summary

How should a rational agent handle the Sunk Cost Dilemma?

Introduction

You have a goal, and set out to achieve it.  Step by step, iteration by iteration, you make steady progress towards completion - but never actually get any closer.  You're deliberately not engaging in the sunk cost fallacy - at no point does the perceived cost of completion get higher.  But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

You're rational.  You know you shouldn't count sunk costs in the total cost of the project.  But you're now twice as deep in as you originally planned to invest: you've done everything you originally thought you'd need to do, yet you have just as much work ahead of you as when you started.

Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19.  And after step 35, when you discovered yet another step, you updated your priors to account for how unreliable your original estimate had proven - and the project is still worth completing.  Over and over.  All you can conclude is that your original priors were unreliable.  No update to your priors, however, changes the fact that the remaining cost always looks worth paying to complete the project.
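
A minimal toy sketch of that dynamic, purely as an illustration: the function name, the project value of 12, and the flat "10 units still to go" re-estimate are all assumptions invented for the example, not figures from any real project.  It just shows how a decision rule that correctly ignores sunk costs keeps saying "continue" while total spend climbs without bound.

```python
# Toy model of the dilemma described above; all numbers are illustrative.
# At each checkpoint the agent re-estimates the remaining cost.  The estimate
# always comes in just under the project's value, so the sunk-cost-free rule
# ("continue iff remaining cost < value") always says continue, even though
# total spend grows without bound.

PROJECT_VALUE = 12.0

def remaining_cost_estimate(checkpoint):
    """Hypothetical re-estimate: newly discovered steps keep the apparent
    remaining cost pinned just below the project's value."""
    return 10.0

spent = 0.0
for checkpoint in range(1, 9):
    remaining = remaining_cost_estimate(checkpoint)
    if remaining < PROJECT_VALUE:   # ignore sunk costs, as you "should"
        spent += 5.0                # do the work you thought was half the job...
        print(f"checkpoint {checkpoint}: spent {spent:.0f} so far, "
              f"'only' {remaining:.0f} to go vs. value {PROJECT_VALUE:.0f} -> continue")
    else:
        print(f"checkpoint {checkpoint}: stop")
        break
```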

You are starting to feel like you are caught in a penny auction for your time.

When do you give up your original goal as a mirage?  At what point do you give up entirely?

Solutions

The trivial option is to just keep going.  Sometimes this is the only viable strategy, if your goal is mandatory and there are no alternative solutions to consider.  There's no guarantee you'll finish in any finite amount of time, however.

One option is to precommit: set a specific level of effort you're willing to invest before stopping, possibly starting over from scratch if that's relevant.  When bugfixing someone else's code on a deadline, my personal policy is to set aside enough time at the end of the deadline to write the code from scratch and debug that instead (the code I write is not nearly as buggy as the code I'm usually working on).  Commitment of this sort can work in situations where there are alternative solutions or when the goal is disposable.

Another option is to include sunk costs in the decision, but at a discount; updating your priors is one way of doing this, but it isn't guaranteed to navigate you through the dilemma.
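
Purely as a hedged illustration - the function name and the weight w = 0.25 below are arbitrary assumptions, not recommendations - here is one way "include sunk costs, but at a discount" could be written as a stopping rule.  The fraction of sunk costs you count puts a finite ceiling on how much you will ever pour into an escalating project.

```python
def worth_continuing(remaining_cost, sunk_cost, value, w=0.25):
    """Continue only if the value exceeds the remaining cost plus a
    fraction w of sunk costs.  w = 0 is the textbook rule (ignore sunk
    costs entirely); w = 1 is the full sunk cost fallacy; anything in
    between caps how much total cost you will ever sink into the project."""
    return value > remaining_cost + w * sunk_cost

# With w = 0.25, a project worth 12 with an apparent 10 units remaining
# stops looking "worth continuing" once sunk costs exceed (12 - 10) / 0.25 = 8.
print(worth_continuing(remaining_cost=10, sunk_cost=5, value=12))   # True
print(worth_continuing(remaining_cost=10, sunk_cost=20, value=12))  # False
```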

Unfortunately, there isn't a general solution.  If there were, IT would be a very different industry.

Summary

The Sunk Cost Fallacy is best described as a frequently-faulty heuristic.  There are game-theoretic ways of extracting value from those who follow a strict policy of avoiding engaging in the Sunk Cost Fallacy, and they happen all the time in IT - frequent requirement changes to fixed-cost projects are a good example (the extraction can go both ways, actually, depending on how the contract and requirements are structured).  It is best to always have an exit policy prepared.

Related Less Wrong Post Links

http://lesswrong.com/lw/at/sunk_cost_fallacy/ - A description of the Sunk Cost Fallacy

http://lesswrong.com/lw/9si/is_sunk_cost_fallacy_a_fallacy/ - Arguments that the Sunk Cost Fallacy may be misrepresented

http://lesswrong.com/lw/9jy/sunk_costs_fallacy_fallacy/ - The Sunk Cost Fallacy can be easily used to rationalize giving up

ETA: Post Mortem

Since somebody has figured out the game now, an explanation: Everybody who spent time writing a comment insisting you -could- get the calculations correct, and the imaginary calculations were simply incorrect?  I mugged you.  The problem is in doing the calculations -instead of- trying to figure out what was actually going on.  You forgot there was another agent in the system with different objectives from your own.  Here, I mugged you for a few seconds or maybe minutes of your time; in real life, that would be hours, weeks, months, or your money, as you keep assuming that it's your own mistake.

Maybe it's a buggy open-source library with a bug-free proprietary version you pay for - the idea being to get you in the door, then charge you money once it's more expensive to back out than to continue.  Maybe it's somebody who silently and continually moves work to your side of the fence on a collaborative project, once it's more expensive to back out than to continue.  Not counting all your costs opens you up to exploitative behaviors which add costs at the back end.

In this case I was able to mug you in part because you didn't like the hypothetical, and fought it.  Fighting the hypothetical will always reveal something about yourself - in this case, fighting the hypothetical revealed that you were exploitable.

In real life I'd be able to mug you because you'd assume someone had fallen prone to the Planning Fallacy, as you assumed must have happened in the hypothetical.  In the case of the hypothetical, an evil god - me - was deliberately manipulating events so that the project would never be completed (Notice what role the -author- of that hypothetical played in that hypothetical, and what role -you- played?).  In real life, you don't need evil gods - just other people who see you as an exploitable resource, and will keep mugging you until you catch on to what they're doing.

68 comments

Comments sorted by top scores.

comment by gjm · 2015-05-21T09:40:30.398Z · LW(p) · GW(p)

I think the objections raised by (e.g.) Unknowns, Lumifer and shminux are basically correct but they aren't (I think) phrased so that they exactly match the scenario OrphanWilde is proposing. Let me try to reduce the impedance mismatch a little.

OrphanWilde's scenario -- where your schedule keeps slipping but even with perfectly rational updating continuing always looks like a win -- is possible. But: it's really weird and I don't think it actually occurs in real life; that is, in reality, the scenarios that most resemble OrphanWilde's are ones in which the updating isn't perfectly rational and you would do well to cut your losses and reflect on your cognitive errors.

What would a real OrphanWilde scenario look like? Something like this.

  • You begin a project to (let's say) build a bridge. You think it should be done in six months.
  • After four months of work, it's clear that you underestimated and it's now going to take longer. Your new estimate is another six months.
  • After another four months, it's now looking like it will take only three months more -- so you're still going to be late, but not very. You no longer trust your prediction abilities, though (you were wrong the last two times), so you adjust your estimate: another six months.
  • After another four months, you've slipped further. Your error bars are getting large now, but you get a message from God telling you it'll definitely be done in another six months.
  • After another four months, you've lost your faith and now there's probably nothing that could (rationally) convince you to be confident of completion in 6 months. But now you get information indicating that completing the bridge is more valuable than you'd thought. So even though it's likely to be 9 months now, it's still worth it because extra traffic from the new stadium being built on the other side makes the bridge more important.
  • After another six months, you're wearily conceding that you've got very little idea how long the bridge is going to take to complete. Maybe a year? But now they're planning a whole new town on the other side of the bridge and you really need it.
  • After another nine months, it seems like it might be better just to tell the townspeople to swim if they want to get across. But now you're receiving credible terrorist threats saying that if you cancel the bridge project the Bad Guys are going to blow up half the city. Better carry on, I guess...

What we need here is constant escalation of evidence for timely completion (despite the contrary evidence of the slippage so far) and/or of expected value of completing the project even if it's really late -- perhaps, after enough slippage, this needs to be escalating evidence of the value of pursuing the project even if it's never finished. One can keep that up for a while, but you can see how the escalation had to get more and more extreme.

OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation? If so, how? If not, isn't this going to be rare enough that we can safely ignore it in favour of the much commoner scenarios where the project keeps looking worthwhile because we're not looking at it rationally?

Replies from: None, OrphanWilde
comment by [deleted] · 2015-05-21T10:22:42.219Z · LW(p) · GW(p)

The 'even if never finished' part resembles childrearing:)

Replies from: gjm
comment by gjm · 2015-05-21T12:18:55.566Z · LW(p) · GW(p)

A nice example of a task whose value (1) is partly attached to the work rather than its goal and (2) doesn't depend on completing anything.

comment by OrphanWilde · 2015-05-21T14:17:40.196Z · LW(p) · GW(p)

OrphanWilde, do you envisage any scenario in which a project keeps (rationally) looking worthwhile despite lots of repeated slippages without this sort of drastic escalation?

Yes. Three cases:

First, the trivial case: You have no choice about whether or not to continue, and there are no alternatives.

Second, the slightly less trivial case: Every slippage is entirely unrelated. The project wasn't poorly scheduled, and was given adequate room for above-average slippage, but the number of things that have gone wrong is -far- above average. (We should expect a minority of projects to fit this description, but over a given IT career, everybody should encounter at least one such project.)

Third, the mugging case: The slippages are being introduced by another party that is calibrating what they're asking for to ensure you agree.

The mugging case is actually the most interesting to me, because the company I've worked for has been mugged in this fashion, and has developed anti-mugging policies. Ours are just to refuse projects liable to this kind of mugging - e/g, refuse payment-on-delivery fixed-cost projects. There are also reputation solutions, such as for dollar auctions - develop a reputation for -not- ignoring sunk costs, and you become less desirable a target for such mugging attempts.

[Edited to replace "i/e" with "e/g".]

Replies from: gjm
comment by gjm · 2015-05-21T15:53:50.453Z · LW(p) · GW(p)

Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn't really matter whether or not you estimate that it's worth continuing.

Slightly less trivial case: If you observe a lot more apparently-unrelated slippages than you expected, then they aren't truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren't very good at it for this project). That would lead you to increase your subsequent time estimates.

Mugging: as with the "slightly less trivial" case but more so, I don't think this is actually an example, because once you start to suspect you're getting mugged your time estimates should increase dramatically.

(There may be constraints that forbid you to consider the possibility that you're getting mugged, or at least to behave as if you are considering it. In that case, you are being forced to choose irrationally, and I don't think this situation is well modelled by treating it as one where you are choosing rationally and your estimates really aren't increasing.)

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T16:00:15.945Z · LW(p) · GW(p)

Trivial case: obviously irrelevant, surely? If you have no choice then you have no choice, and it doesn't really matter whether or not you estimate that it's worth continuing.

Not irrelevant from a prediction perspective.

If you observe a lot more apparently-unrelated slippages than you expected, then they aren't truly unrelated, in the following sense: you should start thinking it more likely that you did a poor job of predicting slippages (and perhaps that you just aren't very good at it for this project). That would lead you to increase your subsequent time estimates.

If this happens consistently across projects, yes. If only 5% of your projects fall outside a 95% confidence interval, then your estimates were good.

as with the "slightly less trivial" case but more so, I don't think this is actually an example, because once you start to suspect you're getting mugged your time estimates should increase dramatically.

That assumes you realize you are being mugged. As one example, we had a client (since fired) who added increasingly complex-to-calculate database fields as a project went on, with each set of new sample files (they were developing a system concurrently with ours to process our output, and were basically dumping the stuff they didn't want to do on us). We caught on that we were getting mugged when they deleted and renamed some columns; until then, we operated on the assumption of good faith, but the project just never went anywhere.

comment by Lumifer · 2015-05-20T23:50:31.769Z · LW(p) · GW(p)

But at each step, you discover another step you didn't originally anticipate

That is the core of your problem. Since it's happening repeatedly, you should stop assuming that you know the distance to completion and assign a probability distribution to the number of steps (or the time) needed to get the project done, likely with a long right tail.

If you constantly encounter the unexpected, you should acknowledge that your expectations are faulty and start to expect the unexpected.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T00:05:06.430Z · LW(p) · GW(p)

That is the core of your problem. Since it's happening repeatedly, you should stop assuming that you know the distance to completion and assign a probability distribution to the number of steps (or the time) needed to get the project done, likely with a long right tail.

You've already done this when you've updated your priors. If you wish, assume you calculate the expected cost given the probability distribution, and it's still less than the expected value.

If you constantly encounter the unexpected, you should acknowledge that your expectations are faulty and start to expect the unexpected.

That doesn't actually help you decide what to do, however.

Replies from: Lumifer
comment by Lumifer · 2015-05-21T01:14:05.824Z · LW(p) · GW(p)

I assume you're familiar with Hofstadter's Law, as it seems to describe your situation.

If you updated your expectations and they turned out to be wrong again then your update was incorrect. If you have a pattern of incorrect updates, you should go meta and figure out why this pattern exists.

All in all, if you still believe the cost/benefit ratio is favorable, you should continue. Or is the problem that you don't believe your estimates any more?

Replies from: Kyre, OrphanWilde
comment by Kyre · 2015-05-21T05:55:04.331Z · LW(p) · GW(p)

Very rough toy example.

Say I've started a project which I can definitely see 5 days worth of work. I estimate there'll be some unexpected work in there somewhere, maybe another day, so I estimate 6 days.

I complete day one but have found another day's work. When should I estimate completion now? Taking the outside view, finishing in 6 days (on day 7) is too optimistic.

Implicit in my original estimate was a "rate of finding new work" of about 0.2 days per day. But, now I have more data on that, so I should update the 0.2 figure. Let's see, 0.2 is my prior, I should build a model for "rate of finding new work" and figure out what the correct Bayesian update is ... screw it, let's assume I won't find any more work today and estimate the rate by Laplace's rule of succession. My updated rate of finding new work is 0.5. Hmmm that's pretty high, the new work I find is itself going to generate new work, better sum the geometric series ... 5 known days work plus 5 more unknown, so I should finish in 10 days (ie day 11).

I complete day 2 and find another day's work! Crank the handle around, should finish in 15 days (ie day 17).

... etc ...

If this state of affairs continues, my expected total amount of work grows really fast, and it won't be very long before it becomes clear that it is not profitable.

Contrast this with: I can see 5 days of work, but experience tells me that the total work is about 15 days. The first couple of days I turn up additional work, but I don't start to get worried until around day 3.
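
(A rough sketch of the arithmetic above, under one reading of the bookkeeping - the first day treated as two trials with one "found new work" success - with helper functions named purely for illustration; it shows the shape of the update, not a canonical method.)

```python
def laplace_rate(new_work_days, trials):
    """Laplace's rule of succession: (successes + 1) / (trials + 2)."""
    return (new_work_days + 1) / (trials + 2)

def expected_remaining(known_days, rate):
    """If each day of work spawns `rate` further days on average, the known
    work plus everything it spawns sums to a geometric series."""
    assert rate < 1.0, "at rate >= 1 the expected total work diverges"
    return known_days / (1.0 - rate)

# Day 1: one day worked, one new day of work found, plus the optimistic
# assumption that nothing more turns up today (one success in two trials).
rate = laplace_rate(1, 2)              # 0.5
print(expected_remaining(5, rate))     # 10.0 days left, i.e. done on day 11
```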

comment by OrphanWilde · 2015-05-21T14:27:32.144Z · LW(p) · GW(p)

All in all, if you still believe the cost/benefit ratio is favorable, you should continue.

Assume it is. You continue, and your situation continues to get worse. Now what?

Replies from: Lumifer
comment by Lumifer · 2015-05-21T15:28:37.898Z · LW(p) · GW(p)

Why do I have a feeling you're playing a "Yes, but..." game with a predetermined conclusion that you want us to reach?

And, by the way, if your workload is "getting adjusted" you're not dealing with updating probabilities about uncaring Nature, but you're in a game-theoretic situation which requires an entirely different line of analysis.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T15:40:29.797Z · LW(p) · GW(p)

Why do I have a feeling you're playing a "Yes, but..." game with a predetermined conclusion that you want us to reach?

Because I'm playing a "Yes, but..." game with you.

And, by the way, if your workload is "getting adjusted" you're not dealing with updating probabilities about uncaring Nature, but you're in a game-theoretic situation which requires an entirely different line of analysis.

From the summary: "There are game-theoretic ways of extracting value from those who follow a strict policy of avoiding engaging in the Sunk Cost Fallacy, and they happen all the time in IT".

That's exactly what this post is about - the introduction was intended to illustrate what that situation -feels like-. Seeing the Planning Fallacy in that situation makes you -more- vulnerable to this kind of mugging; you keep doing what you were doing, and keep getting mugged, and each time assume you're the one at fault. I have seen people try to gaslight coworkers (no exaggeration - a phrase that gets bandied around in my company now comes from one of those attempts: "The requirements haven't changed, your understanding of the requirements changed", after a database we were depositing data in had columns removed, added, and renamed, for the umpteenth time) to try to get them to keep coming for another round of mugging.

Would it clarify things if I changed the second part of the title to "Sunk Cost Mugging"?

Replies from: Lumifer
comment by Lumifer · 2015-05-21T16:14:05.710Z · LW(p) · GW(p)

You're strongly provoking category confusion in this subthread.

In game-theoretic scenarios where the other party can change your payoffs (or the rules of the game), notions like the Sunk Cost Fallacy are not operational; it's the wrong approach to introduce them into the analysis. Of course it can be gamed; that's a pretty obvious observation. It's like trying to run regressions in Milton Friedman's thermostat situation.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T17:06:35.092Z · LW(p) · GW(p)

You're strongly provoking category confusion in this subthread.

There are about a dozen ways of interpreting this statement. I'll assume I'm causing you category confusion? The post is designed to confuse.

In game-theoretic scenarios where the other party can change your payoffs (or the rules of the game), notions like the Sunk Cost Fallacy are not operational; it's the wrong approach to introduce them into the analysis. Of course it can be gamed; that's a pretty obvious observation. It's like trying to run regressions in Milton Friedman's thermostat situation.

Then the Sunk Cost Fallacy is never operational in the real world, because there are always parties which can change the payoffs.

Replies from: Lumifer
comment by Lumifer · 2015-05-21T17:37:31.199Z · LW(p) · GW(p)

Yes, but :-P

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T18:04:42.452Z · LW(p) · GW(p)

It's entertaining "mugging" the people who keep insisting the issue is with the calculations, rather than the game they're inadvertently playing. Okay, that's another round of mugging for you.

comment by gjm · 2015-05-21T20:16:36.447Z · LW(p) · GW(p)

I don't understand the point of this.

I mean, I get that OrphanWilde is feeling very smug at having been able to "mug" some other people in the discussion here, and that this mugging is meant to be analogous both to the situation (deliberately incoherently) described in the article and to things that happen in real life.

But ... so what? Are we meant to be startled by the revelation that sometimes people exploit other people? Hardly.

And what seems to be one of the points you say you're trying to make -- that when this happens we are liable to assume it's our own fault rather than the other person's malice -- seems to me to be very ill supported by anything that's happened here. (1) I don't see other people assuming that the confusion here is their own fault, I see them trying to be tactful about the fact that it's yours. (2) I would expect posters here to be more willing to give benefit of the doubt than, e.g., in a business situation where they and the other party are literally competing for money. (3) You say "Here, I mugged you for a few seconds or maybe minutes [...] in real life, that would be hours, weeks, months" -- but I see no reason to expect people to be orders of magnitude slower in "real life" than here.

Further, you didn't in fact exploit anyone because (unless you're really malicious and actually enjoy seeing people waste time to no purpose, in which case fuck you) you didn't gain anything. You (at most) just wasted some people's time. Congratulations, but it's not like that's terribly hard to do. And, perhaps, you just made a bunch of people that little bit less inclined to be helpful and understanding to some confused-seeming guy on Less Wrong in the future.

I'm downvoting your post here and your replies in the comments, and would encourage other readers to do likewise. Making Less Wrong incrementally less useful in order to be able to preen about how you exploited people is not behaviour I wish to encourage here, and I see no actual insight (either overtly expressed or implicit) that counterbalances your act of defection.

[EDITED to add: OH HA HA I JUST MUGGED YOU AREN'T I CLEVER]

Replies from: Lumifer, OrphanWilde
comment by Lumifer · 2015-05-21T20:25:42.765Z · LW(p) · GW(p)

at having been able to "mug" some other people in the discussion here

The usual verb is "to troll".

Replies from: gjm
comment by gjm · 2015-05-21T21:11:17.769Z · LW(p) · GW(p)

I know, but OrphanWilde chose "mug" and I played along.

Replies from: Lumifer
comment by Lumifer · 2015-05-22T01:57:43.229Z · LW(p) · GW(p)

...and I played along.

Clearly, the lesson didn't take :-P

comment by OrphanWilde · 2015-05-21T20:43:24.712Z · LW(p) · GW(p)

I don't care about my karma points. If I did I wouldn't create these kinds of posts, which aggravate people. All you've done is vent some of your evident anger. If I cared about my karma points, I wouldn't create more comments, such as this one, for you to downvote. Feel free, just try not to get yourself banned for abusing it.

Incidentally, the purpose of this post was to teach, since you state that you don't understand.

ETA: The phrasing of that last sentence comes off as more "smug" than I intended. Read it for its literal value, if you would.

Replies from: gjm, gjm
comment by gjm · 2015-05-21T21:13:04.147Z · LW(p) · GW(p)

your evident anger

No, actually, not angry. I just think you did something of net negative value for crappy reasons.

I don't think I'm in any danger of getting banned for downvoting things that are blatantly (and self-admittedly) of negative value.

the purpose of this post was to teach

And how effectively do you think you can teach, having just boasted of how you wasted your readers' time being deliberately stupid at them? What incentive does anyone have to pay attention this time around?

(You might say: Aha, you learned my lesson. But, as it happens, I already knew.)

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T21:48:41.066Z · LW(p) · GW(p)

And how effectively do you think you can teach, having just boasted of how you wasted your readers' time being deliberately stupid at them?

Depends on whether they're looking to learn something, or looking for reasons not to learn something.

You might say: Aha, you learned my lesson. But, as it happens, I already knew.

Actually, you are entirely correct: You already knew. I did not, in fact, "mug" you. The mugging was not in the wasting of the readers' time, that was merely what was lost; it was a conceptual mugging. Every reader who kept insisting on fighting the hypothetical was mugged with each insistence. In real life, they would have kept sticking to the same "I must have planned this wrong" line of reasoning - your first response was that this was the wrong line of reasoning. Which is why I brought up mugging, and focused on that instead; it was a better line of conversation with you.

But my hypothetical situation was no worse than most hypothetical situations, I was simply more honest about it. Hypothetical situations are most usually created to manufacture no-win situations for specific kinds of thought processes. This was no different.

Replies from: gjm
comment by gjm · 2015-05-21T22:11:11.329Z · LW(p) · GW(p)

looking to learn something, or looking for reasons not to learn something.

Well, all I can say other than appealing to intuitions that might not be shared is this: I was looking to learn something when I read this stuff; I was disappointed that it seemed to consist mostly of bad thinking; after your confession I spent a little while reading your comments before realising that I couldn't and shouldn't trust them to be written with any intention of helping (or even not to be attempts at outright mental sabotage, given what you say this is all meant to be analogous to), at which point I gave up.

(If you're wondering, I'm continuing here mostly because it might be useful to other readers. I'm not very hopeful that it will, though, and will probably stop soon.)

comment by gjm · 2015-05-22T07:54:51.968Z · LW(p) · GW(p)

I don't care about my karma points

Oh, sorry, I should have made the following explicit: The point isn't to discourage you by making you suffer. It's to discourage other people from similar negative-value actions.

comment by Unknowns · 2015-05-21T07:58:43.156Z · LW(p) · GW(p)

Also, in practice this only happens when someone is procrastinating and the supposedly additional steps are just ways of avoiding the more difficult steps, so a reasonable estimate of the remaining time to completion is that the person is simply not going to complete the task, ever.

comment by Richard_Kennaway · 2015-05-21T07:09:05.221Z · LW(p) · GW(p)

This is the Planning Fallacy, for which the remedy is the Outside View: ask how similar projects have turned out in the past. That will likely be more accurate than imagining how this particular project will go.

I have heard of (though I have no personal experience with) a rule of thumb for software developers quoting a number of days' work for a project: imagine the longest it could possibly take, then multiply by three.

But perhaps you have not taken an outside view at the start, and got into a project that is multiplying like a hydra? Then take the outside view now, avoid the Sunk Cost fallacy, and ask, is the difference in payoff from completing vs. abandoning the project worth the difference in costs, now realistically estimated, that will be incurred from here on?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T14:24:26.996Z · LW(p) · GW(p)

You do your realistic estimate, and six months later are in the same boat. What now?

If it helps you consider the problem, imagine that all the additional work is coming from another party whom you're contracted to work for, and who is adjusting the workload - without violating the contract - as you're completing it.

Replies from: Richard_Kennaway, Jiro
comment by Richard_Kennaway · 2015-05-21T14:45:54.964Z · LW(p) · GW(p)

and who is adjusting the workload - without violating the contract - as you're completing it.

Bad contract. Don't agree that sort of contract next time.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T15:05:34.806Z · LW(p) · GW(p)

Granted. And don't agree to non-iterative Prisoner's Dilemmas, either.

You're not always in a situation where this matters. Your boss agreed to the contract. Your boss is a perfect rationalist, who completely ignores sunk costs, and has produced perfect probability distributions, which include the knowledge that the other party is adjusting the workload. You suspect the other party at this point has a spy in your organization adjusting their requests according to his team of analysts' probability distributions to maximize the value they can extract from your company. How do you convince your boss it's not worth it to continue the project, regardless of the probability distribution he's currently produced?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-05-22T06:50:40.594Z · LW(p) · GW(p)

At this point you're just responding to every answer to "what would you do?" by inventing another scenario designed to make it fail, and asking the same question again. But these scenarios, like the excuses of the man who claims to have a dragon in his garage, are raised to your attention not by reality, but by the task of finding a way around the answers you have received. There is no end to this process, because for every plan, the outcome pump in your head can imagine a way it could fail.

It is futile to engage in this any further.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-22T13:06:23.426Z · LW(p) · GW(p)

That's the point.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-05-22T13:13:54.682Z · LW(p) · GW(p)

That's the point.

That point being, it appears, "HEY GUYS I TRIED TO PISS YOU OFF AND YOU GOT PISSED OFF I WINZ0RZ HAHAHA L00K @ ALL MY DOWNVOTZ!1!!".

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-22T13:44:02.839Z · LW(p) · GW(p)

The point was to explicate an issue with a fallacy.

I do find the anger about it mildly amusing, because it is coming entirely from people whose reaction to a hypothetical was to immediately dissect and reject it, then get annoyed when I continued to insist that the hypothetical held as-is.

That was the extent of my "trolling", to insist that the hypothetical held as-is, and to mark it as a loss when they continued to reject it - the extent of their losing was merely the degree to which they continued to insist that the real issue was that the subject in the hypothetical was doing what the hypothetical explicitly said they were not doing - making inaccurate predictions.

comment by Jiro · 2015-05-21T14:40:58.837Z · LW(p) · GW(p)

Your estimate of the time it takes needs to take into account the probability that the third party will increase the workload.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T14:57:33.897Z · LW(p) · GW(p)

You do so. They adjust the amount they ask for as you proceed. Again, if it helps, you can assume they have a spy in your organization, and are calibrating against your adjustments.

Replies from: Jiro
comment by Jiro · 2015-05-21T15:35:23.366Z · LW(p) · GW(p)

You need to change your estimate to a value that will be accurate after they adjust the amount. For instance, if you would normally estimate 10 days, but you know that estimating X would lead to them increasing the workload by 1/3 X, you should estimate 15 days, since 15 days would lead them to increase the workload by 5 days, making the result actually be 15. This is the estimation equivalent of gross-up.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T15:45:17.361Z · LW(p) · GW(p)

I think you're reversing the math, but I get your gist.

And accurately predicting this round of mugging doesn't help you deal with the next round of mugging.

Replies from: Jiro
comment by Jiro · 2015-05-21T16:01:01.753Z · LW(p) · GW(p)

I'm not reversing the math. They increase the workload by 1/3 of your prediction, so you give them a prediction which is sized such that, after adding the current workload to the increase based on your prediction, you get the prediction.

And you don't need to predict the next round of mugging because the idea is to give a prediction which takes into account all successive rounds of mugging. If the sum of all these rounds is greater than 100%, the problem can never end at all. If it's less, you can do what I said.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T17:15:52.754Z · LW(p) · GW(p)

I'm not reversing the math. They increase the workload by 1/3 of your prediction, so you give them a prediction which is sized such that, after adding the current workload to the increase based on your prediction, you get the prediction.

It'd be 1/2 your prediction, if you're giving them 10 days and want to arrive at 15 after they add their increase. Doesn't actually matter, though, you made your point clear.

And you don't need to predict the next round of mugging because the idea is to give a prediction which takes into account all successive rounds of mugging. If the sum of all these rounds is greater than 100%, the problem can never end at all. If it's less, you can do what I said.

They're adjusting their mugging so that it's always more profitable for you to continue than stop, if you discount what you've already spent. They've anticipated your predictions, and have priced accordingly.

That's assuming they want to maximize their mugging. They could execute only one or two muggings, and you might not catch on at all.

Replies from: Jiro
comment by Jiro · 2015-05-21T18:23:32.738Z · LW(p) · GW(p)

It'd be 1/2 your prediction, if you're giving them 10 days and want to arrive at 15 after they add their increase.

It's 1/2 of your non-mugging prediction, but it's 1/3 of your stated (with-mugging) prediction. You're trying to arrange it so that non-mugging prediction + mugging based on with-mugging prediction = with-mugging prediction.
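
(A small worked check of the gross-up arithmetic in this exchange, using the 10-day base and one-third markup from the example; the helper function is illustrative only.)

```python
def grossed_up_estimate(base_days, markup_on_stated):
    """Find the stated estimate X such that the base work plus the extra
    work the stated estimate provokes equals X itself:
        base + markup_on_stated * X = X   =>   X = base / (1 - markup_on_stated)
    """
    assert markup_on_stated < 1.0, "at 100% or more the mugging never terminates"
    return base_days / (1.0 - markup_on_stated)

stated = grossed_up_estimate(10, 1 / 3)   # ~15 days
added = stated * (1 / 3)                  # ~5 days of added work
print(stated, added)
# The ~5 added days are 1/3 of the *stated* 15 and 1/2 of the *base* 10,
# which is where both commenters' fractions come from.
```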

comment by g_pepper · 2015-05-21T20:45:54.841Z · LW(p) · GW(p)

This was an interesting article. I've been involved in software consulting in the past, and this sort of mugging does sometimes occur in fixed-price projects. I think that there are several take-aways from this:

  • fixed-price projects are a lot higher risk (to the service provider) than are time and materials projects. This is true even if the service-provider is good at estimation and applies appropriate buffer in the schedule/pricing.

  • fixed-price projects require a skilled project manager who can recognize and manage scope creep (intentional and otherwise)

  • fixed-price projects require diligence up-front in crafting an unambiguous statement of work or contract

  • one-person or small project teams without a dedicated project manager should think twice before accepting fixed-price assignments

The last bullet is worth emphasizing; some technical people, wishing to stay focused on the work, will acquiesce to scope creep (particularly if introduced incrementally) to avoid getting involved in time-consuming and potentially adversarial discussions with the client. This can make manager-less teams particularly vulnerable to this type of mugging. An experienced project manager can often mitigate this danger.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T20:51:57.765Z · LW(p) · GW(p)

I've seen the mugging go the other direction as well on fixed-cost work, particularly in extremely large contracts; companies put low bids in, then charge exorbitant rates for even the smallest changes to the requirements (and there are always changes). And with non-fixed-price projects, the mugging in the other direction is even easier. People in IT don't pay nearly enough attention to reputation.

But yeah. It's very easy for individuals and small companies to get caught in this, especially if, say, your mortgage payment is due soon.

Replies from: g_pepper
comment by g_pepper · 2015-05-21T21:32:14.962Z · LW(p) · GW(p)

Yes, it can happen in the other direction too.

comment by shminux · 2015-05-20T23:35:56.846Z · LW(p) · GW(p)

But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

If this is the case, your issue is unrelated to sunk costs; it is the Planning Fallacy. You've failed to perform the proper risk analysis and mitigation. The excuse "had no priors for anticipating" is only valid for black swan events, not for the run-of-the-mill problems every project has.

So, when faced with the situation you describe, one should stop barking up the wrong tree and do a pre-mortem.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T00:06:57.558Z · LW(p) · GW(p)

If this is the case, your issue is unrelated to sunk costs; it is the Planning Fallacy. You've failed to perform the proper risk analysis and mitigation. The excuse "had no priors for anticipating" is only valid for black swan events, not for the run-of-the-mill problems every project has.

Assume you have performed the proper risk analysis and mitigation.

So, when faced with the situation you describe, one should stop barking up the wrong tree and do a pre-mortem.

Assume you've done this and it has failed to prevent the issues described. What now?

Replies from: shminux, DanielLC
comment by shminux · 2015-05-21T01:12:22.472Z · LW(p) · GW(p)

Sorry, your assumptions are untenable for the reasons I described. So, not an interesting hypothetical.

comment by DanielLC · 2015-05-21T08:16:11.693Z · LW(p) · GW(p)

If you are a rational agent and encounter this, then the proper action is to keep going, since the fact that you haven't gotten any closer to your goal is just an unlikely coincidence. In real life it's much more likely that you're just committing the planning fallacy, which is why someone reading this will assume that you'll keep noticing steps you missed instead of actually being right this time.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T14:29:51.006Z · LW(p) · GW(p)

If you are a rational agent and encounter this, then the proper action is to keep going, since the fact that you haven't gotten any closer to your goal is just an unlikely coincidence. In real life it's much more likely that you're just committing the planning fallacy, which is why someone reading this will assume that you'll keep noticing steps you missed instead of actually being right this time.

You keep going. The situation keeps getting worse. You've now spent five times the original estimate, three times as much as the project is worth to you, and you've inflated the last set of tasks by 1000x their original estimate, which is four times as much as your now absurdly updated probability distribution says they'll take. It's still worth doing. What now?

Replies from: DanielLC
comment by DanielLC · 2015-05-21T19:11:46.868Z · LW(p) · GW(p)

It's sort of like asking: a coin lands on heads twenty consecutive times. Do you keep betting tails at even odds? By the way, the coin is fair.

You're giving me massive amounts of evidence that I'm falling prey to the planning fallacy, giving the impression that I'm falling prey to the planning fallacy, and trying to get me to take the appropriate action given that I'm falling prey to the planning fallacy, but you're telling me that I'm not falling prey to the planning fallacy. So which is it? Because if you're really not falling prey to it, and this really is a coincidence, then you shouldn't give up, because this time you'll be right. I know it sounds unlikely, but that's your fault for picking an unlikely premise. This is to statistics what the trolley problem is to morality.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T19:36:37.649Z · LW(p) · GW(p)

Are you feeling like you're caught in a rationality penny auction yet? That the premise is setting you up to lose no matter what you do except walk away from the problem?

You're right that you're not falling prey to the Planning Fallacy. You're wrong that it's a coincidence.

The coin is fair. The universe containing the coin is not.

That is, I'm deliberately setting the scenario up so that it's unwinnable. There is a hidden evil god who is running an unwinnable hypothetical, shifting the goalposts as soon as you near them. What's the difference between a hypothetical in which there is an evil god and one in which there isn't, given that you seem to be perpetually unlucky? Nothing tangible to you.

You and a partner are working on the same thing, and both expect the same reward (you can imagine it's putting up a fence between your partner-neighbor's yard and your own), with an expected payoff of about 25% above the effort you put into it; you've gotten to the three-quarters mark, but now you're doing all their work, owing to some claimed emergency on their part (they claim they hurt their back, say). What's the difference between the world in which you've fallen prey to the Planning Fallacy, and the world in which they're exploiting your unwillingness to drop the project now? Nothing tangible to you. Either way, you're breaking even on a project you expected to make you better off, and they're doubling their effort investment.

The Planning Fallacy is a red herring here. The idea of predicting how much longer it will take, in general, is a red herring. It's all distracting you from a more central problem: avoidance of the Sunk Cost Fallacy makes you vulnerable to a mugging.

Replies from: DanielLC
comment by DanielLC · 2015-05-21T21:43:02.875Z · LW(p) · GW(p)

Realistically, our understanding of stuff is vague enough that there's little reason to differentiate between planning fallacy and an evil god. We should notice that we're failing more than we expected to and correct for it. We're not intelligent enough to work out precisely why.

If you had a truly rational agent performing Solomonoff induction, it would eventually notice the hidden evil god. It doesn't need the sunk cost fallacy.

If someone keeps claiming they have emergencies, you should eventually notice that it isn't just luck. They might be trying to mug you. They might be really accident prone. The correct response is the same either way.

comment by ThisSpaceAvailable · 2015-05-29T06:48:20.971Z · LW(p) · GW(p)

Hopefully, I'm not just feeding the troll, but: just what exactly do you think "the sunk cost fallacy" is? Because it appears to me that you believe that it refers to the practice of adding expenses already paid to future expected expenses in a cost-benefit analysis, when in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-01T15:29:51.122Z · LW(p) · GW(p)

The Sunk Cost Fallacy is the fallacy of considering sunk costs (expenses already paid) when calculating expected returns. I/e, if I've already spent $100, and my expected returns are $50, then it would be the sunk cost fallacy to say it is no longer worth continuing, since my expected return is negative - I should instead, to avoid the fallacy, only consider the -remaining- expenses to get that return.

Which is to say, to avoid the fallacy, sunk costs must be ignored.

The post is about the scenario in which prior-cost insensitivity (avoiding the sunk cost fallacy) opens you up to getting "mugged", a situation referred to as the Sunk Cost Dilemma, about which surprisingly little has been written; one hostile agent can extract additional value from another, sunk-cost-insensitive agent by adding additional costs at the back end.

(There was no "trolling". Indeed, I wasn't even tricking anybody - my "mugging" of other people was conceptual, referring to the fact that any "victim" agent who continued to reason the way the people here were reasoning would continue to get mugged in a real-life analogue, again and again for each time they refused to update their approach or understanding of the problem.)

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2015-06-02T03:35:25.400Z · LW(p) · GW(p)

As I said, that is not what the sunk cost fallacy is. If you've spent $100, and your expected net returns are -$50, then the sunk cost fallacy would be to say "If I stop now, that $100 will be wasted. Therefore, I should keep going so that my $100 won't be wasted."

While it is a fallacy to just add sunk costs to future costs, it's not a fallacy to take them into account, as your scenario illustrates. I don't know of anyone who recommends completely ignoring sunk costs; as far as I can tell you are arguing against a straw man in that sense.

Also, it's "i.e.", rather than "i/e".

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-02T13:08:49.464Z · LW(p) · GW(p)

Taking them into account is exactly what the sunk cost fallacy is: including sunk costs with prospective costs for the purposes of making decisions.

I think you confuse the most commonly used examples of the sunk cost fallacy with the sunk cost fallacy itself.

(And it would be e.g. there, strictly speaking.)

ETA: So if I'm arguing against a straw man, it's because everybody is silently ignoring what the fallacy actually refers to in favor of something related to the fallacy but not the fallacy entire.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2015-06-04T22:28:39.308Z · LW(p) · GW(p)

If you think that everyone is using a term for something other than what it refers to, then you don't understand how language works. And a discussion of labels isn't really relevant to the question of whether it's a straw man. Also, your example shows that what you're referring to as a sunk cost fallacy is not, in fact, a fallacy.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-05T01:06:04.316Z · LW(p) · GW(p)

Wait. You paid a karma toll to comment on one of my most unpopular posts yet to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"? Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

Let's pull up the definition you offered.

in fact it refers to the opposite: subtracting expenses already paid from future expected expenses.

You're not even getting the -populist- definition of the fallacy right. Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5, a negative cost is a positive inlay, which means: Yay, free money?

Why didn't I bring that up before? Because I'm not here to score points in an argument. Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument, a trait which I think is overemphasized and over-rewarded on Less Wrong. I can't fix that, but I can express my disdain for the behavior: Your games of trivial social dominance bore me.

I believe it's your turn. You're slated to deny that you're playing any such games. Since I've called your turn, I've changed it, of course; it's a chaotic system, after all. I believe the next standard response is to insult me. Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do, and then to say that by calling attention to it, I've left you in confusion as to what game you're even supposed to be playing against me.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2015-06-06T04:53:27.130Z · LW(p) · GW(p)

You paid a karma toll to comment on one of my most unpopular posts yet

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

to... move the goalposts from "You don't know what you're talking about" to "The only correct definition of what you're talking about is the populist one"?

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

Now, as to "moving the goalposts", the thing that I suspected you of not knowing what you were talking about was knowing the standard meaning of the phrase "sunk cost fallacy", so the goalposts are pretty much where they were in the beginning, with the only difference being that I have gone from strongly suspecting that you don't know what you're talking about to being pretty much certain.

Well, I guess we'd better redefine evolution to mean "Spontaneous order arising out of chaos", because apparently that's how we're doing things now.

I don't know of any mainstream references defining evolution that way. If you see a parallel between these two cases, you should explain what it is.

You're not even getting the -populist- definition of the fallacy right.

Ideally, if you are going to make claims, you would actually explain what basis you see for those claims.

Your version, as-written, implies that the cost for a movie ticket to a movie I later decide I don't want to see is -negative- the cost of that ticket. See, I paid $5, and I'm not paying anything else later, so 0 - 5 = -5, a negative cost is a positive inlay, which means: Yay, free money?

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong. I have two issues with that. The first is that you didn't actually present what your thinking was. That shows a lack of rigorous thought, as you failed to make explicit what your argument is. This leaves me with both articulating your argument and mine, which is rather rude. The second problem is that your syllogism "This is absurd, therefore it is false" is severely flawed. It's called the Sunk Cost Fallacy. The fact that it is illogical doesn't disqualify it from being a fallacy; being illogical is what makes it a fallacy.

Typical thinking is, indeed, that if one has a ticket for X that is priced at $5, then doing X is worth $5. For the typical mind, failing to do X would mean immediately realizing a $5 loss, while doing X would avoid realizing that loss (at least, not immediately). Therefore, when contemplating X, the $5 is considered as being positive, with respect to not doing X (that is, doing X is valued higher than not doing X, and the sunk cost is the cause of the differential).

Why didn't I bring that up before? Because I'm not here to score points in an argument.

And if you were here to score points, you would think that "You just described X as being a fallacy, and yet X doesn't make sense. Hah! Got you there!" would be a good way of doing so? I am quite befuddled.

Why do I bring it up now? Because I'm a firm believer in tit-for-tat - and you -do- seem to be here to score points in an argument

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication. I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points". I do not think that I have been particularly rude, and absent prioritizing your feelings over clear communication, I don't see how I could avoid you accusing me of playing "games of trivial social dominance".

"Once I've called that, usually -my- turn is to reiterate that it's a game of social dominance, and that this entire thing is what monkeys do"

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

We could, of course, skip -all- of that, straight to: What exactly do you actually want out of this conversation? To impart knowledge? To receive knowledge? Or do you merely seek dominance?

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-08T13:41:49.003Z · LW(p) · GW(p)

My understanding is that the karma toll is charged only when responding to downvoted posts within a thread, not when responding to the OP.

You could be correct there.

I didn't say that the only correct definition is the most popular one; you are shading my position to make it more vulnerable to attack. My position is merely that if, as you yourself said, "everybody" uses a different definition, then that is the definition. You said "everybody is silently ignoring what the fallacy actually refers to". But what a term "refers to" is, by definition, what people mean when they say it. The literal meaning (and I don't take kindly to people engaging in wild hyperbole and then accusing me of being hyperliteral when I take them at their word, in case you're thinking of trying that gambit) of your post is that in the entire world, you are the only person who knows the "true meaning" of the phrase. That's absurd. At the very least, your use is nonstandard, and you should acknowledge that.

There's a conditional in the sentence that specifies "everybody". "So if I'm arguing against a straw man..."

I don't think I -am- arguing against a straw man. As I wrote directly above that, I think your understanding is drawn entirely from the examples you've seen, rather than the definition, as written on various sites - you could try Wikipedia, if you like, but it's what I checked to verify that the definition I used was correct when you suggested it wasn't. I will note that the "Sunk Cost Dilemma" is not my own invention, and was noted as a potential issue with the fallacy as it pertains to game theory long before I wrote this post - and, indeed, shows up in the aforementioned Wikipedia. I can't actually hunt down the referenced paper, granted, so whether or not the author did a good job elaborating the problem is a matter I'm uninformed about.

Presumably, your line of thought is that what you just presented is absurd, and therefore it must be wrong. I have two issues with that. The first is that you didn't actually present what your thinking was. That shows a lack of rigorous thought, as you failed to make explicit what your argument is. This leaves me with both articulating your argument and mine, which is rather rude. The second problem is that your syllogism "This is absurd, therefore it is false" is severely flawed. It's called the Sunk Cost Fallacy. The fact that it is illogical doesn't disqualify it from being a fallacy; being illogical is what makes it a fallacy.

"Illogical" and "Absurd" are distinct, which is what permits common fallacies in the first place.

I sincerely believe that you are using the phrase "sunk cost fallacy" in a way that is contrary to the standard usage, and that your usage impedes communication. I attempted to inform you of my concerns, and you responded by accusing me of simply trying to "score points". I do not think that I have been particularly rude, and absent prioritizing your feelings over clear communication, I don't see how I could avoid you accusing me of playing "games of trivial social dominance".

Are you attempting to dissect what went wrong with this post?

Well, initially, the fact that everybody fought the hypothetical. That was not unexpected. Indeed, if I include a hypothetical, odds are it anticipates being fought.

It was still positive karma at that point, albeit modest.

The negative karma came about because I built the post in such a way as to utilize the tendency on Less Wrong to fight hypotheticals, and then I called them out on it in a very rude and condescending way, and also because at least one individual came to the conclusion that I was actively attempting to make people less rational. Shrug It's not something I'm terribly concerned with, on account that, in spite of the way it went, I'm willing to bet those who participated learned more from this post than they otherwise would have.

Perceiving an assertion of error as being a dominance display is indeed something that the primate brain engages in. Such discussions cannot help but activate our social brains, but I don't think that means that we should avoid ever expressing disagreement.

I'll merely note that your behavior changed. You shifted from a hit-and-run style of implication to over-specific elaboration and in-depth responses. This post appears designed to prove to yourself that your disagreement has a rational basis. Does it?

My immediate motive is to impart knowledge. I suppose if one follows the causal chain down, it's quite possible that humans' desire to impart knowledge stems from our evolution as social beings, but that strikes me as overly reductionist.

Case in point.

Let's suppose that is your motive. What knowledge have you imparted? Given that you're concerned that I don't know what it is, where's the correct definition of the Sunk Cost Fallacy, and how does my usage deviate from it? I'd expect to find that somewhere in here in your quest to impart knowledge on me.

Your stated motive doesn't align with your behavior. It still doesn't; you've dressed the same behavior up in nicer clothes, but you're still just scoring points in an argument.

So - and this time I want you to answer to -yourself-, not to me, because I don't matter in this respect - what exactly do you actually want out of this conversation?

Replies from: Jiro
comment by Jiro · 2015-06-08T15:08:08.988Z · LW(p) · GW(p)

The negative karma came about because I built the post in such a way as to utilize the tendency on Less Wrong to fight hypotheticals, and then I called them out on it in a very rude and condescending way, and also because at least one individual came to the conclusion that I was actively attempting to make people less rational. Shrug It's not something I'm terribly concerned with, on account that, in spite of the way it went, I'm willing to bet those who participated learned more from this post than they otherwise would have.

Is that "the end justifies the means"?

Replies from: OrphanWilde
comment by OrphanWilde · 2015-06-08T15:15:06.270Z · LW(p) · GW(p)

The means, in this case, don't violate any of my ethics checks, so I don't see any need to justify them, and nobody suggested my ethics in this case were off. The sole accusation of defection was on a misinterpretation of my behavior, that I was trying to make people less rational.

It's more a statement that I think the post was effective for its intended purposes, so I'm not too concerned about re-evaluating my methodology.

I should have separated that out into two paragraphs for clarity, I suppose.

comment by Unknowns · 2015-05-21T07:56:58.162Z · LW(p) · GW(p)

It seems that you are expecting a situation somewhat like this:

Day 1: I expect to be done in 5 days.
Day 2: I expect to be done in 5 days.
Day 10: I expect to be done in 7 days.
Day 20: I expect to be done in 4 days.
Day 30: I expect to be done in 5 days.

Basically, this cannot happen if I am updating rationally. You say, "Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19." But in fact, it does add something: namely that this task that I am trying to accomplish is very long and unpredictable, and the more such steps are added, the longer and more unpredictable I should assume it to be, even in the remaining portion of the task. So by day 30, I should be expecting about another month, not another 5 days. And if I do this, at some point it will become clear that it is not worth finishing the task, at least assuming that it is not simply the process itself that is worth doing.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T14:19:23.372Z · LW(p) · GW(p)

Basically, this cannot happen if I am updating rationally. You say, "Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19." But in fact, it does add something: namely that this task that I am trying to accomplish is very long and unpredictable, and the more such steps are added, the longer and more unpredictable I should assume it to be, even in the remaining portion of the task. So by day 30, I should be expecting about another month, not another 5 days. And if I do this, at some point it will become clear that it is not worth finishing the task, at least assuming that it is not simply the process itself that is worth doing.

I will repeat what I have already repeated several times: Assume that you're correctly updating.

Replies from: Jiro
comment by Jiro · 2015-05-21T14:42:07.653Z · LW(p) · GW(p)

That assumption may be so unreasonable in a real-life situation that it makes the conclusion worthless to use in real life.

Replies from: OrphanWilde
comment by OrphanWilde · 2015-05-21T14:59:04.294Z · LW(p) · GW(p)

Okay. You decide your calculations are worthless. How do you decide how to proceed?

Replies from: Jiro
comment by Jiro · 2015-05-21T16:03:46.012Z · LW(p) · GW(p)

Your calculation is worthless because it fails to consider the probability that you're incorrectly updating. The solution is to consider the probability that you're incorrectly updating.

If you are required to assume that you're correctly updating, but it's a real life situation where you aren't, then there is no solution because you have assumed something false.