"What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases?
post by Shmi (shminux) · 2013-02-22T21:44:01.437Z · LW · GW · Legacy · 36 comments
The "what-the-hell" effect, when you break a rule and then go on a rule-breaking rampage, like binge eating after a single dietary transgression, is a very common failure mode. It was recently mentioned in the Overcoming Bias blog comments on the Which biases matter most? Let’s prioritise the worst! post. I have not been able to find an explicit discussion of this issue here, though there are quite a few comments on binge-<something>.
From the Psyblog entry quoting this paper:
Although everyone was given the same slice of pizza, when it was served up, for some participants it was made to look larger by comparison.
This made some people think they'd eaten more than they really had; although in reality they'd all eaten exactly the same amount. It's a clever manipulation and it means we can just see the effect of thinking you've eaten too much rather than actually having eaten too much.
When the cookies were weighed it turned out that those who were on a diet and thought they'd blown their limit ate more of the cookies than those who weren't on a diet. In fact over 50% more! [Emphasis mine]
Other examples include sliding back into one's old drinking/smoking/surfing habit. For example, that's how I stopped using the Pomodoro technique.
My (title) question: what is the mechanism of this cognitive failure, and can it be reduced to a combination of existing biases/fallacies? If the latter is true, can addressing one of the components counteract the what-the-hell effect? If so, how would one go about testing it?
For completeness, the top hit on Google Scholar for the query "what-the-hell effect" is chapter 5 of Striving and Feeling: Interactions Among Goals, Affect, and Self-regulation, by Martin and Tesser.
EDIT: personal anecdotes are encouraged; they may help construct a more complete picture.
36 comments
comment by Eugine_Nier · 2013-02-23T07:38:02.073Z · LW(p) · GW(p)
I believe this is related to the way Schelling points on slippery slopes work. A Schelling point gets a large part of its power from its uniqueness: "If not here, where?" This has the disadvantage that if it is nevertheless breached, you'll slip much further down the slope.
comment by Elithrion · 2013-02-23T01:43:44.635Z · LW(p) · GW(p)
It seems like a case of a binary commitment. Breaking the commitment at all costs you -20 hedons, but each further break only costs -1 hedon or whatever (which is outweighed by the gains from breaking it more). I think most internal commitments are set up this way because it's easier to make a commitment like this explicit and enforceable than one that's more continuous over actions.
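A minimal sketch of this payoff structure in Python (a toy model; the hedon numbers are illustrative placeholders, echoing the ones above):

```python
# Toy model of a binary commitment: a large fixed cost for the first
# break, a small cost per further break, and a fixed gain per break.
# All numbers are made up for illustration.

def net_hedons(breaks, first_cost=20, extra_cost=1, gain_per_break=3):
    """Total hedonic payoff of `breaks` violations of the commitment."""
    if breaks == 0:
        return 0
    return breaks * gain_per_break - (first_cost + (breaks - 1) * extra_cost)

for n in range(4):
    print(n, net_hedons(n))
# 0 0 / 1 -17 / 2 -15 / 3 -13: once the first break is sunk, every
# further break is net positive, which is the what-the-hell incentive.
```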
↑ comment by buybuydandavis · 2013-02-23T13:41:54.268Z · LW(p) · GW(p)
In for a penny, in for a pound.
↑ comment by Kindly · 2013-02-23T14:52:25.519Z · LW(p) · GW(p)
That's just throwing good money after bad.
↑ comment by Antisuji · 2013-02-25T18:58:31.607Z · LW(p) · GW(p)
Sure, but you don't know it's bad. If you're in for a penny, that's evidence (if you trust your own judgement) that it's actually a good investment, and you should go in for a pound if you can afford it.
↑ comment by [deleted] · 2013-02-26T03:22:07.289Z · LW(p) · GW(p)
Of course, treating your own belief in a proposition as evidence for that proposition seems like a rather dangerous thing to do.
↑ comment by Antisuji · 2013-02-26T07:35:14.094Z · LW(p) · GW(p)
Absolutely! But I think a lot of people implicitly do the equivalent when they try to be or appear consistent at all costs. The reasoning goes something like, "I believe X, therefore I must have a good reason for it and so X must be true!"
↑ comment by Eugine_Nier · 2013-02-27T04:43:37.309Z · LW(p) · GW(p)
Depends: you don't necessarily want to recompute propositions every time they come up.
comment by handoflixue · 2013-02-23T00:49:52.546Z · LW(p) · GW(p)
I've heard people talk about "Success Chains" - do something every day, and eventually you get a chain of successful days, and this helps pressure you to keep having successful days. Since this is generally a binary metric, it's better to have a single catastrophic failure than numerous smaller ones - it reduces the number of "breaks" in the chain and thus keeps that momentum going.
In other words, if I failed my diet a bit, I might as well fail it severely - then my body will have tons of food, and it'll be easier to get "back on track" the next few days.
Essentially, if your consequences don't scale correctly, then you want to cluster your failures. If you get written up for being late to work whether it's 15 minutes or 4 hours, you'd rather have one day where EVERYTHING goes wrong and you're 4 hours late than a week where you show up 15 minutes late every day. Since humans are horrible at scale, it's not surprising that even our internal consequences, like guilt, don't scale linearly with the size of the transgression.
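A sketch of that incentive, using the lateness example (the one-write-up-per-late-day rule is a hypothetical, not an actual policy):

```python
# Flat per-day penalty: any lateness in a day earns exactly one write-up,
# no matter how late you are. (Hypothetical rule, for illustration.)

def write_ups(minutes_late_per_day):
    return sum(1 for m in minutes_late_per_day if m > 0)

spread    = [15, 15, 15, 15, 15]  # 75 minutes total, spread over a week
clustered = [240, 0, 0, 0, 0]     # 240 minutes total, all in one bad day

print(write_ups(spread))     # 5 write-ups
print(write_ups(clustered))  # 1 write-up: clustering is strictly cheaper
```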
↑ comment by pinyaka · 2013-02-25T14:15:37.563Z · LW(p) · GW(p)
It seems like this assumes some kind of conservation of failure, where you're going to have some fixed amount of breakage and it's better to get it over with up front, but that doesn't seem right to me. There's no obvious reason why binge eating would make your diet more effective or easier to stick to. Hunger doesn't work in such a way that you can run a huge caloric excess one day and then not be hungry for several days while building a habit of eating less. The excess calories are excreted or stored, and when you start to run a caloric deficit again, you will feel hungry and have more weight to lose.
↑ comment by Eugine_Nier · 2013-02-25T23:50:08.718Z · LW(p) · GW(p)
I suspect the reason is that we're using brain mechanisms that evolved to enforce/evade social rules, and there one big failure is better than lots of small ones.
↑ comment by handoflixue · 2013-02-25T19:43:08.283Z · LW(p) · GW(p)
It seems like this assumes some kind of conservation of failure
Yeah, more or less. From my personal experience, a string of small failures and a single big failure seem to cost about the same amount of willpower. I have no clue why this is, beyond the basic theory of "success chains" being good for motivating us - a single break doesn't seem to slow down motivation, but a lot of little ones tend to kill it.
Hmmm, given that some people look at this advice as "obvious" and others are utterly baffled by it, there's a chance that it only works for a certain segment of the population. It might help to model this as general advice, regardless of goal: I learned about it in terms of building career skills and fixing sleep schedules, and just naively started using it to build my diets on the assumption that it was a generic pattern (for me, at least, it's where all my semi-stable diets come from).
↑ comment by Fadeway · 2013-02-24T04:27:12.738Z · LW(p) · GW(p)
If you want to get up early and you oversleep once, chances are you'll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.
Small failures always becoming huge ones also helps as a deterrent - if you know that the single cookie that bends your diet will end with you eating the whole jar and canceling the diet altogether, you will be much more likely to avoid even small deviations like the plague next time.
↑ comment by handoflixue · 2013-02-25T19:38:42.793Z · LW(p) · GW(p)
It seems to scale with willpower: for some people, "a single small failure per month" is an impossible goal, but "multiple small failures OR one big failure" is an option. If and only if one is dealing with THAT choice, a single big failure seems to do a lot less damage to motivation.
If you've got different anecdotes then I think we'll just have to agree to disagree. If you've got studies saying I'm wrong, I'm happy to accept that I'm wrong - I know it worked, since I used this to help fix my spouse's sleep cycle, but that doesn't mean it worked for the reasons I think. :)
↑ comment by Fadeway · 2013-02-26T04:33:54.726Z · LW(p) · GW(p)
I agree, you can get over some slip-ups, depending on how easy the thing you're attempting is relative to your motivation.
As you said, it's a chain - the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you're attempting, a hard reset is sensible because it saves time on an already doomed attempt *and* makes the next one easier (due to the deterrent effect).
comment by [deleted] · 2013-02-23T01:17:02.007Z · LW(p) · GW(p)
In the drug addiction literature this is called the Abstinence Violation Effect. I'll write a full comment about it when I have some time.
comment by Kawoomba · 2013-02-23T07:48:02.656Z · LW(p) · GW(p)
You're exploiting your own scope neglect bias: you know you'll regret what happened at that event, but you also kind of know that you'll regret it a similar amount whether you eat that additional cookie or not.
Moreover, eating that additional cookie lessens the weight assigned to the first transgression (e.g. the pizza), because now instead of a singular event it's merely one in a series of events, and thinking about the whole series, there's less individual regret based on that first pizza incident.
It comes down to trying to game your own reward/punishment system, acting as if that surrogate parameter is the actual goal (instead of the weight loss).
Puny hu-mans!
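A toy model cashing out the scope-neglect point above (made-up numbers and a made-up regret curve, purely for illustration): if regret grows only logarithmically with the size of the transgression, the first cookie buys nearly all the regret and marginal cookies are almost free.

```python
import math

# Toy scope-neglect regret curve: a big fixed hit for transgressing at
# all, then only logarithmic growth in the size of the transgression.
# The constants are made up for illustration.

def regret(cookies_eaten):
    if cookies_eaten == 0:
        return 0.0
    return 10 + math.log(cookies_eaten)

print(regret(1))   # 10.0: the first cookie buys nearly all the regret
print(regret(10))  # ~12.3: ten times the transgression, ~23% more regret
```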
comment by A1987dM (army1987) · 2013-03-03T14:44:53.263Z · LW(p) · GW(p)
Now that I think about it, it feels a lot like refusing a small offer in a game of Ultimatum against my past self.
comment by DanArmak · 2013-02-23T00:08:50.425Z · LW(p) · GW(p)
One way to look at the what-the-hell pattern is as perfectionism. I set up a rule for myself, and if I fail to follow it even a little bit, then I might as well fail to follow it entirely: the only categories are "following the rule" and "not following the rule".
For instance, I may resolve to exercise every day. Skipping a day for any reason is a very strong predictor of also skipping the next day. "Exercising sometimes" feels, psychologically, like not following a rule about exercising, rather than like following a weaker rule or following it 80% of the time. (If I set up a rule to exercise 80% of the time, then I'll behave the same way with regard to that new rule, instead.)
There's probably research and information on dealing with perfectionism, but I don't know what it is.
comment by Shmi (shminux) · 2013-02-22T22:23:13.082Z · LW(p) · GW(p)
There is an obvious model of this, I call it the "can of worms" model. When something is contained under pressure and the container cracks, odds are that more than a single worm will get out, and it is much harder to put the worms back in than to keep them in. This model describes some very diverse phenomena, like nuclear fission, glass breaking, contested divorce, social revolution, etc. That said, I am not sure how applying this model can help one avoid this pitfall.
↑ comment by Douglas_Knight · 2013-02-23T00:17:07.763Z · LW(p) · GW(p)
What model are you talking about? In what sense is this compatible with the experiment you quote in the post?
↑ comment by [deleted] · 2013-02-23T08:35:28.760Z · LW(p) · GW(p)
Think of a scalar potential shaped like a cirque. A body in the depression can be perturbed a little without falling out, because it meets a restoring force at the walls. But if the body is moving fast or the wall of the depression becomes flat, then the body moves out and falls down the mountainside. A lot of energy is then required to move it back up. Shminux is just saying that people dieting are in a local minimum of their potential, and it doesn't take much motivational impetus to make them fall off the wagon. Not a perfect analogy, but it's understandable.
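A minimal numerical sketch of this analogy (the potential V(x) = x^2 - x^3 is an assumed toy choice, not anything from the thread): it has a local minimum at x = 0, a barrier at x = 2/3, and falls away beyond it.

```python
# V(x) = x**2 - x**3: local minimum (the "depression") at x = 0,
# barrier (local maximum) at x = 2/3, "mountainside" beyond it.

def dV(x):
    return 2 * x - 3 * x**2  # gradient of the toy potential

def relax(x, steps=200, lr=0.05):
    """Let the state settle under the restoring force (gradient descent)."""
    for _ in range(steps):
        x -= lr * dV(x)
        if x > 3:  # well past the barrier: it's not coming back
            break
    return x

print(relax(0.5))  # ~0.0: a small perturbation is pulled back into the well
print(relax(0.8))  # > 3: a push past the barrier slides down the mountainside
```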
comment by Qiaochu_Yuan · 2013-02-22T21:53:28.393Z · LW(p) · GW(p)
It's not clear to me that this should be analyzed as a cognitive bias. It seems to be primarily a social phenomenon, and seems pretty reasonable if you operate in a social environment where small and large transgressions of rules are punished equally severely. I wouldn't be surprised to see it disappear in a social environment where people didn't use binary social rules but awarded points based on how well you adhered to rules.
↑ comment by Shmi (shminux) · 2013-02-22T22:02:20.705Z · LW(p) · GW(p)
There is not much social about it when it's just between you and an extra slice of pizza in front of you.
↑ comment by DanArmak · 2013-02-22T22:35:17.358Z · LW(p) · GW(p)
Dieting, and keeping to your commitments, are both socially rewarded. This thought, the image of others' disapproval if you fail to diet, is mentally present even when one is alone. People judge themselves by their society's standards. So it's a social pressure mediated by psychological effects (memory and imagination).
↑ comment by Shmi (shminux) · 2013-02-22T22:43:28.620Z · LW(p) · GW(p)
I agree on the "pressure" part. However, the pressure need not be social, unless you count every action as mediated by societal influences. True enough, dieting is often driven by societal pressure, so maybe the pizza example was not so great, but there are many other pressures people exert on themselves. Are you saying that breaking under internal pressure doesn't result in the "what-the-hell" reaction?
↑ comment by DanArmak · 2013-02-23T00:05:30.169Z · LW(p) · GW(p)
Sorry, I was focusing on the dieting example too much. Social pressure is just one kind related to dieting, and it's probably not directly related to the "what the hell" pattern itself. I'll comment on the pattern in a top level comment.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-02-24T09:58:04.626Z · LW(p) · GW(p)
Qiaochu's explanation could still work on an ev-psych level, though. It makes sense that far mode commitments would tend to be social.
↑ comment by Desrtopa · 2013-02-22T23:03:50.774Z · LW(p) · GW(p)
I've experienced this phenomenon while trying to hold myself to an entirely self-crafted set of dietary restrictions which nobody else knew the terms of or was encouraging me to follow. (I was trying to hit single-digit body fat percentage in college, and had functionally unlimited quantities of food on offer in the cafeteria. Sometimes, if I still felt too hungry after the caloric intake I had allotted for myself, I would give in and let myself eat much, much more than I intended.)
↑ comment by Qiaochu_Yuan · 2013-02-23T00:23:47.411Z · LW(p) · GW(p)
Were the rules binary (e.g. "don't do X") or did you make yourself a point system?
↑ comment by Desrtopa · 2013-02-23T00:39:35.110Z · LW(p) · GW(p)
I had a target number of calories per day, plus guidelines for the sort of things I ought to eat to feel full enough and get enough nutrition at low calorie levels, and an understanding that if I broke the target number, it was better to do it by a little rather than a lot.
I was pretty successful (dropped down to approximately 7% body fat according to an electrical impedance scale), but I learned that it was important not to break the targets at all, because it was practically impossible to break them by just a little. If I did, it was hard not to simply give up on diet control for the day.
↑ comment by handoflixue · 2013-02-23T00:53:00.495Z · LW(p) · GW(p)
seems pretty reasonable if you operate in a social environment where small and large transgressions of rules are punished equally severely
I wouldn't call that a primarily social phenomenon, since this seems to happen with internal thought processes just as easily, but that's more of a minor nit-pick in phrasing. I think you're spot-on about rational adaptation to perverse incentives :)
↑ comment by A1987dM (army1987) · 2013-02-23T09:59:32.493Z · LW(p) · GW(p)
Pretty sure that happened to me before when I hadn't told anyone about my precommitment.
comment by savageorange · 2013-02-23T07:11:20.070Z · LW(p) · GW(p)
Am I the only one to whom this just looks like a minor transformation of the Sunk Cost Fallacy?
comment by John_Maxwell (John_Maxwell_IV) · 2013-02-24T09:56:34.319Z · LW(p) · GW(p)
I often find it useful to focus my attention on what effects my actions have on the margin. For example, sometimes I randomly go running because I figure the marginal effects on my longevity and energy levels will outweigh the discomfort from running, and I see them as benefits that I can pluck for myself. (If I could quantify energy levels/longevity, the numbers would be going in the right direction.) Or sometimes I'm really tired and I don't have the energy to figure out what's optimal and work on it, so I just work on some random useful thing I can do, secure in the knowledge that if I were to zoom out and look at the bigger picture, I would agree that I was purchasing utility by working on it.
comment by A1987dM (army1987) · 2013-02-23T10:05:50.436Z · LW(p) · GW(p)
I think it is related to this effect. It's as if there's a kind of ‘mental inertia’, so to speak.