Anti-Akrasia Reprise
post by dreeves · 2010-11-16T11:16:16.945Z · LW · GW · Legacy · 24 comments
A year and a half ago I wrote a LessWrong post on anti-akrasia that generated some great discussion. Here's an extended version of that post: messymatters.com/akrasia
And here's an abstract:
The key to beating akrasia (i.e., procrastination, addiction, and other self-defeating behavior) is constraining your future self -- removing your ability to make decisions under the influence of immediate consequences. When a decision involves some consequences that are immediate and some that are distant, humans irrationally overweight the immediate ones (no consistent rate of future discounting can account for it). To be rational you need to make the decision at a time when all the consequences are distant. And to make your future self actually stick to that decision, you need to enter into a binding commitment. Ironically, you can do that by imposing an immediate penalty: by making the distant consequences immediate. Now your impulsive future self will make the decision with all the consequences immediate, and presumably make the same decision as your dispassionate current self, who decides when all the consequences are distant. I argue that real-world commitment devices, even the popular stickK.com, don't fully achieve this, and I introduce Beeminder as a tool that does.
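To make the discounting argument concrete, here's a minimal numerical sketch using the standard hyperbolic discount function V = A / (1 + kD); the rewards, delays, and the value of k are made-up illustration values, not figures from the full post:

```python
# Illustrative sketch: hyperbolic discounting produces the preference
# reversal described above, and an immediate penalty (a commitment device)
# removes it. All numbers are invented for illustration.

def hyperbolic_value(amount, delay_days, k=0.1):
    """Present value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# Option A: small reward soon (skip the work, enjoy tonight).
# Option B: large reward later (do the work, benefit in 30 days).
small, small_delay = 50, 0
large, large_delay = 100, 30

# Deciding far in advance (both options still 60 days away): B looks better.
far_a = hyperbolic_value(small, small_delay + 60)
far_b = hyperbolic_value(large, large_delay + 60)
print(f"From a distance: A={far_a:.1f}, B={far_b:.1f}")    # A=7.1, B=10.0

# Deciding in the moment: A's consequences are immediate and now dominate.
near_a = hyperbolic_value(small, small_delay)
near_b = hyperbolic_value(large, large_delay)
print(f"In the moment:   A={near_a:.1f}, B={near_b:.1f}")  # A=50.0, B=25.0

# A commitment device attaches an immediate penalty to choosing A, making
# the distant consequence immediate. Any penalty greater than 25 here makes
# the in-the-moment choice agree with the distant one.
penalty = 30
print(f"With commitment: A={near_a - penalty:.1f}, B={near_b:.1f}")  # A=20.0, B=25.0
```

An exponential discounter would rank the two options the same way at both decision points, which is why no consistent discount rate can produce this kind of preference reversal; the immediate penalty simply restores agreement between the in-the-moment choice and the distant one.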
(Also related is this LessWrong post from last month, though I disagree with the second half of it.)
My new claim is that akrasia is simply irrationality in the face of immediate consequences. It's not about willpower nor is it about a compromise between multiple selves. Your true self is the one that is deciding what to do when all the consequences are distant. To beat akrasia, make sure that's the self that's calling the shots.
And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device. There are not multiple selves in any real sense. It's just the one true you whose decision-making is sometimes distorted in the presence of immediate consequences, which act like a drug.
24 comments
Comments sorted by top scores.
comment by Vaniver · 2010-11-17T04:04:54.304Z · LW(p) · GW(p)
My new claim is that akrasia is simply irrationality in the face of immediate consequences. It's not about willpower nor is it about a compromise between multiple selves. Your true self is the one that is deciding what to do when all the consequences are distant.
I question this. I can see how it is true in many cases: when both positive and negative consequences are distant, you can judge them in the same light. But I think the opposite is also true: people often underestimate the negative consequences of something until those consequences are staring them in the face.
I mean, if someone believes "I want to be a writer" but does not believe "I want to write," is that akrasia? Or is that just not being self-aware enough? I come down pretty strongly in the latter camp. In cases like that, I wouldn't model procrastinating on writing as irrationality so much as the id responding to the superego: "I know you're caught up in this fantasy, but really, it's not worth it."
Depending on how you count things, it took a few months to a few years for my "I don't want to do physics" to overcome my "I want to be a physicist." If I hadn't been paying attention and thinking "hm, I don't want to do this. Why don't I want to do this?", I could see myself wasting years trying to satisfy myself in a suboptimal way.
Replies from: dreeves↑ comment by dreeves · 2010-11-17T06:10:52.078Z · LW(p) · GW(p)
This is a great point. But my position is that the use of self-binding accelerates the possible discovery that your dispassionate current self is wrong about what you want. If you believe you want to be a writer but never write, then you never actually find out whether you hate writing! Eventually you'll concede that your id is telling you something, but you might actually be wrong: it might just be a problem of activation energy, for example.
So I still side with the long-term self. Decide what you want from a distance, commit yourself for some reasonable amount of time, then reassess. It's the rationalist way: gather data and test hypotheses (in this case about your own preferences). Would you agree that it's hard for the delusion to persist under that scheme?
Replies from: Vaniver↑ comment by Vaniver · 2010-11-17T17:26:45.901Z · LW(p) · GW(p)
Upon rethinking it, I decided that my original position missed the mark somewhat, because it's not clear how "rationality" plays into an id-ego-superego model (which could map to either short-term desires / decider / long-term desires, or immoral desires / decider / moral desires; the first seems more useful for this discussion).
It seems to me that rationality is not superego strengthening, but ego strengthening, and the best way to do that is to elevate whoever isn't present at the moment. If your superego wants you to embark on some plan, consult your id before committing (and making negative consequences immediate is a great way to do that); if your id wants you to avoid some work, consult your superego before not doing it.
And so I think what you've written is spot on for half of the problem, and I agree that your scheme is good at solving that half (and gives insights about the other half).
Replies from: Vive-ut-Vivas↑ comment by Vive-ut-Vivas · 2010-11-17T17:49:56.024Z · LW(p) · GW(p)
It seems to me that rationality is not superego strengthening, but ego strengthening, and the best way to do that is to elevate whoever isn't present at the moment. If your superego wants you to embark on some plan, consult your id before committing (and making negative consequences immediate is a great way to do that); if your id wants you to avoid some work, consult your superego before not doing it.
Thing is, I don't think this actually happens. When I'm being productive and not procrastinating, and I try to sit back and analyze why I'm "on" that day, I might attribute it to something like "hmm, long-term desires seem to be overriding short-term desires today, clearly this is the key". As if, for whatever reason, my short-term self was on vacation that day. My belief is that what's happening is something much more fundamental, and something that we actually have much less control over than we think; the conditions for not-procrastinating were already in place, and I later added on justifications like, "man, I really need to listen to far mode!".
This is why, when I'm having a day where I am procrastinating, those same thoughts just don't move me. It's not the thought that's actually determining your actions ("My desire to make an A in this class SHOULD BE stronger than my desire to comment on Less Wrong, so therefore I am going to override my desire to play on the internet to do work instead"), but the conditions that allow for the generation of those thoughts. I think that's why telling myself "I don't want to do this problem set, but I know I need to" doesn't actually move me... until it does.
YMMV, of course. Others might be able to induce mental states of productivity by thinking really hard that they want to be productive, but I sure can't. It's either there or it isn't. I can't explain why it's there sometimes, but if you ask me in a productive mode why I'm able to get so much more done, well, it's just obvious that far mode is more important.
Replies from: Vaniver↑ comment by Vaniver · 2010-11-17T18:29:21.699Z · LW(p) · GW(p)
This reminds me a lot of Experiential Pica.
I agree with you that the issue for most people is motivation management, not time management. Say I have 30-40 hours a week during which I could sit down to do homework, but only 10 hours a week during which sitting down to do homework will actually result in completed homework. Once I acknowledge that, I can spend those other 20-30 hours a week doing things more valuable than looking at my homework and not doing it.
But I think we have more control over that than we think. Within this model, if I spend 20 of those hours relaxing, there might be 10 more hours I could accomplish homework during. There's also evidence that holding that model is what makes willpower expendable. (Of course, the alternative is some people have limitless wills and other people have limited wills, and they know how they operate.)
So deciding whether you'll listen to id or superego might not be the thoughts you verbalize, but the actions you take to prepare for that decision; the actual decision is an action, not a verbalization! If there are conditions you can place yourself in that strengthen your id or superego, knowing that and preparing accordingly is wise. dreeves' idea of wagering is a pretty good way to place a condition on yourself that confuses your id by introducing the desire to win a wager into the situation, but obviously there are other conditions that should be sought out.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-11-17T19:17:51.918Z · LW(p) · GW(p)
There's also evidence that holding that model is what makes willpower expendable. (Of course, the alternative is some people have limitless wills and other people have limited wills, and they know how they operate.)
The study you cite saw an effect from manipulating theories of willpower, so self-knowledge isn't all that's going on.
Replies from: Vaniver↑ comment by Vaniver · 2010-11-17T19:32:44.515Z · LW(p) · GW(p)
The study you cite saw an effect from manipulating theories of willpower, so self-knowledge isn't all that's going on.
I mention the self-knowledge possibility because I think it's possible that if there are people with limited willpower, they might be able to fake limitless willpower for a time (but not change themselves entirely). Even in that case, the obvious thing to do is pretend it's true until it fails, not hope for it to fail.
comment by taw · 2010-11-24T07:38:16.751Z · LW(p) · GW(p)
You're proposing an immediate self-punishment solution to akrasia. Now where is the evidence that this particular method works for people in the real world? Everyone seems to have their own pet theory, and their own pet solution, but they're all based on flimsy evidence, and the persistence of akrasia in such an akrasia-aware community as this one strongly suggests they're all wrong.
Replies from: dreeves↑ comment by dreeves · 2010-11-24T10:34:56.433Z · LW(p) · GW(p)
I'm accumulating evidence with beeminder.com, and stickk.com has some evidence as well. I think this solution goes straight to the heart of what akrasia really is: a discrepancy between what you want to do and what you end up doing. If you have trouble following through on your intentions (and you're sure they were the right intentions, even in hindsight) then don't just intend, commit.
Replies from: taw↑ comment by taw · 2010-11-25T04:26:37.218Z · LW(p) · GW(p)
I'm with Tyler Cowen here that this is very unlikely to work. The lack of quality data on success percentages from both stickk.com and beeminder.com suggests to me that they don't really work: they have both the data and the incentive (including a short-term financial incentive) to compile such statistics, but have never bothered.
Vague claims that it increases your chances mean nothing, as this group is already self-selected. A simple graph of success chance vs. commitment size would go a long way towards establishing stickk-style commitments as viable.
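The graph I have in mind would be easy to produce given the data; here's a rough sketch in Python, where the contracts.csv file, its column names, and the dollar bins are all hypothetical (neither site publishes data in this form, as far as I know):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of commitment contracts; the file and columns are
# assumptions for illustration (stake_dollars: amount at risk,
# succeeded: 1 if the goal was met, 0 otherwise).
contracts = pd.read_csv("contracts.csv")

# Bucket stakes into a few ranges and compute the fraction of successful
# contracts in each bucket.
bins = [0, 5, 25, 100, 500, 5000]
contracts["stake_bin"] = pd.cut(contracts["stake_dollars"], bins=bins)
success_rate = contracts.groupby("stake_bin", observed=True)["succeeded"].mean()

# Plot success chance against commitment size.
success_rate.plot(kind="bar", xlabel="committed amount ($)",
                  ylabel="fraction of contracts met", rot=0)
plt.tight_layout()
plt.show()
```

Even then, the self-selection problem would remain: people who stake more may simply be more motivated, so at best such a graph would be suggestive rather than causal evidence.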
Replies from: dreeves↑ comment by dreeves · 2010-11-27T02:03:36.089Z · LW(p) · GW(p)
Thanks for pushing back on this -- good to hear this kind of skepticism. So far beeminder's evidence is basically anecdotal -- people seem to stay on track with a commitment device and fall off the wagon again when that pressure is gone. You're quite right about the self-selection problem.
I'm curious: do you yourself suffer from genuine akrasia? (I'm pretty sure Tyler Cowen doesn't.)
Replies from: taw↑ comment by taw · 2010-12-02T11:13:39.073Z · LW(p) · GW(p)
What is "genuine akrasia"?
Every time I failed to keep some GTD-ish system in proper order, for any reason, my life immediately collapsed and nothing got done for weeks, not even urgent things with supposedly severe consequences (which I somehow got away with in the end anyway, but that might be just luck), and my mental state turned into some mixture of pointless stress and fuck-it-all.
Most of the time, there are plenty of things I sort of intend to do but never quite get to, but they tend not to be especially severe.
As far as I can tell, the only thing any threat of punishment would do to me would be to add to the stress, reduce my mental energy, and make me more miserable overall.
Replies from: dreeves↑ comment by dreeves · 2010-12-06T02:48:13.309Z · LW(p) · GW(p)
Ah, I would say you're not genuinely akratic then. Consider these three questions about some hypothetical goal you have:
- How sure are you that you want to do this?
- How sure are you that you can do this?
- How sure are you that you will do this?
If your answers are "totally", "absolutely", and "given historical evidence, not entirely" then it's genuine akrasia and you should self-bind. It may add to the stress but it will force the thing that needs to happen to happen.
For example, if you regularly let your GTD-ish system fall out of order then that -- keeping it in order -- could make a ton of sense to self-bind on. If there's some minimal daily effort that prevents weeks of stress and pain (and, hypothetically, you irrationally allow that to happen often), then the stress of the self-binding is probably totally worth it. It's like paying an insurance premium against screwing yourself over.
Note that the use of self-binding is, technically, blatantly irrational (an agent with consistent preferences gains nothing by restricting its own future options), so for a non-akratic it makes sense that it seems simply crazy.
Replies from: taw↑ comment by taw · 2010-12-06T20:13:19.526Z · LW(p) · GW(p)
David Allen claims that most people's GTD-ish systems fall out of order every now and again, and that this is to be expected: various real changes in life require changes to the system, rethinking of personal priorities, etc., but we're pretty bad at spotting these needs pre-emptively.
I think noticing such problems earlier would be very helpful to me, and I tend to come back on track faster than in the past, but I'm still not terribly happy about it.
My most common akrasia-like problem is frequent gross misestimation of available time when I have high mental energy: most tasks compete for this, and the amount I have is unpredictable, but not random, and I haven't figured out what makes my mental energy more or less plentiful. Some minor correlates are higher room temperature (the 26-28C range seems optimal), more physical exercise during the last week or two, fewer distractions, a cleaner room where I work, frequent naps, and a better-maintained GTD system. These are pretty solid, but that's still not enough to explain most of the variance.
I cannot think of any recent akrasia related to anything that didn't require high mental energy levels, except during some GTD breakdowns.
comment by Vive-ut-Vivas · 2010-11-17T18:05:17.559Z · LW(p) · GW(p)
And although I'm using the multiple selves / sub-agents terminology, I think it's really just a rhetorical device. There are not multiple selves in any real sense.
I would actually dispute this, but that goes into what you actually mean by a "self". I don't see how it's not obvious that there are multiple agents at work; the problem of akrasia is, then, trying to decide which agent actually gets to pilot your brain at that instant. I suspect this is alleviated, to some extent, by increased self-awareness; if you can pick out modes of thought that you don't actually want to "endorse" (like the "I want to be a physicist" versus "I don't want to do physics" example below), you are probably more likely to have the ability to override what you label as "not endorsed" than if you are actually sitting there wondering "wait, is this what I really think? Which mode is me?"
Replies from: dreeves↑ comment by dreeves · 2010-11-17T20:31:32.318Z · LW(p) · GW(p)
I agree that knowing which is the real me is the first step, and I propose that it has a simple answer: the me not under the influence of immediate consequences. So then the next question is how to make sure that me's decisions are the ones that stick.
If we agree on that much then I suppose the question of whether "multiple selves" are real vs a rhetorical device is moot.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-11-17T20:59:07.595Z · LW(p) · GW(p)
knowing which is the real me is the first step, and I propose that it has a simple answer: the me not under the influence of immediate consequences.
We don't agree on that much, actually.
I don't want to get into a whole digression on the concept of "the real me," so hopefully it will suffice to say that insofar as I choose to identify preferentially with any subset of my psyche, I prefer to identify with the sustainably joyful subset, and I have often found that the influence of immediate consequences evokes a more sustainably joyful frame than the working model of long-term consequences it displaces.
Conversely, if we insist on having the "real me" discussion, I propose that the real me is the entire network of mutually activating and inhibiting systems in my head. Making progress towards goals and experiencing akrasia are both manifestations of the real me.
That said, I don't think it matters in the local context. We can ignore the whole question of "which me" and "real me" and "multiple mes" and still address how to best strengthen the mental substructures that make progress towards goals and don't succumb to akrasia.
Replies from: dreeves↑ comment by dreeves · 2010-11-17T22:22:32.344Z · LW(p) · GW(p)
Agreed! (Not sure why you say we don't agree on that much.)
So you want to satisfy your joyful frame or however you want to put it, like eating yummy pie and whatnot. That's fine, but notice how you can appreciate the value of that now, while writing this LessWrong comment. So what's the problem with letting current-you call the shots? You don't seem to be in danger of undervaluing joy.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-11-17T23:17:55.394Z · LW(p) · GW(p)
The degree to which I seem to value joy (judging from my behavior) varies a lot, depending on what else is going on. The degree to which I am doing so now, while writing this LessWrong comment, doesn't seem to be the peak of that curve.
comment by Jonathan_Graehl · 2010-11-16T19:34:57.383Z · LW(p) · GW(p)
In an online setting, wouldn't most people lie in order to avoid paying the forfeit? Other than things like "this won't be the top post in two weeks" which can be checked by anyone, only people who really don't want to feel like a liar would ever take their punishment.
Replies from: dreeves↑ comment by dreeves · 2010-11-16T20:15:35.432Z · LW(p) · GW(p)
If you have a (real-world) friend involved then I don't think this is a problem. I could be naive though. But even if you know deep down that when push comes to shove you'll lie to avoid the penalty, that's still a definite disincentive and may keep you on track.
In any case, lying isn't an issue with the friends and family of mine who are trying it so far. I'd love to have more LessWrong folks try it. If you use the invite code LESSWRONG and indicate that you're willing to try a monetary commitment device, I'll guarantee to set up the first 10 beta applicants right away.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-11-16T21:27:24.901Z · LW(p) · GW(p)
People are mostly (monetarily) committing to things they want to do and know they can do. Lying only enters into it if they try and fail. Perhaps they'll fudge a little if they're just off target.
I'll grant that knowing that doesn't completely destroy the incentive, which is interesting. Someone who finds and uses this tool will likely want to succeed without lying.