Akrasia, hyperbolic discounting, and picoeconomics

post by Paul Crowley (ciphergoth) · 2009-03-29T18:26:11.914Z · LW · GW · Legacy · 86 comments

Akrasia is the tendency to act against your own long-term interests, and is a problem doubtless only too familiar to us all. In his book "Breakdown of Will", psychiatrist George C. Ainslie sets out a theory of how akrasia arises and why we do the things we do to fight it. His extraordinary proposal takes the insights economics gives us into how conflicts are resolved and extends them to conflicts between different agencies within a single person, an approach he terms "picoeconomics". The foundation is a curious discovery from experiments on animals and people: the phenomenon of hyperbolic discounting.

We all instinctively assign a lower weight to a reward further in the future than to one close at hand; this is "discounting the future". We don't just account for the slightly lower probability of receiving a more distant reward; we value it as inherently worth less for being further away. It's been an active debate on overcomingbias.com whether such discounting can be rational at all. However, even if we allow that discounting can be rational, the way that we and other animals do it has a structure which is inherently irrational: the weight we give to a future event is, roughly, inversely proportional to how far away it is. This is hyperbolic discounting, and it is an empirically very well confirmed result.

I say "inherently irrational" because it is inconsistent over time: the relative cost of a day's wait is considered differently whether that day's wait is near or far. Looking at a day a month from now, I'd sooner feel awake and alive in the morning than stay up all night reading comments on lesswrong.com. But when that evening comes, it's likely my preferences will reverse; the distance to the morning will be relatively greater, and so my happiness then will be discounted more strongly compared to my present enjoyment, and another groggy morning will await me. To my horror, my future self has different interests to my present self, as surely as if I knew the day a murder pill would be forced upon me.

If I knew that a murder pill really would be forced upon me on a certain date, after which I would want nothing more than to kill as many people as possible as gruesomely as possible, I could not sit idly by waiting for that day to come; I would want to do something now to prevent future carnage, because it is not what the me of today desires. I might attempt to frame myself for a crime, hoping that in prison my ability to go on a killing spree would be contained. And this is exactly the behaviour we see in people fighting akrasia: consider the alcoholic who moves to a town in which alcohol is not sold, anticipating a change in desires and deliberately constraining their own future self. Ainslie describes this as "a relationship of limited warfare among successive selves".

And it is this warfare which Ainslie analyses with the tools of behavioural economics. His analysis accounts for the importance of making resolutions in defeating akrasia, and the reasons why a resolution is easier to keep when it represents a "bright clear line" that we cannot fool ourselves into thinking we haven't crossed when we have. It also discusses the dangers of willpower, and the ways in which our intertemporal bargaining can leave us acting against both our short-term and our long-term interests.

I can't really do more than scratch the surface on how this analysis works in this short article; you can read more about the analysis and the book on Ainslie's website, picoeconomics.org. I have the impression that defeating akrasia is the number one priority for many lesswrong.com readers, and this work is the first I've read that really sets out a mechanism that underlies the strange battles that go on between our shorter and longer term interests.

86 comments

comment by Aurini · 2009-03-30T00:11:33.599Z · LW(p) · GW(p)

This reminds me of a webcomic, where the author justifies his lack of self improvement, and his continual sucking at life:

"Pfft. I'll let Future Scott deal with it. That guy's a dick!"

http://kol.coldfront.net/comic/ (No perma-link; it's comic 192, in case new ones have been posted since I wrote this.)

When dealing with your future self there's an economic balancing act at play, because Future Self's values will inevitably shift. On the extreme side, if Omega had told Aurini'1989 that if he saved his $10 for ten years, it would grow to the point where he could buy every Ninja Turtle action figure out there, Aurini'1989 would have said, "Yes, but Aurini'1999 won't want Ninja Turtles anymore - however, he will likely value the memory of having played with Ninja Turtles." To hold the Future Self completely hostage to the desires of the present makes as little sense as holding the Present Self hostage to the desires of the future.

It breaks down to a tactical problem (which units do you build first in Civ 4?); I'm glad I spent money on that beer five years ago, because I still find value in the memory. What makes the problem difficult to solve is our fuzzy perceptions. First there's the issue of scope insensitivity; none of our senses are calibrated, including our sense of time. But there's also the issue of inconsistency of self. The 8 AM self which desires to be left alone to drink his coffee and read a book is a wildly different person than the 10 PM self hopped up on whiskey and telling the bartender how it really is.

The first problem is easy enough to correct for; you don't even need to be trained in rationality to accomplish this. Most people, if given the offer of X period of suffering for Y period of benefit, will be able to make a cost/benefit analysis as to whether it is a good deal or not.* Aurini'2001 made this calculation when he joined the army. The numbers are fuzzy, but they're not inestimable. Furthermore, statistical studies (such as education level vs long-term earnings) can be used to bolster these calculations.

The real nut of the problem is the inconsistency of the self. We are wildly different people from moment to moment, regardless of a relatively consistent average over time. We are our values (apologies - I can't find who wrote the original post on this topic). We all have a number of ad hoc techniques we use to stay true to our primary goals, but I'm not sure what the broader solution would be.

I guess what I'm saying is that it isn't so much irrationality that causes you to stay up all night reading instead of getting a good night's sleep. When we consider choices that aren't immediate, most people can make accurate judgements based upon the information they have. The bigger problem is how rapidly our values shift on minutiae. It's not just that the gap between tonight and the morning looms relatively larger than the gap between a month and a month-and-a-day - the bigger problem is that there's more personal variance between those times.

*Regarding the googolplex of dust motes vs a lifetime of torture dilemma: I think the scope insensitivity failure which occurred there is because a lifetime of torture could reasonably be expected to destroy the self; if it had been a week of torture, most people would volunteer, I think. It was an inability to empathize with a googolplex as opposed to an individual.

Replies from: Aurini, Sebastian_Hagen, gwern, None, None
comment by Aurini · 2009-03-30T01:59:13.441Z · LW(p) · GW(p)

Am I allowed to play my own devil's advocate? Autodevil's advocate, if you will (writing down my ideas often helps me criticize them).

Aurini¹'s premise: Short term examples of Akrasia are due primarily to variability of self. Self¹ and Self² are both pursuing their own interests in a rational manner, it's just that their interests are dissonant.

I still think this is largely the case; most instances of regret are either "Knowing what I know now, I wish I hadn't put all my money in Enron," i.e. "I based my choices on incorrect data," or the other possibility, "I wish I hadn't done that last night, but if you press me, I'll admit that I plan to do it again tonight." The second may be foolish, it may be hypocritical, but it's not akrasia per se, because the regret is temporary, not existential.

There is a third type, however, which is distinctly counter-rational. We'll need an example: getting drunk the night before, and failing to show up to traffic court (thus defaulting on an $X00.00 fine which you could have avoided). All Self(x) where x ≠ n agree that this was a poor choice. While there are substantial differences of Self over time, that alone does not denote irrationality; the stark aberration which is Self(n) does.

So how do we start to explain this? On personal reflection, any time I've pulled a Self(n), it starts out subtly. "I'm stressed out about court tomorrow," becomes "I'm going to have a drink to calm down," becomes "Well, that one tasted like three more," becomes "The hell with the world and their stupid laws! I'm going to drink the whole bottle!"

What we've got here is a positive feedback cycle. On the one hand, we can use the ex-alcoholic's strategy of moving to a booze-free town, and try to avoid the downward spiral, but I worry that there's always a new spiral waiting up ahead, one you can't predict and avoid. Better, perhaps, would be identifying and labelling the Akrasia Spiral, being aware of it, and learning to cut it off before it begins.

Easier said than done, mind you.

comment by Sebastian_Hagen · 2009-03-30T10:36:35.237Z · LW(p) · GW(p)

While we're linking to webcomic strips, Miscellanea 2007-11-19 is also quite relevant to this.

comment by gwern · 2009-03-30T02:45:01.699Z · LW(p) · GW(p)

What's wrong with the link http://kol.coldfront.net/comic/index.php?strip_id=188 or http://kol.coldfront.net/comic/istrip_files/strips/20090127.gif ?

(Also: what the heck sort of comic numbering system puts comic #192 at ID #188?)

Replies from: Aurini, ciphergoth
comment by Aurini · 2009-03-30T16:03:29.106Z · LW(p) · GW(p)
  1. Did you just ninja me?

  2. The type of comic written by someone who has no interest in self improvement. :)

Replies from: gwern
comment by gwern · 2010-10-10T01:38:14.036Z · LW(p) · GW(p)

If I knew what ninja meant, perhaps I could answer that.

Replies from: Aurini
comment by Aurini · 2010-10-10T04:37:05.388Z · LW(p) · GW(p)

You found a permalink, while I couldst not.

comment by Paul Crowley (ciphergoth) · 2011-02-09T08:46:31.298Z · LW(p) · GW(p)

Wow, looks like their efforts to defeat permalinking were more thorough than we thought. This link now works:

http://kol.coldfront.net/comic/index.php?strip_id=192

comment by [deleted] · 2015-09-19T13:29:21.265Z · LW(p) · GW(p)

Insulting my future self like that sure makes me less anxious about providing for my future self.

comment by [deleted] · 2015-09-17T14:05:56.272Z · LW(p) · GW(p)

Discounting your future self (e.g. thinking your future self is a dick) can be a strategy to work more efficiently now.

comment by RichardChappell · 2009-03-30T21:38:33.981Z · LW(p) · GW(p)

Akrasia is the tendency to act against your own long-term interests

No, akrasia is acting against your better judgment. This comes apart from imprudence in both directions: (i) someone may be non-akratically imprudent, if they whole-heartedly endorse being biased towards the near; (ii) we may be akratic by failing to act according to other norms (besides prudence) that we reflectively endorse, e.g. morality.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-30T21:43:51.551Z · LW(p) · GW(p)

In that case Ainslie has somewhat turned the word to his own ends. Is there a better word for what he's talking about?

I don't criticise your definition for not cleaving at the joints, because it absolutely seems plausible to me that there could be a unified theory of how we are about morality and how we are about our own long-term interests; the distinction between the two is scarcely made in much of what people say.

Replies from: RichardChappell
comment by RichardChappell · 2009-03-30T23:45:20.342Z · LW(p) · GW(p)

Is there a better word for what he's talking about?

Inter-temporal conflict?

(Part of the problem with misusing language is that it makes it unclear exactly what one has in mind. I assume Ainslie has a broader target than mere imprudence: foreseeable moral failures may provide similar reasons for precommitment, regret, etc. So perhaps he really does mean general akrasia, despite the misleading definition. But does he also take his topic to include 'murder pills' and ordinary cases of [foreseeable] changes to our ultimate values? Or does he restrict himself solely to cases of intertemporal "conflict" involving akrasia -- i.e. whereby both 'selves' share the same ultimate values, and it's simply a matter of helping them "follow through" on these?)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-31T07:29:40.403Z · LW(p) · GW(p)

His topic is specifically those changes of mind that we can anticipate because of hyperbolic discounting.

Replies from: RichardChappell
comment by RichardChappell · 2009-04-01T03:29:58.289Z · LW(p) · GW(p)

Okay, that sounds like 'imprudence', then.

comment by taw · 2009-03-30T12:25:44.869Z · LW(p) · GW(p)

I just have one question; it's so obvious, but I don't remember it being asked anywhere.

Humans and all animals tested use hyperbolic discounting + hacks on top of it to deal with the paradoxes. Why hasn't evolution implemented exponential discounting in any animal? Is it technically impossible given the way the brain works (perhaps a local optimum), or is hyperbolic discounting + hacks better in the real world than exponential discounting?

I think this is a far more fundamental problem than anything else about akrasia.

Replies from: Erik, ciphergoth
comment by Erik · 2009-03-30T12:48:36.035Z · LW(p) · GW(p)

Reading the Wikipedia article on hyperbolic discounting, it seems like there is some evidence for quasi-hyperbolic discounting. Looking at the formula, the interpretation is exponential discounting for all future times considered, but with special treatment of the present.

How to explain this? It is not unlikely that the brain uses one system for thinking about now and another for thinking about the future. Considering the usual workings of evolution, the latter is most likely a much later feature than the former. Considering this, one could perhaps even argue that it would be surprising if there weren't any differences between the systems.
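
For reference, here is the quasi-hyperbolic ("beta-delta") form that literature usually works with - a sketch of the standard parameterisation rather than a quote from the article - which makes the "exponential plus special treatment of the present" reading explicit:

```latex
% Quasi-hyperbolic ("beta-delta") discounting: the present period is weighted
% fully, and every later period carries a one-off present-bias factor \beta
% on top of ordinary exponential discounting \delta^t.
D(t) =
\begin{cases}
  1                   & t = 0 \\
  \beta\,\delta^{t}   & t = 1, 2, 3, \dots
\end{cases}
\qquad 0 < \beta \le 1,\quad 0 < \delta < 1
```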

There seems to be some literature referenced at the wiki article. I suggest looking into it if you are interested. I sadly don't have the time right now.

comment by Paul Crowley (ciphergoth) · 2009-03-30T12:32:45.733Z · LW(p) · GW(p)

I'm curious to know the answer to that one. My guess is that hyperbolic discounting is technically much easier to implement, and the circumstances of animals in the wild provide fewer opportunities for akrasia so it's not worth the cost of fixing. However there is doubtless room for more investigation of the evolutionary psychology of hyperbolic discounting.

EDIT also see this comment

comment by infotropism · 2009-03-29T19:22:19.200Z · LW(p) · GW(p)

I used to have a system of implicit moral contract with myself.

I saw my own situation as an iterated prisoner's dilemma; any of my future selves could defect against the other selves, negating the hopes and dreams of past selves and depriving further selves of certain prospects, all of that for a short-term benefit that would have negative long-term consequences. The first to defect would win something in the moment, while the others lose their investment or their potential. So I tried to keep to my word and plan ahead.

Not sure if I'm still strong willed enough to affirm I'm working like that. Actually, probably not in most cases.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-29T20:07:13.656Z · LW(p) · GW(p)

Ainslie describes it more as a sequence of one-shot prisoner's dilemmas; because we're talking about agencies that differ as time passes, it doesn't necessarily make sense to think of them as existing continuously.

comment by MikeStankavich · 2009-03-30T13:13:54.883Z · LW(p) · GW(p)

I found this article both interesting and informative. I definitely plan to spend some time studying picoeconomics.

One interesting effect that I have found in personal productivity efforts is that applying techniques to enforce resolution and overcome passive resistance can change the perceived emotional weighting between alternatives, often quite rapidly.

For example, let's say I'm reading LW instead of writing a term paper. I've made a (probably irrational) decision that the negative emotion of exerting the effort to write the term paper exceeds the negative emotion of having an incomplete assignment hanging over me. If I apply a pattern interrupt to get me started writing, my emotional weighting will shift, often within a matter of minutes - the effort of writing will not feel nearly as bad as the pressure of the unfinished assignment. Overcoming the emotional inertia of passive resistance shifts the perceived emotional weight of the alternatives.

Of course the challenge is to translate that knowledge into action. Even though I know that the emotional balance will likely shift, that doesn't alter the feeling of initial resistance. That challenge has sold and will continue to sell millions of self help books :) It's like an arms race between future/planning self and present moment self. For every pattern interrupt devised by future self, present self finds a defense to defuse the pattern interrupt and continue the present moment pleasurable activity.

The irony of reading LW as a present moment escape under the nominal guise of strengthening future self's ability to keep present self on course is not lost on me. And on that note, I'm off to get some work done.

Replies from: pjeby
comment by pjeby · 2009-03-30T17:17:20.173Z · LW(p) · GW(p)

You've actually missed a key distinction here: the negative emotion of the incomplete assignment is almost certainly what makes you procrastinate... and you're mistakenly interpreting that negative emotion as being about the writing.

What happens is this: since you feel the unfinished item pressure every time you think about doing the task, you literally condition yourself to feel bad about doing the task. It becomes a cached thought (actually a cached somatic marker) tagging the task with the same unpleasantness as the unpleasantness of it "hanging over you".

So, it's not that the process of writing really bothers you, it's the unfinishedness of the task that's bothering you. However, your logical brain assumes that it means you don't want to write (because it doesn't have any built-in grasp of how emotional conditioning works), and so it looks for logical explanations why the writing would be hard.

When you're busy writing, however, you're not thinking about that unfinishedness, so it doesn't come up -- the somatic marker isn't being triggered. That's not at all the same thing as "shifting the balance".

The actual way to fix this is to make it so you don't feel any pressure to finish the assignment... at which point you'll be able to freely choose to work on it, or not work on it, and won't find yourself looking for ways to avoid the conditioned negative response to the assignment.

Likewise, all the stuff you said about arms races is pure baloney: just a crazy story your logical mind is making up to explain your problems, like anosognosia of the will.

So here's what you do: establish a test for the somatic marker, by thinking about the task, and observing what happens to your body: does your head slump? Your gut clench? Fists tighten? What specific body changes take place, whenever you think about it? If you have trouble, clear your mind, shake out your body, and think about it again, so you can watch the physical state transition as it happens.

Once you've established the test, you can use it as a basis to check the effectiveness of different motivational or belief-change techniques: if a technique actually works, you will no longer respond with the same somatic marker to the original thought... and you will find that your inclinations to the task have also changed.

This is something I do in my work, and I only teach those few techniques (out of the many thousands in self-help books) that I have been able to successfully change somatic markers with. (Not that I've tested ALL of them yet, not by a long shot. And I only bother testing new ones when they have potential to be faster to use or easier to teach or cover a different kind of problem than the ones in my current toolbox.)

Happy self-experimentation. ;-)

Replies from: ciphergoth, MikeStankavich
comment by Paul Crowley (ciphergoth) · 2009-03-30T18:34:27.963Z · LW(p) · GW(p)

In the middle of some useful task, I have more than once said to myself: "I am not hearing any crap about how I never get around to anything, because however true it might be at any other time, I am doing something useful right now. Anything that needs to be said about the need to do things will have to wait for a time when I'm not doing things!" This works very well for me.

comment by MikeStankavich · 2009-04-01T22:42:33.401Z · LW(p) · GW(p)

Thank you for your thoughtful response. As it happens, I disagree with your premise that the negative emotion of the incomplete assignment is almost certainly what makes me procrastinate. Yes, that's a potential factor, but only one of many. For example, there's the difference between the anticipated and actual difficulty of performing a procrastinated task.

But in the spirit of rationality, I will give your suggestions a fair trial. You are absolutely correct that the most effective way to figure out what works is to use the scientific approach - design an experiment to test the hypothesis, test, assess the results, and go from there.

Replies from: pjeby, HughRistik
comment by pjeby · 2009-04-01T23:14:23.356Z · LW(p) · GW(p)

You are absolutely correct that the most effective way to figure out what works is to use the scientific approach - design an experiment to test the hypothesis, test, assess the results, and go from there.

In this case, the hypothesis I bet on is:

  1. You will identify a specific set of physical behaviors (muscle tension changes, viscera sensations, etc.) that accompany the thought

  2. These behaviors are preceded by some mental representation (however brief) of some expected result -- such as being yelled at for not finishing the task, or some other social status-impacting event that could come about as a result of failing to complete it successfully or failing to complete it at all

  3. Identifying and changing the thought process that led to creating and caching the expected outcome will result in the cached thought going away, and possibly taking the somatic marker with it, or at least diminishing it in intensity. If the somatic marker remains or is replaced by a new one, there will be a new cached thought that goes with it.

There are exceptions to this pattern; some somatic markers are straight-up conditioning (i.e., there's no cached predictive thought in play - the marker is directly tied to the initial thought), and some are rooted in what I call "holes in the soul" -- a compulsion to fulfill an emotional need that's not being otherwise met. But most chronic procrastination in my experience follows the "main sequence" I've outlined above.

I used to waste a LOT of time helping people get over the "effort" they perceived associated with doing things... only to find out that it was 99% anosognosia -- misdirected explanations of the pain.

If we don't feel pushed to do something in the first place, then we don't usually experience the time spent as being effortful. So nowadays, I get results a lot faster by focusing on eliminating a handful of feelings associated with NOT doing the task, than the seemingly infinite number of new complaints people can generate about DOING the task.

comment by HughRistik · 2009-04-01T23:12:51.115Z · LW(p) · GW(p)

I really like pjeby's advice, and I think it applies to many types of procrastinating where we have negative feelings associated with the future task (which we can make worse through classical conditioning); the only part of his post I disagree with is his reduction of Mike's problem to negative emotion about the incomplete task. I agree with Mike that pleasure in our current activity can also be a part of procrastination, not just displeasure about the future task.

For instance, I often have trouble stopping reading in order to do incomplete work, but I also have trouble stopping reading to make myself go to bed, even when I'm tired. Now, I enjoy sleeping, and I don't feel negative emotion about it: I just take even more pleasure in reading. Yet, my goal was to go to bed on time.

Replies from: pjeby
comment by pjeby · 2009-04-01T23:27:28.665Z · LW(p) · GW(p)

I agree with Mike that pleasure in our current activity can also be a part of procrastination, not just displeasure about the future task.

It can be... but rarely is in people who suffer from chronic procrastination. Usually, they don't enjoy the thing they're using as an escape. And I'm not aware of anybody who goes out of their way to do something they REALLY enjoy when they're procrastinating. Usually, they go for mind-numbing distraction rather than true involvement or enjoyment.

For the most part, this is one of those areas where trusting your rational mind will lead you astray, because it's just telling you rational lies. It doesn't know what's actually going on, and so just makes up believable stories -- "that terrible LessWrong.com site tempted me and made me avoid my work..."

And this is especially likely to be the case if you're also ashamed of the bad feelings you have about the task...

For instance, I often have trouble stopping reading in order to do incomplete work, but I also have trouble stopping reading to make myself go to bed, even when I'm tired. Now, I enjoy sleeping, and I don't feel negative emotion about it: I just take even more pleasure in reading. Yet, my goal was to go to bed on time.

Quite so... but that's not something I think of as procrastination. It might be akrasia, but if you told that story to a "real" procrastinator, they might find it insulting.

One reason I'm a bit passionate about this, is that while the things you're saying may be true for you, they are not true for chronic procrastination, and not what a procrastinator needs to hear in order to get better. It's akin to telling an alcoholic that lots of people can handle their liquor. It might be possible to teach the alcoholic to handle their liquor, but that's definitely not the first order of business: detoxification is.

Negative emotions (and "seriousness" in most forms) are to a procrastinator what alcohol is to an alcoholic: a drug addiction with serious real-life impact.

(Really, it's only been in the last few weeks that it's even occurred to me myself that negative emotions and seriousness have some practical uses, once you're no longer addicted to them. And I'm still trying to get used to the idea, because I've spent the last year and a half or so trying to eradicate them from my life.)

Replies from: Caspian
comment by Caspian · 2009-04-02T23:44:04.441Z · LW(p) · GW(p)

Here's my current interpretation of where pleasure in the current activity comes into it for me: I would play a computer game, which I think should be pleasurable, and used to be, but I felt guilty about procrastinating on something else, so I didn't enjoy it as much, or perhaps at all.

If I think about stopping before I've gotten the expected enjoyment, that is unpleasant, so I would avoid stopping or thinking about stopping. I would stop eventually and feel bad about having wasted so much time.

It's not so much that the game I picked was inherently mind-numbing or unenjoyable, but that I turned it into something mind-numbing because I was avoiding these unpleasant thoughts.

comment by wnoise · 2009-07-08T20:11:56.904Z · LW(p) · GW(p)

Does hyperbolic discounting mean that the sunk-cost fallacy can be adaptive in certain situations, by "locking in" previous decisions?

comment by Roko · 2009-03-29T19:46:31.963Z · LW(p) · GW(p)

I'd like to hear more about akrasia on LW. It seems to be supremely important.

Replies from: steven0461
comment by steven0461 · 2009-03-29T19:52:49.860Z · LW(p) · GW(p)

I recommend this

comment by Wei Dai (Wei_Dai) · 2009-03-30T02:01:34.334Z · LW(p) · GW(p)

Does evolutionary psychology provide an explanation for hyperbolic discounting? I found one explanation at http://www.daviddfriedman.com/Academic/econ_and_evol_psych/economics_and_evol_psych.html#fnB27 but it doesn't seem to apply to the example of preference reversal between sleeping early and staying up.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-23T06:25:31.011Z · LW(p) · GW(p)

I asked earlier:

Does evolutionary psychology provide an explanation for hyperbolic discounting?

There are actually many attempts to answer this question in the economics literature. I'm not sure why I didn't find them earlier.

One paper is Uncertainty and Hyperbolic Discounting and others can be found by searching for papers that cite it.

Replies from: bruno-mailly
comment by Bruno Mailly (bruno-mailly) · 2018-08-02T20:18:45.557Z · LW(p) · GW(p)

Basically: In the ancestral environment, future gains were THAT unsure.

BTW, I would not be surprised if evolution led populations that endured bad seasons to become better at planning, especially long-term, and if this played a role in the Enlightenment and the Industrial Revolution.

Edit: Cold climates demand more intertemporal self-control than warm climates

comment by UnholySmoke · 2009-03-30T10:22:54.386Z · LW(p) · GW(p)

The problem I have with considering future discounting is that it forces me to formulate a consistent personal identity across time scales longer than a few moments. I've never successfully managed that.

To my horror, my future self has different interests to my present self

Can you describe 'my future self' without any sort of pronoun? If you could do that, the horror might, y'know, go away a bit. Thou art physics, after all.

as surely as if I knew the day a murder pill would be forced upon me.

Not quite as surely, otherwise you'd be taking steps to stop your mind changing day to day. This is exaggeration, even though the analogy works to a degree.

consider the alcoholic who moves to a town in which alcohol is not sold, anticipating a change in desires and deliberately constraining their own future self

So is 'the alcoholic' the optimiser that wants to keep drinking? Or the optimiser that wants to stop drinking? Or both? This article shows a tendency to view the mind as a point-particle of desire and value. It's nothing of the sort. We all have to accept that identity is a fuzzy, shifting entity. Trusting in rationality is an early step towards resolving this problem.

Replies from: AlexU
comment by AlexU · 2009-03-31T11:55:41.533Z · LW(p) · GW(p)

Good points. I share your concern. But it's not clear which direction rationality cuts in this case. If I have no special attachment to the "me" of one year from now, why should I sacrifice present interests for his? On the other hand, I've been wondering recently if it's possible to salvage our folk concept of identity by positing that, while "me" at T2 might not be "me" in any robust sense, 1). there will be a person (or locus of consciousness, if you will) at T2 who thinks he's me, and shares many of my memories and behavioral predispositions, and 2). that person will be disproportionately influenced by my actions today. I think it follows from ethical considerations, then, if not prudential ones, that I should act today in a way that is in keeping with my best interests, so as not to unduly harm that future person.

Now, what would really be interesting would be if we discovered that the "rational" thing to do would be some averaging of the two extremes -- i.e., I continue to act generally in my future best interests, but also prioritize present and near-term happiness to a much greater degree than seems naively appropriate.

comment by JulianMorrison · 2009-03-30T15:02:07.186Z · LW(p) · GW(p)

I wonder if hyperbolic discounting uses the visual processing system? It certainly works like perspective foreshortening.

Replies from: pjeby
comment by pjeby · 2009-03-30T17:00:25.943Z · LW(p) · GW(p)

I wonder if hyperbolic discounting uses the visual processing system? It certainly works like perspective foreshortening.

Richard Bandler's concept of "submodalities" strongly suggests a connection, since it uses image properties like distance, size, brightness, etc. to manipulate emotional response to imagined goals and behaviors. (An example of "front seat" driving -- i.e., directly manipulating the drivers of our behavior, rather than attempting to work around them.)

comment by Alex · 2009-03-30T09:52:55.650Z · LW(p) · GW(p)

Excellent article and topic. I suffer from this. My main problem (which is merely an excuse) is that there is a difference between what I think I want to do and what my body and mind actually want to do when they're doing the things I tell them to do. Multiple selves become evident when this happens. The self that has planned the actions, and the self that - in doing those actions - gives up to do other (more fun) things. My approach is to make successive changes to the self who does those actions that 'I' plan, by trying to implement rules for him to follow. But I find it a constant uphill struggle, because he always outsmarts me.

Replies from: Aurini
comment by Aurini · 2009-03-30T10:18:52.756Z · LW(p) · GW(p)

Would you mind expanding on this?

comment by Pablo (Pablo_Stafforini) · 2009-03-30T05:07:39.410Z · LW(p) · GW(p)

His extraordinary proposal takes insights given us by economics into how conflict is resolved and extends them to conflicts of different agencies within a single person, an approach he terms "picoeconomics".

I haven't read Breakdown of Will, but Thomas Schelling makes a similar proposal in his articles on "egonomics".

On the broader question of how to respond to irrationality, I strongly recommend Jon Elster's chapter (including the references) in Explaining Social Behavior.

comment by dclayh · 2009-03-29T19:27:32.882Z · LW(p) · GW(p)

I haven't worked out the mathematical details, but qualitatively it seems to me that we discount the future more than expected because we don't know what our desires will be like in the future (but nevertheless want to maximize the happiness of our future selves, whatever that may consist in). This means whatever action I take now to benefit my future self has an extra decrement in (present) utility because of my uncertainty in how much the future self will be benefited. And then there's the higher-order effect that the future self may have turned into someone the present self doesn't want to benefit (maybe they want to gruesomely murder many people).

For example, I might pay less for tickets to the Bayreuth festival five years from now than I would for this year's tickets not only because of the loss of interest income, chance that the festival won't be around in five years, etc., but also because of the chance that I simply won't like Wagner as well in five years as I do now. (Of course, theater tickets are easily resalable, which should ameliorate the effect somewhat in that case.)

Therefore, since the murder pill is an example where you know exactly what your values will be at a future point, it may not be relevant to many more common instances of akrasia.

Replies from: gjm
comment by gjm · 2009-03-29T20:25:06.297Z · LW(p) · GW(p)

There are many different ways in which we could discount the future. The problem with almost all of them -- including the "hyperbolic" discounting Ainslie describes -- is not (necessarily) the mere fact that they discount, nor that they discount too much, but that they discount inconsistently: given times t1,t2,t3,t4, the relative importance of times t3 and t4 as seen from t1 is not the same as their relative importance as seen from t3. Or, to put it differently: if I apply t2-as-seen-from-t1 discounting together with t3-as-seen-from-t2 discounting, I don't get the same as if I apply t3-as-seen-from-t1 discounting.

It is possible to discount the future consistently, but there's basically only one degree of freedom when you choose how to do so. If you give events a time t in the future weight proportional to (constant)^t then that's consistent. It doesn't open you up to the bug ciphergoth describes, where your judgement now is that times t1 and t2 are almost equally important, whereas when t1 comes along you regard it as much more important than t2. (If you don't discount at all, that's the special case where the constant is 1.)

Ooo, no, actually you have more degrees of freedom than that: the most general scheme is that you choose a function F(t) and weight things according to that function. (Important note: one function, and its argument is absolute time, not time difference.) But the exponential case is the only possibility if you want your discounting function to be invariant if your whole life is shifted in time. (Which you might not -- if, e.g., there are external events that make a big difference.)

Anyway, the point is: it's not discounting "more than expected" that's the issue, it's having a pattern of discounting that's not internally consistent.
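
Spelling that out symbolically (my own sketch, writing w(t | s) for the weight given to an event at time t as judged from vantage point s, and using 1/(1 + k·delay) as the stand-in hyperbola): compare how much two fixed future times t3 < t4 matter relative to each other.

```latex
% Exponential: weight of an event at time t, judged from time s, is \delta^{t-s}.
\frac{w(t_3 \mid s)}{w(t_4 \mid s)}
  = \frac{\delta^{\,t_3 - s}}{\delta^{\,t_4 - s}}
  = \delta^{-(t_4 - t_3)}
  \qquad \text{(independent of the vantage point } s\text{)}

% Hyperbolic: weight is 1/(1 + k(t - s)).
\frac{w(t_3 \mid s)}{w(t_4 \mid s)}
  = \frac{1 + k(t_4 - s)}{1 + k(t_3 - s)}
  \qquad \text{(grows as } s \text{ approaches } t_3\text{, hence the preference reversal)}
```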

Replies from: dclayh, ciphergoth
comment by dclayh · 2009-03-29T22:02:14.149Z · LW(p) · GW(p)

Okay, I see how my comment was off-target. To explain the pattern described would require something more along the lines of "People know that the state of things (external or internal) can change quickly, yet over the long term tend to regress to the mean. Therefore they privilege the present over the immediate future, but regard two points in the far future as the same, having no way to distinguish between them." But that's both speculative and fairly empty of content.

comment by Paul Crowley (ciphergoth) · 2009-03-29T21:15:25.907Z · LW(p) · GW(p)

You mean F(t_1, t_2) where t_1 is the decision time and t_2 is the time of the event whose utility is weighed. Yes, that's the general form, but we assume that discounting is roughly constant across time (i.e. depends only on t_2 - t_1).

I guess it would mesh with our instincts if discounting varied with age, but in the simpler special case where we consider only timespans that are short relative to our whole lives the theory works well; there's room to consider how this extends to a more general theorem.

Replies from: gjm
comment by gjm · 2009-03-30T00:29:14.860Z · LW(p) · GW(p)

No, that's too general; for instance, hyperbolic discounting is F(a,b) = 1/(1+k(b-a)), but hyperbolic discounting is inconsistent in the relevant sense. For consistency we need F(a,b) F(b,c) = F(a,c), or equivalently F(b,c) = F(a,c) / F(a,b) = G(c)/G(b) where G(t) = F(a,t). (Note that the dependence on a has gone away.) This is equivalent to discounting things at time t by a factor G(t), which is the general form I described.

Depending only on time differences is the same thing as being invariant under time-shifting your whole life.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-30T15:00:12.172Z · LW(p) · GW(p)

I started getting into this, but there's not really much point - the important thing is that we agree that if we require that preferences be invariant under time-shifting and not reverse as the choices approach, then only exponential discounting meets these criteria (treating not discounting at all as a special case of exponential discounting).

Replies from: gjm
comment by gjm · 2009-03-30T19:57:08.720Z · LW(p) · GW(p)

Right.

comment by kremlin · 2013-03-18T07:58:25.380Z · LW(p) · GW(p)

If we assume that (a) future discounting is potentially rational, and that (b) to be rational, the relative weightings we give to March 30 and March 31 should be the same whether it's March 29 or Jan 1, does it follow that rational future discounting would involve exponential decay? Like, a half-life?

For example, assuming the half life is a month, a day a month from now has half the weighting of today, and a month from that has half the weighting of that, and so on?
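
Under assumptions (a) and (b), exponential decay is indeed what falls out (as the exchange above concludes), and a half-life is just a convenient way to parameterise it. A quick numeric sketch, where the dates and the one-month half-life are purely illustrative:

```python
def weight(days_ahead, half_life_days=30.0):
    """Exponential discounting written as a half-life: the weight halves every 30 days."""
    return 0.5 ** (days_ahead / half_life_days)

# Relative weighting of March 30 vs March 31, judged from Jan 1
# (88 and 89 days ahead) and from March 29 (1 and 2 days ahead).
for vantage, d30, d31 in (("Jan 1", 88, 89), ("March 29", 1, 2)):
    ratio = weight(d30) / weight(d31)
    print(f"from {vantage}: weight(Mar 30) / weight(Mar 31) = {ratio:.4f}")

# Both ratios come out to 2 ** (1/30), about 1.0234 -- the same from either
# vantage point, which is exactly the consistency hyperbolic discounting lacks.
```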

comment by Wilka · 2010-11-04T13:13:36.964Z · LW(p) · GW(p)

I was reminded of this post by a blog article I've just read: http://youarenotsosmart.com/2010/10/27/procrastination/ - it covers the same topic, but I think it presents it in an easier-to-grasp way for folks who aren't actively trying to be more rational.

Replies from: Observer
comment by Observer · 2012-01-07T13:57:03.855Z · LW(p) · GW(p)

Thanks for linking to that. It was helpful for me.

comment by AshwinV · 2014-04-10T06:11:24.242Z · LW(p) · GW(p)

Excellent food for thought. I especially loved the point relating the distance of the reward to the actual rewarding process itself, and yes, defeating akrasia is the one thing that is (probably) most relevant to LessWrong readers. This is because most LW-ers are by nature (probably) smarter than their immediate surroundings, and it is not understanding of situations that is holding them back. It then (yes, probably!) boils down to either interpersonal skills and/or akrasia. And the two are not completely mutually exclusive.

This is, at least at first glance, an important step in the right direction.

comment by Gunnar_Zarncke · 2014-04-02T22:57:10.501Z · LW(p) · GW(p)

Ainslie has written quite a few interesting papers in the meantime: http://picoeconomics.org/articles2.html

comment by geoffb · 2009-03-31T11:41:20.272Z · LW(p) · GW(p)

I generally object to the use of the term "rational" as a moral pejorative in the way it's used in the article. We are all dealing with imperfect information, quantum uncertainty and human identity issues. We may suck at it, but the advice to just be more "rational" is insulting to people who are trying their best in an imperfect world. If you think discounting or valuing the future is so easy that you can bandy about words like "rational", then I can show you how to make a killing in the mortgage-backed securities market.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-01T14:31:06.187Z · LW(p) · GW(p)

Rereading, I do not think I could have done more to make it clear that I reject the moralism you accuse me of. As I say above, in large part I'm interested in this subject because I suffer from the failings it describes, and I think Ainslie goes a long way towards showing why it's not just a question of "trying to be more rational".

comment by Annoyance · 2009-03-29T18:30:13.867Z · LW(p) · GW(p)

Akrasia is a result, not a problem. A symptom, not the disease.

Replies from: jimrandomh
comment by jimrandomh · 2009-03-29T19:41:48.106Z · LW(p) · GW(p)

Dark art argument. It is better to treat underlying diseases than symptoms, but akrasia is neither a disease nor a symptom, except by analogy. Using the analogy to argue that we should address things that cause akrasia, rather than akrasia itself, is a circular argument: the conclusion justifying the analogy and vice versa.

If there are things which cause akrasia which can be addressed directly, then addressing them could be an effective way of addressing akrasia. However, if the only reason these things are bad is because they cause akrasia, then whether it is better to address them or to address akrasia directly depends on which is more effective, which depends on specifics and practicalities that can't be generalized away.

Replies from: Cyan, Annoyance
comment by Cyan · 2009-03-29T21:00:55.770Z · LW(p) · GW(p)

A recommendation: be careful not to use "dark art argument" as a fully general counter-argument. If you see a logical flaw, state it; if you detect an attempt to manipulate, dissect it. (You did do this, but it's still a useful recommendation.) Not only is the term "dark arts" jargon and prejudicial, but the Dark Arts are such a grab-bag of tricks and traps that merely labeling some argument as "dark arts" barely adds any information at all.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-29T21:06:23.517Z · LW(p) · GW(p)

Yes, I got the same impression. Annoyance's advice is vague, useless, condescending, trying to sound like it has something profound to say without being specific, sonorous-sounding, promising help without offering any. It is not, however, particularly Dark Side Epistemology.

Replies from: Cyan, jimrandomh, Annoyance
comment by Cyan · 2009-03-29T22:47:51.737Z · LW(p) · GW(p)

I was thinking of your Dark Side Epistemology and Yvain's Dark Arts as two more-or-less separate things, the key difference stemming from the fact that Dark Arts are perpetrated on others and Dark Side Epistemology is perpetrated on one's self.

comment by jimrandomh · 2009-03-29T21:42:00.002Z · LW(p) · GW(p)

You're right that the phrase 'dark side' (and all other phrases of the form 'dark X') should probably be avoided. That bit was in reference to Defense Against The Dark Arts, which Annoyance's post reminded me of.

comment by Annoyance · 2009-03-30T18:04:03.079Z · LW(p) · GW(p)

Remarkable. How exactly did you come to be a fan of Zen Buddhism? Since most of the comments you object to are direct references to it, and the remainder are generally references to other philosophical traditions, many of which are well-known in popular culture and are quite easy to find and understand with a few quick web searches, I can't quite grasp why you can't perceive the value you claim to find in those things in my references to them.

Perhaps your desire to shoot the messenger overwhelms your ability to appreciate the message. Or perhaps you don't actually have any appreciation for the traditions you make reference to. Or both, of course.

comment by Annoyance · 2009-03-30T16:09:28.119Z · LW(p) · GW(p)

"Dark art argument."

Shibboleth applause light.

Nothing causes akrasia. There is no such thing as akrasia. 'Akrasia' is the label you apply to a phenomenon you don't understand and you really need to think about more deeply.

Here, I'll make this simple: Socrates was right. What argument is unspoken but necessary to make Socrates' statement correct?

Replies from: pjeby
comment by pjeby · 2009-03-30T16:55:58.326Z · LW(p) · GW(p)

Nothing causes akrasia. There is no such thing as akrasia. 'Akrasia' is the label you apply to a phenomenon you don't understand and you really need to think about more deeply.

It would probably help if you pointed out that the reason we have the illusion of akrasia is that people's built-in systems for modeling the intentions of other people generate mistaken predictions about motivation and decisions when applied to one's self. It's sort of like looking at yourself in a funhouse mirror, and mistakenly believing you're fatter or thinner than you actually are.

In reality, it's not that you don't follow through on your will, it's that you've failed to understand (or even observe) how your behavior works in the first place, let alone how to change it. Most descriptions of akrasia and how to deal with it (including what I've read of Ainslie's so far), strike me as trying to explain how to steer a car from the back seat, by tying ropes to the front wheels, or by building elaborate walled roads to keep the car going in the right direction.

It makes me want to scream, "but you're not even looking at the dashboard or touching the controls!" Those things are not even IN the back seat.

They're looking for information in the human parts of the mind, while entirely ignoring the fact that the secrets of our behavior and decisions CAN'T be there, or animals couldn't live their entire lives without ever having a single rational, logical, or "economical" thought.

Thought is not the solution here, it's the problem. And the answers are in the FRONT seat -- in the mind-body connection. In emotions, and their somatic markers. In the internal sensory (not verbal!) representations of available choices and expected outcomes. All that equipment that was (evolutionarily) there LONG before the back-seat driver showed up and started critiquing which way the car is going.

And the back-seat driver is only confused because he thinks he's the one who's supposed to be driving... when he's really only there to wave out the window and yell at the other drivers.

And maybe persuade them... that he knows where he's going.

Replies from: Annoyance
comment by Annoyance · 2009-03-30T17:20:41.568Z · LW(p) · GW(p)

"It would probably help if you pointed out that the reason we have the illusion of akrasia is because people's built-in systems for modeling the intentions of other people, generate mistaken predictions about motivation and decisions when applied to one's self."

Jorge Luis Borges once asked a famous question: "What is the only word that cannot be used in a riddle whose answer is 'time'?"

What's the one thing I can't state openly in my attempt to get people to recognize a certain truth for themselves?

Pointing out what you suggest would utterly defeat my purpose, and in the long run, would fail to help those I'm speaking to. Since those people don't seem capable of grasping my point anyway, maybe you're right that it would have been helpful to just state it directly.

Replies from: pjeby
comment by pjeby · 2009-03-30T17:31:05.952Z · LW(p) · GW(p)

Pointing out what you suggest would utterly defeat my purpose, and in the long run, would fail to help those I'm speaking to.

You're using a style of teaching that works better on disciples than on random strangers. The strangers appreciate being given more substantial hints that give them a basis for believing that you actually have something useful to say, and for being motivated to think about what you're asking them to think about.

Parables before koans, in other words.

Since those people don't seem capable of grasping my point anyway, maybe you're right that it would have been helpful to just state it directly.

If you state it directly, those who are interested at least have the option of checking into it. Not stating it directly means random wandering for years, wondering whether you're getting anywhere near it.

Having spent years wandering this particular desert for myself, I don't see any reason to make other people do it, too. The least I can do is point out a few landmarks and share a few travel tips.

Replies from: Annoyance
comment by Annoyance · 2009-03-30T17:44:31.094Z · LW(p) · GW(p)

I have an unfortunate tendency to initially overestimate the intelligence of people I talk to. Stories and pointers that I consider trivially obvious seem to be too complex for most of the people here.

Per your later point: I happen to believe that a certain amount of desert travel is not only desirable but absolutely necessary. Wandering in circles should obviously be avoided, but it's not possible to give the answers to people, in roughly the same way that jokes usually can't be effectively explained. High-level conscious understanding has nothing to do with "getting the joke", and if people don't recognize the implied contradiction in the material for themselves, the resolution of tension that we call 'humor' isn't produced.

A long time ago, I realized that there are two kinds of mystic obscurantism. The first belongs to lesser schools, that try to hide their ideas so that outsiders won't get them. The second belongs to the greater schools, that try to speak truths clearly and directly, but that most can't understand.

Finding a way to convey the truths of enlightenment to people who haven't reached that level is difficult, and possibly counterproductive. I will think further on this matter.

Replies from: JulianMorrison, MichaelHoward, Cyan, Eliezer_Yudkowsky
comment by JulianMorrison · 2009-03-30T20:49:56.196Z · LW(p) · GW(p)

You're arrogant. Also obnoxious. And completely failing at Zen. If this were a koan the teacher would be chasing you out of the temple with a stick, thwacking you as you run. What on earth do you expect to gain by insulting people's intelligence and berating them for not getting koans (and ignoring the evidence that they got, shrugged, and said "so?"). When has sneering ever been a technique of instruction? You should demote yourself to bottom-most neophyte and restart from sitting.

Convey the truths of enlightenment? *thwack*

Replies from: timtyler
comment by timtyler · 2009-03-30T21:17:03.152Z · LW(p) · GW(p)

Considering how irritating "Annoyance" is, their nickname is apt-ironic :-(

Replies from: Eliezer_Yudkowsky, thomblake, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-30T22:28:49.260Z · LW(p) · GW(p)

If this is not Caledonian, then there are two Caledonians.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-03-30T23:02:42.429Z · LW(p) · GW(p)

A whole country of them, north of England.

comment by thomblake · 2009-04-02T21:37:10.213Z · LW(p) · GW(p)

Did you honestly think that wasn't intentional?

I like to think Socrates' "gadfly" worked into the nickname.

comment by Paul Crowley (ciphergoth) · 2009-03-30T21:34:14.399Z · LW(p) · GW(p)

They do it to give themselves permission.

comment by MichaelHoward · 2009-04-01T13:14:23.933Z · LW(p) · GW(p)

I have an unfortunate tendency to initially overestimate the intelligence of people I talk to. Stories and pointers that I consider trivially obvious seem to be too complex for most of the people here.

I suspect you tend to overestimate the transparency of your writing rather than the intelligence of the reader, and tend to underestimate the inferential distance you need to cover to communicate rather than the complexity of what you're communicating.

comment by Cyan · 2009-03-30T18:03:32.121Z · LW(p) · GW(p)

I have an unfortunate tendency to initially overestimate the intelligence of people I talk to. Stories and pointers that I consider trivially obvious seem to be too complex for most of the people here.

I'd say what you're overestimating is how much like you other people are. (Perhaps you consider that statement redundant.)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-03-30T18:08:54.184Z · LW(p) · GW(p)

It could also straightforwardly result from all kinds of self-overestimation.

Replies from: Annoyance
comment by Annoyance · 2009-03-30T18:14:43.000Z · LW(p) · GW(p)

When a particular koan is considered to be easy, and people don't get it, my estimation of them drops.

When the meaning can easily be found by conducting a quick Google, and people don't search for it yet demand to be told what it means, my estimation of them drops.

And when people talk about rationality and becoming more rational, but don't make an effort to be so or to do so, guess what happens?

Perhaps I should just give up. I'm not certain what good I can do in a community that collectively never considered the possibility that certain ideas are communicated only indirectly for good reasons.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2009-03-30T18:26:52.553Z · LW(p) · GW(p)

Very simple! Write a post explicitly explaining why certain ideas are communicated only indirectly for good reasons! And if you think that idea is itself communicated only indirectly for good reasons... sounds to me like a too-convenient coincidence.

Replies from: pre
comment by pre · 2009-03-30T22:54:16.747Z · LW(p) · GW(p)

"why certain ideas are communicated only indirectly for good reasons" by Pre.

Rather than use an obscure example like Zen, we'll use a fairly simple idea: Learning how to catch a ball.

Now I can directly explain to you how a ball is caught. I can describe the simultaneous ballistic equations that govern the flight of the ball, instruct you on how to alter your idea of where the ball will land based on Bayesian reasoning given certain priors and measured weather conditions.

These things are almost certainly needed if you're gonna program a computer to catch a ball.

If you're gonna teach a human to catch a ball though, you're just going to have to throw a lot of balls at them and tell 'em to keep their eyes on it.

I suspect most Zen koans are just poor jokes, but if there's a point to 'em it's the same as the point of throwing those balls at a student catcher.

Just to get you to practice thinking in that way. Because you, as a human, will become better at the things you practice.

If the thing you are practising is spouting existential bullshit this may or may not be a good idea. ;)

Replies from: Dustin, thomblake
comment by Dustin · 2009-03-30T23:42:16.952Z · LW(p) · GW(p)

But first explaining how to catch a ball won't keep the person from then learning how to catch it.

Replies from: Annoyance, loqi
comment by Annoyance · 2009-03-31T16:04:35.393Z · LW(p) · GW(p)

If you say you're teaching someone how to catch balls, and then provide them with sequences of equations, there's a dangerous meta-message involved. You're conveying the (unspoken, implicit) idea that the equations are what's needed to make the student good at catching.

If the student then believes that because they've mastered the equations they've learned how to catch, they'll go out into the world - and fail and fail and fail.

One real-life example of this may be people who attain high status in martial arts training schools and then get themselves slaughtered in actual fights, where the only rules are those of physics and people have chosen optimized strategies for reality.

comment by loqi · 2009-03-31T01:44:12.923Z · LW(p) · GW(p)

In fact, such an explanation can help to assure them that catching a ball is possible before they commit to practicing.

Replies from: Annoyance
comment by Annoyance · 2009-04-01T14:38:06.515Z · LW(p) · GW(p)

I would expect their real-life experience to be sufficient to convince them that it's possible to catch a ball.

More importantly, if they're not sure that's possible, they shouldn't be looking for someone to teach them how to do it. They should be trying to determine if it's possible before they do anything else.

comment by thomblake · 2009-04-02T21:41:16.712Z · LW(p) · GW(p)

These things are almost certainly needed if you're gonna program a computer to catch a ball.

Incidentally, that's not how we tend to program computers to do things like catch balls (successfully). We instead build a sort-of general learning system attached to grasping and visual systems, and then teach it how through observation.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-30T17:56:05.645Z · LW(p) · GW(p)

My own reaction, frankly, is "Take your unoriginal bluffs elsewhere."