On dollars, utility, and crack cocaine

post by PhilGoetz · 2009-04-04T00:00:24.951Z · LW · GW · Legacy · 100 comments

The lottery came up in a recent comment, with the claim that the expected return is negative - and the implicit conclusion that it's irrational to play the lottery.  So I will explain why this is not the case.

It's convenient to reason using units of equivalent value.  Dollars, for instance.  A utility function u(U) maps some bag of goods U (which might be dollars) into a value or ranking.  In general, u(kn) / u(n) < k for k > 1.  This is because a utility function (typically) exhibits diminishing marginal utility.  The marginal utility to you of your first dollar is much greater than the marginal utility to you of your 1,000,000th dollar: it increases the possible actions available to you much more than your 1,000,000th dollar does.

Utility functions are sigmoidal.  A serviceable utility function over one dimension might be u(U) = k * (1 / (1 + e^(-U)) - 0.5).  It's steep around U = 0, and shallow for U >> 0 and U << 0.
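
To make this concrete, here is a minimal numerical sketch of that function (the constant k and the wealth scale are arbitrary choices for illustration):

```python
import math

def u(U, k=1.0):
    """Toy sigmoidal utility: steep around U = 0, flat for |U| >> 0."""
    return k * (1.0 / (1.0 + math.exp(-U)) - 0.5)

# Marginal utility of one more unit of wealth, at various wealth levels
# (U is in arbitrary scaled units, not raw dollars):
for wealth in (-10, -1, 0, 1, 10):
    print(wealth, u(wealth + 1) - u(wealth))
```

The marginal values are largest near U = 0 and shrink rapidly in both directions, which is the u(kn) / u(n) < k behavior described above.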

Sounds like I'm making a dry, academic mathematical point, doesn't it?  But it's not academic.  It's crucial.  Because neglecting this point leads us to make elementary errors such as asserting that it isn't rational to play the lottery or become addicted to crack cocaine.

For someone with $ << 0, the marginal utility of $5 to them is minimal.  They're probably never going to get out of debt; someone has a lien on their income and it's going to be taken from them anyway; and if they're $5 richer it might mean they'll lose $4 in government benefits.  It can be perfectly reasonable, in terms of expected utility, for them to play the lottery.

Not in terms of expected dollars.  Dollars are the input to the utility function.

You might expect that u(U) = 0 for all U < 0, because you can always kill yourself.  Once your life is so bad that you'd like to kill yourself, it could make perfect sense to play the lottery, if you thought that winning it would help.  Or to take crack cocaine, if it gives you a few short intervals over the next year that are worth living.
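
A toy expected-utility calculation under that assumption (the debt, ticket price, jackpot, and odds below are all made-up illustrative numbers):

```python
import math

# Utility floored at 0 for negative wealth ("you can always kill
# yourself"), sigmoidal above it, as in the toy function earlier.
def u(U, k=1.0):
    return 0.0 if U < 0 else k * (1.0 / (1.0 + math.exp(-U)) - 0.5)

wealth = -50_000      # hopelessly in debt
ticket = 5
jackpot = 1_000_000
p_win = 1e-7          # illustrative odds

eu_play = p_win * u(wealth - ticket + jackpot) + (1 - p_win) * u(wealth - ticket)
eu_pass = u(wealth)
print(eu_play, eu_pass)  # eu_play > 0 = eu_pass
```

Losing costs nothing in utility (the floor absorbs it), while the small chance of winning contributes positive utility, so playing weakly dominates not playing.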

Why is this important?

Because we look at poor folks playing the lottery, and taking crack cocaine, and we laugh at them and say, "Those fools don't deserve our help if they're going to make such stupid decisions."

When in reality, some of them may be making much more rational decisions than we think.

If that doesn't give you a chill, you don't understand.

 

(I changed the penultimate line in response to numerous comments indicating that the commenters reserve the word "rational" for the unobtainable goal of perfect utility maximization.  I note that such a definition defines itself into being irrational, since it is almost certainly not the best possible definition.)

100 comments


comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T03:58:23.646Z · LW(p) · GW(p)

1) Lottery tickets are bought using income that is after tax, after debt, and after loss of government benefits.

2) Many people buy more than one lottery ticket; they spend hundreds of dollars per year or more.

3) There was a period during which poor folks had reason to legitimately distrust banks and played the illegal numbers game as a sort of stochastic savings mechanism, up to 600-to-1 payouts on 1000-to-1 odds, which meant they did get large units of cash occasionally (see the quick arithmetic after this list). Post-FDIC this is no longer a realistic motive and the odds on the government lotteries are worse.

4) Yes, your life can suck, yes, the lottery can seem like the only way out. But this is not a reasoned decision based on having literally no better life-improving use for hundreds of after-tax dollars. It is based on the lure and temptation of easy money to a mind that can't multiply.

5) Those who buy tickets will not win the lottery. If you think the chance is worth talking about, you've fallen prey to the fallacy yourself. In ordinary conversation odds of one in a hundred million of being wrong would correspond to a Godlike level of calibrated confidence. Therefore I say simply, "You WILL NOT win the lottery!" with far more confidence than I say that the Sun will rise tomorrow (since something could always... happen... overnight). On a statistical basis, before selective reporting, lottery winners are nonexistent - you would never encounter one if you lived in the ancestral environment and had no newspapers. The lottery is simply a lie.
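
Regarding point 3, the arithmetic behind those odds (the modern-lottery payout rate below is an assumed round figure, not a quoted statistic):

```python
# Numbers game: 600-to-1 payout at 1-in-1000 odds.
numbers_ev = 600 * (1 / 1000)   # 0.60 returned per dollar bet

# Typical modern state lottery: roughly half of ticket revenue paid
# out in prizes (an assumed round figure, for comparison).
lottery_ev = 0.50

print(numbers_ev, lottery_ev)
```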

Replies from: PhilGoetz, bogdanb
comment by PhilGoetz · 2009-04-04T05:54:25.203Z · LW(p) · GW(p)

Point #1 is wrong. Point #2 is consistent with my idea. Point #4 is not a point, but a conclusion presented as a point. Point #5 requires reformulating rationalism around something other than expected utility in order for it to be right.

Point #3 is interesting.

If one person (me, for instance), observes a phenomenon, and then proposes a theory that partly explains that phenomenon, and gives reasons why the assumptions required are valid, and shows that the proposed mechanism has the proposed results given the assumptions; and gives a testable hypothesis and shows that his theory passes at least that test,

Then it is unhelpful to "critique" the theory by insisting that some other mechanism that also has the same effect must account for all of the effect.

Can we all please be very careful about making arguments of the form (A => B, C => B, C) => not(A)?

(You can use such an argument to say that if A => B and C => B, then C diminishes the evidence for A.  That is most useful when B has a binary truth-value.  When B is assigned not a truth-value, but a number indicating how often B occurs in the real world, and you have no quantitative knowledge of how much of the observed B is accounted for by C, then A => B, C => B, C just diminishes the expected proportion of the observed B that is accounted for by A.  You can't leap to the conclusion not(A).)
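
For what it's worth, the effect being debated here (an alternative cause "explaining away" evidence without licensing not(A)) can be checked in a toy Bayesian model. The numbers below are arbitrary; the structure is a standard noisy-OR over two independent causes:

```python
from itertools import product

p_A, p_C = 0.3, 0.3  # independent priors on the two candidate causes

def p_B(a, c):
    """Noisy-OR: each active cause independently fails with prob 0.1."""
    return 1 - (0.1 if a else 1.0) * (0.1 if c else 1.0)

# Joint probability of each (A, C) assignment together with B = True:
joint = {(a, c): (p_A if a else 1 - p_A) * (p_C if c else 1 - p_C) * p_B(a, c)
         for a, c in product([True, False], repeat=2)}

p_a_given_b = (joint[True, True] + joint[True, False]) / sum(joint.values())
p_a_given_bc = joint[True, True] / (joint[True, True] + joint[False, True])

print(round(p_a_given_b, 2))   # ~0.6: observing B raises P(A) from 0.3
print(round(p_a_given_bc, 2))  # ~0.32: also observing C steals most of that...
# ...but P(A) only falls back toward its prior, not below it.
```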

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:01:42.489Z · LW(p) · GW(p)

Please amplify on "#1 is wrong".

it is unhelpful to "critique" the theory by insisting that some other mechanism that also has the same effect must account for all of the effect.

This is a very common conversation in science. Some of it is conducted improperly, which is annoying, but I would hardly categorize the whole thing as unhelpful. In particular, the "improper" critiques usually consist of hypothesizing more and more elaborate hidden mechanisms with no evidence to support them as alternatives.

But we know hyperbolic discounting exists. We know that people are insensitive to the smallness of small probabilities.

When the other mechanism is nailed down by other evidence (hyperbolic discounting (for crack), or neglect of the tinyness of tiny odds (for lottery tickets)) and the new mechanism is not known, then A->B, C->B, C steals the evidence that B provides for A. You need to provide new D with A->D, C!->D. Where the implication from C to B is imperfect then B goes on providing some trickle of evidence to A but if the implications are equally strong then the trickle does not distinguish between A and C as opposed to other hypotheses and the prior odds win out.

In particular, the notion that ticket buyers really are making an expected utility calculation says that decreasing the odds of a lottery win by a factor of 10 (while perhaps multiplying the number of tickets sold by 10 and keeping the price constant, so that the number of lottery winners reported in the media is constant), will decrease the price they are willing to pay for a given lottery ticket by a factor of 10. Are you willing to make that prediction? I'd expect ticket sales to remain pretty much the same.

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2009-04-04T14:29:31.301Z · LW(p) · GW(p)

Please amplify on "#1 is wrong".

If lottery tickets were bought after paying off debts and after loss of government benefits, no one who was in debt, or who was receiving government benefits, could buy lottery tickets. Unless I misunderstand.

A->B, C->B, C steals the evidence that B provides for A. You need to provide new D with A->D, C!->D. Where the implication from C to B is imperfect then B goes on providing some trickle of evidence to A but if the implications are equally strong then the trickle does not distinguish between A and C as opposed to other hypotheses and the prior odds win out.

I tried to explain in my previous comment why I think this is the wrong way of looking at it. You're speaking as if B is a proposition with a truth-value that has a single cause. However, I think my explanation was not quite right either.

The weakest, most obviously true reply is that this is not a Boolean net; B does not have a single cause; and A => B and C => B can both be having an effect. It's even possible, in the real-valued non-Boolean world, to have (remember this is not Boolean; this is more like a metabolic network) A > 0, C > 0, A => B, C => B, B < 0.

A reply that is a little stronger (= has more consequences), and a little less clearly correct, is that your argument for C => B is not as good as my argument for A => B, so who's stealing whose evidence?

The strongest, least-clear reply is that we have priors in favor of both A => B and C => B, because they're both just-so stories, and we have no quantitative expectations of how much of an increase in B either would provide; and, unlike when B is a truth-value, there's no upper limit on how large B can get; A and C can't steal much evidence from each other without some quantitative prediction. All the info you have is that A and C would both make B > 0, and B > 0. If C accounts for x points of B, and B = x + y with y > 0, then this knowledge can even increase the probability of A. C, C => B diminishes the probability of A in the absence of knowledge about the value of B and the value of B explained by C, but by so little compared to the priors that presenting it as an argument against an argument from principles is misleading.

comment by PhilGoetz · 2009-04-06T14:50:52.515Z · LW(p) · GW(p)

In particular, the notion that ticket buyers really are making an expected utility calculation says that decreasing the odds of a lottery win by a factor of 10 (while perhaps multiplying the number of tickets sold by 10 and keeping the price constant, so that the number of lottery winners reported in the media is constant), will decrease the price they are willing to pay for a given lottery ticket by a factor of 10. Are you willing to make that prediction? I'd expect ticket sales to remain pretty much the same.

That's an interesting point.

  • If, as I said in my post, it is possible for all situations in which utility < 0 to be considered equivalent because one can commit suicide, then you would predict that ticket sales would remain nearly the same.

  • I don't claim that they are all making a good utility calculation. But who does? I claim that more of their behavior is attributable to utility calculations than is commonly believed.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T14:53:18.927Z · LW(p) · GW(p)

On this theory the "rational poor" should not spend money on anything except lottery tickets, then commit suicide.

comment by bogdanb · 2009-04-04T13:13:58.005Z · LW(p) · GW(p)

About point 5: I've encountered this idea quite often. And I agree, but only if “win the lottery” means winning the big prize.

I've never seen the consideration* that, in addition to the one (or, statistically, fewer than one) “jackpot”, there are in most lotteries relatively large numbers of consolation prizes.

(*: this doesn't mean that it's absent; it may be included in the general calculations, but I've never seen the point being made explicit.)

In terms of expected dollars this part doesn't change much (the expected return is still below 1, since lotteries don't generally go bankrupt), but in terms of expected utility as discussed in the post, and in particular with respect to your fifth point, it seems very significant. On the monetary side, even payoffs of a few hundred dollars may have highly “distorted” utilities for some persons. And on the epistemological side, probabilities of one in a few thousand (even more for lower payoffs) are much more relevant than one in a hundred million.

That doesn't mean that lottery players actually do the math—or base their decisions on more than intuition—but at such relatively lower levels of uncertainty it's not as obvious that the concept is completely invalid. Also, I expect there would be many takers for any winner-takes-all lottery, too, but I'd be surprised if the number wasn't significantly lower, all else being equal.

comment by conchis · 2009-04-04T12:13:21.228Z · LW(p) · GW(p)

FWIW, Charles Karelis makes this argument extensively in his book The Persistence of Poverty.

While it's plausible that utility functions are sigmoidal, it's not obviously true, and it's certainly not true of many of the utility functions generally used in the literature.

Moreover, even if experienced-utility (e.g. emotional state) functions are sigmoidal, that doesn't imply that decision-utility functions are, except in the special case that individuals are risk-neutral with respect to experienced utility. More generally than that, a consistent decision-utility function can be any positive monotonic transform of an experienced utility function.

EDIT: I should have added that the implication of that last point is that you can rationalize a lot of behavior just by assuming a particular level of risk preference. You can't rationalize literally anything (consistency is still a constraint), but you can rationalize a lot. All of this makes it especially important to argue explicitly for the particular form of happiness/utility function you're relying on.

(EDITED again to hopefully overcome ambiguities in the way different people are using the terms happiness and utility.)

comment by Sideways · 2009-04-04T05:02:01.240Z · LW(p) · GW(p)

IAWY right up to the penultimate sentence. Humans continuously modify their utility functions to maintain a steady level of happiness. A change in your utility function's input--like winning the lottery, or suffering a permanent injury--has only a temporary effect. The day you collect your winnings, you're super-happy; a year later, you're no happier than you were when you bought the ticket. If you're considering picking up a crack habit, you had better realize that in a year your baseline happiness will be no higher than it is now, despite all the things you'll sacrifice trying to be happy.

Supplying yourself with cocaine and money isn't an effective way to achieve a goal of happiness, just like supplying a country with foreign aid isn't an effective way to improve quality of life there. The rational thing to do is to grab the levers of your hedonic treadmill and set it where you want it to be. But it's risky to monkey around with such things--which is why I'm interested in That Which Must Not Be Named. I have no personal stake in Eliezer's mission, but his methodical approach to studying utility functions suggests what parts of your utility function you can safely alter.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T05:15:30.286Z · LW(p) · GW(p)

That does introduce another level of complication. Utility functions assume a static model. They are not happiness functions. We talk about maximizing utility all the time on LW, when really we want to maximize happiness.

Maximizing your happiness is a higher level of rationality than maximizing your utility. I think it's still okay to sometimes define "rational" as maximizing expected utility.

(I don't think foreign aid has anything to do with the delta-nature of happiness, btw.)

Replies from: Nick_Tarleton, jimrandomh, Sideways
comment by Nick_Tarleton · 2009-04-04T07:10:58.714Z · LW(p) · GW(p)

We talk about maximizing utility all the time on LW, when really we want to maximize happiness.

If you "really want to maximize" X, how is X not utility?

Replies from: bogdanb
comment by bogdanb · 2009-04-04T13:24:32.773Z · LW(p) · GW(p)

I think the point Phil tries to make is the difference between “instantaneous utility”, that is a function on things at some point in time (actually, phase space), and the “general utility”, which is a function that also has time (or position in phase space) as an argument.

While not immediately obvious, I think his naming choice could be worse. According to my non-scientific poll of one (me), when seeing the word “happiness” people think of time as a parameter instinctively, but consider specific instants for “utility” unless there are other cues in the context.

A strict definition such as yours would require coining a few new words for the discussion. That's not a bad thing per se, I just can't think of any that have the advantage of being already used as such in general vocabulary.

Replies from: conchis, PhilGoetz
comment by conchis · 2009-04-04T13:47:17.895Z · LW(p) · GW(p)

This is an area that is generally plagued with ambiguities and inconsistent usage, which makes it even more important to be clear about what we mean. I think this will usually require the use of adjectives/modifiers, rather than attempting to define already ambiguous words in our own idiosyncratically-preferred ways.

Instantaneous vs. life-time (or smaller life-slice) utility seems to make a clear distinction; decision-utility (i.e. the utility embodied in whatever function describes our decisions) vs. experienced utility (e.g. happiness or other psychological states) seem to make clear-ish distinctions. (Though if we care about non-experienced things, then maybe we need to further distinguish either of these from true-utility.)

But using "utility" and "happiness" to distinguish between different degrees of time aggregation seems unnecessarily confusing to me.

comment by PhilGoetz · 2009-04-04T13:53:37.196Z · LW(p) · GW(p)

Yes, thanks; that's what I meant.

comment by jimrandomh · 2009-04-04T13:55:15.134Z · LW(p) · GW(p)

If we really wanted to maximize happiness, then we'd jump at the chance to wirehead ourselves. We don't, because happiness is only an indicator of what we desire, not the thing we desire itself. Making yourself happier using drugs is like making yourself wealthier by telling your bank to lie to you on account statements.

Replies from: thomblake, conchis
comment by thomblake · 2009-04-06T21:22:07.336Z · LW(p) · GW(p)

It seems as though you're equivocating over 'happiness'. You suggest that happiness is just an indicator, not the thing we desire itself. Your analogy suggests otherwise. Having your bank lie to you on your statements does not actually make you wealthier. Similarly, using drugs to feel pleasure doesn't actually make you happier.

I prefer the latter usage.

comment by conchis · 2009-04-04T14:11:07.354Z · LW(p) · GW(p)

Actually, happiness is one of the things I desire; it's just not the only thing I desire. And drug induced happiness can be perfectly real, even if it's not necessarily the optimal way for me to achieve a positive emotional state all things considered.

Making myself happier using drugs doesn't seem at all analogous to telling my bank to lie.

comment by Sideways · 2009-04-04T06:30:51.448Z · LW(p) · GW(p)

Countries that rely heavily on foreign aid risk becoming self-stabilizing systems in which increasing foreign aid to Hypothetistan reduces the incentives for Hypothetistanis to be productive, instead of providing capital they need to act on those incentives. This is by no means a complete explanation--I'm just explaining the analogy between self-stabilizing systems more explicitly.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T06:56:43.292Z · LW(p) · GW(p)

The specs for happiness require it to be self-stabilizing. Poverty can be self-stabilizing, but doesn't have to be.

comment by Nominull · 2009-04-04T03:25:54.671Z · LW(p) · GW(p)

I have a non-rhetorical question for you: do you actually think a significant fraction of people playing the lottery and taking crack cocaine actually maximize utility that way?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T04:07:27.559Z · LW(p) · GW(p)

Maybe. I think that if we see poor people systematically playing the lottery more often than well-off people do, differences in utility functions are at least as good an explanation as differences in intelligence.

  • If utility functions are sigmoidal, this in itself would predict that poor people will play the lottery much more often than rich people (see the sketch after this list).
  • The "poor people are stupid" explanation says that poor people are less likely to grasp how small the probability of winning is. I'm skeptical that IQ 100 or even IQ 115 people grasp such small numbers any better.
  • Crack use is high in neighborhoods where people are not just poor, but have a high probability of dying or ending up in prison. Look at the Sandtown Health Profile 2008. A person in Sandtown has a 1 in 6 chance of dying before reaching age 45. For males, it's higher. A man who "lives in Sandtown" is more likely to actually live in prison than in Sandtown. I didn't cherry-pick Sandtown; I chose it because my mom used to work at a day care there. A man living there, deciding whether to take up crack, stands to lose fewer expected years of good life than someone living in Fairfax does.
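
Here is a rough way to see the first bullet quantitatively, reusing the post's toy sigmoid (the stakes and odds are made up):

```python
import math

def u(U, k=1.0):
    return k * (1.0 / (1.0 + math.exp(-U)) - 0.5)

def eu_gain(wealth, ticket=5, jackpot=1_000_000, p=1e-7):
    """Change in expected utility from buying one ticket."""
    play = p * u(wealth - ticket + jackpot) + (1 - p) * u(wealth - ticket)
    return play - u(wealth)

print(eu_gain(-20))  # positive: the ticket barely dents u, the jackpot is huge
print(eu_gain(+20))  # negative: same ticket, but little utility left to gain
```

On this toy model the identical gamble has positive expected utility for the poor agent and negative expected utility for the well-off one, with no difference in intelligence anywhere in the calculation.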

In general, when we see one group of people consistently engaging in higher levels of behavior that seems irrational to us, there's a good chance that something in their environment makes that behavior more rational for them than for us.

Replies from: Nominull, Eliezer_Yudkowsky
comment by Nominull · 2009-04-04T04:16:40.131Z · LW(p) · GW(p)

"Crack use is high in neighborhoods where people are not just poor, but have a high probability of dying or ending up in prison."

Are you entirely certain you have the arrow of causality pointing in the right direction? This question is rhetorical.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T04:26:35.957Z · LW(p) · GW(p)

Sure, causality runs both ways. My point is that the idea that crack use is a rational decision predicts that crack use will be higher when the odds of dying or of spending much of your life in prison are higher. And that is what we see. It's a falsifiable test, and the idea passes the test.

Replies from: soreff
comment by soreff · 2009-04-04T16:43:48.397Z · LW(p) · GW(p)

Are there studies of behavior changes for terminally ill people? That wouldn't probe changes in financial behavior - winning the lottery isn't useful to someone with pancreatic cancer. Do we see recreational drug use rise?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T04:15:21.879Z · LW(p) · GW(p)

"Poor people are stupid" is a strawman, in this case. Human beings in general have trouble grasping low probabilities. Poor people just have further motivations that lead them to grasp harder at this straw.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T04:23:51.040Z · LW(p) · GW(p)

If we don't believe that the shape of their utility curves makes the lottery have a higher expected utility for poor people than for well-off people, then we are saying that poor people don't have any further motivations than rich people to grasp at this straw.

Replies from: conchis
comment by conchis · 2009-04-04T12:03:31.110Z · LW(p) · GW(p)

It's possible (indeed, plausible) both that (a) poor people have these utility functions, and therefore more reason to play the lottery; and (b) it's still irrational for them to play the lottery.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T14:39:01.346Z · LW(p) · GW(p)

Yes. I'm not thinking of rationality as a line that people either cross or don't. If you say that rationality is maximizing your expected utility, then none of us are rational.

If they have more reason to play the lottery than we at first thought, then they are more rational than we at first thought.

comment by Zvi · 2009-04-04T16:04:13.853Z · LW(p) · GW(p)

If everything comes out exactly right, this can make a case for playing the lottery being better than doing nothing risky, but it can't possibly make the case that the lottery isn't massively worse than other forms of gambling. Even if the numbers games are gone, going to a casino offers the same opportunity at far better odds and allows you to choose the point on the curve where gambling stops being efficient. I do think, however, that the point that negative-expectation risks can be rational is well taken.

comment by timtyler · 2009-04-04T05:41:24.990Z · LW(p) · GW(p)

A good reason for not playing the lottery is that you can get better odds by playing roulette, or using other forms of gambling. I am unimpressed by arguing against gambling in general because its average dollar payoff is negative. That argument is ridiculous.

The discussion about lotteries that I presume led to this thread was correct, though. It didn't talk about expected winnings, it talked about utility. There are cases where playing the lottery has a high utility - and if the utility is too low, then you shouldn't play.

Replies from: jimrandomh, PhilGoetz
comment by jimrandomh · 2009-04-04T13:44:06.092Z · LW(p) · GW(p)

I am unimpressed by arguing against gambling in general because its average dollar payoff is negative. That argument is ridiculous.

Then the problem is with the argument, not the conclusion. A better argument against gambling is to observe what happens to gamblers, who generally end up broke.

Replies from: timtyler
comment by timtyler · 2009-04-04T14:15:57.655Z · LW(p) · GW(p)

That's habitual gamblers. Gambling is OK sometimes - for example, if it helps you to obtain your ferry fare home, thus saving you a long walk.

comment by PhilGoetz · 2009-04-04T13:56:39.895Z · LW(p) · GW(p)

A good reason for not playing the lottery is that you can get better odds by playing roulette, or using other forms of gambling.

Roulette doesn't give a big enough payoff to move you up into the steep area of your utility function, so it doesn't get the "a dollar won is worth more than a dollar spent" effect.

It would also be a plausible conjecture that, if someone's utility function is sigmoid, and they're on the low end of it, their model of their utility function is an exponential. That would enhance the effect.

Replies from: Eliezer_Yudkowsky, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T14:21:27.426Z · LW(p) · GW(p)

Just bet 5 times in a row. Still better odds than the lottery.
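
A back-of-the-envelope check of this claim (assuming an American double-zero wheel, and taking big-jackpot lottery odds to be on the order of 1 in 10^8 with roughly half of revenue paid out; actual figures vary):

```python
# Single-number roulette pays 35-to-1, so a win returns 36x the stake;
# an American wheel has 38 slots. Let the winnings ride for 5 spins:
p_win = (1 / 38) ** 5             # ~1.26e-8, about 1 in 79 million
payout = 36 ** 5                  # ~60.5 million times the stake
expected_return = p_win * payout  # (36/38)**5, about 0.76 per dollar

print(p_win, payout, expected_return)
```

Both the odds (~1 in 79 million vs. ~1 in 10^8) and the expected return (~0.76 vs. ~0.5) come out ahead of the lottery, at the cost of a somewhat smaller jackpot.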

Replies from: PhilGoetz, AllanCrossman
comment by PhilGoetz · 2009-04-05T14:51:34.650Z · LW(p) · GW(p)

True.

Actually, I don't know if it's true. But it sounds plausible.
comment by AllanCrossman · 2009-04-04T14:28:37.258Z · LW(p) · GW(p)

One problem with that is that, if you've won at roulette a few times in a row, you're now going to be risking quite a lot if you bet it all again. You'll actually end up badly regretting your actions in a lot of cases.

comment by timtyler · 2009-04-04T14:20:24.298Z · LW(p) · GW(p)

Re: Roulette doesn't give a big enough payoff [...]

Bet twice consecutively, then. Roulette's payoffs are flexible enough to give you practically any odds you desire.

...and what about the diminishing utility of money? Often you don't want to trade odds for cash.

comment by RobinHanson · 2009-04-04T12:14:49.357Z · LW(p) · GW(p)

I'm skeptical that lottery player utility is well modeled as in the convex section of a sigmoid. I'd want to see more analysis to that effect.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T13:49:13.309Z · LW(p) · GW(p)

Do you mean you're skeptical that there is diminishing marginal negative utility?

Replies from: soreff, RobinHanson
comment by soreff · 2009-04-04T16:57:12.706Z · LW(p) · GW(p)

I'm not convinced that it is a reasonably common regime to be in for utils(dollars). I think that it might be a reasonably common response to physical trauma: 1001 blows to the head are not as much worse than 1000 blows as the first blow was (particularly if the 100th was fatal...).

comment by RobinHanson · 2009-04-05T03:38:37.460Z · LW(p) · GW(p)

I mean I'm skeptical of increasing marginal utility of money.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T14:50:09.253Z · LW(p) · GW(p)

I take that as a yes.

I don't have any data; I just have the reasoning that I've already presented, plus more of the same.

comment by PhilGoetz · 2009-04-04T04:59:10.900Z · LW(p) · GW(p)

My attempt to liven up this post by talking about crack and lotteries has killed many minds here. If you're driven to write a long reply about crack and lotteries, perhaps you can spare one sentence in it to respond to this more general point:

We are inclined to use expected return when we should use expected utility.
This quick-and-dirty reasoning works well when we are reasoning, as we often are, about small changes in utility for ourselves or for other people in our same social class; because a line is a good local approximation to a curve. It works less well when we reason about people in other social classes, or about changes in utility that span social classes. Since we reason about people in other social classes less often, are less motivated to get correct results, and receive less feedback to correct ourselves even if we want to, we may never correct this error.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:07:29.133Z · LW(p) · GW(p)

We are inclined to use expected return when we should use expected utility

A well-known point that goes back to Bernoulli and the very dawn of the expected utility formalism - except that conventionally this is illustrated by explaining why you should not buy lottery tickets that seem to have a positive expected return.

Your main post is rather an attempt to defend behavior as "rational" which on the surface appears to be "irrational". This may make sense when you're looking at a hedge-fund trader who seemingly lost huge amounts of money through "stupid" Black Swan trades, and yet who is, in fact, living comfortably in a mansion based on prior payouts. The fact that he's living in a mansion gives us good reason to suspect that his actions are not so "stupid" as they seemed.

The case for suspecting the hidden rationality of crack users is not so clear-cut. Is it really the case that before ever taking that first hit, the original potential drug user, looking over their entire futures with a clear eye free of such biases as the Peak-End Rule, would still choose the crack-user future?

People in general are crazy. We are, for example, hyperbolic discounters. Sometimes the different behavior of "unusual" people stems not from any added stupidity, but from added motives given their situation. Crack users are not mutants. Their baseline level of happiness is lower, they are more desperate for change, their life expectancy is short; none of this is stupidity per se. But like all humans they are still hyperbolic discounters who will value short-term pleasure over the long-term consequences to their future self. To suppose that being in poverty they must also stop being hyperbolic discounters, so that their final decision is inhumanly flawless and we can praise their hidden rationality, is a failure mode that we might call Pretending To Be An Economist.

Don't blame the readers, you killed your own post: humans in general are flawed beings, and buying lottery tickets is an illustration thereof. Trying to make it come out as an amazing counterintuitive demonstration of rationality was your mistake. To illustrate the difference between expected return and expected utility, you should have picked some example whose final answer added up to normality (like "Don't play the Martingale") rather than abnormality ("Buy lottery tickets now!").

Replies from: CarlShulman, PhilGoetz, PhilGoetz
comment by CarlShulman · 2009-04-04T23:49:43.834Z · LW(p) · GW(p)

Screwing over your future selves because of hyperbolic discounting, or other people because of scope insensitivity, isn't obviously a failure of instrumental rationality except insofar as one is defecting in a Prisoner's Dilemma (which often isn't so) and rationality counts against that.

Those 'biases' look essential to the shapes of our utility functions, to the extent that we have them.

Replies from: steven0461, Z_M_Davis
comment by steven0461 · 2009-04-05T00:17:29.815Z · LW(p) · GW(p)

Screwing over other people because of scope insensitivity is a failure of instrumental rationality if (and not only if) you also believe that the importance of someone's not being screwed over does not depend strongly on what happens to people unconnected to that person.

Replies from: CarlShulman
comment by CarlShulman · 2009-04-05T00:41:15.924Z · LW(p) · GW(p)

Steve, once people are made aware of larger scopes, they are less willing to pay the same amount of money to have effects with smaller scopes. See the references at this OB post.

Replies from: steven0461
comment by steven0461 · 2009-04-05T01:27:12.043Z · LW(p) · GW(p)

How much less willing? Suppose A would give up only a million times more utility to save B and 10^100 other people than to save B. Would A, if informed of the existence of 10^100 people, really choose not to save B alone at the price of a cent? It seems to me that would have to be the case if scope insensitivity were to be rational. (This isn't my true objection, which I'm not sure how to verbalize at the moment.)

comment by Z_M_Davis · 2009-04-05T00:13:50.022Z · LW(p) · GW(p)

Those 'biases' look essential to the shapes of our utility functions, to the extent that we have them.

This issue deserves a main post. Cf. also Michael Wilson on "Normative reasoning: a Siren Song?"

Replies from: CarlShulman
comment by CarlShulman · 2009-04-05T00:30:15.518Z · LW(p) · GW(p)

Thanks for the link, although it's addressing related but different issues. A hyperbolic discounter can assent to 'locking in' a fixed mapping of times and discount factors in place of the indexical one. Then the future selves will agree about the relative value of stuff happening at different times, placing highest value on the period right after the lock-in.

comment by PhilGoetz · 2009-04-06T16:37:44.725Z · LW(p) · GW(p)

like all humans they are still hyperbolic discounters who will value short-term pleasure over the long-term consequences to their future self.

Just a nitpick: As Carl Shulman observed, this is not irrational. It's just a different discounting function than yours.

Trying to make it come out as an amazing counterintuitive demonstration of rationality was your mistake.

Really? So you found a mistake in anything that I wrote? I must have missed it. All I see is you presenting just-so arguments along the lines of either "C causes people to play the lottery, therefore A cannot cause people to play the lottery", or "People are stupid; therefore they cannot be engaging in utility calculations when they play the lottery."

comment by PhilGoetz · 2009-04-06T16:03:25.408Z · LW(p) · GW(p)

A well-known point that goes back to Bernoulli and the very dawn of the expected utility formalism - except that conventionally this is illustrated by explaining why you should not buy lottery tickets that seem to have a positive expected return.

I'm skeptical that anyone has made this explanation, since lottery tickets never have a positive expected return. You can only mean an "explanation" for people who don't know how to multiply.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-06T16:13:07.269Z · LW(p) · GW(p)

You can only mean

Would you STOP IT? For the love of Cthulhu!

The classic explanation of expected utility vs. expected return deals with hypothetical lottery tickets that have a positive expected return but not positive expected utility.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-06T16:27:36.083Z · LW(p) · GW(p)

Okay. Sorry. What I meant was, "Since lotteries always have a negative expected return, I think that maybe the explanations you are talking about are directed at people who think that the lottery has an expected positive return because they don't do the math." Which you just answered. I was not familiar with this classic explanation.

comment by gjm · 2009-04-04T00:43:49.416Z · LW(p) · GW(p)

Perhaps this just indicates that I lead too sheltered a life, but I think most people don't have $ << 0; and even for those who do, I'd expect the utility function to be concave there just as it is for positive $. So I'm skeptical of the claim that "poor folks" playing the lottery or using crack are generally maximizing their expected utility.

And I can't speak for anyone else, but I don't think I've ever said anything like "those fools don't deserve our help if they're going to make such stupid decisions" about people playing the lottery or taking crack, and if I ever did I'm pretty sure it wouldn't be the desperately poor and/or miserable ones that I had in mind.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T01:22:22.175Z · LW(p) · GW(p)

So I'm skeptical of the claim that "poor folks" playing the lottery or using crack are generally maximizing their expected utility.

So would I be, if I heard someone make that claim. I'll edit the post to clarify that I don't mean that.

EDIT: Hmm. While I don't explicitly make that claim, I think it is possible that they are generally doing much better at it than we think they are.

Nobody maximizes their expected utility.

comment by dfranke · 2009-04-04T00:15:10.942Z · LW(p) · GW(p)

I think this post is going to contribute to semantic confusion; when most of us talk about utilons, I think we're talking about the output of a utility function.

Replies from: gjm, Alicorn, PhilGoetz, steven0461, GuySrinivasan
comment by gjm · 2009-04-04T00:40:04.235Z · LW(p) · GW(p)

I concur. I did a quick google for "utilons", and most of the hits I found were (1) from LW or OB, and (2) using "utilons" to mean exactly what Phil is saying it doesn't mean. I don't recall seeing "utilon" in (e.g.) philosophy or economics books with the meaning Phil prefers. Phil, where have you found "utilon" used to mean things like dollars?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T01:05:57.907Z · LW(p) · GW(p)

Utilon isn't a standard word. I'll re-write the post not to use it. The definition of utilon is a tangential issue that I don't care about.

comment by Alicorn · 2009-04-04T02:18:44.559Z · LW(p) · GW(p)

I've heard "hedons" as units of pleasure ("dolors" for units of pain), although I suppose if we aren't being hedonists then it might be a misleading term.

comment by PhilGoetz · 2009-04-04T01:10:40.975Z · LW(p) · GW(p)

You may be right. I re-wrote the post not to use the word "utilon". The definition of utilon is a tangential issue.

comment by steven0461 · 2009-04-04T01:05:21.929Z · LW(p) · GW(p)

I agree; "utilons" are units of utility, though "utils" is more standard.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-04T09:36:07.427Z · LW(p) · GW(p)

We should make a systematic effort to use standard terminology wherever possible on this site - we worry enough about being a cult without replacing standard terminology with our own.

comment by GuySrinivasan · 2009-04-04T00:56:46.144Z · LW(p) · GW(p)

I agree with the main point of the post, but I cannot recall having seen the word "utilons" used to refer to anything except either marginal utility or expected marginal utility, both of which are of course linear in expected marginal utility.

comment by anonym · 2009-04-04T23:33:03.581Z · LW(p) · GW(p)

I don't think your examples are that plausible in the real world, at least not in terms of the reasoning you give. In your scenarios, it would be much better to hide the money away somewhere and let it accumulate, pretending to the world (and to Uncle Sam) that you spent it on crack or whatever, than to actually spend it on crack.

Having said that, if we determine the rationality of some behavior relative to the actual utility function of the individual, then we can see that for some (possible) utility functions, it would be rational to play the lottery and spend lots of money on crack. The a posteriori question is then whether there are in fact such people who are acting rationally relative to a pathological utility function, or whether they actually have sane utility functions but fail to act rationally relative to their utility function.

If we consider rationality in relation to a pre-existing utility function though, what is it that would allow us to recognize our utility function as being dysfunctional, which we seem to be capable of doing? Is there a different utility function that governs the selection of utility functions that govern behavior (which implies an infinite regress) or does the One True Utility Function govern itself and changes to itself, in which case it seems it would be possible to have a utility function relative to which smoking crack is always rational and relative to which any tweaking of the utility function to make smoking crack have less utility would be irrational.


A more plausible rationale for playing the lottery, as many have noted, is that spending $1 on a lottery ticket gives the individual non-financial benefits of more than $1 -- like keeping them from despairing that their life will ever improve, giving them warm fuzzies that result in better mood (and the attenuation of health problems that we know result from poor mood), etc. A small amount of hope is worth much more than $1 in many cases, and the less you have of it initially, the more it's worth.

comment by infotropism · 2009-04-04T15:00:16.196Z · LW(p) · GW(p)

I like this post. That's a point I think needed to be made.

Before reading this, the way I saw it was that for quite a lot of people, there's something akin to a potential barrier as it exists in chemical reactions, for what they can expect of their life. Unless you can invest enough X (energy, time, money, etc.), what you're trying to do won't work on average. You could also see it as an escape velocity, or the break-even point in a chain reaction getting critical.

To illustrate in the case of a lottery, many people can't expect to ever be able to get interesting returns on whatever they may invest. It's already difficult enough to make a living, and survive on it. The only way to get past that barrier, it seems, is to suddenly earn something large enough. Otherwise, even a life's worth of savings and investment would never earn them enough to make a significant difference in their day to day life.

comment by byrnema · 2009-04-04T20:34:49.501Z · LW(p) · GW(p)

To see a plot of the utility function, I posted one here:

http://audi-lesswrong.blogspot.com/

comment by AlexU · 2009-04-04T15:47:58.231Z · LW(p) · GW(p)

Doesn't this make some very big assumptions about the fixity of people's circumstances? If my life is so bad that smoking crack begins to seem rational, then surely, taking actual steps to improve my life would be more rational. Similarly, I imagine that the $5 spent on a lottery ticket could be better spent on something that was a positive first step toward improving even the worst of circumstances. Seems the only way this wouldn't be true would be if you simply assert, by fiat, that the person's circumstances are immutable, but I'm not sure whether this accords with reality. (One's politics are clearly implicated here.)

Replies from: loqi
comment by loqi · 2009-04-04T19:13:10.717Z · LW(p) · GW(p)

If my life is so bad that smoking crack begins to seem rational, then surely, taking actual steps to improve my life would be more rational.

I don't see how this automatically follows. If U < 0 for all mental states you inhabit except being high on crack, then you should do crack. There may be a discounting effect here, meaning you might want to avoid smoking crack until you have enough resources to smoke even more crack later. Your point seems to imply that "improving your life" would change your utility function, which doesn't really fly as a rational argument.

Replies from: AlexU
comment by AlexU · 2009-04-04T21:39:40.497Z · LW(p) · GW(p)

While I can imagine a situation where one's utility function would be as you described, it's a pretty contrived one, e.g., a destitute crack addict suffering from a painful terminal illness, where the second best choice would be suicide. More importantly, for the typical crack user -- the kind Phil Goetz was referencing -- there's almost always going to be something they could be spending the money on that would give them a higher expected utility over the long run ("bettering one's situation"). It's no small claim to say there isn't.

Replies from: loqi
comment by loqi · 2009-04-04T21:59:29.669Z · LW(p) · GW(p)

While my example is a bit contrived, Phil said "some of them", not "most of them". I don't understand the typical crack user very well, but I can pretty easily conjecture a ruined mind requiring some quite high threshold of stimulation to enjoy itself.

So let's weaken the example, and make them fixable. From there I'd say asserting that they should almost always be able to rationally derive a reliable method of repairing their broken state with higher expected return than smoking crack for the rest of their life is no small claim. But really, I have no idea what it's like to be them.

comment by AllanCrossman · 2009-04-04T11:39:56.708Z · LW(p) · GW(p)

On a whim, I once played the lottery on the theory that the Many Worlds Interpretation is true, and some branch of me would win. I like to think he's out there somewhere.

(Of course, if MWI really is true, then some other me in some other branch would have played the lottery even if I hadn't, so strictly speaking I didn't even need to...)

Replies from: Eliezer_Yudkowsky, Z_M_Davis
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T12:16:03.517Z · LW(p) · GW(p)

On a whim, I once played the lottery on the theory that the Many Worlds Interpretation is true, and some branch of me would win. I like to think he's out there somewhere.

Did you use a quantum random number generator?

(Of course, if MWI really is true, then some other me in some other branch would have played the lottery even if I hadn't, so strictly speaking I didn't even need to...)

There's no law of physics stating that you make all possible decisions in different MWI branches, though sufficiently different people who otherwise resemble you might.

Replies from: SoullessAutomaton, ciphergoth, AllanCrossman
comment by SoullessAutomaton · 2009-04-04T13:04:34.529Z · LW(p) · GW(p)

Did you use a quantum random number generator?

All random number generators are quantum, just with very skewed probabilities. Maybe a lot of electrons will spontaneously be somewhere unlikely and cause my computer to miscompute the next term of a Mersenne Twister.

This is a somewhat useless point, though...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-04T19:11:25.283Z · LW(p) · GW(p)

In this sense, you don't need "random number generators" at all, just wait for your computer to spontaneously transform into a fire-breathing dragon.

Replies from: gjm
comment by gjm · 2009-04-04T23:04:24.403Z · LW(p) · GW(p)

That's actually quite a good way of deciding when to buy lottery tickets and when not to.

comment by Paul Crowley (ciphergoth) · 2009-04-04T12:59:38.306Z · LW(p) · GW(p)

In that case I shall use a quantum random number generator to give myself a 10^-6 chance of playing the lottery :-)

comment by AllanCrossman · 2009-04-04T13:05:43.917Z · LW(p) · GW(p)

Did you use a quantum random number generator?

Alas no. I was thinking that, if bought sufficiently far in advance, quantum noise and chaos theory together would ensure that any ticket would win in some branch...

(But yes, I see now that making the choice of ticket itself depend on quantum noise would have been better... hmm...)

Replies from: Annoyance
comment by Annoyance · 2009-04-04T14:09:45.506Z · LW(p) · GW(p)

In some paths, you used a quantum number generator to decide... in others, you didn't.

In some paths, you conclude that you don't have to do anything because of Many Worlds, and so you simply stop doing. In others, you do not reach that conclusion. In still others, you actively reject it... and in some, you reach the conclusion but continue to do anyway.

Even giving up because nothing means anything is meaningless.

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-04T02:47:46.206Z · LW(p) · GW(p)

You haven't explained why relatively happy people play the lottery. The answer is that they can't understand how small the probability of winning is. (Nor can I, by the way; I only understand it mathematically. To make me understand, you could do something like phrase it in terms of coin flips.)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T03:34:47.313Z · LW(p) · GW(p)

Fortunately, I'm not obligated to explain everything. :)

comment by byrnema · 2009-04-05T03:09:05.239Z · LW(p) · GW(p)

"For someone with $ << 0, the marginal utility of $5 to them is minimal. "

I'm a newbie, which will soon be obvious, but I don't think the utility function is being applied correctly. At each value of U (the worth that a person has at his disposal in goods), we have the utility that can be purchased with U. (So u is negative for U<0 because you get negative things for owing money.)

I understand that if someone is greatly in debt, their utility may not change much if you increase or decrease their debt by some amount. This is why the utility function would be shallow for $ << 0. Thus, I agree that someone with $ << 0 who happens to find $5 on the ground would have little incentive to use $5 to pay off their debt.

However, let's use the function to see what utility they can purchase with their $5...

The fact that they are spending it on gambling or drugs means they are NOT moving from U=(-X) to U=(-X+5) (they're not using the $5 to pay off their debt). They are staying at U=(-X) and spending their $5 as true disposable income -- in other words, exactly as though U=0.

A person with U=0 gets a steep benefit from the spending of $5.

Replies from: conchis
comment by conchis · 2009-04-05T09:52:57.264Z · LW(p) · GW(p)

I think there's some confusion here as to what the utility function is defined over. And to be fair, the post itself is somewhat confused in this respect.

The argument that it might be more or less rational to gamble is an entirely different matter to whether it is more or less rational to smoke crack.

The shape of the utility function over money can make it more or less rational to accept particular money gambles: risk aversion is after all a property of the shape of the utility function.

The shape of the utility function over money cannot affect whether specific, non-risky choices about how to spend that money (e.g. whether to smoke crack) are more or less rational. If crack is your best option, that's already reflected in your utility function for money; if it's not, then that, too, is already built in.

NB: This comment is not as precise as it should be in distinguishing decision-utility, experienced-utility etc. I think the fundamental point is right though.

comment by Paul Crowley (ciphergoth) · 2009-04-04T09:26:34.991Z · LW(p) · GW(p)

The real reason not to say "those fools don't deserve our help" is that it doesn't make sense for materialist consequentialists to weight utility based on who deserves what.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-04T11:41:59.579Z · LW(p) · GW(p)

IAWYC but "consequentialism" of itself, or "materialism" of itself, doesn't stop us from having such a utility function.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-04T12:36:09.646Z · LW(p) · GW(p)

Do you know if this is a well-known position in consequentialist philosophy? It seems like it must be, but I only got as far as the Wikipedia page on desert, and it seems to cover a discussion among deontologists...

Replies from: conchis
comment by conchis · 2009-04-04T12:59:01.448Z · LW(p) · GW(p)

There's a fair amount of debate about what exactly the formalism of consequentialism excludes or doesn't, and whether it's possible to view deontological views (or indeed any other moral theory) as a subset of consequentialism. The idea that any moral view can be seen as a version of consequentialism is often referred to as "Dreier's conjecture" (see e.g. the discussion here.)

Usually, consequentialist aggregation functions impose an anonymity requirement, which seems to discourage desert as a consideration (it requires that the identity of individuals can't matter to what they get). But even that doesn't really exclude it.

comment by byrnema · 2009-04-04T03:11:39.402Z · LW(p) · GW(p)

Some people who buy lottery tickets argue that a lottery ticket is a small price to pay for the chance of being a millionaire.

While the expected return of the lottery ticket is negative, they place an extra value on the chance of being a millionaire, in addition to the expected return.

For comparison, suppose there is another lottery with the same negative expected return, but the maximum you can win is $5 (corresponding with a much higher probability of winning so that the expected value is the same). Then players will be less interested -- because you've taken away the value of the chance of winning a huge sum of money.

Is it irrational to place extra value on the chance of winning more money than you could ever make in a lifetime? Perhaps not - sometimes success several orders of magnitude greater is only possible via a gamble with a negative expected return.

Replies from: Eliezer_Yudkowsky, byrnema
comment by byrnema · 2009-04-04T03:27:31.484Z · LW(p) · GW(p)

I just voted myself down -- I'm realizing that the topic isn't "why do people play the lottery". Phil Goetz is presenting one potential reason for playing the lottery. I don't need to worry about how common that reason is; the topic at hand is to think about that reason and its relationship to rationality.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-04T03:37:17.954Z · LW(p) · GW(p)

I voted you back up, because it's a really interesting point. I take it you're saying that they get enjoyment out of holding the ticket between the buying and the drawing.

Replies from: byrnema
comment by byrnema · 2009-04-04T20:06:38.769Z · LW(p) · GW(p)

Thank you. Actually, my point was this: there is value to a gamble that isn't measured by the expected value. The expected value argues that playing the lottery isn't going to make them rich. But keeping the dollar isn't going to make them rich either. At least spending their dollar playing the lottery gives them the chance of being rich.

When I made this argument I was actually thinking of impossible gambles that are made on the scale of evolution, say. For every million that make a gamble and fail, (for example, to escape an island), eventually one wins and validates the gambles of all (survives the journey and populates the continent).

I was reluctant to provide this example because I definitely don't want to imply that it's an evolutionary advantage or a justified sacrifice for the good of the group. (yuck) Perhaps an analogy from economics will balance -- you can demand more for an opportunity that can't be purchased any other way.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-04-05T14:57:03.441Z · LW(p) · GW(p)

There is at least one parallel in evolution. Many bacteria have heat shock proteins that inhibit DNA proofreading. That means that they respond to stress by increasing their mutation rate. It will probably kill them, but if the entire colony does it, it's more likely to survive.

It's not quite the same. If you count the payoff to the bacteria to include the lives of all its descendants, then it may still be "rational".

But maybe it is the same. Presumably, we instinctively act in a way that counts the utility of all our descendants in our utility functions.

comment by conchis · 2009-04-05T10:01:34.610Z · LW(p) · GW(p)

I think your EDIT is much clearer, and more accurate than your original formulation.

In response to the (IMHO unnecessarily snarky, but perhaps I'm reading in too much) explanation for the edit:

It is possible simultaneously to (a) think that "some [lottery players] may be making much more rational decisions than we think"; (b) think that it's still irrational for them to play the lottery; and (c) not define "rational" as "the unattainable goal of perfect utility maximization."

This just means that you think playing the lottery is really silly.

comment by rtsevo · 2009-04-04T00:51:38.591Z · LW(p) · GW(p)

Dollars in, utilons out. Otherwise what are dollars?

comment by Annoyance · 2009-04-04T14:07:42.723Z · LW(p) · GW(p)

It may be perfectly rational for crabs in a bucket to pull each other down in an attempt to escape individually... from the perspective of a mere individual.

From a perspective of survival of the tribe, it's suicidal, and irrational to boot.

Crabs, of course, do not have tribes.

Replies from: PhilGoetz, janos
comment by PhilGoetz · 2009-04-06T15:59:36.428Z · LW(p) · GW(p)

What does this have to do with the post?

comment by janos · 2009-04-04T16:39:54.927Z · LW(p) · GW(p)

At least not when they're already in the bucket.

Replies from: Annoyance
comment by Annoyance · 2009-04-05T01:18:52.399Z · LW(p) · GW(p)

It's not clear that humans in a bucket (metaphorically speaking) care much about the survival of the tribe, either.

There are no altruists in foxholes - not for very long, anyhow.