[Discussion] The Kelly criterion and consequences for decision making under uncertainty

post by Metus · 2013-01-06T02:14:52.715Z · score: 5 (6 votes) · LW · GW · Legacy · 15 comments

The Kelly criterion is the optimal way to allocate one's bankroll over a lifetime to a series of bets, assuming the actor's utility increases logarithmically with the amount of money won. Most importantly, the criterion gives a way to decide between investments with identical expected value but different risk of default. It essentially stipulates that the fraction of one's bankroll invested in a given class of bets should equal the expected value per unit staked divided by the payoff in case it pans out.
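
For a simple bet with win probability p and net odds b (payoff b per unit staked), the rule above works out to the standard Kelly fraction (bp − q)/b, i.e. the edge divided by the odds; a minimal sketch:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to stake on a bet won with probability p
    that pays b per unit staked (net odds); 0 for negative-edge bets."""
    q = 1.0 - p              # probability of losing the stake
    f = (b * p - q) / b      # edge divided by odds
    return max(f, 0.0)

# A bet that pays 2:1 on a fair coin: edge 0.5, odds 2, stake a quarter.
print(kelly_fraction(0.5, 2.0))  # 0.25
```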

Now, nothing in the formalism restricts the rule to bets, or to money for that matter; it applies to any situation in which an actor as assumed above faces uncertainty and a possible payoff in utility. Aside from the obvious application to investments, e.g. bonds, this is also applicable to the purchase of insurance or cryonics services.

Buying insurance can obviously be modeled as a bet in the Kelly sense. A simple generalisation of the Kelly criterion leads to a formula that allows one to incorporate losses.

An open question, to me at least, is whether it is possible to generalise the Kelly criterion to arbitrary probability distributions. Also, how can it be that integrating over all payoffs at constant expected value evaluates to infinity?

Finally, what would a similar criterion look like for other forms of utility functions?


I did not put this question in the open thread because I think the Kelly criterion deserves more of a discussion and is immediately relevant to this site's interests.

15 comments

Comments sorted by top scores.

comment by jsteinhardt · 2013-01-06T02:39:39.875Z · score: 5 (5 votes) · LW(p) · GW(p)

It's apparently not just for logarithmic utility functions. From the Wikipedia page:

In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run.

comment by CarlShulman · 2013-01-06T04:42:15.868Z · score: 7 (7 votes) · LW(p) · GW(p)

Right, over an infinite series of bets the probability that Kelly goes ahead of a different fixed allocation goes to 1. Some caveats:

  • In the long run, we're all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly
  • Kelly doesn't maximize expected winnings: each bet where you bet more than Kelly multiplies your EV (relative to Kelly) in exchange for a chance of falling behind Kelly
  • A strategy that is "bet Kelly over the infinite series of bets, except for n all-in bets to get q times Kelly EV in exchange for probability p of losing it all" may not be "essentially different" but it's noteworthy and calls for betting more than Kelly in some bets
  • In an odd situation where your utility is linear or super-linear in winnings, the utility-maximizing strategy is 100% all-in bets, an essentially different strategy from Kelly in the long run
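
The first two caveats can be made concrete: the expected log growth rate per bet when staking fraction f is p·log(1 + b·f) + (1 − p)·log(1 − f). A small sketch (the 55%, even-odds bet is an assumed example, not from the thread):

```python
import math

def log_growth(p: float, b: float, f: float) -> float:
    """Expected log growth rate per bet when staking fraction f of
    bankroll on a bet won with probability p at net odds b."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

p, b = 0.55, 1.0                  # 55% chance to double the stake
kelly = (b * p - (1 - p)) / b     # = 0.10

for label, f in [("half Kelly", 0.5 * kelly),
                 ("full Kelly", kelly),
                 ("twice Kelly", 2 * kelly)]:
    print(label, round(log_growth(p, b, f), 5))
```

Full Kelly maximizes the growth rate; here twice Kelly actually has a negative rate, i.e. over-betting shrinks wealth in the long run even on a favourable bet, while half Kelly trades some growth for less short-term risk.
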
comment by gwern · 2013-01-06T19:46:14.414Z · score: 1 (1 votes) · LW(p) · GW(p)

In the long run, we're all dead: in decisions like retirement fund investments the game is short enough that Kelly takes too much risk of short-term losses and you should bet less than Kelly

Which is one of the justifications for pension funds and annuities: by having a much longer timespan than any one retiree, they can make larger Kelly bets and see larger returns on investment, with benefits either to the retirees they are paying or to the larger economy. Hanson says that this implies that eventually the economy will be dominated by Kelly players.

comment by DanielLC · 2013-01-06T06:03:31.964Z · score: 1 (1 votes) · LW(p) · GW(p)

"the utility-maximizing strategy is 100% all-in bets"

Not quite. It's going all-in when the expected value is greater than one, and not betting anything when it's less. If you have a 51% chance of doubling your money, go all in. If you have a 49% chance, don't bet anything. In fact, bet negative if that's allowed.
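
The linear-utility rule described here is just a threshold on the edge; a minimal sketch:

```python
def linear_utility_stake(p: float, b: float) -> float:
    """With utility linear in wealth, stake everything on a
    positive-edge bet and nothing otherwise."""
    edge = b * p - (1 - p)       # expected gain per unit staked
    return 1.0 if edge > 0 else 0.0

print(linear_utility_stake(0.51, 1.0))  # 51% to double: all in -> 1.0
print(linear_utility_stake(0.49, 1.0))  # 49% to double: stay out -> 0.0
```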

comment by CarlShulman · 2013-01-07T02:14:18.941Z · score: 1 (1 votes) · LW(p) · GW(p)

Right, and Kelly allocation is 0 for negative EV bets.

comment by jsteinhardt · 2013-01-06T05:32:07.952Z · score: 0 (0 votes) · LW(p) · GW(p)

Carl, thanks, this is great!

comment by DanielLC · 2013-01-06T06:02:00.279Z · score: 1 (1 votes) · LW(p) · GW(p)

In order for that to be true, you have to define "in the long run" in such a way that basically begs the question.

If you define "in the long run" to mean the expected value after that many bets, the Kelly criterion is beaten by taking whatever bet has the highest expected value. For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment; the Kelly criterion says not to take it, since losing everything has infinite disutility. If you don't take it, your expected value is what you started with. If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.
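
The arithmetic in this comment checks out directly:

```python
def all_in_after(n: int):
    """n repeated all-in bets, each 50% to lose everything and
    50% to quadruple the stake (starting bankroll 1)."""
    p_survive = 0.5 ** n
    wealth_if_survive = 4.0 ** n
    return p_survive, wealth_if_survive, p_survive * wealth_if_survive

p, w, ev = all_in_after(10)
print(p, w, ev)   # ~0.001, 1048576.0, 1024.0 (= 2**10)
```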

comment by Vaniver · 2013-01-06T19:08:53.904Z · score: 3 (3 votes) · LW(p) · GW(p)

For example, suppose you have a bet that has a 50% chance of losing everything and a 50% chance of quadrupling your investment, the Kelly criterion says not to take it, since losing everything has infinite disutility.

A bet where you quadruple your investment has a b of 3, and p is .5. The Kelly criterion says you should bet (b*p-q)/b, which is (3*.5-.5)/3, which is one third of your bankroll every time. The expected value after n times is (4/3)^n.

The assumption of the Kelly criterion is that you get to decide the scale of your investment, and that the investment scales with your bankroll.

If you take it n times, you have a 2^(-n) chance of having 4^n times as much as you started with, which gives an expected value of 2^n.

Indeed, but the probability that the Kelly bettor does better than that bettor is 1-2^(-n)!
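
These numbers can be verified directly (the 1 − 2^(−n) figure is just the probability that the all-in bettor has gone bust by bet n, while the Kelly bettor always keeps positive wealth):

```python
def kelly_vs_all_in(n: int):
    """For the 50%-quadruple-or-bust bet (p = 0.5, net odds b = 3):
    Kelly fraction, Kelly EV after n bets, and the probability the
    Kelly bettor ends ahead of an all-in bettor."""
    p, b = 0.5, 3.0
    f = (b * p - (1 - p)) / b                          # = 1/3
    per_bet_ev = p * (1 + b * f) + (1 - p) * (1 - f)   # = 4/3
    return f, per_bet_ev ** n, 1 - 0.5 ** n

f, ev, p_ahead = kelly_vs_all_in(3)
print(f, ev, p_ahead)   # 1/3, (4/3)**3 ~ 2.37, 0.875
```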

comment by jsteinhardt · 2013-01-06T06:07:03.743Z · score: 2 (2 votes) · LW(p) · GW(p)

I think "in the long run" is used in the same sense as for the law of large numbers. The reason we get a different result is that the results of a bet constrain the possible choices for future bets, and it basically turns out that bets are roughly multiplicative in nature, which is why you want to maximize something like log(x) (because if x is multiplicative, log(x) is additive and the law of large numbers applies; that's not a proof, but it's the intuition).

comment by Vaniver · 2013-01-06T03:32:30.158Z · score: 2 (2 votes) · LW(p) · GW(p)

An open question, to me at least, is if it possible to generalise the Kelly criterion to arbitrary probability distributions.

You mean, the potential actions are discrete but the potential outcomes for those actions are continuous, with a probability measure over those outcomes, or that there is a non-discrete set of possible actions, or something else?

Also, how can it be that integration over all payoffs for constant expected value evaluates as infinity?

I'm not sure I'm understanding this correctly. Are you asking how the St. Petersburg Paradox works?

Finally, how would a similar criterion look like for other forms of utility functions?

Before you take the derivative with respect to Delta, apply the desired utility function, and then take the derivative. (Note that linear utility functions behave the same as logarithmic utility functions, and Wikipedia's treatment assumes a linear utility function, not a logarithmic one.)
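
This recipe can be sketched for a single bet; here a plain grid search stands in for taking the derivative, and the sqrt utility is an assumed example (less risk-averse than log, so it stakes more):

```python
import math

def best_fraction(p, b, utility, steps=100_000):
    """Stake fraction maximizing expected utility of next-period
    wealth, p * U(1 + b*f) + (1-p) * U(1 - f), by grid search."""
    q = 1 - p
    best_f, best_eu = 0.0, utility(1.0)
    for i in range(1, steps):
        f = i / steps
        eu = p * utility(1 + b * f) + q * utility(1 - f)
        if eu > best_eu:
            best_f, best_eu = f, eu
    return best_f

# Log utility recovers the ordinary Kelly fraction (b*p - q)/b = 0.10:
print(best_fraction(0.55, 1.0, math.log))
# sqrt utility is less risk-averse than log and stakes about 0.198:
print(best_fraction(0.55, 1.0, math.sqrt))
```

Note that for non-logarithmic utilities this one-period optimum need not match the optimum over a whole sequence of bets, which is where the finite-lifetime extension comes in.
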

Another extension you can do is to make use of a finite lifetime, which scraps the assumption that K/N approaches p in the limit. With finite N, you can discover what Delta maximizes the probabilistically weighted mean of the utilities.

comment by Metus · 2013-01-07T05:01:30.097Z · score: 0 (0 votes) · LW(p) · GW(p)

You mean, the potential actions are discrete but the potential outcomes for those actions are continuous, with a probability measure over those outcomes, or that there is a non-discrete set of possible actions, or something else?

Yes, potential actions are discrete and outcomes are arbitrarily distributed.

I'm not sure I'm understanding this correctly. Are you asking how the St. Petersburg Paradox works?

No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.

Before you take the derivative with respect to Delta, apply the desired utility function, and then take the derivative.

Interesting. I should try this later.

(Note that linear utility functions behave the same as logarithmic utility functions, and Wikipedia's treatment assumes a linear utility function, not a logarithmic one.)

The Kelly criterion is the natural result when assuming a logarithmic utility function. For a linear utility function it arises if the actor maximizes expected growth rate.

comment by Vaniver · 2013-01-07T22:45:51.268Z · score: 1 (1 votes) · LW(p) · GW(p)

Yes, potential actions are discrete and outcomes are arbitrarily distributed.

It seems like this paper or this paper might be relevant to your interests. (PM me your email if you don't have access to them.)

No, I mean that the Kelly criterion says that allocation to a bet should be proportional to expected value over payoff. If I hold expected value constant and integrate over payoff the integral diverges. Intuitively I would expect to see a finite integral, reflecting that Kelly restricts how much risk I should be willing to take.

Kelly tells you how much risk you should be willing to take for a particular b; integrating over b is not meaningful, since it's integrating over multiple bets. (Note that f is E/b, if E is the expected value, and 1/x diverges. Since p is capped by 1, then E is capped by b, and the maximum risk you should take is betting everything, if p=1 i.e. it's a sure thing.)

If you put a probability p(b) on any particular payout, you might get something meaningful out of integrating p(b)E/b, but it's not clear to me that's the right way to do things.
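
The identity f = E/b and the divergence are easy to check numerically (the log-rate of the divergence is an added observation, not from the comment):

```python
import math

def kelly_fraction(p, b):
    q = 1 - p
    return (b * p - q) / b

# f equals E/b, where E = b*p - q is the expected value per unit staked
# (since p <= 1, E <= b and so f <= 1, i.e. at most the whole bankroll):
p, b = 0.6, 2.0
E = b * p - (1 - p)
print(kelly_fraction(p, b), E / b)   # both 0.4

# Holding E fixed while b grows, f = E/b -> 0, but the integral of
# E/b db grows like E*log(b_max), hence diverges as b_max -> infinity:
E, db = 0.5, 0.001
for b_max in (10, 100, 1000):
    n = int((b_max - 1) / db)
    integral = sum(E / (1 + db * k) for k in range(1, n + 1)) * db
    print(b_max, round(integral, 3), round(E * math.log(b_max), 3))
```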

Interesting. I should try this later.

It won't work out very prettily, but it is instructive. Basically, that tells you how much your bet should have differed from Delta, given what happened. You can then figure out what would have been optimal for that sequence, then do a weighted sum over sequences. (If your utility function isn't scale invariant, and only log is, then you need information on how long the game runs; if you're allowed to change the fraction of your wealth that you put up each time, then it's an entirely different problem.)

comment by thescoundrel · 2013-01-06T08:15:50.326Z · score: 1 (1 votes) · LW(p) · GW(p)

I made a comment early this week on a thread discussing the lifespan dilemma, and how it appears to untangle it somewhat. I had intended to see if it helped clarify other similar issues, but haven't done so yet. I would be interested in feedback; it seems possible that I have completely misapplied it in this case.

comment by [deleted] · 2013-01-06T22:37:37.711Z · score: 0 (0 votes) · LW(p) · GW(p)

While we're using the Kelly criterion, we should probably resolve its paradox to avoid going down its own "garden path" equivalent of the lifespan dilemma.