Is Rationality Maximization of Expected Value?

post by AnlamK · 2010-09-22T23:16:04.427Z · LW · GW · Legacy · 65 comments

Two or three months ago, my trip to Las Vegas made me ponder the following: if all gambles in the casinos have negative expected values, why do people still engage in gambling - especially my friends who are fairly well-versed in probability and statistics?

Suffice it to say, I still have not answered that question. 

On the other hand, this did lead me to ponder more about whether rational behavior always involves making choices with the highest expected (or positive) value - call this the Rationality-Expectation (R-E) hypothesis.

Here I'd like to offer some counterexamples that show R-E is clearly false - to me, at least. (In hindsight, these look fairly trivial, but some commenters on this site speak as if maximizing expectation is somehow constitutive of rational decision making - as I used to. So it may be interesting for those people at the very least.)

A is a gamble that shows that choices with negative expectation can sometimes lead to a net payoff.

B is a gamble that shows that choices with positive expectation can sometimes lead to net costs.

As I'm sure you've all noticed, expectation is only meaningful in decision-making when the number of trials in question can be large (or more precisely, large enough relative to the variance of the random variable in question). This, I think, is in essence another way of looking at the Weak Law of Large Numbers.

In general, most (all? few?) statistical concepts make sense only when we have trials numerous enough relative to the variance of the quantities in question.
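(A quick simulation sketch of this point, using the gamble-A numbers quoted in the comments below - win $100k with probability 0.99, lose $100M with probability 0.01. The per-trial average only settles near the true expectation once the number of trials is large relative to the gamble's enormous variance.)

```python
import random

def average_payoff(n_trials, p_win=0.99, win=100_000, loss=-100_000_000):
    """Average payoff per trial over n_trials independent plays of the gamble."""
    total = sum(win if random.random() < p_win else loss for _ in range(n_trials))
    return total / n_trials

random.seed(0)
expectation = 0.99 * 100_000 + 0.01 * (-100_000_000)  # -901,000 per trial
for n in (10, 100, 10_000, 1_000_000):
    print(n, "trials:", average_payoff(n), "vs expectation", expectation)
# With 10 trials the rare loss usually never fires and the average looks like +100,000;
# only for large n does the average drift down towards -901,000.
```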

This makes me ponder a deeper question, nonetheless.

Does it make sense to speak of probabilities only when you have numerous enough trials? Can we speak of probabilities for singular, non-repeating events?

65 comments

Comments sorted by top scores.

comment by mattnewport · 2010-09-23T00:03:16.001Z · LW(p) · GW(p)

Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.

If I can find another 99 people as confused as you I'll be a rich man.

Replies from: JGWeissman, AnlamK
comment by JGWeissman · 2010-09-23T00:06:26.815Z · LW(p) · GW(p)

If I can find another 99 people as confused as you I'll be a rich man.

You would also need them to have $100M available to lose.

Replies from: mattnewport
comment by mattnewport · 2010-09-23T00:07:11.290Z · LW(p) · GW(p)

You would also need them to have $100M available to lose.

That is a weakness with my plan.

Replies from: b1shop
comment by b1shop · 2010-09-23T00:08:40.458Z · LW(p) · GW(p)

Oh well. Fold the plan into your back pocket and wait for hyperinflation.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-09-28T06:28:20.453Z · LW(p) · GW(p)

Or just drop a few zeroes off of the numbers and do it now, as if you'd come up with the idea a couple hundred years ago and the inflation up to this point counts as 'hyper'.

comment by AnlamK · 2010-09-28T05:46:07.983Z · LW(p) · GW(p)

Wow... Thank you for this charitable interpretation. I'll try to respond.

(1) You don't have to construe the gamble as some sort of coin flip. It could also be something like "the weather in Santa Clara, California on 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 people (as confused as me) will not help you.

(2) I've specifically said that if you have enough trials to converge to the expectation (i.e. the point about the Weak Law of Large Numbers), then the point I'm making doesn't hold.

(3) Besides, suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

What's your x, sunshine? If 0.01 isn't small enough, pick a suitably small x. Nick Bostrom, in Pascal's Mugging, picks one in 10 quadrillion to demonstrate a very similar point. I picked 0.01 since I thought concrete values would demonstrate the point more clearly - I feel like they've turned out to be more confusing.

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

(4) Besides, even if we construe the gamble as being repeated like a coin toss: since 0.99^99 is about 0.37, you stand to lose about $10M (paying out 99 * $100k) with probability 0.37. I don't know about you, but I wouldn't risk $10M with those kinds of odds. It helps to be precise when you can and not to go with a heuristic like "on average there should be 1 W in every 100 trials"...
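A quick check of the numbers in point (4) from the seller's side (a sketch, assuming the gamble is booked with 99 takers):

```python
p_win = 0.99                    # probability a single taker wins their $100k
n = 99                          # "another 99 people as confused as you"
p_all_win = p_win ** n          # chance every taker wins, so the seller pays them all
payout_if_all_win = n * 100_000
expected_profit = n * (0.01 * 100_000_000 - 0.99 * 100_000)  # seller's expectation

print(f"P(all {n} takers win) = {p_all_win:.3f}")               # ~0.370
print(f"Seller pays out ${payout_if_all_win:,} in that case")   # $9,900,000, roughly $10M
print(f"Seller's expected profit = ${expected_profit:,.0f}")    # $89,199,000
```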

Replies from: mattnewport
comment by mattnewport · 2010-09-28T06:13:32.220Z · LW(p) · GW(p)

(1) You don't have to construe the gamble as some sort of coin flip. It could also be something like "the weather in Santa Clara, California on 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 people (as confused as me) will not help you.

A coin flip is not fundamentally a less singular non-repeating event than the weather at a specific location and specific time. There are no true repeating events on a macro scale if you specify location and time. The relevant difference is how confident you can be that past events are good predictors of the probability of future events. Pretty confident for a coin toss, less so for weather. Note however that if your probability estimates are sufficiently accurate / well-calibrated you can make money by betting on lots of dissimilar events. See for example how insurance companies, hedge funds, professional sports bettors, bookies and banks make much of their income.

(3) Besides, suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

I believe there is a sense in which small probabilities can be said to also have an associated uncertainty not directly captured by the simple real number representing your best guess probability. I was involved in a discussion on this point here recently.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T09:48:06.537Z · LW(p) · GW(p)

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

How small should x be? And if the argument does hold, are you going to have two different criteria for rational behavior - one for events where the probability of a positive outcome is 1-x, and one for events where it isn't?

And also, from Nick Bostrom's piece (formatting will be messed up):

Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life. ... Pascal hands over his wallet [to the Mugger].

Of course, by your reasoning, you would hand over your wallet. Bravo.

comment by JGWeissman · 2010-09-22T23:37:37.980Z · LW(p) · GW(p)

Maximize expected utility, not expected money.

Your intuition in these examples that maximizing expected money is wrong arises because you do not value money linearly on that scale.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T05:49:37.903Z · LW(p) · GW(p)

What exactly does maximizing expected utility yield in these particular cases?

For one, I could be convinced not to take A (0.01 could be too risky) but I would never take B.

I feel that if maximization of expected utility involves averaging the utilities of outcomes weighted by their probabilities, then it's going to suffer from similar difficulties.

Replies from: Meni_Rosenfeld
comment by Meni_Rosenfeld · 2010-09-28T12:42:20.979Z · LW(p) · GW(p)

What exactly does maximizing expected utility yield in these particular cases?

For one, I could be convinced not to take A (0.01 could be too risky) but I would never take B.

Depends on how much money you currently have. According to the simple logarithmic model, you should take gamble B if your net worth is at least $2.8M.
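A sketch of that calculation. The terms of gamble B aren't spelled out in the thread as it stands, so assume B simply mirrors A: win $100M with probability 0.01, lose $100k with probability 0.99 (an assumption, not a quote from the post):

```python
import math

def expected_log_wealth(net_worth, p=0.01, gain=100_000_000, loss=100_000):
    """Expected log-wealth after taking the (assumed) gamble B."""
    return p * math.log(net_worth + gain) + (1 - p) * math.log(net_worth - loss)

def take_gamble_b(net_worth):
    # Take B iff it raises expected log-wealth over just keeping the money.
    return expected_log_wealth(net_worth) > math.log(net_worth)

for w in (1e6, 2e6, 3e6, 10e6):
    print(f"net worth ${w:,.0f}: take B? {take_gamble_b(w)}")
# The decision flips between $2M and $3M, in the neighbourhood of the $2.8M figure above.
```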

comment by neq1 · 2010-09-22T23:31:39.000Z · LW(p) · GW(p)

In the first example, you couldn't play unless you had at least 100M dollars of assets. Why would someone with that much money risk 100M to win a measly 100K, when the expected payoff is so bad?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-23T00:27:31.748Z · LW(p) · GW(p)

Yeah, uhm, I figured I'd misunderstood that, because my second hypothesis was that someone was trolling us. Looking at the poster's previous comments I'm more inclined to think that he just missed the whole 'Bayes is god' meme.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T05:51:00.043Z · LW(p) · GW(p)

Sorry that talking about money led to confusion. I guess the point I was making was the following. See my response to mattnewport, i.e.:

Suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

comment by wedrifid · 2010-09-23T04:51:55.766Z · LW(p) · GW(p)

Your implied point about expected utility is way off but...

Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.

I would take it. I will probably gain $100k and if I lose $100M then I will just declare bankruptcy. This is approximately the decision banks make when they take irresponsible risks and can be expected to be bailed out by a similarly irresponsible government.

comment by GreenRoot · 2010-09-22T23:37:24.516Z · LW(p) · GW(p)

Does it make sense to speak of probabilities only when you have numerous enough trials?

No, probability theory also has non-frequency applications.

Can we speak of probabilities for singular, non-repeating events?

Yes. This is the core of a Bayesian approach to decision making. The usual interpretation is that the probabilities reflect your state of knowledge about events rather than frequencies of actual event outcomes. Try starting with the LW wiki article on Bayesian probability and the blog posts linked therefrom.

Replies from: Kaj_Sotala, AnlamK
comment by Kaj_Sotala · 2010-09-23T18:41:35.263Z · LW(p) · GW(p)

Assigning a non-repeating event the probability P means that, for a well-calibrated agent, if you assign 100 different things this probability then about 100 * P of them will actually occur. I believe this is a standard interpretation of Bayesian probability, and it puts things in terms of frequencies of actual event outcomes.

ETA: Alternatively, one may think of Bayesian probability as the answer to the question "if I believed this statement, then in which fraction P of all plausible worlds in which I ended up with this information would I be correct"?

Replies from: SoulAllnighter
comment by SoulAllnighter · 2010-09-26T11:52:13.296Z · LW(p) · GW(p)

I have to disagree with this interpretation. The whole point is that the frequency interpretation of probability can be a specific case of the Bayesian (probability = belief) interpretation, but not vice versa.

If I say I believe in the existence of aliens with 0.2 belief, I think it's non-intuitive and unrealistic that what I'm really saying is, "I think aliens exist in 20% of all plausible worlds". Apart from the difficulty of clearly defining 'plausible', the point of Bayesianism is that this simply represents my state of knowledge/belief.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-27T07:14:39.076Z · LW(p) · GW(p)

I find Bayesian probability to be meaningless unless you connect it to a pseudo-frequentist interpretation somehow. Sure, you can say "Bayesian probability measures my subjective belief in something", but in that case, what does having a 20% subjective belief in something actually mean, and how's it different from having an 80% subjective belief? You need some scheme of translating it from a meaningless number to an expectation, and all such translations (e.g. in terms of betting behavior) always end up being pseudo-frequentist somehow.

Replies from: Jack
comment by Jack · 2010-09-30T14:15:55.582Z · LW(p) · GW(p)

The traditional way of defining the degree of a belief held by some agent is by finding what the agent thinks is a fair wager on the proposition. Is that pseudo-frequentist in a way I'm not seeing?

comment by AnlamK · 2010-09-28T05:54:02.432Z · LW(p) · GW(p)

Obviously, this needs more discussion but the kind of thought I was trying to motivate was the following:

How is saying that a non-repeating singular event has a very small probability of occurring different from saying that it will not happen?

This was motivated by the lottery paradox - questions like: when you buy a lottery ticket, you don't believe you will win, so why are you buying it?

Examples like these sort of pull my intuitions towards thinking no, it doesn't make sense to speak of probabilities for certain events.

comment by Will_Sawin · 2010-09-23T00:08:53.288Z · LW(p) · GW(p)

The whole nonlinear utility thing makes this specific point wrong, but:

It seems like the main counter-intuitive part of expected utility theory (or counter-expected utility theory part of intuition) is just this type of question. See: Pascal's Mugging.

Humans tend to be loath to trade off high probabilities of small benefits for low probabilities of big benefits in cases where linearity is very plausible, such as # of people saved.

But people seem to just as often make the opposite mistake about various scary risks.

Are people just bad at dealing with small probabilities?

What does that mean for coming to a reflective equilibrium about ethics?

Replies from: mattnewport, Will_Newsome, AnlamK
comment by mattnewport · 2010-09-23T00:32:55.695Z · LW(p) · GW(p)

Are people just bad at dealing with small probabilities?

It seems like a reasonable heuristic that small probabilities are also likely to be uncertain probabilities (due to being associated with rare events and therefore limited numbers of observations). This may explain some of the apparent paradoxes around how people deal with low probability events but I'd have to think a bit more about what it implies.

Replies from: prase, Will_Newsome
comment by prase · 2010-09-23T06:36:13.354Z · LW(p) · GW(p)

uncertain probabilities

Although I probably agree with your point, the chosen formulation is weird. The uncertainty is hidden in the probability; "uncertain probabilities" is a sort of pleonasm. I like this comment, especially

The frequency with which a coin comes up heads isn't a probability, no matter how much it looks like one. This is what's going wrong in the heads of people who say things like "The probability is either 1 or 0, but I don't know which."

Replies from: mattnewport
comment by mattnewport · 2010-09-23T07:09:48.446Z · LW(p) · GW(p)

Although I probably agree with your point, the chosen formulation is weird. The uncertainty is hidden in the probability; "uncertain probabilities" is a sort of pleonasm.

I did spend some time thinking about exactly what this means after writing it. It seems to me there is a meaningful sense in which probabilities can be more or less uncertain and I haven't seen it well dealt with by discussions of probability here. If I have a coin which I have run various tests on and convinced myself it is fair then I am fairly certain the probability of it coming up heads is 0.5. I think the probability of the Republicans gaining control of Congress in November is 0.7 but I am less certain about this probability. I think this uncertainty reflects some meaningful property of my state of knowledge.

I tentatively think that this sense of 'certainty' reflects something about the level of confidence I have in the models of the world from which these probabilities derive. It also possibly reflects something about my sense of what fraction of all the non-negligibly relevant information that exists I have actually used to reach my estimate. Another possible interpretation of this sense of certainty is a probability estimate for how likely I am to encounter information in the future which would significantly change my current probability estimate. A probability I am certain about is one I expect to be robust to the kinds of sensory input I think I might encounter in the future.

This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner but I think it is meaningful information to consider as a human making decisions under uncertainty. In the context of the original comment, low probabilities are associated with rare events and as such are the kinds of thing we might expect to have a very incomplete model of or a very sparse sampling of relevant data for. They are probabilities which we might expect to easily double or halve in response to the acquisition of a relatively small amount of new sensory data.

Perhaps it's as simple as how much you update when someone offers to make a bet with you. If you suspect your model is incomplete or you lack much of the relevant data then someone offering to make a bet with you will make you suspect they know something you don't and so update your estimate significantly.

Replies from: Kaj_Sotala, prase
comment by Kaj_Sotala · 2010-09-23T18:30:07.310Z · LW(p) · GW(p)

It seems to me there is a meaningful sense in which probabilities can be more or less uncertain

Here's another example. Suppose you're drawing balls from a large bin. You know the bin has red and white balls, but you don't know how many there are of each.

After drawing two balls, you have one white and one red ball.

After drawing 100,000 balls, you have 50,000 white and 50,000 red balls.

In both cases you might assign a probability of .5 for drawing a white ball next, but it seems like in the n = 100,000 case you should be more certain of this probability than in the n = 2 case.

One could try to account for this by adding an extra criterion that specifies whether or not you expect your probability estimate to change. E.g. in the n = 100,000 case you're .5 certain of drawing a white ball next, and .99 certain of this estimate not changing regardless of how many more trials you conduct. In the n = 2 case you would still be .5 certain of drawing a white ball next, but only .05 (or whatever your prior) certain of this being the probability you'll eventually end up converging on.

This is the approach taken in Probabilistic Logic Networks, which uses 'indefinite probabilities' of the form <[L,U], b, k>. This stands roughly for "I assign a probability of b to the hypothesis that, after having observed k more pieces of evidence, the truth value I assign to S will lie in the interval [L, U]".
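A sketch of the ball-and-bin case: with a uniform prior, the posterior over the white-ball fraction is Beta(whites + 1, reds + 1), and its spread is one way to quantify how 'certain' the 0.5 estimate is:

```python
import math

def posterior_mean_and_sd(white, red):
    """Mean and standard deviation of the Beta(white+1, red+1) posterior (uniform prior)."""
    a, b = white + 1, red + 1
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

for white, red in [(1, 1), (50_000, 50_000)]:
    mean, sd = posterior_mean_and_sd(white, red)
    print(f"{white + red} draws: P(white) estimate {mean:.3f} +/- {sd:.4f}")
# 2 draws:      0.500 +/- 0.2236  -> the 0.5 could easily move a long way
# 100000 draws: 0.500 +/- 0.0016  -> the 0.5 is pinned down
```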

Replies from: mattnewport
comment by mattnewport · 2010-09-23T18:38:27.622Z · LW(p) · GW(p)

Yes. I think this sense of how 'certain' I am about a probability probably corresponds to some larger scale property of a Bayesian network (some measure of how robust a particular probability is to new input data) but for humans using math to help with reasoning it might well be useful to have a more direct way of working with this concept.

Replies from: jsteinhardt
comment by jsteinhardt · 2010-09-24T05:27:06.139Z · LW(p) · GW(p)

This is also a problem I have thought about a bit. I plan to think about it more, organize my thoughts, and hopefully make a post about it soon, but in the meantime I'll sketch my ideas. (It's unfortunate that this comment appeared in a post that was so severely downvoted, as fewer people are likely to think about it now.)

There is no sense in which an absolute probability can be uncertain. Given our priors, and the data we have, Bayes' rule can only give one answer.

However, there is a sense in which conditional probability can be uncertain. Since all probabilities in reality are conditional (at the very least, we have to condition on our thought process making any sense at all), it will be quite common in practice to feel uncertain about a probability, and to be well-justified in doing so.

Let me illustrate with the coin example. When I say that the next flip has a 50% chance of coming up heads, what I really mean is that the coin will come up heads in half of all universes that I can imagine (weighted by likelihood of occurrence) that are consistent with my observations so far.

However, we also have an estimate of another quantity, namely 'the probability that the coin comes up heads' (generically). I'm going to call this the weight of the coin since that is the colloquial term. When we say that we are 50% confident that the coin comes up heads (and that we have a high degree of confidence in our estimate), we really mean that we believe that the distribution over the weight of the coin is tightly concentrated about one-half. This will be the case after 10,000 flips, but not after 5 flips. (In fact after N heads and N tails, a weight of x has probability proportional to [x(1-x)] ^N.)

What is important to realize is that the statement 'the coin will come up heads with probability 50%' means 'I believe that in half of all conceivable universes the coin will come up heads', whereas 'I am 90% confident that the coin will come up heads with probability 50%' means something more along the lines of 'I believe that in 90% of all conceivable universes my models predict a 50% chance of heads'. But there is also the difference that in the second statement, the '90% of all conceivable universes' only actually specifies them up to the extent that our models need in order to take over.

I think that this is similar to what humans do when they express confidence in a probability. However, there is an important difference, as in the previous case my 'confidence in a probability' corresponded to some hidden parameter that dictated the results of the coin under repeated trials. The hidden parameter in most real-world situations is far less clear, and we also don't usually get to see repeated trials (I don't think this should matter, but unfortunately my intuition is frequentist).

comment by prase · 2010-09-23T07:35:39.853Z · LW(p) · GW(p)

This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner but I think it is meaningful information to consider as a human making decisions under uncertainty.

I don't think the key issue is the imperfect Bayesianism of humans. I suppose that the discussed certainty of a probability has a lot to do with its dependence on priors - the more sensitive the probability is to changes in priors we find arbitrary, the less certain it feels. Priors themselves feel the most uncertain, while probabilities obtained from evidence-based calculations, especially quasi-frequentist probabilities such as P(heads in next flip), depend on many priors, and a change in any single prior doesn't move them too far. Perfect Bayesians may not have the feeling, but they still have priors.

Replies from: Will_Sawin, mattnewport
comment by Will_Sawin · 2010-09-25T00:14:05.935Z · LW(p) · GW(p)

Sensitivity to priors is the same as sensitivity to new evidence. And when we're sensitive to new evidence, our estimates are likely to change, which is another reason they're uncertain.

The reason this phenomenon occurs is that we are uncertain about some fundamental frequency, or a model more complex than a simple frequency model, and probability(heads | frequency of heads is x) = x.

comment by mattnewport · 2010-09-23T17:50:53.401Z · LW(p) · GW(p)

I think there's something to what you say, but a perfect Bayesian (or an imperfect human for that matter) is conditional probabilities all the way down. When we talk about our priors regarding a particular question they are really just the output of another chain of reasoning. The boundaries we draw to make discussion feasible are somewhat arbitrary (though they would probably reflect specific mathematical properties of the underlying network for a perfect Bayesian reasoner).

Replies from: prase
comment by prase · 2010-09-24T10:43:37.388Z · LW(p) · GW(p)

Do you think the chain of reasoning is infinite? For actual humans there is certainly some boundary below which the prior no longer feels like the output of further computation, although such beliefs could have been influenced by earlier observations either subconsciously, or consciously with the fact forgotten later. Especially in the former case, I think the reasoning leading to such beliefs is very likely to be flawed, so it seems fair to treat such beliefs as genuine priors, even if, strictly speaking, they were physically influenced by evidence.

A perfect Bayesian, on the other hand, should be immune to flawed reasoning, but it still has to be finite, so I suppose it must have some genuine priors which are part of its immutable hardware. I imagine it in analogy with formal systems, which have a finite set of axioms (or an infinite set defined by a finite set of conditions), a finite set of derivation rules, and a set of theorems consisting of axioms and derived statements. For a Bayesian, the axioms are replaced by several statements with associated priors, Bayes' theorem is among the derivation rules, and instead of a set of theorems it has a set of encountered statements with attached probabilities. Possible issues are:

  • If such a formal construction is possible, there should be a lot of literature about it, and I am unaware of any (but I didn't try too hard to find it), and
  • I am not sure whether such an approach isn't obsolete in the light of discussions about updateless decision theories and similar stuff.
Replies from: mattnewport
comment by mattnewport · 2010-09-24T17:53:53.873Z · LW(p) · GW(p)

Do you think the chain of reasoning is infinite?

Not infinite but for humans all priors (or their non-strict-Bayesian equivalent at least) ultimately derive either from sensory input over the individual's lifetime or from millions of years of evolution baking in some 'hard-coded' priors to the human brain.

When dealing with any particular question you essentially draw a somewhat arbitrary line, lump millions of years of accumulated sensory input and evolutionary 'learning' together with a lifetime of actual learning, assign a single real number to it, and call it a 'prior' - but this is just a way of making calculation tractable.

comment by Will_Newsome · 2010-09-23T01:35:58.300Z · LW(p) · GW(p)

It seems like a reasonable heuristic that small probabilities are also likely to be uncertain probabilities (due to being associated with rare events and therefore limited numbers of observations).

The occurrence of very low probability events is also indicative of unaccounted-for structural uncertainty. Taking into account both where I find myself in the multiverse and thinking seriously about anthropic reasoning led to me being really confused (and I still am, but less so). I think it was good that I became confused and didn't just think "Oh, according to my model, a really low probability event just happened to me, how cool is that?" It wouldn't surprise me all that much if there was a basic evolutionary adaptation not to trust one's models after heavily unanticipated events, and this may generalize to being distrustful of small probabilities in general. (But I'm postulating an evolutionary adaptation for rationality based on almost no evidence, which is most often a byproduct of thinking "What would I do if I was evolution?", which is quite the fallacy.)

comment by Will_Newsome · 2010-09-23T00:33:06.528Z · LW(p) · GW(p)

What does that mean for coming to a reflective equilibrium about ethics?

Are you talking about CEV? Civilization as we know it will end long before people agree about metaethics.

Replies from: Will_Sawin
comment by Will_Sawin · 2010-09-25T00:17:42.980Z · LW(p) · GW(p)

Before CEV, we have to do a rough estimate of our personal extrapolated volition so we know what to do. One way to do this is to extrapolate our volition as far as we can see by, e.g., thinking about ethics.

I intuitively feel that X is good and Y is bad. I believe morality will mostly fit my intuitions. I believe morality will be simple. I know my intuitions, in this case, are pretty stupid. I can't find a simple system that fits my intuitions here. What should I do? How much should I suck up and take the counterintuitiveness? How much should I suck up and take complex morality?

These are difficult questions.

comment by AnlamK · 2010-09-28T06:00:19.733Z · LW(p) · GW(p)

The nonlinear utility of money?

Well, the point I was trying to make was supposed to be abstract and general. Nick Bostrom's Pascal's Mugging piece argues for a very similar (if not identical) point. Thanks for letting me know about this.

And yes, I'm bad at dealing with small probabilities. I feel that these evoke some philosophical questions about the nature of probability in general - or whatever we talk about when we talk about probabilities.

comment by ata · 2010-09-22T23:59:16.542Z · LW(p) · GW(p)

Does it make sense to speak of probabilities only when you have numerous enough trials?

No, the math of probability theory still works if you take probabilities as subjective degrees of belief. That is the foundation of Bayesianity, but even the frequency interpretation depends on subjective ignorance — if you had full knowledge of all information influencing the outcome of a given trial, you wouldn't be doing the trial, because you could predict the result. It depends on isolating certain causal factors and mind-projecting them as "random variables". In reality, they're not random — you just don't know what they are — and you can talk about your degree of knowledge about the result of 1 trial just as well as 1,000,000 trials.

comment by mattnewport · 2010-09-22T23:37:51.585Z · LW(p) · GW(p)

Gamblers are maximizing expected utility, not expected cash. That is all.

Replies from: Yvain, Furcas, AnlamK
comment by Scott Alexander (Yvain) · 2010-09-23T17:43:38.270Z · LW(p) · GW(p)

It's not all. Pramipexole and other dopamine agonist medications can cause compulsive gambling in previous non-gamblers as a side effect. That makes me think that the thrill of gambling has something to do with the dopamine system and the design of the human risk/reward system, and that compulsive gambling probably has some kind of organic cause that you couldn't find in the pure mathematics of expected utility.

comment by Furcas · 2010-09-22T23:46:23.906Z · LW(p) · GW(p)

Or they're just irrational.

Replies from: mattnewport
comment by mattnewport · 2010-09-22T23:56:43.063Z · LW(p) · GW(p)

I find it useful when trying to understand the behaviour of other human beings to start out by assuming that they are basically (imperfectly) rational but may have different values from me. It invokes less of a warm glow of smug superiority but generally leads to more accurate predictions.

Replies from: Furcas, ata
comment by Furcas · 2010-09-23T00:17:25.193Z · LW(p) · GW(p)

I find it useful when trying to understand the behaviour of other human beings to start out by assuming that they are basically (imperfectly) rational but may have different values from me.

So do I. I then look at the evidence and discover they're just irrational.

Seriously, most people don't lose hundreds or thousands of dollars in a few hours at a casino just for the enjoyment. They want money and they expect to win some.

Replies from: Will_Newsome, mattnewport
comment by Will_Newsome · 2010-09-23T01:46:20.574Z · LW(p) · GW(p)

Seriously, most people don't lose hundreds or thousands of dollars in a few hours at a casino just for the enjoyment. They want money and they expect to win some.

mattnewport was talking about gamblers, you're talking about the (small?) subset of irrational gamblers.

The real question can be solved by empiricism; anyone heading to Vegas soon and willing to do a survey? Ask: A) Do you believe that you will leave the casino with more money than you started? B) If you don't leave the casino richer, do you expect the experience to be satisfying anyway? (Except do a better job of optimizing the questions for clarity.) Ask a few hundred people, get some free drinks from the casinos, publish your results in an economics journal or a cognitive biases journal, present your findings to Less Wrong, get karma, die happy.

Replies from: JRMayne, ata
comment by JRMayne · 2010-09-24T20:53:05.014Z · LW(p) · GW(p)

Hey, I'll do the survey on me:

A: Yes. Of course, if I do go to Vegas soon, that's a fait accompli (I bet on the Padres to win the NL and the Reds to win the World Series, among other bets.)

But in general, yes. I expect to win on the bets I place. I go to Las Vegas with my wife to play in the sun and see shows and enjoy the vibe, but I go one week a year by myself to win cash money.

B. If I come back a loser, the experience can still be OK. But I'm betting sports and playing poker, and I expect to win, so it's not quite so fun to lose. That said, a light gambling win - not enough to pay for the hotel, say, leaving me down once expenses are considered - gives me enough hedons to incentivize coming back.

--JRM

comment by ata · 2010-09-23T01:52:21.499Z · LW(p) · GW(p)

If you don't leave the casino richer, do you expect the experience to be satisfying anyway?

Even if you're optimizing for enjoyment and satisfaction and fun, gambling isn't necessarily a great way to do that. Another good question to ask subjects who answer "yes" to questions A and B would be "How much money would you be willing to lose at the casino before that starts to outweigh your enjoyment of the experience?" or "How much money would you be willing to lose at the casino before you'd regret not choosing something that is (in your estimation) a more cost-effective route to the same amount of enjoyment?"

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2010-09-23T01:58:25.277Z · LW(p) · GW(p)

Those are good questions, and on Less Wrong I wouldn't be hesitant to ask them, but I figured they'd be beyond the ability of the average person to really think about. In my experience getting people to fill out surveys, they easily get indignant and frustrated when they can't understand a question or, perhaps more importantly, the possible motives behind the question. ("Is he trying to make me look like a fool? What an ass, trying to get status over me with his nerdy smarts!") Even if they did understand the question, I'd doubt their answer would be at all reflectively consistent; significantly less so than the answers to the other two questions.

comment by Will_Newsome · 2010-09-23T02:30:45.603Z · LW(p) · GW(p)

Taking into account my other comment, I think that perhaps it'd be best to ask the less informative but much simpler question "How much money have you set aside for gambling today?" before the other two questions.

comment by mattnewport · 2010-09-23T00:22:31.510Z · LW(p) · GW(p)

Most people at casinos are not problem gamblers, just as most people who drink are not problem drinkers. I know plenty of people (myself included) who gamble on occasion for fun but understand the odds.

comment by ata · 2010-09-23T00:14:57.988Z · LW(p) · GW(p)

More importantly, "x is being irrational" can be a fake explanation if it's given without further detail. Much better to point to a specific fallacy or bias that would explain their behaviour.

In this particular case, though, how is it a matter of "different values"? Would anybody participate in casino-style gambling if they were better at thinking about probabilities and utilities?

Replies from: Nic_Smith
comment by Nic_Smith · 2010-09-23T01:12:47.913Z · LW(p) · GW(p)

I have gambled in a casino or the like exactly once in my adult life, when on a cruise I had a quarter, 25 cents, which I did not wish to carry around with me for the rest of the week. So I decided to "try my luck" at the quarter-push machine in the casino. I did not win anything, but being able to tell that story was worth every penny.

comment by AnlamK · 2010-09-28T06:04:37.706Z · LW(p) · GW(p)

I think it's hard to enjoy gambling if you are sure you'll lose money, which is how I feel. I may be over-pessimistic.

Roulette gives you odds of 1.111 to 1 against if you bet on Red or Black, with an expectation of -0.053 on the dollar. So I may be over-pessimistic. See the wiki entry.
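A quick check of those figures, assuming they refer to an even-money bet on an American double-zero wheel:

```python
# 18 red, 18 black and 2 green pockets; a red/black bet pays even money.
p_win = 18 / 38
expected_value = p_win * 1 + (1 - p_win) * (-1)  # per dollar staked
odds_against = (38 - 18) / 18                    # ways to lose : ways to win
print(f"EV per dollar: {expected_value:.3f}, odds against: {odds_against:.3f} to 1")
# EV per dollar: -0.053, odds against: 1.111 to 1
```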

Replies from: mattnewport
comment by mattnewport · 2010-09-28T06:15:45.834Z · LW(p) · GW(p)

I think it's hard to enjoy gambling if you are sure you'll lose money, which is how I feel. I may be over-pessimistic.

Typical Mind Fallacy.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T09:59:40.568Z · LW(p) · GW(p)

Don't get over-excited. You are still losing money in a less than fair-odds situation.

And since most people don't stop gambling until they have run up some deficit, casinos usually make more than the odds alone would give them.

comment by CarlShulman · 2010-09-23T00:07:06.259Z · LW(p) · GW(p)

http://plato.stanford.edu/entries/paradox-stpetersburg/#DecMarUti

Replies from: AnlamK, JenniferRM
comment by AnlamK · 2010-09-28T06:05:31.161Z · LW(p) · GW(p)

Thanks, I already knew about this.

Also related is the Martingale gambling system.

comment by JenniferRM · 2010-09-25T00:54:54.049Z · LW(p) · GW(p)

Neato! Worth reading!

comment by Meni_Rosenfeld · 2010-09-27T15:44:01.615Z · LW(p) · GW(p)

Suppose someone offers you a (single trial) gamble C in which you stand to gain a nickel with probability 0.95 and stand to lose an arm and a leg with probability 0.05. Even though expectation is (-0.05arm -0.05leg + 0.95nickel), you should still take the gamble since the probability of winning on a single trial is very high - 0.95 to be exact.

Non-sarcastic version: Losing $100M is much worse than gaining $100K is good, regardless of the utility of money being nonlinear. This is something you must consider, rather than looking at just the probabilities - so you shouldn't take gamble A. This is easier to see if you formulate the problems with gains and losses you can actually visualize.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T06:07:26.881Z · LW(p) · GW(p)

Is the problem that 0.01 or 0.05 is too high?

Take a smaller value then.

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

Replies from: mattnewport, Meni_Rosenfeld
comment by mattnewport · 2010-09-28T06:19:19.939Z · LW(p) · GW(p)

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

In our world people do not place infinite value on their own lives.

Replies from: AnlamK
comment by AnlamK · 2010-09-28T09:51:47.919Z · LW(p) · GW(p)

There is nothing in what I wrote that implies people value their lives infinitely. People just need to value their lives highly enough such that flying on an airplane (with its probability of crashing) has a negative expected value.

Again, from Nick Bostrom's article:

"Pascal: I must confess: I’ve been having doubts about the mathematics of infinity. Infinite values lead to many strange conclusions and paradoxes. You know the reasoning that has come to be known as ‘Pascal’s Wager’? Between you and me, some of the critiques I’ve seen have made me wonder whether I might not be somehow confused about infinities or about the existence of infinite values . . .

Mugger: I assure you, my powers are strictly finite. The offer before you does not involve infinite values in any way. But now I really must be off; I have an assignation in the Seventh Dimension that I’d rather not miss. Your wallet, please!"

Replies from: mattnewport
comment by mattnewport · 2010-09-28T16:13:03.595Z · LW(p) · GW(p)

There is nothing in what I wrote that implies people value their lives infinitely. People just need to value their lives highly enough such that flying on an airplane (with its probability of crashing) has a negative expected value.

Yes, that is the point.

Your claim that people flying on planes are engaging in an activity that has negative expected value flatly contradicts standard economic analysis and yet provides no supporting evidence to justify such a wildly controversial position. The only way your claim could be true in general would be if humans placed infinite value on their own lives. Otherwise it depends on details of why they are flying and what value they expect to gain if they arrive safely and on the actual probability of a fatal incident.

Since you didn't mention in your original post under what circumstances your claim holds true, you did imply that you were making a general claim, and thus further implied that people value their lives infinitely.

comment by Meni_Rosenfeld · 2010-09-28T13:55:39.102Z · LW(p) · GW(p)

You can't have your cake and eat it too. If the probability is low enough, or the penalty mild enough, that the rational action is to take the gamble, then necessarily the expected utility will be positive.

Taking your driving example, if I evaluate a day of work as 100 utilons, my life as 10MU, and estimate the probability of dying while driving to work as 1/M, then driving to work has an expected gain of 90U.
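Checking that arithmetic with the same illustrative figures:

```python
p_die = 1e-6           # illustrative probability of dying on the commute ("1/M")
day_of_work = 100      # utilons
life = 10_000_000      # utilons ("10MU")
expected_gain = (1 - p_die) * day_of_work - p_die * life
print(expected_gain)   # ~90 utilons: a positive-expectation gamble despite the tiny chance of disaster
```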

comment by Dagon · 2010-09-23T05:27:15.162Z · LW(p) · GW(p)

As others have said, maximize expected utility, not expected dollars. Money being roughly logarithmic in value works pretty well, and the common advice is to pick gambles that maximize your expected log-net-worth.

For a more specific recommendation, see http://en.wikipedia.org/wiki/Kelly_criterion
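For the curious, a minimal sketch of the Kelly rule for a simple binary bet (the fraction of bankroll to stake; zero when the bet has no edge):

```python
def kelly_fraction(p_win, net_odds):
    """Kelly stake as a fraction of bankroll for a bet paying net_odds-to-1.

    f* = p - q / b, with p the win probability, q = 1 - p, and b the net odds.
    A non-positive value means the bet has no edge, so stake nothing.
    """
    f = p_win - (1 - p_win) / net_odds
    return max(f, 0.0)

print(kelly_fraction(0.6, 1.0))      # 0.2 -> stake 20% on an even-money bet you win 60% of the time
print(kelly_fraction(18 / 38, 1.0))  # 0.0 -> no edge on an even-money roulette bet, so bet nothing
```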

As to your final question, the answer is "yes". Probability can be applied to any unknown. A good description is in the middle of the quantum mechanics sequence: http://lesswrong.com/lw/oj/probability_is_in_the_mind/