Looking for an intuitive explanation of expected outcome

post by Blackened · 2012-06-20T01:33:45.560Z · LW · GW · Legacy · 15 comments


I'll first explain how I see expected outcome, because I'm not sure my definition is the same as the widely accepted definition.

If I have a 50% chance to win $10, I think of it as two alternative universes, the only difference being that in one of them I win $10 and in the other I win nothing. Then I treat the 50% chance as a 100% chance of being in both of them, divided by two. If winning $10 means I'll save myself one hour of work, then divided by two that's 30 minutes of work. In virtually all cases involving small sums of money, you can simply multiply the probability by the money (in this case, we get $5). Exceptions are cases like the one where I'm dying of an illness, I can't afford treatment, I have all the money I need except for the last $10, and there is no other way to obtain it. So if there's a 30% chance to save 10 people's lives, that's the same as saving 3 lives.
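To make the arithmetic concrete, here is a minimal sketch in Python of the multiplication I mean (the function name is mine; the numbers are the examples above):

```python
# Expected value as a probability-weighted sum of outcomes.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# 50% chance to win $10, 50% chance to win nothing -> $5
print(expected_value([(0.5, 10), (0.5, 0)]))  # 5.0

# 30% chance to save 10 lives, 70% chance to save none -> 3 lives
print(expected_value([(0.3, 10), (0.7, 0)]))  # 3.0
```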

If you have no idea what I'm talking about, then at least you can see proof of my problem: I find it hard to explain this idea to people, and impossible for some.

I'm not even sure the idea is correct. I once posted it on a math forum asking for evidence, but I didn't find any. So, can someone confirm whether it's true, and give any evidence?

And my main question is: how can I explain this in a way that people can understand it as easily as possible?

(It's possible that what I meant isn't clear - I'll check this thread later for that, and if it turns out to be the case, I'll edit the post, add more examples, and try to clarify and simplify.)

15 comments


comment by Benquo · 2012-06-20T03:45:28.211Z · LW(p) · GW(p)

Your understanding of mathematical expectation seems accurate, though the wording could be simplified a bit. I don't think that you need the "many worlds" style exposition to explain it.

One common way of thinking of expected values is as a long-run average. So if I keep playing a game with an expected loss of $10, then in the long run it becomes more and more probable that I'll lose an average of about $10 per game.
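A quick simulation makes this visible (a minimal sketch, using the $10 coin-flip game from the post; the sample sizes are arbitrary):

```python
import random

# A game that pays $10 with probability 0.5; its expected value is $5.
def play():
    return 10 if random.random() < 0.5 else 0

# The running average drifts toward the expected value as games accumulate.
for n in (10, 1_000, 100_000):
    average = sum(play() for _ in range(n)) / n
    print(f"after {n:>7} games: average winnings ${average:.2f}")
```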

Replies from: ksvanhorn
comment by ksvanhorn · 2012-06-20T19:08:32.427Z · LW(p) · GW(p)

You could write a whole book about what's wrong with this "long-run average" idea, but E. T. Jaynes already did: Probability Theory: The Logic of Science. The most obvious problem is that it means you can't talk about the expected value of a one-off event. I.e., if Dick is pondering the expected value of (time until he completes his doctorate) given his specific abilities and circumstances... well, he's not allowed to if he's a frequentist who treats probabilities and expected values as long-run averages; there is no ensemble here to take the average of.

Expected values are weighted averages, so I would recommend explaining expected values in two parts:

  • Explain the idea of probabilities as degree of confidence in an outcome (the Bayesian view);

  • Explain the idea of a weighted average, and note that the expected value is a weighted average with outcome probabilities as the weights.

You could explain the idea of a weighted average using the standard analogy of balancing a rod with weights of varying masses attached at various points, and note that larger masses "pull the balance point" towards themselves more strongly than do smaller masses.
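For example, here is a minimal sketch of the balance-point computation, with probabilities playing the role of masses and outcomes the role of positions along the rod:

```python
# The balance point of masses m_i placed at positions x_i along a rod
# is sum(m_i * x_i) / sum(m_i).  With probabilities as the masses
# (which sum to 1) and outcome values as the positions, the balance
# point is exactly the expected value.
def balance_point(masses_and_positions):
    total_mass = sum(m for m, _ in masses_and_positions)
    return sum(m * x for m, x in masses_and_positions) / total_mass

print(balance_point([(0.5, 10), (0.5, 0)]))  # 5.0 -- the $10 coin flip
print(balance_point([(0.3, 10), (0.7, 0)]))  # 3.0 -- the lives-saved example
```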

Replies from: Benquo
comment by Benquo · 2012-06-20T20:02:42.577Z · LW(p) · GW(p)

The question was:

how can I explain this in a way that people can understand it as easily as possible

You are correct that the "long-run average" description is slightly wrong. But the weighted-average explanation presumes a level of mathematical sophistication that almost no one who doesn't already know about expected value has. I suspect that at best that explanation will manage to communicate the idea that "expected value is complicated math."

It's also possible to shoehorn the intuitive "long-run average" explanation into a more mathematical one, by saying that when you repeat an experiment over and over again, the expected value is the limit that the long-run average converges toward.
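For completeness, the formal statement behind that convergence is the strong law of large numbers (stated here under the assumption of independent repetitions X_1, X_2, ... of the experiment with a finite mean):

```latex
\Pr\!\left(\lim_{n\to\infty}\frac{X_1 + X_2 + \cdots + X_n}{n} = \mu\right) = 1,
\qquad \text{where } \mu = \mathbb{E}[X_i].
```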

If you have enough time to explain the analogy of probability as a density (or a set of discrete masses) defined over the sample space, then you can explain that the expected value is the "center of mass," or less precisely the balance point, which is also simple and easy to understand.

comment by buybuydandavis · 2012-06-20T10:05:44.477Z · LW(p) · GW(p)

So if there's a 30% chance to save 10 people's lives, that's the same as saving 3 lives.

No, it is not "the same," i.e., equivalent for all purposes. The two scenarios agree on one particular statistic, the mean number of lives saved. That does not make them "the same" for all purposes. (Assuming there was also a 70% chance of saving 0 lives in the first scenario, which you didn't specify, but which seemed implied.)

It's too easy to equivocate here.

One could just as well say that the two scenarios do not have equal expected outcomes. The maximum likelihood outcome for the first is 0 lives saved, while the maximum likelihood outcome for the second is 3 lives saved.

Or, one could say that the value to you of your two scenarios is not the same, if you have a preference between them.

An unqualified "Same" is something to taboo. The "Same" according to what equivalence classes? According to what measure? Many don't even see the issue. Once they've calculated "sameness" according to some verbal or numerical equivalence class, they think that "logic dictates" they must treat the two things "the same" too. Wrong. Choose your equivalence classes according to your purpose, instead of constraining your choices according to your equivalence classes.
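As a minimal sketch of what different measures say about this "same" pair of scenarios (the statistics compared here are just illustrative):

```python
# Two scenarios with the same mean but different distributions:
#   A: 30% chance of saving 10 lives, 70% chance of saving 0
#   B: 100% chance of saving 3 lives
scenario_a = [(0.3, 10), (0.7, 0)]
scenario_b = [(1.0, 3)]

def mean(dist):
    return sum(p * v for p, v in dist)

def variance(dist):
    m = mean(dist)
    return sum(p * (v - m) ** 2 for p, v in dist)

def mode(dist):
    return max(dist, key=lambda pv: pv[0])[1]  # most likely outcome

for name, dist in (("A", scenario_a), ("B", scenario_b)):
    print(name, "mean:", mean(dist), "variance:", variance(dist), "mode:", mode(dist))
# A mean: 3.0 variance: 21.0 mode: 0   <- same mean, but the likeliest outcome is 0
# B mean: 3.0 variance: 0.0  mode: 3
```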

Replies from: Blackened
comment by Blackened · 2012-06-21T00:55:37.124Z · LW(p) · GW(p)

I do realize that, but I wasn't sure if my text was understandable in the first place, so I decided to keep it simple.

Using our world (where 200k-300k people die of natural causes every day), and using random people and circumstances in which saving 10 people would be 10/3 times better than saving 3 people, I argue that a 30% chance of saving 10 people (and a 70% chance of saving 0) is equivalent in terms of everything to a 100% chance of saving 3 people. (It probably requires a few more assumptions, because the cause of their death might be a special illness that could be researched if it kills 3 people but not if it kills 10.) So if my model of expected value is valid, it shouldn't matter which choice you pick.

But that's unnecessary and beside the point. I'd prefer to say that the one is equivalent to the other in terms of people saved in the moment, not in terms of the consequences of the choice.

comment by Shmi (shminux) · 2012-06-20T02:32:15.316Z · LW(p) · GW(p)

Expected value is defined as a weighted average, so that covers your first example. I have trouble understanding your setup in the second example: how is your lacking $10 to survive related to the 30% chance of saving 10 people?

comment by moridinamael · 2012-06-20T05:19:09.420Z · LW(p) · GW(p)

Maybe this will help, maybe not.

You meet a bored billionaire who offers you the chance to play a game. The outcome of the game is decided by a single coin flip. If the coin comes up heads, you win a million dollars. If it comes up tails, you win nothing.

The bored billionaire enjoys watching people squirm, so he demands that you pay $10,000 for a single chance to play this game.

If you are risk-neutral, you should be willing to pay any amount less than $500,000 to play the game. In fact, you can calibrate your actual risk tolerance by considering a series of similar questions, asking how much you would be willing to pay for a certain chance at a certain sum.

Most people will intuitively grasp that it "makes sense" to pay for the right to play the game, and that it stops seeming obvious at some value.

Replies from: private_messaging
comment by private_messaging · 2012-06-20T08:57:04.068Z · LW(p) · GW(p)

To be risk-neutral in money itself you need to be extremely rich or betting very tiny amounts (and at the very least you need to have $500,000 to bet).

If instead you are risk-neutral in the logarithm of your money (which is more realistic), the payoff is a > 0, and you start off with n > 0 in assets, then your expected utility after betting x is 0.5 ln(n - x) + 0.5 ln(n - x + a). The change in utility relative to not betting is 0.5 ln(n - x) + 0.5 ln(n - x + a) - ln(n), which is zero when 0.5 ln(n - x) + 0.5 ln(n - x + a) = ln(n). That equation has the solution x = 0.5 a + n - 0.5 sqrt(a^2 + 4 n^2).

Plugging in a $1,000,000 payoff and $100,000 in assets, you should bet up to about $90,000, which is somewhat surprising. Usually the utility function is not logarithmic, though: if you were to lose a lot of money you would have to deal with the logistics of moving to a cheaper apartment or the like, so people would be willing to bet less. Actually, (rational) people just compare two imagined outcomes directly to each other rather than converting each to a real number first. This is better when you can only partially imagine an outcome, because you can imagine both outcomes equally partially. Estimating two utilities and then comparing them runs into problems when the estimates are necessarily approximate.
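A quick numerical check of that closed-form solution (a minimal sketch; the variable names mirror the formulas above):

```python
import math

def break_even_bet(a, n):
    """Largest stake x a log-utility bettor with assets n will pay for a
    50/50 chance at payoff a: solves
    0.5*ln(n - x) + 0.5*ln(n - x + a) = ln(n)."""
    return 0.5 * a + n - 0.5 * math.sqrt(a**2 + 4 * n**2)

a, n = 1_000_000, 100_000
x = break_even_bet(a, n)
print(round(x))  # 90098 -- about $90,000, as claimed

# Sanity check: the expected change in log-utility at x is ~0.
delta = 0.5 * math.log(n - x) + 0.5 * math.log(n - x + a) - math.log(n)
print(abs(delta) < 1e-9)  # True
```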

edit: reddit is designed for people who use italics more often than algebra.

comment by roll · 2012-06-20T18:17:28.665Z · LW(p) · GW(p)

I think what may be confusing about expected outcome is the name. You don't actually expect to get $5 out of this :) . You don't even expect to get, say, exactly $5 million after 1 million games; that would be rather unlikely. You do expect to average $5 per game if you played infinitely many games, though, and if you are playing such games for small amounts of money you can choose which game to play based on expected outcome.

comment by Blackened · 2012-06-20T10:53:07.103Z · LW(p) · GW(p)

I'm glad to see that my idea was understood. I have read all the replies, but unfortunately I had already come up with all of those ideas when trying to prove the idea to myself.

You meet a bored billionaire who offers you the chance to play a game. The outcome of the game is decided by a single coin flip. If the coin comes up heads, you win a million dollars. If it comes up tails, you win nothing.

The bored billionaire enjoys watching people squirm, so he demands that you pay $10,000 for a single chance to play this game.

I've thought of this, but it's only an intuitive thing and doesn't directly prove my approach. Or if it does, I'm missing something.

One common way of thinking of expected values is as a long-run average. So if I keep playing a game with an expected loss of $10, then in the long run it becomes more and more probable that I'll lose an average of about $10 per game.

I've thought of this too, but all it does is yield a different probability, one closer to the expected outcome. It's still a probability, and it remains one as long as I don't reach an infinite number of trials.

Forget the intuitive explanation: is there any evidence at all that a 50% chance of winning $10 is the same as a 100% chance of winning $5, in terms of efficiency? I can hardly imagine the expected value approach not being valid, but I can't find evidence either. Most of the people I want to explain it to would understand it.

I have trouble understanding your setup in the second example: how is your lacking $10 to survive related to the 30% chance of saving 10 people?

It's not related. It was a separate example.

comment by wmorgan · 2012-06-20T02:37:06.444Z · LW(p) · GW(p)

You're explaining expected value, and it's absolutely true: it's the rule that tells you what decision to make.

If there's an intuitive explanation, I haven't found it yet. All I know is that there's a reliable cluster of people who

  • prefer a certain $500 to a 15% chance of $1,000,000;
  • will never bet with you on anything, no matter how sure they are;
  • call a $5 scratch ticket "paying five dollars for entertainment";
  • believe that it's impossible to be a professional gambler or poker player;
  • say things like, "the reason people lose money in stocks is they get too greedy...you have to put your money in, wait for the price to go up, then sell it!"

Even weirder are the ones who know the math, agree with you that something is a good bet for them to take, and then refuse to bet anyway! As if math and decisions occupied completely different worlds, and the heuristic "if you gamble, you'll lose" took precedence over EV.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-20T03:06:08.521Z · LW(p) · GW(p)

Even weirder are the ones who know the math, agree with you that something is a good bet for them to take, and then refuse to bet anyway!

I know a number of mathematically literate people who buy lottery tickets. Their usual justification is that they're paying for the happiness provided by the hope of winning.

Replies from: wmorgan
comment by wmorgan · 2012-06-20T03:19:27.879Z · LW(p) · GW(p)

Mathematically literate like grad students, or quants? I'd expect to hear that justification much more from the former group than the latter. It doesn't hold water, right?

Replies from: thomblake
comment by thomblake · 2012-06-20T14:24:32.235Z · LW(p) · GW(p)

Why is the top-level comment retracted?

Replies from: wmorgan
comment by wmorgan · 2012-06-20T15:19:12.455Z · LW(p) · GW(p)

Because someone downvoted it. If I had to guess why they did it, it'd probably be some combination of these:

  • It doesn't answer OP's question -- I think Blackened was asking something more specific than what I answered.
  • It comes across as overconfident (whoops)
  • It's needlessly personal (self-aggrandizing) -- the word "I" shouldn't appear in it at all.