Probability puzzle

post by malthrin · 2011-11-28T21:33:36.850Z · LW · GW · Legacy · 26 comments

I came up with this puzzle after reading Vaniver's excellent post on the Value of Information. I enjoyed working it out over Thanksgiving and thought I'd share it with the rest of you.

 

Your friend holds up a curiously warped coin. "Let's play a game," he says. "I've tampered with this quarter. It could come up all heads, all tails, or any value in between. I want you to predict a coin flip; if you get it right, I'll pay you $1, and if you're wrong, you pay me $3."

"Absolutely, on one condition," you reply. "We repeat this bet until I decide to stop or we finish N games."

What is the minimum value of N that lets you come out ahead on average?

 

Each game, you may choose heads or tails, or to end the sequence of bets with that coin. Assume that all heads:tails ratios are equally likely for the coin.
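Before peeking at solutions, it can help to experiment empirically. Here is a minimal Monte Carlo harness (a sketch; play and the strategy interface are names invented here, not part of the puzzle):

```python
import random

def play(strategy, N, rng=random.Random(0)):
    # One session: the coin's heads-probability is drawn uniformly from (0, 1),
    # then up to N flips are played. strategy(heads, tails) sees the counts so
    # far and returns 'H', 'T', or 'stop'. Win $1 on a correct guess, lose $3
    # on a wrong one.
    p = rng.random()
    heads = tails = winnings = 0
    for _ in range(N):
        guess = strategy(heads, tails)
        if guess == 'stop':
            break
        flip = 'H' if rng.random() < p else 'T'
        winnings += 1 if guess == flip else -3
        if flip == 'H':
            heads += 1
        else:
            tails += 1
    return winnings
```

Averaging play over many sessions estimates a strategy's expected value for a given N.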

 

edit: since a couple people have gotten it, I'll link my solution: http://pastebin.com/XsEizNFL

26 comments


comment by [deleted] · 2011-11-29T00:12:38.542Z · LW(p) · GW(p)

Here's a sketch of a (possibly bad) upper bound.

We will use the following strategy, parametrized by two constants X and Y. First, we will make the bet X times, guessing randomly each time. If all X coin flips were NOT the same, we stop. If they were, we keep going for Y more flips, each time guessing the same outcome that happened for the first X coin flips.

We lose $X in expectation for the first X flips, no matter what. Our expected winnings on the longer sequence of Y flips, in terms of the unknown probability p, are p^X * (4p - 3) * Y, plus a similar term for (1 - p). Integrated from 0 to 1, these give 2Y(4/(X+2) - 3/(X+1)), which is positive for X > 2, so we just have to choose Y to make this bigger than X.

The total number of flips is X + Y, which is minimized (over the integers) when X = 3 and Y = 30, for a total of 33 coin flips. This just lets us break even, so we bump Y up to 31, for N=34 and an expected profit of ten cents.
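The arithmetic above is easy to check with exact fractions (a quick sketch; expected_profit is a name introduced here):

```python
from fractions import Fraction

def expected_profit(X, Y):
    # Lose $X in expectation on the X random guesses; the streak-following
    # phase contributes 2*Y*(4/(X+2) - 3/(X+1)) after integrating over p.
    return -X + 2 * Y * (Fraction(4, X + 2) - Fraction(3, X + 1))
```

expected_profit(3, 30) is exactly 0, and expected_profit(3, 31) is 1/10 of a dollar, matching the numbers above.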

Obviously this can be improved -- we should be using our partial information about the coin at each step, instead of making just one decision 3 coin flips in.

Replies from: None
comment by [deleted] · 2011-11-29T01:46:23.867Z · LW(p) · GW(p)

Actually it seems like the optimal strategy is to cynl hagvy guvegrra pbva syvcf be hagvy gur engvb bs urnqf gb gnvyf frra fb sne vf orgjrra guerr gb bar naq bar gb guerr (which, apart from the value of N, seems intuitive enough). I checked this by brute force and computer, but maybe there is a cleaner solution.
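For readers who want to reproduce the brute force, optimal play is a small dynamic program over (heads seen, tails seen, flips remaining), using the Laplace rule of succession for the predictive probability under the uniform prior (a sketch; best_value is a name introduced here):

```python
from functools import lru_cache

def best_value(N):
    # Expected profit of optimal play over at most N flips, with the option
    # to stop at any point (including before the first flip).
    @lru_cache(maxsize=None)
    def V(h, t, k):
        if k == 0:
            return 0.0
        p = (h + 1) / (h + t + 2)  # Laplace rule of succession: P(next is heads)
        heads = p * (1 + V(h + 1, t, k - 1)) + (1 - p) * (-3 + V(h, t + 1, k - 1))
        tails = (1 - p) * (1 + V(h, t + 1, k - 1)) + p * (-3 + V(h + 1, t, k - 1))
        return max(0.0, heads, tails)  # 0.0 = stop betting here
    return V(0, 0, N)
```

Since stopping immediately is always allowed, best_value(N) stays at 0 until betting first becomes worthwhile; it first turns positive at N = 13, the value confirmed in the reply below.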

Replies from: malthrin
comment by malthrin · 2011-11-29T01:53:22.189Z · LW(p) · GW(p)

That N is correct, or at least it's what I calculated. Nice work.

comment by othercriteria · 2011-11-29T00:31:53.770Z · LW(p) · GW(p)

Please clarify your setup. I'm almost certain that you don't want to put a uniform distribution over the "heads:tails ratios" covering (0, +∞). Rather you mean a uniform distribution of the heads probability over (0, 1), right?

Replies from: None, Manfred
comment by [deleted] · 2011-11-29T00:45:18.829Z · LW(p) · GW(p)

Considering that a uniform distribution on (0, +∞) does not exist, I find this very likely.

Replies from: Matt_Simpson, mwengler
comment by Matt_Simpson · 2011-11-29T01:11:11.070Z · LW(p) · GW(p)

Well, you could use an improper prior. The measure exists, it just isn't a probability measure. The prior is over the ratio of heads to tails, which is a positive, unbounded real number, so U(0,1) is certainly not appropriate. However, an improper prior is also not what you would use for the correct Bayesian calculation, though it may be useful as an approximation.

comment by mwengler · 2011-12-01T16:03:56.416Z · LW(p) · GW(p)

I'm with the intuitivists, sort of. Do the problem for a uniform distribution on (0,1000000) and (0,1000000000) and if the answers are really close to each other, you win.

comment by Manfred · 2011-11-29T03:08:00.052Z · LW(p) · GW(p)

Yeah, if the distribution looked anything like that, the answer would just be N=1: bet heads. So in the interest of interesting problems, it should be interpreted as a uniform distribution over P(heads).

comment by shminux · 2011-11-28T23:22:42.444Z · LW(p) · GW(p)

Assume that all heads:tails ratios are equally likely for the coin.

Presumably there is still a fixed (frequentist) probability of a flip being heads; otherwise your coin either has memory or is subject to unknown external interference.

You have to predict at least 75% of flips accurately to come out ahead, and there is no way to do that with a fair(ish) coin. So even with a perfect model of the coin, you will likely lose if the coin is only moderately unfair (25%-75% heads), regardless of N.

Maybe I misunderstand your setup.

Replies from: Caspian
comment by Caspian · 2011-11-28T23:34:31.082Z · LW(p) · GW(p)

You can stop playing early if the coin appears too unbiased, to limit losses, but keep playing for the full N turns if the coin seems predictable enough.

Replies from: malthrin
comment by malthrin · 2011-11-28T23:41:44.061Z · LW(p) · GW(p)

Right. The coin has a fixed value for P(heads), set when your friend tampered with it. You just don't know what it is.

Replies from: shminux
comment by shminux · 2011-11-29T00:12:09.552Z · LW(p) · GW(p)

OK, so you are averaging over all possible flip odds, assuming a uniform distribution of them. That's what "Assume that all heads:tails ratios are equally likely for the coin." means.

Your obvious best playing strategy is to look at the history of flips and trust the apparent bias. Assuming that N is much larger than the time it takes for the apparent bias to settle to the real one (perfect modeling), your odds of winning are max(p,1-p). Let's assume p > 0.5. Your expected payout per flip is 1p-3(1-p)=4p-3. Averaged over 0.5<p<1, this gives 0. In other words, even if the coin bias is given to you, you do not come out ahead when averaged over all biases, regardless of N.
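That zero average falls out of a one-line integral (a quick check; mean_payoff is a name introduced here):

```python
def mean_payoff():
    # Average of the per-flip payoff 4p - 3 over p uniform on (0.5, 1):
    # evaluate the antiderivative of 4p - 3 and divide by the interval width.
    def antiderivative(p):
        return 2 * p ** 2 - 3 * p
    return (antiderivative(1.0) - antiderivative(0.5)) / 0.5
```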

Hmm, what else am I missing?

Replies from: malthrin
comment by malthrin · 2011-11-29T00:33:40.993Z · LW(p) · GW(p)

You're missing the third option - the choice to stop playing.

Replies from: thomblake
comment by thomblake · 2011-11-29T00:48:44.376Z · LW(p) · GW(p)

So then can N be the dynamic result of a function, rather than a value?

Replies from: malthrin
comment by malthrin · 2011-11-29T00:52:59.137Z · LW(p) · GW(p)

No, you have to state N before you start flipping coins.

Replies from: thomblake, Matt_Simpson
comment by thomblake · 2011-11-29T01:11:31.689Z · LW(p) · GW(p)

Ah, I get it now. N is stated ahead of time, but you can (predictably) use the information so far to "stop playing" whenever you want, and that's part of the optimal strategy to consider in the expectation.

comment by Matt_Simpson · 2011-11-29T01:15:19.888Z · LW(p) · GW(p)

Yeah, which means if I'm trying to maximize my payout, I'll set N arbitrarily large and abort the game at sufficient evidence that the coin isn't predictable enough for the game to have positive expected value. If the coin is predictable enough, then I'll pump my friend for every last cent he has.

However, note that the problem as stated asks for the minimum value of N so that the game has positive expected value. (I'm not too sure why we're interested in this except as an exercise).

edit: just clarifying for others. Not that I think you misunderstood.

comment by Lapsed_Lurker · 2011-11-28T23:04:00.119Z · LW(p) · GW(p)

...It could come up all heads, all tails, or any value in between....

What is the minimum value of N that lets you come out ahead on average?

I'm pretty sure that in the region of curiously-warped-coin-space near 'fair', you're not going to be able to come out ahead on average.

It's been nearly a couple decades since I studied any probability, but if these problems keep turning up, I might look it up and refresh my memory of it.

comment by magfrump · 2011-11-29T10:27:33.672Z · LW(p) · GW(p)

Nffhzvat lbh nobeg nsgre gur svefg syvc gung pbzrf hc qvssreragyl sebz gur svefg syvc rire, lbh erpbhc lbhe fhax pbfg bs $1.33 nsgre svsgrra syvcf (hayrff zl zngu vf qenfgvpnyyl naq jrveqyl jebat; vg qbrf frrz vapbafvfgrag jvgu bguref'...) Fvapr bgure oenapurf fubhyq unir cbfvgvir rkcrpgrq inyhr (v.r. avar gnvyf bar urnqf lbh fubhyq cebonoyl xrrc tbvat) vg zvtug or yrff guna gung. Vs lbh xrrc tbvat nal gvzr lbh unir cbfvgvir rkcrpgrq inyhr, lbh fubhyq rkcrpg gb erpbhc pbfgf nsgre whfg gjryir pbva syvcf.

vg nyfb qbrf abg nccrne gung gur hgvyvgl vf obhaqrq nf A->vasvavgl, ohg vg qbrf tebj cerggl fybjyl. Sbe rknzcyr, gb rkcrpg gb znxr gjragl-svir qbyynef, lbh'q unir gb unir A nyzbfg n uhaqerq.

Replies from: magfrump
comment by magfrump · 2011-11-29T10:31:08.819Z · LW(p) · GW(p)

I used essentially the same strategy as FAWS.

comment by faul_sname · 2011-11-29T05:26:26.061Z · LW(p) · GW(p)

This seems to be a good opportunity to use a trick I figured out for approximating the probability of a binary event (I'm sure I'm not the first to discover it). The trick goes as follows: start with a probability distribution P(x) = 1. Each time the event happens, multiply P(x) by x; each time it doesn't, multiply by 1-x. After doing this for your data, renormalize so that the area beneath the curve is 1: divide the function by its integral from 0 to 1. If you have a range of outcomes, you can multiply by the function representing those outcomes to get expected utility. In this case, that function can be modeled by O(x) = 4x - 3.

If we assume the coin is biased towards heads (if it's biased towards tails, just swap H and T in the following reasoning), we find through some math and puzzling that the expected value of the next bet is (N - 4T - 2)/(N + 2), where H is the number of heads, T is the number of tails, and N = H + T is the number of tosses so far. The probability of reaching any given state is 1/(N + 1). For the first two flips, your expected value is negative (-$1 and -$0.33 for the first and second respectively) no matter what happens, so we can count these as sunk costs. So our expected value is as follows here:

It took me 19 flips to come up with a positive expected value, though my answer to the original question is "as many as the person offering this can be convinced to give me".
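For the curious, the multiply-and-renormalize trick is only a few lines if the distribution is stored as a list of polynomial coefficients (a sketch; integral, update, and expected_payoff_heads are names invented here):

```python
def integral(c):
    # integral from 0 to 1 of sum(c[k] * x**k)
    return sum(a / (k + 1) for k, a in enumerate(c))

def update(c, heads):
    # Multiply the unnormalized distribution by x (heads) or by 1 - x (tails).
    if heads:
        return [0.0] + c
    return [a - b for a, b in zip(c + [0.0], [0.0] + c)]

def expected_payoff_heads(c):
    # E[4x - 3] under the normalized distribution: the payoff of betting heads.
    return (4 * integral([0.0] + c) - 3 * integral(c)) / integral(c)
```

Starting from c = [1.0] and updating on the observed flips reproduces the Laplace-rule numbers: after one head the expected payoff of betting heads is -1/3, and after two heads it is exactly 0.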

Replies from: FAWS
comment by FAWS · 2011-11-29T16:54:58.881Z · LW(p) · GW(p)

This seems to be a good opportunity to use a trick I figured out for approximating the probability of a binary event

That's just Bayesian updating. P(H|E)=P(H)*P(E|H)/P(E)

P(x)=P(H), x=P(E|H), and the integral of the previous step is P(E), since that's the whole point of your probability function.

Replies from: faul_sname
comment by faul_sname · 2011-11-29T17:57:36.334Z · LW(p) · GW(p)

So that's why it works... I knew it probably tied into Bayes, but I didn't know exactly how.

comment by FAWS · 2011-11-29T03:02:36.529Z · LW(p) · GW(p)

You can definitely break even with N=13, assuming the (unrealistic) uniform distribution between 0 and 1, and perhaps a bit earlier with a more sophisticated strategy.

Example strategy: choose randomly the first time, then the same side on all further throws. If the other side comes up before the 6th throw, stop (the 6th throw is the earliest point at which stopping when the other side comes up no longer improves your expected winnings on the next throw). Otherwise continue to N.

The probability of the same side coming up n times in a row is 2/(n+1), and the probability of the other side coming up for the first time on the (n+1)th throw is 2/(n^2 + 3n + 2).

Expected winnings each throw: 1st: -$1; 2nd: -$1/3; 3rd: $0; 4th: $1/10; 5th: $2/15; 6th-Nth: $1/7 each.
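Plugging FAWS's per-throw numbers in confirms the break-even point (a quick check; total_ev is a name introduced here, and the constant 1/7 tail is FAWS's figure for throws 6 through N):

```python
from fractions import Fraction

# Per-throw expected winnings under the strategy above, throws 1-5:
evs = [Fraction(-1), Fraction(-1, 3), Fraction(0), Fraction(1, 10), Fraction(2, 15)]

def total_ev(N):
    # Sum the first min(N, 5) throws, plus 1/7 for each throw from the 6th to the Nth.
    return sum(evs[:min(N, 5)]) + max(0, N - 5) * Fraction(1, 7)
```

total_ev(12) comes out to -1/10 while total_ev(13) is 3/70, so this strategy first comes out ahead at N = 13.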

comment by buybuydandavis · 2011-11-29T03:08:26.962Z · LW(p) · GW(p)

Spoiler Alert:

Vg qrcraqf ba lbhe cevbe sbe gur vagryyvtrapr bs lbhe sevraq. Vs ur xabjf jung ur vf qbvat, ur jvyy perngr n jnecrq pbva gung vf fgvyy 50-50 naq lbh jvyy trg lbhe pybpx pyrnarq ertneqyrff bs nyy lbhe snapl Onlrfvna whwvgfh.

N havsbez qvfgevohgvba ba (0,1) vf whfg gur jebat cevbe sbe n thl gung pbzrf hc jvgu n jnecrq pbva naq n cebcbfvgvba sbe n org ba vg. Vg'f nyjnlf orggre gb juvc bhg n yvggyr tbbq frafr orsber lbh juvc bhg lbhe snapl rdhngvbaf.

V'z n yvggyr qvfnccbvagrq gung ab bar frrzf gb unir tbggra guvf lrg. Hfr lbhe urnqf.

V'z pbafvqrevat jurgure gb fuhg zl lnc naq znxr fbzr zbarl ng gur arkg ybpny YJ zrrgvat. ... V'z gbb ynml.

Replies from: None
comment by [deleted] · 2011-11-29T03:25:28.634Z · LW(p) · GW(p)

Presumably the rest of us are more interested in solving a fun math problem than in being smartasses. It is trivially easy to set up an example in which the uniform prior is valid. It so happens that the warped-coin story is more entertaining to think about, but before dismissing the problem on a technicality you should consider addressing the spirit of the problem rather than the letter.