What's the Value of Information?

post by johnlawrenceaspden · 2012-08-29T16:20:59.210Z · LW · GW · Legacy · 43 comments


I posted this problem to my own blog the other day. When I posted it, I thought it looked very easy, more fiddly than difficult:

 

The eccentric millionaire Oswald Mega walks into a bar and he says:

"This morning, I was teaching my newborn about Dungeons and Dragons. We took a couple of six-sided dice and rolled them, and wrote the results, which are just numbers from 2 to 12, on a piece of paper with 2D6 written at the top.

Then we took a twelve-sided die and wrote 1D12 at the top of another piece of paper, and then we rolled it lots of times and wrote down the results, numbers between 1 and 12, on the paper.

How she laughed at the difference in the patterns! Truly fatherhood is a joy.

Now, I've brought one of the pieces of paper with me, and if you can tell me which one it is, I'll give you £1000.

How much would you be willing to pay me to know the value of the first result on the sheet?"

 

I reasoned thus:

There's no reason that you should have any opinion on which piece of paper he's brought. So you start off thinking 50:50, and that leads you to believe that he's effectively just given you £500.

If he tells you a number, then your belief will change. Say he tells you 1, then you know that he's brought the 1D12 results, and so you're now able to tell him that, and collect your £1000. 

If he tells you 7, then that's twice as likely to be the 2D6 talking as the D12, and you should shift your prior to 1:2 in favour of the 2D6.

If you've got a prior of 1:2, then your guess (that it's the 2D6) is now worth about £667, on average.

So when you get a new number, your prior shifts, the bet changes value. Average over all the cases and that's what you'll pay to know the first number.

Using this reckoning, I thought the answer to the puzzle was £125.
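A quick way to check this reckoning is to enumerate the twelve possible first numbers. A Python sketch (mine, nothing Oswald supplied) that prices the information under a 50:50 prior:

```python
from fractions import Fraction as F

# Chance of each value 1..12 on the two sheets.
p2d6 = {n: F(max(0, 6 - abs(n - 7)), 36) for n in range(1, 13)}  # 0 for n=1
p1d12 = {n: F(1, 12) for n in range(1, 13)}

prior = F(1, 2)                # 50:50 on which sheet he brought
ev_before = prior * 1000       # a blind guess wins half the time

# Knowing the first number, guess whichever sheet makes it likelier.
win = sum(max(prior * p2d6[n], (1 - prior) * p1d12[n]) for n in range(1, 13))
ev_after = win * 1000

print(ev_after - ev_before)    # 125
```

The number raises your chance of winning from 1/2 to 5/8, which is where the £125 comes from.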

 

But now I'm not so sure, because the same reasoning tells you that if, for whatever reason, you start out 9:1 in favour of the 1D12, then the value of the new information is zero. (Because whatever the new information is, it won't be enough to change your mind).

But can that really be true? Because that implies that if Oswald keeps making you the same offer for £1, then you should keep turning it down.

But if he told you a hundred numbers, you'd be damned sure which piece of paper he'd brought. So surely they have some value over £1?

But maybe you say: "Well, you can't put a value on the information unless you know how many extra opportunities you'll get."

Really? I'm sure that I'd pay £1 for the number in the original problem, and sure that I wouldn't pay £1000. 

Where am I mis-thinking, and how should I calculate the answer to my puzzle?


Edit:

Just to clarify, if you buy the first number and it's a 2, and then you buy the second number and it's a 12, then I think you're now back in the same situation with a prior of 9:1 and an expected gain of £900. 

I think you'd be mad to stop buying numbers at this point, since there's £100 you're not certain of yet. But if I don't believe that the price is £0, why do I believe that the price for the first one is £125?


Edit II:

It seems that the opinion of most people is that the problem is under-determined, in the sense that you don't know what options are coming. Fair enough.

In which case, what's wrong with the intuition that your beliefs alone determine the worth of your option to guess?

And in the more specific version where Oswald charges a price of one penny for every result, and you can keep buying them one-by-one until you decide you're certain enough to guess, what criterion do you use to stop buying?


comment by faul_sname · 2012-08-29T19:21:35.107Z · LW(p) · GW(p)

Your conclusion that if you have a prior p(1d12) of 0.9, you should not spend even £1 to see the piece of paper is entirely correct, if counterintuitive. The reason is as follows (n is the number, 2d6 is the chance of the 2d6 producing that number, 1d12 is the chance of the 1d12 producing it, p(6|n)/p(12|n) is the likelihood ratio, and p(6|n) is the posterior probability of the 2d6 given a prior of 0.1 for the 2d6, i.e. 0.9 for the 1d12):

  n |  2d6 | 1d12 | p(6|n)/p(12|n) | p(6|n)
 ---+------+------+----------------+--------
  1 | 0/36 | 3/36 |      0/3       | 0.0000
  2 | 1/36 | 3/36 |      1/3       | 0.0357
  3 | 2/36 | 3/36 |      2/3       | 0.0689
  4 | 3/36 | 3/36 |      3/3       | 0.1000
  5 | 4/36 | 3/36 |      4/3       | 0.1290
  6 | 5/36 | 3/36 |      5/3       | 0.1562
  7 | 6/36 | 3/36 |      6/3       | 0.1818
  8 | 5/36 | 3/36 |      5/3       | 0.1562
  9 | 4/36 | 3/36 |      4/3       | 0.1290
 10 | 3/36 | 3/36 |      3/3       | 0.1000
 11 | 2/36 | 3/36 |      2/3       | 0.0689
 12 | 1/36 | 3/36 |      1/3       | 0.0357

As you can see, no matter what information you get, you will never get any piece of information that will convince you to pick differently, and so the value of information is 0. If, however, you had information that could bring p( 2d6 | n ) above 0.5, the value of information would be nonzero. However, for that you would either need a lower prior (in this case, p( 1d12 ) ≤ 2/3) or stronger evidence (such as 4 slips of paper: nothing less can possibly change your mind in this setup, even if all the slips were 7s).
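The table and the four-slips claim can both be reproduced mechanically. A Python sketch of the update (my own gloss on the comment, not faul_sname's code):

```python
from fractions import Fraction as F

p2d6 = {n: F(max(0, 6 - abs(n - 7)), 36) for n in range(1, 13)}
p1d12 = F(1, 12)

def post_2d6(numbers, prior_2d6=F(1, 10)):   # prior 0.9 on the 1d12
    """Posterior probability of the 2d6 sheet after seeing `numbers`."""
    w6, w12 = prior_2d6, 1 - prior_2d6
    for n in numbers:
        w6, w12 = w6 * p2d6[n], w12 * p1d12
    return w6 / (w6 + w12)

# No single number gets p(2d6) past 1/2, so the pick never changes:
best = max(post_2d6([n]) for n in range(1, 13))
print(float(best))                      # 0.1818..., from a 7

# Three 7s still aren't enough to flip the decision; four are:
print(post_2d6([7] * 3) > F(1, 2), post_2d6([7] * 4) > F(1, 2))
```

Each 7 multiplies the odds by 2, and 2³ = 8 < 9 while 2⁴ = 16 > 9, which is why exactly four slips are needed.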

You're getting confused on the word "same" I think. Omega is only offering you the same deal if your prior is 0.9 and you get only one shot. If you get multiple chances to update, that changes the nature of the game and you need to know how many chances you will have (or what the probability of getting another chance is). As is, you're missing necessary information though.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T10:49:12.868Z · LW(p) · GW(p)

So, I think we're both thinking, in the 0.9 case:

Information that can't be any use to me is worth nothing.

But that information can become worthwhile if combined with future information which may or may not become available.

So the price should be non-zero if Omega says: 'What will you pay me for the first number on the sheet? I may also sell you further numbers later.'

But if that's true, then shouldn't the £125 value for a 50:50 prior also be affected by what happens afterwards?

In the original question, that's exactly what I was imagining happening. You'd buy the first number for anything less than £125, and maybe it's a 7, so you're now back in the same situation with new odds of 1:2 in favour of the 2D6, and so what will you pay for the second number, and so on.... And I was hoping that it would all converge nicely.

I think the reason I'm finding it paradoxical is that we've all jumped straight to the conclusion that the fair price is £125 without feeling that we needed to ask 'And what happens next?', and found that unproblematic.

But then when we look at the 9:1 case, where the value calculated this way is 0 and looks a bit suspicious, we all start thinking 'Ahh, but don't we need to know more about what happens next in order to price this?'

But if that reasoning affects the £0 price, why wouldn't it also affect the £125 price?

Which is why I'm asking 'Is my question ill-posed, and if not, what is the answer'.

Replies from: faul_sname
comment by faul_sname · 2012-08-30T19:19:46.585Z · LW(p) · GW(p)

The reasoning does affect the £125 price. In the case where you get an arbitrarily large number of pieces of information, the value converges on £1000 - (current EV). This makes sense, as an arbitrarily large number of papers gives you an arbitrarily high level of confidence that you will get the £1000. So with no information, the current EV is £500, so the possible value of information is £1000 - £500 = £500. In the case where you've got a prior of 0.9 on the 1d12, your EV is already £900 (90% chance of winning £1000) so the EV of infinite information is still only £100 (£1000 - £900).
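This convergence can be checked numerically. A sketch (mine, with exact fractions) that computes the chance of naming the right sheet after seeing k numbers, starting from 1:1:

```python
from fractions import Fraction as F
from collections import defaultdict

L6 = {n: F(max(0, 6 - abs(n - 7)), 36) for n in range(1, 13)}
L12 = F(1, 12)

def win_prob(k, prior_2d6=F(1, 2)):
    """Chance of guessing the right sheet after seeing k numbers."""
    # Distribution of the product of 2d6 likelihoods over all sequences.
    dist = {F(1): 1}
    for _ in range(k):
        nxt = defaultdict(int)
        for v, count in dist.items():
            for n in range(1, 13):
                nxt[v * L6[n]] += count
        dist = nxt
    w12 = (1 - prior_2d6) * L12 ** k      # weight of any 1d12 sequence
    return sum(c * max(prior_2d6 * v, w12) for v, c in dist.items())

for k in [0, 1, 2, 4, 8]:
    print(k, float(win_prob(k)))          # climbs from 0.5 towards 1
```

At k = 0 the win probability is 0.5 (EV £500), at k = 1 it is 0.625 (the £125 figure), and it creeps towards 1, so the value of the whole bundle approaches £1000 minus the current EV.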

In reference to your original question, you should be willing to pay somewhat more than £125, and less than £500 for that first piece of information (I would have to calculate the exact amount). The amount would vary based on how many more opportunities to buy information you would have.

comment by Cyan · 2012-08-30T03:26:41.799Z · LW(p) · GW(p)

OK, so this is as good a place as any to whinge about my pet peeve.

you should shift your prior to...

You can't shift your prior! You can update your probability, after which it's your posterior probability. The terms "prior" and "posterior" are only defined relative to some piece of evidence. Of course, the posterior relative to one piece of evidence can be the prior relative to the next, but usually people are not talking about this sort of sequential setup.

Nothing personal, johnlawrenceaspden. It's typical for folks around here to write things like "update my prior" when they really mean "update my probability", and it's like nails on a chalkboard every time for me.

Replies from: johnlawrenceaspden, army1987, shokwave
comment by johnlawrenceaspden · 2012-08-30T10:29:15.269Z · LW(p) · GW(p)

Agreed that it is loose talk. I think the reason is that the posterior becomes the prior for the next inference, so you can think of your beliefs sloshing around and changing in response to information. After all, even the very first prior will likely have come from somewhere, and be the posterior of some other process.

comment by A1987dM (army1987) · 2012-08-30T22:34:12.265Z · LW(p) · GW(p)

Of course, the posterior relative to one piece of evidence can be the prior relative to the next, but usually people are not talking about this sort of sequential setup.

Aren't they (at least implicitly)?

Replies from: Cyan
comment by Cyan · 2012-08-31T02:54:41.787Z · LW(p) · GW(p)

Sure, I suppose. But usually there's only one piece of evidence being discussed explicitly, and I think it makes little sense to use the word "prior" to refer to the probability that results from updating on it.

comment by shokwave · 2012-08-30T05:36:06.679Z · LW(p) · GW(p)

I suppose the slip is common because what they want to say is "calculate my posterior probability and use it as a prior for the next piece of evidence".

comment by cousin_it · 2012-08-29T17:37:59.390Z · LW(p) · GW(p)

It seems to me that if your starting odds are 9:1 in favor of 1D12 and you know you'll get the offer only once, the value of information is indeed zero, i.e. I wouldn't pay even a penny. If you get multiple offers for different numbers on the sheet, the value depends on how many offers you get. If you're uncertain about how many offers you'll get, the value depends on your Bayesian prior for the number of offers.

comment by Vaniver · 2012-08-29T22:52:07.417Z · LW(p) · GW(p)

With a shameless reference to my own post: if your prior is 9:1 1d12:2d6, then one number is worthless because it cannot change your decision, as faul_sname's comment details.

In the original problem, suppose my prior is 1:1 but I pick 2d6 because I think that they're more aesthetically pleasing. If I see the first number off the sheet, it will convince me to switch from 2d6 to 1d12 if the number is 3 or less or 11 or more. I go from winning half of the time to winning 62.5% of the time; that's worth £125, like you suggest.

Also, note that the VoI of the second number off the sheet depends on the first! If I saw a 1 first, I don't need any more numbers. If I saw a 4 or a 10, then a second number is still worth £125, because I'm in the same position as I was before. If I saw a 7 the first time around, then a second number is worth only about £46, because it raises my expected confidence from .667 to .713.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T10:56:57.555Z · LW(p) · GW(p)

Vaniver, looks like you were thinking about the problem in the same way that I was, getting repeated chances to buy new numbers. So at some point, you might have bought enough information to move your expected confidence into a place where the calculation that gave you £125 now gives you £0.

What do you do then? The conclusion 'I literally won't lift a finger to know more numbers' doesn't seem right unless you're certain of the answer already.

Replies from: Vaniver
comment by Vaniver · 2012-08-30T14:10:24.808Z · LW(p) · GW(p)

Vaniver, looks like you were thinking about the problem in the same way that I was, getting repeated chances to buy new numbers.

Sort of. The calculations that I ran are all one-step-ahead calculations, starting with different priors. Consider three different cases:

  • You pay X now, and he reads you the first number, and then you guess.
  • You pay Y now, then he reads you the first number, then you have the option to buy a second number, and then you guess.
  • You pay Z now, then he reads you the first two numbers, and then you guess.

Pricing X is easy; it's £125. Pricing Z is a bit tougher, but still okay. Pricing Y involves coming up with 13 different prices- the twelve possibilities after the first roll, and then Y (which depends on each of those possibilities!). Doing that with arbitrary n is doable but tough! (It's somewhat easier if you have a set price for each successive number, so you can swiftly terminate trees once you've hit the point that it's no longer worth the price.)
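A sketch of that set-price recursion (in Python, my own, not anything Vaniver posted; the £100 price and three-number horizon are arbitrary stand-ins):

```python
from fractions import Fraction as F
from functools import lru_cache

L6 = {n: F(max(0, 6 - abs(n - 7)), 36) for n in range(1, 13)}
L12 = F(1, 12)
PRIZE, PRICE = 1000, 100   # hypothetical flat price per extra number

@lru_cache(maxsize=None)
def value(odds_2d6, numbers_left):
    """Value of the game at odds 2d6:1d12 = odds_2d6, with at most
    numbers_left more numbers for sale at PRICE each."""
    p6 = odds_2d6 / (odds_2d6 + 1)
    stop = max(p6, 1 - p6) * PRIZE            # guess the likelier sheet now
    if numbers_left == 0:
        return stop
    buy = -PRICE                              # pay, see a number, recurse
    for n in range(1, 13):
        p_n = p6 * L6[n] + (1 - p6) * L12     # chance the number is n
        buy += p_n * value(odds_2d6 * L6[n] / L12, numbers_left - 1)
    return max(stop, buy)

print(float(value(F(1), 0)))   # 500.0: no numbers, coin-flip guess
print(float(value(F(1), 1)))   # 525.0: one number is worth buying at £100
print(float(value(F(1), 3)))
```

The memoisation on the odds is what tames the tree: many different sequences land on the same posterior, and branches get pruned wherever stopping beats buying.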

And so, even at 9:1 odds, there is some number of numbers he can read off that will have positive VoI. It will be very low- because it's very unlikely you will get that many informative numbers- but it is true that if you aren't perfectly certain, a test that gives you perfect certainty will have positive VoI.

What do you do then? The conclusion 'I literally won't lift a finger to know more numbers' doesn't seem right unless you're certain of the answer already.

The thing to focus on here is both the amount of additional certainty and the effect of additional certainty. The number you get when you're at 9:1 tells you a lot less than the number you get when you're at 1:1. Imagine the next number being a 1- in the first case, it feels like you just got £100, but in the second case it feels like you just got £500. Similarly, when I'm at 1:1, telling me one additional number is expected to change my guess in some cases. When I'm at 9:1, regardless of what he tells me, I still make the same call.

There is such a thing as certain enough when there are tests that aren't informative enough.

(Interestingly, note that you can never reach perfect certainty that it's 2d6, and there will always be a positive VoI for another number because there will always be a positive chance that it's a 1.)

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T16:20:07.845Z · LW(p) · GW(p)

Great post by the way. Thank you. It sounds like your job is to think about this sort of thing!

I think I now believe that the answer to the original question can't be £125, unless you already know what happens next.

Suppose the question is something like: "Every time you give me a penny, I'll give you the next number. At any time you can stop and make your one guess." It seems to me that there has to be a computer program that is best at playing this game. Do you have any idea what its stopping criterion would be? Or what the price would have to be for it to refuse to take any numbers at all?

It strikes me that this is actually a very dodgy problem indeed, and that if someone asks you these sorts of questions you should be very careful.

On the other hand it also strikes me that even in the absence of information about future offers, you should be prepared to pay something for the first number. You do, after all, expect to be £125 better off as a result of knowing it!

I have a queasy feeling of paradox and I notice that I am confused.

Replies from: Vaniver
comment by Vaniver · 2012-08-31T19:23:43.401Z · LW(p) · GW(p)

I put some time into solving this problem, and have reached a point where the amount of algebra necessary to continue is beyond what I'm willing to do. (The problem is that the transition probabilities are piecewise functions of the odds, and that makes everything unfun.) I have thought of an analogous problem that's mathematically simpler (basically, it'll be the unfair coin, and the reward will be based on guessing the degree of unfairness, not which of two it is) that I'll write up a longer explanation of how to do sometime over the weekend.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-09-01T10:23:41.874Z · LW(p) · GW(p)

I'll look forward to it. Don't put time into this unless you're enjoying it. I haven't seen Oswald in ages, and my current commitment is a mental note to either think about the biased coin version or write some computer simulations next time I'm bored.

Replies from: Vaniver
comment by Vaniver · 2012-09-03T23:54:34.524Z · LW(p) · GW(p)

So, not quite an explanation, more of an exercise:

Oswald brings his laptop to a bar, loads up Matlab, and types:

p=rand(); c=0;

p is now a double between 0 and 1, which we can treat as continuously and uniformly distributed across that range. c is the number of times you've gotten a hint.

Now, Oswald types in another line:

c = c + 1; [c, rand() < p]

This will both increase the number of hints you've received and show you a 1 if a new, uniformly selected random number is smaller than the first random number, and a 0 otherwise. (Basically, this is flipping a biased coin which gives 'heads' with probability p and 'tails' with probability 1-p.) You can repeat this line as many times as you like.

Now, this bar is called The Improper Prior, and as such is filled with Bayesians. It's readily obvious to the patrons that their posterior on p should be a beta distribution, with α equal to one plus the number of 1s and β equal to one plus the number of 0s.
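A Python rendering of Oswald's setup and the patrons' update (a rough stand-in for the Matlab above; the `hint` wrapper and the seed are my own additions):

```python
import random

random.seed(0)                  # fixed seed, just to make the run repeatable
p = random.random()             # Oswald's hidden bias
c = 0

def hint():
    """One flip of the biased coin: 1 ('heads') with probability p."""
    global c
    c += 1
    return int(random.random() < p)

flips = [hint() for _ in range(20)]
heads, tails = sum(flips), len(flips) - sum(flips)

# Uniform prior on p  =>  posterior is Beta(1 + heads, 1 + tails).
a, b = 1 + heads, 1 + tails
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))
print(a, b, round(mean, 3), round(var, 5))
```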

But now is when things get interesting: your chance of guessing p exactly is basically zero. So Oswald might instead reward you for guessing within .05 of the actual p. More guesses should be penalized- either by decreasing the acceptable range or by decreasing the reward for guessing correctly. Alternatively, Oswald might reward you based on the precision of your posterior, or some other function.

Unfortunately, the beta distribution's cdf is not pleasant to play with. Matlab can deal with it easily- just type:

betainc(x,a,b)

We could determine the chance that your guess is within .05 of the correct by typing:

betainc(x+.05,a,b)-betainc(x-.05,a,b)

Unfortunately (again!), this isn't maximized by centering your estimate at the mean, unless a=b. You can test this with a=3, b=2; we have:

betainc(.65,3,2)-betainc(.55,3,2)=.17200

betainc(.66,3,2)-betainc(.56,3,2)=.17331
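Beta(3,2) has density 12t²(1−t), so its CDF is just the polynomial 4x³ − 3x⁴, and the two figures above can be checked without betainc (a quick Python check of my own):

```python
def beta32_cdf(x):
    # CDF of Beta(3, 2): integral of 12 t^2 (1 - t) dt from 0 to x
    return 4 * x**3 - 3 * x**4

# Width-0.1 window centred on the mean 0.6, then shifted right by 0.01:
print(round(beta32_cdf(0.65) - beta32_cdf(0.55), 5))  # 0.172
print(round(beta32_cdf(0.66) - beta32_cdf(0.56), 5))  # 0.17331
```

So the off-centre window really does capture slightly more probability, because the Beta(3,2) density is skewed.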

And so if Oswald uses this reward system, we'll have to solve an optimization problem to determine what our guess is at each stage, which isn't going to be fun. (The dumb way to do it throws

betainc(x+.05,a,b)-betainc(x-.05,a,b);

into some nonlinear optimization algorithm which shifts around x until it finds a local maximum, starting with a/(a+b) as the guess. What's the smart way to do it?)

Oswald might also be reluctant to reward us based on precision, because that can grow enormously high as α and β increase. So instead let's suppose he offers a flat reward, minus some constant times the variance minus some constant times the number of guesses we made, and he wants to know how to price entry into the game, so he can set the expected profit where he wants it to be.

Now we're in an interesting situation, because the variance can increase or decrease based on what we've seen. If you get two heads in a row, the variance is .06; a tails will increase it to .077, and a third heads will decrease it to .039. On average, you expect the variance after you see another coin to be .048. On average, the variance should always decrease after we get another hint. We also know that the amount each hint is expected to lower our variance will be a decreasing function of α and β for large enough values. (Really? Why would you believe those two statements?)

We can now easily calculate the actual variance and the expected variance after another hint for any (α,β) pair. If the costs are fixed we can determine when it wouldn't be worthwhile to buy one more. If α and β are large enough, that'll be enough for us to stop, because we know future hints will be less valuable than the current hint and the current hint is a bad idea.

We can then propagate backwards from the terminal states to determine the total value of playing the game optimally. We also can be certain this game valuation procedure will terminate in reasonable time for reasonable choices of the penalty parameters. (Again, why?)

comment by RolfAndreassen · 2012-08-29T16:37:11.331Z · LW(p) · GW(p)

Perhaps you should assign some probability to being offered enough information to change your mind? There must be some nonzero chance that after you've bought the Nth number, Omega will offer to sell you the (N+1)th number; and if in fact your 9:1 assignment was wrong, a sufficiently long chain of such offers should change your mind. So there is still some chance of buying the first number leading to a change of mind, even if no information about the first number is itself enough to do so.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-29T16:47:27.954Z · LW(p) · GW(p)

But doesn't that imply that the original question is ill-posed? And if so, what sort of questions can we calculate the answer to?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-08-29T17:46:22.810Z · LW(p) · GW(p)

No, I don't think it's ill-posed. You've found a specific prior for which the value of one specific piece of information is indeed zero. I don't see why this should make the more general case, where you have a different prior or are offered more information, ill-posed.

Consider a man who has two different, lethal cancers. Omega comes along and asks what he'll pay for the cure for one of them. Assume that the cancers are unique in the history of all mankind, so there's no altruistic benefit; then he will, presumably, pay nothing, since he'll still die and may as well use the money to amuse himself while waiting. But what will he pay for the cure to both cancers? Ah, a very different question! Likewise, you've constructed a situation where one piece of information is without value, but two pieces of information are not. That doesn't make the question ill-posed; the value of one piece of information is perfectly well-defined, namely zero.

Replies from: faul_sname
comment by faul_sname · 2012-08-29T21:25:41.292Z · LW(p) · GW(p)

Well, in the particular case he posed with a prior of 0.9 on the 1d12, 2 pieces of information are also useless. In fact, you need 4 pieces of paper to have nonzero value of information (and even then, I think the expected value of the 4 is < £1).

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-08-29T21:55:26.887Z · LW(p) · GW(p)

Sure, but I don't see where that changes the analysis. The probability of you getting 4 pieces of information, contingent on getting the first one, has got to be larger than the probability of getting 4, contingent on not getting the first one. (In fact the latter seems to be a contradiction, which presumably has probability zero.) So the first one still has some value, even if it's perhaps rather smaller than the value of the time it takes to do the formal calculation of the value.

Replies from: faul_sname
comment by faul_sname · 2012-08-29T23:37:33.171Z · LW(p) · GW(p)

You're right. This seems like an interesting exercise in programming, actually: build a tool that tells you the VOI of a certain number of guesses. I know 0-3 have an EV of 0, but when I try to plug in 4, I realize why a recursive function might have been a bad idea.

comment by orthonormal · 2012-08-30T04:30:48.171Z · LW(p) · GW(p)

The reason it works like that is that in this artificial setup, there's no difference in your action if your odds are 100:1 or 1.1:1. If you could (say) make hedge bets with other customers at the bar, then the first number on the page has positive utility for you again.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T11:21:45.839Z · LW(p) · GW(p)

It doesn't seem enormously artificial to me. I could just do this, had I £1000 to spare, and there are plenty of people round here who'd be sophisticated enough to enjoy playing.

Imagine you're in a real bar, and a real (trustworthy) person comes in and says this. What will you pay for the first number? If that's a 2, what will you pay for the second? If that's a 12, what will you pay for the third?

Replies from: Vaniver, orthonormal
comment by Vaniver · 2012-08-30T19:52:58.112Z · LW(p) · GW(p)

orthonormal's making the more subtle point that decisions are binary, and so certainty is crudely partitioned into two regions. With hedging and other financial instruments, then relative degrees of certainty matter- if I'm 90% sure that it's 1d12 and you're 80% sure that it's 1d12, then we can bet against each other, each thinking that we're picking up free money. (Suppose you pay me $3 if it's 1d12, and I pay you $17 if it's 2d6. Both of us have an expected value of $1 from this bet.) The more accurate my estimate is, the better odds I can make.
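The arithmetic behind that bet, as a quick check (mine, not Vaniver's code):

```python
from fractions import Fraction as F

def bet_ev(p_1d12, gain_if_1d12, gain_if_2d6):
    """Expected value of the bet under one's own probability for 1d12."""
    return p_1d12 * gain_if_1d12 + (1 - p_1d12) * gain_if_2d6

mine  = bet_ev(F(9, 10), +3, -17)   # I receive $3 on 1d12, pay $17 on 2d6
yours = bet_ev(F(8, 10), -3, +17)   # you pay $3 on 1d12, receive $17 on 2d6

print(mine, yours)                  # each side expects to gain $1
```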

With the decision problem, we both decide the same way, and will both win or lose together.

comment by orthonormal · 2012-08-30T21:47:30.205Z · LW(p) · GW(p)

What Vaniver said. I'm claiming that it's artificial as a decision theory problem, not in the sense of being unrealistic, but in the sense of having constrained options that don't allow you to make full use of information.

comment by Irgy · 2012-08-31T02:12:55.907Z · LW(p) · GW(p)

Your problem is basically that you're mixing up the idealised problem with the realistic problem. By "idealised" I mean a general approach to reading this sort of problem where you forget all the other factors that clearly aren't intended to be considered. For instance, thoughts like "Does he know something about the first number on that sheet being atypical, and is trying to make me pay money to make a wrong guess?" - which could be part of his clever scam. In the idealised problem you assume he's genuine and honest and so on. The idealised problem is generally the one you're supposed to think about in these cases, but there's never a shortage of wiseacres who'll try and circumvent the whole issue with some "realistic" consideration.

In the idealised problem you're not told anything about a second opportunity to get more information, therefore it doesn't exist. Adding a possibility of more information simply creates a new, different idealised problem. In this modified problem the value of the information may be non-zero.

In the "realistic" problem you can consider the possibility of him offering you another number without it being explicitly mentioned. But in that case there's a wealth of other things to worry about making it all too complicated.

I think you're also barking up the wrong tree in the first place trying to create some sort of well defined "value" of information that's independent of your prior (i.e. of other available information). I don't imagine such a thing exists.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-31T09:40:35.767Z · LW(p) · GW(p)

I think you're also barking up the wrong tree in the first place trying to create some sort of well defined "value" of information that's independent of your prior (i.e. of other available information). I don't imagine such a thing exists.

This is (now) my intuition too.

But my old intuition was:

If I think there's a 1/2 chance then I'm in possession of an option worth £500; if I think there's a 3/4 chance then I'm in possession of an option worth £750

so if I think there's a 1/2 chance I should work out all the expected consequences and average over their new value to work out new value after getting the information, and the difference is the price I think that information's worth.

And I still can't see what's wrong with that. Can you?

(What originally prompted me to think of the question was the worry that receiving certain sorts of information would make the expected value of my option go down, and I wanted to play with that. I was completely freaked out when I realized that there were lots of prior beliefs where that method gave £0 as the answer.)

Replies from: Irgy
comment by Irgy · 2012-09-01T05:51:17.748Z · LW(p) · GW(p)

I don't see why you'd think anything was wrong with that. I even did the math now and agree with your specific value of £125. Your value of 0 is correct in the other case too. About the only thing I don't agree with is your sense of surprise. There's plenty of information that's worth nothing, and no reason it couldn't later be worth something in combination with other information.

For example, if he told you the d12 numbers were written in red pen (and the others in blue), that's worth nothing on its own. But suddenly looking at one of the numbers is worth quite a lot more than it was...

comment by Caerbannog · 2012-08-30T15:02:42.520Z · LW(p) · GW(p)

Could this be a trick question?

The top of the paper says "1d12" or "2d6", right? The first number is either "1" or "2". If this interpretation is correct, then knowing the first number has a value of 500 pounds.

As has already been stated, you have a 50% chance of guessing correctly to win 1000, so you already have an expected value of 500. To raise your chance of winning to 100%, you should be willing to pay up to 500.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T16:07:02.051Z · LW(p) · GW(p)

It could be indeed, but Oswald is known for his kindly and straightforward nature and wouldn't pull that sort of fast one. Neither would he arrange the numbers on the sheets in tricky ways, nor only be asking the question because he noticed that one of the sheets had a misleading first few numbers. You can assume that he's playing it straight.

I was intending to worry about that sort of thing at some point, but actually I'm finding the original interesting and paradoxical enough at the moment.

Replies from: Vaniver
comment by Vaniver · 2012-08-30T19:45:34.891Z · LW(p) · GW(p)

It could be indeed, but Oswald is known for his kindly and straightforward nature and wouldn't pull that sort of fast one.

It's not a fast one so much as careful attention to the setup. The first number on the sheet would be the first number in the title- which unambiguously specifies which sheet it is. If you change the setup to "the value of the first result," the point will be defused.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-31T09:48:28.963Z · LW(p) · GW(p)

good idea, will so change

comment by billswift · 2012-08-30T01:54:34.815Z · LW(p) · GW(p)

My comment from July 5, "Go Bayes! So if you just make your priors big enough, you never have to change your mind.", was rather snarky, but it illustrates a real problem. If your priors are not reasonably accurate, it takes a lot of new information and updating to get it straightened out. That is one reason a lot of introductions to Bayes rule use medical decision making which has reasonably well-established base-rates (priors) to begin with.

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T11:09:59.061Z · LW(p) · GW(p)

Not quite never, and the predictions of your various theories are also priors. So suppose I'm a physicist in the 19th century. And I've got two theories 'Classical Physics' and 'We're wrong about everything'. My prior for classical physics will be truly immense because of all its successful predictions, and little bits of evidence like seeing clocks on trains running a bit slow won't affect my beliefs in any noticeable way, because I'll always be able to explain them in much more sensible ways than 'physics is broken'.

But once I realise that I literally can't come up with any classical explanation for the observed motion of Mercury, then my immense prior gets squashed out of existence by the hideous unlikeliness of seeing those results if classical physics is true. Something somewhere is broken, and all my probability mass moves over into 'we don't understand'.

If you've got an immense prior belief in a theory that can explain anything at all, then yes, that's hard to shift.

Replies from: Richard_Kennaway, johnlawrenceaspden
comment by Richard_Kennaway · 2012-08-30T14:45:58.131Z · LW(p) · GW(p)

Not quite never, and the predictions of your various theories are also priors. So suppose I'm a physicist in the 19th century. And I've got two theories 'Classical Physics' and 'We're wrong about everything'.

This bears no resemblance to the actual history. How much resemblance was it intended to have? You say in another comment:

And I don't, by the way, put this forward as an account of 'how classical physics fell'.

But your reason for that is only:

Those guys were using classical logic.

There were several known problems with classical physics in the late 19th century, and "classical logic" vs. "new improved Bayesian logic" has nothing to do with how they were resolved.

  1. The black body spectrum could not be explained.
  2. The photoelectric effect (going a few years into the 20th century). It took a certain amount of energy to knock an electron off a metal surface, but light of arbitrarily low intensity could still do it. Only the wavelength mattered: there was a wavelength threshold but no intensity threshold.
  3. EM theory predicted an absolute velocity of light, but Newtonian mechanics defines no preferred frame of reference, and the Michelson-Morley experiment failed to find one.

If you've got an immense prior belief in a theory that can explain anything at all, then yes, that's hard to shift.

Having a prior so immense that it's hard to shift is a problem anyway. But what is "immense", and what is "hard"? I pointed out here that ordinary people are quite capable of updating against 80dB of prior improbability (and if their posterior certainty is of the same order of magnitude then they've updated by around 160dB).
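For concreteness, the decibel convention being used here (Jaynes's) is just 10·log10 of the odds. A quick sketch of the arithmetic behind those figures (the function names are mine):

```python
from math import log10

def odds_to_db(odds):
    """Evidence in decibels (Jaynes's convention): 10 * log10(odds)."""
    return 10 * log10(odds)

def db_to_prob(db):
    """Convert decibels of evidence back into a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# A prior of -80 dB is odds of 1e-8 against.
print(db_to_prob(-80))      # about 1e-8
# Moving from -80 dB to +80 dB takes 160 dB of evidence,
# i.e. a cumulative likelihood ratio of about 1e16.
print(odds_to_db(1e16))     # about 160
```

So "updating against 80dB of prior improbability" and ending up equally certain the other way really does mean accumulating a likelihood ratio on the order of 10^16.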

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T16:27:51.476Z · LW(p) · GW(p)

I agree with everything you say!

comment by johnlawrenceaspden · 2012-08-30T11:14:22.975Z · LW(p) · GW(p)

And I don't, by the way, put this forward as an account of 'how classical physics fell'. Those guys were using classical logic.

Probability theory is the generalization of logic to uncertain propositions, which is why it can deal with 'I only ever see white swans' being evidence for 'All swans are white'.

comment by Kindly · 2012-08-31T00:20:10.334Z · LW(p) · GW(p)

Here's a similar game with a potentially simpler setup:

I have a coin which is either fair or has heads on both sides. For some price P, you can ask me to flip the coin and tell you the outcome; we can do this as many times as you like. Then you guess which kind of coin I'm using, and if you guess right I'll give you £1000.
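The value of a single flip in this game can be computed directly. A minimal sketch, assuming a 50:50 prior between the fair and double-headed coin and the £1000 prize (the function names are mine):

```python
def bet_value(p_fair, prize=1000.0):
    """Expected winnings if you bet on whichever coin is more likely."""
    return prize * max(p_fair, 1 - p_fair)

def value_of_one_flip(p_fair=0.5, prize=1000.0):
    """Expected gain from observing one flip before betting."""
    # Tails can only come from the fair coin, each flip with probability 1/2.
    p_tails = p_fair * 0.5
    p_heads = 1 - p_tails
    # Posterior on 'fair' after seeing heads (Bayes' rule).
    p_fair_given_heads = (p_fair * 0.5) / p_heads
    value_with_flip = (p_tails * prize                       # tails: certainty
                       + p_heads * bet_value(p_fair_given_heads, prize))
    return value_with_flip - bet_value(p_fair, prize)

print(round(value_of_one_flip(), 6))      # 250.0: worth £250 at even odds
print(round(value_of_one_flip(0.9), 6))   # 0.0: the scary zero again
```

At 9:1 in favour of the fair coin, one flip can never change which way you'd bet, so it is worth nothing, which is exactly the scary-zero behaviour from the original post.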

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-31T09:42:47.243Z · LW(p) · GW(p)

I think that's a nice model for my model problem. Does it have the scary zeros in it, or is another coin flip always worth something?

I should probably also think about the 'either 2/3 or 1/3' coin.

I'll go off and do so.

Replies from: Kindly
comment by Kindly · 2012-08-31T15:22:23.548Z · LW(p) · GW(p)

Suppose you're certain 1000:1 that the coin is fair. The only coin flip outcomes worth considering are runs of heads (obviously once you see a single outcome of tails, you're done).

If you ask for k=10 or more coin flips, then you can only be wrong if the coin is fair but came up HHH...HH anyway, a run long enough to outweigh the 1000:1 prior (since 2^10 > 1000). This has probability less than 1/2^k (since Pr[fair coin] < 1). At that point, every additional coin flip is worth a ridiculously tiny amount you can solve for.

If you ask for fewer than 10 coin flips, then no sequence of outcomes will convince you that the coin isn't fair with probability over 50%, and you'll just end up betting on the fair coin no matter what you see. So those coin flips are worthless unless you will have the chance to buy more.
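This threshold can be checked numerically. A sketch, assuming prior odds of 1000:1 in favour of the fair coin and the £1000 prize:

```python
def win_prob_with_k_flips(k, p_fair=1000/1001):
    """Chance of guessing the coin right after buying k flips at once,
    betting on whichever hypothesis is more likely afterwards."""
    p_dh = 1 - p_fair                        # prior on the double-headed coin
    # Seeing any tails proves the coin is fair.
    p_tails_somewhere = p_fair * (1 - 0.5 ** k)
    # All k heads: bet on whichever remaining posterior weight is larger.
    p_all_heads_and_fair = p_fair * 0.5 ** k
    return p_tails_somewhere + max(p_all_heads_and_fair, p_dh)

baseline = 1000 / 1001   # win probability with no flips: just bet 'fair'
for k in range(1, 13):
    gain = 1000 * (win_prob_with_k_flips(k) - baseline)
    print(k, round(gain, 4))
# The flips only start paying once k reaches 10 (since 2^10 > 1000),
# and even then the gain is only about £0.02.
```

Below the threshold the k flips change nothing, because even k heads in a row still leaves 'fair' the more probable hypothesis, so you bet the same way regardless of what you saw.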

The "either 2/3 or 1/3" coin is actually worse in the scary-zeroes department. The nice feature of both the 2d6 sheet and the double-heads coin is that each completely rules out an outcome (a 2d6 can never show 1; a double-headed coin can never show tails), so no matter what your prior is, if you are currently inclined to bet on one of them, there is a small chance the evidence will decisively convince you not to.

On the other hand, if both coins can come up both heads and tails, then a single extra flip is worthless unless your prior odds are between 2:1 and 1:2 -- no matter what outcome you see, your belief will shift in one direction or the other by exactly one bit. It's like a biased random walk on the number line, and you don't know what the bias is. But no matter where you are, there's always some number of coin flips that will be worth buying (all at once), although potentially the price you'd pay for them would be really tiny.
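The one-bit step is easy to verify. A sketch, assuming the two hypotheses are P(heads) = 2/3 and P(heads) = 1/3:

```python
from math import log2

def update_odds(odds, outcome):
    """Posterior odds for the 2/3-heads coin after one flip.

    Heads is exactly twice as likely under 2/3 as under 1/3, and
    vice versa for tails, so each flip shifts log-odds by one bit.
    """
    likelihood_ratio = (2/3) / (1/3) if outcome == 'H' else (1/3) / (2/3)
    return odds * likelihood_ratio

odds = 4.0  # 4:1 in favour of the 2/3-heads coin
print(log2(update_odds(odds, 'H')) - log2(odds))   # +1.0 bit
print(log2(update_odds(odds, 'T')) - log2(odds))   # -1.0 bit
# Starting from 4:1, a single flip can reach at best 2:1 -- it never
# crosses even odds, so one flip alone can't change which coin you'd bet on.
```

This is the biased random walk on the log-odds number line: each flip moves you one step left or right, and a single step only matters when you are standing within one step of even odds.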

comment by [deleted] · 2012-08-30T14:14:57.525Z · LW(p) · GW(p)

I thought of a problem which was related, but not the same, and it seems much harder. I don't know how to start solving it.

The sheet may (let's say with 50% probability) have been switched by Oswald's son Norbert, except he doesn't know his numbers that well, so he just wrote "2d6" at the top and then filled the sheet with nothing but the number 4, because 6 minus 2 is 4. Oswald doesn't notice, since this is a bar and, well, he's been drinking; Norbert's been a handful lately.

Presumably, if this happens, at some point you will realize that buying information is not helping you in the slightest and stop. You are probably not going to pay £10,000, one pound at a time, to learn that the 10,001st number is also a 4, because long before then you will have considered the possibility that, even though the sheet had an equal prior chance of carrying either label, this doesn't look like the output of either a 2d6 or a 1d12.

How would you calculate a maximum amount in total you should be willing to pay for information on a potentially corrupted bet like this?

Replies from: johnlawrenceaspden
comment by johnlawrenceaspden · 2012-08-30T16:33:09.889Z · LW(p) · GW(p)

The only answer I can think of here is to take a prior over all possible sequence-generating computer programs, weighted by length, then boost the ones that simulate 2d6 or 1d12, and use the observed numbers to update on that.

I don't know if I've just described Solomonoff Induction or similar, but it sounds complicated, and yet I notice that if I'd just seen 10000 consecutive 4s I'd be pretty hot for the 'always gives 4' theory, and I wonder how I'd be doing that with my limited supply of slow neurons.
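A cut-down version of that idea, with just three hypotheses instead of all programs, already behaves the way described. A sketch (the prior weights are illustrative, not anything principled; 'always 4' is standing in for all the weird short programs):

```python
from math import exp, log

# The two 'official' sheets get almost all the prior weight;
# 'always 4' gets a tiny slice.
log_prior = {'2d6': log(0.4999995), '1d12': log(0.4999995),
             'always4': log(1e-6)}

def log_likelihood(hypothesis, x):
    """Log-probability of observing x under each hypothesis."""
    if hypothesis == '2d6':
        # P(sum of two dice = x) = (6 - |x - 7|) / 36 for x in 2..12
        return log((6 - abs(x - 7)) / 36) if 2 <= x <= 12 else float('-inf')
    if hypothesis == '1d12':
        return log(1 / 12) if 1 <= x <= 12 else float('-inf')
    return 0.0 if x == 4 else float('-inf')    # 'always 4'

def posterior(observations):
    log_post = dict(log_prior)
    for x in observations:
        for h in log_post:
            log_post[h] += log_likelihood(h, x)
    # Normalize in log space to avoid underflow after long sequences.
    m = max(log_post.values())
    weights = {h: exp(v - m) for h, v in log_post.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# P(4) is 1/12 under both 2d6 and 1d12, so every observed 4 multiplies
# the relative weight of 'always 4' by 12.
print(posterior([4] * 7))   # 'always 4' already holds ~97% of the posterior
```

Even a prior of one in a million on 'always 4' gets overturned after a handful of consecutive 4s, because the evidence piles up exponentially; nothing like 10,000 observations is needed, which may be why the slow neurons cope.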