Bet or update: fixing the will-to-wager assumption
post by cousin_it · 2017-06-07T15:03:23.923Z · LW · GW · Legacy · 61 comments
(Warning: completely obvious reasoning that I'm only posting because I haven't seen it spelled out anywhere.)
Some people say, expanding on an idea of de Finetti, that Bayesian rational agents should offer two-sided bets based on their beliefs. For example, if you think a coin is fair, you should be willing to offer anyone a 50/50 bet on heads (or tails) for a penny. Jack called it the "will-to-wager assumption" here and I don't know a better name.
In its simplest form the assumption is false, even for perfectly rational agents in a perfectly simple world. For example, I can give you my favorite fair coin so you can flip it and take a peek at the result. Then, even though I still believe the coin is fair, I'd be a fool to offer both sides of the wager to you, because you'd just take whichever side benefits you (since you've seen the result and I haven't). That objection is not just academic: using your sincere beliefs to bet money against better informed people is a bad idea in real world markets as well.
Then the question arises: how can we fix the assumption so it still says something sensible about rationality? I think the right fix should go something like this. If you flip a coin and peek at the result, then offer me a bet at 90:10 odds that the coin came up heads, I must either accept the bet or update toward believing that the coin indeed came up heads, with at least these odds. I don't get to keep my 50:50 beliefs about the coin and refuse the bet at the same time. More generally, a Bayesian rational agent offered a bet (by another agent who might have more information) must either accept the bet or update their beliefs so the bet becomes unprofitable. The old obligation to offer two-sided bets on all your beliefs is obsolete; use this one from now on. It should also come in handy in living-room Bayesian scuffles: throwing some money on the table and saying "bet or update!" has a nice ring to it.
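Here's a minimal sketch of that rule in code (my own illustration, not part of the original argument). The bet structure is assumed to be the simplest one, win `win` dollars if the claim is true and lose `lose` dollars if it is false; the function names and dollar amounts are made up.

```python
# A rough sketch of the "bet or update" rule for a simple money bet
# (illustration only; the names and numbers are assumptions, not from the post).

def break_even_probability(win: float, lose: float) -> float:
    """Probability for the claim at which the offered bet has zero expected value."""
    return lose / (win + lose)

def bet_or_update(p_claim: float, win: float, lose: float) -> str:
    """Accept if the bet is profitable at your current belief; otherwise the only
    consistent move is to update your probability down to the break-even point."""
    threshold = break_even_probability(win, lose)
    if p_claim > threshold:
        return "accept the bet"
    return f"decline, which means P(claim) is now at most {threshold:.2f}"

# The post's coin example, seen from your side of the table: the person who
# peeked is backing heads at 90:10, so you would be betting on tails
# (win $90 if tails, lose $10 if heads).
print(bet_or_update(p_claim=0.5, win=90, lose=10))   # profitable at 50:50 -> accept
# Declining is only consistent with P(tails) <= 0.10, i.e. P(heads) >= 0.90,
# which is exactly "update toward heads at 90:10 odds or better".
```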
What do you think?
61 comments
comment by entirelyuseless · 2017-06-08T02:46:06.934Z · LW(p) · GW(p)
This corresponds with what people actually do. For example, when Stephen Diamond said on Overcoming Bias that there was a 99% chance that Clinton would win, I said, OK, I'll pay you $10 if Clinton wins and you can pay me $1,000 if Trump wins. He said no, that's just the break-even point, so there's no motive to take the bet. I said fine, $10 - $500. He refused again. And obviously that was precisely because he realized those odds were absurd. So he in fact updated. But insisting, "you have to admit that you updated," is just a status game. If you just offer the bet, and they refuse, that is enough. They will update. You don't have to get them to admit it.
Replies from: ChristianKl, cousin_it
↑ comment by ChristianKl · 2017-06-12T08:46:00.981Z · LW(p) · GW(p)
I don't think not believing in one's probability is the only reason to avoid betting. There's also a lot of physical resistance for many people.
Even if he believed in the odds it would be very irrational to take your bet. He would get better odds on PredictIt and PredictIt is likely a more trustworthy third-party to pay him in case he wins the bet.
↑ comment by cousin_it · 2017-06-08T06:02:27.642Z · LW(p) · GW(p)
Yeah, if they refuse the bet that means they probably updated (or weren't trying to be rational to begin with).
Replies from: username2
↑ comment by username2 · 2017-06-12T08:33:30.589Z · LW(p) · GW(p)
Or don't have $500.
Replies from: moridinamael
↑ comment by moridinamael · 2017-06-12T14:09:21.797Z · LW(p) · GW(p)
True, but then they should counteroffer something they can afford, like $1 to $50, since they should be eager to rake in the "free money".
comment by Jiro · 2017-06-08T15:33:02.557Z · LW(p) · GW(p)
It is possible that you may update in the direction of something which makes the bet unprofitable, but which doesn't lead to more credence in the proposition which the bet was originally offered to prove. For instance, you may update in the direction of the bet being a scam in a way which you haven't managed to figure out.
comment by danieldewey · 2017-06-07T23:59:27.781Z · LW(p) · GW(p)
I really like this post, and am very glad to see it! Nice work.
I'll pay whatever cost I need to for violating non-usefulness-of-comments norms in order to say this -- an upvote didn't seem like enough.
Replies from: cousin_it
comment by Daniel_Burfoot · 2017-06-07T18:12:53.388Z · LW(p) · GW(p)
Yes, definitely. There is something about the presence of other agents with differing beliefs that changes the structure of the mathematics in a deep way.
P(X) is somehow very different from P(X|another agent is willing to take the bet).
How about using a "bet" against the universe instead of other agents? This is easily concretized by talking about data compression. If I do something stupid and assign probabilities badly, then I suffer from increased codelengths as a result, and vice versa. But nobody else gains or loses because of my success or failure.
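A small sketch of that framing (my own illustration): an ideal code assigns an outcome of probability q a length of -log2(q) bits, so assigning probabilities badly costs you extra bits on average, with no counterparty needed.

```python
import math

def expected_codelength(true_probs, model_probs):
    """Average bits per symbol when the data follows true_probs but we code
    using lengths derived from model_probs."""
    return sum(p * -math.log2(q) for p, q in zip(true_probs, model_probs))

true_p = [0.5, 0.5]        # a fair coin
good_model = [0.5, 0.5]    # probabilities assigned well
bad_model = [0.9, 0.1]     # probabilities assigned badly

print(expected_codelength(true_p, good_model))  # 1.0 bit per flip
print(expected_codelength(true_p, bad_model))   # ~1.74 bits per flip: the penalty paid to "the universe"
```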
Replies from: cousin_it
comment by Oscar_Cunningham · 2017-06-07T15:55:54.452Z · LW(p) · GW(p)
How about saying that the Bayesian doesn't have to offer any bets, but must accept one side of any two-sided bet offered (even by someone who knows more)?
So if you see the result of the coin and offer me either side of a 90:10 bet, I would update based on my beliefs about you and why you would offer that bet, and then I pick whichever side is profitable. If after updating my odds are exactly 90:10, then I am happy to pick either side.
Replies from: casebash, Pimgd, cousin_it
↑ comment by casebash · 2017-06-09T01:27:55.423Z · LW(p) · GW(p)
The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet in the expectation that you will take the wrong side. For example, you might think you have inside information, but they know that information is actually unreliable.
The problem is that you have to always play when they want, whilst the other person only has to sometimes play.
So I'm not sure if this works.
↑ comment by Pimgd · 2017-06-08T14:28:09.766Z · LW(p) · GW(p)
How about no, because I prefer my stability and I don't want to track random bets on stuff I don't care about?
Apply marginal utility and a 50/50 coin with the opportunity to bet a dollar, and you've got 50% chance to, say, gain 9.9998 points and 50% chance to lose 10 points. Why bother playing?
The only reasons to play are if an option is discounted (4x payout for heads and 1.5x payout on tails on a fair coin), if you don't care about the winnings but about playing the game itself, or if there's a threshold to reach (e.g. if I had 200 dollars then I could pay off something else, which would keep the deferred interest from coming into play and save me 1000 dollars, so I would take a 60% chance to lose 100 dollars because those extra 100 dollars are worth not 100 but 1000 to me).
Plus there's always epsilon - "the coin falls on its side" or other variations.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2017-06-08T18:06:35.053Z · LW(p) · GW(p)
I'm not suggesting that people actually do this, just that this is a sensible assumption to make when laying the mathematical foundation of rationality.
Replies from: username2
↑ comment by cousin_it · 2017-06-07T16:07:49.075Z · LW(p) · GW(p)
Yeah. It wouldn't be as strong in practice (neither nature nor people are in the habit of offering two-sided bets) but as a theoretical constraint it seems to work as well.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2017-06-08T08:31:00.804Z · LW(p) · GW(p)
Isn't nature always in the habit of offering two-sided bets? Like, you can do one thing or the other.
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-08T08:44:22.669Z · LW(p) · GW(p)
Not with the payoffs given by de Finetti. For example, there's no way to play the roulette so it becomes an "anti-roulette", giving you a slight edge instead of the casino. Nature usually gives you a choice between doing X (accepting a one-sided bet as is) or not doing X. You don't always have the option of doing "anti-X" (taking the other side of the bet, with the risks and payoffs exactly reversed).
comment by Lumifer · 2017-06-07T16:28:55.259Z · LW(p) · GW(p)
I don't understand the "must accept" thing at all. There are obvious considerations like the fact that utility is not linear with money and that risk tolerance is a factor. There are other considerations as well, for example, going meta and thinking about the uncertainty of uncertainty -- e.g. when I say that the probability of X is 50%, I can be very certain of that estimate, or I can be very uncertain.
Replies from: cousin_it, dogiv
↑ comment by cousin_it · 2017-06-07T16:54:09.709Z · LW(p) · GW(p)
At the scale of living-room bets, risk aversion is not a factor, because even a small amount of risk aversion around $100 stakes would imply crazy high risk aversion at larger stakes; it grows exponentially, see this post by Stuart. Most people use risk aversion (diminishing marginal utility of money) as an excuse for loss aversion, which is straight up irrational.
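As a rough sanity check (my own numbers, not from Stuart's post), here is how small the penalty from utility curvature alone is at living-room stakes, assuming log utility of wealth; the wealth figure below is made up for illustration.

```python
import math

def risk_premium(wealth: float, stake: float) -> float:
    """How much a log-utility agent would pay to avoid a 50/50 gamble of +/- stake."""
    expected_utility = 0.5 * math.log(wealth + stake) + 0.5 * math.log(wealth - stake)
    certainty_equivalent = math.exp(expected_utility)
    return wealth - certainty_equivalent

print(risk_premium(10_000, 100))    # ~$0.50: negligible for a $100 living-room bet
print(risk_premium(10_000, 5_000))  # ~$1,340: curvature only starts to bite at large stakes
```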
As to your second objection, Bayesians don't believe in meta-uncertainty, their willingness to bet is represented by one number which is their uncertainty (a.k.a. their probability).
Replies from: bogus, Lumifer
↑ comment by bogus · 2017-06-08T06:58:41.774Z · LW(p) · GW(p)
At the scale of living room bets, risk aversion is not a factor
You're right about this strictly speaking, but liquidity constraints can result in the same practical outcome as risk aversion, and these are definitely relevant "on the margin". I could be willing to take a $10 - $500 bet in the abstract, but if that requires me to borrow the $500 should I lose (for an extra $300 cost, say), it's no longer rational for me to take that side of the bet! It would have to be a $10 - $200 bet or something, but obviously that creates a bid-ask spread which translates to an "imprecise" elicitation of probabilities. The 'proper' fix is to make the stakes small enough that liquidity too becomes a negligible factor - but a 5¢ - $2.50 bet is, um, not very exciting, and fixed transaction costs might make the bet infeasible again!
Replies from: cousin_it
↑ comment by Lumifer · 2017-06-07T17:01:14.215Z · LW(p) · GW(p)
At the scale of living room bets
So, "as long as it doesn't matter"? Why should I care about bets which don't matter?
By the way, risk aversion is NOT at all the same thing as the diminishing marginal utility of money.
Bayesians don't believe in meta-uncertainty
Why not?
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-07T17:32:01.646Z · LW(p) · GW(p)
The only way for a Bayesian rational agent to be risk averse is via diminishing marginal utility of money, I think. As for meta-uncertainty, this post by Shalizi (who's critical of Bayesianism) is a good starting point.
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-07T17:43:33.866Z · LW(p) · GW(p)
The only way for a Bayesian rational agent to be risk averse is via diminishing marginal utility of money, I think.
Why in the world would it be so?
"Bayesian" generally means that you interpret probability as subjective and that you have priors and update them on the basis of evidence. How does risk aversion or lack thereof fall out of this?
meta-uncertainty
You don't think that things like hyperparameters and hyperpriors are meta-uncertainty?
But even on a basic level, let's try this. You have two coins. Coin 1 you have flipped a couple of thousand times, recorded the results, and, as expected, it's a fair coin: the frequency of heads is very close to 50%. I give you Coin 2 which you've never seen before, but it looks like a normal coin.
What's your probability for heads on a Coin 1 flip? 50%
What's your probability for heads on a Coin 2 flip? 50%
Absolutely the same thing? Really?
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-07T18:32:14.952Z · LW(p) · GW(p)
Bayesian rationality is also about decision making, not just beliefs. Usually people take it to mean expected utility maximization. Just assume my post said that instead.
My betting behavior w.r.t. the next coinflip is indeed the same for the two coins. My probability distributions over longer sequences of coinflips are different between the two coins. For example, P(10th flip is heads | first 9 are heads) is 1/2 for the first coin and close to 1 for the second coin. You can describe it as uncertainty over a hidden parameter, but you can make the same decisions without it, using only probabilities over sequences. The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn't exist for Bayesians.
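A quick sketch of that difference, assuming (my simplification) a uniform prior over the second coin's bias; under that prior the answer after nine heads is roughly 0.91 rather than literally close to 1, but the qualitative point is the same.

```python
def predictive_known_fair(heads_seen: int, flips_seen: int) -> float:
    """A coin known to be fair ignores the record entirely."""
    return 0.5

def predictive_unknown_coin(heads_seen: int, flips_seen: int) -> float:
    """Laplace's rule of succession: posterior predictive under a uniform prior on the bias."""
    return (heads_seen + 1) / (flips_seen + 2)

# Single-flip probabilities agree...
print(predictive_known_fair(0, 0), predictive_unknown_coin(0, 0))   # 0.5, 0.5
# ...but after nine heads in a row the predictions come apart.
print(predictive_known_fair(9, 9), predictive_unknown_coin(9, 9))   # 0.5, ~0.91
```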
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-07T18:56:49.590Z · LW(p) · GW(p)
expected utility maximization
You are just rearranging the problem without solving it. Can my utility function include risk aversion? If it can, we're back to square one: a risk-averse Bayesian rational agent.
And that's even besides the observation that being Bayesian and being committed to expected utility maximization are orthogonal things.
The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn't exist for Bayesians.
I have no need for something that can get me out of uncomfortable bets since I'm perfectly fine with not betting at all. What I want is a representation for probability that is more rich than a simple scalar.
In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.
Replies from: Zack_M_Davis, entirelyuseless, evand
↑ comment by Zack_M_Davis · 2017-06-07T19:49:30.595Z · LW(p) · GW(p)
Can my utility function include risk aversion?
That would be missing the point. The vNM theorem says that if you have preferences over "lotteries" (probability distributions over outcomes; like, 20% chance of winning $5 and 80% chance of winning $10) that satisfy the axioms, then your decisionmaking can be represented as maximizing expected utility for some utility function over outcomes. The concept of "risk aversion" is about how you react to uncertainty (how you decide between lotteries) and is embodied in the utility function; it doesn't apply to outcomes known with certainty. (How risk-averse are you about winning $5?)
See "The Allais Paradox" for how this was covered in the vaunted Sequences.
In my hypothetical the two 50% probabilites are different. I want to express the difference between them. There are no sequences involved.
Obviously you're allowed to have different beliefs about Coin 1 and Coin 2, which could be expressed in many ways. But your different beliefs about the coins don't need to show up in your probability for a single coinflip. The reason for mentioning sequences of flips is that that's where your beliefs about Coin 1 vs. Coin 2 would start making different predictions.
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-07T20:24:06.791Z · LW(p) · GW(p)
That would be missing the point.
Would it? My interest is in constructing a framework which provides useful, insightful, and reasonably accurate models for actual human decision-making. The vNM theorem is quite useless in this respect -- I don't know what my (or other people's) utility function is, I cannot calculate or even estimate it, a great deal of important choices can be expressed as a set of lotteries only in very awkward ways, etc. And this is even besides the fact that empirical human preferences tend to not be coherent and they change with time.
Risk aversion is an easily observable fact. Every day in financial markets people pay very large amounts of money in order to reduce their risk (for the same expected return). If you think they are all wrong, by all means, go and become rich off these misguided fools.
But your different beliefs about the coins don't need to show up in your probability for a single coinflip.
Why not? As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it's a bad idea? Does St. Bayes frown upon it?
Replies from: Zack_M_Davis, cousin_it
↑ comment by Zack_M_Davis · 2017-06-07T23:45:53.184Z · LW(p) · GW(p)
As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it's a bad idea?
That's right, I think it's a bad idea: it sounds like what you actually want is a richer way to talk about your beliefs about Coin 2, but you can do that using standard probability theory, without needing to invent a new field of math from scratch.
Suppose you think Coin 2 is biased and lands heads some unknown fraction _r_ of the time. Your uncertainty about the parameter _r_ will be represented by a probability distribution: say it's normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of _r_ having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5. You'd have to ask a different question than "What is the probability of heads on the first flip?" if you want the answer to distinguish the two coins. For example, the probability of getting exactly _k_ heads in _n_ flips is C(_n_, _k_)(0.5)^_k_(0.5)^(_n_−_k_) for Coin 1, but (I think?) ∫₀¹ (1/√(0.02π))_e_^−((_p_−0.5)^2/0.02) C(_n_, _k_)(_p_)^_k_(1−_p_)^(_n_−_k_) _dp_ for Coin 2.
Does St. Bayes frown upon it?
St. Cox probably does.
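For what it's worth, here is a quick numerical check of those two formulas (a sketch of my own; the normal prior's mass outside [0, 1] is negligible, so I don't bother truncating it, and scipy is assumed to be available).

```python
from math import comb
from scipy import integrate
from scipy.stats import norm

def p_k_heads_coin1(k: int, n: int) -> float:
    """Probability of exactly k heads in n flips of the known-fair coin."""
    return comb(n, k) * 0.5 ** n

def p_k_heads_coin2(k: int, n: int, mu: float = 0.5, sigma: float = 0.1) -> float:
    """The same question for Coin 2, averaging the binomial over the prior on p."""
    integrand = lambda p: norm.pdf(p, mu, sigma) * comb(n, k) * p ** k * (1 - p) ** (n - k)
    value, _ = integrate.quad(integrand, 0.0, 1.0)
    return value

print(p_k_heads_coin1(1, 1), p_k_heads_coin2(1, 1))      # both 0.5 for a single flip
print(p_k_heads_coin1(10, 10), p_k_heads_coin2(10, 10))  # ~0.001 vs. ~0.004: different beliefs after all
```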
Replies from: Unnamed, Lumifer
↑ comment by Unnamed · 2017-06-08T00:43:32.837Z · LW(p) · GW(p)
Suppose you think Coin 2 is biased and lands heads some unknown fraction r of the time. Your uncertainty about the parameter r will be represented by a probability distribution: say it's normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5.
A standard approach is to use the beta distribution to represent your uncertainty over the value of r.
↑ comment by Lumifer · 2017-06-08T01:13:11.381Z · LW(p) · GW(p)
but you can do that using standard probability theory
Of course I can. I can represent my beliefs about the probability as a distribution, a meta- (or a hyper-) distribution. But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.
No one is talking about inventing new fields of math.
say it's normally distributed
Clearly not since the normal distribution goes from negative infinity to positive infinity and the probability goes merely from 0 to 1.
the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5
That 0.5 is conditional on the distribution of r, isn't it? That makes it not a different question at all.
Notably, if I'm risk-averse, the risk of betting on Coin 1 looks different to me from the risk of betting on Coin 2.
St. Cox probably does.
Can you elaborate? It's not clear to me.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2017-06-08T03:19:18.207Z · LW(p) · GW(p)
But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.
Hm. Maybe those people are wrong??
Clearly not since the normal distribution goes from negative infinity to positive infinity
That's right; I should have either said "approximately", or chosen a different distribution.
That 0.5 is conditional on the distribution of r, isn't it? That makes it not a different question at all.
Yes, it is averaging over your distribution for _r_. Does it help if you think of probability as relative to subjective states of knowledge?
Can you elaborate?
(Attempted humorous allusion to how Cox's theorem derives probability theory from simple axioms about how reasoning under uncertainty should work, less relevant if no one is talking about inventing new fields of math.)
Replies from: Douglas_Knight, Lumifer
↑ comment by Douglas_Knight · 2017-06-08T21:42:54.101Z · LW(p) · GW(p)
But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.
Hm. Maybe those people are wrong??
Nope.
↑ comment by Lumifer · 2017-06-08T04:29:45.089Z · LW(p) · GW(p)
Maybe those people are wrong?
That's what I thought, too, and that disagreement led to this subthread.
But if we both say that we can easily talk about distributions of probabilities, we're probably in agreement :-)
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2017-06-08T09:17:13.671Z · LW(p) · GW(p)
It seems like you've come to an agreement, so let me ruin things by adding my own interpretation.
The coin has some propensity to come up heads. Say it will in the long run come up heads r of the time. The number r is like a probability in that it satisfies the mathematical rules of probability (in particular the rate at which the coin comes up heads plus the rate at which it comes up tails must sum to one). But it's a physical property of the coin; not anything to do with our opinion of it. The number r is just some particular number based on the shape of the coin (and the way it's being tossed), it doesn't change with our knowledge of the coin. So r isn't a "probability" in the Bayesian sense - a description of our knowledge - it's just something out there in the world.
Now if we have some Bayesian agent who doesn't know r, then it must have some probability distribution over it. It could also be uncertain about the weight, w, and have a probability distribution over w. The distribution over r isn't "meta-uncertainty" because it's a distribution over a real physical thing in the world, not over our own internal probability assignments. The probability distribution over r is conceptually the same as the one over w.
Now suppose someone is about to flip the coin again. If we knew for certain what the value of r was we would then assign that same value as the probability of the coin coming up heads. If we don't know for certain what r is then we must therefore average over all values of r according to our distribution. The probability of the coin landing heads is its expected value, E(r).
Now E(r) actually is a Bayesian probability - it is our degree of belief that the coin will come up heads. This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking. If we had instead asked about the probability of the coin denting the floor then this would depend on the weight and would be expressed as E(f(w)) for some function f representing how probable it was that the floor got dented at each weight. We don't need a similar f in the case of r because we were free to choose the units of r so that this was unnecessary. If we had instead let r be the average number of heads in 1000 flips then we would have to have calculated the probability as E(f(r)) using f(r)=r/1000.
But the distribution over r does give you the extra information you wanted to describe. Coin 1 would have an r distribution tightly clustered around 1/2, whereas our distribution for Coin 2 would be more spread out. But we would have E(r) = 1/2 in both cases. Then, when we see more flips of the coins, our distributions change (although our distribution for Coin 1 probably doesn't change very much; we are already quite certain) and we might no longer have that E(r_1) = E(r_2).
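A concrete version of that picture (my own sketch; the Beta parameters are arbitrary, chosen only to make Coin 1's r-distribution tight and Coin 2's spread out):

```python
from scipy.stats import beta

coin1_r = beta(1000, 1000)  # r tightly clustered around 1/2
coin2_r = beta(1, 1)        # uniform: we know almost nothing about Coin 2

# Same E(r) = 0.5 for both, hence the same willingness to bet on a single flip.
print(coin1_r.mean(), coin2_r.mean())

# After observing 9 heads in 9 flips, update each distribution (Beta is conjugate
# to the coin-flip likelihood, so we just add the counts).
coin1_post = beta(1000 + 9, 1000)
coin2_post = beta(1 + 9, 1)

print(coin1_post.mean())  # ~0.502: barely moves
print(coin2_post.mean())  # ~0.909: E(r_2) has moved a long way
```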
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-08T15:00:27.359Z · LW(p) · GW(p)
But it's a physical property of the coin; not anything to do with our opinion of it.
Well, coin + environment, but sure, you're making the point that r is not a random variable in the underlying reality. That's fine; if we climb the turtles all the way down we'd find a philosophical debate about whether the universe is deterministic, and that's not quite what we are interested in right now.
The distribution over r isn't "meta-uncertainty" because it's a distribution over a real physical thing in the world
I don't think describing r as a "real physical thing" is useful in this context.
For example, we treat the outcome of each coin flip as stochastic, but you can easily make an argument that it is not, being a "real physical thing" instead, driven by deterministic physics.
For another example, it's easy to add more meta-levels. Consider Alice forming a probability distribution of what Bob believes the probability distribution of r is...
This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking.
Isn't r itself "produced by the particular question that we are asking"?
But the distribution over r does give you the extra information you wanted to describe.
Yes.
↑ comment by cousin_it · 2017-06-07T22:21:39.172Z · LW(p) · GW(p)
I'm mostly interested in prescriptive rationality, and vNM is the right starting point for that (with game theory being the right next step, and more beyond, leading to MIRI's research among other things). If you want a good descriptive alternative to vNM, check out prospect theory.
↑ comment by entirelyuseless · 2017-06-08T03:30:04.730Z · LW(p) · GW(p)
Can my utility function include risk aversion?
Yes. There is nothing preventing you from assigning a value equal to -$1,000 to the state of affairs, "I made a bet and lost $100." This would simply mean that you consider two situations equally valuable, for example one in which you have been robbed of $1,000, and another in which you made a bet and lost $100.
Assigning such values does nothing to prevent you from having a mathematically consistent utility function, and it does not imply any necessary violation of the VNM axioms.
Replies from: Jiro
↑ comment by Jiro · 2017-06-17T03:39:08.122Z · LW(p) · GW(p)
That doesn't follow, since there's also nothing preventing you from assigning a value equal to -$2,000 to the state of affairs "I was robbed of $1,000".
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-06-17T13:50:47.362Z · LW(p) · GW(p)
Someone who has risk aversion in Lumifer's sense might assign a value of -$2,000 to "I was robbed of $1,000 because I left my door unlocked," but they will not assign that value to "I took all reasonable precautions and was robbed anyway." The latter is considered not as bad.
Specifically, people assign a negative value to the thought, "If only I had taken such precautions I would not have suffered this loss." If there are no precautions they could have taken, there will be no such regret. Even if there are some precautions, if they are unusual and expensive ones, the regret will be much less, if it exists at all.
Refusing a bet is naturally an obvious precaution, so losses that result from accepting bets will be assigned high negative values in this scheme.
↑ comment by evand · 2017-06-08T14:24:10.022Z · LW(p) · GW(p)
The richer structure you seek for those two coins is your distribution over their probabilities. They're both 50% likely to come up heads, given the information you have. You should be willing to make exactly the same bets about them, assuming the person offering you the bet has no more information than you do. However, if you flip each coin once and observe the results, your new probability estimates for the next flips are now different.
For example, for the second coin you might have a uniform distribution (ignorance prior) over the set of all possible probabilities. In that case, if you observe a single flip that comes up heads, your probability that the next flip will be heads is now 2/3.
Replies from: Lumifer
↑ comment by dogiv · 2017-06-14T21:17:43.422Z · LW(p) · GW(p)
Let's reverse this and see if it makes more sense. Say I give you a die that looks normal, but you have no evidence about whether it's fair. Then I offer you a two-sided bet: I'll bet $101 to your $100 that it comes up odd. I'll also offer $101 to your $100 that it comes up even. Assuming that transaction costs are small, you would take both bets, right?
If you had even a small reason to believe that the die was weighted towards even numbers, on the other hand, you would take one of those bets but not the other. So if you take both, you are exhibiting a probability estimate of exactly 50%, even though it is "uncertain" in the sense that it would not take much evidence to move that estimate.
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-14T21:26:25.410Z · LW(p) · GW(p)
Huh? If I take both bets, there is the certain outcome of me winning $1 and that involves no risk at all (well, other than the possibility that this die is not a die but a pun and the act of rolling it opens a transdimensional portal to the nether realm...)
Replies from: dogiv
↑ comment by dogiv · 2017-06-15T00:09:14.240Z · LW(p) · GW(p)
True, you're sure to make money if you take both bets. But if you think the probability is 51% on odd rather than 50%, you make a better expected value by only taking one side.
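Spelling out the arithmetic in this exchange (my own sketch, using the $101-against-$100 stakes from above):

```python
def ev_take_one_side(p_my_side: float, win: float = 101, lose: float = 100) -> float:
    """Expected value of taking only one side of the die bet."""
    return p_my_side * win - (1 - p_my_side) * lose

def ev_take_both_sides() -> float:
    """One side wins and the other loses, so this is a guaranteed $1."""
    return 101 - 100

print(ev_take_both_sides())    # 1
print(ev_take_one_side(0.50))  # 0.5  -- at exactly 50%, the sure $1 is the best you can do
print(ev_take_one_side(0.51))  # ~2.51 -- at 51% on odd, taking only the "odd" side beats the sure $1
```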
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-15T02:54:53.906Z · LW(p) · GW(p)
The thing, is, I'm perfectly willing to accept the answer "I don't know". How will I bet? I will not bet.
There is a common idea that "I don't know" necessarily implies a particular (usually uniform) distribution over all the possible values. I don't think this is so.
Replies from: dogiv
↑ comment by dogiv · 2017-06-15T16:22:53.164Z · LW(p) · GW(p)
You will not bet on just one side, you mean. You already said you'll take both bets because of the guaranteed win. But unless your credence is quite precisely 50%, you could increase your expected value over that status quo (guaranteed $1) by choosing NOT to take one of the bets. If you still take both, or if you now decide to take neither, it seems clear that loss aversion is the reason (unless the amounts are so large that decreasing marginal value has a significant effect).
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-15T16:55:39.624Z · LW(p) · GW(p)
You already said you'll take both bets because of the guaranteed win.
From my point of view it's not a bet -- there is no uncertainty involved -- I just get to collect $1.
it seems clear that loss aversion is the reason
Not loss aversion -- risk aversion. And yes, in most situations most humans are risk averse. There are exceptions -- e.g. lotteries and gambling in general.
Replies from: dogiv
↑ comment by dogiv · 2017-06-15T20:02:29.986Z · LW(p) · GW(p)
I'm not sure what you mean here by risk aversion. If it's not loss aversion, and it's not due to decreasing marginal value, what is left?
Would you rather have $5 than a 50% chance of getting $4 and a 50% chance of getting $7? That, to me, sounds like the kind of risk aversion you're describing, but I can't think of a reason to want that.
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-15T20:32:41.007Z · LW(p) · GW(p)
what is left?
Aversion to uncertainty :-)
Would you rather have $5 than a 50% chance of getting $4 and a 50% chance of getting $7? That, to me, sounds like the kind of risk aversion you're describing, but I can't think of a reason to want that.
Let me give you an example. You are going to the theater to watch the first showing of a movie you really want to see. At the ticket booth you discover that you forgot your wallet and can't pay the ticket cost of $5. A bystander offers to help you, but because he's a professor of decision science he offers you a choice: a guaranteed $5, or a 50% chance of $4 and a 50% chance of $7. What do you pick?
Replies from: cousin_it
↑ comment by cousin_it · 2017-06-15T20:43:43.933Z · LW(p) · GW(p)
That's a great example, but it goes both ways. If the professor offered you a choice between guaranteed $4 and a 50% chance between $5 and $2, you'd be averse to certainty instead (and even pay some expected money for the privilege). Both kinds of scenarios should happen equally often, so it can't explain why people are risk-averse overall.
Replies from: Lumifer
↑ comment by Lumifer · 2017-06-15T21:03:54.421Z · LW(p) · GW(p)
Both kinds of scenarios should happen equally often
Not in real life, they don't.
People planning future actions prefer the certainty of having the necessary resources on hand at the proper time. Crudely speaking, that's what planning is. If the amount of resources that will be available is uncertain, people often prefer to create that certainty by getting enough resources so that the amount at the lower bound is sufficient -- and that involves paying the price of getting more (in the expectation) than you need.
Because people do plan, the situation of "I'll pick the sufficient and certain amount over a chance to lose and a chance to win" occurs much more often than "I certainly have insufficient resources, so a chance to win is better than no chance at all".
comment by Dagon · 2017-06-08T18:20:04.868Z · LW(p) · GW(p)
I never took this idea literally - it's a thought experiment that helps you see whether your beliefs about your beliefs are consistent. If you have a preference for one side or the other of a wager, that implies that your beliefs about the resolution are not at the line you're consciously considering.
There are LOTS of reasons not to actually make or accept a wager, mostly about the cost of tracking/collecting, and about the difference between the wager outcomes and the nominal description of the wager.
comment by casebash · 2017-06-08T15:28:06.020Z · LW(p) · GW(p)
Thanks for posting this. I've always been skeptical of the idea that you should offer two sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.
That said, "must either accept the bet or update their beliefs so the bet becomes unprofitable" does not work. The offering agent has an incentive to only ever offer bets that benefit them since only one side of the bet is available for betting.
I'm not certain (without much more consideration), but it seems that Oscar_Cunningham's solution of always taking one half of a two sided bet sounds more plausible.
Replies from: casebash, cousin_it
↑ comment by casebash · 2017-06-09T01:00:16.148Z · LW(p) · GW(p)
Partial analysis:
Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise their available information to calculate odds optimally or at least as well as Cameron, so this suggests David has some quite significant information.
Now, Cameron might have his own information that he suspects David does not have, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, along with the fact that David offered to stake 100:1 odds, Cameron might calculate 80:1 once his information is incorporated. So this would suggest that Cameron should take the bet, as the odds are better than David thinks they are. Except perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1 - he only offered 100:1 to fool Cameron into thinking the bet was better than it was - meaning that the bet is actually bad for Cameron despite his inside info.
Hmm... I still can't get my head around this problem.
↑ comment by cousin_it · 2017-06-08T15:50:45.661Z · LW(p) · GW(p)
The offering agent has an incentive to only ever offer bets that benefit them
Right, and with two-sided bets there's no incentive to offer them at all. One-sided bets do get offered sometimes, so you get a chance for free information (if the other agent is more informed than you) or free money (if you think they might be less informed).
comment by chaosmage · 2017-06-08T10:32:27.389Z · LW(p) · GW(p)
Is there a way to get the benefit of using bets to settle arguments, without the shady associations (and possible legal ramifications) of it being gambling?
Replies from: evand
↑ comment by evand · 2017-06-11T16:54:39.793Z · LW(p) · GW(p)
I'm not aware of any legal implications in the US. US gambling laws basically only apply when there is a "house" taking a cut or betting to their own advantage or similar. Bets between friends where someone wins the whole stake are permitted.
As for the shady implications... spend more time hanging out with aspiring rationalists and their ilk?