45% to 55% vs. 90% to 100%

post by yhoiseth · 2023-08-28T19:15:18.524Z · LW · GW

This is a question post.

Contents

  Answers
    14 GuySrinivasan
    8 Max H
    4 Donald Hobson
    4 Dave Orr

In Never Split The Difference: Negotiating As If Your Life Depended On It, Chris Voss discusses cognitive biases:

Through decades of research with Tversky, Kahneman proved that humans all suffer from Cognitive Bias, that is, unconscious—and irrational—brain processes that literally distort the way we see the world. Kahneman and Tversky discovered more than 150 of them.

There’s the Framing Effect, which demonstrates that people respond differently to the same choice depending on how it is framed (people place greater value on moving from 90 percent to 100 percent—high probability to certainty—than from 45 percent to 55 percent, even though they’re both ten percentage points) (p. 12).

Isn’t it rational to value 90% → 100% more than 45% → 55%?

Even going from 90% to 95% means you are wrong half as often — 5% of the time instead of 10% — whereas going from 45% to 55% only removes about 18% of your errors.
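A quick numerical check of that arithmetic, as a minimal Python sketch (the helper function is my own illustration):

```python
def error_reduction(p_before: float, p_after: float) -> float:
    """Fraction of your errors removed by moving from p_before to p_after,
    treating each stated probability as your chance of being right."""
    return ((1 - p_before) - (1 - p_after)) / (1 - p_before)

print(error_reduction(0.90, 0.95))  # ~0.5  -> wrong half as often
print(error_reduction(0.45, 0.55))  # ~0.18 -> under a fifth of errors removed
print(error_reduction(0.90, 1.00))  # 1.0   -> every error removed
```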

Is my thinking and math correct? If not, how am I wrong?

Assuming I’m right, I would also really appreciate a better way to explain this.

Answers

answer by SarahNibs (GuySrinivasan) · 2023-08-28T19:22:12.006Z · LW(p) · GW(p)

It depends.

Chance of a bet paying out? Value them the same.

Amount of information you gained, where you value transferring that learning to other questions, designs, etc? 90% --> 100% is way better.

In a domain where you know you have plenty of uncertainty? 90% --> 100% is a huge red flag that something just went very wrong. ;)

answer by Max H · 2023-08-29T00:14:23.037Z · LW(p) · GW(p)

You're basically right, in that it requires much stronger evidence to move you from 90% -> 99.9% credence than to move you from 45% -> 55%.

It is helpful to think in terms of likelihood ratios. To go from 90% -> 99.9% credence requires observing evidence with a likelihood ratio of (0.999 / 0.001) / (0.9 / 0.1) = 111, which is about 6.8 bits of evidence. [edit: fixed flipped / wrong math]

To go from 45% -> 55% credence, you just need a likelihood ratio of (0.55 / 0.45) / (0.45 / 0.55) ≈ 1.5, or about 0.6 bits of evidence.
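Here is that calculation in odds form as a minimal Python sketch (posterior odds = prior odds × likelihood ratio; the function name is my own):

```python
from math import log2

def evidence_needed(p_prior: float, p_post: float) -> tuple[float, float]:
    """Likelihood ratio and bits of evidence required to move p_prior -> p_post."""
    prior_odds = p_prior / (1 - p_prior)
    posterior_odds = p_post / (1 - p_post)
    likelihood_ratio = posterior_odds / prior_odds
    return likelihood_ratio, log2(likelihood_ratio)

print(evidence_needed(0.90, 0.999))  # (~111, ~6.8 bits)
print(evidence_needed(0.45, 0.55))   # (~1.49, ~0.6 bits)
```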

(Getting to 100% credence via Bayesian updating requires +inf bits of evidence; remember that 100% isn't really a probability.)

comment by Noosphere89 (sharmake-farah) · 2023-08-29T00:27:25.809Z · LW(p) · GW(p)

Re getting to 100% probability of an outcome: that's actually surprisingly easy to do sometimes, especially in infinite sets like the real numbers. It's not trivial, but you can get such outcomes.
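For instance, a real number drawn uniformly from [0, 1] is irrational with probability exactly 1, and equal to any single prespecified value with probability exactly 0.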

comment by Max H (Maxc) · 2023-08-29T01:08:04.040Z · LW(p) · GW(p)

Sure, you can also easily imagine a set with measure 1 or 0. My remark was just a reminder that 1 and 0 are not probabilities [LW · GW], in the same sense that infinity is not a real number.


Just like you can get away with manipulating infinities intuitively if you're careful, instead of treating everything formally via limits, you can also usually get away with treating 0 and 1 as ordinary probabilities. But you have to be careful, and it makes the math in cases like the OP less intuitive than just using likelihood ratios and logarithms, IMO.

comment by Noosphere89 (sharmake-farah) · 2023-08-29T01:19:45.472Z · LW(p) · GW(p)

My remark was just a reminder that 1 and 0 are not probabilities, in the same sense that infinity is not a real number.

This is admittedly a minor internet crusade/pet peeve of mine, but the claim that 1 and 0 aren't probabilities is exactly wrong, and the analogy is pretty strained here. In fact, probability theory needs to have 0 and 1 as legitimate probabilities, or things fall apart into incoherency.

The post you linked is one of the most egregiously wrong things Eliezer has ever said that is purely mathematical.

And we don't just imagine measure/probability 1 or 0 sets; we have proved that certain sets of this kind exist.

There are fundamental disanalogies between infinity, which is not a number (except in the extended real line and the projective real line), and 0 and 1, which are probabilities.

comment by Max H (Maxc) · 2023-08-29T03:10:22.500Z · LW(p) · GW(p)

If you want to prove a bunch of theorems involving continuities and infinities, treating 0 and 1 as probabilities is much more elegant and things mostly fall apart without them, yes.

If your goal is to reason under uncertainty, thinking in terms of odds ratios and decibels is a way of putting your map in close correspondence with the territory. Allowing for infinities in this use case introduces complications and weird philosophical questions about the (in)finiteness of reality.

On Earth, most people start out by learning probability theory in terms of probabilities, for the purpose of solving math problems or proving theorems in school. Later (if they stumble across the right kinds of blogs) they learn probability as a reasoning tool, but often forget, or never realize, that thinking in terms of odds ratios is much more convenient for this purpose once you get used to it.

On a planet where people grew up studying probability as a reasoning tool first, and only as an afterthought studied it as a branch of math, someone might need to write a blog post pointing out that 0 and 1 are basically just ordinary probabilities, and that sometimes probabilities are more elegant and intuitive than odds and decibels, lest people start over-complicating their proofs.

I don't see anything wrong or contradictory with pointing out the difference between probability as mathematical theory and probability as reasoning method.

answer by Donald Hobson · 2023-08-29T15:41:32.531Z · LW(p) · GW(p)

If you have some utility function that depends on the amount of money you have, then the improvement from a bet that offers a 45% chance of winning a prize to one that offers a 55% chance is identical to the improvement from a bet that offers a 90% chance to one offering a 100% chance. 
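As a minimal sketch of why: expected utility is linear in the win probability, so any 10-point step adds the same value (the prize utility of 100 below is an arbitrary stand-in):

```python
from math import isclose

def expected_utility(p_win: float, u_prize: float) -> float:
    """Expected utility of a bet paying u_prize with probability p_win, else nothing."""
    return p_win * u_prize

U_PRIZE = 100  # arbitrary illustrative value; the comparison holds for any prize
gain_low = expected_utility(0.55, U_PRIZE) - expected_utility(0.45, U_PRIZE)
gain_high = expected_utility(1.00, U_PRIZE) - expected_utility(0.90, U_PRIZE)
print(gain_low, gain_high)           # both ~10.0, i.e. 0.10 * U_PRIZE
print(isclose(gain_low, gain_high))  # True
```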

Note that this holds only when you have no "intermediate choices".

Suppose you are pretty short of cash at the moment, and you might be getting a prize tomorrow. You have a chance to buy a fancy meal now. If you buy the fancy meal and then don't get the prize, you will really be struggling to pay off your bills. So it only makes sense to buy the fancy meal if you are >95% sure that you are getting the prize.

In this setup, it does make sense to value the extra certainty. 
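A minimal sketch of that threshold, with made-up numbers (assume the meal is worth 1 util and struggling with the bills costs 20; both figures are illustrative assumptions):

```python
def eu_buy_meal(p_prize: float, u_meal: float = 1.0, u_struggle: float = -20.0) -> float:
    """Expected utility of buying the meal now; you struggle with the bills
    only if the prize doesn't arrive. Utilities are illustrative assumptions."""
    return u_meal + (1 - p_prize) * u_struggle

# Skipping the meal has expected utility 0, so buy only when eu_buy_meal(p) > 0.
# Break-even: 1 + (1 - p) * (-20) = 0  =>  p = 19/20 = 0.95.
for p in (0.45, 0.55, 0.90, 1.00):
    print(p, eu_buy_meal(p) > 0)  # only p = 1.00 clears the 95% threshold
```

Moving from 45% to 55% doesn't change the decision at all, while moving from 90% to 100% crosses the break-even point.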

This is all assuming you don't terminally value certainty in and of itself; you terminally value something else. (If you do terminally value certainty, then you risk being money-pumped, paying to learn info even though you can't use that info for anything.)

But even if certainty isn't a terminal goal, it can be an instrumental goal.

The framing effect thing is about the chance of winning some prize. Why would you want certainty? What you want is the prize.

answer by Dave Orr · 2023-08-28T23:35:43.797Z · LW(p) · GW(p)

45->55% is a 22% relative gain, while 90->100% is only an 11% gain. 

On the other hand, 45->55% is a reduction in error by 18%, while 90->100% is a 100% reduction in errors.
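Both framings side by side, as a minimal Python sketch (the function names are my own):

```python
def relative_gain(p_before: float, p_after: float) -> float:
    """How much the success chance grew, relative to where it started."""
    return p_after / p_before - 1

def error_reduction(p_before: float, p_after: float) -> float:
    """What fraction of the failure chance was eliminated."""
    return 1 - (1 - p_after) / (1 - p_before)

print(relative_gain(0.45, 0.55), error_reduction(0.45, 0.55))  # ~0.22, ~0.18
print(relative_gain(0.90, 1.00), error_reduction(0.90, 1.00))  # ~0.11, 1.0
```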

Which framing is best depends on the use case. Preferring one naively over the other is definitely an error. :)
