How urgent is it to intuitively understand Bayesianism?

post by Adam Zerner (adamzerner) · 2015-04-07T00:43:43.215Z · LW · GW · Legacy · 24 comments

The current state of my understanding (briefly):

  1. I very much understand reductionism and the distinction between the map and the territory. And I very much understand that probability is in the mind.
  2. From what I understand, prior probability is just the probability you thought something was going to happen before having observed some evidence, and posterior probability is just the probability you think something will happen after having observed that evidence.
  3. I don't really have a precise way of using evidence to update my beliefs, though. When I try to explain how I currently use evidence to update my beliefs, I'm disappointed to say that I struggle. I guess I just sort of think something along the lines of "It'd be unlikely that I'd observe X if A were really true. I observed X. I think it's less likely that A is true now." (A worked sketch of this kind of update appears just after this list.)
  4. I've made attempts at learning Bayes' Theorem and related material. When I think it through slowly, it makes sense, but it really takes me time to think it through. Without referring to explanations and thinking it through, I forget it, and I know that demonstrates my lack of "true" understanding. In general, my short-term memory and ability to reason through quantitative things quickly seem to be well above average, but far from elite. Probably way below average amongst this community.
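
For concreteness, here is a minimal sketch (in Python, with made-up numbers) of the kind of update described in point 3: observing X, which would be unlikely if A were true, should lower the probability of A.

```python
# Bayes' theorem with invented numbers, illustrating point 3 above.

def posterior(prior_a, p_x_given_a, p_x_given_not_a):
    """P(A | X) = P(X | A) P(A) / P(X)."""
    p_x = p_x_given_a * prior_a + p_x_given_not_a * (1 - prior_a)
    return p_x_given_a * prior_a / p_x

prior_a = 0.5          # how likely A seemed before seeing anything
p_x_given_a = 0.1      # "It'd be unlikely that I'd observe X if A were really true"
p_x_given_not_a = 0.6  # X is fairly likely if A is false

print(posterior(prior_a, p_x_given_a, p_x_given_not_a))  # ~0.14: A is now less likely
```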

I definitely plan on taking the time to study probability completely and from the ground up. I also plan on doing this for math in general, and a handful of other things as well.

Questions:
  1. What are the practical benefits of having an intuitive understanding of Bayes' Theorem? If it helps, please name an example of how it impacted your day today.
  2. I mention in 4) that it takes me time to think it through. To those of you who consider yourselves to have an intuitive understanding, do you have to think it through, or do you instinctively update in a Bayesian way?
  3. How urgent is it to intuitively understand Bayesian thinking? To use me as an example, my short-mid-term goals include getting good at programming and starting a startup. I have a ways to go, and am working towards these things. So I spend most of my time learning programming right now. Is it worth me taking a few weeks/months to study probability?

I don't intend for this post to just be about me. I sense that others have similar questions, and that this post would be a useful reference. So please keep this in mind and try to respond in such a way that would be useful to a wide audience. However, I do think that concrete examples would be useful here, and so responding to my particular situation would probably be useful to that wide audience.

 

24 comments


comment by lukeprog · 2015-04-07T02:27:47.038Z · LW(p) · GW(p)

Maybe just use odds ratios. That's what I use when I'm trying to make updates on the spot.

Replies from: Dentin
comment by Dentin · 2015-04-09T21:25:57.479Z · LW(p) · GW(p)

Nod, I do the same thing. When the prior is very small (or very large), you can effectively just multiply in the odds, and when it's near 50% I generally just derate by 50%.
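
For readers who haven't seen the odds form before, here is a minimal sketch of what "just multiply in the odds" looks like, with hypothetical numbers (not taken from either comment above):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Numbers are hypothetical, purely to illustrate the small-prior shortcut.

def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # convert back to a probability

# Small prior: P(H) = 1%, and the evidence is 10x more likely if H is true.
print(update(0.01, 10))  # ~0.092, close to the shortcut answer 0.01 * 10 = 0.10
```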

comment by Punoxysm · 2015-04-07T01:37:40.451Z · LW(p) · GW(p)

Not particularly urgent. An understanding of how to update priors (which you can get a good deal of with an intro to stats and probability class) doesn't help dramatically with the real problem of having good priors and correctly evaluating evidence.

comment by dhoe · 2015-04-08T09:27:14.883Z · LW(p) · GW(p)

What are the practical benefits of having an intuitive understanding of Bayes' Theorem? If it helps, please name an example of how it impacted your day today

I work in tech support (pretty advanced, i.e. I'm routinely dragged into conference calls on 5 minutes' notice with 10 people in panic mode because some database cluster is down). Here's a standard situation: "All queries are slow. There are some errors in the log saying something about packets dropped." So, do I go and investigate all the network cards on these 50 machines to see if the firmware is up to date, or do I look for something else? I see people picking the first option all the time. There are error messages, so we have evidence, and that must be it, right? But I have prior knowledge: it's almost never the damn network, so I just ignore that outright, and only come back to it if more plausible causes can be excluded.

Bayes gives me a formal assurance that I'm right to reason this way. I don't really need it quantitatively - just repeating "Base rate fallacy, base rate fallacy" to myself gets me in the right direction - but it's nice to know that there's an exact justification for what I'm doing. Another way would be to learn tons of little heuristics ("No. It's not a compiler bug.", "No. There's not a mistake in this statewide math exam you're taking"), but it's great to look at the underlying principle.
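
To put rough numbers on the base-rate point (the rates below are invented for illustration; dhoe's actual figures are unknown):

```python
# Why a "packets dropped" message alone shouldn't send you chasing network firmware.
# All rates below are invented for illustration.

p_network = 0.02           # prior: the network is the real culprit in ~2% of incidents
p_msg_if_network = 0.9     # dropped-packet errors show up when it really is the network
p_msg_if_other = 0.4       # ...but they also show up in plenty of unrelated outages

p_msg = p_msg_if_network * p_network + p_msg_if_other * (1 - p_network)
p_network_given_msg = p_msg_if_network * p_network / p_msg
print(p_network_given_msg)  # ~0.04: still a long shot, so check more plausible causes first
```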

Replies from: IlyaShpitser, None
comment by IlyaShpitser · 2015-04-09T08:03:03.276Z · LW(p) · GW(p)

Troubleshooting is a great example where a little probability goes a long way, thanks.


Amusingly, there was in fact an error in the GRE Subject test I once took, long ago (in computer science). All five of the multiple-choice answers were incorrect. I agree that conditional on disagreement between test and test-taker, the test is usually right.

Replies from: othercriteria
comment by othercriteria · 2015-04-10T13:42:16.232Z · LW(p) · GW(p)

The Rasch model does not hate truth, nor does it love truth, but the truth is made out of items which it can use for something else.

comment by [deleted] · 2015-04-09T08:42:12.430Z · LW(p) · GW(p)

Funny, I am trying to use LW knowledge for IT-related troubleshooting (ERP software) and have usually failed so far. I am trying to use Solomonoff induction to generate hypotheses and compare them to data, but the data is very hard to mine. I could either investigate the whole database, since theoretically any of it could affect any routine, or try to see which routines ran and which branches of them (which IF statements evaluated true and which false), which gets me to "aha, the user forgot to check checkmark X on form Y". But that also takes a huge amount of time. Often only 1% of a posting codeunit runs at all, and finding that part is hell. And I simply don't know where to generate hypotheses from. "Anything could fail" is not a hypothesis. We have user errors, we have bugs, and we have heck-knows-what cases.

Maybe I should try the Bayesian branch, not the Solomonoff branch. Since data (evidence) is very hard to mine in this case, maybe I should look at the most frequent causes of errors instead of trying to find evidence for the current one. That means I should keep a log of what each problem was and what caused it.
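
A minimal sketch of what such a log could look like (the cause categories are invented examples, not real ERP data):

```python
# Tally diagnosed root causes as they come in; the tallies become rough base rates
# for prioritizing hypotheses on the next incident. Category names are invented.

from collections import Counter

incident_log = Counter()

def record_incident(root_cause: str) -> None:
    incident_log[root_cause] += 1

for cause in ["user_error", "user_error", "missing_checkmark",
              "user_error", "code_bug", "user_error"]:
    record_incident(cause)

total = sum(incident_log.values())
for cause, count in incident_log.most_common():
    print(f"{cause}: {count / total:.0%}")  # start the next diagnosis with the common causes
```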

Thank you for the concept (https://en.wikipedia.org/wiki/Base_rate_fallacy). I think I will spread this in the ERP community and see what happens.

comment by IlyaShpitser · 2015-04-07T19:11:47.476Z · LW(p) · GW(p)

Basic probability is useful to understand, and basic numeracy is useful. Not necessarily on an intuitive level, but on a level where you can do back-of-the-envelope calculations or rough approximations when they are useful (chance of rain, chance of false positive vs. disease, etc.).

Bayesianism is not very useful to really get into unless you are in ML/stats or a related area.

I think for most people the limiting factors in "being less crazy" have to do with interpersonal stuff, not math.

Replies from: None
comment by [deleted] · 2015-04-09T09:03:05.710Z · LW(p) · GW(p)

chance of rain

Frankly, getting the data seems harder than doing the calculation. If there were a website that told you how often it rained on the 9th of April in your region over the last 25 years, and how often it rained between the 1st and 15th of April over the last 5 years (these are the most relevant data, right?), they might as well do the math themselves. Better yet, meteorologists, who hopefully know some Bayes, can combine it with specific information like current high and low pressure zones and cloud radar, and make predictions. Probably the best idea is to use theirs.

BTW, can anyone confirm that meteorologists know some Bayes? If August is normally dry as fsck in your region, they should be fairly skeptical about specific evidence that suggests rain. While if October is normally torrential, then even the slightest evidence of rain should count for a lot...

comment by fractalcat · 2015-04-09T15:24:05.613Z · LW(p) · GW(p)

First off, I should note that I'm still not really sure what 'Bayesianism' means; I'm interpreting it here as "understanding of conditional probabilities as applied to decision-making".

No human can apply Bayesian reasoning exactly, quantitatively and unaided in everyday life. Learning how to approximate it well enough to tell a computer how to use it for you is a (moderately large) research area. From what you've described, I think you have a decent working qualitative understanding of what it implies for everyday decision-making, and if everyday decision-making is your goal I suspect you might be better-served reading up on common cognitive biases (I heartily recommend /Heuristics and Biases/ ed Kahneman and Tversky as a starting point). Learning probability theory in depth is certainly worthwhile, but in terms of practical benefit outside of the field I suspect most people would be better off reading some cognitive science, some introductory stats and most particularly some experimental design.

Wrt your goals, learning probability theory might make you a better programmer (depends what your interests are and where you are on the skill ladder), but it's almost certainly not the most important thing (if you would like more specific advice on this topic, let me know and I'd be happy to elaborate). I have examples similar to dhoe's, but the important bits of the troubleshooting process for me are "base rate fallacy" and "construct falsifiable hypotheses and test them before jumping to conclusions", not any explicit probability calculation.

comment by ChristianKl · 2015-04-07T19:14:26.929Z · LW(p) · GW(p)

I very much understand reductionism and the distinction between the map and the territory

How do you know that you understand those concepts? What does it mean for you to say that you understand them?

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-04-07T19:31:30.505Z · LW(p) · GW(p)

Hm. After reading that, my brain's first impression was that these are straightforward questions that I should be able to easily answer. However, I'm having more difficulty answering them than I thought I would. I'm not sure what the implications of this are.

How do you know that you understand those concepts?

Part of the reason is that I've "discovered it for myself". I'm 22 and I don't remember ever not understanding reductionism. I'm remembering having a good understanding of it back in middle school (although a) I know that memory isn't always trustworthy, and b) I definitely didn't know the lingo). Another part of the reason is that the stuff I thought before reading LW was very much mirrored in the Reductionism sequence. So the fact that it's so well accepted here is outside-view evidence that the ideas of reductionism are true.

But the ultimate reason is just that I've observed that lower-level maps make more accurate predictions than higher-level ones. I'm having a hard time coming up with things that I have actually observed, but the clearest example I could think of off the top of my head is that scientists have found that in terms of what makes the most accurate predictions, physics > chemistry > biology.

What does it mean for you to say that you understand them?

Well, reductionism isn't really a single concept, and so it doesn't really make sense to say that "I understand reductionism". But what I mean is that I think I understand a) the core concepts quite well and b) a reasonably large proportion of what there is to know about it.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-07T20:27:49.496Z · LW(p) · GW(p)

it doesn't really make sense to say that "I understand reductionism".

How much do you really understand if you say something about the topic that you believe not to make sense?

I'm remembering having a good understanding of it back in middle school (although a) I know that memory isn't always trustworthy, and b) I definitely didn't know the lingo).

Not having updated one's beliefs on a topic since middle school doesn't suggest deep understanding.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-04-07T20:43:30.068Z · LW(p) · GW(p)

When interpreted literally, "I understand reductionism" might not make sense, but I expect that people won't interpret it completely literally and would know what I mean.

If I implied that I haven't updated since middle school, I didn't mean to do that. I have in fact updated quite a bit.

comment by TheAncientGeek · 2015-04-07T10:57:19.899Z · LW(p) · GW(p)

I very much understand reductionism and the distinction between the map and the territory. And I very much understand that probability is in the mind.

Which, combined together, is a problem, because Yudkowsky's argument that probability is in the mind attempts, fallaciously, to infer a feature of the territory (the existence of complete causal determinism) from a feature of the map (the way humans think about probability).

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2015-04-07T12:32:16.168Z · LW(p) · GW(p)

I don't think this is the right place for that discussion.

Replies from: None
comment by [deleted] · 2015-04-08T15:51:13.556Z · LW(p) · GW(p)

But it would help the OP better formulate his question. He's thinking he needs to internalize Bayes' theorem, when in fact what is really important is that he understand probability theory, of which Bayes' theorem is one manifestation. Note that Bayesianism describes a philosophical movement, not a scientific or mathematical one. You don't find statisticians discussing probability theory along such tribal lines, so asking tribalistic questions is perhaps not the best strategy for teasing out an understanding of the underlying model.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-04-12T14:52:49.681Z · LW(p) · GW(p)

On LessWrong, Bayesianism is probability theory. Moreover, it is bundled in with subjectivism about probability, determinism, many-worlds theory, etc. It all comes down to whether the OP wants to become a rationalist or a LessWrong rationalist, like deciding whether you want to be an economist or an Austrian-school economist. If the former, some unlearning will be required.

Replies from: ChristianKl
comment by ChristianKl · 2015-04-12T15:00:55.573Z · LW(p) · GW(p)

I don't think there is a fixed concept of what a "LessWrong rationalist" happens to be. LW is fairly diverse.

Replies from: TheAncientGeek, IlyaShpitser
comment by TheAncientGeek · 2015-04-12T15:05:36.138Z · LW(p) · GW(p)

There is a fixed concept of LW rationalISM. There are also dissidents in the community.

comment by IlyaShpitser · 2015-04-12T16:25:02.811Z · LW(p) · GW(p)

I think GP means "people who mostly agree with a fairly ad hoc set of things EY believes."

Replies from: ChristianKl
comment by ChristianKl · 2015-04-12T19:55:22.427Z · LW(p) · GW(p)

Given EY's limited participation on LW in recent years, I'm not even sure what he believes today on a bunch of related questions.

There is the CFAR research into practical rationality. Julia Galef wrote a post about how it changed her view of rationality. I'm not aware of public statements from EY that specify the extent to which he has updated on those questions. To the extent that you presume EY didn't update, that behavior has little to do with "LW rationality" in any meaningful definition of the term.

At the same time there are shared traits that distinguish people in the LW community. After my first LW meetup I noticed that a fellow attendee had a skateboard with a handle for getting around. We talked about its use and how it saved him a lot of time because it's faster than walking. Then I asked him about the safety aspect.

He sincerely answered: "That's a valid concern, I don't know the numbers. I should research the numbers."

Outside of LW nobody responds that way. If an anthropologist went and studied the people at our meetups, behavior like that would draw his attention much more than agreement with a stereotypical set of beliefs from the sequences.

The idea that what makes someone an LW rationalist is determined not by behavior but by agreement with some set of claims from the sequences doesn't really lead anywhere interesting.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-04-12T22:17:13.426Z · LW(p) · GW(p)

I am not dissing the positive aspects of LW culture. After all, I am still here, aren't I? I find much here quite valuable.

And anyway, this isn't about what I think; I am giving my view of what GP means. Why are you perceiving this as an attack?

Replies from: ChristianKl
comment by ChristianKl · 2015-04-13T11:40:55.933Z · LW(p) · GW(p)

Why are you perceiving this as an attack?

No.

I am not dissing the positive aspects of LW culture. After all, I am still here, aren't I?

If I remember right, though, you don't see yourself as part of LW culture because you are contrarian on key claims. The same is likely true for TheAncientGeek. Being contrarian about things, however, is quite essential LW behavior.

GP

I'm not quite sure what you mean with that. Do you mean TheAncientGeek?