Am I Understanding Bayes Right?

post by CyrilDan · 2013-11-13T20:40:43.402Z · LW · GW · Legacy · 20 comments


Hello, everyone.

 

I'm relatively new here as a user rather than as a lurker, but even after trying to read every tutorial on Bayes' Theorem I could get my hands on, I'm still not sure I understand it. So I was hoping that I could explain Bayesianism as I understand it, and some more experienced Bayesians could tell me where I'm going wrong (or maybe that I'm not going wrong, and it's a confidence issue rather than an actual knowledge issue). If this doesn't interest you at all, then feel free to tap out now, because here we go!

 

Abstraction

Bayes' Theorem is an application of probability. Probability is an abstraction based on logic, which is in turn based on possible worlds. By this I mean that they are both maps that refer to multiple territories: whereas a map of Cincinnati (or a "map" of what my brother is like, for instance) refers to just one territory, abstractions are good for more than one thing. Trigonometry is a map of not just this triangle here, but of all triangles everywhere, to the extent that they are triangular. Because of this it is useful even for triangular objects that one has never encountered before, but it only tells you about them partially (e.g. it won't tell you the lengths of the sides, because that isn't part of the definition of a triangle). It also only works at scales at which the object in question approximates a triangle (i.e. the "triangle" map is probably useful at macroscopic scales, but breaks down as you get smaller).

 

Logic and Possible Worlds

Logic is an attempt to construct a map that covers as much territory as possible, ideally all of it. Thus when people say that logic is true at all times, at all places, and with all things, they aren't really telling you about the territory, they're telling you about the purpose of logic (in the same way that the "triangle" map is ideally useful for triangles at all times, at all places).

 

One form of logic is Propositional Logic. In propositional logic, all the possible worlds are imagined as points. Each point is exactly one possible world: a logically-possible arrangement that gives a value to all the different variables in the universe. Ergo no two possible universes are exactly the same (though they will share elements).

 

These possible universes are then joined together in sets called "propositions". These sets can be pictured as Venn diagrams (or what George Lakoff refers to as "container schemas"). Thus, for any given set, every possible universe is either inside or outside of it, with no middle ground (see "questions" below). So if the set I'm referring to is the proposition "The Snow is White", that set would include all possible universes in which the snow is white. The rules of propositional logic follow from the container schema.
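To make this concrete, here's a toy sketch in Python (my own illustration; the variables are made up):

```python
from itertools import product

# A toy universe with three binary variables. Each possible world is one
# complete assignment of truth values to all the variables, so no two
# worlds are exactly the same.
variables = ["snow_is_white", "grass_is_green", "sky_is_blue"]
worlds = [dict(zip(variables, values))
          for values in product([True, False], repeat=len(variables))]

# A proposition is the set of worlds in which it holds; we index worlds
# by position so they can live in ordinary Python sets.
snow_is_white = {i for i, w in enumerate(worlds) if w["snow_is_white"]}

# The container schema: every world is either inside or outside the set,
# with no middle ground. Trivially true here, which is the point.
for i in range(len(worlds)):
    assert (i in snow_is_white) or (i not in snow_is_white)
```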

 

Bayesian Probability

If propositional logic is about what's inside a set or outside of a set, probability is about the size of the sets themselves. Probability is a measurement of how many possible worlds are inside a set, and conditional probability is about the size of the intersections of sets.

 

Take the example of the dragon in your garage. To start with, there either is or isn't a dragon in your garage. Both sets of possible worlds have elements in them. But if we look in your garage and don't see a dragon, then that eliminates all the possibilities of there being a *visible* dragon in your garage, and thus eliminates those possible universes from the 'there is a dragon in your garage' set. In other words, the probability of that being true goes down. And because not seeing a dragon in your garage is what you would expect if there in fact isn't a dragon in your garage, that set remains intact. Then if we look at the ratio of the remaining possible worlds, we see that the probability of the no-dragon-in-your-garage set has gone up, not in absolute terms (the set of all possible worlds is what we started with; there isn't any more!) but relative to the alternative hypothesis (in the same way that if the denominator of a fraction goes down, the size of the fraction goes up).
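Here's that same story as a world-counting sketch in Python (the counts are invented; only the mechanics matter):

```python
from fractions import Fraction

# Toy model: 100 equally weighted possible worlds. In 40 of them there is
# a dragon in the garage, and in 30 of those 40 the dragon is visible.
all_worlds = set(range(100))
dragon = set(range(40))
visible_dragon = set(range(30))  # a subset of the dragon worlds

def prob(event, given):
    """Probability as a ratio of worlds: |event & given| / |given|."""
    return Fraction(len(event & given), len(given))

print(prob(dragon, all_worlds))   # prior: 40/100 = 2/5

# We look and see no dragon: every visible-dragon world is eliminated,
# while every no-dragon world survives.
surviving = all_worlds - visible_dragon
print(prob(dragon, surviving))    # posterior: 10/70 = 1/7
```

The no-dragon set kept all 60 of its worlds, but its probability rose from 60/100 to 60/70: the numerator stayed put while the denominator shrank.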

 

This is what Bayes' Theorem is about: using the process of elimination to eliminate *part* of the set of a proposition, thus providing evidence against it without being a full refutation.
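For reference, here is the theorem itself in its standard form (the formula is standard; the possible-worlds gloss is mine):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

In the picture above, $P(H)$ is the share of possible worlds in the hypothesis set, $P(E \mid H)$ is the fraction of those worlds that survive the observation, and $P(E)$ is the fraction of all worlds that survive it.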

 

Naturally, this all takes place in one's mind: the world doesn't shift around you just because you've encountered new information. Probability is in this way subjective (it has to do with maps, not territories per se), but it's not arbitrary: as long as you accept the possible worlds/logic metaphor, it necessarily follows.

 

Questions/trouble points that I'm not sure of:

*I keep seeing probability referred to as an estimation of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn't necessarily follow. Does the above explanation differ from how other people use probability?

*Also, if probability is defined as an arbitrary estimation of how sure you are, why should those estimations follow the laws of probability? I've heard the Dutch book argument, so I get why there might be practical reasons for obeying them, but unless you accept a pragmatist epistemology, that doesn't provide reasons why your beliefs are more likely to be true if you follow them. (I've also heard of Cox's rules, but I haven't been able to find a copy. And if I understand right, they say that Bayes' theorem follows from Boolean logic, which is similar to what I've said above, yes?)

*Another question: above I used propositional logic, which is okay, but it's not exactly the creme de la creme of logics. I understand that fuzzy logics work better for a lot of things, and I'm familiar with predicate logics as well, but I'm not sure what the interaction of any of them is with probability or the use of it, although I know that technically probability doesn't have to be binary (sets just need to be exhaustive and mutually exclusive for the Kolmogorov axioms to work, right?). I don't know, maybe it's just something that I haven't learned yet, but the answer really is out there?

 

Those are the only questions that are coming to mind right now (if I think of any more, I can probably ask them in comments). So anyone? Am I doing something wrong? Or do I feel more confused than I really am?

20 comments

Comments sorted by top scores.

comment by MrMind · 2013-11-14T13:19:09.846Z · LW(p) · GW(p)

I'll try to give you the formalist perspective, which is a sort of 'minimal' take on the whole matter.

Everything starts with a set of symbols, usually finite, that can be combined to form strings called formulas.
Logic is then defined as two sets of rules: one that tells you which strings of symbols are considered valid (morphology), and one that tells you which valid formulas follow from which valid formulas (syntax).

Then there's the concept of truth: when you have a logic, you notice that sometimes formulas refer to entities or states of some environment, and that syntactic rules somehow reflect processes happening between those entities.
Specifying which environment, which processes and which entities you are considering is the purpose of ontology, while the task of relating ontology and morphology/syntax is the purpose of semantics.

As you can probably imagine, there are a myriad of logics and myriads of ontologies (often called models).
There's Hilbertian ontology, where the environment is a set or a class, the entities are its members and relations and functions between them, and semantics relates Tarskian truths to other Tarskian truths.
There's categorial ontology, where formulas are interpreted as objects of a category and syntactic rules as arrows between them.
There's dialogical ontology, where formulas are states of a game and rules are ways for attacking or defending the current state.
There is possible world ontology, which you have described.
And so on and so on.

Historically, a specific set of rules and symbols emerged exceedingly often, and seemed to be particularly apt at capturing the reasoning that mathematicians intuitively adopted. That is now known as classical logic (CL), and for a very, very long time it was believed to be the only "true" logic, that is, the logic of the atemporal and universal description of all things.
CL is very useful and can be adapted to a wide array of ontologies: for example, Boolean algebras (which are instances of the Hilbertian ontology) and possible-worlds semantics. The latter in particular was devised for an extension of CL, both in symbols and rules, known as modal logic.

Enter probability.
The determination of which concepts the word refers to has a long and heated history, but the modern understanding splits into a few camps. On one side, you have those who take it to refer to a stated property of the world, a not very well specified "long-run randomized frequency" (frequentists). On the other side, you have those who believe that probability does not refer to an objective property of a system, but to the degrees of belief of an observer (Bayesians). Bayesians themselves are divided between those who think that probabilities are entirely subjective (subjectivists), relying on De Finetti's coherence theorem and a pragmatic interpretation, and those who think that probabilities are the degrees of belief that an idealized rational agent has about a system. These folks we could call objectivists, but since they are the majority here on LessWrong, it's simpler to just call them Bayesians.
Bayesians rely on Cox's theorem, which justifies the structure of probability from a small set of minimally rational requirements.
All three of them, frequentists, subjectivists and Bayesians, believe that the structure of probability is correctly described by the mathematical concept of a measure, as formalized by the Kolmogorov axioms.

Enter Jaynes.
He was a physicist, and wrote a book from a Bayesian perspective, showing that probabilities thus intended are an extension of CL. Where CL can be seen as assigning to propositions only two values, 0 and 1, PTEL (probability theory as extended logic) relaxes that restriction, assigning values in the [0,1] interval, following two simple rules derived from Cox's theorem.
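The two rules in question, in their standard form, are the product rule and the negation (sum) rule:

$$P(AB \mid C) = P(A \mid BC)\,P(B \mid C), \qquad P(A \mid C) + P(\bar{A} \mid C) = 1$$

Bayes' theorem is an immediate consequence of the product rule.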
In his work, Jaynes greatly systematized and simplified the field, solved many of its paradoxes, and brought probability theory to previously unexplored areas of application. He also talked about the ideal rational agent as a 'robot'; it is thus no surprise that his book is regarded as a cornerstone for LW's understanding of Artificial Intelligence.

Now, on to your questions:

Does the above explanation differ from how other people use probability?

As you can see, you are just using one ontology (possible worlds) to justify one interpretation (Kolmogorov measure), but there are many more. The interpretation of choice here is PTEL, usually coupled with a Hilbertian ontology for the underlying CL, or just left unspecified.

why should those estimations follow the laws of probability?

Because of Cox's theorem: you can show that with a specified amount of initial information, you can do no better than following the laws of probability.

And if I understand right, they say that Bayes' theorem follows from Boolean logic, which is similar to what I've said above, yes?

Actually no. First, there is no "Boolean logic": there is classical logic interpreted in the ontology of Boolean algebra, but that doesn't really matter. Cox's theorem is based on CL, but makes additional assumptions (those of a minimal, ideally rational observer). It's not just pure logic.
Also, there has been some exploration in the direction of extending Cox to other underlying logics.

I'm familiar with predicate logics as well, but I'm not sure what the interaction of any of them is with probability or the use of it

As per above, PTEL is based only on classical logic, which is a particular first-order, two-valued predicate logic. AFAIK, no successful extension of Cox has been made to other kinds of logic.
Fuzzy logic resembles PTEL in expanding the set of truth values, but it uses different rules than CL, so the resemblance is only superficial: PTEL and fuzzy logics are two very different beasts.

sets just need to be exhaustive and mutually exclusive for the Kolmogorov axioms to work, right?

No, not really. The Kolmogorov axioms are defined on any (sigma-)algebra, whether it is the algebra of subsets of a measurable set or something else.
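For reference, the axioms in their standard form, for a measure $P$ on a sigma-algebra $\mathcal{F}$ of subsets of a set $\Omega$:

$$P(A) \ge 0 \;\;\text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\bigg(\bigcup_{i} A_i\bigg) = \sum_{i} P(A_i)$$

for any countable collection of pairwise disjoint $A_i \in \mathcal{F}$. Exhaustiveness and mutual exclusivity only enter when you carve $\Omega$ into a particular partition of hypotheses.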

the answer really is out there?

Of course it is ;)

Replies from: CyrilDan
comment by CyrilDan · 2013-11-15T07:35:40.128Z · LW(p) · GW(p)

First of all, let me thank you so much, MrMind, for your post. It was really helpful, and I greatly appreciate how much work you put into it!

I'll try to give you the formalist perspective, which is a sort of 'minimal' take on the whole matter.

Much obliged.

Everything starts with a set of symbols, usually finite, that can be combined to form strings called formulas.

Question. I'm making my way through George Lakoff's works on metaphor and embodied thought; are you familiar with the theory at all? (I know lukeprog did a blog post about them, but it's not nearly everything there is to know.) Basically the theory is that our most basic understandings are linked to our direct sensory experience, and then we abstract away from that metaphorically in various fields, a very bottom-up approach. Whereas what you're saying starts with symbols, which I think would be the reverse of what he's saying? It's probably just a difference of perspective, but as a starting point it gives the concepts less ballast for me. That said, I'm not entirely lost - I think I mentioned that I've studied symbolic logic, so I'll brave ahead!

Then there's the concept of truth: when you have a logic, you notice that sometimes formulas refer to entities or states of some environment, and that syntactic rules somehow reflect processes happening between those entities. Specifying which environment, which processes and which entities you are considering is the purpose of ontology, while the task of relating ontology and morphology/syntax is the purpose of semantics.

As you can probably imagine, there are a myriad of logics and myriads of ontologies (often called models).

How does this connect to the map-territory distinction? Generally as I've understood it, logic is a form of map, but so too would be a model. Would a model be a map and logic be a map of a map? Am I getting that right?

All three of them, frequentists, subjectivists and Bayesians, believe that the structure of probability is correctly described by the mathematical concept of a measure, as formalized by the Kolmogorov axioms.

This is something that has always confused me, the probability definition wars. Is there really something to argue about here? Maybe I'm missing something, but it seems like a "if a tree falls in the woods..." kind of question that should just be taboo'd. But when you taboo frequency-probability off from epistemic-probability, it's not immediately obvious why the same axioms should apply to both of them (which doesn't mean that they don't; thank you to everyone for pointing me to Cox's Theorems again. I know I've seen them before, but I think they're starting to click a little bit more on this pass-over). And Richard Carrier's new book said that they're actually the same thing, which is just confusing (that epistemic probability is the frequency at which beliefs with the same amount of evidence will be true/false, or something like that). (EDIT: Another possibility would be that the frequentist and Bayesian definitions could both be "probability" and both conform to the axioms, but that would just make it more perplexing for people to argue about it.)

As you can see, you are just using one ontology (possible worlds) to justify one interpretation (Kolmogorov measure), but there are many more.

Thanks for the terminology. I don't really understand what they are given so brief a description, but knowing the names at least spurs further research. Also, am I doing it right for the one ontology and one interpretation that I've stumbled across, regardless of the others?

Fuzzy logic resembles PTEL in expanding the set of truth values, but it uses different rules than CL, so the resemblance is only superficial: PTEL and fuzzy logics are two very different beasts.

Right, because in fuzzy logics the spectrum is the truth value (because being hot/cold, near/far, gay/straight, sexual/asexual, etc. is not an either/or), whereas with PTEL the spectrum is the level of certainty in a more staunch true/false dichotomy, right? I don't actually know fuzzy logic; I just know the premise of it.

The other question I forgot to ask in the first post was how Bayes' Theorem interacts with group identity not being a matter of necessary and sufficient conditions, or for other fuzzy concepts like I mentioned earlier (near/far, &c.). For this would you just pick a mostly-arbitrary concept boundary so that you have a binary truth value to work with?

Replies from: MrMind
comment by MrMind · 2013-11-15T10:49:29.521Z · LW(p) · GW(p)

I'm making my way through George Lakoff's works on metaphor and embodied thought; are you familiar with the theory at all?

Unfortunately no, but from your description it sounds quite like the theory of mind of General Semantics.

Whereas what you're saying starts with symbols, which I think would be the reverse of what he's saying?

Not exactly, because in the end symbols are just units of perception, all distinct from one another. But while Lakoff's theory probably aims at psychology, logic is a denotational and computational tool, so it doesn't really matter if they aren't perfect inverses.

How does this connect to the map-territory distinction? Generally as I've understood it, logic is a form of map, but so too would be a model. Would a model be a map and logic be a map of a map? Am I getting that right?

Yes. Since a group of maps can be seen as a set of things in itself, it can be treated as a valid territory. In logic there are also map/territory loops, where the formulas themselves become the territory mapped by the same formulas (akin to talking in English about the English language). This trick is used, for example, in Goedel's and Tarski's theorems.

This is something that has always confused me, the probability definition wars. Is there really something to argue about here?

Yes. Basically the Bayesian definition is more inclusive: e.g. there is no definition of the probability of a single coin toss in the frequency interpretation, but there is in the Bayesian one. Also, in the Bayesian take on probability, the frequentist definition emerges as a natural by-product. Plus, the Bayesian framework produced a lot of detangling in frequentist statistics and introduced more powerful methods.

thank you to everyone for pointing me to Cox's Theorems again. I know I've seen them before, but I think they're starting to click a little bit more on this pass-over

The first two chapters of Jaynes' book, a pre-print version of which is available online for free, do a great job in explaining and using Cox to derive Bayesian probability. I urge you to read them to fully grasp this point of view.

And Richard Carrier's new book said that they're actually the same thing, which is just confusing

And easily falsifiable.

Also, am I doing it right for the one ontology and one interpretation that I've stumbled across, regardless of the others?

Yes, but remember that this measure interpretation of probability requires the set of possible worlds to be measurable, which is a very special condition to impose on a set. It is certainly very intuitive, but technically burdensome. If you plan to work with probability, it's better to start from a cleaner model.

Right, because in fuzzy logics the spectrum is the truth value (because being hot/cold, near/far, gay/straight, sexual/asexual, etc. is not an either/or), whereas with PTEL the spectrum is the level of certainty in a more staunch true/false dichotomy, right?

Yes. Fuzzy logic has an infinity of truth values for its propositions, while in PTEL every proposition is 'in reality' just true or false; you just don't know which is which, and so you track your certainty with a real number.

The other question I forgot to ask in the first post was how Bayes' Theorem interacts with group identity not being a matter of necessary and sufficient conditions, or for other fuzzy concepts like I mentioned earlier (near/far, &c.). For this would you just pick a mostly-arbitrary concept boundary so that you have a binary truth value to work with?

Yes, in PTEL you already have real numbers, so it's not difficult to just say "The tea is 0.7 cold", and provided you have a clean (that is, classical) interpretation for this, the sentence is just true or false. Then you can quantify your uncertainty: "I give 0.2 credence to the belief that the tea is 0.7 cold". More generally, "I give y credence to the belief that the tea is x cold".
What comes out is a probability distribution, that is, the assignment of a probability value to every value of a parameter (in this case, the coldness of the tea). Notice that this would be impossible in the frequentist interpretation.
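A minimal sketch of such a distribution in Python (the discretization and the shape of the curve are invented purely for illustration):

```python
import numpy as np

# Discretize the parameter: "coldness of the tea" on a 0-1 scale.
coldness = np.linspace(0.0, 1.0, 101)

# Assign a credence y to every parameter value x: here an unnormalized
# bump peaked at 0.7, normalized so the credences sum to 1.
credence = np.exp(-((coldness - 0.7) ** 2) / 0.02)
credence /= credence.sum()

# "I give y credence to the belief that the tea is x cold":
x = 0.7
y = credence[np.argmin(np.abs(coldness - x))]
print(f"credence that the tea is {x} cold: {y:.3f}")
```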

Replies from: CyrilDan
comment by CyrilDan · 2013-11-15T19:13:20.605Z · LW(p) · GW(p)

Unfortunately no, but from your description it sounds quite like the theory of mind of General Semantics.

I think it's similar, but Lakoff focuses more on how things are abstracted away. For example, because in childhood affection is usually associated with warmth (e.g. through hugs), the different areas of your brain that code for those things become linked ("neurons that wire together, fire together"). This then becomes the basis of a cognitive metaphor, Affection Is Warmth, such that we can also say "She has a warm smile" or "He gave me the cold shoulder" even though we're not talking literally about body temperature.

Similarly, in Where Mathematics Comes From: How The Embodied Mind Brings Mathematics Into Being, he summarises his chapter "Boole's Metaphor: Classes and Symbolic Logic" thusly:

  • There is evidence ... that Container schemas are grounded in the sensory-motor system of the brain, and that they have inferential structures like those just discussed. These include Container schema versions of the four inferential laws of classical logic.
  • We know ... that conceptual metaphors are cognitive cross-domain mappings that preserve inferential structure.
  • ... [W]e know that there is a Classes are Containers metaphor. This grounds our understanding of classes, by mapping the inferential structure of embodied Container schemas to classes as we understand them.
  • Boole's metaphor and the Propositional Logic metaphor have been carefully crafted by mathematicians to mathematicize classes and map them onto propositional structures.
  • The symbolic-logic mapping was also crafted by mathematicians, so that propositional logic could be made into a symbolic calculus governed by "blind" rules of symbol manipulation.
  • Thus, our understanding of symbolic logic traces back via metaphorical and symbolic mappings to the inferential structure of embodied Container schemas.

That's what I was getting at above, but I'm not sure I explained it very well. I'm less eloquent than Mr. Lakoff is, I think.

Yes. Since a group of maps can be seen as a set of things in itself, it can be treated as a valid territory. In logic there are also map/territory loops, where the formulas themselves become the territory mapped by the same formulas (akin to talking in English about the English language). This trick is used, for example, in Goedel's and Tarski's theorems.

Hmm interesting. I should become more familiar with those.

Yes. Basically the Bayesian definition is more inclusive: e.g. there is no definition of the probability of a single coin toss in the frequency interpretation, but there is in the Bayesian one. Also, in the Bayesian take on probability, the frequentist definition emerges as a natural by-product. Plus, the Bayesian framework produced a lot of detangling in frequentist statistics and introduced more powerful methods.

Oh right for sure, another historical example would be "What's the probability of a nuclear reactor melting down?" before any nuclear reactors had melted down. But I mean, even if the Bayesian definition covers more than the frequentist definition (which it definitely does), why not just use both definitions and understand that one application is a subset of the other application?

The first two chapters of Jaynes' book, a pre-print version of which is available online for free, do a great job in explaining and using Cox to derive Bayesian probability. I urge you to read them to fully grasp this point of view.

Right, I think I found the whole thing online, actually. And the first chapter I understood pretty much without difficulty, but the second chapter gave me brainhurt, so I put it down for a while. I think it might be that I never took calculus in school? (something I now regret, oddly enough for the general population) So I'm trying to become stronger before I go back to it. Do you think that getting acquainted with Cox's Theorem in general would make Jaynes's particular presentation of it easier to digest?

Yes...

Yes...

Yes...

Hooray, I understand some things!

Replies from: MrMind
comment by MrMind · 2013-11-18T10:36:37.991Z · LW(p) · GW(p)

But I mean, even if the Bayesian definition covers more than the frequentist definition (which it definitely does), why not just use both definitions and understand that one application is a subset of the other application?

You'll have to ask a frequentist :)
Bayesians use both definitions (even though they call long-run frequency... well, long-run frequency), but frequentists refuse to acknowledge the Bayesian probability definition and methods.

but the second chapter gave me brainhurt, so I put it down for a while. I think it might be that I never took calculus in school? (something I now regret, oddly enough for the general population) So I'm trying to become stronger before I go back to it. Do you think that getting acquainted with Cox's Theorem in general would make Jaynes's particular presentation of it easier to digest?

I skipped the whole derivation too; it was not interesting. What is important is at the end of the chapter: developing Cox's requirements leads to the product and negation rules, and that's all you need.

comment by Watercressed · 2013-11-13T23:11:55.794Z · LW(p) · GW(p)

*I keep seeing probability referred to as an estimation of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn't necessarily follow. Does the above explanation differ from how other people use probability?

One can ground probability in Cox's Theorem, which uniquely derives probability from a few things we would like our reasoning system to do.

comment by lmm · 2013-11-13T21:43:14.798Z · LW(p) · GW(p)

*I keep seeing probability referred to as an estimation of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn't necessarily follow. Does the above explanation differ from how other people use probability?

I believe you've defined an equivalent if unusual form (or rather, your definition can be extended to an equivalent form). You need the notion of the measure of a set (because naively it can be confusing whether one infinite set is bigger than another), and measure is basically equivalent to probability; "how certain you are of a belief" is equivalent to "the measure of the worlds in which this belief is true, relative to that of the worlds that you might be in now".

*Also, if probability is defined as an arbitrary estimation of how sure you are, why should those estimations follow the laws of probability? I've heard the Dutch book argument, so I get why there might be practical reasons for obeying them, but unless you accept a pragmatist epistemology, that doesn't provide reasons why your beliefs are more likely to be true if you follow them. (I've also heard of Cox's rules, but I haven't been able to find a copy. And if I understand right, they say that Bayes' theorem follows from Boolean logic, which is similar to what I've said above, yes?)

The only laws of probability measure I know are that the measure of the whole set is 1, and the measure of a union of disjoint subsets is the sum of their measures. I'm finding it hard to imagine how I could hold beliefs that wouldn't conform to them. I mean, I guess it's conceivable that I could believe that A has probability 0.1, and B has probability 0.1, and A OR B has probability 0.3, but that just seems crazy. I guess what convinces me is dutch-booking myself; isn't a Dutch book argument precisely an argument that another set of probabilities would be more likely to be true?
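To spell it out, here is the Dutch book against exactly those numbers (a toy calculation of my own, assuming A and B can't both happen). Price each bet at your stated probability and let a bet pay 1 if its event occurs; the bookie buys your bets on A and B and sells you the bet on A OR B:

```python
# Your prices: P(A) = 0.1, P(B) = 0.1, P(A or B) = 0.3.
for outcome in ("A", "B", "neither"):
    a, b = outcome == "A", outcome == "B"
    # Cash flow: +0.1 for each bet sold, -0.3 for the bet bought,
    # -1 for each sold bet that pays out, +1 if the bought bet pays out.
    net = 0.1 + 0.1 - 0.3 - (1 if a else 0) - (1 if b else 0) + (1 if (a or b) else 0)
    print(outcome, round(net, 2))  # -0.1 in every case: a guaranteed loss
```

Whatever happens, you lose 0.1; only additive prices close that loophole.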

*Another question: above I used propositional logic, which is okay, but it's not exactly the creme de la creme of logics. I understand that fuzzy logics work better for a lot of things, and I'm familiar with predicate logics as well, but I'm not sure what the interaction of any of them is with probability or the use of it, although I know that technically probability doesn't have to be binary (sets just need to be exhaustive and mutually exclusive for the Kolmogorov axioms to work, right?). I don't know, maybe it's just something that I haven't learned yet, but the answer really is out there?

I'm not aware of any flaws with propositional logic. If you reach a problem you can't solve with it then by all means extend to something fancier.

Those are the only questions that are coming to mind right now (if I think of any more, I can probably ask them in comments). So anyone? Am I doing something wrong? Or do I feel more confused than I really am?

I think you're trying to be too formal too fast (or else your title isn't what you're really interested in). Try getting a solid practical handle on Bayes in finite contexts before worrying about extending it to infinite possibilities and the real world.

Replies from: CyrilDan
comment by CyrilDan · 2013-11-15T07:59:20.967Z · LW(p) · GW(p)

I believe you've defined an equivalent if unusual form (or rather, your definition can be extended to an equivalent form).

Yeah, that's what MrMind said too. Thanks!

The only laws of probability measure I know are that the measure of the whole set is 1, and the measure of a union of disjoint subsets is the sum of their measures. I'm finding it hard to imagine how I could hold beliefs that wouldn't conform to them. I mean, I guess it's conceivable that I could believe that A has probability 0.1, and B has probability 0.1, and A OR B has probability 0.3, but that just seems crazy.

Yeah, and I fully grasp the "measure of the whole set is 1" thing. (After all, if you're 100% certain something is true, then that's the only thing you think is possible.) The additivity axiom is harder for me to grasp, though. It seems like it should be true intuitively, but teaching myself the formal form has been more difficult. Thinking and Deciding tries to derive it from having different bets depending on how things are worded (for example, a bet on whether a coin comes up heads or tails versus a bet on whether the sun is up and the coin comes up heads or tails), which I grasp intellectually, but I'm having a hard time grokking it intuitively.

I think you're trying to be too formal too fast (or else your title isn't what you're really interested in). Try getting a solid practical handle on Bayes in finite contexts before worrying about extending it to infinite possibilities and the real world.

I do have a subjective feeling of success when I use Bayes (or Bayes-derived heuristics, more commonly) in my everyday life, but I really want to be sure I understand the nitty-gritty of it. Even if most of my use of it is just in justifying heuristics, I still want to be sure that I can formulate and apply them properly, you know?

comment by [deleted] · 2013-11-14T00:37:51.726Z · LW(p) · GW(p)

You seem to have a high confidence that you have a moderate understanding. If that makes sense to you, you are correct. You are X confident you understand with Y detail and accuracy.

comment by passive_fist · 2013-11-13T23:33:41.726Z · LW(p) · GW(p)

*I keep seeing probability referred to as an estimation of how certain you are in a belief. And while I guess it could be argued that you should be certain of a belief relative to the number of possible worlds left or whatever, that doesn't necessarily follow. Does the above explanation differ from how other people use probability?

Probability is never used as a degree of belief. Likelihood is used as a degree of belief. See this thread: http://stats.stackexchange.com/questions/2641/what-is-the-difference-between-likelihood-and-probability

Once you understand likelihood it's easy to see why it represents degree of belief.

Replies from: Richard_Kennaway, Dan_Weinand
comment by Richard_Kennaway · 2013-11-14T13:44:53.459Z · LW(p) · GW(p)

Probability is never used as a degree of belief.

Probability is always used as a degree of belief by Bayesians (and we all seem to be Bayesians here). Frequentists, by contrast, take probabilities to be long-run relative frequencies. There are other schools.

comment by Dan_Weinand · 2013-11-14T03:11:27.432Z · LW(p) · GW(p)

In light of the downvotes, I just wanted to explain that probability is frequently used to refer to a degree of belief by LessWrong folks. You're absolutely right that statistical literature will always use "probability" to denote the true frequency of an outcome in the world, but the community finds it a convenient shorthand to allow "probability" to mean a degree of belief.

Replies from: passive_fist
comment by passive_fist · 2013-11-14T03:24:34.592Z · LW(p) · GW(p)

I haven't seen this shorthand explained anywhere here.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-14T03:38:11.293Z · LW(p) · GW(p)

This would be the explanation http://lesswrong.com/lw/oj/probability_is_in_the_mind/ It really should be talked about more explicitly elsewhere though.

Replies from: passive_fist
comment by passive_fist · 2013-11-14T04:26:58.827Z · LW(p) · GW(p)

I must have missed that thread, thanks. Though I can't see why I'm wrong. It has nothing to do with frequentism vs. bayesianism (I'm a bayesian). It's simply that likelihood is relative to a model, whereas probability is not relative to anything (or, alternatively, is relative to everything), as they're saying in that thread. Through this interpretation it's easy to see why likelihood represents a degree of belief.

Replies from: solipsist, Richard_Kennaway, Dan_Weinand
comment by solipsist · 2013-11-14T15:34:07.795Z · LW(p) · GW(p)

It's simply that likelihood is relative to a model, whereas probability is not relative to anything

Likelihood is the probability of the data given the model, not the probability of the model given the data. A likelihood function gives you a number between 0 and 1 for every model, but that number does not mean anything like "how certain is it that this model is true".
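In symbols, the standard definition is

$$\mathcal{L}(\theta \mid x) = P(x \mid \theta),$$

read as a function of the model $\theta$ with the data $x$ held fixed. Summed or integrated over $\theta$ it need not equal 1, which is why it is not a probability distribution over models.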

comment by Richard_Kennaway · 2013-11-14T13:52:17.787Z · LW(p) · GW(p)

Probability (for a Bayesian) is relative to a prior. There is always a prior: P(A|B) is the fundamental concept, not P(A). See, for example, Jaynes, chapter 1, pp.112ff., which is the point where he begins to construct a calculus for reasoning about "plausibilities", and eventually, in chapter 2, derives their measurement by numbers in the range 0-1.

Replies from: passive_fist
comment by passive_fist · 2013-11-14T21:53:22.891Z · LW(p) · GW(p)

This is true, and I can see why it could create some conflict in interpreting this question. Thanks.

comment by Dan_Weinand · 2013-11-14T08:22:29.009Z · LW(p) · GW(p)

It's a quirk of the community, not an actual mistake on your part. LessWrong defines probability one way; the statistics community defines it another. I would recommend lobbying the larger community toward a use of the words consistent with the statistical definitions, but shrug...

Replies from: CyrilDan
comment by CyrilDan · 2013-11-15T07:11:43.143Z · LW(p) · GW(p)

Okay, that clears it up a lot.