0 And 1 Are Not Probabilities
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-10T06:58:50.000Z · LW · GW · Legacy · 149 comments
One, two, and three are all integers, and so is negative four. If you keep counting up, or keep counting down, you’re bound to encounter a whole lot more integers. You will not, however, encounter anything called “positive infinity” or “negative infinity,” so these are not integers.
Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, “5 + infinity = infinity,” because if you start at 5 and keep counting up without ever stopping, you’ll get higher and higher numbers without limit. But it doesn’t follow from this that “infinity - infinity = 5.” You can’t count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you’re done.
From this we can see that infinity is not only not-an-integer, it doesn’t even behave like an integer. If you unwisely try to mix up infinities with integers, you’ll need all sorts of special new inconsistent-seeming behaviors which you don’t need for 1, 2, 3 and other actual integers.
Even though infinity isn’t an integer, you don’t have to worry about being left at a loss for numbers. Although people have seen five sheep, millions of grains of sand, and septillions of atoms, no one has ever counted an infinity of anything. The same with continuous quantities—people have measured dust specks a millimeter across, animals a meter across, cities kilometers across, and galaxies thousands of lightyears across, but no one has ever measured anything an infinity across. In the real world, you don’t need a whole lot of infinity.1
In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.
This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P/(1 - P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O∕(1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.
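In code, the two transformations look like this (a minimal Python sketch; the function names are just for illustration):

```python
def prob_to_odds(p):
    """Map a probability in (0, 1) to an odds ratio: O = P / (1 - P)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Map an odds ratio back to a probability: P = O / (1 + O)."""
    return o / (1 + o)

print(prob_to_odds(0.5))                 # 1:1 odds
print(prob_to_odds(0.9))                 # ~9:1 odds
print(odds_to_prob(prob_to_odds(0.9)))   # round-trips back to ~0.9
```

Since each function inverts the other, nothing is lost going back and forth; that is the isomorphism.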
For example, it’s more convenient to use odds when you’re doing Bayesian updates. Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4/1.4) = 2/7 = ~29%.
So odds are more manageable for Bayesian updates—if you use probabilities, you’ve got to deploy Bayes’s Theorem in its complicated version. But probabilities are more convenient for answering questions like “If I roll a six-sided die, what’s the chance of seeing a number from 1 to 4?” You can add up the probabilities of 1/6 for each side and get 4/6, but you can’t add up the odds ratios of 0.2 for each side and get an odds ratio of 0.8.
Why am I saying all this? To show that “odds ratios” are just as legitimate a way of mapping uncertainties onto real numbers as “probabilities.” Odds ratios are more convenient for some operations, probabilities are more convenient for others. A famous proof called Cox’s Theorem (plus various extensions and refinements thereof) shows that all ways of representing uncertainties that obey some reasonable-sounding constraints end up isomorphic to each other.
Why does it matter that odds ratios are just as legitimate as probabilities? Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities—it’s easy to see 1 zebra or 0 unicorns. But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity. Now absolute truth doesn’t look like it should be so easy to reach.
A representation that makes it even simpler to do Bayesian updates is the log odds—this is how E. T. Jaynes recommended thinking about probabilities. For example, let’s say that the prior probability of a proposition is 0.0001—this corresponds to a log odds of around -40 decibels. Then you see evidence that seems 100 times more likely if the proposition is true than if it is false. This is 20 decibels of evidence. So the posterior odds are around -40 dB + 20 dB = -20 dB, that is, the posterior probability is ~0.01.
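The decibel arithmetic is easy to verify—a quick sketch (the function names are illustrative):

```python
import math

def prob_to_decibels(p):
    """Log odds in decibels: 10 * log10(P / (1 - P))."""
    return 10 * math.log10(p / (1 - p))

def decibels_to_prob(db):
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_db = prob_to_decibels(0.0001)    # about -40 dB
evidence_db = 10 * math.log10(100)     # a 100:1 likelihood ratio is 20 dB
posterior = decibels_to_prob(prior_db + evidence_db)
print(round(posterior, 4))             # about 0.0099, i.e. ~0.01
```

In log odds, updating is just addition: decibels of evidence add to decibels of prior belief.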
When you transform probabilities to log odds, 0 goes to negative infinity and 1 goes to positive infinity. Now both infinite certainty and infinite improbability seem a bit more out-of-reach.
In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999. To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.
But when you transform to odds ratios, 0.502 and 0.503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999. And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.
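A few lines of Python reproduce these numbers:

```python
import math

def odds(p):
    return p / (1 - p)

def log_odds_db(p):
    return 10 * math.log10(odds(p))

for p in (0.502, 0.503, 0.9999, 0.99999):
    print(p, round(odds(p), 3), round(log_odds_db(p), 2))
# 0.502 and 0.503 land about 0.02 dB apart;
# 0.9999 and 0.99999 land a full 10 dB apart
```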
When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.
Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.
Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.
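The simplest such breakdown: the denominator of Bayes’s Theorem is P(E), so an observation you assigned probability 0 makes the update literally undefined. A hypothetical sketch:

```python
def bayes_posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes's Theorem -- breaks down if the evidence had probability 0."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

print(bayes_posterior(0.5, 0.2, 0.1))   # an ordinary update: ~0.667
# bayes_posterior(0.5, 0.0, 0.0) raises ZeroDivisionError --
# there is no defined posterior after observing an event you called "impossible"
```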
So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.
The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.
However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.
If you made a magical symbol to stand for “all possibilities I haven’t considered,” then you could marginalize over the events including this magical symbol, and arrive at a magical symbol “T” that stands for infinite certainty.
But I would rather ask whether there’s some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in the law of the excluded middle or infinite sets, I would like to be a probability theorist who doesn’t believe in absolute certainty.
1I should note for the more sophisticated reader that they do not need to write me with elaborate explanations of, say, the difference between ordinal numbers and cardinal numbers. I’m familiar with the different set-theoretic notions of infinity, but I don’t see a good use for them in probability theory.
149 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Paul_Gowder · 2008-01-10T07:39:57.000Z · LW(p) · GW(p)
hmm... I feel even more confident about the existence of probability-zero statements than I feel about the existence of probability-1 statements. Because not only do we have logical contradictions, but we also have incoherent statements (like Husserl's "the green is either").
Can one form subjective probabilities over the truth of "the green is either" at all? I don't think so, but I remember a some-months-ago suggestion of Robin's about "impossible possible worlds," which might also imply the ability to form probability estimates over incoherencies. (Why not incoherent worlds? One might ask.) So the idea is at least potentially on the table.
And then it seems obvious that we will forever, across all space and time, have no evidence to support an incoherent proposition. That's as good an approximation of infinite lack of evidence as I can come up with. P("the green is either")=0?
comment by Rob Bensinger (RobbBB) · 2012-11-21T18:58:52.331Z · LW(p) · GW(p)
If you assign 0 to logical contradictions, you should assign 1 to the negations of logical contradictions. (Particularly since your confidence in bivalence and the power of negation is what allowed you to doubt the truth of the contradiction in the first place.) So it's strange to say that you feel safer appealing to 0s than to 1s.
For my part, I have a hard time convincing myself that there's simply no (epistemic) chance that Graham Priest is right. On the other hand, assigning any value but 1 to the sentence "All bachelors are bachelors" just seems perverse. It seems as though I could only get that sentence wrong if I misunderstand it. But what am I assigning a probability to, if not the truth of the sentence as I understand it?
Another way of saying this is that I feel queasy assigning a nonzero probability to "Not all bachelors are bachelors," (i.e., ¬(p → p)) even though I think it probably makes some sense to entertain as a vanishingly small possibility "All bachelors are non-bachelors" (i.e., p → ¬p, all bachelors are contradictory objects).
comment by Unknown · 2008-01-10T07:45:05.000Z · LW(p) · GW(p)
One answer would be that an incoherent proposition is not a proposition, and so doesn't have any probability (not even zero, if zero is a probability.)
Another answer would be that there is some probability that you are wrong that the proposition is incoherent (you might be forgetting your knowledge of English), and therefore also some probability that "the green is either" is both coherent and true.
comment by j.edwards · 2008-01-10T07:49:18.000Z · LW(p) · GW(p)
It's difficult to assign probability to incoherent statements, because since we can't mean anything by them, we can't assert a referent to the statement -- in that sense, the probability is indeterminate (additionally, one could easily imagine a language in which a statement such as "the green is either" has a perfectly coherent meaning -- and we can't say that's not what we meant, since we didn't mean anything). Recall also that each probability zero statement implies a probability one statement by its denial and vice versa, so one is equally capable of imagining them, if in a contrived way.
comment by Baruta07 · 2012-10-31T16:51:00.302Z · LW(p) · GW(p)
Putting this in a slightly more coherent way. (I was having some trouble understanding the explanation, so I broke it down into layman's terms, might make it more easily understandable)
If I assign P(0) to "Green is either" Then I assign P(1) to the statement "Green is not either"
If you assign absolute certainty to any one statement you are, by definition assigning absolute impossibility to all other possibilities.
comment by Paul_Gowder · 2008-01-10T08:20:44.000Z · LW(p) · GW(p)
j.edwards, I think your last sentence convinced me to withdraw the objection -- I can't very well assign a probability of 1 to ~"the green is either" can I? Good point, thanks.
comment by brent · 2008-01-10T08:38:56.000Z · LW(p) · GW(p)
that anecdote wasn't amusing at all.
and it wasn't an anecdote.
and it doesn't prove the point. all it shows is that a single person didn't know their 17 times tables off the top of their head. there's no reason to expect someone to be as confident about whether 51 is prime as about whether 7 is prime - and anyway, the point of the story should have been that, eventually, 7 might NOT be prime. which it's always going to be.
i didn't get it.
comment by Paul_Crowley2 · 2008-01-10T09:19:22.000Z · LW(p) · GW(p)
Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life.
However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".
comment by Ben_Jones · 2008-01-10T10:15:53.000Z · LW(p) · GW(p)
In stark contrast to this time last week, I now internally believe the title of this post.
I did enjoy "something, somewhere, is having this thought," Paul, despite all its inherent messiness.
'Green is either' doesn't tell us much. As far as we know it's a nonsensical statement, but I think that makes it more believable than 'green is purple', which makes sense, but seems extremely wrong. You might as well try to assign a probability to 'flarg is nardle'. I can demonstrate that green isn't purple, but not that green isn't either, nor that flarg isn't nardle.
Is there anything truer than '7 is prime'? What's the truest statement anyone can come up with? Can we definitely get no closer to 0 than 1, based on J Edwards & Paul, above?
comment by randomwalker · 2008-01-10T13:26:02.000Z · LW(p) · GW(p)
I think you can still have probabilities sum to 1: probability 1 would be the theoretical limit of probability reaching infinite certitude. Just like you can integrate over the entire real line, i.e. −∞ to ∞, even though those numbers don't actually exist.
comment by Caledonian2 · 2008-01-10T13:36:53.000Z · LW(p) · GW(p)
i didn't get it.
Easy: it's a demonstration of how you can never be certain that you haven't made an error even on the things you're really sure about.
It's a cheap, dirty demonstration, but one nevertheless.
comment by Ben_Jones · 2008-01-10T16:09:50.000Z · LW(p) · GW(p)
Cumulant - can you state, with infinite certainty, that no-one will ever run faster than light?
comment by Dojan · 2011-10-25T10:52:21.476Z · LW(p) · GW(p)
By the current model it is impossible for anything to move faster than light*, but what is your confidence in the current model? Certainly high, but not infinite. Let's not mix up the map and the territory. As for running faster than light: certainly unlikely, but not infinitely so. If you define something as impossible in some model, and given that you want a probability within that model, or given that model, I don't know what happens however...
*With certain complications.
[Edit: Formating]
comment by Dan_Burfoot · 2008-01-10T16:11:07.000Z · LW(p) · GW(p)
Another way to think about probabilities of 0 and 1 is in terms of code length.
Shannon told us that if we know the probability distribution of a stream of symbols, then the optimal code length for a symbol X is: l(X) = -log p(X)
If you consider that an event has zero probability, then there's no point in assigning a code to it (codespace is a conserved quantity, so if you want to get short codes you can't waste space on events that never happen). But if you think the event has zero probability, and then it happens, you've got a problem - system crash or something.
Likewise, if you think an event has probability of one, there's no point in sending ANY bits. The receiver will also know that the event is certain, so he can just insert the symbol into the stream without being told anything (this could happen in a symbol stream where three As are always followed by a fourth). But again, if you think the event is certain and then it turns out not to be, you've got a problem: the receiver doesn't get the code you want to send.
If you refuse to assign zero or unity probabilities to events, then you have a strong guarantee that you will always be able to encode the symbols that actually appear. You might not get good code lengths, but you'll be able to send your message. So Eliezer's stance can be interpreted as an insistence on making sure there is a code for every symbol sequence, regardless of whether that sequence appears to be impossible.
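The correspondence Dan describes can be sketched numerically (the helper name is made up for illustration):

```python
import math

def code_length_bits(p):
    """Shannon-optimal code length for a symbol of probability p: -log2(p)."""
    return -math.log2(p)

print(code_length_bits(0.5))     # 1.0 bit
print(code_length_bits(0.001))   # ~10 bits: rare symbols get long codes
print(code_length_bits(1.0))     # zero bits: a "certain" symbol needs no code at all
# code_length_bits(0.0) raises a math domain error --
# an "impossible" symbol has no finite code, so its arrival crashes the decoder
```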
comment by pnrjulius · 2012-05-27T04:17:17.717Z · LW(p) · GW(p)
But then, do you really want to build a binary transmitter that is prepared to handle not only sequences of 0 and 1, but also the occasional "zebrafish" and "Thursday" (imagine somehow fitting these into an electrical signal, or don't, because the whole point is that it can't be done)? Such a transmitter has enormously increased complexity to handle signals that, well... won't ever happen. I guess you could say the probability is low enough that the expected utility of dealing with it is not worth it. But what about the chance that a "zebrafish" in the launch codes will wipe out humanity? Surely that expected utility cannot be ignored? (Except it can!)
comment by Viktor Riabtsev (viktor-riabtsev-1) · 2018-10-24T13:52:27.335Z · LW(p) · GW(p)
Umm, it's a real thing: ECC memory (https://en.m.wikipedia.org/wiki/ECC_memory). I'm sure it isn't 100% foolproof (coincidentally the point of this article), but I imagine it reduces error probability by orders of magnitude.
comment by Q · 2008-01-10T16:43:39.000Z · LW(p) · GW(p)
Brent,
From what I understood on reading the Wikipedia article on Bayesian probability and inferring from how he writes (and correct me if I'm wrong), Eliezer is talking about your "subjective probability." You are a being, have consciousness, and interpret input as information. Given a lot of this information, you've formed an idea that 7 is prime. You've also formed an idea that other people exist, and that the sky is blue, which also have a high subjective probability in your mind because you have a lot of direct information to sustain that belief.
Moreover, if you've ever been wrong before, hopefully you've noticed that you have been wrong before. That's a little information that "you are sometimes wrong about things that you are very sure of". So, you might apply this information to your formula of your probability of the idea that "7 is prime", so you still end up with a high probability, but not 1.
Now, you might not think that "you are sometimes wrong about things that you are sure of" about every single subject, such as primeness. But, what if you had the information that other humans, smart people, have at some point in the past, incorrectly understood the primeness of a number (the anecdote). You might state, that "human beings are sometimes wrong about the primeness of a number," and "I am a human being." Again, if you include that information in your calculation of the probability that the idea that "7 is prime" is true, then you end up with a high probability, but not 1.
(Oh, but what if you didn't make the statement "human beings are sometimes wrong about the primeness of a number", but instead, "this idiot is sometimes wrong about the primeness of a number, but I am never" Well, you can. That's one big problem with Bayesian subjective probabilities. How do we generalize? How can we formalize it so that two people with the same information deterministically get the same probability? Logical (or objective epistemic) probability attempts to answer these questions.)
So, you're right that it is just "a single person" getting it wrong, that his certainty was incorrect. But that's Eliezer's point. We are not supreme beings lording over all reality, we are humans who have memorized some information from the past and made some generalizations, including generalizations that sometimes our generalizations are wrong.
comment by Janos2 · 2008-01-10T16:58:57.000Z · LW(p) · GW(p)
I agree with cumulant. The mathematical subject of probability is based on measure theory, which loses a ton of convergence theorems if we exclude 0 and 1. We can agree that things that are not known a priori can't have probability 0 or 1, but I think we must also agree that "an impossible thing will happen soon" has probability 0, because it's a contradiction. An alternate universe in which the number 7 (in the same kind of number system as ours, etc.) is prime is damn-near inconceivable, but an alternate universe in which impossible things are possible is purely absurd.
If our mathematical reasoning is coherent enough for it to be meaningful to make probability assignments then certainly we are not so fundamentally flawed that what we consider tautologies could be false. If you are willing to accept that maybe 0 is 1, then you can't do any of your probability adjustments, or use Bayes' Theorem, or anything of the sort without having a (possibly unstated) caveat that probability theory might be complete nonsense. But what's the probability that probability theory is nonsense (i.e. false or inconsistent)? What does that even mean? We can only assign a probability if that makes sense, so conditioned on the sentence making sense, probability theory must be nonsense with probability 0, no? So averaged over all possible universes (those where probability theory makes sense, and those where it doesn't) the sentence "probability makes sense with probability 1" better approximates the truth value of probability making sense than "probability makes sense with probability p" for p < 1. If it's not, it's still not worse, but what the hell are we even saying?
comment by Utilitarian2 · 2008-01-10T17:40:44.000Z · LW(p) · GW(p)
Speaking of measure theory, what probability should we assign to a uniformly distributed random real number on the interval [0, 1] being rational? Something bigger than 0? Maybe in practice we would never hold a uniform distribution over [0, 1] but would assign greater probability to "special" numbers (like, say, 1/2). But regardless of our probability distribution, there will exist subsets of [0, 1] to which we must assign probability 0.
The only way I can see around this is to refuse to talk about infinite (or at least uncountable) sets. Are there others?
comment by Janos2 · 2008-01-10T18:49:09.000Z · LW(p) · GW(p)
I suspect Eliezer would object to my post claiming that I'm confusing map and territory, but I don't think that's fair. If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. So is it really reasonable to claim that "the probability that probability makes sense is <1"?
Utilitarian:
Measure theory gives a clear answer to this: it's 0. Which is fine. For all x, the probability that your rv will take the value x is 0. Actually the probability that your rv is computable is also 0. (Computable numbers are the largest countable class I know of.) What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0. There's another tempting statement that's false, namely the statement that if S is an arbitrary collection of disjoint events, the probability of one of them happening is the sum of the probabilities of each one happening. Instead, this only holds for countable sets S. This is part of the definition of a measure.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-10T19:16:49.000Z · LW(p) · GW(p)
If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway.
Janos, are you saying that it is in fact impossible that your map in fact doesn't make any sense? Because I do, indeed, have a little section of my map labelled "maybe this map doesn't make any sense at all", and every now and then, I think about it a little, because there are so many fundamental premises of which I am unsure even in their definitions. (E.g: "the universe exists", and "but why?") Just because this area of my map drops out of my everyday decision theory due to failure to generate coherent advice on preferences, does not mean it is absent from my map. "You must ignore" or rather "You should usually ignore" is decision theory, and probability theory should usually be firewalled off from preferences.
Computable numbers are the largest countable class I know of.
Either all countable sets are the same size anyway, or you can generate a larger set by saying "all computable reals plus the halting probability". How about computable with various oracles?
What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0.
If you cannot repose probability 1 in the statement "all events to which I assign probability 0 are impossible" you should apply a correction and stop reposing probability 0 to those events. Do you mean to say that all impossible events have probability 0, plus some more possible events also have probability 0? This makes no sense, especially as a justification for using "probability 0" in a meaningfully calibrated sense.
To use "probability 0" without a finite expectation of being infinitely surprised, you must repose probability 1 in the belief that you use "probability 0" only for actually impossible events; but not necessarily believe that you assign probability 0 to every impossible event (satisfying both conditions implies logical omniscience).
I should mention that I'm also an infinite set atheist.
comment by Nick_Tarleton · 2008-01-10T19:19:04.000Z · LW(p) · GW(p)
I can admit the possibility that probability doesn't work, but not have to do anything about it. If probability doesn't work and I can't make rational decisions, I can expect to be equally screwed no matter what I do, so it cancels out of the equation.
The definable real numbers are a countable superset of the computable ones, I think. (I haven't studied this formally or extensively.)
comment by Neel_Krishnaswami · 2008-01-10T19:34:58.000Z · LW(p) · GW(p)
If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intutionistic variant of minimal logic. (Cointuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with false, or, and (but not truth), and an operation called co-implication, which I will write a <-- b.
Take your event space L to be a distributive lattice (with ordering <), which does not necessarily have a top element, but does have dual relative pseudo-complements. The co-implication a <-- b is characterized by the condition:
for all x in L, b < (a or x) if and only if (a <-- b) < x
Now, we take a probability function to be a function from elements of L to the reals, satisfying the following axioms:
- P(false) = 0
- if A < B then P(A) <= P(B)
- P(A or B) + P(A and B) = P(A) + P(B)
There you go. Probability theory without certainty.
This is not terribly satisfying, though, since Bayes's theorem stops working. It fails because conditional probabilities stop working -- they arise from a forced normalization that occurs when you try to construct a lattice homomorphism between an event space and a conditionalized event space.
That is, in ordinary probability theory (where L is a Boolean algebra, and P(true) = 1), you can define a conditionalization space L|A as follows:
L|A = { X in L | X < A }
true' = A
false' = false
and' = and
or' = or
not'(X) = not(X) and A
P'(X) = P(X)/P(A)
with a lattice homomorphism X|A = X and A
Then, the probability of a conditionalized event P'(X|A) = P(X and A)/P(A), which is just what we're used to. Note that the definition of P' is forced by the fact that L|A must be a probability space. In the non-certain variant, there's no unique definition of P', so conditional probabilities are not well-defined.
To regain something like this for cointuitionistic logic, we can switch to tracking degrees of disbelief, rather than degrees of belief. Say that:
- D(false) = 1
- for all A, D(A) > 0
- if A < B then D(A) >= D(B)
- D(A or B) + D(A and B) = D(A) + D(B)
This will give you the bounds you need to nail down a conditional disbelief function. I'll leave that as an exercise for the reader.
comment by anonymous21 · 2008-01-10T19:44:20.000Z · LW(p) · GW(p)
Hi guys you don't know me and I prefer to stay anonymous. I look at it backwards and get the very same result as Eliezer Y. What is total degeneracy? In practice, it is being total impervious to updating, regardless of the magnitude of the information seen (even infinity). That can only be achieved by unitary of nul probabilities as priors. Bayesian updating never takes you there (posteriors). And no updating can take place from that situation. Anonymous
comment by Psy-Kosh · 2008-01-10T19:53:48.000Z · LW(p) · GW(p)
"1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers."
This bothered me; more to the point, it hit on some stuff I've been thinking about. I realize I don't have a very good way to precisely state what I mean by "finite" or "eventually".
The above, for instance, basically says "if infinity is not an integer, then if I start at an integer and move an integer number of steps away from it, I will still be at an integer that's not infinity, therefore infinity isn't an integer"
But if we allowed infinity to be considered an integer, then we allow an infinite number of steps...
How about this: if N is a non infinite integer, SN is N's successor, PN is N's predecessor, neither SN nor PN will be infinite. Great, no matter where we start from, we can't reach an infinity in one step, so that seems to make this notion more solid.
but... if N is an infinity, then neither SN nor PN (thinking about ordinals now, btw, instead of cardinals) will be finite. Doh.
So the situation seems a bit symmetric here. This is really annoying to me.
I have as of late been getting the notion that the notions of "finite" and "eventually" are so tied to the idea of mathematical induction that it's probably best to define the former in terms of the latter... ie, the number of steps from A to B is finite if and only if induction arguments starting from A and going in the direction toward B actually validly prove the relevant proposition for B.
This is a vague notion, but near as I can tell, it comes closest to what I actually think I mean when I say something like "finite" or "eventually reach in a finite number of steps" or something like that.
ie, finite values are exactly those critters which mathematical induction arguments can be used on. (maybe this is a bad definition. I'm more stating it as a "here's my suspicion of what may be the best basis to really represent the concept")
Anyways, as far as 0,1 not being probabilities... While I agree that one shouldn't believe a proposition with probability 0 or 1, I'm not sure I'd consider them nonprobabilities. Perhaps "unreachable" probabilities instead. Disallowing stuff like sum-to-1 normalizations and so on would seem to require "unnatural" hoops to jump through to get around that.
Unless, of course, someone has come up with a clean model without that. (If so, well, I'm curious too.)
comment by Janos2 · 2008-01-10T19:54:23.000Z · LW(p) · GW(p)
Eliezer:
I'm not sure what an "infinite set atheist" is, but it seems from your post that you use different notions of probability than what I think of as standard modern measure theory, which surprises me. Utilitarian's example of a uniform r.v. on [0, 1] is perfect: it must take some value in [0, 1], but for all x it takes value x with probability 0. Clearly you can't say that for all x it's impossible for the r.v. to take value x, because it must in fact take one of those values. But the probabilities are still 0. Pragmatically the way this comes out is that "probability 0" doesn't imply impossible. If you perform an experiment countably-infinitely many times with the probability of a certain outcome being 0 each time, the probability of ever getting that outcome is 0; in this sense you can say the outcome is almost impossible. However it's possible that each outcome individually is almost impossible, even though of course the experiment will have an outcome.
You can object that such experiments are physically impossible e.g. because you can only actually measure/observe countably many outcomes. That's fine; that just means you can get by with only discrete measures. But such assumptions about the real world are not known a priori; I like usual measure theory better, and it seems to do quite a good job of encompassing what I would want to mean by "probability", certainly including the discrete probability spaces in which "probability 0" can safely be interpreted to mean "impossible".
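Janos's uniform-on-[0, 1] example can be illustrated numerically (the simulation below is mine, not his): P(X ∈ [x, x + w]) = w for a uniform draw, so the probability of any single value is the w → 0 limit, i.e. exactly 0 -- and yet every draw produces *some* value that, a moment earlier, had probability 0.

```python
import random

# Empirical check: the probability of landing in an interval equals its
# width, so a single point (width 0) has probability 0 -- without being
# impossible, since each draw does land somewhere.
random.seed(0)
n = 100_000
draws = [random.random() for _ in range(n)]

freqs = {}
for w in (0.1, 0.01, 0.001):
    freqs[w] = sum(0.5 <= d <= 0.5 + w for d in draws) / n
    print(f"interval width {w}: empirical {freqs[w]:.4f}, exact {w}")

print("an observed value (which itself had probability 0):", draws[0])
```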
You're right, it's not that hard to come up with larger countable classes of reals than the computables; I just meant that all of the usual, "rolls-off-the-tip-of-your-tongue" classes seem to be subsets of the computables. But maybe Nick is right, and the definables are broader. I haven't studied this either.
And yes, I also sometimes think about how assumptions I make about life and the perceptible universe could be wrong, but I do not do this much for mathematics that I've studied deeply enough, because I'm almost as convinced of its "truth" as I am of my own ability to reason, and I don't see the use in reasoning about what to do if I can't reason. This is doubly true if the statements I'm contemplating are nonsense unless the math works.
comment by michael_vassar3 · 2008-01-10T20:21:17.000Z · LW(p) · GW(p)
Eliezer:
I am curious as to why you asked Peter not to repeat his stunt.
Also, I would really like to know how confident you are in your infinite set atheism and for that matter in your non-standard philosophy of mathematics attitudes in general.
comment by Doug_S. · 2008-01-10T22:22:55.000Z · LW(p) · GW(p)
Regarding infinite set atheism:
Is the set of "possible landing sites of a struck golf ball" finite or infinite?
In other words, can you finitely parameterize locations in space? Physicists normally model "position" as n-tuples of real numbers in a coordinate system; if they were forced to model position discretely, what would happen?
I can claim to see an infinite set each time I use a ruler...
comment by komponisto2 · 2008-01-10T23:14:12.000Z · LW(p) · GW(p)
Eliezer:
I should mention that I'm also an infinite set atheist.
You've mentioned this before, and I have always wondered: what does this mean? Does it mean that you don't believe there are any infinite sets? If so, then you have to believe that a mathematician who claims the contrary (and gives the standard proof) is making a mistake somewhere. What is it?
Frankly, even if you actually are a finitist (which I find hard to imagine), it doesn't seem relevant to this discussion: every argument you have presented could equally well have been given by someone who accepts standard mathematics, including the existence of infinite sets.
comment by tcpkac · 2008-01-10T23:17:17.000Z · LW(p) · GW(p)
The nature of 0 & 1 as limit cases seems to be fascinating for the theorists. However, in terms of 'Overcoming Bias', shouldn't we be looking at more mundane conceptions of probability ? EY's posts have drawn attention to the idea that the amount of information needed to add additional certainty on a proposition increases exponentially while the probability increases linearly. This says that in utilitarian terms, not many situations will warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear physics, rocket science or whatever). 99.9% as a number is taken out of a hat. In human terms, when we say 'I'm 99.9% sure that 2+2 always =4', we're not talking about 1000 equivalent statements. We're talking about one statement, with a spatial representation of what '100% sure' means with respect to that statement, and 0.1% of that spatial representation allowed for 'niggling doubts', of the sort : what have I forgotten ? What don't I know ? What is inconceivable for me ? The interesting question for 'overcoming bias' is : how do we make that tradeoff between seeking additional information on the one hand and accepting a limited degree of certainty on the other ? As an example (cf. the Evil Lords of the Matrix), considering whether our minds are being controlled by magic mushrooms from Alpha Pictoris may someday increase the 'niggling doubt' range from 0.1% to 5%, but the evidence would have to be shoved in our faces pretty hard first.
comment by Rolf_Nelson2 · 2008-01-11T00:45:10.000Z · LW(p) · GW(p)
Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.
Not in standard quantum mechanics. Certain of the many unsupported hypotheses of quantum gravity (such as Loop Quantum Gravity) might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex. One can never prove that nature isn't "allergic to infinities" (the skeptic can always claim, "wait, but if we looked even closer or farther, maybe we would see a heretofore unobserved brick wall"), but this allergy is not something that has been empirically observed.
comment by Doug_S. · 2008-01-11T01:56:37.000Z · LW(p) · GW(p)
I think Eliezer's "infinite set atheism" is a belief that infinite sets, although well-defined mathematically, do not exist in the "real world"; in other words, that any physical phenomenon that actually occurs can be described using a finite number of bits. (This can include numbers with infinite decimal expansions, as long as they can be generated by a finitely long computer program. Therefore, using pi in equations is not prohibited, because you're using the symbol "pi" to represent the program, which is finite.)
A consequence of "infinite set atheism" seems to be that the universe is a finite state machine (although one that is not necessarily deterministic). Am I understanding this properly?
comment by cumulant-nimbus · 2008-01-11T02:24:12.000Z · LW(p) · GW(p)
What do you mean by "infinite set atheism"? You are essentially stating that you don't believe in mathematical limits -- because that is one of the major consequences of infinite sets (or sequences).
If you don't believe in those... well, you lose calculus, you lose the density of the real numbers, you lose the need for or understanding of many events with probability 0 or 1, and you lose the point of Zeno's Paradox.
Janos is spot on about measure zero not implying impossibility. What is the probability of a golf ball landing at any exact point? Zero. But it has to land somewhere, so no one point is impossible.
Impossibility would mean absence from your sigma algebra. What's that you ask? Without making this painful, you need three things for probability: an idea of what constitutes "the space of everything", an idea of what constitutes possible events out of that space which we can confirm or deny, and an assignment of numbers to those events. (This is often LaTeX'ed as (\Omega, \mathcal{F}, P).) The conversation here seems to be confusing the filtration/sigma-algebra F with the numbers assigned to those events by P.
Can we choose which we're talking about: events or numbers?
comment by Caledonian2 · 2008-01-11T02:58:43.000Z · LW(p) · GW(p)
What is the probability of a golf ball landing at any exact point? Zero.
Wrong.
I don't know which is more painful: Eliezer's errors, or those of his detractors.
comment by Z._M._Davis · 2008-01-11T03:39:27.000Z · LW(p) · GW(p)
Cumulant, I think the idea behind "infinite set atheism" is not that limits don't exist, but that infinities are acceptable only as limits approached in a specified way. On this view, limits are not a consequence of infinite sets, as you contend; rather, only the limit exists, and the infinite set or sequence is merely a sloppy way of thinking about the limit.
Eliezer, I'll second Matthew's suggestion above that you write a post on infinite set atheism; it looks as if we don't understand you.
I think I understand the motive for rejecting infinite sets (viz., that whenever you deal with infinites you get all sorts of ridiculously counterintuitive results--sums coming out different when you reärrange the terms, the Banach-Tarski paradox, &c., &c.), but I'm not sure you can give up infinite sets without also giving up the real numbers (as others have touched on above), which seems very wrong.
comment by cumulant-nimbus · 2008-01-11T03:40:32.000Z · LW(p) · GW(p)
Caledonian: Not wrong. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.
Now say there is some probability density of landing spots; and, let's say no one spot is special in that it attracts golf balls more than points immediately nearby (i.e. our pdf is continuous and non-atomic). Right there, you need every point (as a singleton) to have measure 0.
Go pick up Billingsley: measure 0 is not the same as impossible nor does it cause any problems.
comment by Caledonian2 · 2008-01-11T04:14:12.000Z · LW(p) · GW(p)
Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.
And the location that the ball lands on will also be composed of infinitely many reals. Shall we compare the size of two infinite sets?
comment by cumulant-nimbus · 2008-01-11T05:09:32.000Z · LW(p) · GW(p)
I'd say that the ball is a sphere and consider the first point of impact (i.e. the tangency point of the plane to the sphere). Otherwise, you need to know a lot about the ball and the field where it lands.
You can compare infinite sets. Take the sets A and B, A={1,2,3,...} and B={2,3,4,...}. B is, by construction, a subset of A. There's your comparison; yet, both are infinite sets.
What assumptions would you make for the golf ball and the field? (To keep things clear, can we define events and probabilities separately?)
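Following up the A = {1,2,3,...}, B = {2,3,4,...} comparison above: B is a proper subset of A, yet the map n → n + 1 pairs the two sets off exactly, so both have the same (countably infinite) cardinality. A two-line illustration (mine, not the commenter's):

```python
from itertools import count, islice

# count(1) enumerates A = {1, 2, 3, ...}; count(2) enumerates B = {2, 3, 4, ...}.
# Zipping them exhibits the bijection n -> n + 1; we print the first few pairs.
pairs = list(islice(zip(count(1), count(2)), 5))
print(pairs)  # [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
```

So "B is a subset of A" and "A and B are the same size" are both true at once, which is one of the standard counterintuitive features of infinite sets.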
comment by Paul_Gowder · 2008-01-11T07:27:53.000Z · LW(p) · GW(p)
Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Probabilities in continuous space are measured on intervals. Basic calculus...
comment by Caledonian2 · 2008-01-11T14:40:58.000Z · LW(p) · GW(p)
Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero.
Gowder, everyone who's ever given the issue more than three-seconds'-thought knows that no statistical result ever involves a single point.
comment by J_Thomas · 2008-01-11T15:05:30.000Z · LW(p) · GW(p)
Usually, if a die lands on edge we say it was a spoiled throw and do it over. Similarly if a Dark Lord writes 37 on the face that lands on top, we complain that the Dark Lord is spoiling our game and we don't count it.
We count 6 possibilities for a 6-sided die, 5 possibilities for a 5-sided die, 2 possibilities for a 2-sided die, and if you have a die with just one face -- a spherical die -- what's the chance that face will come up?
I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.
comment by Janos2 · 2008-01-11T20:10:02.000Z · LW(p) · GW(p)
Ben:
Well, that depends on your number system. For some purposes +infinity is a very useful value to have. For instance if you consider the extended nonnegative reals (i.e. including +infinity) then every measurable nonnegative extended-real-valued function on a measure space actually has a well-defined extended-nonnegative-real-valued integral. There are all kinds of mathematical structures where an infinity element (or many) is indispensable. It's a matter of context. The question of what is a "number" is I think very vague given how many interesting number-like notions mathematicians have come up with. But unquestionably "infinity" is not a natural number, or a real number, or a complex number.
Probability theory, on the other hand, would have to change shape if we comfortably wanted to exclude 0 probabilities. What we now call measures would be wrong for the job. I don't know how it would look, but I find the standard description intuitively appealing enough that I don't think it should be changed. It's probably true that for a Bayesian inference engine of some sort, whose purpose is to find likelihoods of propositions given evidence, the "probabilities" it keeps track of shouldn't become 0 or 1. If there's a rich theory there focussing on how to practically do this stuff (and I bet there is, although I know nothing of it beyond Bayes' Theorem, which is a simple result) then ignoring the possibility of 0s and 1s makes sense there: for example you can use the log odds. But in general probability theory? No.
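The log-odds representation Janos mentions can be sketched as follows (the framing is mine): probabilities strictly between 0 and 1 map onto the whole real line, while the endpoints 0 and 1 map to minus and plus infinity, which is the sense in which they sit off the ordinary evidence scale.

```python
import math

# Probabilities in (0, 1) correspond one-to-one with finite log-odds;
# 0 and 1 correspond to the infinities. Updating adds a finite log
# likelihood ratio to a finite log-odds, so finitely much evidence can
# approach, but never reach, the endpoints.
def to_log_odds(p):
    if p == 0.0:
        return -math.inf
    if p == 1.0:
        return math.inf
    return math.log(p / (1 - p))

def from_log_odds(l):
    if not math.isfinite(l):
        return 0.0 if l < 0 else 1.0
    return 1 / (1 + math.exp(-l))

for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    l = to_log_odds(p)
    print(f"p = {p}: log-odds = {l}")
    assert abs(from_log_odds(l) - p) < 1e-12  # the mapping is reversible
```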
comment by billswift · 2008-01-12T16:31:54.000Z · LW(p) · GW(p)
I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.
You might want to check out Kosko's Fuzzy Thinking. I haven't gone any further into fuzzy logic, yet, but that sounds like something he discussed. Also, he claimed probability was a subset of fuzzy logic. I intend to follow that up, but there is only one of me, and I found out a long time ago that they can write it faster than I can read it.
comment by Curious_Green_Dreams · 2008-01-12T19:53:43.000Z · LW(p) · GW(p)
"On some golf courses, the fairway is readily accessible, and the sand traps are not. The green is either."
comment by Paul_Gowder · 2008-01-12T20:48:21.000Z · LW(p) · GW(p)
Haha, very nice CGD. Shows how much those philosophers of language know about golf. :-)
Although... hmm... interesting. I think that gives us a way to think about another probability 1 statement: statements that occupy the entire logical space. Example: "either there are probability 1 statements, or there are not probability 1 statements." That statement seems to be true with probability 1...
comment by Yaroslav_Bulatov · 2008-03-05T01:00:48.000Z · LW(p) · GW(p)
Disallowing a symbol for "all events" breaks the definition of a probability space. It's probably easier to allow extended reals and break some field axioms than to figure out how to do rigorous probability without a sigma-algebra.
comment by LeBleu · 2008-05-30T19:47:03.000Z · LW(p) · GW(p)
When re-working this into a book, you need to double check your conversions of log odds into decibels. By definition, decibels are calculated using log base 10, but some of your odds are natural logarithms, which confused the heck out of me when reading those paragraphs.
Probability .0001 = -40 decibels (this is the only correct one in the post; all "decibel" figures afterwards are listed as 10 times the natural logarithm of the odds)
Probability 0.502 = 0.035 decibels
Probability 0.503 = 0.052 decibels
Probability 0.9999 = 40 decibels
Probability 0.99999 = 50 decibels
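LeBleu's corrected figures can be checked directly: decibels of evidence here are 10 · log10(odds), with odds = p / (1 − p).

```python
import math

# Convert a probability to decibels of evidence via base-10 log odds.
def decibels(p):
    return 10 * math.log10(p / (1 - p))

for p in (0.0001, 0.502, 0.503, 0.9999, 0.99999):
    print(f"probability {p}: {decibels(p):+.3f} decibels")
```

Using the natural log instead, i.e. 10 · ln(odds), inflates every figure by a factor of ln(10) ≈ 2.303, which matches the discrepancy LeBleu describes.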
P.S. It'd be nice if you provided an RSS feed for the comments on a post, in addition to the RSS feed for the posts...
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-05-30T22:26:44.000Z · LW(p) · GW(p)
I cannot begin to imagine where those numbers came from. Dangers of "Posted at 1:58 am", I guess. Fixed.
Replies from: CuriousAlbert↑ comment by CuriousAlbert · 2009-09-14T20:48:02.402Z · LW(p) · GW(p)
Could you respond to Neel Krishnaswami's post above, and this one as well?
comment by TobyBartels · 2010-07-28T04:13:51.611Z · LW(p) · GW(p)
My intuition as a mathematician declares that nobody will ever develop an elegant mathematical formulation of probability theory that does not allow for statements that are logically impossible or certain, such as statements of the form p AND NOT p. And it is necessary, if the theory is to be isomorphic to the usual one, that these statements have probability 0 (if impossible) or 1 (if certain). However, I believe that it is quite reasonable to declare, as a condition demanded of any prior deemed rational, that only truly impossible or certain statements have those probabilities. I think that this gives you what you want.
It's obvious that you can make this very demand when working with discrete probability distributions. It may not be obvious that you can make this demand when working with continuous probability distributions. Certainly the usual theory of these, based on so-called ‘measure spaces’ and ‘σ-algebras’ (I mention those in case they jog the reader's memory), cannot tolerate this requirement, at least not if anything at all similar to the usual examples of continuous distributions are allowed.
One answer is that only discrete probability distributions apply to the real world, in which one can never make measurements with infinite precision or observe an infinite sequence of events. Even if the world has infinite size or is continuous to infinitesimal scales, you will never observe that, so you don't need to predict anything about that.
However, even if you don't buy this argument, never fear! There is a mathematical theory of probability based on ‘pointless measure spaces’ and ‘abstract σ-algebras’. In this theory, it again makes perfect sense to demand that any prior must assign probability 0 or 1 only to impossible or certain events. The idea is that if something can never be observed, even in principle, then it is effectively impossible, and the abstract pointless theory allows one to treat it as such.
Then I agree that one should require, as a condition on considering a prior to be rational, that it should assign probability 0 only to these impossible events and assign probability 1 only to their certain complements.
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-28T04:47:40.841Z · LW(p) · GW(p)
PS: cumulant-nimbus above gives a brief summary of the usual approach to measure theory. The pointless approach that I advocate can be suggested from that as follows: taboo \Omega. Neel Krishnaswami's comment is implicitly using the pointless approach; his event space is cumulant-nimbus's \mathcal{F}, and he works entirely in terms of events.
comment by timtyler · 2010-08-29T19:40:38.511Z · LW(p) · GW(p)
As Perplexed points out, this is usually known as Cromwell's rule.
Replies from: MarsColony_in10years↑ comment by MarsColony_in10years · 2015-04-06T21:37:49.565Z · LW(p) · GW(p)
Thanks for the link. It sounds like Yudkowsky is arguing something quite close to Cromwell's Rule, with a slight technical difference. From the Wikipedia article:
...the use of prior probabilities of 0 or 1 should be avoided, except when applied to statements that are logically true or false.
Yudkowsky would argue that formal logic is not part of the territory, but rather part of our map (perhaps surveying equipment would be a good analogy, since the compass analogy is already taken by "moral compass"). As such, not even formal mathematical logic should be presumed to have 100% certainty.
Of course, this raises the problem of constantly having to include the term p(math is fundamentally flawed) everywhere. Instead of just writing p(heads) when calculating the odds of a coin flip or flips, now we'd have to use p(heads | ~math is fundamentally flawed). As a matter of sheer convenience, it would be easier to just add it to the list of axioms supporting the fundamental theorems that the rest of mathematics is built on.
But that’s just semantics, I suppose. Wikipedia has a couple more interesting tidbits, that I’ve fished out for future readers:
The reference is to Oliver Cromwell. Cromwell wrote to the synod of the Church of Scotland on August 5, 1650, including a phrase that has become well known and frequently quoted:
“I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”
As Lindley puts it, assigning a probability should "leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved." Similarly, in assessing the likelihood that tossing a coin will result in either a head or a tail facing upwards, there is a possibility, albeit remote, that the coin will land on its edge and remain in that position.
comment by MathijsJ · 2010-12-02T02:24:59.536Z · LW(p) · GW(p)
I'm kinda surprised that it's only been mentioned once in the comments (I only just discovered this site, really really great, by the way) and one from 2010 at that, but it seems to me that "a magical symbol to stand for "all possibilities I haven't considered" " does exist: the symbol "~" (i.e. not). Even the commenter who does mention it makes things complicated for himself: P(Q or ~Q)=1 is the simplest example of a proposition with probability 1.
The proposition is of course a tautology. I do think (but I'm not sure) that that is the only sort of statement that receives probability 1. This is in sync with Eliezer's "amount of evidence" interpretation. A bayesian update can only generate 1 if the initial proposition was of probability 1 or if the evidence was tautological (i.e. if Q then Q or, slightly less lame, if "Q or R" and "~R" then Q, where "Q or R" and "~R" are the evidence).
Skimming the comments, I saw two other proposals for "sure bets", the runner who clocked a negative time and the golf ball landing in a particular spot. That last one degenerated pretty quickly into a discussion about how many points there are in a field and on a ball. I think that's typical of such arguments: it depends on your model. Once you have your model specified the probability becomes 1 (or not) if the statement is (or isn't) tautological in the model. If the model isn't specified, then neither is the statement (what is a precise point?) and hence the probability. Ask the next man what the probability is of a runner clocking a negative time and he'll rightly respond: "Huh?" (unless he is a particularly obfuscatory know-it-all, in which case he might start blabbering about the speed of light. But then too, he makes a claim because he can ascribe meaning to the question, that is, he picks his model). So these are also tautological examples.
I think Eliezer's claims hold up pretty well for propositions that aren't tautological and hence are empirical in nature: they require evidence, and only tautological evidence will suffice for certainty.
About the problem of inserting 0's in certain standard theorems: I don't see a problem with Bayes' theorem (I'm curious about other examples). Dividing by 0 is not defined, so the probability of it raining when hell freezes over is not defined. That seems like a satisfactory arrangement.
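MathijsJ's point about Bayesian updates can be made concrete with a short sketch (the numbers below are illustrative, mine rather than his): a Bayes update with finite, nonzero likelihoods never moves a prior strictly between 0 and 1 all the way to an endpoint, and a prior of exactly 0 can never update away from 0.

```python
# Posterior P(H | E) by Bayes' theorem, for a binary hypothesis.
def update(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

p = 0.5
for _ in range(5):               # five rounds of strong evidence
    p = update(p, 0.99, 0.01)    # a 99:1 likelihood ratio each round
print(p)                         # very close to 1, but strictly below it
assert 0.0 < p < 1.0

# The "raining when hell freezes over" case: a prior of exactly 0 is
# fixed forever, since the numerator stays 0 for any finite likelihoods.
assert update(0.0, 0.99, 0.01) == 0.0
```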
Replies from: player_03, fubarobfusco↑ comment by player_03 · 2011-07-07T07:52:50.277Z · LW(p) · GW(p)
Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.
I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.
The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one should avoid using prior probabilities of 0 or 1, except when applied to statements that are logically true or false." This matches your analysis - you can only be certain of tautologies.
Also, your discussion of models neatly resolves the distinction between, say, a mathematically-defined die (which can be certain to end up showing an integer between 1 and 6) and a real-world die (which cannot quite be known for sure to have exactly six stable states).
Eliezer makes his position pretty clear: "So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers."
It's true - you cannot ever reach a probability of 1 if you start at 0.5 and accumulate evidence, just as you cannot reach infinity if you start at 0 and add integer values. And the inverse is true, too - you cannot accumulate evidence against a tautology and bring its probability down to anything less than 1. But this doesn't mean a probability of 1 is an incoherent concept or anything.
Eliezer: if you're going to say that 0 and 1 are not probabilities, you need to come up with a new term for them. They haven't gone away completely just because we can't reach them.
Edit a year and a half later: I agree with the article as written, partially as a result of reading How to Convince Me That 2 + 2 = 3, and partially as a result of concluding that "tautologies that have probability 1 but no bearing on reality" is a useless concept, and that therefore, "probability 1" is a useless concept.
↑ comment by fubarobfusco · 2011-07-29T09:28:29.703Z · LW(p) · GW(p)
Jaynes avoids P(A|B) for "probability of A given evidence B" and P(B) for "probability of B", preferring P(A|BX) and P(B|X) where X is one's background knowledge. This and the above leads naturally to the question of ~X: the situation in which one's "background knowledge" is false.
Assume that background knowledge X is the conjunction of a finite number of propositions. ~X is true if any of these propositions is false. If we can factor X into YZ where Y is the portion we suspect of being false — that is, if we can isolate for testing a portion of those beliefs we previously treated as "background knowledge" — then we can ask about P(A|BYZ) and P(A|B·~Y·Z).
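The factoring described above can be sketched numerically (the numbers are mine, chosen only for illustration): split the background knowledge X into YZ and recover P(A|BZ) by marginalizing over whether the suspect part Y holds.

```python
# Law of total probability over the suspect background assumption Y:
# P(A|BZ) = P(A|BYZ) P(Y|BZ) + P(A|B ~Y Z) P(~Y|BZ)
p_y_given_bz = 0.95          # we are fairly confident in Y...
p_a_given_byz = 0.80         # ...and A is likely if Y holds
p_a_given_b_noty_z = 0.10    # but unlikely if Y fails

p_a_given_bz = (p_a_given_byz * p_y_given_bz
                + p_a_given_b_noty_z * (1 - p_y_given_bz))
print(p_a_given_bz)  # 0.8 * 0.95 + 0.1 * 0.05 = 0.765, up to float rounding
```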
comment by ksvanhorn · 2011-01-21T04:39:28.210Z · LW(p) · GW(p)
For any state of information X, we have P(A or not A | X) = 1 and P(A and not A | X) = 0. We have to have 0 and 1 as probabilities for probability theory even to work. I think you're taking a reasonable idea -- that P(A | X) should be neither 0 nor 1 when A is a statement about the concrete physical world -- and trying to apply it beyond its applicable domain.
comment by AnthonyC · 2011-03-29T18:22:18.472Z · LW(p) · GW(p)
Consider the set of all possible hypotheses. This is a countable set, assuming I express hypotheses in natural language. It is potentially infinite as well, though in practice a finite mind cannot accommodate infinitely long hypotheses. To each hypothesis, I can try to assign a probability, on the basis of available evidence. These probabilities will be between zero and one. What is the probability that a rational mind will assign at least one hypothesis the status of absolute certainty? Either this is one (there is definitely such a hypothesis), or zero (there is definitely not such a hypothesis, which cannot be, because the hypothesis "there is definitely not such a hypothesis" is then a counterexample), or somewhere in between (there may be, somewhere, a hypothesis that a rational mind would regard as being absolutely certain). So I cannot accept your hypothesis that there does not exist, anywhere, ever, a hypothesis that I should regard as being absolutely certain.
Replies from: jimrandomh↑ comment by jimrandomh · 2011-03-29T19:01:57.946Z · LW(p) · GW(p)
Self-referential hypotheses do not always map to truth values, and "a rational mind will assign at least one hypothesis the status of absolute certainty" is self-referential. The contradiction you've encountered arises from using a statement isomorphic to "this statement is false" and requiring it to have a truth value, not to a problem with excluding 0 and 1 as probabilities.
comment by BlindDancer · 2011-04-03T14:08:26.723Z · LW(p) · GW(p)
Yes, 0 and 1 are not probabilities. They're truth or falseness values. It's necessary to make a third 'truth value' for things that are unprovable, and possibly a fourth for things that are untestable.
comment by Sarokrae · 2011-10-18T11:50:36.904Z · LW(p) · GW(p)
Digging up an old thread here, but an interesting point I want to bring up: a friend of mine claims that he internally assigns probability 1 (i.e. an undisprovable belief) only to one statement: that the universe is coherent. Because if not, then mnergarblewtf. Is it reasonable to say that even though no statement can actually have probability 1 if you're a true Bayesian, it's reasonable to internally establish an axiom which, if negated, would just make the universe completely stupid and not worth living in any more?
Replies from: grouchymusicologist, Alejandro1, nshepperd, Richard_Kennaway↑ comment by grouchymusicologist · 2011-10-18T13:35:36.171Z · LW(p) · GW(p)
No, it's not. It's the same fundamental mistake that a lot of religious rhetoric about "faith" and "meaning" is founded on: that wanting something to be true counts as evidence that it is true. There's no reason to think that the universe depends for any of its properties on whether someone finds it stupid or not, or worth living in.
I'd also suggest you try to draw your friend out a bit on what it means exactly for the universe to be "coherent." Can that notion be expressed formally? What would we expect to see if we lived in an incoherent universe?
Obviously, I'm dubious that the "coherence" of the universe is in any proper sense a philosophical or scientific idea -- it sounds a lot more like an aesthetic one.
Replies from: Sarokrae↑ comment by Sarokrae · 2011-10-18T15:30:34.339Z · LW(p) · GW(p)
I think he just means "coherent" as "one which we can actually model based on our observations", i.e. one in which this whole exercise (rationality) makes any sense.
He assigns probability zero to the universe being incoherent, and doesn't think there would be any sensible observations if that were the case (or that any observation would be possible at all).
ETA: Merriam-Webster Definition of COHERENT
1 a : logically or aesthetically ordered or integrated : consistent b : having clarity or intelligibility : understandable
So, understandable and consistent: a universe which philosophy, mathematics and science can apply to in any meaningful way.
↑ comment by Alejandro1 · 2011-10-18T15:41:19.100Z · LW(p) · GW(p)
A charitable paraphrase of "The universe is coherent" could be a statement of the universal validity of non-contradiction: For every p, not (p and not p). However, given the existence of paraconsistent logic and philosophers who take dialetheism seriously, I cannot assign probability 1 to the claim that no aspect of the universe requires a contradiction in its description.
I would go even further to say that I am rather more certain of many other claims (such as "1+1=2" and "2+2=4") than of such general and abstract propositions as "the universe is coherent" or even "there are no true contradictions".
Replies from: Sarokrae↑ comment by Sarokrae · 2011-10-18T15:51:08.403Z · LW(p) · GW(p)
I don't think he goes quite that far - he assigns no statements probability 0 or 1 within our own logic system, even (P and ¬P), because he believes it to be possible (though not very likely) that some other logic system might supersede our own.
His belief is that it is not possible for ALL systems of logic to be incorrect, i.e. that (it is impossible to reason correctly about the universe) is necessarily false.
↑ comment by nshepperd · 2011-10-18T16:52:06.982Z · LW(p) · GW(p)
There's a lot of logic to that. For extremely unlikely possibilities you can often get away with setting their probability to 0 to make the calculations a lot simpler. For possibilities where predicted utility is independent of your actions (like "reality is just completely random") it can also be worthwhile setting their probability to 0 (ie. ignoring them), since they're approximately a constant term in expected utility. These are good ways of approximating actual expected utility so you can still mostly make the right decisions, which bounded rationality requires.
↑ comment by Richard_Kennaway · 2011-10-18T18:19:54.117Z · LW(p) · GW(p)
What is P(A|A)?
Replies from: Sarokrae↑ comment by Sarokrae · 2011-10-18T22:02:23.957Z · LW(p) · GW(p)
What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A".
There's also the probability that the particular version of logic you're using is wrong.
Replies from: None↑ comment by [deleted] · 2011-10-18T22:40:59.400Z · LW(p) · GW(p)
What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A".
How far you can go depends on what you mean by "go".
It's perfectly possible to calculate, say, P(I see the coin come up heads | the coin is flipped once, it is fair, and I see the outcome), and actually much more difficult to calculate P(I see the coin come up heads | I have experience/perception of evidence for the facts that the coin is flipped once, it is fair, and I see the outcome).
Replies from: Sarokrae↑ comment by Sarokrae · 2011-10-19T07:36:30.179Z · LW(p) · GW(p)
"I see" is what I meant by perception/experience of evidence. Whenever I "see" something, there's always a non-zero chance of my brain deceiving me. The only thing you can really have to base your decisions on is P(I see the coin come up heads | I see/know the coin is flipped once, I know it is fair, and I see the outcome). P(the coin comes up heads|the coin is flipped once, it is fair and I know the outcome) is possible and easy to calculate, but not completely accurate to the world we live in.
comment by Richard_Kennaway · 2011-10-19T08:37:19.435Z · LW(p) · GW(p)
"The ("Bayesian") framework explored in these essays replaces the two Cartesian options, affirmation and denial, by a continuum of judgmental probabilities in the interval from 0 to 1, endpoints included, or -- what comes to the same thing -- a continuum of judgmental odds in the interval from 0 to infinity, endpoints included. Zero and 1 are probabilities no less than 1/2 and 99/100 are. Probability 1 corresponds to infinite odds, 1:0. That's a reason for thinking in terms of odds: to remember how momentous it may be to assign probability 1 to a hypothesis."
Richard Jeffrey, "Probability and the art of judgement".
I leave it as an exercise to correctly state the relationships between Eliezer's article, the Jeffrey quote, and the value of P(A|A).
(Note: Jeffrey is not to be confused with Jeffreys, although both were Bayesian probability theorists.)
comment by zslastman · 2012-06-29T18:13:13.225Z · LW(p) · GW(p)
"When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence."
That observation is so useful and intuition-friendly it probably deserves its own blog post, and a prominent place in your book.
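That equal-spacing property can be illustrated in a few lines (a minimal sketch, using the post's decibel convention of 10·log10 of the odds; the specific probabilities are just examples):

```python
import math

def log_odds_db(p):
    """Log odds of probability p, expressed in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# Equal amounts of evidence move you equal distances on the log-odds scale:
# odds of 1:1 -> 10:1 -> 100:1 are each exactly 10 dB apart, even though the
# probability increments (0.5 -> ~0.909 -> ~0.990) look very uneven.
steps = [log_odds_db(p) for p in (1 / 2, 10 / 11, 100 / 101)]
print(steps)  # approximately [0.0, 10.0, 20.0]
```

Going from 50% to ~91% confidence takes the same 10 dB of evidence as going from ~91% to ~99%, which is the "natural spacing" the comment describes.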
comment by [deleted] · 2013-01-04T08:32:14.869Z · LW(p) · GW(p)
Forgive me if this sounds condescending, but isn't saying "0 and 1 are not probabilities because they won't let you update your knowledge" basically the same as saying "you can't know something because knowing makes you unable to learn"? If we assign tautologies as having probability 1, then anything reducible to a tautology should have probability 1 (and similarly, all contradictions and things reducible to contradictions should have probability 0). For any arbitrarily large N, if you put 2 apples next to 2 apples and repeat the test N times, you'll get 4 apples N out of N times, no less (discounting molecular breakdowns in the apples or other possible interferences).
Replies from: Qiaochu_Yuan↑ comment by Qiaochu_Yuan · 2013-01-04T10:09:44.122Z · LW(p) · GW(p)
You shouldn't assign tautologies probability 1 either because your notion of what a tautology is might be a hallucination.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-01-04T11:02:07.881Z · LW(p) · GW(p)
This confuses object level and meta level. In probability theory, P(-A|A) = 0 and P(A|A) = 1, however uncertain you may be about Cox's theorem, or about whether you are actually thinking about the same A each time it appears in those formulas. No-one, as far as I know, has ever constructed a theory of probability in which these are assigned anything else but 0 and 1. That is not to say that it cannot be done, only that it has not been done. Until that is done, 0 and 1 are probabilities.
The title of the article is a rhetorical flourish to convey the idea elaborated in its body, that to assert a probability, as a measure of belief, of 0 or 1 is to assert that no possible evidence could update that belief, that 0 and 1 are probabilities that you should not find yourself assigning to matters about which there could be any real dispute, and to suggest odds ratios or their logarithms as a better concept when dealing with practical matters associated with very low or very high probabilities. There is a very large difference between saying that the probability of winning a lottery is tiny and saying that it cannot happen at all; with enough participants it is almost certain to happen to someone. That difference is made clear by the log-odds scale, which puts the chance of a lottery ticket at 60 or more decibels below zero, not infinitely far below. In a world with 7 billion people, billion-to-1 chances happen every day.
As an example of even tinier probabilities which are still detectably different from zero, consider a typical computer. A billion transistors in its CPU, clocked a billion times a second, running for a conveniently round length of time, a million seconds, which is about 12 days. Computers these days can easily do that without a single hardware error, which means that for every one of a million billion billion switching events, a transistor opened or closed exactly as designed. A million billion billion is about 1.7 times Avogadro's number. The corresponding log-odds is -240 decibels. And yet hardware glitches can still happen.
And P(A|A) is still 1, not any finite number of decibels.
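The decibel figures in the comment above can be checked directly (a sketch; note that for tiny p, the odds p/(1 - p) are essentially equal to p):

```python
import math

def log_odds_db(p):
    """Log odds of probability p in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

print(round(log_odds_db(1e-6)))   # a one-in-a-million lottery ticket: -60 dB
print(round(log_odds_db(1e-24)))  # one glitch per 10^24 switching events: -240 dB
```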
comment by Username · 2014-06-20T01:00:43.812Z · LW(p) · GW(p)
O = (P / (1 - P))
probabilities and odds are isomorphic
This is undefined for P = 1. If you claim that this function is a real-valued bijection between probabilities and odds, then P = 1 doesn't work, so you're begging the question. Always take care not to divide by zero.
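The division-by-zero point can be demonstrated concretely (a sketch using exact rationals to avoid floating-point noise):

```python
from fractions import Fraction

def odds(p):
    """O = P / (1 - P): a bijection from [0, 1) onto [0, infinity), undefined at P = 1."""
    return p / (1 - p)

print(odds(Fraction(1, 2)))   # 1 -- even odds, 1:1
print(odds(Fraction(9, 10)))  # 9 -- odds of 9:1

try:
    odds(Fraction(1))         # the map breaks down exactly at P = 1
except ZeroDivisionError:
    print("P = 1 has no finite odds")
```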
Whether or not real-world events can have a probability of 0 or 1 is a different question than "are 0 and 1 probabilities?". They most certainly are.
Replies from: Jake_NB↑ comment by Jake_NB · 2021-05-17T08:30:13.796Z · LW(p) · GW(p)
I agree with this one. Without probabilities of 0 and 1, it's not merely that some proofs of theorems need to be revised, it's that probability theory simply doesn't work anymore, as its very axioms fall apart.
I can give a statement that is absolutely certain, e.g. "x is true given that x is true". It doesn't teach me much about real life experiences, but it is infinitely certain. Likewise with probability 0. Please note that the probability is assigned to the territory here, not the map.
The fact that I can't encounter these probabilities in real life has to do with my limits of sampling reality and interpreting it, being a flimsy brain, rather than the limits of probability theory.
You may not want to believe that probability theory contains 0 and 1, but like many other cases, Math doesn't care about your beliefs.
comment by Epictetus · 2014-12-23T05:20:04.699Z · LW(p) · GW(p)
If I roll a die, then one of the events that can happen will happen. That's just saying that if S is my sample space, then P(S) = 1. Similarly, P(~S) = 0, which is just saying that impossible things won't happen. The former statement is an axiom in the standard mathematical treatments of the subject. These statements may be trivial, but I distrust any mathematics that can't handle trivial cases.
Rejecting 1 as a probability would be catastrophic when you're dealing with discrete spaces. If you're the sort to reject infinity, then it would follow that all probability spaces are discrete. At that point probability loses its rigor. Preference for odds or log odds just means that you have to live with using the extended reals with special conventions for the infinities.
Replies from: ike↑ comment by ike · 2014-12-23T05:38:11.324Z · LW(p) · GW(p)
You can reject infinity without being able to enumerate every possibility. Your sample space will never practically contain all the possibilities. (How many times has something you never thought of happened?) There are 2^(however many bits of input come into my brain) possibilities for me to observe for any period of time, and I can never think about all of them. Any explicit sample space is going to miss possibilities. S is not well-defined.
I think the point of the post was that 1 shouldn't be used for practical cases.
Replies from: Epictetus↑ comment by Epictetus · 2014-12-23T11:37:45.221Z · LW(p) · GW(p)
Real life is complex enough that there is merit to the philosophical position that one should refrain from assigning probabilities of 0 or 1 to nontrivial events. Categorically denying that any event can have probability 0 or 1 is an extreme position (which, applied to itself, would really mean that a given event would have a high probability of not occurring with probability 0 or 1).
From the purely mathematical standpoint, removing 0 and 1 from the set of possible probabilities breaks the current foundations of the theory. The existence of a sample space containing all possibilities does not depend on whether we humans can comprehend them all. If the sample space of all possibilities exists and P(S) < 1, then a lot of theorems break down. That's where you live with idealizations like absolute certainty (or almost certainty in the infinite case) or else find something other than probability to use to model the real world.
Replies from: ike↑ comment by ike · 2014-12-23T14:46:27.156Z · LW(p) · GW(p)
In theory, if you could list every possible observation you could make, that list would have probability 1. It would take infinite time, because the following class of outcomes:
my brain bandwidth is increased to X bits, and X random bits are my next input
has an infinite cardinality. I could get into how Gödel's results mean you can't even in principle describe all possible outcomes in a finite amount of space, even by referencing classes like I did, but I'll leave that up to you.
There was a suggested fix to your problem in the post, why isn't that good enough for you?
If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.
Sounds like he agrees that S has probability 1.
Note: I agree that the way he "proves" the claim is not very good. He basically tries to switch your intuition by switching the wording of the question. Not too rigorous.
Replies from: Epictetus↑ comment by Epictetus · 2014-12-23T18:40:53.618Z · LW(p) · GW(p)
When I say that the possibilities can be listed in principle, what I mean is that there is some set S that contains them; I make no reference to any practical problems with describing or storing its elements. Like the points and lines of geometry, it's a Platonic idealization.
There was a suggested fix to your problem in the post, why isn't that good enough for you?
Because talk of magical symbols is a good sign that the passage was meant to ridicule the use of infinity. The very next paragraph seeks to expunge such "magical symbols" from probability theory.
Replies from: ike↑ comment by ike · 2014-12-23T18:48:04.702Z · LW(p) · GW(p)
If he has a rigorous way to ground probability theory without 0 and 1, I'm fine with it. He seemed to be saying that he wishes there was such a way, but until someone develops one, he's stuck with magical symbols. He acknowledges all your problems in the end of the post.
comment by StellaAthena · 2015-08-20T08:49:03.938Z · LW(p) · GW(p)
This article is largely incoherent. The main justification is the abuse of an invalid transformation: y = x/(1 - x) is not the bijection that he asserts it is, because it's not a function that maps [0,1] onto R. It's a function that maps [0,1] onto [0, ∞] as a subset of the topological closure of R. And that's okay, but you can't say "well I don't like the topological closure of R, so I'll just use R and claim that 1 is where the problem is."
Additionally, his discussion of log odds and such is perfectly fine, but ignores the fact that there are places where you do need to have an odds of 0:1, or a log odds of negative infinity. Probability theory stops working when you throw out 0 and 1, it's as simple as that.
Even if you don't want to handle tautologies or contradictions, there are other ways to get P(X) = 0 or 1. The probability that a real number chosen uniformly from the interval [0,1] equals 0 is 0. It has to be. It's a provable fact under ZFC, and to decide otherwise is to say that you're more attached to the idea of 0 and 1 not being probabilities than you are to the fact that mathematics is consistent; if you really believe that, well, there's absolutely nothing I have to say to you.
This is one of those situations where EY just demonstrates he knows very little mathematics.
Replies from: Regex, David_Bolin, nikolaus-hansen↑ comment by Regex · 2015-08-20T09:35:55.030Z · LW(p) · GW(p)
As someone who doesn't know much beyond basic statistics, in what way are 0 or 1 probabilities? Isn't it just axiomatic truth at that point? In that sense, saying zero and one are probabilities is just saying 'certain' or 'impossible', as far as I understand it. Situations where an event will definitely or definitely not occur don't seem consistent with the idea of randomness, which I've understood probability to revolve around.
I suppose the alternative would be that we'd have to assume every mathematical proof has infinite evidence if we wanted to get anywhere productive; after all, axioms are assumed to be true. It doesn't make much sense to need evidence in that scenario, except perhaps for the probability of error and mistake? That isn't particularly calculable and would actually change from person to person.
Using one and zero makes sense to me as a matter of assumed or proven truths, but I'm still unsure how that makes it a probability.
Replies from: Epictetus, StellaAthena↑ comment by Epictetus · 2015-08-20T14:30:48.682Z · LW(p) · GW(p)
Situations where an event will definitely or definitely not occur doesn't seem to be consistent with the idea of randomness which I've understood probability to revolve around.
"Event" is a very broad notion. Let's say, for example, that I roll two dice. The sample space is just a collection of pairs (a, b) where "a" is what die 1 shows and "b" is what die 2 shows. An event is any sub-collection of the sample space. So, the event that the numbers sum to 7 is the collection of all such pairs where a + b = 7. The probability of this event is simply the fraction of the sample space it occupies.
If I rolled eight dice, then they'll never sum to seven, and I say that that event occurs with probability 0. If I secretly rolled an unknown number of dice, you could reasonably ask me the probability that they sum to seven. If I answer "0", that just means that I rolled either a single die or more than seven of them. It doesn't make the process less random nor the question less reasonable.
If you treat an event as some question you can ask about the result of a random process, then 1 and 0 make a lot more sense as probabilities.
For the mathematical theory of probability, there are plenty of technical reasons why you want to retain 1 and 0 as probabilities (and once you get into continuous distributions, it turns out that probability 1 just means "almost certain").
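The dice example above can be made concrete with a brute-force sketch (events as subsets of the sample space, probability as the fraction of the space occupied):

```python
from itertools import product
from fractions import Fraction

# Sample space for two dice: all 36 ordered pairs (a, b).
space = list(product(range(1, 7), repeat=2))

# The event "the numbers sum to 7" is the subset of pairs with a + b == 7;
# its probability is the fraction of the sample space it occupies.
event = [pair for pair in space if sum(pair) == 7]
print(Fraction(len(event), len(space)))  # 1/6

# With eight dice the minimum possible sum is 8, so "they sum to 7" is the
# empty event: probability 0, even though the rolling process is random.
eight_dice = product(range(1, 7), repeat=8)
print(sum(1 for dice in eight_dice if sum(dice) == 7))  # 0
```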
Replies from: Regex↑ comment by Regex · 2015-08-21T08:36:41.306Z · LW(p) · GW(p)
This is what I meant by something being a proven truth- within the rules set one can find outcomes which are axiomatically impossible or necessary. The process itself may be random, but calling it random when something impossible didn't happen seems odd to me. The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
Replies from: Epictetus↑ comment by Epictetus · 2015-08-21T14:01:03.847Z · LW(p) · GW(p)
The very idea that 1 may be not-quite-certain is more than a little baffling, and I suspect is the heart of the issue.
If 1 isn't quite certain then neither is 0 (if something happens with probability 1, then the probability of it not happening is 0). It's one of those things that pops up when dealing with infinity.
It's best illustrated with an example. Let's say we play a game where we flip a coin and I pay you $1 if it's heads and you pay me $1 if it's tails. With probability 1, one of us will eventually go broke (see Gambler's ruin). It's easy to think of a sequence of coin flips where this never happens; for example, if heads and tails alternated. The theory holds that such a sequence occurs with probability 0. Yet this does not make it impossible.
It can be thought of as the result of a limiting process. If I looked at sequences of N coin flips, counted the ones where no one went broke and divided this by the total number of possible sequences, then as I let N go to infinity this ratio would go to zero. This event occupies a region with area 0 in the sample space.
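That limiting process can be sketched by exhaustive enumeration (the $3 bankrolls are a hypothetical choice to keep the count finite; the surviving fraction keeps shrinking as N grows):

```python
from itertools import product

BANKROLL = 3  # hypothetical: each player starts with $3

def surviving_fraction(n):
    """Fraction of the 2^n coin-flip sequences in which neither player goes broke."""
    alive = 0
    for flips in product((+1, -1), repeat=n):
        wealth = 0  # my net winnings; ruin when it reaches +/- BANKROLL
        for step in flips:
            wealth += step
            if abs(wealth) == BANKROLL:  # someone just went broke
                break
        else:
            alive += 1
    return alive / 2 ** n

for n in (4, 8, 12, 16):
    print(n, surviving_fraction(n))  # the fraction heads toward 0 as n grows
```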
Replies from: Regex↑ comment by StellaAthena · 2015-08-20T20:14:17.840Z · LW(p) · GW(p)
Formally, probability is defined via areas. The basic idea is that the probability of picking an element from a set A out of a set B is the ratio of the areas of A to B, where "area" can be defined not only for things like squares but also things like lines, or actually almost every* subset of R. So, let's say you want to randomly select a real number from the interval [0,1] and want to know the odds it falls in a set, S. The area of [0,1] is 1, so the answer is just the area of S.
If S={0}, then S has area zero. If S=[0,1), then S has area 1. Not only are both of these theoretical possibilities, they are practical ones too. There are real world examples of probability zero events (the only one that comes to mind involves QM though so I don't want to bother with the details).
Now, notice that this isn't the same thing as "impossible". Instead, it means more like "it won't happen I promise even by the time the universe ends". The way I tend to think about probability zero events is that they are so unlikely they are beyond the reach of the principle that as the number of trials increases, events become expected. For any nonzero probability, there is a number of trials, n, such that once you do it n times the expected value becomes greater than 1. That's not the case with probability zero events. Probability 1 events can then be thought of as the negation of probability 0 events.
*not actually "almost every" in a formal sense, but "almost any" in a "unless you go try to build a set that you can't measure it probably has a well defined area" sense
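The area picture above lends itself to a Monte Carlo sketch (illustrative only: the empirical frequency of landing in [0, ε] tracks the interval's length ε, which shrinks to 0 as the interval shrinks to the single point {0}):

```python
import random

random.seed(0)  # fixed seed for reproducibility
N = 200_000
draws = [random.random() for _ in range(N)]

# P(X <= eps) for uniform X on [0, 1] equals the length of [0, eps], i.e. eps.
# As eps -> 0 the interval shrinks to the point {0}, whose length -- and hence
# probability -- is exactly 0, even though 0 is a possible value of X.
for eps in (0.1, 0.01, 0.001):
    frequency = sum(1 for x in draws if x <= eps) / N
    print(eps, frequency)  # the empirical frequency tracks eps
```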
Replies from: Regex↑ comment by Regex · 2015-08-21T08:24:39.229Z · LW(p) · GW(p)
That seems a solid enough explanation, but how can something of probability zero have a chance to occur? How then do you represent an impossible outcome? It seems like otherwise 'zero' is equivalent to 'absurdly low'. That doesn't quite jibe with my understanding.
Replies from: StellaAthena, Stephen_Cole↑ comment by StellaAthena · 2015-08-21T21:37:41.482Z · LW(p) · GW(p)
Impossible things also have a probability of zero. I totally understand that this seems a bit unintuitive, and the underlying structure (which includes things like infinities of different sizes) is generally pretty unintuitive at first. Which is kinda just saying "sorry, I can't explain the intuition," which is unfortunately true.
Replies from: Regex↑ comment by Stephen_Cole · 2015-08-22T00:07:25.752Z · LW(p) · GW(p)
I think one of the clearest expositions on these issues is ET Jaynes. The first three chapters (which is some of the relevant part) can be found at http://bayes.wustl.edu/etj/prob/book.pdf.
Replies from: Regex↑ comment by Regex · 2015-08-22T14:39:02.687Z · LW(p) · GW(p)
"Not Found
The requested URL /etj/prob/book.pdf. was not found on this server."
Replies from: arundelo↑ comment by arundelo · 2015-08-22T14:59:40.870Z · LW(p) · GW(p)
Fixed Jaynes link (no trailing period).
Replies from: Regex, Stephen_Cole↑ comment by Stephen_Cole · 2015-08-22T15:17:11.912Z · LW(p) · GW(p)
Oops. Thanks for the fix!
↑ comment by David_Bolin · 2015-08-20T13:10:15.939Z · LW(p) · GW(p)
Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty. This would mean that mathematical probability theory is an imperfect formalization of people's subjective degrees of belief. It would not necessarily mean that it is impossible in principle to come up with a better formalization.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-20T14:44:08.385Z · LW(p) · GW(p)
Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty.
Errr... as I read EY's post, he is certainly talking about the mathematics of probability (or about the formal framework in which we operate on probabilities) and not about some "subjective sense".
The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?
Replies from: Bound_up, Gram_Stone, Wes_W, David_Bolin↑ comment by Bound_up · 2015-08-20T15:23:35.423Z · LW(p) · GW(p)
I think he's just acknowledging the minute(?) possibility that our apparently flawless reasoning could have a blind spot. We could be in a Matrix, or have something tampering with our minds, etcetera, such that the implied assertion:
If this appears absolutely certain to me
Then it must be true
is indefensible.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-20T15:43:59.648Z · LW(p) · GW(p)
There are two different things.
David_Bolin said (emphasis mine): "He is saying that in the subjective sense, people don't actually have absolute certainty." I am interpreting this as "people never subjectively feel they have absolute certainty about something" which I don't think is true.
You are saying that from an external ("objective") point of view, people can not (or should not) be absolutely sure that their beliefs/conclusions/maps are true. This I easily agree with.
Replies from: David_Bolin↑ comment by David_Bolin · 2015-08-20T19:08:55.984Z · LW(p) · GW(p)
It should probably be defined by calibration: do some people have a type of belief where they are always right?
Replies from: Lumifer, StellaAthena↑ comment by StellaAthena · 2015-08-20T20:33:17.464Z · LW(p) · GW(p)
You can phrase statements of logical deduction such that they have no premises and only conclusions. If we let S be the set of logical principles under which our logical system operates and T be some sentence that entails Y, then S AND T implies Y is something that I have absolute certainty in, even if this world is an illusion, because the premise of the implication contains all the rules necessary to derive the result.
A less formal example of this would be the sentence: If the rules of logic as I know them hold and the axioms of mathematics are true, then it is the case that 2+2=4
↑ comment by Gram_Stone · 2015-08-20T15:50:44.569Z · LW(p) · GW(p)
The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?
The way I view that statement is: "In our formalization, agents with absolutely certain beliefs cannot change those beliefs, we want our formalization to capture our intuitive sense of how an ideal agent would update its beliefs, a formalization with a quality of fanaticism does not capture our intuitive sense of how an ideal agent would update its beliefs, therefore we do not want a quality of fanaticism."
And what state of the world would correspond to the statement "Some people have absolute certainty." ? Do you think that we can take some highly advanced and entirely fictional neuroimaging technology, look at a brain and meaningfully say, "There's a belief with probability 1." ?
And on the other hand, I'm not afraid to talk about folk certainty, where the properties of an ideal mathematical system are less relevant, where everyone can remain blissfully logically uncertain of the fact that beliefs with probability 1 and 0 imply undesirable consequences in formal systems that possess them, and say things like "I believe that absolutely." I am not afraid to say something like, "That person will not stop believing that for as long as he lives," and mean that I predict with high confidence that that person will not stop believing that for as long as he lives.
And once you believe that the formalization is trying to capture our intuitive sense of an ideal agent, and decide whether or not that quality of fanaticism captures it, and decide whether or not you're going to be a stickler about folk language, then I don't think that any question or confusion around that claim remains.
Replies from: Lumifer↑ comment by Lumifer · 2015-08-20T15:57:58.062Z · LW(p) · GW(p)
People are not "ideal agents". If you specifically construct your formalization to fit your ideas of what an ideal agent should and should not be able to do, this formalization will be a poor fit to actual, live human beings.
So either you make a system for ideal agents -- in which case you'll still run into some problems because, as has been pointed out upthread, standard probability math stops working if you disallow zeros and ones -- or you make a system which is applicable to our imperfect world with imperfect humans.
Replies from: Gram_Stone↑ comment by Gram_Stone · 2015-08-20T21:59:02.172Z · LW(p) · GW(p)
I don't see why both aren't useful. If you want a descriptive model instead of a normative one, try prospect theory.
I just don't see this article as an axiom that says probabilities of 0 and 1 aren't allowed in probability theory. I see it as a warning not to put 0s and 1s in your AI's prior. You're not changing the math so much as picking good priors.
↑ comment by Wes_W · 2015-08-20T17:02:33.234Z · LW(p) · GW(p)
If we're asking what the author "really meant" rather than just what would be correct, it's on record.
The argument for why zero and one are not probabilities is not, "All objects which are special cases should be cast out of mathematics, so get rid of the real zero because it requires a special case in the field axioms", it is, "ceteris paribus, can we do this without the special case?" and a bit of further intuition about how 0 and 1 are the equivalents of infinite probabilities, where doing our calculations without infinities when possible is ceteris paribus regarded as a good idea by certain sorts of mathematicians. E.T. Jaynes in "Probability Theory: The Logic of Science" shows how many probability-theoretic errors are committed by people who assume limits directly into their calculations, without first showing the finite calculation and then finally taking its limit. It is not unreasonable to wonder when we might get into trouble by using infinite odds ratios. Furthermore, real human beings do seem to often do very badly on account of claiming to be infinitely certain of things so it may be pragmatically important to be wary of them.
I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.
Replies from: EHeller, Lumifer↑ comment by EHeller · 2015-08-20T17:14:30.212Z · LW(p) · GW(p)
I think the issue at hand is that 0 and 1 aren't special cases at all, but very important for the math of probability theory to work (try to construct a probability measure in which no subset has probability 1 or 0).
This is incredibly necessary for the mathematical idea of probability, and EY seems to be confusing "are 0 and 1 probabilities relevant to Bayesian agents?" with "are 0 and 1 probabilities?" (yes, they are, unavoidably, and not as a special case!).
↑ comment by Lumifer · 2015-08-20T17:18:06.850Z · LW(p) · GW(p)
It seems that EY position boils down to
Pragmatically speaking, the real question for people who are not AI programmers is whether it makes sense for human beings to go around declaring that they are infinitely certain of things. I think the answer is that it is far mentally healthier to go around thinking of things as having 'tiny probabilities much larger than one over googolplex' than to think of them being 'impossible'.
And that's a weak claim. EY's ideas of what is "mentally healthier" are, basically, his personal preferences. I, for example, don't find any mental health benefits in thinking about one over googolplex probabilities.
Replies from: Wes_W↑ comment by Wes_W · 2015-08-20T17:27:16.390Z · LW(p) · GW(p)
Cromwell's Rule is not EY's invention, and relatively uncontroversial for empirical propositions (as opposed to tautologies or the like).
If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".
Replies from: EHeller, Lumifer↑ comment by EHeller · 2015-08-20T17:44:34.207Z · LW(p) · GW(p)
The part that is new compared to Cromwell's Rule is that Yudkowsky doesn't want to give probability 1 to logical statements (e.g., "53 is a prime number").
Because he doesn't want to treat 1 as a probability, you can't expect complete sets of events to have total probability 1, despite their being tautologies. Because he doesn't want probability 0, how do you handle the empty set? How do you assign probabilities to statements like "A and B" where A and B are logically exclusive (the coin lands heads AND the coin lands tails)?
Removing 0 and 1 from the math of probability breaks most of the standard manipulations. Again, it's best to just say "be careful with 0 and 1 when working with odds ratios."
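The fragility both sides of this exchange are pointing at is easy to see in a few lines of Python (a sketch of my own, not anyone's claimed implementation): the odds map from the post behaves fine on [0, 1) but diverges at 1.

```python
import math

def odds(p: float) -> float:
    """Probability -> odds, O = p / (1 - p); diverges as p -> 1."""
    return p / (1 - p)

def log_odds(p: float) -> float:
    """Probability -> log odds; -inf at p = 0 and +inf at p = 1."""
    return math.log(p / (1 - p))

print(odds(0.5))             # 1.0, i.e. odds of 1:1
print(round(odds(0.9), 6))   # 9.0, i.e. odds of 9:1

# At the endpoint the map breaks down -- the "infinite odds" point:
try:
    odds(1.0)
except ZeroDivisionError:
    print("p = 1 maps to infinite odds")
```

Nothing here settles whether 0 and 1 "are" probabilities; it only shows why the odds representation forces care at the endpoints.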
↑ comment by Lumifer · 2015-08-20T17:48:30.358Z · LW(p) · GW(p)
Nobody is saying EY invented Cromwell's Rule, that's not the issue.
The issue is that "0 and 1 are not useful subjective certainties for a Bayesian agent" is a very different statement than "0 and 1 are not probabilities at all".
Replies from: Wes_W↑ comment by David_Bolin · 2015-08-20T18:50:57.786Z · LW(p) · GW(p)
Of course if no one has absolute certainty, this very fact would be one of the things we don't have absolute certainty about. This is entirely consistent.
↑ comment by Nikolaus Hansen (nikolaus-hansen) · 2019-12-26T15:20:21.557Z · LW(p) · GW(p)
y=x/(1-x) is not the bijection that he asserts it is, [...]. It's a function that maps [0,1] onto [1,∞] as a subset of the topological closure of R.
How is that not a bijection? Specifically, a bijection between the sets [0, 1] and [0, ∞], which seems exactly to be the claim EY is making.
On a broader point, EY was not calling into question the correctness or consistency of mathematical concepts or claims but whether they have any useful meaning in reality. He was not talking about the map, he was talking about the territory and how we may improve the map to better reflect the territory.
comment by Houshalter · 2016-02-29T09:45:02.341Z · LW(p) · GW(p)
A real mathematician got in a debate with EY over this post, and made some really good points: https://np.reddit.com/r/badmathematics/comments/2bazyc/0_and_1_are_not_probabilities_any_more_than/cj43y8k
Maybe this doesn't stand up mathematically, but I really like the intuition of log odds instead of probability. And this post explained it quite well. And the main point that you shouldn't believe in absolute certainties is still true. An ideal AI using probability theory would probably use log odds, and not have a 0 or 1.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2018-12-21T00:48:46.735Z · LW(p) · GW(p)
/r/badmathematics is shuttered now, apparently.
"This community has become something of a shitshow. Setting badmath to private while we try to decide on a way forward with the subreddit."
Oh no, really? Who would have thought that the sorts of people who have learned to enjoy indulging contempt would eventually turn on each other.
I really wanted to see that argument though, tell me, to what extent was it an argument? Cause I feel like if a person in our school wanted to settle this, they'd just distinguish the practical cases EY's talking about from the mathematical cases the conversants are talking about and everyone would immediately wake up and realise how immaterial the disagreement always was (though some of them might decide to be mad about that instead), but also, maybe Eliezer kind of likes getting people riled up about this so maybe dispersing the confusion never crossed his mind. Contempt vampires meet contempt bender. Kismesis is forged.
I shouldn't contribute to this "fight", but I can't resist. I'd have recommended he bring up how the brunt of the causal network formalization explicitly disallows certain or impossible events on the math level once you cross into a certain level of sophistication (I forget where the threshold was, but I remember thinking "well, the Bayesian networks that support 0s and 1s sound pretty darn limited and I'm going to give up on them just as my elders advised.")
Ultimately, the "can't be 0 or 1" restriction is pretty obviously needed for a lot of the formulas to work robustly (you can't even use the definition of conditional probability without restricting the prior of the evidence! Cause there's a division in it! There are lots of divisions in probability theory!)
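The division being referred to is the one in the definition of conditional probability, P(A|B) = P(A ∧ B) / P(B). A toy illustration (my own example numbers: a fair die, A = "roll is even", B = "roll is greater than 3"):

```python
def conditional(p_a_and_b: float, p_b: float) -> float:
    """P(A|B) = P(A and B) / P(B) -- only defined when P(B) > 0."""
    if p_b == 0:
        raise ValueError("conditioning on a probability-0 event is undefined")
    return p_a_and_b / p_b

# P(even and > 3) = 2/6 (the rolls 4 and 6), P(> 3) = 3/6,
# so P(even | > 3) = 2/3:
print(conditional(2 / 6, 3 / 6))
```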
So I propose that we give a name to that restriction, and I offer the name "credences". (Currently, it seems the word "credence" is just assigned to a bad overload of "probability" that uses percent notation instead of normal range. I doubt anyone will miss it.)
A probability is a credence iff it is neither 0 nor 1. A practical real-world right and justly radically skeptical bayesian reasoner should probably restrict a large, well-delineated subset of its evidence weights to being credences.
And now we can talk about credences and there's no need for any more confusion, if we want.
Replies from: Houshalter↑ comment by Houshalter · 2019-04-07T17:37:08.872Z · LW(p) · GW(p)
It's back btw. If it ever goes down again you can probably get it on wayback machine. And yes the /r/bad* subreddits are full of terrible academia snobbery. Badmathematics is the best of the bunch because mathematics is at least kind of objective. So they mostly talk about philosophy of mathematics.
The problem is that formal models of probability theory have problems with logical uncertainty. You can't assign a nonzero probability to a false logical statement. All the standard reasoning about probability theory is about modelling uncertainty in the unknown external world. This post is an early attempt to think about logical uncertainty, which MIRI has now published papers on and tried to formalize.
Just calling them "log odds" is fine and they are widely used in real work.
Btw, what does "Response to previous version" mean? Was this article significantly edited? It doesn't seem so confrontational reading it now.
Replies from: habryka4, MakoYass↑ comment by habryka (habryka4) · 2019-04-07T19:24:00.855Z · LW(p) · GW(p)
We published new versions of a lot of sequences posts a few months ago. If you click on the "Response to previous version" text, you can read the original text that the comment was referring to.
Replies from: None↑ comment by [deleted] · 2019-12-23T03:23:11.709Z · LW(p) · GW(p)
Wait, these old posts have been edited? I don’t see the “Response to previous version” link. I’d like to read the originals, as they were written, in chronological order... there are other ways to consume the compendium if I so desired.
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-23T04:09:00.582Z · LW(p) · GW(p)
Yeah, they were edited as part of the process of compiling Rationality: AI to Zombies. Usually that just involved adding some sources, cleaning up some sentences and fixing some typos.
The "Response to previous version" link is at the top of every comment that was posted on the previous version of the post. See here:
Replies from: None↑ comment by [deleted] · 2019-12-23T06:06:59.297Z · LW(p) · GW(p)
I see it now. Is there some way to make the original article the default View? Or a link to the prior version at the top of the article?
Replies from: habryka4↑ comment by habryka (habryka4) · 2019-12-23T06:24:16.362Z · LW(p) · GW(p)
You can click on the date-stamp at the top of the post and select the earliest version from there.
↑ comment by mako yass (MakoYass) · 2019-04-08T02:56:05.011Z · LW(p) · GW(p)
Hmm. Reading.
Okay. Summary: All of Eliezer's writing on this assumed the context of AGI/applied epistemology. That wasn't obvious from the materials, and it did not occur to this group of pure mathematicians to assume that same focus, because they're pure mathematicians and because of the activity they had decided to engage in on that day.
comment by topherhunt · 2020-12-27T13:54:07.324Z · LW(p) · GW(p)
What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20)
I'm years late to this party, and probably missing something obvious. But I'm confused by Yudkowsky's math here. Wouldn't it be more correct to say that the prior odds of rolling a 1 are 1:5, which corresponds to a probability of 1/6 or 0.1666...? If odds of 1:5 correspond to a probability of 1/5 = 0.20, that makes me think there are 5 sides to this six-sided die, each side having equal probability.
Put differently: when I think of how to convert odds back into a probability number, the formula my brain settles on is not P = O / (1 + O) as stated above, but rather P = L / (L + R), if the odds are expressed as L:R. Am I missing something important about common probability practice / jargon here?
↑ comment by Tetraspace (tetraspace-grouping) · 2020-12-27T14:20:01.161Z · LW(p) · GW(p)
The real number 0.20 isn't a probability, it's just the same odds but written in a different way to make it possible to multiply (specifically, you want some odds product * such that A:B * C:D = AC:BD). You are right about how you would convert the odds into a probability at the end.
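Both formulas from this exchange can be sketched in a few lines (my own toy functions; the 2:1 in the second call is a made-up likelihood ratio, not anything from the post):

```python
from fractions import Fraction

def odds_product(a, b, c, d):
    """Component-wise odds product: A:B * C:D = AC:BD."""
    return (a * c, b * d)

def odds_to_prob(left, right):
    """Convert odds L:R to the probability L / (L + R)."""
    return Fraction(left, left + right)

print(odds_to_prob(1, 5))        # 1/6 -- the die-roll probability, as above
print(odds_product(1, 5, 2, 1))  # (2, 5), i.e. updated odds of 2:5
```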
comment by siclabomines · 2023-02-05T16:09:10.991Z · LW(p) · GW(p)
It's a nice analogy, but it all rests on whether infinite evidence is a thing or not, and there aren't arguments one way or the other here. (Sure, infinite evidence would mean "whatever log odds you come up with, this is even stronger", but that doesn't rule out that it is a thing.)
Like, how much evidence for the hypothesis "I'll perceive the die to come up a 4" does the event "Ok, die was thrown and I am perceiving it to be a 3" provide? Or how much evidence do I have of being conscious right now when I am feeling like something? I think any answer different from infinity is just playing a word game.
comment by azzu · 2023-05-24T13:16:15.312Z · LW(p) · GW(p)
Wouldn't the prior odds in the bell example be 1:4 when the chance is 0.2? But it's written as 1:5.
Replies from: tiuxtj↑ comment by tiuxtj · 2023-08-23T16:44:15.702Z · LW(p) · GW(p)
What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20)
The prior probability of rolling a 1 is 1/6 = ~0.16. The prior odds of rolling a 1 are 1:5 = 1/5 = 0.2.
Some sources would call the 0.2 a "chance" to communicate that it's an odds and not a probability, but Eliezer seems to not do that, he just uses "chance" as a synonym for probability:
If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell.
Don't get confused, this 20% probability of hearing the bell is not the 0.2 from earlier.
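To make the distinction concrete, here is the bell example worked through with exact fractions (my own sketch of the calculation the post describes): prior odds 1:5 for face 1, bell 20% likely given a 1 and 10% likely otherwise, so the posterior odds are (1 × 0.2):(5 × 0.1) = 2:5.

```python
from fractions import Fraction

prior = (Fraction(1), Fraction(5))               # odds 1:5 for face 1
likelihood = (Fraction(2, 10), Fraction(1, 10))  # P(bell | 1), P(bell | not 1)

# Multiply componentwise to update the odds:
posterior = (prior[0] * likelihood[0], prior[1] * likelihood[1])

p = posterior[0] / (posterior[0] + posterior[1])
print(posterior[0] / posterior[1])  # 2/5, i.e. posterior odds of 2:5
print(p)                            # 2/7, the posterior probability
```

So the 0.2 that is "1/5 as odds" and the 0.2 that is "20% chance of the bell" enter the calculation in entirely different places.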
comment by AndrewS (andrew-shulaev) · 2023-06-01T16:33:57.872Z · LW(p) · GW(p)
When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.
What is "amount of evidence" in this sentence supposed to mean? Is it the same idea as the "bits of evidence" mentioned in these posts previously?
The only way I can interpret this sentence is as a definition of "amount of evidence", but then I don't understand the point of highlighting the sentence as if it's saying something more significant.
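On the standard reading (a sketch of my own, assuming the "bits of evidence" sense from the earlier posts), "amount of evidence" is the log of the likelihood ratio, so distances in log-odds space are measured in bits:

```python
import math

def bits(p: float) -> float:
    """Log-2 odds of p; differences between values are bits of evidence."""
    return math.log2(p / (1 - p))

# Going from 50% to 90% credence takes log2(9) ≈ 3.17 bits of evidence;
# the same 3.17 bits again would take you from 90% to roughly 98.8%,
# since evidence adds in log-odds space.
print(round(bits(0.9) - bits(0.5), 2))
```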
Replies from: tiuxtj