What do you do when you find out you have inconsistent probabilities?
post by NunoSempere (Radamantis)
This is a question post.
I've recently been reading about a rationalist blogger who converted to Catholicism. She may have assigned subjective probabilities like:

- P(God) = 5%
- P(Objective Morality | God) = 99%
- P(Objective Morality | No God) = 2%

Then she may have introspected and come up with:

- P(Objective Morality) = 90%

We can calculate:

- P(God | Objective Morality) = P(Objective Morality | God) * P(God) / P(Objective Morality) = 55%
Abbreviating "Objective Morality" as "OM" and "God" as "G", this state of affairs is inconsistent, because we intuitively see that:
To resolve it, she could either increase her subjective probability of there being a God, or reduce her probability of there being some kind of objective morality. She could also reconsider P(God|Objective Morality) or P(Objective Morality|God).
Anyways, I find myself very confused by this state of affairs. Is this a solved question? Is there a purely principled way of resolving this which only takes into account the 4 numbers P(OM), P(G), P(OM|G) and P(G|OM)? Is there a standard way of using some kind of metaprobabilities?
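The calculation in question is just Bayes' rule. As a minimal sketch, using the numbers that come up in the answers and comments:

```python
# Bayes' rule check on the numbers discussed in this thread.
p_g = 0.05           # P(G): subjective probability of God
p_om = 0.9           # P(OM): introspected probability of objective morality
p_om_given_g = 0.99  # P(OM | G)

# P(G | OM) implied by the other three numbers:
p_g_given_om = p_om_given_g * p_g / p_om
print(round(p_g_given_om, 3))  # 0.055
```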
answer by Gurkenglas
I would make explicit that her beliefs about her subjective probabilities are inaccurate observations of her implied underlying logically omniscient, consistent belief system. She can then assign each possible underlying consistent belief system a probability, and update that assignment once she realizes that some of the candidate systems were not consistent. What this comes out to is that whether she should update her belief in God or in Objective Morality depends on which of her beliefs she is less certain about.
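One way to picture this suggestion is as a mixture over candidate consistent belief systems. A rough sketch, where the 70/30 confidence split is a hypothetical assumption, not something from the answer:

```python
# Two candidate *consistent* belief systems, holding the conditionals fixed:
#   P(OM | G) = 0.99, P(OM | No G) = 0.02.
p_om_given_g, p_om_given_ng = 0.99, 0.02

# System A trusts the introspected P(OM) = 0.9 and solves for P(G) via the
# law of total probability: P(OM) = P(G)*P(OM|G) + (1 - P(G))*P(OM|~G).
p_g_a = (0.9 - p_om_given_ng) / (p_om_given_g - p_om_given_ng)

# System B trusts the assigned P(G) = 0.05 and lets P(OM) follow:
p_om_b = 0.05 * p_om_given_g + 0.95 * p_om_given_ng

# If she is, say, 70% confident in her introspection and 30% in her assigned
# P(G), her updated P(G) is a mixture of the two systems:
p_g_updated = 0.7 * p_g_a + 0.3 * 0.05
print(round(p_g_a, 3), round(p_om_b, 4), round(p_g_updated, 3))
```

Under these assumed weights, trusting the introspected P(OM) more pulls P(G) up sharply, which matches the informal conclusion above: the update goes against whichever belief she is less certain about.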
answer by Bucky
The 4 given probabilities are actually perfectly consistent within the equations you are using. It is provable that, whatever 4 probabilities you use, the equations will be consistent.
Therefore the question becomes “where did my maths go wrong?”
P(G|OM) = P(OM|G) * P(G) / P(OM) = 0.99 * 0.05 / 0.9 = 0.055, not 0.55
I’m pretty confident that the only way probabilities can actually be inconsistent is if the system is overconstrained (e.g. in this case you define 5 relevant probabilities instead of 4). The whole point of having axioms is to prevent inconsistencies provided you stay inside them.
P.S. Good job on noticing your confusion!
↑ comment by Gurkenglas · 2019-01-01T11:37:07.865Z
0.9 = P(Objective Morality) ≠ P(God) * P(Objective Morality | God) + P(No God) * P(Objective Morality | No God) = 0.05 * 0.99 + 0.95 * 0.02 = 0.0685. That's inconsistent, right?
↑ comment by Bucky · 2019-01-01T13:32:34.346Z
Argh, you’re right, I didn’t check that one. P(OM) cancels in the P(G) equation, so that one isn’t overconstrained.
However, for the P(OM) equation, 4 variables is overconstrained; 3 are enough.
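A quick sketch of the overconstraint point, using the thread's numbers: fix any three of the quantities and the fourth is forced by the law of total probability, so asserting all four independently is what creates room for inconsistency.

```python
def implied_p_om(p_g, p_om_given_g, p_om_given_ng):
    # Law of total probability: P(OM) is forced by the other three numbers.
    return p_g * p_om_given_g + (1 - p_g) * p_om_given_ng

# With the thread's three inputs, P(OM) is already determined:
forced = implied_p_om(0.05, 0.99, 0.02)
print(round(forced, 4))  # 0.0685 -- independently asserting P(OM) = 0.9 overconstrains
```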
comment by Dagon · 2018-12-31T20:39:46.994Z
Some options for addressing this:
1) Be more specific in your probabilities. What experiences are included or excluded from these predictions? Often, this exercise will show you that you have unreasonable estimates for one of these figures, which may or may not bring your beliefs into consistency.
2) Recognize that these probability estimates are pretty wild guesses, and accept that they're probably wrong. Inconsistent beliefs necessarily include falsehoods, but that doesn't mean you have enough information to improve them.
3) See if you can gather any evidence for some of the intermediate probabilities you're working with. These may give hints toward which of them to adjust.
comment by Pattern · 2019-01-04T04:54:40.026Z
There may be a difference between "No God" and "Not God". P(¬G) includes every other possibility: 2 gods, 3 gods, 0 gods, aliens creating humans, this being a simulation, everything we can think of and more. For this reason, some suggest working in odds rather than probabilities (updating them with Bayes' rule appropriately), because then the probabilities we consider need not sum to one: we can determine which of the possibilities under consideration is more likely, instead of determining what is true. (For example, if we are considering whether the deck of cards someone else is using for a poker game is ordinary or has 4 extra aces, we may acquire enough evidence that the second possibility becomes an order of magnitude more probable. There might not be 8 aces, but we may be very confident that either the deck is not ordinary or someone is cheating, possibly the person shuffling the deck.)
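The poker example can be sketched in odds form. The prior odds and the number of observed aces are assumptions for illustration, and card draws are treated as independent for simplicity:

```python
# Hypotheses: deck is ordinary (4 aces in 52) vs. has 4 extra aces (8 in 56).
prior_odds = 1 / 20                 # rigged : ordinary (assumed skeptical prior)

# Bayes factor for one dealt card turning out to be an ace:
bayes_factor = (8 / 56) / (4 / 52)  # = 13/7, about 1.86 in favour of "rigged"

# After six aces show up over the night, multiply the odds once per ace:
odds = prior_odds * bayes_factor ** 6
print(round(odds, 2), round(odds / prior_odds, 1))
```

The six observed aces multiply the odds by (13/7)^6, roughly a factor of 41, i.e. more than an order of magnitude, and at no point do the two hypotheses need to exhaust the space or sum to one.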
Also, if you ever use Bayes' rule and say "that can't be right because of X", keep going. Are there more givens you're missing?