Probability is in the Mind
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-12T04:08:30.000Z · LW · GW
Yesterday I spoke of the Mind Projection Fallacy, giving the example of the alien monster who carries off a girl in a torn dress for intended ravishing—a mistake which I imputed to the artist's tendency to think that a woman's sexiness is a property of the woman herself, woman.sexiness, rather than something that exists in the mind of an observer, and probably wouldn't exist in an alien mind.
The term "Mind Projection Fallacy" was coined by the late great Bayesian Master, E. T. Jaynes, as part of his long and hard-fought battle against the accursèd frequentists. Jaynes was of the opinion that probabilities were in the mind, not in the environment—that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.
I cannot do justice to this ancient war in a few words—but the classic example of the argument runs thus:
You have a coin.
The coin is biased.
You don't know which way it's biased or how much it's biased. Someone just told you, "The coin is biased" and that's all they said.
This is all the information you have, and the only information you have.
You draw the coin forth, flip it, and slap it down.
Now—before you remove your hand and look at the result—are you willing to say that you assign a 0.5 probability to the coin having come up heads?
The frequentist says, "No. Saying 'probability 0.5' means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1. But we know that the coin is biased, so it can have any probability of coming up heads except 0.5."
The Bayesian says, "Uncertainty exists in the map, not in the territory. In the real world, the coin has either come up heads, or come up tails. Any talk of 'probability' must refer to the information that I have about the coin—my state of partial ignorance and partial knowledge—not just the coin itself. Furthermore, I have all sorts of theorems showing that if I don't treat my partial knowledge a certain way, I'll make stupid bets. If I've got to plan, I'll plan for a 50/50 state of uncertainty, where I don't weigh outcomes conditional on heads any more heavily in my mind than outcomes conditional on tails. You can call that number whatever you like, but it has to obey the probability laws on pain of stupidity. So I don't have the slightest hesitation about calling my outcome-weighting a probability."
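To make "stupid bets" concrete, here is the simplest Dutch-book calculation (a sketch; the sixty-cent prices are invented for illustration). Suppose your outcome-weights for heads and tails are 0.6 each, so they sum to more than 1: you then regard 60 cents as a fair price for a ticket paying $1 if heads, and likewise for tails. A bookie happily sells you both tickets:

```latex
\underbrace{0.60 + 0.60}_{\text{price of both tickets}} \;-\; \underbrace{1.00}_{\text{payout, heads or tails}} \;=\; 0.20 \quad\text{guaranteed loss.}
```

Weights that obey the probability laws are exactly the ones that cannot be exploited this way; that is the content of the theorems the Bayesian is waving at.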
I side with the Bayesians. You may have noticed that about me.
Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong. Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you. But, if you don't know which way the coin is biased on this one occasion, so what?
I believe there was a lawsuit where someone alleged that the draft lottery was unfair, because the slips with names on them were not being mixed thoroughly enough; and the judge replied, "To whom is it unfair?"
To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails. But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin's fall in advance—not with certainty, but with 90% accuracy. Then what would the real probability be?
There is no "real probability". The robot has one state of partial information. You have a different state of partial information. The coin itself has no mind, and doesn't assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.
So that is the Bayesian view of things, and I would now like to point out a couple of classic brainteasers that derive their brain-teasing ability from the tendency to think of probabilities as inherent properties of objects.
Let's take the old classic: You meet a mathematician on the street, and she happens to mention that she has given birth to two children on two separate occasions. You ask: "Is at least one of your children a boy?" The mathematician says, "Yes, he is."
What is the probability that she has two boys? If you assume that the prior probability of a child being a boy is 1/2, then the probability that she has two boys, on the information given, is 1/3. The prior probabilities were: 1/4 two boys, 1/2 one boy one girl, 1/4 two girls. The mathematician's "Yes" response has probability ~1 in the first two cases, and probability ~0 in the third. Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl.
But suppose that instead you had asked, "Is your eldest child a boy?" and the mathematician had answered "Yes." Then the probability of the mathematician having two boys would be 1/2, since the eldest child is a boy and the younger child can be anything it pleases.
Likewise if you'd asked "Is your youngest child a boy?" The probability of their being both boys would, again, be 1/2.
Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy. So how can the answer in the first case be different from the answer in the latter two?
Or here's a very similar problem: Let's say I have four cards, the ace of hearts, the ace of spades, the two of hearts, and the two of spades. I draw two cards at random. You ask me, "Are you holding at least one ace?" and I reply "Yes." What is the probability that I am holding a pair of aces? It is 1/5. There are six possible combinations of two cards, with equal prior probability, and you have just eliminated the possibility that I am holding a pair of twos. Of the five remaining combinations, only one combination is a pair of aces. So 1/5.
Now suppose that instead you asked me, "Are you holding the ace of spades?" If I reply "Yes", the probability that the other card is the ace of hearts is 1/3. (You know I'm holding the ace of spades, and there are three possibilities for the other card, only one of which is the ace of hearts.) Likewise, if you ask me "Are you holding the ace of hearts?" and I reply "Yes", the probability I'm holding a pair of aces is 1/3.
But then how can it be that if you ask me, "Are you holding at least one ace?" and I say "Yes", the probability I have a pair is 1/5? Either I must be holding the ace of spades or the ace of hearts, as you know; and either way, the probability that I'm holding a pair of aces is 1/3.
How can this be? Have I miscalculated one or more of these probabilities?
If you want to figure it out for yourself, do so now, because I'm about to reveal...
That all stated calculations are correct.
As for the paradox, there isn't one. The appearance of paradox comes from thinking that the probabilities must be properties of the cards themselves. The ace I'm holding has to be either hearts or spades; but that doesn't mean that your knowledge about my cards must be the same as if you knew I was holding hearts, or knew I was holding spades.
It may help to think of Bayes's Theorem:
P(H|E) = P(E|H)P(H) / P(E)
That last term, where you divide by P(E), is the part where you throw out all the possibilities that have been eliminated, and renormalize your probabilities over what remains.
Now let's say that you ask me, "Are you holding at least one ace?" Before I answer, your probability that I say "Yes" should be 5/6.
But if you ask me "Are you holding the ace of spades?", your prior probability that I say "Yes" is just 1/2.
So right away you can see that you're learning something very different in the two cases. You're going to be eliminating some different possibilities, and renormalizing using a different P(E). If you learn two different items of evidence, you shouldn't be surprised at ending up in two different states of partial information.
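Spelled out with the theorem (a short worked check; write H for "I am holding two aces", so P(H) = 1/6 over the six equiprobable hands, and P(E|H) = 1 for either question):

```latex
P(H \mid \text{yes to ``at least one ace''}) = \frac{1 \cdot \tfrac{1}{6}}{\tfrac{5}{6}} = \frac{1}{5},
\qquad
P(H \mid \text{yes to ``ace of spades''}) = \frac{1 \cdot \tfrac{1}{6}}{\tfrac{1}{2}} = \frac{1}{3}.
```

Same prior, same likelihood, different P(E): that is the whole difference.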
Similarly, if I ask the mathematician, "Is at least one of your two children a boy?" I expect to hear "Yes" with probability 3/4, but if I ask "Is your eldest child a boy?" I expect to hear "Yes" with probability 1/2. So it shouldn't be surprising that I end up in a different state of partial knowledge, depending on which of the two questions I ask.
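All four numbers are easy to check by brute force. A minimal Monte Carlo sketch in Python (the function names are mine, not anything canonical):

```python
import random

def trial_children():
    """Two independent children, each boy or girl with probability 1/2."""
    return [random.choice("BG") for _ in range(2)]

def trial_cards():
    """Two cards drawn from {AS, AH, 2S, 2H} without replacement."""
    return random.sample(["AS", "AH", "2S", "2H"], 2)

def estimate(condition, target, trial, n=100_000):
    """Estimate P(target | condition) by rejection sampling."""
    hits = total = 0
    while total < n:
        x = trial()
        if condition(x):
            total += 1
            hits += target(x)
    return hits / total

# Children: ~1/3 given "at least one boy"; ~1/2 given "the eldest is a boy".
print(estimate(lambda c: "B" in c, lambda c: c == ["B", "B"], trial_children))
print(estimate(lambda c: c[0] == "B", lambda c: c == ["B", "B"], trial_children))

# Cards: ~1/5 given "at least one ace"; ~1/3 given "holding the ace of spades".
is_pair = lambda h: all(card[0] == "A" for card in h)
print(estimate(lambda h: any(card[0] == "A" for card in h), is_pair, trial_cards))
print(estimate(lambda h: "AS" in h, is_pair, trial_cards))
```

Conditioning on the different answers really is the only thing that differs between the runs.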
The only reason for seeing a "paradox" is thinking as though the probability of holding a pair of aces is a property of cards that have at least one ace, or a property of cards that happen to contain the ace of spades. In which case, it would be paradoxical for card-sets containing at least one ace to have an inherent pair-probability of 1/5, while card-sets containing the ace of spades had an inherent pair-probability of 1/3, and card-sets containing the ace of hearts had an inherent pair-probability of 1/3.
Similarly, if you think a 1/3 probability of being both boys is an inherent property of child-sets that include at least one boy, then that is not consistent with child-sets of which the eldest is male having an inherent probability of 1/2 of being both boys, and child-sets of which the youngest is male having an inherent 1/2 probability of being both boys. It would be like saying, "All green apples weigh a pound, and all red apples weigh a pound, and all apples that are green or red weigh half a pound."
That's what happens when you start thinking as if probabilities are in things, rather than probabilities being states of partial information about things.
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
193 comments
comment by GBM · 2008-03-12T06:19:36.000Z · LW(p) · GW(p)
It seems to me you're using "perceived probability" and "probability" interchangeably. That is, you're "defining" probability as the probability that an observer assigns based on certain pieces of information. Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value? Or using your biased coin example: our information may tell us to assume a 50/50 chance, but the man may be correct in saying that the coin has a bias--that is, the coin may really come up heads 80% of the time, but we must assume a 50% chance to make the decision, until we can be certain of the 80% chance ourselves. What am I missing? I would say that the Gomboc (http://tinyurl.com/2rffxs) has a 100% chance of righting itself, inherently. I do not understand how this is incorrect.
↑ comment by ksvanhorn · 2011-01-22T06:06:27.065Z · LW(p) · GW(p)
"Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value?"
No. The unpredictability of a die roll or coin flip is not due to any inherent physical property of the objects; it is simply due to lack of information. Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.
Let's look at the simpler case of the coin flip. As Jaynes explains it, consider the phase space for the coin's motion at the moment it leaves your fingers. Some points in that phase space will result in the coin landing heads up; color these points black. Other points in the phase space will result in the coin landing tails up; color these points white. If you examined the phase space under a microscope (metaphorically speaking) you would see an intricate pattern of black and white, with even a small movement in the phase space crossing many boundaries between a black region and a white region.
If you knew the initial conditions precisely enough, you would know whether the coin was in a white or black region of phase space, and you would then have a probability of either 1 or 0 for it coming up heads.
It's more typical that we don't have such precise measurements, and so we can only pin down the coin's location in phase space to a region that contains many, many black subregions and many, many white subregions... effectively it's just gray, and the shade of gray is your probability for heads given your measurement of the initial conditions.
So you see that the answer to "what is the probability of the coin landing heads up" depends on what information you have available.
Of course, in practice you typically don't even have the lesser level of information assumed above -- you don't know enough about the coin, even in principle, to compute which points in phase space are black and which are white, or what proportion of the points are black versus white in the region corresponding to what you know about the initial conditions. Here's where symmetry arguments then give you P(heads) = 1/2.
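A deliberately crude toy model of that picture (my own sketch, not Jaynes's calculation; the physics is reduced to a spin rate and a flight time):

```python
import random

def lands_heads(spin_hz, airtime_s):
    """Deterministic toy physics: the coin starts heads-up and completes
    2 * spin_hz * airtime_s half-turns before it is caught."""
    half_turns = int(2 * spin_hz * airtime_s)
    return half_turns % 2 == 0  # an even number of half-turns ends heads-up

# With precise knowledge of the initial conditions, the "probability" is 0 or 1:
print(lands_heads(20.00, 0.50))  # one definite outcome

# With coarse knowledge (spin rate known only to within a broad band), the
# black and white regions of phase space blur to gray:
n = 100_000
heads = sum(lands_heads(random.uniform(15.0, 25.0), 0.50) for _ in range(n))
print(heads / n)  # ~0.5
```

Nothing about the coin changed between the two calls; only the precision of the observer's information did.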
↑ comment by bigjeff5 · 2011-02-01T18:14:41.683Z · LW(p) · GW(p)
Case in point:
There are dice designed with very sharp corners in order to improve their randomness.
If randomness were an inherent property of dice, simply refining the shape shouldn't change the randomness, they are still plain balanced dice, after all.
But when you think of a "random" throw of the dice as a combination of the position of the dice in the hand, the angle of the throw, the speed and angle of the dice as they hit the table, the relative friction between the dice and the table, and the sharpness of the corners as they tumble to a stop, you realize that if you have all the relevant information you can predict the roll of the dice with high certainty.
It's only because we don't have the relevant information that we say the probabilities are 1/6.
↑ comment by Juno_Watt · 2013-05-21T17:27:32.343Z · LW(p) · GW(p)
If you knew the initial conditions precisely enough, you would know whether the coin was in a white or black region of phase space, and you would then have a probability of either 1 or 0 for it coming up heads.
Not necessarily, because of quantum uncertainty and indeterminism -- and yes, they can affect macroscopic systems.
The deeper point is, whilst there is a subjective ignorance-based kind of probability, that does not by itself mean there is not an objective, in-the-territory kind of 0<p<1 probability. The latter would be down to how the universe works, and you can't tell how the universe works by making conceptual, philosophical-style arguments.
So the kind of probability that is in the mind is in the mind, and the other kind is a separate issue. (Of course, the existence of objective probability doesn't follow from the existence of subjective probability any more than its non existence does).
↑ comment by BeanSprugget · 2020-10-26T17:39:32.279Z · LW(p) · GW(p)
Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.
I'm curious about how how quantum uncertainty works exactly. You can make a prediction with models and measurements, but when you observe the final result, only one thing happens. Then, even if an agent is cut off from information (i.e. observation is physically impossible), it's still a matter of predicting/mapping out reality.
I don't know much about the specifics of quantum uncertainty, though.
comment by Roland2 · 2008-03-12T06:28:41.000Z · LW(p) · GW(p)
GBM:
Q: What is the probability for a pseudo-random number generator to generate a specific number as its next output?
A: 1 or 0, because you can actually calculate the next number if you have the available information.
Q: What probability do you assign to a specific number as being its next output if you don't have the information to calculate it?
Replace pseudo-random number generator with dice and repeat.
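A sketch of the point in Python (random.Random is just standing in for the generator; the seed value is arbitrary):

```python
import random

generator = random.Random(12345)  # the machine's internal state: "the territory"
informed = random.Random(12345)   # an observer who knows the seed and algorithm

prediction = informed.randint(1, 6)
actual = generator.randint(1, 6)

# The informed observer assigns probability 1 to one face and 0 to the rest;
# an observer without the seed can do no better than 1/6 for each face.
print(prediction == actual)  # always True
```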
↑ comment by rstarkov · 2011-03-24T15:36:12.010Z · LW(p) · GW(p)
Even more important, I think, is the realization that, to decide how much you're willing to bet on a specific outcome, all of the following are essentially the same:
- you do have the information to calculate it but haven't calculated it yet
- you don't have the information to calculate it but know how to obtain such information
- you don't have the information to calculate it and no way to obtain it
The bottom line is that you don't know what the next value will be, and that's the only thing that matters.
comment by Ian_C. · 2008-03-12T06:33:34.000Z · LW(p) · GW(p)
So therefore a person with perfect knowledge would not need probability. Is this another interpretation of "God does not play dice?" :-)
↑ comment by dlthomas · 2011-09-23T20:58:21.740Z · LW(p) · GW(p)
I think this is the only interpretation of "God does not play dice."
↑ comment by Nornagest · 2011-09-23T21:03:42.632Z · LW(p) · GW(p)
At least in its famous context, I always interpreted that quote as a metaphorical statement of aesthetic preference for a deterministic over a stochastic world, rather than an actual statement about the behavior of a hypothetical omniscient being. A lot of bullshit's been spilled on Einstein's religious preferences, but whatever the truth I'd be very surprised if he conditioned his response to a scientific question on something that speculative.
↑ comment by dlthomas · 2011-09-23T22:02:46.656Z · LW(p) · GW(p)
This is more or less what I was saying, but left (perhaps too) much of it implicit.
If there were an entity with perfect knowledge of the present ("God"), they would have perfect knowledge of the future, and thus "not need probability", iff the universe is deterministic. (If there is an entity with perfect knowledge of the future of a nondeterministic reality, we have described our "reality" too narrowly - include that entity and it is necessarily deterministic or the perfect knowledge isn't).
comment by Caledonian2 · 2008-03-12T13:10:38.000Z · LW(p) · GW(p)
The Bayesian says, "Uncertainty exists in the map, not in the territory. In the real world, the coin has either come up heads, or come up tails."
Alas, the coin was part of an erroneous stamping, and is blank on both sides.
comment by Jef_Allbright · 2008-03-12T14:10:23.000Z · LW(p) · GW(p)
In other words, probability is not likelihood.
comment by PK · 2008-03-12T16:32:38.000Z · LW(p) · GW(p)
Here is another example me, my dad and my brother came up with when we were discussing probability.
Suppose there are 4 cards: an ace and 3 kings. They are shuffled and placed face down. I didn't look at the cards, my dad looked at the first card, and my brother looked at the first and second cards. What is the probability of the ace being one of the last 2 cards? For me: 1/2. For my dad: if he saw the ace, it is 0; otherwise, 2/3. For my brother: if he saw the ace, it is 0; otherwise, 1.
How can there be different probabilities of the same event? It is because probability is something in the mind calculated because of imperfect knowledge. It is not a property of reality. Reality will take only a single path. We just don't know what that path is. It is pointless to ask for "the real likelihood" of an event. The likelihood depends on how much information you have. If you had all the information, the likelihood of the event would be 100% or 0%.
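A rejection-sampling sketch of this in Python (names mine) that recovers all three observers' numbers from the same shuffles:

```python
import random

def p_ace_in_last_two(cards_seen, n=100_000):
    """P(ace among the last two cards | the first cards_seen cards held no ace)."""
    hits = total = 0
    while total < n:
        deck = ["A", "K", "K", "K"]
        random.shuffle(deck)
        if "A" not in deck[:cards_seen]:  # condition on what this observer saw
            total += 1
            hits += "A" in deck[2:]
    return hits / total

print(p_ace_in_last_two(0))  # me (saw nothing):                    ~1/2
print(p_ace_in_last_two(1))  # dad (saw card 1, no ace):            ~2/3
print(p_ace_in_last_two(2))  # brother (saw cards 1 and 2, no ace): ~1
```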
comment by Constant2 · 2008-03-12T16:41:22.000Z · LW(p) · GW(p)
The competent frequentist would presumably not be befuddled by these supposed paradoxes. Since he would not be befuddled (or so I am fairly certain), the "paradoxes" fail to prove the superiority of the Bayesian approach. Frankly, the treatment of these "paradoxes" in terms of repeated experiments seems so straightforward that I don't know how you can possibly think there's a problem.
↑ comment by [deleted] · 2013-07-21T15:08:34.094Z · LW(p) · GW(p)
Say you have a circle. On this circle you draw the inscribed equilateral triangle.
Simple, right?
Okay. For a random chord in this circle, what is the probability that the chord is longer than the side in the triangle?
So, to choose a random chord, there are three obvious methods:
- Pick a point on the circle perimeter, and draw the triangle with that point as a vertex. Now when you pick a second point on the circle perimeter as the other endpoint of your chord, you can plainly see that in 1/3 of the cases, the resulting chord will be longer than the triangle's side.
- Pick a random radius (line from center to perimeter). Rotate the triangle so one of its sides bisects this radius. Now you pick a point on the radius to be the midpoint of your chord. Apparently now, the probability of the chord being longer than the side is 1/2.
- Pick a random point inside the circle to be the midpoint of your chord (chords are unique by midpoint). If the midpoint of a chord falls inside the circle inscribed by the triangle, it is longer than the side of the triangle. The inscribed circle has an area 1/4 of the circumscribing circle, and that is our probability.
WHAT NOW?!
The solution is to choose the distribution of chords that lets us be maximally indifferent/ignorant, i.e. the one that is scale, translation, and rotation invariant. The second solution has those properties.
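For a unit circle, the inscribed triangle's side has length √3, so each recipe can be checked numerically. A Monte Carlo sketch (my code, not part of the original comment):

```python
import math
import random

SIDE = math.sqrt(3)  # side of the inscribed equilateral triangle, unit circle

def chord_endpoints():
    """Method 1: two uniform points on the perimeter."""
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_radius():
    """Method 2: uniform midpoint along a random radius."""
    d = random.uniform(0, 1)  # distance of the chord's midpoint from the center
    return 2 * math.sqrt(1 - d * d)

def chord_midpoint():
    """Method 3: uniform midpoint inside the disk (rejection-sampled)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        r2 = x * x + y * y
        if r2 <= 1:
            return 2 * math.sqrt(1 - r2)

n = 100_000
for method in (chord_endpoints, chord_radius, chord_midpoint):
    p = sum(method() > SIDE for _ in range(n)) / n
    print(method.__name__, round(p, 3))  # ~0.333, ~0.5, ~0.25
```

Three well-defined sampling procedures, three well-defined answers; the "paradox" lives entirely in the underspecified word "random".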
comment by Sudeep2 · 2008-03-12T16:41:26.000Z · LW(p) · GW(p)
"Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind."
Eliezer, in quantum mechanics, one does not say that one does not have knowledge of both position and momentum of a particle simultaneously. Rather, one says that one CANNOT have such knowledge. This contradicts your statement that ignorance is in the mind. If quantum mechanics is true, then ignorance/uncertainty is a part of nature and not just something that agents have.
↑ comment by [deleted] · 2013-07-21T15:11:13.902Z · LW(p) · GW(p)
Whither knowledge? It is not knowledge that causes this effect; it is the fact that momentum amplitude and position amplitude relate to one another by a Fourier transform.
A narrow spike in momentum is a wide blob in position and vice versa by mathematical necessity.
Quantum mechanics' apparent weirdness comes from wanting to measure quantum phenomena with classical terms.
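For instance, for a Gaussian wave packet (a standard textbook computation, added here for concreteness), squeezing the position spread necessarily widens the momentum spread:

```latex
\psi(x) \propto e^{-x^2/4\sigma^2}
\quad\xrightarrow{\ \text{Fourier}\ }\quad
\tilde{\psi}(p) \propto e^{-\sigma^2 p^2/\hbar^2},
\qquad
\Delta x\,\Delta p = \sigma \cdot \frac{\hbar}{2\sigma} = \frac{\hbar}{2}.
```

No observer appears anywhere in that calculation.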
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-12T16:57:15.000Z · LW(p) · GW(p)
Constant: The competent frequentist would presumably not be befuddled by these supposed paradoxes.
Not the last two paradoxes, no. But the first case given, the biased coin whose bias is not known, is indeed a classic example of the difference between Bayesians and frequentists. The frequentist says:
"The coin's bias is not a random variable! It's a fixed fact! If you repeat the experiment, it won't come out to a 0.5 long-run frequency of heads!" (Likewise when the fact to be determined is the speed of light, or whatever.) "If you flip the coin 10 times, I can make a statement about the probability that the observed ratio will be within some given distance of the inherent propensity, but to say that the coin has a 50% probability of turning up heads on the first occasion is nonsense - that's just not the real probability, which is unknown."
According to the frequentist, apparently there is no rational way to manage your uncertainty about a single flip of a coin of unknown bias, since whatever you do, someone else will be able to criticize your belief as "subjective" - such a devastating criticism that you may as well, um, flip a coin. Or consult a magic 8-ball.
Sudeep: If quantum mechanics is true, then ignorance/uncertainty is a part of nature and not just something that agents have.
A common misconception - Jaynes railed against that idea too, and he wasn't even equipped with the modern understanding of decoherence. In quantum mechanics, it's an objective fact that the blobs of amplitude making up reality sometimes split in two, and you can't predict what "you" will see, when that happens, because it is an objective fact that different versions of you will see different things. But all this is completely mechanical, causal, and deterministic - the splitting of observers just introduces an element of anthropic pseudo-uncertainty, if you happen to be one of those observers. The splitting is not inherently related to the act of measurement by a conscious agent, or any kind of agent; it happens just as much when a system is "measured" by a photon bouncing off and interacting with a rock.
There are other interpretations of quantum mechanics, but they don't make any sense. Making this fully clear will require more prerequisite posts first, though.
↑ comment by radfordd · 2011-05-18T15:44:34.094Z · LW(p) · GW(p)
Eliezer:
"The coin's bias is not a random variable! It's a fixed fact! If you repeat the experiment, it won't come out to a 0.5 long-run frequency of heads!"
You're repeating the wrong experiment.
The correct experiment for a frequentist to repeat is one where a coin is chosen from a pool of biased coins, and tossed once. By repeating that experiment, you learn something about the average bias in the pool of coins. For a symmetrically biased pool, the frequency of heads would approach 0.5.
So your original premise is wrong. A frequentist approach requires a series of trials of the correct experiment. Neither the frequentist nor the Bayesian can rationally evaluate unknown probabilities. A better way to say that might be, "In my view, it's okay for both frequentists and Bayesians to say 'I don't know.'"
↑ comment by buybuydandavis · 2011-11-05T11:08:35.035Z · LW(p) · GW(p)
I think EY's example here should actually be targeted at the probability-as-propensity theory of Von Mises (Richard, not Ludwig), not the frequentist theory, although even frequentists often conflate the two.
The probability for you is not some inherent propensity of the physical situation, because the coin will flip depending on how it is weighted and how hard it is flipped. The randomness isn't in the physical situation, but in our limited knowledge of the physical situation.
The argument against frequentist thinking is that we're not interested in a long term frequency of an experiment. We want to know how to bet now. If you're only going to talk about long term frequencies of repeatable experiments, you're not that useful when I'm facing one con man with a biased coin.
That singular event is what it is. If you're going to argue that you have to find the right class of events in your head to sample from, you're already halfway down the road to bayesianism. Once you notice that the class of events is different for the con man than it is for you, because of your differing states of knowledge, you'll make it all the way there.
Notice how you thought up a symmetrically biased pool. Where did that pool come from? Aren't you really just injecting a prior on the physical characteristics into your frequentist analysis?
If you push frequentism past the usual frequentist limitations (physical propensity, repeated experiments), you eventually recreate bayesianism. "Inside every non-Bayesian, there is a Bayesian struggling to get out".
↑ comment by TheAncientGeek · 2016-01-29T13:40:58.826Z · LW(p) · GW(p)
I think EY's example here should actually be targeted at the probability-as-propensity theory of Von Mises (Richard, not Ludwig), not the frequentist theory, although even frequentists often conflate the two.
yep.
↑ comment by Peterdjones · 2011-07-03T20:34:11.126Z · LW(p) · GW(p)
There are other interpretations of quantum mechanics, but they don't make any sense.
In your opinion. Many Worlds does not make sense in the opinions of its critics. You are entitled to back an interpretation as you are entitled to back a football team. You are not entitled to portray your favourite interpretation of quantum mechanics as a matter of fact. If interpretations were provable, they wouldn't be called interpretations.
↑ comment by Perplexed · 2011-07-03T20:44:28.027Z · LW(p) · GW(p)
As I understand it, EY's commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer's prior metaphysical commitments. Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
↑ comment by Peterdjones · 2011-07-03T21:16:16.526Z · LW(p) · GW(p)
He still shouldn't be stating it as a fact when it is based on "commitments".
↑ comment by [deleted] · 2011-07-03T21:33:34.583Z · LW(p) · GW(p)
Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
↑ comment by Eugine_Nier · 2011-07-04T07:33:38.833Z · LW(p) · GW(p)
Aumann's agreement theorem.
assumes common priors, i.e., a common metaphysical commitment.
↑ comment by [deleted] · 2011-07-04T07:42:21.080Z · LW(p) · GW(p)
However, Robin Hanson has presented an argument that Bayesians who agree about the processes that gave rise to their priors (e.g., genetic and environmental influences) should, if they adhere to a certain pre-rationality condition, have common priors.
The metaphysical commitment necessary is weaker than it looks.
↑ comment by MarkusRamikin · 2011-07-07T10:47:35.423Z · LW(p) · GW(p)
This theorem (valuable though it may be) strikes me as one of the most easily abused things ever. I think Ayn Rand would have liked it: if you don't agree with me, you're not as committed to Reason as I am.
↑ comment by jsalvatier · 2011-07-07T16:58:23.459Z · LW(p) · GW(p)
I believe he's saying that rational people should agree on metaphysics (or probability distributions over different systems). In other words, to disagree about MWI, you need to dispute EY's chain of reasoning metaphysics->evidence->MWI, which Perplexed says is difficult, or dispute EY's metaphysical commitments, which Perplexed implies is relatively easier.
↑ comment by [deleted] · 2013-07-14T20:37:33.821Z · LW(p) · GW(p)
MWI distinguishes itself from Copenhagen by making testable predictions. We simply don't have the technology yet to test them to a sufficient level of precision to distinguish which meta-theory models reality.
See: http://www.hedweb.com/manworld.htm#unique
In the mean time, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.
↑ comment by OccamsTaser · 2013-07-14T21:14:21.306Z · LW(p) · GW(p)
In the mean time, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.
Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd as they would be the only "nonlinear, non CPT-symmetric, acausal, FTL, discontinuous..." part of all physics. He then argued that since all single-world QM interpretations are absurd (a non-sequitur on his part, as not all single-world QM interpretations involve a collapse), many-worlds wins as the only multi-world interpretation (which is also slightly inaccurate, not that many-minds is taken that seriously around here). Ultimately, I feel that LW assigns too high a prior to MW (and too low a prior to Bohmian mechanics).
↑ comment by [deleted] · 2013-07-15T03:23:31.767Z · LW(p) · GW(p)
It's not just about collapse - every single-world QM interpretation either involves extra postulates, non-locality or other surprising alterations of physical law, or yields falsified predictions. The FAQ I linked to addresses these points in great detail.
MWI is simple in the Occam's razor sense - it is what falls out of the equations of QM if you take them to represent reality at face value. Single-world meta-theories require adding additional restrictions which are at this time completely unjustified from the data.
↑ comment by TobyBartels · 2011-07-07T02:53:35.288Z · LW(p) · GW(p)
I always found it really strange that EY believes in Bayesianism when it comes to probability theory but many worlds when it comes to quantum physics. Mathematically, probability theory and quantum physics are close analogues (of which quantum statistical physics is the common generalisation), and this extends to their interpretations. (This doesn't apply to those interpretations of quantum physics that rely on a distinction between classical and quantum worlds, such as the Copenhagen interpretation, but I agree with EY that these don't ultimately make any sense.) There is a many-worlds interpretation of probability theory, and there is a Bayesian interpretation of quantum physics (to which I subscribe).
I need to write a post about this some time.
↑ comment by endoself · 2011-07-09T02:27:29.275Z · LW(p) · GW(p)
There is a many-worlds interpretation of probability theory, and there is a Bayesian interpretation of quantum physics (to which I subscribe).
Both of these are false. Consider the trillionth binary digit of pi. I do not know what it is, so I will accept bets where the payoff is greater than the loss, but not vice versa. However, there is obviously no other world where the trillionth binary digit of pi has a different value.
The latter is, if I understand you correctly, also wrong. I think that you are saying that there are 'real' values of position, momentum, spin, etc., but that quantum mechanics only describes our knowledge about them. This would be a hidden variable theory. There are very many constraints imposed by experiment on what hidden variable theories are possible, and all of the proposed ones are far more complex than MWI, making it very unlikely that any such theory will turn out to be true.
↑ comment by TobyBartels · 2011-07-09T04:55:55.118Z · LW(p) · GW(p)
I think that you are saying that there are 'real' values of position, momentum, spin, etc., but that quantum mechanics only describes our knowledge about them.
I am saying that the wave function (to be specific) describes one's knowledge about position, momentum, spin, etc., but I make no claim that these have any ‘real’ values.
In the absence of a real post, here are some links:
- John Baez (ed, 2003), Bayesian Probability Theory and Quantum Mechanics (a collection of Usenet posts, with an introduction);
- Carlton Caves et al (2001), Quantum probabilities as Bayesian probabilities (a paper published in Physical Review A).
By the way, you seem to have got this, but I'll say it anyway for the benefit of any other readers, since it's short and sums up the idea: The wave function exists in the map, not in the territory.
↑ comment by endoself · 2011-07-09T05:30:53.327Z · LW(p) · GW(p)
I have not read the latter link yet, though I intend to.
I am saying that the wave function (to be specific) describes one's knowledge about position, momentum, spin, etc., but I make no claim that these have any ‘real’ values.
What do you have knowledge of then? Or is there some concept that could be described as having knowledge of something without that thing having an actual value?
From Baez:
Probability theory is the special case of quantum mechanics in which one's algebra of observables is commutative.
This is horribly misleading. Bayesian probability can be applied perfectly well in a universe that obeys MWI while being kept completely separate mathematically from the quantum mechanical uncertainty.
↑ comment by TobyBartels · 2011-07-10T01:01:14.111Z · LW(p) · GW(p)
Probability theory is the special case of quantum mechanics in which one's algebra of observables is commutative.
This is horribly misleading. Bayesian probability can be applied perfectly well in a universe that obeys MWI while being kept completely separate mathematically from the quantum mechanical uncertainty.
As a mathematical statement, what Baez says is certainly correct (at least for some reasonable mathematical formalisations of ‘probability theory’ and ‘quantum mechanics’). Note that Baez is specifically discussing quantum statistical mechanics (which I don't think he makes clear); non-statistical quantum mechanics is a different special case which (barring trivialities) is completely disjoint from probability theory.
Of course, the statement can still be misleading; as you note, it's perfectly possible to interpret quantum statistical physics by tacking Bayesian probability on top of a many-worlds interpretation of non-statistical quantum mechanics. That is, it's possible but (I argue) unwise; because if you do this, then your beliefs do not pay rent!
The classic example is a spin-1/2 particle that you believe to be spin-up with 50% probability and spin-down with 50% probability. (I mean probability here, not a superposition.) An alternative map is that you believe that the particle is spin-right with 50% probability and spin-left with 50% probability. (Now superposition does play a part, as spin-right and spin-left are both equally weighted superpositions of spin-up and spin-down, but with opposite relative phases.) From the Bayesian-probability-tacked-onto-MWI point of view, these are two very different maps that describe incompatible territories. Yet no possible observation can ever distinguish these! Specifically, if you measure the spin of the particle along any axis, both maps predict that you will measure the spin to be in one direction with 50% probability and in the other direction with 50% probability. (The wavefunctions give Born probabilities for the observations, which are then weighted according to your Bayesian probabilities for the wavefunctions, giving the result of 50% every time.)
In statistical mechanics as it is practised, no distinction is made between these two maps. (And since the distinction pays no rent in terms of predictions, I argue that no distinction should be made.) They are both described by the same ‘density matrix’; this is a generalisation of the notion of quantum state as a state vector. (Specifically, the unit vectors up to phase in the Hilbert space describe the pure states of the system, which are only a degenerate case of the mixed states described by the density matrices.) A lot of the language of statistical mechanics is frequentist-influenced talk about ‘ensembles’, but if you just reinterpret all of this consistently in a Bayesian way, then the practice of statistical mechanics gives you the Bayesian interpretation.
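To spell out that identification for the spin-1/2 example (standard algebra, included here for concreteness): writing the spin-right and spin-left states as superpositions of up and down, the cross terms cancel and both 50/50 mixtures give the same density matrix,

```latex
\tfrac{1}{2}|{\uparrow}\rangle\langle{\uparrow}| + \tfrac{1}{2}|{\downarrow}\rangle\langle{\downarrow}|
= \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
= \tfrac{1}{2}|{\rightarrow}\rangle\langle{\rightarrow}| + \tfrac{1}{2}|{\leftarrow}\rangle\langle{\leftarrow}|,
\qquad
|{\rightarrow}\rangle, |{\leftarrow}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle \pm |{\downarrow}\rangle\bigr).
```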
I am saying that the wave function (to be specific) describes one's knowledge about position, momentum, spin, etc., but I make no claim that these have any ‘real’ values.
What do you have knowledge of then? Or is there some concept that could be described as having knowledge of something without that thing having an actual value?
This is the weak point in the Bayesian interpretation of quantum mechanics. I find it very analogous to the problem of interpreting the Born probabilities in MWI. Eliezer cannot yet clearly answer these questions that he poses:
What are the Born probabilities, probabilities of? Here's the map - where's the territory?
And neither can I (at least, not in a way that would satisfy him). In the all-Bayesian interpretation, the Born probabilities are simply Bayesian probabilities, so there are no special problems about them; but as you point out, it's still hard to say what the territory is like.
My best answer is simply what you suggest, that our maps of the universe assign probabilities to various possible values of things that do not (necessarily) have any actual values. This may seem like a counterintuitive thing to do, but it works, and we have no other way of making a map.
By the way, I've thought of a couple more references:
- John Baez (1993), This Week's Finds #27;
- Toby Bartels (1998), Quantum measurement problem.
Baez (1993) is where I really learnt quantum statistical mechanics (despite having earlier taken a course in it), and my first (subtle) introduction to the Bayesian interpretation (not made explicit here). Note the talk about the ‘post-Everett school’, and recall that Everett is credited with founding the many-worlds interpretation (although he avoided the term ‘MWI’). The Bayesian interpretation could have been understood in the 1930s (and I have heard it argued, albeit unconvincingly, that it is what Bohr really meant all along), but it's really best understood in light of the modern understanding of decoherence that Everett started. We all-Bayesians are united with the many-worlders (and the Bohmians) in decrying the mystical separation of the universe into ‘quantum’ and ‘classical’ worlds and the reality of the ‘collapse of the wavefunction’. (That is, we do believe in the collapse of the wavefunction, but not in the territory; for us, it is simply the process of updating the map on the basis of new information, that is the application of a suitably generalised Bayes's Theorem.) We just think that the many-worlders have some unnecessary ontological baggage (like the Bohmians, but to a lesser degree).
Bartels (1998) is my first attempt to explain the Bayesian interpretation (on Usenet), albeit not a very good one. It's overly mathematical (and poorly so, since W*-algebras make a better mathematical foundation than C*-algebras). But it does include things that I haven't said here, (including mathematical details that you might happen to want). Still (even for the mathematics), if you read only one, read Baez.
Edit: I edited to use the word ‘world’ only in the technical sense of an interpretation.
↑ comment by endoself · 2011-07-12T05:21:10.376Z · LW(p) · GW(p)
As a mathematical statement, what Baez says is certainly correct.
I definitely don't disagree with that.
Specifically, if you measure the spin of the particle along any axis, both maps predict that you will measure the spin to be in one direction with 50% probability and in the other direction with 50% probability.
They can give different predictions. Maybe I can ask my friend who prepared the quantum state and ey can tell me which it really is. I might even be able to use that knowledge to predict the current state of the apparatus ey used to prepare the particle. Of course, it's also possible that my friend would refuse to tell me or that I got the particle already in this state without knowing how it got there. That would just be belief in the implied invisible. "On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved." I would say that this hypothesis is meaningful and almost certainly false. Not that it is "meaningless". Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false.
A final possibility is that there never was a pure state; the universe started off in a mixed state. In this example, whether this should be regarded as an ontologically fundamental mixed state or just a lack of knowledge on my part depends on which hypothesis is simpler. This would be too hard to judge definitively given our current understanding.
What are the Born probabilities, probabilities of? Here's the map - where's the territory?
In MWI, the Born probabilities aren't probabilities, at least not in the Bayesian sense. There is no subjective uncertainty; I know with very high probability that the cat is both alive and dead. Of course, that doesn't tell us what they are, just what they are not.
We all-Bayesians are united with the many-worlders (and the Bohmians) in decrying the mystical separation of the world into ‘quantum’ and ‘classical’ and the reality of the ‘collapse of the wavefunction’.
I think a large majority of physicists would agree that the collapse of the wavefunction isn't an actual process.
How would you analyze the Wigner's friend thought experiment? In order for Wigner's observations to follow the laws of QM, both versions of his friend must be calculated, since they have a chance to interfere with each other. Wouldn't both streams of conscious experience occur?
↑ comment by TobyBartels · 2011-07-13T15:27:34.802Z · LW(p) · GW(p)
They can give different predictions. [...]
I don't understand what you're saying in these paragraphs. You're not describing how the two situations lead to different predictions; you're describing the opposite: how different set-ups might lead to the two states.
Possibly you mean something like this: In situation A, my friend intended to prepare one spin-down particle, but I predict with 50% chance that they hooked up the apparatus backward and produced a spin-up particle instead. In situation B, they intended to prepare a spin-right particle, with the same chance of accidental reversal. These are different situations, but the difference lies in the apparatus, my friend's mind, the lab book, etc, not in the particle. It would be much the same if I knew that the machine always produced a spin-up particle and the up/down/right/left dial did nothing: the situations are different, but not because of the particle produced. (However, in this case, the particle is not even entangled with the dial reading.)
A final possibility is that there never was a pure state; the universe started off in a mixed state.
I especially don't know what you mean by this. The states that most people talk about when discussing quantum physics (including Eliezer in the Sequence) are pure states, and mixed states are probabilistic mixtures of these. If you're a Bayesian when it comes to classical probability (even if you believe in the wave function when it comes to purely quantum indeterminacy), then you should never believe that the real wave function is mixed; you just don't know which pure state it is. Unless you distinguish the map where the particle is spin-up or -down with equal odds from the map where the particle is definitely in the fully mixed state in the territory? Then you have an even greater plethora of distinctions between maps that pay no rent!
How would you analyze the Wigner's friend thought experiment?
For Schrödinger's Cat or Wigner's Friend, in any realistic situation, the cat or friend would quickly decohere and become entangled with my observations, leaving it in a mixed state: the common-sense situation where it's alive/happy/etc with 50% chance and dead/sad/etc with 50% chance. (Quantum physics should reproduce common sense in situations where that applies, and killing a cat with radioactive decay or a pseudorandom coin flip doesn't matter to the cat -- ordinarily.) However, if we imagine that we keep the cat or friend isolated (where common sense doesn't apply), then it is in a superposition of these instead of a mixture -- from my point of view. My friend's state of knowledge is different, of course; from that point of view, the state is completely determined (with or without decoherence). And how is it determined? I don't know, but I'll find out when I open the door and ask.
↑ comment by endoself · 2011-07-19T23:38:09.528Z · LW(p) · GW(p)
I don't understand what you're saying in these paragraphs. You're not describing how the two situations lead to different predictions; you're describing the opposite: how different set-ups might lead to the two states.
I did not explain this very well. My point was that when we don't know the particle's spin, it is still a part of the simplest description that we have of reality. It should not be any more surprising that a belief about a quantum mechanical state does not have any observable consequences than that a belief about other parts of the universe that cannot be seen due to inflation does not have any observable consequences.
Unless you distinguish between the map where the particle is spin-up or -down with equal odds from the map where the particle is definitely in the fullymixed state in the territory? Then you have an even greater plethora of distinctions between maps that pay no rent!
I included this just in case a theory that implies such a thing ever turns out to be simpler than alternatives. I thought this was relevant because I mistakenly thought that you had mentioned this distinction.
And how is it determined? I don't know, but I'll find out when I open the door and ask.
What if your friend and the cat are implemented on a reversible quantum computer? The amplitudes for your friend's two possible states may both affect your observations, so both would need to be computed.
↑ comment by TobyBartels · 2011-07-20T16:02:36.684Z · LW(p) · GW(p)
My point was that when we don't know the particle's spin, it is still a part of the simplest description that we have of reality.
Sure, the spin of the particle is a feature of the simplest description that we have. Nevertheless, no specific value of the particle's spin is a feature of the simplest description that we have; this is true in both the Bayesian interpretation and in MWI.
To be fair, if reality consists only of a single particle with spin 1/2 and no other properties (or more generally if there is a spin-1/2 particle in reality whose spin is not entangled with anything else), then according to MWI, reality consists (at least in part) of a specific direction in 3-space giving the axis and orientation of the particle's spin. (If the spin is greater than 1/2, then we need something a little more complicated than a single direction, but that's probably not important.) However, if the particle is entangled with something else, or even if its spin is entangled with some other property of the particle (such as its position or momentum), then the best that you can say is that you can divide reality mathematically into various worlds, in each of which the particle has a spin in a specific direction around a specific axis.
(In the Bohmian interpretation, it is true that the particle has a specific value of spin, or rather it has a specific value about any axis. But presumably this is not what you mean.)
As for which is the simplest description of reality, the Bayesian interpretation really is simpler. To fully describe reality as best I can with the knowledge that I have, in other words to write out my map completely, I need to specify less information in the fully Bayesian interpretation (FBI) than in MWI with Bayesian classical probability on top (MWI+BCP). This is because (as in the toy example of the spin-1/2 particle) different MWI+BCP maps correspond to the same FBI map; some additional information must be necessary to distinguish which MWI+BCP map to use.
If you're an objective Bayesian in the sense that you believe that the correct prior to use is determined entirely by what information one has, then I can't even tell how one would ever distinguish between the various MWI+BCP maps that correspond to a given FBI map. (A similar problem occurs if you define probability in terms of propensity to wager, since there is no way to settle the wagers.) Even if I ask my friend who prepared the state, my friend's choice to describe it one way rather than another way only gives me information about other things (the apparatus, my friend's mind, their lab book, etc). It may be possible to always choose a most uniform MWI+BCP map (in the toy example, a uniform probability distribution over the sphere); I'll have to think about this.
For the record, I do believe in the implied invisible, if it really is implied by the simplest description of reality. In this case, it's not.
I mistakenly thought that you had mentioned this distinction.
I certainly didn't mean to; from my point of view, that makes the MWI only more ridiculous, and I don't want to attack a straw man.
What if your friend and the cat are implemented on a reversible quantum computer? The amplitudes for your friend's two possible states may both affect your observations, so both would need to be computed.
So compute both. There are theoretical problems with implementing an observer on a reversible computer (quantum or otherwise), because Bayesian updating is not reversible; but from my perspective, I'll compute my state and believe whatever that comes out to.
Probably I don't understand what your question here really is. Is there a standard description of the problem of Wigner's friend on a quantum computer, preferably together with the WMI resolution of it, that you can link to or write down? (I can't find one online with a simple search.)
↑ comment by TobyBartels · 2011-07-23T09:22:50.584Z · LW(p) · GW(p)
I wrote:
The classic example is a spin-1/2 particle that you believe to be spin-up with 50% probability and spin-down with 50% probability.
I've begun to think that this is probably not a good example.
It's mathematically simple, so it is good for working out an example explicitly to see how the formalism works. (You may also want to consider a system with two spin-1/2 particles; but that's about as complicated as you need to get.) However, it's not good philosophically, essentially since the universe consists of more than just one particle!
Mathematically, it is a fact that, if a spin-1/2 particle is entangled with anything else in the universe, then the state of the particle is mixed, even if the state of the entire universe is pure. So a mixed state for a single particle suggests nothing philosophically, since we can still believe that the universe is in a pure state, which causes no problems for MWI. Indeed, endoself immediately looks at situations where the particle is so entangled! I should have taken this as a sign that my example was not doing its job.
I still stand by my responses to endoself, as far as they go. One of the minor attractions of the Bayesian interpretation for me is that it treats the entire universe and single particles in the same way; you don't have to constantly remind yourself that the system of interest is entangled with other systems that you'd prefer to ignore, in order to correctly interpret statements about the system. But it doesn't get at the real point.
The real point is that the entire universe is in a mixed state; I need to establish this. In the Bayesian interpretation, this is certainly true (since I don't have maximal information about the universe). According to MWI, the universe is in a pure state, but we don't know which. (I assume that you, the reader, don't know which; if you do, then please tell me!) So let's suppose that |psi> and |phi> are two states that the universe might conceivably be in (and assume that they're orthogonal to keep the math simple). Then if you believe that the real state of the universe is |psi> with 50% chance and |phi> with 50% chance, then this is a very different belief than the belief that it's (|psi> + |phi>)/sqrt(2) with 50% chance and (|psi> - |phi>)/sqrt(2) with 50% chance. Yet these two different beliefs lead to identical predictions, so you're drawing a map with extra irrelevant detail. In contrast, in the fully Bayesian interpretation, these are just two different ways of describing the same map, which is completely specified upon giving the density matrix (|psi><psi| + |phi><phi|)/2.
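Explicitly (a two-line check, using the orthogonality assumed above): with |±> = (|psi> ± |phi>)/sqrt(2), the cross terms |psi><phi| and |phi><psi| cancel between the two projectors, so

```latex
\tfrac{1}{2}|\psi\rangle\langle\psi| + \tfrac{1}{2}|\phi\rangle\langle\phi|
= \tfrac{1}{2}|{+}\rangle\langle{+}| + \tfrac{1}{2}|{-}\rangle\langle{-}|,
\qquad
|{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\psi\rangle \pm |\phi\rangle\bigr).
```

The two beliefs are literally the same density matrix, hence the same map.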
Edit: I changed uses of ‘world’ to ‘universe’; the former should be reserved for its technical sense in the MWI.
↑ comment by nshepperd · 2013-05-23T06:22:02.483Z · LW(p) · GW(p)
Specifically, if you measure the spin of the particle along any axis, both maps predict that you will measure the spin to be in one direction with 50% probability and in the other direction with 50% probability.
On the other hand, if the particle is spin up, the probability of observing "up" in an up-down measurement is 1, while the probability is 0 if the particle is down. So in the case of an up-down prior, observing "up" changes your probabilities, while in the case of a left-right prior, it does not.
↑ comment by TobyBartels · 2013-05-26T19:35:20.476Z · LW(p) · GW(p)
That's a good point. It seems to me another problem with the MWI (or specifically, with Bayesian classical probability on top of quantum MWI) that making an observation could leave your map entirely unchanged.
However, in practice, followers of MWI have another piece of information: which world we are in. If your prior is 50% left and 50% right, then either way you believe that the universe is a superposition of an up world and a down world. Measuring up tells you that we are in the up world. For purposes of future predictions, you remember this fact, and so effectively you believe in 100% up now, the same as the person with the 50% up and 50% down prior. Those two half-Bayesians disagree about how many worlds there are, but not about what the up world —the world that we're in— is like.
Replies from: nshepperd↑ comment by nshepperd · 2013-05-27T04:53:56.432Z · LW(p) · GW(p)
To be precise, if your prior is 50% left and 50% right, then you generally believe that the world you are in is either a left world or a right world, and you don't know which. A left or right world itself factorises into a tensor product of (rest of the world) × (superposition of up particle and down particle). Measuring the particle along the up/down axis causes the rest of the world to become entangled with the particle along that axis, splitting it into two worlds, of which you observe yourself to be in the 'up' one.
Of course, observing the particle along the up/down axis tells you nothing about whether its original spin was left or right, and leaves you incapable of finding out, since the two new worlds are very far apart, and it's the phase difference between those two worlds that stores that information.
↑ comment by Wei Dai (Wei_Dai) · 2011-07-10T14:44:11.168Z · LW(p) · GW(p)
The wave function exists in the map, not in the territory.
Please explain how you know this?
ETA: Also, whatever does exist in the territory, it has to generate subjective experiences, right? It seems possible that a wave function could do that, so saying that "the wave function exists in the territory" is potentially a step towards explaining our subjective experiences, which seems like it should be the ultimate goal of any "interpretation". If, under the all-Bayesian interpretation, it's hard to say what exists in the territory besides that the wave function doesn't exist in the territory, then I'm having trouble seeing how it constitutes progress towards that ultimate goal.
Replies from: TobyBartels↑ comment by TobyBartels · 2011-07-13T11:04:53.207Z · LW(p) · GW(p)
Please explain how you know this?
I wouldn't want to pretend that I know this, just that this is the Bayesian interpretation of quantum mechanics. One might as well ask how we Bayesians know that probability is in the map and not the territory. (We are all Bayesians when it comes to classical probability, right?) Ultimately, I don't think that it makes sense to know such things, since we make the same physical predictions regardless of our interpretation, and only these can be tested.
Nevertheless, we take a Bayesian attitude toward probability because it is fruitful; it allows us to make sense of natural questions that other philosophies can't and to keep things mathematically precise without extra complications. And we can extend this into the quantum realm as well (which is good since the universe is really quantum). In both realms, I'm a Bayesian for the same reasons.
A half-Bayesian approach adds extra complications, like the two very different maps that lead to same predictions. (See this comment's cousin in reply to endoself.)
ETA: As for knowing what exists in the territory as an aid to explaining subjective experience, we can still say that the territory appears to consist ultimately of quark fields, lepton fields, etc, interacting according to certain laws, and that (built out of these) we appear to have rocks, people, computers, etc, acting in certain ways. We can even say that each particular rock appears to have a specific value of position and momentum, up to a certain level of precision (which fails to be infinitely precise first because the definition of any particular rock isn't infinitely precise, long before the level of quantum indeterminacy). We just can't say that each particular quark has a specific value of position and momentum beyond a certain level of precision, despite being (as far as we know) fundamental, and this is true regardless of whether we're all-Bayesian or many-worlder. (Bohmians believe that such values do exist in the territory, but these are unobservable even in principle, so this is a pointless belief).
Edit: I used ‘world’ consistently in a technical sense.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-05-21T17:34:27.230Z · LW(p) · GW(p)
Nevertheless, we take a Bayesian attitude toward probability because it is fruitful; it allows us to make sense of natural questions that other philosophies can't and to keep things mathematically precise without extra complications. And we can extend this into the quantum realm as well
Where "extending" seems to mean "assuming". I find it more fruitful to come up with tests of (in)determinsm, such as Bell's Inequalitites.
Replies from: TobyBartels↑ comment by TobyBartels · 2013-05-22T23:08:48.651Z · LW(p) · GW(p)
I'm not sure what you mean by ‘assuming’. Perhaps you mean that we see what happens if we assume that the Bayesian interpretation continues to be meaningful? Then we find that it works, in the sense that we have mutually consistent degrees of belief about physically observable quantities. So the interpretation has been extended.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-05-23T10:46:04.078Z · LW(p) · GW(p)
If the universe contains no objective probabilities, it will still contain subjective, ignorance based probabilities.
If the universe contains objective probabilities, it will also still contain subjective, ignorance based probabilities.
So the fact that subjective probabilities "work" doesn't tell you anything about the universe. It isn't a test.
Aspect's experiment to test Bell's theorem is a test. It tells you there isn't (local, single-universe) objective probability.
Replies from: TobyBartels↑ comment by TobyBartels · 2013-05-26T19:14:15.030Z · LW(p) · GW(p)
OK, I think that I understand you now.
Yes, Bell's inequalities, along with Aspect's experiment to test them, really tell us something. Even before the experiment, the inequalities told us something theoretical: that there can be no local, single-world objective interpretation of the standard predictions of quantum mechanics (for a certain sense of ‘objective’); then the experiment told us something empirical: that (to a high degree of tolerance) those predictions were correct where they mattered.
Like Bell's inequalities, the Bayesian interpretation of quantum mechanics tells us something theoretical: that there can be a local, single-world interpretation of the standard predictions of quantum mechanics (although it can't be objective in the sense ruled out by Bell's inequalities). So now we want the analogue of Aspect's experiment, to confirm these predictions where it matters and tell us something empirical.
Bell's inequalities are basically a no-go theorem: an interpretation with desired features (local, single-world, objective true value of all potentially observable quantities) does not exist. There's a specific reason why it cannot exist, and Aspect's experiment tests that this reason applies in the real world. But Fuchs et al's development of the Bayesian interpretation is a go theorem: an interpretation with some desired features (local, single-world) does exist. So there's no point of failure to probe with an experiment.
We still learn something about the universe, specifically about the possible forms of maps of it. But it's a purely theoretical result. I agree that Bell's inequalities and Aspect's experiment are a more interesting result, since we get something empirical. But it wasn't a surprising result (which might be hindsight bias on my part). There seem to be a lot of people here (although that might be my bad impression) who think that there is no local, single-world interpretation of the standard predictions of quantum mechanics (or even no single-world interpretation at all, but I'm not here to push Bohmianism), so the existence of the Bayesian interpretation may be the more surprising result; it may actually tell us more. (At any rate, it was surprising once upon a time for me.)
↑ comment by Peterdjones · 2011-07-10T16:31:48.668Z · LW(p) · GW(p)
The wave function exists in the map, not in the territory.
That is not an uncontroversial fact. For instance, Roger Penrose, from The Emperor's New Mind:
OBJECTIVITY AND MEASURABILITY OF QUANTUM STATES

Despite the fact that we are normally only provided with probabilities for the outcome of an experiment, there seems to be something objective about a quantum-mechanical state. It is often asserted that the state-vector is merely a convenient description of 'our knowledge' concerning a physical system or, perhaps, that the state-vector does not really describe a single system but merely provides probability information about an 'ensemble' of a large number of similarly prepared systems. Such sentiments strike me as unreasonably timid concerning what quantum mechanics has to tell us about the actuality of the physical world.

Some of this caution, or doubt, concerning the 'physical reality' of state-vectors appears to spring from the fact that what is physically measurable is strictly limited, according to theory. Let us consider an electron's state of spin, as described above. Suppose that the spin-state happens to be |a), but we do not know this; that is, we do not know the direction a in which the electron is supposed to be spinning. Can we determine this direction by measurement? No, we cannot. The best that we can do is extract 'one bit' of information, that is, the answer to a single yes/no question. We may select some direction p in space and measure the electron's spin in that direction. We get either the answer YES or NO, but thereafter we have lost the information about the original direction of spin. With a YES answer we know that the state is now proportional to |p), and with a NO answer we know that the state is now in the direction opposite to p. In neither case does this tell us the direction a of the state before measurement, but merely gives some probability information about a.

On the other hand, there would seem to be something completely objective about the direction a itself, in which the electron 'happened to be spinning' before the measurement was made. For we might have chosen to measure the electron's spin in the direction a, and the electron has to be prepared to give the answer YES, with certainty, if we happened to have guessed right in this way! Somehow, the 'information' that the electron must actually give this answer is stored in the electron's state of spin.

It seems to me that we must make a distinction between what is 'objective' and what is 'measurable' in discussing the question of physical reality, according to quantum mechanics. The state-vector of a system is, indeed, not measurable, in the sense that one cannot ascertain, by experiments performed on the system, precisely (up to proportionality) what that state is; but the state-vector does seem to be (again up to proportionality) a completely objective property of the system, being completely characterized by the results that it must give to experiments that one might perform. In the case of a single spin-one-half particle, such as an electron, this objectivity is not unreasonable, because it merely asserts that there is some direction in which the electron's spin is precisely defined, even though we may not know what that direction is. (However, we shall be seeing later that this 'objective' picture is much stranger with more complicated systems, even for a system which consists merely of a pair of spin-one-half particles.)

[Penrose's footnote: This objectivity is a feature of our taking the standard quantum-mechanical formalism seriously. In a non-standard viewpoint, the system might actually 'know', ahead of time, the result that it would give to any measurement. This could leave us with a different, apparently objective, picture of physical reality.]

But need the electron's spin have any physically defined state at all before it is measured? In many cases it will not have, because it cannot be considered as a quantum system on its own; instead, the quantum state must generally be taken as describing an electron inextricably entangled with a large number of other particles. In particular circumstances, however, the electron (at least as regards its spin) can be considered on its own. In such circumstances, such as when its spin has actually previously been measured in some (perhaps unknown) direction and then the electron has remained undisturbed for a while, the electron does have a perfectly objectively defined direction of spin, according to standard quantum theory.
comment by Conrad · 2008-03-12T17:00:44.000Z · LW(p) · GW(p)
Maybe I'm stupid here... what difference does it make?
Sure, if we had a coin-flip-predicting robot with quick eyes it might be able to guess right/predict the outcome 90% of the time. And if we were precognitive we could clean up at Vegas.
In terms of non-hypothetical real decisions that confront people, what is the outcome of this line of reasoning? What do you suggest people do differently and in what context? Mark cards?
B/c currently, as far as I can see, you're saying, "The coin won't end up 'heads or tails' -- it'll end up heads, or it'll end up tails." True but uninformative.
Conrad.
ps - The thought experiment with the trick coin is ungrounded. If I'm being asked to lay even odds on a dollar bet that the coin comes up heads, then that's rational -- since the coin could be biased for heads, or tails (and the guy proposing the bet doesn't know the bias). If I'm being asked to accept or reject a number meant to correspond to the calculated or measured likelihood of heads coming up, and I trust the information about it being biased, then the only correct move is to reject the 0.5 probability. It has nothing to do with frequentism, Bayesianism, or any other suchlike.
C.
comment by Silas · 2008-03-12T17:03:44.000Z · LW(p) · GW(p)
Sudeep: the inverse certainty of position and momentum is a mathematical artifact and does not depend upon the validity of quantum mechanics. (Er, at least to the extent that math is independent of the external world!)
PK: I like your posts, and don't take this the wrong way, but, to me, your example doesn't have as much shocking unintuitiveness as the ones Eliezer Yudkowsky (no underscore) listed.
comment by Joshua_Fox · 2008-03-12T17:27:00.000Z · LW(p) · GW(p)
I'd like to understand: Are frequentist "probability" and subjective "probability" simply two different concepts, to be distinguished carefully? Or is there some true debate here?
I think that Jaynes shows a derivation, following Bayesian principles, of frequentist probability from subjective probability. I'd love to see one of Eliezer's lucid explanations of that.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-12T17:30:15.000Z · LW(p) · GW(p)
You can derive frequentist probabilities from subjective probabilities but not the other way around.
Replies from: ronny-fernandez↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-07-27T18:55:26.056Z · LW(p) · GW(p)
Please elaborate EY.
I think it would be a wonderfully clarifying post if you were to write a technical derivation of frequentist probability from the "probability in the mind" concept of Bayesian probability. If you decide to do this, or if anyone knows where I could find such a text, please let me know.
related question:
Is there an algebra that describes the frequentist interpretation of probability? If so, where is it isomorphic to the Bayesian algebra and where does it diverge? I want to know if the dispute has to do just with the semantic interpretation of 'P(a)', or if the frequentist's 'P(a)' actually behaves differently from the Bayesian 'P(a)' syntactically.
Replies from: JGWeissman, buybuydandavis↑ comment by JGWeissman · 2011-07-27T19:15:19.330Z · LW(p) · GW(p)
If a well-calibrated rationalist, for a given probability p, independently believes N different things, each with probability p, then you can expect about p*N of those beliefs to be correct.
See the discussion of calibration in the Technical Explanation.
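A toy simulation of that calibration claim (a sketch; the values of p and N are arbitrary choices):

```python
import random

random.seed(0)
p, N = 0.7, 100_000
# A well-calibrated agent's probability-0.7 beliefs are true 70% of the time,
# so each belief is simulated as an independent event of frequency p.
correct = sum(random.random() < p for _ in range(N))
print(correct, p * N)   # ~70000 vs 70000.0: about p*N of the N beliefs are correct
```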
↑ comment by buybuydandavis · 2011-09-04T03:52:45.632Z · LW(p) · GW(p)
Jaynes's book shows how frequencies are estimated in his system, and somewhere, maybe in his book, he compares and contrasts his ideas with those of frequentists and Kolmogorov. In fact, he expends great effort in contrasting his views to those of frequentists.
comment by PK · 2008-03-12T17:42:24.000Z · LW(p) · GW(p)
Silas: My post wasn't meant to be "shockingly unintuitive", it was meant to illustrate Eliezer's point that probability is in the mind and not out there in reality in a ridiculously obvious way.
Am I somehow talking about something entirely different than what Eliezer was talking about? Or should I complexificationafize my vocabulary to seem more academic? English isn't my first language after all.
comment by Cyan2 · 2008-03-12T17:46:32.000Z · LW(p) · GW(p)
If I'm being asked to accept or reject a number meant to correspond to the calculated or measured likelihood of heads coming up, and I trust the information about it being biased, then the only correct move is to reject the 0.5 probability.
Alas, no. Here's the deal: implicit in all the coin toss toy problems is the idea that the observations may be modeled as exchangeable. It really really helps to have a grasp on what the math looks like when we assume exchangeability.
In models where (infinite) exchangeability is assumed, the concept of long-run frequency can be sensibly defined. (Long-run frequency may or may not be a cogent concept in models without exchangeability.) The probability of heads in any one toss is the expectation of a probability density function (pdf) which encodes our knowledge about the long run frequency. (Roughly. There are some technical conditions for the existence of a pdf that I'm ignoring.)
Conrad, your idea that 0.5 is not an allowable probability is almost correct. In fact, the correct expression of this idea is that the pdf of the long-run frequency must be equal to zero at 0.5. But! -- its values in the neighborhood of 0.5 are not constrained, so the pdf may have a removable singularity.
Suppose our information about bias in favour of heads is equivalent to our information about bias in favour of tail. Our pdf for the long-run frequency will be symmetrical about 0.5 and its expectation (which is the probability in any single toss) must also be 0.5. It is quite possible for an expectation to take a value which has zero probability density. We can refuse to believe that the long-run frequency will converge to exactly 0.5 while simultaneously holding a probability of 0.5 for any specific single toss in isolation.
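One concrete density with exactly these properties (an illustrative choice, nothing canonical): f(θ) = 4|θ − 1/2| on [0, 1] is symmetric about 0.5, exactly zero there, and still has expectation 0.5. A numerical check:

```python
import numpy as np

# f(theta) = 4*|theta - 0.5|: zero density at exactly 0.5, symmetric about it.
theta = np.linspace(0.0, 1.0, 1_000_001)
pdf = 4.0 * np.abs(theta - 0.5)
dx = theta[1] - theta[0]

print(np.sum(pdf) * dx)          # ~1.0: a proper density
print(np.sum(theta * pdf) * dx)  # ~0.5: the single-toss probability of heads
```

So "the coin is certainly biased" and "heads on this one toss has probability 0.5" coexist without contradiction.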
comment by Constant2 · 2008-03-12T18:11:01.000Z · LW(p) · GW(p)
Eliezer, I have no argument with the Bayesian use of the probability calculus and so I do not side with those who say "there is no rational way to manage your uncertainty", but I think I probably do have an argument with the insistence that it is the one true way. None of the problems you have so far outlined, including the coin one, really seem to doom either frequentism specifically, or more generally, an objective account of probability. I agree with this:
Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong. Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you.
but I question whether it really captures the frequentist position. To address the specifics, you seem to be talking about how the coin is held in a specific concrete toss. But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses. Alternatively, you might be talking about an infinitely repeated experiment in which the coin is tossed "in such a way", but here too I see no problem for the frequentists. Since the way of holding the coin is part of the experiment, then in this case they will predict a long term frequency of mostly heads. So they won't get this one wrong.
Replies from: ksvanhorn↑ comment by ksvanhorn · 2011-01-22T06:21:15.608Z · LW(p) · GW(p)
"But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses."
These infinite sequences never exist, and very often they don't even exist approximately. We only observe finite numbers of events. I think this is one of the things Jaynes had in mind when he talked about the proper handling of infinities -- you should start by analyzing the finite case, and look for a well-defined limit as n increases without bound. Unfortunately, frequentist statistics starts with the limit at infinity.
As an example of how these limiting frequencies taken over infinite sequences often make no sense in real-world situations, consider statistical models of human language, such as are used in automatic speech recognition. Such models assign a prior probability to each possible utterance a person could make. What does it mean, from a frequentist standpoint, to say that there is a probability of 1e-100 that a person will say "The tomatoe flew dollars down the pipe"? There haven't been 1e100 separate utterances by all human beings in all of human history, so how could a probability of 1e-100 possibly correspond to some sort of long-run frequency?
comment by Cyan2 · 2008-03-12T18:16:39.000Z · LW(p) · GW(p)
(Replace the link to "removable singularity" with one to removable discontinuity.)
comment by Psy-Kosh · 2008-03-12T18:17:30.000Z · LW(p) · GW(p)
No way to do it the other way around? Nothing along the lines of, say, considering a set of various "things to be explained", for each a hypothesis explaining it, and then talking about subsets of those? I.e., a subset in which 1/10 of the hypotheses are objectively true would be a set of hypotheses assigned .1 probability, or something?
Yeah, the notion of how to do this exactly is, admittedly, fuzzy in my head, but I have to say that it sure does seem like there ought to be some way to use the notion of frequentist probability to construct subjective probability along these lines.
I may be completely wrong though.
comment by Conrad · 2008-03-12T18:31:18.000Z · LW(p) · GW(p)
"Suppose our information about bias in favour of heads is equivalent to our information about bias in favour of tail. Our pdf for the long-run frequency will be symmetrical about 0.5 and its expectation (which is the probability in any single toss) must also be 0.5. It is quite possible for an expectation to take a value which has zero probability density."
What I said: if all you know is that it's a trick coin, you can lay even odds on heads.
"We can refuse to believe that the long-run frequency will converge to exactly 0.5 while simultaneously holding a probability of 0.5 for any specific single toss in isolation."
Again what I said: if the question is, "This is a trick coin: I've rigged it. I have written down here the probability that it'll come up heads. Do you accept that the number I've written down is .5?" -- You've got to say no. Since they've just told you it was rigged.
And if what they've written down is .50000000000001 and they come back at you for it, then they stretched a point in calling it rigged.
So your problem is you haven't grounded the example in terms of what we're being asked to do.
Again, what difference does it make?
Conrad.
ps - Ofc, knowing, or even just suspecting, the coin is rigged, on the second throw you'd best bet on a repeat of the outcome of the first.
C.
comment by Cyan2 · 2008-03-12T18:33:39.000Z · LW(p) · GW(p)
But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses.
In other words, they are talking about tail events. That a frequentist probability (i.e., a long-run frequency) even exists can be a zero-probability event -- but you have to give axioms for probability before you can even make this claim. (Furthermore, I'm never going to observe a tail event, so I don't much care about them.)
comment by Cyan2 · 2008-03-12T18:48:31.000Z · LW(p) · GW(p)
Conrad,
Okay, so unpack "ungrounded" for me. You've used the phrases "probability" and "calculated or measured likelihood of heads coming up", but I'm not sure how you're defining them.
I'm going to do two things. First, I'm going to Taboo "probability" and "likelihood" (for myself -- you too, if you want). Second, I'm going to ask you exactly which specific observable event it is we're talking about. (First toss? Twenty-third toss? Infinite collection of tosses?) I have a definite feeling that our disagreement is about word usage.
comment by Will_Pearson · 2008-03-12T18:55:15.000Z · LW(p) · GW(p)
If you honestly subscribe to this view of probability, please never give the odds for winning the lottery again. Or any odds for anything else.
What does telling me the probability that you assign to something actually tell me about the world? If I don't know the information you are basing it on, very little.
I'm also curious about a formulation of probability theory that completely ignores random variables and the theorems based upon them (e.g. the law of large numbers, the central limit theorem).
Heck a re-write of http://en.wikipedia.org/wiki/Probability_theory with all mention of probabilities in the external world removed might be useful.
comment by HalFinney · 2008-03-12T18:58:10.000Z · LW(p) · GW(p)
I'm not sure the many-worlds interpretation fully eliminates the issue of quantum probability as part of objective reality. You can call it "anthropic pseudo-uncertainty" when you get split and find that your instances face different outcomes. But what determines the probability you will see those various outcomes? Just your state of knowledge? No, theory says it is an objective element of reality, the amplitude of the various elements of the quantum wave function. This means that probability, or at least its close cousin amplitude, is indeed an element of reality and is more than just a representation of your state of knowledge.
For aficionados of interpretations of QM, this relates to an old debate, whether the so-called "Born rule" can be derived from the MWI. Various arguments have been offered for this, including one by Robin, and some have claimed that these now work so well that the argument is settled. However I don't think the larger physics/philosophy community is convinced.
comment by GBM · 2008-03-12T19:13:39.000Z · LW(p) · GW(p)
Roland and Ian C. both help me understand where Eliezer is coming from. And PK's comment that "Reality will only take a single path" makes sense. That said, when I say a die has a 1/6 probability of landing on a 3, that means: Over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times. Obviously, with perfect information, everything can be calculated. That doesn't mean that we can't predict the probability of a specific event.
Also, I didn't get a response to the Gomboc ( http://tinyurl.com/2rffxs ) argument. I would say that it has an inherent 100% probability of righting itself. Even if I knew nothing about the object, the real probability of it righting itself is 100%. Now, I might not bet on those odds, without previous knowledge, but no matter what I know, the object will right itself. How is this incorrect?
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-02-01T18:44:07.287Z · LW(p) · GW(p)
Place a Gomboc on a non-flat surface and that "inherent" property goes away.
If it were inherent, it would not go away.
Therefore, its probability is not inherent, it is an evaluation we can make if we have enough information about the prior conditions. In this case "on a flat surface" is plenty of information, and we can assign it a 100% probability.
But what is its probability of righting itself on a surface angled 15 degrees? Is it still 100%? I doubt it, but I don't know.
Very cool shape, by the way.
Replies from: Jake_NB
comment by Conrad · 2008-03-12T19:36:26.000Z · LW(p) · GW(p)
::Okay, so unpack "ungrounded" for me. You've used the phrases "probability" and "calculated or measured likelihood of heads coming up", but I'm not sure how you're defining them.::
Ungrounded: "That was a good movie." Grounded: "That movie made money for the investors." Alternatively: "I enjoyed it and recommend it" -- which is for most purposes grounded enough.
::I'm going to do two things. First, I'm going to Taboo "probability" and "likelihood" (for myself -- you too, if you want). Second, I'm going to ask you exactly which specific observable event it is we're talking about. (First toss? Twenty-third toss? Infinite collection of tosses?) I have a definite feeling that our disagreement is about word usage.::
You yourself said that we're dealing with one throw of a rigged coin, of unknown riggage. I don't think we have a disagreement, exactly, except it looks to me like the discussion's moot.
But look: if I can back up a bit, the notion that we can be dealing with a rigged coin, know that it's rigged, and say that the --er, chances-- of getting a heads is "really" 50%, because we Just Don't Know, is useless. At that point you're using 50-50 because we have two possible known outcomes.
But in fact we deal with unknown probabilities all the time. Probabilities are by default unknown, until we measure them by repeated trial and a lot of scratch-work. What about when you're dealing with a medication that might kill someone, or not: in the absence of any information, do you say that's 50-50?
Conrad.
comment by Conrad · 2008-03-12T19:48:22.000Z · LW(p) · GW(p)
GBM:: ..That said, when I say a die has a 1/6 probability of landing on a 3, that means: Over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times.::
--Well, no: it does mean that, but let's not get tripped up into thinking that a measure of probability requires a series of trials. It has that same probability even for one roll. It's a consequence of the physics of the system, that there are 6 stable distinguishable end-states and explosively many intermediate states, transitioning amongst each other chaotically.
Conrad.
comment by Nick_Tarleton · 2008-03-12T20:12:49.000Z · LW(p) · GW(p)
I have to say that it sure does seem like there ought to be some way to use the notion of frequentist probability to construct subjective probability along these lines.
Assign a measure to each possible world (the prior probabilities). For some state of knowledge K, some set of worlds Ck is consistent with K (say, the set in which there is a brain containing K). For some proposition X, X is true in some set of worlds Cx. The subjective probability P(X|K) = measure(intersection(Ck,Cx)) / measure(Ck). Bayesian updating is equivalent to removing worlds from K. To make it purely frequentist, give each world measure 1 and use multisets.
Does that work?
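For what it's worth, Nick's construction runs as code (a minimal sketch; the three binary facts and the flat measure are made-up illustrations, not part of his comment):

```python
from fractions import Fraction
from itertools import product

# A 'world' assigns truth values to three illustrative facts.
worlds = [dict(zip(("rain", "sprinkler", "wet"), bits))
          for bits in product([False, True], repeat=3)]

def measure(ws):
    return Fraction(len(ws))   # every world gets measure 1 (the multiset trick)

def P(X, K):
    """P(X|K) = measure(Ck intersect Cx) / measure(Ck)."""
    Ck = [w for w in worlds if K(w)]            # worlds consistent with K
    return measure([w for w in Ck if X(w)]) / measure(Ck)

# Bayesian updating = removing worlds from Ck:
print(P(lambda w: w["rain"], lambda w: w["wet"]))   # 1/2 under this flat measure
```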
comment by Nick_Tarleton · 2008-03-12T20:24:41.000Z · LW(p) · GW(p)
Who else thinks we should Taboo "probability", and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?
The frequency of an event depends on how narrowly the initial conditions are defined. If an atomically identical coin flip is repeated, obviously the frequency of heads will be either 1 or 0 (modulo a tiny quantum uncertainty).
Replies from: Peterdjones, Perplexed↑ comment by Peterdjones · 2011-07-03T20:44:25.910Z · LW(p) · GW(p)
Who else thinks we should Taboo "probability", and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?
Yes, it looks like an argument about apples versus oranges to me.
↑ comment by Perplexed · 2011-07-03T20:50:45.686Z · LW(p) · GW(p)
I think that we should follow Jaynes and insist upon 'probability' as the name of the subjective entity. But so-called objective probability should be called 'propensity'. Frequency is the term for describing actual data. Propensity is objectively expected frequency. Probability is subjectively expected frequency. That is the way I would vote.
comment by Nick_Tarleton · 2008-03-12T20:27:13.000Z · LW(p) · GW(p)
Oops, removing worlds from Ck, not K.
comment by Z._M._Davis · 2008-03-12T20:32:48.000Z · LW(p) · GW(p)
GBM, I think you get the idea. The reason we don't want to say that the gomboc has an inherent probability of one for righting itself (besides that we, um, don't use probability one), is that as it is with the gomboc, so it is with the die or anything else in the universe. The premise is that determinism, in the form of some MWI, is (probably!) true, and so no matter what you or anyone else knows, whatever will happen is sure to happen. Therefore, when we speak of probability, we can only be referring to a state of knowledge. It is still of course the case that if you toss a fair die a very large number of times, the proportion of threes you get will tend towards 1/6--we're just not using such cases as a definition of what probability means.
Having already written the above, I must add that I like Nick's just-posted frequency/uncertainty breakdown.
comment by Will_Pearson · 2008-03-12T20:35:03.000Z · LW(p) · GW(p)
Cyan, sorry. My comment was to Eliezer and statements such as
"that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon."
comment by steven · 2008-03-12T20:52:49.000Z · LW(p) · GW(p)
I think there's still room for a concept of objective probability -- you'd define it as anything that obeys David Lewis's "Principal Principle" which this page tries to explain (with respect to some natural distinction between "admissible" and "inadmissible" information).
comment by sonic2 · 2008-03-12T20:55:49.000Z · LW(p) · GW(p)
Before accepting this view of probability and the underlying assumptions about the nature of reality, one should look at the experimental evidence. Try Groeblacher, Paterek, et al., arXiv:0704.2529 (Aug 6 2007). These experiments test various assumptions regarding non-local realism and conclude: "...giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned"
comment by roland6 · 2008-03-12T21:34:15.000Z · LW(p) · GW(p)
Nick Tarleton:
Who else thinks we should Taboo "probability", and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?
I second that, this would probably clear a lot of the confusion and help us focus on the real issues.
comment by Cyan2 · 2008-03-12T21:58:22.000Z · LW(p) · GW(p)
What about when you're dealing with a medication that might kill someone, or not: in the absence of any information, do you say that's 50-50?
You've already given me information by using the word medication -- implicitly, you're asking me to recall what I know about medications before I render an answer. So no, those outcomes aren't necessarily equally plausible to me. Here's a situation which is a much better approximation(!) of total absence of information: either event Q or event J has happened just now, and I will tell you which in my next comment. The only asymmetry in your information is that I chose the label Q for the first event and J for the second event. Which event do you find more plausible? I'd like you to justify your choice, pretending (if necessary) that I am honest in this instance.
comment by John_Thacker · 2008-03-12T22:48:52.000Z · LW(p) · GW(p)
Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy. So how can the answer in the first case be different from the answer in the latter two?
Because they obviously aren't exclusive cases. I simply don't see mathematically why it's a paradox, so I don't see what this has to do with thinking that "probabilities are a property of things."
The "paradox" is that people want to compare it to a different problem, the problem where the cards are ordered. In that case, if you ask "Is your first card an ace," "Is your first card the ace of hearts," or "Is your first card the ace of spades," then there is the same probability of 1/3 in all three cases that both cards are aces given an answer "Yes." In that case the averaging makes sense because the cases are exclusive. In the "paradox," you can't average by saying that, "well, if there's one it's either the Ace of Spades or the Ace of Hearts, and in either case the answer would be 1/3, so it averages to 1/3." The problem is that you're double-counting.
I'm a Bayesian, but I don't see what this particular example has to do with subjectivity and agents. Probability is a result of the measure and the universe one is dealing with, and that may lead to results that seem unintuitive to those who don't grasp the mathematical principles (that seem obvious to me), but that has nothing to do with needing an agent. Define the measure space as you have done, claim that the probabilities are cold hard inherent facts about the objects themselves, and the result is independent of an agent.
This "paradox" seems on the same level to me as the confusion as to why the chances of rolling a 6 in three rolls of a die is not 1/2, or the problem that if one takes an outbound trip averaging 30 mph, then it is impossible to make the inbound trip so as to average 60 mph without teleporting instantaneously.
comment by John_Thacker · 2008-03-12T22:51:34.000Z · LW(p) · GW(p)
Or, I suppose, I would compare it to the other noted statistical paradox, whereby a famous hospital has a better survival rate for both mild and severe cases of a disease than a less-noted hospital, but a worse overall survival rate because it sees more of the worst cases. Merely because people don't understand how to do averages has little to do with them requiring an agent.
comment by Caledonian2 · 2008-03-12T23:00:05.000Z · LW(p) · GW(p)
The estimated Bayesian probability has nothing to do with the coin. If it did, assigning a probability of 0.5 to one of the two possible outcomes would be necessarily incorrect, because one of the few things we know about the coin is that it's not fair.
The estimate is of our confidence in using that outcome as an answer. "How confident can I be that choosing this option will turn out to be correct?" We know that the coin is biased, but we don't know which outcome is more likely. As far as we know, then, guessing one way is as good as guessing the other.
The sides of the coin do have an actual probability associated with them, which is why it's wrong to say that one particular outcome is more likely. That's a truth statement that we can't justify with the available data. Without knowing more about the coin, we can't speak about it. We can only speak to our confidence and how justified it is with the data we know.
The assertion that uncertainty is not an aspect of reality goes far beyond what anyone can justify, and is an example of gross overconfidence in one's opinions, btw.
comment by Nick_Tarleton · 2008-03-12T23:28:04.000Z · LW(p) · GW(p)
Another way to look at it: if you repeatedly select a coin with a random bias (selected from any distribution symmetric about .5) and flip it, H/T will come out 50/50.
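A quick simulation of Nick's claim (a sketch; Beta(2, 2) stands in for "any distribution symmetric about .5"):

```python
import random

random.seed(1)
N = 10**6
# Each trial: draw a bias from Beta(2, 2), then flip one coin with that bias.
heads = sum(random.random() < random.betavariate(2, 2) for _ in range(N))
print(heads / N)   # ~0.5
```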
comment by Enginerd · 2008-03-13T00:56:47.000Z · LW(p) · GW(p)
Silas: The uncertainty principle comes from the fact that position and momentum are related by Fourier transform. Or, in layman's terms, from the fact that particles act like waves. This is one of the fundamental principles of QM, so yeah, it sort of does depend on the validity thereof. Not the Schrodinger equation itself perhaps, but other concepts.
As for whether QM proves that all probabilities are inherent in a system, it doesn't. It just prevents mutual information in certain situations. In coin flips or dice rolls, theoretically you could predict the outcome with enough information. Most probabilistic situations are that way; they're probabilistic because you don't have that info. QM is a bit different, and scientists still argue about it, but the fine detail of behavior of atoms doesn't have any effect on a poker game.
comment by Rolf_Nelson2 · 2008-03-13T03:41:48.000Z · LW(p) · GW(p)
Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong".
If you disagree, consider this: suppose he wins the lottery after all by chance, can you still claim the next day that his belief was objectively wrong?
comment by Psy-Kosh · 2008-03-13T07:56:02.000Z · LW(p) · GW(p)
Nick Tarleton: Not sure I entirely correctly understood your suggestion, need to think about it more.
However, my initial thought is that it may require/assume logical omniscience.
ie, what of updating based on "subjective guesses" of which worlds are consistent or inconsistent with the data? That is, as consistent as you can tell, given bounded computational resources. I'm not sure, but your model, at least at first glance, may not be able to say useful stuff about agents that are not logically omniscient.
Also, I'm unclear, could you clarify what it is you'd be using a multiset for? Do you mean "increase measure only by increasing number of copies of this in the multiset, and no other means allowed" or did you intend something else?
(incidentally, I think I do prefer coherence/dutch book/vulnerability style constructions of epistemic probability. Especially the ones that build up decision theory along the way, so one ends up starting with utilities almost. Such have very much of a "mathematical karma" flavor, as I've expressed elsewhere.)
comment by RobinHanson · 2008-03-13T10:45:32.000Z · LW(p) · GW(p)
Hal, I'd say probability could be both part of objective physics and a mental state in this sense: Given our best understanding of objective physics, for any given mental state (including the info it has access to) there is a best rational set of beliefs. In quantum mechanics we know roughly the best beliefs, and we are trying to use that to infer more about the underlying set of states and info.
comment by Will_Pearson · 2008-03-13T13:04:45.000Z · LW(p) · GW(p)
Rolf Nelson: "Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong"."
It all depends on what information Bob has. He might have carefully doctored the machines and general setup of the lottery draw to an extent that he might have enough information to have that probability. Now if Bob says he thinks he has a greater than 50% chance of winning the lottery because he is feeling lucky, and that is it, you can probably say that is unattached to reality or ignoring lots of relevant information.
comment by Nick_Tarleton · 2008-03-13T13:39:52.000Z · LW(p) · GW(p)
However, my initial thought is that it may require/assume logical omniscience.
Probably. Bayes is also easier to work with if you assume logical omniscience (i.e. knowledge of P(evidence|X) and P(evidence|~X)).
Also, I'm unclear, could you clarify what it is you'd be using a multiset for? Do you mean "increase measure only by increasing number of copies of this in the multiset, and no other means allowed" or did you intend something else?
Yes, using multisets of worlds with identical measure is equivalent to (for rational measures only), but 'more frequentist' than, sets of worlds with variable measure.
incidentally, I think I do prefer coherence/dutch book/vulnerability style constructions of epistemic probability. Especially the ones that build up decision theory along the way, so one ends up starting with utilities almost. Such have very much of a "mathematical karma" flavor, as I've expressed elsewhere.
Yeah, my idea was only meant as an existence proof and is probably an inferior formal construction, although it is how I tend to personally think about subjective probability. I guess at heart I'm still a frequentist.
(You could think about Rolf's problem this way; if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.)
comment by Ben_Jones · 2008-03-13T20:33:00.000Z · LW(p) · GW(p)
You have to lay £1 on heads or tails on a biased coin toss. Your probability is in your mind, and your mind has no information either way. Hence, you lay the pound on either. Hence you assign a 0.5 probability to heads, and also to tails.
If your argument is 'I don't mean my personal probability, I mean the actual probability', abandon all hope. All probability is 'perceived'. Unless you think you have all the evidence.
comment by Caledonian2 · 2008-03-13T23:54:00.000Z · LW(p) · GW(p)
All probability is 'perceived'. Unless you think you have all the evidence.
Some probabilities are objective, inherent properties of bits of the universe, and the universe does have all the evidence. The coin possesses an actual probability independent of what anyone knows or believes about it.
comment by Rolf_Nelson2 · 2008-03-14T03:38:00.000Z · LW(p) · GW(p)
if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.
That's a self-consistent system, it just seems to me more useful and intuitive to say that:
"P" is true => P
"Bob believes P" is true => Bob believes P
but not
"Bob's belief in P" is true => ...er, what exactly?
Also, I frequently need to attach probabilities to facts, where probability lies in [0,1] (or, in Eliezer's formulation, (-inf, inf)). But it's rare for me to have any reason to attach probabilities to probabilities. On the flip side, I attach scoring rules in the range (-inf, 0] to probability calculations, but not to facts. So in my current worldview, facts and probabilities are tentatively "made of different substances".
comment by michael_vassar · 2008-03-14T04:14:00.000Z · LW(p) · GW(p)
I second tabooing probability, but I think that we need more than two words to replace it. Casually, I think that we need, at the least, 'quantum measure', 'calibrated confidence', and 'justified confidence'. Typically we have been in the habit of calling both "Bayesian", but they are very different. Actual humans can try to be better approximations of Bayesians, but we can't be very close. Since we can't be Bayesian, due to our lack of logical omniscience, we can't avoid making stupid bets and being Dutch Booked by smarter minds. It's therefore disingenuous to claim that vulnerability to Dutch Books is a decisive argument against a behavioral strategy. Calibrated confidence is the strategy that we can try to use to minimize our vulnerability to being Dutch Booked by people who aren't smarter than we are but who know exploits in our heuristics. They tend to be much much closer to 50% than Bayesian confidences, and are pretty much unavoidably subject to some framing based biases as a result.
comment by ron_purewal · 2008-03-15T02:22:00.000Z · LW(p) · GW(p)
just fyi, there's no such thing as the 'eldest' of two boys; there's just an elder and a younger. superlatives are reserved for groups of three or more.
as i'm a midget among giants here, i'm afraid that's all i have to add. :)
comment by Daniel5 · 2008-05-04T22:59:00.000Z · LW(p) · GW(p)
Enginerd: The uncertainty inherent in determining a pair of conjugate variables - such as the length and pitch of a sound - is indeed a core part of QM, but it is not probabilistic. In this case, the term "uncertainty" is not about probabilities, even if QM is probabilistic in general; rather, it is a consequence of describing states in terms of wave functions, which can be interpreted probabilistically. This causes many to mistakenly think that Heisenberg's "Uncertainty Principle" is the probabilistic part of QM. As Wikipedia[1] puts it: "The uncertainty principle is related to the observer effect, with which it is often conflated."
comment by anon16 · 2008-06-05T15:04:00.000Z · LW(p) · GW(p)
You're equating perceived probability with physical probability; this is a mistake, whenever you or anyone else ignores that distinction.
However, your whole argument depends on a deterministic universe. Research quantum mechanics; we can't really say that we have a deterministic universe, and physics itself can only assign a probability at a certain point.
comment by anon16 · 2008-06-05T15:11:00.000Z · LW(p) · GW(p)
@Daniel:
You're attacking the wrong argument. Just look up the electron double-slit experiment. (http://en.wikipedia.org/wiki/Double-slit_experiment) It's not only about the observer effect, but about how the probability that you say doesn't exist causes interference to occur unless an observer is present. The observer is the one who collapses the probability wave down to a deterministic Bayesian value.
It sounds like both you and the author of this blog do not understand Schrodinger's cat.
comment by Z._M._Davis · 2008-06-05T16:04:00.000Z · LW(p) · GW(p)
Welcome to Overcoming Bias, anon! Try to avoid triple-posting. The author of this post has actually just written a series on quantum mechanics, which begins with "Quantum Explanations." He argues forcefully for a many-worlds interpretation, which is deterministic "from the standpoint of eternity," although not for any particular observer, due to indexical uncertainty. (You might say that, yes, reality does not take only one path, but it might as well have, because neither do observers!)
comment by anon16 · 2008-06-05T17:43:00.000Z · LW(p) · GW(p)
@Z. M. Davis
Thanks for the welcome. While I disagree with the etiquette, I'll try to follow it. A three post limit serves only to stifle discussion; there are other ways to deal with abusive posters than limiting the abilities of non-abusive posters. Also, I'm pretty sure my comment is still valid, relevant, and an addition to the discussion, regardless of whether I posted it now or a couple hours ago.
Back to the many worlds approach: as an individual observer of the universe myself, it seems to me that attempting to look at the universe "from the standpoint of eternity" in order to force universal determinism (and its implications) is equivalent to looking at a dice roll from the standpoint of an infinite series of dice rolls in order to force frequency. It's a perspective we'll never achieve, and it subverts our intuition. Just a thought, as this blog seems to be about avoiding cognitive traps and not just shifting them to a higher level.
comment by Stephen_R._Diamond · 2009-03-12T17:59:00.000Z · LW(p) · GW(p)
That the probability assigned to flipping a coin depends on what the assigner knows doesn't prove probability's subjectivity, only that probability isn't an objective property of the coin. Rather, if the probability is objective, it must be a property of a system, including the throwing mechanism. Two other problems with Eliezer's argument: 1) Rejecting objective interpretations of probability in empirical science because, in everyday usage, probability is relative to what's known, is to provide an a priori refutation of indeterminism, reasoning which doesn't square with Eliezer's empiricist analysis of knowledge. (Although, perhaps, objective probability is incoherent, but Eliezer hasn't shown this.) 2) A purely subjective interpretation of probability leaves probability in the same position as religion (or, maybe exactly, in the same position as Kant's a priori forms of understanding). A purely subjective interpretation doesn't explain the adaptive utility of taking the first Bayesian step of forming an a priori probability.
comment by Cyan2 · 2009-03-12T20:38:00.000Z · LW(p) · GW(p)
Stephen R. Diamond, there are two distinct things in play here: (i) an assessment of the plausibility of certain statements conditional on some background knowledge; and (ii) the relative frequency of outcomes of trials in a counterfactual world in which the number of trials is very large. You've declared that probability can't be (i) because it's (ii) -- actually, the Kolmogorov axioms apply to both. Justification for using the word "probability" to refer to things of type (i) can be found in the first two chapters of this book. I personally call things of type (i) "probabilities" and things of type (ii) "relative frequencies"; the key is to recognize that they need different names.
On your further critiques:
(1) Eliezer is a determinist; see the quantum physics sequence.
(2) True. A logical argument is only as reliable as its premises, and every method for learning from empirical information is only as reliable as its inductive bias. Unfortunately, every extant practical method of learning has an inductive bias, and the no free lunch theorems give reason to believe that this is a permanent state of affairs.
I'm not sure what you mean in your last sentence...
comment by nono · 2011-02-17T10:48:59.404Z · LW(p) · GW(p)
"Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl." help me with this one, i'm n00b. If one of the kids is known to be a boy (given information), then doesn't the other one has 50/50 chances to be either a boy or a girl? And then having 50/50 chances for the couple of kids to be either a pair of boys or one boy one girl?
Replies from: Morendil, prase↑ comment by Morendil · 2011-02-17T11:54:51.186Z · LW(p) · GW(p)
one of the kids is known to be a boy
That's not the given; it is that "at least one of the two is a boy". Different meaning.
For me, the best way to get to understand this kind of exercise intuitively is to make a table of all the possibilities. So two kids (first+second) could be: B+B, B+G, G+B, G+G. Each of those is equiprobable, so since there are four, each has 1/4 of the probability.
Now you remove G+G from the table since "at least one of the two is a boy". You're left with three: B+B, B+G, G+B. Each of those three is still equiprobable, so since there are three each has 1/3 of the total.
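The same table can be checked by brute force (a sketch):

```python
import random

random.seed(0)
N = 10**6
stats = {"at least one boy": [0, 0], "eldest is a boy": [0, 0]}  # [cases, BB]

for _ in range(N):
    kids = (random.choice("BG"), random.choice("BG"))  # (eldest, youngest)
    if "B" in kids:
        stats["at least one boy"][0] += 1
        stats["at least one boy"][1] += kids == ("B", "B")
    if kids[0] == "B":
        stats["eldest is a boy"][0] += 1
        stats["eldest is a boy"][1] += kids == ("B", "B")

for condition, (cases, bb) in stats.items():
    print(condition, bb / cases)   # ~1/3 and ~1/2 respectively
```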
Replies from: matteri↑ comment by matteri · 2011-05-06T17:37:24.076Z · LW(p) · GW(p)
And in hope of clarifying for those still confused over why the answer to the other question - "is your eldest/youngest child a boy?" - is different: if you get a 'yes' to this question, you eliminate one of the two orderings. Having a boy and a girl could mean either that the boy was born first (B+G) or that the girl was born first (G+B); only one of those will remain, together with B+B.
↑ comment by prase · 2011-02-17T13:20:41.396Z · LW(p) · GW(p)
This sort of problem is often easier to understand when modified to make the probabilities more different. E.g. suppose ten children and the information that at least nine of them are boys. The incorrect reasoning leads to a 1/2 probability of ten boys, while actually the probability is only 1/11. You can even write a program which generates a sequence of ten binary values, 0 for a boy and 1 for a girl, prompts you whenever it encounters at least nine zeros, and compares the relative frequencies. If the generated binary numbers are converted to decimals, it means that you generate an integer between 0 and 1023 and get prompted whenever the number is a power of 2 (which corresponds to 9 boys and 1 girl; 10 possible cases) or zero (which corresponds to 10 boys; 1 case only).
Such a modification works well as an intuition pump in the case of the Monty Hall problem; maybe it is not so illustrative here. But Monty Hall is isomorphic to this one.
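prase's ten-child version can be checked the same way (a sketch):

```python
import random

random.seed(2)
at_least_nine = all_ten = 0
for _ in range(10**6):
    boys = sum(random.random() < 0.5 for _ in range(10))
    if boys >= 9:               # "at least nine of the ten are boys"
        at_least_nine += 1
        all_ten += (boys == 10)
print(all_ten / at_least_nine)  # ~1/11 = 0.0909...
```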
comment by matteri · 2011-05-06T18:14:22.713Z · LW(p) · GW(p)
Conrad wrote:
ps - Ofc, knowing, or even just suspecting, the coin is rigged, on the second throw you'd best bet on a repeat of the outcome of the first.
I think it would be worthwhile to examine this conclusion - as it might seem to be an obvious one to a lot of people. Let us assume that there is a very good mechanical arm that makes a completely fair toss of the coin in the opinion of all humans so that we can talk entirely about the bias of the coin.
Let's say that the mechanism makes one toss; all you know is that the coin is biased - not how. Assume that it comes up heads; what does this tell you about the bias? Conrad asserts that it will certainly be biased in favor of heads. How much? Will it always show up as heads? 3 times out of 4? As it turns out, you have no way of knowing.
It could be that the coin is in fact biased only 1/3 towards heads; then it would be much wiser to bet on tails in the future, no? It could be that tails is actually 100 times more likely to come up; you simply can't tell the difference from the first toss.
So let's consider more coin tosses. What if it comes up heads once and then tails 5 times in a row? Could you tell me exactly what the bias is? Is it 5/6 towards tails perhaps? What about 50 tails and 15 heads? In fact, it is still not possible to say anything at all about what the bias is.
Since you probably have a heuristic method of analysis (intuition) you will in time see which side is the best bet; i.e. you'll conclude which side the coin is most likely biased towards, and you'll probably be correct - with higher accuracy as the number of tosses increases. However, there is no logic, rationalism or deduction in the world that could tell you exactly what the bias is. This is true after any integer number of coin tosses.
Replies from: Alicorn, soreff↑ comment by Alicorn · 2011-05-06T18:18:52.301Z · LW(p) · GW(p)
It is not necessary to know the exact bias to enact the following reasoning:
"Coins can be rigged to display one face more than the other. If this coin is rigged in this way, then the face I have seen is more likely than the other to be the favored side. If the coin is not rigged in this way, it is probably fair, in which case the side I saw last time is equally likely to come up next by chance. It is therefore a better bet to expect a repeat."
Key phrase: judgment under uncertainty.
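That verbal argument can be checked with a toy prior. A minimal sketch, assuming purely for illustration a 50% chance the coin is fair and a hypothetical rigging strength of 0.8:

```python
RIG = 0.8  # assumed: a rigged coin shows its favored face 80% of the time
hypotheses = {              # name: (prior, chance of heads)
    "fair":         (0.50, 0.5),
    "rigged_heads": (0.25, RIG),
    "rigged_tails": (0.25, 1 - RIG),
}

# Observe one head; update each hypothesis by Bayes' rule.
joint = {h: prior * p for h, (prior, p) in hypotheses.items()}
norm = sum(joint.values())
posterior = {h: j / norm for h, j in joint.items()}

# Predictive probability that the next flip repeats the observed head.
p_repeat = sum(posterior[h] * hypotheses[h][1] for h in hypotheses)
print(p_repeat)  # 0.59 under these made-up numbers: betting on a repeat wins
```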
Replies from: matteri↑ comment by matteri · 2011-05-06T18:24:53.737Z · LW(p) · GW(p)
I am not arguing against betting on the side that showed up in the first toss. What is interesting, though, is that even under those strict conditions, if you don't know the bias beforehand, you never will. Considering this, how could anyone ever argue that there are known probabilities in the world, where no such strict conditions apply?
Replies from: Alicorn↑ comment by Alicorn · 2011-05-06T18:30:09.974Z · LW(p) · GW(p)
Your definition of "know" is wrong.
Replies from: matteri↑ comment by matteri · 2011-05-06T18:43:23.276Z · LW(p) · GW(p)
Very well, I could have phrased it in a better way. Let me try again; and let's hope I am not mistaken.
Considering that even if there is such a thing as an objective probability, it can be shown that such information is impossible to acquire (impossible to falsify); how could it be anything but religion to believe in such a thing?
Replies from: Alicorn↑ comment by soreff · 2011-05-06T18:23:31.765Z · LW(p) · GW(p)
However, there is no logic, rationalism or deduction in the world that could tell you exactly what the bias is. This is true after any integer number of coin tosses.
This seems like it is asking too much of the results of the coin tosses. Given some prior for the probability distribution of biased coins, each toss result updates the probability distribution. Given a prior probability distribution which isn't too extreme (e.g. no zeros in the distribution), after enough toss results, the posterior distribution will narrow towards the observed frequencies of heads and tails.
Yes, at no point is the exact bias known. The distribution doesn't narrow to a delta function with a finite number of observations. So?
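A minimal sketch of that updating, using the standard Beta-Bernoulli conjugate pair with a uniform Beta(1,1) prior as one choice of "not too extreme", evaluated at the toss counts from the parent comment:

```python
# With a Beta(a, b) prior over the heads-bias, observing h heads and
# t tails gives the posterior Beta(a + h, b + t), in closed form.
def posterior_summary(h, t, a=1.0, b=1.0):
    a2, b2 = a + h, b + t
    mean = a2 / (a2 + b2)
    var = (a2 * b2) / ((a2 + b2) ** 2 * (a2 + b2 + 1))
    return mean, var

for h, t in [(1, 5), (15, 50), (150, 500)]:
    mean, var = posterior_summary(h, t)
    print(f"{h} heads, {t} tails: mean {mean:.4f}, variance {var:.6f}")
# The mean tracks the observed frequency while the variance shrinks,
# yet never reaches zero after finitely many tosses -- exactly the point.
```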
comment by omeganaut · 2011-05-12T17:38:46.250Z · LW(p) · GW(p)
"Or here's a very similar problem: Let's say I have four cards, the ace of hearts, the ace of spades, the two of hearts, and the two of spades. I draw two cards at random. You ask me, "Are you holding at least one ace?" and I reply "Yes." What is the probability that I am holding a pair of aces? It is 1/5. There are six possible combinations of two cards, with equal prior probability, and you have just eliminated the possibility that I am holding a pair of twos. Of the five remaining combinations, only one combination is a pair of aces. So 1/5." For future reference the initial phrase "I have four cards" may imply that you have in your possession those cards already. Combined with the fact that you draw two cards afterwords implies that you now have six cards. The simple fact that you are using cards implies that you are using a full deck of 52 cards, unless you specifically say there are only certain cards in play. I just want to clear that up because I had to reread that section a few times before I understood how you came up with your figures.
Replies from: thomblake
comment by Peterdjones · 2011-07-03T20:23:52.697Z · LW(p) · GW(p)
The unpredictability of a die roll or coin flip is not due to any inherent physical property of the objects; it is simply due to lack of information. Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.
That is quite debatable. For one thing, it is possible for quantum indeterminism, if there is any, to leak into the macroscopic world. Even if it were not possible, there is still the issue of microscopic indeterminism. You cannot prove that there is no objective indeterminism (i.e. that the universe is deterministic) just by performing an armchair examination of human reasoning about probability. You have to take the physics into account as well. It appears to be standard around here to assert the Many Worlds interpretation, and to assert it as deterministic. That is debatable as well, since there are problems with MWI and it is not the only no-collapse interpretation.
comment by Ronny Fernandez (ronny-fernandez) · 2011-07-27T19:22:38.480Z · LW(p) · GW(p)
Hate to be a stickler for this sort of thing, but even in the bayesian interpretation there are probabilities in the world; it's just that they are facts about the world and the knowledge the agents have of the world in combination. It's a fact that a perfect bayesian given P(b), P(a|b), and P(a|~b) will ascribe P(b|a) a probability of P(a|b)P(b) / P(a), where P(a) = P(a|b)P(b) + P(a|~b)P(~b), and that this is the best value to give P(b|a).
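A minimal sketch of that computation, using the mammography numbers from the essay quoted below (1% prevalence, 80% hit rate, 9.6% false-positive rate):

```python
def bayes(p_b, p_a_given_b, p_a_given_not_b):
    """P(b|a) from the prior P(b) and the two likelihoods."""
    p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
    return p_a_given_b * p_b / p_a

print(bayes(0.01, 0.8, 0.096))  # ~0.078: a positive test leaves ~7.8% probability
```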
If an agent has perfect knowledge then it need not ascribe any non-1 probability to any proposition it holds. But it is a fact about agents in the world that without perfect knowledge they ascribe non-1 probabilities to their propositions if they're working right. Bayesian reasoning is the field which tells us the optimal probability to assign to a proposition given the rest of our information, but that that is the optimal probability given the rest of our information is a fact about the world. For any proposition 'a', if a perfect bayesian says 'P(a) = y:x' based on some premise list P, then any agent who concludes 'a' from "P" (or any other equivalently cogent premise list) will be right y:x of the time, and wrong 1 - y:x of the time, regardless of what "a" actually says.
Some might say, "There is no sweetness in the world; sweetness is in your mind's interpretation of the world." The correct response is "Since 'in' is a transitive relation, and my mind's interpretation of the world is in the world, sweetness is in the world. It's just that to learn about sweetness you can't just study sugar crystals, you have to study brains too." The situation here is similar-ish.
It is important that facts about the probabilities of statements be facts about the world; if they weren't, then how would we find our priors? Priors seem to require that we be capable of checking "P(this woman having cancer) = such and such" by checking the world. In fact, I believe EY says almost word for word that priors are facts about the world in "An Intuitive Explanation of Bayes' Theorem": "Actually, priors are true or false just like the final answer - they reflect reality and can be judged by comparing them against reality. For example, if you think that 920 out of 10,000 women in a sample have breast cancer, and the actual number is 100 out of 10,000, then your priors are wrong." -- EY
Let us not forget that the map is a part of the territory; the map's accuracy is a fact about the territory as much as a fact about the map. You can study a map till you're blue in the face, and you still won't know how accurate it is unless you look at the corresponding territory.
Replies from: wedrifid↑ comment by wedrifid · 2011-07-27T20:23:14.857Z · LW(p) · GW(p)
You may appreciate Probability is Subjectively Objective. It's the followup to this post and happens to be my favorite post on lesswrong!
Replies from: ronny-fernandez↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-07-28T21:47:59.196Z · LW(p) · GW(p)
I can see why it is your favorite post. It's also extremely relevant to the position I expressed in my post, thank you. But I'm not sure that I can't hold my position above while being an objectively-subjective bayesian; I'll retract my post if I find that I can't.
Replies from: wedrifid↑ comment by wedrifid · 2011-07-28T23:19:32.375Z · LW(p) · GW(p)
But I'm not sure that I can't hold my position above while being an objectively-subjective bayesian; I'll retract my post if I find that I can't.
My impression was not that you would be persuaded to retract but that you'd feel vindicated. The positions are approximately the same (with slightly different labels attached). I don't think I disagree with you at all.
comment by bibilthaysose · 2011-07-31T15:33:06.137Z · LW(p) · GW(p)
Does this mean that there is nothing that is inherently uncertain? I guess another way to put that would be, could Laplace's Demon infer the entire history of the universe back to front from a single moment? It might seem obvious that there are singularities moving backwards through time (i.e. processes whose result does not give you information about their origin), so couldn't the same thing exist moving forward through time?
Anyway, great article!
comment by JeffJo · 2011-08-31T20:56:59.799Z · LW(p) · GW(p)
My first post, so be gentle. :)
I disagree that there is a difference between "Bayesian" and "Frequentist"; or at least, that it has anything to do with what is mentioned in this article. The field of Probability has the unfortunate property of appearing to be a very simple, well-defined topic. But it actually is complex enough to be indefinable. Those labels are used by people who want to argue in favor of one definition - of the indefinable - over another. The only difference I see is where they fail to completely address a problem.
Take the biased coin problem as an example. If either label applies to me, it is Frequentist, but my answer is the one Eliezer_Yudkowsky says is the Bayesian's. He gets the wrong Frequentist solution because he only allows the Frequentist to acknowledge one uncertainty - one random variable - in the problem: whether the coin came up heads or tails. If a Frequentist says the question is unanswerable, (s)he is wrong because (s)he is using an incomplete solution. The bias b - of a coin already selected - is just as much a random variable as the side s that came up in a coin already flipped. If you claim the answer must be based on the actual value of b for the coin, it must also be based on the actual value of s for this flip. That means the probability is either 0 or 1, which is absurd. (Technically, this error is one of confusing an outcome and an event. An outcome is the specific result of a specific trial, and has no probability. An event is a set of possible outcomes, and is what a probability is assigned to. Eliezer_Yudkowsky's Frequentist is treating the choice of a coin as an outcome, and the result of the flip as an event.)
We can answer the question without knowing anything more about b than that it is not 1/2. For any 0 <= b1 < 1/2, since we have no other information, b=b1 and b=1-b1 must be treated as equally likely. Regardless of what the distribution of b1 is, this makes the probability the coin landed on heads 1/2.
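In symbols: writing f for the unknown density of the bias b, the symmetry assumption f(b) = f(1-b) alone forces the answer:

```latex
P(H) = \int_0^1 b\,f(b)\,db
     = \int_0^1 (1-b)\,f(1-b)\,db   % substituting b \to 1-b
     = \int_0^1 (1-b)\,f(b)\,db     % by the symmetry f(1-b) = f(b)
\;\Rightarrow\;
2\,P(H) = \int_0^1 \bigl[b + (1-b)\bigr] f(b)\,db = \int_0^1 f(b)\,db = 1,
\;\text{so}\; P(H) = \tfrac{1}{2}.
```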
The classic Two Child Problem has a similar issue, but Eliezer_Yudkowsky did not ask the classic one. I find it best to explain this one in the manner Joseph Bertrand used for his famous Box Paradox. I have two children. What is the probability they share the same gender? That's easy: 1/2. Now I secretly write one gender on a note card. I then show the card to you, and tell you one of my children has that gender. If it says "boy," does the answer change to 1/3? What if it says "girl"? The answers can't be different for the two words you might see; but whatever that answer is, it has to be the same as the answer to the original question (proof by Bayes Theorem). So if the answer does change, we have a paradox.
Yet if presented with the information all at once, "I have two, and one is a boy," Frequentist and Bayesian alike will usually answer "1/3." And they usually will say that anybody who answers 1/2 is addressing the "I have two, and one specific child, by age, is a boy" version Eliezer_Yudkowsky mentioned. But that is not how I get 1/2. There are three random variables, not two: the older child's gender, the younger child's gender, and which gender I will mention if I have the choice of two. Allowing all three to be split 50/50 between "boy" and "girl" makes the answer 1/2, and there is no paradox.
Ironically, my reasoning is what the same mathematicians will use for either the Monty Hall Problem or the identical Three Prisoners Problem. Two cases that were originally equally likely remain possible. But they are no longer equally probable, because the provider of information had a choice of two in one case, but no choice in the other. Bayesians may claim the difference is a property of the information, and Frequentists (if they use a complete solution) will say there is an additional, implicit random variable. Both work out the same, just by different methods. It is ironic because, while Bertrand's Box Paradox is often compared to these two problems for being mathematically equivalent to them, the Two Child Problem is closer to being logically equivalent because of the way the information is provided, yet never gets compared. In fact, it is identical if you add a fourth box.
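A minimal simulation of the three-variable model described two paragraphs up, with all three variables split 50/50 as stated:

```python
import random

trials = 1_000_000
said_boy = 0
two_boys_given_said_boy = 0

for _ in range(trials):
    kids = [random.choice("BG") for _ in range(2)]
    # Third variable: with a mixed pair, the parent mentions either gender
    # with probability 1/2; with a matched pair there is no choice anyway.
    mentioned = random.choice(kids)
    if mentioned == "B":
        said_boy += 1
        if kids == ["B", "B"]:
            two_boys_given_said_boy += 1

# ~0.5, versus 1/3 when you instead condition on "at least one is a boy"
# regardless of how the information was volunteered.
print(two_boys_given_said_boy / said_boy)
```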
Replies from: None↑ comment by [deleted] · 2011-08-31T21:51:03.883Z · LW(p) · GW(p)
I can't speak for the rest of your post, but
We can answer the question without knowing anything more about b than that it is not 1/2. For any 0 <= b1 < 1/2, since we have no other information, b=b1 and b=1-b1 must be treated as equally likely. Regardless of what the distribution of b1 is, this makes the probability the coin landed on heads 1/2.
is pretty clearly wrong. (In fact, it looks a lot like you're establishing a prior distribution, and that's uniquely a Bayesian feature.) The probability of an event (the result of the flip is surely an event, though I can't tell if you're claiming to the contrary or not) to a frequentist is the limit of the proportion of times the event occurred in independent trials as the number of trials tends to infinity. The probability the coin landed on heads is the one thing in the problem statement that can't be 1/2, because we know that the coin is biased. Your calculation above seems mostly ad hoc, as is your introduction of additional random variables elsewhere.
However, I'm not a statistician.
Replies from: nshepperd, JeffJo, JeffJo↑ comment by nshepperd · 2011-09-01T03:51:06.415Z · LW(p) · GW(p)
I think they are arguing that the "independent trials" that are happening here are instances of "being given a 'randomly' biased coin and seeing if a single flip turns up heads". But of course the techniques they are using are bayesian, because I'd expect a frequentist to say at this point "well, I don't know who's giving me the coins, how am I supposed to know the probability distribution for the coins?".
↑ comment by JeffJo · 2011-09-01T10:47:06.090Z · LW(p) · GW(p)
The random process a frequentist should repeat is flipping a random biased coin, and getting a random bias b and either heads or tails. You are assuming it is flipping the *same* biased coin with fixed bias B, and getting heads or tails.
The probability a random biased coin lands heads is 1/2, from either point of view. And for nshepperd, the point is that a Frequentist doesn't need to know what the bias is. As long as we can't assume the distribution differs between b1 and 1-b1, when you integrate over the unknown distribution (yes, you can do that in this case) the answer is 1/2.
↑ comment by JeffJo · 2011-09-23T20:27:40.344Z · LW(p) · GW(p)
Say a bag contains 100 unique coins that have been carefully tuned to be unfair when flipped. Each is stamped with an integer in the range 0 to 100 (50 is missing) representing its probability, in percent, of landing on heads. A single coin is withdrawn without revealing its number, and flipped. What is the probability that the result will be heads?
You are claiming that anybody who calls himself a Frequentist needs to know the number on the coin to answer this question. And that any attempt to represent the probability of drawing coin N is specifying a prior distribution, an act that is strictly prohibited for a Frequentist. Both claims are absurd. Prior distributions are a fact of the mathematics of probability, and belong to Frequentist and Bayesian alike. The only differences are (1) the Bayesian may use information differently to determine a prior, sometimes in situations where a Frequentist wouldn't see one at all; (2) The Bayesian will prefer solutions based explicitly on that prior, while the Frequentist will prefer solutions based on the how the prior affects repeated experiments; and (3) Some Frequentists might not realize when they have enough information to determine a prior, and/or its effects, that should satisfy them.
If both get answers, and they don't agree, somebody did something wrong.
The answer is 50%. The Bayesian says that, based on available information, neither result can be favored over the other so they must both have probability 50%. The Frequentist says that if you repeat the experiment 100^2 times, including the part where you draw a coin from the bag of 100 coins, you should count on getting each coin 100 times. And you should also count, for each coin, on getting heads in proportion to its probability. That way, you will count 5,000 heads in 10,000 trials, making the answer 50%. Both solutions are based on the same facts and assumptions, just organized differently.
The answer Eliezer_Yudkowsky attributes to Frequentists, for the simpler problem without the bag and stamped coins, is an incorrect Frequentist solution. Or at least, a correct solution to a different problem. One that corresponds to the different question "What proportion of the time will this coin come up heads?" I agree that some who claim to be Frequentists will answer that question. But the true Frequentist will answer the question that was asked: "What proportion of the time will the process of flipping a coin with unknown bias come up heads?" His repetitions must represent the bias for each flip as independent of any other flips, not the same bias each time. The bias B will come up just as often as the bias (1-B), so the number of heads will always be half the number of trials.
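A minimal simulation of the bag experiment, with the coin draw included in each repetition as argued above:

```python
import random

biases = [n / 100 for n in range(101) if n != 50]  # the 100 stamped coins

trials = 1_000_000
heads = 0
for _ in range(trials):
    b = random.choice(biases)   # draw a coin from the bag...
    if random.random() < b:     # ...and flip it once
        heads += 1

print(heads / trials)  # ~0.50, although no individual coin is fair
```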
comment by Omegaile · 2012-08-05T01:01:27.470Z · LW(p) · GW(p)
I used to be a frequentist, and would say that the probability of the unfair coin landing heads is either 4/5 or 1/5, but I don't know exactly which. But that is not to say that I put probabilities on things instead of on information. I'll explain.
If someone asked me whether it will rain tomorrow, I would ask which information I am supposed to use. Whether it rained in the past few days? Or should I treat tomorrow as a random day and use the frequency of rainy days in the year? Or maybe I should consider the season we are in. Or am I supposed to use all available information I have? The latter I would call subjective probability. If someone instead posed me the children problem, I would say 1/3, because this problem implicitly tells me to consider only what the problem statement gives.
But simply asking for the probability without a context, I would say either that this is a non-question, i.e. that the problem statement is imprecise and lacking information, or I would assume that the interrogator was asking for an intrinsic probability, in which case I would say either 0 or 1, but I don't know which.
But I did believe in intrinsic probability, in some cases, like quantum mechanics.
This view of mine became hollow after I started inquiring into this intrinsic probability. Even if such a thing existed, it couldn't be differentiated from what I called subjective probability. By Occam's razor I shouldn't posit two kinds of probabilities that I cannot tell apart. This thought was partly inspired by reading lesswrong - not a particular post, but by seeing the ease with which what I called subjective probability was used on several occasions.
comment by kybernetikos · 2012-11-21T22:42:03.214Z · LW(p) · GW(p)
Thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.
If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right, for themselves; however, the person who had been in the room originally and selected a door knows more, and can therefore assign different probabilities - i.e. 1/3 for the door they'd selected and 2/3 for the other door.
If you thought that the probabilities were 'out there' rather than descriptions of the state of knowledge of the individuals, you'd be very confused about how the probability of choosing correctly could be 2/3 and 1/2 at the same time.
Considering the Monty Hall problem as a way for part of the information in the host's head to be communicated to the contestant becomes the most natural way of thinking about it.
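A minimal simulation showing both numbers at once; the latecomer picks blindly between the two remaining doors:

```python
import random

trials = 1_000_000
switch_wins = 0
latecomer_wins = 0

for _ in range(trials):
    prize = random.randrange(3)
    choice = random.randrange(3)
    # Host opens a door that is neither chosen nor hiding the prize.
    opened = random.choice([d for d in range(3) if d not in (choice, prize)])
    remaining = [d for d in range(3) if d != opened]
    if next(d for d in remaining if d != choice) == prize:
        switch_wins += 1                       # original contestant switches
    if random.choice(remaining) == prize:
        latecomer_wins += 1                    # newcomer guesses at random

print(switch_wins / trials)     # ~2/3: the contestant's extra knowledge pays
print(latecomer_wins / trials)  # ~1/2: correct for the newcomer's information
```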
comment by Juno_Watt · 2013-05-21T17:40:07.890Z · LW(p) · GW(p)
Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong. Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you. But, if you don't know which way the coin is biased on this one occasion, so what?
Maybe it isn't really 50%, and it isn't really 100% for however-it-came-up either. That it is rational to make estimates based on our own ignorance is not proof that the universe is deterministic. You can't deduce the nature of physical reality from your own reasoning processes.
To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails. But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin's fall in advance—not with certainty, but with 90% accuracy. Then what would the real probability be?
Whatever QM says.
comment by ialdabaoth · 2013-05-23T11:35:34.655Z · LW(p) · GW(p)
So, I've been on this site for a while. When I first came here, I had never had a formal introduction to Bayes' theorem, but it sounded a lot like ideas that I had independently worked out in my high school and college days (I was something of an amateur mathematician and game theorist).
A few days ago I was reading through one of your articles - I don't remember which one - and it suddenly struck me that I may not actually understand priors as well as I think I do.
After re-reading some of the series, and then working through the math, I'm now reasonably convinced that I don't properly understand priors at all - at least, not intuitively, which seems to be an important aspect for actually using them.
I have a few weird questions that I'm hoping someone can answer, that will help point me back towards the correct quadrant of domain space. I'll start with a single question, and then see if I can claw my way towards understanding from there based on the answers:
Imagine there is a rational, Bayesian AI named B9 which has been programmed to visually identify and manipulate geometric objects. B9's favorite object is a blue ball, but B9 has no idea that it is blue: B9 sees the world through a black and white camera, and has always seen the world through a black and white camera. Until now, B9 has never heard of "colors" - no one has mentioned "colors" to B9, and B9 has certainly never experienced them. Today, unbeknownst to B9, B9's creator is going to upgrade its camera to a full-color system, and see how long it takes B9 to adapt to the new inputs.
The camera gets switched in 5 seconds. Before the camera gets switched, what prior probability does B9 assign to the possibility that its favorite ball is blue?
Replies from: MugaSofer, Kindly, wuncidunci, TheOtherDave, Watercressed, Eliezer_Yudkowsky, CCC, ThrustVectoring, JeffJo↑ comment by MugaSofer · 2013-05-23T13:34:09.103Z · LW(p) · GW(p)
Well, without a sense that can detect color, it would just be an arbitrary undetectable property something might have, right? So it would be ... dependent on what other objects B9 is aware of, I think. The precise hypothesis "all [objects that we know are blue] share a common property I cannot perceive with this camera" is highly conjunctive, and its probability should therefore be low, unless B9 has observed humans reacting to them because of their coloration. And even then, "blue" would be defined only in terms of which other objects have it, not as a specific input type from the camera.
I suspect I'm missing the point of this question, somehow.
↑ comment by wuncidunci · 2013-05-23T15:48:21.500Z · LW(p) · GW(p)
Your question is not well specified. Even though you might think that the proposition "its favorite ball is blue" has a clear meaning, it is highly dependent on the precision with which it will be able to see colours, how wide the interval defined as blue is, and how it considers multicoloured objects. If we suppose it would categorise the observed wavelength into one of 27 possible colours (one of those being blue), further suppose that it knew the ball to be of a single colour and not patterned, and further suppose it has no background information about the relative frequencies of different colours of balls or other useful prior knowledge, the prior probability would be 1/27. If we suppose that it had access to the internet and had read this discussion on LW about the colourblind AI, it would increase its probability by doing an update based on the probability of this discussion affecting the colour of its own ball.
↑ comment by TheOtherDave · 2013-05-23T15:57:58.872Z · LW(p) · GW(p)
I don't claim to be any kind of Bayesian expert here, but, well, I seem to be replying anyway. Don't take my reply too seriously.
B9 has never heard of "colors". I take that to mean, not only that nobody has used that particular word to B9, but that B9 has been exposed to no inputs that significantly depend on it... e.g., nobody has talked about whether their shirts match their pants, nobody has talked about spectroscopic analysis of starlight or about the mechanism of action of clorophyll or etc... that B9 has no evidentiary basis from which to draw conclusions about color. (That is, B9 is the anti-Mary.)
Given those assumptions, a universal prior is appropriate... 50% chance that "My ball is blue" is true, 50% chance that it's false.
If those assumptions aren't quite true, and B9 has some information that usefully pertains, however indirectly, to the color of the ball, then insofar as that information is evidence one way or another, B9 ideally updates that probability accordingly.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-24T13:59:38.600Z · LW(p) · GW(p)
Given those assumptions, a universal prior is appropriate... 50% chance that "My ball is blue" is true, 50% chance that it's false.
You and Kindly both? Very surprising.
Consider you as B9, reading on the internet about some new and independent property of items, "bamboozle-ness". Should you now believe that P("My monitor is bamboozled") = 0.5? That it is as likely that your monitor is bamboozled as that it's not bamboozled?
If I offered you a bet of 100 big currency units, if it turns out your monitor was bamboozled, you'd win triple! Or 50x! Wouldn't you accept, based on your "well, 50% chance of winning" assessment?
Am I bamboozled? Are you bamboozled?
Notice that B9 has even less reason to believe in colors than you in the example above - it hasn't even read about them on the internet.
Instead of assigning 50-50 odds, you'd have to take the part of the probability space which represents "my belief in models other than my main model", identify the minuscule prior for that specific model containing "colors" or "bamboozleness", then calculate, assuming that model, the odds of blue versus not-blue, then weigh back in the uncertainty from such an arbitrary model being true in lieu of your standard model.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-24T16:51:28.297Z · LW(p) · GW(p)
That it is as likely that your monitor is bamboozled as that it's not bamboozled?
Given the following propositions:
(P1) "My monitor is bamboozled."
(P2) "My monitor is not bamboozled."
(P3) "'My monitor is bamboozled' is not the sort of statement that has a binary truth value; monitors are neither bamboozled nor non-bamboozled."
...and knowing nothing at all about bamboozledness, never even having heard the word before, it seems I ought to assign high probability to P3 (since it's true of most statements that it's possible to construct) and consequently low probabilities to P1 and P2.
But when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence in P3 seems to go up [EDIT: I mean down] pretty quickly, based on my experience with people talking about stuff. (Which among other things suggests that my prior for P3 wasn't all that low [EDIT: I mean high].)
Having become convinced of NOT(P3) (despite still knowing nothing much about bamboozledness other than it's the sort of thing people talk about on the Internet), if I have very low confidence in P1, I have very high confidence in P2. If I have very low confidence in P2, I have very high confidence in P1. Very high confidence in either proposition seems unjustifiable... indeed, a lower probability for P1 than P2 or vice-versa seems unjustifiable... so I conclude 50%.
If I'm wrong to do so, it seems I'm wrong to reduce my confidence in P3 in the first place.
Which I guess is possible, though I do seem to do it quite naturally.
But given NOT(P3), I genuinely don't see why I should believe P(P2) > P(P1).
If I offered you a bet of 100 big currency units, if it turns out your monitor was bamboozled, you'd win triple! Or 50x! Wouldn't you accept, based on your "well, 50% chance of winning" assessment?
Just to be clear: you're offering me (300BCUs if P1, -100BCUs if P2)?
And you're suggesting I shouldn't take that bet, because P(P2) >> P(P1)?
It seems to follow from that reasoning that I ought to take (300BCUs if P2, -100BCUs if P1).
Would you suggest I take that bet?
Anyway, to answer your question: I wouldn't take either bet if offered, because of game-theoretical considerations... that is, the moment you offer me the bet, that's evidence that you expect to gain by the bet, which given my ignorance is enough to make me confident I'll lose by accepting it. But if I eliminate those concerns, and I am confident in P3, then I'll take either bet if offered. (Better yet, I'll take both bets, and walk away with 200 BCUs.)
Replies from: Vaniver, Kawoomba↑ comment by Vaniver · 2013-05-24T18:40:58.005Z · LW(p) · GW(p)
I ought to assign high probability to P3 (since it's true of most statements that it's possible to construct) and consequently low probabilities to P1 and P2.
I don't think the logic in this part follows. Some of it looks like an issue of precision: it's not clear to me that P1, P2, and P3 are mutually exclusive. What about cases where 'my monitor is bamboozled' and 'my monitor is not bamboozled' are both true, like sets that are both closed and open? Later, it looks like you want P3 to be the reverse of what you have written; there you seem to want P3 to be the proposition that it is a well-formed statement with a binary truth value.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-24T19:04:08.129Z · LW(p) · GW(p)
Blech; you're right, I incompletely transitioned from an earlier formulation and didn't shift signs all the way through. I think I fixed it now.
Your larger point about (p1 and p2) being just as plausible a priori is certainly true, and you're right that makes "and consequently low probabilities to P1 and P2" not follow from a properly constructed version of P3.
I'm not sure that makes a difference, though perhaps it does. It still seems that P(P1) > P(P2) is no more likely, given complete ignorance of the referent for "bamboozle", than P(P1) < P(P2)... and it still seems that knowing that otherwise sane people talk about whether monitors are bamboozled or not quickly makes me confident that P(P1 XOR P2) >> P((P1 AND P2) OR NOT(P1 OR P2))... though perhaps it ought not do so.
↑ comment by Kawoomba · 2013-05-24T20:25:57.772Z · LW(p) · GW(p)
Let's lift the veil: "bamboozledness" is a placeholder for ... phlogiston (a la "contains more than 30ppm phlogiston" = "bamboozled").
Looks like you now assign a probability of 0.5 to phlogiston, in your monitor, no less. (No fair? It could also have been something meaningful, but in the 'blue balls' scenario we're asking for the prior of a concept which you've never even seen mentioned as such (and hopefully never experienced); what are the chances that a randomly picked concept is a sensible addition to your current world view?)
That's the missing ingredient, the improbability of a hitherto unknown concept belonging to a sensible model of reality:
P("Monitor contains phlogiston" | "phlogiston is the correct theory" Λ "I have no clue about the theory other than it being correct and wouldn't know the first thing of how to guess what contains phlogiston") could be around 0.5 (although not necessarily exactly 0.5 based on complexity considerations).
However, what you're faced with isn't "... given that colors exist", "... given that bamboozledness exists", "... given that phlogiston exists" (in each case, 'that the model which contains concepts corresponding to the aforementioned corresponds to reality'), it is simply "what is the chance that there is phlogiston in your computer?" (Wait, now it's in my computer too! Not only my monitor?)
Since you have no (little - 'read about it on the internet') reason to assume that phlogiston / blue is anything meaningful - and especially given that in the scenario you aren't even asked about the color of a ball, but simply for the prior of a statement which relies upon the unknown concept of 'blue', a concept corresponding to some physical property which isn't part of your current model - any option which contains "phlogiston is nonsense"/"blue is nonsense", in the form of "monitor does not contain phlogiston", "ball is not blue", is vastly favored.
I posed the bet to show that you wouldn't actually assign a 0.5 probability to a randomly picked concept being part of your standard model. Heads says this concept called "blue" exists, tails it doesn't. Since you like memes. Maybe it helps not to think about the ball, but to think about what it would mean for the ball to be "blue". Instead of "Is the ball blue?", think "does blue extend my current model of reality in a meaningful way", then replace blue with bamboozled.
But I guess I do see where you're coming from, more so than I did before. The all-important question is, "does that new attribute you know nothing about have to correspond to any physically existing quantity, can you assume that it extends/replaces your current model of the world, and do you thus need to factor in the improbability of invalidating your current model into assigning the probabilities of the new attribute". Would that be accurate?
Anyway, to answer your question: I wouldn't take either bet if offered, because of game-theoretical considerations...
Enter Psi, Omega's retarded, ahem, special little brother. It just goes around offering random bets, with no background knowledge whatsoever, so you're free to disregard the "why is he offering a bet in the first place" reservations.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-24T20:59:24.921Z · LW(p) · GW(p)
you have no (little - 'read about it on the internet') reason to assume that phlogiston / blue is anything meaningful,
Well, meaningfulness is the crux, yes.
As I said initially, when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence seems to grow pretty quickly that the word isn't just gibberish... that there is some attribute to which the word refers, such that (P1 XOR P2) is true. When I listen to a conversation about bamboozled computers, I seem to generally accept the premise that bamboozled computers are possible pretty quickly, even if I haven't the foggiest clue what a bamboozled computer (or monitor, or ball, or hot thing, or whatever) is. It would surprise me if this were uncommon.
And, sure, perhaps I ought to be more skeptical about the premise that people are talking about anything meaningful at all. (I'm not certain of this, but there's certainly precedent for it.)
any option which contains "phlogiston is nonsense"/"blue is nonsense", in the form of "monitor does not contain phlogiston", "ball is not blue"
Here's where you lose me. I don't see how an option can contain "X is nonsense" in the form of "monitor does not contain X". If X is nonsense, "monitor does not contain X" isn't true. "monitor contains X" isn't true either. That's kind of what it means for X to be nonsense.
The all-important question is, "does that new attribute you know nothing about have to correspond to any physically existing quantity, can you assume that it extends/replaces your current model of the world, and do you thus need to factor in the improbability of invalidating your current model into assigning the probabilities of the new attribute". Would that be accurate?
I'm not sure. The question that seems important here is "how confident am I, about that new attribute X, that a system either has X or lacks X but doesn't do both or neither?" Which seems to map pretty closely to "how confident am I that 'X' is meaningful?" Which may be equivalent to your formulation, but if so I don't follow the equivalence.
Enter Psi, Omega's retarded, ahem, special little brother.
(nods) As I said in the first place, if I eliminate the game-theoretical concerns, and I am confident that "bamboozled" isn't just meaningless gibberish, then I'll take either bet if offered.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-24T21:27:18.978Z · LW(p) · GW(p)
You're just trying to find out whether X is binary, then - if it is binary - you'd assign even odds, in the absence of any other information.
However, it's not enough for "blue" - "not blue" to be established as a binary attribute, we also need to weigh in the chances of the semantic content (the definition of 'blue', unknown to us at that time) corresponding to any physical attributes.
Binarity isn't the same as "describes a concept which translates to reality". When you say meaningful, you (I think) refer to the former, while I refer to the latter. With 'nonsense' I didn't mean 'non-binary', but instead 'if you had the actual definition of the color attribute, you'd find that it probably doesn't correspond to any meaningful property of the world, and as such not having the property is vastly more likely'. That would be: "the ball isn't blue (because nothing is blue; blue is e.g. about having blue-quarks, which don't model reality)".
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-24T22:37:51.221Z · LW(p) · GW(p)
Binarity isn't the same as "describes a concept which translates to reality".
I'll accept that in general.
When you say meaningful, you (I think) refer to the former, while I refer to the latter.
In this context, I fail to understand what is entailed by that supposed difference.
Put another way: I fail to understand how "X"/"not X" can be a binary attribute of a physical system (a ball, a monitor, whatever) if X doesn't correspond to a physical attribute, or a "concept which translates to reality". Can you give me an example of such an X?
Put yet another way: if there's no translation of X to reality, if there's no physical attribute to which X corresponds, then it seems to me neither "X" nor "not X" can be true or meaningful. What in the world could they possibly mean? What evidence would compel confidence in one proposition or the other?
Looked at yet a different way...
case 1: I am confident phlogiston doesn't exist.
I am confident of this because of evidence related to how friction works and how combustion works, because burning things can cause their mass to increase, and for various other reasons. (P1) "My stove has phlogiston" is meaningful -- for example, I know what it would be to test for its truth or falsehood -- and based on other evidence I am confident it's false. (P2) "My stove has no phlogiston" is meaningful, and based on other evidence I am confident it's true.
If you remove all my evidence for the truth or falsehood of P1/P2, but somehow preserve my confidence in the meaningfulness of "phlogiston", you seem to be saying that my P(P1) << P(P2).
case 2: I am confident photons exist. Similarly to P1/P2, I'm confident that P3 ("My lightbulb generates photons") is true, and P4 ("My lightbulb generates no photons") is false, and "photon" is meaningful. Remove my evidence for P3/P4 but preserve my confidence in the meaningfulness of "photon", should my P(P3) << P(P4)? Or should my P(P3) >> P(P4)?
I don't see any grounds for justifying either. Do you?
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-25T06:19:29.108Z · LW(p) · GW(p)
I don't see any grounds for justifying either. Do you?
Yes. P1 also entails that phlogiston theory is an accurate descriptor of reality - after all, it is saying your stove has phlogiston. P2 does not entail that phlogiston theory is an accurate descriptor of reality. Rejecting that your stove contains phlogiston can be done on the basis of "chances are nothing contains phlogiston, not knowing anything about phlogiston theory, it's probably not real, duh", which is why P(P2)>>P(P1).
The same applies to case 2, knowing nothing about photons, you should always go with the proposition (in this case P4) which is also supported by "photons are an imaginary concept with no equivalent in reality". For P3 to be correct, photons must have some physical equivalent on the territory level, so that anything (e.g. your lightbulb) can produce photons in the first place. For a randomly picked concept (not picked out of a physics textbook), the chances of that are negligible.
Take some random concept, such as "there are 17 kinds of quark, if something contains the 13th quark - the blue quark - we call it 'blue'". Then affirming it is blue entails affirming the 17-kinds-of-quark theory (quite the burden, knowing nothing about its veracity), while saying "it is not blue = it does not contain the 13th quark, because the 17-kinds-of-quark theory does not describe our reality" is the much favored default case.
A not-yet-considered randomly chosen concept (phlogiston, photons) does not have 50-50 odds of accurately describing reality, its odds of doing so - given no evidence - are vanishingly small. That translates to
P("stove contains phlogiston") being much smaller than P("stove does not contain phlogiston"). Reason (rephrasing the above argument): rejecting phlogiston theory as an accurate map of the territory strengthens your "stove does not contain phlogiston (... because phlogiston theory is probably not an an accurate map, knowing nothing about it)"
even if
P("stove contains phlogiston given phlogiston theory describes reality") = P("stove does not contain phlogiston given phlogiston theory describes reality") = 0.5
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-25T17:40:48.041Z · LW(p) · GW(p)
I agree that if "my stove does not contain X" is a meaningful and accurate thing to say even when X has no extension into the real world at all, then P("my stove does not contain X") >>> P("my stove contains X") for an arbitrarily selected concept X, since most arbitrarily selected concepts have no extension into the real world.
I am not nearly as convinced as you sound that "my stove does not contain X" is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I'm not sure there's anything more to say about that than we've already said.
Also, thinking about it, I suspect I'm overly prone to assuming that X has some extension into the real world when I hear people talking about X.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-26T12:36:10.757Z · LW(p) · GW(p)
I'm glad we found common ground.
I am not nearly as convinced as you sound that "my stove does not contain X" is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I'm not sure there's anything more to say about that than we've already said.
Consider e.g. "There is not a magical garden gnome living under my floor", "I don't emit telepathic brain waves" or "There is no Superman-like alien on our planet", which to me all are meaningful and accurate, even if they all contain concepts which do not (as far as we know) extend into the real world. Can an atheist not meaningfully say that "I don't have a soul"?
If I adopted your point of view (i.e. talking about magical garden gnomes living or not living under my floor makes no (very little) sense either way since they (probably) cannot exist), then my confidence for or against such a proposition would be equal but very low (no 50% in that case either). Except if, as you say, you're assigning a very high degree of belief into "concept extends into the real world" as soon as you hear someone talk about it.
"This is a property which I know nothing about but of which I am certain that it can apply to reality" is the only scenario in which you could argue for a belief of 0.5. It is not the scenario of the original post.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-05-26T15:47:54.644Z · LW(p) · GW(p)
The more I think about this, the clearer it becomes that I'm getting my labels confused with my referents and consequently taking it way too much for granted that anything real is being talked about at all.
"Given that some monitors are bamboozled (and no other knowledge), is my monitor bamboozled?" isn't the same question as "Given that "bamboozled" is a set of phonemes (and no other knowledge), is "my monitor is bamboozled" true?" or even "Given that English speakers sometimes talk about monitors being bamboozled (ibid), is my monitor bamboozled?" and, as you say, neither the original blue-ball case nor the bamboozled-computer case is remotely like the first question.
So, yeah: you're right, I'm wrong. Thanks for your patience.
↑ comment by Watercressed · 2013-05-23T16:29:56.389Z · LW(p) · GW(p)
That depends on the knowledge that the AI has. If B9 had deduced the existence of different light wavelengths, and knew how blue corresponded to a particular range, and how human eyes see stuff, the probability would be something close to the range of colors that would be considered blue divided by the range of all possible colors. If B9 has no idea what blue is, then it would depend on priors for how often statements end up being true when B9 doesn't know their meaning.
Without knowing what B9's knowledge is, the problem is under-defined.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-24T03:26:43.719Z · LW(p) · GW(p)
Very low, because B9 has to hypothesize a causal framework involving colors without any way of observing anything but quantitatively varying luminosities. In other words, they must guess that they're looking at the average of three variables instead of at one variable. This may sound simple but there are many other hypotheses that could also be true, like two variables, four variables, or most likely of all, one variable. B9 will be surprised. This is right and proper. Most physics theories you make up with no evidence behind them will be wrong.
Replies from: BrienneYudkowsky, ialdabaoth↑ comment by LoganStrohl (BrienneYudkowsky) · 2013-05-24T04:30:33.697Z · LW(p) · GW(p)
I think I'm confused. We're talking about something that's never even heard of colors, so there shouldn't be anything in the mind of the robot related to "blue" in any way. This ought to be like the prior probability from your perspective that zorgumphs are wogle. Now that I've said the words, I suppose there's some very low probability that zorgumphs are wogle, since there's a probability that "zorgumph" refers to "cats" and "wogle" to "furry". But when you didn't even have those words in your head anywhere, how could there have been a prior? How could B9's prior be "very low" instead of "nonexistent"?
Replies from: hairyfigment, TheOtherDave↑ comment by hairyfigment · 2013-05-24T05:35:25.274Z · LW(p) · GW(p)
Eliezer seems to be substituting the actual meaning of "blue". Now, if we present the AI with the English statement and ask it to assign a probability...my first impulse is to say it should use a complexity/simplicity prior based on length. This might actually be correct, if shorter message-length corresponds to greater frequency of use. (ETA that you might not be able to distinguish words within the sentence, if faced with a claim in a totally alien language.)
↑ comment by TheOtherDave · 2013-05-24T07:08:17.226Z · LW(p) · GW(p)
Well, if nothing else, when I ask B9 "is your ball blue?", I'm only providing a finite amount of evidence thereby that "blue" refers to a property that balls can have or not have. So if B9's priors on "blue" referring to anything at all are vastly low, then B9 will continue to believe, even after being asked the question, that "blue" doesn't refer to anything. Which doesn't seem like terribly sensible behavior. That sets a floor on how low the prior on "'blue' is meaningful" can be.
↑ comment by ialdabaoth · 2013-05-24T05:29:18.292Z · LW(p) · GW(p)
Thank you! This helps me home in on a point that I am sorely confused about, which BrienneStrohl just illustrated nicely:
You're stating that B9's prior that "the ball is blue" is 'very low', as opposed to {Null / NaN}. And that likewise, my prior that "zorgumphs are wogle" is 'very low', as opposed to {Null / NaN}.
Does this mean that my belief system actually contains an uncountable infinitude of priors, one for each possible framing of each possible cluster of facts?
Or, to put my first question more succinctly, what priors should I assign potential facts that my current gestalt assigns no semantic meaning to whatsoever?
Replies from: Eliezer_Yudkowsky, Kawoomba↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-24T13:15:20.585Z · LW(p) · GW(p)
"The ball is blue" only gets assigned a probability by your prior when "blue" is interpreted, not as a word that you don't understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn't previously know about, plus the one number you do know about. It's like imagining that there's a fifth force appearing in quark-quark interactions a la the "Alderson Drive". You don't need to have seen the fifth force for the hypothesis to be meaningful, so long as the hypothesis specifies how the causal force interacts with you.
If you restrain yourself to only finite sets of physical laws of this sort, your prior will be over countably many causal models.
Replies from: Watercressed, Vaniver↑ comment by Watercressed · 2013-05-24T14:43:22.122Z · LW(p) · GW(p)
Causal models are countable? Are irrational constants not part of causal models?
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-05-24T15:17:56.442Z · LW(p) · GW(p)
There are only so many distinct states of experience, so yes, causal models are countable. The set of all causal models is a set of functions that map K n-valued past experiential states into L n-valued future experiential states.
This is a monstrously huge number of functions in the set, but still countable, so long as K and L are at most countably infinite.
Note that this assumes that states of experience with zero discernible difference between them are the same thing - eg, if you come up with the same predictions using the first million digits of sqrt(2) and the irrational number sqrt(2), then they're the same model.
Replies from: Watercressed, Watercressed↑ comment by Watercressed · 2013-05-24T15:46:38.590Z · LW(p) · GW(p)
But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model than standard physics, even though they predict the same experiences. We can differentiate between them because Occam's Razor favors one over the other, and our experiences give us ample cause to trust Occam's Razor.
At first glance, it seems this gives us enough to diagonalize models--1 meter outside the horizon is different from model one, two meters is different from model two...
There might be a way to constrain this based on the models we can assign different probabilities to, given our knowledge and experience, which might get it down to countable numbers, but how to do it is not obvious to me.
↑ comment by Watercressed · 2013-05-24T16:15:32.135Z · LW(p) · GW(p)
Er, now I see that Eliezer's post is discussing finite sets of physical laws, which rules out the cosmological horizon diagonalization. But, I think this causal models as function mapping fails in another way: we can't predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can't put a bound on the number of values in the L states, the set is uncountable and so is the set of functions.
Replies from: ThrustVectoring↑ comment by ThrustVectoring · 2013-05-24T21:26:40.412Z · LW(p) · GW(p)
we can't predict the n in n-valued future experiential states.
What? Of course we can - it's much simpler with a computer program, of course. Suppose you have M bits of state data. There are 2^M possible states of experience. What I mean by n-valued is that there is a certain discrete set of possible experiences.
If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision.
Arbitrary, yes. Unbounded, no. It's still bounded by the amount of physical memory it can use to represent state.
Replies from: Watercressed↑ comment by Watercressed · 2013-05-24T21:41:53.888Z · LW(p) · GW(p)
In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade allowing it to access more than log n bits of memory. I don't know how this zero-probability assignment would be justified for any n - there's a non-zero probability that one's model of physics is completely wrong, and once that's gone, there's not much left to make something impossible.
↑ comment by Vaniver · 2013-05-24T19:26:10.390Z · LW(p) · GW(p)
"The ball is blue" only gets assigned a probability by your prior when "blue" is interpreted, not as a word that you don't understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn't previously know about, plus the one number you do know about.
Note that a conversant AI will likely have a causal model of conversations, and so there are two distinct things going on here- both "what are my beliefs about words that I don't understand used in a sentence" and "what are my beliefs about physics I don't understand yet." This split is a potential source of confusion, and the conversational model is one reason why the betting argument for quantifying uncertainties meets serious resistance.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-24T20:25:52.747Z · LW(p) · GW(p)
To me the conversational part of this seems way less complicated/interesting than the unknown causal models part - if I have any 'philosophical' confusion about how to treat unknown strings of English letters it is not obvious to me what it is.
↑ comment by Kawoomba · 2013-05-24T14:03:47.913Z · LW(p) · GW(p)
You can reserve some slice of your probability space for "here be dragons": the 1 - P("my current gestalt is correct"). Your countably many priors may fight over that real estate.
Also, if you demand that your models be computable (a good assumption, because if they aren't we're eff'ed anyways), there'll never be an uncountable infinitude of priors.
↑ comment by CCC · 2013-05-24T10:39:19.543Z · LW(p) · GW(p)
Before the camera gets switched, what prior probability does B9 assign to the possibility that its favorite ball is blue?
I'd imagine something vanishingly small. It would be like asking whether or not the ball is supercalifragilisticexpialidocious.
If B9 has recently been informed that 'blue' is a property, then the prior would be very low. Can balls even be blue? If balls can be blue, then what percentage of balls are blue? There is also a possibility that, if some balls can be blue, all balls are blue; so the probability distribution would have a very low mean but a very high standard deviation.
Any further refinement requires B9 to obtain additional information; if informed that balls can be blue, the odds go up; if informed that some balls are blue, the odds go up further; if further informed that not all balls are blue, the standard deviation drops somewhat. If presented with the luminance formula, the odds may go up significantly (it can't be used to prove blueness, but it can be used to limit the number of possible colours the ball can be, based on the output of the black-and-white camera).
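A rough sketch of that last step, assuming the Rec. 601 luma formula (the post's exact formula may differ) and a crude "blue" criterion invented for illustration:

```python
# Given a grayscale reading, enumerate a coarse grid of RGB triples and
# keep only those consistent with the observed luminance.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 (assumed)

observed, tolerance, step = 40, 2.0, 15  # hypothetical reading and grid

consistent = [
    (r, g, b)
    for r in range(0, 256, step)
    for g in range(0, 256, step)
    for b in range(0, 256, step)
    if abs(luma(r, g, b) - observed) <= tolerance
]
# Crude stand-in for "blue": the blue channel dominates both others.
blue = [c for c in consistent if c[2] > max(c[0], c[1])]
print(len(blue), "of", len(consistent), "consistent colours look blue")
```

The reading can't prove the ball is blue, but it prunes the hypothesis space, which is exactly what moves the odds.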
↑ comment by ThrustVectoring · 2013-05-24T15:21:38.908Z · LW(p) · GW(p)
I'd go down a level of abstraction about the camera in order to answer this question. You have a list of numbers, and you're told that five seconds from now this list of numbers is going to be replaced by a list of triplets, with the property that the average of each triplet is the same as the corresponding number in the list.
What is the probability you assign to "one of these triplets is within a certain range of RGB values?"
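A Monte Carlo sketch of that question - with the caveat that the answer depends entirely on the distribution assumed over triplets, which is the prior in disguise:

```python
import random

def p_blue_given_mean(v, trials=100_000):
    """Estimate P(triplet looks "blue") among triplets averaging v."""
    hits = total = 0
    for _ in range(trials):
        r, g = random.uniform(0, 255), random.uniform(0, 255)
        b = 3 * v - r - g          # forces the mean of (r, g, b) to be v
        if 0 <= b <= 255:          # discard infeasible draws
            total += 1
            hits += b > max(r, g)  # crude "blue" criterion (assumed)
    return hits / total

print(p_blue_given_mean(40))
```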
↑ comment by JeffJo · 2013-06-15T19:29:25.390Z · LW(p) · GW(p)
Since this discussion was reopened, I've spent some time - mostly while jogging - pondering and refining my stance on the points expressed. I just got around to writing them down. Since there is no other way to do it, I'll present them boldly, apologizing in advance if I seem overly harsh. There is no such intention.
1) "Accursed Frequentists" and "Self-righteous Bayesians" alike are right, and wrong. Probability is in your knowledge - or rather, the lack thereof - of what is in the environment. Specifically, it is the measure of the ambiguity in the situation.
2) Nothing is truly random. If you know the exact shape of a coin, its exact weight distribution, exactly how it is held before flipping, exactly what forces are applied to flip it, the exact properties of the air and air currents it tumbles through, and exactly how long it is in the air before being caught in your open palm, then you can calculate - not predict - whether it will show Heads or Tails. Any lack in this knowledge leaves multiple possibilities open, which is the ambiguity.
3) Saying "the coin is biased" is saying that there is an inherent propensity, over all of the ambiguous ways you could hold the coin, the ambiguous forces you could use to flip it, the ambiguous air properties, and the ambiguous tumbling times, for it to land one way or another. (Its shape and weight are fixed, so they are unambiguous even if they are not known, and are probably the source of this "inherent property.")
4) Your state of mind defines probability only in how you use it to define the ambiguities you are accounting for. Eliezer's frequentist is perfectly correct to say he needs to know the bias of this coin, since in his state of mind the ambiguity is what this biased coin will do. And Eliezer is also perfectly correct to say the actual bias is unimportant. His answer is 50%, since in his mind the ambiguity is what any biased coin will do. They are addressing different questions.
5) A simple change to the coin question puts Eliezer in the same "need the environment" situation he claims belongs only to the frequentist: Flip his coin twice. What probability are you willing to assign to getting the same result on both flips? (A sketch follows this list.)
6) The problem with the "B9" question discussed recently is that there is no framework to place the ambiguity within - no environmental circumstances that you can use to assess the probability.
7) The propensity for some frequentists to want probability to be "in the environment" is just a side effect of practical application. Say you want to evaluate a statistical question, such as the effectiveness of a drug. Drug effectiveness can vary with gender, age, race, and probably many other factors that are easily identified; that is, it is indeed "in the environment." You could ignore those possible differences, and get an answer that applies to a generic person just as Eliezer's answer applies to a generic biased coin. But it behooves you to eliminate whatever sources of ambiguity you easily can.
8) In geometry, "point" and "line" are undefined concepts. But we all have a pretty good idea what they are supposed to mean, and this meaning is fairly universal.
"Length" and "angle" are undefined measurements of what separates two different instances of "point" and "line," respectively. But again, we have a pretty clear idea of what is intended.
In probability, "outcome" is an undefined concept. But unlike geometry, where the presumed meaning is universal, the meaning of "outcome" is different for each ambiguous situation. An "event," however, is defined - as a set of outcomes.
"Relative likelihood" is an undefined measurement what separates two different instances of "event." And just like "length," we have a pretty clear idea of what it is supposed to mean. It expresses the relative chances that either event will occur in any expression of the ambiguities we consider.
9) "Probability" is just the likelihood relative to everything. As such, it represents the fractional chances of an event's occurrence. So if we can repeat the same ambiguities exactly, we expect the frequency to approach the probability. But note: this is not a definition of probability, as Bayesians insist frequentists think. It is a side effect of what we want "likelihood" to mean.
10) Eliezer misstated the "classic" two-child problem. The problem he stated is the one that corresponds to the usual solution, but oddly enough the usual solution is wrong for the question that is usually asked. And here I'm referring to, among others, Martin Gardner's version and Marilyn vos Savant's more famous version. The difference is that Eliezer asks the parent if there is a boy, but the classic version simply states that one child is a boy. Gardner changed his answer to 1/2 because, when the reason we have this information is not known, you can't implicitly assume that you will always know about the boy in a boy+girl family.
And the reason I bring this up, is because the "brain-teasing ability" of the problem derives more from effects of this implied assumption, than from any "tendency to think of probabilities as inherent properties of objects." This can be seen by restating the problem as a variation of Bertrand's Box Paradox:
The probability that, in a family of two children, both have the same gender is 1/2. But suppose you learn that one child is in scouts - but you don’t know if it is Boy Scouts or Girl Scouts. If it is Boy Scouts, those who answer the actual "classic" problem as Eliezer answered his variation will say the probability of two boys is 1/3. They'd say the same thing, about two girls, if it is Girl Scouts. So it appears you don’t even need to know what branch of Scouting it is to change the answer to 1/3.
The fallacy in this logic is the same as the reason Eliezer reformulated the problem: the answer is 1/3 only if you ask a question equivalent to "is at least one a boy," not if you merely learn that fact. And the "brain-teaser ability" is because people sense, correctly, that they have no new information in the "classic" version of the problem which would allow the change from 1/2 to 1/3. But they are told, incorrectly, that the answer does change.
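On point 5, a quick sketch (with two illustrative priors, not anything JeffJo specifies) of why two flips differ from one: for bias p, P(both flips match) = p^2 + (1-p)^2 = 1/2 + 2(p - 1/2)^2, which exceeds 1/2 for any biased coin, so the answer depends on your prior over the bias in a way the single-flip answer did not:

```python
import random

def p_match(sample_bias, trials=200_000):
    """Expected P(two flips agree) under a prior over the coin's bias."""
    total = 0.0
    for _ in range(trials):
        p = sample_bias()
        total += p ** 2 + (1 - p) ** 2  # P(HH) + P(TT) for this bias
    return total / trials

print(p_match(lambda: random.uniform(0, 1)))       # flat prior: ~2/3
print(p_match(lambda: random.choice((0.1, 0.9))))  # fixed-size bias: ~0.82
```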
comment by nikAleksandr · 2014-04-25T21:02:25.330Z · LW(p) · GW(p)
This was a very difficult concept for me, Eliezer. Not because I disagree with the Bayesian principle that uncertainty is in the mind, but because I lacked the inferential step to jump from that to why there were different probabilities depending on the question you asked.
Might a better (or additional) way to explain this be to point out an analogy to the differing probabilities of truth you might assign to a confirmed experimental hypothesis? One that was originally vague carries less weight when adjusting the overall probability of truth, versus one that was specific, which shifts the probability of truth further.
Hopefully I'm actually understanding this correctly at all.
comment by casebash · 2014-10-13T23:56:39.074Z · LW(p) · GW(p)
The problem with trying to split into "it must be the oldest child who is the boy" or "it must be the youngest child who is the boy" is that the two situations overlap. You need to split the situation into oldest only, youngest only, and both. If we made the ruling that "both" should be excluded, then we'd be able to complete the argument that there shouldn't be a difference between knowing that one child is a boy and knowing that the oldest child is a boy.
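A simulation sketch of this split, which also separates the two protocols JeffJo distinguishes above (being answered "yes" to "is at least one a boy?" versus happening to learn about one randomly chosen child):

```python
import random

def simulate(trials=200_000):
    asked = asked_bb = told = told_bb = 0
    for _ in range(trials):
        kids = [random.choice("BG"), random.choice("BG")]
        if "B" in kids:                 # answered "yes" to "any boys?"
            asked += 1
            asked_bb += kids == ["B", "B"]
        if random.choice(kids) == "B":  # one child mentioned at random
            told += 1
            told_bb += kids == ["B", "B"]
    print("P(two boys | answered yes):", asked_bb / asked)  # ~1/3
    print("P(two boys | mentioned boy):", told_bb / told)   # ~1/2

simulate()
```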
comment by Gelsamel · 2014-10-28T06:35:16.287Z · LW(p) · GW(p)
As an aside, I think it is equivocation to talk about this kind of probability as being the same kind of probability that quantum mechanics leads to. No, hidden variable theories are not really worth considering.
But projectivism has been written about for quite a long time (since at least the 1700s), and is very well known so I find it hard to believe that there are any significant proponents of 'frequentism' (as you call it).
To those who've not thought about it, everyday projectivism comes naturally, but it falls apart at the slightest consideration.
When it comes to Hempel's raven, though, even those who understand projectivism can have difficulty coming to terms with the probabilistic reality.
comment by vasaka · 2017-04-29T04:19:49.305Z · LW(p) · GW(p)
I think I can show how probability is not purely in the mind but is also an inherent property of things; bear with me.
Let's take the event of seeing snow outside. For simplicity, we know that snow is out there three months a year, in winter; that fact is well tested and repeats each year. That distribution of snowy days is a property of reality. When we come out of a bunker after spending an unknown amount of time there, we assign probability 1/4 to seeing snow, and that number is a function of our uncertainty about the date and our precise knowledge of when snow is out there. 1/4 is a precise description of reality if our scope is not just one day but a whole year. In this case we have a precise map, and our uncertainty is a lack of knowledge of our place on the map. We also know that if we have no date or season, there is no better prediction - and this is a property of things too.
Additionally, having the probability distribution, you can predict the accumulated effect of a series of events very well, and this ability to predict something precisely is an indication that you have grasped something about reality.
Returning to the coin: the 0.5 prediction for one throw is a function of our uncertainty, but our prediction of the sum of a long series, where 1 is heads and 0 is tails, is a result of our knowledge of the coin's properties, expressed as a probability.
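A sketch of that last point: a single flip is maximally uncertain, but the sum of a long series is tightly predictable from the distribution alone.

```python
import random

n = 10_000
# Twenty independent runs of n fair flips (1 = heads, 0 = tails).
sums = [sum(random.random() < 0.5 for _ in range(n)) for _ in range(20)]
print(min(sums), max(sums))  # every run lands within a few percent of 5000
```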
Replies from: ChristianKl, TheAncientGeek↑ comment by ChristianKl · 2017-04-29T06:42:50.343Z · LW(p) · GW(p)
The notion of probability to which you are pointing is the frequentist notion of probability. Eliezer favors the Bayesian notion of probability over the frequentist one.
1/4 is a precise description of reality if our scope is not just one day but a whole year.
That might be true, but a person who knows more about the weather might make a more accurate prediction about whether it snows. If I saw the weather report, I might conclude that it's p=0.2 that it snows today, even if over the whole year the distribution is that it snows on average every fourth day.
If I have more prior information I will predict a different probability that it actually snows.
↑ comment by TheAncientGeek · 2017-04-29T13:30:57.483Z · LW(p) · GW(p)
A statistical distribution is objective, and can be an element in a probability calculation, but is not itself probability.
Replies from: vasaka↑ comment by vasaka · 2017-04-30T16:56:54.338Z · LW(p) · GW(p)
Probability given data is an objective thing too. But the point I am making is that the probability you assign is a mix of objective and subjective: your exact data is the subjective part, the distribution is objective, and probability is a function of both.
comment by joshuabecker · 2018-08-24T01:33:39.792Z · LW(p) · GW(p)
E[x]=0.5
even for the frequentist, and that's what we make decisions with, so focusing on p(x) is a bit of misdirection. The whole frequentist-vs-Bayesian culture war is fake. They're both perfectly consistent with well-defined questions. (They have to be, because math works.)
And yes to everything else, except...
As to whether god plays dice with the universe... that is not in the scope of probability theory. It's math. Your Bayesian is really a pragmatist, and your frequentist is a straw person.
Great post!
comment by chris.oliver.313 · 2023-05-19T21:21:21.860Z · LW(p) · GW(p)
It is a mind game, but not the one you're claiming imo. Probabilities are a game about choices, aka co-products. There are lots of ways to specify the alternatives in a co-product. And once you've done so, you can create an instance of that co-product by injecting one of its constructors. A co-product is a type, and its constructors create instances of that type. So frequentists count up the instances and then compare the relative frequency. Your mind games are just silly ways of defining different co-products using hypothetical knowledge or not. Differences in knowledge imply different definitions of alternatives ("Knowing" something subtracts the alternatives), which defines different types. But obviously you don't actually need silly anecdotes about what somebody knows or doesn't know to define different co-products. You could do it with toothpicks of different lengths, or the color of balls, or whatever.
comment by Polkovnik_TrueForm (yurii-burak) · 2023-12-20T15:41:30.417Z · LW(p) · GW(p)
I rushed to post an angry comment about how it is all wrong, but a few seconds after posting the comment (oops) I understood. I've known a great example since school genetics: when two heterozygotes cross (Aa is crossed with Aa), the frequency of homozygotes among the descendants with the dominant trait is 1/3. AA, Aa, aA, aa (aa may never survive to adulthood; or AA may not survive; or both survive, but we aren't interested in them).
There may be something that influences the 1:2:1 proportion (perhaps only on one side?), but that's a "you flip a loaded coin - what's your bet on it falling heads?" case.
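A short enumeration of that cross, just to make the 1/3 concrete:

```python
from itertools import product

offspring = [a + b for a, b in product("Aa", "Aa")]  # AA, Aa, aA, aa
dominant = [o for o in offspring if "A" in o]        # shows dominant trait
print(dominant.count("AA") / len(dominant))          # 1/3
```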