# Probability is Subjectively Objective

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-14T09:16:50.000Z · LW · GW · Legacy · 71 comments

**Followup to**: Probability is in the Mind

"Reality is that which, when you stop believing in it, doesn't go away."

—Philip K. Dick

There are two kinds of Bayesians, allegedly. Subjective Bayesians believe that "probabilities" are degrees of uncertainty existing in our minds; if you are uncertain about a phenomenon, that is a fact about your state of mind, not a property of the phenomenon itself; probability theory constrains the logical coherence of uncertain beliefs. Then there are objective Bayesians, who... I'm not quite sure what it means to be an "objective Bayesian"; there are multiple definitions out there. As best I can tell, an "objective Bayesian" is anyone who uses Bayesian methods and isn't a subjective Bayesian.

If I recall correctly, E. T. Jaynes, master of the art, once described himself as a subjective-objective Bayesian. Jaynes certainly believed very firmly that probability was in the mind; Jaynes was the one who coined the term Mind Projection Fallacy. But Jaynes also didn't think that this implied a license to make up whatever priors you liked. There was only one *correct* prior distribution to use, given your state of partial information at the start of the problem.

How can something be in the mind, yet still be objective?

It appears to me that a good deal of philosophical maturity consists in being able to keep separate track of nearby concepts, without mixing them up.

For example, to understand evolutionary psychology, you have to keep separate track of the psychological purpose of an act, and the evolutionary pseudo-purposes of the adaptations that execute as the psychology; this is a common failure of newcomers to evolutionary psychology, who read, misunderstand, and thereafter say, "You think you love your children, but you're just trying to maximize your fitness!"

What is it, exactly, that the terms "subjective" and "objective" mean? Let's say that I hand you a sock. Is it a subjective or an objective sock? You believe that 2 + 3 = 5. Is *your belief* subjective or objective? What about two plus three *actually* equaling five—is that subjective or objective? What about a specific act of adding two apples and three apples and getting five apples?

I don't intend to confuse you in shrouds of words; but I do mean to point out that, while you may feel that you know very well what is "subjective" or "objective", you might find that you have a bit of trouble saying out loud what those words mean.

Suppose there's a calculator that computes "2 + 3 = 5". We punch in "2", then "+", then "3", and lo and behold, we see "5" flash on the screen. We accept this as *evidence* that 2 + 3 = 5, but we wouldn't say that the calculator's physical output *defines* the answer to the question 2 + 3 = ?. A cosmic ray could strike a transistor, which might give us misleading evidence and cause us to believe that 2 + 3 = 6, but it wouldn't affect the *actual* sum of 2 + 3.

Which proposition is common-sensically true, but philosophically interesting: while we can easily point to the physical location of a symbol on a calculator screen, or observe the result of putting two apples on a table followed by another three apples, it is rather harder to track down the whereabouts of 2 + 3 = 5. (Did you look in the garage?)

But let us leave aside the question of *where* the fact 2 + 3 = 5 is located—in the universe, or somewhere else—and consider the assertion that the proposition is "objective". If a cosmic ray strikes a calculator and makes it output "6" in response to the query "2 + 3 = ?", and you add two apples to a table followed by three apples, then you'll still see five apples on the table. If you do the calculation in your own head, expending the necessary computing power—we assume that 2 + 3 is a very difficult sum to compute, so that the answer is not immediately obvious to you—then you'll get the answer "5". So the cosmic ray strike didn't change anything.

And similarly—exactly similarly—what if a cosmic ray strikes a neuron inside your brain, causing you to compute "2 + 3 = 7"? Then, adding two apples to three apples, you will expect to see seven apples, but instead you will be surprised to see five apples.

If instead we found that no one was ever mistaken about addition problems, and that, moreover, you could change the answer by an act of will, then we might be tempted to call addition "subjective" rather than "objective". I am not saying that this is *everything* people mean by "subjective" and "objective", just pointing to one aspect of the concept. One might summarize this aspect thus: "If you can change something by thinking differently, it's subjective; if you can't change it by anything you do strictly inside your head, it's objective."

Mind is not magic. Every act of reasoning that we human beings carry out, is *computed within* some particular human brain. But not every computation is *about* the state of a human brain. Not every thought that you think is *about* something that can be changed by thinking. Herein lies the opportunity for confusion-of-levels. The quotation is not the referent. If you are going to consider thoughts as referential at all—if not, I'd like you to explain the mysterious correlation between my thought "2 + 3 = 5" and the observed behavior of apples on tables—then, while the quoted thoughts will always change with thoughts, the referents *may or may not* be entities that change with changing human thoughts.

The calculator computes "What is 2 + 3?", not "What does this calculator compute as the result of 2 + 3?" The answer to the former question is 5, but if the calculator were to ask the latter question instead, the result could self-consistently be anything at all! If the calculator returned 42, then indeed, "What does this calculator compute as the result of 2 + 3?" would in fact be 42.

So just because a computation takes place inside your brain, does not mean that the computation *explicitly mentions* your brain, that it has your brain as a *referent,* any more than the calculator mentions the calculator. The calculator does not attempt to contain a representation of itself, only of numbers.

Indeed, in the most straightforward implementation, the calculator that asks "What does this calculator compute as the answer to the query 2 + 3 = ?" will *never* return a result, just simulate itself simulating itself until it runs out of memory.
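A toy sketch of the contrast (my own illustration; the function names are made up): the straightforward query refers only to numbers and terminates at once, while the self-referential query, implemented straightforwardly, answers by simulating itself simulating itself until the stack runs out.

```python
import sys

def straightforward_sum(a, b):
    # "What is 2 + 3?" -- refers only to numbers, terminates immediately.
    return a + b

def self_referential_sum(a, b):
    # "What does this calculator compute as the answer to 2 + 3?"
    # -- the most straightforward implementation answers by simulating
    # itself, which simulates itself, and never returns.
    return self_referential_sum(a, b)

print(straightforward_sum(2, 3))  # 5

sys.setrecursionlimit(100)  # keep the inevitable failure small
try:
    self_referential_sum(2, 3)
except RecursionError:
    print("ran out of memory simulating itself")
```
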

But if you punch the keys "2", "+", and "3", and the calculator proceeds to compute "What do I output when someone punches '2 + 3'?", the resulting computation does have one interesting characteristic: the *referent* of the computation is highly subjective, since it depends on the computation, and can be made to be anything just by changing the computation.

Is probability, then, subjective or objective?

Well, probability is computed within human brains or other calculators. A probability is a state of partial information that is possessed by you; if you flip a coin and press it to your arm, the coin is showing heads or tails, but you assign the probability 1/2 until you reveal it. A friend, who got a tiny but not fully informative peek, might assign a probability of 0.6.
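To make the friend's 0.6 concrete, here is one toy model of the peek that I'm assuming (the post doesn't spell one out): the glimpse reports the true face with probability 0.6, and a Bayesian update from the shared 1/2 prior then yields exactly 0.6.

```python
def posterior_heads(prior_heads, peek_says_heads, peek_accuracy):
    """P(heads | peek), for a peek that reports the true face
    with probability peek_accuracy."""
    if peek_says_heads:
        like_heads, like_tails = peek_accuracy, 1 - peek_accuracy
    else:
        like_heads, like_tails = 1 - peek_accuracy, peek_accuracy
    joint_heads = prior_heads * like_heads
    joint_tails = (1 - prior_heads) * like_tails
    return joint_heads / (joint_heads + joint_tails)

print(posterior_heads(0.5, True, 0.6))  # 0.6: the friend's partial information
print(posterior_heads(0.5, True, 0.5))  # 0.5: an uninformative peek leaves you at 1/2
```

The same coin, showing the same face, gets two different probabilities from two observers, because probability tracks the observer's information, not the coin.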

So can you make the probability of winning the lottery be anything you like?

Forget about many-worlds for the moment—you should almost always be able to forget about many-worlds—and pretend that you're living in a single Small World where the lottery has only a single outcome. You will nonetheless have a need to call upon probability. Or if you prefer, we can discuss the ten trillionth decimal digit of pi, which I believe is not yet known. (If you are foolish enough to refuse to assign a probability distribution to this entity, you might pass up an excellent bet, like betting $1 to win $1000 that the digit is not 4.) Your uncertainty is a state of your mind, of partial information that you possess. Someone else might have different information, complete or partial. And the entity itself will only ever take on a single value.
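A quick sketch of the arithmetic behind "excellent bet" (the stakes are from the post; the uniform one-in-ten prior over the unknown digit is my assumption):

```python
def expected_value(p_win, stake, payout):
    """Expected profit of risking `stake` to win `payout`."""
    return p_win * payout - (1 - p_win) * stake

# Uniform prior over digits: P(digit == 4) = 0.1, so P(not 4) = 0.9.
print(expected_value(0.9, stake=1, payout=1000))  # ~899.9, an excellent bet

# You'd need P(not 4) below about 1/1001 before the bet turned unprofitable.
print(expected_value(1 / 1001, stake=1, payout=1000))  # ~0: the break-even point
```

Refusing to assign any probability distribution leaves you with no way to see that the bet is worth taking.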

So can you make the probability of winning the lottery, or the probability of the ten trillionth decimal digit of pi equaling 4, be anything you like?

You might be tempted to reply: "Well, since I *currently* think the probability of winning the lottery is one in a hundred million, then obviously, I will *currently* expect that assigning any other probability than this to the lottery, will decrease my expected log-score—or if you prefer a decision-theoretic formulation, I will expect this modification to myself to decrease expected utility. So, obviously, I will not choose to modify my probability distribution. It wouldn't be reflectively coherent."
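The log-score claim can be checked directly; it is the standard proper-scoring-rule fact that, by the lights of your current distribution, honest reporting scores best in expectation (the numbers below are illustrative):

```python
from math import log

def expected_log_score(p_believed, p_reported):
    """Expected log-score of reporting p_reported for an event
    you currently assign probability p_believed."""
    return (p_believed * log(p_reported)
            + (1 - p_believed) * log(1 - p_reported))

p = 1e-8  # your current probability of winning the lottery
honest = expected_log_score(p, p)

# Every other report looks worse, as judged by the distribution you have now.
for q in (1e-9, 1e-7, 0.5, 0.9):
    assert expected_log_score(p, q) < honest
```

Of course, this only shows that your current distribution endorses itself, which is exactly the reflective coherence the next paragraph pokes at.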

So reflective coherency is the goal, is it? Too bad you weren't born with a prior that assigned probability 0.9 to winning the lottery! Then, by exactly the same line of argument, you wouldn't want to assign any probability except 0.9 to winning the lottery. And you would still be reflectively coherent. And you would have a 90% probability of winning millions of dollars! Hooray!

"No, then I would *think* I had a 90% probability of winning the lottery, but *actually,* the probability would only be one in a hundred million."

Well, of course *you* would be expected to say that. And if you'd been born with a prior that assigned 90% probability to your winning the lottery, you'd consider an alleged probability of 10^-8, and say, "No, then I would *think* I had almost no probability of winning the lottery, but *actually,* the probability would be 0.9."

"Yeah? Then just modify your probability distribution, and buy a lottery ticket, and then wait and see what happens."

What happens? Either the ticket will win, or it won't. That's what will happen. We won't get to see that some particular probability was, in fact, the exactly right probability to assign.

"Perform the experiment a hundred times, and—"

Okay, let's talk about the ten trillionth digit of pi, then. Single-shot problem, no "long run" you can measure.

Probability is subjectively objective: Probability exists in your mind: if you're ignorant of a phenomenon, that's an attribute of you, not an attribute of the phenomenon. Yet it will seem to you that you can't change probabilities by wishing.

You could make yourself compute something *else,* perhaps, *rather than* probability. You could compute "What do I say is the probability?" (answer: anything you say) or "What do I wish were the probability?" (answer: whatever you wish) but these things are not the *probability,* which is subjectively objective.

The thing about subjectively objective quantities is that they *really do* seem objective to you. You don't look them over and say, "Oh, well, of course I don't want to modify my own probability estimate, because no one can just modify their probability estimate; but if I'd been born with a different prior I'd be saying something different, and I wouldn't want to modify that either; and so none of us is superior to anyone else." That's the way a subjectively *subjective* quantity would seem.

No, it will seem to you that, if the lottery sells a hundred million tickets, and you don't get a peek at the results, then the probability of a ticket winning, *is* one in a hundred million. And that you could be born with different priors but that wouldn't give you any better odds. And if there's someone next to you saying the same thing about *their* 90% probability estimate, you'll just shrug and say, "Good luck with that." You won't expect them to *win.*

Probability is subjectively *really* objective, not just subjectively *sort of* objective.

Jaynes used to recommend that no one ever write out an unconditional probability: That you never, ever write simply P(A), but always write P(A|I), where I is your prior information. I'll use Q instead of I, for ease of reading, but Jaynes used I. Similarly, one would not write P(A|B) for the posterior probability of A given that we learn B, but rather P(A|B,Q), the probability of A given that we learn B and had background information Q.

This is good advice in a purely pragmatic sense, when you see how many false "paradoxes" are generated by accidentally using different prior information in different places.

But it also makes a deep philosophical point as well, which I never saw Jaynes spell out explicitly, but I think he would have approved: *there is no such thing as a probability that isn't in any mind.* Any mind that takes in evidence and outputs probability estimates of the next event, remember, can be viewed as a prior—so there is no probability without priors/minds.

You can't unwind the Q. You can't ask "What is the *unconditional* probability of our background information being true, P(Q)?" To make that estimate, you would still need *some* kind of prior. No way to unwind back to an ideal ghost of perfect emptiness...

You might argue that you and the lottery-ticket buyer do not really have a disagreement about *probability.* You say that the probability of the ticket winning the lottery is one in a hundred million given your prior, P(W|Q1) = 10^-8. The other fellow says the probability of the ticket winning given his prior is P(W|Q2) = 0.9. Every time *you* say "The probability of X is Y", you really mean, "P(X|Q1) = Y". And when *he* says, "No, the probability of X is Z", he really *means,* "P(X|Q2) = Z".

Now you might, if you traced out his mathematical calculations, agree that, indeed, the conditional probability of the ticket winning, given his weird prior is 0.9. But you wouldn't agree that "the probability of the ticket winning" is 0.9. Just as he wouldn't agree that "the probability of the ticket winning" is 10^-8.
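The two tracings-out can be sketched in Jaynes' notation; the likelihoods for some shared evidence B are made-up numbers, chosen only so that both parties agree on each other's arithmetic while still disagreeing about "the probability":

```python
def posterior(prior_w, like_given_w, like_given_not_w):
    """P(W | B, Q): update a prior P(W | Q) on shared evidence B."""
    joint_w = prior_w * like_given_w
    joint_not_w = (1 - prior_w) * like_given_not_w
    return joint_w / (joint_w + joint_not_w)

# Same evidence B for both -- say, an observation twice as likely if the ticket wins.
like_w, like_not_w = 0.02, 0.01

p_q1 = posterior(1e-8, like_w, like_not_w)  # your prior Q1
p_q2 = posterior(0.9, like_w, like_not_w)   # his prior Q2

print(p_q1)  # ~2e-8:  P(W | B, Q1)
print(p_q2)  # ~0.947: P(W | B, Q2)
```

Both of you can verify both calculations; the disagreement survives because each of you plugs in a different Q, and there is no Q-free calculation to appeal to.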

Even if the two of you refer to different mathematical calculations when you say the word "probability", *you* don't think that puts you on equal ground, neither of you being better than the other. And neither does he, of course.

So you see that, subjectively, probability really *does* feel objective—even after you have subjectively taken all apparent subjectivity into account.

And this is not mistaken, because, by golly, the probability of winning the lottery really *is* 10^-8, not 0.9. It's not as if you're doing your probability calculation *wrong,* after all. If you weren't worried about being fair or about justifying yourself to philosophers, if you only wanted to get the correct answer, your betting odds would be 10^-8.

Somewhere out in mind design space, there's a mind with any possible prior; but that doesn't mean that you'll say, "All priors are created equal."

When you judge those alternate minds, you'll do so using your own mind—your own beliefs about the universe—your own posterior that came out of your own prior, your own posterior probability assignments P(X|A,B,C,...,Q1). But there's nothing wrong with that. It's not like you could judge using something other than yourself. It's not like you could have a probability assignment without any prior, a degree of uncertainty that isn't in any mind.

And so, when all that is said and done, it still seems like the probability of winning the lottery really *is* 10^-8, not 0.9. No matter what other minds in design space say differently.

Which shouldn't be surprising. When you compute probabilities, you're thinking about lottery balls, not thinking about brains or mind designs or other people with different priors. Your probability computation makes no mention of that, any more than it explicitly represents itself. Your goal, after all, is to win, not to be fair. So of course probability will *seem* to be independent of what other minds might think of it.

Okay, but... you *still* can't win the lottery by assigning a higher probability to winning.

If you like, we could regard probability as an idealized computation, just like 2 + 2 = 4 seems to be independent of any particular error-prone calculator that computes it; and you could regard your mind as trying to approximate this ideal computation. In which case, it is good that your mind does not mention people's opinions, and only thinks of the lottery balls; the ideal computation makes no mention of people's opinions, and we are trying to reflect this ideal as accurately as possible...

But what you calculate as the "ideal calculation" to plug into your betting odds will depend on your prior, even though the calculation won't have an explicit dependency on "your prior". Someone who thought the universe was anti-Occamian would advocate an anti-Occamian calculation, regardless of whether or not anyone thought the universe was anti-Occamian.

Your calculations get checked against reality, in a probabilistic way; you either win the lottery or not. But interpreting these results is done with your prior; once again, there is no probability that isn't in any mind.

I am not trying to argue that you can win the lottery by wishing, of course. Rather, I am trying to inculcate the ability to *distinguish between levels*.

When you think about the ontological nature of probability, and perform reductionism on it—when you try to explain how "probability" fits into a universe in which states of mind do not exist *fundamentally*—then you find that probability is computed within a brain; and you find that other possible minds could perform mostly-analogous operations with different priors and arrive at different answers.

But, when you consider probability *as probability,* think about the *referent* instead of the thought process—which thinking you will do in your own thoughts, which are physical processes—then you will conclude that the vast majority of possible priors are *probably wrong*. (You will also be able to conceive of priors which are, in fact, better than yours, because they assign more probability to the actual outcome; you just won't know in advance which alternative prior is the truly better one.)

If you again swap your goggles to think about how probability is implemented in the brain, the seeming objectivity of probability is the way the probability algorithm feels from inside; so it's no *mystery* that, considering probability as probability, you feel that it's not subject to your whims. That's just what the probability-computation would be expected to say, since the computation doesn't represent any dependency on your whims.

But when you swap out those goggles and go back to thinking about probabilities, then, by golly, your algorithm seems to be *right* in computing that probability is not subject to your whims. You *can't* win the lottery just by changing your beliefs about it. And if that is the way you would be expected to feel, then so what? The feeling has been explained, not explained away; it is not a *mere* feeling. Just because a calculation is implemented in your brain, doesn't mean it's *wrong,* after all.

Your "probability that the ten trillionth decimal digit of pi is 4", is an attribute of yourself, and exists in your mind; the real digit is either 4 or not. And if you could change your belief about the probability by editing your brain, you wouldn't expect that to change the probability.

Therefore I say of probability that it is "subjectively objective".

Part of *The Metaethics Sequence*

Next post: "Whither Moral Progress?"

Previous post: "Rebelling Within Nature"

## 71 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

## comment by The_Compassionate_Cynic · 2008-07-14T13:57:28.000Z · LW(p) · GW(p)

Hi, I think that due to the nature of your site you would be interested in my blog at

www.orble.com/the-compassionate-cynic

please link me on technocrati however that works ;)

Description:

"The Compassionate Cynic is an entity which documents its notes and observations via bit streams directly to the blog in this link."

Replies from: DSimon

## comment by Abigail · 2008-07-14T14:30:26.000Z · LW(p) · GW(p)

Is not subjective objectivity the highest degree of objectivity possible for a human being?

Objective truth does exist, but people can only perceive it with their own perception filters. And, perhaps, AIs with the perception filters of their makers.

I have to decide what is truth as best I can, and may choose to assert a truth even though every one else denies it, eg Galileo. It is to my advantage to seek to make my perception filters as little distorting as possible, but I doubt I could ever achieve that completely.

Replies from: DanielLC, venomwolfse

## ↑ comment by DanielLC · 2012-07-02T04:50:00.449Z · LW(p) · GW(p)

*Is not subjective objectivity the highest degree of objectivity possible for a human being?*

"for a human being" means the same thing as "subjectively". For example, attractiveness is subjective, so you could only say that someone is attractive for someone. This is as opposed to objective facts, like height. If someone is six feet tall, it doesn't matter who you ask. They might say that this person is five feet tall, but that's completely irrelevant. They're still six feet tall.

All you're saying is that subjective objectivity is the highest degree of objectivity possible while still being objective.

## ↑ comment by venomwolfse · 2016-07-09T11:41:27.338Z · LW(p) · GW(p)

Just like the truth, proof can only be a perspective, as it falls back on whether one chooses to accept it as the truth or deny it completely. Most proofs appear extremely subjective on the back of an objective argument. Like the colours themselves: my blue can be completely different from how you see your blue, but regardless of our differences, what we have been told is that we see the same colour. Does that actually mean that blue is actually blue to everyone (objectively speaking, of course)? The same argument can be used with the perception of time itself, that is, if time is really relevant. Is this PROOF?

Knowing is not a truth. Being told something, anything, doesn't mean it's true, but it does not make it false either. We can only validate either individually or collectively: self truth vs. collective truth. Religion is an objective truth, a belief system of something that's taught; we aren't born with a religion. Faith is subjective; it's that feeling and driving force that there is something more, or a possibility of being more than what's perceived at present. In a biological way it does make sense; after all, we do have these things called emotions and feelings, all that horrible stuff.

How do I perceive GOD? Well, it's everything and nothing; it's really indefinable. How do I relate my faith to god? Well, I am god. I am the creator and the destroyer, the light and the darkness; well, that I have the capacity to do either. I have faith in God and that's all I need; that's all anyone needs. I personally do not believe in god.

A world exists beyond the reach of our physical senses and intellect; it is deemed to be nonexistent and purely imaginary by materialist science. But quantum physics is challenging such a limited and myopic view of reality. Quantum physicists have discovered that in the quantum world, Newtonian physical laws no longer apply. What apply are the laws of consciousness.

"This denial of the nonmaterial aspect of life—its sacred participation in the miracle of existence—leaves people with no source of meaning and direction…death is the end of me because life is only physical."

—Edgar D. Mitchell

Faith and belief are very separate, as I said. Most religious people use an objective perspective, claim it as their own, and call it faith. I call that an external belief system that cannot be validated. Faith can be validated, as it is internal and is not taught.

People can believe in the same thing, but you cannot say that they all have the same amount of faith. Faith is subjective and can only exist on an individual level.

Colour blindness is a bitch; so is the belief that everyone sees this world in the same light.

## comment by Allan_Crossman · 2008-07-14T15:48:15.000Z · LW(p) · GW(p)

*there is no such thing as a probability that isn't in any mind.*

Hmm. Doesn't quantum mechanics (especially if we're forgetting about MWI) give us genuine, objective probabilities?

## comment by Vladimir_Nesov · 2008-07-14T16:15:45.000Z · LW(p) · GW(p)

Probability assigned to a belief is estimated according to a goal that says "high Bayesian score is good". It has a particular optimization target, and so it doesn't run away from it. It is an "objective" fact about this optimization process that it tries to have a good Bayesian score. The probability that it produces is within its mind, and in this sense can be said to be "subjective", but it is no more subjective than my decision to take an apple from the table, that also happens in my mind and is targeted on a goal of eating an apple. There is no point in assuming a state of mind just for the sake of reflecting the territory, if it doesn't contribute to the goal. In this case, the goal happens to include the state of mind itself.

## comment by Tim_Tyler · 2008-07-14T18:08:01.000Z · LW(p) · GW(p)

To answer Allan Crossman and "I never saw Jaynes spell out explicitly":

For some sixty years it has appeared to many physicists that probability plays a fundamentally different role in quantum theory than it does in statistical mechanics and analysis of measurement errors. It is a commonly heard statement that probabilities calculated within a pure state have a different character than the probabilities with which different pure states appear in a mixture, or density matrix. As Pauli put it, the former represents "Eine prinzipielle Unbestimmtheit, nicht nur Unbekanntheit" ("a fundamental indeterminacy, not merely ignorance"). But this viewpoint leads to so many paradoxes and mysteries that we explore the consequences of the unified view, that all probability signifies only incomplete human information.

This is from Probability In Quantum Theory (1999). Jaynes seems to be ignoring animals and AIs.

## comment by Constant2 · 2008-07-14T18:21:16.000Z · LW(p) · GW(p)

Probability isn't only used as an expression of a person's own subjective uncertainty when predicting the future. It is also used when making factual statements about the past. If a coin was flipped yesterday and came up heads 60% of the time, then it may have been a fair coin which happened to come up heads 60% of the time, or it may have been a trick, biased coin whose bias caused it to come up heads 60% of the time. To say that a coin is biased is to make a statement about probability. As Wikipedia explains:

*In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin.*

So a statement about probability can enter into a factual claim about the causes of past events.

## comment by Barkley_Rosser · 2008-07-14T22:07:25.000Z · LW(p) · GW(p)

The distinction here may be quite simple: an objective Bayesian accepts Bayes' Theorem, a subjective one does not. After all, Bayes' Theorem posits that repeated adjustments of priors based on new posteriors from the latest observations will asymptotically converge on the "true" probability distribution. That is only meaningful if one believes in an objective, "true" probability distribution (and of course assuming that certain necessary conditions hold regarding the underlying distribution and its dimensionality).

## comment by **[deleted]** · 2008-07-15T04:13:26.000Z · LW(p) · GW(p)

All true.

When E. Yudkowsky's foe agrees with something he says, you can be sure it's correct ;) Of course, the answers are in the materials on ontology I posted on the everything-list months ago.

As to where an algebraic relation such as '2+3' exists, it exists in the same place as the objective value archetypes of course- it's a global feature of the Tegmark multiverse - a relation between all the possible worlds.

Probabilities, on the other hand, are not a global feature of the Tegmark multiverse, but are computed in the individual minds existing in QM branches. That's the difference.

Allan, QM is completely deterministic. All that actually exists is the QM wave function, which has a completely deterministic evolution. Again, the probabilities only exist in the minds of observers viewing specific 'cross-sections' of the multiverse.

## comment by Cyan2 · 2008-07-15T14:11:40.000Z · LW(p) · GW(p)

Barkley Rosser, I think the difference between objective Bayesians and subjective Bayesians has more to do with how they treat prior distributions than how they view asymptotic convergence.

I'm personally not an objective Bayesian by your definition -- I don't think there are stable, true probability distributions "out there." Nevertheless, I do find the asymptotic convergence theorems meaningful. In my view, asymptotic convergence gets you to the most informative distribution conditional on some state of information, but that state of information need not be maximal, i.e., deterministic for observables. When the best you can do is some frequency distribution over observables, it's because you've left out essential details, not because the observables are random.

## comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-07-15T15:46:53.000Z · LW(p) · GW(p)

pdf, fixed.

## comment by Tim_Tyler · 2008-07-15T17:23:55.000Z · LW(p) · GW(p)

Re: *a statement about probability can enter into a factual claim about the causes of past events*.

Not under the view we are discussing. Did you read the referenced Probability is in the Mind page?

The idea is that uncertainty is a psychological phenomenon. It exists in the mind, not in the world.

## comment by Tim_Tyler · 2008-07-15T19:22:03.000Z · LW(p) · GW(p)

In order to criticise Jaynes' perspective, you should first understand it. It is not clear to me that you have done that.

Jaynes' perspective on the historical behaviour of biased coins would make no mention of probability - unless he was talking about the history of the expectations of some observer with partial information about the situation. Do you see anything wrong with that?

## comment by steven · 2008-07-15T19:54:22.000Z · LW(p) · GW(p)

*Reality is that which, when you stop believing in it, doesn't go away.*

This is false, of course; with sufficiently advanced technology you could build a machine that read out your mind state and caused Earth to disappear once it determined you no longer believed in Earth. Doesn't mean Earth was never real.

Replies from: DSimon, orthonormal

## ↑ comment by orthonormal · 2012-04-07T15:43:05.628Z · LW(p) · GW(p)

I don't believe such a machine is possible.

## comment by Constant2 · 2008-07-15T20:07:26.000Z · LW(p) · GW(p)

*Jaynes' perspective on the historical behaviour of biased coins would make no mention of probability - unless he was talking about the history of the expectations of some observer with partial information about the situation. Do you see anything wrong with that?*

I see nothing wrong with that. Similarly, if someone mentions only the atoms in my body, and never mentions me, there is nothing wrong with that. However, I am also there.

What I have pointed out is that seemingly unproblematic statements can indeed be made of the sort that I described. That Jaynes himself makes no such statements says nothing one way or another about this. There are different possible responses, including:

1) It might be shown that certain classes of factual statements about history, including the one I gave, are in fact in some sense relative, may incorporate a tacit perspective and therefore may be in that sense subjective. An example of such a statement might be a statement that an object is "at rest" rather than "in motion". This statement tacitly presupposes a frame of reference, and so is in that sense not fully objective.

2) It might be shown that there was something wrong about the sort of statement that I gave as an example.

## comment by Rafe_Champion · 2008-07-15T21:00:19.000Z · LW(p) · GW(p)

A couple of comments on a very big topic.

There is a fascinating history of ideas about the objective contents of thought, starting with the Austrians Brentano and Meinong, running through analytical philosophy (Russell and Moore) and phenomenology (Husserl and Heidegger), and also through evolutionary epistemology (Popper and Munz). http://www.the-rathouse.com/EvenMoreAustrianProgram/EMAThreeAustrianStrands.html

On the Bayesian appraisal of theories, with reference to the Duhem problem, it seems that Bayes gives a good result when there is only one major theory in the picture but is not so good when help is most needed, that is to judge between two serious rival theories. http://www.the-rathouse.com/Theses/Duhem-QuineInBayesNewExperimentalism.html

## comment by Tim_Tyler · 2008-07-15T21:35:07.000Z · LW(p) · GW(p)

Re: *seemingly unproblematic statements can indeed be made of the sort that I described*.

The example appears to be that of the *accursèd frequentists* - the view that Jaynes spent much of his academic life crusading against.

If such ideas *seem* unproblematic to *you*, that's fine - but be aware that others find the notion of real-world probabilities to be unsupported by evidence, and contrary to Occam's razor.

## comment by Constant2 · 2008-07-15T22:05:05.000Z · LW(p) · GW(p)

*If such ideas seem unproblematic to you*

It is the example that seems on the face of it unproblematic. I am open to either (a) a demonstration that it is compatible with subjectivism[*], or (b) a demonstration that it is problematic. I am open to either one. Or to something else. In any case, I don't adhere to frequentism.

[*] (I made no firm claim that it is not compatible with subjectivism - you are the one who rejected the compatibility - my own purpose was only to raise the question since it seems on the face of it hard to square with subjectivism, not to answer the question definitively.)

## comment by Vladimir_Nesov · 2008-07-15T23:00:11.000Z · LW(p) · GW(p)

steven: *This is false, of course; with sufficiently advanced technology you could build a machine that read out your mind state and caused Earth to disappear once it determined you no longer believed in Earth. Doesn't mean Earth was never real.*

There is no clear separation between the mind and the territory. The structure of the mind is instrumental to the optimization process. When you change beliefs, change the state of the mind, you are in fact performing an action on the territory that is instrumental to the goal. Establishing a specific state (process) of the territory is the final goal, so in changing the mind, you are moving the territory along *your* course. Reality doesn't go away, but it moves a step closer to the attractor, and if it doesn't, you are not being rational.

## comment by Tim_Tyler · 2008-07-16T06:35:33.000Z · LW(p) · GW(p)

I *thought* we had agreed that the historical behaviour of coins was "compatible with subjectivism":

*Jaynes' perspective on the historical behaviour of biased coins would make no mention of probability - unless he was talking about the history of the expectations of some observer with partial information about the situation.*

An observer with one set of partial information might have predicted a coin would come up heads 50% of the time. An observer with another set of partial information might have predicted a coin would come up heads 60% of the time. That the predictions of these observers differ illustrates the subjective element of probability estimates.

However, it is their (partial) ignorance of the state of the world that leads to such assessments. With complete information (and a big computer) an observer would *know* which way the coin would land - and would find probabilities irrelevant. The probabilities arise from ignorance and lack of computing power - properties of *observers*, not properties of *the observed*.
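One hedged way to make the two observers concrete: treat each observer's probability of heads as a summary of whatever flip data that observer happens to possess. The uniform prior and the rule-of-succession estimate below are my own illustrative assumptions, not anything Tyler specifies.

```python
from fractions import Fraction

def predictive_heads(heads, flips):
    """Laplace's rule of succession: P(next flip is heads) under a uniform
    prior over the coin's unknown bias, after seeing `heads` in `flips`."""
    return Fraction(heads + 1, flips + 2)

print(predictive_heads(0, 0))   # 1/2: an observer with no flip data at all
print(predictive_heads(6, 10))  # 7/12, about 0.58: an observer who saw 6 heads in 10 flips
```

Same coin, different partial information, different probability assignments - the 50% and 58% describe the observers, not the coin.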

## comment by Constant2 · 2008-07-16T07:02:32.000Z · LW(p) · GW(p)

*With complete information (and a big computer) an observer would know which way the coin would land - and would find probabilities irrelevant.*

But this is true of most everyday observations. We observe events on a level far removed from the subatomic level. With complete information and infinite computing power an observer would find all or virtually all ordinary human-level observations irrelevant. But irrelevancy to such an observer is not the same thing as non-reality. For example, the existence of elephants would be irrelevant to an observer who has complete information on the subatomic level and sufficient computing power to deal with it. But it does not follow that elephants do not exist. Do you think it follows that elephants do not exist?

*The probabilities arise from ignorance and lack of computing power - properties of observers, not properties of the observed.*

The concept of an elephant could with equal reason be said to arise from ignorance and lack of computing power. I can certainly understand that a thought such as, "the elephant likes peanuts, therefore it will accept this peanut" is a much easier thought to entertain than a thought that infallibly tracks every subatomic particle in its body and in the environment around it. So, certainly, the concept of an elephant is a wonderful shortcut. But I'm not so sure about getting from this to the conclusion that elephants (like probability) are subjective. Do you think that elephants are subjective?

## comment by Tim_Tyler · 2008-07-16T08:20:20.000Z · LW(p) · GW(p)

Re: *Do you think it follows that elephants do not exist?*

Elephants are not properties of physics any more than probabilities are.

The concept of an elephant is subjective - as are all concepts.

The atoms composing an elephant are real enough.

AFAICS, nobody ever claimed that probabilites "do not exist". The idea is that uncertainty is a *psychological* phenomenon, not that it is *non-existent*.

I hope that clarifies things.

## comment by Constant2 · 2008-07-16T09:02:27.000Z · LW(p) · GW(p)

*Elephants are not properties of physics any more than probabilities are. The concept of an elephant is subjective - as are all concepts.*

If you are indeed agreeing with the parallel I have set up between probability and elephants and if this is not just your own personal view, then perhaps the subjectivist theory of probability should more properly be called the subjectivist theory of pretty much everything that populates our familiar world. Anyway, I think I can agree that probability is as subjective and as psychological and as non-physical and as existing in the mind and not in the world as an elephant or, say, an exploding nuclear bomb - another item that populates our familiar world.

## comment by Barkley_Rosser · 2008-07-16T21:03:37.000Z · LW(p) · GW(p)

Cyan,

OK, I grant your point. However, assuming that there is some "subjectively real" probability distribution that the Bayes' Theorem process will converge to is a mighty strong assumption.

## comment by Barkley_Rosser · 2008-07-17T01:57:46.000Z · LW(p) · GW(p)

Cyan,

Why should there be convergence to some such point when there is no underlying "true" distribution, either subjective or objective? Are you counting on herding by people? It is useful to keep in mind the conditions under which, even in classical stats, Bayes' Theorem does not hold, for example when the underlying distribution is not continuous or is infinite-dimensional. In the former case convergence can be to a cycle of bouncing back and forth between the various disconnected portions of the distribution. This can happen, presumably in a looser purely subjective world, with even a multi-modal distribution.

## comment by Cyan2 · 2008-07-17T03:17:09.000Z · LW(p) · GW(p)

Barkley Rosser, what I have in mind is a reality which is in principle predictable given enough information. So there is a "true" distribution -- it's conditional on information which specifies the state of the world exactly, so it's a delta function at whatever the observables actually turn out to be. Now, there exist unbounded sequences of bits which don't settle down to any particular relative frequency over the long run, and likewise, there is no guarantee that any particular sequence of observed data will lead to my posterior distribution getting closer and closer to one particular point in parameter space -- *if* my model doesn't at least partially account for the information which determines what values the observables take. Then I wave my hands and say, "That doesn't seem to happen a lot in practical applications, or at least, when it *does* happen we humans don't publish until we've improved the model to the point of usefulness."
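Cyan's picture of a posterior homing in on a delta function can be sketched numerically. The grid discretization, the uniform prior, and the "true" bias of 0.7 are my own illustrative assumptions, not anything stated in the thread.

```python
import numpy as np

# Posterior over a coin's bias, on a grid, under a uniform prior,
# after observing k heads in n flips (likelihood proportional to p^k (1-p)^(n-k)).
grid = np.linspace(0, 1, 1001)

def posterior(k, n):
    w = grid**k * (1 - grid)**(n - k)
    return w / w.sum()

def spread(k, n):
    """Posterior standard deviation: how far from a delta function we still are."""
    p = posterior(k, n)
    mean = (grid * p).sum()
    return float(np.sqrt(((grid - mean)**2 * p).sum()))

# Data generated by a "true" bias of 0.7: the posterior narrows toward a
# spike near 0.7 as n grows (the spread shrinks roughly like 1/sqrt(n)).
for n in (10, 100, 1000):
    print(n, round(spread(round(0.7 * n), n), 4))
```

The well-behaved case, of course; Diaconis-Freedman-style inconsistency lives in model spaces far richer than this one-parameter grid.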

I didn't follow your point about a distribution for which Bayes' Theorem doesn't hold. Are you describing a joint probability distribution for which Bayes' Theorem doesn't hold, or are you talking about a Bayesian modeling problem in which Bayes estimators are inconsistent a la Diaconis and Freedman, or do you mean something else again?

## comment by Barkley_Rosser · 2008-07-17T05:12:53.000Z · LW(p) · GW(p)

Diaconis and Freedman.

## comment by Cyan2 · 2008-07-17T17:50:42.000Z · LW(p) · GW(p)

Barkley Rosser, there definitely is something a little hinky going on in those infinite dimensional model spaces. I don't have the background in measure theory to really grok that stuff, so I just thank my lucky stars that other people have proven the consistency of Dirichlet process mixture models and Gaussian process models.

## comment by Barkley_Rosser · 2008-07-17T18:06:08.000Z · LW(p) · GW(p)

Gotta have that continuous support too, which is the real key to converging on a cycle rather than a point.

In the fuzzier world where there is no definite, for-real underlying distribution, I note that multiple equilibria or basins in dynamical systems can give the multi-modality that, within a herding framework, can lead to some sort of cycle of bouncing back and forth between the dominant states.

## comment by Sudeep_Kamath · 2008-07-22T09:40:20.000Z · LW(p) · GW(p)

"Probability exists in your mind: if you're ignorant of a phenomenon, that's an attribute of you, not an attribute of the phenomenon."

Eliezer, a small point here: if QM is true, then the universe and the phenomena within it are inherently probabilistic, are they not?

Great post as usual.

## comment by outofculture · 2008-07-26T03:04:35.000Z · LW(p) · GW(p)

Beside the point, but you can calculate arbitrary digits of pi with the formula explained in this article.
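The formula alluded to is presumably the Bailey-Borwein-Plouffe (BBP) digit-extraction formula, which yields hexadecimal (not decimal) digits of pi at an arbitrary position without computing the earlier ones. A minimal float-precision sketch (the function name is mine; float round-off limits it to positions up to a few million):

```python
def pi_hex_digit(n):
    """Hex digit of pi at position n after the point (n = 0 gives '2',
    since pi = 3.243F6A88... in base 16). Uses the BBP formula."""
    def S(j):
        # Head of the series, with modular exponentiation to keep terms small.
        s = 0.0
        for k in range(n + 1):
            r = 8 * k + j
            s = (s + pow(16, n - k, r) / r) % 1.0
        # Tail: terms with k > n shrink geometrically.
        t, k = 0.0, n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return s + t

    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return "%X" % int(x * 16)

print("".join(pi_hex_digit(i) for i in range(8)))  # 243F6A88
```

The point relevant to the thread: the ten-trillionth digit is fully determined by this algorithm; no die roll appears anywhere in it.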

## comment by Yvain2 · 2008-08-21T12:13:24.000Z · LW(p) · GW(p)

I was one of the people who suggested the term h-right before. I'm not great with mathematical logic, and I followed the proof only with difficulty, but I think I understand it and I think my objections remain. I think Eliezer has a brilliant theory of morality and that it accords with all my personal beliefs, but I still don't understand where it stops being relativist.

I agree that some human assumptions like induction and Occam's Razor have to be used partly as their own justification. But an ultimate justification of a belief has to include a reason for choosing it out of a belief-space.

For example, after recursive justification hits bottom, I keep Occam and induction because I suspect they reflect the way the universe really works. I can't prove it without using them. But we already know there are some things that are true but can't be proven. I think one of those things is that reality really does work on inductive and Occamian principles. So I can choose these two beliefs out of belief-space by saying they correspond to reality.

Some other starting assumptions ground out differently. Clarence Darrow once said something like "I hate spinach, and I'm glad I hate it, because if I liked it I'd eat it, and I don't want to eat it because I hate it." He was making a mistake *somewhere*! If his belief is "spinach is bad", it probably grounds out in some evolutionary reason like insufficient energy for the EEA. But that doesn't justify his current statement "spinach is bad". His real reason for saying "spinach is bad" is that he dislikes it. You can only choose "spinach is bad" out of belief-space based on Clarence Darrow's opinions.

One possible definition of "absolute" vs. "relative": a belief is absolutely true if people pick it out of belief-space based on correspondence to reality; if people pick it out of belief-space based on other considerations, it is true relative to those considerations.

"2+2=4" is absolutely true, because it's true in the system PA, and I pick PA out of belief-space because it does better than, say, self-PA would in corresponding to arithmetic in the real world. "Carrots taste bad" is relatively true, because it's true in the system "Yvain's Opinions" and I pick "Yvain's Opinions" out of belief-space only because I'm Yvain.

When Eliezer says X is "right", he means X satisfies a certain complex calculation. That complex calculation is chosen out of all the possible complex calculations in complex-calculation space because it's the one that matches what humans believe.

This does, technically, create a theory of morality that doesn't explicitly reference humans. Just like intelligent design theory doesn't explicitly reference God or Christianity. But most people believe that intelligent design should be judged as a Christian theory, because being a Christian is the only reason anyone would ever select it out of belief-space. Likewise, Eliezer's system of morality should be judged as a human morality, because being a human is the only reason anyone would ever select it out of belief-space.

That's why I think Eliezer's system is relative. I admit it's not directly relative, in that Eliezer isn't directly picking "Don't murder" out of belief-space every time he wonders about murder, based only on human opinion. But if I understand correctly, he's referring the question to another layer, and then basing *that* layer on human opinion.

An umpire whose procedure for making tough calls is "Do whatever benefits the Yankees" isn't very fair. A second umpire whose procedure is "Always follow the rules in Rulebook X" and writes in Rulebook X "Do whatever benefits the Yankees" may be following a rulebook, but he is still just as far from objectivity as the last guy was.

I think the second umpire's call is "correct" relative to Rulebook X, but I don't think the call is absolutely correct.

## comment by PhilGoetz · 2010-08-24T03:43:55.282Z · LW(p) · GW(p)

"Reality is that which, when you stop believing in it, doesn't go away." -- Philip K. Dick

I have to comment on the irony of this quote. Philip K. Dick's novels are almost all extended riffs on the idea that there is no reality; or that reality is unknowable or irrelevant (e.g. The Man in the High Castle, Flow My Tears, the Policeman Said, Do Androids Dream of Electric Sheep? (Blade Runner), We Can Remember It for You Wholesale (Total Recall)). And Philip K. Dick was unable to distinguish reality from fantasy in everyday life.

Replies from: RobinZ

## comment by PhilGoetz · 2010-08-24T03:51:01.230Z · LW(p) · GW(p)

But it makes a deep philosophical point as well, which I never saw Jaynes spell out explicitly, but I think he would have approved: there is no such thing as a probability that isn't in any mind. Any mind that takes in evidence and outputs probability estimates of the next event, remember, can be viewed as a prior - so there is no probability without priors/minds.

Is this why you believe so strongly in Many-Worlds? To avoid mind-free, objective, quantum-mechanical probabilities?

I felt all the way through this post like it was confronting a difficult philosophical problem head-on, and that if I kept reading a little more it would reveal a solution; but I never saw a solution. It described the problem very well; but if it was intended to move beyond that, then I missed it.

## comment by lucidfox · 2010-12-17T07:46:50.212Z · LW(p) · GW(p)

*When you judge those alternate minds, you'll do so using your own mind - your own beliefs about the universe - your own posterior that came out of your own prior, your own posterior probability assignments P(X|A,B,C,...,Q1).*

Hehe, "your own posterior".

[strangles her inner 12-year-old]

Replies from: HonoreDB

## comment by Ronny (potato) · 2011-07-29T06:54:53.212Z · LW(p) · GW(p)

I am almost convinced, honestly. I was leaning towards a frequentist view, but I'm realizing now -- as pointed out here by a fellow community member -- that some of my statements are similar if not identical to the conclusion here:

*Jaynes certainly believed very firmly that probability was in the mind... But Jaynes also didn't think that this implied a license to make up whatever priors you liked. There was only one correct prior distribution to use, given your state of partial information at the start of the problem.*

is pretty similar to:

*Bayesian reasoning is the field which tells us the optimal probability to assign to a proposition given the rest of our information, but that that is the optimal probability given the rest of our information is a fact about the world.*

(When I say "our" there I mean each of us as individuals, not our *collective knowledge*. )

I'll give my view; I think I agree with EY; I'll be as short as I can. We have a standard deck of playing cards. It is a fact about them that 1/4 of them are hearts. Not just to me or someone else; it is a fact that *the universe keeps track of*: 1/4 of them are hearts. Two agents A and B are placing bets on the next suit to come out. They both know that 1/4 of the cards are hearts, and that is all that A knows, but B also knows that 8 out of 10 of the top 10 cards are hearts. So B bets that "The top card is a heart." and A bets that "~The top card is a heart.". Beforehand they argue, and B says "You know, I think there's an 80% chance that I'll win." A says "Are you crazy? Everyone knows that 75% of the cards in a deck aren't hearts; the chances are 75% in my favor."

Now, of all the *possible* states that satisfy A's knowledge, which is the statement "75% of the cards in a deck aren't hearts.", exactly 75% of them satisfy "~The top card is a heart." So it is no wonder that A ascribes a 75% probability in his favor. A's beliefs constrain A's expected experiences, but not to one line; they constrain the possible worlds A thinks it might be in. B's knowledge works the same way. Of the *possible* worlds that satisfy "8 out of 10 of the top 10 cards are hearts.", 80% of them satisfy "The top card is a heart." So, duh, B concludes that it has an 80% chance of winning. Using this simple setup it becomes clear to me that the more knowledge you have about the state of the deck, the more useful the probability you ascribe will be. This is because the more knowledge you have about the state of the deck, the more you constrict the space of possible worlds that you as an agent think you might be in. A third agent that knows every detail about the deck of course ascribes no non-1 or non-0 probability to any statement about the deck, since it knows what possible world it's in.

So then I would say that the probability a perfect reasoner P gives to a statement S is the fraction of the *possible* worlds satisfying the rest of the statements that P holds which also satisfy S. If P has no ignorance, then he has one possible world, and that fraction is always 1 or 0. And I would go as far as to say that this is a decent explanation of what a probability is. It's a propositional attitude had by an agent; it has a value from 0 to 1, and represents the fraction of states that the agent thinks it might find itself in that satisfy the given proposition. This to me doesn't seem to be inconsistent with the view expressed in this post, but I'm not sure of this.

My view does suggest that "P(a) = such and such" is a claim. It's just that it's a claim about the possible worlds that an agent can consistently expect to find him/herself in given the rest of its beliefs. An agent can be wrong about a probability it ascribes. Suppose an agent R, which has the same knowledge as B, but insists that the probability of "The top card is a heart." is 99%. Well, R is wrong: R is wrong about the fraction of worlds that satisfy R's knowledge base that also satisfy "The top card is a heart."

In conclusion, I will risk the hypothesis that: "P(a|b) = x" is true if and only if x of the possible worlds that satisfy b, also satisfy a. But of course, there are no possible worlds without uncertainty, and there is no uncertainty without the ignorance of an agent in a determined world.
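Ronny's "fraction of possible worlds" hypothesis can be checked by brute force on a deck small enough to enumerate. The 4-card deck and B's particular extra knowledge below are my own illustrative substitutions for the 52-card story, and the function names are hypothetical:

```python
from itertools import permutations
from fractions import Fraction

def prob(event, knowledge, worlds):
    """Fraction of the possible worlds satisfying `knowledge` that also satisfy `event`."""
    live = [w for w in worlds if knowledge(w)]
    return Fraction(sum(1 for w in live if event(w)), len(live))

# A deck shrunk to 4 cards, one per suit, so 1/4 of the cards are hearts
# and all 24 orderings can be enumerated as the "possible worlds".
worlds = list(permutations(["H", "S", "D", "C"]))

top_is_heart = lambda w: w[0] == "H"
a_knows = lambda w: True          # A knows only the deck's composition
b_knows = lambda w: "H" in w[:2]  # B additionally knows the heart is in the top two cards

print(prob(top_is_heart, a_knows, worlds))  # 1/4
print(prob(top_is_heart, b_knows, worlds))  # 1/2
```

As in the 52-card story, A and B assign different probabilities to the same top card because they are counting different sets of worlds; neither count is a property of the deck alone.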

## comment by Ronny (potato) · 2011-07-29T20:42:47.887Z · LW(p) · GW(p)

## comment by Ronny (potato) · 2011-11-05T18:28:28.482Z · LW(p) · GW(p)

Now, can't I be a philosophical frequentist and a subjective Bayesian? Just because probability theory models subjective beliefs does not mean that it doesn't model frequencies; in fact, if somebody told me that Bayes doesn't model frequencies, I'm pretty sure I could prove them wrong much more easily than someone who said that probabilities don't model degrees of belief.

But there is no contradiction in saying that the Kolmogorov probability function models both degrees of belief and actual frequencies.

(edit)

In fact it seems to me that Kolmogorov plainly does model frequency, since it models odds, and odds model frequencies by a simple conversion. In fact, degrees of belief seem to model frequency as well. Using the thought experiment of frequencies of worlds you think you might find yourself in makes it simple to see how at least some degrees of belief can be seen as frequencies in and of themselves. In this thought experiment we treat probability as a measure of the worlds you think you might find yourself in; if you think that "there are ten cards and eight of them are blue", then in 4/5 of the worlds where "there are ten cards and eight of them are blue" holds, "the top card is blue" also holds. So you rightfully assign an 80% probability to the top card being blue.

Where the frequentist makes an error is in thinking that probabilities are then out there in the world, treated as degrees of belief. What I mean by this is that they take the step from frequencies being in the world to uncertainties being in the world. This is a mistake, but I think that it is not central to the philosophical doctrine of frequentism. All that some frequentists claim is that probability models frequency, and this is plainly true. And it is also true that there are frequencies in the world. Real frequencies, independent of our minds. These are not probabilities, because there are no probabilities anywhere. Not even in minds.

Probability is not degree of subjective belief; probability is a class of axiomatized functions. These axiomatized functions model a great deal of things: measure theory, Euclidean geometry, the constraints of rational beliefs, set cardinality, frequencies, and so on, and the list can go on and on. Probability is a mathematical tool. And it is isomorphic to many important features of rationality and science, perhaps the most important being subjective degree of belief. But to argue that probability is subjective degree of belief just because it models degree of belief seems as silly to me as arguing that probability is frequency just because it models frequency. Why not, then, the position that probability is measure? Measure theory is isomorphic to probability. Why not say that probability is measure? Add that to the debating line.

I think the position to take towards probability is a properly Hofstadter-ish formalism, where the true statements about probability are simply the statements which are formed when you interpret the theorems of probability theory. Whatever else probability may be able to talk about truthfully, it does so through isomorphism.

Replies from: drnickbone

## ↑ comment by drnickbone · 2012-03-15T08:48:16.059Z · LW(p) · GW(p)

This is pretty close to my own position... Probability is strictly a mathematical concept (Kolmogorov axioms). Real-world probability is anything that can be successfully modelled by the Kolmogorov axioms. This applies to both betting probabilities (violate the axioms and you get Dutch-booked) and relative frequencies.

I'm a little bit puzzled by Eliezer's view that probability is purely Bayesian, as he also believes in a "Big World", and the relative frequency approach works extremely well in a Big World (as long as it is an infinitely-big world). Chancy events really do get repeated infinitely many times, the repetitions really are independent (because of large separation, and locality of physics), and the relative frequencies really are defined and really do converge to exactly what QM says the probabilities are. All works fine.

Also, there is a formal isomorphism between decoherent branches of a wave function (as applied to a single causal region) and spatially-separated causal regions in a multiverse. So you can, if you like, consider a single space time multiverse with an intuitive interpretation (other universes are just really far away) and forget about all the splitting. Bousso and Susskind have a nice recent paper about this: http://arxiv.org/abs/1105.3796

## comment by rkyeun · 2012-07-30T01:55:42.114Z · LW(p) · GW(p)

"Perform the experiment a hundred times, and—" Okay, let's talk about the ten trillionth digit of pi, then. Single-shot problem, no "long run" you can measure.

And there goes my belief in any kind of probability as a phenomenon. I don't know what the ten trillionth digit of pi is, but I know the algorithm which generates it, and it never involves a die roll or coin flip of any kind. And if the universe is to be lawful, it doesn't roll dice either. There is no probability. To say there was is to say the ten trillionth digit of pi might somehow have come out differently. And that would be unlawful.

Replies from: Zaq## ↑ comment by Zaq · 2012-08-09T16:25:51.348Z · LW(p) · GW(p)

This is silly. To say that there is some probability in the universe is not to say that everything has randomness to it. People arguing that there is intrinsic probability in physics don't argue that this intrinsic probability finds its way into the trillionth digit of pi.

Many Physicists: If I fire a single electron at two slits, with a detector placed immediately after one of the slits, then I detect the electron half the time. Furthermore, leading physics indicates that no amount of information will ever allow me to accurately predict which trials will result in a detected electron; I can determine a 50/50 chance for detection/non-detection, and that's the limit of predictability. Thus it's safe to say that the 50/50 is a property of the experimental set-up, and not a property of how much I know about the setup.

Pretty Much Zero Physicists: The above indicates that the trillionth digit of pi is in a superposition until we calculate it, at which point it collapses to a single value.

Replies from: rkyeun

## ↑ comment by rkyeun · 2012-08-14T18:37:22.180Z · LW(p) · GW(p)

Dr. Many the Physicist would be wrong about the electron too. The electron goes both ways, every time. There's no chance involved there either.

But you're right, it is not the ten trillionth digit of pi that proves it.

Replies from: Zaq

## ↑ comment by Zaq · 2014-10-27T23:08:57.538Z · LW(p) · GW(p)

The Many Physicists description never talked about the electron only going one way. It talked about detecting the electron. There's no metaphysics there, only experiment. Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time. You may say that the electron goes both ways every time, but we still only have the detector firing half the time. We also cannot predict which half of the trials will have the detector firing and which won't. And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.

Replies from: rkyeun

## ↑ comment by rkyeun · 2014-12-29T22:52:35.182Z · LW(p) · GW(p)

*Set up a two-slit configuration and put a detector at one slit, and you see it firing half the time.*

No, I see it firing both ways every time. In one world, I see it going left, and in another I see it going right. But because these very different states of my brain involve a great many particles in different places, the interactions between them are vanishingly small and my two otherworld brains don't share the same thought. I am not aware of my other self who has seen the particle go the other way.

*You may say that the electron goes both ways every time, but we still only have the detector firing half the time.*

We have both detectors firing every time in the world which corresponds to the particle's path. And since that creates a macroscopic divergence, the one detector doesn't send an interference signal to the other world.

*We also cannot predict which half of the trials will have the detector firing and which won't.*

We can predict it will go both ways each time, and divide the world in twain along its amplitude thickness, and that in each world we will observe the way it went in that world. If we are clever about it, we can arrange to have all particles end in the same place when we are done, and merge those worlds back together, creating an interference pattern which we can detect to demonstrate that the particle went both ways. This is problematic because entanglement is contagious, and as soon as something macroscopic becomes affected, putting Humpty Dumpty back together again becomes prohibitive. Then the interference pattern vanishes and we're left with divergent worlds, each seeing only the way it went on their side, and an other side which always saw it go the other way, with neither of them communicating to each other.

*And everything we understand about particle physics indicates that both the 1/2 and the trial-by-trial unpredictability is NOT coming from ignorance of hidden properties or variables but from the fundamental way the universe works.*

Correct. There are no hidden variables. It goes both ways every time. The dice are not invisible as they roll. There are instead no dice.
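The merge-versus-diverge distinction in this exchange can be sketched with toy numbers (the equal amplitudes and the phase sweep are my own illustrative choices): when the worlds can still re-merge, amplitudes add before squaring and fringes appear; once a macroscopic record exists, only the squared amplitudes add and the pattern is flat.

```python
import numpy as np

# Toy two-path setup: each path reaches the detector with amplitude 1/sqrt(2),
# and the paths differ by a phase phi.
phis = np.linspace(0, 2 * np.pi, 9)
a = 1 / np.sqrt(2)

# Worlds re-merged (coherent): add amplitudes first, then square.
# Relative intensity swings between 0 and 2 -- the interference fringes.
coherent = np.abs(a + a * np.exp(1j * phis)) ** 2

# A macroscopic record made (decohered): add the squared amplitudes.
# The phase no longer matters -- a flat pattern, constant 1.
decohered = np.abs(a) ** 2 + np.abs(a) ** 2
```

The code does not adjudicate between interpretations; it only shows why "entanglement is contagious" kills the fringes: once the cross-term between the two paths is lost, all that remains is the phase-independent sum.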

Replies from: Zaq

## ↑ comment by Zaq · 2015-10-01T20:27:32.326Z · LW(p) · GW(p)

You've only moved the problem down one step.

Five years ago I sat in a lab with a beam-splitter and a single-photon multiplier tube. I watched as the SPMT clicked half the time and didn't click half the time, with no way to predict which I would observe. You're claiming that the tube clicked every time, and that the part of me that noticed one half is very disconnected from the part of me that noticed the other half. The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.

Take the me sitting here right now, with the memory of the specific half of the clicks he has right now. As far as we understand physics, he *can't* postdict which memory that should have been. Even in your model, he can postdict that there will be many branches of him with each possible memory, but he can't postdict *which* of those branches he'll be - only the probability of him being any one of the branches.

## ↑ comment by rkyeun · 2015-12-08T23:41:08.711Z · LW(p) · GW(p)

*You've only moved the problem down one step.*

Moving the problem down one step puts it at the bottom.

*The problem is that this still doesn't allow me to postdict which of the two halves the part of me that is typing this should have in his memory right now.*

One half of you should have one, and the other half should have the other. You should be aware intellectually that it is only the disconnect between your two halves' brains not superimposing which prevents you from having both experiences in a singular person, and know that it is your physical entanglement with the fired particle which went both ways that is the cause. There's nothing to post-dict. The phenomenon is not merely explained, but explained away.

The particle split; on one side there is a you that saw it go right, on the other side there is a you that saw it go left, and both of you are aware of this fact, and aware that the other you exists on the other side seeing the other result, because the particle always goes both ways and always makes each of you. There is no more to explain. You are in all branches, and it is not mysterious that each of you in each branch sees its branch and not the others. And unless some particularly striking consequence happened, all of them are writing messages similar to this, and getting replies similar to this.

Replies from: Zaq

## ↑ comment by Zaq · 2016-04-13T04:57:35.166Z · LW(p) · GW(p)

The issue is not want of an explanation for the phenomenon, away or otherwise. We have an explanation of the phenomenon, in fact we have several. That's not the issue. What I'm talking about here is the inherent, not-a-result-of-my-limited-knowledge probabilities that are a part of *all* explanations of the phenomenon.

Past me apparently insisted on trying to explain this in terminology that works well in collapse or pilot-wave models, but not in many-worlds models. Sorry about that. To try and clear this up, let me go through a "guess the beam-splitter result" game in many-worlds terminology and compare that to a "guess the trillionth digit of pi" game in the same terminology.

Aside: Technically it's the amplitudes that split in many-worlds models, and somehow these amplitudes are multiplied by their complex conjugates to get you answers to questions about guessing games (*no* model has an explanation for that part). As is common around these parts, I'm going to ignore this and talk as if it's the probabilities themselves that split. I guess nobody likes writing "square root" all the time.

Set up a 50/50 beam-splitter. Put a detector in one path and block the other. Write your choice of "Detected" or "Not Detected" on a piece of paper. Now fire a single photon. In Everett-speak, half of the yous end up in branches where the photon's path matches your guess while half of the yous don't. The 50/50 nature of this split remains even if you know the exact quantum state of the photon beforehand. Furthermore, the yous that try to use all your physics knowledge to predict their observations have no larger a proportion of success than the yous that make their predictions by flipping a coin, always guessing "Detected", or employing literally *any* other strategy that generates valid guesses. The 50/50 value of this branching process is *completely* decoupled from your predictions, no matter what information you use to make those predictions.

Compare this to the process of guessing the trillionth digit of pi. If you make your guess by rolling a quantum die, then 1 out of 10 yous will end up in a branch where your guess matches the actual trillionth digit of pi. If you instead use those algorithms you know to calculate a guess, and you code/run them correctly, then basically all of the yous end up in a branch where your guess is correct.

We now see the fundamental difference. Changing your guessing strategy results in different correct/incorrect branching ratios for the "guess the trillionth digit of pi" game but *not* for the "guess the beam-splitter result" game. This is the Everett-speak version of saying that the beam-splitter's 50/50 odds is a property of the universe while the trillionth digit of pi's 1/10 odds is a function of our (current) ignorance. You can opt to replace "odds" with "branching ratios" and declare that there is no probability of any kind, but that just seems like semantics to me. In particular the example of the trillionth digit of pi should not be what prompts this decision. Even in the many-worlds model there's still a fundamental difference between that and the quantum processes that physicists cite as intrinsically random.
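The distinction above can be made concrete with a toy simulation (a sketch of my own, not from the comment; the digit value and strategy names are hypothetical stand-ins). Strategy choice changes the hit rate in the digit-guessing game but leaves the beam-splitter game pinned at 50/50, no matter what the guesser conditions on.

```python
import random

random.seed(0)
TRIALS = 10_000
PI_DIGIT = 9  # hypothetical stand-in for the unknown-but-fixed trillionth digit

def beamsplitter_hits(strategy):
    """Fraction of trials where the guess matches a fresh 50/50 outcome."""
    hits = 0
    for _ in range(TRIALS):
        detected = random.random() < 0.5  # "Detected" with probability 1/2 each trial
        hits += strategy() == detected
    return hits / TRIALS

def pi_digit_hits(strategy):
    """Fraction of trials where the guess matches the fixed digit."""
    return sum(strategy() == PI_DIGIT for _ in range(TRIALS)) / TRIALS

always_detected = lambda: True                  # "always guess Detected"
coin_flip = lambda: random.random() < 0.5       # guess by coin flip
roll_die = lambda: random.randrange(10)         # guess the digit with a 10-sided die
compute = lambda: PI_DIGIT                      # stands in for actually computing the digit

# Beam-splitter game: every strategy lands near 0.5.
print(beamsplitter_hits(always_detected))  # ~0.5
print(beamsplitter_hits(coin_flip))        # ~0.5

# Digit game: strategy matters - computation beats chance.
print(pi_digit_hits(roll_die))             # ~0.1
print(pi_digit_hits(compute))              # 1.0
```

The asymmetry is the whole point: the digit's 1/10 odds dissolve under a better strategy, the beam-splitter's 1/2 odds do not.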

## comment by **[deleted]** · 2013-09-30T03:32:15.741Z · LW(p) · GW(p)

"Reality is that which, when you stop believing in it, doesn't go away."

By that definition, stuff like the value of the US dollar isn't real.

Replies from: Nornagest, Lumifer

## ↑ comment by Nornagest · 2013-09-30T04:15:08.366Z · LW(p) · GW(p)

Whether you personally believe it or not, most gas stations in the US will be happy to exchange $1.50 or thereabouts for a cup of sludgy, lukewarm coffee. That may not be grounded on par with Newton's laws of motion, but it still seems like a step up from, say, body thetans.

Replies from: None

## ↑ comment by **[deleted]** · 2013-09-30T15:48:38.758Z · LW(p) · GW(p)

That's because the gas station owner still believes in the value of the dollar. But if *all* of you stopped believing in it...

## ↑ comment by linkhyrule5 · 2013-09-30T15:55:34.044Z · LW(p) · GW(p)

Right, but if I had an effect that depended on massed belief, I'd be pretty sure that existed too.

Say, Discworld-style gods.

(Things that depend on individual belief generally have a low enough prior that I'm probably dreaming/hallucinating, and also I can't tell myself to stop believing in things so whatever it is I think I'm doing it's not "not believing in X".)

## ↑ comment by Nornagest · 2013-09-30T17:16:14.977Z · LW(p) · GW(p)

Sure. When we're looking at socially constructed stuff, though, I'm not sure it makes sense to treat its reality as discrete rather than continuous.

It's tempting to use money as a measure of social reality, actually, since it's valuable directly in proportion to people's belief in its value. Unfortunately it's hard to put a dollar value on, say, traffic laws, which share the same property (a point driven home for me the first time I encountered rush-hour traffic in Manila).

## ↑ comment by Lumifer · 2013-09-30T16:58:58.310Z · LW(p) · GW(p)

Of course.

Actually, all value (which exists solely inside minds) is not real.

Replies from: linkhyrule5

## ↑ comment by linkhyrule5 · 2013-09-30T17:08:22.537Z · LW(p) · GW(p)

...

Things that don't exist can't have effects.

Religion is real. God isn't, but the belief-in-god is and has effects.

Value - a shared pattern between brains - is real and has effects.

Replies from: Lumifer

## ↑ comment by Lumifer · 2013-09-30T17:19:42.490Z · LW(p) · GW(p)

Look at the quote in the grandparent. For the purpose of this thread I'm using the word "real" in the sense of "existing outside and independently of your mind". By that approach beliefs are not real and sentences "X exists" and "X is real" have very different meanings.

Replies from: linkhyrule5

## ↑ comment by linkhyrule5 · 2013-09-30T22:26:15.270Z · LW(p) · GW(p)

If you stop believing in religion, that does not stop fundamentalists from existing.

If you stop thinking money has value, the economy will not collapse.

Even if you don't trust your *own* brain, a *shared meme* can still be "real."

## comment by SeanMCoincon · 2014-11-20T18:30:47.647Z · LW(p) · GW(p)

Somewhere out in mind design space, there's a mind with any possible prior; but that doesn't mean that you'll say, "All priors are created equal."

The corrected phrase may be: "All *unentangled* priors are created equal."