Duplication versus probability

post by Stuart_Armstrong · 2018-06-20T12:18:29.459Z · LW · GW · 12 comments

Suppose you're rushing an urgent message back to the general of your army, and you fall into a deep hole. Down here, conveniently, there's a lever that can create a duplicate of you outside the hole. You can also break open the lever and use the wiring as ropes to climb to the top. You estimate that the second course of action has a 50% chance of success. What do you do?

Obviously, if the message is your top priority, you pull the lever, and your duplicate will deliver that message. This succeeds every time, while the wire-rope has only a 50% chance of working.
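
As a toy sketch of that comparison, assuming all you value is whether the message is delivered (utility 1) or not (utility 0):

```python
# Toy comparison of the two options, assuming the only thing valued is
# whether the message gets delivered (utility 1) or not (utility 0).
P_ROPE = 0.5  # estimated chance the improvised wire-rope climb succeeds

# Lever: a duplicate is created outside the hole with certainty, so the
# (non-indexical) "message delivered" outcome happens for sure.
eu_lever = 1.0

# Wire-rope: the message is delivered only if the climb succeeds.
eu_rope = P_ROPE * 1.0 + (1 - P_ROPE) * 0.0

print(eu_lever, eu_rope)  # 1.0 versus 0.5: pull the lever
```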

The point of this is that duplication is not necessarily like probability splitting. A 50% chance of being either inside or outside the hole is not the same thing as the certainty of being both.

Now, some selfish, indexical utility functions will treat those two cases as the same, but as we've shown above, most non-indexical utilities will not.

But where will you end up, really?

This is the question it feels like we must answer: after pulling the lever, do you expect to be the copy on the top or the copy on the bottom?

But that's a question without meaning. There is no stochasticity here, and no uncertainty. There will be one copy at the bottom of the hole, maybe asking themselves "I wonder where I will end up", and, after pulling the lever, there will be two copies, one at the top and one at the bottom, both of them remembering falling into the hole, thinking "I wonder where I will end up", and pulling the lever. Everything is deterministic and known.

What if you close your eyes for ten seconds after pulling the lever? You could argue that both copies will now face genuine uncertainty during those ten seconds, as they don't know whether they are on the top or on the bottom.

But both copies are thinking identical thoughts; their mental processes are the same. "Am I at the top or at the bottom?" will be thought by both copies, as will "if I open my eyes, do I expect to see dirt or clear sky?". Your two copies cannot distinguish themselves from each other in any way: they think the same, reason the same, and have the same evidence. As long as they can't distinguish their positions, they are to all intents and purposes the same agent.

You may still be tempted to assign probabilities to being below or above, but what if the lever initially creates one copy - and then, five seconds later, creates a million? How do you update your probabilities during these ten seconds? These kinds of questions illustrate the problem that seems to bedevil any theory of probability of "who you are" among identical or similar copies.
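
To make the difficulty concrete, here is a toy count-the-copies calculation using the numbers from this example (counting every copy equally, which is itself an assumption):

```python
# "Probability" of being the copy at the bottom, if you simply count the
# indistinguishable copies existing at each moment and treat them all equally.
def p_bottom(copies_above: int) -> float:
    copies_below = 1  # you, at the bottom of the hole
    return copies_below / (copies_below + copies_above)

print(p_bottom(1))          # first five seconds, one copy above: 0.5
print(p_bottom(1_000_001))  # after the million extra copies appear: ~1e-06
```

Nothing in your experience changes between the fifth and the sixth second, yet the count-based number jumps from one half to roughly one in a million; any theory of probability over "who you are" has to say something about that jump.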

The Many Worlds of Quantum Mechanics

So, what does this argument that duplication-is-not-probability imply for the many-worlds interpretation of quantum mechanics?

For me, this remains a great puzzle. Superficially, it just seems like a mass of duplications; the fact that some of these duplications have greater "quantum measure" than others doesn't seem to be relevant: we can't observe uniform increases or decreases in the quantum measure. Nothing in the quantum measure seems to imply that the "thread of conscious experience" should preferentially flow through larger-measure branches.

However, observing the universe we are in, we see that everything is very consistent with treating the quantum measure as a probability: chairs are stable, people don't tunnel through walls, and the world doesn't go instantly crazy at every moment.

This puzzles me, I confess. It has caused me to update away from many worlds and more towards alternative interpretations of quantum mechanics, such as the transactional interpretation.

12 comments

Comments sorted by top scores.

comment by gjm · 2018-06-20T23:50:16.679Z · LW(p) · GW(p)

I confess I don't really understand what problem this is attempting to show us that would explain why we shouldn't think of duplication and probability as alike, and why Stuart has become less keen on "many worlds" approaches to QM. I mean, it describes some questions one can ask, and (so far as I can see) then just gestures towards them and invites us to feel bad about those questions. It seems like I'm missing a step in the argument.

Certain duplication is not the exact same thing as probability. But who ever said it was?

It is, however, somewhat like probability, and I don't see anything in this post that should change anyone's opinion about that.

It seems as if what Stuart says about "the thread of conscious experience" is intended to be an argument against the Everett interpretation, but I don't understand why. There's nothing consciousness-specific about quantum measure (or about probability); there is some sense in which, if mutually exclusive and exhaustive events A and B happen with measures 0.1 and 0.9 respectively, "9x as much of you" ends up in the B-worlds as in the A-worlds, but what does this mystical thread-of-experience language gain for us? It seems to me that these differences in measure are relevant precisely because they correspond to differences in probability, and e.g. I care 10x more about what happens to me in futures that are 10x as likely. Is there supposed to be something illegitimate about that?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-21T11:04:39.139Z · LW(p) · GW(p)

> It is, however, somewhat like probability, and I don't see anything in this post that should change anyone's opinion about that.

How "somewhat"? The kind of behaviour I'm talking about - flipping the lever and letting your copy deliver the message - violates the independence axiom of expected utility, were duplication a probability.

As for why I'm less keen on MWI, it's simply that I see that a) duplication is not a probability, b) duplication with a measure doesn't seem any different from standard duplication, and c) my past experience causes me to see measure as (approximately) a probability.

Hence, MWI seems wrong. You probably disagree with a) or b), but do you see the deduction?

Replies from: gjm
comment by gjm · 2018-06-21T17:38:46.802Z · LW(p) · GW(p)

Again, I'm not saying that ("in-universe") duplication is exactly the same as probability, and neither is anyone else. Duplication is like probability in, e.g., the following sense: Suppose you do a bunch of experiments involving randomization, and suppose that every time you perform the basic operation "pick one of N things, all equally likely, independently of other choices" what actually happens is that you (along with the rest of the universe) are duplicated N times, and each duplicate sees one of those N choices, and then all the duplicates continue with their lives. And suppose we do this many times, duplicating each time. Then in the long run almost all your duplicates see results that look like those of random choices.
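
To put a number on that picture, here is a toy calculation (illustrative parameters, counting every duplicate equally): treat each fair coin flip as a two-way duplication and ask what fraction of the final duplicates saw a roughly 50/50 record.

```python
from math import comb

N = 1000  # number of two-way duplications; 2**N duplicates at the end

# Each duplicate's history is one of the 2**N possible outcome strings.
# Fraction of duplicates whose observed frequency of "heads" lies within
# five percentage points of one half (i.e. between 450 and 550 heads):
within = sum(comb(N, k) for k in range(450, 551))
print(within / 2**N)  # ~0.999: almost every duplicate sees a random-looking record
```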

I can, of course, see the inference from "in-universe duplication is not essentially the same as probability" + "in-universe duplication is essentially the same as cross-universe duplication" + "MWI says cross-universe duplication is essentially the same as probability" to "MWI is wrong", at least if the essentially-the-same relation is transitive. But I disagree with at least one of those first two propositions; exactly which might depend on how we cash out "essentially the same as".

(I'm not sure whether the paragraph above is exactly responsive to what you said, because you didn't say anything about the distinction between in-universe and cross-universe duplication, which to me seems highly relevant, and because I'm not sure exactly what you mean by "duplication with a measure". Feel free to clarify and/or prod further, if I have missed the point somehow.)

comment by astridain (aristide-twain) · 2018-06-20T16:39:25.100Z · LW(p) · GW(p)

Being a hopeless munchkin, I will note that the thought experiment has an obvious loophole: for the choice to truly be a choice, we would have to assume, somewhat arbitrarily, that using the duplication lever will disintegrate the machinery. Else, you could pull the lever to create a duplicate who'll deliver the message, and *then* the you at the bottom of the well could rip up the machinery and take their shot at climbing up.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-22T11:11:41.754Z · LW(p) · GW(p)

Your feeble attempts at munchkining are noted and scorned. The proper munchkin would pull the lever again and again, creating an army of yous...

Replies from: aristide-twain
comment by astridain (aristide-twain) · 2018-06-24T12:51:25.489Z · LW(p) · GW(p)

We don't actually know the machine works more than once, do we? It creates "a" duplicate of you "when" you pull the lever. That doesn't necessarily imply that it outputs additional duplicates if you keep pulling the lever. Maybe it has a limited store of raw materials to make the duplicates from, who knows.

Besides, I was just munchkinning myself out of a situation where a sentient individual has to die (i.e. a version of myself). Creating an army up there may have its uses, but it does not relate to solving the initial problem. Unless we are proposing the army make a human ladder? Seems unpleasant.

comment by Chris_Leong · 2018-06-21T00:46:24.812Z · LW(p) · GW(p)

I agree that duplication is not standard probability. Standard probability has no notion of an indexical so these situations end up overlapping.

There are two routes here, as per A tree falls on sleeping beauty. One is to go the halfer route and handle duplications in the decision theory. The other is to go the thirder route and construct what I'll call an agent-state relative probability. Roughly, I mean a notion of probability that is concerned more with what fraction of agent-states will be correct than with any objective notion of probability. As I explained in my comment on the paradoxes post, we shouldn't be surprised that a notion of probability specifically designed to be relative in this sense will be relative to what agents exist. So I'm more than happy to bite the bullet on the reductio ad absurdum.

comment by Gurkenglas · 2018-06-21T10:13:44.486Z · LW(p) · GW(p)

If we can rule out thoughtcrime, we could have a large quantum computer simulate many environments with different goal systems in charge, then act as if the mechanics of indexical uncertainty don't work out to this already winning us the game.

comment by woodchopper · 2018-06-26T00:12:50.191Z · LW(p) · GW(p)

If an exact copy of you were to be created, it would have to be stuck in the hole as well. If the 'copy' is not in the hole, then it is not you, because it is experiencing different inputs and has a different brain state.

comment by Rafael Harth (sil-ver) · 2018-06-20T16:32:02.650Z · LW(p) · GW(p)

> But that's a question without meaning.

I don't think it is. There seems to be an effort not to use the term consciousness, but we need to talk about consciousness to understand the difference. Either quantum copies share the same consciousness or they don't. Probabilities may or may not depend on this question; assuming the conclusions of your previous posts on the anthropic principle, I think they do. We can't answer that question, but we can understand that there is a question and grasp the conceptual difference. Discussing these issues without doing so seems fairly misguided to me.

comment by Davide_Zagami · 2018-06-24T20:30:22.924Z · LW(p) · GW(p)

After reading this, I feel that how one should deal with anthropics depends strictly on one's goals. I'm not sure exactly which cognitive algorithm does the correct thing in general, but it seems that sometimes it reduces to "standard" probabilities and sometimes not. May I ask what UDT says about all of this, exactly?

> Suppose you're rushing an urgent message back to the general of your army, and you fall into a deep hole. Down here, conveniently, there's a lever that can create a duplicate of you outside the hole. You can also break open the lever and use the wiring as ropes to climb to the top. You estimate that the second course of action has a 50% chance of success. What do you do?
>
> Obviously, if the message is your top priority, you pull the lever, and your duplicate will deliver that message. This succeeds every time, while the wire-rope has only a 50% chance of working.

Agree.

> after pulling the lever, do you expect to be the copy on the top or the copy on the bottom?

Question without meaning per se, agree.

> what if the lever initially creates one copy - and then, five seconds later, creates a million? How do you update your probabilities during these ten seconds?

Before pulling the lever, I commit to doing the following.

For the first five seconds, I will think (all copies of me will think) "I am above". This way, 50% of all my copies will be wrong.

For the remaining five seconds, I will think (all copies of me will think) "I am above". This way, one millionth of all my copies will be wrong.

If each of my copies were receiving money for distinguishing which copy he is, then only one millionth of all my copies would be poor.

This sounds suspiciously like updating probabilities the "standard" way, especially if you substitute "copies" with "measure".

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-25T08:51:01.760Z · LW(p) · GW(p)

UDT can update in that way, in practice (you need that, to avoid Dutch Books). It just doesn't have a position on the anthropic probability itself, only on the behaviour under evidence updates.