MWI, copies and probability
post by Roko · 2010-06-25T16:46:08.379Z · LW · GW · Legacy · 162 comments
Followup to: Poll: What value extra copies?
For those of you who didn't follow Eliezer's Quantum Physics Sequence, let me reiterate that there is something very messed up about the universe we live in. Specifically, the Many Worlds Interpretation (MWI) of quantum mechanics states that our entire classical world gets copied something like 10^(40±20) times per second.¹ You are not a line through time, but a branching tree.
If you think carefully about Descartes' "I think therefore I am" type skepticism, and approach your stream of sensory observations from such a skeptical point of view, you should note that if you really were just one branch-line in a person-tree, it would feel exactly the same as if you were a unique person-line through time, because looking backwards, a tree looks like a line, and your memory can only look backwards.
However, the rules of quantum mechanics mean that the integral of the modulus squared of the amplitude density, ∫|Ψ|², is conserved in the copying process. Therefore, the tree that is you has branches that get thinner (where thickness is ∫|Ψ|² over the localized density "blob" that represents that branch) as they branch off. In fact, they get thinner in such a way that if you gathered them together into a bundle, the bundle would be as thick as the trunk it came from.
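To make the "bundle as thick as the trunk" point concrete, here is a toy numerical sketch (the branch count and amplitudes are made up purely for illustration; any values would do): norm-preserving evolution means the children's weights always sum back to the parent's.

```python
import numpy as np

# Hypothetical parent branch with |Psi|^2 weight 0.6, splitting into 4 children.
parent_weight = 0.6

# Made-up complex amplitudes for the child branches.
child_amplitudes = np.array([0.5 + 0.2j, -0.3 + 0.4j, 0.1 - 0.1j, 0.2 + 0.0j])

# Rescale so the children's total |amplitude|^2 equals the parent's weight,
# mimicking the norm-preserving (unitary) evolution described above.
child_amplitudes *= np.sqrt(parent_weight / np.sum(np.abs(child_amplitudes) ** 2))

child_weights = np.abs(child_amplitudes) ** 2
print(child_weights)        # each child branch is "thinner" than the parent...
print(child_weights.sum())  # ...but the bundle is as thick as the trunk: 0.6
```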
Now, since each copying event creates a slightly different classical universe, the copies in each of the sub-branches will each experience random events going differently. This means that over a timescale of decades, they will be totally "different" people, with different jobs, probably different partners, living in different places, though they will (of course) have your DNA, approximate physical appearance, and an identical history up until the time they branched off. For timescales on the order of a day, I suspect that almost all of the copies will be virtually identical to you, even down to going to bed at the same time, having exactly the same schedule that day, thinking almost all of the same thoughts, etc.
MWI mixes copies and probability
When a "random" event happens, either the event was pseudorandom (like a large digit of pi) or it was a copy event, meaning that both (or all) outcomes were realized elsewhere in the wavefunction. This means that in many situations, when you say "there is a probability p of event X happening", what this really means is "proportion p of my copy-children will experience X".
LW doesn't care about copies
In Poll: What value extra copies?, I asked what value people placed upon non-interacting extra copies of themselves, asking both about lock-step identical and statistically identical copies. The overwhelming opinion was that neither was of much value. For example, Sly comments:²
"I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?"
How to get away with attempted murder
Suppose you throw a grenade with a quantum detonator at Sly. The detonator will sample a qubit in an even superposition of states 1 and 0. On a 0 it explodes, instantly vaporizing Sly (it's a very powerful grenade). On a 1, it defuses the grenade and dispenses a $100 note. Suppose that you throw it and observe that it doesn't explode:
(A) does Sly charge you with attempted murder, or does he thank you for giving him $100 in exchange for something that had no value to him anyway?
(B) if he thanks you for the free $100, does he ask for another one of those nice free hundred dollar note dispensers? (This is the "quantum suicide" option.)
(C) if he says "the one you've already given me was great, but no more please", then presumably if you throw another one against his will, he will thank you for the free $100 again. And so on ad infinitum. Sly is temporally inconsistent if this option is chosen.
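To see the temporal inconsistency in (C) numerically, here is a back-of-the-envelope sketch of repeated throws, treating each throw as an even branch split (the dollar amounts and throw count are just illustrative):

```python
# Repeated quantum grenades: after each throw, half of Sly's branch-measure
# survives and gains $100; the other half is vaporized.
measure_alive = 1.0
dollars_if_alive = 0
for throw in range(1, 11):
    measure_alive *= 0.5
    dollars_if_alive += 100
    weighted_value = measure_alive * dollars_if_alive  # measure-weighted payoff
    print(f"throw {throw:2d}: surviving measure {measure_alive:.4f}, "
          f"survivor holds ${dollars_if_alive}, weighted value ${weighted_value:.2f}")
# The surviving Sly always says "thanks for the free $100", yet the
# measure-weighted value of the deal shrinks toward zero with every throw.
```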
The punch line is that the physics we run on gives us a very strong reason to care about the welfare of copies of ourselves, which is (according to my survey) a counter-intuitive result.
EDIT: Quite a few people are biting the quantum suicide bullet. I think I'll have to talk about that next. Also, Wei Dai summarizes:
Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:
- We value extra copies in other quantum branches.
- We don't value extra copies that are just spatially separated from us (and are not too far away).
- We ought to value both kinds of copies the same way.
- Giving up 1 is the position of "quantum immortality".
- Giving up 2 seems to be Roko's position in this post.
- Giving up 3 would imply that our values are rather arbitrary: there seem to be no morally relevant differences between these two kinds of copies, so why should we value one and not the other? But according to the "complexity of value" position, perhaps this isn't really a big problem.
I might add a fourth option that many people in the comments seem to be going after: (4) We don't intrinsically value copies in other branches, we just have a subjective anticipation of becoming them.
1: The copying events are not discrete, rather they consist of a continuous deformation of probability amplitude in state space, but the shape of that deformation looks a lot like a continuous approximation to a discrete copying event, and the classical rules of physics approximately govern the time evolution of the "copies" as if they were completely independent. This last statement is the phenomenon of decoherence. The uncertainty in the copying rate is due to my ignorance, and I would welcome a physicist correcting me.
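For readers who want a toy picture of that last statement, the sketch below (a two-state system with an exponentially decaying interference term and a made-up decay rate; not a real physical model) shows the sense in which decohered branches stop influencing one another:

```python
import numpy as np

# Toy density matrix for an equal superposition of two classical-looking branches.
rho = np.array([[0.5, 0.5],
                [0.5, 0.5]], dtype=complex)

decay = 0.1  # made-up decoherence rate per unit time
for t in range(0, 60, 10):
    coherence = 0.5 * np.exp(-decay * t)  # off-diagonal (interference) term
    rho[0, 1] = rho[1, 0] = coherence
    print(f"t={t:2d}  interference term = {coherence:.4f}")
# As the off-diagonal terms vanish, the two branches stop interfering and
# behave like independent classical worlds, each with weight 0.5.
```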
2: There were many others who expressed roughly similar views, and I don't hold it as a "black mark" to pick the option that I am advising against; rather, I encourage people to honestly put forward their opinions in a spirit of communal learning.
162 comments
Comments sorted by top scores.
comment by cousin_it · 2010-06-25T19:07:04.685Z · LW(p) · GW(p)
Your whole "paradoxical" setup works just as well if the randomizing device in the grenade is classical rather than quantum. But in the classical case our feelings are just the same, though no copies exist! The moral of the story is, I certainly do care about probability-branches of myself (the probabilities could be classical or quantum, no difference), but you haven't yet persuaded me to care about arbitrary copies of myself elsewhere in the universe that aren't connected to my "main tree", so to speak.
↑ comment by Douglas_Knight · 2010-06-25T20:14:22.795Z · LW(p) · GW(p)
In the classical case we could convert the probability into indexical uncertainty. That is, the random choices were made at the beginning of time. There's no tree, there are just independent copies marching in lock-step until they behave differently.
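A toy sketch of that picture, with made-up events; the only point is that the randomness can be fixed up front and the copies remain indistinguishable (to themselves) until it is revealed:

```python
import random

random.seed(0)
N = 8  # number of lock-step copies (illustrative)

# All "random" outcomes are decided at the beginning of time, one per copy.
predetermined = [random.choice(["explodes", "dud"]) for _ in range(N)]

# Every copy runs exactly the same history until the grenade reveals its outcome.
for i, outcome in enumerate(predetermined):
    history = ["wake up", "walk to work", "grenade thrown", outcome]
    print(f"copy {i}: {' -> '.join(history)}")
# Before the reveal, a copy's uncertainty about its own fate is purely
# indexical: it does not know which of the pre-assigned outcomes is "its" one.
```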
↑ comment by Roko · 2010-06-25T20:54:22.995Z · LW(p) · GW(p)
No, it doesn't, if by classical you mean "pseudorandom". The pseudorandom grenade that Sly holds "could" really have killed him, whereas the quantum grenade that he holds never stood any chance of killing the Sly that holds it, but it certainly killed his quantum twin, whom he professes not to care about.
↑ comment by bogdanb · 2010-06-26T19:56:21.175Z · LW(p) · GW(p)
I think you mean “might have”. If the grenade is pseudorandom and it didn’t kill him, it just means that deterministically it couldn’t kill him. It’s perfectly equivalent to a fake grenade that you don’t know is fake.
It can’t kill you (it’s physically impossible for it to explode), but it might kill you (you don’t know that it’s physically impossible etc. etc.).
:-P
comment by Wei Dai (Wei_Dai) · 2010-06-25T22:32:23.533Z · LW(p) · GW(p)
Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:
- We value extra copies in other quantum branches.
- We don't value extra copies that are just spatially separated from us (and are not too far away).
- We ought to value both kinds of copies the same way.
- Giving up 1 is the position of "quantum immortality".
- Giving up 2 seems to be Roko's position in this post.
- Giving up 3 would imply that our values are rather arbitrary: there seem to be no morally relevant differences between these two kinds of copies, so why should we value one and not the other? But according to the "complexity of value" position, perhaps this isn't really a big problem.
Another possibility is to hold a probabilistic superposition of these three positions, depending on the relative strengths of the relevant intuitions in your mind.
↑ comment by timtyler · 2010-06-26T07:36:12.311Z · LW(p) · GW(p)
Is there any evidence for 1: "We value extra copies in other quantum branches"...?
Who does that? It seems like a crazy position to take - since those are in other worlds!
Rejecting a p(0.5) grenade is not "valuing copies in other quantum branches." It is simply not wanting to die. Making such a decision while not knowing how the probability will turn out works just the same classically, with no multiple copies involved. Evidently the decision has nothing to do with "valuing multiple copies" - and is simply the result of the observer's uncertainty.
↑ comment by wedrifid · 2010-06-26T08:17:58.172Z · LW(p) · GW(p)
Is there any evidence for 1: "We value extra copies in other quantum branches"...?
Who does that? It seems like a crazy position to take - since those are in other worlds!
Me. Valuing existence in as much Everett branch as possible sounds like one of the least arbitrary preferences one could possibly have.
↑ comment by NancyLebovitz · 2010-06-26T12:43:59.893Z · LW(p) · GW(p)
How does it compare to wanting to make a large positive difference in as many Everett branches as possible?
↑ comment by wedrifid · 2010-06-26T12:53:48.783Z · LW(p) · GW(p)
Roughly equal up until the point where you are choosing what 'positive difference' means. While that is inevitably arbitrary, it is arbitrary in a, well, positive way. While it does seem to me that basic self-perpetuation is in some sense more fundamental or basic than any sophisticated value system, I don't endorse it any more than I endorse gravity.
↑ comment by timtyler · 2010-06-26T09:22:43.043Z · LW(p) · GW(p)
Valuing existence?!? I have no idea what that means. The existence - of what?
↑ comment by lukstafi · 2010-06-27T23:38:23.865Z · LW(p) · GW(p)
The existence of valuing, at least ;-)
If you ask what "existence in another Everett branch" means, it means at least that at some point it was "objectively" a probable option ("objectively" means you were not epistemically wrong about assigning them probability), so that, updatelessly, you should care about them.
↑ comment by timtyler · 2010-06-28T06:45:54.759Z · LW(p) · GW(p)
The multiverse smears me into a messy continuum of me and not-me. In this "least arbitrary" of preference schemes, it is not at all clear what is actually being valued.
If you are saying that the MWI is just a way of visualising probability, then we are back to:
"Making such a decision while not knowing how the probabilty will turn out works just the same classically, with no multiple copies involved. Evidently the decision has nothing to do with "valuing multiple copies" - and is simply the result of the observer's uncertainty."
Observers often place value on future possibilities that they might find themselves witnessing. But that is not about quantum theory, it is about observer uncertainty. You get precisely the same phenomenon in classical universes. To claim that that is valuing your future self in other worlds is thus a really bad way of looking at what is happening. What people are valuing is usually, in part, their own possible future existence. And they value that just the same whether they are in a universe with many worlds physics - or not. The values are nothing to do with whether the laws of physics dictate that copying takes place. If it turns out experimentally that wavefunctions collapse, that will have roughly zero impact on most people's moral systems. They never valued other Everett worlds in the first place - so their loss would mean practically nothing to them.
The "many worlds" do not significantly interfere with each other, once they are remote elements in the superposition. A short while after they have split they are gone for good. There is usually no reason to value things you will never see again. You have no way to influence them at that stage anyway. Actually caring about what happens in other worlds involves counterfactuals - and so is not something evolution can be expected to favour. That is an obvious reason for so few people actually doing it.
Maybe - from the existence of this debate - this is some curious corner of the internet where people really do care about what happens in other worlds - or at least think that they do. If so, IMO, you folk have probably been misled - and are in need of talking down. A moral system that depends on the details of the interpretation of quantum physics? Really? The idea has a high geek factor maybe - but it seems to be lacking in common sense.
Purporting to care about a bunch of things that never happened, that can't influence you and that you can't do anything about makes little sense as morality - but looks a lot like signalling: "see how very much I care?" / "look at all the things I care about". It seems to be an extreme and unbelievable signal, though - so: you are kidding - right?
↑ comment by lukstafi · 2010-06-30T22:10:02.037Z · LW(p) · GW(p)
Since you are writing below my post and I sense detachment from what I've tried to express, I refer you to my http://lesswrong.com/lw/2di/poll_what_value_extra_copies/27ee and http://lesswrong.com/lw/2e0/mwi_copies_and_probability/27f1 comments.
ETA: I retract "detachment". Why don't you play Russian roulette? Because you could get killed. Why does a magician play Russian roulette? Because he knows he won't. Someone who doesn't value Everett branches according to their "reality mass" doesn't win -- no magician would play quantum Russian roulette. That you cannot experience being dead doesn't mean that you are immortal. (And additionally, my preferences are over worlds, not over experiences.)
↑ comment by timtyler · 2010-07-09T20:27:08.005Z · LW(p) · GW(p)
The thing is, the correct "expected utility" sum to perform is not really much to do with "valuing Everett branches". It is to do with what you know - and what you don't. Some things you don't know - because of quantum uncertainty. However, other things you don't know because you never learned about them, other things you don't know because you forgot them, and other things you don't know because of your delusions. You must calculate the expected consequences of your actions based on your knowledge - and your knowledge of your ignorance. Quantum uncertainty is only a small part of that ignorance - and indeed, it is usually insignificant enough to be totally ignored.
This "valuing Everett branches" material mostly seems like a delusion to me. Human decision theory has precious little to do with the MWI.
↑ comment by wedrifid · 2010-06-26T11:18:38.237Z · LW(p) · GW(p)
Take "not wanting to die" and extract the state which people are in if they do not in fact die. Alternately, consider what an observer who has not taken a "crazy position" may choose to value. Then consider the difference between 'deep and mysterious' and just plain silly.
↑ comment by AlephNeil · 2010-06-27T13:28:55.483Z · LW(p) · GW(p)
Rejecting a p(0.5) grenade is not "valuing copies in other quantum branches." It is simply not wanting to die.
You don't seem to realise that under the many worlds interpretation, the probabilities of the different outcomes of quantum events correspond (roughly speaking) to the amplitudes assigned to different universes, each of which contains instances (i.e. 'copies') of you and everything else. In other words, under MWI there is no difference between 'wanting to maximize your quantum probability of survival' and 'valuing copies of yourself in future quantum branches'.
[Note that I've substituted the word future for other. Whether A = "you at time t0" cares about B and C = "two different copies of you at time t1", both of which are 'descendants' of A, is a somewhat different question from whether B cares about C. But this difference is orthogonal to the present debate.]
If you want to simply deny the MWI then fine but you should acknowledge that that's ultimately what you're disagreeing with. (Also, personally I would argue that the only alternatives to the MWI are either (a) incoherent like Copenhagen (b) unparsimonious like Bohm's interpretation or (c) contain unmotivated deviations from the predictions of orthodox quantum mechanics (like the GRW theory).)
↑ comment by timtyler · 2010-06-27T16:36:24.720Z · LW(p) · GW(p)
The phenomenon has nothing to do with quantum theory. You get the same result if the grenade depends on a coin toss - and the grenade recipient is ignorant of the result. That is the point I just explained.
The behaviour isn't the result of valuing copies in other worlds - it is simply valuing your own existence under conditions of uncertainty. The same behaviour would happen just fine in deterministic classical universes with no copying. So, the phenomenon has nothing to do with valuing copies - since it happens just the same regardless of whether the universe makes copies or not.
↑ comment by AlephNeil · 2010-06-27T18:02:42.169Z · LW(p) · GW(p)
OK, I'll try again, from the beginning:
What Wei Dai means by "valuing extra copies in other quantum branches" is one of two things:
- (1) (Weak version:) The fact that A values B and C, where B and C are possible 'future selves' of A.
- (2) (Strong version:) The fact that B values C, where C is B's "counterpart in a quantum counterfactual world".
Now, there's an argument to be had about whether (2) should be true, even assuming (1), but right now this simply muddies the waters, and it will be much clearer if we concentrate on (1).
So, A valuing his own continued existence means A wanting it to be true that B and C, his possible future selves (in different counterfactual worlds), are both alive. A would not be very happy with B being dead and C being alive, because he would say to himself "that means I have (e.g.) a 1/2 chance of dying". He'd much rather that B and C were both alive.
However, A might think like this: "If the Many Worlds Interpretation is true then it's wrong to say that either B or C but not both will exist. Rather, both of them exist independently in separate universes. Now, what's important to me is that my mind continues in some form. But I don't actually need both B and C for that to happen. So if Roko offered me $100 in exchange for the instantaneous, painless death of B I'd quite happily accept, because from my perspective all that will happen is that I'll receive the $100."
Presumably you disagree with this reasoning, right? Even if MWI is true? Well, the powerful intuition that causes you to disagree is what Wei is talking about. (As he says, giving up that intuition is the position of "quantum immortality".)
The fact that Wei states "the strong version" when "the weak version" would have sufficed is unfortunate. But you will completely miss the point of the debate if you concentrate solely on the difference between the two versions.
↑ comment by Roko · 2010-06-27T19:47:49.968Z · LW(p) · GW(p)
OK, I'll try again, from the beginning
Tim sometimes morphs into an "I won't update" bot during debates.
↑ comment by timtyler · 2010-06-27T20:27:51.558Z · LW(p) · GW(p)
Er, what evidence exactly am I supposed to be updating on?
The supplied evidence for 1 ("We value extra copies in other quantum branches") seems feeble. Most people are totally ignorant of the MWI. Most people lived before it was invented. Quantum theory is mostly an irrelevance - as far as people's values go. If - astonishingly - evidence of wavefunction collapse was ever found, people would carry on caring about things much as before - without any breakdown of morality - despite the loss of practically everything in other worlds. That thought experiment seems to demonstrate that most people care very little about copies of themselves in other worlds - since they would behave much the same if scientists discovered that those worlds did not exist.
Maybe there are somewhere a bunch of people with very odd values, who actually believe that they really do value copies of themselves in other worlds. I can think of at least one fellow who thinks like that - David Pearce. However, if so, this hypothetical silent mass of people have not stood up to be counted here.
↑ comment by red75 · 2010-06-26T09:57:59.623Z · LW(p) · GW(p)
We can construct a less intuitive setup. You have created 99 copies of yourself.
Then every copy gets a fake grenade (which always gives $100). The original you gets the real grenade. After the explosion/non-explosion, the remaining "you"s are merged. Will you accept the next grenade in that setup?
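A quick measure-weighted tally of that setup, assuming (and this is exactly the point in dispute) that copies and the original count equally:

```python
copies = 99          # each holds a fake grenade that always pays $100
original = 1         # holds the real 50/50 grenade
total = copies + original

# Count surviving instances in each outcome, weighting every instance equally.
survivors_if_explodes = copies               # the original is vaporized
survivors_if_dud = copies + original
expected_survivors = 0.5 * survivors_if_explodes + 0.5 * survivors_if_dud
expected_payout = copies * 100 + 0.5 * original * 100

print(f"expected surviving instances per round: {expected_survivors} of {total}")
print(f"expected payout per round: ${expected_payout:.0f}")
# If the post-round merge restores a single "you" either way, the grenade looks
# cheap to accept; if the original's 50% death carries full weight, it doesn't.
```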
↑ comment by timtyler · 2010-06-26T12:26:38.057Z · LW(p) · GW(p)
I would be fine with that - assuming that the copies came out with the extra money; that the copying setup was reliable, etc.
This apparently has little to do with valuing "extra copies in other quantum branches" though - there is no "Everett merge" procedure.
↑ comment by red75 · 2010-06-26T14:05:00.475Z · LW(p) · GW(p)
Can I sum it up as: if you know that "backup copies" exist, then it's OK to risk being exploded? Do you care about being backed up in all Everett branches, then? Or is it enough to be backed up in the branch where the grenade explodes?
↑ comment by timtyler · 2010-06-27T08:05:09.677Z · LW(p) · GW(p)
The usual idea of a "backup" is that it can be used to restore from if the "original" is lost or damaged. Everett worlds are not "backups" in that sense of the word. If a quantum grenade kills someone, their grieving wife and daughters are not consoled much by the fact that - in other Everett worlds - the bomb did not go off. The supposed "backups" are inaccessible to them.
↑ comment by wedrifid · 2010-06-26T13:04:23.689Z · LW(p) · GW(p)
This apparently has little to do with valuing "extra copies in other quantum branches" though - there is no "Everett merge" procedure.
While for the purposes of this discussion it makes no difference, my understanding is that the "Everett branches" form more of a mesh if you look at them closely. That is, each possible state for a world can be arrived at from many different past states, with some of those states themselves sharing common ancestors.
↑ comment by timtyler · 2010-06-26T14:00:33.373Z · LW(p) · GW(p)
Maybe - but that is certainly not the conventional MWI - see:
"Why don't worlds fuse, as well as split?"
↑ comment by wedrifid · 2010-06-26T15:19:21.118Z · LW(p) · GW(p)
Yes, entropy considerations make recombining comparatively rare. Much like it's more likely for an egg to break than to recombine perfectly. Physical interactions being reversible in principle doesn't mean we should expect to see things reverse themselves all that often. I doubt that we have a substantial disagreement (at least, we don't if I take your reference to be representative of your position.)
↑ comment by Vladimir_Nesov · 2010-06-25T23:01:17.711Z · LW(p) · GW(p)
Very nice! In this setting, my position is to give up both 2 (because I don't believe moral intuition works adequately to evaluate this situation) and 3 ("complexity of value" argument: even if we do value spatially separated copies, it's not at all in the same way as we value MWI copies), while accepting 1 (for moral intuition, quantum branches are analogous to probability, where normal/classical situations are concerned).
↑ comment by randallsquared · 2010-06-27T01:00:58.290Z · LW(p) · GW(p)
Giving up 3 would imply that our values are rather arbitrary
Is that even in question? If these values (whatever they are in a given person) can be derived from some higher value, then they may not be arbitrary, but at some point you're either going to find a supergoal that all values derive from, or you're going to find two values that are arbitrary with respect to each other.
Finding the latter case sooner rather than later seems to match how humans really are, so unless you're willing to argue that humans have a supergoal, giving up 3 is a step that you've already taken anyway.
comment by Kaj_Sotala · 2010-06-25T20:10:03.756Z · LW(p) · GW(p)
I'll bite this bullet. If the grenade is always reliable, and if MWI is true for certain, and if it's not just a mathematical formalism and the other worlds actually exist, and if I don't have any relatives or other people who'd care about me, and if my existing in this world doesn't produce any net good for the other residents of this world, and if I won't end up mangled... then I would accept the deal of having this grenade thrown at me in exchange for 100 dollars. Likewise, I would deem it legal to offer this trade to other people, if it could be ascertained, without any chance of corruption, coercion or abuse, that these same criteria also applied to all the people the trade was being offered to.
But for that purpose, you have to assume pretty much all of those conditions. Since we can't actually do that in real life, this isn't really as paradoxical a dilemma as you imply it to be. In order to make it a paradox, you'd have to tack on so many assumptions that it ceases to have nearly anything to do with the world we live in anymore.
It reminds me of the argument that utilitarianism is wrong, because utilitarianism says that doctors should kill healthy patients to get life-saving organ transplants for several other people. Yes, if you could institute this as a general policy with no chance of anyone ever finding out about that, and you could always kill the chosen people without them having a chance to escape, and a dozen other caveats, then this might be worth it... but in pretty much any conceivable real-life situation, even trying to institute such a policy would obviously just do more harm than good, so it isn't really an argument against utilitarianism. Likewise, your proposed scenario isn't really an argument against not valuing identical copies.
↑ comment by AlephNeil · 2010-06-25T20:20:54.073Z · LW(p) · GW(p)
It almost sounds like you're saying:
If I thought my life was worthless anyway then sure, throw the grenade at me.
(I'm being a bit facetious, because there is a difference between "my existing in this world doesn't produce any net good for anyone" and "my existing in this world doesn't produce any net good for anyone besides myself". But in real life, how plausible is it that we would have one without the other?)
↑ comment by Kaj_Sotala · 2010-06-25T20:33:50.484Z · LW(p) · GW(p)
I never thought of it that way, but you're right.
↑ comment by Roko · 2010-06-26T00:00:01.698Z · LW(p) · GW(p)
You're ignoring the tradeoff that people face between making themselves happier and making others happier.
In this case, the $100 makes it relatively unattractive. But suppose it was $100,000,000?
↑ comment by Nisan · 2010-06-26T13:35:34.901Z · LW(p) · GW(p)
Then grenade, please! I could help more person-moments in the one Everett branch with a hundred million dollars than I could in both branches with my current level of income. And what's more, my life would be more comfortable, averaging over all my person-moments.
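The arithmetic behind that choice, with made-up comfort numbers, contrasting the two accountings at issue (per branch versus per person-moment):

```python
# 50/50 grenade with a hypothetical $100M prize attached to survival.
baseline_comfort = 1.0   # made-up comfort per person-moment at current income
rich_comfort = 10.0      # made-up comfort per person-moment with $100M

# (a) Average over branches, with the dead branch counting as zero:
per_branch = 0.5 * rich_comfort + 0.5 * 0.0

# (b) Average over surviving person-moments only (the accounting above):
per_person_moment = rich_comfort  # the only remaining person-moments are rich ones

print(f"refuse the grenade:                   {baseline_comfort}")
print(f"accept, averaged over branches:       {per_branch}")
print(f"accept, averaged over person-moments: {per_person_moment}")
# Accounting (b) always favours the grenade; accounting (a) depends on whether
# rich_comfort exceeds twice baseline_comfort.
```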
↑ comment by Roko · 2010-06-26T15:12:02.161Z · LW(p) · GW(p)
Yes, that's true. In reality, finding real-life systems that give you positive expected money for quantum suicide is hard. (Though far from impossible)
↑ comment by wedrifid · 2010-06-26T15:22:29.611Z · LW(p) · GW(p)
Really? Can't I just make a deal with Fred, pool our finances and distribute our quantum probability of life in proportion to our financial contribution?
Or, reading again, are you referring to maximising "p(life) * money_if_you_live"? I suppose that just relies on trading with people who are desperate. For example if Fred requires $1,000,000 to cure his cancer and has far less money than that he will benefit from trading a far smaller slice of quantum_life at a worse price so that his slice of life actually involves living.
Incidentally Fred's situation there is one of very few cases where I would actually Quantum Suicide myself. I value my quantum slice extremely highly... but everything has a price (in utility if not dollars).
↑ comment by Roko · 2010-06-26T15:50:43.539Z · LW(p) · GW(p)
The problem with a lot of these tricks is that in a fair lottery, p(life) * money_if_you_live is fixed, and in a real lottery, it goes down every time you play because real lotteries have negative expected value.
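The invariance being pointed at here, spelled out with arbitrary numbers:

```python
stake = 1_000  # arbitrary stake
for p_live in (0.5, 0.1, 0.01):
    money_if_you_live = stake / p_live  # fair (zero-edge) payout at those odds
    print(f"p(life)={p_live}: payout ${money_if_you_live:,.0f}, "
          f"p * payout = ${p_live * money_if_you_live:,.0f}")
# In a fair lottery, p(life) * money_if_you_live always equals the stake;
# a real lottery's house edge only pushes the product below the stake.
```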
↑ comment by wedrifid · 2010-06-26T15:54:01.647Z · LW(p) · GW(p)
That's why you always make sure you're the house.
By the way, underscores work like asterisks do. Escape them with an \ if you want to use more than one.
↑ comment by Roko · 2010-06-26T15:56:57.148Z · LW(p) · GW(p)
I think that a better solution is to use the stock market as a fair lottery. Then you pick your bet: E($) is always 0. (If there were obvious ways to lose money in expectation on the market, then there would be obvious ways to make it. But that is unlikely.)
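A toy simulation of that "fair lottery" point, under a zero-drift assumption (real markets have drift and transaction costs, which the parenthetical above brackets):

```python
import random

random.seed(1)

# Model a "fair" market bet: the price moves +1% or -1% with equal probability.
def mean_return(leverage, trials=100_000):
    total = 0.0
    for _ in range(trials):
        move = random.choice((0.01, -0.01))
        total += leverage * move
    return total / trials

for lev in (1, 10, 100):
    print(f"leverage {lev:3d}: mean return per bet ~ {mean_return(lev):+.5f}")
# Whatever bet size you pick, the expectation stays near zero: you can widen
# the spread of outcomes (which is what a quantum-suicide bettor wants) but
# you cannot shift the mean.
```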
↑ comment by wedrifid · 2010-06-26T16:15:52.472Z · LW(p) · GW(p)
I think that a better solution is to use the stock market as a fair lottery.
Fair systems are good for the person who would otherwise be exploited. They aren't good for the one who is seeking advantage. The whole point in this branch is that you were considering the availability of finding deals that give you positive expected returns.
If you are looking for a way to ensure a real positive expectation from a deal you don't create a stock market, you create a black market.
comment by Dre · 2010-06-25T20:13:08.721Z · LW(p) · GW(p)
I don't think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don't care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don't place a high marginal value on that.
↑ comment by bogdanb · 2010-06-26T19:52:28.134Z · LW(p) · GW(p)
Exactly my view.
A clarification: suppose Roko throws such a qGrenade (TM) at me, and I get $100. I will become angry and may attempt to inflict violence upon Roko. However, that is not because I’m sad about the 50% of parallel, untouchable universes where I’m dead. Instead, it is because Roko’s behavior is strong evidence that in the future he may do dangerous things; righteous anger now (and, perhaps, violence) is simply intended to reduce the measure of my current “futures” where Roko kills me.
On a slightly different note, worrying about my “parallel” copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn’t mean anything. I really don’t care that my past self a year ago had a toothache — except in the limited sense that it’s slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.
Like Sly, I don’t put much value in “versions” of me I can’t interact with. (The “much” is there because, of course, I don’t know with 100% certainty how the universe works, so I can’t be 100% sure what I can interact with.) But my “future selves” are in a kind of interaction with me: what I do influences which of those future selves I’ll become. The value assigned to them is akin to the value someone in free-fall assigns to the rigidity of the surface below them: they aren’t angry because (say) the pavement is hard, in itself; they are angry because it implies a squishy future for themselves. On the other hand, they really don’t care about the surface they’ve fallen from.
↑ comment by wedrifid · 2010-06-30T16:10:36.477Z · LW(p) · GW(p)
On a slightly different note, worrying about my “parallel” copies (or even about their futures) seems to me quite akin to worrying about my past selves. It simply doesn’t mean anything. I really don’t care that my past self a year ago had a toothache — except in the limited sense that it’s slight evidence that I may in the future be predisposed to toothaches. I do care about the probability of my future selves having aching teeth, because I may become them.
With this in mind it seems that you treat a qGrenade in exactly the same way you would treat a pseudo-random grenade. You don't care whether the probability was quantum or just 'unknown'. My reasoning may be very slightly different but in this regard we are in agreement.
↑ comment by bogdanb · 2010-07-22T03:24:29.772Z · LW(p) · GW(p)
Yep. Grenades in MY past are always duds, otherwise I wouldn’t be here to talk about them. It doesn’t matter if they were fake, or malfunctioned, or had a pseudorandom or quantum probability to blow up. Past throwers of grenades are only relevant in the sense that they are evidence of future grenade-throwing.
Grenades in my future are those that I’m concerned about. With regards to people intending to throw grenades at me, the only distinction is how sure I am I’ll live; even something deterministic but hard to compute (for me) I consider a risk, and I’d be angry with the presumptive thrower.
(A fine point: I would be less angry to find out that someone who threw a grenade at me knew it wouldn’t blow, even if I didn’t know it at the time. I’d still be pissed, though.)
↑ comment by wedrifid · 2010-07-31T03:03:13.858Z · LW(p) · GW(p)
Yep. Grenades in MY past are always duds, otherwise I wouldn’t be here to talk about them. It doesn’t matter if they were fake, or malfunctioned, or had a pseudorandom or quantum probability to blow up. Past throwers of grenades are only relevant in the sense that they are evidence of future grenade-throwing.
I would still kill them, even if I knew they were now completely reformed or impotent. If convenient, I'd beat them to death with a single box just to ram the point home.
↑ comment by timtyler · 2010-06-30T14:59:56.361Z · LW(p) · GW(p)
Re: "In MWI one maximizes the fraction of future selves experiencing good outcomes."
Note that the MWI is physics - not morality, though.
↑ comment by wedrifid · 2010-06-30T16:07:37.824Z · LW(p) · GW(p)
Is there something wrong with the parent beyond perhaps being slightly awkward in expression?
Tim seems to be pointing out that MWI itself doesn't say anything about maximising nor anything about what you should try to maximise. This corrects a misleading claim in the quote.
(Upvoted back to 0 with explanation).
comment by wedrifid · 2010-06-26T07:51:49.332Z · LW(p) · GW(p)
The punch line is that the physics we run on gives us a very strong reason to care about the welfare of copies of ourselves, which is (according to my survey) a counter-intuitive result.
No, it doesn't. Maximising your Everett blob is a different thing than maximising copies. They aren't the same thing. It is perfectly consistent to care about having yourself existing in as much Everett stuff as possible but be completely indifferent to how many clones you have in any given branch.
Reading down to Wei's comment and using that break down, premise 3) just seems totally bizarre to me:
- We ought to value both kinds of copies the same way.
Huh? What? Why? The only reason to intuitively consider that those must have the same value is to have intuitions that really don't get quantum mechanics.
I happen to like the idea of having clones. I would pay to have clones across the cosmic horizon. But this is in a whole different league of preference to not having me obliterated from half the quantum tree. So if I was Sly my response would be to lock you in a room and throw your 50% death grenade in with you. Then the Sly from the relevant branch would throw in a frag grenade to finish off the job. You just 50% murdered him.
It occurs to me that my intuitions for such situations are essentially updateless. Wedrifid-Sly cares about the state of the multiverse, not that of the subset of the Everett tree that happens to flow through him at that precise moment in time (timeless too). It is actually extremely difficult for me to imagine thinking in such a way that quantum-murder isn't just mostly murdering me, even after the event.
↑ comment by Roko · 2010-06-27T20:05:24.862Z · LW(p) · GW(p)
If your intuitions are updateless, you should definitely care about the welfare of copies. Updatelessly, you are a copy of yourself.
↑ comment by wedrifid · 2010-06-27T21:01:14.324Z · LW(p) · GW(p)
I have preferences across the state of the universe and all of my copies share them. Yet I, we, need not value having two copies of us in the universe. It so happens that I do have a mild preference for having such copies and a stronger preference for none of them being tortured but this preference is orthogonal to timeless intuitions.
↑ comment by Roko · 2010-06-27T21:16:52.055Z · LW(p) · GW(p)
It so happens that I do have a mild preference for having such copies and a stronger preference for none of them being tortured but this preference is orthogonal to timeless intuitions.
Wanting your identical copies to not be tortured seems to be quintessential timeless decision theory...
↑ comment by wedrifid · 2010-06-27T21:49:40.085Z · LW(p) · GW(p)
Wanting your identical copies to not be tortured seems to be quintessential timeless decision theory...
If that is the case then I reject timeless decision theory and await a better one. (It isn't.)
What I want for identical copies is a mere matter of preference. There are many situations, for example, where I would care not at all whether a simulation of me is being tortured and that simulation doesn't care either. I don't even consider that to be a particularly insane preference.
comment by Vladimir_Nesov · 2010-06-25T19:50:56.245Z · LW(p) · GW(p)
MWI copies and possible world copies are not problematic, because both situations naturally admit an interpretation in terms of "the future me" concept ("splitting subjective experience"), and so the moral intuitions anchored to this concept work fine.
It is with within-world copies, or, even worse, near-copies, that the intuition breaks down: then there are multiple "future me", but no "the future me". Analysis of such situations can't rely on those moral intuitions, but a nihilistic position would also be incorrect: we are just not equipped to evaluate them.
↑ comment by Vladimir_M · 2010-06-26T20:46:51.345Z · LW(p) · GW(p)
Vladimir_Nesov:
Analysis of such situations can't rely on those moral intuitions, but a nihilistic position would also be incorrect: we are just not equipped to evaluate them.
Do you believe that a nihilistic position would be incorrect on the grounds of internal logical inconsistency, or on the grounds that it would involve an incorrect factual statement about some objectively existing property of the universe?
↑ comment by Vladimir_Nesov · 2010-06-26T20:58:22.149Z · LW(p) · GW(p)
There are no grounds to privilege the nihilistic hypothesis. It's like asserting that the speed of light is exactly 5,000,000,000 m/s before making the first experiment. I'm ignorant, and I argue that you must be ignorant as well.
(Of course, this situation doesn't mean that we don't have some state of knowledge about this fact, but this state of knowledge would have to involve a fair bit of uncertainty. Decision-making is possible without much epistemic confidence or understanding of what's going on.)
↑ comment by Vladimir_M · 2010-06-26T23:07:24.706Z · LW(p) · GW(p)
Could you give an example of a possible future insight that would invalidate the nihilistic position? I honestly don't understand on what grounds you might be judging "correctness" here.
comment by Simulation_Brain · 2010-06-25T18:07:15.089Z · LW(p) · GW(p)
I think the point is that not valuing non-interacting copies of oneself might be inconsistent. I suspect it's true; that consistency requires valuing parallel copies of ourselves just as we value future variants of ourselves and so preserve our lives. Our future selves also can't "interact" with our current self.
↑ comment by Morendil · 2010-06-25T19:07:13.719Z · LW(p) · GW(p)
The poll in the previous post had to do with a hypothetical guarantee to create "extra" (non-interacting) copies.
In the situation presented here there is nothing justifying the use of the word "extra", and it seems analogous to quantum-lottery situations that have been discussed previously. I clearly have a reason to want the world to be such that (assuming MWI) as many of my future selves as possible experience a future that I would want to experience.
As I have argued previously, the term "copy" is misleading anyway, on top of which the word "extra" was reinforcing the connotations linked to copy-as-backup, where in MWI nothing of the sort is happening.
So, I'm still perplexed. Possibly a clack on my part, mind you.
↑ comment by Roko · 2010-06-25T19:42:05.813Z · LW(p) · GW(p)
Does it make an important difference to you that the MWI copies were "going to be there anyway", hence losing them is not foregoing a gain but losing something you already had? Is this an example of loss aversion?
↑ comment by Morendil · 2010-06-25T20:11:11.972Z · LW(p) · GW(p)
I value having a future that accords with my preferences. I am in no way indifferent to your tossing a grenade my way, with a subjective 1/2 probability of dying. (Or non-subjectively, "forcing half of the future into a state where all my plans, ambitions and expectations come to a grievous end.")
I am, however, indifferent to your taking an action (creating an "extra" non-interacting copy) which has no influence on what future I will experience.
↑ comment by Roko · 2010-06-25T20:48:43.046Z · LW(p) · GW(p)
Or non-subjectively, "forcing half of the future into a state where all my plans, ambitions and expectations come to a grievous end."
Well, it's not really half the future. It's half of the future of this branch, which is itself only an astronomically tiny fraction of the present. The vast majority of the future already contains no Morendil.
↑ comment by Roko · 2010-06-25T20:47:17.155Z · LW(p) · GW(p)
I am, however, indifferent to your taking an action (creating an "extra" non-interacting copy) which has no influence on what future I will experience.
So you'd be OK with me putting you to sleep, scanning your brain, creating 1000 copies, then waking them all up and killing all but the original you? (from a selfish point of view, that is -- imagine that all the copies are woken then killed instantly and painlessly)
↑ comment by Morendil · 2010-06-25T21:09:04.348Z · LW(p) · GW(p)
I wouldn't be happy to experience waking up and realizing that I was a copy about to be snuffed (or even wondering whether I was). So I would prefer not to inflict that on any future selves.
↑ comment by Roko · 2010-06-25T19:44:04.812Z · LW(p) · GW(p)
as many of my future selves as possible experience a future that I would want to experience.
Do you mean "as large a fraction of" or "as many as possible in total"? Because if you kill (selectively) most of your future selves, you could end up with the overwhelming majority of those remaining living very well...
↑ comment by Morendil · 2010-06-25T20:03:21.375Z · LW(p) · GW(p)
As cousin_it has argued, "selectively killing most of my future selves" is something that I subjectively experience as "having a sizeable probability of dying". That doesn't appeal.
comment by AlephNeil · 2010-06-25T19:15:40.401Z · LW(p) · GW(p)
I think the situation with regard to MWI 'copies' is different from that with regard to multiple copies existing in the same Everett branch, or in a classical universe.
I can't fully explain why, but I think that when, through decoherence, the universe splits into two or more 'worlds', each of which are assigned probability weights that (as nearly as makes no difference) behave like classical probabilities, it's rational to act as though the Copenhagen interpretation was (as nearly as makes no difference) true. Strictly speaking the Copenhagen interpretation is incoherent, but still you should act as though, in a "quantum suicide" scenario, there is probability p that you will 'cease to exist', rather than a probability 1 that a copy of you will go on existing but 'tagged' with the information 'norm square amplitude of this copy is [1-p times that of the pre-existing person]'.
My rationale is roughly as follows: Suppose the universe were governed by laws of physics that were 'indeterministic' in the sense of describing the evolution over time of a classical probability distribution. Then if we want to, we can still pretend that there is a multiverse, that physics is deterministic, that all possible worlds exist with a certain probability density etc. And clearly the difference between a 'single universe' and a 'multiverse' view is 'metaphysical' in the sense that no experiment can tell them apart. Here I want to be a verificationist and say that there is no 'fact of the matter' as to which interpretation is true. Therefore, the question of how to act rationally shouldn't depend on this.
When we move from classical indeterminism to quantum indeterminism the probabilities get replaced by complex-valued 'amplitudes' but for reasons I struggle to articulate, I think the fact that the universe is 'nearly' classical means that our prescriptions for rational action must be 'nearly' the same as they would have been in a classical universe.
As for sly, he should act as though he has luckily survived an event that had a 1/2 chance of killing him ('once and for all', 'irrevocably' etc). Presumably he would charge you with something, though I'm not sure whether 'attempted murder' is what you'd be guilty of.
↑ comment by Roko · 2010-06-26T00:25:43.134Z · LW(p) · GW(p)
And clearly the difference between a 'single universe' and a 'multiverse' view is 'metaphysical' in the sense that no experiment can tell them apart.
No, sorry, there are in-principle experiments that tell the two apart. For example, with good enough apparatus, you could do the double-slit experiment with people. (Currently they are doing it with bacteria I believe). You would be able to interfere with yourself in other branches in a wave-like way.
↑ comment by Blueberry · 2010-06-26T06:28:04.438Z · LW(p) · GW(p)
For example, with good enough apparatus, you could do the double-slit experiment with people. (Currently they are doing it with bacteria I believe). You would be able to interfere with yourself in other branches in a wave-like way.
Wait, what? How would you do it with people or bacteria? Do you have a link to the bacteria experiment? I thought that the different worlds couldn't interact; I'm very confused by this comment.
↑ comment by Roko · 2010-06-26T10:25:23.032Z · LW(p) · GW(p)
MWI doesn't strictly say that the worlds don't interact. It just says that they are mostly approximately independent if they have decohered (itself a continuous process).
Experiments in controlled conditions show that small sub-branches can, in fact, interfere with each other like waves, hence the double-slit experiment with electrons. But the size of the object merely contributes to the difficulty of the experiment, it seems. So far, large molecules have been used, but in the future it is planned to use viruses and bacteria. See Toward Quantum Superposition of Living Organisms. Also note that a quantum computer is essentially using computation across the multiverse (though no-one has built a particularly large one of those).
↑ comment by AlephNeil · 2010-06-27T13:55:12.435Z · LW(p) · GW(p)
Is there any reason to use viruses and bacteria as opposed to, say, bacterium-sized salt crystals? Is it to refute people who say: "But if it's alive then perhaps it has magical quantum properties. Because life is magical."
↑ comment by Roko · 2010-06-27T19:45:37.926Z · LW(p) · GW(p)
Well, isn't the Copenhagen interpretation the statement that life has magic effects on physics, by causing the wavefunction to collapse?
↑ comment by Blueberry · 2010-06-27T20:26:28.971Z · LW(p) · GW(p)
Human consciousness specifically, not just life. Would different interpretations give different predictions for an experiment with a human interfering with himself in other branches?
↑ comment by Vladimir_M · 2010-06-27T20:43:43.188Z · LW(p) · GW(p)
Are you asking about what this would look like to observers on the side, or about the subjective experience of the person undergoing interference?
Regarding the first question, I don't think it would be different in principle from any other hypothetical experiment with macroscopic quantum interference; how much different interpretations manage to account for those is a complex question, but I don't think proponents of either of them would accept the mere fact of experimentally observed macroscopic interference as falsifying their favored interpretation. (Though arguably, collapse-based interpretations run into ever greater difficulties as the largest scales of detected quantum phenomena increase.)
As for the second one, I think answering that question would require more knowledge about the exact nature of human consciousness than we presently have. Scott Aaronson presents some interesting discussion along these lines in one of his lectures:
http://www.scottaaronson.com/democritus/lec11.html
↑ comment by Kingreaper · 2010-06-27T20:04:48.555Z · LW(p) · GW(p)
Nope. The Copenhagen interpretation says that the wavefunction collapses when it interacts with anything.
But the thing it interacts with is still part of a larger wavefunction until that collapses etc.
Schroedinger's Cat is about this fact, the fact that a cat (an observer) can be superposed.
I have yet to work out what the difference is between the Copenhagen Interpretation and the Many Worlds Interpretation. The physical reality they describe is identical.
↑ comment by Blueberry · 2010-06-26T20:10:42.832Z · LW(p) · GW(p)
Thanks for the link. I'm still not clear on exactly what it would mean to be able to interfere with myself in other branches in a wave-like way. Also, I thought a non-reversible process forced decoherence: is this not correct, or is there a way to force living organisms to be reversible?
↑ comment by Sniffnoy · 2010-06-26T07:32:07.067Z · LW(p) · GW(p)
If different worlds didn't interact, you wouldn't even get the ordinary double-slit result. With ordinary probability, you can split off branches without a problem, but quantum amplitudes can be negative or complex, they can cancel out, etc. You just don't typically see this macroscopically due to decoherence.
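A minimal illustration of the difference, using two paths of equal weight and opposite phase, as at a dark fringe of the double-slit experiment:

```python
import numpy as np

# Two paths to the same point on the screen, equal magnitude, opposite phase.
amp_slit_A = 1 / np.sqrt(2)
amp_slit_B = -1 / np.sqrt(2)

classical = abs(amp_slit_A) ** 2 + abs(amp_slit_B) ** 2  # probabilities just add
quantum = abs(amp_slit_A + amp_slit_B) ** 2              # amplitudes can cancel

print(f"classical mixing of the two branches:     {classical:.3f}")  # ~1.000
print(f"quantum interference of the two branches: {quantum:.3f}")    # 0.000
# Negative and complex amplitudes can cancel where classical probabilities
# cannot; decoherence is what hides this cancellation at macroscopic scales.
```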
↑ comment by Baughn · 2010-06-26T11:03:28.227Z · LW(p) · GW(p)
Interesting. You seem to be saying that if the laws of physics appeared to be nondeterministic, there would be no way to be sure they don't actually create copies instead.
I think this is correct, with the caveat that the laws of physics for this universe do not appear to work that way. However, the map is not the territory - not knowing how the laws work does not mean there is no fact of how they work. Even so, assuming that the rational course of action differs between these situations, the best you can do is assign a prior to both (a Solomonoff prior will do, I suppose), and average your actions in some way between them.
It's possible you're also correct that, in that case, there would be no fact of the matter about it - I'm not quite sure what you meant. If there are multiple universes (with different laws of physics), and clones in other universes count, then indexical uncertainty about which universe you're in translates directly into, in effect, existing in both. I think.
I'm still confused about this, I'll admit.
comment by timtyler · 2010-06-25T20:32:17.574Z · LW(p) · GW(p)
Attempted murder, I reckon!
We can't have people going around throwing grenades at each other - even if they "only" have a 50-50 chance of exploding. This dangerous idiot is clearly in need of treatment and / or serving as a lesson to others.
comment by UnholySmoke · 2010-06-29T15:21:29.250Z · LW(p) · GW(p)
(B) if he thanks you for the free $100, does he ask for another one of those nice free hundred dollar note dispensers? (This is the "quantum suicide" option
I laugh in the face of anyone who attests to this and doesn't commit armed robbery on a regular basis. If 'at least one of my branches will survive' is your argument, why not go skydiving without a parachute? You'll survive - by definition!
So many of these comments betray people still unable to think of subjective experience as anything other than a ghostly presence sitting outside the quantum world. 'Well, if this happens in the world, what would I experience?' If you shoot yourself in the head, you will experience having your brains blown out. The fact that contemplating one's own annihilation is very difficult is not an excuse for muddling up physics.
↑ comment by Aurini · 2010-07-01T06:22:28.302Z · LW(p) · GW(p)
I think the response is that MWI isn't "infinite plot-threads of fate" - or narrativium, as it's put in the Discworld novels - quantum decay doesn't give a whit of care whether its effects are noteworthy for us or not.
On the two 'far ends' of the spectrum, I'd expect to see significant plot-decay - a particle causes Hitler to get cancer, and he dies halfway through WWII - but I have trouble imagining a situation where a quantum event makes the difference between my motorcycle sticking to the curve and the tire skidding out, leaving my fragile body to skid across the pavement at 120 km/h, leaving a greasy trail that skids out the semi riding behind me.
Quantum-grenades are one of the few exceptions, where small-world events affect us here in the middle-world. But I wouldn't count on MWI to produce a perfect bank robbery.
↑ comment by UnholySmoke · 2010-07-01T13:13:17.316Z · LW(p) · GW(p)
Very true, and well put. A combination of quantum events could probably produce anything you wanted, at whatever vanishingly tiny probability. Bear in mind that it's the configuration that evolves every which way, not 'this particle can go here, or here, or here....' But we're into Greg Egan territory here.
Suffice it to say that anyone who says they subscribe to quantum suicide but isn't either dead or richer than god is talking out of their bottom.
↑ comment by wedrifid · 2010-07-01T13:24:42.004Z · LW(p) · GW(p)
Suffice it to say that anyone who says they subscribe to quantum suicide but isn't either dead or richer than god is talking out of their bottom.
Or, to be fair, just lacking in motivation or creativity. It may not have occurred to them to isolate a source of accessible quantum randomness and then shoot themselves on every day in which they do not win a lottery.
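For anyone tempted to take that scheme literally, the measure arithmetic looks like this (the odds are made up; the point is the gap between the conditional and the unconditional view):

```python
p_win = 1e-7  # made-up probability of winning a given lottery draw

# The scheme: buy a ticket and arrange to die in every branch where it loses.
surviving_measure = p_win       # the only surviving branches are winning ones
dead_measure = 1 - p_win

print(f"measure of branches where 'you' wake up rich: {surviving_measure:.0e}")
print(f"measure of branches that simply end:          {dead_measure:.7f}")
# Conditional on still having experiences you are certainly rich -- the
# quantum-suicide pitch -- but 99.99999% of the branch-measure (equivalently,
# of ordinary probability) gets nothing at all.
```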
↑ comment by Aurini · 2010-07-01T18:45:05.020Z · LW(p) · GW(p)
Some caveats are in order:
I (mostly, probably?) find Quantum Suicide to be a perfectly reasonable option when it works as intended - however, there are two kinds of possible future branches that concern me.
1) Incomplete destruction
While MWI death doesn't bother me - and in fact, even total death bothers me less than it does most people - certain situations terrify me. Crippling injury, personality-distorting pain (torture), and brain damage are deal breakers. Even if I bumped my assessment of MWI up to 1.00, I still wouldn't take a deal which involved one Everett branch being tortured for 50 years (at least, not without some sort of incredible payout).
2) While I don't worry about my own Everett copies so much, I do value the ethics of alternate lines - which is to say that I wouldn't want my family left behind in 50% of the universes without me around due to some sort of MWI experiment (if I die by accident, that's, ethically speaking, not the same).
So for the classic case of Quantum Russian Roulette - where I and the other party have no loved ones to leave behind - I'm fully game. And in other situations where all of my loved ones are utterly destroyed alongside me, I'm also game. But finding the mechanics to create said situations in our day-to-day, semi-technologically evolved world is pretty much impossible.
The only exception is (maybe - I'll freely admit to not having sufficient background here) the LHC. That argument goes that the reason we've had so many difficulties is that the universes where it worked destroyed all humanity. But my question there is: how long can we keep trying it without completely destroying ourselves? My guess is that we'd have a finite number of goes at it before no quantum event can stop us - and all Everett branches are dead.
But like I said, I don't really have the background. I just hope the experts fully acknowledge the dangers (but they probably don't).
↑ comment by wedrifid · 2010-07-01T06:39:35.341Z · LW(p) · GW(p)
Quantum-grenades are one of the few exceptions, where small-world events affect us here in the middle-world. But I wouldn't count on MWI to produce a perfect bank robbery.
Agree, because most of the quantum events that determine the success of the robbery are ones that happened quite a while back. We're probably already in a branch where the success or failure is very nearly determined at this point in time.
comment by mwengler · 2010-06-26T18:30:57.224Z · LW(p) · GW(p)
All these comments and no one questions MWI? Is there another thread for that?
What does MWI explain that can't be explained without it? Name even a gedanken experiment that gives a different result in an MWI universe than it does in a universe with either 1) randomly determined but truly unique quantum collapses, or 2) quantum collapses determined uniquely by a force or method we do not yet know about.
Having said all that, Sly has plenty of reason to dislike me, even to want me arrested for attempted murder, even without MWI. In a world in which the grenade-thrower is arrested and stopped from his odd habit of gambling with other people's lives, Sly and the other people he values have a higher probability of surviving in his subjective timeline. I know that people who do something once tend to do it again in the future with much higher probability than people who haven't done it. This inference has been bred into me because it was true enough to aid the survival of my ancestors, especially in our socially competitive groups, so it makes good sense to go with this idea.
Replies from: pjeby↑ comment by pjeby · 2010-06-26T18:34:07.796Z · LW(p) · GW(p)
All these comments and no one questions MWI? Is there another thread for that?
See the sequences, specifically the quantum physics sequence. I found it by clicking the "Sequences" link in the upper right of the page, then reading the table of contents, and clicking through.
comment by Hook · 2010-06-25T17:59:31.004Z · LW(p) · GW(p)
I would place 0 value on creating identical non-interacting copies of myself. However, I would place a negative value on creating copies of my loved ones who were suffering because I got blown up by a grenade. If Sly is using the same reasoning, I think he should charge me with attempted murder.
Replies from: Roko
comment by lukstafi · 2010-06-27T23:19:55.549Z · LW(p) · GW(p)
(A) Sly charges me with attempted murder. Sly does not think that his counterpart in the other Everett branch is not interacting with him -- actually, Sly, after reading LW a lot, thinks that he has a "causal", going-back-in-time responsibility for whether his counterpart lives in the other branch, and therefore for how well the other branch fares. (If we transform this grenade into a counterfactual-mugging grenade that has an option to not be "probably lethal" conditional on Sly's refusing the $100, Sly has a causal effect on whether he lives in the other branch. With a lethal-only quantum grenade, Sly is left with the responsibility to consistently ban uses of the grenade, so he should refuse the $100 and press charges anyway -- producing such grenades is "sick".)
comment by red75 · 2010-06-25T17:59:43.228Z · LW(p) · GW(p)
D) He will thank you, and charge you with attempted murder to minimize the risk of being mangled by a non-100%-kill grenade.
BTW, I have some thoughts on the anthropic trilemma; maybe they'll be of use.
Replies from: Roko↑ comment by Roko · 2010-06-25T18:05:33.844Z · LW(p) · GW(p)
Least convenient possible world: the grenade is extremely reliable, more so than the car he drives or the water he drinks.
Replies from: red75↑ comment by red75 · 2010-06-25T18:19:27.836Z · LW(p) · GW(p)
And MWI is true. And Sly knows that it is true. If Sly has no relatives, then B), of course. If he has, then he should charge you with murder, as the court probably also knows that MWI is true.
EDIT: And if Sly thinks that his future experiences are based on the Solomonoff prior, then he will charge you with attempted murder, as he has a relatively big chance of ending up mangled in a not-so-least-convenient possible world.
Replies from: Roko
comment by Ghatanathoah · 2013-08-15T06:50:19.029Z · LW(p) · GW(p)
I reject "3" (We ought to value both kinds of copies the same way), but don't think that it is arbitrary at all. Rather it is based off of an important aspect of our moral values called "Separability." Separability is, in my view, an extremely important moral intuition, but it is one that is not frequently discussed or thought about because we encounter situations where it applies very infrequently. Many Less Wrongers, however, have expressed the intuition of separability when stating that they don't think that non-causally connected parallel universe should affect their behavior.
Separability basically says that how connected someone is to certain events matters morally in certain ways. There is some debate as to whether this principle is a basic moral intuition, or whether it can be derived from other intuitions, I am firmly in favor of the former.
That probably sounds rather abstract, so let me give a concrete example: Imagine that the government is considering taking an action that will destroy a unique ecosystem. There are millions of environmentalists who oppose this action, protest against it, and lobby to stop it. Should their preference for the ecosystem to not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I'd say yes, and I think a lot of people would agree with me.
Now imagine that in a distant galaxy there exist approximately 90 quadrillion alien brain emulators living in a Matrioshka Brain. All these aliens are fervent environmentalists and have a strong preference that no unique ecosystem ever be destroyed. Assume we will never meet these aliens. Should their preference for the ecosystem to not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I'd say no, even if Omega told me they existed.
What makes these two situations different? I would say that in the first situation the environmentalists possess strong causal connections to the ecosystem in question, while the aliens do not. For this reason the environmentalists' preferences were morally relevant, the aliens' not so.
Separability is really essential for utilitarianism to avoid paralysis. After all, if everyone's desires count equally when evaluating the morality of situations, regardless of how connected they are to them, then there is no way of knowing if you are doing right or not. Somewhere in the universe there is doubtless a vast number of people who would prefer you not do whatever it is you are doing.
So how does this apply to the question of creating copies in my own universe, versus desiring a copy of me in another universe not be destroyed by a quantum grenade?
Well, in the issue of whether or not to create identical copies in my own universe, I would not spend a cent trying to do that. I believe in everything Eliezer wrote in In Praise of Boredom and place great value on having new, unique experiences. Creating lockstep copies of me would be counterproductive, to say the least.
However, at first this approach seems to run into trouble in MWI. If there are so many parallel universes it stands to reason that I'll be duplicating an experience some other me has already had no matter what I do. Fortunately, the Principle of Separability allows me to rescue my values. Since all those other worlds lack any causal connection to me, they are not relevant in determining whether I am living up to the Value of Boredom.
This allows us to explain why I am upset when the grenade is thrown at me. The copy that was killed had no causal connection to me. Nothing I or anyone else did resulted in his creation, and I cannot really interact with him. So when I assess the badness of his death, I do not include my desire to have unique, nonduplicated experiences in my assessment. All that matters is that he was killed.
So rejecting (3) does not make our values arbitrary, not in the slightest. There is an extremely important moral principle behind doing so, a moral principle that is essential to our system of ethics. Namely, the Principle of Separability.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-08-15T14:46:59.007Z · LW(p) · GW(p)
You say that "separability is really essential for utilitarianism to avoid paralysis" but also that it "is not frequently discussed or thought about because we encounter situations where it applies very infrequently."
I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?
To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using "separability" to discount the unknowable utility calculations of unknown Matrioshka Brains is a negligible portion of the work that needs to be done here.
For my own part, I would spend considerably more than a cent to create an identical copy of myself whom I can interact with, because the experience of interacting with an identical but non-colocalized version of myself would be novel and interesting, and also because I suspect that we would both get net value out of the alliance.
Identical copies I can't interact with directly are less valuable, but I'd still spend a fair amount to create one, because I would expect them to differentially create things in the world I value, just as I do myself.
Identical copies I can't interact with even indirectly -- nothing they do or don't do will affect my life -- I care about much much less, more due to selfishness than any kind of abstract principle of separability. What's in it for me?
Replies from: Ghatanathoah↑ comment by Ghatanathoah · 2013-08-15T19:30:19.985Z · LW(p) · GW(p)
I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?
What I should have said is "When discussing or thinking about morality we consider situations where it applies very infrequently." When people think about morality, and posit moral dilemmas, they typically only consider situations where everyone involved is capable of interacting. When people consider the Trolley Problem they only consider the six people on the tracks and the one person with the switch.
I suppose that technically separability applies to every decision we make. For every action we take there is a possibility that someone, somewhere does not approve of our taking it and would stop us if they could. This is especially true if the universe is as vast as we now think it is. So we need separability in order to discount the desires of those extremely causally distant people.
To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using "separability" to discount the unknowable utility calculations of unknown Matrioshka Brains is a negligible portion of the work that needs to be done here.
You are certainly right that separability isn't the only thing that utilitarianism needs to avoid paralysis, and that there are other issues that it needs to resolve before it even gets to the stage where separability is needed. I'm merely saying that, at that particular stage, separability is essential. It certainly isn't the only possible way utilitarianism could be paralyzed, or otherwise run into problems.
For my own part, I would spend considerably more than a cent to create an identical copy of myself whom I can interact with
When I refer to identical copies I mean a copy that starts out identical to me, and remains identical throughout its entire lifespan, like the copies that exist in parallel universes, or the ones in this matrix-scenario Wei Dai describes. You appear to also be using "identical" to refer to copies that start out identical, but diverge later and have different experiences.
Like you, I would probably pay to create copies I could interact with, but I'm not sure how enthusiastic about it I would be. This is because I find experiences to be much more valuable if I can remember them afterwards and compare them to other experiences. If both mes get net value out of the experience like you expect then this isn't a relevant concern. But I certainly wouldn't consider having 3650 copies of me existing for one day and then being deleted to be equivalent to living an extra 10 years the way Robin Hanson appears to.
comment by orthonormal · 2010-06-28T03:09:16.301Z · LW(p) · GW(p)
If you think carefully about Descartes' "I think therefore I am" type skepticism, and approach your stream of sensory observations from such a skeptical point of view, you should note that if you really were just one branch-line in a person-tree, it would feel exactly the same as if you were a unique person-line through time, because looking backwards, a tree looks like a line, and your memory can only look backwards.
I like this way of explaining how MWI all adds up to normality, and I'll use it in future discussions.
comment by Kingreaper · 2010-06-27T20:12:52.793Z · LW(p) · GW(p)
What if I simply don't trust the many worlds interpretation that much?
comment by Sly · 2010-06-26T11:03:53.929Z · LW(p) · GW(p)
I think that I can be consistent with charging you with attempted murder.
In your scenario, if the grenade is not in my favor, this particular instance of me will be dead. The fact that a bunch of copies collect $100 is of little value to the copy that my subjective experience occupied.
For instance, if Omega came up to me right now and said that he had just slain some copies of me in other lines, then it is unclear how that event has affected me. Likewise, if I die and Omega tells my other copies, it seems like it is only this subjective branch that suffers.
So because the grenade can affect the current branch that I experience, I can object.
I think, anyway; I may have misunderstood everything.
Also: I was very surprised to be the subject of a post. It has been interesting. =)
EDIT: Wouldn't the grenade thought experiment be more accurate if the grenade only killed or gave out $100 to copies when thrown at me? The fact that it interacts with me and not just copies of me is where I get a disconnect.
Replies from: Roko↑ comment by Roko · 2010-06-27T20:24:56.357Z · LW(p) · GW(p)
the copy that my subjective experience occupied.
Magical extraphysical subjective experience fact-of-the-matter anyone?
Replies from: Sly↑ comment by Sly · 2010-06-27T22:14:01.191Z · LW(p) · GW(p)
How is it magical? Or extra-physical?
All it requires is that the copy that survives is not the me that got annihilated in the grenade. I do not think this requires magic.
Like I said though, I may be misunderstanding something. In that case I would appreciate it if it were explained better.
Replies from: Roko
comment by red75 · 2010-06-25T20:08:42.044Z · LW(p) · GW(p)
BTW, I don't understand why it is taken for granted that Sly's thread of subjective experience in the branch where the grenade exploded will merge with the thread of subjective experience in the other branch.
The question of assigning probabilities to subjective anticipations is still open. Thus, it's possible that after the grenade explodes, Sly will experience being an infinite chain of dying Boltzmann brains rather than a happy owner of $100.
EDIT: Using the Solomonoff prior, I think he will wake up in a hospital, as that is one of the simplest continuations of his prior experiences.
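[Editorial aside: a toy sketch of the weighting idea behind "using the Solomonoff prior" here — not from the commenter. Candidate continuations of an experience stream get weight 2^(-description length), so simpler continuations such as "wake up in a hospital" dominate. The real Solomonoff prior is uncomputable; the candidate descriptions and their bit-lengths below are invented for illustration.]

```python
# Toy Solomonoff-style weighting over candidate continuations of experience.
# Each candidate gets weight 2**(-description_length); weights are normalized
# into anticipation "probabilities". Lengths (in bits) are invented placeholders.

candidate_continuations = {
    "wake up in a hospital": 20,
    "collect $100 in the surviving branch": 24,
    "infinite chain of dying Boltzmann brains": 45,
}

weights = {name: 2.0 ** -bits for name, bits in candidate_continuations.items()}
total = sum(weights.values())
anticipations = {name: w / total for name, w in weights.items()}

for name, p in sorted(anticipations.items(), key=lambda item: -item[1]):
    print(f"{name}: {p:.8f}")
# The shortest description ("wake up in a hospital") gets most of the weight.
```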
Replies from: AlephNeil↑ comment by AlephNeil · 2010-06-25T20:43:06.403Z · LW(p) · GW(p)
Having read Dennett, I reach the conclusion that the third horn of the 'anthropic trilemma' is obviously the correct one. There is no such thing as a thread of subjective experience. There is no 'fact of the matter' as to whether 'you' will die in a teleporter or 'find yourself' at your destination.
After the grenade is thrown, we can say that with probability 1/2 there is a dead Sly and with probability 1/2 there is a relieved Sly whose immediate memories consist of discussing the 'deal' with Roko, agreeing to go ahead with it, and then waiting anxiously. There is no reason whatsoever to believe in a further, epiphenomenal fact about what happened to Sly's subjective experience.
That there 'seems' to be a thread of subjective experience shouldn't give us any more pause than the fact that the Earth 'seems' to be motionless - in both cases we can explain why it must seem that way without assuming it to be true.
Replies from: Roko, red75, red75↑ comment by red75 · 2010-06-25T22:34:03.117Z · LW(p) · GW(p)
Having read Dennett, I reach the conclusion that the third horn of the 'anthropic trilemma' is obviously the correct one.
Oh! And why do you care about grenades then?
The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2010-06-25T23:18:23.069Z · LW(p) · GW(p)
To loosely paraphrase Charles Babbage: 'I am not able rightly to apprehend the kind of confusion of ideas that could provoke such an argument.' I don't believe Eliezer was thinking very clearly when he wrote that post.
Replies from: wedrifid, red75↑ comment by wedrifid · 2010-06-26T18:46:58.007Z · LW(p) · GW(p)
I don't believe Eliezer was thinking very clearly when he wrote that post.
I agree. That is the one post of Eliezer's that I can think of that seems to be just confused (when he probably didn't need to be).
Replies from: red75↑ comment by red75 · 2010-06-28T07:01:21.967Z · LW(p) · GW(p)
Can you share your way of "disconfusion"?
I've made a copy of myself. After that I will find myself either as an original, or as a copy, but both these subjective states will correspond to one physical state. It seems there is something beyond physical state.
I have an idea I've already partially described in other posts, that allows me to remain reductionist. But I wonder how others deal with that.
Edit: Physical state refers to physical state of world, not to the physical state of particular copy.
Replies from: Nick_Tarleton, wedrifid↑ comment by Nick_Tarleton · 2010-06-28T19:07:30.280Z · LW(p) · GW(p)
I've made a copy of myself. After that I will find myself either as an original, or as a copy, but both these subjective states will correspond to one physical state. It seems there is something beyond physical state.
Er, aren't those the same subjective state as well?
Replies from: red75↑ comment by red75 · 2010-06-29T02:04:43.271Z · LW(p) · GW(p)
Original-I will see that I am still standing/lying at the scanner end of the copying apparatus. Copy-I will see that I have "teleported" to the construction chamber of the copying apparatus. One's current experience is part of one's subjective state, isn't it?
If the scanner and construction chambers are identical, then my subjective state "splits" when Original-I and Copy-I leave their chambers (note that the "in-chamber" states are one subjective state).
Edit: I've added minor clarification.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-07-01T16:49:44.701Z · LW(p) · GW(p)
These different subjective states correspond to different physical states: different patterns of photons impinge on your retinas, causing different neural activity in your visual cortex, and so forth.
Replies from: red75↑ comment by red75 · 2010-07-01T18:29:56.235Z · LW(p) · GW(p)
I left room for misinterpretation in my root post. I meant the world's physical state, not the state of a particular copy. World state: the original body exists and the copy body exists. Subjective state: either I am the original, or I am the copy.
Replies from: Kingreaper, AlephNeil↑ comment by Kingreaper · 2010-07-01T22:49:53.788Z · LW(p) · GW(p)
As far as I can see, both those subjective states exist simultaneously; it's not an "either/or".
You-before-copying wakes up as both original-you and copy-you after the copying. From there, the subjective states diverge. Analogously, in the grenade example, before-grenade-you continues as both dead-you and you-with-$100*
*(who doesn't experience anything, unless there's an afterlife)
**(as well as all other possible quantum-yous. But that's another issue entirely)
I suspect I'm rather missing the point. What point are you in fact trying to make if I may ask?
Replies from: red75↑ comment by red75 · 2010-07-02T06:23:44.180Z · LW(p) · GW(p)
Yes, both subjective states exist. That is not the point.
You wake up after copying; at what point should you experience being both copy and original?
Before you open your eyes, you can't know whether you are the copy or the original, but that is not an experience of being both, because the physical states of both brains at this point are identical to the state of a brain that wasn't copied at all. After you open your eyes, you will find yourself being either the copy or the original, but again not both, for obvious reasons.
*(who doesn't experience anything, unless there's an afterlife)
That is not true if subjective experience isn't ontologically fundamental, as the existence of, e.g., a particular kind of Boltzmann brain seems in that case sufficient for a continuation of subjective experience (a very unpleasant experience in this case).
Replies from: Kingreaper↑ comment by Kingreaper · 2010-07-02T14:22:54.578Z · LW(p) · GW(p)
Before you open your eyes, you can't know whether you are the copy or the original, but that is not an experience of being both, because the physical states of both brains at this point are identical to the state of a brain that wasn't copied at all. After you open your eyes, you will find yourself being either the copy or the original, but again not both, for obvious reasons.
Again, what point are you actually trying to make here?
That is not true if subjective experience isn't ontologically fundamental, as the existence of, e.g., a particular kind of Boltzmann brain seems in that case sufficient for a continuation of subjective experience (a very unpleasant experience in this case).
A continuation of subjective experience after death is an afterlife.
Replies from: red75↑ comment by red75 · 2010-07-02T16:21:40.758Z · LW(p) · GW(p)
Again, what point are you actually trying to make here?
Did you read "Anthropic trilemma"? I am trying to
Convince someone that Eliezer Yudkowsky wasn't that confused, or at least that he had a reason to be.
Check if I am wrong and there is satisfactory answer to trilemma, or that the question "What I will experience next?" has no meaning/doesn't matter.
A continuation of subjective experience after death is an afterlife.
As far as I know, the word "afterlife" implies dualism, which is not the case here.
↑ comment by AlephNeil · 2010-07-01T18:46:01.936Z · LW(p) · GW(p)
This has nothing to do with 'splitting' per se. If your point was valid then you could make it equally well by saying:
A and B are different people in the same universe. World state: "A exists and B exists". Subjective state: "Either A is thinking or B is thinking." Same physical state, different subjective states. Therefore, "it seems there is something beyond the physical state".
But this 'either or' business is nonsense. A and B are both thinking. You and copy are both thinking. What's the big deal?
(Apparently, you think the universe is something like in the film Aliens where in addition to whatever's actually happening, there is a "bank of screens" somewhere showing everyone's points of view. And then after you split, "your screen" must either show the original's point of view or else it must show the copy's.)
Replies from: red75↑ comment by red75 · 2010-07-01T22:16:14.014Z · LW(p) · GW(p)
If you're saying that you don't have subjective experiences, I'll bite the bullet and not trust your view on the matter. However, I doubt that you want me to think so. What is subjective experience, or "one's screen" as you put it, to you?
Replies from: Blueberry↑ comment by Blueberry · 2010-07-01T22:46:31.417Z · LW(p) · GW(p)
Of course we have subjective experience: it's just that both copies of you have it, and there is no special flame of consciousness that goes to one but not the other. After the copy, both copies remember being the original. They're both "you".
Replies from: red75↑ comment by red75 · 2010-07-02T03:48:54.875Z · LW(p) · GW(p)
They are both me to an external observer. But there's no subjective experience of being both copies. Imagine yourself being copied... Copying... Done. Now you find yourself standing either in the scanner chamber, knowing that there's another "you" in the construction chamber, or in the construction chamber, knowing that there's another "you" in the scanner chamber. If you think that you'll experience something unimaginable, you need to clarify what causes your/the copy's brain to create that unimaginable experience.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T04:32:27.509Z · LW(p) · GW(p)
But there's no subjective experience of being both copies. Imagine yourself being copied... Copying... Done. Now you find yourself standing either in the scanner chamber, knowing that there's another "you" in the construction chamber, or in the construction chamber, knowing that there's another "you" in the scanner chamber.
The problem is with the word "you," which usually refers to one specific mind. In this case, when I am copied, the end result will be two identical minds, each of which will have identical memories and continuity with the past. The self in the scanner chamber will find itself with the subjective experience of looking at the copy in the construction chamber, and the self in the construction chamber will find itself with the subjective experience of looking at the copy in the scanner chamber. They are both equally "you", but from that point on they will have separate experiences.
Replies from: red75↑ comment by red75 · 2010-07-02T05:27:05.855Z · LW(p) · GW(p)
"You" in this context is ambiguous only for external observer. Both minds will know who "you" refers to. Right?
Your rephrasing seems to dissociate situation from subjective experience. I can't see how it helps however. It will be not "the self" standing in scanner/construction chamber, it will be you (not "you") standing there. Once again. They are both equally "you" for external observer, but you will not be external observer.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T07:57:58.953Z · LW(p) · GW(p)
When you say "you", mind 1 will think of mind 1, and mind 2 will think of mind 2. One entity and one subjective experience has split into two separate ones. They are both you.
If you prefer: your subjective experience stops when the copy is created. Two new entities appear and start having subjective experiences based from your past. There is no more you. It makes more sense to me to think of them both as me, but it's the same thing. The past you no longer exists, so you can't ask what happens to him.
I think your problem is with the word "you". The question is "what happens next after the split?" Well, what happens to who? Mind 1 starts having one subjective experience, and Mind 2 starts having a slightly different one. It's tricky, because there's a discontinuity at the point of the split, and all our assumptions about personal identity are based on not having such a discontinuity.
Replies from: red75↑ comment by red75 · 2010-07-02T08:53:57.491Z · LW(p) · GW(p)
Two new entities appear and start having subjective experiences based from your past. There is no more you. It makes more sense to me to think of them both as me, but it's the same thing. The past you no longer exists, so you can't ask what happens to him.
And even without copying, the "past you" no longer exists in that sense. If we agree that subjective experiences are unambiguously determined by physical processes in the brain, then it must be clear that creating a copy doesn't create any special conditions in subjective experience, as the processes in both brains evolve in the usual manner, only with different inputs.
It's tricky, because there's a discontinuity at the point of the split, and all our assumptions about personal identity are based on not having such a discontinuity.
There is, er, uncertainty in our expectations, yes. But I can't see the discontinuity you are speaking of. Which entity is discontinuous at the point of copying?
Edit: Just for information, I am aware that the input to the copy-brain is discontinuous in a sense (compare it with sleep).
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T16:42:01.126Z · LW(p) · GW(p)
And even without copying, the "past you" no longer exists in that sense.
Yes, exactly. But normally we have a notion of personal identity as a straight line through time, a path with no forks.
If we agree that subjective experiences are unambiguously determined by physical processes in the brain, then it must be clear that creating a copy doesn't create any special conditions in subjective experience, as the processes in both brains evolve in the usual manner, only with different inputs.
True. From the point of view of each copy, nothing seems any different, and experience goes on uninterrupted. The creation of a copy creates a special condition in personal identity, however, because there's no longer just one "you", so asking which one "you" will be after the copy no longer makes sense.
But I can't see the discontinuity you are speaking of. Which entity is discontinuous at the point of copying?
"You." Your personal identity. Instead of being like a line through time with no forks, it splits into two.
Replies from: red75↑ comment by red75 · 2010-07-02T17:44:55.586Z · LW(p) · GW(p)
The creation of a copy creates a special condition in personal identity, however, because there's no longer just one "you", so asking which one "you" will be after the copy no longer makes sense.
I don't understand. Do you mean that personal identity is something that exists independently of the subjective experience of being oneself? An external observer again?
Sorry, but I see that you (unintentionally?) try to "jump out" of yourself when anticipating post-copy experience.
How about another copying setup? You stand in the scanner chamber, but instead of making an immediate copy, the copying apparatus stores the scanned data for a year and then makes a copy of you. When does personal identity split in this setup? What do you expect to experience after scanning? After the copy is made? And if a copy is never made?
The point I am trying to make is that there is no such thing as personal identity detached from your subjective experience of being you. So it can't be discontinuous.
Edit: I have a preliminary resolution of the anthropic trilemma, but I was hesitant to put it out, as I was trying to check whether I am wrong about the need to resolve it. I propose that subjective experience has a one-to-one correspondence not to the physical state of a system capable of having subjective experience, but to a set of systems in possible worlds that are invariant under information-preserving substrate change. This definition is far from perfect, of course, but at least it partially resolves the anthropic trilemma.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T18:01:25.510Z · LW(p) · GW(p)
How about another copying setup? You stand in the scanner chamber, but instead of making an immediate copy, the copying apparatus stores the scanned data for a year and then makes a copy of you. When does personal identity split in this setup? What do you expect to experience after scanning? After the copy is made? And if a copy is never made?
After scanning, nothing unusual. You're still standing in the chamber. You ask what you will experience after the copy is made. After the copy is made, two people have the subjective experience of being "you". One of them will experience a forward jump in time of a year. They are both equally you.
The point I am trying to make is that there is no such thing as personal identity detached from your subjective experience of being you. So it can't be discontinuous.
The discontinuity is with the word "you". Each copy has a continuous subjective experience. But once there's two copies, the word "you" suddenly becomes ambiguous.
Replies from: red75↑ comment by red75 · 2010-07-02T18:15:01.733Z · LW(p) · GW(p)
the word "you" suddenly becomes ambiguous.
...for an external observer. Original-you still knows that "you" refers to original-you; copy-you knows that "you" refers to copy-you.
Sorry, I am unable to put it any other way. Maybe we have incompatible priors.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T18:24:41.511Z · LW(p) · GW(p)
Original-you still knows that "you" refers to original-you, copy-you knows that "you" refers to copy-you.
I agree with this statement. So what's the problem?
Replies from: red75↑ comment by red75 · 2010-07-02T18:32:25.132Z · LW(p) · GW(p)
The problem is to compute the probability of becoming the you that refers to original-you, and the probability of becoming the you that refers to copy-you. No one has resolved this problem yet.
Edit: So you can't be sure that you will not become the you that refers to me, for example.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-02T18:43:57.213Z · LW(p) · GW(p)
Because it's a meaningless question to ask what happens to the original's subjective experience after the copies are made. There is no flame or spirit that probabilistically shifts from you to one copy or the other. It's not that you have a 50% chance of being copy A and a 50% chance of being copy B. It's that both copies are you, and each of them will view themselves as your continuation. Your subjective experience will split and continue into both.
The interesting question is how to value what happens to the copies. I can't quite bring myself to allow one copy to be tortured for -1000 utiles and have the other rewarded with +1001, even though, if we value them evenly, this is a gain of 0.5 utiles. I'm not sure if this is a cognitive bias or not.
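[Editorial aside: a small numerical illustration, not from the commenter, of why this deal can look acceptable or unacceptable depending on how the two copies' outcomes are aggregated; the loss-aversion coefficient below is an arbitrary illustrative value.]

```python
# Two aggregation rules over the outcomes of the two copies.
# Numbers and the loss-aversion coefficient are illustrative placeholders.

outcomes = [-1000, 1001]  # utiles: one copy tortured, the other rewarded

# Rule 1: value the copies evenly -> simple average.
even_average = sum(outcomes) / len(outcomes)  # (-1000 + 1001) / 2 = 0.5

# Rule 2: weigh losses more heavily than gains (a crude form of loss aversion).
def loss_averse(u, lam=2.0):
    return u if u >= 0 else lam * u

loss_averse_score = sum(loss_averse(u) for u in outcomes) / len(outcomes)  # -499.5

print(even_average)       # 0.5    -> the deal looks (barely) worth taking
print(loss_averse_score)  # -499.5 -> the deal looks clearly worth refusing
```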
Replies from: red75↑ comment by red75 · 2010-07-02T19:08:56.541Z · LW(p) · GW(p)
The interesting question is how to value what happens to the copies.
On what basis do you restrict your evaluation to those two particular copies? The universe is huge. And it doesn't matter when a copy exists (as we agreed earlier). There can be any number of Boltzmann brains that continue your current subjective experience.
Edit: You can't evaluate anything if you can't anticipate what happens next or if you anticipate that everything that can happen will happen to you.
Replies from: Blueberry↑ comment by Blueberry · 2010-07-04T06:31:58.401Z · LW(p) · GW(p)
Not sure I understand exactly. We don't know if the universe is Huge, or just how Huge it is. If Tegmark's hypothesis is correct, the only universes that exist may be ones that correspond to certain mathematical structures, and these structures may be ones with specific physical regularities that make Boltzmann brains extremely unlikely.
We don't seem to notice any Boltzmann-brain-like activity, which may be evidence that they are very rare.
Replies from: red75↑ comment by red75 · 2010-07-04T06:51:15.177Z · LW(p) · GW(p)
Here is a relevant post. And if Tegmark's hypothesis is true, then you don't need Boltzmann brains. There are infinitely many continuations of subjective experience, as there are infinitely many universes which have the same physical laws as our universe but distinct initial conditions. For example, there is a universe with initial conditions identical to the current state of our universe, except for the color of your room's wallpaper.
Edit: spelling.
↑ comment by wedrifid · 2010-06-28T08:09:59.271Z · LW(p) · GW(p)
Can you share your way of "disconfusion"?
Right now, definitely not. That would involve re-immersing myself in the issue, reviewing several posts' threads of comments, and recreating all those thoughts that my brain oh so thoughtfully chose not to maintain in memory in the absence of sufficient repetition. If I had made my post back then, and taken the extra steps of putting words to an explanation that would be comprehensible to others, then I would most likely not have lost those thoughts. (Note to self....)
↑ comment by red75 · 2010-06-25T20:50:59.541Z · LW(p) · GW(p)
A blank map doesn't imply a blank territory. Did you read what I wrote on this? There is a consistent (as far as I can see) way to deal with subjective experiences.
Replies from: AlephNeil↑ comment by AlephNeil · 2010-06-25T21:13:38.791Z · LW(p) · GW(p)
Objections:
1) Isn't it possible (at least non-contradictory) that there could be a universe whose minimum description length is infinite? And even a brain within that universe with infinite minimum description length? If this universe contains intelligent beings having witty conversations and doing science and stuff, do we really want to just flat-out deny that these beings are conscious (and/or deny that such a universe is metaphysically possible)?
2) There isn't always a fact of the matter as to whether a being is conscious and, if so, what it's conscious of. For the former, note that the question of when a foetus starts to have experiences is obviously indeterminate. For the latter, consider Dennett's distinction between 'Orwellian' and 'Stalinesque' revisions. Nor is there always a fact of the matter as to how many minds are present (consider a split-brain patient). Don't these considerations undermine the idea of using the Solomonoff prior? (If we don't know (a) whether there are experiences here at all, (b) what experiences there are, or even (c) whether this smaller object or this larger object counts as a 'single mind', then how on earth can we talk meaningfully about how the 'thread' of subjectivity is likely to continue?)
↑ comment by red75 · 2010-06-25T22:13:51.140Z · LW(p) · GW(p)
Well, they would need infinite processing power to effectively use their brains' content. And infinite processing power is a very strange thing. This situation lies outside the area of applicability of my proposal.
My proposal neither discusses nor uses the emergence/existence of subjective experience. If something has subjective experiences and it has the experience of having subjective experience, then this something can use the Solomonoff prior to infer anticipations of future experiences.