Preference For (Many) Future Worlds

post by wedrifid · 2011-07-15T23:31:46.156Z · LW · GW · Legacy · 37 comments

Contents

  Utility Applies to Universes, Not (just) Mental States
  Quantum Russian Roulette
  Friends and Family: The Externalities Cop-Out
  Failure Amplification
  Quantum Sour Grapes
  Not Wrong (But Perhaps Crazy)

Followup to: Quantum Russian Roulette; The Domain of Your Utility Function

The only way to win is cheat
And lay it down before I'm beat
and to another give my seat
for that's the only painless feat.

Suicide is painless
It brings on many changes
and I can take or leave it if I please.

-- M.A.S.H.

Let us pretend, for the moment, that we are rational Expected Utility Maximisers. We make our decisions with the intention of achieving outcomes that we judge to have high utility: outcomes that satisfy our preferences. Since developments in physics have led us to abandon the notion of a simple, single future world, our decision making process must now grapple with the fact that some of our decisions will result in more than one future outcome. Not simply the possibility of more than one future outcome, but multiple worlds, in each of which different events occur. In extreme examples we can consider the possibility of staking our very lives on the toss of a quantum die, figuring that we are going to live in one world anyway!

How do preferences apply when making decisions with Many Worlds? The description I’m giving here will be obvious to the extent of being trivial to some, confusing to others and, I expect, considered outright wrong by still others. But it is the post that I want to be able to link to whenever the question “Do you believe in quantum immortality?” comes up. Because it is a wrong question!

Utility Applies to Universes, Not (just) Mental States

I am taking for granted here that when we consider utility, and the maximisation thereof, we are not limiting ourselves to maximising how satisfied we expect to feel with an outcome in the future. Peter_de_Blanc covered this topic in the post The Domain of Your Utility Function. Consider the following illustration of evaluating expected utility:

[Figure: evaluating expected utility, with the arrow from Extrapolated Future A to Utility of A highlighted.]

Note in particular the highlighted arrow from Extrapolated Future A to Utility of A, not from Extrapolated Mind A to Utility of A. Contrary to surprisingly frequent assumptions, the sane construction of expected utility does not care only about how we expect our own mind to be. Our utility function applies to our whole extrapolation of the future. We can care about our friends, family and strangers. We can have preferences about the passengers in a rocket that will have flown outside our future light cone. We can have preferences about the state of the universe even in futures where we die.

Quantum Russian Roulette

Given a source of quantum randomness we can consider a game people may play in the hope of exploiting the idea of quantum immortality as a way to Get Rich Quick! Quantum Russian Roulette is an obvious example. 16 people pool all their money. They then use a device that reads 4 quantum bits to pick one of the players to live, and kills the other 15 painlessly. Winner takes all! Each of the players will wake up in a world where they are alive, well and rich. One hopes they were at least wise enough not to play the game against their loved ones!
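
For the concrete-minded, here is a minimal sketch of the selection mechanic (an ordinary pseudorandom bit source stands in for the quantum device, and the player names and stake amount are illustrative assumptions, not part of the game as described above):

```python
import random

def quantum_roulette(players, stake):
    """Toy model of the 16-player game: 4 random bits select the single
    survivor, who collects everyone's stake. A classical PRNG stands in for
    the quantum bit source, so this run simulates only one branch; under Many
    Worlds every one of the 16 outcomes occurs in some branch."""
    assert len(players) == 16
    bits = [random.getrandbits(1) for _ in range(4)]            # the 4 "quantum" bits
    winner_index = sum(bit << i for i, bit in enumerate(bits))  # an index in 0..15
    pot = stake * len(players)
    return players[winner_index], pot

players = [f"player_{i}" for i in range(16)]
winner, pot = quantum_roulette(players, stake=37_500)
print(f"{winner} wakes up alive, well and richer by ${pot:,}")
```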

Now, what does the decision of whether to play QRoulette look like?

[Figure: Two-way Quantum Roulette.]

The idea is that after going to sleep with $300k you will always and only wake up with $600k. Thus it is said by some that, if you accept the Many Worlds implications of Quantum Mechanics, you should consider it rational to volunteer for Quantum Roulettes whenever they become available. Kind of like future-oriented anthropic planning or something.

Friends and Family: The Externalities Cop-Out

When someone dies, the damage done extends well beyond the deceased. There are all sorts of externalities, most of them negative. The economy is damaged and many people grieve. As such, a common rejoinder to a quantum suicide advocate is “The reason that I wouldn’t commit quantum suicide is that all the people who care about me would be distraught”. That is actually a good reason. In fact it should be more than sufficient, all by itself, to prevent just about everyone from getting quantum suicidal. But it misses the point.

Consider the Least Convenient Possible World: the Quantum Doomsday Lottery. It’s a solo game in which losing doesn’t just kill you, it destroys the whole world along with you, so there is nobody left behind to grieve.

Those who have “other people will be sad” as their true rejection of the option of playing quantum roulette can be expected to jump at the opportunity to get rich quick without making loved ones grieve. Most others will instead look a little closer and find a better reason not to kill themselves and destroy the world.

Failure Amplification

In the scenarios above only two options are considered: “win” and “lose/oblivion/armageddon”. Of course the real world is not so simple. All sorts of other events could occur that aren’t accounted for in the model. For example, the hardware could break, causing your execution to be botched. Instead of waking up only when you win, you also wake up when you lose but the machine only manages to make you “mostly dead” before it breaks. You survive in huge amounts of pain, crippled and with 40 fewer IQ points.

Now, you may have extreme confidence in your engineer. You expect the machine to work as specified 99.9999% of the time. In the other 0.0001% of cases, minor fluctuations in the power source or a hit by a cosmic ray somewhere triggers a vulnerability and a failed execution occurs. Humans accept that level of risk on a daily basis with such activities as driving a car or breathing (in New York). (In this case we would call it a “Micronotmort”.) Yet you are quantum suicidal and have decided that all Everett branches in which you die don’t count.

So, if you engage in a 1:2,000,000 Quantum Lottery you can consider (approximately) the following outcomes: (1 live : 1,999,997 die : 2 crippled). Having decided that the 1,999,997 dead branches don’t count, you are left with roughly a two in three chance of waking up crippled and only a one in three chance of waking up rich. Mind you, given the amount of money that would be staked in such a lottery, it would probably be a pretty good deal financially!
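
For those who like the arithmetic spelled out, here is a back-of-the-envelope version (treating the branches as equally weighted and using the exact failure rate above are the simplifying assumptions of this toy model):

```python
# Toy arithmetic for the 1-in-2,000,000 quantum lottery, played on a machine
# that botches the execution (cripples rather than kills) about 1 time in 1,000,000.
branches = 2_000_000        # treat the branches as equally weighted
p_botch = 1e-6              # the 0.0001% failure rate from above

win = 1                     # the single jackpot branch
losing = branches - win
crippled = losing * p_botch     # ~2 branches where the machine fails to kill
dead = losing - crippled        # ~1,999,997 branches declared "not to count"

surviving = win + crippled      # the only branches the quantum suicide reasoning keeps
print(f"dead branches ignored:    {dead:,.0f}")
print(f"P(crippled | you wake up) ~ {crippled / surviving:.2f}")   # ~0.67
print(f"P(rich     | you wake up) ~ {win / surviving:.2f}")        # ~0.33
```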

What does this mean?

Don’t try to use Quantum Suicide to brute force cryptanalysis problems. It just isn’t going to work, even if you think the theoretical expected outcome is worthwhile! You aren’t that good at building stuff.

While this is an interesting topic in itself (I believe there are posts about it floating around here someplace), it is also somewhat beside the main point. Is quantum suicide a good idea even when you iron out all the technical difficulties?

Quantum Sour Grapes

People have been gambling for millennia. Most of the people who have lost bets have done so without killing themselves. Much can be learned from this. For example, that killing yourself is worse than not killing yourself. This intuition is one that should follow over to ‘quantum’ gambles rather straightforwardly. Consider this alternative:


[Figure: Don't fucking kill yourself.]

Here we see just as many futures in which Blue ends up with the jackpot. And this time Blue manages not to kill himself in branches where he loses. Blue is sad for a while until he makes some more money and gets over it. He isn’t dead. This is obviously just plain better. The only reasons for Blue to kill himself when he loses would be contrived examples, such as those involving torture that can only be avoided by a payment that cannot be arranged any other way.

You get just as much quantum ‘winningness’ if you don’t kill yourself. For this reason I consider games like Quantum Roulette to be poorly named, particularly when “Quantum Immortality” is also mentioned. I much prefer the label “Quantum Sour Grapes”.
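
One way to see the “just as much winningness” point is to tally the two-way game with a utility function over whole futures rather than over surviving minds only. The branch weights and utility numbers below are made-up illustrative values, not anything derived from the post:

```python
# Two policies for Blue in the two-way $300k-a-head game. Each branch is
# (weight, description, utility-to-Blue-of-that-whole-future); the utilities
# are invented purely for illustration.
sour_grapes = [                 # play, and be killed in the losing branch
    (0.5, "Blue wins $600k",              +10),
    (0.5, "Blue loses and is killed",    -100),
]
keep_living = [                 # play, lose gracefully, keep living
    (0.5, "Blue wins $600k",              +10),
    (0.5, "Blue loses, is sad, recovers",  -5),
]

def expected_utility(branches):
    return sum(weight * utility for weight, _, utility in branches)

# Both policies contain exactly the same winning branch with the same weight;
# they differ only in what the losing branch looks like.
print(expected_utility(sour_grapes))   # -45.0
print(expected_utility(keep_living))   #   2.5
```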

Lesson: Don’t make decisions based on anticipated future anthropic reasoning. That’s just asking for trouble.

Not Wrong (But Perhaps Crazy)

I personally consider anyone who wants to play quantum roulette to be crazy. And anyone who wanted to up the stakes to a doomsday quantum variant would be a threat to be thwarted at all costs, if they were not too crazy to even take seriously. But I argue that this is a matter of preference and not (just) one of theoretical understanding. We care about possible future states of the universe, all of it. If we happen to prefer futures of our current Everett branch in which most sub-branches have us dead but one does not, then we can prefer that.

 

37 comments


comment by Wei Dai (Wei_Dai) · 2011-07-16T02:23:46.884Z · LW(p) · GW(p)

Suppose Omega appears to you and says that you're living in a deterministic simulation. (Apparent quantum randomness is coming from a pseudorandom number generator.) He gives you two options:

A. He'll create an FAI inside the simulation to serve you, essentially turning the simulation into your personal heaven.
B. He'll do A, and also make a billion identical copies of the unmodified simulation (without the FAI), and scatter them around the outside universe. (He tells you that in the unmodified simulation, no FAI will be invented by humanity, so you'll just live an ordinary human life and then die.)

Is it obvious that choosing A is "crazy"? (Call this thought experiment 1.)

Suppose someone chooses A, and then Omega says "Whoops, I forgot to tell you that the billion copies of the unmodified simulation have already been created, just a few minutes ago, and I'm actually appearing to you in all of them. If you choose A, I'll just instantly shut off the extra copies. Do you still want A?" (Call this thought experiment 2.)

Thought experiment 1 seems essentially the same as thought experiment 2, which seems similar to quantum suicide. (In other words, I agree with Carl that the situation is not actually as clear as you make it out to be.)

Replies from: endoself
comment by endoself · 2011-07-16T16:52:17.547Z · LW(p) · GW(p)

Is it obvious that choosing A is "crazy"?

It seems crazy to me, though I don't think that it is too unlikely that someone will be able to establish it with an argument I haven't heard.

Replies from: latanius
comment by latanius · 2011-07-16T20:04:10.389Z · LW(p) · GW(p)

Is it obvious that running in many instances is better than being active on just one "thread"?

It is not empty, sad but still-existing worlds that we are talking about here, but just creating lots of less happy copies of yourself (in fact, the two versions of the experiment only differ in a bit of added redundancy in the first few minutes of the computation; I don't think this counts as "existence").

Alternative experiment: you have the possibility to make a copy of yourself, who will have living conditions of the same quality as you will. Furthermore, you stay conscious during the process, so the copy won't be "you".... do you agree to the process?

And what if the copy would have -40 IQ and much worse living conditions compared to you?

I think the problem here is that our utility functions (conditional on such a thing actually existing) don't seem to be consistent when considering copying living entities... (as not creating a copy, and creating a copy then killing it later, are sometimes identical operations, but they seem very different to our intuitions). Asking the questions about yourself, or in a QM world, merely adds complexity.

Replies from: endoself
comment by endoself · 2011-07-17T19:23:02.636Z · LW(p) · GW(p)

I will assume we are only considering the well-being of the possible people, not the outward consequences of their existence, because that simplifies things and seems to be implicit here.

Alternative experiment: you have the possibility to make a copy of yourself, who will have living conditions of the same quality as you will. Furthermore, you stay conscious during the process, so the copy won't be "you".... do you agree to the process?

Yes.

And what if the copy would have -40 IQ and much worse living conditions compared to you?

As long as his life will be better than not living, yes. It seems strange to want a being not to exist if ey will enjoy eir life, ceteris paribus.

I think the problem here is that our utility functions (conditional on such a thing actually existing) don't seem to be consistent when considering copying living entities... (as not creating a copy, and creating a copy then killing it later, are sometimes identical operations, but they seem very different to our intuitions).

I have tentatively bitten the bullet and decided to consider them equivalent. Death is only bad because of the life that otherwise could have been lived.

Replies from: DSimon
comment by DSimon · 2011-07-25T11:11:47.065Z · LW(p) · GW(p)

Doesn't this lead to requiring support for increasing human population as much as possible, up to the point where resources-per-person makes life just barely more pleasant than not living, but no more so?

comment by CarlShulman · 2011-07-16T01:39:50.149Z · LW(p) · GW(p)

I think this sidesteps the underlying intuitions too quickly. We have cognitive mechanisms to predict "our next experience," memories of this algorithm working well, and preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model. We don't start with total utilitarian-like preferences over the fates of our future copies (i.e. most aren't eager to lower their standard of living by a lot so as to be copied many times (with the copies also having low standards of living)), and one needs to explain why to translate our naive intuitions into the additive framework (rather than something more like averaging).

Replies from: wedrifid, steven0461, Manfred
comment by wedrifid · 2011-07-16T15:50:41.675Z · LW(p) · GW(p)

I think this sidesteps the underlying intuitions too quickly.

I think you are right. I also seem not to have conveyed quite the same position as the one I intended. That is:

  • Quantum Suicide is not something that you "Believe In" but rather a preference: that in all the worlds in which you don't win, you are killed.
  • This is a valid, coherent and not intrinsically irrational goal.
  • You don't get more "winningness" by killing yourself.
  • The Everett branches in which you are killed are just as real as the ones where you are alive. They are not trimmed from reality.

These are the points I have found myself wishing I had a post to link to when I have been asked to explain a position. Going on to explain in detail why I have the preferences I have would open up another post or three worth of discussion of whether existence in more branches is equivalent to copies and a bunch of related philosophical questions like those that you allude to.

comment by steven0461 · 2011-07-16T03:41:17.220Z · LW(p) · GW(p)

We have ... preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model.

In what sense would I want to translate these preferences? Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I'm asking myself, as if for the first time, the question, "if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?", then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to "unique next experience" thinking. Is it making a mistake?

Replies from: Wei_Dai, CarlShulman, Peter_de_Blanc
comment by Wei Dai (Wei_Dai) · 2011-07-17T17:36:33.174Z · LW(p) · GW(p)

In what sense would I want to translate these preferences?

I think in the sense that the new world-model ought to add up to normality. The move you propose probably only works (i.e., is intuitively acceptable) for someone who already has a strong intuition that they ought to apply empathy and population ethics-type reasoning to all decisions, not just those that only affect other people. For others who don't share such an intuition, switching from "unique thread of experience" to empathy and population ethics-type reasoning would imply making radically different decisions, even for current real-world (i.e., not thought experiment) decisions, like whether to donate most of their money to charity (the former says "no" while the latter says "yes", since the difference in empathy-level between "someone like me" and "a random human" isn't that great).

comment by CarlShulman · 2011-07-16T04:17:37.824Z · LW(p) · GW(p)

That's one approach to take, with various attractive features, but one needs to be careful in that case in thinking about thought-experiments like those Wei Dai offers (which are implicitly calling on the thread model).

comment by Peter_de_Blanc · 2011-07-17T13:02:32.840Z · LW(p) · GW(p)

Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences

What makes you think a mind came up with them?

Replies from: steven0461
comment by steven0461 · 2011-07-17T16:33:21.485Z · LW(p) · GW(p)

I don't understand what point you're making; could you expand?

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2011-07-18T02:27:30.745Z · LW(p) · GW(p)

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

Replies from: steven0461
comment by steven0461 · 2011-07-18T02:50:17.608Z · LW(p) · GW(p)

What would have come up with them instead?

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2011-07-18T05:02:39.346Z · LW(p) · GW(p)

Evolution.

Replies from: steven0461
comment by steven0461 · 2011-07-19T18:24:36.838Z · LW(p) · GW(p)

In the sense that evolution came up with my mind, or in some more direct sense?

comment by Manfred · 2011-07-16T23:23:19.753Z · LW(p) · GW(p)

Well, assuming that you generally don't want to die, quantum suicide is irrational (not independent of irrelevant alternatives). The extent to which we should do irrational things because we want to is definitely something to think about, but I think it's also alright to just say "it's irrational and that's bad."

comment by MoreOn · 2011-08-19T02:50:52.986Z · LW(p) · GW(p)

People have been gambling for millennia. Most of the people who have lost bets have done so without killing themselves. Much can be learned from this. For example, that killing yourself is worse than not killing yourself. This intuition is one that should follow over to ‘quantum’ gambles rather straightforwardly.

You weren't one of those people.

That non-ancestor of yours who played Quantum Russian Roulette with fifteen others is dead from your perspective, his alleles slightly underrepresented in the gene pool. In fact, if there were an allele for "QRoulette? Sure!" that had caused these 16 people to gamble, then it had lost 15+ of its copies from your ancestral population. Post-gambling suicide really isn't a good evolutionary strategy.

But, from the perspective of your non-ancestor, he got to live in his own perfect little world with 16x his wealth (barring being crippled--but then, he'd only been up against 4 bits).

comment by PhilGoetz · 2011-07-16T22:58:23.806Z · LW(p) · GW(p)

Here we see just as many futures in which Blue ends up with the jackpot. And this time Blue manages not to kill himself in branches where he loses. Blue is sad for a while until he makes some more money and gets over it. He isn’t dead. This is obviously just plain better.

If you are an average utilitarian, then it is better to kill yourself if you lose in quantum roulette. It increases your average utility. This is the same reasoning that makes an average utilitarian prefer to kill off the poorer members of society.

Replies from: lessdazed, wedrifid, Desrtopa
comment by lessdazed · 2011-07-17T11:54:40.679Z · LW(p) · GW(p)

This is the same reasoning that makes an average utilitarian prefer to kill off the poorer members of society.

Can you name that person? I might like to talk with him or her.

comment by wedrifid · 2011-07-16T23:10:09.754Z · LW(p) · GW(p)

If you are an average utilitarian, then it is better to kill yourself if you lose in quantum roulette. It increases your average utility. This is the same reasoning that makes an average utilitarian prefer to kill off the poorer members of society.

Agree, with the following clarification:

Average (or total) utilitarian preferences are analogous to preferences for many future selves but we should remember that they are not the same thing. Choosing to take an average over yourself in future Everett branches is an entirely different decision from choosing to average the utility of a population in one branch. (I liked the language Carl used.)

Most average utilitarians will not want to commit quantum suicide*. However, "Average utilitarian-like preferences over many future selves" does seem like a good recipe for being quantum suicidal.

* Disclaimer: Being an average utilitarian in the first place is already insane. So I cannot really speak for what they prefer when things get complicated.

Replies from: PhilGoetz
comment by PhilGoetz · 2011-07-17T04:40:37.661Z · LW(p) · GW(p)

Average (or total) utilitarian preferences are analogous to preferences for many future selves but we should remember that they are not the same thing.

Why are they not the same thing? (Or, manifestations of the same thing?)

Replies from: wedrifid
comment by wedrifid · 2011-07-17T05:59:47.552Z · LW(p) · GW(p)

Why are they not the same thing? (Or, manifestations of the same thing?)

Because there is one guy who is averaging some metric based on every person in the population and there is a different guy who is averaging some metric about future outcomes of himself excluding those where he is dead.

The similarities seem to be along the lines of "do silly math then decide to kill losers".

comment by Desrtopa · 2011-07-16T23:19:53.270Z · LW(p) · GW(p)

If you are an average utilitarian, then it is better to kill yourself if you lose in quantum roulette. It increases your average utility.

What about the utility of the acquaintances affected by your suicide?

Replies from: PhilGoetz
comment by PhilGoetz · 2011-07-17T04:39:19.822Z · LW(p) · GW(p)

That is a complicating factor; but it has the same impact in either case (quantum roulette player vs. average utilitarian dictator).

comment by Khaled · 2011-07-17T07:15:10.832Z · LW(p) · GW(p)

When calculating the odds of winning/losing/machine defect, shouldn't you add the odds of the Many Worlds hypothesis being true? Perhaps it wouldn't affect the relative odds, but it might change the odds in relation to "moderately rich/poor and won't try Quantum Immortality".

comment by Jordan · 2011-07-16T02:35:25.404Z · LW(p) · GW(p)

If I were to build a death machine it would be based on high explosives. I would encase my head in a mound of C4 clay (or perhaps a less stable material). The machine could fail, most likely at the detonator, but it's difficult to imagine how it could maim me.

Replies from: wedrifid, Alexei
comment by wedrifid · 2011-07-16T17:27:39.936Z · LW(p) · GW(p)

but it's difficult to imagine

I believe that is the point they are trying to illustrate. If you are trying to QS your way to one in several million events then you had better start imagining pretty hard and consider every possible unexpected event of that order of improbability, including black swans. You can influence the relative probability of various failure modes but you must acknowledge that the failure modes become magnified alongside the win outcome.

This consideration is probably not a problem when playing a simple low-n roulette. It becomes insurmountable when you are trying to brute force 4096 bit encryption. You just have to hope the machine breaks gracefully instead of doing something unexpectedly bad.

Replies from: Jordan
comment by Jordan · 2011-07-16T18:59:40.714Z · LW(p) · GW(p)

you had better start imagining pretty hard and consider every possible unexpected event of that order of improbability, including black swans

With QS you must guard yourself against all local Everett branches. Those branches could conceivably contain black swans, like a few electrons tunneling out of a circuit preventing a CPU from performing correctly. Even that is a 1:1,000,000,000 or more event. But they will not contain something macroscopic.

If I look around and notice no one nearby, I might say "I am only 99% confident that there isn't anyone near." If I then sample all local branches (with a device that has a 1:1,000,000 fail rate), killing myself in those branches where no one appears, what is the probability that I will find myself in a branch with another person nearby? I would say about 1%. The presence or absence of another person should behave classically for the small numbers we are talking about. Quantum probabilities are different from my own Bayesian probabilities.

In short, while some failure modes will become more common, others will not.

Replies from: wedrifid
comment by wedrifid · 2011-07-16T20:04:02.263Z · LW(p) · GW(p)

In short, while some failure modes will become more common, others will not.

I agree (for the same reasons you specified). It becomes even more complicated when trying to account for a probability distribution over possible quantum configurations that could lead to your own subjective state. Because culling the futures of some possible current states makes the other possible 'now's relatively more relevant, there are additional failure modes that become even more likely to matter.

comment by Alexei · 2011-07-16T03:45:25.878Z · LW(p) · GW(p)

I agree. I think even with our modern technology we can create a suicide machine that will have a very very high chance of working.

comment by rasthedestroyer · 2011-07-16T12:21:01.377Z · LW(p) · GW(p)

The fundamental flaw in this game is the separation of 'me' from the 'many possible worlds' that I may occupy. Abstracting the self from the world is done as a matter of convenience, but there are just as many possible "me"s as there are possible future states in which 'I' exist. In Quantum Roulette, what if the new 'me' in the new state doesn't care about being rich, or what if inflation has devalued my wealth, or what if it's a possible world without the money-form?

Replies from: MoreOn, DSimon
comment by MoreOn · 2011-08-19T02:34:21.823Z · LW(p) · GW(p)

Reality wouldn't be mean to you on purpose.

Of course there would be worlds where something would have gone horribly wrong if you won the lottery. But there's no reason for you to expect that you'd wake up in one of those worlds because you won the lottery. The difference between your "horribly wrong" worlds (don't care about money/ inflation / no money) and wedrifid's (lost the lottery and became crippled) is that waking up in wedrifid's is caused by one's participation in the lottery.

comment by DSimon · 2011-07-25T11:33:03.021Z · LW(p) · GW(p)

I think there are a few problems with this reasoning:

  1. None of the events you listed make it any worse to have won the lottery than not; at most they just even it out. Assuming you support the basic premise of quantum suicide, it doesn't really matter very much to trim out cases where you didn't win but it turns out that winning wouldn't have been that great anyway.

  2. We can even include some unlikely and wacky but still possible outcomes where winning is worse (e.g. a sudden widespread revolution made up of super-naive anti-capitalists who go around putting all people above a certain tax bracket through a meat grinder), but don't forget that there are also silly-rare-but-possible outcomes that heavily decrease the utility of not winning the lottery (massive deflation, a world where money is far far more important than it is now, super-naive capitalists who go around with a meat grinder, etc.). You're still more likely to want to win the lottery than not.

  3. If you're still very concerned about these types of events, then you can pre-commit to suicide under any future where it turns out that winning the lottery sucks by more than a certain amount, or more aggressively, under any future where winning the lottery is less awesome than a certain threshold.

comment by dvasya · 2011-07-29T16:30:46.310Z · LW(p) · GW(p)

We can have preferences about the passengers in a rocket that will have flown outside our future light cone

Can we? I mean, are you sure that whatever you verbally mean by "preferences" here can actually be mathematically formulated as a preference? How do you compute anything based on these "preferences"?

The economy is damaged

Ironically, there's this proposal to use Quantum Suicide to improve the economy: http://www.higgo.com/quantum/modest.htm

You aren’t that good at building stuff.

This is a nice point.

comment by DanielLC · 2011-10-28T01:29:39.085Z · LW(p) · GW(p)

The problem is the illusion of personal identity. If Alice and Bob play quantum roulette, you're probably not going to be Alice in the universe where she wins or Bob in the universe where he wins. You'll probably end up being someone else altogether. One of them dying does increase the likelihood of being the other, but it goes from 1 in 6,000,000,000 to 1 in 5,999,999,999, not from 1 in 2 to 1 in 1. There's no reason why being Alice at time t has to have the same probability as being Alice at time t+1. They're two different people.