Alternate approaches to Pascal's Mugging

post by DataPacRat · 2012-10-13T15:03:18.480Z · LW · GW · Legacy · 24 comments

A lot's been said about Pascal's Mugging, counter-muggings, adding extra arrows to the hyper-power stack, and so forth; but if anyone's said anything about my own reaction, I've yet to read it; and, in case it might spur some useful discussion, I'll try explaining it.

Over the years, I've worked out a rough rule-of-thumb to figure out useful answers to most everyday sorts of ethical quandaries, one which seems to do at least as well as any other I've seen: I call it the "I'm a selfish bastard" rule, though my present formulation of it continues with the clauses, "but I'm a smart selfish bastard, interested in my long-term self-interest." This seems to give enough guidance to cover anything from "should I steal this or not?" to "is it worth respecting other people's rights in order to maximize the odds that my rights will be respected?" to "exactly whose rights should I respect, anyway?". From this latter question, I ended up with a 'Trader's Definition' for personhood: if some other entity can make a choice about whether or not to make an exchange with me of a banana for a backrub, or playtime for programming, or anything of the sort, then at least generally, it's in my own self-interest to treat them as if they were a person, whether or not they match any other criteria for personhood.

Which brings us to Pascal's Mugging itself: "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

To put it bluntly... why should I care? Even if the Mugger's claim is accurate, the entities he threatens to simulate and calls 'people' don't seem as if they will have any opportunity to interact with my portion of the Matrix; they will never have any opportunity to offer me any benefit or any harm. How would it benefit myself to treat such entities as if they not only had a right to life, but that I had an obligation to try to defend their right?

 

For an alternate approach: Most systems of ethics have been built with an unstated assumption about the number of beings someone can possibly interact with in their life - at the outside, someone who lived 120 years and met a new person every second would meet fewer than 4 billion individuals. It's only in the past few centuries that hard, physical experience has given us enough insight into some of the more basic foundations of economics for humanity even to develop enough competing theories of ethics for some of them to start being winnowed out; and we're still a long way from a broad consensus on an ethical theory that can deal with the existence of a mere 10 billion individuals. What are the odds that we possess enough information to have any inkling of the assumptions required to deal with 3^^^3 individuals existing? What knowledge would be required so that, when the answer is figured out, we would actually be able to tell that that was it?
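
(As a quick sanity check of that "fewer than 4 billion" figure, here is a one-off back-of-the-envelope calculation in Python; the calendar constants are the only assumptions:)

```python
# Meeting one new person per second for 120 years:
seconds_per_year = 365.25 * 24 * 60 * 60
people_met = 120 * seconds_per_year
print(f"{people_met:.2e}")  # ~3.79e9, comfortably under 4 billion
```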

 

For another alternate approach: Assuming that the Mugger is telling the truth... is what he threatens to do actually a bad thing? He doesn't say anything about the nature of the lives of the people he simulates; one approach he might take could be to simulate a large number of copies of the universe, each eventually petering out in heat-death; would the potential inhabitants of such a simulated universe really object to being created in the first place?

 

For yet another alternate approach: "You have outside-the-Matrix powers capable of simulating 3^^^^3 people? Whoah, that implies so much about the nature of reality that this particular bet is nearly meaningless. Which answer could I give that would induce you to tell me more about the true substrate of existence?"

 

 

Do any of these seem to be a worthwhile way of looking at PM? Do you have any out-of-left-field approaches of your own?

24 comments

Comments sorted by top scores.

comment by benelliott · 2012-10-13T15:27:57.654Z · LW(p) · GW(p)

Most of these are just a failure to consider the Least Convenient Possible World; each may be a valid argument in its specific case, but they fail to address the general problem.

comment by Manfred · 2012-10-13T15:21:14.369Z · LW(p) · GW(p)

The mugger can also just say "Give me $5 or I will cause you to lose 3^^^3 times more utility than that $5 is worth." This makes the issue a bit clearer, though I'm sure one can still fail to answer the challenge head-on.

Replies from: DataPacRat, buybuydandavis
comment by DataPacRat · 2012-10-13T15:26:28.977Z · LW(p) · GW(p)

This brings up the end of the second approach I mentioned; does it really make sense to talk about an individual experiencing that much (negative) utility? Even if it's not a 'more northerly than the north pole' definitional impossibility, might it be a 'climb higher than Mount Everest' physically practical impossibility?

Replies from: Benja
comment by Benya (Benja) · 2012-10-13T16:02:38.214Z · LW(p) · GW(p)

Oops, took so long to write my reply that this post went from having no comments to having a discussion of my main point :-) (But my reply still expands on these points, so I won't just delete it.)

This brings up the end of the second approach I mentioned; does it really make sense to talk about an individual experiencing that much (negative) utility?

Short answer: This objection is saying that your utility function should be bounded, which is one of the standard answers, though I think that many people suggesting a bounded utility function were just doing so as a way to fix the problem, rather than arguing from first principles that you really shouldn't care that much more about the mugger's threat than about losing five dollars.

comment by buybuydandavis · 2012-10-14T02:12:50.303Z · LW(p) · GW(p)

Thank you. That removes many extraneous issues from the problem.

comment by Kindly · 2012-10-13T15:27:51.706Z · LW(p) · GW(p)

Presumably upon reflection everyone here could already assign a probability (possibly zero if that's your thing) to me torturing a kajillion Matrix people or causing specifically you to suffer lots of disutility or whatever.

Does this probability really change significantly if I make a bunch of cheap and familiar statements about how I'm going to do this if you don't give me $5?

It seems like the "mugging" step of Pascal's Mugging isn't really that important. If you'd be vulnerable to it, then you can mug yourself just fine without my help.

Replies from: Vladimir_Nesov, Benja
comment by Vladimir_Nesov · 2012-10-13T15:37:07.063Z · LW(p) · GW(p)

The mugging part is helpful in that it frames the thought experiment as a decision problem, which allows abstracting it from most theoretical frameworks (such as ways of assigning a prior or flavors of utilitarianism) and avoids related disputes about definitions. In a decision problem, a common understanding of probability or utility doesn't have to be presupposed; instead, each solution is free to apply whatever tools and additional assumptions seem appropriate.

comment by Benya (Benja) · 2012-10-13T16:23:15.144Z · LW(p) · GW(p)

The problem, as explained in Eliezer's original post, isn't that the probability changes significantly by any ordinary standards of "significantly". But if the change is as large as 1/googolplex, then multiplied by a utility differential of 3^^^3, it completely dominates what you should do. And with a Solomonoff prior, it's hard to see how the change could be as small as 1/googolplex, seeing as the hypothesis surely takes less than a googol bytes to describe.
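
(To make that dominance argument concrete, here is a minimal sketch in Python, working in log2 space since the actual quantities are unrepresentable; the bit counts and the $5-as-5-utils conversion are stand-in assumptions, not anything from the original discussion:)

```python
import math

def pays_to_give(prior_bits, log2_utility, cost_utils=5.0):
    """Pay iff 2**-prior_bits * utility > cost, compared in log2 space.

    prior_bits:   complexity penalty k, i.e. P(threat is real) ~ 2**-k
    log2_utility: log2 of the utility differential the mugger claims
    """
    return log2_utility > prior_bits + math.log2(cost_utils)

# The mugger's hypothesis takes far less than a googol bytes to describe,
# while log2(3^^^3) is itself a tower far too tall to write down, so the
# claimed utility beats any complexity penalty we could ever state.
# Toy stand-ins to see the crossover:
print(pays_to_give(prior_bits=100, log2_utility=43))      # 3^^3 ~ 2^43: no
print(pays_to_give(prior_bits=100, log2_utility=1.2e13))  # 3^^4: yes
```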

Replies from: Kindly
comment by Kindly · 2012-10-13T16:38:33.956Z · LW(p) · GW(p)

A change of 1/googolplex only dominates what you should do if, in advance of my mugging, your probability distributions are already balanced to within a difference of 1/googolplex.

Replies from: Benja
comment by Benya (Benja) · 2012-10-13T16:59:01.266Z · LW(p) · GW(p)

Oh. I totally misunderstood the first time I read that comment, sorry!

If anyone else had trouble: Kindly is pointing out that a problem very similar to the Mugging exists even if you don't have an actual mugger, i.e., someone who's going around telling people to give them $5. For example, the existence of monotheistic religion is weak Bayesian evidence for an omnipotent god who will torture people for actual eternity if they steal from the cookie jar; does it follow that an FAI should destroy all cookie jars? (Well, it doesn't, because there's no reason to think that this particular hypothesis is what would dominate the FAI's actions, but something would.)

Which is actually a very good point, because the original statement of the problem triggered our anti-cheating adaptations (must avoid being exploitable by a clever other tribemember that way), but the problem doesn't seem to go away if you remove that aspect. It doesn't even seem to need a sentient matrix lord, really. -- Or perhaps some will feel that it does make the problem go away, since they are fine with ridiculous hypotheses dominating their actions as long as these hypotheses have large enough utility differences... in which case I think they should bite the bullet on the mugging, since the causal reason they don't is obviously(?) that evolution built them with anti-cheating adaptations, which doesn't seem a good reason in the large scheme of things.

Replies from: endoself
comment by endoself · 2012-10-14T20:27:23.786Z · LW(p) · GW(p)

Or perhaps some will feel that it does make the problem go away, since they are fine with ridiculous hypotheses dominating their actions as long as these hypotheses have large enough utility differences... in which case I think they should bite the bullet on the mugging

Even this doesn't make the problem go away as, given the Solomonoff prior, the expected utility under most sensible unbounded utility functions fails to converge. (A nonnegligible fraction of my LW comments direct people to http://arxiv.org/abs/0712.4318)
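
(A toy numerical illustration of that kind of non-convergence, with a made-up utility schedule standing in for what an unbounded utility function might assign to increasingly complex hypotheses; this is not de Blanc's actual construction:)

```python
# If the utility promised by the k-th hypothesis grows faster than the
# simplicity prior 2^-k shrinks, the partial expected-utility sums never
# settle down -- they just keep exploding.
def partial_expected_utility(n_terms):
    total = 0.0
    for k in range(1, n_terms + 1):
        prior = 2.0 ** -k          # simplicity prior, ~2^-k
        utility = 2.0 ** (2 ** k)  # toy stand-in: grows much faster than 2^k
        total += prior * utility
    return total

for n in range(1, 10):
    print(n, partial_expected_utility(n))  # blows up; the sum has no limit
```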

Replies from: Benja, drnickbone
comment by Benya (Benja) · 2012-10-15T08:52:57.497Z · LW(p) · GW(p)

That's an important problem, but to me it doesn't feel like the same problem as the mugging -- would you agree, or if not, could you explain how they're somehow the same problem in disguise?

Replies from: endoself
comment by endoself · 2012-10-16T05:20:23.627Z · LW(p) · GW(p)

I guess this depends on how willing you are to bite the bullet on the mugging. I'm rather uncertain, as I don't trust my intuition to deal with large numbers properly, but I also don't trust math that behaves the way de Blanc describes.

If you actually accept that small probabilities of huge utilities are important and you try to consider an actual decision, you run into the informal version of this right away; when the mugger asks you for $5 in exchange for 3^^^3 utilons, you consider the probability that you can persuade the mugger to give you even more utility, and the probability that there is another mugger just around the corner who will offer you 4^^^^4 utilons if you just offer them your last $5 rather than giving it to this mugger. This explosion of possibilities is basically the same thing described in the paper.

comment by drnickbone · 2012-10-14T21:50:23.360Z · LW(p) · GW(p)

Has this de Blanc proof been properly discussed in a main post yet? (For instance, has anyone managed to get across the gist of de Blanc's argument in a way suitable for non-mathematicians? I saw there is a paragraph in the Lesswrongwiki, but not a reference to main articles on this subject.)

Also, how does Eliezer feel about this topic since from the Sequences he clearly believes he has an unbounded utility function and it is not up for grabs?

Replies from: endoself
comment by endoself · 2012-10-14T22:48:50.805Z · LW(p) · GW(p)

Has this de Blanc proof been properly discussed in a main post yet?

Not that I can find. I did write a comment that is suitable for at least some non-mathematicians, which I could expand into a post and make clearer/more introductory. However, some people didn't take it very well, so I am very reluctant to do so. If you want to read my explanation, even with that caveat, you can click upward from the linked comment.

Also, how does Eliezer feel about this topic since from the Sequences he clearly believes he has an unbounded utility function and it is not up for grabs?

I'm not sure. In the Sequences, he thinks there are unsolved problems relating to unbounded utility functions and he states that he feels confused about such things, such as in The Lifespan Dilemma. I don't know how his thoughts have changed since then.

comment by Benya (Benja) · 2012-10-13T15:57:55.901Z · LW(p) · GW(p)

"Give me five dollars, and I will use my outside-the-Matrix powers to make your wildest dreams come true, including living for 3^^^3 years of eudaimonic existence and, yes, even telling you about the true substrate of existence. Hey, I'll top it off and let you out of the box, if only you decide to give me five of your simulated dollars."

For your kinds of arguments to work, it seems that there must be nothing that the mugger could possibly promise or threaten to do and that, if it came true, you would rate as making a difference of 3^^^3 utils (where declining the offer and continuing your normal life is 0 utils, and giving five dollars to a jokester is -5 utils). It seems like a minor variation on the arguments in Eliezer's original post to say that if your utility function does assign utilities differing by 3^^^3 to some scenarios, then it seems extremely unlikely that the probability of each of these coming true will balance out just so that the expected utility of paying the mugger is always smaller than zero, no matter what the mugger promises or threatens. If your utility function doesn't assign utilities so great or small to any outcome, then you have a bounded utility function, which is one of the standard answers to the Mugging.

My own current position is that perhaps I really do have a bounded utility function. If it were only the mugging, I would perhaps still hold out more hope for a satisfactory solution that doesn't involve bounded utility, but there's also the unpalatability of having to prefer a (googolplex-1)/googolplex chance of everyone being tortured for a thousand years and all life in the multiverse ending after that + a 1/googolplex chance of 4^^^^4 years of positive post-singularity humanity to the certainty of a mere 3^^^3 years of positive post-singularity humanity, given that 3^^^3 years is far more than enough to cycle through every possible configuration of a present-day human brain. Yes, having more space to expand into post-singularity is always better than less, but is it really that much better?

(ObNote: In order not to make this seem one-sided, I should also mention the standard counter to that, especially since it was a real eye-opener the first time I read Eliezer explain it -- namely, with G := googolplex, I would then also have to accept that I'd prefer a 1/G chance of living (G + 1) years + a (G-1)/G chance of living 3^^^3 years to a 1/G chance of living G years + a (G-1)/G chance of living 4^^^^4 years -- in other words, I'll prefer a near-certainty of an unimaginably smaller existence, if I get for that a minuscule increase of existence in a scenario that only has a minuscule chance of happening in the first place. But I've started to think that, perhaps, the unimaginably large difference between these lifetimes possibly really might be that unimportant, given that I can cycle through all of current-brain-size human mindspace many times in a mere 3^^^3 years, and given the also-unpalatable conclusions from the unbounded utility function.)
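
(For concreteness, here is a minimal numeric sketch of how a bounded utility function handles the torture-lottery comparison two paragraphs up; the saturation timescale, the torture value, and the use of a mere googol in place of a googolplex are all arbitrary stand-in assumptions:)

```python
import math

U_MAX = 1.0   # the bound on utility (arbitrary units)
T = 1e6       # hypothetical timescale after which extra years barely matter

def bounded_u(years):
    """A bounded utility for `years` of good post-singularity existence."""
    if years / T > 700:  # avoid overflow; already indistinguishable from U_MAX
        return U_MAX
    return U_MAX * (1 - math.exp(-years / T))

G = 10.0 ** 100   # stand-in for googolplex (a mere googol)
torture = -1.0    # everyone tortured, then extinction, on the same bounded scale
huge = 1e300      # saturated stand-in for either 3^^^3 or 4^^^^4 years

# Lottery A: (G-1)/G torture-and-extinction, 1/G chance of 4^^^^4 good years
ev_a = (G - 1) / G * torture + (1 / G) * bounded_u(huge)
# Lottery B: certainty of 3^^^3 good years
ev_b = bounded_u(huge)

print(ev_a, ev_b)  # ~ -1.0 vs ~1.0: the bounded agent takes the sure thing
```

An unbounded utility function that is linear in years would reverse that preference, since even 4^^^^4 divided by a googolplex still dwarfs 3^^^3 -- which is exactly the trade-off under discussion.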

comment by [deleted] · 2012-10-15T14:28:17.749Z · LW(p) · GW(p)

I'm not sure if this is out of left field or not, but I think I currently have the approach below:

If a Pascal Mugger can mug you for n utils, it seems like a Pascal Mugger can mug you for 2n utils by restructuring the math of his statement slightly. Or alternatively, he can just mug you for n utils again.

There doesn't appear to be any obvious limit to the above. If my evaluation of the math says I should give in to a Pascal mugger for 5 dollars, it also seems to indicate I should give in to a Pascal mugger for 10 dollars, 100 dollars, my house, a year of slavery, or allowing myself to be a mind controlled puppet for the rest of my life, if the mugger phrases the numbers correctly. It seems like I can be mugged for anything (Another way to say this might be that there is no clear upper limit on the maximum amount of utility I can be Pascal mugged for if I can be Pascal mugged for 5 utility.)
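
(Before the concrete example below, here is a minimal sketch of that scaling claim; the credence value is a placeholder, not an estimate:)

```python
# If "pay whenever p * claimed_utility > cost" justifies handing over $5,
# the same rule justifies handing over anything, because the mugger
# controls the claimed utility and can always inflate it past your price.
def minimal_claim_to_beat(cost_utils, p_threat_real):
    """Smallest claimed utility differential that makes paying 'rational'."""
    return cost_utils / p_threat_real

p = 1e-30  # placeholder credence that the threat is real
for cost in [5, 100, 10**6, 10**12]:  # dollars, a house, a lifetime of service...
    print(cost, "->", minimal_claim_to_beat(cost, p))
```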

It does not seem reasonable to become a mind controlled puppet for the rest of my life just because someone says:

"Pascal's Mugging, A=#, P(a)=#, B=#, P(b)=#, C=Mind controlled puppetry for the rest of your life."

So I guess I'm left with:

A: If my utility function is such that it is correct to give in to Pascal's Mugging, then it is correct to give in to Pascal's Larger Mugging. (By the argument above)
B: Giving in to Pascal's Larger Mugging is incorrect according to my utility function. (This just appears to be the case. I mean, I could be that submissive to absolutely any math student who had a function capable of generating a generically copiable string of numbers, but it does not seem correct to do so.)
C: So giving in to Pascal's Mugging is incorrect according to my utility function. (By Modus Tollens)

Now, there are alternatives to that. If, for instance, the circumstance causing the Pascal's Mugging does not appear to be a sentient being capable of modifying the dilemma to turn you into a slave for itself (Pascal's Incoming Meteor?), then that's entirely different. It does seem reasonable to give 5 dollars to help build a meteor deflection system under some circumstances.

So I suppose I could say "I will allow myself to be Pascal Mugged if and only if it does not appear likely to turn me into a utility pump for lots of people." Which also makes sense: if your utility function is unbounded, then allowing millions of people to steal as much utility from you as they want, whenever they want, seems catastrophically bad. As bad as making any one decision incorrectly is, allowing your decision making process to be subverted for all of your future decisions is likely to be worse, and there is no limit to how bad it can get with an unbounded utility function.

Does that seem reasonable?

Replies from: thomblake
comment by thomblake · 2012-10-15T19:06:30.321Z · LW(p) · GW(p)

It does not seem reasonable

There's the problem with this answer. Pascal's Mugging is a problem for those who want to think rigorously about it. If you're willing to use the absurdity heuristic at all, then it's not a problem in the first place.

Can you calculate that the expected disutility you can expect from giving in to recursive Pascal's Muggers is greater than the expected disutility of failing to give in to a Pascal's Mugger?

Replies from: None
comment by [deleted] · 2012-10-16T13:41:45.176Z · LW(p) · GW(p)

Can you calculate that the expected disutility you can expect from giving in to recursive Pascal's Muggers is greater than the expected disutility of failing to give in to a Pascal's Mugger?

Hmm. I would say in the case of being a mind controlled puppet, yes, because it contains a penalty to my decision making itself, but I don't know if I have enough math knowledge to lay it out correctly. Let me try to give some examples and see if I am on the right track.

Take the offer, "I'll give you 1 million dollars, but you will be forced into letting me make your next two decisions for you."

If my next two decisions are "Should I eat 1 pound of pie, or should I eat 1 pound of cake?" and "Should I wear blue pants, or Should I wear brown pants?" then this is probably an easy million.

If my next two decisions are "Should I launch or not launch the Nuclear missiles that kill billions?" and "Should I accept or not accept the next Pascal's Mugging?" then clearly giving up decision making power to someone else is much much worse.

From my previous post, it still seems that if I submit to a Pascal mugging for something, then I submit to a Pascal mugging for anything. That means if I can be mugged for five dollars, I can be mugged into giving up my decision making power... Except, if a Pascal's mugging occurs once, then it is certainly more likely to occur again, and then the expected value of being able to make the correct decisions skyrockets. After all, if Pascal's muggings occur, that means you would have to greatly increase the chance that you will face decisions about amounts of utility so large they dwarf Graham's number.

This seems to mean that it becomes just as critically important to NOT give up any of your decision making power at all, under any circumstances, in the same way that, if you learned Omega might be offering you 1 million utility or 1,000 utility tomorrow depending on which lettered box you pick, it would be a really bad time to get so drunk that you were hungover and the letters A and B blurred together into the same letter.

I suppose another way of saying this would be "As bad as making any one decision incorrectly is, becoming a mind controlled slave is probably going to be worse, because that includes a multitude of future incorrect decisions later, and you probably do not have evidence that would suggest that the sum of disutility from those future incorrect decisions is going to be smaller than this one."

Another more direct way to say this might be that if I give in to recursive Pascal's Muggers, then in addition to losing large amounts of money/utility from repeated submission, that can INCLUDE failing to give in to some Pascal's Muggers as well, either because one of them mugged me into not obeying a future mugger, or because I was recursively mugged to the point where I don't even HAVE five dollars and can't obey the mugger. (And there is nothing that guarantees the mugger will be nice and mug you for something that you have.)

On the other hand, if I defy Pascal's muggers, that should generally decrease the chance of additional muggings on me (assuming the muggers actually want what they are mugging me for), which would be a very good thing considering how bad failing to give in to a real Pascal Mugging is.

Still, if it is a UNIQUE threat (Pascal's Meteor), or the Mugger who is mugging me happens to appear out of thin air, unlike all other previous muggers, then I AM likely to give in, because then the whole "Well, this is likely to be repeated" logic, or the "Well, there are probably going to be much more important decisions later" logic, would not apply.

I don't think I used the absurdity heuristic there, but I also didn't do much explicit math either. Does that lay it out better?

Replies from: thomblake
comment by thomblake · 2012-10-16T14:28:22.411Z · LW(p) · GW(p)

but I also didn't do much explicit math either

Yeah, "calculate" was the key word. Pascal's mugging is a problem for someone who is doing explicit math.

Your examples of "losing large amounts of money" and "launch nuclear missiles that kill billions" are not anywhere near the correct order of magnitude to compare to any conceivable probability times 3^^^^3 lives.

Replies from: None
comment by [deleted] · 2012-10-21T11:36:40.849Z · LW(p) · GW(p)

Okay, I've had time to think about this, and I've realized two entirely different somethings, both of which appear to be a result of my trying to think about this more from a math/programming/probability perspective. However, they start off with opposite viewpoints, so it doesn't seem likely that they are both right, unless the math involved is simply that complicated, in which case it's just beyond me. Despite that, I can't tell which angle appears to be more correct (or perhaps they are just both wrong again). Can you help me out?

1: Being threatened with 3^^^^3 disutility is, bizarrely, not that bad, because a second person threatening you can simply threaten you with endless disutility. Given an unbounded utility function, the slightest chance of that happening should be strictly worse than any finite loss (from a calculation perspective).

In essence, this is saying

A chance of: u = u - 3^^^^3

is not as bad as a smaller chance of: do; u = u - 3^^^^3; repeat;

or really, even a smaller chance of: do; u = u - 1; repeat;

And since either way we are imagining magical powers from outside the Matrix, it does not seem like being caught in an endless loop requires any significantly greater effort on the part of the threatener. I mean, in theory, he could just revoke your ability to die and put you in a box where someone teleports in, fearful for their life for 30 seconds or so, and then dies painfully, with you feeling their pain physically but not dying... and then an entirely new person appears, with no defined end condition. (Whereas with the original threat, people will stop appearing in the box and you will be released after 3^^^^3 people.) Either way, all he needs to do is affect two people at a time and a box. An endless condition is almost easier in that he doesn't need to track the number of times he does it. So maybe it's not even less likely.

So I have to think "Am I more likely to suffer endless disutility if I do or don't give into the demand?"

"Well, if it's a real demand, then clearly people may randomly ask me for 5 dollars to avoid disutility in the future. If one of those is threatening endless disutility, I'd better keep my five dollars for him, since endless disutility would be infinitely worse."

or perhaps

"Well, if it's a real demand, then nothing stops people from mugging me in the future for an endless amount of disutility, while I am disabled from the disutility this guy is inflicting on me. I'd better give away my five dollars, since the slight chance of endless disutility would be infinitely worse."

or perhaps he was aware of that and just threatened me with endless disutility right off the bat.

Since this doesn't seem to go anywhere, that is why I thought of point 2.

2: The mere possibility of being threatened with 3^^^^3 disutility is enough to drive you insane. If you are being Pascal's Mugged, then usually you are certain that you're being Pascal's Mugged. However, there's always the possibility of a communications breakdown.

Let's say a person is talking to you over a static-filled communications channel. You're pretty sure that the person is either actually mugging you or just asking about Pascal's Mugging in general, but it's 50-50 either way.

As you pointed out, 3^^^^3 lives is of inconceivable magnitude. Halving the chance is not likely to change your behavior, so you would respond as if you were certain it was a Pascal's Mugging. Okay.

How much static would you have to have to NOT respond in this way?

I mean, let's say you're aware of the fact that "PMUG" is an abbreviation for Pascal's mugging someone, because it's four letters, and Pascal's mugging is talked about a lot.

You receive a message that is four random alpha characters. Apparently, it's just utterly garbled, so you know the original message was 4 characters long, but that's it. Other than that, it could be any 4 random characters, from "AAAA" to "ZZZZ". Well, there is a 1 in 456,976 (26^4) chance it was originally the message "PMUG", and if there is a 1 in 456,976 chance of there being a Pascal Mugging, clearly you should send the person who sent that garbled message money if you would send money in the original Pascal's Mugging. (Because it would seem unlikely that the probability times the disutility is sufficiently small that dividing it by less than a million would change much.)

You get an email from someone. It's a Paypal account, followed by the first 50 characters of the original Pascal's Mugging paper, followed by unreadable static. Do you send money to the Paypal account? Well, let's say you think there is a 1 in a trillion chance that they were trying to mug you and the email client glitched, and the rest of the probability is that this is some random phishing ploy copying a sample of plain text from somewhere. Presumably, it's still worth it to give, if you would give to the original mugging, since even a 1 in a trillion chance of being mugged seems likely to be enough to make giving worth it.

You receive a phone call from someone. It's a dropped call, and you have no information about that phone number, but perhaps you have heard that at least once, someone attempted to Pascal's Mug someone over the phone. You have no idea if that was the case here, but perhaps you have heard from a reliable source that the phone company generated statistics on it, and Pascal's Muggings only happened in 1 in 1 quintillion phone calls. Your phone can text that phone number 5 dollars. Presumably, if you have a guess that there was a 1 in 1 quintillion chance it might have been a mugging, and you would give into muggings, then you would send that number 5 dollars.

Let's say you go to work and your coworker says "Hi. Are you still worried about Pascal's Mugging?" You have heard that people under stress will sometimes experience verbal confusion, where they misunderstand parts of a sentence. It occurs to you while this is very unlikely, there is perhaps a 1 in a googol chance of this happening to you right as you come in the door, and that your coworker is trying to Pascal's Mug you. If you would give into the original Pascal's Mugging, then you should give your Coworker 5 dollars, right?

I suppose what I'm getting at is: if you would give in to Pascal's Mugging, and you would give in to Pascal's Mugging under uncertainty about whether the communication even is a mugging, then at some point there should be an amount of uncertainty at which you would NOT give in; otherwise it seems you will start handing out 5 dollars to almost any stimulus out of paranoia (without necessarily ever actually being Pascal's Mugged), because that stimulus MIGHT have been a mugging.
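
(A sketch of why that threshold never seems to bite, working in log10 since the claimed stakes overwhelm ordinary floats; every number here is a placeholder drawn loosely from the examples above, not a real estimate:)

```python
import math

def should_pay(p_is_mugging, p_threat_real, log10_claimed_utils, cost=5.0):
    """Pay iff P(this is a mugging) * P(threat real) * claimed utils > cost."""
    lhs = math.log10(p_is_mugging) + math.log10(p_threat_real) + log10_claimed_utils
    return lhs > math.log10(cost)

# Garbled 4-letter message, glitched email, dropped call, coworker's greeting:
for p_msg in [1 / 456976, 1e-12, 1e-18, 1e-100]:
    print(p_msg, should_pay(p_msg, p_threat_real=1e-30, log10_claimed_utils=1e13))
# All True: a 3^^^^3-sized claim swallows any extra factor of doubt,
# which is exactly the paranoia problem described above.
```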

comment by Kawoomba · 2012-10-14T07:18:17.837Z · LW(p) · GW(p)

To put it bluntly... why should I care? Even if the Mugger's claim is accurate, the entities he threatens to simulate and calls 'people' don't seem as if they will have any opportunity to interact with my portion of the Matrix; they will never have any opportunity to offer me any benefit or any harm. How would it benefit myself to treat such entities as if they not only had a right to life, but that I had an obligation to try to defend their right?

This is easily remedied: The 3^^^3 people will all be put into the exact same simulated Pascal's Mugging scenario, and each will be given some previous life experiences that may or may not be like your own, but that the simulated person will assume as his/her own. Their choice won't actually affect what happens to them, but depending on the original non-simulated person's choice, they'll either lead a happy simulated life, or be killed (dependent on the original's choice, not their own). Overall, 3^^^3 + 1 choices, only one of which counts (the original's).

Now, you'd have to assume that you yourself are not in the original PM situation, but one of the simulated ones (self sampling assumption). This means that while you'd have to expect your choice not to have any impact (since the original already made the choice that counts), even the tiny probability of being the original would cause you to pay the $5, counting on TDT reasoning to ensure the best outcome for all simulated muggees - and thus for yourself. (Possibly contributing towards not getting killed yourself would outweigh the $5, which you'd expect to be simulated anyway.)

comment by Vladimir_Nesov · 2012-10-13T15:27:19.482Z · LW(p) · GW(p)

To put it bluntly... why should I care?

(Putting things bluntly or expressing opinions based on unclear considerations is usually not helpful for forming robust understanding of subtle questions.)

comment by buybuydandavis · 2012-10-14T02:08:29.992Z · LW(p) · GW(p)

Never mind.