DRAFT: Ethical Zombies - A Post On Reality-Fluid

post by MugaSofer · 2013-01-09T13:38:02.754Z · LW · GW · Legacy · 116 comments


I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a situation similar to the scenario at the end. I'm not sure how original it is, but I certainly don't recall seeing anything like it before.


Imagine, for simplicity, a purely selfish agent. Call it Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake away from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
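(For concreteness, here is a minimal sketch of the numbers Alice runs, assuming each cake is worth one util to her - that scale is my assumption, not part of the scenario:)

```python
# Expected utility of Omega's coin-flip deal, assuming one cake = one util.
p_heads, p_tails = 0.5, 0.5
gain_if_heads = 3    # three cakes added
loss_if_tails = -1   # one cake taken from the stockpile

expected_utility = p_heads * gain_if_heads + p_tails * loss_if_tails
print(expected_utility)  # 1.0 > 0, so Alice accepts
```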


The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.

 

(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)


Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice - who knows nothing of this - three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.


Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it's cakes). Bob reasons just as Alice did above and accepts the bargain.


Humans value each other's utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone a billion times and torture one of the simulations, this is equivalent to a one-in-a-billion chance of them being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
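(A minimal sketch of the multiplication being assumed here - the numbers and the utility scale are mine, chosen only for illustration: if a person's measure is spread evenly over N identical copies, harming one copy carries weight 1/N, the same as a 1/N gamble on the harm.)

```python
# Sketch of the measure-weighting assumption above: harming 1 of N identical
# copies is treated as equivalent to a 1/N risk of the same harm.

def risk_one_person(p_bad, disutility):
    """Expected disutility of exposing one fully-real person to a p_bad risk."""
    return p_bad * disutility

def harm_one_of_n_copies(n_copies, disutility):
    """Expected measure-weighted disutility of harming 1 of n identical copies."""
    return (1 / n_copies) * disutility

N = 10**9
D = -1_000_000  # hypothetical disutility of the bad outcome

print(risk_one_person(1 / N, D))    # -0.001
print(harm_one_of_n_copies(N, D))   # -0.001, the same number
```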


If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.


My mind rebels at the notion that such a thing might exist, even in theory, and yet ... if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could be simply scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.


Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn't disprove it entirely).

 

116 comments

Comments sorted by top scores.

comment by Qiaochu_Yuan · 2013-01-09T21:21:28.679Z · LW(p) · GW(p)

(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)

I don't currently accept the validity of this kind of anthropic reasoning (actually I am confused about anthropic reasoning in general). Is there an LW post where it is thoroughly defended?

Replies from: Vladimir_Nesov, MugaSofer, OrphanWilde
comment by Vladimir_Nesov · 2013-01-09T21:49:19.487Z · LW(p) · GW(p)

Anthropic reasoning not working or not making sense in many cases is closer to being a standard position on LW (for example). The standard trick for making anthropic problems less confusing is to pose them as decision problems instead of as problems about probabilities. This way, when there appears to be no natural way of assigning probabilities (to instances of an agent) that's useful for understanding the situation, we are not forced to endlessly debate which way of assigning them anyway is "the right one".

comment by MugaSofer · 2013-01-10T09:50:20.696Z · LW(p) · GW(p)

anthropic reasoning

You keep using that word. I don't think it means what you think it means.

Seriously, though, what do you think the flaw in the argument is, as presented in your quote?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-11T00:29:01.530Z · LW(p) · GW(p)

I think I'm using "anthropic" in a way consistent with the end of the first paragraph of Fundamentals of kicking anthropic butt (to refer to situations in which agents get duplicated and/or there is some uncertainty about what agent an agent is). If there's a more appropriate word then I'd appreciate knowing what it is.

My first objection is already contained in Vladimir_Nesov's comment: it seems like in general anthropic problems should be phrased entirely as decision problems and not as problems involving the assignment of odds. For example, Sleeping Beauty can be turned into two decision problems: one in which Sleeping Beauty is trying to maximize the expected number of times she is right about the coin flip, and one in which Sleeping Beauty is trying to maximize the probability that she is right about the coin flip. In the first case, Sleeping Beauty's optimal strategy is to guess tails, whereas in the second case it doesn't matter what she guesses. In a problem where there's no anthropic funniness, there's no difference between trying to maximize the expected number of times you're right and trying to maximize the probability that you're right, but with anthropic funniness there is.
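(A minimal sketch of this distinction, under the standard Sleeping Beauty setup - heads: woken once, tails: woken twice with amnesia - and assuming Beauty commits to one fixed guess for every awakening; those setup details are my assumptions, not spelled out above:)

```python
# Two objectives for Sleeping Beauty, which come apart only because of the
# "anthropic funniness" of being woken a different number of times per branch.
P_HEADS, P_TAILS = 0.5, 0.5

def expected_times_right(guess):
    # Expected number of correct answers she gives over the whole experiment.
    right_if_heads = 1 if guess == "heads" else 0   # one awakening
    right_if_tails = 2 if guess == "tails" else 0   # two awakenings
    return P_HEADS * right_if_heads + P_TAILS * right_if_tails

def probability_right(guess):
    # Probability that her fixed guess matches the actual coin flip.
    return P_HEADS if guess == "heads" else P_TAILS

for guess in ("heads", "tails"):
    print(guess, expected_times_right(guess), probability_right(guess))
# heads 0.5 0.5
# tails 1.0 0.5  -> guessing tails wins the first objective; the second is a tie
```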

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

Replies from: Wei_Dai, ESRogs, MugaSofer
comment by Wei Dai (Wei_Dai) · 2013-01-19T00:56:50.958Z · LW(p) · GW(p)

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

You may eventually obtain the capacity to perfectly simulate yourself, in which case you'll run into similar issues. I used Omega in a scenario a couple of years ago that's somewhat similar to the OP's, but really Omega is just a shortcut for establishing a "clean" scenario that's relatively free of distractions so we can concentrate on one specific problem at a time. There is a danger of using Omega to construct scenarios that have no real-world relevance, and that's something that we should keep in mind, but I think it's not the case in the examples you gave.

comment by ESRogs · 2013-01-11T02:35:30.197Z · LW(p) · GW(p)

How would you characterize your issue with Pascal's mugging? The dilemma is not supposed to require being convinced of the truth of the proposition, just assigning it a non-zero probability.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-11T02:55:43.311Z · LW(p) · GW(p)

Hmm. You're right. Upon reflection, I don't have a coherent rejection of Pascal's mugging yet.

Replies from: ESRogs
comment by ESRogs · 2013-01-11T04:13:22.675Z · LW(p) · GW(p)

Gotcha. Your posts have seemed pretty thoughtful so far so I was surprised by / curious about that comment. :)

comment by MugaSofer · 2013-01-11T12:32:42.332Z · LW(p) · GW(p)

Regarding anthropic reasoning, I always understood the term to refer to situations in which you could have been killed/prevented from existing.

it seems like in general anthropic problems should be phrased entirely as decision problems and not as problems involving the assignment of odds

How then, do you assign the odds?

My second objection is that I don't understand how an agent could be convinced of the truth of a sufficiently bizarre premise. (I have the same issue with Pascal's mugging, torture vs. dust specks, and Newcomb's problem.) In this particular case, I don't understand how I could be convinced that another agent really has the capacity to perfectly simulate me. This seems like exactly the kind of thing that agents would be incentivized to lie about in order to trick me.

You believe Omega because it's Omega, who always tells the truth and has access to godlike power.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-11T20:18:00.603Z · LW(p) · GW(p)

How then, do you assign the odds?

You don't.

You believe Omega because it's Omega, who always tells the truth and has access to godlike power.

How does any particular agent go about convincing me that it's Omega?

Replies from: Decius, MugaSofer
comment by Decius · 2013-01-12T02:27:56.322Z · LW(p) · GW(p)

How does any particular agent go about convincing me that it's Omega?

I don't know, but Omega does. Probably by demonstrating the ability to do something such that you believe the chance that it could be faked is epsilon^2, where epsilon is your prior belief that a given agent could have godlike powers.

comment by MugaSofer · 2013-01-13T10:28:40.712Z · LW(p) · GW(p)

How then, do you assign the odds?

You don't.

So ... you don't know what the odds are, but you know how to act anyway? I notice that I am confused.

it's Omega, who always tells the truth and has access to godlike power.

How does any particular agent go about convincing me that it's Omega?

Assuming that there is only one godlike agent known or predicted in the environment, that Omega is a known feature of the environment, and that you have no reason to believe that e.g. you are hallucinating, then presumably all Omega needs to do is demonstrate his godlike powers - by predicting your every action ahead of time, say, or turning the sky green with purple spots.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-13T12:22:35.531Z · LW(p) · GW(p)

So ... you don't know what the odds are, but you know how to act anyway? I notice that I am confused.

In an anthropic situation, I don't think it makes sense to assign odds to statements like "I will see X" because the meaning of the term "I" becomes unclear. (For example, I don't think it makes sense for Sleeping Beauty to assign odds to statements like "I will see heads.") I can still assign odds to statements like "exactly fifteen copies of me will see X" by reasoning about what I currently expect my copies to see, given what I know about how I'll be copied, and using those odds I can still make decisions.

Assuming that there is only one godlike agent known or predicted in the environment, that Omega is a known feature of the environment, and that you have no reason to believe that e.g. you are hallucinating, then presumably all Omega needs to do is demonstrate his godlike powers - by predicting your every action ahead of time, say, or turning the sky green with purple spots.

Omega needs to both always tell the truth and have access to godlike power. How does Omega prove to me that it always tells the truth?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T16:15:38.063Z · LW(p) · GW(p)

I don't think it makes sense to assign odds to statements like "I will see X" because the meaning of the term "I" becomes unclear. (For example, I don't think it makes sense for Sleeping Beauty to assign odds to statements like "I will see heads.")

I don't understand this, TBH, but whatever.

What do you think Alice should choose?

Omega needs to both always tell the truth and have access to godlike power. How does Omega prove to me that it always tells the truth?

It is a known feature of the environment that people are regularly ambushed by an agent, calling itself Omega, which has never yet been known to lie and has access to godlike power.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-13T17:19:25.164Z · LW(p) · GW(p)

What do you think Alice should choose?

There's nothing to choose until you've specified a decision problem.

It is a known feature of the environment that people are regularly ambushed by an agent, calling itself Omega, which has never yet been known to lie and has access to godlike power.

An agent with godlike power can manufacture evidence that it has any other traits it wants, so observing such evidence isn't actually evidence that it has those traits.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T17:53:54.480Z · LW(p) · GW(p)

There's nothing to choose until you've specified a decision problem.

Um, I have.

Imagine, for simplicity, a purely selfish agent. Call it Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake away from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.

The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing.

What do you think Alice should choose?

An agent with godlike power can manufacture evidence that it has any other traits it wants, so observing such evidence isn't actually evidence that it has those traits.

An agent with godlike power can manufacture evidence of anything. This seems suspiciously like a Fully General Counterargument. Such an agent could, in any case, directly hack your brain so you don't realize it can falsify evidence.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-13T19:41:46.617Z · LW(p) · GW(p)

Oh, you were referring to the original decision problem. I was referring to the part of the post I was originally responding to about what subjective odds Alice should assign things. I just ran across an LW post making an important comment on these types of problems, which is that the answer depends on how altruistic Alice feels towards copies of herself. If she feels perfectly altruistic towards copies of herself, then sure, take the cakes.

An agent with godlike power can manufacture evidence of anything. This seems suspiciously like a Fully General Counterargument. Such an agent could, in any case, directly hack your brain so you don't realize it can falsify evidence.

Yes, it's a fully general counterargument against believing anything that an agent with godlike power says. Would this sound more reasonable if "godlike" were replaced with "devil-like"?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T21:18:35.492Z · LW(p) · GW(p)

I just ran across an LW post making an important comment on these types of problems, which is that the answer depends on how altruistic Alice feels towards copies of herself. If she feels perfectly altruistic towards copies of herself, then sure, take the cakes.

That's ... a very good point, actually.

I assume, on this basis, that you agree with the hypothetical at the end?

Would this sound more reasonable if "godlike" were replaced with "devil-like"?

Yes. Devil-like implies known hostility.

By this logic, the moment you turn on a provably Friendly AI you should destroy it, because it might have hacked you into thinking it's friendly. Worse still, a hostile god would presumably realize you won't believe it, and so hack you into thinking it's not godlike; so anything claiming not to be godlike is lying.

Bottom line: gods are evidence that the world may be a lie. But not strong evidence.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-13T23:38:55.270Z · LW(p) · GW(p)

I assume, on this basis, that you agree with the hypothetical at the end?

Nope. I still don't agree that simulating an agent N times and doing X to one of them is morally equivalent to risking X to them with probability 1/(N+1). For example, if you are not at all altruistic to copies of yourself, then you don't care about the former situation as long as the copy that X is being done to is not you. On the other hand, if you value fairness among your copies (that is, if you value your copies having similar quality of life) then you care about the former situation more strongly than the latter situation.

By this logic, the moment you turn on a provably Friendly AI you should destroy it, because it might have hacked you into thinking it's friendly. Worse still, a hostile god would presumably realize you wont believe it, and so hack you into thinking it's not godlike; so anything claiming not to be godlike is lying.

Pretty much the only thing a godlike agent can convince me of is that it's godlike (and I am not even totally convinced this is possible). After that, again, whatever evidence a godlike agent presents of anything else could have been fabricated. Your last inference doesn't follow from the others; my priors regarding the prevalence of godlike agents are currently extremely low, and claiming not to be godlike is not strong evidence either way.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T10:04:22.500Z · LW(p) · GW(p)

Nope. [snip explanation]

To be clear, since humans are specified as valuing all agents (including sims of themselves and others) shouldn't it be equivalent to Alice-who-values-copies-of-herself?

my priors regarding the prevalence of godlike agents are currently extremely low,

And what are those priors based on? Evidence! Evidence that a godlike being would be motivated to falsify!

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-01-14T21:32:04.336Z · LW(p) · GW(p)

To be clear, since humans are specified as valuing all agents (including sims of themselves and others) shouldn't it be equivalent to Alice-who-values-copies-of-herself?

Sure, but the result you describe is equivalent to Alice being an average utilitarian with respect to copies of herself. What if Alice is a total utilitarian with respect to copies of herself?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-15T10:27:28.529Z · LW(p) · GW(p)

Actually, she should still make the same choice, although she would choose differently in other scenarios.

comment by OrphanWilde · 2013-01-09T21:51:30.966Z · LW(p) · GW(p)

If it helps you avoid fighting the hypothetical, Omega already knows what her answer will be, and has already acted on it.

comment by [deleted] · 2013-01-09T14:39:42.545Z · LW(p) · GW(p)

If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation are not real

Well, as a draft comment: I don't think "a trillion times" and "3^^^^^^3 times" are conflatable in this context. There are simply too many arguments that apply to one and not the other.

For instance, a programmer can define 1 trillion unique societies. You could do this, for instance, by having each society seeded from 12 variables with 10 levels each. You could then say that the society seeded from 1,2,3,4,5,6,7,8,9,0,1,2 was the only society that was 1,2,3,4,5,6,7,8,9,0,1,2. I could generate a computer program which wrote out a textual description of each one. 1 trillion just isn't that large. For instance, there are well more than 1 trillion possible saved games in a simplistic game.
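(A minimal sketch of that seeding scheme, purely as an illustration of the count - the variable names are mine:)

```python
# 12 seed variables with 10 levels each gives exactly 10**12 = 1 trillion
# distinct society seeds.
from itertools import product

VARIABLES = 12
LEVELS = range(10)

print(10 ** VARIABLES)  # 1000000000000

# Enumerate the seeds lazily (never materialise the whole list):
seeds = product(LEVELS, repeat=VARIABLES)
print(next(seeds))  # (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
```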

I don't know whether there are even 3^^^^^^3 possible states in the observable universe that can be physically distinguished from one another under current physics, but I would suspect not.

So I keep getting distracted by "A particular society out of 1 trillion societies may matter, but a particular society out of 3^^^^^^3 societies doesn't seem like it would matter any more than one of my atoms being less than one Planck length to the right would." but I'm not sure if that relates back to your point.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T10:09:31.275Z · LW(p) · GW(p)

I suspect from some of the comments I'm getting that I didn't make this clear: copies are identical in this scenario. They receive the same inputs, make the same choices, and think and feel as one. They are, in short, one person (or civilization). But one with vastly more reality-fluid (sometimes known as "measure") and thus, as far as I can tell, moral weight.

Replies from: Pentashagon, None
comment by Pentashagon · 2013-01-10T23:22:49.625Z · LW(p) · GW(p)

Is it worse to torture a virtual person running on redundant hardware (say 3 computers in lock-step, like the Space Shuttle used) whose permanent state (or backups) is stored on a RAID1 of disks instead of a virtual person running on a single CPU with one disk? Or even simpler; is it worse to torture a more massive person than a less massive person? Personally, I would say no.

Just like there's only one electron, I think there's only one of any particular thing, at least in the map. The territory may actually be weird and strange, but I don't have any evidence that redundant exact copies have as much moral weight as a single entity. I think that it's worse to torture 1 non-redundant person than it is to torture n-1 out of n exact copies of that person, for any n. That only applies if it's exactly the same simulation n-1 times. If those simulations start to diverge into n different persons, it starts to become as bad as torturing n different unique people. Eventually even those n-1 exact copies would diverge enough from the original to be considered copies of a different person with its own moral weight. My reasoning is just probabilistic in expected utility: It's worse for an agent to expect p(torture)=1 than p(torture)=(n-1)/n, and an identical agent can't distinguish between identical copies (including its environment) of itself.

Replies from: OrphanWilde, MugaSofer
comment by OrphanWilde · 2013-01-10T23:31:22.861Z · LW(p) · GW(p)

As soon as you start torturing one of those identical agents, it ceases to be identical.

I guess the question from there is, does this produce a cascade of utility, as small divergences in the simulated universe produce slightly different agents for the other 6 billion people in the simulation, whose utility then exists independently?

comment by MugaSofer · 2013-01-11T12:39:31.639Z · LW(p) · GW(p)

That it is true, if unintuitive, that people gain moral worth the more "real" they get is a position I have seen on LW, and the arguments do seem reasonable. (It is also rather more coherent when used in a Big Universe.) This post assumes that position, and includes a short version of the most common argument for it.

Incidentally, I used to hold the position you describe; how do you deal with the fact that a tortured copy is, by definition, no longer "part" of the original?

comment by [deleted] · 2013-01-10T15:28:59.353Z · LW(p) · GW(p)

But one with vastly more reality-fluid (sometimes known as "measure") and thus, as far as I can tell, moral weight.

This is very thought provoking. Can you add clarity on your views on this point?

For instance, should I imply a "vastly" in front of moral weight as well, as if there is a 1:1 correspondence, or should I not do that?

Is this the only moral consideration you are considering on this tier? (I.E, there may be other moral considerations, but if this is the only "vast" one, it will probably outweigh all others.)

Does the arrangement of the copies' reality fluid matter? Omega is usually thought of as a computer, so I am considering the file system. He might have 3 copies in 1 file for resilience, such as in a RAID array. Or he can have 3 copies that link to 3 files, such as in just having Sim001.exe and Sim002.exe and Sim003.exe having the exact same contents and being in the same folder. In both cases, the copies are identical. And if they are being run simultaneously and updated simultaneously, then the copies might not be able to tell which structure Omega was using. Which of these are you envisioning (or would it not matter? [Or do I not understand what a RAID array is?])

Some of these questions may be irrelevant, and if so, I apologize; I really am not sure I understand enough about your point to reply to it appropriately. And again, it does sound thought-provoking.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T13:11:03.205Z · LW(p) · GW(p)

For instance, should I imply a "vastly" in front of moral weight as well, as if there is a 1:1 correspondence, or should I not do that?

Pretty much, yeah.

Is this the only moral consideration you are considering on this tier? (I.E, there may be other moral considerations, but if this is the only "vast" one, it will probably outweigh all others.)

Well, I'm considering the torture's disutility, and the torturers' utility.

Does the arrangement of the copies' reality fluid matter? Omega is usually thought of as a computer, so I am considering the file system. He might have 3 copies in 1 file for resilience, such as in a RAID array. Or he can have 3 copies that link to 3 files, such as in just having Sim001.exe and Sim002.exe and Sim003.exe having the exact same contents and being in the same folder. In both cases, the copies are identical. And if they are being run simultaneously and updated simultaneously, then the copies might not be able to tell which structure Omega was using. Which of these are you envisioning (or would it not matter? [Or do I not understand what a RAID array is?])

I'm not entirely sure I understand this question, but I don't think it should matter.

comment by tgb · 2013-01-09T19:39:31.224Z · LW(p) · GW(p)

Situation A: There are 3^^^^3 simulations of me, as well as myself. You come up to me and say "I'm going to torture forever one of you or your simulations, chosen randomly." Do I shrug and say, "well, whatever, it 'certainly' won't be me" or do I scream in horror at the thought of you torturing a copy of me forever?

Situation B: There are me and 3^^^^3 other people in a rather large universe. You come up to me and say "I'm going to torture forever one of the people in this universe, chosen randomly." Do I shrug and say, "well, whatever, it 'certainly' won't be me" or do I scream in horror at the thought of you torturing someone forever?

What's the difference between these situations?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T09:54:52.413Z · LW(p) · GW(p)

This is why I added Bob.

The difference is that a tiny risk of killing one specific person is different from the certainty of killing some person, without knowing who.

comment by Richard_Kennaway · 2013-01-09T14:26:48.473Z · LW(p) · GW(p)

There's a jump between the paragraph beginning "Humans value each other's utility" and the next. Up to and including that paragraph, simulations are treated as equivalent to "real" people, but in the next, "the people in this simulation are not real", and are worth less than the Really Real. How do you get from the first part of the essay to the second?

Replies from: gjm, MugaSofer
comment by gjm · 2013-01-09T15:23:29.406Z · LW(p) · GW(p)

I think the idea is meant to be that "one of many simulations" = "low probability" = "unimportant".

If so, I think this is simply a mistake. MugaSofer: you say that being killed in one of N simulations is just like having a 1/N chance of death. I guess you really mean 1/(N+1). Anyway, now Omega comes to you and says: unless you give me $100k (replace this with some sum that you could raise if necessary, but would be a hell of an imposition), I will simulate one copy of you and then stop simulating it at around the point of its life you're currently at. Would you pay up? Would you pay up in the same way if the threat were "I'll flip a coin and kill you if it comes up heads"?

The right way to think about this sort of problem is still contentious, but I'm pretty sure that "make another copy of me and kill it" is not at all the same sort of outcome as "kill me with probability 1/2".

Now, suppose there are a trillion simulations of you. If you really believe what it says at the start of this article, then I think the following positions are open to you. (1) All these simulations matter about as much as any other person does. (2) All these simulations matter only about 10^-12 as much as any other person -- and so do I, here in the "real" world. Only if you abandon your belief that there's no relevant difference between simulated-you and real-you, do you have the option of saying that your simulations matter less than you do. In that case, maybe you can say that each of N simulations matters 1/N as much, though to me this feels like a bad choice.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T10:17:32.103Z · LW(p) · GW(p)

Anyway, now Omega comes to you and says: unless you give me $100k (replace this with some sum that you could raise if necessary, but would be a hell of an imposition), I will simulate one copy of you and then stop simulating it at around the point of its life you're currently at. Would you pay up? Would you pay up in the same way if the threat were "I'll flip a coin and kill you if it comes up heads"?

No. He doubles my "reality", then halves it. This leaves me just as real as I was in the first place.

However, if he ran the simulation anyway, even if I paid up, then I think it does work out equivalent, because it's equivalent to creating the sim and then threatening to delete one of me if I didn't pay.

Does this answer your question?

comment by MugaSofer · 2013-01-10T10:11:47.310Z · LW(p) · GW(p)

I suspect I didn't make this as clear as I thought I had, but the term Really Real does not refer to people outside of simulations; it refers to people with vastly more simulations.

comment by aelephant · 2013-01-09T22:56:50.025Z · LW(p) · GW(p)

Could this be an example of the noncentral fallacy? One big reason humans try to avoid death is that there is only one of each individual & once they die they are gone forever. If a simulation is made of me and gets turned off, there's still one of me (the original). In this alternate reality there's also the chance that Omega could always just make another new copy. I think the two situations are dissimilar enough that our standard intuitions can't be applied.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T09:45:42.000Z · LW(p) · GW(p)

Well, presumably you would care less about that death depending on how "real" it was. If there's only one of you, you care as much as you do now (obviously); if there are two sims, you care half as much; and so on.

comment by Shmi (shminux) · 2013-01-09T17:09:55.626Z · LW(p) · GW(p)

First, I like the term e-zombie. It highlights the issue of "sim rights" vs "human rights" for me.

Second, I don't quite get the point you are trying to illustrate with this convoluted example. Is it that sims are intrinsically valueless or what? I don't see how this follows. Maybe some calculation is in order.

The weirdest of Weirdtopias, I should think.

Not by a long shot. Pratchett has weirder ones in every chapter. For example, Only You Can Save Mankind.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T10:00:21.120Z · LW(p) · GW(p)

Second, I don't quite get the point you are trying to illustrate with this convoluted example. Is it that sims are intrinsically valueless or what? I don't see how this follows. Maybe some calculation is in order.

It's that if sims with fewer copies - sims that are less "real" - are worth less, for the reasons presented above, then the whims of the many are worth more than the lives of the few, or the one.

Not by a long shot. Pratchett has weirder ones in every chapter. For example, Only You Can Save Mankind.

Speaking as a Pratchett fan who's read pretty much everything he ever wrote, although it's been a while since I read OYCSM, I don't understand this. Ankh-Morpork is a weirdtopia, and so are a few other things he's written (e.g. the "tradition as dead voting rights" bit from Johnny And The Dead), but I don't recall anything like this. Maybe I'm just being an idiot, though.

comment by wuncidunci · 2013-01-10T20:34:38.833Z · LW(p) · GW(p)

Let N=3^^^^^^3; surely N nice worlds + another nice world is better than N nice worlds + a torture world. Why? Because another nice world is better than a torture world, and the prior existence of the N previous worlds shouldn't matter to that decision.

What about the probability of actually being in the torture world, which is a tiny 1/(N+1)? The expected negative utility from this must surely be so small it can be neglected? Sure, but equally the expected utility of being the master of a torture world with probability 1/(N+1) can be neglected.

What this post tells me is that I'm still very very confused about reality fluid.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T12:49:55.964Z · LW(p) · GW(p)

Let N=3^^^^^^3; surely N nice worlds + another nice world is better than N nice worlds + a torture world. Why? Because another nice world is better than a torture world, and the prior existence of the N previous worlds shouldn't matter to that decision.

The torture world, in this case, is being used to satisfy the whims of the Niceworld's residents. Lots of Niceworld copies = lots of Reality = lots of utility. So goes the logic.

Sure, but equally the expected utility of being the master of a torture world with probability 1/(N+1) can be neglected.

Since they are all the same, they can share a torture world.

comment by Decius · 2013-01-12T02:22:28.148Z · LW(p) · GW(p)

What does it mean to simulate someone, and why should I value manipulation of a simulation?

How good does the simulation have to be before I value it; should I value a book in which I get cakes more? What about a fairly good simulation of the world, but the contents of my pantry are randomized each time I open it - should I value that simulation more if the expected number of cakes the next time the simulated me opens the pantry is higher?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T10:29:47.769Z · LW(p) · GW(p)

I was assuming perfect quantum-level modelling of you and everything you interact with, acquired and sustained via magic. It makes things much simpler.

As for your actual question ... I'm not sure. The sim would have to conscious, obviously, but the point at which it becomes "you" is ... unclear. It seems trivially true that a magical perfect simulation as above is "you", but an AI programmed to believe it's you is not. Beyond those two extremes ... it's tricky to say.

Of course, if utilities are additive, two almost-yous should be worth as much as one you with twice as much reality-fluid. So I guess humans can get away with ignoring the distinction between me and you, at least as long as they're using TDT or similar.

Replies from: Decius
comment by Decius · 2013-01-13T20:16:28.193Z · LW(p) · GW(p)

How close is a model that has an arbitrary number of cakes added?

I also say that no simulation has value to me if I am in a frame that knows they are a simulation. Likewise for quantum states that I don't manipulate.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-13T21:20:26.626Z · LW(p) · GW(p)

How close is a model that has an arbitrary number of cakes added?

Perfectly so before the cakes are added.

I also say that no simulation has value to me if I am in a frame that knows they are a simulation.

To be clear, are you actually asserting this or merely suggesting a possible resolution to the dilemma?

Replies from: Decius
comment by Decius · 2013-01-13T23:39:50.996Z · LW(p) · GW(p)

So you believe that it is irrelevant whether or not Omega' (a resident of the universe running a simulation) can create things of value to you but chooses not to? You have no preference for living in a world with constant physical laws?

I also say that no simulation has value to me if I am in a frame that knows they are a simulation.

To be clear, are you actually asserting this or merely suggesting a possible resolution to the dilemma?

It's a solution, but for it to apply to others they would have to share my values. What I'm saying is that there is no intrinsic value to me to the orientations of electrons representing a number which has a transformation function which results in a number which is perfectly analogous to me, or to any other person. Other people are permitted to value the integrity of those electrical orientations representing bits as they see fit.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T09:59:43.138Z · LW(p) · GW(p)

So you, in fact, do not value simulations of yourself? Or anyone else, for that matter?

Replies from: Decius
comment by Decius · 2013-01-14T13:47:47.822Z · LW(p) · GW(p)

With the caveat that I am not a simulation for the purposes of that judgement. I care only about my layer and the layers which are upstream of (simulating) me, if any.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-14T14:38:01.975Z · LW(p) · GW(p)

Well, obviously this post is not aimed at you, but I must admit I am curious as to why you hold this belief. What makes "downstream" sims unworthy of ethical consideration?

Replies from: Decius
comment by Decius · 2013-01-14T15:35:30.732Z · LW(p) · GW(p)

Maybe I've got a different concept of 'simulation'. I consider a simulation to be fully analogous to a sufficiently well-written computer program, and I don't believe that representations of numbers are morally comparable to living creatures, even if those numbers undergo transformations completely analogous to those creatures.

Why should I care if you calculate f(x) or f'(x), where x is the representation of the current state of the universe, f() is the standard model, and f'() is the model with all the cake?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-14T15:39:52.139Z · LW(p) · GW(p)

I don't believe that representations of numbers are morally comparable to living creatures

Does that stay true if those representations are implemented in a highly distributed computer made out of organic cells?

Replies from: Decius
comment by Decius · 2013-01-14T15:45:41.813Z · LW(p) · GW(p)

Are you trying to blur the distinction between a simulated creature and a living one, or are you postulating a living creature which is also a simulator? I don't have moral obligation regarding my inner Slytherin beyond any obligations I have regarding myself.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-14T15:53:29.844Z · LW(p) · GW(p)

I'm not so much trying to blur the distinction, as I am trying to figure out what the relevant parameters are. I started with "made of organic cells" because that's often the parameter people have in mind.

Given your clarification, I take it that "living" is the parameter you have in mind, in which case what I'm interested in is how you decide that something is a living system. For example, are you a living system? Can you be certain of that?

If you can't be certain, does it follow that there's a possibility that you don't in fact have a moral obligation to yourself (because you might not be the sort of thing to which you can have such obligations)?

Replies from: Decius
comment by Decius · 2013-01-14T16:48:41.222Z · LW(p) · GW(p)

If I am a number in a calculation, I privilege the simulation I am in above all others. I expect residents of all other simulations to privilege their own simulation above all others.

Being made of carbon chains isn't relevant; being made of matter instead of information or an abstraction is important, and even if there exists a reference point from which my matter is abstract information, I, the abstract information, intrinsically value my flavor of abstraction more so than any other reference. (There is an instrumental value to manipulating the upstream contexts, however.)

Replies from: TheOtherDave, MugaSofer
comment by TheOtherDave · 2013-01-14T17:27:18.955Z · LW(p) · GW(p)

Ah, OK. Sure, I can understand local-context privileging. Thanks for clarifying.

Replies from: Decius
comment by Decius · 2013-01-14T21:54:47.052Z · LW(p) · GW(p)

I can't understand the lack of local-universe privilege.

Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read "World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR" instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.

Now, suppose that a golden tablet appeared before me explicitly stating that Omega has threatened the world which created our simulation. However, we, the simulation, are able to alter the terms of this threat. If a selected resident (me) of Sim-Earth decides to destroy Sim-Earth, Meta-1 Earth will suffer no consequences other than one instance of an obsolete version of one of their simulations crashing. If I refuse, then Omega will roll a fair d6, and on a result of 3 or higher will destroy Meta-1 Earth, along with all of their simulations including mine.

Which is the consequentialist thing to do? (I dodge the question by not being consequentialist; I am not responsible for Omega's actions, even if Omega tells me how to influence him. I am responsible for my own actions.)

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2013-01-16T02:50:36.438Z · LW(p) · GW(p)

Which is the consequentialist thing to do?

Undefined. Legitimate and plausible consequentialist value systems can be conceived that go either way.

Replies from: Decius
comment by Decius · 2013-01-16T05:55:25.465Z · LW(p) · GW(p)

To prefer a 60% chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.

Clearly, however, such a preference either incurs local privilege, or it should be just as logical to prefer the 60% destruction of more than everything over the certain destruction of a different simulation, one that would never have interaction with the one that the agent experiences.

Replies from: wedrifid
comment by wedrifid · 2013-01-16T11:20:53.403Z · LW(p) · GW(p)

To prefer a 60% chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.

Yes, far from inconceivable and perhaps even held coherently by a majority of humans, but certainly different to mine. I have decidedly different preferences; in certain cases it's less than that. If I found I was in certain kinds of simulations I'd value my own existence either less or not at all.

Clearly, however, such a preference either incurs local privilege

Yes, it would (assuming I understand correctly what you mean by that).

Replies from: Decius
comment by Decius · 2013-01-16T20:46:53.878Z · LW(p) · GW(p)

I hadn't considered the angle that the simulation might be run by an actively hostile entity; in that case, destroying the hostile entity (ending the simulation) is the practical thing to do at the top layer, and also the desired result in the simulation (end of simulation rather than torture).

comment by TheOtherDave · 2013-01-14T22:43:22.316Z · LW(p) · GW(p)

Just to make sure I understand, let me restate your scenario: there's a world ("Meta-1 Earth") which contains a simulation ("Sim-Earth"), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there's a 50% chance of both Sim-Earth and Meta-1 Earth being destroyed. Right?

So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2), and destroy Sim-Earth iff V2/2 > V1.

You haven't said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there's some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.

I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that's a different question.

Replies from: Decius
comment by Decius · 2013-01-15T01:10:05.729Z · LW(p) · GW(p)

Subtle distinctions. We have no knowledge about Meta-1 Earth. We only have the types of highly persuasive but technically circumstantial evidence provided; Omega exists in this scenario and is known by name, but he is silent on the question of whether the inscription on the massive solid gold tablet is truthful. The doomsday button is known to be real.

What would evidence regarding the existence of M1E look like?

(Also: 4/6 chance of a 3 or higher. I don't think the exact odds are critical.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-15T01:45:55.754Z · LW(p) · GW(p)

Well, if there are grounds for confidence that the button destroys the world, but no grounds for confidence in anything about the Meta-1 Earth stuff, then a sensible decision theory chooses not to press the button.

(Oh, right. I can do basic mathematics, honest! I just can't read. :-( )

Replies from: Decius
comment by Decius · 2013-01-15T04:09:43.179Z · LW(p) · GW(p)

What would evidence for or against being in a simulation look like?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-15T04:41:10.530Z · LW(p) · GW(p)

I'm really puzzled by this question.

You started out by saying:

"Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read "World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR" instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws."

I was content to accept that supposition, not so much because I think I would necessarily be convinced of it by experiencing that, as because it seems plausible enough for a thought experiment and I didn't want to fight the hypothetical.

But now it sounds like you've changed the question completely? Or am I deeply confused? In any case, I've lost the thread of whatever point you're making.

Anyway, to answer your question, I'm not sure what would be compelling evidence for or against being in a simulation per se. For example, I can imagine discovering that physical constants encode a complex message under a plausible reading frame, and "I'm in a simulation" is one of the theories which accounts for that, but not the only one. I'm not sure how I would disambiguate "I'm in a simulation" from "there exists an intelligent entity with the power to edit physical constants" from "there exists an intelligent entity with the power to edit the reported results of measurements of physical constants." Mostly, I would have to accept I was confused and start rethinking everything I used to believe about the universe.

Replies from: Decius
comment by Decius · 2013-01-15T06:26:01.719Z · LW(p) · GW(p)

Here's a better way of looking at the problem: Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?

Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?

How can moral imperatives point towards things which are existence-agnostic?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-15T14:28:54.744Z · LW(p) · GW(p)

Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?

We may need to further define "realize." Supposing that it is possible to run a simulation which is indistinguishable from reality in the first place, it's certainly possible for something which develops within the simulation to believe it is in a simulation, just like it's possible for people in reality to do so.

Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?

Within a simulation that is indistinguishable from reality, it is of course not possible for a resident to distinguish the simulation from reality.

How can moral imperatives point towards things which are existence-agnostic?

I have no idea what this question means. Can you give me some examples of proposed moral imperatives that are existence-agnostic?

Replies from: Decius
comment by Decius · 2013-01-15T22:28:14.281Z · LW(p) · GW(p)

A moral imperative which references something which may or may not be exemplified; it doesn't change if that which it references does not exist.

"Maximize the density of the æther." is such an imperative.

"Include God when maximizing total utility." is the version I think you are using (with 'God' being the creator of the simulation; I think that the use of the religious referent is appropriate because they have the same properties.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-16T02:58:50.563Z · LW(p) · GW(p)

So, if I'm understanding you: when my father was alive, I endorsed "Don't kill your father." When he died I continued to endorse it just as I had before. That makes "Don't kill your father" a moral imperative which points towards something existence-agnostic, on your account... yes?

I have no idea what you're on about by bringing God into this.

Replies from: Decius
comment by Decius · 2013-01-16T05:49:10.973Z · LW(p) · GW(p)

No- because fathers exist.

"Maximize the amount of gyration and gimbling of slithy toves" would be a better example.

I'm using God as a shorthand for the people running the simulation. I'm not introducing anything from religion but the name for something with that power.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-16T13:04:14.132Z · LW(p) · GW(p)

OK; thanks for the clarification.

I don't think a moral imperative can meaningfully include a meaningless term.
I do think a moral imperative can meaningfully include a meaningful term whose referent doesn't currently exist in the world.

Also, it can be meaningful to make a moral assertion that depends on an epistemically unreachable state. For example, if I believe (for whatever reason) that I've been poisoned and that the pill in my hand contains an antidote, but in fact I haven't been poisoned and the pill is poison, taking the pill is in fact the wrong thing to do, even though I can't know that.

Replies from: Decius
comment by Decius · 2013-01-16T20:37:33.705Z · LW(p) · GW(p)

I prefer to have knowable morality- I must make decisions without information about the world, but only with my beliefs.

For example, it is wrong to pull the trigger of a gun aimed at an innocent person without knowing if it is loaded. The expected outcome is what matters, not the actual outcome.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-17T02:28:53.494Z · LW(p) · GW(p)

I prefer to have knowable morality- I must make decisions without information about the world, but only with my beliefs.

Well, I certainly agree that we make decisions based on our beliefs (I would also say that our beliefs are, or at least can be, based on information about the world, but I understand you here to be saying that we must make decisions without perfect information about the world, which I agree with).

That said, I think you are eliding morality and decision procedures, which I think elides an important distinction.

For example, if at time T1 the preponderance of the evidence I have indicates the pill is an antidote, and at some later time T2 the preponderance of the evidence indicates that the pill is poison, a sensible decision theory says (at T1) to take the pill and (at T2) not to take the pill.

But to say that taking the pill is morally right at T1 and not-taking the pill is morally right at T2 seems no more justified to me than to say that the pill really is an antidote at T1 and is poison at T2. That just isn't the case, and a morality or an ontology that says it is the case is simply mistaken. The pill is always poison, and taking the pill is therefore the wrong thing to do, whether I know it or not.

I guess you could say that I prefer that my morality, like my ontology, be consistent to having it be knowable.

Replies from: Decius
comment by Decius · 2013-01-17T17:17:33.999Z · LW(p) · GW(p)

So then it is nonsense to claim that someone did the right thing, but had a bad outcome?

If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?

Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?

I can't see the practical use of a system where the morality of a choice is very often unknowable.

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2013-01-17T18:52:55.738Z · LW(p) · GW(p)

Also, thinking about this some more:

Suppose I have two buttons, one red and one green. I know that one of those buttons (call it "G") creates high positive utility and the other ("B") creates high negative utility. I don't know whether G is red and B green, or the other way around.

On your account, if I understand you correctly, to say "pressing G is the right thing to do" is meaningless, because I can't know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?

On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don't know how to do it.

Replies from: Decius
comment by Decius · 2013-01-18T02:56:36.819Z · LW(p) · GW(p)

'Pressing a button' is one act, and 'pressing both buttons' and 'pressing neither button' are two others. If you press a button randomly, it isn't morally relevant which random choice you made.

What does it mean to choose between G and B, when you have zero relevant information?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-18T07:11:30.725Z · LW(p) · GW(p)

(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.

I have trouble believing that this is unclear; I feel at this point that you're asking rhetorical questions by way of trying to express your incredulity rather than to genuinely extract new knowledge. Either way, I think we've gotten as far as we're going to get here; we're just going in circles.

I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it's possible for there to be a right thing to do even when I don't happen to have any way of knowing what the right thing to do is... that it's possible to do something wrong out of ignorance. I understand you reject such a system, and that's fine; I'm not trying to convince you to adopt it.

I'm not sure there's anything more for us to say on the subject.

comment by TheOtherDave · 2013-01-17T17:36:24.842Z · LW(p) · GW(p)

So then it is nonsense to claim that someone did the right thing, but had a bad outcome?

Well, it's not nonsense, but it's imprecise.

One thing that can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that's not what you mean here, though; you mean it had a bad outcome overall.

Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that's what you mean here.

If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?

Again, the language is ambiguous:

Moral "should" - yes, I should assist iff my assistance will be successful (assuming that saving the person's life is a good thing).

Decision-theory "should" - I should assist if the expected value of my assistance is sufficiently high.

Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?

Assuming that winning the bet is moral, then betting irresponsibly was the morally right thing to do, though I could not have known that, and it was therefore an incorrect decision to make with the data I had.

Is it immoral to refuse an irresponsible bet that would have paid off?

Same reasoning.

I can't see the practical use of a system where the morality of a choice is very often unknowable.

All right.

Replies from: Decius
comment by Decius · 2013-01-18T02:53:25.893Z · LW(p) · GW(p)

Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that's what you mean here.

Again, not quite. It's possible for someone to accurately determine the expected results of a decision, and for the actual results to still vary significantly from the expected ones. Take a typical parimutuel gambling-for-cash scenario: the expected outcome is typically that the house gets a little richer and all of the gamblers get a little poorer. That outcome literally never happens, according to the rules of the game.
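
With made-up numbers, just to illustrate: ten gamblers each bet $10 on a different horse, and the house keeps 15% of the $100 pool. Assuming each gambler is equally likely to have picked the winner, each one's expected return is 0.1 × $85 = $8.50, i.e. an expected loss of $1.50, while the house expects to gain $15. But the actual outcome is always one gambler up $75 and nine gamblers down $10 each; nobody ever walks away having lost exactly the expected $1.50.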

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-18T07:15:55.677Z · LW(p) · GW(p)

I agree, but this seems entirely tangential to the points either of us were making.

comment by MugaSofer · 2013-01-14T17:30:31.934Z · LW(p) · GW(p)

Once again: why? Why privilege your simulation? Why not do the same for your planet? Your species? Your country? (Do you implement some of these?)

Replies from: Decius
comment by Decius · 2013-01-14T21:31:24.034Z · LW(p) · GW(p)

Because my simulation (if I am in one) includes all of my existence. Meanwhile, a simulation run inside this existence contains only mathematical constructs or the equivalent.

Surely you don't think that your mental model of me deserves to have its desires considered in addition to mine? You use that model of me to estimate what I value, which enters into your utility function. To also include the model's point of view is double-counting the map.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-15T09:59:41.889Z · LW(p) · GW(p)

My "mental model of you" consists of little more than a list of beliefs, which I then have my brain pretend it believes. In your case, it is woefully incomplete; but even the most detailed of those models are little more than characters I play to help predict how people would really respond to them. My brain lacks the knowledge and computing power to model people on the level of neurons or atoms, and if it had such power I would refuse to use it (at least for predictive purposes.)

OTOH, I don't see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don't have qualia? Do you think they don't have souls? Do you think they are exactly the same as you, but don't care?

Replies from: Decius
comment by Decius · 2013-01-15T22:35:01.514Z · LW(p) · GW(p)

Does Dwarf Fortress qualify as a simulation? If so, is there a moral element to running it?

Does f'(), which is the perfect simulation function f(), modified such that a cake appears in my cupboard every night, qualify?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-16T14:44:41.934Z · LW(p) · GW(p)

Dorfs aren't conscious.

To reiterate:

I don't see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don't have qualia? Do you think they don't have souls? Do you think they are exactly the same as you, but don't care?

Replies from: Decius
comment by Decius · 2013-01-16T20:21:16.768Z · LW(p) · GW(p)

Ok, entities which exist only in simulation aren't conscious. (or, if I am in a simulation, there is some characteristic which I lack which makes me irrelevant to the upstream entity.)

That seems to be a pretty clear answer to your questions.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-18T12:26:12.225Z · LW(p) · GW(p)

No, it's really not.

What is this mysterious characteristic? How do you know about it? Is it possible to create a sim that does have it? Why should I care whether someone has this characteristic, if they act just as intelligent and conscious as you do, and inspection of the source code reveals that they do so for the same reasons you do?

Replies from: Decius
comment by Decius · 2013-01-19T01:41:47.805Z · LW(p) · GW(p)

You were able to claim that one set of simulated individuals wasn't conscious, but didn't say how others were different.

What does it mean to inspect the source code of the universe, or of the simulation from within the simulation?

And the core difference seems to be that I don't think simulated people can be harmed in the same sense that physical people can be harmed, and you disagree. Is that an apt summary as you see it?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-19T16:24:54.963Z · LW(p) · GW(p)

You were able to claim that one set of simulated individuals wasn't conscious, but didn't say how others were different.

Eh? A dorf is just a few lines of code. If you built a robot with the same thought process, it wouldn't be conscious either.

What does it mean to inspect the source code of the universe, or of the simulation from within the simulation?

No, you inspect the source code of the simulation, and check it does things for the same reason as the physical version.

Although there's no reason, in theory, why a sim couldn't read its own source code, so I'm not sure I understand your objection.

And the core difference seems to be that I don't think simulated people can be harmed in the same sense that physical people can be harmed, and you disagree. Is that an apt summary as you see it?

You claimed to know (a priori?) that any level of simulation below your own would be different in such a way that we shouldn't care about their suffering. You refused to state what this difference was, how it came about, or indeed to answer any of my questions, so I don't know how much we disagree. I don't know what your position is at all.

  • A matrix lord copies you. Both copies are in the same layer of simulation you currently occupy. Is the copy a person? Is it you?
  • A matrix lord copies your friend, Bob. Is the copy still Bob?
  • A matrix lord copies you. The copy is in another simulation, but one no "deeper" than this one. Is the copy a person? Is it you?
  • A matrix lord copies you. The copy is one layer "deeper" than this one. Is the copy a person? Is it you?
  • A matrix lord copies your friend Bob. Is the copy a person? Is it Bob?
  • A matrix lord copied you, without your knowledge. You are one layer deeper than the original. Are you a person? Are you still "you"?
  • You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
  • A matrix lord scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
  • A matrix lord copies your brain. The copy is one layer deeper than the original. They connect a robot in your original layer to the simulation. Is the result a person? Is it you?
  • A matrix lord tortures you for a thousand years. Is this wrong, in your estimation?
  • A matrix lord tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
  • A matrix lord tortures you for a thousand years, then resets the program to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
  • A matrix lord tortures you for a thousand years, then resets the program. Is this wrong in your estimation?
Replies from: Decius
comment by Decius · 2013-01-19T17:06:57.362Z · LW(p) · GW(p)

How many lines of code are required for a sim to be conscious? "A few" is too few, and even "an entire universe" is still too few if the simulated universe is too different from our own. I say no amount is adequate.

A perfect copy that appears by magic begins identical with the original, but the descendant of the copy is not identical with the descendant of the original.

It's possible to have a universal Turing machine without having the code which, when run on that machine, runs the universal Turing machine. If there exists more than one UTM, it is impossible to tell by looking at the output which one is running a given program. Similarly, examining the source of a simulation also requires knowing the physics of the world in which the simulation runs.

For all of your torture suggestions, clarify. Do you mean "a matrix lord edits a simulation of X to be torture. Does this act violate the morality of the simulation?"? It's currently unclear where the victim is in relation to both the torturer and the judgement.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-20T15:20:31.113Z · LW(p) · GW(p)

How many lines of code are required for a sim to be conscious?

Funny.

A perfect copy that appears by magic begins identical with the original, but the descendant of the copy is not identical with the descendant of the original.

Even if the copy is a sim?

Similarly, examining the source of a simulation also requires knowing the physics of the world in which the simulation runs.

Um, yes. Hence "in theory".

For all of your torture suggestions, clarify. Do you mean "a matrix lord edits a simulation of X to be torture. Does this act violate the morality of the simulation?"? It's currently unclear where the victim is in relation to both the torturer and the judgement.

Torture means inflicting large amounts of pain. The precise method may be assumed to be one that does not interfere with the question (e.g. they haven't been turned into anti-orgasmium).

Where questions touch on morality, I stated whose morality was being referred to. Bear in mind that different questions may ask different things. The same applies to "where the victim is in relation to both the torturer and the judgement."

Replies from: Decius
comment by Decius · 2013-01-20T20:37:57.465Z · LW(p) · GW(p)

Should I conclude that 'a matrix lord' is altering a simulation, and I, the judge, am in the next cubicle? If so, he is doing amoral math and nobody cares. The simulation might simulate caring, but that can be modified out without terminating the simulation, because it isn't real.

There is a difference between simulating something and doing it, regardless of the accuracy of the simulation.

Confirm that you don't think a Turing machine can be or contain consciousness?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-21T09:03:59.884Z · LW(p) · GW(p)

Should I conclude that 'a matrix lord' is altering a simulation, and I, the judge, am in the next cubicle?

A matrix lord refers to one of the simulators who control this layer. In other words, they are one layer higher than you, and basically omnipotent from our perspective.

If so, he is doing amoral math and nobody cares. The simulation might simulate caring, but that can be modified out without terminating the simulation, because it isn't real.

A superintelligence could modify you so you stop caring. I'm guessing you wouldn't be OK with them torturing you?

There is a difference between simulating something and doing it, regardless of the accuracy of the simulation.

What difference?

Confirm that you don't think a Turing machine can be or contain consciousness?

Why ... why would I think that? I'm the one defending sim rights, remember?

Replies from: Decius
comment by Decius · 2013-01-21T09:34:01.044Z · LW(p) · GW(p)

A matrix lord refers to one of the simulators who control this layer. In other words, they are one layer higher than you, and basically omnipotent from our perspective.

In other words, the matrix lord IS the laws of physics. They exist beyond judgement from this layer.

What difference?

Actual things are made out of something besides information; there is a sense in which concrete things exist and abstract things (like simulations) don't exist.

Why ... why would I think that? I'm the one defending sim rights, remember?

Because that position requires that either a set of numbers or every universal Turing machine is conscious and capable of experiencing harm. Plus if a simulation can be conscious you need to describe a difference between a conscious sim and a dorf. Both of them are mathematical constructs, so your original objection is invalid. Dorfs have souls, noted as such in the code; how are the souls of sims qualitatively different?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-21T14:09:15.227Z · LW(p) · GW(p)

I don't claim to have a perfect nonperson predicate; I'm attacking yours as excluding entities that are clearly conscious.

In other words, the matrix lord IS the laws of physics.

Well, they can manipulate them. I'll specify that they are roughly equivalent to a sim of a human at the same level, if that helps.

They exist beyond judgement from this layer.

Could you just go down the list and answer the questions?

Actual things are made out of something besides information; there is a sense in which concrete things exist and abstract things (like simulations) don't exist.

Is there? Really? A sim isn't floating in platonic space, you know.

Because that position requires that either a set of numbers or every universal Turing machine is conscious and capable of experiencing harm.

A set of numbers can't be conscious. A set of numbers interpreted as computer code and run can be. Or, for that matter, interpreted as genetic code and cloned.

Plus if a simulation can be conscious you need to describe a difference between a conscious sim and a dorf.

As noted above, I don't claim to have a perfect nonperson predicate. However, since you ask: a sim is doing everything the original (who I believe was probably as conscious as I am, based on their actions and brainscans) was - when they see a red ball, virtual neurons light up in the same patterns the real ones did; when they talk about experiencing qualia of a red ball, the same thoughts run through their mind, and if I were smart enough to decode them from real neurons I could decode them from virtual ones too.

Both of them are mathematical constructs, so your original objection is invalid.

And both humans and rocks (or insects) are physical constructs. My objection is not that it is a mathematical construct, but that it is one too simple to support the complexity of conscious thought.

Dorfs have souls, noted as such in the code; how are the souls of sims qualitatively different?

Eh? Writing "soul" on something does not a person make. Writing "hot" on something does not a fire make, either.

Replies from: Decius, Decius
comment by Decius · 2013-01-22T01:05:12.402Z · LW(p) · GW(p)

Now that I have the time and capability:

The laws of physics copies you. Both copies are next to you. Is the copy a person? Is it you?

It is a person, and it is as much me as I am - it has a descendant one tick later which is as much me' as the descendant of the other copy. Here we hit the ship problem.

The laws of physics copies your friend, Bob. The copy is next to Bob. Is the copy still Bob?

The descendant of the copy is as much Bob as the descendant of the original.

The laws of physics copies you. The copy exists in a universe with different rules. Is the copy a person? Is it you?

That depends on the rules of the simulation in which the copy exists; assuming it is only the starting condition which differs, the two are indistinguishable.

Duplicate

Duplicate

You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?

If 'person' is understood to mean 'sentient', then I conclude that the robot is a person. If 'person' is understood to mean 'human', then I conclude that the robot is a robot.

The laws of physics scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?

Assuming that the premise is possible, and assuming that life support is also maintained identically (the brain of the robot has identical blood flow through it, which requires that the robot brain have a physically identical structure): the robot is as sentient as I am, and its decisions are defined to be identical to mine. It is not as much me as the direct descendant of me is.

The laws of physics causes a simulating machine with a copy of your brain to appear. They connect a robot to the simulation. Is the result a person? Is it you?

Assuming that the robot uses a perfect simulation of a brain instead of a real one, it is as sentient as if it were using a brain. It is not identical with the previous robot nor with me.

The laws of physics tortures you for a thousand years. Is this wrong, in your estimation?

Undesirable. Since the matrix lord does not make decisions in any context I am aware of, it can't be wrong.

The laws of physics tortures your friend Bob for a thousand years. Is this wrong, in your estimation?

Ditto

The laws of physics tortures you for a thousand years, then the universe returns to a state identical to the state prior to the torture. Is the result the same person who was tortured? Is the result the same person as before the torture?

'Same' has lost meaning in this context.

The laws of physics tortures you for a thousand years, then the universe returns to a state identical to the state prior to the torture. Is this wrong in your estimation?

I can't tell the difference between this case and any contrary case; either way, I observe a universe in which I have not yet been tortured.

comment by Decius · 2013-01-21T20:38:31.406Z · LW(p) · GW(p)

It is not always meaningful to refer to 'human' when referencing a different level. What is a matrix lord, and how do I tell the difference between a matrix lord and physics?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-22T09:34:19.487Z · LW(p) · GW(p)

Well, a matrix lord can talk, assume a human shape, respond to verbal requests, etc., as well as modify the current laws of physics (including stuff like conjuring a seat out of thin air, which is more like using physical law that was clearly engineered for their benefit).

However, the questions are meant to be considered in the abstract; please assume you know with certainty that this occurred, for simplicity.

  • A matrix lord copies you. Both copies are in the same layer of simulation you currently occupy. Is the copy a person? Is it you?
  • A matrix lord copies your friend, Bob. Is the copy still Bob?
  • A matrix lord copies you. The copy is in another simulation, but one no "deeper" than this one. Is the copy a person? Is it you?
  • A matrix lord copies you. The copy is one layer "deeper" than this one. Is the copy a person? Is it you?
  • A matrix lord copies your friend Bob. Is the copy a person? Is it Bob?
  • A matrix lord copied you, without your knowledge. You are one layer deeper than the original. Are you a person? Are you still "you"?
  • You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
  • A matrix lord scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
  • A matrix lord copies your brain. The copy is one layer deeper than the original. They connect a robot in your original layer to the simulation. Is the result a person? Is it you?
  • A matrix lord tortures you for a thousand years. Is this wrong, in your estimation?
  • A matrix lord tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
  • A matrix lord tortures you for a thousand years, then resets the program to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
  • A matrix lord tortures you for a thousand years, then resets the program. Is this wrong in your estimation?
Replies from: Decius
comment by Decius · 2013-01-23T02:41:33.159Z · LW(p) · GW(p)

The matrix lord can cause a person to poof into (or out of) existence, but the person so created is not a matrix lord. If the matrix lord is communicating with me (for example, by editing the air density in the room to cause me to hear spoken words, or by editing my brain so that I hear the words, or by editing my brain so that I simply believe it), the edits used by the lord are different from him.

I don't see what the distinction is between "Objects have now accelerated toward each other by an amount proportional to the product of their masses divided by the cube of the distance between them" and "There is now a chair here." Both are equally meaningful as 'physical law'.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-23T14:02:21.182Z · LW(p) · GW(p)

Fair enough. Your evidence that the Matrix Lord exists is probably the laws of physics being changed in ways that appear to be the work of intelligence and that convey information claiming to be from a Matrix Lord.

Or they could have edited your brain to think so; the point is that you are reasonably certain that the events described in the question actually happened.

Replies from: Decius
comment by Decius · 2013-01-23T17:01:00.091Z · LW(p) · GW(p)

So, I am reasonably certain that I am (part of?) a number which is being processed by an algorithm.

That breaks all of my moral values, and I have to start again from scratch.

Cop-out: I decide whatever the matrix lord chooses for me to decide.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-24T12:37:26.631Z · LW(p) · GW(p)

Fair enough.

How about if it's Omega, and you're real as far as you can tell:

  • Omega duplicates you. Is the copy a person? Is it you?
  • Omega duplicates your friend, Bob. Is the copy still Bob?
  • Omega simulates you. Is this sim a person? Is it you?
  • Omega duplicates your friend Bob. Is the copy a person? Is it Bob?
  • Omega copied you, without your knowledge. You are actually in a simulation. Are you a person? Are you still "you"?
  • You meet a robot. As far as you can tell, it is as sentient as your friend Bob. Is the robot a person?
  • Omega scans your brain, simplifies it down to pure math (as complex as required to avoid changing how anything behaves) and programs this into the brain of a robot. Is this robot a person? Is it you?
  • Omega scans your brain. He then simulates it. Then he connects a (real) robot to the simulation. Is the result a person? Is it you?
  • Omega tortures you for a thousand years. Is this wrong, in your estimation?
  • Omega tortures your friend Bob for a thousand years. Is this wrong, in your estimation?
  • Omega tortures you for a thousand years, then "resets" you with nanotech to before the torture began. Is the result the same person who was tortured? Is the result the same person as before the torture?
  • Omega tortures you for a thousand years, then "resets" you with nanotech. Is this wrong in your estimation?

And a new one, to balance out the question that required you to be in a sim:

  • Omega scans you and simulates you. The simulation tells you that it's still conscious, experiences qualia, etc., and admits this seems to contradict its position on the ethics of simulations. Do you change your mind on anything?
Replies from: Decius
comment by Decius · 2013-01-24T21:12:32.373Z · LW(p) · GW(p)

Define "duplicates", "original", and "same" well enough to answer the Ship of Theseus problem.

Can I summarize the last question as "Omega writes a computer program which outputs 'I am conscious, experience qualia, etc., and this contradicts my position on the ethics of simulations'"?

If not, what additional aspects need be included? If so, the simulation is imperfect because I do not believe that such a contradiction would be indicated.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-25T09:14:39.784Z · LW(p) · GW(p)

Can I summarize the last question as "Omega writes a computer program which outputs 'I am conscious, experience qualia, etc., and this contradicts my position on the ethics of simulations'"?

Well, it talks to you first. IDK what you would talk about with a perfect copy of yourself, but it says what you would expect an actual conscious copy to say (because it's a perfect simulation.)

If so, the simulation is imperfect because I do not believe that such a contradiction would be indicated.

You don't think finding yourself as a conscious sim would indicate sims are conscious? Because I assumed that's what you meant by

So, I am reasonably certain that I am (part of?) a number which is being processed by an algorithm.

That breaks all of my moral values, and I have to start again from scratch.

Replies from: Decius
comment by Decius · 2013-01-25T16:54:54.101Z · LW(p) · GW(p)

Well, it talks to you first. IDK what you would talk about with a perfect copy of yourself, but it says what you would expect an actual conscious copy to say (because it's a perfect simulation.)

So, it passes the Turing test, as I adjudicate it? It's a simulation of me which sits at a computer and engages with me over the internet?

When I tell you that you are the copy of me, and prove it without significantly changing the conditions of the simulation or breaking the laws of physics, I predict that you will change your position. Promptly remove the nearest deck of cards from its pack and throw it against the ceiling fairly hard. All of the black cards, and only the black cards, will land face up.

You don't think finding yourself as a conscious sim would indicate sims are conscious?

When I recognize that numbers in general are conscious entities which experience all things simultaneously (proof: consider the set of all universal Turing machines; select a UTM which takes this number as input and simulates a world with some set of arbitrary conditions), I stop caring about conscious entities and reevaluate what is and is not an agent.

comment by OrphanWilde · 2013-01-09T17:05:07.946Z · LW(p) · GW(p)

I think I see where you're going with this, but your presentation actually leads somewhere else entirely; you discuss your point in the introduction and the conclusion, but you invest the majority of the weight of your argument into a place where your point is nowhere to be found.

Your argument, as presented, seems to be along these lines: Suppose there is one person, and one person is tortured. That's really important. Suppose there are a billion people, and one of them is tortured. That's not very important.

What I think you're getting at is more along these lines: Suppose there is one person, and one person is tortured; that is extremely important to that person. Suppose there are a billion copies of that one person, and one of them is tortured. Even a slight benefit arising from a decision leading to that torture may outweigh the badness of the torture, by virtue of the fact that the benefit has been reproduced a billion minus one times.

In other words, your presentation conflates the number of people with the importance of something bad happening to one of them. You don't discuss potential rewards at all; it's just, this torture is happening. Torture is equally bad regardless of the percentage of the population that is being tortured (given a specific number of people that are being tortured, I mean); we shouldn't care less about torture merely because there are more people who aren't being tortured. Whereas your actual point, as hinted at, is that, for some group that gains utility per individual as a result of the decision that results in that torture, the relative badness of that decision is dependent on the size of the group.

Or, in other words, you seem to be aiming for a discussion of dustmotes in eyes compared to torture of one person, but you're forgetting to actually discuss the dustmotes.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T10:06:21.902Z · LW(p) · GW(p)

Your argument, as presented, seems to be along these lines: Suppose there is one person, and one person is tortured. That's really important. Suppose there are a billion people, and one of them is tortured. That's not very important.

No. My argument is as follows: suppose there is one person, duplicated a billion times. These copies are identical; they are the same person. Suppose one copy is deleted. This is equivalent to one-in-a-billion odds of killing all of them. Furthermore, this holds for torture. Assuming this argument holds (and I have yet to see a reason it shouldn't), then the scenario at the bottom is a Good Thing.

However, if you consider it in terms of rewards adding up, then surely the trillions of copies of the society at the end receive enough utility to outweigh the disutility of the few copies getting tortured?
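
To put made-up numbers on the reality-fluid accounting I'm assuming: if the society is simulated N = one trillion times, each copy carries weight 1/N. Torture with disutility D occurring in one copy then costs D/10^12, and deleting one copy of a person is the same bet as a one-in-a-trillion risk of killing them. Meanwhile, any benefit b to the Really Real people is weighted at full strength, so the trade comes out positive whenever b > D/10^12 - which, for a large enough N, almost any whim satisfies.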

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-10T12:47:30.500Z · LW(p) · GW(p)

Would you do me a favor and reread my comment?

The point wasn't that that was what you were trying to say; my point was that that was the most natural way to interpret what you were actually saying. Hence my comment at the end - you seem to be trying to raise a dustmotes-in-the-eye-versus-torture argument, but you never actually discuss the dustmotes. Your comments invest a heavy amount of weight in the torture aspect of the argument (remember, that's already an emotionally charged concept), and then you never discuss the utility that actually comes out of it. Allow me to elaborate:

Suppose we vivisect an entire universe full of simulated people. If there are enough people, it might not matter; the utility might outweigh the costs.

That's what your thread is right now. The reader is left baffled as to what utility you could possibly be referring to; are we referring to the utility some lunatic gets from knowing that there are people getting vivisected? And are we disregarding the disutility of the people getting vivisected? Why is their disutility lower because there are more people in the universe? Does the absolute importance of a single person decrease relative to the absolute number of people?

You don't discuss the medical knowledge, or whatever utility everybody else is getting, from these vivisections.

Are you familiar with the thought experiment I'm referring to with dustmotes versus torture?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T13:11:25.512Z · LW(p) · GW(p)

Allow me to elaborate:

Suppose we vivisect an entire universe full of simulated people. If there are enough people, it might not matter; the utility might outweigh the costs.

That's what your thread is right now. The reader is left baffled as to what utility you could possibly be referring to; are we referring to the utility some lunatic gets from knowing that there are people getting vivisected? And are we disregarding the disutility of the people getting vivisected? Why is their disutility lower because there are more people in the universe? Does the absolute importance of a single person decrease relative to the absolute number of people?

You don't discuss the medical knowledge, or whatever utility everybody else is getting, from these vivisections.

Ah.

I thought I made that clear:

If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation are not real. Their needs and desires are worth, not nothing, but far less then the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but do not matter.

I think I may have laid too much emphasis on the infinitesimally small Reality of the victims, as opposed to the Reality of the citizens.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-01-10T13:25:41.119Z · LW(p) · GW(p)

I'm puzzled as to why they should matter less.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T13:42:28.952Z · LW(p) · GW(p)

Because they are less.

Replies from: OrphanWilde, OrphanWilde
comment by OrphanWilde · 2013-01-10T15:52:35.142Z · LW(p) · GW(p)

Retracted last comment because I realized I was misreading what you were saying.

Let me approach this from another direction:

You're basically supposing that 1/N odds of being tortured is morally equivalent to 1/N odds of being tortured with an implicit guarantee that somebody is going to get tortured. I think it is consistent to regard a 1/N chance (for some sufficiently large N) of me being tortured as less important than one person in N actually being tortured.

If you create a precise duplicate of the universe in a simulation, I don't regard that we have gained anything; I consider that two instances of indistinguishable utility aren't cumulative. If you create a precise duplicate of me in a simulation and then torture that duplicate, utility decreases.

This may seem to be favoring "average" utility, but I think the distinction is in the fact that torturing an entity represents, not lower utility, but disutility; because I regard a duplicate universe as adding no utility, the negative utility shows up as a net loss.

I'd be hard-pressed to argue about the "indistinguishability" part, though I can sketch where the argument would lie; because utility exists as a product of the mind, and duplicate minds are identical from an internal perspective, an additional indistinguishable mind doesn't add anything. Of course, this argument may require buying into the anthropic perspective.
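
A rough way to write down the contrast (a sketch, not a proof): on the reality-fluid view, torturing one of N identical copies costs roughly D/N, which shrinks toward zero as N grows. On my view, the N - 1 untouched copies contribute no additional utility - indistinguishable instances don't stack - so the tortured copy still costs the full D no matter how large N gets. The two accounting rules agree when N = 1 and come apart everywhere else.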

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T13:15:23.859Z · LW(p) · GW(p)

If you create a precise duplicate of the universe in a simulation, I don't regard that we have gained anything; I consider that two instances of indistinguishable utility aren't cumulative. If you create a precise duplicate of me in a simulation and then torture that duplicate, utility decreases.

This may seem to be favoring "average" utility, but I think the distinction is in the fact that torturing an entity represents, not lower utility, but disutility; because I regard a duplicate universe as adding no utility, the negative utility shows up as a net loss.

I'm basically assuming this reality-fluid stuff is legit for the purposes of this post. I included the most common argument in its favor (the probability argument), but I'm not setting out to defend it; I'm just exploring the consequences.

comment by OrphanWilde · 2013-01-10T14:54:34.421Z · LW(p) · GW(p)

Why?

If you're in a simulation right now, how would you feel about those running the machine simulating you? Do you grant them moral sanction to do whatever they like with you, because you're less than them?

I mean, maybe you're here as a representative of the people running the machine simulating me. I'm not sure I like where your train of thought is going, in that case.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T13:17:00.867Z · LW(p) · GW(p)

I mean, maybe you're here as a representative of the people running the machine simulating me

Honestly, I would have upvoted just for this bit.