Comment by xxd on Not Taking Over the World · 2012-01-27T18:20:56.696Z · LW · GW

Could reach the same point.

Said Eliezer agent is programmed genetically to value his own genes and those of humanity.

An artificial Eliezer could reach the conclusion that humanity is worth keeping, but it is by no means obliged to come to that conclusion. On the contrary, it is genetics that determines that at least some of us humans value the continued existence of humanity.

Comment by xxd on Not Taking Over the World · 2012-01-27T18:07:11.619Z · LW · GW

This is a cliché that may be false, but it's assumed true here: "Power corrupts and absolute power corrupts absolutely".

I wouldn't want anybody to have absolute power, not even myself. The only use I would want to make of absolute power would be to stop any evil person from getting it.

To my mind evil = coercion and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is the least evil I believe.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejecting power is less corrupting - whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.

Comment by xxd on Not Taking Over the World · 2012-01-27T17:59:08.790Z · LW · GW

Now this is the $64 google-illion question!

I don't agree that the null hypothesis - take the ring and do nothing with it - is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes as one use case humanity's extinction (but is not limited to extinction, obviously, because being converted into zimboes isn't technically extinction).

Nobody can deny that the likes of Gaddafi exist in the human population: those who are interested in being the total boss of others (even though they add no value to the lives of others), to the extent that they are willing to kill to maintain their boss position.

I would define these people as evil, or as having evil intent. I would thus state that under no circumstances would I want somebody like this to grab the ring of power, and so I would be compelled to grab it myself.

The conundrum is that I fit the definition of evil myself. Though I don't seek power to coerce as an end in itself I would like the power to defend myself against involuntary coercion.

So I see a Gaddafi equivalent go to grab the ring and I beat him to it.

What do I do next?

Well, I can't honestly say that I have the right to kill the millions of Gaddafi equivalents, but I think that on average they contribute net negative utility to humanity.

I'm left, however, with the nagging suspicion that under certain circumstances, Gaddafi-type figures might be beneficial to humanity as a whole. Consider: crowdsourcing the majority of political decisions would probably satisfy the average utility function of humanity. It's fair on average, but not to everybody. We have almost such a system today (even though it's been usurped by corporations). But in times of crisis, such as during war, it's more efficient to have rapid decisions made by a small group of "experts" combined with those willing to make ruthless decisions, so we can't simply kill off the Gaddafis.

What is therefore optimal in my opinion? I reckon I'd take all the Gaddafis off planet and put them in simulations to be recalled only at times of need and leave sanitized nice people zimbo copies of them. Then I would destroy the ring of power and return to my previous life before I was tempted to torture those who have done me harm in the past.

Comment by xxd on The Bedrock of Fairness · 2012-01-26T22:42:57.267Z · LW · GW

Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.

If any pie is left over, they go through the process again on the remainder, ad infinitum, until an approximation of all of the pie has been eaten.
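The round-robin scheme above can be sketched in code. This is a minimal sketch, and every name and the one-third decision rule are illustrative assumptions, not anything from the original comment: each person decides the next person's share of whatever remains, and the process repeats on the remainder until almost all of the pie is allocated.

```python
# Hypothetical sketch of the circular allocation scheme described above.
# Each decider assigns the *next* person's share from the remaining pie;
# leftovers are re-divided in subsequent rounds.

def allocate(pie, deciders, rounds=100, eps=1e-12):
    shares = [0.0] * len(deciders)
    remaining = pie
    for _ in range(rounds):
        for i, decide in enumerate(deciders):
            grant = decide(remaining)                  # person i decides...
            shares[(i + 1) % len(deciders)] += grant   # ...person i+1's share
            remaining -= grant
        if remaining < eps:   # "an approximation of all of the pie has been eaten"
            break
    return shares, remaining

# Example: Xannon, Zaire and Yancy each grant a third of whatever remains.
shares, left = allocate(1.0, [lambda r: r / 3] * 3)
```

With everyone granting a third of the remainder, the leftover shrinks geometrically each round, so the process converges quickly rather than literally running ad infinitum.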

Comment by xxd on Welcome to Less Wrong! · 2011-12-27T21:31:01.782Z · LW · GW

Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.

My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.

Thanks for the discussion.

Comment by xxd on Welcome to Less Wrong! · 2011-12-27T18:31:32.983Z · LW · GW

Thanks for the suggestion. Yes, I've already read it (Steel Beach). It was OK, but it didn't really touch much on our points of contention as such. In fact I'd say it steered clear of them, since there wasn't really the concept of uploads etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then."

OK, I have to say that now I've thought it through, I think this is a straw man argument: "you're not the same as you were yesterday" is used as a pretext for saying that a copy is exactly the same as you from one moment to the next. It misses the point entirely.

Although you are legally the same person, it's true that you're not exactly physically the same person today as you were yesterday and it's also true that you have almost none of the original physical matter or cells in you today as you had when you were a child.

That this is true in no way negates the main point: human physical existence at any one point in time does have continuity. I have some of the same cells I had up to about seven to ten years ago. I have some inert matter in me from the time I was born AND I have continual memories to a greater or lesser extent. This is directly analogous to my position that I posted before about a slow hybridizing transition to machine form before I had even clearly thought this out consciously.

Building a copy of yourself and then destroying the original has no continuity. It's directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and is patently not the same concept as normal human existence. Not even close.

That you and some others might dismiss the differences is fine, and if you hypothetically wanted to kill yourself so that a copy of your mind-state could exist indefinitely, I'd have no problem with that; but it's patently not the same as the process you, I and everyone else go through on a day-to-day basis. It's a new thing. (Although it's already been tried in nature, as the asexual budding process of bacteria.)

I would appreciate, however, that if that choice is being offered to others, it is clearly explained to them what is happening: i.e. physical body death and a copy being resurrected, not that they themselves continue living, because they do not. Whether you consider it irrelevant is beside the point. Volition is very important, but I'll get to that later.

"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)"

That's directly analogous to the many-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies then you go on existing. Fine, but it's not me. I'm mortal and always will be, even if some virtual copy of me might not be. So you guessed correctly: unless we're using some different definition of "person" (which is likely, I think), the person did not survive.

"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)"

Volition has everything to do with it. While it's true that volition is independent of whether they have died or not (agreed), the reason it's important is that some people will likely use your position to justify forced destructive scanning at some point, because it's "less wasteful of resources" or some other pretext.

It's also particularly important in the case of an AI over which humanity would have no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the originals, and it needs the space for its purposes, then there is nothing to stop it from just going ahead - unless volition is considered to be important.

Here's a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?

So here's a scenario for you given that you think information is the only important thing: Do you consider a person who has lost much of their memory to be the same person? What if such a person (who has lost much of their memory) then has a backed up copy of their memories from six months ago imprinted over top. Did they just die? What if it's someone else's memories: did they just die?

Here's yet another scenario. I wonder if you have thought about this one: scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mind-state into that clone.

Ask yourself this question: How many deaths have taken place here?

Comment by xxd on Welcome to Less Wrong! · 2011-12-22T20:16:19.525Z · LW · GW

Other stuff:

"Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding."

OK good to know. I'll have other questions but I need to mull it over.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then." I agree with this but I don't think it supports your line of reasoning. I'll explain why after my meeting this afternoon.

"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)" Interesting. I have a contrary line of argument which I'll explain this afternoon.

"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)" Disagree. Again I'll explain why later.

"A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is "no," since the duplicate isn't them; they stopped existing just as they desired." Maybe. If you have destructively scanned them then you have killed them, so they now no longer exist; on that count you have complied perfectly with their wishes, from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract giving you the right to make copies? Do they even own that right to make copies? I don't know.

What I can say is that our differences in opinion here would make a superb science fiction story.

Comment by xxd on Welcome to Less Wrong! · 2011-12-22T18:40:28.617Z · LW · GW

Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.

Here's a thought experiment for you to outline the difference (whether or not you think it makes sense from your position of valuing only the information): Let's say you could slowly transfer a person into an upload by the following method. You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate), that can interface directly with the remaining neurons.

Am I dead? Yes but not all of me is and we're now left with a hybrid being. It's not completely me, but I've not yet been killed by the process and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn't me).

Gradually over a process of time (let's say years rather than days or minutes or seconds) all of the parts of the brain are replaced.

At the end of it I'm still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.

Now I know the position you'd take is that speeding that process up is mathematically equivalent.

It isn't from my perspective. I'm dead instantly and I don't get the chance to transition my existence in a meaningful way to me.

Sidetracking a little: I suspect you were comparing your unknown quantity X to some kind of "soul". I don't believe in souls. I value being alive, having experiences, and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and then copied my information content into some kind of magical "spirit being" is exactly identical to deconstructing me - killing me - and making a copy, even if I took the position that the reconstructed being on "the last day" was me. Which I don't. As soon as I die, that's me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were possible).

Comment by xxd on Welcome to Less Wrong! · 2011-12-22T18:17:54.908Z · LW · GW

EDIT: Yes, you did understand, though I can't personally say I'm willing to come out and definitively call the X a red herring - although it sounds like you are willing to do so.

I think it's an axiomatic difference Dave.

It appears from my side of the table that you're starting from the axiom that the information is all that's important, and that originality and/or the physical existence carrying that information means nothing.

And you're dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (and the position is different and the atoms are different - though unidentifiably so, other than saying that the count is 2x rather than x of atoms), it's impossible to dismiss the question "Am I dying when I do this?", because you are making a lossy copy even from your standpoint. The only get-out clause is to say "it's a close enough copy, because the quantum states and position are irrelevant, because we can't measure the difference between atoms in two identical copies on the macro scale other than saying we've now got 2x the same atoms whereas before we had 1x."

It's exactly analogous to a bacterium budding. The original cell dies and something close to an exact copy is budded off. If the daughter bacterium were an exact copy of the information content of the original, then from your position you'd have to say it's the same bacterium and the original is not dead, right? Or maybe you'd say that it doesn't matter that the original died.

My response to that argument (if that is the line of reasoning you take - is it?) would be that it matters volitionally: if the original didn't want to die and it was forced to bud, then it's been killed.

Comment by xxd on Welcome to Less Wrong! · 2011-12-22T16:33:32.763Z · LW · GW

"Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?"

OK, I've mulled your question over and I think I've now grasped the subtlety of what you are asking, as distinct from the slight variation of it that I answered.

Since I value my own life I want to be sure that it's actually me that's alive if you plan to kill me. Because we're basically creating an additional copy really quickly and then disposing of the original I have a hard time believing that we're doing something equivalent to a single copy walking through a gate.

I don't believe that the information by itself is enough to answer the question "Is it the original me?" in the affirmative. And given that it's not even all of the information (though it is all of the information on the macro scale), I know for a fact we're doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether A == B, but since I know from physics that the two are not exactly equivalent once you go down to the quantum level, I just can't buy into it - though things would be murkier if the quantum states were provably identical.

Does that answer your question?

Comment by xxd on Welcome to Less Wrong! · 2011-12-22T16:25:00.941Z · LW · GW

I guess from your perspective you could say that the value of being the original doesn't derive from anything and is just a primitive, because the macro information is the same except for position (though the quantum states are all different, even at the point of copy). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.

Yes, if I woke up tomorrow and you could convince me I was just a copy then this is something I have already modeled/daydreamed about and my answer would be: I'd be disappointed that I wasn't the original but glad that I had existence.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T21:59:40.827Z · LW · GW

Thanks Dave. This has been a very interesting discussion and although I think we can't close the gap on our positions I've really enjoyed it.

To answer your question "what do I value?": I think I answered it already - I value not being killed.

The difference in our positions appears to be some version of "but your information is still around", to which my response is "but it's not me", and your response is "how is it not you?"

I don't know.

"What is it I value that you don't?" I don't know. Maybe I consider myself to be a higher resolution copy or a less lossy copy or something. I can't put my finger on it because when it comes down to it why do just random quantum states make a difference to me when all the macro information is the same apart from position and perhaps momentum. I don't really have an answer for that.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T21:55:10.393Z · LW · GW

I thought I had answered but perhaps I answered what I read into it.

If you are asking "will I prevent you from gradually moving everything to digital perhaps including yourselves" then the answer is no.

I just wanted to clarify that we were talking about with consent vs without consent.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T20:57:17.335Z · LW · GW

Yes that's right.

I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.

That said, if you or anyone else wants to do it to themselves voluntarily it's none of my business.

If what you're really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might - but we're not there yet.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T20:49:41.111Z · LW · GW

You're basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get), and then superimposing another question: "isn't it the information that's important?"

Not exactly.

I'm concerned that I will die, and I'm examining the hypotheses as to why it's not me that dies. The best response I can come up with is "you will die, but it doesn't matter, because there's another identical (or as close as possible) copy still around."

As to what you value that I don't, I don't have an answer. Perhaps a way to elicit the answer would be to ask you why you value only the information and not the physical object as well?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T19:45:22.971Z · LW · GW

"If the information is different, and the information constitutes people, then it constitutes different people."

True and therein lies the problem. Let's do two comparisons: You have two copies. One the original, the other the copy.

Compare them on the macro scale (i.e. non quantum). They are identical except for position and momentum.

Now let's compare them on the quantum scale: even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. The simple act of observing the states (either by scanning the original or by rebuilding the copy) changes them, and thus on the quantum scale we have two different entities, even though they are identical on the macro scale except for position and momentum.

Using your argument that it's the information content that's important: they don't really have any useful differences in information content, especially not on the macro scale, but they have significant differences in all of their non-useful quantum states. They are physically different entities.

Basically what you're talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.

I'm thus uncomfortable with killing one of them and then saying the person still exists.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T19:36:23.949Z · LW · GW

This is a different point entirely. Sure it's more efficient to just work with instances of similar objects and I've already said elsewhere I'm OK with that if it's objects.

And if everyone else is OK with being destructively scanned then I guess I'll have to eke out an existence as a savage. The economy can have my atoms after I'm dead.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T19:31:24.572Z · LW · GW

I understand that you value the information content and I'm OK with your position.

Let's do another thought experiment then. Say we're some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (it could be any country; I'm just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories etc. into some kind of data store, which it could access at its leisure later, would that be the same thing as the original living people?

I'd argue that from a comp sci perspective what you have just done is build a static class which describes the people, their ideas, memories etc. - but this is not the original people, it's just a model of them.

Now don't get me wrong, a model like that would be very valuable, it just wouldn't be the original.
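The static-class analogy above can be made concrete. Here's a minimal Python sketch with purely illustrative names (nothing here comes from the original discussion): a frozen snapshot records a mind-state at scan time, while a live object keeps accumulating state, so the snapshot describes, but is not, the running thing.

```python
# Illustrative sketch: a static record of a person's mind-state versus
# the living, still-changing process that was scanned.

from dataclasses import dataclass

@dataclass(frozen=True)
class PersonSnapshot:
    name: str
    memories: tuple  # immutable record of the mind-state at scan time

class LivingPerson:
    def __init__(self, name, memories):
        self.name = name
        self.memories = list(memories)

    def experience(self, event):
        # A living person keeps accumulating state; a frozen snapshot cannot.
        self.memories.append(event)

alice = LivingPerson("Alice", ["childhood"])
archive = PersonSnapshot(alice.name, tuple(alice.memories))
alice.experience("today")
# The archive still describes who Alice was at scan time,
# while the living object has moved on.
```

The archive here is exactly the "very valuable model" the comment describes: a faithful description at one moment, not the original.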

And yes, of course some people value originals otherwise you wouldn't have to pay millions of dollars for postage stamps printed in the 1800s even though I'd guess that scanning that stamp and printing out a copy of it should to all intents and purposes be the same.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T19:17:36.390Z · LW · GW

Exactly. Reasonable assurance is good enough, absolute isn't necessary. I'm not willing to be destructively scanned even if a copy of me thinks it's me, looks like me, and acts like me.

That said, I'm willing to accept the other stance that others take: they believe that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don't ask me to do it. And expect a bullet if you try to force me!

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T19:12:50.410Z · LW · GW

What do I make of his argument? Well, I'm not a PhD in physics, though I do have a bachelor's in physics/math, so my position would be the following:

Quantum physics doesn't scale up to the macro level. While swapping two helium atoms between two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls themselves can certainly be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, the quantum state changes; so the reason you cannot distinguish two identical copies from each other is not that they are identical - you cannot even distinguish the original from itself, because the states change each time you measure them.

Within a macro-scale object composed of multiple atoms of types A, B and C, you could not distinguish those atoms from the atoms of another macro-scale object composed of atoms of types A, B and C in exactly the same configuration.

That said, we're talking about a single object here. As soon as you are comparing more than one object it's not the same: there are the position, momentum, et cetera of the macro-scale objects to distinguish them, even though they are the same type of object.

I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.

From a physics perspective it makes sense to say two objects of the same type are different even though the properties are the same except for minor differences such as position and momentum.

From a compsci perspective, talking about the position and momentum of instances of classes doesn't make any sense. The two instances of the classes ARE the same because they are logically the same.
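The two perspectives being contrasted here map directly onto the distinction between equality and identity in programming. A minimal Python sketch (the `Particle` class is an illustrative stand-in, not anything from the thread): two objects can agree in every observable property, so `==` calls them the same, while `is` still reports two distinct entities.

```python
# Equality (==) compares observable properties; identity (is) asks whether
# the two names refer to one and the same object.

class Particle:
    def __init__(self, spin, momentum):
        self.spin = spin
        self.momentum = momentum

    def __eq__(self, other):
        # "Logically the same": all measurable properties agree.
        return (self.spin, self.momentum) == (other.spin, other.momentum)

a = Particle(0.5, 1.0)
b = Particle(0.5, 1.0)

print(a == b)  # True: indistinguishable by their properties
print(a is b)  # False: still two separate objects
```

The physics perspective in the comment corresponds to `is` (two distinct bundles of matter), while the compsci perspective corresponds to `==` (logical equivalence); destroying `a` leaves `b` equal to what `a` was, but there is one fewer object.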

Anyways, I've segued here. Take the two putative electrons in a previous post above: there is no way to distinguish between the two of them except by position, but they ARE two separate electrons, not a single electron. If one of them is part of, e.g., my brain, and it's then swapped out for the other, there's no longer any way to tell which is which. It's impossible. And my guess is this is what's causing the confusion. From a point of view of usefulness, neither of the two objects differs from the other. But they are separate from each other, and destroying one doesn't mean that there are still two of them; there is now only one, and one has been destroyed.

Dave seems to take the position that that is fine because the position and number of copies are irrelevant for him because it's the information content that's important.

For me, sure if my information content lived on that would be better than nothing but it wouldn't be me.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T18:04:33.306Z · LW · GW

I think we're on the same page from a logical perspective.

My guess is the perspective taken is that of physical science vs compsci.

My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.

The physical science perspective is that there are two bunches of matter near each other with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy etc but different positions. There's no way to distinguish the two of them from physical properties but there are two of them not one.

Regardless, if you believe they are the same person then you go first through the teleportation device... ;->

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T05:18:01.017Z · LW · GW

It matters to you if you're the original and then you are killed.

You are right that they are both an instance of person X but my argument is that this is not the equivalent to them being the same person in fact or even in law (whatever that means).

Also when/if this comes about I bet the law will side with me and define them as two different people in the eyes of the law. (And I'm not using this to fallaciously argue from authority, just pointing out I strongly believe I am correct - though willing to concede if there is ultimately some logical way to prove they are the same person.)

The reason is obvious. If they are the same person and one of them kills someone are both of them guilty? If one fathers a child, is the child the offspring of both of them?

Because of this I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people from each other after their mental states begin to diverge, so I can't close the logical gap: why do you say they are the same person rather than copies of the same person, one being the original? You come partway to saying they are different people. Why not come all the way?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:57:50.810Z · LW · GW

I understand completely your logic but I do not buy it because I do not agree that at the instant of the copying you have one person at two locations. They are two different people. One being the original and the other being an exact copy.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:55:26.405Z · LW · GW

K here's where we disagree:

Original copy A and new copy B are indeed instances of person X, but it's not a class with two instances as in CompSci 101. The class is the original, A, and it's B that is the instance. They are different people.

In order to make them the same person you'd need to do something like this: Put some kind of high bandwidth wifi in their heads which synchronize memories. Then they'd be part of the same hybrid entity. But at no point are they the same person.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:39:13.031Z · LW · GW

Come on. Don't vote me down without responding.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:34:21.853Z · LW · GW

Here's why I conclude a risk exists:

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:33:56.657Z · LW · GW

this follows:

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:28:48.019Z · LW · GW

I'm talking exactly about a process that is so flawless you can't tell the difference. Where my concern comes from is that if you don't destroy the original you now have two copies. One is the original (although you can't tell the difference between the copy and the original) and the other is the copy.

Now here's where I'm uncomfortable: if we then kill the original, by letting Freddie Krueger or Jason do his evil thing, then although the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis - which I oppose - states that the original is still alive, and yet I can see the dead body right there.

Simply speeding the process up, perhaps by vaporizing the original, doesn't change the outcome: the original is still dead.

It gets murkier if the original is destructively scanned and then rebuilt from the same atoms but I'd still be reluctant to do this myself.

That said, I'd be willing to become a hybrid organism slowly by replacing parts of me and although it wouldn't be the original me at the end of the total replacement process it would still be the hybrid "me".

Interesting position on the killing of the NPCs. In terms of usefulness, that's why it doesn't matter to me whether a being is sentient in order to meet my definition of AI.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:19:50.558Z · LW · GW

Risk avoidance. I'm uncomfortable with the position that, after creating a second copy and destroying the original, the surviving copy is the original, simply because if it isn't, the original is now dead.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T03:08:43.681Z · LW · GW

Here's one: let's say that the world is a simulation AND that strongly godlike AI is possible. To all intents and purposes, even though the bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God of that bible would not be ruled out. The inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, but the universe containing the simulation is subject to its own physics and logic, which may therefore vary even inside the simulation without being detectable to you or me.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:59:52.766Z · LW · GW

"(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me""

Because we already have a legal precedent: twins. Though the memories they share are very limited, they are legally different people, and in my view rightly so.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:57:28.468Z · LW · GW

Ha Ha. You're right. Thanks for reflecting that back to me.

Yes if you break apart my argument I'm saying exactly that though I hadn't broken it down to that extent before.

The last part I disagree with: the claim that I assume I'm always better at detecting people than the AI is. Clearly I'm not, but in my own personal case I don't trust the AI if it disagrees with me, purely as risk management. If it's wrong, and it kills me and then resurrects a copy, I have experienced total loss. If it's right, I'm still alive either way.

But I don't know the answer, and thus, if I were designing the AI, I would have to allow only scenario #1: though I could be wrong, I'd prefer not to take the risk of personal destruction.

That said if someone chose to destructively scan themselves to upload that would be their personal choice.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:51:08.950Z · LW · GW

You're right. It is impossible to determine whether the current copy is the original or not.

"Disturbing how?" Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I'd be really uncomfortable with it. I personally would strenuously object to ever teleporting myself or copying myself by this method into a simulation.

"edges away slowly" lol. No more evil than (I believe it was) Phil, who explicitly stated he would kill others who sought to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the averaged utility function of humanity, even if that excluded my own maximized utility function, because I'm honest enough to say that maximizing my own personal utility function is not in the best interests of humanity. Even so, I believe that producing an AI whose utility function maximizes the best interests of humanity is incredibly difficult, and so I've concluded that creating an AI defined as just NOT(unfriendly) and attempting to trade with it is probably far easier. Though I have not read Eliezer's CEV paper, so I require further input.

"difficult to force them to do my bidding".

I don't know if you enjoy video games or not. Right now there's a first-person shooter called Modern Warfare 3. It's pretty damn realistic, though the non-player characters [NPCs], which you shoot and kill, are automatons, and we know for sure that they're automatons. Now fast forward 20 years, and we have NPCs so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:35:07.081Z · LW · GW

That's a point of philosophical disagreement between us. Here's why:

Take an individual.

Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.

You create a clone of that person.

Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.

Now let's say technology exists to transfer memories and mind states.

After you create the clone-that-is-not-you you then put your memories into it.

If we keep the original alive the clone is still not you. How does killing the original QUICKLY make the clone you?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:25:29.000Z · LW · GW

OK give me time to digest the jargon.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:22:31.865Z · LW · GW

But is it destroying people if the simulations are the same as the original?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:20:21.611Z · LW · GW

Isn't doing anything for us...

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:18:57.594Z · LW · GW

Really good discussion.

Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.

And yes, if the beings are fully sentient then I agree it's ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:15:13.827Z · LW · GW

Agreed. It's the only way we have of verifying that it's a duck.

But is the destructively scanned duck the original duck? It appears the same to all intents and purposes, even though you can see the mulch that used to be the original's body lying there beside the new copy.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:12:49.694Z · LW · GW

While I don't doubt that many people would be OK with this I wouldn't because of the lack of certainty and provability.

My difficulty with this concept goes further: since it's not verifiable that the copy is you, even though it presents the same outputs to any test we can run, what is to prevent an AI from getting around a restriction on not destroying humanity?

"Oh but the copies running in a simulation are the same thing as the originals really", protests the AI after all the humans have been destructively scanned and copied into a simulation...

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T02:07:02.740Z · LW · GW

You're determined to make me say LOL so you can downvote me right?

EDIT: Yes you win. OFF.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T01:01:26.254Z · LW · GW


So "friendly" is therefore a conflation of NOT(unfriendly) AND useful rather than just simply NOT(unfriendly) which is easier.
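Viewed as predicates (a minimal sketch; the predicate and field names are made up for illustration), NOT(unfriendly) is strictly weaker than the conflated condition NOT(unfriendly) AND useful: an AI that ignores us entirely satisfies the first but not the second.

```python
def not_unfriendly(ai):
    # The weaker condition: the AI does us no harm.
    return not ai["harms_humans"]

def friendly(ai):
    # The conflated, stronger condition: harmless AND does something for us.
    return not_unfriendly(ai) and ai["useful"]

# An AI that neither harms nor helps us: NOT(unfriendly), but not "friendly".
indifferent_ai = {"harms_humans": False, "useful": False}

assert not_unfriendly(indifferent_ai)
assert not friendly(indifferent_ai)
```

Every friendly AI is NOT(unfriendly), but not vice versa, which is why the second definition is the easier target.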

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:59:31.080Z · LW · GW

Very good questions.

No I'd not particularly care if it was my car that was returned to me because it gives me utility and it's just a thing.

I'd care if my wife were kidnapped and some simulacrum given back in her stead, but if it were such an accurate copy I doubt I would be able to tell. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know, I'd just be glad to have my "wife" back.

In the case of the simulated porn actress, I wouldn't really care if she was real because her utility for me would be similar to watching a movie. Once done with the simulation she would be shut off.

That said, the struggle would be over whether or not she (the catgirl version of the porn actress) was truly sentient. If she were, then I'd be evil in the first place, because I'd be coercing her to do evil stuff in my personal simulation. But I think there's no viable way to determine sentience other than "if it walks like a duck and talks like a duck", so we're back to the beginning again, and THUS I say "it's irrelevant".

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:54:44.333Z · LW · GW

Correct. I (unlike some others) don't hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore, destructively scanning the porn actresses would, in my mind, be killing them. Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil is debatable. I know some will take the position that the simulations could theoretically be sentient; if they are sentient, then I am therefore de facto evil.

And I get the point that we want the AGI to do something; I just think it will be incredibly difficult to get it to do something if it's recursively self-improving, and it becomes progressively more difficult the further you move away from defining friendly as NOT(unfriendly).

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:49:18.860Z · LW · GW

And I'd say that taking that step is a point of philosophy.

Consider this: I have a Dodge Durango sitting in my garage.

If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I'd say no, but the point is irrelevant.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:46:00.939Z · LW · GW

"I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples' current volition without trying to extrapolate"

i.e. the device has to judge the usefulness by some metric and then decide to execute someone's volition or not.

That's exactly my issue with trying to define a utility function for the AI. You can't. And since some people will have their utility functions denied by the AI, who is to choose who gets theirs executed?

I'd prefer to shoot for a NOT(UFAI) and then trade with it.

Here's a thought experiment:

Is a cure for cancer maximizing everyone's utility function?

Yes, on average we all win.

But companies currently creating drugs to treat the symptoms of cancer, and their employees, would be put out of business.

So which utility function should be executed: creating better drugs to treat the symptoms and allowing the companies to sell them, or putting the companies out of business by curing cancer?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:39:03.876Z · LW · GW

"But an AI does need to have some utility function"

What if the "optimization of the utility function" is bounded, like my own personal predilection for spending my paycheck on paperclips one time only and then stopping?

Is it sentient if it sits in a corner and thinks to itself, running simulations but won't talk to you unless you offer it a trade e.g. of some paperclips?

Is it possible that we're conflating "friendly" with "useful but NOT unfriendly" and we're struggling with defining what "useful" means?

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:24:33.660Z · LW · GW

Nice thought experiment.

No I probably would not consent to being non-destructively scanned so that my simulated version could be evilly manipulated.

Regardless of whether or not it's provably sentient.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:22:39.048Z · LW · GW


Therein lies the crux: you want the AI to do stuff for you.

EDIT: Oh yeah I get you. So it's by definition evil if I coerce the catgirls by mind control. I suppose logically I can't have my cake and eat it since I wouldn't want my own non-sentient simulation controlled by an evil AI either.

So I guess that makes me evil. Who would have thunk it. Well, I guess, strike my utility function off the list of friendly AI candidates. But then again, I've already said elsewhere that I wouldn't trust my own function to be optimal.

I doubt, however, that we'd easily find a candidate function from a single individual for similar reasons.

Comment by xxd on Welcome to Less Wrong! · 2011-12-21T00:21:09.279Z · LW · GW

More friendly to you. Yes.

Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.

But I dispute the position that "if an AI doesn't care about humans in the way we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about".

Consider: a totally unfriendly AI whose explicit goal is the extinction of humanity, followed by turning itself off. For us, that's an unfriendly AI.

One that doesn't kill any of us but basically leaves us alone, however, is not unfriendly by the lights of those of you who define "friendly AI" as "kind to us"/"doing what we all want"/"maximizing our utility functions" etc., because by definition it doesn't kill all of us.

Unless unfriendly also includes "won't kill all of us but ignores us" et cetera.

Am I, for example, unfriendly to you if I spend my next month's paycheck on paperclips but do you no harm?