Uploading: what about the carbon-based version?

post by NancyLebovitz · 2012-07-23T08:49:29.722Z · LW · GW · Legacy · 49 comments

In this video, at about 48:00, Eliezer talks about uploading and about how it wouldn't be murder if his meat body were anesthetized before the upload and killed without regaining consciousness.

It's arguable that it wouldn't be murder, but I'm not clear about why Eliezer would want to do it that way. I've got some guesses about why one might want to not let the meat body wake up (legal and practical complications of a double but diverging identity, the meat version feeling hopelessly envious), but I'm not sure whether either of them apply.

On the other hand, I can think of a couple of reasons for *not* eliminating the meat version-- one is that two Eliezers would presumably be better than one, though I don't have a strong intuition about the optimum number of Eliezers. The other, which I consider to be more salient, is that the meat version is a backup in case the upload isn't as good as hoped.

More generally, what would folks here consider to be good enough evidence that uploading was worth doing? 

49 comments

Comments sorted by top scores.

comment by knb · 2012-07-23T12:03:56.705Z · LW(p) · GW(p)

It definitely wouldn't be murder, so long as Eliezer agreed to the procedure. At worst it would be assisted-suicide. I would even go a step further and say that it seems plausible that killing the carbon version might be the only sensible long-term economic decision.

Imagine if there were a small number of Mt. Everest-sized humans who needed massive amounts of food to stay alive. They are sentient, but think far more slowly than regular humans. They subjectively experience only a couple days every sidereal year. Because they need so much food, and think so slowly, they can't do much productive work, and to survive they collectively need trillions of dollars of resources donated from the world's governments.

  1. Would it be wrong to let the huge version die?
  2. Wouldn't it be better to painlessly mercy-kill the huge version so it didn't have to starve to death?
  3. If you could cheaply make a human-sized, human-speed copy, wouldn't that be kinder than simply killing the huge copy?
  4. What if society had the resources to keep the huge people alive, but only with anti-natalist legislation that keeps the human-sized population smaller?
Replies from: Dr_Manhattan, army1987, JaneQ
comment by Dr_Manhattan · 2012-07-23T16:02:07.933Z · LW(p) · GW(p)

I just added you to the "do not let this guy upload" list :)

You realize that your logic justifies genocide of fleshers by uploads, even of those who chose not to upload.

Replies from: knb
comment by knb · 2012-07-23T16:59:25.379Z · LW(p) · GW(p)

Yes, my goal was to make that repugnant conclusion explicit. I'm not saying I agree with the repugnant conclusion, but I do think it's the central meaningful question for this topic. This version of the repugnant conclusion is actually much easier to justify than Parfit's version.

comment by A1987dM (army1987) · 2012-07-23T17:08:40.710Z · LW(p) · GW(p)

Yeah, but future uploads shouldn't view us exactly the same way we would view the mountain people: for starters, we are able to provide food for ourselves (and indeed have done so since the dawn of time), but the mountain people can't.

Replies from: fezziwig, knb
comment by fezziwig · 2012-07-23T18:27:52.930Z · LW(p) · GW(p)

That won't be true in the face of uploading, though; most (all?) people will find themselves outcompeted. For example, Uploaded!Fezziwig can sell his services as a programmer much more cheaply than Meat!Fezziwig can.

comment by knb · 2012-07-24T00:39:20.987Z · LW(p) · GW(p)

Clearly what I meant with this analogy is that humans won't be able to pay their own way. Once uploads become common, and can run quickly, they would be able to do vastly more work (because they could run much faster than human brains). They would also need far less money (most uploads just needing to buy some cycles on future super-duper computers). In this future, upload wages could fall far below human subsistence levels while still providing a good quality of life for the uploads. Organic humans are pretty closely analogous to the mountain people in this scenario. I doubt uploads would voluntarily choose to support the monumentally expensive and slow-thinking organic humans.

An alternate future is Eliezer's "AI god" scenario, where a single AI becomes so dominant that it can take over pretty much everything. In this case, all economic decisions are centrally planned, and the AI has to expend vastly more resources per capita on organic humans than on uploads; this costs a lot under any kind of utility function that values uploads and organic humans equally. Maybe the AI would keep devoting resources to organic humans, but my guess is that most FAIs wouldn't choose to do this unless it were specifically programmed into their preferences.

comment by JaneQ · 2012-07-24T12:25:16.193Z · LW(p) · GW(p)

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Ohh, wait, that's the reality.

sidenote: I doubt mind uploads scale all the way up, and it appears quite likely that amoral mind uploads would be unable to get along with their copies, so I am not very worried about the first upload having any sort of edge. The first upload will probably be crippled and on the brink of insanity, suffering from hallucinations and otherwise broken thought (after massively difficult work just to get the upload to be conscious at all and not slip into a simulated seizure). From that you might progress to sane but stupefied uploads, with a very significant IQ drop. Get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to. It will take a lot of gradual improvement until there are well-working uploads, and even then I am pretty sure that nearly anyone would be utterly unable to massively self-improve on their own in any meaningful way rather than just screw themselves into insanity, without supervision; a sane person shouldn't even attempt it, because if your improvement is making things worse then the next improvement will make things even worse, and one needs external verification.

Replies from: JenniferRM, NancyLebovitz, knb
comment by JenniferRM · 2012-07-25T00:28:16.548Z · LW(p) · GW(p)

Get a whiff of xenon to see what a small alteration to the electrical properties of neurons amounts to.

My initial reaction was shock that a heavier-than-air radioactive gas might go into someone's lungs on purpose. It triggers a lot of my "scary danger" heuristics for gases. Googling turned up a bunch of fascinating stuff. Thanks for the surprise! For anyone else interested, educational content includes:

Neat!

Replies from: JaneQ
comment by JaneQ · 2012-07-26T13:54:16.420Z · LW(p) · GW(p)

Heh. Well, it's not radioactive; radon is. It is inert, but it dissolves in membranes, changing their electrical properties.

comment by NancyLebovitz · 2012-07-24T17:46:45.281Z · LW(p) · GW(p)

Some of the basic problems will presumably be (partially?) solved with animal research before uploading is tried with humans.

One of the challenges of uploading would be including not just a static record of the brain's current state, but also the ability to learn and heal.

Replies from: JaneQ
comment by JaneQ · 2012-07-26T13:59:32.869Z · LW(p) · GW(p)

With regard to animal experimentation before the first upload and so on: a running upload is nothing but fancy processing of a scan of, most likely, a cadaver brain, legally no different from displaying that brain on a computer, and it doesn't require any sort of FDA-style stringent functionality testing on animals. Not that such testing would help much for a brain that is far bigger, with different neuron sizes and with failure modes that are highly non-obvious in animals. Nor is such regulation even necessary: the scanned upload of a dead person, functional enough to recognize his family, is a definite improvement over being completely dead, and to prevent it would be like mercy-killing accident victims who have a good prospect of full recovery, just to spare them the discomfort of being sick.

Gradual progress on humans is pretty much a certainty, if one drops the wide-eyed optimism bias. There are enough people who would bite the bullet, and it is not human experimentation - it is mere data processing - and it might only come to count as human experimentation decades after functional uploads exist.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-26T14:47:23.942Z · LW(p) · GW(p)

There's a consensus here that conscious computer programs have the same moral weight as people, so getting uploading moderately wrong in some directions is worse than getting it completely wrong.

Replies from: wedrifid, JaneQ
comment by wedrifid · 2012-07-26T15:26:38.454Z · LW(p) · GW(p)

There's a consensus here that conscious computer programs have the same moral weight as people

No there isn't. I would have remembered something like that happening, what with all the disagreeing I would have been doing.

The mindspace of 'conscious computer programs' is unfathomably large and most of those programs are morally worthless. A "some" and/or "could" inserted in there could make the 'consensus' correct.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-26T16:21:24.082Z · LW(p) · GW(p)

I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.

I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T00:10:21.522Z · LW(p) · GW(p)

I may well have overgeneralized. I was basing the idea on remembering an essay saying that FAI should be designed to be non-sentient, and seeing concerns about how uploads and simulated people would be treated.

Yes, that 'have' vs 'can have' distinction changes everything---but most people are less picky with general claims than I.

I suppose that moral concern would apply to any of the sentient programs that humans would be likely to create.

Food for thought. I wouldn't rule this out as a possibility and certainly the proportion of 'morally relevant' programs in this group skyrockets over the broader class. I'm not too sure what we are likely to create. How probable is it that we succeed at creating sentience but fail at FAI?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-27T00:55:52.804Z · LW(p) · GW(p)

I think creating sentience is a much easier project than FAI, especially proven FAI. We've got plenty of examples of sentience.

Creating sentience which isn't much like the human model seems very difficult-- I'm not even sure what that would mean, with the possible exception of basing something on cephalopods. OK, maybe beehives are sentient, too. How about cities?

Replies from: wedrifid
comment by wedrifid · 2012-07-27T01:48:34.191Z · LW(p) · GW(p)

I think creating sentience is a much easier project than FAI, especially proven FAI. We've got plenty of examples of sentience.

This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans have intrinsic moral weight. Sentient uFAI can have neutral or even negative moral weight (although this is a subjective value.)

The main reason this outcome could be unlikely is that most of the reasons that a created GAI would fail to be an FAI would also obliterate the potential for sentience.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-27T02:48:19.503Z · LW(p) · GW(p)

I didn't think about the case of a sentient UFAI-- I should think that self-defense would apply, though I suppose that self-defense becomes a complicated issue if you're a hard-core utilitarian.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T02:58:02.183Z · LW(p) · GW(p)

I didn't think about the case of a sentient UFAI-- I should think that self-defense would apply

The problem there is that the UFAI has to specifically value its own sentience. That is, it has to be at least a "Self Sentience Satisficer". Creating an AI that stably values something so complex and reflective requires defeating most of the same challenges that creating an FAI requires. This ought to make it relatively improbable to create accidentally.

Replies from: private_messaging
comment by private_messaging · 2012-07-28T05:55:38.510Z · LW(p) · GW(p)

That goes also for valuing real-world paperclips, perhaps even more so. A function from world states to a real number that gives the count of paperclips may be even trickier to specify than valuing one's own sentience.

Replies from: wedrifid
comment by wedrifid · 2012-07-28T06:06:37.954Z · LW(p) · GW(p)

I would bet against that. (For reasons including but not limited to the observation that sentience exists in the real world too.)

Replies from: private_messaging
comment by private_messaging · 2012-07-28T06:21:35.621Z · LW(p) · GW(p)

For the AI to be dangerously effective it still needs to be able to optimize its own processes and have sufficient self-understanding. Also, it needs to understand that the actions it takes are produced from its sensory input, in order to value that sensory input correctly. You have to have a lot of reflectivity to be good at maximizing paperclips. The third alternative, neither friendly nor unfriendly, is an AI that solves formally defined problems. Hook it up to a simulator with a god's-eye view, give it the full specs of the simulator, define what counts as a paperclip, and it'll maximize simulated paperclips there. I have the impression that people mistake this - which doesn't require solving any philosophical problems - for a real-world paperclip maximizer, which is much, much trickier.

comment by JaneQ · 2012-07-27T11:44:07.496Z · LW(p) · GW(p)

Legally, a mind upload is only different from any other medical scan in mere quantity, and a simulation of a brain is only quantitatively different from any other processing. Just as cryopreservation is only a form of burial.

Furthermore, while it would seem better to magically have mind uploading completely figured out without any experimentation on human mind uploads, we aren't writing a science fiction/fantasy story; we are actually building the damn thing in the real world, where things tend to go wrong.

edit: also, a rather strong point can be made that it is more ethical to experiment on a copy of yourself than on a copy of your cat or any other not-completely-stupid mammal. Consent matters.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-27T14:52:58.814Z · LW(p) · GW(p)

This is an area that hasn't been addressed by the law, for the very good reason that it isn't close to being a problem yet. I don't know whether people outside LW have been looking at the ethical status of uploads.

I agree with you that there's no way to have uploading without making mistakes first. And possibly no way to have FAI without it having excellent simulations of people so that it can estimate what to do.

That's a good point about consent.

comment by knb · 2012-07-24T18:43:08.619Z · LW(p) · GW(p)

Or what if the 'mountain people' are utterly microscopic mites on a tiny ball hurtling through space? Ohh, wait, that's the reality.

Well, yes, I am aware that my scenario is not literally descriptive of the world right now. The purpose is to inspire an intuitive understanding of why the economic reality of a society with strong upload technology would encourage destroying the carbon originals of people who have been uploaded.

so I am not very worried about the first upload having any sort of edge.

I am not worried either. Nothing I said assumes a first-mover advantage or hard takeoff from the first mind upload. I'm describing society after upload technology has matured.

I am pretty sure that nearly anyone would be utterly unable to massively self-improve on their own in any meaningful way rather than just screw themselves into insanity

I'm certainly not assuming uploads will be self-improving, so it seems you are pretty comprehensively misunderstanding my point. I do assume uploads will become faster, due to hardware improvements. After some time, the ease and low cost of copying uploads will likely make them far more numerous than physical humans, and their economic advantages (being able to do orders of magnitude more work per year than physical humans) will drive wages far below human subsistence standards (even if the wages allow a great lifestyle for the uploads).

Replies from: JaneQ
comment by JaneQ · 2012-07-26T13:47:50.344Z · LW(p) · GW(p)

That was more a note on Dr_Manhattan's comment.

With regards to 'economic advantage', the advantage has to outgrow overall economic growth for the condition of the carbon originals to decline. Also, you may want to read Accelerando by Charles Stross.

Replies from: knb
comment by knb · 2012-07-26T17:07:04.667Z · LW(p) · GW(p)

With regards to 'economic advantage', the advantage has to outgrow overall economic growth for the condition of the carbon originals to decline.

There is no reason why this would be true. The economy can grow enormously while per-capita income and standard of living fall. This has happened before: the global economy and population grew enormously after the transition to agriculture, but living standards probably fell, and farmers were shorter, poorer, harder-working, and more malnourished than their forager ancestors. It is not inevitable (or even very likely, IMO) that the economy will perpetually outgrow population.

Also, you may want to read Accelerando by Charles Stross.

I read it years ago, and wasn't impressed. Why is that relevant?

comment by buybuydandavis · 2012-07-23T09:15:27.977Z · LW(p) · GW(p)

More generally, what would folks here consider to be good enough evidence that uploading was worth doing?

I think many would give different answers for a destructive versus a non-destructive upload. I'm more sentimental about my meat sack, and can't see snuffing it in one fell swoop. I'd rather go the Ship of Theseus route.

comment by kilobug · 2012-07-23T10:35:44.849Z · LW(p) · GW(p)
  1. I assume that in the scenario Eliezer was describing, the upload is a safe procedure, not an experimental one. And anyway, you can sedate the living body, do the upload, check that the upload went well, and then destroy the carbon body.

  2. Whatever your optimal number of copies of Eliezer is (be it 1 because split personalities create too many problems, or 2^42 because you love Eliezer so much), if an upload is significantly better than a carbon version (which I also assume), then it would be better to get n uploaded versions and still get rid of the carbon one.

If carbon and upload versions have different pros and cons (like some senses not yet well implemented in the upload version), it could make sense to keep both. But in the long term, I'm pretty sure the upload version will be much better, and keeping a carbon version wouldn't make sense.

comment by buybuydandavis · 2012-07-24T08:09:22.081Z · LW(p) · GW(p)

Changing the issue from murder to suicide might be interesting.

You're the leftover meatsack. You wake up after the upload, and see UploadedYou frolicking in a virtual world. What would you have to see to shoot yourself in the head?

I doubt that's going to seem like a good idea to this meatsack.

Replies from: hairyfigment, tgb, Dolores1984
comment by hairyfigment · 2012-07-25T18:59:34.919Z · LW(p) · GW(p)

My two selves share memories, one of them slowing down for the occasion, and agree to kill their less useful body. See also Jack's comment, though I don't think it influenced me.

comment by tgb · 2012-07-25T00:23:05.540Z · LW(p) · GW(p)

And could you precommit to the suicide if an Omega were going to let you upload iff a simulated you chose suicide in this case?

comment by Dolores1984 · 2012-07-24T21:20:20.181Z · LW(p) · GW(p)

I'd do it, if I were confident in the procedure. I'd do it quickly, too, because the longer you wait and think about it, the more divergence occurs, and the more you lose.

comment by Jack · 2012-07-25T17:37:00.470Z · LW(p) · GW(p)

one is that two Eliezers would presumably be better than one, though I don't have a strong intuition about the optimum number of Eliezers.

Once you've uploaded, I would expect it to be fairly trivial to create as many Eliezers as one likes, all with the advantages of being uploaded. But it isn't implausible that there will be some unique advantage to being meat... it just depends on contingent features of the world in which this happens. Features that no one is going to be able to predict reliably.

I recall once having serious objections to the idea of duplicating, but those have disappeared over time. I suspect my copies would empathize with each other and I don't think I would worry about 'becoming the unhappy twin'. Though that might change if I owned significant property that one or the other copy wouldn't inherit. The flip side of that coin is that I think I would be alright 'killing myself' if I knew a copy of myself (which only differed in very recent memories) was still alive and kicking. Though the longer the divergence, the less okay I would be with it. What I hope might one day be possible is integrating the memories of multiple copies, so that I could merge and split repeatedly and not have to miss out on anything.

comment by TheOtherDave · 2012-07-23T14:44:26.965Z · LW(p) · GW(p)

Given your clarification of the question... I think my minimum standard for treating X as a continuation of my personal identity (whether X is an upload, or my organic body after a stroke, or various other things that some people treat as such a continuation and other people don't) is that it experiences a significant fraction of the memories I experience, and that its other memories are consistent with those memories (that is, it's plausible that a single entity in fact had all of those remembered experiences).

That's not to say that the existence of X would satisfy me in a broader sense. Then again, my current existence isn't always satisfactory either, but I still believe it's my current existence.

comment by MileyCyrus · 2012-07-23T12:30:10.879Z · LW(p) · GW(p)

one is that two Eliezers would presumably be better than one, though I don't have a strong intuition about the optimum number of Eliezers.

The cost of keeping a carbon-based Eliezer alive would mean that whoever pays the bill would have less money to run the uploaded Eliezer. So the uploaded Eliezer would live for fewer years. Since it would probably be cheaper to run the uploaded Eliezer for a subjective year than to keep carbon Eliezer alive for a year, running only the uploaded Eliezer would mean more Eliezer-years overall.
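
A back-of-the-envelope sketch of that trade-off; all of the dollar figures below are made-up placeholders for illustration, not estimates from the thread:

```python
# Hypothetical budget arithmetic behind "more Eliezer-years overall".
BUDGET = 1_000_000                        # total dollars available (made up)
MEAT_COST_PER_YEAR = 50_000               # upkeep for the carbon version (made up)
UPLOAD_COST_PER_SUBJECTIVE_YEAR = 5_000   # compute for one subjective upload-year (made up)

# Option A: spend everything on the upload.
upload_only_years = BUDGET / UPLOAD_COST_PER_SUBJECTIVE_YEAR

# Option B: keep the carbon version alive for 10 years, spend the rest on the upload.
meat_years = 10
remaining = BUDGET - meat_years * MEAT_COST_PER_YEAR
both_years = meat_years + remaining / UPLOAD_COST_PER_SUBJECTIVE_YEAR

print(f"Upload only:   {upload_only_years:.0f} subjective Eliezer-years")  # 200
print(f"Meat + upload: {both_years:.0f} subjective Eliezer-years")         # 110
# With these made-up costs, every meat-year funded forgoes about ten upload-years.
```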

Replies from: roystgnr, drethelin
comment by roystgnr · 2012-07-24T01:43:09.683Z · LW(p) · GW(p)

The cost of keeping a carbon-based Eliezer alive is negative so long as his productivity exceeds the cost of giving him food to eat and a place to sleep. The existence of uploaded Eliezers may drive up the price of resources (or drive down the price of Eliezer-thought) until that condition becomes false, but that's not going to happen with the first copy. Thanks to comparative advantage it may not happen for a long time afterward either; the resources humans use and the resources computers use aren't perfectly exchangeable.

comment by drethelin · 2012-07-23T16:29:28.201Z · LW(p) · GW(p)

This ignores network effects of having simultaneous Eliezers, and also the ability of physical Eliezers to accomplish goals the upload cannot because it's stuck in the internet.

Replies from: DanielLC
comment by DanielLC · 2012-07-23T18:32:19.259Z · LW(p) · GW(p)

In that case, run two uploaded Eliezers and give one of them an android. It will still probably be cheaper.

comment by [deleted] · 2012-10-19T15:41:15.151Z · LW(p) · GW(p)

I have always wondered what it would be like to kill myself. Not that I am suicidal, just morbidly curious.

comment by bogdanb · 2012-07-29T21:04:40.451Z · LW(p) · GW(p)

I always thought those “I get uploaded, my body is killed before waking up, there was no murder” scenarios were closer to the trolley problem than an actual policy proposal. Sort of strengthening the opposing argument—“destructive uploading would be murder”—before countering it.

By “destructive uploading” I mean something like freezing, slicing and scanning a brain, which by default kills the source. Proposing a process that harmlessly uploads a living (but unconscious) person, and then kills it (before it regains consciousness) is a strengthening of the associated moral dilemma. My memory tells me I heard the weaker argument/question discussed a long time ago (7-10 years) but not recently; I don’t trust my memory much, though.

So I don’t think of it as “should the source die after upload”, but rather as “it wouldn’t actually be murder if it did”; thus it is not (necessarily) murder to upload someone, even destructively.


Taking the question at face value, I agree with the moral attitude—it’s not murder to kill the source after upload (barring obvious things like the operation being involuntary, which might need a more nuanced approach.)

It’s not quite my policy—I don’t see a general reason to get rid of the source. But if something like the particular individual no longer being interested in being incarnated, or resource limits favoring digital-only people, happens to be the case, that might be enough reason.

From a practical point of view, I think it’s more likely that at first, and probably for quite a while, we won’t have a choice, i.e. uploading will be destructive; and if and when we’re able to do a live upload (or at least one where the body wakes up afterwards), we’ll have bigger moral problems to solve (of which this might be just a special case).

comment by Normal_Anomaly · 2012-07-25T01:10:03.327Z · LW(p) · GW(p)

My ideal plan for uploading is as follows:

  1. I go unconscious and my brain is scanned.
  2. The upload wakes up.
  3. Someone who knows me very well, and whom I trust, observes and interacts with the upload to make sure they're complete, sane, and sufficiently similar to meat-me that meat-me would agree, if asked, that the upload and meat-me are the same person.
  4. If those conditions are met, the trusted person euthanizes meat-me. If the transfer failed badly enough not to create a mind, the trusted person deletes the upload and wakes meat-me up.
Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-07-25T01:25:56.455Z · LW(p) · GW(p)

Why do you want to be killed if the copying was successful? What are you even trying to achieve with this procedure, then?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-07-29T23:08:48.688Z · LW(p) · GW(p)

I want the carbon version to be killed if the uploading is successful. I'd rather be a (functional) upload than a carbon person. I don't want there to be two of me, because I only have one house/job/legal identity/family. So, I kill the meat if the upload works, and keep the meat around if it doesn't.

comment by FeepingCreature · 2012-07-24T16:24:35.790Z · LW(p) · GW(p)

What I settled on doing (after thinking about this for a while) is to just keep both bodies, and when meatbag-me gets envious about being "the one to die", I'd just reupload. :)

comment by Lapsed_Lurker · 2012-07-23T09:23:42.673Z · LW(p) · GW(p)

More generally, what would folks here consider to be good enough evidence that uploading was worth doing?

Good enough evidence that (properly done) uploading would be a good thing, as opposed to the status quo of tens of thousands of people dying every day, you mean?

[edit] If you want to compare working SENS to uploading, then I'd have to think a lot harder.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-07-23T09:50:50.035Z · LW(p) · GW(p)

I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2012-07-23T11:15:28.870Z · LW(p) · GW(p)

I was wondering about evidence that uploading was accurate enough that you'd consider it to be a satisfactory continuation of personal identity.

I'd think that until even one of those little worms with only a couple hundred neurons is uploaded (or maybe a lobster), all evidence of the effectiveness of uploading is theory or fiction.

If computing continues to get cheaper at Moore's Law rates for another few decades, then maybe...

comment by [deleted] · 2012-07-23T15:07:40.555Z · LW(p) · GW(p)

And to spell out the application to Unfriendly AI: You've got various people insisting that an arbitrary mind, including an expected paperclip maximizer, would do various nice things or obey various comforting conditions: "Keep humans around, because diversity is important to creativity, and the humans will provide a different point of view."

EY, "Contaminated by Optimism" 06 August 2008 12:26AM