"Ray Kurzweil and Uploading: Just Say No!", Nick Agar

post by gwern · 2011-12-02T21:42:48.608Z · LW · GW · Legacy · 79 comments

A new paper has gone up in the November 2011 issue of the Journal of Evolution and Technology (JET): "Ray Kurzweil and Uploading: Just Say No!" (videos) by Nick Agar (Wikipedia); abstract:

There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.

The argument is a variant of Pascal's wager which he calls Searle's wager. As far as I can tell, the paper mostly contains ideas he has already written about in his book; from Michael Hauskeller's review of Agar's Humanity's End: Why We Should Reject Radical Enhancement:

Starting with Kurzweil, he gives a detailed account of the latter’s “Law of Accelerating Returns” and the ensuing techno-optimism,  which leads Kurzweil to believe that we will eventually be able to get rid of our messy bodies and gain virtual immortality by uploading ourselves into a computer. The whole idea is ludicrous, of course, but Agar takes it quite seriously and tries hard to convince us that “it may take longer than Kurzweil thinks for us to know enough about the human brain to successfully upload it” (45) – as if this lack of knowledge was the main obstacle to mind-uploading. Agar’s principal objection, however, is that it will always be irrational for us to upload our minds onto computers, because we will never be able to completely rule out the possibility that, instead of continuing to live, we will simply die and be replaced by something that may be conscious or unconscious, but in any case is not identical with us. While this is certainly a reasonable objection, the way Agar presents it is rather odd. He takes Pascal’s ‘Wager’ (which was designed to convince us that believing in God is always the rational thing to do, because by doing so we have little to lose and a lot to win) and refashions it so that it appears irrational to upload one’s mind, because the procedure might end in death, whereas refusing to upload will keep us alive and is hence always a safe bet. The latter conclusion does not work, of course, since the whole point of mind-uploading is to escape death (which is unavoidable as long as we are stuck with our mortal, organic bodies). Agar argues, however, that by the time we are able to upload minds to computers, other life extension technologies will be available, so that uploading will no longer be an attractive option. This seems to be a curiously techno-optimistic view to take.

John Danaher (User:JohnD) examines the wager, as expressed in the book, further in two blog posts:

  1. "Should we Upload Our Minds? Agar on Searle's Wager (Part One)"
  2. "Should we Upload Our Minds? Agar on Searle's Wager (Part Two)"

After laying out what seems to be Agar's argument, Danaher constructs the game-theoretic tree and continues the criticism above:

The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):

  • (6) Eu(~U) > Eu(U)

But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6):

  • (8) Death (outcome c) is much worse for those considering whether to upload than living (outcome b or d).
  • (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).
...2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:

You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool -196 C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.

This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.

The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. de Grey thinks that -- given appropriate funding -- medical technologies could soon help us to achieve longevity escape velocity (LEV). This is when new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.

If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.
...3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweil-ian uploading. How can this be defended? Agar provides us with two reasons.

The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:

For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000....We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively...The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).

How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.

I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation-principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular it raises the question: Is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react to the decision?
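To make the lottery arithmetic in the quoted passage concrete, here is a minimal sketch in Python; the logarithmic utility function and the numbers are illustrative choices of mine, not Agar's or Danaher's:

```python
from math import log

def subjective_value(p_win, prize, utility=lambda x: log(1 + x)):
    """Expected subjective value of a lottery ticket under a concave utility."""
    return p_win * utility(prize)

# Both tickets have the same objective expected return of $100,000...
print(0.1 * 10**6, 0.001 * 10**8)       # 100000.0 100000.0
# ...but very different expected subjective values:
print(subjective_value(0.1, 10**6))     # ~1.38
print(subjective_value(0.001, 10**8))   # ~0.018
```

Any sufficiently concave utility reproduces the point: the one-in-ten shot at $1,000,000 is subjectively worth far more than the one-in-a-thousand shot at $100,000,000, despite the equal expected monetary returns.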

One point is worth noting: the asymmetry of uploading with cryonics is deliberate. There is nothing in cryonics which renders it different from Searle's wager with 'destructive uploading', because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned / committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.

Overall, I agree with the two commentators quoted above: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won't work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point, and it has already been embraced in practice by cryonicists in their analogous way (and we can expect any uploading to be either non-destructive or post-mortem); to the extent that Agar thinks this is a large or overwhelming disadvantage for uploading ("It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body."), he is incorrect.
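For readers who want to see how the numbers interact, here is a minimal sketch of the Searlian Wager as an expected-utility calculation; the probabilities and payoffs are made-up illustrative values, not anything Agar or Danaher endorse:

```python
def searlian_wager(p_upload_preserves_you, u_upload_success=100,
                   u_upload_failure=0, u_biological_life=80):
    """Compare expected utilities of destructively uploading vs. staying biological.

    u_upload_failure=0 models Agar's worst case (uploading merely kills you);
    u_biological_life=80 models LEV-extended biological life as only modestly
    worse than a successful upload, per premise (9).
    """
    eu_upload = (p_upload_preserves_you * u_upload_success
                 + (1 - p_upload_preserves_you) * u_upload_failure)
    eu_stay = u_biological_life
    return eu_upload, eu_stay

for p in (0.99, 0.9, 0.5):
    print(p, searlian_wager(p))
# With these payoffs, uploading wins only when p > 0.8: a small "intrinsic
# philosophical risk" matters, but it makes uploading irrational only if you
# also accept Agar's pessimistic estimates of p and of the relative payoffs.
```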

79 comments

Comments sorted by top scores.

comment by Morendil · 2011-12-02T22:08:33.096Z · LW(p) · GW(p)

Zombies again? Meh.

Replies from: Nornagest, vi21maobk9vp
comment by Nornagest · 2011-12-02T22:35:31.368Z · LW(p) · GW(p)

It astonishes me how many otherwise skeptical people are happy to ascribe magical properties to their substrate.

Replies from: Curiouskid, Laoch, torekp
comment by Curiouskid · 2011-12-03T20:28:19.016Z · LW(p) · GW(p)

Suppose the copy is not a perfect replication. Suppose you can emulate your brain with 90% accuracy cheaply and that each percentage increase in accuracy requires 2X more computing power. This shifts the issue from one of "magical thinking" (supposing that a software (not substrate) replica that is 100% accurate is different from the exact same software on a different substrate) to a question of whether a simulation that is 90% accurate is "good enough".

Of course, "good enough" is a vague phrase, so it's necessary to determine how we should evaluate the quality of a replica. I can think of a few off the top of my head (speed of emulation, similarity of behavioral responses to similar situations). It certainly makes for some puzzling philosophy over the nature of identity.

comment by Laoch · 2012-07-02T08:12:35.053Z · LW(p) · GW(p)

I do not ascribe any magical properties to "my substrate"; however, I think it's extremely foolish to think of the mind and body as something separate. The mind is a process of the body, at least from my understanding of contemporary cognitive science. My body is my mind, is another way to put it. I'm all for radical technology, but I think mind uploading is the most ludicrous, weak, and underwhelming strand of transhumanist thought (speaking as an ardent transhumanist).

Replies from: PrometheanFaun, TheOtherDave
comment by PrometheanFaun · 2013-08-11T04:16:39.357Z · LW(p) · GW(p)

Well, OK, What if we change our pitch from "approximate mind simulation" to "approximate identity-focal body simulation"?

Replies from: Laoch
comment by Laoch · 2013-08-12T10:40:21.631Z · LW(p) · GW(p)

A simulation of X is not X.

Replies from: gwern
comment by gwern · 2013-08-12T14:07:39.242Z · LW(p) · GW(p)

That's not a reply to his point.

Replies from: Laoch
comment by Laoch · 2013-08-13T10:25:56.439Z · LW(p) · GW(p)

That's because I don't understand his point? I'd wager though that it implies that simulations of a mind are themselves minds with subjective experience. In which case we'd have problems.

Replies from: gwern
comment by gwern · 2013-08-13T14:58:26.931Z · LW(p) · GW(p)

That's because I don't understand his point?

Then you should be asking him more questions, not replying with dogma which begs the question; for example, is a 'simulation' of arithmetic also arithmetic? If it is, then your formula has been refuted.

Replies from: Laoch, Laoch
comment by Laoch · 2013-12-02T09:18:13.355Z · LW(p) · GW(p)

Bump.

comment by Laoch · 2013-08-15T11:23:31.381Z · LW(p) · GW(p)

What's a simulation of arithmetic except just arithmetic? In any case PrometheanFaun what does "approximate identity-focal body simulation" mean?

Replies from: Laoch
comment by Laoch · 2013-12-02T09:18:04.198Z · LW(p) · GW(p)

Accidentally retracted:

What's a simulation of arithmetic except just arithmetic? In any case PrometheanFaun what does "approximate identity-focal body simulation" mean?

comment by TheOtherDave · 2012-07-02T16:00:32.009Z · LW(p) · GW(p)

I'm not sure that's the right question to ask.

I agree that Dave is partially implemented by a brain and partially implemented by a non-brain body. I would also say that Dave is partially implemented by a social structure, and partially implemented by various physical objects outside my body.

If Dave undergoes a successful "mind upload," we have successfully implemented Dave on a different platform. We can ask, then, how much of Dave's existing implementation in each system needs to be re-implemented in the new platform in order for the resulting entity to be Dave. We can also ask how much of the new platform implementation of Dave is unique to Dave, and how much of it is essentially identical for every other "uploaded" human.

Put a different way: if we already have a generic human template installed on our target platform, how much of Dave's current implementation do we need to port over in order to preserve Dave? I suspect it's a pretty vanishingly small amount, actually, and I expect that >99% of it is stored in my brain.

Replies from: Laoch
comment by Laoch · 2012-07-02T17:01:46.286Z · LW(p) · GW(p)

What question was I asking? I think you replied to the wrong post. But for what it's worth, the brain is a subset of the body.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-02T17:09:55.339Z · LW(p) · GW(p)

No, I replied to the post I meant to reply to.

I agree that the brain is a subset of the body, and I agree that a not-insignificant portion of "mind" is implemented in parts of the body other than the brain, but I don't think this means anything in particular about the viability of mind uploads.

Replies from: Laoch
comment by Laoch · 2012-07-02T20:19:02.209Z · LW(p) · GW(p)

I can't disagree: there are no parts of the body/brain that aren't amenable, i.e. non-magical and thus capable of emulation. I guess where I'm having trouble is with 1) the application and 2) how and where you draw the line between the physical workings of the body that are significant to the phenomenon of mind and those that aren't. What colours my thinking on this are people like von Uexkuell, in the sense that what encapsulates our cognition is how we function as animals.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-02T21:24:34.369Z · LW(p) · GW(p)

I'm not sure what you mean by the application. Implementing the processes we identify as Dave on a different platform is a huge engineering challenge, to be sure, and nobody knows how to do it yet, so if that's what you mean you are far from being alone in having trouble with that part.

As for drawing the line, as I tried to say originally, I draw it in terms of analyzing variance. If Dave is implemented on some other platform, Dave will still have a body, although it will be a different body than the one Dave has now. The question then becomes, how much difference does that make?

If we come to function differently as animals, or if we come to function as something other than animals, our cognition will be encapsulated in different ways, certainly, but whether we should care or not depends quite a bit on what we value about our current encapsulation.

comment by torekp · 2011-12-03T22:01:43.924Z · LW(p) · GW(p)

Where's the attribution of magic? You just have a different semantics of "conscious", "pain", "pleasure" and so on than they do. They hold that it applies to a narrower range of things, you hold that it applies to a wider range. Why is one semantics more "magical thinking" than the other?

Suppose we're arguing about "muscle cars", and I insist that a "muscle car" by definition has to have an internal combustion engine. You say that electric cars can be muscle cars too. Our friend says that neither definition is true nor false, and that the historical record of muscle cars and their enthusiasts is insufficient to settle the matter. Our friend might be right, in which case we both may be guilty of magical thinking - but not just one of us. Our friend might be wrong, in which case one of us may be mistaken, but that's different.

comment by vi21maobk9vp · 2011-12-03T09:03:11.833Z · LW(p) · GW(p)

Nope, not exactly zombies.

Alive and well person - just not you.

Replies from: Morendil
comment by Morendil · 2011-12-03T11:26:47.315Z · LW(p) · GW(p)

The reference to Searle clearly classifies this as a zombie argument: it hinges on consciousness.

The "not you" argument makes no sense at all - if we are positing a fully conscious, fully intelligent entity which shares all of my memories, all of my preferences, all of my dispositions, all of my projects; which no person, even my wife or children, would be able to tell apart from the meat-me; but which nevertheless is not me.

The rare neurological syndrome known as Capgras delusion illustrates why words like "me" or "self" carry such a mysterious aura: the sense of someone's identity is the result of not one but several computations carried out in different parts of the human brain, which sometimes get out of step resulting in weird distortions of identity-perception.

But to the extent that "self" is a non-mysterious notion associated with being possessed of a certain set of memories, of future plans and of dispositions, our biological selves already become "someone other than they are" quite naturally with the passage of time; age and experience turn you into someone with a slightly different set of memories, plans and dispositions.

In that sense, uploading and aging are not fundamentally different processes, and any argument which applies to one applies to the other as far as the preservation of "self" is concerned.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-03T12:32:52.953Z · LW(p) · GW(p)

Well, there can be a question of what rate of disposition change is consistent with still being the same person. As for telling them apart - well, if someone cannot tell a computer program and an animal apart, they have a problem.

It looks like humanity could currently learn a lot about the concept of self, but seems to be a bit afraid to try, e.g. by medically and temporarily freezing the inter-hemisphere link... What would the person remember as their "past self" after re-merging?

All this is moot anyway because gradual uploads are as likely to be possible as stop-and-go ones.

comment by lukeprog · 2011-12-02T23:32:53.318Z · LW(p) · GW(p)

Also, more generally, this is from JET's first new issue in over a year.

comment by buybuydandavis · 2011-12-03T06:45:53.937Z · LW(p) · GW(p)

The identity of an object is a choice, a way of looking at it. The "right" way of making this choice is the way that best achieves your values. When you ask yourself what object is really you, and therefore to be valued, you're engaged in a tail-biting exercise without a "rational" answer.

If you value the continuance of your thought patterns, you'll likely be happy to upload. If you value your biological substrate, you won't. In a world where some do and some don't, I don't see either as irrational - they just value different things, and take different actions thereby. You're not "irrational" for picking Coke over Pepsi.

Replies from: AlephNeil
comment by AlephNeil · 2011-12-03T16:49:29.726Z · LW(p) · GW(p)

The identity of an object is a choice, a way of looking at it. The "right" way of making this choice is the way that best achieves your values.

I think that's really the central point. The metaphysical principles which either allow or deny the "intrinsic philosophical risk" mentioned in the OP are not like theorems or natural laws, which we might hope some day to corroborate or refute - they're more like definitions that a person either adopts or does not.

I don't see either as irrational

I have to part company here - I think it is irrational to attach 'terminal value' to your biological substrate (likewise paperclips), though it's difficult to explain exactly why. Terminal values are inherently irrational, but valuing the continuance of your thought patterns is likely to be instrumentally rational for almost any set of terminal values, whereas placing extra value on your biological substrate seems like it could only make sense as a terminal value (except in a highly artificial setting e.g. Dr Evil has vowed to do something evil unless you preserve your substrate).

Of course this raises the question of why the deferred irrationality of preserving one's thoughts in order to do X is better than the immediate irrationality of preserving one's substrate for its own sake. At this point I don't have an answer.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-03T18:33:55.892Z · LW(p) · GW(p)

The metaphysical principles which either allow or deny the "intrinsic philosophical risk" mentioned in the OP are not like theorems or natural laws, which we might hope some day to corroborate or refute - they're more like definitions that a person either adopts or does not.

What do the definitions do?

Replies from: AlephNeil
comment by AlephNeil · 2011-12-03T18:55:39.584Z · LW(p) · GW(p)

I don't understand the question, but perhaps I can clarify a little:

I'm trying to say that (e.g.) analytic functionalism and (e.g.) property dualism are not like inconsistent statements in the same language, one of which might be confirmed or refuted if only we knew a little more, but instead like different choices of language, which alter the set of propositions that might be true or false.

It might very well be that the expanded language of property dualism doesn't "do" anything, in the sense that it doesn't help us make decisions.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-03T19:33:55.084Z · LW(p) · GW(p)

OK, the problem I was getting at is that adopting a definition usually has consequences that make some definitions better than others, thus not exempting them from criticism, with implication of their usefulness still possible to refute.

Replies from: AlephNeil
comment by AlephNeil · 2011-12-03T23:23:36.065Z · LW(p) · GW(p)

I agree that definitions (and expansions of the language) can be useful or counterproductive, and hence are not immune from criticism. But still, I don't think it makes sense to play the Bayesian game here and attach probabilities to different definitions/languages being correct. (Rather like how one can't apply Bayesian reasoning in order to decide between 'theory 1' and 'theory 2' in my branching vs probability post.) Therefore, I don't think it makes sense to calculate expected utilities by taking a weighted average over each of the possible stances one can take in the mind-body problem.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-04T00:13:07.362Z · LW(p) · GW(p)

Gosh, that's not useful in practice far more widely than that, and not at all what I suggested. I object to exempting any and all decisions from potential to be incorrect, no matter what tools for noticing the errors are available or practical or worth applying.

comment by AlephNeil · 2011-12-02T22:38:14.352Z · LW(p) · GW(p)

If the rules of this game allow one side to introduce a "small intrinsic philosophical risk" attached to mind-uploading, even though it's impossible in principle to detect whether someone has suffered 'arbitrary Searlean mind-annihilation', then surely the other side can postulate a risk of arbitrary mind-annihilation unless we upload ourselves. (Even ignoring the familiar non-Searlean mind-annihilation that awaits us in old age.)

Perhaps a newborn mind has a half-life of only three hours before spontaneously and undetectably annihilating itself.

Replies from: gwern
comment by gwern · 2011-12-02T22:40:26.139Z · LW(p) · GW(p)

You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion 'uploading doesn't actually work'?

Replies from: AlephNeil, AlephNeil
comment by AlephNeil · 2011-12-02T22:59:03.255Z · LW(p) · GW(p)

For any particular proposal for mind-uploading, there's probably a significant risk that it doesn't work, but I understand that to mean: there's a risk that what it produces isn't functionally equivalent to the person uploaded. Not "there's a risk that when God/Ripley is watching everyone's viewscreens from the control room, she sees that uploaded person's thoughts are on a different screen from the original."

Replies from: gwern
comment by gwern · 2011-12-02T23:10:26.725Z · LW(p) · GW(p)

Of course there is such a risk. We can't even do formal mathematics without significant and ineradicable risk in the final proof; what on earth makes you think any anti-zombie or anti-Ripley proof is going to do any better? And in formal math, you don't usually have tons of experts disagreeing with the proof and final conclusion either. If you think uploading is so certain that the risk it is fundamentally incorrect is zero or epsilon, you have drunk the koolaid.

Replies from: Nornagest, TheOtherDave, billswift
comment by Nornagest · 2011-12-03T00:50:00.025Z · LW(p) · GW(p)

I'd rate the chance that early upload techniques miss some necessary components of sapience as reasonably high, but that's a technical problem rather than a philosophical one. My confidence in uploading in principle, on the other hand, is roughly equivalent to my confidence in reductionism: which is to say pretty damn high, although not quite one or one minus epsilon. Specifically: for all possible upload techniques to generate a discontinuity in a way that, say, sleep doesn't, it seems to me that not only do minds need to involve some kind of irreducible secret sauce, but also that that needs to be bound to substrate in a non-transferable way, which would be rather surprising. Some kind of delicate QM nonsense might fit the bill, but that veers dangerously close to woo.

The most parsimonious explanation seems to be that, yes, it involves a discontinuity in consciousness, but so do all sorts of phenomena that we don't bother to note or even notice. Which is a somewhat disquieting thought, but one I'll have to live with.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2011-12-03T09:07:23.855Z · LW(p) · GW(p)

Actually, http://lesswrong.com/lw/7ve/paper_draft_coalescing_minds_brain/ seems to discuss a way for uploading to be a non-destructive transition. We know that the brain can learn to use implanted neurons under some very special conditions now; so maybe you could first learn to use an artificial mind-holder (without a mind yet) as a minor supplement, and then learn to use it more and more until the death of your original brain is just a flesh wound. Maybe not - but it does seem to be a technological problem.

Replies from: Nornagest
comment by Nornagest · 2011-12-03T18:09:19.787Z · LW(p) · GW(p)

Yeah, I was assuming a destructive upload for simplicity's sake. Processes similar to the one you outline don't generate an obvious discontinuity, so I imagine they'd seem less intuitively scary; still, a strong Searlean viewpoint probably wouldn't accept them.

comment by TheOtherDave · 2011-12-03T01:37:35.526Z · LW(p) · GW(p)

This double-negative "if you really believe not-X then you're wrong" framing is a bit confusing, so I'll just ask.

Consider the set P of all processes that take a person X1 as input and produce X2 as output, where there's no known test that can distinguish X1 from X2. Consider three such processes:
P1 - A digital upload of X1 is created.
P2 - X1 is cryogenically suspended and subsequently restored.
P3 - X1 lives for a decade of normal life.

Call F(P) the probability that X2 is in any sense that matters not the same person as X1, or perhaps not a person at all.

Do you think F(P1) is more than epsilon different from F(P2)? Than F(P3)?
Do you think F(P2) is more than epsilon different from F(P3)?

For my part, I consider all three within epsilon of one another, given the premises.

Replies from: gwern
comment by gwern · 2011-12-04T09:21:37.632Z · LW(p) · GW(p)

Do you think F(P1) is more than epsilon different from F(P2)? Than F(P3)? Do you think F(P2) is more than epsilon different from F(P3)?

Erm, yes, to all three. The two transitions both involve things which are initially plausible and have not been driven down to epsilon (which is a very small quantity) by subsequent research.

For example, we still don't have great evidence that brain activity isn't dynamically dependent on electrical activity (among others!) which is destroyed by death/cryonics. All we have are a few scatter-shot examples about hypothermia and stuff, which is a level of proof I would barely deign to look at for supplements, much less claim that it's such great evidence that it drives down the probability of error to epsilon!

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-04T15:01:16.899Z · LW(p) · GW(p)

OK, thanks for clarifying.

comment by billswift · 2011-12-03T18:19:56.225Z · LW(p) · GW(p)

Indeed, the line in the quote:

argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.

Could apply equally well to crossing a street. There is very, very little we can do without some "ineliminable risk" being attached to it.

We have to balance the risks and expected benefits of our actions, which requires knowledge, not philosophical "might-be"s.

Replies from: gwern
comment by gwern · 2011-12-03T18:31:36.938Z · LW(p) · GW(p)

Yes, I agree, as do the quotes and even Agar: because this is not Pascal's wager, where the infinities render the probabilities irrelevant, we ultimately need to fill in specific probabilities before we can decide that destructive uploading is a bad idea, and this is where Agar goes terribly wrong - he presents poor arguments that the probabilities will be low enough to make it an obviously bad idea. But I don't think this point is relevant to this conversation thread.

Replies from: billswift
comment by billswift · 2011-12-04T00:53:08.216Z · LW(p) · GW(p)

It occurred to me when I was reading the original post, but I was inspired to post it here mostly as a me-too to your line:

We can't even do formal mathematics without significant and ineradicable risk in the final proof

That is, reinforcing that everything has some "ineradicable risk".

comment by AlephNeil · 2011-12-03T17:17:57.352Z · LW(p) · GW(p)

You really think there is logical certainty that uploading works in principle and your suggestions are exactly as likely as the suggestion 'uploading doesn't actually work'?

How would you show that my suggestions are less likely? The thing is, it's not as though "nobody's mind has annihilated" is data that we can work from. It's impossible to have such data except in the first-person case, and even there it's impossible to know that your mind didn't annihilate last year and then recreate itself five seconds ago.

We're predisposed to say that a jarring physical discontinuity (even if afterwards, we have an agent functionally equivalent to the original) is more likely to cause mind-annihilation than no such discontinuity, but this intuition seems to be resting on nothing whatsoever.

Replies from: gwern
comment by gwern · 2011-12-03T17:21:33.276Z · LW(p) · GW(p)

We're predisposed to say that a jarring physical discontinuity (even if afterwards, we have an agent functionally equivalent to the original) is more likely to cause mind-annihilation than no such discontinuity, but this intuition seems to be resting on nothing whatsoever.

Yes. How bizarre of us to be so predisposed.

Replies from: AlephNeil
comment by AlephNeil · 2011-12-03T17:27:07.692Z · LW(p) · GW(p)

Nice sarcasm. So it must be really easy for you to answer my question then: "How would you show that my suggestions are less likely?"

Right?

Replies from: gwern
comment by gwern · 2011-12-03T17:36:07.532Z · LW(p) · GW(p)

Do you have any argument that all our previous observations where jarring physical discontinuities tend to be associated with jarring mental discontinuities (like, oh I don't know, death) are wrong? Or are you just trying to put the burden of proof on me and smugly use an argument from ignorance?

Replies from: AlephNeil
comment by AlephNeil · 2011-12-03T17:56:42.866Z · LW(p) · GW(p)

Of course, we haven't had any instances of jarring physical discontinuities not being accompanied by 'functional discontinuities' (hopefully it's clear what I mean).

But the deeper point is that the whole presumption that we have 'mental continuity' (in a way that transcends functional organization) is an intuition founded on nothing.

(To be fair, even if we accept that these intuitions are indefensible, it remains to be explained where they come from. I don't think it's all that "bizarre".)

comment by JohnD · 2011-12-03T00:21:57.848Z · LW(p) · GW(p)

First of all, thanks for sharing from my blog posts. Second, and perhaps unsurprisingly, I disagree with Hauskeller's interpretation of Agar's argument as being "curiously techno-optimistic" because of its appeal to LEV. Agar isn't particularly optimistic about LEV's chances of success (as is shown by his comments in subsequent chapters of his book). He just thinks LEV is more likely than the combination of Strong AI and apparently successful mind-uploading.

comment by Douglas_Knight · 2011-12-03T00:48:26.810Z · LW(p) · GW(p)

I find it odd that the reviewer Hauskeller, who finds uploading not just absurd but obviously absurd, bothers to address this topic, though he seems fair. Reading the rest of the review, I find it similarly odd that, since Agar rejects longevity, he bothers to talk about uploading. Finally, the review claims that Agar says something simply stupid: he rejects Bostrom's complaints about status quo bias on the grounds that they are the endowment effect. They are the same thing! How did Agar pick up the phrase "endowment effect" without noticing that people were condemning it? I'm not interested in what he might have meant, but in how he made the mistake of thinking that other people thought the phrase a good thing.

comment by [deleted] · 2012-03-14T07:18:10.324Z · LW(p) · GW(p)

I think the question of assigning a probability to whether or not you'll 'really go on' or 'be replaced by something else' is an example of a fundamentally confused question.

comment by MileyCyrus · 2011-12-03T06:14:44.421Z · LW(p) · GW(p)

Gwern notes that when you create an upload of yourself, you risk that upload being abused. A sadist could copy your upload millions of times and torture you for subjective aeons.

Replies from: rwallace
comment by rwallace · 2011-12-03T06:50:27.916Z · LW(p) · GW(p)

If you think that kind of argument holds water, you should commit suicide today lest a sadist kidnap you and torture you in real life.

Replies from: gwern, NancyLebovitz, None, MileyCyrus
comment by gwern · 2011-12-03T16:45:36.498Z · LW(p) · GW(p)

I would point out that the scenario I was writing about was clearly one in which ems are common and em society is stable. If you think that in such a society, there won't be em kidnapping or hacking, for interrogation, slavery, or torture, you hold very different views from mine indeed. (If you think such a society won't exist, that's another thing entirely.)

As a human, you can only die once.

Replies from: Curiouskid, rwallace
comment by Curiouskid · 2011-12-03T20:38:02.223Z · LW(p) · GW(p)

I think such a society won't exist. I think that much war and conflict in general can be reduced to unmet human needs. Those needs could be physical (land, oil, money) or emotional/ideological (theism, patriotism, other forms of irrationality). However, I think that if uploads were to happen, then we would be able to solve the other problems. Wouldn't uploading imply the end of psychological disturbance? You wouldn't need to resort to chemically altering the brain; you could do it digitally. There would be no physical limitations on how you alter the data in your uploaded mind, whereas there are limitations on chemically altering your mind today. No longer would you need to go on vacation, or buy videogames. You could just go to a virtual reality.

Even if the things I predict don't happen (I use them more as examples than as predictions), the point is that trying to imagine the specific psychology of an upload is more speculative than new-age metaphysics. However, I think that, as far as we're concerned, we can make extremely vague predictions like generally good (above) or generally bad (kidnapping, hijacks, torture). Since bad things are generally due to psychological disturbance or lack of resources, my bet is on generally good.

comment by rwallace · 2011-12-03T17:46:05.474Z · LW(p) · GW(p)

There is kidnapping for interrogation, slavery and torture today, so there is no reason to believe there won't be such in the future. But I don't believe it will make sense in the future to commit suicide at the mere thought, any more than it does today.

As for whether such a society will exist, I think it's possible it may. It's possible there may come a day when people don't have to die. And there is a better chance of that happening if we refrain from poisoning our minds with scare stories optimized for appeal to primate brains over correspondence to external reality.

Replies from: gwern
comment by gwern · 2011-12-04T09:10:11.090Z · LW(p) · GW(p)

But I don't believe it will make sense in the future to commit suicide at the mere thought, any more than it does today.

At least, not unless you are an upload and your suicide trigger is the equivalent of those tripwires that burn the contents of the safe.

comment by NancyLebovitz · 2011-12-03T16:13:30.200Z · LW(p) · GW(p)

I can make a reasonable estimate of the risk of being kidnapped or arrested and being tortured.

There's a lot less information about the risk of ems being tortured, and such information may never be available, since I think it's unlikely that computers can be monitored to that extent.

People do commit suicide to escape torture, but it doesn't seem to be the most common response. Also, fake executions are considered to be a form of torture because of the fear of death. So far as I know, disappointment at finding out one hasn't been killed isn't considered to be part of what's bad about fake executions.

Replies from: None
comment by [deleted] · 2011-12-04T00:39:16.543Z · LW(p) · GW(p)

"I can make a reasonable estimate of the risk of being kidnapped or arrested and being tortured.

"There's a lot less information about the risk of ems being tortured, and such information may never be available, since I think it's unlikely that computers can be monitored to that extent."

If we can't make a reasonable estimate, what estimate do we make? The discounted validity of the estimate is incorporated in the prior probability. (Actually I'm not sure if this always works, but a consistent Bayesianism must so hold. Please correct me if this is wrong.)

My reaction to the most neutral form of the question about uploading--"If offered the certain opportunity of success at no cost, would I accept?"--is "No." The basis is my fear that I wouldn't like the result. I justify it—perhaps after the fact—by assigning an equal a priori likelihood to a good and bad outcome. In Nancy's terms, I'm saying that we have no ability to make a reasonable estimate. The advantage of putting it my way is that it implies a conclusion, rather than resulting in agnosticism (but at the cost of a less certain justification).

In general, I think people over-value the continuation of life. One consequence is that people put too little effort into mitigating the circumstances of their death--which many times involves inclining it to come sooner rather than later.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-12-04T12:29:32.929Z · LW(p) · GW(p)

If we can't make a reasonable estimate, what estimate do we make?

What's the status of error bars in doing this sort of reasoning? It seems to me that a probability of .5 +/- epsilon (a coin you have very good reason to think is honest) is a very different thing from .5 +/- .3 (outcome of an election in a country about which you only know that they have elections and the names of the candidates).

I'm not sure +/- .3 is reasonable-- I think I'm using it to represent that people familiar with that country might have a good idea who'd win.
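One way to make the error-bar intuition concrete is to represent each belief about the unknown probability as a full distribution rather than a point estimate; below is a minimal sketch, assuming a Beta distribution (a modelling choice of mine, not something NancyLebovitz specifies):

```python
from math import sqrt

def beta_mean_std(a, b):
    """Mean and standard deviation of a Beta(a, b) belief about a probability."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

print(beta_mean_std(5000, 5000))  # ~(0.5, 0.005): the well-tested coin
print(beta_mean_std(0.9, 0.9))    # ~(0.5, 0.30):  the unknown election
# Both beliefs share the same point estimate, 0.5, but the second shifts a
# great deal given even a little further evidence; a bare ".5" hides that.
```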

comment by [deleted] · 2011-12-03T07:02:33.825Z · LW(p) · GW(p)

Do you have some substantial disagreement with the possibility of the scenario?

Replies from: rwallace
comment by rwallace · 2011-12-03T07:10:39.117Z · LW(p) · GW(p)

With the possibility? Of course not. Anything that doesn't involve a logical self-contradiction is possible. My disagreement is with the idea that it is sane or rational to base decisions on fantasies about being kidnapped and tortured in the absence of any evidence that this is at all likely to occur.

Replies from: MileyCyrus, None
comment by MileyCyrus · 2011-12-03T07:54:44.243Z · LW(p) · GW(p)

Evidence:

People are greedy. When people have the opportunity to exploit others, they often take it.

If anyone gets a hold of your em, they can torture you for subjective aeons. Anyone who has a copy of your em can blackmail you: "Give me 99% of your property. For every minute you delay, I will torture your ems for a million subjective years."

And what if someone actually wants to hurt you, instead of just exploit you? You and your romantic partner get in a fight. In a fit of passion, she leaves with a copy of your em. By the time the police find her the next day, you've been tortured for a subjective period longer than the age of the universe.

Very few, perhaps no one, will have the engineering skill to upload a copy of themselves without someone else's assistance. When you're dead and Apple is uploading your iEm, you're trusting Apple not to abuse you. Is anyone worthy of that trust? And even if you're uploaded safely, how will you store backup copies? And how will you protect yourself against hackers?

Sound more plausible now?

Replies from: rwallace
comment by rwallace · 2011-12-03T08:08:05.018Z · LW(p) · GW(p)

If you postulate ems that can run a million subjective years a minute (which is not at all scientifically plausible), the mainline copies can do that as well, which means talking about wall clock time at all is misleading; the new subjective timescale is the appropriate one to use across the board.

As for the rest, people are just as greedy today as they will be in the future. Organized criminals could torture you until you agree to sign over your property to them. Your girlfriend could pour petrol over you and set you on fire while you're asleep. If you sign up for a delivery or service with Apple and give them your home address, you're trusting them not to send thugs around to your house and kidnap you. Ever fly on an airliner? Very few, perhaps no one, will have the engineering skill to fly without someone else's assistance. When you're on the plane, you're trusting the airline not to deliver you to a torture camp. Is anyone worthy of that trust? And even if you get home safely, how will you stay safe while you're asleep? And how will you protect yourself against criminals?

Does committing suicide today sound a more plausible idea now?

Replies from: drethelin, MileyCyrus
comment by drethelin · 2011-12-03T08:18:14.895Z · LW(p) · GW(p)

All of those scenarios are not only extremely inconvenient and not very profitable for the people involved, but also have high risks of getting caught. This means that the probability of any of them taking place is marginal, because the incentives just aren't there in almost any situation. On the other hand, a digital file is hugely more easy to acquire, incarcerate, transport, and torture, and also easier to hide from any authorities. If someone gets their hands on a digital copy of you, torturing you for x period of time can be as easy as pressing a button. You might never kidnap an orchestra and force them to play for you, but millions of people download MP3s illegally.

I would still rather be uploaded rather than die, but I don't think you're giving the opposing point of view anything like the credit it deserves.

Replies from: None, rwallace
comment by [deleted] · 2011-12-03T14:24:26.876Z · LW(p) · GW(p)

On the other hand, a digital file is hugely more easy to acquire, incarcerate, transport, and torture, and also easier to hide from any authorities. If someone gets their hands on a digital copy of you, torturing you for x period of time can be as easy as pressing a button.

If Y amount of computational resources can be used to simulate a million person-years, then the opportunity cost of using Y to torture someone is very large.

comment by rwallace · 2011-12-03T13:53:40.275Z · LW(p) · GW(p)

An upload, at least of the early generations, is going to require a supercomputer the size of a rather large building to run, to point out just one of the reasons why the analogy with playing a pirate MP3 is entirely spurious.

comment by MileyCyrus · 2011-12-03T08:24:19.524Z · LW(p) · GW(p)

Now you're just getting snarky.

This document is a bit old, but:

...the laws of physics as now understood would allow one gram (more or less) to store and run the entire human race at a million subjective years per second.

No one can hurt me today the way I could be hurt in a post-em world. In a world where human capacity for malevolence is higher, more precaution is required. One should not rule out suicide as a precaution against being tortured for subjective billions of years.

Replies from: rwallace
comment by rwallace · 2011-12-03T13:59:32.830Z · LW(p) · GW(p)

I've been snarky for this entire conversation - I find advocacy of death extremely irritating - but I am not just snarky by any means. The laws of physics as now understood allow no such thing, and even the author of the document to which you refer - a master of wishful thinking - now regards it as obsolete and wrong. And the point still holds - you cannot benefit today the way you could in a post-em world. If you're prepared to throw away billions of years of life as a precaution against the possibility of billions of years of torture, you should be prepared to throw away decades of life as a precaution against the possibility of decades of torture. If you aren't prepared to do the latter, you should reconsider the former.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-03T15:10:29.889Z · LW(p) · GW(p)

...the author of the document to which you refer - a master of wishful thinking...

I rather subscribe to how Greg Egan describes what the author is doing:

I don’t think you’re a “bad guy”. I do think it’s a shame that you’re burying an important and interesting subject — the kind of goals and capabilities that it would be appropriate to encode in AI — under a mountain of hyperbole.

comment by [deleted] · 2011-12-03T07:15:17.779Z · LW(p) · GW(p)

Also, in the absence of any evidence that this is at all unlikely to occur. But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility. It seems to me you're disagreeing with some phantasm you imported into the conversation.

Replies from: rwallace
comment by rwallace · 2011-12-03T07:25:17.032Z · LW(p) · GW(p)

Also, in the absence of any evidence that this is at all unlikely to occur.

If you think the situation is that symmetrical, you should be indifferent on the question of whether to commit suicide today.

But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility.

If it had been generated as part of an exhaustive listing of all possible scenarios, I would have refrained from comment. As it is, being raised in the context of a discussion on whether one should try for uploading in the unlikely event one lives that long, it's obviously intended to be an argument for a negative answer, which means it constitutes:

  1. http://lesswrong.com/lw/19m/privileging_the_hypothesis/

  2. Advocacy of death.

Replies from: None
comment by [deleted] · 2011-12-03T08:12:32.240Z · LW(p) · GW(p)

If you think the situation is that symmetrical, you should be indifferent on the question of whether to commit suicide today.

Do you have some actual data for me to update on? Otherwise, we're just bickering over unjustifiable priors. That's why I'm withholding judgment.

As it is, being raised in the context of a discussion on whether one should try for uploading in the unlikely event one lives that long, it's obviously intended to be an argument for a negative answer

It did come out as this later, but not "obviously" from the original comment.

comment by MileyCyrus · 2011-12-03T06:57:22.307Z · LW(p) · GW(p)

My physical body can only be tortured for a few decades, tops. An em can be tortured for a billion years, along with a billion em copies of myself.

Replies from: None
comment by [deleted] · 2011-12-03T07:05:06.579Z · LW(p) · GW(p)

Not the correct counterargument. Your torturer merely needs to keep you alive, or possibly cryopreserved, until lengthening your natural lifespan becomes possible.

Replies from: MileyCyrus
comment by MileyCyrus · 2011-12-03T07:27:26.667Z · LW(p) · GW(p)

Which is not a plausible scenario in today's world.

If em torture is viable in the future and I don't think I can defend myself, I will seriously consider suicide. But rwallace's comment was regarding today's world.

Replies from: rwallace
comment by rwallace · 2011-12-03T07:37:44.794Z · LW(p) · GW(p)

The comment holds regardless. In today's world, you can only be tortured for a few decades, but by the same token you can only lose a few decades of lifespan by committing suicide. If in some future world you can be tortured for a billion years, then you will also be losing a billion years of happy healthy life by committing suicide. If you think the mere possibility of torture - with no evidence that it is at all likely - will be grounds for committing suicide in that future world, then you should think it equally good grounds for committing suicide today. If you agree with me that would be insanely irrational today, you should also agree it will be insanely irrational in that future world.

Replies from: MileyCyrus
comment by MileyCyrus · 2011-12-03T08:11:50.553Z · LW(p) · GW(p)

I am risk averse, and I suspect the entire human species is too. Suppose I have to choose between two bets:

A: 50% chance of living 100 happy years. 50% chance of living 100 torture years.

B: 50% chance of living 1,000,000 happy years, 50% chance of living 1,000,000 torture years.

I will pick the first because it has the better bad option. While additional happy years have diminishing additional utility, additional torture years have increasing dis-utility. I would rather have a 50% chance of being tortured for 10 years than a 10% chance of being tortured for 50 years.

When WBE is invented, the stakes will be upped. The good possibilities get much better, and the bad possibilities get much worse. As a risk averse person, this scares me.
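A minimal sketch of that risk-aversion claim, assuming (as illustrative choices of mine, not MileyCyrus's) a square-root utility for happy years and a squared disutility for torture years:

```python
from math import sqrt

def utility(happy_years=0.0, torture_years=0.0):
    # Diminishing returns on happiness, increasing disutility of torture.
    return sqrt(happy_years) - torture_years ** 2

def expected_utility(lottery):
    """lottery: iterable of (probability, happy_years, torture_years) tuples."""
    return sum(p * utility(h, t) for p, h, t in lottery)

bet_a = [(0.5, 100, 0), (0.5, 0, 100)]
bet_b = [(0.5, 10**6, 0), (0.5, 0, 10**6)]
print(expected_utility(bet_a))  # ~ -4995: bad, but bounded
print(expected_utility(bet_b))  # ~ -5e11: astronomically worse
# The same shape explains preferring a 50% chance of 10 torture years over a
# 10% chance of 50 torture years: 0.5 * 10**2 = 50 < 0.1 * 50**2 = 250.
```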

Replies from: FAWS
comment by FAWS · 2011-12-03T12:35:07.270Z · LW(p) · GW(p)

Would you prefer

C: 50% chance of living 1 happy minute. 50% chance of living 1 torture minute.

over both? If not, why not?

Replies from: Oligopsony
comment by Oligopsony · 2011-12-03T15:19:31.689Z · LW(p) · GW(p)

At those ratios, absolutely. I'm not sure how to explain why, since it just seems obvious that suicide would be preferable to a 50% chance of being tortured for a century. (I'm not sure at what ratio it would become a real dilemma.)