Every Implementation of You is You: An Intuition Ladder

post by lolbifrons · 2018-03-29T05:14:07.396Z · LW · GW · 47 comments

I was recently arguing in /r/transhumanism on reddit about the viability of uploading/forking consciousness, and I realized I didn't have any method of assessing where someone's beliefs actually lay - where I might need to move them from if I wanted to convince them of my position.

So I made an intuition ladder. Please correct me if I made any mistakes (that aren't by design), and let me know if you think there's anything past the final level.

Some instructions on how to use this: Read the first level. If you notice something definitely wrong with it, move to the next level. Repeat until you come to a level where your intuition about the entire level is either "This is true" or "I'm not sure." That is your level.

1. Clones and copies (the result of a medical procedure that physically reproduces you exactly, including internal brain state) are the same thing. Every intuition I have about a clone, or an identical twin, applies one-to-one to copies as well, and vice versa. Because identical twins are completely different people on every level except genetically, copies are exactly the same way.

2. Clones and copies aren't the same thing, as copies had a brain and memories in common with me in the past, but for one of us those memories are false and that copy is just a copy, while my consciousness would remain with the privileged original.

3. Copies had a common brain and memories, which make them indistinguishable from each other in principle, so they believe they're me, and they're not wrong in any meaningful sense, but I don't anticipate waking up from any copying procedure in any body but the one I started in. As such, I would never participate in a procedure that claims to "teleport" me by making a copy at a new location and killing the source copy, because I would die.

4. Copies are indistinguishable from each other in principle, even from the inside, and thus I actually become both, and anticipate waking up as either. But once I am one or the other, my copy doesn't share an identity with me. Furthermore, if a copy is destroyed before I wake up from the procedure, I might die, or I might wake up as the copy that is still alive. As such, the fork-and-die teleport is a gamble for my life, and I would only attempt it if I were for some reason comfortable with the chance that I would die.

5. If a copy is destroyed during the procedure, I will wake up as the other one with near certainty, but this is a discrete consequence of how soon the destruction happens: if one copy were to die shortly after the procedure instead, I wouldn't be any less likely to wake up as that one. I am therefore willing to fork-and-die teleport as long as the procedure is flawless. Furthermore, if I were instead backed up and copied from the backup at a later date, I would certainly wake up immediately after the procedure, and not anticipate waking up subjectively-immediately as the backup copy in the future.

6. I anticipate waking up as a copy that will die soon after the procedure - or that for some other reason has a lower amplitude according to the Born rule - with correspondingly lower likelihood, as a continuous function; and when the copy is instantiated is entirely irrelevant to my anticipation of what I will experience, as long as the copy has the mind state I did when the procedure was done. However, consciousness can only transfer to copies made of me. I can never wake up as an identical mind state somewhere in the universe if it wasn't the result of copying, if such a thing were to exist, even in principle.

7. Continuity of consciousness is completely an artifact of mind state, including memory, and need not strictly require adjacency in spacetime at all. If, by some complete miraculous coincidence, in a galaxy far far away, a person exists at some time t' who is exactly identical to me at some time t in my life, the way a copy made of me at t would be, then at the moment t I anticipate my consciousness transferring to that faraway not-copy with some probability. The only reason this doesn't happen is the sheer unlikelihood of an exact mind state being duplicated, memories and all, by happenstance, anywhere in spacetime, even given the age of the universe from beginning to end. However, my consciousness can only be implemented on a human brain, or something that precisely mimics its internal structure.

8. Copies of me need not be or even resemble a human being. I am just an algorithm, and the hardware I am implemented on is irrelevant. If it's done on a microchip or a human brain, any implementation of me is me. However, simulations aren't truly real, so an implementation of me in a simulated world, no matter how advanced, isn't actually me or conscious to the extent I am in the reality I know.

9. Implementations of me can exist within simulations that are sufficiently advanced to implement me fully. If a superintelligence who is able to perfectly model human minds is using that ability to consider what I would do, their model of me is me. Indeed, the only way to model me perfectly is to implement me.

10. In progress, see Dacyn [LW · GW]'s comment below [LW(p) · GW(p)].

47 comments

comment by Said Achmiz (SaidAchmiz) · 2018-03-29T05:38:42.916Z · LW(p) · GW(p)

Clones and copies are the same thing.

This needs more elaboration, if you want to use it in the way you do. I know what you mean here (at least, I think I do), but it may not be obvious to many interlocutors—“the same thing” in what way, exactly? (Especially since this is step #1.)

Furthermore, if a copy is destroyed before I wake up from the procedure, I anticipate dying with some probability

I’m not actually sure I understand this. What experience(s) does one anticipate, exactly, if one believes #4? (Or is it your intention to show that this view is incoherent, for exactly this reason?)

a lower amplitude according to the Born rule

How does this step on the ladder work for people who are not learned in quantum mechanics? (Which is to say—I have no idea what #6 means.)

… I anticipate my consciousness transferring to that far away not-copy with some probability

I confess to being confused about how the concept of “probability” is being used here (and in similar comments I’ve seen). Can you elaborate?


As for your entire approach… I think it’s an interesting and worthy attempt, but flawed, for this reason: that around #4 or #5 (I’m not sure, due to the unclearness mentioned above), it’s hard to know what intuition to have—whether to reject or accept the higher rungs of the ladder—without a satisfactory solution to the Hard Problem.

That is, if you ask me whether I believe the mid-to-high-numbered statements, I won’t say “yes” or “no”; I’ll say “I don’t know, and I don’t know if they even make sense enough to be right or wrong, and really the whole subject is deeply confusing to me”. You can’t convince me to accept (say) rung #7, because I don’t currently reject it; rather, I don’t know whether it even makes sense as a position, or whether there is (as seems to me to be likely) something that we’re missing about the whole matter—something which, if we understood it, would make #7 either obviously true, or obviously false, or obviously incoherent, etc. But I don’t know what that something is—because the Hard Problem remains… well… Hard, and a Problem.

do you have a solution to the Hard Problem?

Replies from: lolbifrons, lolbifrons
comment by lolbifrons · 2018-03-29T06:21:14.638Z · LW(p) · GW(p)

I have expanded on 1 in an edit. Let me know if it makes sense.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T06:58:03.155Z · LW(p) · GW(p)

Looks good!

comment by lolbifrons · 2018-03-29T05:59:43.278Z · LW(p) · GW(p)

This needs more elaboration, if you want to use it in the way you do. I know what you mean here (at least, I think I do), but it may not be obvious to many interlocutors—“the same thing” in what way, exactly? (Especially since this is step #1.)

I see your point. Admittedly, I built this while talking to someone who was past 1, and I consider the position a gross misunderstanding of current technology. I'll consider how to describe the proposition better, though.

Or is it your intention to show that this view is incoherent, for exactly this reason

Precisely. I expect any lesswronger to be past 4. Perhaps I am building a strawman?

How does this step on the ladder work for people who are not learned in quantum mechanics? (Which is to say—I have no idea what #6 means.)

I suppose it's enough to intuitively consider it "vaguely less likely" that you'll wake up as one copy or the other based on "some unknown criteria that possibly includes likelihood of future survival", but I submit that either:

1. one doesn't understand fully why he is at this step if he isn't familiar with QM, or

2. I don't understand QM or the subject at hand as well as I think I do.

I confess to being confused about how the concept of “probability” is being used here (and in similar comments I’ve seen). Can you elaborate?

That... might be a pretty long discussion. Essentially, whatever you anticipate happening to you before you undergo a copying procedure (subjectively ending up in the grown body vs. in the body that walked in, each with some respective probability), you should have the same anticipation about staying on Earth vs. instantaneously mind-warping to wherever the hell a human brain could possibly have formed that not only implements you, but has exactly your memories of growing up on Earth up to that moment.

As for your entire approach… I think it’s an interesting and worthy attempt, but flawed, for this reason: that around #4 or #5 (I’m not sure, due to the unclearness mentioned above), it’s hard to know what intuition to have—whether to reject or accept the higher rungs of the ladder

One thing that perhaps isn't obvious is that this ladder isn't meant to convince anyone of anything upon reading it, other than perhaps that I might have a consistent position and not be spouting utter nonsense. Once someone and I have agreed about where they are, then we can get to work on the task of advancing up the ladder. One level at a time.

do you have a solution to the Hard Problem?

Could you elaborate? I may know what you're talking about, but not by name.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T06:22:50.501Z · LW(p) · GW(p)

I expect any lesswronger to be past 4. Perhaps I am building a strawman?

I can speak for myself only, but—I’m not past #4, for exactly the reasons I outlined. (I think #4 is incoherent, for—apparently—the reason you intended it to be incoherent; but the steps above that are problematic, as I said.)

I submit that either:

  1. one doesn’t understand fully why he is at this step if he isn’t familiar with QM, or

  2. I don’t understand QM or the subject at hand as well as I think I do.

Fair enough. Would you say that if the discussion reaches this part of the ladder, a digression must then be made to ensure that both parties well and truly understand QM? If so, then suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?

whatever you, before you undergo a copying procedure, anticipate happening to you (subjectively experiencing ending up in the grown body vs. ending up in the body that walked in, each with some respective probability)

Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also!

One thing that perhaps isn’t obvious is that this ladder isn’t meant to convince anyone of anything upon reading it, other than perhaps that I might have a consistent position and not be spouting utter nonsense. Once someone and I have agreed about where they are, then we can get to work on the task of advancing up the ladder. One level at a time.

Well, that’s the thing; I don’t know (after reading this) whether you’re spouting utter nonsense! That’s precisely the problem I was trying to point to: I, for example, couldn’t tell you where on the ladder I am, for the reasons I outlined in the grandparent.

do you have a solution to the Hard Problem?

Could you elaborate?

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T06:48:40.820Z · LW(p) · GW(p)

Fair enough. Would you say that if the discussion reaches this part of the ladder, a digression must then be made to ensure that both parties well and truly understand QM?

I'm not sure it "must" be made, but that's exactly the route I would go at this point.

suppose one lacks the mathematical aptitude / physics background / whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?

I guess I haven't considered this. When I find myself in this position, I try to gain the requisite skills. EY's QM sequence on this very site isn't too hard to follow.

Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also!

This may help you here [LW · GW].

Well, that’s the thing; I don’t know (after reading this) whether you’re spouting utter nonsense! That’s precisely the problem I was trying to point to: I, for example, couldn’t tell you where on the ladder I am, for the reasons I outlined in the grandparent.

I suppose that's entirely fair. I'm not sure how to improve the ladder to this end, though.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we've figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.

Further, EY goes into pretty great depth about how our current understanding of QM gives us an affirmative belief that there's no identity difference between one copy and another in principle, particularly here [LW · GW] and here [LW · GW] but also in many posts in the QM sequence. This further suggests the solution to the HPoC isn't necessary to reason about our identities.

I can speak for myself only, but—I’m not past #4, for exactly the reasons I outlined. (I think #4 is incoherent, for—apparently—the reason you intended it to be incoherent; but the steps above that are problematic, as I said.)

I admit I have no idea what the subjective experience of dying could be, especially in your sleep, but it seems like whatever lack of experience, or ending, you'd have when you die normally would occur here if you believed this?

But I'm inclined to believe many-worlds implies subjective quantum immortality to an extent, as well, so 4 is possibly even more meaningless. I'm just not sure how to fix it, or if I should even try, because I know people who wouldn't fork-and-die teleport because they think the person waking up isn't them; they're dead.

How do I describe that position on this ladder?

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T07:13:37.230Z · LW(p) · GW(p)

suppose one lacks the mathematical aptitude /​ physics background /​ whatever to grasp QM; is further progress impossible? In that case, what ought one’s view of this topic be?

I guess I haven’t considered this. When I find myself in this position, I try to gain the requisite skills. EY’s QM sequence on this very site isn’t too hard to follow.

I beg to differ! I found the QM sequence impenetrable (and I don’t consider myself to entirely lack math aptitude). (Granted, it’s been a while since the last time I gave it a close read, and perhaps if I try again I’ll get through it, but I do not have high hopes for gaining anything like the kind of understanding it would take to base intuitions about consciousness and identity on!)

That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :(

This may help you here.

I have read this post, though again, it has been a while. I will re-read it and get back to you!

I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.

I, too, am a reductionist, and concur with your strong belief; unfortunately, this doesn’t actually help… it doesn’t move us any closer to a solution. And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?

Further, EY goes into pretty great depth about how our current understanding of QM gives us an affirmative belief that there’s no identity difference between one copy and another in principle, particularly here and here but also in many posts in the QM sequence. This further suggests the solution to the HPoC isn’t necessary to reason about our identities.

I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails, implies, or even suggests that a solution to the Hard Problem is unnecessary here?

I know people who wouldn’t fork-and-die teleport because they think the person waking up isn’t them; they’re dead.

How do I describe that position on this ladder?

You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T07:48:59.193Z · LW(p) · GW(p)

You just did, right? That’s the description, right there. (But it’s not identical with #4 as written! … was it meant to be?)

Fair, I did my best to fix 3 and 4.

And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?

Can you be more specific about what exactly I said that you're referring to? Forgive me but I actually am not sure which part you mean.

I was able to follow the QM sequence just enough to… well, not to grasp this point, precisely, but to grasp that Eliezer was claiming this. But I don’t see how it entails, implies, or even suggests that a solution to the Hard Problem is unnecessary here?

EY's assertion, and I tend to see his point, is that there can't possibly be anything about [you] that isn't true of [a perfect copy of you] that would distinguish - to the universe itself or anything in it - between the instances, other than their positions in spacetime.

And, since we change positions in spacetime all the time without claiming we've lost our identities or consciousnesses, that method of distinction is not sufficient to threaten any consideration of identity or consciousness. Further, because consciousness is completely emergent-from-the-physical, if there's nothing physically different about two instances besides spacetime displacements (which do not threaten consciousness), there's no way in principle that consciousness doesn't behave this way.

And since these aren't merely things we think (pending anything in the future that contradicts them), but rather things that are most assuredly true unless the universe is lying to us, it ought not to matter what we don't yet know, because this positive fact about identity (or lack thereof) is sufficient to support this conclusion.

That said, I think that if your approach relies on your interlocutor having a solid understanding of quantum mechanics, then… I’m afraid it’s even more flawed than I thought at first… :(

:(

----

I wanted to say an additional couple of things that may take this in a different direction. To go back to your top level comment:

That is, if you ask me whether I believe the mid-to-high-numbered statements, I won’t say “yes” or “no”; I’ll say “I don’t know, and I don’t know if they even make sense enough to be right or wrong, and really the whole subject is deeply confusing to me”. You can’t convince me to accept (say) rung #7, because I don’t currently reject it; rather, I don’t know whether it even makes sense as a position, or whether there is (as seems to me to be likely) something that we’re missing about the whole matter

I want to clarify that I consider this position (indeed, any position that isn't "yes, I believe it could be no other way") a rejection of that intuition. To be an intuition, it should be intuitively true.

Also:

it’s hard to know what intuition to have

My goal with this ladder, specifically, is not to change anyone's mind, but rather to provide enough resolution that everyone can point to some level and say either:

1. That is true, or

2. That could be true

and then point to the level below it and say either:

1. That is clearly not true, or

2. That is not the whole picture.

If you don't have an adjacent pair of levels like that, I do in fact need to fix my ladder. Namely, I need to include a level that corresponds to your intuition. I suppose I'm looking for the highest level someone thinks is clearly wrong.

That said, with all due respect, I don't think that implies that levels higher than that point should make intuitive sense to you. The fact that they don't was legitimately my intention.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T16:17:53.458Z · LW(p) · GW(p)

Re:

I want to clarify that I consider this position (indeed, any position that isn’t “yes, I believe it could be no other way”) a rejection of that intuition. To be an intuition, it should be intuitively true.

and

That said, with all due respect, I don’t think that implies that levels higher than that point should make intuitive sense to you. The fact that they don’t was legitimately my intention.

Fair enough, this is a reasonable way of looking at it.

So, let me go ahead and try to “rate” each of the levels in the way you’re looking for:

  1. This seems like obvious nonsense.

  2. This seems like slightly less obvious but still nonsense.

  3. I don’t really know whether this is true [pending resolution of the Hard Problem], but it seems unlikely.

  4. I don’t really know whether this is true; I don’t even know if it’s likely. Without a resolution of the Hard Problem, I can’t really reason about this.

  5. Ditto #4.

  6. I am very uncertain about this because I don’t understand quantum mechanics. That aside, ditto #4.

  7. I have no clue whether this could be true.

  8. I have no clue whether this could be true.

  9. I have no clue whether this could be true.

I think that fails to satisfy your desired criteria, yes? (Unless I have misunderstood?)

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T18:25:21.711Z · LW(p) · GW(p)

I'm not so sure this fails. I'm inclined to take this to mean you are between 2 and 3 or between 3 and 4, depending on what specifically you object to in 3.

But I'm curious, if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly - pick the hypothesis you have with the highest prior, even if it's nowhere close to 50% - what would it be, completely independent of what I've said in this ladder?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T20:23:23.102Z · LW(p) · GW(p)

Honestly, I have no idea. I really don’t know how to reason about subjective phenomenal consciousness. That’s the problem. It seems clear to me that anyone who, given the state of our current knowledge, is very certain of anything like the latter part of #3 or of any of the higher numbers, is simply unjustified in their certainty. If you can’t give me a satisfying reduction of consciousness—one which fully and comprehensively dissolves the Hard Problem—then nothing approaching certainty in any of these views is possible.

I wholly agree with this:

Copies had a common brain and memories, which make them indistinguishable from each other in principle, so they believe they’re me, and they’re not wrong in any meaningful sense

But everything beyond that? Everything that deals with subjective experience, anticipation, etc.? I just plain don’t know.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T20:52:15.747Z · LW(p) · GW(p)

In that case, I would say you're between 3 and 4. And I can't say you're wrong about my relative certainty being unwarranted, but obviously I think you're wrong, and it's because I believe that QM leaves only enough wiggle room that the things we don't yet know can never actually affect the physical consequences of such a procedure (even from the inside).

This is why I think QM is necessary to advance up the ladder; it's the reason I advanced up the ladder, and it's the only experimentally grounded thing we have so far that permits advancing up the ladder. Trying to go a different route would be dishonest.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T20:57:59.762Z · LW(p) · GW(p)

I will take your word about QM, I suppose, but I’m afraid that is scant concession. Much like I said in my other comment—how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue!

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T21:19:45.406Z · LW(p) · GW(p)

Right. I agree that we don't know how, but I submit that we know that they do. We believe strongly in reductionism, so we can condition conclusions on reductionism and on the belief that they do, without conditioning them on how they do; and I submit further that limiting ourselves in this way is still sufficient to advance up the ladder.

We have a black box - in computer science, an interface - but we don't need to know what's inside if we know everything about its behavior on the outside. We can still use it in an algorithm, and know what to expect will happen.
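
To make the interface analogy concrete, here is a minimal sketch (the `BlackBox` protocol, `run_experiment`, and `Echo` are illustrative inventions for this example, not a claim about how minds actually work):

```python
# Illustrative sketch only; all names here are hypothetical.
from typing import Protocol

class BlackBox(Protocol):
    """An interface: callers see only the contract, never the internals."""
    def respond(self, stimulus: str) -> str: ...

def run_experiment(box: BlackBox, stimuli: list[str]) -> list[str]:
    # This algorithm depends only on the box's outward behavior; any two
    # implementations that behave identically are interchangeable here.
    return [box.respond(s) for s in stimuli]

class Echo:
    """One possible implementation; run_experiment never needs to know which."""
    def respond(self, stimulus: str) -> str:
        return stimulus.upper()

print(run_experiment(Echo(), ["ping"]))  # ['PING']
```

Two implementations satisfying the same contract are indistinguishable to `run_experiment`; that is the sense in which the argument treats consciousness as a black box whose outward behavior we can reason about.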

It may not seem like we can know this much in principle (without access to the inside of the black box) until you understand why we know this much, as EY talks about here [LW · GW].

I'd also like to be clear that "inside the black box" (the answer to the hard problem) is not the same as "the subjective feeling inside the mind" (a physical consequence of whatever the black box is doing).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T21:35:27.699Z · LW(p) · GW(p)

Sorry, I think I wasn’t clear. When I said:

how the physical consequences of this or that event translate into subjective experience is exactly what’s at issue

I didn’t mean this in the sense of “how is it possible that they do”, but rather in the sense of “in what way do they”. To that formulation, your answer is non-responsive.

We have a black box—in computer science, an interface—but we don’t need to know what’s inside if we know everything about its behavior on the outside.

But we don’t know everything about the black box’s behavior! That’s precisely the problem in the first place! We are, in essence, trying to predict the behavior of the black box. And we’re trying to do it without knowing what’s inside it—which seems futile and ill-advised, given that we can’t exactly observe the box’s behavior, post-hoc!

As for the linked Sequence post—again, I really do take your word for it. I just don’t think that stuff is relevant.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T21:40:24.650Z · LW(p) · GW(p)

I truly do think we can't move further from this point, in this thread of this argument, without you reading and understanding the sequence :(

I could be mistaken, but it seems to me that the distinction you're trying to make between what I'm saying and what I'd have to say for my answer to be responsive dissolves as you understand QM.

I could, of course, be misunderstanding you completely. But there also isn't anything you're linking that I'm unwilling to read :P

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T21:46:25.920Z · LW(p) · GW(p)

Well, to be honest, I don’t think there is anywhere further to move.

I mean, suppose I re-read the QM sequence, and this time I understand it. What propositions will I then accept, that I currently reject? What beliefs will I hold, that I currently do not?

If I’ve read your comments correctly thus far, then it seems to me that everything you list, in answer to my above questions, will be things that I have already assented to (at least, for the sake of argument, if not in fact). So what is gained?

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T22:31:29.831Z · LW(p) · GW(p)

We gain these:

1. Everything obeys QM. To wit, nothing can exist anywhere/when that is not describable in the math of QM in principle.

2. If everything obeys QM, consciousness obeys QM.

3. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, there is nothing anywhere that can differentiate between copies in principle in any way besides how we can differentiate between a person in the past and the same person in the future, having taken a mundane trajectory through spacetime. This includes how it "feels from the inside." If we can claim consciousness is continuous at all, we can claim it is continuous regardless of proximity in spacetime or any other consideration except for a particular change in physical structure across some change in spacetime. There is provably no computable difference, in the structure of our universe, between existing from one second to the next, and blinking out of existence somewhere/when and then into existence somewhere/when else.

4. As long as consciousness is not or does not consist of some fundamental element that does not obey QM, our subjective experience of being in one place at one time is a limitation of our own perceptions. When we exist in multiple places at the same time, each placetime!us perceives a single, unbroken continuity of consciousness in hindsight, but this is an artifact of a failure to perceive or communicate with other placetime!us branches.

5. We constantly exist in multiple places, as decoherence is the rule, and factorable subspaces where a particular placetime!us can exist and identify as a "world" are a significant exception. It just so happens that decoherence of a certain kind tends to create locally factorable subspaces that move away from each other in configuration space, so they can't meaningfully interact. For any reasonable definition of "we", "we" are constantly being copied every time there is any detectable (in principle) decoherence anywhere in our "world" (a "universe" that a single placetime!us has access to in principle). By observation, we know that at least every time the "we" we identify as has split, it hasn't interrupted our continuity of consciousness. As there are decoherence events constantly on scales we have trouble imagining in numbers of places we have trouble imagining at once, we can reason that we didn't just get absurdly lucky, and every copy of us looks back on these splits with the same feeling of continuity.

6. The splits we've observed and cannot interact with are in principle no different from a split in a single "world" where we can interact with our copy.

And possibly more. It's a lot and I'm doing the best I can from memory.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T22:45:26.414Z · LW(p) · GW(p)

I grant (for the sake of argument) #1 and #2. I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense. This is in large part because the reasoning has “holes” in it—that is, parts which we currently take essentially on faith, pending a resolution of the Hard Problem.

Some specifics:

in any way besides how we can differentiate between a person in the past and the same person in the future

And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.

If we can claim consciousness is continuous at all

Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)?

As long as consciousness is not or does not consist of some fundamental element that does not obey QM

Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this!

… actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim!

our subjective experience of being in one place at one time is a limitation of our own perceptions. When we exist in multiple places at the same time, each placetime!us perceives a single, unbroken continuity of consciousness in hindsight, but this is an artifact of a failure to perceive or communicate with other placetime!us branches

This part I actually just don’t get the point of. I mean, you’re not wrong, but so what?

As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T23:10:11.982Z · LW(p) · GW(p)

I don’t see that understanding QM would suffice to grant #3 and #4 without having solved the Hard Problem. Without actually having a full reduction of consciousness, there’s just no way to be certain that the reasoning you provide makes sense

This should change when you understand QM. I was trying to black box it.

And how do we do this? What makes a person “feel like the same person”, “from the inside”, through the passage of time? Do those quoted phrases even make sense? What do they mean, exactly? We really don’t know.

Can we? It seems like we can, but… is that just an illusion? Somehow? Why does it seem like consciousness is continuous? Or is that a confused question (as some people indeed seem to claim)?

It doesn't matter, because we can prove they are the same black box, and thus their behavior is the same, even if we don't know how it works (or fully what that behavior even is). As long as we have A === B (which QM says we must), we can say (A->C) -> (B->C) even if we don't know whether A->C or how. To the extent that A gives off some evidence that convinces us of C, B does exactly the same thing.
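
To spell that step out: this is just substitution of identicals (Leibniz's law); reading "===" as in-principle physical indistinguishability is the argument's assumption, not something proven here.

\[
% schematic rendering of the comment's claim
A = B \;\Rightarrow\; \forall \varphi\, \bigl(\varphi(A) \leftrightarrow \varphi(B)\bigr),
\qquad \text{in particular} \quad (A \to C) \to (B \to C).
\]

The inference needs only the identity of A and B; it does not need to know why or how A supports C.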

Well, and what if it does? We’re back to the “conditioning on reductionism” thing; until we actually have a full reduction, we just can’t blithely toss about assumptions like this!
… actually, we needn’t even go that far. It’s not even certainty of reductionism that you’re suggesting we condition on—it’s certainty of… what? Quantum mechanics applying to everything? But that’s a great deal weaker! I am not nearly as certain of that (in fact, I have no real solid belief about it), so by no means will I condition on a certainty of this claim!

1 and 2 imply this, and you were willing to give me those. QM supports reductionism independent of all the classical and empirical reasons we believe in reductionism. Like I said above, you asked me to black box it, and I'm claiming these are things that are clear when you understand QM. QM is brazen about being the exclusive language our universe uses to describe everything. It's physical nonsense to talk about something that exists and isn't described by QM. That's what existence means.

This part I actually just don’t get the point of. I mean, you’re not wrong, but so what?

It's preparation for the point "we do it all the time and maintain our sense of continuity in every branch" in 5 and 6.

As for #5 and #6, well, there I just don’t understand what you’re saying, so I can’t judge whether it’s relevant.

5 is basically:

When Schrödinger's cat enters a superposition of alive|dead, so does the entire universe, including us. Like the cat, we split into a!us and d!us (us in the world where the cat is alive and us in the world where it is dead). When we observe the cat, and find out it is alive|dead, we are finding out which world we are in, and correlating our brain state with the state of the cat. This is decoherence, and it pushes a!universe and d!universe apart in the mathematical substrate that defines them (so they can't interact anymore).

If we observe the cat is alive, we realize we are a!us. But there is still a d!us. We split, and they are observing a dead cat. a!us and d!us both can think back to the time before the split and say "that's me and I had an unbroken chain of time slices that lead me here - my consciousness was continuous". Both maintain continuity throughout the process.
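
Schematically, in standard QM notation (the a!us / d!us labels are the comment's own; the rest is a textbook measurement-entanglement schematic, not anything specific to this discussion):

\[
% schematic only: the observer becomes entangled with the cat
\bigl(\alpha\,|\text{alive}\rangle + \beta\,|\text{dead}\rangle\bigr)\otimes|\text{us}\rangle
\;\longrightarrow\;
\alpha\,|\text{alive}\rangle\,|a!\text{us}\rangle \;+\; \beta\,|\text{dead}\rangle\,|d!\text{us}\rangle
\]

After the observation the two terms no longer detectably interfere; each "us" factor looks back on a definite cat and an unbroken history.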

6 is basically:

The above experience for a person cannot be described differently in QM (which is the language of existence) from the kind of copying that occurs in one branch!universe, except by differences that can't in principle have an effect as per point 2, so they black box as the same thing, and implications about one are implications about the other.

comment by Said Achmiz (SaidAchmiz) · 2018-03-29T16:08:47.365Z · LW(p) · GW(p)

Fair, I did my best to fix 3 and 4.

Ah, these are much better descriptions now, well done!

And I disagree about the first part of what I quoted; I see no reason to assent to that. Why do you think this?

Can you be more specific about what exactly I said that you’re referring to? Forgive me but I actually am not sure which part you mean.

Sure. You said:

I am familiar with this after all. I believe solving this may be necessary to implement or even possibly successfully copy a mind, but not to reason about the consequences of such assuming we’ve figured it out. In any case, as a reductionist, I believe very strongly that the solution arises from the structure of physical things only, and thus is only as hard a problem as GAI.

[emphasis added]

The bolded part is what I was referring to; I see no basis for claiming that. Why would solving the Hard Problem not be necessary for reasoning about the consequences of implementing or copying a mind?

Now, realistically, what I think would happen in such a case is that either we’d have solved the Hard Problem before reaching that point (as you suggest), or we’ll simply decide to ignore it… which is not really the same thing as not needing to solve it.

EY’s assertion, and I tend to see his point, is that […]

Yes, I understand that that’s Eliezer’s point. But it’s hardly convincing! If we haven’t solved the Hard Problem, then even if we tell ourselves “copying can’t possibly matter for identity”, we will have no idea what the heck that actually means. It doesn’t, in other words, help us understand what happens in any of the scenarios you describe—and more importantly, why.

As an aside:

… because consciousness is completely emergent-from-the-physical …

No, we can’t assert this. We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical” is something that we’re only licensed to say after we discover how consciousness emerges from the physical.

Until then, perhaps it has to be, but it’s an open question whether it is.

[rest of my response is conceptually separate, so it’s in a separate comment]

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T18:37:13.988Z · LW(p) · GW(p)

Ah, these are much better descriptions now, well done!

Thanks, I sincerely appreciate your help in clarifying :)

We can say that consciousness has to be completely emergent-from-the-physical. But there’s a difference between that and what you said; “consciousness is completely emergent-from-the-physical”

Can you explain why the former doesn't imply the latter? I'm under the impression it does, for any reasonable definition of "has to be", as long as what you're conditioning on (in this case reductionism) is true. I suppose I don't see your objection.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T20:15:44.852Z · LW(p) · GW(p)

Can you explain why the former doesn’t imply the latter?

Sure. Basically, this is the problem:

as long as what you’re conditioning on (in this case reductionism) is true

Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)

What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:

“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”

In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.


Tangentially:

for any reasonable definition of “has to be”

To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.

Consider:

A: Is your husband at home right now?

B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.

Here B doesn’t really know where her husband is. He could be stuck in traffic, he could’ve taken a detour to the bar for a few drinks with his buddies to celebrate that big sale, he could’ve been abducted by aliens—who knows? Imagine, after all, the alternative formulation (and let’s say that A is actually a police officer—lying to him is a crime):

A: Is your husband at home right now?

B: Yes, he is.

A: You know that he’s at home?

B: Well… no. But he has to be at home.

A: But you didn’t go home and check, did you? You didn’t call your house and talk to him?

B: No, I didn’t.

And so on. (I imagine you could easily come up with innumerable other examples.)

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T21:10:41.572Z · LW(p) · GW(p)

Now, I think reductionism is true. But suppose we encounter something we can’t reduce. (Of course your instinct—and mine, in a symmetric circumstance—would be to jump in with a correction: “can’t yet reduce”! I sympathize entirely with this—but in this case, that formulation would beg the question.) We should of course condition on our belief that reductionism is true, and conclude that we’ll be able to find a reduction. But, conversely, we should also condition on the fact that we haven’t found a reduction yet, and reduce our belief in reductionism! (And, as I mentioned in the linked comment thread, this depends on how much effort we’ve spent so far on looking for a reduction, etc.)
What this means is that we can’t simply say “consciousness is completely emergent-from-the-physical”. What we have to say is something like:
“We don’t currently know whether consciousness is completely emergent from the physical. Conditional on reductionism being true, consciousness has to be completely emergent from the physical. On the other hand, if consciousness turns out not to be completely emergent from the physical, then—clearly—reductionism is not true.”
In other words, whether reductionism is true is exactly at issue here! Again: I do think that it is; I would be very, very surprised if it were otherwise. But to assume it is to beg the question.

Okay, this is entirely fair, and I see your point and agree. I counter with the questions: What numerical strength would you give your belief that reductionism is true? Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?

If your answers to those questions are "well above 50%" and "yes," why is it so difficult to answer the question:

if you had to hazard a guess in your own words as to what is most likely to be true about identity and consciousness in the case of a procedure that reproduces your physical body (including brain state) exactly - pick the hypothesis you have with the highest prior, even if it's nowhere close to 50% - what would it be

?

To the contrary: the implications of the phrase “has to be”, in claims of the form “[thing] has to be true”, are very different from the implications of the word “is” (in the corresponding claims). Any reasonable definition of “has to be” must match the usage, and the usage is fairly clear: you say that something “has to be true” when you don’t have any direct, clear evidence that it’s true, but have only concluded it from general principles.

It seems to me that you're separating (deductive and inductive) reasoning from empirical observation, which I agree is a reasonable separation. But there are different strengths of reasoning. Observe:

A: Is your husband at home right now?

B: He has to be; he left work over two hours ago, and his commute’s only 30 minutes long.

vs.

A: Is your husband at home right now?

B: He has to be; I put him in a straitjacket, in a locked room, submerged the house completely in a crater of concrete, watched it harden without him escaping, and left satisfied, two hours ago.

Neither of these are "is", i.e. direct, contemporaneous, empirical observation. They are both "has to be", i.e. chains of induction. But one assumes the best case at every opportunity, and one at least attempts to eliminate all cases that could result in the negation.

I submit that my "has to be" is of the latter type, but even more airtight.

I concede that this is all hypothesis, but it is of the same sort as "the Higgs Boson exists, or else we're wrong about a lot of things"... before we found it.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T21:43:32.314Z · LW(p) · GW(p)

I counter with the questions: What numerical strength would you give your belief that reductionism is true?

I have no idea, and indeed am skeptical of the entire practice of assigning numerical strengths to beliefs of this nature. However, I think I am sufficiently certain of this belief to serve our needs in this context.

Are you willing to extend that number to your belief in things at greater levels of the ladder that condition on it, according to the principles of conditional probability?

Absolutely not, because the whole problem is that even given my assent to the proposition that consciousness is completely emergent from the physical, if I don’t know how it emerges from the physical, I am still unable to reason about the things on the higher parts of the ladder.

That’s the conceptual objection, and it suffices on its own; but I also have a more technical one, which is—

—the laws of conditional probability, you say? But hold on; to apply Bayes’ Rule, I have to have a prior probability for the belief in question. But how can I possibly assign a prior probability to a proposition, when I haven’t any idea what the proposition means? I can’t have a belief in any of those things you list! I don’t even know if they’re coherent!

In short: my answer to the latter half of your query is “no, and in fact you’re asking a wrong question”.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T21:55:32.558Z · LW(p) · GW(p)

The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you've done] approaches infinity is zero. In the grand scheme of things, it doesn't matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam's razor.
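
As a concrete illustration of a prior washing out (a minimal sketch; the coin-flip setup and the particular numbers are my own illustrative choices, not anything from this discussion):

```python
# Hypothetical toy model: two hypotheses, 'coin is 70% heads' vs. 'coin is fair'.
import random

def posterior_biased(prior: float, flips: list[int]) -> float:
    """Sequential Bayes updates; returns P(biased | flips)."""
    p = prior
    for flip in flips:  # 1 = heads, 0 = tails
        like_biased = 0.7 if flip else 0.3
        like_fair = 0.5
        p = like_biased * p / (like_biased * p + like_fair * (1 - p))
    return p

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]

# Two agents with nearly opposite priors see the same long run of evidence.
print(posterior_biased(0.001, data))  # ~1.0
print(posterior_biased(0.999, data))  # ~1.0: the starting prior has washed out
```

This is just the sequential form of Bayes' Rule: given enough shared evidence, agents who started with very different priors end up with nearly identical posteriors.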

The rest of your objections, if I understand QM and its implications right, fall upon the previously unintuitive and possibly incoherent things that become intuitively true as you understand QM. As I said elsewhere:

I truly do think we can't move further from this point, in this thread of this argument, without you reading and understanding the sequence :(
I could be mistaken, but it seems to me that the distinction you're trying to make between what I'm saying and what I'd have to say for my answer to be [coherent] dissolves as you understand QM.
I could, of course, be misunderstanding you completely. But there also isn't anything you're linking that I'm unwilling to read :P

The big disconnect here is you are willing to say you'll take my word for it about QM, but then I say "QM allows us to 'reason about the things on the higher parts of the ladder' without 'knowing how consciousness emerges from the physical.'"

I could be wrong, but if I'm wrong, you'd have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T22:08:51.931Z · LW(p) · GW(p)

The big disconnect here is you are willing to say you’ll take my word for it about QM, but then I say “QM allows us to ‘reason about the things on the higher parts of the ladder’ without ‘knowing how consciousness emerges from the physical.’”

I could be wrong, but if I’m wrong, you’d have to dive into QM to show me how. QM provides us a conceptual black swan, I claim, and reasoning about this without it is orders of magnitude less powerful than reasoning with it, in a way that is impossible to conceive of except in hindsight.

Well, in that case, I’m afraid we have indeed hit a dead end. But I will say this: if (as you seem to be saying) you are unable to treat quantum mechanics as a conceptual black box, and simply explain how its claims (those unrelated to consciousness) allow us to reason about consciousness without dissolving the Hard Problem, then… that is very, very suspicious. (The phrase “impossible to conceive except in hindsight” also raises red flags!) I hope you won’t take it personally if I view this business of “conceptual black swans” with the greatest skepticism.

I will, if I can find the time, try to give the QM sequence a close re-read, however.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T22:33:59.248Z · LW(p) · GW(p)

I made an attempt to treat it as a black box in a different thread reply, but I still had to use the language of QM. I might be able to sum it up into short sentences as well, but I wanted to start with some amount of formality and explanation.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T23:01:28.183Z · LW(p) · GW(p)

Indeed, I’ve now read those comments, and I do appreciate it. As I think we’ve agreed now, further progress requires me to have a good understanding of QM, so I don’t think I have much to add past what we’ve already gone over.

I hope, at least, that this back-and-forth has been useful?

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T23:12:36.221Z · LW(p) · GW(p)

I hope, at least, that this back-and-forth has been useful?

Absolutely. Talking to you was refreshing, and it helped me not only flesh out my ladder but also pin down my beliefs. Thank you for taking time to talk about this stuff.

so I don’t think I have much to add past what we’ve already gone over.

I did make an attempt to address your last reply. If you still feel that way after, let me know.

comment by Said Achmiz (SaidAchmiz) · 2018-03-29T22:04:58.256Z · LW(p) · GW(p)

The limit of [the effect your original prior has on your ultimate posterior] as [the number of updates you’ve done] approaches infinity is zero. In the grand scheme of things, it doesn’t matter what prior you start with. As a convenience, if we have literally no information or evidence, we usually use the uniform prior (equally likely as not, in this case), and then our first update is probably to run it through Occam’s razor.

This doesn’t address my objection. You are responding as if I were skeptical of assigning some particular prior, whereas in fact I was objecting to assigning any prior, or indeed any posterior—because one cannot assign a probability to a string of gibberish! Probability (in the Bayesian framework, anyway—not that any other interpretations would save us here) attaches to beliefs, but I am saying that I can’t have a belief in a statement that is incoherent. (What probability do you assign to the statement that “fish the inverted flawlessly on”? That’s nonsense, isn’t it—word salad? Can the uniform prior help you here?)

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T22:35:14.593Z · LW(p) · GW(p)

Fair enough. I don't see them as gibberish, so treating them that way is hard. I admit I didn't actually see what you meant.

comment by Said Achmiz (SaidAchmiz) · 2018-03-29T21:28:20.519Z · LW(p) · GW(p)

(Answering the latter half of your comment first; I’ll respond to the other half in a separate comment.)

I submit that my “has to be” is of the latter type, but even more airtight.

Indeed, there is a sense in which your “has to be” is of the latter type. In fact, we can go further, and observe that even the “is” (at least in this case—and probably in most cases) is also a sort of “has to be”, viz., this scenario:

A: Is your husband at home?

B: Yes, he is. Why, I’m looking at him right now; there he is, in the kitchen. Hi, honey!

A: Now, you don’t know that your husband’s at home, do you? Couldn’t he have been replaced with an alien replicant while you were at work? Couldn’t you be hallucinating right now?

B: Well… he has to be at home. I’m really quite sure that I can trust the evidence of my senses…

A: But not absolutely sure, isn’t that right?

B: I suppose that’s so.

This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?

And that’s all true. The problem, however, comes in when we must deduce specific claims from very general beliefs—however certain the latter may be!—using a complex, high-level, abstract model. And of this I will speak in a sibling comment.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T21:34:37.561Z · LW(p) · GW(p)

This is, fundamentally, no more than a stronger version of your “submerged in a crater of concrete” scenario, so by what right do we claim it to be qualitatively different than “he left work two hours ago”?

I agree. At the core, every belief is bayesian. I don't recognize a fundamental difference, just one of categorization. We carved up reality, hopefully at its joints, but we still did the carving. You seemed to be the one arguing a material difference between "has to" and "is".

As an aside, it's possible you missed my edit. I'll reproduce it here:

I concede that this is all hypothesis, but it is of the same sort as "the Higgs boson exists, or else we're wrong about a lot of things"... before we found it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-29T21:57:32.886Z · LW(p) · GW(p)

Concerning your edit—no, I really don’t think that it is of the same sort. The prediction of the Higgs boson was based on a very specific, detailed model, whereas—to continue where the grandparent left off—what you’re asking me to do here is to assent to propositions that are not based on any kind of model, per se, but rather on something like a placeholder for a model. You’re saying: “either these things are true, or we’re wrong about reductionism”.

Well, for one thing, “these things” are, as I’ve said, not even clearly coherent. It’s not entirely clear what they mean, because it’s not clear how to reason about this sort of thing, because we don’t have an actual model for how subjective phenomenal consciousness emerges from physics.

And, for another thing, the dilemma is a false one—it should properly be a quadrilemma (is that a word…?), like so:

“Either these things are true, or we’re wrong about reductionism, or we’re wrong about whether reductionism implies that these things are true, or these things are not so much false as ‘not even wrong’ (because there’s something we don’t currently understand, that doesn’t overturn reductionism but that renders much of our analysis here moot).”

“Ah!” you might exclaim, “but we know that reductionism implies these things! That is—we’re quite certain! And it’s really very unlikely that we’re missing some key understanding that would render moot our reasoning and our scenarios!” To that, I again say: without an actual reduction of consciousness, an actual and complete dissolution of the Hard Problem, no such certainty is possible. And so it is these latter two horns of the quadrilemma which seem to me to be at least as likely as the truth of the higher rungs of the ladder.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T22:02:25.647Z · LW(p) · GW(p)

My response here would be the same as my responses to the other outstanding threads.

comment by Said Achmiz (SaidAchmiz) · 2018-03-30T06:52:45.277Z · LW(p) · GW(p)

… I anticipate my consciousness transferring to that far away not-copy with some probability

I confess to being confused about how the concept of “probability” is being used here (and in similar comments I’ve seen). Can you elaborate?

whatever you, before you undergo a copying procedure, anticipate happening to you (subjectively experiencing ending up in the grown body vs. ending up in the body that walked in, each with some respective probability)

Sorry, I guess I was not clear… I take this answer to be just pushing the problem back one irrelevant step, since my question applies to this scenario also!

This may help you here.

I have now re-read the post. I’m afraid it didn’t help. Or rather, to be more precise—I conclude from it that using the concept of “probability” in this way is incoherent.

Basically, I think that Yu’el is wrong and De’da is right. (In fact, I would go further—I don’t even think that De’da’s answer concerning which way to bet makes a whole lot of sense… but this is a tangent, and one which brings in issues unrelated to “probability” per se.)

Replies from: lolbifrons
comment by lolbifrons · 2018-03-30T08:32:51.965Z · LW(p) · GW(p)

Ah. I guess I'm not sure where to go from here, in that case.

comment by Dagon · 2018-03-29T23:44:21.771Z · LW(p) · GW(p)

Let me try to see where I am on the ladder by critiquing each. Most of them I start to agree with, and then you add a conclusion that I don't think follows from the lead-in. I think you've got an unstated and likely incorrect assumption that "me" in the past, "me" as experiencing something in the moment, and "me" in the future are all singular and necessarily linked. I think they've been singular in my experience, but that's not the only way it could be.

If you insist on mapping to QM, my answers match MWI somewhat. Each branch's past has an amplitude of 1, and the future diverges into all possible states, which sum to 1. I'm not actually certain that "possible" is a sensible concept, though, so I'll answer these as if we were talking about copies in a classical universe, so they can sum to more than 1 of "me".

1: Without defining the mechanism of copying, I don't know how it differs from cloning. I gather from your further questions that you're positing some copy of existing brain configuration, potentials, and inputs, which is very different from a clone (a copy of DNA and some amount of similarity of early environment).

2: The first and last parts of this are separate. Why not "both copies have true memories; there are two distinct entities (as soon as state diverges), both of which have equal claim to being me"?

3: The first half makes sense, but I anticipate waking up in both. The two mes will each begin to experience just one of the two lines.

4: The first half is fine, but you're weirdly assuming that dying matters in this form. I think there's no experiential difference, except that only one of me wakes up rather than two. I would not hesitate to undertake this unless the chance that BOTH would die is greater than the chance that I'd die if I didn't participate.

5: I don't think it's a probability of waking up as one or the other. I think it's both, and each will truly be me in their pasts, and different mes in the future. If only one wakes, then it's more similar to today's experience, as there's only one entity who experiences apparent continuity. If one wakes and then dies, that one is me and experiences death; the other one is me and doesn't.

6: Incoherent. Likelihood is about exclusive options, and this framing is not exclusive: both happen in the same universe. I predict that branches of me will experience both.

7: Good, then a weird "transfer" concept. A perfect duplicate at any point has the same conscious experience, and diverges when the inputs diverge. It's not transferred; it's just in both places/times.

8: I'm uncertain what level of fidelity is required to be "me". My intuition is that a sufficiently-true simulation is effectively me, just like a sufficiently-exact copy.

9: Sure. I don't think it necessarily follows that the ONLY way the superintelligence can "think about what I would do" is to execute a full-fidelity model; it could very easily use much simpler models (like humans do for each other) for many questions.

Replies from: lolbifrons
comment by lolbifrons · 2018-03-30T00:53:04.274Z · LW(p) · GW(p)

Most of them I start to agree with, and then you add a conclusion that I don't think follows from the lead-in.

This is by design. I tried to make the levels mutually exclusive. The way I did this was by having each level add a significant insight to the truth (as I see it) and then say something wrong (as I see it) to constrain any further insight.

I think you've got an unstated and likely incorrect assumption that "me" in the past, "me" as experiencing something in the moment, and "me" in the future are all singular and necessarily linked. I think they've been singular in my experience, but that's not the only way it could be.

If you insist on mapping to QM, my answers match MWI somewhat. Each branch's past has an amplitude of 1, and the future diverges into all possible states, which sum to 1. I'm not actually certain that "possible" is a sensible concept, though, so I'll answer these as if we were talking about copies in a classical universe, so they can sum to more than 1 of "me".

My intention is not to ignore QM/MWI, but I did intend to provide levels where someone who doesn't understand (or even know about) QM would find themselves. The language I used was (hopefully) the language someone at that level would use to describe what they think, so all levels that can't be true under QM should sound ignorant of any QM insights. Intuition about QM should automatically push you at least to the first level where it sounds like I stopped describing a classical universe.

Further, this is mostly about our anticipation of subjective experiences. I didn't really mention amplitude directly; I just alluded to it via the squared amplitudes we'd use to calculate our anticipation. When I say "me," I mean "some unmangled amplitude of me".
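
(As a concrete illustration of that squared-amplitude bookkeeping, here is a minimal sketch; the branch names and amplitude values are invented for the example.)

```python
# A minimal sketch (values invented for illustration): under the Born rule,
# anticipation for each branch goes as the squared magnitude of its
# amplitude, so we square and normalize rather than using amplitudes directly.
amplitudes = {"copy_survives": 0.8, "copy_dies_soon": 0.6}

total = sum(a ** 2 for a in amplitudes.values())
anticipation = {branch: round(a ** 2 / total, 2)
                for branch, a in amplitudes.items()}

print(anticipation)  # {'copy_survives': 0.64, 'copy_dies_soon': 0.36}
```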

Unfortunately, even if I had tried to use precise language, I'd have had trouble, and I wasn't even trying to do that, as this is supposed to be a resource anyone could use to place themselves.

I would address each of your entries, but most of them would probably be rephrasings of what I said above. Each level is supposed to contain something objectionable that pushes you to the next level.

6: Incoherent. Likelihood is about exclusive options, and this framing is not exclusive: both happen in the same universe. I predict that branches of me will experience both.

As far as this goes, I was trying to use the intuitive but inaccurate language from here [LW · GW]. If you prefer, pretend I said "squared amplitude". Alternatively, if you have suggestions for better language someone at this level would still intuitively use, I'd be happy to hear it.

7: Good, then a weird "transfer" concept. A perfect duplicate at any point has the same conscious experience, and diverges when the inputs diverge. It's not transferred; it's just in both places/times.

It's really hard to describe anticipation of subjective experience in these scenarios. If you have a suggestion for language I can use that is still wrong in a way that precludes the insights from successive levels, and that also reads the way someone who is wrong in that way would speak about their expectations, I am very open to suggestions.

8: I'm uncertain what level of fidelity is required to be "me". My intuition is that a sufficiently-true simulation is effectively me, just like a sufficiently-exact copy.

That is the intentional problem with 8, yes.

9: Sure. I don't think it necessarily follows that the ONLY way the superintelligence can "think about what I would do" is to execute a full-fidelity model; it could very easily use much simpler models (like humans do for each other) for many questions.

I agree that a perfect model isn't the only possible model, but I have a strong hypothesis that "perfect model" and "implementation" are synonymous.

Edit: I see your objection to 9, and I have (hopefully) fixed it.

Replies from: Dagon
comment by Dagon · 2018-03-30T06:18:29.467Z · LW(p) · GW(p)

I think my main point in most of this is:

It's really hard to describe anticipation of subjective experience in these scenarios

True! These scenarios don't actually happen (yet) to humans, and you're trying to extrapolate from a fairly poorly defined base case (individual experiential continuity). However, I think most of them dissolve if you believe (as I do) that consciousness is a purely physical reductive phenomenon.

Take the analogy of a light bulb, where you duplicate everything, including the current (pun intended) state of electrical potential in the wires and element, but then after duplication allow that the future electrical inputs may vary. You can easily answer all of these questions about anticipated light-output levels: output is identical at the time of duplication and diverges afterward.
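
(Here is a minimal sketch of that analogy; the decay constant, input values, and class are invented toy stand-ins, not a physical model.)

```python
# A toy sketch of the duplicated light bulb (all values illustrative):
# copying captures the in-flight electrical state, so outputs match exactly
# until the inputs to the two bulbs diverge.
import copy

class Bulb:
    def __init__(self):
        self.potential = 0.0          # current electrical state

    def step(self, input_current):
        # Light output depends on stored potential plus the new input.
        self.potential = 0.9 * self.potential + input_current
        return self.potential

original = Bulb()
original.step(1.0)                    # some history before duplication
duplicate = copy.deepcopy(original)   # exact copy, state included

print(original.step(1.0) == duplicate.step(1.0))  # True: same input, same output
print(original.step(1.0) == duplicate.step(0.0))  # False: inputs have diverged
```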

Replies from: lolbifrons
comment by lolbifrons · 2018-03-30T08:30:34.872Z · LW(p) · GW(p)

Every level but the last one is supposed to be wrong.

The point is that they're supposed to be wrong in a specifically crafted way.

comment by Dagon · 2018-04-01T15:52:10.693Z · LW(p) · GW(p)

It's helpful to use subscripts to indicate the different kinds of identity and continuity you're considering.

Me(yesterday) is related to me(now), and that relationship is different from the one between me(in a scanner) and me(in a printer). Lumping memory, anticipation, and experience into one thing is likely to lead you into incorrect beliefs.
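
(A minimal sketch of what that subscript notation might look like in practice; all the names, fields, and relations here are invented for illustration.)

```python
# A hypothetical sketch of the subscript idea (names invented): keep each
# "me" as a distinct snapshot and make the relations between snapshots
# explicit, instead of lumping memory, anticipation, and experience into
# one undifferentiated notion of identity.
from dataclasses import dataclass

@dataclass(frozen=True)
class Me:
    label: str       # "yesterday", "now", "in_scanner", "in_printer"
    state: tuple     # toy stand-in for a full brain state
    history: tuple   # labels of the snapshots this one evolved from

me_yesterday = Me("yesterday", state=(1, 2), history=())
me_now = Me("now", state=(3, 4), history=("yesterday",))
me_scanner = Me("in_scanner", state=(3, 4), history=("yesterday", "now"))
me_printer = Me("in_printer", state=(3, 4), history=("yesterday", "now"))

# Two different relations that are easy to conflate without subscripts:
continuity = me_yesterday.label in me_now.history   # ordinary day-to-day continuity
duplication = me_scanner.state == me_printer.state  # exact-copy relation
print(continuity, duplication)  # True True, but for different reasons
```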

comment by philh · 2018-03-31T21:28:35.266Z · LW(p) · GW(p)

It sounds like you expect someone at level 7 or higher to agree with the first part of #6:

I anticipate with less likelihood waking up as a copy that will die soon after the procedure—or for some other reason has a lower amplitude according to the Born rule—as a continuous function

I don't think I agree with this. If one copy gets immediately dropped into a vat of water and one gets to live out his natural life, I anticipate a 50/50 chance of drowning.

I also don't know why either of these copies would have lower amplitude, in the short period where both are alive.

Can you clarify whether we actually disagree?

comment by Dacyn · 2018-03-29T17:53:59.121Z · LW(p) · GW(p)

When you wrote

However, true, conscious implementations of me can only exist on real hardware that exists in our universe.

I was expecting #9 to contradict this, but it doesn't, really. Are the words "simulation" and "superintelligence" meant to refer only to objects in the "real world" (whatever that is)? I would suggest that the level above #9 is:

10. Any implementation of me is me, regardless of whether or not it "exists" in the "real world". In particular, any mathematical object that implements me is me. For example, if the laws of physics can be perfectly described by a set of equations, then I am the same thing as the mathematical object that would be used to represent me. Note that it follows from this that there are versions of me that are about to have any possible experience, regardless of whether our own laws of physics + initial conditions allow Boltzmann brains (since there is a mathematically consistent physics + initial conditions that does allow them).

Replies from: lolbifrons
comment by lolbifrons · 2018-03-29T18:17:53.135Z · LW(p) · GW(p)

That's fair; I wasn't sure how to phrase the idea in 8 to exclude 9, so the language isn't perfect, and I agree, now that I've seen it, that your proposed 10 is conceptually a step above my 9. Let me know if it is okay for me to add your 10 to my list.

Out of curiosity, do you consider the 10 you wrote "intuitively true", or just the logical next step in a hypothetical ladder?

Edit: I did my best to fix 8.

Replies from: Dacyn
comment by Dacyn · 2018-03-30T14:17:10.350Z · LW(p) · GW(p)

You can add my #10 to your list. Regarding your new #8, I'm not sure I understand the distinction between a brain implemented on a computer chip vs. a simulation of a brain. Regarding my opinion on what is "intuitively true": it seems like all of them are different ways of making the notion of identity more precise; I don't know that it makes sense to give one of them a privileged status. In other words, they all seem to be referring to slightly different concepts, all of which appear to be valid...