The Anthropic Trilemma
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T01:47:54.920Z · LW · GW · Legacy · 232 comments
Speaking of problems I don't know how to solve, here's one that's been gnawing at me for years.
The operation of splitting a subjective worldline seems obvious enough - the skeptical initiate can consider the Ebborians, creatures whose brains come in flat sheets and who can symmetrically divide down their thickness. The more sophisticated need merely consider a sentient computer program: stop, copy, paste, start, and what was one person has now continued on in two places. If one of your future selves will see red, and one of your future selves will see green, then (it seems) you should anticipate seeing red or green when you wake up with 50% probability. That is, it's a known fact that different versions of you will see red, or alternatively green, and you should weight the two anticipated possibilities equally. (Consider what happens when you're flipping a quantum coin: half your measure will continue into either branch, and subjective probability will follow quantum measure for unknown reasons.)
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
Just as computer programs or brains can split, they ought to be able to merge. If we imagine a version of the Ebborian species that computes digitally, so that the brains remain synchronized so long as they go on getting the same sensory inputs, then we ought to be able to put two brains back together along the thickness, after dividing them. In the case of computer programs, we should be able to perform an operation where we compare the two programs bit by bit, and where the bits match, copy them, and if any bit differs, delete the whole program. (This seems to establish an equal causal dependency of the final program on the two original programs that went into it. E.g., if you test the causal dependency via counterfactuals, then disturbing any bit of either original results in the final program being completely different (namely deleted).)
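For concreteness, here is a minimal sketch of that merge operation (comparing byte-by-byte rather than bit-by-bit, which doesn't change the point; the function name is just illustrative):

```python
def merge(program_a: bytes, program_b: bytes):
    """Merge two synchronized program images as described above: if every bit
    matches, the result is a single copy; if anything differs, delete the whole thing."""
    if program_a == program_b:
        return bytes(program_a)
    return None  # disturbing any bit of either original changes the outcome completely

assert merge(b"\x2a\x07", b"\x2a\x07") == b"\x2a\x07"
assert merge(b"\x2a\x07", b"\x2a\x06") is None
```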
So here's a simple algorithm for winning the lottery:
Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!" As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing.
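A back-of-the-envelope sketch of that bookkeeping, under the assumption above that N copies get N times the weight of experience (the variable names are just illustrative):

```python
p_win = 1e-9            # quantum lottery: the branch in which this ticket wins
copies_if_win = 1e12    # a trillion copies woken for ten seconds in that branch

win_weight = p_win * copies_if_win   # ~1000
lose_weight = (1 - p_win) * 1        # ~1

p_experience_win = win_weight / (win_weight + lose_weight)
print(p_experience_win)  # ~0.999 -- "be reasonably sure you will see 'You won!'"
```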
Now you could just bite this bullet. You could say, "Sounds to me like it should work fine." You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers." You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."
I find myself somewhat reluctant to bite that bullet, personally.
Nick Bostrom, when I proposed this problem to him, offered that you should anticipate winning the lottery after five seconds, but anticipate losing the lottery after fifteen seconds.
To bite this bullet, you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities. If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1. And if you lose the lottery, the subjective probability of having lost the lottery, ten seconds later, is ~1. But we don't have p("experience win after 15s") = p("experience win after 15s"|"experience win after 5s")*p("experience win after 5s") + p("experience win after 15s"|"experience not-win after 5s")*p("experience not-win after 5s").
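Putting the lottery-trick numbers into that identity makes the breakage explicit (a sketch of the arithmetic only, using the figures from above):

```python
p_win_5s = 0.999                 # anticipated "experience win" just after the split
p_win_15s_given_win_5s = 1.0     # a winner still remembers winning ten seconds later
p_win_15s_given_lose_5s = 0.0    # a loser doesn't start remembering a win

chained = (p_win_15s_given_win_5s * p_win_5s
           + p_win_15s_given_lose_5s * (1 - p_win_5s))
print(chained)  # ~0.999 by the product/sum rule...
print(1e-9)     # ...versus the ~1e-9 you are told to anticipate after the merge
```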
I'm reluctant to bite that bullet too.
And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
Bound to my naive intuitions that can be explained away by obvious evolutionary instincts, you say? It's plausible that I could be forced down this path, but I don't feel forced down it quite yet. It would feel like a fake reduction. I have rather the sense that my confusion here is tied up with my confusion over what sort of physical configurations, or cascades of cause and effect, "exist" in any sense and "experience" anything in any sense, and flatly denying the existence of subjective continuity would not make me feel any less confused about that.
The fourth horn of the trilemma (as 'twere) would be denying that two copies of the same computation had any more "weight of experience" than one; but in addition to the Boltzmann Brain problem in large universes, you might develop similar anthropic psychic powers if you could split a trillion times, have each computation view a slightly different scene in some small detail, forget that detail, and converge the computations so they could be reunified afterward - then you were temporarily a trillion different people who all happened to develop into the same future self. So it's not clear that the fourth horn actually changes anything, which is why I call it a trilemma.
I should mention, in this connection, a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely, if you tried the analogue using quantum branching within a large coherent superposition (e.g. a quantum computer). If you quantum-split into a trillion copies, those trillion copies would have the same total quantum measure after being merged or converged.
It's a remarkable fact that the one sort of branching we do have extensive actual experience with - though we don't know why it behaves the way it does - seems to behave in a very strange way that is exactly right to avoid anthropic superpowers and goes on obeying the standard axioms for conditional probability.
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch, and every "merge" operation (which you could theoretically do in large coherent superpositions) likewise preserves the total measure of the incoming branches.
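A toy comparison of the two bookkeeping rules, assuming quantum splitting divides measure among the sub-branches while copying (on the three-copies-three-times-the-experience assumption) multiplies it:

```python
p_win, n = 1e-9, 10**12

# Quantum branching: each of the n sub-branches carries measure p_win / n,
# and merging just sums those measures back up -- nothing gets amplified.
quantum_total = n * (p_win / n)
print(quantum_total)   # ~1e-09: anticipation of the win branch is unchanged

# Copying, if each copy adds a full unit of weight of experience:
copy_total = n * p_win
print(copy_total)      # ~1000: the win branch temporarily dominates your measure
```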
Great for QM. But it's not clear to me at all how to set up an analogous set of rules for making copies of sentient beings, in which the total number of processors can go up or down and you can transfer processors from one set of minds to another.
To sum up:
- The first horn of the anthropic trilemma is to confess that there are simple algorithms whereby you can, undetectably to anyone but yourself, exert the subjective equivalent of psychic powers - use a temporary expenditure of computing power to permanently send your subjective future into particular branches of reality.
- The second horn of the anthropic trilemma is to deny that subjective joint probabilities behave like probabilities - you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later.
- The third horn of the anthropic trilemma is to deny that there is any meaningful sense whatsoever in which you can anticipate being yourself in five seconds, rather than Britney Spears; to deny that selfishness is coherently possible; to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
- The fourth horn of the anthropic trilemma is to deny that increasing the number of physical copies increases the weight of an experience, which leads into Boltzmann brain problems, and may not help much (because alternatively designed brains may be able to diverge and then converge as different experiences have their details forgotten).
- The fifth horn of the anthropic trilemma is to observe that the only form of splitting we have accumulated experience with, the mysterious Born probabilities of quantum mechanics, would seem to avoid the trilemma; but it's not clear how analogous rules could possibly govern information flows in computer processors.
I will be extremely impressed if Less Wrong solves this one.
232 comments, sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2009-09-27T09:40:16.195Z · LW(p) · GW(p)
I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list, and sent off a "copying related probability question" (which is still in my "sent" folder but apparently no longer archived anywhere that Google can find). Both Eliezer and Nick were also participants in that discussion. What are the chances that we're still trying to figure this out 12 years later?
My current position, for what it's worth, is that anticipation and continuity of experience are both evolutionary adaptations that will turn maladaptive when mind copying/merging becomes possible. Theoretically, evolution could have programmed us to use UDT, in which case this dilemma wouldn't exist now, because anticipation and continuity of experience are not part of UDT.
So why don't we just switch over to UDT, and consider the problem solved (assuming this kind of self-modification is feasible)? The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences. For example, suppose you’re about to be tortured in an hour. Should you make as many copies as you can of yourself (who won’t be tortured) before the hour is up, in order to reduce your anticipation of the torture experience? You have to come up with a way to answer that question before you can switch to UDT.
One approach that I think is promising, which Johnicholas already suggested, is to ask "what would evolution do?" The way I interpret that is, whenever there’s an ambiguity in how to map our preferences onto UDT, or where our preferences are incoherent, pick the UDT preference that maximizes evolutionary success.
But a problem with that, is that what evolution does depends on where you look. For example, suppose you sample Reality using some weird distribution. (Let’s say you heavily favor worlds where lottery numbers always come out to be the digits of pi.) Then you might find a bunch of Bayesians who use that weird distribution as their prior (or the UDT equivalent of that), since they would be the ones having the most evolutionary success in that part of Reality.
The next thought is that perhaps algorithmic complexity and related concepts can help here. Maybe there is a natural way to define a measure over Reality, to say that most of Reality is here, and not there. And then say we want to maximize evolutionary success under this measure.
How to define “evolutionary success” is another issue that needs to be resolved in this approach. I think some notion of “amount of Reality under one’s control/influence” (and not “number of copies/descendants”) would make the most sense.
Replies from: davidad, DanArmak, Wei_Dai, Eliezer_Yudkowsky, MichaelHoward, Johnicholas↑ comment by davidad · 2021-12-10T12:25:50.480Z · LW(p) · GW(p)
I just happened to see this whilst it happens to be 12 years later. I wonder what your sense of this puzzle is now (object-level as well as meta-level).
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2021-12-12T19:53:03.368Z · LW(p) · GW(p)
I'm not really aware of any significant progress since 12 years ago. I've mostly given up working on this problem, or most object-level philosophical problems, due to the slow pace of progress and perceived opportunity costs. (Spending time on ensuring a future where progress on such problems can continue to be made [LW · GW], e.g., fighting against x-risk and value/philosophical lock-in or drift, seems a better bet even for the part of me that really wants to solve philosophical problems.) It seems like there's a decline in other LWers' interest in the problem, maybe for similar reasons?
↑ comment by DanArmak · 2009-09-27T13:07:01.298Z · LW(p) · GW(p)
My thread of subjective experience is a fundamental part of how I feel from the inside. Exchanging it for something else would be pretty much equivalent to death - death in the human, subjective sense. I would not wish to exchange it unless the alternative was torture for a googol years or something of that ilk.
Why would you wish to switch to UDT?
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-09-27T14:04:23.606Z · LW(p) · GW(p)
That's a good point. I probably wouldn't want to give up my thread of subjective experience either. But unless I switch (or someone comes up with a better solution than UDT), when mind copying/merging becomes possible I'm probably going to start making some crazy decisions.
I'm not sure what the solution is, but here's one idea. A UDT agent doesn't use anticipation or continuity of experience to make decisions, but perhaps it can run some computations on the side to generate the qualia of anticipation and continuity.
Another idea, which may be more intuitively acceptable, is don't make the switch yourself. Create a copy, and have the copy switch to UDT (before it starts running). Then give most of your resources to the copy and live a single-threaded life under its protection. (I guess the copy in this case isn't so much a copy but more of a personal FAI.)
Replies from: DanArmak↑ comment by DanArmak · 2009-09-27T14:32:52.391Z · LW(p) · GW(p)
That's what I was thinking, too. You make tools the best way you can. The distinction between tools that are or aren't part of you will ultimately become meaningless anyway. We're going to populate the galaxy with huge Jupiter brains that are incredibly smart and powerful but whose only supergoal is to protect a tiny human-nugget inside.
↑ comment by Wei Dai (Wei_Dai) · 2009-09-28T23:19:08.562Z · LW(p) · GW(p)
Seeing that others here are trying to figure out how to make probabilities of anticipated subjective experiences work, I should perhaps mention that I spent quite a bit of time near the beginning of those 12 years trying to do the same thing. As you can see, I eventually gave up and decided that such probabilities shouldn't play a role in a decision theory for agents who can copy and merge themselves.
This isn't to discourage others from exploring this approach. There could easily be something that I overlooked, that a fresh pair of eyes can find. Or maybe someone can give a conclusive argument that explains why it can't work.
BTW, notice that UDT not only doesn't involve anticipatory probabilities, it doesn't even involve indexical probabilities (i.e. answers to "where am I likely to be, given my memories and observations?" as opposed to "what should I expect to see later?"). It seems fairly obvious that if you don't have indexical probabilities, then you can't have anticipatory probabilities. (See ETA below.) I tried to give an argument against indexical probabilities, which apparently nobody (except maybe Nesov) liked. Can anyone do better?
ETA: In the Absent-Minded Driver problem, suppose after you make the decision to EXIT or CONTINUE, you get to see which intersection you're actually at (and this is also forgotten by the time you get to the next intersection). Then clearly your anticipatory probability for seeing 'X', if it exists, ought to be the same as your indexical probability of being at X.
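For concreteness, the standard Absent-Minded Driver numbers, assuming the usual payoffs (0 for exiting at the first intersection X, 4 for exiting at the second intersection Y, 1 for continuing past both):

```python
def expected_payoff(p):
    """p = probability of CONTINUE at an intersection (the driver can't tell X and Y apart)."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Planning-optimal strategy, found by a coarse grid search.
p_opt = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(p_opt, expected_payoff(p_opt))   # ~0.667, ~1.333

# Indexical probability of currently being at X: you reach X for sure, Y with probability p.
p_at_X = 1 / (1 + p_opt)
print(p_at_X)                          # ~0.6 -- what anticipating "seeing 'X'" would have to equal
```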
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T16:51:30.849Z · LW(p) · GW(p)
So why don't we just switch over to UDT, and consider the problem solved
Because we can't interpret UDT's decision algorithm as providing epistemic advice. It says to never update our priors and even to go on putting weight on logical impossibilities after they're known to be impossible. UDT tells us what to do - but not what to anticipate seeing happen next.
Replies from: Vladimir_Nesov, Wei_Dai↑ comment by Vladimir_Nesov · 2009-09-27T16:58:02.228Z · LW(p) · GW(p)
This presumably places anticipation together with excitement and fear -- an aspect of human experience, but not a useful concept for decision theory.
Replies from: Eliezer_Yudkowsky, UnholySmoke↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-28T16:20:33.351Z · LW(p) · GW(p)
I'm not convinced that "It turns out that pi is in fact greater than three" is a mere aspect of human experience.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-09-28T16:29:33.102Z · LW(p) · GW(p)
If you appeal to intuitions about rigor, it's not so much an outlier since fear and excitement must be aspects of rigorously reconstructed preference as well.
↑ comment by UnholySmoke · 2009-09-28T12:42:44.750Z · LW(p) · GW(p)
I find myself simultaneously convinced and unconvinced by this! Anticipation (dependent, of course, on your definition) is surely a vital tool in any agent that wants to steer the future? Or do you mean 'human anticipation' as differentiated from other kinds? In which case, what demarcates that from whatever an AI would do in thinking about the future?
However, Dai, your top level comment sums up my eventual thoughts on this problem very well. I've been trying for a long time to resign myself to the idea that a notion of discrete personal experience is incompatible with what we know about the world. Doesn't make it any easier though.
My two cents - the answer to this trilemma will come from thinking about the system as a whole rather than personal experience. Can we taboo 'personal experience' and find a less anthropocentric way to think about this?
↑ comment by Wei Dai (Wei_Dai) · 2009-09-29T12:10:58.721Z · LW(p) · GW(p)
UDT tells us what to do - but not what to anticipate seeing happen next.
Ok, we can count that as a disadvantage when comparing UDT with alternative solutions, but why is it a deal-killer for you, especially since you're mainly interested in decision theory as a tool for programming FAI? As long as the FAI knows what to do, why do you care so much that it doesn't anticipate seeing what happens next?
Replies from: Eliezer_Yudkowsky, SilasBarta↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-29T15:56:11.379Z · LW(p) · GW(p)
Because I care about what I see next.
Therefore the FAI has to care about what I see next - or whatever it is that I should be caring about.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2009-09-29T22:10:49.472Z · LW(p) · GW(p)
Ok, but that appears to be the same reason that I gave (right after I asked the question) for why we can't switch over to UDT yet. So why did you give a another answer without reference to mine? That seems to be needlessly confusing. Here's how I put it:
The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences.
There's more in that comment where I explored one possible approach to this problem. Do you have any thoughts on that?
Also, do you agree (or think it's a possibility) that specifying preferences in terms of anticipation (instead of, say, world histories) was an evolutionary "mistake", because evolution couldn't anticipate that one day there would be mind copying/merging technology? If so, that doesn't necessarily mean we should discard such preferences, but I think it does mean that there is no need to treat it as somehow more fundamental than other kinds of preferences, such as, for example, the fear of stepping into a teleporter that uses destructive scanning, or the desire not to be consigned to a tiny portion of Reality due to "mistaken" preferences.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-29T23:02:39.926Z · LW(p) · GW(p)
I can't switch over to UDT because it doesn't tell me what I'll see next, except to the extent it tells me to expect to see pi < 3 with some measure. It's not that it doesn't map. It's that UDT goes on assigning measure to 2 + 2 = 5, but I'll never see that happen. UDT is not what I want to map my preferences onto, it's not a difficulty of mapping.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2009-09-29T23:12:17.354Z · LW(p) · GW(p)
UDT goes on assigning measure to 2 + 2 = 5
That's not what happens in my conception of UDT. Maybe in Nesov's, but he hasn't gotten it worked out, and I'm not sure it's really going to work. My current position on this is still that you should update on your own internal computations, but not on input from the outside.
ETA:
UDT is not what I want to map my preferences onto, it's not a difficulty of mapping.
Is that the same point that Dan Armak made, which I responded to, or a different one?
↑ comment by Vladimir_Nesov · 2009-09-29T23:11:54.209Z · LW(p) · GW(p)
I can't switch over to UDT because it doesn't tell me what I'll see next, except to the extent it tells me to expect to see pi < 3 with some measure.
It's not you who should use UDT, it's the world. This is a salient point of departure between FAI and humanity. FAI is not in the business of saying in words what you should expect. People are stuff of the world, not rules of the world or strategies to play by those rules. Rules and strategies don't depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2009-09-30T00:20:04.762Z · LW(p) · GW(p)
Nesov, your writings are so hard to understand sometimes. Let me take this as an example and give you some detailed feedback. I hope it's useful to you to determine in the future where you might have to explain in more detail or use more precise language.
It's not you who should use UDT, it's the world.
Do you mean "it's not only you", or "it's the world except you"? If it's the latter, it doesn't seem to make any sense. If it's the former, it doesn't seem to answer Eliezer's objection.
This is a salient point of departure between FAI and humanity.
Do you mean FAI should use UDT, and humanity shouldn't?
FAI is not in the business of saying in words what you should expect.
Ok, this seems clear. (Although why not, if that would make me feel better?)
People are stuff of the world, not rules of the world or strategies to play by those rules.
By "stuff", do you mean "part of the state of the world"? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by "people are not strategies"?
Rules and strategies don't depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.
This part makes sense, but I don't see the connection to what Eliezer wrote.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-09-30T00:48:58.709Z · LW(p) · GW(p)
It's not you who should use UDT, it's the world.
Do you mean "it's not only you", or "it's the world except you"? If it's the latter, it doesn't seem to make any sense. If it's the former, it doesn't seem to answer Eliezer's objection.
I mean the world as substrate, with "you" being implemented on the substrate of FAI. FAI runs UDT, you consist of FAI's decisions (even if in the sense of "influenced by", there seems to be no formal difference). The decisions are output of the strategy optimized for by UDT, two levels removed from running UDT themselves.
Do you mean FAI should use UDT, and humanity shouldn't?
Yes, in the sense that humanity runs on the FAI-substrate that uses UDT or something on the level of strategy-optimization anyway, but humanity itself is not about optimization.
By "stuff", do you mean "part of the state of the world"? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by "people are not strategies"?
I suspect that people should be found in plays (what actually happens given the state of the world), not strategies (plans for every eventuality).
↑ comment by Vladimir_Nesov · 2009-09-29T17:36:51.864Z · LW(p) · GW(p)
There is no problem with FAI looking at both past and future you -- intuition only breaks down when you speak of first-person anticipation. You don't care what FAI anticipates to see for itself and whether it does. The dynamic of past->future you should be good with respect to anticipation, just as it should be good with respect to excitement.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-09-29T18:24:25.082Z · LW(p) · GW(p)
There is no problem with FAI looking at both past and future you -- intuition only breaks down when you speak of first-person anticipation.
But part of the question is: must past/future me be causally connected to me?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-09-29T18:28:32.108Z · LW(p) · GW(p)
Part of which question? And whatever you call "causally connected" past/future persons is a property of the stuff-in-general that FAI puts into place in the right way.
↑ comment by SilasBarta · 2009-09-29T16:19:22.191Z · LW(p) · GW(p)
Unless I'm misunderstanding UDT, isn't speed another issue? An FAI must know what's likely to be happening in the near future in order to prioritize its computational resources so they're handling the most likely problems. You wouldn't want it churning through the implications of the Loch Ness monster being real while a mega-asteroid is headed for the earth.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-29T17:33:07.759Z · LW(p) · GW(p)
Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.
(There are all sorts of exceptions to this principle, and they mostly have to do with "efficient" choices of representation that affect the underlying epistemology. You can view a Bayesian network as efficiently compressing a raw probability distribution, but it can also be seen as committing to an ontology that includes primitive causality.)
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T23:33:25.511Z · LW(p) · GW(p)
Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.
But that path is not viable here. If UDT claims to make decisions independently of any anticipation, then it seems it must be optimal on average over all the impossibilities it's prepared to compute an output for. That means it must be sacrificing optimality in this world-state (by No Free Lunch), even given infinite computing time, so having a quick approximation doesn't help.
If an AI running UDT is just as prepared to find Nessie as to find out how to stop the incoming asteroid, it will be inferior to a program designed just to find out how to stop asteroids. Expand the Nessie possibility to improbable world-states, and the asteroid possibility to probable ones, and you see the problem.
Though I freely admit I may be completely lost on this.
↑ comment by MichaelHoward · 2009-09-27T13:44:40.282Z · LW(p) · GW(p)
I've been thinking about this topic, off and on, at least since September 1997, when I joined the Extropians mailing list... What are the chances that we're still trying to figure this out 12 years later?
Not small. I read that list and similar forums in the early 90s before becoming an AGI relinquishmentarian until about 2 years ago. When I came back to the discussions, I was astonished at how most of the topics under discussion were essentially the same ones I remembered from 15 years earlier.
↑ comment by Johnicholas · 2009-09-27T13:50:41.703Z · LW(p) · GW(p)
Note - there is a difference between investigating "what would evolution do?", as a jumping-off point for other strategies, and recommending "we should do what evolution does".
But a problem with that, is that what evolution does depends on where you look.
Why is it that if I set up a little grid-world on my computer and evolve little agents, I seem to get answers to the question "what does evolution do"? Am I encoding "where to look" into the grid-world somehow?
comment by Furcas · 2009-09-27T03:03:24.890Z · LW(p) · GW(p)
Whatever the correct answer is, the first step towards it has to be to taboo words like "experience" in sentences like, "But if I make two copies of the same computer program, is there twice as much experience, or only the same experience?"
What making copies is, is creating multiple instances of the same pattern. If you make two copies of a pattern, there are twice as many instances but only one pattern, obviously.
Are there, then, two of 'you'? Depends what you mean by 'you'. Has the weight of experience increased? Depends what you mean by 'experience'. Think in terms of patterns and instances of patterns, and these questions become trivial.
I feel a bit strange having to explain this to Eliezer Yudkowsky, of all people.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T04:42:58.254Z · LW(p) · GW(p)
Are there, then, two of 'you'? Depends what you mean by 'you'.
Can I redefine what I mean by "me" and thereby expect that I will win the lottery? Can I anticipate seeing "You Win" when I open my eyes? It still seems to me that expectation exists at a level where I cannot control it quite so freely, even by modifying my utility function. Perhaps I am mistaken.
Replies from: SilasBarta, Nominull↑ comment by SilasBarta · 2009-09-28T18:11:10.401Z · LW(p) · GW(p)
I think the conflict is resolved by backing up to the point where you say that multiple copies of yourself count as more subjective experience weight (and therefore a higher chance of experiencing).
But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?
Let's suppose that three copies get three times as much experience. (If not, then, in a Big universe, large enough that at least one copy of anything exists somewhere, you run into the Boltzmann Brain problem.)
I have a top-level post partly written up where I attempt to reduce "subjective experience" and show why your reductio about the Boltzmann Brain doesn't follow, but here's a summary of my reasoning:
Subjective experience appears to require a few components: first, forming mutual information with its space/time environment. Second, forming M/I with its past states, though of course not perfectly.
Now, look at the third trilemma horn: Britney Spears's mind does not have M/I with your past memories. So it is flat-out incoherent to speak of "you" bouncing between different people: the chain of mutual information (your memories) is your subjective experience. This puts you in the position of having to say that "I know everything about the universe's state, but I also must posit a causally-impotent thing called the 'I' of Silas Barta." -- which is an endorsement of epiphenomenalism.
Now, look back at the case of copying yourself: these copies retain mutual information with each other. They have each other's exact memory. They are experiencing (by stipulation) the same inputs. So they have a total of one being's subjective experience, and only count once. From the perspective of some computer that runs the universe, it does not need additional data to store each copy, but rather, just the first.
The reason the Boltzmann Brain scenario doesn't follow is this: while each copy knows the output of a copy, they would still not have mutual information with the far-off Big Universe copy, because they don't know where it is! In the same way, a wall's random molecular motions do not have a copy of me, even though, under some interpretation, they will emulate me at some point.
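One toy way to cash out that claim: model a mind-state as a single random bit, an exact synchronized copy as a duplicate of that bit, and an unlocatable far-off "copy" as uncorrelated from your point of view (all simplifying assumptions):

```python
from math import log2

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def mutual_info(joint):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), with joint given as {(x, y): probability}.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Exact, synchronized copy: all of its information is redundant -- it "counts once".
print(mutual_info({(0, 0): 0.5, (1, 1): 0.5}))                                 # 1.0 bit

# Far-off "copy" you can't locate, modeled here as uncorrelated with you.
print(mutual_info({(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}))   # 0.0 bits
```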
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-28T19:40:36.379Z · LW(p) · GW(p)
I see! So you're identifying the number of copies with the number of causally distinct copies - distinct in the causality of a physical process. So copying on a computer does not produce distinct people, but spontaneous production in a distant galaxy does. Thus real people would outweigh Boltzmann brains.
But what about causally distinct processes that split, see different tiny details, and then merge via forgetting?
(Still, this idea does seem to me like progress! Like we could get a bit closer to the "magical rightness" of the Born rules this way.)
Replies from: SilasBarta, UnholySmoke, SilasBarta↑ comment by SilasBarta · 2009-10-03T04:00:31.259Z · LW(p) · GW(p)
Actually, let me revise that: I made it more complicated than it needs to be. Unless I'm missing something (and this does seem too simple), you can easily resolve the dilemma this way:
Copying your upload self does multiply your identities but adds nothing to your anticipated probabilities that stem from quantum branching.
So here's what you should expect:
-There's still a 1 in a billion chance of experiencing winning the lottery.
-In the event you win the lottery, you will also experience being among a trillion copies of yourself, each of whom also has this experience. Note the critical point: since they all wake up in the same Everett branch, their subjective experience does not get counted at the same "level" as the experience of the lottery loser.
-If you merge after winning the lottery you should expect, after the merge, to remember winning the lottery, and some random additional data that came from the different experiences the different copies had.
-This sums to: ~100% chance of losing the lottery, 1 in a billion chance of winning the lottery plus forgetting a few details.
-Regarding the implications of self-copying in general: Each copy (or original or instantiation or whatever -- I'll just say "copy" for brevity) feels just like you. Depending on how the process was actually carried out, the group of you could trace back which one was the source, and which one's algorithm was instilled into an empty shell. If the process was carried out while you were asleep, you should assign an equal probability of being any given copy.
After the copy, your memories diverge and you have different identities. Merging combines the post-split memories into one person and then deletes such memories until you're left with as much subjective time-history as if you had been one person the whole time, meaning you forget most of what happened in any given copy -- kind of like the memory you have of your dreams when you wake up.
↑ comment by UnholySmoke · 2009-09-29T15:42:50.534Z · LW(p) · GW(p)
Yeah I get into trouble there. It feels as though two identical copies of a person = 1 pattern = no more people than before copying. But flip one bit and do you suddenly have two people? Can't be right.
That said, the reason we value each person is because of their individuality. The more different two minds, the closer they are to two separate people? Erk.
Silas, looking forward to that post.
Replies from: gwern↑ comment by gwern · 2009-10-10T02:43:15.312Z · LW(p) · GW(p)
But flip one bit and do you suddenly have two people? Can't be right.
Why not? Imagine that bit is the memory/knowledge of which copy they are. After the copying, each copy naturally is curious what happened, and recall that bit. Now, if you had 1 person appearing in 2 places, it should be that every thought would be identical, right? Yet one copy will think '1!'; the other will think '0!'. As 1 != 0, this is a contradiction.
Not enough of a contradiction? Imagine further that the original had resolved to start thinking about hot sexy Playboy pinups if it was 1, but to think about all his childhood sins if 0. Or he decides quite arbitrarily to become a Sufi Muslim if 0, and a Mennonite if 1. Or... (insert arbitrarily complex mental processes contingent on that bit).
At some point you will surely admit that we now have 2 people and not just 1; but the only justifiable step at which to say they are 2 and not 1 is the first difference.
Replies from: UnholySmoke, Psy-Kosh↑ comment by UnholySmoke · 2009-10-12T15:49:55.613Z · LW(p) · GW(p)
At some point you will surely admit that we now have 2 people and not just 1
Actually I won't. While I grok your approach completely, I'd rather say my concept of 'an individual' breaks down once I have two minds with one bit's difference, or two identical minds, or any of these borderline cases we're so fond of.
Say I have two optimisers with one bit's difference. If that bit means one copy converts to Sufism and the other to Mennonism, then sure, two different people. If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we're back to one person since the two are, once again, functionally identical. Faced with contradictions like that, I'm expecting our idea of personal identity to go out the window pretty fast once tech like this actually arrives. Greg Egan's Diaspora pretty much nails this for me, have a look.
All your 'contradictions' go out the window once you let go of the idea of a mind as an indivisible unit. If our concept of identity is to have any value (and it really has to) then we need to learn to think more like reality, which doesn't care about things like 'one bit's difference'.
Replies from: gwern↑ comment by gwern · 2009-10-12T23:21:49.807Z · LW(p) · GW(p)
If that one bit is swallowed up in later neural computations due to the coarse-grained-ness of the wetware, then we're back to one person since the two are, once again, functionally identical.
Ack. So if I understand you right, your alternative to bit-for-bit identity is to loosen it to some sort of future similarity, which can depend on future actions and outcomes; or in other words, there's a radical indeterminacy about even the minds in our example: are they the same or are they different, who knows, it depends on whether the Sufism comes out in the wash! Ask me later; but then again, even then I won't be sure whether those 2 were the same when we started them running (always in motion the future is).
That seems like quite a bullet to bite, and I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no. Even 2 distant non-borderline minds might grow into each other.
Replies from: UnholySmoke↑ comment by UnholySmoke · 2009-10-13T13:19:00.637Z · LW(p) · GW(p)
I wonder whether you can hold to any meaningful 'individual', whether the difference be bit-wise or no.
Indeed, that's what I'm driving at.
Harking back to my earlier comment, changing a single bit and suddenly having a whole new person is where my problem arises. If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn't accept that. With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.
Replies from: gwern↑ comment by gwern · 2009-10-14T00:38:50.186Z · LW(p) · GW(p)
If you change that bit back, are you back to one person? I might not be thinking hard enough, but my intuition doesn't accept that.
Here's an intuition for you: you take the number 5 and add 1 to it; then you subtract 1 from it; don't you have what you started with?
With that in mind, I prefer to bite that bullet than talk about degrees of person-hood.
Well, I can't really argue with that. As long as you realize you're biting that bullet, I think we're still in a situation where it's just dueling intuitions. (Your intuition says one thing, mine another.)
↑ comment by Psy-Kosh · 2009-10-10T02:47:49.176Z · LW(p) · GW(p)
The downside is that it's not really that reductionistic.
What if you flip a bit in part of an offline memory store that you're not consciously thinking about at the time or such?
Replies from: gwern↑ comment by gwern · 2009-10-10T03:05:16.048Z · LW(p) · GW(p)
What if I hack & remove $100 from your bank account? Are you just as wealthy as you were before, because you haven't looked? If the 2 copies simply haven't looked or otherwise are still unaware, that doesn't mean they are the same. Their possible futures diverge.
And, sure, it's possible they might never realize - we could merge them back before they notice, just as I could restore the money before the next time you checked, but I think we would agree that I still committed a crime (theft) with your money; why couldn't we feel that there was a crime (murder) in the merging?
Replies from: Psy-Kosh, UnholySmoke↑ comment by Psy-Kosh · 2009-10-10T05:22:19.744Z · LW(p) · GW(p)
Huh? My point is a bitflip in a non conscious part, before it influences any of the conscious processing, well, if prior to that bit flip you would have said there was only one being, then I'd say after that they'd still not yet diverged. Or at least, not entirely.
As far as a merging, well, in that case who, precisely, is the one that's being killed?
Replies from: gwern↑ comment by gwern · 2009-10-10T15:54:43.203Z · LW(p) · GW(p)
So only anything in immediate consciousness counts? Fine, let's remove all of the long-term memories of one of the copies - after all, he's not thinking about his childhood...
As far as a merging, well, in that case who, precisely, is the one that's being killed?
Obviously whichever one isn't there afterwards; if the bit is 1, then 0 got killed off & vice versa. If we erase both copies and replace with the original, then both were killed.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-10-10T16:16:40.502Z · LW(p) · GW(p)
I'd have to say that IF two (equivalent) instances of a mind count as "one mind", then removing an unaccessed data store does not change that for the duration that the effect of the removal doesn't propagate directly or indirectly to the conscious bits.
If one then restores that data store before anything was noticed regarding it being missing, then, conditional on the assumption that IF the two instances originally only counted as one being, then.... so they remain.
EDIT: to clarify, though... my overall issue here is that I think we may be effectively implicitly treating conscious agents as irreducible entities. If we're ever going to find an actual proper reduction of consciousness, well, we probably need to ask ourselves stuff like "what if two agents are bit for bit identical... except for these couple of bits here? What if they were completely identical? Is the couple bit difference enough that they might as well be completely different?" etc...
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-10T16:46:23.683Z · LW(p) · GW(p)
And if we restore a different long-term memory instead?
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-10-10T16:59:32.656Z · LW(p) · GW(p)
I think I'd have to say still "Nothing of significance happened until memory access occurs"
Until then, well, how's it any different than stealing your books... and then replacing them before you notice?
Now, as I said, we probably ought to be asking questions like "what if in the actual 'conscious processing' part of the agent, a few bits were changed in one instance... but just that... so initially, before it propagates enough to completely diverge... what should we say?" To say it completely changes everything instantly, well... that seems too much like saying "conscious agents are irreducible", so...
(just to clarify: I'm more laying out a bit of my confusion here rather than anything else, plus noting that we seem to have been, in our quest to find reductions for aspects of consciousness, implicitly treating agents as irreducible in certain ways)
Replies from: gwern↑ comment by gwern · 2009-10-10T18:49:00.014Z · LW(p) · GW(p)
(just to clarify: I'm more laying out a bit of my confusion here rather than anything else, plus noting that we seem to have been, in our quest to find reductions for aspects of consciousness, implicitly treating agents as irreducible in certain ways)
Indeed. It's not obvious what we can reduce agents down further into without losing agents entirely; bit-for-bit identity is at least clear in a few situations.
(To continue the example - if we see the unaccessed memory as being part of the agent, then obviously we can't mess with it without changing the agent; but if we intuitively see it as like the agent having Internet access and the memory being a webpage, then we wouldn't regard it as part of its identity.)
↑ comment by UnholySmoke · 2010-02-19T12:03:44.443Z · LW(p) · GW(p)
What if I hack & remove $100 from your bank account. Are you just as wealthy as you were before, because you haven't looked?
Standard Dispute. If wealthy = same amount of money in the account, no. If wealthy = how rich you judge yourself to be, yes. The fact that 'futures diverge' is irrelevant up until the moment those two different pieces of information have causal contact with the brain. Until that point, yes, they are 'the same'.
↑ comment by SilasBarta · 2009-09-28T21:30:58.696Z · LW(p) · GW(p)
I don't know; I'm still working through the formalism and drawing causal networks. And I just realized I should probably re-assimilate all the material in your Timeless Identity post, to see the relationship between identity and subjective experience. My brain hurts.
For now, let me just mention that I was trying to do something similar to what you did when identifying what d-connects the output of a calculator on Mars and Venus doing the same calculation. There's an (imperfect) analog to that, if you imagine a program "causing" its two copies, which each then get different input. They can still make inferences about each other despite being d-separated by knowledge of their pre-fork state. The next step is to see how this mutual information relates to the kind between one sentient program's subsequent states.
And, for bonus points, make sure to eliminate time by using the thermodynamic arrow and watch the entropy gain from copying a program.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-29T15:55:13.656Z · LW(p) · GW(p)
...okay, that part didn't make any particular sense to me.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T16:24:42.693Z · LW(p) · GW(p)
Heh, maybe you just had read more insight into my other comment than there actually was. Let me try to rephrase the last:
I'm starting from the perspective of viewing subjective experience as something that forms mutual information with its space/time surroundings, and with its past states (and has some other attributes I'll add later). This means that identifying which experience you will have in the future is a matter of finding which bodies have mutual information with which.
M/I can be identified by spotting inferences in a Bayesian causal network. So what would a network look like that has a sentient program being copied? You'd show the initial program as being the parent of two identical programs. But, as sentient programs with subjective experience, they remember (most of) their state before the split. This knowledge has implications for what inferences one of them can make about the other, and therefore how much mutual information they will have, which in turn has implications for how their subjective experiences are linked.
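A toy version of that fork structure (the 16-bit "state", the field names, and the input sizes are purely illustrative):

```python
import random

random.seed(0)
parent_state = random.getrandbits(16)   # the pre-fork state both copies remember

# Each copy = the duplicated parent state plus a private post-fork input.
copy_a = {"memory": parent_state, "input": random.getrandbits(4)}
copy_b = {"memory": parent_state, "input": random.getrandbits(4)}

# Because copying is deterministic, each copy can reconstruct everything about the
# other except its private input -- the shared part is their mutual information.
assert copy_a["memory"] == copy_b["memory"]
print("shared pre-fork bits:", 16, "| private post-fork bits each:", 4)
```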
My final sentence was noting the importance of checking the thermodynamic constraints on the processes going on, and the related issue, of making time removable from the model. So, I suggested that instead of phrasing questions about "previous/future times", you should phrase such questions as being about "when the universe had lower/higher total entropy". This will have implications for what the sentience will regard as "its past".
Furthermore, the entropy calculation is affected by copy (and merge) operations. Copying involves deleting to make room for the new copies, whereas merging throws away information if the copies aren't identical.
Now, does that make it any clearer, or does it just make it look like you overestimated my first post?
↑ comment by Nominull · 2009-09-27T15:27:53.395Z · LW(p) · GW(p)
Can I redefine what I mean by "me" and thereby expect that I will win the lottery?
Yes? Obviously? You can go around redefining anything as anything. You can redefine a ham sandwich as a steel I-beam and thereby expect that a ham sandwich can support hundreds of pounds of force. The problem is that in that case you lose the property of ham sandwiches that says they are delicious.
In the case of redefining you as someone who wins the lottery, the property you are likely to lose is the property of generating warm fuzzy feelings of identification inside Eliezer Yudkowsky.
Replies from: Alicorn↑ comment by Alicorn · 2009-09-27T15:33:49.360Z · LW(p) · GW(p)
"If you call a tail a leg, how many legs does a dog have...? Four. Calling a tail a leg doesn't make it one."
Replies from: Furcas↑ comment by Furcas · 2009-09-27T16:27:19.595Z · LW(p) · GW(p)
That was said by someone who didn't realize that words are just labels.
Replies from: Alicorn, RichardChappell↑ comment by Alicorn · 2009-09-27T16:48:58.189Z · LW(p) · GW(p)
Words are just labels, but in order to be able to converse at all, we have to hold at least most of them in one place while we play with the remainder. We should try to avoid emulating Humpty Dumpty. Someone who calls a tail a leg is either trying to add to the category originally described by "leg" (turning it into the category now identified with "extremity" or something like that), or is appropriating a word ("leg") for a category that already has a word ("tail"). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying "Let's evaluate the content of the word "leg" and maybe revise it for consistency." The second is juvenile code invention.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-28T14:07:03.923Z · LW(p) · GW(p)
Someone who calls a tail a leg is either trying to add to the category originally described by "leg" (turning it into the category now identified with "extremity" or something like that), or is appropriating a word ("leg") for a category that already has a word ("tail"). The first exercise can be useful in some contexts, but typically these contexts start with somebody saying "Let's evaluate the content of the word "leg" and maybe revise it for consistency." The second is juvenile code invention.
What about if evolution repurposed some genus's tail to function as a leg? The question wouldn't be so juvenile or academic then. And before you roll your eyes, I can imagine someone saying,
"How many limbs does a mammal have, if you count the nose as a limb? Four. Calling a nose a limb doesn't make it one."
And then realizing they forgot about elephants, whose trunks have muscles that allow it to grip things as if it had a hand.
Replies from: Alicorn↑ comment by Alicorn · 2009-09-28T15:24:22.111Z · LW(p) · GW(p)
That looks like category reevaluation, not code-making, to me. If you think an elephant's trunk should be called a limb, and you think that elephants have five limbs, that's category reevaluation; if you think that elephant trunks should be called limbs and elephants have one limb, that's code.
↑ comment by RichardChappell · 2009-09-27T20:31:09.312Z · LW(p) · GW(p)
Speakers Use Their Actual Language, so someone who uses 'leg' to mean leg or tail speaks truly when they say 'dogs have five legs.' But it remains the case that dogs have only four legs, and nobody can reasonably expect a ham sandwich to support hundreds of pounds of force. This is because the previous sentence uses English, not the counterfactual language we've been invited to imagine.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T04:47:58.403Z · LW(p) · GW(p)
To condense my response to a number of comments here:
It seems to me that there's some level on which, even if I say very firmly, "I now resolve to care only about future versions of myself who win the lottery! Only those people are defined as Eliezer Yudkowskys!", and plan only for futures where I win the lottery, then, come the next day, I wake up, look at the losing numbers, and say, "Damnit! What went wrong? I thought personal continuity was strictly subjective, and I could redefine it however I wanted!"
You reply, "But that's just because you're defining 'I' the old way in evaluating the anticipated results of the experiment."
And I reply, "...I still sorta think there's more to it than that."
To look at it another way, consider the Born probabilities. In this case, Nature seems to have very definite opinions about how much of yourself flows where, even though both copies exist. Now suppose you try to redefine your utility function so you only care about copies of yourself that see the quantum coin land heads up. Then you are trying to send all of your measure to the branch where the coin lands up heads, by exercising your right to redefine personal continuity howsoever you please; whereas Nature only wants to send half your measure there. Now flip the coin a hundred times. I think Nature is gonna win this one.
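The arithmetic behind "Nature is gonna win" (a minimal sketch; the measure assignment is just the Born rule for a fair quantum coin):

```python
# Born-rule measure of the branch where the coin comes up heads 100 times in a row,
# regardless of how the utility function carves up "you".
born_measure = 0.5 ** 100
print(born_measure)   # ~7.9e-31
```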
Tired of being poor? Redefine personal continuity so that tomorrow you continue as Bill Gates and Bill Gates continues as you - just better hope Gates doesn't swap again the next day.
It seems to me that experience and anticipation operate at a more primitive level than my utility function. Perhaps I am wrong. But I would like a cleaner demonstration of how I am wrong, than pointing out how convenient it would be if there were no question.
Of course it must be a wrong question - it is unanswerable, therefore, it is a wrong question. That is not the same as there being no question.
Replies from: Furcas, CronoDAS, Eliezer_Yudkowsky, Nominull, Z_M_Davis↑ comment by Furcas · 2009-09-27T05:20:46.131Z · LW(p) · GW(p)
I'm sorry, I don't think I can help. It's not that I don't believe in personal continuity, it's that I can't even conceive of it.
At t=x there's an Eliezer pattern and there's a Bill Gates pattern. At t=x+1 there's an Eliezer+1 pattern and a Bill Gates+1 pattern. A few of the instances of those patterns live in worlds in which they won the lottery, but most don't. There's nothing more to it than that. How could there be?
Some Eliezer instances might have decided to only care about Eliezer+1 instances that won the lottery, but that wouldn't change anything. Why would it?
Replies from: orthonormal↑ comment by orthonormal · 2009-09-28T18:50:58.413Z · LW(p) · GW(p)
I can't be the only one who sees this discussion as parallel to the argument over free will, right down to the existence of people who proudly complain that they can't see the problem.
Do you see how this is the same as saying "Of course there's no such thing as free will; physical causality rules over the brain"? Not false, but missing completely that which actually needs to be explained: what it is that our brain does when we 'make a choice', and why we have a deeply ingrained aversion to the first question being answered by some kind of causality.
Replies from: Furcas↑ comment by Furcas · 2009-09-28T19:09:38.693Z · LW(p) · GW(p)
There's a strong similarity, all right. In both cases, the bullet-biters describe reality as we have every reason to believe it is, and ask the deniers how reality would be different if free will / personal continuity existed. The deniers don't have an answer, but they're very insistent about this feeling they have that this undefined free will or continuity thing exists.
Explaining this feeling could be interesting, but it has very little to do with the question of whether what the feeling is about, is real.
↑ comment by CronoDAS · 2009-09-28T17:46:21.928Z · LW(p) · GW(p)
Wouldn't that just mean that there was someone who was very much like Eliezer Yudkowsky and who remembered being Eliezer Yudkowsky, but woke up and discovered they were no longer Eliezer Yudkowsky?
/me suspects that he just wrote a very confused sentence
It seems to me as though our experience of personal continuity has an awful lot to do with memory... I remember being me, so I'm still the same me. I think.
It feels like there's a wrong question in here somewhere, but I don't know what it is!
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-28T17:20:44.817Z · LW(p) · GW(p)
Okay, let me try another tack.
One of the last great open questions in quantum mechanics, and the only one that seems genuinely mysterious, is where the Born statistics come from - why our probability of seeing a particular result of a quantum experiment, ending up in a particular decoherent blob of the wavefunction, goes as the squared modulus of the complex amplitude.
Is it the case that the Born probabilities are necessarily explained - can only be explained - by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?
Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing, is entirely and only dependent on what we care about?
Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?
If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences. But why don't the same arguments on continuity work on measure in general?
Replies from: Christian_Szegedy, Wei_Dai, Johnicholas, Psy-Kosh↑ comment by Christian_Szegedy · 2009-10-03T18:26:34.560Z · LW(p) · GW(p)
Is it the case that the Born probabilities are necessarily explained - can only be explained - by some hidden component of our brain which says that we care about the alternatives in proportion to their squared modulus?
I have been thinking about this quite a bit in the last few days and I have to say, I find this close to impossible.
The solution must be much more fundamental: assumptions like the above ignore that the Born rule is also necessary for almost everything to work. For example, the functioning of our most basic building blocks is tied to this rule. It is much more than just our psychological "caring". Everything in our "hardware" and environment would immediately cease to exist if the rule were different.
Based on this, I think that attempts (like David Wallace's, even if correct) to prove the Born rule from rationality and decision theory have no chance of being conclusive or convincing. A good theory of the rule should also explain why we see reality as we see it, even if we never really make conscious measurements on particles.
In our lives, we (may) see different types of apparent randomness:
- incomplete information
- inherent (quantum) randomness
To some extent these two types of randomness are connected and look isomorphic on the surface (in the macro-world).
The real question is: "Why are they connected?"
Or more specifically: "Why does the amplitude of the wave function result in (measured) probabilities that resemble those of random geometric perturbations of the wave function?"
If you flip a real coin, it does not look very different to you from flipping a quantum coin. However, the 50/50 chance of heads and tails can be explained purely by the geometric symmetry of the object: if you assume that the random perturbing events are distributed in a geometrically uniform way, you immediately deduce the necessity of even chances. I think the key to the Born rule will be to use similar geometric considerations to relate perturbation-based probability to quantum probability.
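As a Monte Carlo sketch of that symmetry argument (my own toy model; the kick sizes and counts are made up), accumulate many small, symmetrically distributed perturbations of the coin's orientation: once they wrap around the circle a few times, the orientation is effectively uniform, and the symmetric heads-up half of the circle gets probability 1/2, with no reference to quantum mechanics:

```python
# A toy model of the symmetry argument: the coin's final orientation is its
# initial orientation plus many small, symmetrically distributed kicks.
# Once the accumulated perturbation wraps around the circle a few times,
# the orientation is effectively uniform, and the symmetric heads-up
# region (half the circle) gets probability 1/2.
import math
import random

def flip(rng, kicks=100):
    angle = 0.0  # start heads-up
    for _ in range(kicks):
        angle += rng.uniform(-1.0, 1.0)  # one small, symmetric perturbation (radians)
    return "heads" if math.cos(angle) > 0.0 else "tails"

rng = random.Random(0)
n = 100_000
heads = sum(flip(rng) == "heads" for _ in range(n))
print(heads / n)  # ~0.5
```

The 1/2 comes from the geometry of the object and the uniformity of the accumulated perturbation, not from any detail of the kick distribution - which is the kind of consideration the comment hopes can be related to the Born rule.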
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-10-03T19:46:19.037Z · LW(p) · GW(p)
Quantum probability is only "inherent" because by default you are looking at it from the system that only includes one world. With a coin, the probability is merely "epistemic" because there is a definite answer (heads or tails) in the system that includes one world, but this same probability is as inherent for the system that only includes you, the person who is uncertain, and doesn't include the coin. The difference between epistemic and inherent randomness is mainly in the choice of the system for which the statement is made, with epistemic probability meaning the same thing as inherent probability with respect to the system that doesn't include the fact in question. (Of course, this doesn't take into account the specifics of QM, but is right for the way "quantum randomness" is usually used in thought experiments.)
Replies from: Christian_Szegedy↑ comment by Christian_Szegedy · 2009-10-03T21:17:52.590Z · LW(p) · GW(p)
I don't dispute this. Still, my posting implicitly assumed the MWI.
My argument is that the brain, as an information processing unit, has a generic way of estimating probabilities based on a single worldline of the Multiverse. This worldline contains randomness stemming both from missing information and from quantum branching, but our brain does not differentiate between these two kinds of randomness.
The question is how to calibrate our brain's expectation of the quantum branch it will end up in. What I speculate is that quantum randomness to some extent approximates "incomplete information" randomness on the large scale. I don't know the math (if I knew it, I'd be writing a paper :)), but I have a very specific intuitive idea that could be turned into a concrete mathematical argument:
I expect the calibration to be performed based on geometric symmetries of our 3-dimensional space: if we construct a sufficiently symmetric but unstable physical process (e.g. throwing a coin), then we can deduce a 50/50 probability for the outcome, assuming a uniform geometric distribution of possible perturbations. Such a process must somehow be related to the magnitudes of the wave function and has to be shown to behave similarly on the macro level.
Admittedly, this is just speculation, but it is not really philosophical in nature; rather, it is an intuitive starting point for what I think has a fair chance of ending up as a concrete mathematical explanation of the Born probabilities in a formal setting.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-04T22:50:51.727Z · LW(p) · GW(p)
Does your notion of "incomplete information" take into account Bell's Theorem? It seems pretty hard to make the Born probabilities represent some other form of uncertainty than indexical uncertainty.
Replies from: Christian_Szegedy↑ comment by Christian_Szegedy · 2009-10-05T06:22:33.136Z · LW(p) · GW(p)
I don't suggest hidden variables. The idea is that quantum randomness should resemble incomplete-information randomness on the large scale, and the reason we perceive the world according to the Born rule is that our brain can't distinguish between the two kinds of randomness.
↑ comment by Wei Dai (Wei_Dai) · 2009-09-29T08:00:35.541Z · LW(p) · GW(p)
There are beings out there in other parts of Reality, who either anticipate seeing results with non-Born probabilities, or care about future alternatives in non-Born proportions. But (as I speculated earlier) those beings have much less measure under a complexity-based measure than us.
Or is there a sense in which we end up seeing results with a certain probability, a certain measure of ourselves going into those worlds, regardless of what we care about?
In other words, what you're asking is: is there an objective measure over Reality, or is it just a matter of how much we care about each part of it? I've switched positions on this several times, and I'm still undecided. But here are my current thoughts.
First, considerations from algorithmic complexity suggest that the measure we use can't be completely arbitrary. For example, we certainly can't use one that takes an infinite amount of information to describe, since that wouldn't fit into our brain.
Next, it doesn't seem to make sense to assign zero measure to any part of Reality. Why should there be a part of it that we don't care about at all?
So that seems to narrow down the possibilities quite a bit, even if there is no objective measure. Maybe we can find other considerations to further narrow down the list of possibilities?
If you look at it closely, this is really about an instantaneous measure of the weight of experience, not about continuity between experiences.
I'd say that "continuity between experiences" is a separate problem. Even if the measure problem is solved, I might still be afraid to step into a transporter based on destructive scanning and reconstruction, and need to figure out whether I should edit that fear away, tell the FAI to avoid transporting me that way, or do something else.
But why don't the same arguments on continuity work on measure in general?
I don't understand this one. What "arguments on continuity" are you referring to?
↑ comment by Johnicholas · 2009-09-28T17:48:07.370Z · LW(p) · GW(p)
QM has to add up to normality.
We know it is a dumb idea to attempt (quantum) suicide. We're pretty confident it is a dumb idea to run simple algorithms that increase one's redundancy before pleasant realizations and reduce it afterward.
It sounds as if you are refusing to draw inferences from normal experience regarding (the correct interpretation of) QM. There is no "Central Dogma" that inferences can only go from micro-scale to macro-scale.
From the macro-scale values that we do hold (e.g. we care about macro-scale probable outcomes), we can derive the micro-scale values that we should hold (e.g. care about Born weights).
I don't have an explanation for why Born weights are nonlinear - but the science is almost completely irrelevant to the decision theory and the ethics. The mysterious, nonintuitive nature of QM doesn't percolate up that much. That is why we have different fields called "physics", "decision theory", and "ethics".
↑ comment by Psy-Kosh · 2009-09-28T17:49:39.728Z · LW(p) · GW(p)
Since (after all) if we only cared about results that went a particular way, then, from our perspective, we would always anticipate seeing the results go that way? And so what we anticipate seeing, is entirely and only dependent on what we care about?
I read that part several times, and I'm still not quite following. Mind elaborating or rephrasing that bit? Thanks.
↑ comment by Z_M_Davis · 2009-09-27T04:55:52.414Z · LW(p) · GW(p)
But all the resulting observers who see the coin come up tails aren't you. You just specified that they weren't. Who cares what they think?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T10:20:46.889Z · LW(p) · GW(p)
If I jumped off a cliff and decided not to care about hitting the ground, I would still hit the ground. If I played a quantum lottery and decided not to care about copies who lost, almost all of me would still see a screen saying "You lose". It seems to me that there is a rule governing what I see happen next, which does not care what I care about. I am asking how that rule works, because it does so happen that I care about it.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-09-27T21:28:20.260Z · LW(p) · GW(p)
You-now doesn't want to jump off the cliff because, among all the blobs of protoplasm that will exist in 5 minutes, you-now cares especially about one of them: the one that is causally connected in a certain way to the blob that is you-now. You-now evidently doesn't get to choose the nature of the causal connection that induces this concern. That nature was probably fixed by natural selection. That is why all talk about "determining to be the person who doesn't jump off the cliff" is ineffectual.
The question for doubters is this. Suppose, contrary to your intuition, that matters were just as I described. How would what-you-are-experiencing be any different? If you concede that there would be no difference, perhaps your confusion is just with how to talk about "your" future experiences. So then, what precisely is lost if all such talk is in terms of the experiences of future systems causally connected to you-now in certain ways?
Of course, committing to think this way highlights our ignorance about which causal connections are among these "certain ways". But our ignorance about this question doesn't mean that there isn't a determinate answer. There most likely is a determinate answer, fixed by natural selection and other determinants of what you-now cares about.
comment by Johnicholas · 2009-09-27T02:39:19.431Z · LW(p) · GW(p)
It's helpful in these sorts of problems to ask the question "What would evolution do?". The answer always turns out to be coherent, reality-based action. Even though evolution, to the extent that it "values" things, values different things than I do, I'd like my actions to be comparably coherent and reality-based.
Regarding the first horn: Regardless of whether simple algorithms move "subjective experience" around like a fluid, if the simple algorithms take some resources, evolution would not perform them.
Regarding the second horn: If there were an organism that routinely split, merged, played the lottery, and priced side-bets on whether it had won, then, given zero information about whether it had won, it would price the side-bet at the standard lottery odds. Splitting and merging, so long as the procedure did not provide any new information, would not affect its price.
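A minimal sketch of that pricing claim, under the commenter's premise that the split-and-merge protocol conveys no new information (the variable names and the Bayes bookkeeping are mine):

```python
# If the protocol wakes the organism (in some number of copies) whether or not
# the ticket won, the observation "I am awake, pre-merge" has the same
# likelihood under winning and losing, so a Bayesian bettor's price for the
# side-bet stays at the prior lottery odds.

def posterior_win(prior_win, p_obs_given_win, p_obs_given_lose):
    """Ordinary Bayes rule for the event 'the ticket won'."""
    num = prior_win * p_obs_given_win
    den = num + (1 - prior_win) * p_obs_given_lose
    return num / den

prior = 1e-9  # a one-in-a-billion ticket

price = posterior_win(prior, p_obs_given_win=1.0, p_obs_given_lose=1.0)
print(f"fair price per unit payout: {price:.1e}")  # 1.0e-09, the prior odds
```

Any disagreement with this price has to come from somewhere else - from how the copies themselves are weighted anthropically - which is exactly the point in dispute.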
Regarding the third horn: Evolution would certainly not create an organism that hurls itself off cliffs without fear. However, this is not because it "cares" about any thread of subjective experience. Rather, it is because of physical continuity. Compare this with evolution's choice in an environment where there are "transporters" that accurately convey entities by molecular disassembly/reassembly. Creatures which had evolved in that environment would certainly step through those transporters without fear.
I can't answer the fourth or fifth horns; I'm not sure I understand them.
comment by Nubulous · 2009-09-27T19:43:10.124Z · LW(p) · GW(p)
When you wake up, you will almost certainly have won (a trillionth of the prize). The subsequent destruction of winners (sort of - see below) reduces your probability of being the surviving winner back to one in a billion.
Merging N people into 1 is the destruction of N-1 people - the process may be symmetrical but each of the N can only contribute 1/N of themself to the outcome.
The idea of being (N-1)/N th killed may seem a little odd at first, but less so if you compare it to the case where half of one person's brain is merged with half of a different person's (and the leftovers discarded).
EDIT: Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.
Replies from: PlatypusNinja, snarles, JamesAndrix↑ comment by PlatypusNinja · 2009-10-02T00:25:49.753Z · LW(p) · GW(p)
Note that when the trillion were told they won, they were actually being lied to - they had won a trillionth part of the prize, one way or another.
Suppose that, instead of winning the lottery, you want your friend to win the lottery. (Or you want your random number generator to crack someone's encryption key, or you want a meteor to fall on your hated enemy, etc.) Then each of the trillion people would experience the full satisfaction from whatever random result happened.
↑ comment by snarles · 2011-05-20T19:40:18.521Z · LW(p) · GW(p)
This.
How does Yudkowsky's careless statement "Just as computer programs or brains can split, they ought to be able to merge" not immediately light up as the weakest link of the entire post?
If you think merging ought to work, then why not also think that quantum suicide ought to work?
↑ comment by JamesAndrix · 2009-09-29T04:43:01.314Z · LW(p) · GW(p)
In the case where the people are computer programs, none of that works.
Replies from: Nubulous↑ comment by Nubulous · 2009-09-29T19:34:41.358Z · LW(p) · GW(p)
If you mean that a quantitative merge on a digital computer is generally impossible, you may be right. But the example I gave suggests that merging is death in the general case, and is presumably so even for identical merges, which can be done on a computer.
Replies from: JamesAndrix↑ comment by JamesAndrix · 2009-09-29T22:06:35.870Z · LW(p) · GW(p)
I fail to see why that is the general case.
For that matter, I fail to see why losing some(many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.
Replies from: Nubulous↑ comment by Nubulous · 2009-10-01T05:27:27.569Z · LW(p) · GW(p)
I fail to see why that is the general case.
If you have two people to start with, and one when you've finished, without any further stipulation about which people they are, then you have lost a person somewhere. To come to a different conclusion would require an additional rule, which is why it's the general case.
That additional rule would have to specify that a duplicate doesn't count as a second person. But since that duplicate could subsequently go on to have a separate, different life of its own, the grounds for denying it personhood seem quite weak.
For that matter, I fail to see why losing some(many, most) of my atoms and having them be quickly replaced by atoms doing the exact same job should be viewed as me dying at all.
It's not dying in the sense of there no longer being a you, but it is still dying in the sense of there being fewer of you.
To take the example of you being merged with someone, those atoms you lose, together with the ones you don't take from the other person, make enough atoms, doing the right jobs, to make a whole new person. In the symmetrical case, a second "you". That "you" could have gone on to live its own life, but now won't. Hence a "you" has died in the process.
In other words, merge is equivalent to "swap pieces then kill".
The above looks as though it will work just as well with bits, or the physical representation of bits, rather than atoms (for the symmetrical case).
↑ comment by JamesAndrix · 2009-10-01T06:39:01.562Z · LW(p) · GW(p)
If a person were running on an inefficiently designed computer with transistors and wires much larger than they needed to be, it would be possible to peel away and discard (perhaps) half of the atoms in the computer without affecting its operation or the person. This would be much like Ebborian reproduction, but merely a shedding of atoms.
In any sufficiently large information processing device, there are two or more sets of atoms (or whatever it's made of) processing the same information, such that they could operate independently of each other if they weren't spatially intertwined.
Why are they one person when spatially intertwined, but two people when they are apart? That they 'could have' gone on independently is a counterfactual in the situation that they are both receiving the same inputs. You 'could' be peeled apart into two people, but both halves of your parts are currently still making up 1 person.
Personhood is in the pattern. Not the atoms or memory or whatever. There's only another person when there is another sufficiently different pattern.
merge is equivalent to 'spatially or logically reintegrate, then shed atoms or memory allocation as desired'
comment by Christian_Szegedy · 2009-09-30T22:00:59.722Z · LW(p) · GW(p)
Whenever I read about "weight of experience", "quantum goo", "existentness", etc., I can't keep myself from also thinking of "vital spark", "phlogiston", "ether" and other similar stuff... And it somehow spoils the whole fun...
In the history of mankind, hard-looking (meta-)physical dilemmas have much more often been resolved by elimination than by the introduction of new "essences". The moral of the history of physics so far is that relativity typically trumps absoluteness in the long run.
For example, I would not be surprised at all if it turned out that experienced Born probabilities are not absolute, but depend on some reference frame (in a very high-dimensional space), just as the experience of time, speed, mass, etc. depends on the relativistic frame of reference.
Replies from: Eliezer_Yudkowsky, Psy-Kosh↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-30T22:37:20.135Z · LW(p) · GW(p)
Marcello and I use the term "reality-fluid" to remind ourselves that we're confused.
↑ comment by Psy-Kosh · 2009-09-30T22:09:56.361Z · LW(p) · GW(p)
Dunno about others, but I agree that terms like that seem to indicate a serious confusion, or at least something that I am very confused about, and it seems others here are too. We're more using them as a way of talking about our confusion. Just noting "there's something here we're failing to comprehend" isn't enough to help us comprehend it. It's more a case of "we're not sure what concepts to replace those terms with", at least for me.
comment by Z_M_Davis · 2009-09-27T04:39:32.750Z · LW(p) · GW(p)
Following Nominull and Furcas, I bite the third bullet without qualms for the perfectly ordinary obvious reasons. Once we know how much of what kinds of experiences will occur at different times, there's nothing left to be confused about. Subjective selfishness is still coherent because you're not just an arbitrary observer with no distinguishing characteristics at all; you're a very specific bundle of personality traits, memories, tendencies of thought, and so forth. Subjective selfishness corresponds to only caring about this one highly specific bundle: only caring about whether someone falls off a cliff if this person identifies as such-and-such and has such-and-these specific memories and such-and-those personality traits: however close a correspondence you need to match whatever you define as personal identity.
The popular concepts of altruism and selfishness weren't designed for people who understand materialism. Once you realize this, you can just recast whatever it was you were already trying to do in terms of preferences over histories of the universe. It all adds up to, &c., &c.
Replies from: Wei_Dai, komponisto↑ comment by Wei Dai (Wei_Dai) · 2009-09-30T10:09:41.734Z · LW(p) · GW(p)
I agree that giving up anticipation does not mean giving up selfishness. But as Dan Armak pointed out there is another reason why you may not want to give up anticipation: you may prefer to keep the qualia of anticipation itself, or more generally do not want to depart too much from the subjective experience of being human.
Eliezer, if you are reading this, why do you not want to give up anticipation? Do you still think it means giving up selfishness? Is it for Dan Armak's reason? Or something else?
↑ comment by komponisto · 2009-09-27T21:41:56.382Z · LW(p) · GW(p)
The (only) trouble with this is that it doesn't answer the question about what probabilities you_0 should assign to various experiences 5 seconds later. Personal identity may not be ontologically fundamental, it may not even be the appropriate sort of thing to be programmed into a utility function -- but at the level of our everyday existence (that is, at whatever level we actually do exist), we still have to be able to make plans for "our own" future.
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-09-28T03:36:28.510Z · LW(p) · GW(p)
I would say that the ordinarily very useful abstraction of subjective probability breaks down in situations that involve copying and remerging people, and that our intuitive morality breaks down when it has to deal with measure of experience. In the current technological regime, this isn't a problem at all, because the only branching we do is quantum branching, and there we have this neat correspondence between quantum measure and subjective probability, so you can plan for "your own" future in the ordinary obvious way. How you plan for "your own" future in situations where you expect to be copied and merged depends on the details of your preferences about measure of experience. For myself, I don't know how I would go about forming such preferences, because I don't understand consciousness.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-09-28T04:58:18.686Z · LW(p) · GW(p)
In the current technological regime, this isn't a problem...[the future] depends on the details of your preferences about measure of experience.
Quantum suicide is already a problem in the current regime, if you allow preference over measure.
Splitting and merging adds another problem, but I think it is a factual problem, not an ethical problem. At least, I think that there is a factual problem before you come to the ethical problem, which may be the same as for Born measure.
comment by Stuart_Armstrong · 2009-09-27T09:32:15.467Z · LW(p) · GW(p)
Just an aside - this is obviously something that Eliezer - someone highly intelligent and thoughtful - has thought deeply about, and has had difficulty answering.
Yet most of the answers - including my own - seem to be of the "this is the obvious solution to the dilemma" sort.
Replies from: DanArmak, casebash↑ comment by DanArmak · 2009-09-27T13:03:09.028Z · LW(p) · GW(p)
...Only each obvious solution proposed is different.
Replies from: Stuart_Armstrong, Vladimir_Nesov↑ comment by Stuart_Armstrong · 2009-09-27T16:46:21.168Z · LW(p) · GW(p)
Of course...
↑ comment by Vladimir_Nesov · 2009-09-27T13:12:49.225Z · LW(p) · GW(p)
...Only each obvious solution proposed is different.
It's philosophy, what'd you expect?
comment by cousin_it · 2011-08-01T10:52:15.207Z · LW(p) · GW(p)
Bonus: if you're uncomfortable with merging/deleting copies, you can skip that part! Just use the lottery money to buy some computing equipment and keep your extra copies running in lockstep forever. Is this now an uncontroversial algorithm for winning the lottery, or what?
comment by Stuart_Armstrong · 2009-09-27T16:53:18.323Z · LW(p) · GW(p)
More near-equivalent reformulations of the problem (in support of the second horn):
A trillion copies will be created, believing they have won the lottery. All but one will be killed (a 1/trillion chance that your current state leads directly to your future state). If you add some unimportant differentiation between the copies - give each one a separate number - then the situation is clearer: you have one chance in a trillion that the future self will remember your number (so your unique contribution has a 1/trillion chance of happening), while he will be certain to believe he has won the lottery (he gets that belief from everyone).
A trillion copies are created, each altruistically happy that one among the group has won the lottery. One of them at random is designated the lottery winner. Then everyone else is killed.
Follow the money: you (and your copies) are not deriving utility from winning the lottery, but from spending the money. If each copy is selfish, there is no dilemma: the lottery winnings divided amongst a trillion copies cancel out the trillion copies. If each copy is altruistic, then the example is the same as above, in which case there is a mass of utility generated from the copies, which vanishes when the copies vanish. But this extra mass of utility is akin to the utility generated by: "It's wonderful to be alive. Quick, I copy myself, so now many copies feel it's wonderful to be alive. Then I delete the copies, so the utility goes away".
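A toy tally of the selfish-copies case (my own made-up numbers: a 10^-9 win probability, a 10^9 prize, 10^12 copies), showing that dividing the winnings exactly cancels the multiplication of copies:

```python
# If each of the trillion copies is selfish and the prize is split between
# them, the copying does nothing to anyone's expected winnings.

P_WIN = 1e-9   # chance the ticket wins (hypothetical)
PRIZE = 1e9    # prize in dollars (hypothetical)
COPIES = 1e12  # copies woken on a win (hypothetical)

# Expected dollars for the ticket-buyer without any copying:
ev_no_copy = P_WIN * PRIZE

# With copying, each selfish copy expects PRIZE / COPIES on a win,
# and there are COPIES of them; the total is unchanged.
ev_per_copy = P_WIN * (PRIZE / COPIES)
ev_total = ev_per_copy * COPIES

print(ev_no_copy, ev_per_copy, ev_total)  # approximately 1.0, 1e-12, 1.0
```

On these numbers the expected winnings come to one dollar with or without the copying; the extra copies only dilute each share.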
↑ comment by casebash · 2016-04-16T05:18:43.729Z · LW(p) · GW(p)
"You (and your copies) are not deriving utility from winning the lottery, but from spending the money"
I would say that you derive utility from knowing that you've won money you can spend. But, if you only get $1, you haven't won very much.
I think that a better problem would be if you split when your favourite team won the Super Bowl. Then you'd have a high probability of experiencing this happiness, and the number of copies wouldn't reduce it.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2016-04-18T14:03:01.474Z · LW(p) · GW(p)
I think that a better problem would be if you split when your favourite team won the Super Bowl. Then you'd have a high probability of experiencing this happiness, and the number of copies wouldn't reduce it.
Neat!
comment by DanielVarga · 2009-09-29T23:53:03.410Z · LW(p) · GW(p)
Time and (objective or subjective) continuity are emergent notions. The more basic notion they emerge from is memory. (Eliezer, you read this idea in Barbour's book, and you seemed to like it when you wrote about that book.)
Considering this: yes, caring about the well-being of "agents that have memories of formerly being me" is incoherent. It is just as incoherent as caring about the well-being of "agents mostly consisting of atoms that currently reside in my body". But in typical cases, both of these lead to the same well known and evolutionarily useful heuristics.
I don't think any of the above implies that "thread of subjective experience" is a ridiculous thing, or that you can turn into Britney Spears. Continuity being an emergent phenomenon does not mean that it is a nonexistent one.
comment by randallsquared · 2009-09-28T12:00:57.195Z · LW(p) · GW(p)
As for what happens ten seconds after that, you have no way of knowing how many processors you run on, so you shouldn't feel a thing
Here's the problem, as far as I can see. You shouldn't feel a thing, but that would also be true if none of you ever woke up again. "I won't notice being dead" is not an argument that you won't be dead, so lottery winners should anticipate never waking up again, though they won't experience it (we don't anticipate living forever in the factual world, even though no one ever notices being dead).
I'm sure there's some reason this is considered invalid, since quantum suicide is looked on so favorably around here. :)
Replies from: abramdemski↑ comment by abramdemski · 2009-09-28T20:26:49.005Z · LW(p) · GW(p)
The reason is simply that, in the many-worlds interpretation, we do survive -- we just also die. If we ask "Which of the two will I experience?" then it seems totally valid to argue "I won't experience being dead."
Replies from: randallsquared↑ comment by randallsquared · 2009-10-01T13:08:35.814Z · LW(p) · GW(p)
So it basically comes back to pattern-as-identity instead of process-as-identity. Those of me who survive won't experience being dead. I think you can reach my conclusions by summing utility across measure, though, which would make an abrupt decrease in measure equivalent to any other mass death.
comment by Furcas · 2009-09-27T03:25:45.502Z · LW(p) · GW(p)
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
I don't really understand your reasoning here. It's not a different person that will experience the consequences of hitting the ground, it's Eliezer+5. Sure, Eliezer+5 is not identical to Eliezer, but he's really, really, really similar. If Eliezer is selfish, it makes perfect sense to care about Eliezer+5 too, and no sense at all to care equally about Furcas+5, who is really different from Eliezer.
Replies from: orthonormal↑ comment by orthonormal · 2009-09-28T16:54:58.593Z · LW(p) · GW(p)
Suppose I'm duplicated, and both copies are told that one of us will be thrown off a cliff. While it makes some kind of sense for Copy 1 to be indifferent (or nearly indifferent) to whether he or Copy 2 gets tossed, that's not what would actually occur. Copy 1 would probably prefer that Copy 2 gets tossed (as a first-order thing; Copy 1's morals might well tell him that if he can affect the choice, he ought to prefer getting tossed to seeing Copy 2 getting tossed; but in any case we're far from mere indifference).
There's something to "concern for my future experience" that is distinct from concern for experiences of beings very like me.
Replies from: Furcas, alex_zag_al↑ comment by Furcas · 2009-09-28T19:11:48.053Z · LW(p) · GW(p)
I have the same instincts, and I would have a very hard time overriding them, were my copy and I put in the situation you described, but those instincts are wrong.
Replies from: orthonormal↑ comment by orthonormal · 2009-09-28T19:18:05.694Z · LW(p) · GW(p)
Emend "wrong" to "maladapted for the situation" and I'll agree.
Replies from: DanielVarga↑ comment by DanielVarga · 2009-09-29T21:53:49.254Z · LW(p) · GW(p)
These instincts are only maladapted for situations found in very contrived thought experiments. For example, you have to assume that Copy 1 can inspect Copy 2's source code. Otherwise she could be tricked into believing that she has an identical copy. (What a stupid way to die.) I think our intuitions are already failing us when we try to imagine such source code inspections. (To put it another way: we have very little in common with agents that can do such things.)
Replies from: orthonormal↑ comment by orthonormal · 2009-09-30T23:13:43.927Z · LW(p) · GW(p)
For example, you have to assume that Copy 1 can inspect Copy 2's source code.
It would suffice, instead, to have strong evidence that the copying process is trustworthy; in the limit as the evidence approaches certainty, the more adaptive instinct would approach indifference between the cases.
↑ comment by alex_zag_al · 2014-10-11T18:44:58.911Z · LW(p) · GW(p)
Good thought experiment, but I actually would be indifferent, as long as I actually believed that my copy was genuine and wouldn't be thrown off a cliff. Unfortunately, I can't actually imagine any evidence that would convince me of this. I wonder if that's the source of your reservations too -- if the reason you imagine Copy 1 caring is that you can't imagine Copy 1 being convinced of the scenario.
comment by Ishaan · 2013-09-26T04:20:43.319Z · LW(p) · GW(p)
And the third horn of the trilemma is to reject the idea of the personal future - that there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears. Or, for that matter, that there's any meaningful sense in which I can anticipate being myself in five seconds, rather than Britney Spears. In five seconds there will be an Eliezer Yudkowsky, and there will be a Britney Spears, but it is meaningless to speak of the current Eliezer "continuing on" as Eliezer+5 rather than Britney+5; these are simply three different people we are talking about.
There are no threads connecting subjective experiences. There are simply different subjective experiences. Even if some subjective experiences are highly similar to, and causally computed from, other subjective experiences, they are not connected.
I still have trouble biting that bullet for some reason. Maybe I'm naive, I know, but there's a sense in which I just can't seem to let go of the question, "What will I see happen next?" I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
I do bite the bullet, but I think you are wrong about the implications of biting this bullet.
Eliezer Yudkowsky cares about what happens to Eliezer Yudkowsky+5 seconds, in a way that he doesn't care about what happens to Ishaan+5 or Brittany+5.
E+5 holds a special place in E's utility function. To E, universes in which E+5 is happy are vastly superior to universes in which E+5 is unhappy or dead.
It makes no difference to E that E+5 is not identical to E. E still cares about E+5, and E ought not need any magic subjective thread connecting E and E+5 to justify this preference. It's not incoherent to prefer a future where certain entities that are causally connected to you continue to thrive - that's all "selfishness" really means.
E anticipates the universe that E+5 will experience. E+5 will carry the memory of this anticipation. If there are lotteries and clones, E will anticipate a universe with a 1% chance of a bunch of E+5 clones winning the lottery and a 99% chance of no E+5 clones winning the lottery. Anticipation is expectation concerning what you+5 will experience in the future. You're basically imagining your future self and experiencing a specialized and extreme version of "empathy". It doesn't matter whether or not there is a magical thread tying you to your future self. If you strip the emotional connotation from "anticipation" and just call it "prediction", you can even predict what happens after you die (it's just that there is no future version of you to "empathize" with anymore).
There are no souls. That holds spatially and temporally.
comment by RobinHanson · 2009-09-27T23:11:32.413Z · LW(p) · GW(p)
Oddly, I feel myself willing to bite all three bullets. Maybe I am too willing to bite bullets? There is a meaningful sense in which I can anticipate myself being one of the future people who will remember being me, though perhaps there isn't a meaningful way to talk about which of those many people I will be; I will be all of them.
comment by Steve_Rayhawk · 2009-09-28T00:31:03.865Z · LW(p) · GW(p)
I suggested that, in some situations, questions like "What is your posterior probability?" might not have answers, unless they are part of decision problems like "What odds should you bet at?" or even "What should you rationally anticipate to get a brain that trusts rational anticipation?". You didn't comment on the suggestion, so I thought about problems you might have seen in it.
In the suggestion, the "correct" subjective probability depends on a utility function and a UDT/TDT agent's starting probabilities, which never change. The most important way the suggestion is incomplete is that it doesn't itself explain something we do naturally: we care about the way our "existentness" has "flowed" to us, and if we learn things about how "existentness" or "experiencedness" works, we change what we care about. So when we experiment on quantum systems, and we get experimental statistics that are more probable under a Born rule with a power of 2 than (hand-waving normalization problems) under a Born rule with a power of 4, we change our preferences, so that we care about what happens in possible future worlds in proportion to their integrated squared amplitude, and not in proportion to the integral of the fourth power. But, if there were people who consistently got experimental statistics that were more probable under a Born rule with a power of 4 (whatever that would mean), we would want them to care about possible future worlds in proportion to the integral of the fourth power of their amplitude.
This can even be done in classical decision theory. Suppose you were creating an agent to be put into a world with Ebborean physics, and you had uncertainty about whether, in the law relating world-thickness ratios (at splitting time) to "existentness" ratios, the power was 2 or 4. It would be easy to put a prior probability of 1/2 on each power, and then have "the agent" update from measurements of the relative thicknesses of the sides of the split worlds it (i.e. its local copy) ended up on. But this doesn't explain why you would want to do that.
What would a UDT/TDT prior belief distribution or utility function have to look like in order to define agents that can "update" in this way, while only thinking in terms of copying and not subjective probability? Suppose you were creating an agent to be put into a world with Ebborean physics, and you had uncertainty about whether, in the relation between world thickness ratios and "existentness" ratios, the power was 2 or 4. And this time, suppose the agent was to be an updateless decision theory agent. I think a UDT agent which uses "probability" can be converted by an expected utility calculation into a behaviorally equivalent UDT agent which uses no probability. Instead of probability, the agent uses only "importances": relative strengths of its (linearly additive) preferences about what happens in the various deterministic worlds the agent "was" copied into at the time of its creation. To make such an agent in Ebborean physics "update" on "evidence" about existentness, you could take the relative importance you assigned to influencing world-sheets, split it into two halves, and distribute each half across world-sheets in a different way. Half of the importance would be distributed in proportion to the cumulative products of the squares of the worlds' thickness ratios at their times of splitting, and half of the importance would be distributed in proportion to the cumulative products of the fourth powers of the worlds' thickness ratios at their times of splitting. Then, in each world-sheet, the copy of the agent in that world-sheet would make some measurements of the relative thicknesses on its side of a split, and it would use those measurements to decide what kinds of local futures it should prioritize influencing.
But, again, this doesn't explain why you would want to do that. (Maybe you wanted the agents to take a coordinated action at the end of time using the world-sheets they controlled, and you didn't know which kinds of world-sheets would become good general-purpose resources for that action?)
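Here is a minimal sketch of that importance-reweighting procedure as I read it (the formalization and the numbers are mine, not Steve_Rayhawk's): half of the importance is spread over world-sheets in proportion to cumulative products of squared thickness fractions, half in proportion to fourth powers, and a local copy that has recorded which side of each split it is on can compute how much of each half it is carrying.

```python
# Importance carried by the world-sheet reached via a given sequence of splits,
# under a rule that weights each split's two sides by thickness**power.
def sheet_importance(thickness_fractions, power, prior_half=0.5):
    importance = prior_half
    for t in thickness_fractions:  # t = our side's share of the thickness, 0 < t < 1
        importance *= t**power / (t**power + (1 - t)**power)
    return importance

# Hypothetical record of splits: this copy ended up on the 0.7-thick side,
# then the 0.6-thick side, then the 0.55-thick side.
splits = [0.7, 0.6, 0.55]

w2 = sheet_importance(splits, power=2)
w4 = sheet_importance(splits, power=4)
total = w2 + w4

print(f"share of importance from the power-2 rule: {w2 / total:.3f}")
print(f"share of importance from the power-4 rule: {w4 / total:.3f}")
# The copy then prioritizes influencing its local futures in these proportions.
```

The two shares behave just like a posterior over "power 2" versus "power 4", which is the sense in which the importance-only agent reproduces Bayesian updating without ever invoking subjective probability.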
I think there was another way my suggestion is incomplete, which has something to do with the way your definition of altruism doesn't work without a definition of "correct" subjective probability. But I don't remember what your definition of altruism was or why it didn't work without subjective probability.
I still think the right way to answer the question, "What is the correct subjective probability?" might be partly to derive "Bayesian updating" as an approximation that can be used by computationally limited agents implementing an updateless or other decision theory, with a utility function defined over mathematical descriptions of worlds containing some number of copies of the agent, when the differences in utility that result from the agent's decisions fulfill certain independence and linearity assumptions. I need to mathematically formalize those assumptions. "Subjective probability" would then be a variable used in that approximation, which would be meaningless or undefined when the assumptions failed.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-28T17:27:41.569Z · LW(p) · GW(p)
I'm terribly sorry, but I had to delete that comment because it used the name of M-nt-f-x, which, if spoken in plain text, causes that one to appear. He Googles his name.
Here was the original comment by outlawpoet:
Marc Geddes was a participant in the SL4 list back when it was a bit more active, and kind of proverbial for degenerating into posting very incoherent abstract theory. He's basically carrying on the torch for M-nt-f-x and Archimedes Plutonium and other such celebrities of incommunicable genius.
Now please stop replying to this thread!
comment by Vladimir_Nesov · 2009-09-27T10:00:53.813Z · LW(p) · GW(p)
The problem is that copying and merging is not as harmless as it seems. You are basically doing invasive surgery on the mind, but because it's performed using intuitively "non-invasive" operations, it looks harmless. If, for example, you replaced the procedure with rewriting "subjective probability" by directly modifying the brain, the fact that you'd have different "subjective probability" as a result won't be surprising.
Thus, on one hand, there is an intuition that the described procedure doesn't damage the brain, and on the other the intuition about what subjective probability should look like in an undamaged brain, no matter in what form this outcome is delivered (that is, probability is always the same, you can just learn about it in different ways, and this experiment is one of them). The problem is that the experiment is not an instance of normal experience to which one can generalize the rule that subjective probability works fine, but an instance of arbitrary modification of the brain, from which you can expect anything.
Assuming that the experiment with copying/merging doesn't damage the brain, the resulting subjective probability must be correct, and so we get a perception of modifying the correct subjective probability arbitrarily.
Thought experiments with doing strange things to decision-theoretic agents are only valid if the agents have an idea about what kind of situation they are in, and so can try to find a good way out. Anything less, and it's just phenomenology: throw a rat in magma and see how it burns. Human intuitions about subjective expectation are optimized for agents who don't get copied or merged.
comment by KatjaGrace · 2011-05-19T16:59:02.696Z · LW(p) · GW(p)
I responded here http://meteuphoric.wordpress.com/2011/05/19/on-the-anthropic-trilemma/
comment by jimrandomh · 2009-09-27T15:25:25.603Z · LW(p) · GW(p)
The problem with anthropic reasoning and evidence is that unlike ordinary reasoning and evidence, it can't be transferred between observers. Even if "anthropic psychic powers" actually do work, you still should expect all other observers to report that they don't.
Replies from: DanArmak
comment by Nominull · 2009-09-27T02:37:03.312Z · LW(p) · GW(p)
I bite the third bullet. I am not as gifted with words as you are to describe why biting it is just and good and even natural if you look at it from a certain point of view, but...
You are believing in something mystical. You are believing in personal identity as something meaningful in reality, without giving any reason why it ought to be, because that is how your algorithm feels from the inside. There is nothing special about your brain as compared to your brain+your spinal cord, or as compared to your brain+this pencil I am holding. How could there be? Reality doesn't know what a brain is. Brains are not baked into the fundaments of reality, they are emergent from within. How could there be anything meaningful about them?
Consider this thought experiment. We take "you", and for a brief timespan, say, epsilon seconds, we replace "you" with "Britney Spears". Then, after the epsilon seconds have passed, we swap "you" back in. Does this have a greater than order epsilon effect on anything? If so, what accounts for this discontinuity?
Replies from: Johnicholas↑ comment by Johnicholas · 2009-09-27T02:45:32.997Z · LW(p) · GW(p)
EY seems to have equated the third bullet with throwing oneself off of cliffs. Do you throw yourself off of cliffs? Why or why not?
Replies from: Nick_Tarleton, Z_M_Davis, Nominull, orthonormal↑ comment by Nick_Tarleton · 2009-09-27T03:49:38.460Z · LW(p) · GW(p)
It sounds to me like EY is equating the second bullet with "perfect altruism is coherent, as is caring only about one's self at the current moment, but nothing in between is." To that, though, as Furcas says, one can be selfish according to similarity of pattern rather than ontologically privileged continuity.
Replies from: torekp↑ comment by torekp · 2010-04-02T14:22:57.230Z · LW(p) · GW(p)
Or one could be selfish according to a non-fundamental, ontologically reducible continuity. At least, I don't see why not. Has anyone offered an argument for pattern over process?
randallsquared has it dead right, I think.
↑ comment by Z_M_Davis · 2009-09-27T04:41:57.131Z · LW(p) · GW(p)
I don't throw myself off cliffs for very roughly the same reason I don't throw other people off cliffs.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-28T16:21:45.767Z · LW(p) · GW(p)
And for the same reason you buy things for yourself more often than for other people? And for the same reason you (probably) prefer someone else falling off a cliff to yourself?
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-09-28T20:35:31.455Z · LW(p) · GW(p)
I was trying to be cute.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-28T21:13:34.529Z · LW(p) · GW(p)
Considering that your cute comment was consistent with your other comments in this discussion, I think I can be forgiven for thinking you were serious.
Actually, which of your other comments here are just being cute?
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-09-29T00:23:21.934Z · LW(p) · GW(p)
Right, so of course I'm rather selfish in the sense of valuing things-like-myself, and so of course I buy more things for myself than I do for random strangers, and so forth. But I also know that I'm not ontologically fundamental; I'm just a conjunction of traits that can be shared by other observers to various degrees. So "I don't throw myself off cliffs for very roughly the same reason I don't throw other people off cliffs" is this humorously terse and indirect way of saying that identity is a scalar, not a binary attribute. (Notice that I said "very roughly the same reason" and not "exactly the same reason"; that was intentional.)
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T00:46:08.116Z · LW(p) · GW(p)
And ... you expected everyone else to get that out of your cute comment?
You know, sometimes you just have to throw in the towel and say, "Oops. I goofed."
ETA: I'm sure that downmod was because this comment was truly unhelpful to the discussion, rather than because it made someone look bad.
Replies from: Z_M_Davis, Alicorn↑ comment by Z_M_Davis · 2009-09-29T02:37:22.056Z · LW(p) · GW(p)
Oops. I goofed.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2009-09-29T02:50:11.400Z · LW(p) · GW(p)
I am sad to see this comment. Perhaps you were mistaken in how clear the comment was to how broad an audience, but I think the original comment was valuable and that we lose a lot of our ability to communicate if we are too careful.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T03:11:41.917Z · LW(p) · GW(p)
Except this wasn't an issue of being too careful, and it definitely doesn't count as good communication!
Z_M_Davis made a remark that was both poorly-reasoned and supportive of every other comment he left in the discussion (in trivializing the privileged state of any choice of identity). If he had been arguing against the third horn, okay, maybe it could have been read as "oh, he's cleverly mocking a position he disagrees with".
But then he comes back with, "I was trying to be cute." Okay, so he's ... doing self-parody. Great -- we all need to be able to laugh at ourselves. So what's his real position, then?
Oh, you see, he was making a very subtle point about identity being scalar rather than binary (which has some as-of-yet unspecified implication for the merit of his position). And there was a hidden argument in there that allows him to see his life as no different from any others and yet still act in preference to himself. And it was obvious what distinction he was making by using the words "very roughly the same reason" instead of "exactly the same reason".
I'm sorry, but that's just not "how it works". You can claim illusion of transparency issues if the assumed common knowledge is small, and you have a reasonable basis for assuming it, and your full explanation doesn't look blatantly ad hoc.
In other words, anywhere but here.
I'm sorry to belabor the point, but yes, sometimes you just have to admit you goofed. Mistakes are okay! We all make them! But we don't all try to say "I meant to do that".
Replies from: Z_M_Davis↑ comment by Z_M_Davis · 2009-09-29T05:31:47.296Z · LW(p) · GW(p)
which has some as-of-yet unspecified implication for the merit of his position
See Furcas's comment.
that allows him to see his life as no different from any others and yet still act in preference to himself
I never said it was no different. Elsewhere in the thread, I had argued that selfishness is entirely compatible with biting the third bullet. Egan's Law.
And it was obvious what distinction he was making by using the words "very roughly the same reason" instead of "exactly the same reason".
I disagree; if it had been obvious, I wouldn't have had to point it out explicitly. Maybe the cognitive history would help? I had originally typed "the same reason," but added "very roughly" before posting because I anticipated your objection. I think the original was slightly funnier, but I thought it was worth trading off a little of the humor value in exchange for making the statement more defensible when taken literally.
I'm sorry, but that's just not "how it works". [...] your full explanation [looks] blatantly ad hoc.
I'm curious. If what actually happened looks ad hoc to you, what's your alternative theory? If you don't trust what I say about what I was thinking, then what do you believe instead? You seem to think I've committed some error other than writing two admittedly somewhat opaque comments, but I'm not sure what it's supposed to be.
↑ comment by Alicorn · 2009-09-29T01:00:47.894Z · LW(p) · GW(p)
Is there any chance that you will soon mature / calm down / whatever it is you need to do to stop being so hostile, so frequently? This is only the latest example of you coming down on people with the utmost contempt for fairly minor offenses, if they're offenses at all. It looks increasingly like you think anyone who conducts themselves in the comments differently than you prefer ought to be beheaded or something. It's really unpleasant to read, and I don't think it's likely to be effective in getting people to adopt your standards of behavior. (I can think of few posters I'm less motivated to emulate than you, as one data point.)
Edit: I downvoted the parent of this comment because I would like to see fewer comments that resemble it.
Replies from: SilasBarta, SilasBarta↑ comment by SilasBarta · 2009-09-29T01:29:09.119Z · LW(p) · GW(p)
I hate to play tu quoque, but it's rather strange of you to make this criticism considering that just a few months ago you gave a long list of very confining restrictions you wanted on commenters, enforced by bannings. Despite their best efforts, no one could discern the pattern behind what made something beyond-the-pale offensive, so you were effectively asking for unchecked, arbitrary authority to remove comments you don't like.
You even went so far as to ask to deputize your own hand-picked cadre of posters "with their heads on right" to assist in classifying speech as unacceptable!
Yes, Alicorn, I've been very critical of those who claim objectivity in modding people they're flaming, but I don't think I've ever demanded the sort of arbitrary authority over the forum that you feel entitled to.
I will gladly make my criticisms more polite in the future, but I'm not going to apologize for having lower karma than if I abused the voting system the way some of you seem to.
And in the meantime, perhaps you could make it a habit of checking if the criticisms you make of others could apply to yourself. I'm not asking that you be perfect or unassailable in this respect. I'm not even asking that you try to adhere to your own standards. I just ask that you check whether you're living up to them.
Edit: I didn't downvote the parent of this comment because I'm not petty like that.
Replies from: Alicorn↑ comment by Alicorn · 2009-09-29T02:11:07.670Z · LW(p) · GW(p)
I hate to play tu quoque
I'm glad you pointed this out, because I never would have figured it out on my own. It's subtle!
you gave a long list of very confining restrictions you wanted on commenters, enforced by bannings
I dispute "long", "very confining", and "bannings". There were a handful of things on my list, and they could all be summed up as "sexism", which is only one thing. Many commenters have no trouble abiding by the restrictions. I also don't remember ever proposing actual bans, just social mechanisms of discouragement and some downvoting.
Despite their best efforts, no one could discern the pattern behind what made something beyond-the-pale offensive, so you were effectively asking for unchecked, arbitrary authority to remove comments you don't like.
I dispute "their best efforts", "no one", "asking for unchecked, arbitrary authority", and "don't like". I am not convinced that everyone tried their very best. I am convinced that many people understood me very well. I did not request any personal authority, much less the unchecked arbitrary kind. My requests had to do with comments that had a particular feature, which does not overlap completely with things I do not like.
You even went so far as to ask to deputize your own hand-picked cadre of posters "with their heads on right" to assist in classifying speech as unacceptable!
That was in response to discomfort with being implicitly given, because of my gender, the "authority" you accused me of requesting. I did not go on to actually deputize anyone.
I will gladly make my criticisms more polite in the future
Yaaaaaaaay!
I'm not going to apologize for having lower karma than if I abused the voting system the way some of you seem to.
Wait, you're saying that some people conduct an abuse of the voting system that increases their own karma? As opposed to increasing or decreasing others'? What accusation are you making, exactly? Against whom? What's your evidence for it? Or are you saying that if you abused the voting system, you could get others to upvote you more and downvote you less?
And in the meantime, perhaps you could make it a habit of checking if the criticisms you make of others could apply to yourself.
I don't think, in ordinary language, it's possible to make a habit of the same thing twice, so unfortunately, I can't do that, anymore.
I'm not asking that you be perfect or unassailable in this respect. I'm not even asking that you try to adhere to your own standards.
I'm so glad you said so. Subtle.
Edit: I didn't downvote the parent of this comment because I'm not petty like that.
Okay. You have my blanket approval to refrain from downvoting anything you are disinclined to downvote.
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T02:19:21.482Z · LW(p) · GW(p)
Thanks: when I make my future posts more mature and less hostile, I can use this as a guide.
Replies from: Alicorn↑ comment by Alicorn · 2009-09-29T02:28:14.549Z · LW(p) · GW(p)
Since at least one person seems to agree with you, I'm genuinely curious now. Assuming I'm correct in detecting sarcasm there, can you elaborate?
Replies from: SilasBarta↑ comment by SilasBarta · 2009-09-29T02:34:59.400Z · LW(p) · GW(p)
Can I elaborate? When the discussion has become polarized to the point where people will downmod pretty much any future comment on the grounds that it's perpetuating a flamewar or they view the poster as being on "the other side"? Not a chance, I'm afraid.
I do, however, feel very honored that at least some people sympathize with me here.
Replies from: Alicorn↑ comment by SilasBarta · 2009-09-29T01:55:12.740Z · LW(p) · GW(p)
I downvoted the parent of this comment because I would like to see fewer comments that resemble it.
Yes, you would like to see fewer comments that have "SilasBarta" at the top.
↑ comment by Nominull · 2009-09-27T02:51:25.309Z · LW(p) · GW(p)
A lot of people, especially religious people, equate lack of belief in a fundamental meaning of life with throwing oneself off cliffs. Eliezer is committing the same sort of mistake.
Replies from: loqi↑ comment by loqi · 2009-09-27T12:16:44.834Z · LW(p) · GW(p)
No, I think he's just pointing out that the common intuitions behind anticipatory fear are grossly violated by the third horn.
I'd like to see you chew this bullet a bit more, so try this version. You are to be split (copied once). One of you is randomly chosen to wake up in a red room and be tortured for 50 years, while the other wakes up in a green room and suffers a mere dust speck. Ten minutes will pass for both copies before the torture or specking commences.
How much do you fear being tortured before the split? Does this level of fear go up/down when you wake up in a red/green room? To accept the third horn seems to imply that you should feel no relief upon waking in the green room.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-09-27T16:48:47.580Z · LW(p) · GW(p)
To accept the third horn seems to imply that you should feel no relief upon waking in the green room.
Only if you assume feelings of relief should bind to reality (or reality+preferences) in a particular way.
Replies from: loqi↑ comment by loqi · 2009-09-27T19:59:37.704Z · LW(p) · GW(p)
Good point, I should have phrased that differently: "To accept the third horn seems to imply that any relief you feel upon waking in the green room is just 'legacy' human intuition, rather than any rational expectation of having avoided future suffering."
Replies from: gwern↑ comment by gwern · 2009-10-10T03:26:15.135Z · LW(p) · GW(p)
You know, your example is actually making that horn look more attractive: replace the torture of the person with '50000 utilities subtracted from the cosmos', etc., and then it's obvious that the green room is no grounds for relief since the -50000 is still a fact. More narrowly, if you valued other persons equal to yourself, then the green room is definitely no cause for relief.
We could figure out how much you value other people by varying how bad the torture is, and maybe adding a deal where, if the green-room person will flip a fair coin (heads, the punishment is swapped; tails, no change), the torture is lessened by n. If you value the copy equal to yourself, you'll be willing to swap for any difference right down to 1, since if it's tails there's no loss or gain, but if it's heads there's an n profit.
Now, of course even if the copy is identical to yourself, and even if we postulate that somehow the 2 minds haven't diverged (we could do this by making the coinflip deal contingent on being the tortured one - 2 identical rooms, neither of which knows whether they are the tortured one; by making it contingent, there's no risk in not taking the bet), I think essentially no human would take the coinflip for just +1 - they would only take it if there was a major amelioration of the torture. Why? Because pain is so much realer and overriding to us, which is a fact about us and not about agents we can imagine.
(If you're not convinced, replace the punishments with rewards and modify the bet to increase the reward but possibly switch it to the other fellow; and imagine a parallel series of experiments being run with rational agents who don't have pain/greed. After a lot of experiments, who will have more money?)
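To make that "figure out how much you value other people" idea concrete, here is a minimal sketch of one reading of the deal (the reduction applying only when the swap happens, which is what the "tails, no loss or gain" step seems to assume). The torture and speck disutilities and the altruism weight w are illustrative assumptions, not anything from the comment:

```python
# Deal as read here: the green-room (dust-speck) copy may flip a fair coin;
# heads, the torture is swapped onto them but lessened by n; tails, nothing changes.
# 'w' is how much the decider weights the copy's suffering relative to their own
# (w = 1: values the copy equally; w = 0: purely selfish). Numbers are illustrative.

T = 1_000_000   # disutility of 50 years of torture (arbitrary units)
SPECK = 1       # disutility of a dust speck

def min_acceptable_reduction(w, torture=T, speck=SPECK):
    """Smallest n for which accepting the flip has lower expected disutility
    (to the decider, weighting the copy by w) than declining."""
    # decline:           speck + w * torture
    # accept (expected): 0.5*(speck + w*torture) + 0.5*((torture - n) + w*speck)
    # accept < decline  iff  n > (1 - w) * (torture - speck)
    return (1 - w) * (torture - speck)

for w in (1.0, 0.5, 0.0):
    print(f"w = {w}: accept for any n > {min_acceptable_reduction(w):,.0f}")
# w = 1.0 -> any n > 0 (take the flip "right down to 1")
# w = 0.0 -> only if the torture is almost entirely ameliorated
```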
Replies from: loqi↑ comment by loqi · 2009-10-10T20:38:59.171Z · LW(p) · GW(p)
More narrowly, if you valued other persons equal to yourself, then the green room is definitely no cause for relief.
Yes, and this hypothesis can even be weakened a bit, since the other persons involved are nearly identical to you. All it takes is a sufficiently "fuzzy" sense of self.
Now, of course even if the copy is identical to yourself, and even if we postulate that somehow the 2 minds haven't diverged [...] I think essentially no human would take the coinflip for just +1 - they would only take it if there was a major amelioration of the torture.
To clarify what you mean by "haven't diverged"... does that include the offer of the flip? E.g., both receive the offer, but only one of the responses "counts"? Because I can't imagine not taking the flip if I knew I was in such a situation... my anticipation would be cleanly split between both outcomes due to indexical uncertainty. It's a more complicated question once I know which room I'm in.
Replies from: gwern↑ comment by gwern · 2009-10-10T21:17:42.203Z · LW(p) · GW(p)
To clarify what you mean by "haven't diverged"... does that include the offer of the flip? E.g., both receive the offer, but only one of the responses "counts"? Because I can't imagine not taking the flip if I knew I was in such a situation... my anticipation would be cleanly split between both outcomes due to indexical uncertainty. It's a more complicated question once I know which room I'm in.
Well, maybe I wasn't clear. I'm imagining that there are 2 green rooms, say; however, one room has been secretly picked out for the torture and the other gets the dustspeck.
Each person now is made the offer: if you flip this coin, and you are not the torture room, the torture will be reduced by n and the room tortured may be swapped if the coin came up heads; however, if you are the torture room, the coin flip does nothing.
Since the minds are the same, in the same circumstances, with the same offer, we don't need to worry about what happens if the coins fall differently or if one accepts and the other rejects. The logic they should follow is: if I am not the other, then by taking the coin flip I am doing myself a disservice by risking torture, and I gain under no circumstance and so should never take the bet; but if I am the other as well, then I lose under no circumstance so I should always take the bet.
(I wonder if I am just very obtusely reinventing the prisoner's dilemma or Newcomb's paradox here, or if by making the 2 copies identical I've destroyed an important asymmetry. As you say, if you don't know whether "you" have been spared torture, then maybe the bet does nothing interesting.)
Replies from: loqi↑ comment by loqi · 2009-10-11T05:24:12.566Z · LW(p) · GW(p)
The logic they should follow is: if I am not the other, then by taking the coin flip I am doing myself a disservice by risking torture, and I gain under no circumstance and so should never take the bet; but if I am the other as well, then I lose under no circumstance so I should always take the bet.
I'm not sure what "not being the other" means here, really. There may be two underlying physical processes, but they're only giving rise to one stream of experience. From that stream's perspective, its future is split evenly between two possibilities, so accepting the bet strictly dominates. Isn't this just straightforward utility maximization?
The reason the question becomes more complicated if the minds diverge is that the concept of "self" must be examined to see how the agent weights the experiences of an extremely similar process in its utility function. It's sort of a question of which is more defining: past or future. A purely forward-looking agent says "ain't my future" and evaluates the copy's experiences as those of a stranger. A purely backward-looking agent says "shares virtually my entire past" and evaluates the copy's experiences as though they were his own. This all assumes some coherent concept of "selfishness" - clearly a purely altruistic agent would take the flip.
I wonder if I am just very obtusely reinventing the prisoner's dilemma or Newcomb's paradox here, or if by making the 2 copies identical I've destroyed an important asymmetry.
The identical copies scenario is a prisoner's dilemma where you make one decision for both sides, and then get randomly assigned to a side. It's just plain crazy to defect in a degenerate prisoner's dilemma against yourself. I think this does destroy an important asymmetry - in the divergent scenario, the green-room agent knows that only his decision counts.
Speaking for my own values, I'm still thoroughly confused by the divergent scenario. I'd probably be selfish enough not to take the flip for a stranger, but I'd be genuinely unsure of what to do if it was basically "me" in the red room.
↑ comment by orthonormal · 2009-09-28T16:40:06.542Z · LW(p) · GW(p)
The SIA predicts that you will say "no".
comment by Mitchell_Porter · 2009-09-27T05:42:49.075Z · LW(p) · GW(p)
Let's explore this scenario in computational rather than quantum language.
Suppose a computer with infinite working memory, running a virtual world with a billion inhabitants, each of whom has a private computational workspace consisting of an infinite subset of total memory.
The computer is going to run an unusual sort of 'lottery' in which a billion copies of the virtual world are created, and in each one, a different inhabitant gets to be the lottery winner. So already the total population after the lottery is not a billion, it's a billion billion, spread across a billion worlds.
Virtual Yu'el perceives that he could utilize his workspace as described by Eliezer: pause himself, then have a single copy restored from backup if he didn't win the lottery, but have a trillion copies made if he did. So first he wonders whether it's correct to see this as making his victory in the lottery all but certain. Then he notices that if after winning he then does a merge, the certain victory turns back into certain loss, and becomes really worried about the fundamental soundness of his decision procedures and understanding of probability, etc.
Stating the scenario in these concrete terms brings out, for me, aspects that aren't so obvious in the original statement. For example: If everyone else has the same option (the trillionfold copying), Yu'el is no longer favored. Is the trilemma partly due to supposing that only one lottery participant has this radical existential option? Also, it seems important to keep the other worlds where Yu'el loses in sight. By focusing on that one special world, where we go from a billion people, to a trillion people, mostly Yu'els, and then back to a billion, we are not even thinking about the full population elsewhere.
I think a lot of the assumptions going into this thought experiment as originally proposed are simply wrong. But there might be a watered-down version involving copies of decision-making programs on a single big computer, etc, to which I could not object. The question for me is how much of the impression of paradox will remain after the problem has been diluted in this fashion.
Replies from: pengvado↑ comment by pengvado · 2009-09-28T01:41:53.815Z · LW(p) · GW(p)
If everyone else has the same option (the trillionfold copying), Yu'el is no longer favored.
If everyone performs the copying strategy, then after the lottery and the copying but before the merge, the majority of all agents (summed over all the worlds) have won the lottery. So if copy-merge works at all, Yu'el is still favored, and so is everyone else.
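A rough head-count sketch of that claim, using the numbers from Mitchell_Porter's scenario (a billion worlds, a billion inhabitants each, a trillionfold copying by each winner), all of which are assumptions carried over from the setup:

```python
WORLDS = 10**9          # one copy of the virtual world per possible winner
POP = 10**9             # inhabitants per world
COPIES = 10**12         # copies each winner makes of itself

winners = WORLDS * COPIES          # one winner per world, copied a trillionfold
losers = WORLDS * (POP - 1)        # everyone else, uncopied
print(winners / (winners + losers))   # ~0.999: summed over worlds, most agents are winners
```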
comment by solipsist · 2013-11-05T02:51:08.513Z · LW(p) · GW(p)
....you have to throw away the idea that your joint subjective probabilities are the product of your conditional subjective probabilities....If you win the lottery, the subjective probability of having still won the lottery, ten seconds later, is ~1.
If copying increases your measure, merging decreases it. When you notice yourself winning the lottery, you are almost certainly going to cease to exist after ten seconds.
Replies from: ESRogs↑ comment by ESRogs · 2013-12-20T06:04:06.064Z · LW(p) · GW(p)
Which is to pick Nick's suggestion, and something like horn #2, except perhaps without the last part where you "anticipate that once you experience winning the lottery you will experience having still won it ten seconds later."
Scanning through the comments for "quantum suicide," it sounds like a few others agree with you.
comment by DanielLC · 2011-02-17T16:57:02.452Z · LW(p) · GW(p)
I don't think you have the third horn quite right. It's not that you're equally likely to wake up as Britney Spears. It's that the only meaningful idea of "you" is the one that exists right now. Your subjective anticipation of winning the lottery in five minutes should be zero. The you of right now clearly isn't winning the lottery, and clearly isn't around in five minutes.
Also, isn't that more of a quintlemma?
comment by avturchin · 2020-09-13T18:44:39.948Z · LW(p) · GW(p)
Maybe I am too late to comment here and it is already covered in collapsed comments, but it looks like it is possible to do this experiment in real life.
Imagine that instead of copying, I use waking up. If I win, I will be woken up 3 times, informed that I won, and given a drug which makes me forget each awakening. If I lose, I will be woken only once and informed that I lost. Now I have 3-to-1 observer-moments in which I am informed about winning.
Such a setup is exactly the Sleeping Beauty problem, with all its pros and cons, which I will not try to explore here.
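A minimal simulation of this wake-up protocol, just counting observer-moments; the win probability here is an assumption (a 50/50 draw is what makes the 3-to-1 ratio exact), and whether the resulting fraction is the right subjective probability is precisely the Sleeping Beauty dispute just mentioned:

```python
import random

def run(trials: int = 100_000, win_prob: float = 0.5) -> float:
    told_won = told_lost = 0
    for _ in range(trials):
        if random.random() < win_prob:
            told_won += 3     # woken three times, told "won", with amnesia in between
        else:
            told_lost += 1    # woken once, told "lost"
    return told_won / (told_won + told_lost)

print(run())                # ~0.75: three quarters of awakenings are "you won" moments
print(run(win_prob=1e-3))   # with long odds the winning moments no longer dominate
```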
comment by jimrandomh · 2010-03-12T21:51:11.752Z · LW(p) · GW(p)
After thinking about the Anthropic Trilemma for a while, I've come up with an alternative resolution which I think is better than, or at least simpler than, any of the other resolutions. Rather than try to construct a path for consciousness to follow inductively forwards through time, start at the end and go backwards: from the set of times at which an entity I consider to be a copy of me dies, choose one at random weighted by quantum measure, then choose uniformly at random from all paths ending there.
The trick is that this means that while copying your mind increases the probability of ending up in the universe where you're copied, merging back together cancels it out perfectly. This means that you can send information back in time and influence your path through the universe by self-copying, but only log(n) bits of information or influence for n copies that you can't get rid of without probably dying.
comment by PlatypusNinja · 2009-09-28T21:04:05.973Z · LW(p) · GW(p)
I deny that increasing the number of physical copies increases the weight of an experience. If I create N copies of myself, there is still just one of me, plus N other agents running my decision-making algorithms. If I then merge all N copies back into myself, the resulting composite contains the utility of each copy weighted by 1/(N+1).
My feeling about the Boltzmann Brain is: I cheerfully admit that there is some chance that my experience has been produced by a random experience generator. However, in those cases, nothing I do matters anyway. Thus I don't give them any weight in my decision-making algorithm.
This solution still works correctly if the N copies of me have slightly different experiences and then forget them.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-28T03:44:39.342Z · LW(p) · GW(p)
Banned for attempted top-posting of Geddes-standard gibberish.
Replies from: wedrifid↑ comment by wedrifid · 2009-09-28T09:22:33.522Z · LW(p) · GW(p)
Geddes-standard? I don't understand the reference. An unflattering comparison to the biologist?
Edit: Thanks for the explanation, EY. (And my expressed contempt for whoever thinks a query trying to understand the reference must be punished.)
comment by rwallace · 2009-09-27T14:53:50.147Z · LW(p) · GW(p)
I don't know the answer either. My best guess is that the question turns out to involve comparing incommensurable things, but I haven't pinned down which things. (As I remarked previously, I think the best answer for policy purposes is to just optimize for total utility, but that doesn't answer the question about subjective experience.)
But one line of attack that occurs to me is the mysterious nature of the Born probabilities.
Suppose they are not fundamental, suppose the ultimate layer of physics -- maybe superstrings, maybe something else -- generates the various outcomes in its own way, like a computer (digital or analog) forking processes...
and we subjectively experience outcomes according to the Born probabilities because this is the correct answer to the question about subjective experience probability.
Is there a way to test that conjecture? Is there a way to figure out what the consequences would be if it were true, or if it were false?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T16:58:29.185Z · LW(p) · GW(p)
Suppose they are not fundamental, suppose the ultimate layer of physics -- maybe superstrings, maybe something else -- generates the various outcomes in its own way, like a computer (digital or analog) forking processes...
and we subjectively experience outcomes according to the Born probabilities because this is the correct answer to the question about subjective experience probability.
That's indeed what we should all be hoping for. But what possible set of "axioms" for subjective experience - never mind what possible underlying physics - could correspond to the Born probabilities, while solving the computer-processor trilemma as well?
Replies from: rwallace↑ comment by rwallace · 2009-09-27T17:12:54.250Z · LW(p) · GW(p)
Well... following this line of thought, we should expect that the underlying physics is not special, because any physics that satisfies certain generic properties will lead to subjective experience of the Born probabilities.
Suppose we can therefore without loss of generality take the underlying physics to be equivalent to a digital computer programmed in a straightforward way, so that the quantum and computer trilemmas are equivalent.
Is there any set of axioms that will lead (setting aside other intuitions for the moment) to subjective experience of the Born probabilities in the case where we are running on a computer and therefore do know the underlying physics? If there is, that would constitute evidence for the truth of those axioms even if they are otherwise counterintuitive; if we can somehow show that there is not, that would constitute evidence that this line of thought is barking up the wrong tree.
Replies from: Psy-Kosh, Eliezer_Yudkowsky↑ comment by Psy-Kosh · 2009-09-27T23:20:45.132Z · LW(p) · GW(p)
we should expect that the underlying physics is not special, because any physics that satisfies certain generic properties will lead to subjective experience of the Born probabilities.
Elaborate on that bit please? Thanks.
Replies from: rwallace↑ comment by rwallace · 2009-09-28T00:45:07.677Z · LW(p) · GW(p)
Well basically, we start off with the claim (which I can't confirm of my own knowledge, but have no reason to doubt) that the Born rule has certain special properties, as explained in the original post.
We observe that the Born rule seems to be empirically true in our universe.
We would like an explanation as to why our universe exhibits a rule with special properties.
Consider the form this explanation must take. It can't be because the Born rule is encoded into the ultimate laws of physics, because that would only push the mystery back a few steps. It should be a logical conclusion that we would observe the Born rule given any underlying physics (within reason).
Of course there is far too much armchair handwaving here to constitute proof, but I think it at least constitutes an interesting conjecture.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-09-28T01:06:28.922Z · LW(p) · GW(p)
Well, even if it turns out that there're special properties of our physics that are required to produce the Born rule, I'd say that mystery would be a different, well, kind of mystery. Right now it's a bit of "wtf? where is this bizarro subjective nonlinearity etc coming from? and it seems like something 'extra' tacked onto the physics"
If we could reduce that to "these specific physical laws give rise to it", then even though we'd still have "why these laws and not others", it would, in my view, be an improvement over the situation in which we seem to have an additional law that seems almost impossible to even meaningfully phrase without invoking subjective experience.
I do agree though that given the special properties of the rule, any special properties in the underlying physics that are needed to give rise to the rule should be in some sense "non arbitrary"... that is, should look like, well, like a nonarbitrarily selected physical rule.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T17:38:11.257Z · LW(p) · GW(p)
Sounds like a right question to me. Got an answer?
A related problem: If we allow unbounded computations, then, when we try to add up copies, we can end up with different limiting proportions of copies depending on how we approach t -> infinity; and we can even have algorithms for creating copies such that their proportions fail to converge. (1 of A, 3 of B, 9 of A, 27 of B, etc.) So then either it is a metaphysical necessity that reality be finite, because otherwise our laws will fail to give correct answers; or the True Rules must be such as to give definitive answers in such a situation.
Replies from: rwallace↑ comment by rwallace · 2009-09-27T18:05:52.711Z · LW(p) · GW(p)
I'm afraid I'm not familiar enough with the Born probabilities to know how to approach an answer -- oh, I've been able to quote the definition about squared amplitudes since I was a wee lad, but I've never had occasion to actually work with them, so I don't have any intuitive feel about their implications.
As for the problem of infinity, you're right of course, though there are other ways for that to arise too -- for example, if the underlying physics is analog rather than digital. Which suggests it can't be fiated away. I don't know what the solution is, but it reminds me of the way cardinality says all shapes contain the same number of points, so it was necessary to invent measure to justify the ability to do geometry.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-09-27T23:25:56.190Z · LW(p) · GW(p)
Deeply fundamentally analog physics, ie, infinite detail, would just be another form of infinity, wouldn't it? So it's a variation of the same problem of "what happens to all this when there's an infinity involved?"
Replies from: bogus, rwallace↑ comment by bogus · 2009-09-27T23:38:23.476Z · LW(p) · GW(p)
Deeply fundamentally analog physics, ie, infinite detail,
To the best of our understanding, there's no such thing as "infinite detail" in physics. Physical information is limited by the Bekenstein bound.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-09-28T00:03:02.561Z · LW(p) · GW(p)
Sorry, I may have been unclear. I didn't mean to make a claim that physics actually does have this property, but rather I was saying that if physics did have this property, it would just be another instance of an infinity, rather than an entirely novel source for the problem mentioned.
(Also, I'm unclear on the BB, if it takes into account possible future tech that may be able to manipulate the geometry of spacetime to some extent. ie, if we can do GR hacking, would that affect the bound or are the limits of that effectively already precomputed into that?)
comment by Douglas_Knight · 2009-09-27T03:28:18.947Z · LW(p) · GW(p)
The third option seems awfully close to the second. In the second, you anticipate winning the lottery for a few seconds, and then going back to not. In the third, the universe anticipates winning the lottery for a few seconds, and then going back to Britney.
comment by martinkunev · 2023-10-24T12:21:39.823Z · LW(p) · GW(p)
I'm just wondering what would Britney Spears say when she reads this.
comment by AgentME · 2019-05-18T23:45:45.101Z · LW(p) · GW(p)
Now you could just bite this bullet. You could say, "Sounds to me like it should work fine." You could say, "There's no reason why you shouldn't be able to exert anthropic psychic powers." You could say, "I have no problem with the idea that no one else could see you exerting your anthropic psychic powers, and I have no problem with the idea that different people can send different portions of their subjective futures into different realities."
I think there are other problems that may prevent the "anthropic psychic powers" example from working (maybe copying doesn't duplicate measure, but splits it gradually as the copies become increasingly separated in information content or in location; I think my comment here [LW(p) · GW(p)] might provide a way to think about that), but "the idea that different people can send different portions of their subjective futures into different realities" is not one of the problems, as I believe it's implied to be possible by the "two Schrodinger's cats" thought experiment (https://phys.org/news/2019-11-quantum-physics-reality-doesnt.html, https://arxiv.org/abs/1604.07422, https://web.archive.org/web/20200215011940/https://curiosity.com/topics/adding-a-second-cat-to-schrodingers-cat-experiment-might-break-quantum-physics-curiosity/, and the Frauchiger-Renner thought experiment mentioned in https://www.quantamagazine.org/frauchiger-renner-paradox-clarifies-where-our-views-of-reality-go-wrong-20181203/). (I'm not completely confident in my understanding of this, so please let me know if I'm understanding that thought experiment incorrectly. My understanding of the experiment is that the different participants should rightly expect different things to happen, and I think the easiest explanation is that the participants have their measure going in different proportions to different outcomes.)
comment by jschulter · 2011-03-02T23:02:55.292Z · LW(p) · GW(p)
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery - be reasonably sure that when next you open your eyes, you will see a computer screen flashing "You won!"
As I see it, the odds of being any one of those trillion "me"s in 5 seconds are 10^21 to one (one trillion times one billion). Since there are a trillion ways for me to be one of those, the total probability of experiencing winning is still a billion to one. To be more formal:
P("experiencing winning") = sum_n P("winning" | "being me #n") P("being me #n") = sum_n P("winning" and "being me #n") = 10^12 * 10^-21 = 10^-9, since "being me #n" partitions the space.
Overall this means I:
- anticipate not winning at 5 sec.
- anticipate not winning at 15 sec.
- don't have super-psychic-anthropic powers
- don't see why anyone has an issue with this
Checking consistency just in case:
p("experience win after 15s") = p("experience win after 15s"|"experience win after >5s")p("experience win after 5s") + p("experience win after 15s"|"experience not-win >after 5s")p("experience not-win after 5s").
p("experience win after 15s") = (~1)*(10^-9) + (~0)(1-10^-9)=~10^-9=~p("experience win after 5s")
Additionally, I should note that the total amount of "people who are me who experience winning" will be 1 trillion at 5 sec. and exactly 1 at 15 sec. This is because those trillion "me"s must all have identical experiences for merging to work, meaning the merged copy only has one set of consistent memories of having won the lottery. I don't see this as a problem, honestly.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2011-12-12T21:21:14.889Z · LW(p) · GW(p)
I have nearly the same viewpoint and was surprised to find what seems to me to be the obvious solution so far down this thread.
One little nitpick:
Additionally, I should note that the total amount of "people who are me who experience winning" will be 1 trillion at 5 sec. and exactly 1 at 15 sec.
From your analysis, I think you mean you expect there is a 1 in a billion chance there will be 1 trillion "people who are me who experience winning" at 5 sec.
comment by Baughn · 2009-11-08T10:30:00.018Z · LW(p) · GW(p)
I'm coming in a bit late, and not reading the rest of the posts, but I felt I had to comment on the third horn of the trilemma, as it's an option I've been giving a lot of thought.
I managed to independently invent it (with roughly the same reasoning) back in high school, though I haven't managed to convince myself of it or, for that matter, to explain it to anyone else. Your explanation is better, and I'll be borrowing it.
At any rate. One of your objections seems to be "...to assert that you can hurl yourself off a cliff without fear, because whoever hits the ground will be another person not particularly connected to you by any such ridiculous thing as a "thread of subjective experience".
For that to make sense would require that, while you can anticipate subjective experiences from just about anywhere, you would only anticipate experiencing a limited subset of them; 1/N of the total, where N represents.. what? The total number of humans, and why? Of souls?
Things get simpler if you set N to 1. Then your anticipation would be to experience Eliezer+5, Britney+5 and Cliffdiver+5, as well as every other subjective experience available for experiencing; sidestepping the cliffdiver problem, and more importantly removing any need to explain the value of N.
There's still the alternate option of it being infinity. I feel relatively certain that this is not the case, but I'm not sure this isn't simply wishful thinking. Help?
comment by MichaelHoward · 2009-09-27T12:11:40.424Z · LW(p) · GW(p)
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch,
Maybe branch quantum operations don't make new copies, but represent already existing but identical copies "becoming" no longer identical?
In the computer program analogy: instead of having one program at time t and n slightly different versions at time t+1, start out with n copies already existing (but identical) at time t, and have each one change in the branching. If you expect a t+2, you need to start with at least n^2 copies.
(That may mean a lot more copies of everything than would otherwise be expected even under many worlds, but even if it's enough to give this diabolical monster bed-wetting nightmares, by the Occam's razor that works for predicting physical laws, that's absolutely fine).
Come to think of it... if this interpretation isn't true...
or for that matter, even if this is true but it isn't true that someone who runs redundantly on three processors gets three times as much weight as someone who runs on one processor...
then wouldn't we be vastly likely to be experiencing the last instant of experienceable existence in the Universe, because that's where the vast majority of distinct observers would be? Omega-point simulation hypothesis anyone? :-)
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-09-27T14:36:30.176Z · LW(p) · GW(p)
But where does all the quantum interference stuff come from then?
Replies from: MichaelHoward↑ comment by MichaelHoward · 2009-09-27T16:37:36.498Z · LW(p) · GW(p)
I'm not trying to resolve any quantum interference mysteries with the above, merely anthropic ones. I have absolutely no idea where the born probabilities come from.
Replies from: Psy-Kosh↑ comment by Psy-Kosh · 2009-09-27T16:39:28.312Z · LW(p) · GW(p)
Sorry, I was unclear. I meant "if what you say is the correct explanation, then near as I can tell, there shouldn't be anything resembling quantum interference. In your model, where is there room for things to 'cancel out' if copies just keep multiplying like that?"
Or did I misunderstand what you were saying?
Replies from: MichaelHoward, MichaelHoward↑ comment by MichaelHoward · 2009-09-27T21:06:35.909Z · LW(p) · GW(p)
In your model, where is there room for things to 'cancel out' if copies just keep multiplying like that?
Ah, sorry if I wasn't clear. The copies wouldn't multiply. In the computer program analogy, you'd have the same number of programs at every time step. So instead of doing this...
Step1: "Program".
Step2: "Program0", "Program1".
Step3: "Program00", "Program01", "Program10", "Program11".
You do this...
Step1: "Program", "Program", "Program", "Program", ...
Step2: "Program0", "Program1", "Program0", "Program1", ...
Step3: "Program00", "Program01", "Program10", "Program11", ...
For the sake of simplicity this is the same algorithm, but with part of the list not used when working out the next step. If our universe did this, surely at any point in time it would produce exactly the same experimental results as if it didn't.
If we're not experiencing the last instant of experienceable existence, I think that may imply that the second model is closer to the truth, and also that someone who runs redundantly on three processors gets three times as much weight as someone who runs on one processor, for the reasons above.
Replies from: Psy-Kosh
comment by timtyler · 2009-09-27T08:41:49.699Z · LW(p) · GW(p)
The "second horn" seems to be phrased incorrectly. It says:
"you can coherently anticipate winning the lottery after five seconds, anticipate the experience of having lost the lottery after fifteen seconds, and anticipate that once you experience winning the lottery you will experience having still won it ten seconds later."
That's not really right - the fate of most of those agents that experience a win of the lottery is to be snuffed out of existence. They don't actually win the lottery - and they don't experience having won it eleven seconds later either. The chances of the lottery staying won after it has been experienced as being won are slender.
Either that "horn" needs rephrasing - or another "horn" needs to be created with the correct answer on it.
Replies from: Johnicholas↑ comment by Johnicholas · 2009-09-27T13:38:09.620Z · LW(p) · GW(p)
If I understand the proposed merging procedure correctly, the procedure treats the trillion observers who experience a win of the lottery symmetrically. None of them are "snuffed" any more than any other. For each of the observers, there is a continuous space-time-causality "worm" connecting to the future self who spends the money.
This space-time-causality worm is supposed to be as analogous as possible to the one that connects any ordinary moment in your life to your future self. The difference is that this one merges (symmetrically) with almost a trillion others, all identical.
Replies from: timtyler, Eliezer_Yudkowsky↑ comment by timtyler · 2009-09-27T18:01:12.748Z · LW(p) · GW(p)
I see, I think. I can't help wondering what the merge procedure does with any flipped bits in the diff, though. Anyway, horn 2 now seems OK - I think it describes the situation.
Replies from: timtyler↑ comment by timtyler · 2009-11-25T22:32:59.402Z · LW(p) · GW(p)
Rereading the comments on this thread, the problem is more subtle than I had thought - and I had better retract the above comment. I am inclined towards the idea that copying doesn't really alter the pattern - but that kind of anthropic reasoning seems challenging to properly formalise under the given circumstances.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T16:56:10.681Z · LW(p) · GW(p)
Yup! If you can't do the merge without killing people, then the trilemma is dissolved.
comment by Nominull · 2009-09-27T02:47:04.498Z · LW(p) · GW(p)
I strive for altruism, but I'm not sure I can believe that subjective selfishness - caring about your own future experiences - is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
It strikes me that this is a problem of comparing things at different levels of meta. You are talking about your "motivations" as if they were things that you could freely choose based on your a priori determinations about how one well and truly ought to act. You sound, forgive me for saying this, I do respect you deeply, almost Kantian here.
The underlying basis for your ethical system, or even further, your motivational system, does not lie in this abstract observer that is Eliezer today and Britney Spears tomorrow. I think I'm ripping this off of some videogame I've played, but think of the brain as a container, and this "observerstuff" (dare I call it a soul?) as water poured into the container. Different containers have different shapes and shape the water differently, even though it is the same water. But the key point is, motivations are a property of the container, not of the water, and their referents are containers, not water. Eliezer-shaped water cares about what happens to the Eliezer-container, not what happens to the water in it. That's just not how the container is built.
comment by casebash · 2016-04-16T06:19:38.920Z · LW(p) · GW(p)
I will bite the first horn of the trilemma. I will argue that the increase in subjective probability results from losing information, and that it is no different from other situations where you lose information in such a way as to make subjective probabilities seem higher. For example, if you watch the lotto draw, but then forget every number except those that match your ticket, your subjective probability that you won will be much higher than originally.
Let's imagine that if you win the lottery that a billion copies of you will be created.
t=0: The lottery is drawn.
t=1: If you won the lottery, then a billion clones are created. The original remembers that they are the original as they see the clones being created, but if clones were made, they don't know they are clones and don't know that the original knows that they are the original, so they can't figure it out that way.
t=2: You have a bad memory, and so you forget whether you are an original or a clone.
t=3: If any clones exist, they are all killed off.
t=4: Everyone is informed about whether or not they won the lottery.
Let's suppose that you know you are the original and that you are at t=1. Your chances of winning the lottery are still 1 in a million as the creation of clones does not affect your probability of waking up to a win at t=4 if you know that you are not a clone.
Now let's consider the probability at t=2. Your subjective odds of winning the lottery have risen massively, since you most probably are a copy. Even though there is only a one in a million chance that copies will be made, the fact that a billion copies will be made more than cancels this out.
What we have identified is that it is the information loss that is the key feature. Of course you can increase your subjective probabilities by erasing any information that is contrary. What is interesting about cloning is that if we are able to create clones with the exact same information, we are able to effectively remove knowledge without touching your brain. That is, if you know that you are not a clone, after we have cloned you exactly, then you no longer know you are not a clone, unless someone tells you or you see it happen.
Now at t=3 we kill off/merge any remaining clones. If you are still alive, you've gained information when you learned that you weren't killed off. In fact, you've been retaught the same information you've forgotten.
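A minimal Bayes check of that t=2 step, using the comment's numbers (a one-in-a-million lottery and a billion clones on a win) and an equal-weight-per-observer reading, both of which are assumptions:

```python
from fractions import Fraction

p_win = Fraction(1, 10**6)
observers_if_win = 10**9 + 1      # the original plus a billion clones
observers_if_lose = 1

# P(win | I am some awake observer at t=2), weighting each observer equally:
num = p_win * observers_if_win
den = num + (1 - p_win) * observers_if_lose
print(float(num / den))   # ~0.999: at t=2 you are almost certainly in the winning branch
```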
Replies from: Chris_Leong↑ comment by Chris_Leong · 2018-07-23T05:49:59.290Z · LW(p) · GW(p)
So, I (as casebash) originally wanted to bite the first horn of the dilemma, then I wanted to bite different horns depending on assumptions.
Suppose we embrace physical continuity. Then all we've done by creating copies is manage the news. Your original is just as likely to be in any state, but you now no longer know if you are the original or a clone. But even putting this aside, the ability to merge copies seems to contradict the idea of additional copies being given extra weight. If we embrace this idea, it seems more accurate to say that we can only add or delete copies.
On the other hand, suppose we reject it. What would psychological continuity mean? Imagine a physical agent A and a clone C which undergo identical experiences over 10 seconds. Now we use A and C as our psychologically continuous agents, but there's no reason why we couldn't construct a psychologically continuous agent that is A during the even-numbered seconds and C during the odd-numbered seconds. In fact, there are a ridiculous number of overlapping, psychologically continuous agents, all of which ought to be equally valid. This seems absurd, so I'd go so far as to suggest that if we reject physical continuity, we ought to reject the notion of continuity altogether. This would lead us to take the third horn.
comment by tmx · 2013-02-07T03:32:18.267Z · LW(p) · GW(p)
1) The probability of ending up in the set of winners is 1/billion.
2) The probability of being (a specific) one of the trillion is 1/(b * t).
The probability of being a 2) given you are awake is
p(2 | awake) = P(awake | 2) * p(2) / p(awake) = (1 * 1E-21) / 1 = very small
comment by A1987dM (army1987) · 2012-08-12T15:21:00.344Z · LW(p) · GW(p)
Buy a ticket. Suspend your computer program just before the lottery drawing - which should of course be a quantum lottery, so that every ticket wins somewhere. Program your computational environment to, if you win, make a trillion copies of yourself, and wake them up for ten seconds, long enough to experience winning the lottery. Then suspend the programs, merge them again, and start the result. If you don't win the lottery, then just wake up automatically.
How would you do that???
Replies from: wedrifid↑ comment by wedrifid · 2012-08-13T01:27:31.412Z · LW(p) · GW(p)
How would you do that???
Are you asking for a solution to the engineering problem of how to convert yourself into an Em? I can't help there. Once you have that, this part with the lottery seems simple. The 'merge them again' would be tricky both on the philosophy and the engineering side (perhaps harder than converting to an Em in the first place.)
comment by jacob_cannell · 2011-12-12T21:36:00.977Z · LW(p) · GW(p)
The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your "measure", your "amount of experience", temporarily multiplied by a trillion.
This seems perhaps too obvious, but how can branches multiply probability by anything greater than 1? Conditional branches follow the rules of conjunctive probability . . .
Probability in regards to the future is simply a matter of counting branches. The subset of branches in which you win is always only one in a billion of all branches - and any further events in a branch only create further sub-branches, so the probability of anything happening in that sub-branch can never be greater than 10^-9. The exact number of copies in this context is irrelevant - it could be infinite and it wouldn't matter.
Whether we accept identification with only one copy of ourself as in jschulter's result or we consider our 'self' to be all copies, the results still work out to 1 billion to 1 against winning.
Another way of looking at the matter: we should be wary of any non-objective decision process. If we substitute 'you' for 'person X' in the example, we wouldn't worry that person X splitting themselves into a trillion sub-copies only if they win the lottery would somehow increase their actual likelihood of winning.
comment by paulfchristiano · 2010-12-27T07:33:08.608Z · LW(p) · GW(p)
The flaw is that anticipation should not be treated as a brute thing. Anticipation should be a tool used in the service of your decision theory. Once you bring in some particular decision theory and utility function, the question is dissolved (if you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom's notion of "anticipate." So if I had to go with an answer, that would be mine.)
Two people disagreeing about what they should anticipate is like two people arguing about whether a tree falling in an empty forest makes a sound. They disagree about what they anticipate, yes, but they behave identically.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-12-27T10:58:53.019Z · LW(p) · GW(p)
Anticipation should be a tool used in the service of your decision theory. Once you bring in some particular decision theory and utility function, the question is dissolved (if you use TDT and your utility function is just the total quality of simulated observer moments, then you can reverse engineer exactly Nick Bostrom's notion of "anticipate." So if I had to go with an answer, that would be mine.)
Do that. It isn't as straightforward as it perhaps looks; I still have no idea how to approach the problem of anticipation. (Also, "total quality of simulated observer moments"?)
Replies from: paulfchristiano↑ comment by paulfchristiano · 2010-12-27T19:57:25.211Z · LW(p) · GW(p)
Do that
Do you mean try to reverse engineer a notion of anticipation, or try to dissolve the question?
For the first, I mean to define anticipation in terms of what wagers you would make. In this case, how you treat a wager depends on whether having a simulation win the wager causes something good to happen to your utility function in one simulated copy, or in a million of them. Is that fair enough? I don't see why we care about anticipation at all, except as it bears on our decision making.
I don't really understand how the second question is difficult. Whatever strategy you choose, you can predict exactly what will happen. So as long as you can compare the outcomes, you know what you should do. If you care about the number of simulated paperclips that are ever created, then you should take an even paperclip bet on whether you won the lottery if the paperclips would be created before the extra simulations are destroyed. Otherwise, you shouldn't.
(Also, "total quality of simulated observer moments"?)
How do you describe a utility function that cares twice as much what happens to a consciousness which is being simulated twice?
comment by red75 · 2010-06-25T11:23:07.656Z · LW(p) · GW(p)
I'm curious why no one has mentioned the Solomonoff prior here. Anticipation of subjective experience can be expressed as: what is the probability of experiencing X, given my prior experiences? Thus we "swap" the ontological status of objective reality and subjective experience, and then we can use the Solomonoff prior to infer probabilities.
When one wakes up as a copy, one experiences instantaneous arbitrary space-time travel, so the Solomonoff prior for this experience should be lower than that of the wake-up-as-original one (if the original can wake up at all).
Given that approach, it seems that our subjective experience will tend to be as "normal" as is allowed by the simplest computable laws of physics.
Replies from: red75↑ comment by red75 · 2010-06-25T17:45:55.693Z · LW(p) · GW(p)
It seems I've given too little information to make it worth thinking about. Here's a more detailed explanation.
I'll abbreviate thread of subjective experience as TSE.
If I make 10^6 copies of myself, then all 10^6+1 continuations of TSE are indistinguishable to an external observer. Thus all these continuations are invariant under change of TSE, and it seems that we can assign equal probability to them. Yes, we can, but:
If TSE is not ontologically fundamental, then it is not bound by spacetime, the laws of physics, the universe, the Everett multiverse, etc. There will be no logical contradiction if you find yourself the next instant as a Boltzmann brain, or in one of the infinitely many universes of the level 4 multiverse, or outside your own lightcone. Thus:
Every finite set of continuations of TSE has zero probability. And finally:
We have no option but the Solomonoff prior to infer what we will experience next.
comment by SforSingularity · 2009-10-03T00:59:10.047Z · LW(p) · GW(p)
a truly remarkable observation: quantum measure seems to behave in a way that would avoid this trilemma completely
Which is why Roger Penrose is so keen to show that consciousness is a quantum phenomenon.
comment by orthonormal · 2009-09-28T19:14:41.670Z · LW(p) · GW(p)
We have a strong subjective sense of personal experience which is optimized for passing on genes, and which thus coincides with the Born probabilities. In addition, it seems biased toward "only one of me" thinking (evidence: most people's intuitive rejection of MWI as absurd even before hearing any of the physics, and most people's intuitive sense that if duplicated, 'they' will be the original and 'someone else' will be the copy). The plausible ev-psych explanation for this, ISTM, is that you won't ever encounter another version of your actual self, and that it's very bad to be tricked into really loving your neighbor as yourself. Thus the rigid sense of continuity of personal identity.
Thus, when complications like quantum suicide or splitting or merging of minds are introduced, the basic intuitions become extremely muddled. In particular, Nick Bostrom's solution prompts the objection of absurdity, even though it is made up of ingredients that seem reasonable (to rationalist materialists, anyhow) taken separately. That makes me suspicious that Bostrom might in fact be right, and that our objections stem more from the ev-psych than anything else.
The following thought experiment pumps my intuition: what concept of subjective probability might Ebborian-like creatures evolve? Of course, they'd split more frequently when resources were plentiful. Imagine that a quantum random event X would double an Ebborian's resources if it happened, and that X happened in half the branches of the wavefunction. If it's assumed that the Ebborian would split if and only if X happened, what subjective probability would ve evolve to assign to X? Well, since what really counts for the evolutionary process is the total 'population of descendants' averaged across all branches, ve should in fact weight more heavily the futures in which ve splits: ve should evolve to assign X probability 2/3, even though the splitting happens after observing X. And there's really no inconsistency with that: the single copy in half the branches feels a bit unlucky while the two copies in the other branches rejoice that the odds were in their favor. Repeat this a great many times, and most of the descendants will feel pretty well calibrated in their probabilities.
So I think that our sense of subjective probability has to be an evolved aid to decision-making, rather than an inherent aspect of conscious experience; and I have to go with Nick Bostrom's probabilities, as strange as they sound. (Nods to Tyrrell and Wei Dai, whose comments greatly helped my thought process.)
ETA: I just realized an ingredient of "what it would feel like": Ebborians would evolve to give the same probabilities we would for events that don't affect their splitting times, but all events that would make them richer or poorer would have subjective probability skewed in this fashion. Basically, Ebborians evolve so that each one just feels that being a consistently lucky frood is the natural state of things, and without that necessarily giving them the ego it would give us.
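To make the descendant-counting explicit, here is a minimal sketch of the story above (a quantum event X of measure 1/2, a split iff X happens), with descendants weighted by the measure of their branch:

```python
measure_X, measure_not_X = 0.5, 0.5
descendants_X = 2 * measure_X          # two copies in the X branches
descendants_not_X = 1 * measure_not_X  # one copy elsewhere

print(descendants_X / (descendants_X + descendants_not_X))   # 2/3
# Of the measure-weighted descendants, two thirds have just observed X, which is
# where the evolved 2/3 "subjective probability" in the comment comes from.
```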
Replies from: Johnicholas↑ comment by Johnicholas · 2009-09-28T20:26:15.735Z · LW(p) · GW(p)
Suppose that the Ebborians gamble. What odds would it give for event X?
Suppose ve gives odds of 2:1 (probability of 2/3). A bookie takes the bet, and in half of the branches, collects 2 (from the two "descendants"), and in half of the branches, pays out 1, for an average profit of 0.5.
I think your argument leads to the Ebborians being vulnerable to Dutch books.
Replies from: orthonormal↑ comment by orthonormal · 2009-09-28T20:45:14.756Z · LW(p) · GW(p)
Er, your math is the wrong way around, but your point at first seems right: the Ebborian sees 2/3 odds, so ve is willing to pay the bookie 2 if X doesn't happen, and get paid 1 (split between copies, as in correlated decision theory) if X does happen.
However, if instead the Ebborian insists on paying 2 for X not happening, but on each copy receiving 1 if X happens, the Dutch book goes away. Are there any inconsistencies that could arise from this sort of policy? Perhaps the (thus developed) correlated decision theory only works for the human form of subjective probability? Or more probably, I'm missing something.
Replies from: Johnicholas↑ comment by Johnicholas · 2009-09-29T20:38:43.473Z · LW(p) · GW(p)
From the bookie's perspective, the "each copy" deal corresponds to 1:1 odds, right?
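A quick branch-averaged check of the bookie's take under the two payout schemes discussed in this thread (a sketch with illustrative numbers, assuming X happens in exactly half the branches and the Ebborian has split into two copies in those branches):

```python
# Branch-averaged bookie profit. Assumptions: X happens in exactly half of the
# branches, and in those branches the Ebborian has split into two copies.

def bookie_profit(collect_if_not_x, payout_if_x_total):
    """Average profit per branch: the bookie collects when X fails, pays out when X happens."""
    return 0.5 * collect_if_not_x - 0.5 * payout_if_x_total

# Ebborian pays 2 on not-X; the copies share a total payout of 1 on X.
print(bookie_profit(2, 1))   # 0.5 -- the bookie profits on average
# Ebborian pays 2 on not-X; each of the two copies receives 1 on X (2 in total).
print(bookie_profit(2, 2))   # 0.0 -- fair from the bookie's view, i.e. 1:1 odds
```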
comment by Stuart_Armstrong · 2009-09-28T15:34:45.814Z · LW(p) · GW(p)
The third horn is a fake, capable of being defined in or out of existence at will. If I am indifferent to my state ten seconds from now, it is true; if my current utility function includes a term for my state ten seconds from now, it is false.
The 'thread of subjective experience' is not the issue; whether I throw myself off the cliff will depend on whether I am currently indifferent as to whether my future self will die.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2009-09-28T18:45:46.404Z · LW(p) · GW(p)
I don't follow you. You write
The third horn is a fake, capable of being defined in or out of existence at will. If I am indifferent to my state ten seconds from now, it is true; if my current utility function includes a term for my state ten seconds from now, it is false.
What do you mean by calling the horn "fake"? What is the "it" that is true or false?
comment by snarles · 2009-09-28T02:24:34.829Z · LW(p) · GW(p)
"I still have trouble biting that bullet for some reason.... I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."
Replace "jump off a cliff" with "heroin overdose." Sure you could, and many do. Not caring about the future is actually very common in humans, and it's harder for smart people to understand this attitude because many of us have very good impulse control. But I still find it strange that you seem to want continuity of self to be more than an illusion. Many more paradoxes occur when you insist on continuity of self than when you don't; Occam's Razor would tell you to throw away the concept.
comment by saturn · 2009-09-27T23:36:54.327Z · LW(p) · GW(p)
Is there a contradiction in supposing that the total subjective weight increases as unconnected threads of subjective experience come into existence, but copies branching off of an existing thread of subjective experience divide up the weight of the parent thread?
comment by steven0461 · 2009-09-27T19:51:26.748Z · LW(p) · GW(p)
For what it's worth, here is the latest attempt by philosophers of physics to prove the Born rule from decision theory.
Replies from: rwallace↑ comment by rwallace · 2009-09-28T00:38:20.013Z · LW(p) · GW(p)
Interesting paper, but from skimming it without grokking all the mathematics, it looks to me like it doesn't quite prove the Born rule from decision theory, only proves that given the empirical existence of the Born rule in our universe, a rational agent should abide by it. Am I understanding the paper correctly?
Replies from: steven0461, Johnicholas↑ comment by steven0461 · 2009-09-28T15:32:50.632Z · LW(p) · GW(p)
My understanding is the proof doesn't use empirical frequencies -- though if we observed different frequencies, we'd have to start doubting either QM or the proof. The question is just whether the proof's assumptions are true rationality constraints or "wouldn't it be convenient if" constraints.
Everett and Evidence is another highly relevant paper.
↑ comment by Johnicholas · 2009-09-28T01:53:35.967Z · LW(p) · GW(p)
I think the paper starts from the empirical existence of Born rule "weights" and attempts to explain in what sense they should be treated, decision-theoretically, as classical probabilities (since in the MWI sense, everything that might happen does happen) - but I admit I didn't grok the mathematics either.
comment by steven0461 · 2009-09-27T14:45:24.890Z · LW(p) · GW(p)
Sentences in this comment asserted with only 95% probability of making sense, read on at own peril.
There's a mainstream program to derive the Born probabilities from physics and decision theory which David Wallace, especially, has done a lot of work on. If I remember correctly, he distinguishes two viewpoints:
- "Subjective Uncertainty", which says you're a stage of some 4D space-time worm and you're indexically uncertain which worm, because many of them have stages that are exactly alike
- "Objective Determinism", which says you'll continue as all your future continuations and you should just maximize utility over them and see if doing so involves something that, even though it doesn't express uncertainty, behaves like a probability
Wallace's opinion is that both SU and OD are correct ways to think. (Given his assumptions, both allow a proof of the Born probabilities, but it's easier with SU.) That strikes me as being parallel to Eliezer's claims here. If you think SU is a wrong way to think and OD is a correct way to think, that strikes me as being parallel to everyone else's claims here.
There's an argument that SU and OD have different implications, e.g., OD allows you to care about inter-branch diversity and SU doesn't. If something like OD gives you a way to reductionize "anticipation", but you still want a different kind of "anticipation" that accords more with your intuitions, then you may be in trouble if their roles ever overlap. It seems to me one has more to do with decision-making and the other has more to do with conscious experience; those are completely different things and it's important to keep them separate.
Anyway, since we can be reductionist about these 4D worms and we're already being reductionist about decision theory, it shouldn't be hard to figure out exactly how they relate.
comment by MichaelHoward · 2009-09-27T12:08:28.257Z · LW(p) · GW(p)
In quantum copying and merging, every "branch" operation preserves the total measure of the original branch,
Maybe branch quantum operations don't make new copies, but represent already existing but identical copies "becoming" no longer identical?
In the computer program analogy: instead of having one program at time t and n slightly different versions at time t+1, start out with n copies already existing (but identical) at time t, and have each one change in the branching. If you expect a t+2, you need to start with at least n^2 copies.
(That may mean a lot more copies of everything than would otherwise be expected even under many worlds, but even if it's enough to give this diabolical monster bed-wetting nightmares, by the Occam's razor that works for predicting physical laws, that's absolutely fine).
Come to think of it... if this interpretation isn't true, or for that matter, even if this is true but it isn't true that identical copies get equal experience measure, then wouldn't we be vastly likely to be experiencing the last instant of experienceable existence in the Universe, because that's where the vast majority of distinct observers would be? Omega-point simulation hypothesis anyone? :-)
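To make the copy-counting explicit, here's a toy calculation (purely illustrative; the function name is invented): with branching factor n at each step and no new copies ever created, covering k future branching steps requires n^k identical copies at the start, which is where the "at least n^2 for t+2" above comes from.

```python
# Toy bookkeeping for the "pre-existing identical copies" picture sketched above.
# Assumption: each branching step splits the population of identical copies into
# n distinguishable groups, and no new copies are ever created.

def copies_needed(branching_factor: int, steps: int) -> int:
    """Identical copies required at time t to cover `steps` future branchings."""
    return branching_factor ** steps

print(copies_needed(branching_factor=3, steps=1))   # 3: covers n versions at t+1
print(copies_needed(branching_factor=3, steps=2))   # 9: the n^2 needed to also cover t+2
```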
comment by Stuart_Armstrong · 2009-09-27T09:32:05.807Z · LW(p) · GW(p)
Just an aside - this is obviously something that Eliezer - someone highly intelligent and thoughtful - has thought deeply about, and has had difficulty answering.
Yet most of the answers - including my own - seem to be of the "this is the obvious solution to the dilemma" sort.
comment by Stuart_Armstrong · 2009-09-27T09:19:12.631Z · LW(p) · GW(p)
Since I have a theory of Correlated decision making, let's use it! :-)
Let's look longer at the Nick Bostrom solution. How much contribution is there towards "feeling I will have won the lottery ten seconds from now" from "feeling I have currently won the lottery"? By the rules of this set-up, each of the happy copies contributes one trillionth towards that result.
(quick and dirty argument to convince you of that: replace the current rules by one saying "we will take the average feeling of victory across the trillion copies"; since all the feelings are exactly correlated, this rule gives the same ultimate result, while making it clear that each copy contributes one trillionth of the final result).
Thus Nick's position is, I believe, correct.
As for dealing with the fourth horn, I've already written on how to do that: here you have partially correlated experiences contributing to future feelings of victory, which you should split into correlated and anti-correlated parts. Since the divergence is low, the anti-correlated parts are of low probability, and the solution is approximately the same as before.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-09-27T17:01:13.514Z · LW(p) · GW(p)
So... what does it feel like to be merged into a trillion exact copies of yourself?
Answer: it feels like nothing, because you couldn't detect the event happening.
So in terms of what I expect to see happen next... if I've seen myself win the lottery, then in 10 seconds, I expect to still see evidence that I won the lottery. Even if, for some reason, I care about it less, that is still what I see... no?
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2009-09-27T17:12:48.114Z · LW(p) · GW(p)
See my other reformulations; here there is no "feeling of victory", but instead you have scenarios where all but one of the trillion copies are killed and only one is spared. Then your expectation - if you didn't know what the other trillion had been told or shown - is that there is a one-in-a-trillion chance that the you in 10 seconds will still remember evidence that he has won the lottery.
You can only say the you in 10 seconds will remember winning the lottery with certainty because you know that all the other copies also remember winning the lottery. Their contributions bump it up to unity.
comment by Psy-Kosh · 2009-09-27T05:41:47.555Z · LW(p) · GW(p)
I've thought about this before and I think I'd have to take the second horn. Argument: assuming we can ignore quantum effects for the moment, imagine setting up a computer running one instance of some mind. There are no other instances anywhere. Shut the machine down. Assuming no dust-theory-style immortality (which, if there were such a thing, would seem to violate Born statistics, and given that we actually observe the validity of Born statistics...), the total weight/measure/reality-fluid assigned to that mind goes from 1 to 0, so it looks reasonable that second-horn-type stuff is allowed to happen.
I'd say personal continuity is real, but is made up of stuff like memory, causality maybe, etc. I suspect those things explain it rather than explain it away.
However, given that in this instance it seems QM actually makes things behave in a saner way, there's one other option I think we ought to consider, though I'm hesitant to bring it up:
Horn 5b: consciousness may be inherently quantum. This is not, on its own, an explanation of consciousness, but maybe we ought to consider the possibility that the only types of physical processes that are "allowed" to be conscious are in some way tied to inherently quantum ones, and the only type of mind branching that's allowed is via quantum branching.
Given that, as you point out, the only form of branching that we actually experience (i.e., quantum branching) is the one way that seems to (for some reason) automatically make it work out without producing confusing weirdness, well...
(EDIT: main reason I'm bringing up this possibility is that it's an option that would actually help recover "it all adds up to normality")
(EDIT2: Ugh, I'm stupid: "normality" except for more or less allowing stuff that's pretty close to being p-zombies... so this doesn't actually improve the situation all that much as far as "normality" goes, after all.)
Other than that, maybe when we explicitly solve the Born stats fully satisfactorily, when we see how nature is pulling the trick off, then we'll hopefully automatically see the consequences of this situation.
comment by kim0 · 2009-09-28T07:04:36.478Z · LW(p) · GW(p)
I have an Othello/Reversi playing program.
I tried making it better by applying probabilistic statistics to the game tree, quite like anthropic reasoning. It then became quite bad at playing.
Ordinary minimax with alpha-beta pruning did very well.
Game algorithms that ignore density of states in the game tree, and only focus on minimaxing, do much better. This is a close analogy to the experience trees of Eliezer, and therefore a hint that anthropic reasoning here has some kind of error.
Kim0
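For readers who don't know the algorithm kim0 is referring to, here is a generic minimax-with-alpha-beta-pruning sketch (not kim0's actual program; `children` and `evaluate` are hypothetical game-specific hooks supplied by the caller):

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Return the minimax value of `state`, pruning lines that cannot affect the result."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:      # the minimizing player would never allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                         children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:      # the maximizing player would never allow this line
                break
        return value
```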
Replies from: rwallace↑ comment by rwallace · 2009-09-28T13:47:39.032Z · LW(p) · GW(p)
That's because those games are nonrandom, and your opponent can be expected to single out the best move.
Algorithms for games like backgammon and poker that have a random element, do pay attention to density of states.
(Oddly enough, so nowadays do the best known algorithms for Go, which surprised almost everyone in the field when this discovery was made. Intuitively, this can be seen as being because the game tree of Go is too large and complex for exhaustive search to work.)
comment by timtyler · 2009-09-27T04:10:43.280Z · LW(p) · GW(p)
Re: "But if I make two copies of the same computer program, is there twice as much experience, or only the same experience? Does someone who runs redundantly on three processors, get three times as much weight as someone who runs on one processor?"
Do they get three times as much "weight" - in some moral system?
Er, that depends on the moral system in question.
Replies from: Nominull↑ comment by Nominull · 2009-09-27T04:16:51.761Z · LW(p) · GW(p)
We're not talking about morality here, we're talking anthropics.
http://j.photos.cx/AnthropicPrinciple-135.jpg
Replies from: timtyler