Poll: What value extra copies?
post by Roko · 2010-06-22T12:15:54.408Z · LW · GW · Legacy · 177 comments
In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility in copies drop off sub-linearly?
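(As a purely illustrative way to pin down "sub-linearly" - the functional forms below are mine, not part of the poll: writing $u$ for the value you place on one extra copy, the two attitudes correspond roughly to

    U_{\text{linear}}(n) = n\,u \qquad \text{versus} \qquad U_{\text{sublinear}}(n) = u\,\log_2(n+1),

under which ten copies are worth about $3.5\,u$ rather than $10\,u$.)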
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question, then in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).
177 comments
Comments sorted by top scores.
comment by JenniferRM · 2010-06-22T17:01:36.179Z · LW(p) · GW(p)
There seem to be a lot of assumptions in the poll, but one in particular jumps out at me. I'm curious why there is no way to express that the creation of a copy might have negative value.
It seems to me that, for epistemic balance, there should be poll options which contemplate the idea that making a copy might be the "default" outcome unless some amount of work was done to specifically avoid the duplication - and then ask how much work someone would do to save a duplicate of themselves from the hypothetical harm of coming into existence.
Why is there no option like that?
Replies from: Roko↑ comment by Roko · 2010-06-22T18:15:20.054Z · LW(p) · GW(p)
Because the polling site limits the number of options I can give.
Is that the option you would be ticking?
Replies from: JenniferRM↑ comment by JenniferRM · 2010-06-22T19:11:51.462Z · LW(p) · GW(p)
I'm not sure. The first really big thing that jumped out at me was the total separateness issue. The details of how this is implemented would matter to me and probably change my opinion in dramatic ways. I can imagine various ways to implement a copy (physical copy in "another dimension", physical copy "very far away", with full environmental detail similarly copied out to X kilometers and the rest simulated or changed, with myself as an isolated Boltzmann brain, etc, etc). Some of them might be good, some might be bad, and some might require informed consent from a large number of people.
For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy", but my preference for it grows from the interactions I expect to happen far away in time and space. Such copying basically amounts to "colonization of space" and seems like an enormously good thing from that perspective.
I think simulationist metaphysics grows out of intuitions from dreaming (where our brain probably literally implements something like a "this was a dream" content tag so that we don't become confused by memories of our dreams), programming (where simulations happen in RAM that we can "miraculously edit", thereby copying and/or changing the course of the simulation), and mathematics (where we get a sense of data structures "Platonically existing" before we construct a definition, which our definitions "find", so that we can explore the implications and properties of the hypothetical object).
It's very easy to get these inspirational sources confused, mix them together some, then talk about "making a copy", and have the illusion of mutual understanding with someone else.
For example, I expect that all possible realities already "exist" in platospace. Tunnels between realities can be constructed, but anything that connects to our reality is likely (due to thermodynamic and information theoretic concerns) to be directional. We can spend energy to embed other realities within our own as simulations. In theory, we might be embedded in larger contexts without ever being aware of the fact. Embedding something that embeds yourself is cute, but not computationally realistic, implying either directional "compression" or radically magical metaphysics.
Perhaps a context in which we are embedded might edit our universe state and explore counter-factual simulations, but even if our simulators did that, an unedited version of our universe would still continue on within platospace, as would all possible "edit and continuations" that our supposed simulators did not explore via simulation as an embedding within their own context.
But however much fun it is to think about angels on the head of a pin, all such speculation seems like an abuse of the predicate of "existence" to me. I might use the word "existence" when thinking of "platonic existence" but it is a very different logical predicate than the word that's used when I ponder "whether $100 exists in my purse".
Possible spoiler with rot13'ed amazon link:
Fbzrgvzrf V guvax znlor gurer fubhyq or n fhosbehz sbe crbcyr jub unir nyernql ernq Crezhgngvba Pvgl fb gung pbairefngvbaf pna nffhzr pregnva funerq ibpnohynel jvgubhg jbeelvat nobhg fcbvyref :-C
uggc://jjj.nznmba.pbz/Crezhgngvba-Pvgl-Tert-Rtna/qc/006105481K
Replies from: Roko↑ comment by Roko · 2010-06-23T09:49:26.527Z · LW(p) · GW(p)
For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy"
No, that's not non-interacting, because as you say later, you want to interact with it. I mean really strictly non-interacting: no information flow either way. Imagine it's over the cosmic horizon.
Replies from: AlephNeil, wedrifid↑ comment by AlephNeil · 2010-06-23T12:03:20.883Z · LW(p) · GW(p)
Imagine it's over the cosmic horizon.
That's an interesting ingredient to throw in. I've been imagining scenarios where, though the copies don't interact with each other, there will nevertheless be people who can obtain information about both (e.g. music scholars who get to write treatises on Beethoven's 9th symphony vs "parallel-Beethoven's 9th symphony").
But if the copies are (to all intents and purposes) in causally disjoint parallel universes then intuitively it seems that an exact copy of Beethoven is (on average) no better or worse than a 'statistical copy'.
Hmm, this is certainly a more interesting question. My first instinct (which I'd easily be persuaded to reconsider) is to say that the question ceases to make sense when the 'copy' is in a 'parallel universe'. Questions about what is 'desirable' or 'good' for X only require (and only have) answers when there's some kind of information flow between the thinker and X. (But note that the case where X is an imaginary or simulated universe is quite different from that where X is a 'real' parallel universe that no-one has imagined or simulated.)
ETA: But we can imagine two people starting off in the same universe and then travelling so far apart that their future light cones become disjoint. And then we could consider the question of the relative value of the following three scenarios:
- A single Beethoven in universe A, no Beethoven in universe B.
- Beethoven in universe A and "lock-step Beethoven" in universe B.
- Beethoven in universe A and "statistical Beethoven" in universe B.
and ask "how much effort we should put in to bring about 2 or 3 rather than 1, and 3 rather than 2?"
This is an even more interesting question (or was this all along the question?) But I don't think it's really a question about copies of oneself or even of a person (except insofar as we regard utility as supervening on people's experiences), it's a general question about how we should 'account for' the fates of regions of the universe that become inaccessible from our own, when trying to judge whether our actions are good or bad.
Replies from: torekp, Roko↑ comment by torekp · 2010-06-26T00:28:24.491Z · LW(p) · GW(p)
Questions about what is 'desirable' or 'good' for X only require (and only have) answers when there's some kind of information flow between the thinker and X.
Suppose that your hour of hard labor creates a separate spacetime - black hole in our universe, Big Bang in theirs type scenario. Does that count as an information flow between you and an inhabitant (X) of the new universe? I'd think it does, so you're still on the hook to answer Roko's question.
↑ comment by Roko · 2010-06-23T12:12:05.016Z · LW(p) · GW(p)
Questions about what is 'desirable' or 'good' for X only require (and only have) answers when there's some kind of information flow between the thinker and X.
How many boxes do you take on Newcomb's problem?
Replies from: JenniferRM, AlephNeil↑ comment by JenniferRM · 2010-06-23T22:04:34.250Z · LW(p) · GW(p)
No, that's not non-interacting, because as you say later, you want to interact with it. I mean really strictly noninteracing: no information flow either way. Imagine it's over the cosmic horizon.
So I'm assuming, in this case, that the scenario to judge is a material copy of everything in our "recursive cosmic horizon" (that is, including the stuff at the edge of the cosmic horizon of the stuff at the edge of our cosmic horizon and so on until everything morally significant has either been firmly excluded or included so no one at the "very outer edge" relative to Sol has a change in experience either over their entire "cosmic existence" for the next N trillion years so we and everything that will eventually be in our light cone has identical but non-interacting astronomical waste issues to deal with) and then that physical system is moved off to its own place that is unimaginably far away and isolated.
This triggers my platospace intuitions because, as near as I can figure, every possible such universe already "exists" as a mathematical object in platospace (given that weak version of that kind of "existence" predicate) and copies in platospace are the one situation where the identity of indiscernibles is completely meaningful.
That kind of duplication is a no-op (like adding zero in a context where there are no opportunity costs because you could have computed something meaningful instead) and has no value.
For reference, I one-box on Newcomb's paradox if there really is something that can verifiably predict what I'll do (and I'm not being scammed by a huckster in an angel costume with some confederates who have pre-arranged to one-box or two-box or to signal intent via backchannels if I randomly instruct them how to pick for experimental purposes, etc, etc).
Back in like 2001 I tried to build a psych instrument that had better than random chance of predicting whether someone would one-box or two-box in a specific, grounded and controlled Newcomb's Paradox situation - and that retained its calibration even when the situation was modified into its probabilistic form and the actual measured calibration was honestly reported to the participants.
Eventually I moved on, because I ran into numerous practical difficulties (money, time, competing interests, etc) trying to implement a research program of this sort as an undergrad in my spare time :-P
Still, I've retained the orientation towards reality that makes me more interested in psychological instruments on the subject of Newcomb's Paradox than in abstract models of rationality that are capable of getting the right answer. I don't think it requires special math to get the right answer, it just requires clear thinking.
The gritty details of practical reality always seem to matter in the end, and questions about the terminal value of imaginary nothings have, in my experience, never mattered except for signaling tribal membership... where imaginary nothings matter a lot, but in complicated ways with their own quirky domain details that are probably pretty far off topic :-P
Replies from: Blueberry, Eliezer_Yudkowsky, cupholder↑ comment by Blueberry · 2010-06-23T22:32:45.664Z · LW(p) · GW(p)
Back in like 2001 I tried to build a psych instrument that had better than random chance of predicting whether someone would one-box or two-box in a specific, grounded and controlled Newcomb's Paradox situation - and that retained its calibration even when the situation was modified into its probabilistic form and the actual measured calibration was honestly reported to the participants.
That sounds incredibly interesting and I'm curious what else one-boxing correlates with. By "instrument", you mean a questionnaire? What kinds of things did you try asking? Wouldn't the simplest way of doing that be to just ask "Would you one-box on Newcomb's Problem?"
Replies from: JenniferRM↑ comment by JenniferRM · 2010-06-24T01:45:58.125Z · LW(p) · GW(p)
I think I might end up disappointing because I have almost no actual data...
By an instrument I meant a psychological instrument - probably initially just a quiz, and if that didn't work, then perhaps some Stroop-like measurements of millisecond delay when answering questions on a computer.
Most of my effort went into working out a strategy for iterative experimental design and brainstorming questions for the very first draft of the questionnaire. I didn't really have a good theory about what pre-existing dispositions or "mental contents" might correlate with dispositions one way or the other.
I thought it would be funny if people who "believed in free will" in the manner of Martin Gardner (an avowed mysterian) turned out to be mechanically predictable on the basis of inferring that they are philosophically confused in ways that lead to two-boxing. Gardner said he would two box... but also predicted that it was impossible for anyone to successfully predict that he would two box.
In his 1974 "Mathematical Games" article in Scientific American he ended with a question:
But has either side really done more than just repeat its case "loudly and slowly"? Can it be that Newcomb's paradox validates free will by invalidating the possibility, in principle, of a predictor capable of guessing a person's choice between two equally rational actions* with better than 50% probability?
* From context I infer that he means that one-boxing and two-boxing are equally rational and doesn't mean to cover the more general claim this seems to imply.
In his post script to the same article, reprinted in The Night Is Large he wrote:
It is my view that Newcomb's predictor, even if accurate only 51% of the time, forces a logical contradiction that makes such a prediction, like Russell's barber, impossible. We can avoid the contradiction arising from two different "shoulds" (should you take one or two boxes?) by stating the contradiction as follows. One flawless argument implies that the best way to maximize your reward is to take only the closed box. Another flawless argument implies that the best way to maximize your reward is to take both boxes. Because the two conclusions are contradictory, the prediction cannot be even probably valid. Faced with a Newcomb decision, I would share the suspicions of Max Black and others that I was either the victim of a hoax or of a badly controlled experiment that had yielded false data about the predictor's accuracy. On this assumption, I would take both boxes.
This obviously suggests a great opportunity for falsification by rolling up one's sleeves and just doing it. But I didn't get very far...
One reason I didn't get very far is that I was a very poor college student and I had a number of worries about ecological validity if there wasn't really some money on the line which I couldn't put up.
A quick and dirty idea I had to just get moving was to just get a bunch of prefab psych instruments (like the MMPI and big five stuff but I tried tracking down other things like religious belief inventories and such) and then also make up a Newcomb's quiz of my own, that explained the situation, had some comprehension questions, and then asked for "what would you do".
The Newcomb's quiz would just be "one test among many", but I could score the quizzes and come back to give the exact same Newcomb's quiz a second time with a cover sheet explaining that the answer to the final question was actually going to determine payoffs for the subject. All the other tests would give a plausible reason that the prediction might be possible, act as a decoy (so the Newcomb's question wouldn't leap out and the opinion solicited would be unvarnished), and provide fascinating side material to see what might be correlated with opinions about Newcomb's paradox.
This plan foundered on my inability to find any other prefab quizzes. I had thought, you know?... science? ...openness? But in my context at that time and place (with the internet not nearly as mature as it is now, not having the library skills I now have, and so on) all my attempts to acquire such tests were failures.
One of the things I realized is that the claim about the accuracy might substantially change the behavior of the subject, so I potentially had a chicken-and-egg problem - even nonverbals could influence things as I handed over the second set of papers claiming success rates of 1%, 50%, or 99% to help explore the stimulus-reaction space... it would be tricky. I considered eventually bringing in some conformity-experiment stuff, like confederates who one-box or two-box in a way the real subject could watch and maybe be initially fooled by, but that was just getting silly, given my resources.
Another issue is that, if you, as the subject of prediction, aren't sure what the predictor may have determined about your predicted action, it seems plausible that the very first time you faced the situation you might have a unique opportunity to do something moderately creative like flipping a coin, having it tell you to two-box, and coming out with both prizes. So one of the questions I wanted to stick in was something to very gently probe the possibility that the person would "go random" like this and optimize over this possibility. Do you test for this propensity in advance? How? How without suggesting the very possibility?
This also raises the interesting question of iterated prediction. It's one thing to predict what a smart 12-year-old who has just been introduced to the paradox will do, a different thing to give the test to people who have publicly stated what they would do, and still a different thing to run someone through the system five times in a row so that the system and the subject start gaining mutually reflective information (for example, the results on the fifth attempt would probably wash out any information from confederates and give the subject first-hand experience with the success rate, but it creates all kinds of opportunities for the instrument to be hacked by a clever subject).
Or, from another angle, what about doing the experiment on a group of people who get to talk about it and watch each other as they go through it? Do people influence each other on this subject? Would the instrument have to know what the social context was to predict subject behavior?
This leads to the conclusion that I'd probably have to add some metadata to my instrument so that it could ask the question "how many times have you seen this specific questionnaire?" or "when was the first time you heard about Newcomb's paradox?" and possibly also have the person giving the questionnaire fill in some metadata about recent history of previous attempts and/or the experimenter's subjective estimate of the answers the person should give to familiarity questions.
Another problem was simply finding people with the patience to take the quizzes :-P
I ended up never having a final stable version of the quiz, and a little bit after that I became way more interested in complex systems theory and then later more practical stuff like machine learning and bioinformatics and business models and whatnot - aiming for domain expertise in AI and nano for "save the world" purposes in the coming decades.
I think I did the right thing by leaving the academic psych track, but I was so young and foolish then that I'm still not sure. In any case, I haven't seriously worked on Newcomb's paradox in almost a decade.
Nowadays, I have a sort of suspicion that upon seeing Newcomb's paradox, some people see that the interaction with the predictor is beneficial to them and that it would be good if they could figure out some way to get the most possible benefit from the situation, which would involve at least being predicted to one-box. Then it's an open question as to whether it's possible (or moral) to "cheat" and two-box on top of that.
So the suspicious part of this idea is that a lot of people's public claims in this area are a very, very geeky form of signaling, with one-boxing being a way to say "It's simple: I want to completely win... but I won't cheat". I think the two-box choice is also probably a matter of signaling, and it functions as a way to say something like "I have put away childish aspirations of vaguely conceived get-rich-quick schemes, accepted that in the real world everyone can change their mind at any moment, and am really trustworthy because I'm neither lean nor hungry nor trying to falsely signal otherwise".
Like... I bet C.S. Lewis would have two-boxed. The signaling theory reminds me of his version of moral jujitsu, though personally, I still one box - I want to win :-P
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-06-23T22:35:34.102Z · LW(p) · GW(p)
Second Blueberry's question.
↑ comment by cupholder · 2010-06-23T22:38:22.401Z · LW(p) · GW(p)
Back in like 2001 I tried to build a psych instrument that had better than random chance of predicting whether someone would one-box or two-box in a specific, grounded and controlled Newcomb's Paradox situation - and that retained its calibration even when the situation was modified into its probabilistic form and the actual measured calibration was honestly reported to the participants.
This sounds very interesting, so I second Blueberry's questions.
(Edit - beaten by Eliezer - I guess I third them.)
↑ comment by AlephNeil · 2010-06-23T12:33:18.863Z · LW(p) · GW(p)
One, if it's set up in what I think is the standard way. (Though of course one can devise very similar problems like the Smoking Lesion where the 'right' answer would be two.)
I'm not entirely sure how you're connecting this with the statement you quoted, but I will point out that there is information flow between the Newcomb player's predisposition to one-box or two-box and the predictor's prediction. And that without some kind of information flow there couldn't be a correlation between the two (short of a Cosmic Coincidence.)
Replies from: Roko
comment by Morendil · 2010-06-22T13:44:14.077Z · LW(p) · GW(p)
Would I sacrifice a day of my life to ensure that (if that could be made to mean something) a second version of me would live a life totally identical to mine?
No. What I value is that this present collection of memories and plans that I call "me" should, in future, come to have novel and pleasant experiences.
Further, using the term "copy" as you seem to use it strikes me as possibly misleading. We make a copy of something when we want to preserve it against loss of the original. Given your stipulations of an independently experienced world for the "copy", which can have no effect on the "original world", I'm not sure that's the best word to use. Perhaps "twin", as in the philosophical twin-world experiments, or maybe "metaphysical copy" to distinguish it from the more common usage.
Replies from: wstrinz↑ comment by wstrinz · 2010-06-22T14:00:29.906Z · LW(p) · GW(p)
You said pretty much what I was thinking. My (main) motivation for copying myself would be to make sure there is still a version of the matter/energy pattern wstrinz instantiated in the world in the event that one of us gets run over by a bus. If the copy has to stay completely separate from me, I don't really care about it (and I imagine it doesn't really care about me).
As with many uploading/anthropics problems, I find abusing Many Worlds to be a good way to get at this. Does it make me especially happy that there's a huge number of other me's in other universes? Not really. Would I give you anything, time or money, if you could credibly claim to be able to produce another universe with another me in it? Probably not.
Replies from: cousin_it, Roko↑ comment by cousin_it · 2010-06-22T14:12:52.384Z · LW(p) · GW(p)
Yep, I gave the same answer. I only care about myself, not copies of myself, high-minded rationalizations notwithstanding. "It all adds up to normality."
Replies from: Vladimir_Nesov, RobinZ↑ comment by Vladimir_Nesov · 2010-06-22T17:12:14.210Z · LW(p) · GW(p)
"It all adds up to normality."
Only where you explain what's already normal. Where you explain counterintuitive unnatural situations, it doesn't have to add up to normality.
Replies from: cousin_it, cousin_it↑ comment by cousin_it · 2010-06-22T20:27:37.201Z · LW(p) · GW(p)
Should I take it as an admission that you don't actually know whether to choose torture over dust specks, and would rather delegate this question to the FAI?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-22T20:43:32.772Z · LW(p) · GW(p)
Should I take it as an admission that you don't actually know whether to choose torture over dust specks, and would rather delegate this question to the FAI?
All moral questions should be delegated to FAI, whenever that's possible, but this is trivially so and doesn't address the questions.
What I'll choose will be based on some mix of moral intuition, heuristics about the utilitarian shape of morality, and expected utility estimates. But that would be a matter of making the decision, not a matter of obtaining interesting knowledge about the actual answers to the moral questions.
I don't know whether torture or specks are preferable, I can offer some arguments that torture is better, and some arguments that specks are better, but that won't give much hope for eventually figuring out the truth, unlike with the more accessible questions in natural science, like the speed of light. I can say that if given the choice, I'd choose torture, based on what I know, but I'm not sure it's the right choice and I don't know of any promising strategy for learning more about which choice is the right one. And thus I'd prefer to leave such questions alone, so long as the corresponding decisions don't need to be actually made.
I don't see what these thought experiments can teach me.
Replies from: cousin_it↑ comment by cousin_it · 2010-06-23T10:51:42.545Z · LW(p) · GW(p)
As it happened several times before, you seem to take as obvious some things that I don't find obvious at all, and which would make nice discussion topics for LW.
How can you tell that some program is a fair extrapolation of your morality? If we create a program that gives 100% correct answers to all "realistic" moral questions that you deal with in real life, but gives grossly unintuitive and awful-sounding answers to many "unrealistic" moral questions like Torture vs Dustspecks or the Repugnant Conclusion, would you force yourself to trust it over your intuitions? Would it help if the program were simple? What else?
I admit I'm confused on this issue, but feel that our instinctive judgements about unrealistic situations convey some non-zero information about our morality that needs to be preserved, too. Otherwise the FAI risks putting us all into a novel situation that we will instinctively hate.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-23T11:46:37.249Z · LW(p) · GW(p)
How can you tell that some program is a fair extrapolation of your morality?
This is the main open question of FAI theory. (Although FAI doesn't just extrapolate your revealed reliable moral intuitions, it should consider at least the whole mind as source data.)
If we create a program that gives 100% correct answers to all "realistic" moral questions that you deal with in real life, but gives grossly unintuitive and awful-sounding answers to many "unrealistic" moral questions like Torture vs Dustspecks or the Repugnant Conclusion, would you force yourself to trust it over your intuitions?
I don't suppose agreeing on more reliable moral questions is an adequate criterion (sufficient condition), though I'd expect agreement on such questions to more or less hold. FAI needs to be backed by solid theory, explaining why exactly its answers are superior to moral intuition. That theory is what would force one to accept even counter-intuitive conclusions. Of course, one should be careful not to be fooled by a wrong theory, but being fooled by your own moral intuition is also always a possibility.
I admit I'm confused on this issue, but I feel that our instinctive judgments about unrealistic situations convey some non-zero information about our morality that needs to be preserved, too.
Maybe they do, but how much would you expect to learn about quasars from observations made by staring at the sky with your eyes?
We need better methods that don't involve relying exclusively on vanilla moral intuitions. What kinds of methods would work, I don't know, but I do know that moral intuition is not the answer. FAI refers to successful completion of this program, and so represents the answers more reliable than moral intuition.
Replies from: cousin_it↑ comment by cousin_it · 2010-06-23T12:00:47.021Z · LW(p) · GW(p)
FAI needs to be backed by solid theory, explaining why exactly its answers are superior to moral intuition.
If by "solid" you mean "internally consistent", there's no need to wait - you should adopt expected utilitarianism now and choose torture. If by "solid" you mean "agrees with our intuitions about real life", we're back to square one. If by "solid" you mean something else, please explain what exactly. It looks to me like you're running circles around the is-ought problem without recognizing it.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-23T12:13:28.773Z · LW(p) · GW(p)
If by "solid" you mean "internally consistent", there's no need to wait - you should adopt expected utilitarianism now and choose torture.
How could I possibly mean "internally consistent"? Being consistent conveys no information about a concept, aside from its non-triviality, and so can't be a useful characteristic. And choosing specks is also "internally consistent". Maybe I like specks in others' eyes.
FAI theory should be reliably convincing and verifiable, preferably on the level of mathematical proofs. FAI theory describes how to formally define the correct answers to moral questions, but doesn't at all necessarily help in intuitive understanding of what these answers are. It could be a formalization of "what we'd choose if we were smarter, knew more, had more time to think", for example, which doesn't exactly show how the answers look.
Replies from: cousin_it↑ comment by cousin_it · 2010-06-23T12:19:50.128Z · LW(p) · GW(p)
Then the FAI risks putting us all in a situation we hate, which we'd love if only we were a bit smarter.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-23T12:21:34.641Z · LW(p) · GW(p)
FAI doesn't work with "us", it works with world-states, which include all detail including whatever distinguishes present humans from hypothetical smarter people. A given situation that includes a smarter person is distinct from otherwise the same situation that includes a human person, and so these situations should be optimized differently.
Replies from: cousin_it↑ comment by cousin_it · 2010-06-23T12:41:13.463Z · LW(p) · GW(p)
I see your point, but my question still stands. You seem to take it on faith that an extrapolated smarter version of humanity would be friendly to present-day humanity and wouldn't want to put it in unpleasant situations, or that they would and it's "okay". This is not quite as bad as believing that a paperclipper AI will "discover" morality on its own, but it's close.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-23T12:50:42.090Z · LW(p) · GW(p)
You seem to take it on faith that a hypothetical smarter version of humanity would be friendly to present-day humanity and wouldn't want to put it in unpleasant situations, or that they would and it's "okay".
I don't "take it on faith", and the example with "if we were smarter" wasn't supposed to be an actual stab at FAI theory.
On the other hand, if we define "smarter" as also keeping preference fixed (the alternative would be wrong, as a Smiley is also "smarter", but clearly not what I meant), then the smarter versions' advice is by definition better. This, again, gives no technical guidance on how to get there, hence the word "formalization" was essential in my comment. The "smarter" modifier is about as opaque as the whole of FAI.
Replies from: cousin_it↑ comment by cousin_it · 2010-06-24T10:56:45.980Z · LW(p) · GW(p)
You define "smarter" as keeping "preference" fixed, but you also define "preference" as the extrapolation of our moral intuitions as we become "smarter". It's circular. You're right, this stuff is opaque.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-24T11:02:21.875Z · LW(p) · GW(p)
It's a description, connection between the terms, but not a definition (pretty useless, but not circular).
↑ comment by RobinZ · 2010-06-22T17:30:33.715Z · LW(p) · GW(p)
Seconding Vladimir_Nesov's correction - for reference, here is the original quote, in context:
I feel like grabbing hold of him and shaking the metaphysical stuffing out; instead I say evenly, "I'm asking you to help me. I don't care how experience is constructed. I don't care if time is an illusion. I don't care if nothing's real until it's five minutes old. It all adds up to normality—or it ought to. It used to. And don't tell me everyone smears a hundred times a day; everyone does not suffer hallucinations, mod failures—"
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-06-22T17:38:51.401Z · LW(p) · GW(p)
The phrase was used in the novel multiple times, and less confusingly so on other occasions. For example:
The ordinary course of events must add up to normality.
Replies from: RobinZ
comment by magfrump · 2010-06-22T22:51:16.629Z · LW(p) · GW(p)
I went straight to the poll without reading the post carefully enough to see that "non-interacting" was specified.
My first interpretation of this is a completely non-interacting copy, which has no real value to me (things I can't interact with don't 'exist' by my definition of exist); a copy that I would not interact with on a practical level might have some value to me.
Anyway, I answered the poll based on an interactive interpretation, so there is at least one spurious result, depending on how you plan to interpret all this.
comment by wedrifid · 2010-06-22T16:18:51.601Z · LW(p) · GW(p)
The mathematical details vary too much with the specific circumstances for me to estimate in terms of days of labor. Important factors to me include risk mitigation and securing a greater proportion of the negentropy of the universe for myself (and things I care about). Whether other people choose to duplicate themselves (which in most plausible cases will have an impact on negentropy consumption) would matter. Non-duplication would then represent cooperation with other potential trench diggers.
Replies from: Roko
comment by [deleted] · 2010-06-23T06:18:32.601Z · LW(p) · GW(p)
What about using compressibility as a way of determining the value of the set of copies?
In computer science, there is a concept known as deduplication (http://en.wikipedia.org/wiki/Data_deduplication) which is related to determining the value of copies of data. Normally, if you have 100MB of incompressible data (e.g. an image or an upload of a human), it will take up 100MB on a disk. If you make a copy of that file, a standard computer system will require a total of 200MB to track both files on disk. A smart system that uses deduplication will see that they are the same file and discard the redundant data so that only 100MB is actually required. However, this is done transparently, so the user will see two files and think that there is 200MB of data. This can be done with N copies: the user will think there is N*100MB of data, but the file system is smart enough to use up only 100MB of disk space as long as no one modifies the files.
For the case of an upload, you have N copies of a human of X MB each, which will require only X MB on disk even though the end user sees N*X MB of data being processed. As long as the copies never diverge, running N copies of an upload should never take up more than X MB of space (though they will take up more time, since each process is still being run).
In the case where the copies /do/ diverge, you can use copy-on-write (COW) optimization (http://en.wikipedia.org/wiki/Copy_on_write) to determine the amount of resources used. In the first example, if you change the first 1MB of one of the two 100MB files but leave the rest untouched, a smart computer will only use 101MB of disk space. It will use 99MB for the shared data, 1MB for the first file's unique data, and 1MB for the second file's unique data. So in this case, the resources for the two copies are 1% more than the resources used for the single copy.
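Here is a minimal sketch of the accounting described above, using the 100MB/1MB numbers from the example. The class and method names are invented for illustration; this is not any real filesystem's API.

    # Toy model of deduplicated, copy-on-write storage. Blocks are keyed by
    # content; identical blocks are stored once and reference-counted.
    class DedupStore:
        def __init__(self, block_mb=1):
            self.block_mb = block_mb
            self.blocks = {}  # block content -> reference count

        def add_file(self, content_blocks):
            """Store a file given as a list of 1MB block contents."""
            for b in content_blocks:
                self.blocks[b] = self.blocks.get(b, 0) + 1

        def physical_mb(self):
            """Disk space actually used: one copy of each distinct block."""
            return len(self.blocks) * self.block_mb

        def logical_mb(self):
            """What the user sees: every reference counted separately."""
            return sum(self.blocks.values()) * self.block_mb

    original = [("upload", i) for i in range(100)]    # 100 distinct 1MB blocks

    store = DedupStore()
    store.add_file(original)                          # the upload
    store.add_file(original)                          # an identical copy
    print(store.logical_mb(), store.physical_mb())    # 200 logical, 100 physical

    store = DedupStore()
    store.add_file(original)
    store.add_file([("edited", 0)] + original[1:])    # copy-on-write: 1MB diverges
    print(store.logical_mb(), store.physical_mb())    # 200 logical, 101 physical

Running N lock-step copies scales the logical column by N while the physical column stays at 100, which is the point being made above.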
From a purely theoretical perspective, deduplication and COW will give you an efficiency equivalent to what you would get if you tried to compress an upload or a bunch of uploads. (In practice it depends on the type of data.) So the value of N copies is equal to the Shannon entropy (alternatively, you could probably use the Kolmogorov complexity) of the data that is the same in both copies plus the unique data in each copy. I figure that any supercomputer designed to run multiple copies of an upload would use these types of compression by default, since all modern high-end file storage systems use dedup and COW to save on costs.
Note that this calculation of value is different from the case where you make a backup of yourself to guard against disaster. In the case of a backup, you would normally run the second copy in an environment more isolated from the first, which would make deduplication impossible. E.g. you would have one upload running in California and another running in Australia. That way, if the computer in California falls into the ocean, you still have a working copy in Australia. In this case, the value of the two copies is greater than the value of just one copy, because the second copy adds a measure of redundancy even though it adds no new information.
P.S. While we're on the topic, this is a good time to back up your own computer if you haven't done so recently. If your hard drive crashes, you will fully comprehend the value of a copy :)
Replies from: Roko, Roko↑ comment by Roko · 2010-06-23T09:24:46.173Z · LW(p) · GW(p)
Consider the case where you are trying to value (a) just yourself versus (b) the set of all future yous that satisfy the constraint of not going into negative utility.
The Shannon information of set (b) could be (probably would be) lower than that of (a). To see this, note that the complexity (information) of the set of all future yous is just the info required to specify (you, now) (because to compute the time evolution of the set, you just need the initial condition), whereas the complexity (information) of just you is that of a series of snapshots: (you, now), (you, 1 microsecond from now), ... . This is like the difference between a JPEG and an MPEG. The complexity of the constraint probably won't make up for this.
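One rough way to write this argument down, using Kolmogorov complexity $K(\cdot)$ - the notation is mine; the comment above speaks informally of "information":

    K(\text{set of all future yous}) \approx K(\text{you, now}) + K(\text{laws of time evolution}) + O(1)
    K(\text{one particular future trajectory}) \approx K(\text{you, now}) + K(\text{the specific sequence of outcomes over time})

The first line is the JPEG plus a decoder and does not grow with how far into the future the set extends; the second line is the MPEG and, in general, does.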
If the constraint of not going into negative utility is particularly complex, one could pick a simple subset of nonnegative-utility future yous, for example by specifying relatively simple constraints that ensure that the vast majority of yous satisfying those constraints don't go into negative utility.
This is problematic because it means that you would assign less value to a large set of happy future yous than to just one future you.
Replies from: PhilGoetz, None↑ comment by PhilGoetz · 2010-06-27T03:32:30.107Z · LW(p) · GW(p)
This is very disturbing. But I don't think the set of all possible future yous has no information. You seem to be assuming it's a discrete distribution, with one copy of each possible future you. I expect the distribution to be uneven, with many copies clustered near each other in possible-you-space. The distribution, being a function over possible yous, contains even more information than a single you.
Replies from: Roko↑ comment by [deleted] · 2010-06-23T16:43:39.937Z · LW(p) · GW(p)
In your new example, (b) is unrelated to the original question. For (b) a simulation of multiple diverging copies is required in order to create this set of all future yous. However, in your original example, the copies don't statistically diverge.
The entropy of (a) would be the information required to specify you at state t0 plus the entropy of the random distribution of inputs used to generate the set of all possible t1s. In the original example, the simulations of the copies are closed (otherwise you couldn't keep them identical), so the information contained in the single possible t1 cannot be any higher than the information in t0.
Replies from: Roko↑ comment by Roko · 2010-06-23T17:05:56.473Z · LW(p) · GW(p)
Sorry I don't understand this.
Replies from: None↑ comment by [deleted] · 2010-06-23T18:58:23.907Z · LW(p) · GW(p)
Which part(s) don't you understand?
It is possible that we are using different unstated assumptions. Do you agree with these assumptions:
1) An uploaded copy running in a simulation is Turing-complete (as JoshuaZ points out, the copy should also be Turing-equivalent). Because of this, state t_n+1 of a given simulation can be determined by the value of t_n and the value of the input D_n at that state. (The sequence D is not random, so I can always calculate the value of D_n. In the easiest case D_n = 0 for all values of n.) Similarly, if I have multiple copies of the simulation at the same state t_n and all of them have the same input D_n, they should all have the same value for t_n+1. In the top-level post, having multiple identical copies means that they all start at the same state t_0 and are passed the same inputs D_0, D_1, etc. as they run, in order to force them to remain identical. Because no new information is gained as we run the simulation, the entropy (and thus the value) remains the same no matter how many copies are being run.
2) For examples (a) and (b), you are talking about replacing the input sequence D with a random number generator R. The value of t_1 depends on t_0 and the output of R. Since R is no longer predictable, there is information being added at each stage. This means the entropy of this new simulation depends on the entropy of R.
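A compressed restatement of the two assumptions, with $H(\cdot)$ for entropy - the symbols are mine:

    t_{n+1} = f(t_n, D_n)

If the input sequence $D_0, D_1, \dots$ is fixed and known, then $H(t_n) \le H(t_0)$ for every $n$, and the joint state of $N$ lock-step copies is a deterministic function of $t_0$, so running them adds no information. If instead each step consumes fresh randomness $R_n$, then $H(t_0, \dots, t_n) \le H(t_0) + \sum_{i<n} H(R_i)$, and the information content of the simulation can grow over time.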
Replies from: JoshuaZ, Roko, Vladimir_Nesov↑ comment by JoshuaZ · 2010-06-25T12:41:10.660Z · LW(p) · GW(p)
1) An uploaded copy running in a simulation is Turing-complete
That is not what Turing-complete means. Roughly speaking, something is Turing-complete if it can simulate any valid Turing machine. What you are talking about is simply that the state change in question is determined by input data and state. This says nothing about the Turing-completeness of the class of simulations, or even whether the class of simulations can be simulated on Turing machines. For example, if the physical laws of the universe actually require real numbers, then you might need a Blum-Shub-Smale machine to model the simulation.
Replies from: None↑ comment by Roko · 2010-06-25T12:24:15.441Z · LW(p) · GW(p)
Ok, let me see if we agree on something simple. What is the complexity (information content) of a randomly chosen integer of N binary digits? About N bits, right?
What is the information content of the set of all 2^N integers of length N binary digits, then? Do you think it is N*2^N ?
Replies from: None, wedrifid↑ comment by [deleted] · 2010-06-25T16:57:36.167Z · LW(p) · GW(p)
I agree with the first part. In the second part, where is the randomness in the information? The set of all N-bit integers is completely predictable for a given N.
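A tiny illustration of that predictability (the function name here is made up): the whole set falls out of N alone, so a complete description of the set costs roughly K(N), on the order of log2(N) bits plus a constant, nowhere near N*2^N bits.

    def all_n_bit_integers(n):
        # The entire set of 2**n integers with n binary digits (leading zeros
        # allowed) is generated from the single parameter n.
        return set(range(2 ** n))

    print(len(all_n_bit_integers(8)))  # 256 elements from one small parameter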
Replies from: Roko↑ comment by Roko · 2010-06-25T17:06:40.141Z · LW(p) · GW(p)
Exactly. So, the same phenomenon occurs when considering the set of all possible continuations of a person. Yes?
Replies from: None↑ comment by [deleted] · 2010-06-26T03:17:32.523Z · LW(p) · GW(p)
For the set of all possible inputs (and thus all possible continuations), yes.
Replies from: Roko↑ comment by Roko · 2010-06-27T09:49:04.874Z · LW(p) · GW(p)
So the set of all possible continuations of a person has less information content than just the person.
And the complexity of the set of happy or positive utility continuations is determined by the complexity of specifying a boundary. Rather like the complexity of the set of all integers of binary length <= N digits that also satisfy property P is really the same as the complexity of property P.
Replies from: None↑ comment by [deleted] · 2010-06-28T03:11:04.832Z · LW(p) · GW(p)
So the set of all possible continuations of a person has less information content than just the person.
When you say "just the person" do you mean just the person at H(T_n) or a specific continuation of the person at H(T_n)? I would say H(T_n) < H(all possible T_n+1) < H(specific T_n+1)
I agree with the second part.
↑ comment by Vladimir_Nesov · 2010-06-25T12:28:01.741Z · LW(p) · GW(p)
Escape the underscores to block their markup effect: to get A_i, type "A\_i".
↑ comment by Roko · 2010-06-23T10:11:07.802Z · LW(p) · GW(p)
Note that Wei Dai also had this idea.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-06-23T10:19:02.690Z · LW(p) · GW(p)
I don't quite understand sigmaxipi's idea, but from what I can tell, it's not the same as mine.
In my proposal, your counter-example isn't a problem, because something that is less complex (easier to specify) is given a higher utility bound.
Replies from: Rokocomment by DSimon · 2010-06-23T01:39:02.155Z · LW(p) · GW(p)
This strikes me as being roughly similar to people's opinions of the value of having children who outlive them. As the last paragraph of the OP points out, it doesn't really matter if it's a copy of me or not, just that it's a new person whose basic moral motivations I support, but whom I cannot interact with.
Having their child hold to moral motivations they agree with is a major goal of most parents. Having their child outlive them is another (assuming they don't predict a major advance in lifespan-extending technology soon), and that's where the non-interactivity comes in.
The post-death value of the child's existence is their total value minus the value of the experiences I share with that child, or more generally the effects of the child's existence that I can interact with.
In this sense, the question of the poll can (I think) be rephrased as: what, to you, would the post-your-death value of a child that you raise well be?
comment by DanArmak · 2010-06-22T14:38:44.070Z · LW(p) · GW(p)
It seems everyone who commented so far isn't interested in copies at all, under the conditions stipulated (identical and non-interacting). I'm not interested myself. If anyone is interested, could you tell us about it? Thanks.
Replies from: rwallace, Mass_Driver, torekp↑ comment by rwallace · 2010-06-22T19:26:47.228Z · LW(p) · GW(p)
I would place positive value on extra copies, as an extension of the finding that it is better to be alive than not. (Of course, I subscribe to the pattern philosophy of identity -- those who subscribe to the thread philosophy of identity presumably won't consider this line of reasoning valid.)
How much I would be willing to pay per copy, I don't know, it depends on too many other unspecified factors. But it would be greater than zero.
Replies from: DanArmak↑ comment by DanArmak · 2010-06-22T20:10:11.340Z · LW(p) · GW(p)
In your pattern philosophy of identity, what counts as a pattern? In particular, a simulation of our world (of the kind we are likely to run) doesn't contain all the information needed to map it to our (simulating) world. Some of the information that describes this mapping resides in the brains of those who look at and interpret the simulation.
It's not obvious to me that there couldn't be equally valid mappings from the same simulation to different worlds, and perhaps in such a different world is a copy of you being tortured. Or perhaps there is a mapping of our own world to itself that would produce such a thing.
Is there some sort of result that says this is very improbable given sufficiently complex patterns, or something of the kind, that you rely on?
Replies from: rwallace↑ comment by rwallace · 2010-06-22T22:43:30.307Z · LW(p) · GW(p)
Yes, Solomonoff's Lightsaber: the usual interpretations need much shorter decoder programs.
Replies from: DanArmak↑ comment by DanArmak · 2010-06-22T23:38:18.581Z · LW(p) · GW(p)
Why? How do we know this?
Replies from: rwallace↑ comment by rwallace · 2010-06-23T00:26:36.615Z · LW(p) · GW(p)
Know in what sense? If you're asking for a formal proof, of course there isn't one because Kolmogorov complexity is incomputable. But if you take a radically skeptical position about that, you have no basis for using induction at all, which in turn means you have no basis for believing you know anything whatsoever; Solomonoff's lightsaber is the only logical justification anyone has ever come up with for using experience as a guide instead of just acting entirely at random.
Replies from: DanArmak↑ comment by DanArmak · 2010-06-23T07:47:14.327Z · LW(p) · GW(p)
I'm not arguing with Solomonoff as a means for learning and understanding the world. But when we're talking about patterns representing selves, the issue isn't just to identify the patterns represented and the complexity of their interpretation, but also to assign utility to these patterns.
Suppose that I'm choosing whether to run a new simulation. It will have a simple ('default') interpretation, which I have identified, and which has positive utility to me. It also has alternative interpretations, whose decoder complexities are much higher (but still lower than the complexity of specifying the simulation itself). It would be computationally intractable for me to identify all of them. These alternatives may well have highly negative utility to me.
To choose whether to run the simulation, I need to sum the utilities of these alternatives. More complex interpretations will carry lower weight. But what is the guarantee that my utility function is built in such a way that the total utility will still be positive?
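One way to state the worry as a formula, assuming the complexity-based weighting discussed elsewhere in this thread (the symbols are mine):

    EU(\text{run the simulation}) = \sum_i 2^{-\ell_i}\, U_i

where $\ell_i$ is the length of the decoder program for interpretation $i$ and $U_i$ is the utility I assign to what that interpretation depicts. The weights shrink exponentially in $\ell_i$, but nothing in the formula alone bounds $|U_i|$, so whether the sum comes out positive depends on how fast my utility function is allowed to grow with the complexity of the interpretation.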
I'm guessing this particular question has probably been answered in the context of analyzing behavior of utility functions. I haven't read all of that material, and a specific pointer would be helpful.
The reason this whole discussion arises is that we're talking about running simulations that can't be interacted with. You say that you assign utility to the mere existence of patterns, even non-interacting. A simpler utility function specified only in terms of affecting our single physical world would not have that difficulty.
ETA: as Nisan helped me understand in comments below, I myself in practical situations do accept the 'default' interpretation of a simulation. I still think non-human agents could behave differently.
Replies from: Nisan, rwallace↑ comment by Nisan · 2010-06-23T09:05:19.248Z · LW(p) · GW(p)
These are interesting questions. They might also apply to a utility function that only cares about things affecting our physical world.
If there were a person in a machine, isolated from the rest of the world and suffering, would we try to rescue it, or would we be satisfied with ensuring that the person never interacts with the real world?
Replies from: DanArmak↑ comment by DanArmak · 2010-06-23T20:15:34.569Z · LW(p) · GW(p)
I understood the original stipulation that the simulation doesn't interact with our world to mean that we can't affect it to rescue the suffering person.
Let's consider your alternative scenario: the person in the simulation can't affect our universe usefully (the simulating machine is well-wrapped and looks like a uniform black body from the outside), and we can't observe it directly, but we know there's a suffering person inside and we can choose to break in and modify (or stop) the simulation.
In this situation I would indeed choose to intervene to stop the suffering. Your question is a very good one. Why do I choose here to accept the 'default' interpretation which says that inside the simulation is a suffering person?
The simple answer is that I'm human, and I don't have an explicit or implicit-and-consistent utility function anyway. If people around me tell me there's a suffering person inside the simulation, I'd be inclined to accept this view.
How much effort or money would I be willing to spend to help that suffering simulated person? Probably zero or near zero. There are many real people alive today who are suffering and I've never done anything to explicitly help anyone anonymously.
In my previous comments I was thinking about utility functions in general - what is possible, self-consistent, and optimizes something - rather than human utility functions or my own. As far as I personally am concerned, I do indeed accept the 'default' interpretation of a simulation (when forced to make a judgement) because it's easiest to operate that way and my main goal (in adjusting my utility function) is to achieve my supergoals smoothly, rather than to achieve some objectively correct super-theory of morals. Thanks for helping me see that.
↑ comment by rwallace · 2010-06-23T10:06:31.804Z · LW(p) · GW(p)
In Solomonoff induction, the weight of a program is the inverse of the exponential of its length. (I have an argument that says this doesn't need to be assumed a priori, it can be derived, though I don't have a formal proof of this.) Given that, it's easy to see that the total weight of all the weird interpretations is negligible compared to that of the normal interpretation.
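Spelling out the weighting being invoked (the standard Solomonoff prior, stated here only as an illustration):

    w(p) = 2^{-\ell(p)}

so a "weird" decoder that is $k$ bits longer than the normal one carries $2^{-k}$ times its weight; at $k = 50$ extra bits that is already a factor of roughly $10^{-15}$.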
It's true that some things become easier when you try to restrict your attention to "our single physical world", but other things become less easy. Anyway, that's a metaphysical question, so let's leave it aside; in which case, to be consistent, we should also forget about the notion of simulations and look at an at least potentially physical scenario.
Suppose the copy took the form of a physical duplicate of our solar system, with the non-interaction requirement met by flinging same over the cosmic event horizon. Now do you think it makes sense to assign this a positive utility?
Replies from: DanArmak↑ comment by DanArmak · 2010-06-23T20:01:47.457Z · LW(p) · GW(p)
Given that, it's easy to see that the total weight of all the weird interpretations is negligible compared to that of the normal interpretation.
I don't see why. My utility function could also assign a negative utility to (some, not necessarily all) 'weird' interpretations whose magnitude would scale exponentially with the bit-lengths of the interpretations.
Is there a proof that this is inconsistent? if I understand correctly, you're saying that any utility function that assigns very large-magnitude negative utility to alternate interpretations of patterns in simulations, is directly incompatible with Solomonoff induction. That's a pretty strong claim.
Suppose the copy took the form of a physical duplicate of our solar system, with the non-interaction requirement met by flinging same over the cosmic event horizon. Now do you think it makes sense to assign this a positive utility?
I don't assign positive utility to it myself. Not above the level of "it might be a neat thing to do". But I find your utility function much more understandable (as well as more similar to that of many other people) when you say you'd like to create physical clone worlds. It's quite different from assigning utility to simulated patterns requiring certain interpretations.
Replies from: rwallace↑ comment by rwallace · 2010-06-24T02:45:35.686Z · LW(p) · GW(p)
Well, not exactly; I'm saying Solomonoff induction has implications for what degree of reality (weight, subjective probability, magnitude, measure, etc.) we should assign certain worlds (interpretations, patterns, universes, possibilities, etc.).
Utility is a different matter. You are perfectly free to have a utility function that assigns Ackermann(4,4) units of disutility to each penguin that exists in a particular universe, whereupon the absence of penguins will presumably outweigh all other desiderata. I might feel this utility function is unreasonable, but I can't claim it to be inconsistent.
↑ comment by Mass_Driver · 2010-06-22T14:54:07.247Z · LW(p) · GW(p)
I would spend one day's hard labor (8-12 hours) to create one copy of me, just because I'm uncertain enough about how the multiverse works that having an extra copy would be vaguely reassuring. I might do another couple of hours on another day for copy #3. After that I think I'm done.
Replies from: Jonathan_Graehl, DanArmak↑ comment by Jonathan_Graehl · 2010-06-22T17:38:32.143Z · LW(p) · GW(p)
I'm interested, but suspicious of fraud - how do I know the copy really exists?
Also, it seems like as posed, my copies will live in identical universes and have identical futures as well as present state - i.e. I'm making an exact copy of everyone and everything else as well. If that's the offer, then I'd need more information about the implications of universe cloning. If there are none, then the question seems like nonsense to me.
I was initially interested only in the thought of my copies diverging, even without interaction (I suppose MWI implies this is what goes on behind the scenes all the time).
Replies from: DanArmak↑ comment by DanArmak · 2010-06-22T18:32:05.074Z · LW(p) · GW(p)
If the other universe(s) are simulated inside our own, then there may be relevant differences between the simulating universe and the simulated ones.
In particular, how do we create universes identical to the 'master copy'? The easiest way is to observe our universe, and run the simulations a second behind, reproducing whatever we observe. That would mean decisions in our universe control events in the simulated worlds, so they have different weights under some decision theories.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-06-24T21:11:39.431Z · LW(p) · GW(p)
I assumed we couldn't observe our copies, because if we could, then they'd be observing them too. In other words, somebody's experience of observing a copy would have to be fake - just a view of their present reality and not of a distinct copy.
This all follows from the setup, where there can be no difference between a copy (+ its environment) and the original. It's hard to think about what value that has.
↑ comment by DanArmak · 2010-06-22T18:29:53.841Z · LW(p) · GW(p)
If you're uncertain about how the universe works, why do you think that creating a clone is more likely to help you than to harm you?
Replies from: orthonormal↑ comment by orthonormal · 2010-06-22T22:32:23.954Z · LW(p) · GW(p)
I assume Mass Driver is uncertain between certain specifiable classes of "ways the multiverse could work" (with some probability left for "none of the above"), and that in the majority of the classified hypotheses, having a copy either helps you or doesn't hurt.
Thus on balance, they should expect positive expected value, even considering that some of the "none of the above" possibilities might be harmful to copying.
Replies from: DanArmak↑ comment by DanArmak · 2010-06-22T23:36:36.008Z · LW(p) · GW(p)
I understand that that's what Mass_Driver is saying. I'm asking, why think that?
Replies from: orthonormal↑ comment by orthonormal · 2010-06-23T02:27:43.499Z · LW(p) · GW(p)
Because scenarios where having an extra copy hurts seem... engineered, somehow. Short of having a deity or Dark Lord of the Matrix punish those with so much hubris as to copy themselves, I have a hard time imagining how it could hurt, while I can easily think of simple rules for anthropic probabilities in the multiverse under which it would (1) help or (2) have no effect.
I realize that the availability heuristic is not something in which we should repose much confidence on such problems (thus the probability mass I still assign to "none of the above"), but it does seem to be better than assuming a maxentropy prior on the consequences of all novel actions.
Replies from: Mass_Driver, DanArmak↑ comment by Mass_Driver · 2010-06-23T04:54:44.343Z · LW(p) · GW(p)
I think, in general, the LW community often errs by placing too much weight on a maxentropy prior as opposed to letting heuristics or traditions have at least some input. Still, it's probably an overcorrection that comes in handy sometimes; the rest of the world massively overvalues heuristics and tradition, so there are whole areas of possibility-space that get massively underexplored, and LW may as well spend most of its time in those areas.
Replies from: wedrifid↑ comment by wedrifid · 2010-06-23T05:57:07.454Z · LW(p) · GW(p)
You could be right about the LW tendency to err... but this thread isn't the place where it springs to mind as a possible problem! I am almost certain that neither the EEA nor our current circumstances are such that heuristics and tradition are likely to give useful decisions about clone trenches.
↑ comment by DanArmak · 2010-06-23T07:50:40.864Z · LW(p) · GW(p)
Well, short of having a deity reward those who copy themselves with extra afterlife, I'm having difficulty imagining how creating non-interacting identical copies could help, either.
The problem with the availability heuristic here isn't so much that it's not a formal logical proof. It's that it fails to convince me, because I don't happen to have the same intuition about it, which is why we're having this conversation in the first place.
I don't see how you could assign positive utility to truly novel actions without being able to say something about their anticipated consequences. But non-interacting copies are pretty much specified to have no consequences.
Replies from: orthonormal↑ comment by orthonormal · 2010-06-24T05:45:33.177Z · LW(p) · GW(p)
Well, in my understanding of the mathematical universe, this sort of copying could be used to change anthropic probabilities without the downsides of quantum suicide. So there's that.
Robin Hanson probably has his own justification for lots of noninteracting copies (assuming that was the setup presented to him as mentioned in the OP), and I'd be interested to hear that as well.
↑ comment by torekp · 2010-06-26T01:04:29.641Z · LW(p) · GW(p)
I'm interested. As a question of terminal value, and focusing only on the quality and quantity of life of me and my copies, I'd value copies' lives the same as my own. Suppose pick-axing for N years is the only way I can avoid dying right now, where N is large enough that I feel that pick-axing is just barely the better choice. Then I'll also pick-ax for N years to create a copy.
For what it's worth, I subscribe to the thread philosophy of identity per se, but the pattern philosophy of what Derek Parfit calls "what matters in survival".
comment by Dagon · 2010-06-22T13:45:28.954Z · LW(p) · GW(p)
economist's question: "compared to what?"
If they can't interact with each other, just experience something, I'd rather have copies of me than of most other people. If we CAN interact, then a mix of mes and others is best - diversity has value in that case.
Replies from: Roko↑ comment by Roko · 2010-06-22T15:54:54.696Z · LW(p) · GW(p)
"compared to what?"
Compared to no extra copy, and you not having to do a day's hard labor.
Replies from: Dagon, Nick_Tarleton↑ comment by Dagon · 2010-06-22T23:34:56.084Z · LW(p) · GW(p)
Valuing a day's hard labor is pretty difficult for me even in the current world - this varies by many orders of magnitude across time, specific type of labor, what other labor I've committed to and what leisure opportunities I have.
By "compared to what", I meant "what happens to those computing resources if they're not hosting copies of me", and "what alternate uses could I put the results of my day of labor in this universe"? Describe my expected experiences in enough detail for both possible choices (make the sim or do something else), and then I can tell you which I prefer.
Of course, I'll be lying, as I have no idea who this guy is who lives in that world and calls himself me.
↑ comment by Nick_Tarleton · 2010-06-22T18:53:39.152Z · LW(p) · GW(p)
Does "no extra copy" mean one less person / person's worth of resource use in the world, or one more person drawn from some distribution / those resources being used elsewhere?
comment by Kingreaper · 2010-06-22T13:05:14.062Z · LW(p) · GW(p)
If the copies don't diverge, their value is zero.
They are me. We are one person, with one set of thoughts, one set of emotions etc.
Replies from: Roko, Thomas↑ comment by Roko · 2010-06-22T16:15:18.660Z · LW(p) · GW(p)
What about if the copies do diverge, but they do so in a way such that the probability distribution over each copy's future behavior is identical to yours (and you may also assume that they, and you, are in a benign environment, i.e. only good things happen)?
Replies from: Kingreaper↑ comment by Kingreaper · 2010-06-23T00:40:43.187Z · LW(p) · GW(p)
Hmmm, probability distribution; at what level of knowledge?
I guess I should assume you mean at what is currently considered the maximum level of knowledge?
In which case, I suspect that'd be a small level of divergence. But maybe not negligible. I'm not sure; my knowledge of how quantum effects affect macroscopic reality is rather small.
Or is it probability based on my knowledge? In which case it's a huge divergence, and I'd very much appreciate it.
Before deciding how much I value it, I'd like to see an illustrative example, if possible. Perhaps take Einstein: if he had been copied at age 12, what would an average level of divergence look like?
↑ comment by Thomas · 2010-06-22T13:24:13.662Z · LW(p) · GW(p)
You are one person today and tomorrow. You don't think that the tomorrow copy of you is useless, do you?
Replies from: khafra, Kingreaper↑ comment by Kingreaper · 2010-06-22T14:18:47.838Z · LW(p) · GW(p)
If there was a time travel event, such that me and me tomorrow existed at the same time, would we have the same thoughts? No.
Would we have the same emotions? No.
We would be different.
If it was a time travel event causing diverging timelines I'd consider it a net gain in utility for mes. (especially if I could go visit the other timeline occasionally :D )
If it was a time loop, where present me will inevitably become future me? There are still precisely as many temporal mes as there would be otherwise. It is neither innately a gain nor a loss.
comment by PhilGoetz · 2010-06-24T04:10:22.976Z · LW(p) · GW(p)
I don't think I would place more value on lock-step copies. I would love to have lots of copies of me, because then we could all do different things, and I'd not have to wonder whether I could have been a good composer, or writer, or what have you. And we'd probably form a commune and buy a mansion and have other fun economies of scale. I have observed that identical twins seem to get a lot of value out of having a twin.
As to the "value" of those copies, this depends on whether I'm speaking of "value" in the social sense, or the personal utility sense. They wouldn't increase my personal hedonistic utility very much, maybe double or triple it; but the increase of utility to the world would probably be more than linear in the number of copies. I would probably make decisions using the expected social utility. I'm not sure why. I think my personal utility function would make me suffer if I didn't.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-06-24T04:28:21.318Z · LW(p) · GW(p)
Speaking as a non-identical twin, I can say that one gets a lot of value even from a fraternal twin.
Replies from: PhilGoetz
comment by ata · 2010-06-23T02:28:00.961Z · LW(p) · GW(p)
I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted" (similar to Boltzmann brains), I'm beginning to worry that a fully fact-based consequentialism would degenerate into emotivism, or at least that it must incorporate a significant emotivist component in determining who and what is terminally valued.
* E. T. Jaynes says we can't do inference in infinite sets except those that are defined as well-behaved limits of finite sets, but if we're living in an infinite set, then there has to be some right answer, and some best method of approximating it. I have no idea what that method is.
So. My moral intuition says that creating an identical non-interacting copy of me, with no need for or possibility of it serving as a backup, is valued at 0. As for consequentialism... if this were valued even slightly, I'd get one of those quantum random number generator dongles, have it generate my desktop wallpaper every few seconds (thereby constantly creating zillions of new slightly-different versions of my brain in their own Everett branches), and start raking in utilons. Considering that this seems not just emotionally neutral but useless to me, my consequentialism seems to agree with my emotivist intuition.
Replies from: Roko, Roko↑ comment by Roko · 2010-06-23T10:03:24.272Z · LW(p) · GW(p)
I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside
If this is in some sense true, then we have an infinite ethics problem of awesome magnitude.
Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.
Replies from: ata, Roko, timtyler↑ comment by ata · 2010-06-23T21:52:17.657Z · LW(p) · GW(p)
Though to be honest, I am having trouble seeing what the difference is between this statement being true and being false.
My argument for that is essentially structured as a dissolution of "existence", an answer to the question "Why do I think I exist?" instead of "Why do I exist?". Whatever facts are related to one's feeling of existence — all the neurological processes that lead to one's lips moving and saying "I think therefore I am", and the physical processes underlying all of that — would still be true as subjunctive facts about a hypothetical mathematical structure. A brain doesn't have some special existence-detector that goes off if it's in the "real" universe; rather, everything that causes us to think we exist would be just as true about a subjunctive.
This seems like a genuinely satisfying dissolution to me — "Why does anything exist?" honestly doesn't feel intractably mysterious to me anymore — but even ignoring that argument and starting only with Occam's Razor, the Level IV Multiverse is much more probable than this particular universe. Even so, specific rational evidence for it would be nice; I'm still working on figuring out what qualify as such.
There may be some. First, it would anthropically explain why this universe's laws and constants appear to be well-suited to complex structures including observers. There doesn't have to be any The Universe that happens to be fine-tuned for us; instead, tautologically, we only find ourselves existing in universes in which we can exist. Similarly, according to Tegmark, physical geometries with three non-compactified spatial dimensions and one time dimension are uniquely well-suited to observers, so we find ourselves in a structure with those qualities.
Anyway, yeah, I think there are some good reasons to believe (or at least investigate) it, plus some things that still confuse me (which I've mentioned elsewhere in this thread and in the last section of my post about it), including the aforementioned "infinite ethics problem of awesome magnitude".
Replies from: Roko, Roko↑ comment by Roko · 2010-06-24T11:35:58.339Z · LW(p) · GW(p)
A brain doesn't have some special existence-detector that goes off if it's in the "real" universe; rather, everything that causes us to think we exist would be just as true about a subjunctive.
This seems to lead to madness, unless you have some kind of measure over possible worlds. Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future (all possible continuations exist, and each action has all possible consequences).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-24T11:46:44.725Z · LW(p) · GW(p)
Measure doesn't help if each action has all possible consequences: you'd just end up with the consequences of all actions having the same measure! Measure helps with managing (reasoning about) infinite collections of consequences, but there still must be non-trivial and "mathematically crisp" dependence between actions and consequences.
Replies from: Roko↑ comment by Roko · 2010-06-24T12:01:16.041Z · LW(p) · GW(p)
No, it could help, because the measure could be attached to world-histories: there is a measure for "(drop ball) leads to (ball falls downwards)", which is effectively the kind of thing our laws of physics do for us.
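As a toy sketch of what I mean: if $\mu$ is a measure over whole world-histories $h$, predictions come from conditional measure,

$$\Pr(\text{ball falls} \mid \text{ball dropped}) \;=\; \frac{\mu\{h : \text{dropped and falls in } h\}}{\mu\{h : \text{dropped in } h\}} \;\approx\; 1,$$

which is exactly the kind of regularity our physical laws encode.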
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-24T12:18:08.931Z · LW(p) · GW(p)
There is also a set of world-histories satisfying (drop ball) which is distinct from the set of world-histories satisfying NOT(drop ball). Of course, by throwing this piece of world model out the window, and only allowing yourself to compensate for its absence with measures, you do make measures indispensable. The problem with what you were saying is in the connotation of measure somehow being the magical world-modeling juice, which it's not. (That is, I don't necessarily disagree, but I don't want this particular solution of using measure to be seen as directly answering the question of predictability, since it can be understood as a curiosity-stopping mysterious answer by someone insufficiently careful.)
Replies from: Roko↑ comment by Roko · 2010-06-24T13:03:14.656Z · LW(p) · GW(p)
I don't see what the problem is with using measures over world histories as a solution to the problem of predictability.
If certain histories have relatively very high measure, then you can use that fact to derive useful predictions about the future from a knowledge of the present.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-24T13:39:39.643Z · LW(p) · GW(p)
I don't see what the problem is with using measures over world histories as a solution to the problem of predictability.
It's not a generally valid solution (there are solutions that don't use measures), though it's a great solution for most purposes. It's just that using measures is not a necessary condition for consequentialist decision-making, and I found that thinking in terms of measures is misleading for the purposes of understanding the nature of control.
You said:
Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future
Replies from: Roko
↑ comment by Roko · 2010-06-24T11:10:51.356Z · LW(p) · GW(p)
First, it would anthropically explain why this universe's laws and constants appear to be well-suited to complex structures including observers
But smaller ensembles could also explain this, such as chaotic inflation and the string landscape.
↑ comment by timtyler · 2010-06-23T10:28:02.579Z · LW(p) · GW(p)
"Infinite ethics" is surely a non-problem for individuals - since an individual agent can only act locally. Things that are far away are outside the agent's light cone.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-06-23T10:47:02.984Z · LW(p) · GW(p)
This is an all-possible-worlds-exist philosophy. There are an infinite number of worlds where there are entities which are subjectively identical to you and cognitively similar enough that they will make the same decision you make, for the same reasons. When you make a choice, all those duplicates make the same choice, and there are consequences in an infinity of worlds. So there's a fuzzy neoplatonic idea according to which you identify yourself with the whole equivalence class of subjective duplicates to which you belong.
But I believe there's an illusion here and for every individual, the situation described actually reduces to an individual making a decision and not knowing which possible world they're in. There is no sense in which the decision by any one individual actually causes decisions in other worlds. I postulate that there is no decision-theoretic advantage or moral imperative to indulging the neoplatonic perspective, and if you try to extract practical implications from it, you won't be able to improve on the uncertain-single-world approach.
Replies from: timtyler↑ comment by timtyler · 2010-06-23T10:53:00.607Z · LW(p) · GW(p)
Re: "There are an infinite number of worlds"
By hypothesis. There is no evidence for any infinities in nature. Agents need not bother with infinity when making decisions or deciding what the right thing to do is. As, I think, you go on to say.
Replies from: Mitchell_Porter, wedrifid↑ comment by Mitchell_Porter · 2010-06-23T11:47:19.246Z · LW(p) · GW(p)
By hypothesis.
I agree. I was paraphrasing what ata and Roko were talking about. I think it's a hypothesis worth considering. There may be a level of enlightenment beyond which one sees that the hypothesis is definitely true, definitely false, definitely undecidable, or definitely irrelevant to decision-making, but I don't know any of that yet.
There is no evidence for any infinities in nature. Agents need not bother with infinity when making decisions or deciding what the right thing to do is.
I think, again, that we don't actually know any of that yet. Epistemically, there would appear to be infinitely many possibilities. It may be that a rational agent does need to acknowledge and deal with this fact somehow. For example, maximizing utility in this situation may require infinite sums or integrals of some form (the expected utility of an action being the sum, across all possible worlds, of its expected utility in each such world times the world's a priori probability). Experience with halting probabilities suggests that such sums may be uncomputable, even supposing you can rationally decide on a model of possibility space and on a prior, and the best you can do may be some finite approximation. But ideally one would want to show that such finite methods really do approximate the unattainable infinite, and in this sense the agent would need to "bother with infinity", in order to justify the rationality of its procedures.
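In symbols, the sum in question is just

$$EU(a) \;=\; \sum_{w \in \mathcal{W}} P(w)\, EU(a \mid w),$$

with $\mathcal{W}$ ranging over the possible worlds under consideration; when $\mathcal{W}$ is infinite, nothing guarantees that this converges, is computable, or is well approximated by any particular finite truncation.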
As for evidence of infinities within this world, observationally we can only see a finite distance in space and time, but if the rationally preferred model of the world contains infinities, then there is such evidence. I see this as primarily a quantum gravity question and so it's in the process of being answered (by the ongoing, mostly deductive examination of the various available models). If it turns out, let us say, that gravity and quantum mechanics imply string theory, and string theory implies eternal inflation, then you would have a temporal infinity implied by the finite physical evidence.
Replies from: timtyler↑ comment by timtyler · 2010-06-23T13:33:08.734Z · LW(p) · GW(p)
There's no temporal infinity without spatial infinity (instead you typically get eternal return). There's incredibly weak evidence for spatial infinity - since we can only see the nearest 13 billion light years - and that's practically nothing compared to infinity.
The situation is that we don't know with much certainty whether the world is finite or infinite. However, if an ethical system suggests people behave very differently here and now depending on the outcome of such abstract metaphysics, I think that ethical system is probably screwed.
↑ comment by Roko · 2010-06-23T09:46:02.719Z · LW(p) · GW(p)
fact-based consequentialism would degenerate into emotivism, or at least that it must incorporate a significant emotivist component in determining who and what is terminally valued.
If you are feeling this, then you are waking up to moral antirealism. Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders). Looks like you've taken the red pill.
Replies from: ata↑ comment by ata · 2010-06-23T21:08:42.591Z · LW(p) · GW(p)
Reason alone is simply insufficient to determine what your values are (though it weeds out inconsistencies and thus narrows the set of possible contenders).
I was already well aware of that, but spending a lot of time thinking about Very Big Worlds (e.g. Tegmark's multiverses, even if no more than one of them is real) made even my already admittedly axiomatic consequentialism start seeming inconsistent (and, worse, inconsequential) — that if every possible observer is having every possible experience, and any causal influence I exert on other beings is canceled out by other copies of them having opposite experiences, then it would seem that the only thing I can really do is optimize my own experiences for my own sake.
I'm not yet confident enough in any of this to say that I've "taken the red pill", but since, to be honest, that originally felt like something I really really didn't want to believe, I've been trying pretty hard to leave a line of retreat about it, and the result was basically this. Even if I were convinced that every possible experience were being experienced, I would still care about people within my sphere of causal influence — my current self is not part of most realities and cannot affect them, but it may as well have a positive effect on the realities it is part of. And if I'm to continue acting like a consequentialist, then I will have to value beings that already exist, but not intrinsically value the creation of new beings, and not act like utility is a single universally-distributed quantity, in order to avoid certain absurd results. Pretty much how I already felt.
And even if I'm really only doing this because it feels good to me... well, then I'd still do it.
Replies from: Roko↑ comment by Roko · 2010-06-23T22:29:08.282Z · LW(p) · GW(p)
Consequentialism is certainly threatened by big worlds. The fix of trying to help only those within your sphere of influence is more like a sort of deontological "desire to be a consequentialist even though it's impossible" that just won't go away. It is an ugly hack that ought not to work.
One concrete problem is that we might be able to acausally influence other parts of the multiverse.
Replies from: ata↑ comment by ata · 2010-06-23T22:34:13.197Z · LW(p) · GW(p)
One concrete problem is that we might be able to acausally influence other parts of the multiverse.
Could you elaborate on that?
Replies from: Roko↑ comment by Roko · 2010-06-23T22:38:15.992Z · LW(p) · GW(p)
We might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.
Replies from: AlephNeil, ata↑ comment by AlephNeil · 2010-06-24T10:42:39.515Z · LW(p) · GW(p)
We might, for example, influence other causally disconnected places by threatening them with punishment simulations. Or they us.
How? And how would we know if our threats were effective?
Replies from: Roko↑ comment by Roko · 2010-06-24T11:07:12.834Z · LW(p) · GW(p)
Details, details. I don't know whether it is feasible, but the point is that this idea of saving consequentialism by defining a limited sphere of consequence and hoping that it is finite is brittle: facts on the ground could overtake it.
Replies from: AlephNeil↑ comment by AlephNeil · 2010-06-24T11:30:30.594Z · LW(p) · GW(p)
Ah, I see.
Having a 'limited sphere of consequence' is actually one of the core ideas of deontology (though of course they don't put it quite like that).
Speaking for myself, although it does seem like an ugly hack, I can't see any other way of escaping the paranoia of "Pascal's Mugging".
Replies from: Roko↑ comment by Roko · 2010-06-24T23:47:33.945Z · LW(p) · GW(p)
Well, one way is to have a bounded utility function. Then Pascal's Mugging is not a problem.
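As a toy illustration (nobody's actual proposal; the saturating shape and the numbers are arbitrary choices), a bounded utility caps what any mugger's claim can be worth:

```python
import math

def bounded_utility(lives_saved, bound=1000.0):
    """Toy bounded utility: grows with lives saved but saturates at `bound`.
    The tanh shape is arbitrary; any saturating function makes the same point."""
    return bound * math.tanh(lives_saved / bound)

def value_of_paying_the_mugger(claimed_lives, credence):
    """Expected utility of paying, ignoring the small fixed cost of the wallet."""
    return credence * bounded_utility(claimed_lives)

# With an unbounded utility, credence * claimed_lives can be made arbitrarily
# large by inflating the claim.  With the bound, it can never exceed
# credence * bound (here 1e-10 * 1000 = 1e-7), however big the claim gets.
for claim in (10**3, 10**9, 10**100):
    print(claim, value_of_paying_the_mugger(claim, credence=1e-10))
```

However astronomical the claimed payoff, the expected value of paying never exceeds credence times the bound, so inflating the claim stops being a winning move for the mugger.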
Replies from: AlephNeil↑ comment by AlephNeil · 2010-06-25T09:22:03.391Z · LW(p) · GW(p)
Certainly, but how is a bounded utility function anything other than a way of sneaking in a 'delimited sphere of consequence', except that perhaps the 'sphere' fades out gradually, like a Gaussian rather than a uniform distribution?
To be clear, we should disentangle the agent's own utility function from what the agent thinks is ethical. If the agent is prepared to throw ethics to the wind then it's impervious to Pascal's Mugging. If the agent is a consequentialist who sees ethics as optimization of "the universe's utility function" then Pascal's Mugging becomes a problem, but yes, taking the universe to have a bounded utility function solves the problem. But now let's see what follows from this. Either:
1. We have to 'weight' people 'close to us' much more highly than people far away when calculating which of our actions are 'right'. So in effect, we end up being deontologists who say we have special obligations towards friends and family that we don't have towards strangers. (Delimited sphere of consequence.)
2. If we still try to account for all people equally regardless of their proximity to us, and still have a bounded utility function, then upon learning that the universe is Vast (with, say, Graham's number of people in it) we infer that the universe is 'morally insensitive' to the deaths of huge numbers of people, whoever they are. Suppose we escape Pascal's Mugging by deciding that, in such a vast universe, a 1/N chance of M people dying is something we can live with (for some M >> N >> 1). Then if we knew for sure that the universe was Vast, we ought to be able to 'live with' a certainty of M/N people dying (see the sketch below). And if we're denying that it makes a moral difference how close these people are to us, then these M/N people may as well include, say, the citizens of one of Earth's continents. So then if a mad tyrant gives you perfect assurance that they will nuke South America unless you give them your Mars bar (and perfect assurance that they won't if you do), then apparently you should refuse to hand it over (on pain of inconsistency with your response to Pascal's Mugging).
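One way to make the move from the gamble to the certain loss explicit is via expected numbers of deaths, which are equal in the two cases:

$$\mathbb{E}[\text{deaths} \mid \text{accept the } 1/N \text{ gamble}] \;=\; \frac{1}{N}\cdot M \;=\; \frac{M}{N} \;=\; \text{deaths under the certain loss},$$

so a utility function that (over this range) responds only to the expected number of deaths has to treat the two alike; a utility function that is strongly risk-averse or risk-loving over this range could still distinguish them, which is an assumption worth flagging.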
↑ comment by Roko · 2010-06-25T11:02:50.810Z · LW(p) · GW(p)
To answer (2), your utility function can have more than one reason to value people not dying. For example, you could have one component of utility for the total number of people alive, and another for the fraction of people who lead good lives. Since having their lives terminated decreases the quality of life, killing those people would make a difference to the average quality of life across the multiverse, if the multiverse is finite.
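Schematically, the sort of thing I have in mind (the particular functional form is just for illustration):

$$U \;=\; f(\text{number of people alive}) \;+\; g\!\left(\frac{\text{number of people leading good lives}}{\text{number of people alive}}\right),$$

with $f$ and $g$ both bounded. In a Vast but finite multiverse $f$ may be effectively saturated, but killing people, or degrading their lives, still moves the fraction inside $g$ by a nonzero amount, so the deaths still register.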
If the multiverse is infinite, then something like "caring about people close to you" is required for consequentialism to work.
Replies from: Roko↑ comment by ata · 2010-06-23T22:52:19.822Z · LW(p) · GW(p)
Still not sure how that makes sense. The only thing I can think of that could work is us simulating another reality and having someone in that reality happen to say "Hey, whoever's simulating this realty, you'd better do x or we'll simulate your reality and torture all of you!", followed by us believing them, not realizing that it doesn't work that way. If the Level IV Multiverse hypothesis is correct, then the elements of this multiverse are unsupervised universes; there's no way for people in different realities to threaten each other if they mutually understand that. If you're simulating a universe, and you set up the software such that you can make changes in it, then every time you make a change, you're just switching to simulating a different structure. You can push the "torture" button, and you'll see your simulated people getting tortured, but that version of the reality would have existed (in the same subjunctive way as all the others) anyway, and the original non-torture reality also goes on subjunctively existing.
Replies from: Vladimir_Nesov, Roko↑ comment by Vladimir_Nesov · 2010-06-24T10:05:41.626Z · LW(p) · GW(p)
You don't grok UDT control. You can control the behavior of fixed programs, programs that completely determine their own behavior.
Take a "universal log program", for example: it enumerates all programs, for each program enumerates all computational steps, on all inputs, and writes all that down on an output tape. This program is very simple, you can easily give a formal specification for it. It doesn't take any inputs, it just computes the output tape. And yet, the output of this program is controlled by what the mathematician ate for breakfast, because the structure of that decision is described by one of the programs logged by the universal log program.
Take another look at the UDT post, keeping in mind that the world-programs completely determine what the world is, they don't take the agent as a parameter, and world-histories are alternative behaviors for those fixed programs.
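For concreteness, a minimal sketch of such a universal log program as a dovetailer (the helper `run_for_steps`, which runs program `i` on input `x` for `t` steps of some fixed universal machine and reports its state, is assumed rather than implemented here):

```python
from itertools import count

def universal_log(run_for_steps):
    """Dovetail over every (program index, input, step count) triple exactly
    once and yield the simulated machine's state at that point.
    `run_for_steps(i, x, t)` is an assumed helper: run program i on input x
    for t steps of a fixed universal machine and return the resulting state."""
    for n in count(0):                  # n = i + x + t
        for i in range(n + 1):          # program index
            for x in range(n + 1 - i):  # input, encoded as a natural number
                t = n - i - x           # number of steps simulated so far
                yield (i, x, t, run_for_steps(i, x, t))
```

The program takes no input from our world, yet what ends up on its tape includes, somewhere, a step-by-step trace of whatever computation the mathematician's breakfast decision turns out to be.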
Replies from: AlephNeil↑ comment by AlephNeil · 2010-06-24T11:05:00.791Z · LW(p) · GW(p)
OK, so you're saying that A, a human in 'the real world', acausally (or ambiently if you prefer) controls part of the output tape of this program P that simulates all other programs.
I think I understand what you mean by this: Even though the real world and this program P are causally disconnected, the 'output log' of each depends on the 'Platonic' result of a common computation - in this case the computation where A's brain selects a choice of breakfast. Or in other words, some of the uncertainty we have about both the real world and P derives from the logical uncertainty about the result of that 'Platonic' computation.
Now if you identify "yourself" with the abstract computation then you can say that "you" are controlling both the world and P. But then aren't you an 'inhabitant' of P just as much as you're an inhabitant of the world? On the other hand, if you specifically identify "yourself" with a particular chunk of "the real world" then it seems a bit misleading to say that "you" ambiently control P, given that "you" are yourself ambiently controlled by the abstract computation which is controlling P.
Perhaps this is only a 'semantic quibble' but in any case I can't see how ambient control gets us any nearer to being able to say that we can threaten 'parallel worlds' causally disjoint from "the real world", or receive responses or threats in return.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-24T11:27:32.325Z · LW(p) · GW(p)
Now if you identify "yourself" with the abstract computation then you can say that "you" are controlling both the world and P. But then aren't you an 'inhabitant' of P just as much as you're an inhabitant of the world?
Sure, you can read it this way, but keep in mind that P is very simple, doesn't have you as an explicit "part", and you'd need to work hard to find the way in which you control its output (find a dependence). This dependence doesn't have to be found in order to compute P; it is something external, the way you interpret P.
I agree (maybe, in the opposite direction) that causal control can be seen as an instance of the same principle, and so the sense in which you control "your own" world is no different from the sense in which you control the causally unconnected worlds. The difference is syntactic: the representation of "your own world" specifies you as a part explicitly, while to "find yourself" in a "causally unconnected world", you need to do a fair bit of inference.
Note that since the program P is so simple, the results of abstract analysis of its behavior can be used to make decisions, by anyone. These decisions will be controlled by whoever wants them controlled, and logical uncertainty often won't allow us to rule out the possibility that a given program X controls a conclusion Y made about the universal log program P. This is one way to establish mutual dependence between most "causally unconnected" worlds: have them analyze P.
When a world program isn't presented as explicitly depending on an agent (as in causal control), you can have logical uncertainty about whether a given agent controls a given world, which makes it necessary to consider the possibility of more agents potentially controlling more worlds.
comment by Douglas_Knight · 2010-06-22T18:37:06.611Z · LW(p) · GW(p)
The question is awfully close to the reality juice of many worlds. We seem to treat reality juice as probability for decision theory, and thus we should value the copies linearly, if they are as good as the copies QM gives us.
comment by Vladimir_Golovin · 2010-06-22T14:13:48.026Z · LW(p) · GW(p)
I want at least 11 copies of myself with full copy-copy / world-world interaction. This is a way of scaling myself. I'd want the copies to diverge -- actually that's the whole point (each copy handles a different line of work.) I'm mature enough, so I'm quite confident that the copies won't diverge to the point when their top-level values / goals would become incompatible, so I expect the copies to cooperate.
As for how much I'm willing to work for each copy, that's a good question. A year of pickaxe trench-digging seems to be way too cheap and easy for a fully functioning copy. On the other hand, if I want 11 copies, that's 11 years of pickaxing. So there's a risk that I'd lose all sense of purpose in the process, and deteriorate mentally and physically. The quality of the copies would also deteriorate, since the original me would degrade over the accumulating years of pickaxing.
Regarding the copy/copy interaction: currently I see little value in non-interacting copies locked in their worlds, other than scaling up my ability to safely explore the world and learn from 'my own' mistakes (which assumes that their worlds must diverge.) BTW, does your constraint of no copy-copy interaction mean that I myself can't interact with the copies?
As for any longer-term implications, I can't say anything deep -- I haven't put much thought into this.
(This idea first occurred to me about 7 years ago, way before I started to read OB / LW, or anything like that. I was overloaded with work and surrounded with uncooperative employees and co-founders lacking domain experience.)
Replies from: Kingreaper, NancyLebovitz↑ comment by Kingreaper · 2010-06-22T14:30:31.560Z · LW(p) · GW(p)
Actually to get 11 yous (or indeed 16 yous) in your scenario would take only 4 years of pickaxing.
After year 1 there are two of you. Both keep pickaxing. After year 2 there are four of you. After year 3, 8. After year 4, 16.
(This could be sped up slightly if you could add together the work of different copies. With that addition you'd have 11 copies in just over 3 years.)
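A quick check of that arithmetic (assuming each new copy starts pickaxing the instant it exists, and that partial person-years of pooled work carry over):

```python
from fractions import Fraction

# Without pooling: every existing copy digs a full year to produce one new
# copy, so the population doubles each year: 2, 4, 8, 16 after years 1-4.
population = 1
for year in range(1, 5):
    population *= 2
    print(f"after year {year}: {population} of you ({population - 1} copies)")

# With pooling: a new copy appears whenever one cumulative person-year of
# work has been done, and it immediately joins the diggers, so the k-th
# copy arrives 1/k of a year after the (k-1)-th one.
elapsed = Fraction(0)
for k in range(1, 12):          # copies 1 through 11
    elapsed += Fraction(1, k)
print(f"11 pooled copies after about {float(elapsed):.2f} years")
```

The pooled schedule comes out to roughly 3.02 years, consistent with the "just over 3 years" figure above.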
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2010-06-22T14:35:04.213Z · LW(p) · GW(p)
Yes, but this assumes that the contract allows the copies to pickaxe. If it does, I think I'd take the deal.
Replies from: Kingreaper↑ comment by Kingreaper · 2010-06-22T14:38:55.002Z · LW(p) · GW(p)
If the contract arbitrarily denies my copies rights, then I'm not sure I want to risk it at all.
I mean, what if I've missed that it also says "your copies aren't allowed to refuse any command given by Dr. Evil"?
Now if my copies simply CAN'T pickaxe, what with being non-physical, that's fair enough. But the idea seemed to be that the copies had full worlds in which they lived; in which case within their world they are every bit physical.
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2010-06-22T14:48:09.167Z · LW(p) · GW(p)
A contract denying specifically the right to contribute man-hours towards my share of pickaxing and no other rights would be fine with me. I'd have to watch the wording though. As for missing anything when reading it, such a contract will get very, very serious examination by myself and the best lawyers I can get.
Replies from: JenniferRM↑ comment by JenniferRM · 2010-06-22T16:33:57.364Z · LW(p) · GW(p)
That would be a pretty big "original privilege" :-)
Generally, when I think about making copies, I assume that the status of being "original" would be washed away and I would find myself existing with some amount of certainty (say 50% to 100%) that I was the copy. Then I try to think about how I'd feel about having been created by someone who has all my memories/skills/tendencies/defects but has a metaphysically arbitrary (though perhaps emotionally or legally endorsed) claim to being "more authentic" than me by virtue of some historical fact of "mere physical continuity".
I would only expect a copy to cooperate with my visions for what my copy "should do" if I'm excited by the prospect of getting to do that - if I'm kinda hoping that after the copy process I wake up as the copy because the copy is going to have a really interesting life.
In practice, I would expect that what I'd really have to do is write up two "divergence plans" for each future version of me, that seem equally desirable, then copy, then re-negotiate with my copy over the details of the divergence plans (because I imagine the practicalities of two of us existing might reveal some false assumptions in the first draft of the plans), and finally we'd flip a coin to find out which plan each of us is assigned to.
I guess... If only one of us gets the "right of making more copies" I'd want the original contract to make "copyright" re-assignable after the copying event, so I could figure out whether "copyright" is more of a privilege or a burden, and what the appropriate compensation is for taking up the burden or losing the privilege.
ETA: Perhaps our preferences would diverge during negotiation? That actually seems like something to hope for, because then a simple cake cutting algorithm could probably be used to ensure the assignment to a divergence plan was actually a positive-sum interaction :-)
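A sketch of the two-person version I have in mind (divide-and-choose over the plans; the value functions are whatever each party reports, and the brute-force search over splits only makes sense for a handful of plans):

```python
from itertools import combinations

def divide_and_choose(plans, divider_value, chooser_value):
    """Two-party divide-and-choose over a set of divergence plans.
    The divider proposes the split it considers most even by its own values;
    the chooser takes whichever side it prefers and the divider keeps the
    other, so the chooser gets a bundle it weakly prefers and the divider
    gets one it considers as close to half the total value as possible."""
    plans = frozenset(plans)
    splits = ((frozenset(side), plans - frozenset(side))
              for r in range(len(plans) + 1)
              for side in combinations(list(plans), r))
    left, right = min(splits,
                      key=lambda s: abs(divider_value(s[0]) - divider_value(s[1])))
    if chooser_value(left) >= chooser_value(right):
        return right, left   # (divider's bundle, chooser's bundle)
    return left, right       # (divider's bundle, chooser's bundle)
```

If our valuations have diverged at all, the chooser typically values its bundle at more than half, which is the positive-sum part.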
↑ comment by NancyLebovitz · 2010-06-22T14:34:31.151Z · LW(p) · GW(p)
Presumably, each copy of you would also want to be part of a copy group, so if the year of pickaxe trench-digging seems to be a good idea at the end of it, your copy will be willing to also put in a year.
Now we get to the question of whether you can choose the time of when the copy is made. You'd probably want a copy from before the year of trenching.
If you have to make copies of your current moment, then one of you would experience two consecutive years of trenching.
The good news is that the number of you doubles each year, so each of you only has to do 4 or 5 years to get a group of 12.
comment by orthonormal · 2010-06-22T22:21:37.232Z · LW(p) · GW(p)
It depends on external factors, since it would primarily be a way of changing anthropic probabilities (I follow Bostrom's intuitions here). If I today committed to copy myself an extra time whenever something particularly good happened to me (or whenever the world at large took a positive turn), I'd expect to experience a better world from now on.
If I couldn't use copying in that way, I don't think it would be of any value to me.
comment by Vladimir_Nesov · 2010-06-22T13:44:31.252Z · LW(p) · GW(p)
This question is no good. Would you choose to untranslatable-1 or untranslatable-2? I very much doubt that reliable understanding of this can be reached using human-level philosophy.
Replies from: Roko, Nisan↑ comment by Roko · 2010-06-22T16:03:50.127Z · LW(p) · GW(p)
I think it is clear what a copy of you in its own world is. Just copy, atom-for-atom, everything in the solar system, and put the whole thing in another part of the universe such that it cannot interact with the original you. If copying the other people bothers you, just consider the value of the copy of you itself, ignoring the value or disvalue of the other copies.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-22T17:04:09.868Z · LW(p) · GW(p)
It's clear what the situations you talk about are, but these are not the kind of situation your brainware evolved to morally estimate. (This is not the case of a situation too difficult to understand, nor is it a case of a situation involving opposing moral pressures.) The "untranslatable" metaphor was intended to be a step further than you interpreted (which is more clearly explained in my second comment).
Replies from: Roko↑ comment by Roko · 2010-06-22T18:19:10.631Z · LW(p) · GW(p)
oh ok. But the point of this post and the followup is to try to make inroads into morally estimating this, so I guess wait until the sequel.
Replies from: Wei_Dai, Vladimir_Nesov↑ comment by Wei Dai (Wei_Dai) · 2010-06-22T23:11:06.917Z · LW(p) · GW(p)
Roko, have you seen my post The Moral Status of Independent Identical Copies? There are also some links in the comments of that post to earlier discussions.
↑ comment by Vladimir_Nesov · 2010-06-22T18:29:17.305Z · LW(p) · GW(p)
Will see. I just have very little hope for progress being made on this particular dead horse. I offered some ideas about how it could turn out that progress can't, in principle, be made on this question (and some similar ones) at the human level.
Replies from: orthonormal↑ comment by orthonormal · 2010-06-22T22:27:12.817Z · LW(p) · GW(p)
Can you call this particular issue a 'dead horse' when it hasn't been a common subject of argument before? (I mean, most of the relevant conversations in human history hadn't gone past the sophomoric question of whether a copy of you is really you.)
If you're going to be pessimistic on the prospect of discussion, I think you'd at very least need a new idiom, like "Don't start beating a stillborn horse".
Replies from: wedrifid↑ comment by Nisan · 2010-06-22T14:01:34.241Z · LW(p) · GW(p)
What kind of philosophy do we need, then?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-06-22T14:44:03.791Z · LW(p) · GW(p)
This is a question about moral estimation. Simple questions of moral estimation can be resolved by observing reactions of people to situations which they evolved to consider: to save vs. to eat a human baby, for example. For more difficult questions involving unusual or complicated situations, or situations involving contradicting moral pressures, we simply don't have any means for extraction of information about their moral value. The only experimental apparatus we have are human reactions, and this apparatus has only so much resolution. Quality of theoretical analysis of observations made using this tool is also rather poor.
To move forward, we need better tools, and better theory. Both could be obtained by improving humans, by making smarter humans that can consider more detailed situations and perform moral reasoning about them. This is not the best option, since we risk creating "improved" humans that have slightly different preferences, and so moral observations obtained using the "improved" humans will be about their preferences and not ours. Nonetheless, for some general questions, such as the value of copies, I expect that the answers given by such instruments would also be true of our own preferences.
Another way is of course to just create a FAI, which will necessarily be able to do moral estimation of arbitrary situations.
comment by Nisan · 2010-06-22T12:27:42.950Z · LW(p) · GW(p)
Will the worlds be allowed to diverge, or are they guaranteed to always be identical?
Replies from: Roko↑ comment by Roko · 2010-06-22T15:56:47.953Z · LW(p) · GW(p)
Consider both cases.
The case where they are allowed to diverge, but the environment they are in is such that none of the copies end up being "messed up", e.g. zero probability of becoming a tramp, drug addict, etc, seems more interesting.
Replies from: Cyan↑ comment by Cyan · 2010-06-23T01:59:44.623Z · LW(p) · GW(p)
In my response to the poll, I took the word "identical" to mean that no divergence was possible (and thus, per Kingreaper and Morendil, the copy was of no value to me). If divergence were possible, then my poll responses would be different.
comment by RobinZ · 2010-06-22T12:19:40.551Z · LW(p) · GW(p)
...non-interacting? Why?
Replies from: SilasBarta, Roko↑ comment by SilasBarta · 2010-06-22T12:39:32.559Z · LW(p) · GW(p)
So Mitchell Porter doesn't start talking about monads again.
↑ comment by Roko · 2010-06-22T12:22:55.095Z · LW(p) · GW(p)
That's just stipulated.
Replies from: RobinZ↑ comment by RobinZ · 2010-06-22T12:39:52.658Z · LW(p) · GW(p)
But the stipulation as stated leads to major problems - for instance:
each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction
implies that I'm copying the entire world full of people, not just me. That distorts the incentives.
Edit: And it also implies that the copy will not be useful for backup, as whatever takes me out is likely to take it out.
Replies from: Roko, Roko↑ comment by Roko · 2010-06-22T16:00:56.671Z · LW(p) · GW(p)
And it also implies that the copy will not be useful for backup, as whatever takes me out is likely to take it out.
For the moment, consider the case where the environment that each copy is in is benign, so there is no need for backup.
I'm just trying to gauge the terminal value of extra, non-interacting copies.
↑ comment by Roko · 2010-06-22T15:59:17.990Z · LW(p) · GW(p)
If that bothers you, consider that the other people in the world are either new (they don't exist elsewhere) or nonsentient. In the case of the other people being new, the copies would have to diverge. But consider (as I said in another comment) the case where the environment controls the divergence to not be that axiologically significant, i.e. none of the copies end up "messed up".
comment by lukstafi · 2010-06-27T22:00:20.504Z · LW(p) · GW(p)
No value at all: to answer "how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)"
Existence of worlds that are not causally related to me should not influence my decisions (I learn from the past and I teach the future: my world cone is my responsibility). I decide by considering whether the world that I create/allow my copy (or child) to exist in is better off (according to myself -- my "better" is my best approximation to the "objective better", if there is any) because of the copy or not. I do not even have a causal voice in the shape of the world in question; it is already postulated in a fixed way. Even more strongly, insofar as it is wasted computation in some other world, its value is negative. (If the results of this simulation are "consumed by someone", it need not be "wasted".)
Disclaimer: I haven't followed the discussion too closely.
comment by AlephNeil · 2010-06-23T11:13:39.947Z · LW(p) · GW(p)
With this kind of question I like to try to disentangle 'second-order effects' from the actual core of what's being asked, namely whether the presence of these copies is considered valuable in and of itself.
So for instance, someone might argue that "lock-step copies" in a neighboring galaxy are useful as back-ups in case of a nearby gamma-ray burst or some other catastrophic system crash. Or that others in the vicinity who are able to observe these "lock-step copies" without affecting them will nevertheless benefit in some way (so, the more copies, the more people can see them). Or simply that having "lock-step copies" is good because we don't need to keep them in lock-step. These are all sensible responses but they miss the point.
To me it seems absolutely obvious that having extra copies serves no purpose at all. E.g. if I were a great composer, and I wrote an outstanding symphony, what good would it be if my copy wrote the very same symphony? Actually, this example illustrates my entire approach to moral questions - that the essence of goodness is the creation of things that are beautiful and/or profound. Beauty and profundity depend on the 'information content' of whatever is created, and a universe containing two copies of X contains only negligibly more information than a universe with a single copy of X.
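(In Kolmogorov-complexity terms, which is roughly the notion of 'information content' I'm leaning on here:

$$K(XX) \;\le\; K(X) + O(1),$$

since a program that prints X twice is only a constant number of bits longer than one that prints it once; duplication adds essentially no information.)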
If we're only talking about 'statistically identical copies' then of course the situation is different. One can imagine two 'statistically identical' copies of Beethoven circa 1800 writing rather similar first symphonies but radically different ninth symphonies.
comment by Eneasz · 2010-06-22T21:47:56.502Z · LW(p) · GW(p)
Can you specify if the copy of me I'm working to create is Different Everett-Branch Me or Two Days In The Future Me? That will affect my answer, as I have a bit of a prejudice. I know it's somewhat inconsistent, but I think I'm an Everett-Branch-ist.
Replies from: wedrifid↑ comment by wedrifid · 2010-06-22T21:55:47.426Z · LW(p) · GW(p)
Can you specify if the copy of me I'm working to create is Different Everett-Branch Me
Don't you create a bajillion of those every second anyway? You'd want to be getting more than one for a day's work in the trench. Heck, you get at least one new Different Everett-Branch you in the process of deciding whether or not to work for a new 'you'. Hopefully you're specifying just how big a slice of Everett pie the new you gets!
Replies from: Eneasz↑ comment by Eneasz · 2010-06-23T00:42:19.264Z · LW(p) · GW(p)
Well the question seems to assume that this isn't really the case, or at least not in any significantly meaningful way. Otherwise why ask? Maybe it's a lead-in to "if you'd work for 1 hour to make another you-copy, why won't you put in X-amount of effort to slightly increase your measure"?
comment by thomblake · 2010-06-22T21:02:36.206Z · LW(p) · GW(p)
It's a difficult question to answer without context. I would certainly work for some trivial amount of time to create a copy of myself, if only because there isn't such a thing already. It would be valuable to have a copy of a person, if there isn't such a thing yet. And it would be valuable to have a copy of myself, if there isn't such a thing yet. After those are met, I think there are clearly diminishing returns, at least because you can't cash in on the 'discovery' novelty anymore.
comment by Jonathan_Graehl · 2010-06-22T17:37:23.376Z · LW(p) · GW(p)
If my copies can make copies of themselves, then I'm more inclined to put in a year's work to create the first one. Otherwise, I'm no altruist.
Replies from: orthonormal↑ comment by orthonormal · 2010-06-22T22:34:15.975Z · LW(p) · GW(p)
Given that they're identical copies, they'll only make further copies if you make more copies. Sorry.
Replies from: DSimon↑ comment by DSimon · 2010-06-23T01:44:00.746Z · LW(p) · GW(p)
Well, they'll make more copies if they're a copy of you from before you put in the year's work.
Replies from: orthonormal↑ comment by orthonormal · 2010-06-24T05:46:38.892Z · LW(p) · GW(p)
Clever; I should have thought of that.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-06-24T21:05:10.849Z · LW(p) · GW(p)
You're both clever! In any case, if I can repeat the offer serially and the copy is from after the work, then there will actually be 2^N of me if I do it N times. Obviously there are an infinite number of me if the copy is from before.