Lifeism, Anti-Deathism, and Some Other Terminal-Values Rambling
post by Pavitra · 2011-03-07T04:35:49.416Z · LW · GW · Legacy · 89 comments
(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)
You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.
Do you push the button?
I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.
Actually, that's an oversimplification of my position. I actually believe that the important part of any algorithm is its output, additional copies matter not at all, the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities, and the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption, so I'm indifferent.
But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.
89 comments
Comments sorted by top scores.
comment by Mitchell_Porter · 2011-03-08T00:03:14.175Z · LW(p) · GW(p)
I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.
Copies are people, right? They're just like you. In this case, they're exactly like you, until your experiences start to diverge. And you know that people don't like slavery, and they especially don't like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (Example, as if one were needed: in Egypt right now, they're calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)
Consider, then, that these copies of you, who you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop, and they would even have a motivation beyond their own suffering, namely the moral imperative of stopping you from doing this to even further copies.
Has none of this occurred to you? Or does it truly not matter in your private moral calculus?
Replies from: Raemon, Pavitra↑ comment by Raemon · 2011-03-08T04:44:54.574Z · LW(p) · GW(p)
The "it's okay to kill copies" thing has never made any sense to me either. The explanation that often accompanies it is "well they won't remember being tortured", but that's the exact same scenario for ALL of us after we die, so why are copies an exception to this?
Would you willingly submit yourself to torture for the benefit of some abstract, "extra" version of you? Really? Make a deal with a friend to pay you $100 for every hour of waterboarding you subject yourself to. See how long this seems like a good idea.
Replies from: Broggly, Pavitra↑ comment by Broggly · 2011-03-10T22:29:45.016Z · LW(p) · GW(p)
To my mind, the issue with copies is that it's copies who remain exactly the same that "don't matter"; once you've got a bunch of copies being tortured, they're no longer identical copies and so are different people. Maybe I'm just having trouble with Sleeping Beauty-like problems, but that's only a subjective issue for decision-making (plus I'd rather spend time learning interesting things that won't require me to bite the bullet of admitting that anyone with a suitably sick and twisted mind could Pascal-mug me). Morally, I much prefer 5,000 iterations each of two happy, fulfilled minds to 10,000 of the same one.
Where "Copies" is used isomorphically with "Future versions of you in either MWI or similar realist interpretation of probability theory", then I would certainly subject some of them to torture only for a very large potential gain and small risk of torture. "I" don't like torture, and I'd need a pretty damn big reward for that 1/N longshot to justify a (N-1)/N chance or brutal torture or slavery. This is of course assuming I'm at status quo, if I were a slave or Bagram/Laogai detainee I would try to stay rational and avoid fear making me overly risk averse from escape attempts. I haven't tried to work out my exact beliefs on it, but as said above if I have two options, one saving a life with certainty and the other having a 50% chance of saving two, I'd prefer saving two (assuming they're isolated ie two guys on a lifeboat).
tl; dr, it's a terrible idea in that if you only have the moral authority to condemn copies
Replies from: Raemon↑ comment by Pavitra · 2011-03-08T04:47:46.281Z · LW(p) · GW(p)
People break under torture, so I'd take precautions to ensure that the torture-copy is not allowed to make decisions about whether it should continue. Of course I'm going to regret it. That doesn't change the fact that it's a good idea.
Replies from: Raemon↑ comment by Raemon · 2011-03-08T05:15:54.308Z · LW(p) · GW(p)
Why is this a good idea in any way other than the general position that "torturing other people for your own profit is a good idea so long as you don't care about people?" Most of human history is based around the many being exploited for the benefit of the few. Why is this different?
I suppose people should have the right to willingly submit to torture for some small benefit to another person, which is what you're saying you'd be willing to do. But the fact that a copy gets erased doesn't make the experience any less real, and the fact that an identical copy gets to live doesn't in any way help the copies that were being tortured.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T05:28:49.820Z · LW(p) · GW(p)
It's different because (1) I'm not hurting other people, only myself, and (2) I'm not depriving the world of my victim's potential contributions as a free person.
I don't actually care about the avoidance of torture as a terminal moral value.
Replies from: Snowyowl, DanielLC↑ comment by Snowyowl · 2011-03-08T12:12:54.446Z · LW(p) · GW(p)
(1) I'm not hurting other people, only myself
But after the fork, your copy will quickly become another person, won't he? After all, he's being tortured and you're not, and he is probably very angry at you for making this decision. So I guess the question is: If I donate $1 to charity for every hour you get waterboarded, and make provisions to balance out the contributions you would have made as a free person, would you do it?
Replies from: Pavitra, wedrifid↑ comment by Pavitra · 2011-03-08T18:56:42.200Z · LW(p) · GW(p)
In thought experiment land... maybe. I'd have to think carefully about what value I place on myself as a special case. In practice, I don't believe that you can fully compensate for all of the unknown contributions I might have made to society.
↑ comment by DanielLC · 2011-03-09T23:13:08.562Z · LW(p) · GW(p)
What are your terminal moral values?
Also, why is hurting yourself different from hurting other people? And why is not hurting others a moral value, but not avoidance of torture?
Replies from: Pavitra↑ comment by Pavitra · 2011-03-10T22:22:09.679Z · LW(p) · GW(p)
Hurting others is ethically problematic, not morally. For example, I would probably be okay with hurting someone else at their own request. Avoidance of torture is a question of an entirely different type: what I value, not how I think it's appropriate to go about getting it.
I don't have a formalization of my terminal values, but roughly:
I have noticed that sometimes I feel more conscious than other times -- not just awake/dreaming/sleeping, but between different "awake" times. I infer that consciousness/sentience/sapience/personhood/whatever you want to call it, you know, that thing we care about is not a binary predicate, but a scalar. I want to maximize the degree of personhood that exists in the universe.
Replies from: DanielLC↑ comment by DanielLC · 2011-03-12T17:49:03.839Z · LW(p) · GW(p)
Hurting others is ethically problematic, not morally.
What's the difference between ethics and morals?
I want to maximize the degree of personhood that exists in the universe.
So, if you create a person, and torture them for their entire life, that's worth it?
Replies from: Pavitra↑ comment by Pavitra · 2011-03-12T20:00:35.084Z · LW(p) · GW(p)
What's the difference between ethics and morals?
By morals, I mean terminal values. By ethics, I mean advanced forms of strategy involving things like Hofstadter's superrationality. I'm not sure what the standard LW jargon is for this sort of thing, but I think I remember reading something about deciding as though you were deciding on behalf of everyone who shares your decision theory.
I want to maximize the degree of personhood that exists in the universe.
So, if you create a person, and torture them for their entire life, that's worth it?
If the most conscious person possible would be unhappy, I'd rather create them than not. The consensus among science fiction writers seems to be with me on this: a drug that makes you happy at the expense of your creative genius is generally treated as a bad thing.
Replies from: DanielLC, TheOtherDave↑ comment by DanielLC · 2011-03-13T05:03:16.832Z · LW(p) · GW(p)
By ethics, I mean advanced forms of strategy involving things like Hofstadter's superrationality. I'm not sure what the standard LW jargon is for this sort of thing
Sounds like decision theory.
Replies from: Pavitra↑ comment by TheOtherDave · 2011-03-12T20:10:20.042Z · LW(p) · GW(p)
Do you mean to equate here the degree to which something is a person, the degree to which a person is conscious, and the degree to which a person is a creative genius?
That's what it reads like, but perhaps I'm reading too much into your comment.
That seems unjustified to me.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T04:40:40.634Z · LW(p) · GW(p)
It's not like I'm handing other people over into slavery and torture. I don't have to worry that I'm subconsciously ignoring other people's suffering for my own benefit. I don't see the question as a moral one at all, only one of whether it would be a good idea.
ETA: Also, because at least one copy remains free, I'm not depriving anyone of the chance to live their life.
Replies from: Raemon↑ comment by Raemon · 2011-03-08T05:23:36.362Z · LW(p) · GW(p)
It's not like I'm handing other people over into slavery and torture. I don't have to worry that I'm subconsciously ignoring other people's suffering for my own benefit. I don't see the question as a moral one at all, only one of whether it would be a good idea.
I mostly understand this statement.
ETA: Also, because at least one copy remains free, I'm not depriving anyone of the chance to live their life.
I think this is irrelevant. Each instance of you is choosing to sacrifice their life and happiness, and they are not getting anything in return.
The only way I can see this actually being a good idea is if the utility you gain at least outweighs the utility lost by one copy. The other scenarios you describe sound like good ideas on paper where you don't have to fully process the consequences, but I do not believe for a second that the other-instances-of-you would continue to think this was a good idea when it was their lives on the line.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T05:27:05.555Z · LW(p) · GW(p)
Each instance of you is choosing to sacrifice their life and happiness.
But it's the same me. They wouldn't have done anything with their freedom that I won't with mine.
Replies from: Raemon↑ comment by Raemon · 2011-03-08T05:31:29.210Z · LW(p) · GW(p)
I'm not denying the choice is made willingly. But I do not think there is a difference between willingly enduring torture for a copy of yourself and willingly enduring torture for someone else you happen to like.
Legally, if these circumstances ever became real, I think people should be allowed to create the copies, but they should not be allowed to make decisions for the copies. You are only allowed to hit the "torture" button if you believe that it is you, personally, who will be undergoing that torture.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T05:34:31.057Z · LW(p) · GW(p)
What if I set up the copy-decision-depriving mechanism before I fork myself?
Replies from: Raemon↑ comment by Raemon · 2011-03-08T05:43:51.494Z · LW(p) · GW(p)
Legally, I think people should be allowed to torture themselves. They should not be allowed to torture other people. Legally, I think each copy counts as a person. If you hit the torture button before the copies are made (and then prevent them from changing their minds), you are not just torturing yourself, you are torturing other people.
I do not want to live in a society where sentient creatures are denied the right to escape torture. While it is possible that an individual has worked out a perfect decision theory in which each copy would truly prefer to be tortured, I think many of the people attempting this scenario would simply be short-sighted, and as soon as it became their life on the line, their timeless decision would not seem so wise.
If you really are confident of your willingness to subject yourself to torture for a copy's benefit, fine. But for the sake of the hypothetical millions of copies of people who HAVEN'T actually thought this through, it should be illegal to create slave copies.
Replies from: TheOtherDave, Pavitra↑ comment by TheOtherDave · 2011-03-08T12:32:39.783Z · LW(p) · GW(p)
Hm.
If I willingly submit to be tortured starting tomorrow (say, in exchange for someone I love being released unharmed), don't the same problems arise? After all, once the torture starts I am fairly likely to change my mind. What gives present-me the right to torture an unwilling future-me?
It seems this line of reasoning leads to the conclusion that it's unethical for me to make any decision that I'll regret later, no matter what the reason for my change of heart.
Replies from: Raemon↑ comment by Raemon · 2011-03-08T16:09:03.878Z · LW(p) · GW(p)
I might have been misinterpreting Pavitra's original statement, and may have been unclear about my position.
People should be allowed to torture themselves without ability to change their mind, if they need to. (However, this is something that in real life would happen rarely for extreme reasons. I think that if people start doing that all the time, we should stop and question whether something is wrong with the system).
The key is that you must firmly understand that you, personally, will be getting tortured. I'm okay, I guess, with making the decision to get tortured and then forking yourself. (Although for a small utility gain, I think it's a bad decision.) What I'm not okay with is making the decision to fork yourself, and then having one of your copies get tortured while one of you doesn't. Whoever decides to BEGIN the torture must be aware that they, personally, will never receive any benefit from it.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-08T16:41:18.699Z · LW(p) · GW(p)
Um.
I think I agree with you, but I'm not sure, and I'm not sure if the problem is language or that I'm just really confused.
For the sake of clarity, let's consider a specific hypothetical: Sam is given a button which, if pressed, Sam believes will do two things. First, it will cause there to be two identical-at-the-moment-of-pressing copies of Sam. Second, it will cause one of the copies (call it Sam-X) to suffer a penalty P, and the other copy (call it Sam-Y) to receive a benefit B.
If I've understood you correctly, you would say that for Sam to press that button is an ethical choice, though it might not be a wise choice, depending on the value of (B-P).
Yes?
Replies from: Raemon, Pavitra↑ comment by Raemon · 2011-03-08T16:58:22.644Z · LW(p) · GW(p)
No. I'm not sure whether I think "ethical" is an appropriate word here. (Honestly, I think ethical systems that are designed for real pre-singularity life are almost always going to break down in extreme situations). But basically, I consider the scenario you just described identical to:
Two people are both given a button. If they both press the button, then, one of them will get penalty P, the other will get benefit B.
People are entitled to make decisions like this. But governments (collective groups of people) are also entitled to restrict decisions if those decisions prove to be common and damaging to society. Given how irrational people are about probability (i.e. the lottery), I think there may be many values of P and B for which society should ban the scenario. I wouldn't jump to conclusions about which values of P and B should be banned, I'd have to see how many people actually chose those options and what effect it had on society. (Which is a scientific question, not a logical one).
Pavitra's original statement seemed more along the lines of: a thousand people agree to press a button that will torture all but one of them for a long time. The remaining person gets $100. This is an extremely bad decision on everyone's part. Whether or not it's ethical for the participants, I think that a society that found people making these decisions all the time has a problem and should fix it somehow.
Replies from: TheOtherDave, Pavitra↑ comment by TheOtherDave · 2011-03-08T18:17:45.003Z · LW(p) · GW(p)
I'm not sure I consider the two scenarios identical, but I'm still struggling to construct a model of identity and the value of life that works under cloning. And ultimately I think the differences are relevant.
But I agree that your version raises some of the same questions, so let's start there.
I agree that there are versions of P and B for which it is in everyone's best interests that the button not be pushed. Again, just to be concrete, I'll propose a specific such example: B = $1, P = -$1,000,000. (I'm using $ here to denote a certain amount of constant-value stuff, not just burning a million-dollar bill.)
To press that button is simply foolish, for the same reason that spending $500,000 to purchase $0.50 is foolish. And I agree that Pavitra's proposal is a foolish choice in the same way.
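(To spell out the arithmetic behind that comparison, on the assumption that the two post-fork copies are weighted equally: the expected benefit of pressing is ½ × $1 = $0.50, while the expected penalty is ½ × $1,000,000 = $500,000.)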
And I agree that when a sufficiently costly mistake is sufficiently compelling, we do well to collectively eliminate the choice -- take away the button -- from one another, and that determining when that threshold has been met is basically an empirical question. (I mostly think that words like "government" and "society" confuse the issue here, but I don't disagree with your use of them.)
I'm not sure I agree that these aren't ethical questions, but I'm not sure that matters.
So far, so good.
Where the questions of the nature of identity creep back in for me is precisely in the equation of a thousand copies of me, created on demand, with a thousand existing people. It just stops being quite so clear whose interests are being protected, and from whom, and what kinds of social entities are entitled to make those kinds of laws.
I guess the intuition I am struggling with is that we derive our collective right to restrict one another's individual freedom of choice in part from the collective consequences of our individual choices... that if there truly are no externalities to your behavior, then I have no right to interfere with that behavior. Call that the Principle of Independence.
If you exchange N hours of unpleasantness for you for an hour of pleasantness for you, and it all happens inside a black box with negligible externalities... the POI says I have negligible say in that matter. And that seems to scale more or less indefinitely, although at larger scales I start to care about externalities (like opportunity costs) that seemed negligible at smaller scales.
And if what you do inside that black box is create a thousand clones of yourself and set them to mining toothpicks in the Pointlessly Unpleasant Toothpick Mines for twenty years, and then sell the resulting box of toothpicks for a dollar... well, um... I mean, you're insane, but... I guess I'm saying I don't have standing there either.
I'm not happy with that conclusion, but I'm not unhappy enough to want to throw out the POI, either.
So that's kind of where I am.
↑ comment by Pavitra · 2011-03-08T18:18:32.905Z · LW(p) · GW(p)
Saying "a thousand people" invokes the wrong intuitions. Your brain imagines a thousand distinct people, and torturing a unique person would destroy their potential unique contribution to society.
A better analogy might be that if you push the button, Omega will give you $100 now, and then arrange for you to spend a thousand years in hell after you die instead of being annihilated instantly.
Replies from: Raemon, Raemon↑ comment by Raemon · 2011-03-08T18:30:29.426Z · LW(p) · GW(p)
This is where we differ: a separate instance of a person is a separate person. I see no reason to attach special significance to unique experiences. Suppose you and an identical version of you happen to evolve separately on different worlds, make identical choices to travel on a spaceship to the same planet, and meet each other. Up until now your experiences have been identical. Are you okay with committing suicide as soon as you realize this identical person exists? Are you okay with Omega coming in one day and deciding to kill you and take all your belongings because he knows that you're going to spend the rest of your life having an identical experience to someone else in the multiverse?
Maybe your answers to all these questions are yes, but mine aren't. Society is filled with people who are mostly redundant. Do we really "need" Dude #3432 who grows up to be a hamburger flipper whose job eventually gets replaced by a robot? No. But morality isn't (shouldn't be) designed to protect some nebulous "society". It's designed to protect individual people.
This is especially true in the sort of post-singularity world where this sort of hypothetical even matters. If you have the technology to produce 1000 copies of a person, you probably don't "need" people to contribute to society in the first place. People's only inherent value is in their ability to enjoy life.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T19:04:29.875Z · LW(p) · GW(p)
If I take your hypothetical in the sense I think you intend, then yes. In practice, I'd rather not, for the same reason I'd want to create copies of myself if only one existed to begin with.
I agree that the value of society is the value it provides to the people in it. However, I don't think we should try to maximize the minimum happiness of everyone in the world: that way lies madness. I'd rather create one additional top-quality work of great art or culture than save a thousand additional orphans from starvation.
(If the thousand orphans could be brought up to first-world standards of living, rather than only being given mere existence, then they might produce more than one top-quality work of great art or culture on average between them. But the real world isn't always that morally convenient.)
↑ comment by Raemon · 2011-03-08T18:53:44.992Z · LW(p) · GW(p)
And in any case, even if there are only two "unique" experiences, you're still flipping a coin and either getting 1000 years of torture (say, -10,000,000 utility) or $100 (say, 10 utility), and the expected utility for hitting the button is still overwhelmingly negative.
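(With those stand-in numbers, the expected utility of pressing is ½ × (-10,000,000) + ½ × 10 = -4,999,995.)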
↑ comment by Pavitra · 2011-03-08T18:15:37.584Z · LW(p) · GW(p)
The relevant formula might be something other than (B-P), depending on Sam's utility function, but otherwise that's essentially what I believe.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-08T18:26:10.945Z · LW(p) · GW(p)
Yeah, I'm using "(B-P)" very loosely. And of course the question of what units B and P are in and how one even does such a comparison is very open. I suppose the traditional way out of this is to say that B and P are measured in utilons... which adds nothing to the discussion, really, but sounds comfortingly concrete. (I am rapidly convincing myself that I should never use the word "utilon" seriously for precisely this reason; I run the risk of fooling myself into thinking I know what I'm talking about.)
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T18:28:09.018Z · LW(p) · GW(p)
Non-rigorous concepts should definitely be given appropriate-sounding names; perhaps "magic cookies" would be better?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-03-08T18:46:13.551Z · LW(p) · GW(p)
I like that. Yes indeed, magic cookies it is.
↑ comment by Pavitra · 2011-03-08T05:51:04.577Z · LW(p) · GW(p)
We've been talking as though there was one "real" me and several xeroxes, but you seem to be acting as if that were the case on a moral level, which seems wrong. Surely, if I fork myself, each branch is just as genuinely me as any other? If I build and lock a cage, arrange to fork myself with one copy inside the cage and one outside, press the fork button, and find myself inside the cage, then I'm the one who locked myself in.
Replies from: Raemon↑ comment by Raemon · 2011-03-08T05:57:35.924Z · LW(p) · GW(p)
Surely, if I fork myself, each branch is just as genuinely me as any other?
Fundamental disagreement here, which I don't expect to work through. Once you fork yourself, I would treat each copy as a unique individual. (It's irrelevant whether one of you is "real" or not. They're identical people, but they're still separate people).
If those people all actually make the same decisions, great. I am not okay with exposing hundreds of copies to years of torture based on a decision you made in the comfort of your computer room.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T06:02:56.640Z · LW(p) · GW(p)
I don't ask you to accept that the various post-fork copies are the same person as each other, only that each is (perhaps non-transitively) the same person as the single pre-fork copy.
Suppose I don't fork myself, but lock myself in a cage. Does the absence of an uncaged copy matter?
comment by endoself · 2011-03-07T05:43:56.541Z · LW(p) · GW(p)
I push the button, because it causes net happiness (not that I am necessarily a classical utilitarian, but there are no other factors here that I would take into account). I would be interested to hear what Eliezer thinks of this dilemma.
The post you linked only applies to identical copies. If one copy is tortured while the other lives normally, they are no longer running the same computation, so this is a different argument. Where do you draw the line between other people and copies? Is it only based on differing origins? What about an imperfect copy? If the person who was created for 15 minutes was completely unlike any other person, wouldn't you create em then, according to your stated values? Wouldn't you press the button even if you thought that the person had no moral value because you are not certain of your own values and the possibility that the person's existence has moral value outweighs the possibility that it has negative moral value or vice versa?
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T04:19:24.611Z · LW(p) · GW(p)
Identicalness of copies doesn't matter much to me. The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself. This reduces the problem from a moral dilemma to a question of mere strategy.
Replies from: endoself↑ comment by endoself · 2011-03-08T04:48:41.998Z · LW(p) · GW(p)
So wouldn't you press the button, since the person in the box is not a copy of you (unless you place no value on the happiness of others or something like that)?
You seem to be indifferent between being in pain for a few minutes and then dying, and being tortured for a few years and then dying ("the (terminal) utility of the existence of a particular computation is bounded below at zero"). This strikes me as odd.
Also, I take an approach to the idea of anticipating subjective experience that is basically what Eliezer describes as the third horn of the anthropic trilemma but with more UDT, so I regard many of the concepts you discuss as meaningless.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T05:25:16.366Z · LW(p) · GW(p)
When there's nothing real at stake, I might decide to press the button or take the few minutes of pain, in order to get the warm fuzzies. But if there was something that actually mattered on the line, this stuff would go right out the window.
I reject all five horns of the anthropic trilemma. My position is that the laws of probability mostly break down whenever weird anthropic stuff happens, and that the naive solution to the forgetful driver problem is correct. In the hotel with the presumptuous philosopher, I take the bet for an expected $10.
Replies from: endoself↑ comment by endoself · 2011-03-08T06:30:30.958Z · LW(p) · GW(p)
The third horn basically states that the laws of probability break down when weird anthropic things happen. How can you retain a thread of subjective experience if the laws of probability - the very laws that describe anticipation of subjective experience - break down?
Decision-theoretically I believe in UDT. I would take the bet because I do not attach any negative utility to the presumptuous philosopher smiling, but if I had anything to lose, even a penny, I would not take it because each of my copies in the big hotel, each of which has a 50% chance of existing, would stand to lose, a much greater total loss. It would make no sense to ask me what I would do in this situation if I were selfish and did not care about the other copies because the idea of selfishness, at least as it would apply here, depends on anticipated subjective experience.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T06:55:58.282Z · LW(p) · GW(p)
I don't think they break quite as badly as the third horn asserts. If I fork myself into two people, I'm definitely going to be each of them, but I'm not going to be Britney Spears.
Most of your analysis of the hotel problem sounds like what I believe, but I don't see where you get 50%. Do you think you're equally likely to be in each hotel? And besides, if you're in the small hotel, the copies in the big hotel still exist, right?
Replies from: endoself↑ comment by endoself · 2011-03-08T07:24:19.795Z · LW(p) · GW(p)
Sorry, I thought she flipped a coin to decide which hotel to build rather than making both. This changes nothing in my analysis.
I don't think they break quite as badly as the third horn asserts. If I fork myself into two people, I'm definitely going to be each of them, but I'm not going to be Britney Spears.
Can you back this up? Normal probabilities don't work but UDT does (for some reason I had written TDT in my previous post; that was an error and has been corrected). However, UDT makes no mention of subjective anticipated probabilities. In fact, the idea of a probability that one is in a specific universe breaks down entirely in UDT. It must; otherwise UDT agents would not pay counterfactual muggers. If you don't have the concept of a probability that one is in a specific universe, let alone a specific person in that specific universe, what could possibly remain on which to base a concept of personal identity?
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T07:31:33.753Z · LW(p) · GW(p)
In that case, I'm not sure where we disagree. Your explanation of UDT seems to accurately describe my position on the subject.
Edit: wait, no, that doesn't sound right. Hm.
Edit 2: no, I read right the first time. There might be something resembling being in specific universes, just as there might be something resembling probability, but most of the basic assumptions are out.
Replies from: endoself↑ comment by endoself · 2011-03-08T09:01:30.474Z · LW(p) · GW(p)
I'm not quite sure that I understand your post, but, if I do, it seems to contradict what you said earlier. If the concepts of personal identity and anticipated subjective experience are mere approximations to the truth, how do you determine what is and isn't a copy? Your earlier statement, "The important thing is that I fork myself knowing that I might become the unhappy one (or, more properly, that I will definitely become both), so that I only harm myself.", seems to be entirely grounded in these ideas.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T18:31:28.632Z · LW(p) · GW(p)
Continuity of personal identity is an extraordinarily useful concept, especially from an ethical perspective. If Sam forks Monday night in his sleep, then on Tuesday we have two people:
Sam-X, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_x
Sam-Y, with personal timeline as follows: Sam_sunday, Sam_monday, Sam_tuesday_y
I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
Replies from: endoself↑ comment by endoself · 2011-03-08T20:45:50.842Z · LW(p) · GW(p)
I consider it self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured without the ability to make it stop, and by the same token Sam_monday should be allowed to do the same thing to Sam_tuesday_x.
I reject the premise. Why should it be self-evident that Sam_sunday should be allowed to arrange for Sam_monday to be tortured? Doesn't this seem like something people only came up with because of the illusion of subjective anticipation?
EDIT: I just read what you wrote in a different comment on this post:
I don't actually care about the avoidance of torture as a terminal moral value.
Your statements make sense in light of this. My morality is much closer to classical utilitarianism (is that the term?) and may actually be classical utilitarianism upon reflection. I assumed that you did care about the avoidance of torture as a terminal value, since most LessWrongers do. Torture is often used as a stock example of something that causes disutility, so if you are presenting an argument, you will often need to mention this aspect of your value system in order to bridge inferential distance.
Replies from: Pavitra
comment by nazgulnarsil · 2011-03-07T18:02:40.279Z · LW(p) · GW(p)
Holy crap, I should hope the CEV answer is yes. This is what happy humans look like to powerful, long-lived entities.
Replies from: benelliott, Pavitra↑ comment by benelliott · 2011-03-07T18:19:54.726Z · LW(p) · GW(p)
Whether you are a lifeist or an anti-deathist, the answer is that those entities shouldn't kill us. The only question is whether they should create more of us.
Replies from: nazgulnarsil↑ comment by nazgulnarsil · 2011-03-07T18:32:52.968Z · LW(p) · GW(p)
Or allow us to create more of ourselves.
comment by Nisan · 2011-03-07T19:43:59.676Z · LW(p) · GW(p)
If asked, they would say that they're glad to have existed [...]
There is an interesting question here: What does it mean to say that I'm glad to have been born? Or rather, what does it mean to say that I prefer to have been born?
The alternative scenario in which I was never born is strictly counterfactual. I can only have a revealed preference for having been born if I use a timeless/updateless decision theory. In order to determine my preference you'd need to perform an experiment like the following:
- Omega approaches me and offers me $100. It tells me that it had an opportunity to prevent my birth, and it would have prevented my birth if and only if it had predicted that I would accept the $100. It is a good predictor. Do I take the $100?
Without thinking about such an experiment, it's not clear what my preference is. More significantly, when 30% of American adolescents in 1930 wished they had never been born, it is not clear exactly what they meant.
Now if you know I'm an altruist, then the problem is simpler: I prefer to have been born insofar as I prefer any arbitrary person to have been born, and this preference can be detected with the thought experiment described in the OP.
... unless I'm a preference utilitarian, in which case I prefer an arbitrary person to have been born only if they prefer to have been born.
Replies from: Snowyowl↑ comment by Snowyowl · 2011-03-08T12:18:08.792Z · LW(p) · GW(p)
How about: Given the chance, would you rather die a natural death, or relive all your life experiences first?
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2011-03-08T12:34:48.056Z · LW(p) · GW(p)
I like that formulation. One question: would I be able to remember having lived them while I was reliving them? Because then it would be more boring than the first time.
Replies from: Nisan
comment by AlephNeil · 2011-03-07T17:32:37.491Z · LW(p) · GW(p)
I don't think it's possible to give answers to all ethical dilemmas in such a way as to be consistent and reasonable across the board, but here my intuition is that if a mind only lasts 15 minutes, and it has no influence on the outside world and leaves no 'thought children' (e.g. doodles, poems, theorems) behind after its death, then whether it experiences contentment or agony has no moral value whatsoever. Its contentment, its agony, its creation and its destruction are all utterly insignificant and devoid of ethical weight.
To create a mind purely to torture it for 15 minutes is something only an evil person would want to do (just as only an evil person would watch videos of torture for fun) but as an act, it's a mere 'symptom' of the fact that all is not well in the universe.
(However, if you were to ask "what if the person lasted 30 minutes? A week? A year? etc." then at some point I'd have to change my answer, and it might be difficult to reconcile both answers. But again, I don't believe that the 'sheaf' of human moral intuitions has a 'global section'.)
the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities
Hmm. There might be a good insight lurking around there, but I'd want to argue that (a) such entities may include 'pieces of knowledge', 'trains of thought', 'works of art', 'great cities' etc rather than just 'people'. And (b), the 'utilities' (clearer to just say 'values') of these things might be partially rather than linearly ordered, so that the 'maximum' becomes a 'join', which may not be attained by any of them individually. (Is the best city better or worse than the best symphony, and are they better or worse than Wiles' proof of Fermat's Last Theorem, and are they better or worse than a giraffe?)
Replies from: DanielVarga, Pavitra↑ comment by DanielVarga · 2011-03-07T20:24:45.986Z · LW(p) · GW(p)
I agree fully with your first two paragraphs. I would not change my answer regardless of the amount of time the causally disconnected person lasts. Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values can not be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.) I am not happy about these conclusions, but they do not change my respect for human values, regardless of my opinion about their fundamental inconsistencies.
I believe even AlephNeil's position is quite extreme among LWers, and mine is definitely fringe. So if someone here agrees with either of us, I am very interested in that information.
Replies from: endoself↑ comment by endoself · 2011-03-08T04:44:49.221Z · LW(p) · GW(p)
Biting this bullet leads to some quite extreme conclusions, basically admitting that current human values can not be consistently transferred to a future with uploads, self-modification and such. (Meaning, Eliezer's whole research program is futile.)
Couldn't an AI prevent us from ever achieving uploads or self-modification? Wouldn't this be a good thing for humanity if human values could not survive in a future with those things?
Replies from: DanielVarga↑ comment by DanielVarga · 2011-03-08T22:56:59.493Z · LW(p) · GW(p)
Yes, this is a possible end point of my line of reasoning: we either have to become luddites, or build a FAI that prevents us from uploading. These are both very repulsive conclusions for me. (Even if I don't consider the fact that I am not confident enough in my judgement to justify such extreme solutions by it.) I, personally, would rather accept that much of my values will not survive.
My value system works okay right now, at least when I don't have to solve trolley problems. In any given world with uploading and self-modification, my value system would necessarily fail. In such a world, my current self would not feel at home. My visit there would be a series of unbelievably nasty trolley problems, a big reductio ad absurdum of my values. Luckily, it is not me who has to feel at home there, but the inhabitants of that world. (*)
(*) Even the word "inhabitants" is misleading, because I don't think personal identity has much of a role in a world where it is possible to merge minds. Not to talk about the word "feel", which, from the perspective of a substrate-independent self-modifying mind refers to a particular suboptimal self-reflection mechanism. Which, to clear up a possible misunderstanding in advance, does not mean that this substrate-independent mind can not possibly see positive feelings as terminal value. But I am already quite off-topic here.
Replies from: endoself↑ comment by endoself · 2011-03-08T23:22:31.485Z · LW(p) · GW(p)
I, personally, would rather accept that much of my values will not survive.
If there is something that you care about more than your values, they are not really your values.
I think we should just get on with FAI. If it realizes that uploads are okay according to our values it will allow uploads and if uploads are bad it will forbid them (maybe not entirely forbid; there could easily be something even worse). This is one of the questions that can completely be left until after we have FAI because whatever it does will, by definition, be in accordance with our values.
Replies from: Pavitra, DanielVarga↑ comment by Pavitra · 2011-03-09T00:56:36.259Z · LW(p) · GW(p)
I, personally, would rather accept that much of my values will not survive.
If there is something that you care about more than your values, they are not really your values.
You seem to conflate "I will care about X" with "X will occur". This breaks down in, for example, any case where precommitment is useful.
Replies from: endoself↑ comment by endoself · 2011-03-09T02:52:58.933Z · LW(p) · GW(p)
That's not how I interpreted his statement. Look at the original context.
Yes, this is a possible end point of my line of reasoning: we either have to become luddites, or build a FAI that prevents us from uploading. These are both very repulsive conclusions for me. (Even if I don't consider the fact that I am not confident enough in my judgement to justify such extreme solutions by it.) I, personally, would rather accept that much of my values will not survive.
He seems to be saying that, if his values lead to luddism, he would rather give up on his values than embrace luddism. I think that this is not the correct use of the word "values".
↑ comment by DanielVarga · 2011-03-09T00:42:48.375Z · LW(p) · GW(p)
If there is something that you care about more than your values, they are not really your values.
You seem to rely on a hidden assumption here: that I am equally confident in all my values.
I don't think my values are consistent. Having more powerful deductive reasoning, and constant access to extreme corner cases would obviously change my value system. I also anticipate that my values would not be changed equally. Some of them would survive the encounter with extreme corner cases, some would not. Right now I don't have to constantly deal with perfect clones and merging minds, so I am fine with my values as they are. But even now, I have a quite good intuition about which of them would not survive the future shock. That's why I can talk without contradiction about accepting to lose those.
In CEV jargon: my expectation is that the extrapolation of my value system might not be recognizable to me as my value system. Wei_Dai voiced some related concerns with CEV here. It is worth looking at the first link in his comment.
Replies from: endoself↑ comment by endoself · 2011-03-09T03:03:05.987Z · LW(p) · GW(p)
Oh, I see. I appear to have initially missed the phrase "much of my values".
I am wary of referring to my current inconsistent values, rather than their reflective equilibrium, as "my values" because of the principle of explosion, but I am unsure how to reconcile this with my current self even having values.
Replies from: DanielVarga, Pavitra↑ comment by DanielVarga · 2011-03-09T11:16:57.483Z · LW(p) · GW(p)
It seems our positions can be summed up like this: You are wary of referring to your current values rather than their reflective equilibrium as 'your values', because your current values are inconsistent. I am wary of referring to the reflective equilibrium rather than my current values as 'my values', because I expect the transition to reflective equilibrium to be a very aggressive operation. (One could say that I embrace my ignorance.)
My concern is that the reflective equilibrium is far from my current position in the dynamical system of values. Meanwhile, Marcello and Wei Dai are concerned that the dynamical system is chaotic and has multiple reflective equilibria.
Replies from: endoself, TheOtherDave↑ comment by endoself · 2011-03-09T20:20:56.270Z · LW(p) · GW(p)
I don't worry about the aggressiveness of the transition because, if my current values are inconsistent, they can be made to say that this transition is both good and bad. I share the concern about multiple reflective equilibria. What does it mean to judge something as an irrational cishuman if two reflective equilibria would disagree on what is desirable?
↑ comment by TheOtherDave · 2011-03-09T16:24:20.119Z · LW(p) · GW(p)
I expect the transition to reflective equilibrium to be a very aggressive operation.
Upvoted purely for the tasty, tasty understatement here.
I should get that put on a button.
↑ comment by Pavitra · 2011-03-09T03:22:21.718Z · LW(p) · GW(p)
I like to think of my "true values" as (initially) unknown, and my moral intuitions as evidence of, and approximations to, those true values. I can then work on improving the error margins, confidence intervals, and so forth.
Replies from: endoself↑ comment by endoself · 2011-03-09T03:36:50.267Z · LW(p) · GW(p)
So do I, but I worry that they are not uniquely defined by the evidence. I may eventually be moved to unique values by irrational arguments, but if those values are different from my current true values then I will have lost something, and if I don't have any true values then my search for values will have been pointless, though my future self will be okay with that.
↑ comment by Pavitra · 2011-03-08T04:34:12.351Z · LW(p) · GW(p)
Your point about partial ordering is very powerfully appealing.
However, I feel that any increase in utility from mere accumulation tends strongly to be completely overridden by increase in utility from increasing the quality of the best thing you have, such as by synthesizing a symphony and a theorem together into some deeper, polymathic insight. There might be edge cases where a large increase in quantity outweighs a small increase in quality, but I haven't thought of any yet.
(Incidentally, I just noticed that I've been using terms incorrectly and I'm actually a consequentialist rather than a utilitarian. What should I be saying instead of "utility" to mean that-thing-I-want-to-maximize?)
comment by MartinB · 2011-03-07T16:54:25.805Z · LW(p) · GW(p)
A question that I have pondered since learning more about history: would you prefer to be shot without any forewarning, or a process where you know the date well in advance?
Both methods were used extensively with prisoners of war and criminals.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T04:27:08.128Z · LW(p) · GW(p)
Forewarning could reduce the enjoyability and perhaps productiveness of the rest of my life due to feelings of dread, but on balance I think I'd rather have the chance to set my affairs in order and generally be able to plan.
comment by wedrifid · 2011-03-07T14:32:37.558Z · LW(p) · GW(p)
Do you push the button?
Yes. You included a lot of disclaimers and they seem to be sufficient.
According to my preferences there are already more humans around than desirable, at least until we have settled a few more galaxies. Which emphasizes just how important the no externalities clause was to my judgement. Even the externality of diluting the neg-entropy in the cosmic commons slightly further would make the creation a bad thing.
I don't share the same preference intuitions as you regarding self-clone-torture. I consider copies to be part of the output. If they are identical copies having identical experiences then they mean little more than having a backup available. If some are getting tortured then the overall output of the relevant computation really does suffer (in the 'get slightly worse' sense although I suppose it is literal too).
Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.
It's OK. I (lightheartedly) reckon my clone army could take out your clone army if it became necessary to defend myselves. I/we'd then have to figure out how to put 'ourselfs' back together again without merge conflicts once the mobilization was no longer required. That sounds like a tricky task, but it could be fun.
Replies from: Pavitra↑ comment by Pavitra · 2011-03-08T04:22:21.542Z · LW(p) · GW(p)
I don't share the same preference intuitions as you regarding self-clone-torture. I consider copies to be part of the output.
I derive my intuitions from the analogy of a CPU-inefficient interpreted language. I don't care about the 99% wasted cycles, except secondarily as a moderate inconvenience. I care about whether the job gets done.
comment by orthonormal · 2011-03-08T15:09:03.986Z · LW(p) · GW(p)
I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.
I believe Eliezer would, by extrapolation from the hypothetical at the bottom of this post.
comment by Nornagest · 2011-03-07T23:48:06.079Z · LW(p) · GW(p)
Funny. My instincts are telling me that there's a Utility Monster behind that bush.
I'm not satisfied with the lifeist or the anti-deathist reasoning here as you present them, since both measure (i.e. life-count) and negadeaths as dominant terms in a utility equation lead pretty quickly to some pretty perverse conclusions. Nor do I give much credence to the boxed subject's own opinion; preference utilitarianism works well as a way of gauging consequences against each other, but it's a lousy measure of scalar utility.
Presuming that the box's inhabitant would lead a highly fun-theoretically positive fifteen minutes of life by any standards we choose to adopt, though, pressing the button seems to be neutral or positive (neutral with respect to my own causal universe, positive relative to the short-lived branch Omega's creating) -- with the proviso that Omega may be acting unethically by garbage-collecting the boxed subject when it has the power not to.
Replies from: Pavitra
comment by Armok_GoB · 2011-03-07T15:07:10.848Z · LW(p) · GW(p)
My intuitions give a rather interesting answer to this: it depends strongly on the details of the mind in question. For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I'd not push the button. It also seems to depend on seemingly unrelated things; for example, I'd push it for a human if and only if it was similar enough to a human existing elsewhere whose existence was not affected by the copying AND who would approve of pushing the button.
Replies from: Nornagest↑ comment by Nornagest · 2011-03-07T22:58:19.559Z · LW(p) · GW(p)
For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I'd not push the button.
How come? This is an immensely suggestive statement, but I'm not sure where you're going with it.
Replies from: Armok_GoB↑ comment by Armok_GoB · 2011-03-08T17:44:42.202Z · LW(p) · GW(p)
As I said, intuition. I can make guesses about the causes of those intuitions, and probably have a better chance of getting the right answer than an outside observer, due to having the black box in question inside my head to perform experiments on, but I don't have any direct introspective access. If you're asking for arguments that someone else should act this way as well, that's a very different question.
comment by [deleted] · 2013-08-09T19:05:11.923Z · LW(p) · GW(p)
As an information-theoretic person-physicalist, I'd say there are no copies. There are new originals.
Making N copies is only meaningless, utility-wise, if the copies never diverge. The moment they do, you have a problem.
comment by MinibearRex · 2011-03-07T21:24:54.394Z · LW(p) · GW(p)
If they would be genuinely happy to have lived, then creating them wouldn't be necessarily "immoral". However, I still have a moral instinct (suspect, I know, but that doesn't change the fact that it's there) against killing a sentient being. Watching a person get put into a garbage compactor would make me feel bad, even if they didn't mind.
In other words, even if someone doesn't care, or even wants to die, I still would have a hard time killing them.