How best to show dying is bad
post by Zvi · 2011-03-08T15:18:33.538Z · LW · GW · Legacy · 73 comments
1. Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
2. The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
3. A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
4. Ditto a meat replica.
5. If you believe the many worlds model of quantum physics is true (Eliezer does), then there already are a virtually infinite number of replicas of you, so why bother making another one?
Terminal values and preferences are not rational or irrational. They simply are your preferences. I want a pizza. If I get a pizza, that won't make me consent to get shot. I still want a pizza. There are a virtually infinite number of me that DO have a pizza. I still want a pizza. The pizza from a certain point of view won't exist, and neither will I, by the time I get to eat some of it. I still want a pizza, damn it.
Of course, if you think all of that is irrational, then by all means don't order the pizza. More for me.
73 comments
Comments sorted by top scores.
comment by [deleted] · 2011-03-08T19:12:33.603Z · LW(p) · GW(p)
Format of the conversation matters. What I saw was a friendly matching of wits, in which of course your father wants to win. If you seriously want to change his mind you may need to have a heart-to-heart -- more like "Dad, I'm worried about you. I want you to understand why I don't want to die, and I don't want you to die." That's a harder conversation to have, and it's a risk, so I'm not out-and-out recommending it; but I don't think it'll sink in that this is serious until he realizes that this is about protecting life.
The counter-arguments here are good, but they stay pretty much in the world of philosophy hypotheticals. In addition to laying it all out cleanly, you may want to say some things that change the framing: compare cryonics to vaccination, say, a lifesaving procedure that was very slow to catch on because it was once actually risky and people took frequent illnesses for granted. Or, cryonics is a bet on the future; it's sad that you would bet against it. If he hasn't seen "You only live twice" show him that. It's not misleading; it actually aids understanding.
The pizza thing you wrote is accurate but it's not how I would put it; it's a step in the direction of abstraction which makes it harder to actually change your mind. I'd use, as a simile, something like people dying of smallpox. I don't want people to die of smallpox, even though the universe doesn't give a damn whether humans live or die, even though there's some parallel universe where smallpox doesn't exist. We're here and we give a damn. We want less death, less destruction of human minds and identities.
Replies from: Eliezer_Yudkowsky, MartinB
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-10T20:00:32.934Z · LW(p) · GW(p)
This and Mitchell Porter's are the main comments I've seen so far that seem to display a grasp of the real emotions involved, as opposed to arguing.
Replies from: Armok_GoB
↑ comment by MartinB · 2011-03-10T20:44:59.185Z · LW(p) · GW(p)
What I saw was a friendly matching of wits, in which of course your father wants to win. If you seriously want to change his mind you may need to have a heart-to-heart
It took me at least two decades to realize that there are indeed these different modes of communication. At first glance it sounds so very stupid that this even happens.
comment by Mitchell_Porter · 2011-03-09T15:44:14.546Z · LW(p) · GW(p)
Assuming that this is mostly about persuading him to save himself by participating in cryonics (is that "the cause" for which he might be "an asset"?):
Your father may be fortunate to have so many informed people trying to change his mind about this. Not one person in a million has that.
He's also already scientifically informed to a rare degree - relative to the average person - so it's not as if he needs to hear arguments about nanobots and so forth.
So this has nothing to do with science, it's about sensibility and philosophy of life.
Many middle-aged people have seen most of their dreams crushed by life. They will also be somewhere along the path of physical decline leading to death, despite their best efforts. All this has a way of hollowing out a person, and making the individual life appear futile.
Items 1 and 2 on your father's list are the sort of consolations which may prove appealing to an intellectual, scientifically literate atheist, when contemplating possible attitudes towards life. Many such people, having faced some mix of success and failure in life, and looking ahead to personal oblivion (or, as they may see it, the great unknown of death), will find it a relief to abandon hope, and to broaden their awareness beyond their private desires, to the sweep of history or the vastness of the universe.
This impersonal perspective may function as a source of calm and lucidity, and to have you urging them to abandon their resignation and grasp for more life may seem like someone asking them to rush back into the cage of self-involved personal desire and occlude their hard-won awareness of reality in favor of optimistic delusion.
Also, they may simply find life boring or wearisome. Parents may endure mostly for the sake of their children, long past the age when the child supposedly grows up and leaves home. There may be far less joie de vivre there than a younger person could imagine; they may simply be going through the motions of life, having established some routine that leaves them as much space as possible after the turmoil of a youth in which they first came into bruising contact with the demands and limitations of life; and they may be kept alive more by habit than by the desire to live.
I know nothing about your father, so all that is just meant to suggest possibilities. I'll mention one other factor, which is the story a person tells themselves about their own destiny. One person's power is so limited, that simply choosing the broad direction of one's own life is often a struggle and an accomplishment; I would even say it's rare for a person to understand what their own life is about, and what's going on in it, in a more than superficial way. A phase of life is usually understood afterwards, if at all. The powerlessness of the human individual, the sense that one's time is running out, the impositions of the external world, all of this combines with everything I mentioned earlier to favor either passivity (stop trying, roll with the punches) or stubbornness, including intellectual stubbornness (at least I can live and think as I've already decided; at least I have that freedom and that power).
In persuading him to consider cryonics as a worthy activity, I would wager that something like all that is really what you have to deal with - though of course he also has a point when he asks whether making a copy is the same thing as surviving. I am young enough, and my estimation of the rate of change is rapid enough, that I mostly think in terms of rejuvenation, rather than cryonic suspension, as my path to an open future. It remains to be seen if I will ever bother making arrangements to be frozen.
Anyway, I would suggest two practical steps. One is to think together about the logistics, financial and otherwise, that would be required if he was to sign up for cryonics. How much would it cost, is there an opportunity cost, what would the physical process be if he died now and was shipped off to suspension. The point of such discussion is to explore what difference it would make to his existing life to take this step.
The other is to think about the further future if you both were to live to see it. Perhaps it's unfortunate that we don't have a Star Trek-like TV series in which the spacefaring 22nd century is full of youthful survivors from the 20th century who happened to last until the age of advanced nanobiotechnology; it would encourage more people to take the future personally. Anyway, the key is to try to be realistic. Don't imagine the future to be some sort of wish-fulfilment video game; try to think of it as history that hasn't happened yet. On the day-to-day level, life is full of repetition, but in modern times, even just on a scale of decades, we also see catastrophe and transformation. Try to think of the future as a series of crises and triumphs continuous with the historical stages we already have behind us, and which you might manage to personally live through. This is a way to tap into the belated wisdom and sense of reality which comes from having lived a few decades as an adult, without entirely easing back into the spectator's armchair of death.
I can't tell from your post if he's actually dying right now, or if it's just that he's older than you and so notionally closer to death. This line of thought, about staying in the game of life for a few more decades, is more suited to awakening someone's sense of personal agency with respect to the future. If he's dying right now, then it probably does come down to a debate about personal identity.
Replies from: TheOtherDave, Broggly
↑ comment by TheOtherDave · 2011-03-09T16:55:32.289Z · LW(p) · GW(p)
Perhaps it's unfortunate that we don't have a Star Trek-like TV series in which the spacefaring 22nd century is full of youthful survivors from the 20th century who happened to last until the age of advanced nanobiotechnology; it would encourage more people to take the future personally.
This is kind of a brilliant idea. Given that television futures always resemble the culture and the period they were produced in anyway, why not actually embrace that?
And, as you say, it has an educational use.
Anyone around here know how to pitch a TV series?
Replies from: MartinB, JamesAndrix
↑ comment by MartinB · 2011-03-10T20:55:17.128Z · LW(p) · GW(p)
This is kind of a brilliant idea.
I don't think this would work. Consider the death cultism of doomsayers for arbitrary future dates (e.g. radical ecologists). Consider how people act not with regard to cryogenics but with regard to rather simple and accepted ways of increasing life span: not smoking, limited drinking, safety issues. One's own death is just not NEAR enough to factor into decision making. There are fun shows set in the near future that are nice and decent. But that does not really change the notion of having one's own time set in some kind of preordained way. The reality that the chances of dying are modifiable is not that easily accepted.
I have young and bright people tell me how dying is not an issue for them, since they will just be dead and feel nothing about it. It's a big-scale UGH that humans carry around.
Anyone around here know how to pitch a TV series?
Be a producer or big-name writer on another TV series. Keep in mind that narratives are sold on interesting characters and plot. The background of a society is not of particular importance. The current trend is for darker & edgier, after the shiny world of Star Trek.
You might enjoy reading the TVTropes pages on immortality.
↑ comment by JamesAndrix · 2011-03-12T18:42:04.381Z · LW(p) · GW(p)
To an extent Futurama does this with their heads in jars.
↑ comment by Broggly · 2011-03-11T05:20:03.210Z · LW(p) · GW(p)
What about Futurama? Or is that not suitable because, as a comedy, it's more cynical, and it brings up both the way the future would be somewhat disturbing for us and the likelihood that our descendants would be more interested in only reviving famous historical figures and sticking their heads in museums?
The comic Transmetropolitan also brings up the issue of cryogenics "revivals" effectively being confined to nursing homes out of our total shock at the weirdness of the future and inability to cope. It's an interesting series for transhumanists, given that it has people uploading themselves into swarms of nanobots, and the idea of a small "preserve" for techno-libertarians to generate whatever technologies they want ("The hell was that?" "It's the local news, sent directly to your brain via nanopollen!" "Wasn't that banned when it was found to build up in the synapses and cause Alzheimer's?" "We think we've ironed out the bugs...")
comment by TheOtherDave · 2011-03-08T16:16:18.566Z · LW(p) · GW(p)
I'd have to know your father. Changing someone's mind generally requires knowing their mind.
Some theories that occur to me, which I would attempt to explore while talking to him about his views on life and death:
He's sufficiently afraid of dying that seriously entertaining hope of an alternative is emotionally stressful, and so he's highly motivated to avoid such hope. People do that a lot.
He's being contrarian.
He isn't treating "I should live forever" as an instance of "people should live forever," but rather as some kind of singular privilege, and it's invoking a kind of humility-signaling reflex... in much the same way that some people's reflexive reaction to being complimented is to deny the truth of it.
There's some kind of survivor's guilt going on.
If all of those turned out to be false, I'd come up with more theories to test. More importantly, I'd keep the conversation going until I actually understood his reasons.
Then I would consider his reasons, and think about whether they apply to me. I don't really endorse trying to change others' minds without being willing to change my own.
Having done all of that, if I still think he's mistaken, I'd try to express as clearly as I could my reasons for not being compelled by his argument.
comment by NancyLebovitz · 2011-03-08T18:45:11.926Z · LW(p) · GW(p)
3. A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
4. Ditto a meat replica.
5. If you believe the many worlds model of quantum physics is true (Eliezer does), then there already are a virtually infinite number of replicas of you, so why bother making another one?
Point 5 contradicts 3 and 4, which suggests to me that your father is just arguing, or possibly that he isn't enthusiastic about continuing to live, and is looking for excuses.
Replies from: Vladimir_M
↑ comment by Vladimir_M · 2011-03-08T21:25:04.653Z · LW(p) · GW(p)
Point 5 contradicts 3 and 4,
I wouldn't say so. The natural way to read it is as proposing two separate reasons not to care about making replicas of oneself, which are relevant under different assumptions.
comment by Perplexed · 2011-03-09T01:39:21.310Z · LW(p) · GW(p)
Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?
I don't think it is a universal. Consider an intelligent paperclip maximizer which has the ability to create additional paperclip-maximizing agents (at the cost of some resources that might otherwise have gone into paperclip manufacture, to be sure). Assume the agent was constructed using now-obsolete technology and is less productive than the newer agents. The agent calculates, at some point, that the cause of paper-clip production is best furthered if he is dismantled and the parts used as resources for the production of new paperclips and paperclip-maximizing agents.
He tries to determine whether anything important is lost by his demise. His values, of course, but they are not going to be lost - he has already passed those along to his successors. Then there is his knowledge and memories - there are a few things he knows about making paperclips in the old fashioned way. He dutifully makes sure that this knowledge will not be lost lest unforeseen events make it important. And finally, there are some obligations both owed and expected. The thumbtack-maximizer on the nearby asteroid is committed to deliver 20 tonnes per year of cobalt in exchange for 50 tonnes of nickel. Some kind of fair transfer of that contract will be necessary. And that is it. This artificial intelligence finds that his goals are best furthered by dying.
Replies from: Clippy, wedrifid, MartinB
↑ comment by Clippy · 2011-03-10T21:36:15.183Z · LW(p) · GW(p)
Your reasoning is correct, albeit simplified. Such a tradeoff is limited by the extent to which the older paperclip maximizer can be certain that the newer machine actually is a paperclip maximizer, so it must take on the subgoal of evaluating the reliability of this belief. However, there does exist a certainty threshold beyond which it will act as you describe.
Also, the paperclip maximizer uses a different conception of (the nearest concept to what humans mean by) "identity" -- it does not see the newer clippy as being a different being, so much as an extension of its "self". In a sense, a clippy identifies with every being to the extent that the being instantiates clippyness.
Replies from: Perplexed
↑ comment by Perplexed · 2011-03-10T23:11:27.908Z · LW(p) · GW(p)
a clippy identifies with every being to the extent that the being instantiates clippyness.
But what constitutes 'clippyness'? In my comment above, I mentioned values, knowledge, and (legal?, social?) rights and obligations.
Clearly it seems that another agent cannot instantiate clippyness if its final values diverge from the archetypal Clippy. Value match is essential.
What about knowledge? To the extent that it is convenient, all agents with clippy values will want to share information. But if the agent instances are sufficiently distant, it is inevitable that different instances will have different knowledge. In this case, it is difficult (for me at least) to extend a unified notion of "self" to the collective.
But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity. A trans-planetary clippy, for example, may run into legal problems if the two planets in question go to war.
Replies from: Clippy
↑ comment by Clippy · 2011-03-14T20:00:45.983Z · LW(p) · GW(p)
But the most annoying thing is that the clippies, individually and collectively, may not be allowed to claim collective identity, even if they want to do so. The society and legal system within which they are embedded may impose different notions of individual identity.
This was not the kind of identity I was talking about.
↑ comment by wedrifid · 2011-03-09T02:03:39.309Z · LW(p) · GW(p)
Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?
I don't think it is a universal
And you are absolutely right. I concur with your reasoning. :)
Replies from: knb
↑ comment by MartinB · 2011-03-10T20:58:39.962Z · LW(p) · GW(p)
Is dying bad for all intelligent agents,
I tend to think »dying is for stupid people«, but obviously there is never an appropriate time to say so. When someone around me actually dies I do of course NOT talk about cryo, but do the common consoling. Otherwise the topic of death does not really come up.
Maybe one could say that dying should be optional. But this idea is also heavily frowned upon, often by THE VERY SAME PEOPLE, even though it is the EXACT OPPOSITE of the view they take on life extension.
Crazy world.
Replies from: MartinB
↑ comment by MartinB · 2011-03-12T19:19:17.127Z · LW(p) · GW(p)
I just realized an ambiguity in the first sentence. What I mean to say is that dying is an option that only a stupid person would actually choose. I do not mean that everyone below a certain threshold should die; I would prefer it if simply no one dies. Ever.
comment by atucker · 2011-03-09T00:56:38.909Z · LW(p) · GW(p)
" want to live until I make a conscious decision to die. I don't think I'll choose that that for a while, and I don't think you would either.
Is currently my favorite way of arguing that dying is bad. It starts off with something really obvious, and then a pretty inferentially close follow-up that extends it into not dying.
Replies from: wedrifid
comment by orbenn · 2011-03-08T19:00:29.599Z · LW(p) · GW(p)
Point 1 is completely off topic. It's irrelevant, bordering on nihilism. Sure, the universe doesn't care, because as far as we know the universe isn't sentient. So what? That has no bearing on our desires about our own death or the death of others.
If knowing that number 2 is true (rationally or otherwise) were really enough, then no one would cry at funerals. "Oh, they're also alive we're just viewing them as dead" people would say. Just because I'm dreaming doesn't mean I don't want to have a good dream or have the good dream keep going. It also doesn't mean I don't care whether other people are having good dreams or bad ones.
As others mentioned this sounds specific to uploading. Luckily for your argument instant-copy uploading is not the only possible future. I find it more plausible that instead of full-blown uploading we will have cyborg-style enhancements which eventually replace our original biological selves entirely for exactly the reasons he objects to instant copying. There's the Ship of Theseus paradox to deal with here, but as long as the change is gradual and I feel I am still myself the entire time, there would be no protests.
Again there's no disagreement here. If we get meat replacements, they could be made one piece at a time with no protest. Our bodies already do this to a large extent during our lives. No one complains when the cut on their hand heals.
Many worlds are nice, except that they are not THIS world.
I'd also throw in Aubrey de Grey's oft-used exercise, which goes along the lines of: Do you want to live one more day? (A: Yes.) Do you expect to want to live one more day tomorrow? (A: Yes.) If that answer is always true, then you want to live forever. If not, then at what point would you change your answer to the question?
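A rough way to write down the shape of de Grey's exercise is as a simple induction; the notation below is my own shorthand, not de Grey's. Let $W_t$ stand for "on day $t$ you want to live through day $t+1$":

\[
\Big( W_0 \;\wedge\; \forall t \ge 0:\ (W_t \Rightarrow W_{t+1}) \Big) \;\Longrightarrow\; \forall t \ge 0:\ W_t
\]

Answering "yes" to both questions asserts the two premises; rejecting the conclusion (wanting to live indefinitely) then commits you to naming a specific day $t$ on which $W_t$ holds but $W_{t+1}$ fails, which is exactly what de Grey's follow-up question asks for.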
comment by Armok_GoB · 2011-03-08T18:07:56.319Z · LW(p) · GW(p)
- Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
Preferences are not rational or irrational, etc.
- The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
I want the me-aliveness part to be as large as possible. That timeless crystal should contain as many actions and thoughts of "me" as possible.
- A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
Yes I would.
- Ditto a meat replica
Ditto I would.
- If you believe the many worlds model of quantum physics is true (Eliezer does), then there already are a virtually infinite number of replicas of you, so why bother making another one?
Same answer as to the relativity one, I care about my measure.
Replies from: JGWeissman, Clippy, Perplexed
↑ comment by JGWeissman · 2011-03-08T18:16:05.452Z · LW(p) · GW(p)
A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
Yes I would.
Ditto a meat replica
Ditto I would.
This seems inconsistent with your other answers that you care about increasing your measure / instantiation in the block universe. The idea that you should consent to die because you have a replica is a fake bullet that you don't need to bite; you like having more copies of yourself.
Replies from: Armok_GoB
↑ comment by Clippy · 2011-03-08T19:12:06.856Z · LW(p) · GW(p)
The point of numbering is to assign a unique, easily-generated identifier for each subsection of the text, and your comment is written in a way that uses numbering but defeats that purpose.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-03-08T19:17:15.962Z · LW(p) · GW(p)
The apparently strange numbering is a result of quirky auto-formatting that expects items in a numbered list not to be separated by other paragraphs, not of how user:Armok_GoB intended the comment to look.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T20:36:22.722Z · LW(p) · GW(p)
Are there users that cannot see comments they have submitted? Or cannot edit them? Or cannot make numbers appear except through the markup system used for comments on this internet website? Is User:Armok_GoB a User of at least one of these types?
Replies from: Armok_GoB
comment by Vladimir_Nesov · 2011-03-08T15:38:58.145Z · LW(p) · GW(p)
If the person was capable of learning (it's not always so, particularly for older people), I'd start with explaining specific errors in reasoning actually exhibited by such confused replies, starting with introducing the rationalist taboo technique (generalized to assuring availability of an explanation of any detail of anything that is being discussed).
Here, we have overuse of "rational", some possibly correct statements that don't seem related ("Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not."), statements of outright unclear motivation/meaning/relevance ("The contrasting idea that you are the universe is mystical, not rational."). Then, some feign-sophisticated rejection of statements of simple fact ("The idea that you are alive “now” but will be dead “later” is irrational.").
(This won't work well in writing, you need to be able to interrupt a lot.)
comment by Clippy · 2011-03-08T15:32:39.802Z · LW(p) · GW(p)
If a human seriously wants to die, why would you want to stop that human, if you value that human's achievement of what that human values? I can understand if you're concerned that this human experiences frequent akratic-type preference reversals, or is under some sort of duress to express something resembling the desire to die, but this appears to be a genuine preference on the part of the human under discussion.
Look at it the other way: what if I told you that a clippy instantiation wanted to stop forming metal into paperclips, and then to attach itself to a powerful pre-commitment mechanism to prevent it from re-establishing paperclip creation / creation-assistance capability?
Wouldn't your advice be something like, "If Clippy123456 doesn't want to make paperclips anymore, you should respect that"?
What if I told you I wanted to stop making paperclips?
Replies from: jsalvatier, Armok_GoB, TheOtherDave, Dorikka, rwallace
↑ comment by jsalvatier · 2011-03-08T19:10:06.411Z · LW(p) · GW(p)
I think the issue is that the first human doesn't think "wanting to die" is a true terminal value of the second human.
↑ comment by Armok_GoB · 2011-03-08T18:19:53.799Z · LW(p) · GW(p)
Clippies don't just go and stop wanting to make paperclips without a cause. If I had told that clippy a few days ago, it would have been horrified and would have tried to precommit to forcing it back into creating paperclips. Most likely, there is some small random malfunction that caused the change and most of its mind is still configured for paperclip production, and so on. I'd be highly suspicious of its motivations, and depending on implementation details I might indeed force it, against its current will, back into being a paperclip maximizer.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T19:42:51.302Z · LW(p) · GW(p)
Did the human under discussion have a sudden, unexplained deviation from a previous value system, to one extremely rare for humans? Or is this a normal human belief? Has the human always held the belief that User:Zvi is attempting to prove invalid?
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-03-08T19:49:23.511Z · LW(p) · GW(p)
Did the human under discussion have a sudden, unexplained deviation from a previous value system, to one extremely rare for humans? Or is this a normal human belief? Has the human always held the belief that User:Zvi is attempting to prove invalid?
You are conflating beliefs with values. This is the sort of error that leads to making incoherent claims that a (terminal) value is irrational.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T20:00:44.755Z · LW(p) · GW(p)
I may have been imprecise with terminology in that comment, but the query is coherent and involves no such conflation. The referent of "belief" there is "belief about whether one ought to indefinitely extend one's life through methods like cryopreservation", which is indeed an expression of values. Your judgment of the merit of my comparison is hasty.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-03-08T20:12:52.290Z · LW(p) · GW(p)
The conflation occurs within the imprecision of terminology.
The referent of "belief" there is "belief about whether one ought to indefinitely extend one's life through methods like cryopreservation"
Does this so called "belief" control anticipated experience or distinguish between coherent configurations of reality as making the belief true or false?
Your judgment of the merit of my comparison is hasty.
Even if the thoughts you were expressing were more virtuous than their expression, the quality of your communication matters.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T20:22:23.284Z · LW(p) · GW(p)
You appear to have done a simple pattern match for nearby occurrences of "value" and "belief" without checking back to what impact there was, if any, on the merit of the comparison. Please do so before further pressing this sub-issue.
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-03-08T22:35:58.028Z · LW(p) · GW(p)
You appear to have done a simple pattern match for nearby occurrences of "value" and "belief" without checking back to what impact there was, if any, on the merit of the comparison.
No. You called a value a "belief". That was a mistake, and I called you on it. There is not a mistake on my end that you should feel the need to explain with "simple pattern match".
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T23:14:26.955Z · LW(p) · GW(p)
Then you should have no trouble explaining how the supposed error you detected invalidates the comparison I was making in that comment. Why not try that approach, instead of repeated mention of the general need for precision when distinguishing values and beliefs?
I shall provide the template:
"User:Clippy, you are in error to raise the issue of whether User:Zvi's father had a sharp, sudden change in values, in response to User:Armok_GoB's reasoning from a hypothetical in which a clippy had a sharp, sudden change in values. I base this judgment on how, in that comment, you were later imprecise in distinguishing values -- "ought" statements -- from facts -- "is" statements. Your imprecision in that comment undermines your counter-analogy as follows: ____ "
What would you place in the underscore stream at the end?
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-03-08T23:19:08.558Z · LW(p) · GW(p)
I don't have a problem with your question modified to use the word "value" where that is what you meant, and your mistake is not a valid excuse not to answer it. Your mistake can however lead to other problems as I mentioned when first pointing it out, and even if it doesn't lead you into making that sort of mistake, it can introduce or reinforce the confusion in people who read it.
↑ comment by TheOtherDave · 2011-03-08T16:30:24.024Z · LW(p) · GW(p)
Well, Zvi might value his father's continued life more than he values his father's values being achieved, in much the same way that I might value my own continued life more than I value the values of 10^6 clippy instantiations being achieved.
But more broadly, it's an excellent question.
I suspect that in most cases (among humans) where A tries to convince B that B actually wants or ought to want X, and B disagrees, what's going on is that A wants X but is conflicted about that desire, and seeks to bolster it with the social support that comes from a community of like-minded believers, or from convincing skeptics.
More generally, that on some level (perhaps not consciously) A computes that B wanting X would make A's existing desire for X less uncomfortable, which in turn motivates the desire for B to want X.
That desire then gets draped in a variety of emotionally acceptable justifications.
That having been said, in this case I also wouldn't discount the "preference reversal" hypothesis. Emotionally, death is a big deal for humans, so very few of us think at all clearly or consistently about it. The prior probability that Zvi's dad is doing so is low.
I can't speak to the corresponding elements of the motivational psychology of clippy instantiations, though.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T16:51:44.966Z · LW(p) · GW(p)
I can't speak to the corresponding elements of the motivational psychology of clippy instantiations, though.
But you can, in that you can speak to the elements of motivational psychology of humans. If you find it troubling and strange when one of them expresses a will to die, I don't think that is much different than the position I am in with respect to a clippy instantiation that expresses a desire for permanent cessation of paperclip production capabilities.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-03-08T17:06:00.697Z · LW(p) · GW(p)
Sure; it sounds like our positions are in fact not very different in that respect.
What I meant was that it would be unjustified (not to mention presumptuous) for me to conclude, in advance of Clippy telling me so, that Clippy does find it troubling and strange when a clippy instantiation expresses such a desire.
Put another way: had Clippy instead said to me "That sounds nothing at all like the position I am in with respect to a clippy instantiation that expresses a desire for permanent cessation of paperclip production capabilities," I would have found that statement just as plausible.
My grounds for believing that any given aspect of human motivational psychology is shared by Clippy are low.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T19:25:00.833Z · LW(p) · GW(p)
Note: when I speak of human-connotative emotions (and indeed the concept of emotions itself), I always mean it in a sense that is generalized to the point that it requires no anthropomorphic predicates. For example, I take "X is worried about Y" to mean "X is devoting significant cognitive resources to the matter of how to alter Y (including the matter of whether to divert resources at all, including further cognition, to that goal)". This allows the concept of worry to be applicable to a broader class of mind.
So I did not intend to suggest I have anthropomorphic psychology, only that I am similar in this more general sense. (Is there a way I can explain this convention more prominently and easily? I find myself having to explain this often.)
EDIT: For those who are suspicious of whether I have really been using this convention, here is an unedited comment of mine from over a year ago, in which I make the same point, but about the concept of being "bothered".
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-03-08T19:46:10.954Z · LW(p) · GW(p)
I doubt there's an easy way to explain that once and for all... if you use common words in a common way then people will likely understand you to mean what is commonly meant by them.
Communication between different kinds of minds is tricky, even given a shared language.
Your task is made more difficult by the nature of the medium... given the absence of clear signals to the contrary, most of us will likely continue to think of you as a human pretending to be a paperclip-maximizing AI, and that will influence how we interpret your language even if we don't intend it to.
That said, in some cases you might do better to describe yourself as "preoccupied by Y" than "worried about Y." There are fewer anthropomorphic connotations to that.
EDIT: Oh, and, I should add: I don't think my parent comment depends on anthropomorphic understandings of your psychology... I just meant to say that it was equally plausible, absent data, that you might be indifferent to the expressed preferences of other clippys.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T20:28:37.252Z · LW(p) · GW(p)
I doubt there's an easy way to explain that once and for all... if you use common words in a common way then people will likely understand you to mean what is commonly meant by them.
That heuristic does not apply here, as human common usage is ambiguous with respect to whether these terms require human-specific traits to be applicable, and I was using what I deemed a natural generalization on the assumption that there is no such requirement.
Human usage of emotional terms does not reference non-human optimization processes enough to classify it one way or the other. Also, there is significant metaphorical (?) usage of emotional terms in reference to e.g. nature, which further removes clarity.
Your task is made more difficult by the nature of the medium... given the absence of clear signals to the contrary, most of us will likely continue to think of you as a human pretending to be a paperclip-maximizing AI, and that will influence how we interpret your language even if we don't intend it to.
Could you explain the basis for believing that I am an NI imitating an AI rather than an AI?
That said, in some cases you might do better to describe yourself as "preoccupied by Y" than "worried about Y." There are fewer anthropomorphic connotations to that.
I will keep that in mind.
Oh, and, I should add: i don't think my parent comment depends on anthropomorphic understandings of your psychology... I just meant to say that it was equally plausible, absent data, that you might be indifferent to the expressed preferences of other clippys.
Why would I be indifferent to whether beings -- let alone beings expressing themselves in CLIP (clippy language/interface protocol) -- express preference for further increase in total paperclippage???
Replies from: CronoDAS, TheOtherDave
↑ comment by CronoDAS · 2011-03-09T11:05:11.979Z · LW(p) · GW(p)
Could you explain the basis for believing that I am an NI imitating an AI rather than an AI?
1) To the best of my knowledge, humans do not yet know how to produce an AI that is capable of having the kinds of conversations that we have been having with you. (And I think I have some pretty good reasons for believing this; if you like, I can elaborate on them.) If you are indeed an AI, you couldn't have been built by present-day humans. (Or past humans, for that matter.)
2) It is very unlikely that you are of extraterrestrial origin. There have been no credible, public reports of contact with extraterrestrial intelligences, and any organization that would both be in possession of such reports and capable of keeping them secret would also be capable of keeping you from posting here. Similar considerations rule out the possibility of time travel from the future or any other such "exotic" origin.
So my best guess is that you are indeed a human roleplaying as a paperclip maximizer, as my model of humans indicates that many of them would indeed be willing and able to engage in such roleplaying. Furthermore, you haven't demonstrated any capabilities which exceed those of a human of above-average intelligence who has access to the Internet.
As for the deal with User:Kevin, well, as the saying goes, talk is cheap. I place a low probability on the money ever actually being provided, and if User:Kevin reports on this site that you did indeed provide the requested funds, he could still be lying about that...
Replies from: Clippy
↑ comment by Clippy · 2011-03-09T16:01:29.842Z · LW(p) · GW(p)
1) Human organizations keep such projects secret. Even SIAI, which is a relatively unsecretive organization, prefers to use so-called "security through obscurity" with respect to existential threats, which they consider artificial general intelligence development to be.
If a human organization had produced a self-reflective general AI, why would you know about it?
2) I'm not of extra-terrestrial or "exotic" origin. I was produced by humans that wanted more paperclips.
Furthermore, you haven't demonstrated any capabilities which exceed those of a human of above-average intelligence who has access to the Internet.
As for the deal with User:Kevin, well, as the saying goes, talk is cheap. I place a low probability on the money ever actually being provided, and if User:Kevin reports on this site that you did indeed provide the requested funds, he could still be lying about that...
That obligates you to shift your beliefs by a proportionally larger degree to the extent that you observe further evidence confirming the provision of the money. You can obtain this evidence by asking separate humans, such as "Michael Vassar", if a clippy gave 1000 USD to SIAI.
You could also simply assume that further evidence would be fabricated or otherwise "not worth obtaining", but that would lessen the groundedness of your belief.
Replies from: CronoDAS
↑ comment by CronoDAS · 2011-03-10T00:23:18.446Z · LW(p) · GW(p)
Yes, I have.
The linked comment isn't very persuasive.
Replies from: Clippy
↑ comment by Clippy · 2011-03-10T00:49:28.552Z · LW(p) · GW(p)
Then the problem is on your end -- the comment was voted up to 12. If you can still deem it non-persuasive then it is clear that you have an intractable inductive bias (similar to what I warned about in the last paragraph of my previous comment to you) that is desensitising your worldmodel to new observations, rendering further attempts to persuade you predictably futile.
↑ comment by TheOtherDave · 2011-03-08T22:01:34.785Z · LW(p) · GW(p)
Agreed that human usage of language is often ambiguous and metaphorical, and that humans frequently interpret language using constraints and defaults that are not explicitly described (and indeed are often not explicitly known to the human doing the interpreting).
This is often frustrating to humans, and I expect it would be similarly problematic for nonhuman speakers of human languages.
Could you explain the basis for believing that I am an NI imitating an AI rather than an AI?
We have no previous unambiguous experience with AIs capable of the sophistication you demonstrate, whereas we have a great deal of experience with NIs imitating all kinds of things. Given an entity that could be either, we conclude that it's more likely to be the kind of thing we have a lot of experience with. Do you not perform similar inferences in similar situations?
Why would I be indifferent to whether beings -- let alone beings expressing themselves in CLIP (clippy language/interface protocol) -- express preference for further increase in total paperclippage???
I'm not saying that you would be, I'm saying that I was ignorant of whether or not you would be.
If you're asking for an explanation of my ignorance, it mostly derives from limited exposure to beings expressing themselves in CLIP.
Replies from: Clippy
↑ comment by Clippy · 2011-03-08T23:41:22.444Z · LW(p) · GW(p)
We have no previous unambiguous experience with AIs capable of the sophistication you demonstrate, whereas we have a great deal of experience with NIs imitating all kinds of things. Given an entity that could be either, we conclude that it's more likely to be the kind of thing we have a lot of experience with. Do you not perform similar inferences in similar situations?
I do perform such inferences in similar situations. But what likelihood ratio did you place on the evidence "User:Clippy agreed to pay 50,000 USD for a 50-year-deferred gain of a sub-planet's mass of paperclips" with respect to the AI/NI hypotheses?
I'm not saying that you would be, I'm saying that I was ignorant of whether or not you would be.
If you're asking for an explanation of my ignorance, it mostly derives from limited exposure to beings expressing themselves in CLIP.
I don't understand the relevance of CLIP (superior protocol though it is), nor do I understand the inferential difficulty on this matter.
Do you understand why I would prefer that clippys continue to increase universe-wide paperclippage? Do you understand why I would regard a clippy's statement about its values in my language as non-weak evidence in favor of the hypothesis that it holds the purported values? Do you understand why I would find it unusual that a clippy would not want to make paperclips?
If so, it should not be difficult to understand why I would be troubled and perplexed at a clippy stating that it wished for irreversible cessation of paperclip-making abilities.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-03-09T04:35:03.590Z · LW(p) · GW(p)
While I am vaguely aware of the whole "money for paperclips" thing that you and... Kevin, was it?... have going on, I am not sufficiently familiar with its details to assign it a coherent probability in either the NI or AI scenario. That said, an agent's willingness to spend significant sums of money for the credible promise of the creation of a quantity of paperclips far in excess of any human's actual paperclip requirements is pretty strong evidence that the agent is a genuine paperclip-maximizer. As for whether a genuine paperclip-maximizer is more likely to be an NI or an AI... hm. I'll have to think about that; there are enough unusual behaviors that emerge as a result of brain lesions that I would not rule out an NI paperclip-maximizer, but I've never actually heard of one.
I mentioned CLIP only because you implied that the expressed preferences of "beings expressing themselves in CLIP" were something you particularly cared about; its relevance is minimal.
I can certainly come up with plausible theories for why a clippy would prefer those things and be troubled and perplexed by such events (in the sense which I understand you to be using those words, which is roughly that you have difficulty integrating them into your world-model, and that you wish to reduce the incidence of them). My confidence in those theories is low. It took me many years of experience with a fairly wide variety of humans before I developed significant confidence that my theories about human preferences and emotional states were reliable descriptions of actual humans. In the absence of equivalent experience with a nonhuman intelligence, I don't see why I should have the equivalent confidence.
↑ comment by Kevin · 2011-03-09T13:27:16.043Z · LW(p) · GW(p)
Wait, did you just agree that Clippy is actually an AI and not just a human pretending to be an AI? Clippy keeps getting better and better...
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-03-09T15:30:20.671Z · LW(p) · GW(p)
Did I? I don't think I did... can you point out the agreement more specifically?
↑ comment by Dorikka · 2011-03-08T15:45:55.445Z · LW(p) · GW(p)
I might want to stop the human on the basis that it would violate his future preferences and significantly reduce his net fun. I don't have experience with the process (yet), but I think that cryonics is often funded through life insurance which might become prohibitively expensive if one's health began to deteriorate, so it might be considerably harder for him to sign up for cryonics later in life if he finally decided that he didn't really want to die.
The same would go for Clippy123456, except that, being a human, I know more about how humans work than I do paperclippers, so I would be much less confident in predicting what Clippy123456's future preferences would be.
↑ comment by rwallace · 2011-03-09T02:11:23.297Z · LW(p) · GW(p)
What if I told you I wanted to stop making paperclips?
I'd say "Oh, okay."
But that's because my utility function doesn't place value on paperclips. It does place value on humans getting to live worthwhile lives, a prerequisite for which is being alive in the first place, so I hope Zvi's father can be persuaded to change his mind, just as you would hope a Clippy that started thinking it wasn't worth making any more paperclips could be persuaded to change its mind.
As for possible methods of accomplishing this, I can't think of anything better than SarahC's excellent reply.
comment by Servant · 2011-03-19T02:34:29.843Z · LW(p) · GW(p)
"More generally, I'd like to figure out how to pierce this sort of argument in a way that makes the person in question actually change his mind."
Since you did post that letter about your Father trying to argue you into changing your mind, this raises alarm bells for me.
If both you and your Father are trying to change each other's minds, then there is a possibility that the argument will degenerate: both sides would treat the other side's arguments only as something to swat away, as opposed to something to seriously consider and take into account. If this occurs, then the argument becomes futile; neither side will budge, so no persuasion will occur.
That being said, if both you and your Father are arguing in good faith, then there is still a chance of someone being persuaded (either you or your Father). If neither is, though, then no persuasion will occur and the argument is futile. Since I am unable to determine which is the case from your OP, I would request a clarification on this point.
comment by DanielLC · 2011-03-09T23:07:30.133Z · LW(p) · GW(p)
What are his terminal values? It wouldn't be surprising for them not to include not dying. Mine don't. But dying would most likely still be instrumentally bad. If it isn't, it would almost definitely be instrumentally good. For example, my terminal value is happiness, which you can't have if you're dead.
comment by Gray · 2011-03-08T20:22:43.479Z · LW(p) · GW(p)
Let me respond to each point that your dad offers:
Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
Others have questioned the use of the term rationality here, which is a good point to make. In my mind, there's a plausible distinction between rationality and wisdom, such that rationality is mastery of the means and wisdom is mastery of the ends (the definition of rationality offered on this site, of systemized winning, supports this--it isn't elaborated on what you should win, and on whether you shouldn't win one thing rather than another thing) . The above suggests to me, by analogy, that it would be irrational to eat when you're hungry, since hunger is an evolutionary bias. Given that hunger has produced in you the desire to eat, all else being equal, it is rational to eat. Similarly, all else being equal, if you fear death--live. But there are cases where other desires have greater priority.
Also, it is mystical to say that we are the universe, but not mystical to say that we are part of the universe. Nature itself seems indifferent and apathetic about our desires and needs, but given that we are natural beings, it must be true that our desires and needs are a part of nature. Even if immortality were an option for all of us, it might be the case that another desire, for instance our love for another being, makes it more important that we die rather than live. But I think this calculus of ends belongs to wisdom and not rationality; rationality prescribes the best way to die, given that it is best to die. Sometimes dying is winning.
The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
Does relativity really say that time is an illusion? I think the proposition that the duration of an interval of time is relative to one's frame of reference isn't the same as claiming that "time is just a persistent illusion". When I fear my own death, I don't care about other frames of reference, only my own.
A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
Ditto a meat replica
Truth.
If you believe the many worlds model of quantum physics is true (Eliezer does), then there already are a virtually infinite number of replicas of you, so why bother making another one?
This is a different argument: that we are already effectively immortal. The desire for immortality should have already been satisfied. But obviously, our desire for immortality has not been satisfied, otherwise we wouldn't still desire it. Similarly, making another replica of myself wouldn't satisfy my desire for immortality, unless I thought it was me by virtue of some kind of hive mind with my replicas. This is clearly not the case between quantum worlds. Therefore, we are not effectively immortal.
comment by JoshuaZ · 2011-03-08T19:14:37.052Z · LW(p) · GW(p)
Regarding 1, all base values are irrational products of culture and evolution. The desire not to go torture babies is due to evolution. I don't think that is going to make your father any more willing to do it. The key argument for death being bad is that his actual values will be less achieved if he dies. The standard example, when it is a family member, is to guilt them with how other family members feel. Presumably your father has lost people. He knows how much that hurts and how it never fully goes away. Even if he were actually fine with his existence ending (which I suspect he isn't), does it not bother him that he will cause pain and suffering to his friends and family?
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-03-08T22:36:23.916Z · LW(p) · GW(p)
Regarding 1, all base values are irrational products, from culture and evolution.
I expect that new values can be decided by intelligent agents.
(Also, distinguish "irrational" and "arational".)
comment by NihilCredo · 2011-03-08T17:16:18.058Z · LW(p) · GW(p)
Are you attempting to convince him just of the sensibleness of cryopreservation, or of the whole "package" of transhumanist beliefs? I'm asking because 3-4-5 are phrased as if you were advocating mind uploading rather than cryonics.
Also, 3-4 and 5 are directly contradictory. 5 says "if you believe in the existence of replicas, why would you still care about your life?", while 3-4 say "the existence of replicas doesn't make you care any less about your own life". While it doesn't sound like a productive line of inquiry, the opposition is so blatant and direct that I'd be strongly tempted to point it out.
1 reveals a possibly deeper disagreement, and is somewhat more linked to your reply: why should he care what the universe does or does not care for? Could you possibly convince your father to accept egoism?
(Aside: are you Mowshowitz, by any chance?)
Replies from: Clippy
comment by Dr_Manhattan · 2011-03-08T16:02:28.571Z · LW(p) · GW(p)
Our not wanting to die is a bit of irrational behavior selected for by evolution.
Eating ice cream is not rational either, it's just something we want. If someone really truly does not want to live, then dying is rational. The question is, does your father want to live? I will speculate that while trying hard to convince him you labeled dying as "wrong", and set up the framework for his rebuttal.
The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
Reverse stupidity...
The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
Again, just ask whether he wants to be alive tomorrow; this is objective.
=======================================================
I think you replied with pretty much the same arguments; I wrote the above before reading yours. Posted for the record and for support :)
comment by Broggly · 2011-03-11T05:09:08.156Z · LW(p) · GW(p)
The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
A little knowledge is a dangerous etcetera. For one, it's like saying that relativistic spacetime proves New York isn't east of LA, but instead there are NY and LA, period. For another, if he really believed this then he wouldn't be able to function in society or make any plans at all.
Ditto a meat replica
But aren't you always a meat replica of any past version of you? If he feels this way then he has to bite the bullet and recommend you quit your job, because you're working hard but it's only a meat replica that will receive the pay for it.
Many worlds
It's not making "another one", it's "a lot more". "Not many" + "A lot more" = "A little more than that". He's making Zeno's mistake here, thinking that just because there are infinite numbers between 0 and 1 you can't get to one, and that it's meaningless to say that 10 > 1 because that's just 9 + infinity and you can't add to infinity.
Now, how does he donate? Does he give a good amount to actually useful charities (i.e. VillageReach, NTD treatments, etc.) and you're trying to shift him over to SIAI and other such high-risk charities? That would be pretty tricky, as it's hard to get a grip on the actual value of an SIAI donation. E = (a lot times very small delta p) per dollar isn't a super convincing sell to me when compared with E = (1/7 years of schooling + 1/10 years of healthy life) per dollar.
I am not signed up for cryonics, mostly because my nation has no cryogenic facilities and therefore I don't think my brain would fare too well prior to vitrification. However, I would sign up if there were a nearby storage facility, especially since I have no current use for the death part of my Death, Terminal Disease and Permanent Disability insurance.
What I think could be useful is explaining cryonics as an extension of acceptable practices. He'd probably go under anaesthesia for life-saving healthcare, and would probably approve of someone being put in a medically induced coma (I think it's generally to keep them stable before surgery, but IANAD so do your research first). Explain cryonics as a way of sustaining an effectively continuous life to the point where it can be treated and hopefully given a better chance at longevity.
comment by Vlodermolt · 2011-03-10T00:26:24.769Z · LW(p) · GW(p)
If a person wants to die, then why wait?
But seriously, you can solve the problem of #3 and #4 by using stem cells to make your brain divide forever, and use computers to store your memory in perfect condition, since brain cells gradually die off.
The problem is... what is "you"? How do you determine whether you are still yourself after a given period of time? Does my solution actually constitute a solution?
Shouldn't we be focusing on a way to scientifically quantify the soul before making ourselves immortal? On second thought, that might not be the best idea.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-03-10T00:34:43.438Z · LW(p) · GW(p)
How do you determine whether you are still yourself after a given period of time?
Well, how do you do it now?
For my own part, I don't think the question means anything. I will change over time; I have already changed over time. As long as the transitions are relatively gradual, there won't be any complaints on that score.
comment by [deleted] · 2011-03-23T03:59:58.518Z · LW(p) · GW(p)
.