Abnormal Cryonics

post by Will_Newsome · 2010-05-26T07:43:49.650Z · LW · GW · Legacy · 420 comments

Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here [1] inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position [2] to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):

Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:

Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.

One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

 

[1] I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.

[2] To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? Or do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.

[3] By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.

[4] Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.

420 comments


comment by avalot · 2010-05-26T15:37:30.743Z · LW(p) · GW(p)

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."

Hibernation will likely become a commonly used "last resort" for many, many critical cases (instead of letting them die, you freeze 'em until you've gone over their chart another time, talked to some colleagues, called around to see if anyone has an extra kidney, or even just slept on it, at least). When your loved one is in the fridge, and you're being told that there's nothing left to do and that we're going to have to thaw them and watch them die, your next question is going to be "Can we leave them in the fridge a bit longer?"

Hibernation will sell people on the idea that fridges save lives. It doesn't have to be much more rational than that.

If you're young, you might be better off pushing hard to help that tech go mainstream faster. That will lead to mainstream cryo faster than promoting cryo, and once cryo is mainstream, you'll be able to sign up for cheaper, probably better cryo, and more importantly, one that is integrated into the medical system, where they might transition you from hibernation to cryo without needing to make sure you're clinically dead first.

I will gladly concede that, for myself, there is still an irrational set of beliefs keeping me from buying into cryo. The argument above may just be a justification I found to avoid biting the bullet. But maybe I've stumbled onto a good point?

Replies from: cousin_it
comment by cousin_it · 2010-05-26T15:44:14.762Z · LW(p) · GW(p)

I don't think you stumbled on any good point against cryonics, but the scenario you described sounds very reassuring. Do you have any links on current hibernation research?

Replies from: avalot
comment by avalot · 2010-05-26T16:05:09.160Z · LW(p) · GW(p)

Maybe it's a point against investing directly in cryonics as it exists today, and in favor of working through the indirect approach that is most likely to lead to good cryonics sooner. I'm much, much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comments threads on the TED website could use a few more rationalists... Lots of smart people there thinking with the wrong body parts.

May 2009: NIH awards a $2,227,500 grant

2006: Doctors chill, operate on, and revive a pig

Replies from: magfrump
comment by magfrump · 2010-05-26T23:01:08.098Z · LW(p) · GW(p)

Voted up for extensive linkage

comment by Rain · 2010-05-26T16:42:33.683Z · LW(p) · GW(p)

An interesting comparison I mentioned previously: the cost to Alcor of preserving one human (full-body) is $150,000. The recent full annual budget of SIAI is on the order of (edit:) $500,000.

Replies from: alyssavance, Robin, Roko
comment by alyssavance · 2010-05-26T17:39:04.016Z · LW(p) · GW(p)

Cryonics Institute is a factor of 5 cheaper than that, the SIAI budget is larger than that, and SIAI cannot be funded through life insurance while cryonics can. And most people who read this aren't actually substantial SIAI donors.

Replies from: Rain, Autumn
comment by Rain · 2010-05-26T17:49:05.005Z · LW(p) · GW(p)

You can't assign a life insurance policy to a non-profit organization?

Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?

Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?

Do people who aren't donors not want to know potential cost ratios regarding the arguments specifically made by the top level post?

Replies from: alyssavance
comment by alyssavance · 2010-05-26T17:58:55.485Z · LW(p) · GW(p)

"You can't assign a life insurance policy to a non-profit organization?"

You can, but it probably won't pay out until relatively far into the future, and because of SIAI's high discount rate, money in the far future isn't worth much.

"Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?"

Yes. The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable.

"Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?"

Probably not, he just wasn't being precise. SIAI's financial data for 2008 is available here (guidestar.org) for anyone who doesn't believe me.

Replies from: Rain
comment by Rain · 2010-05-26T19:27:01.023Z · LW(p) · GW(p)

The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable.

Please provide evidence for this claim. I've heard contradictory statements to the effect that even $150,000 likely isn't enough for long term viability.

Probably not, he just wasn't being precise.

I'm curious how the statement, "our annual budget is in the $200,000/year range", may be considered "imprecise" rather than outright false when compared with data from the source he cited.

SIAI Total Expenses (IRS form 990, line 17):

  • 2006: $395,567
  • 2007: $306,499
  • 2008: $614,822
Replies from: CarlShulman
comment by CarlShulman · 2010-05-26T21:27:01.549Z · LW(p) · GW(p)

I sent Anissimov an email asking him to clarify. He may have been netting out Summit expenses (matching the cost of venue, speaker arrangements, etc. against ticket revenue). Also note that 2008 was followed by a turnover of all the SIAI staff except Eliezer Yudkowsky, and Michael Vassar then cut costs.

Replies from: mranissimov
comment by mranissimov · 2010-05-26T22:08:31.355Z · LW(p) · GW(p)

Hi all,

I was completely wrong in my budget estimate; I apologize. I wasn't including the Summit, and I was just estimating the cost based on my understanding of salaries + misc. expenses. I should have checked Guidestar. My view of the budget also seems to have been slightly skewed because I frequently check the SIAI Paypal account, which many people use to donate, but I never see the incoming checks, which are rarer but sometimes make up a large portion of total donations. My underestimate of money coming in contributed to my underestimate of money going out.

Again, I'm sorry, I was not lying, just a little confused and a few years out of date on my estimate. I will search over my blog to modify any incorrect numbers I can find.

Replies from: Rain
comment by Rain · 2010-05-27T17:37:01.467Z · LW(p) · GW(p)

Thank you for the correction.

comment by Autumn · 2010-05-27T04:15:47.975Z · LW(p) · GW(p)

You could fund SIAI through life insurance if you list them as a beneficiary just as you would with cryonics.

comment by Robin · 2010-05-27T16:52:52.828Z · LW(p) · GW(p)

That's a very good point. It seems there is some dispute about the numbers, but the general point is that it would be a lot cheaper to fund SIAI, which may save the world, than to cryogenically freeze even a small fraction of the world's population.

The point about life insurance is moot. Life insurance companies make a profit so having SIAI as your beneficiary upon death wouldn't even make that much sense. If you just give whatever you'd be paying in life insurance premiums directly to SIAI, you're probably doing much more overall good than paying for a cryonics policy.

comment by Roko · 2010-05-26T19:50:52.839Z · LW(p) · GW(p)

CI costs $30K, and you only have to pay about 9K if you're young, and not up front -- you just pay your insurance premiums.

comment by Vladimir_M · 2010-05-26T23:44:41.926Z · LW(p) · GW(p)

I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.

First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)

Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.

This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, and giving my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.

That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.

Replies from: Roko, kodos96, JoshuaZ, byrnema, red75
comment by Roko · 2010-05-27T10:40:56.858Z · LW(p) · GW(p)

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time.

Would it change your mind if you discovered that you're living in a simulation right now?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T20:40:59.995Z · LW(p) · GW(p)

Roko:

Would it change your mind if you discovered that you're living in a simulation right now?

It would probably depend on the exact nature of the evidence that would support this discovery. I allow for the possibility that some sorts of hypothetical experiences and insights that would have the result of convincing me that we live in a simulation would also have the effect of dramatically changing my intuitions about the question of personal identity. However, mere thought-experiment considerations of those I can imagine presently fail to produce any such change.

I also allow for the possibility that this is due to the limitations of my imagination and reasoning, perhaps caused by unidentified biases, and that actual exposure to some hypothetical (and presently counterfactual) evidence that I've already thought about could perhaps have a different effect on me than I presently expect it would.

For full disclosure, I should add that I see some deeper problems with the simulation argument that I don't think are addressed in a satisfactory manner in the treatments of the subject I've seen so far, but that's a whole different can of worms.

Replies from: Roko
comment by Roko · 2010-05-27T21:16:56.742Z · LW(p) · GW(p)

Well, a concrete scenario would be that the simulators calmly reveal themselves to you and demonstrate that they can break the laws of physics, for example by just wiggling the sun around in the sky, disconnecting your limbs without blood coming out or pain, making you float, etc.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T21:40:20.359Z · LW(p) · GW(p)

That would fall under the "evidence that I've already thought about" mentioned above. My intuitions would undoubtedly be shaken and moved, perhaps in directions that I presently can't even imagine. However, ultimately, I think I would be led to conclude that the whole concept of "oneself" is fundamentally incoherent, and that the inclination to hold any future entity or entities in special regard as "one's future self" is just a subjective whim. (See also my replies to kodos96 in this thread.)

Replies from: Roko
comment by Roko · 2010-05-27T22:07:59.862Z · LW(p) · GW(p)

Interesting!

Seems a bit odd to me, but perhaps we should chat in more detail some time.

comment by kodos96 · 2010-05-26T23:59:45.292Z · LW(p) · GW(p)

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain

Would it change your mind if that computer program [claimed to] strongly identify with you?

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T00:11:30.459Z · LW(p) · GW(p)

I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?

Replies from: kodos96
comment by kodos96 · 2010-05-27T00:25:00.268Z · LW(p) · GW(p)

Well right, obviously a program consisting of "printf("I am Vladimir_M")" wouldn't qualify... but a program which convincingly claimed to be you... i.e. had access to all your memories, intellect, inner thoughts, etc., and claimed to be the same person as you.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T00:53:50.444Z · LW(p) · GW(p)

No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me.

I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.

Replies from: kodos96
comment by kodos96 · 2010-05-27T07:06:19.923Z · LW(p) · GW(p)

If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T21:04:28.375Z · LW(p) · GW(p)

For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.

Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly that he is the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable ease of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted with false memories identical to my present ones.

Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it's a matter of strong subjective preferences.

Replies from: jimrandomh
comment by jimrandomh · 2010-05-27T21:27:11.718Z · LW(p) · GW(p)

Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?

Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.

comment by JoshuaZ · 2010-05-26T23:59:21.597Z · LW(p) · GW(p)

While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn't identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.

Your point about weirdness signaling is a good one, and I'd expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the expected utility of any random weird idea is likely to be so low that the cost of even putting in the effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples, just how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, the vast majority of these, almost all LW readers will never investigate, for essentially this sort of utility heuristic.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-27T00:37:02.795Z · LW(p) · GW(p)

JoshuaZ:

While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.

The problem here is one of a continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject's identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan's principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.

In any case, I honestly don't see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T00:46:42.752Z · LW(p) · GW(p)

This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.

Replies from: RichardW
comment by RichardW · 2010-05-27T13:57:53.987Z · LW(p) · GW(p)

I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah

In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.

Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.

Replies from: JenniferRM
comment by JenniferRM · 2010-05-27T21:43:08.758Z · LW(p) · GW(p)

I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other.

Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own.

Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point.

But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.

Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now... unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost "continuity of awareness" in the middle because your brain will go into a repair and update mode that's not capable of sensing your environment or continuing to compute "continuity of awareness".

If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.

Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you're mostly focused on your "contextual value" (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).

The real thing to which you should be paying attention (other than to make sure they don't stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.

For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.

Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic argument for cryonics, which I'll consider to have been fully resolved either (1) once I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner or (2) once I have a solid argument the other way which has led me and everyone I care about including my family and close friends to have taken the necessary steps and signed ourselves up.

When I set up a "drake equation for cryonics" and filled in the probabilities under optimistic (inside view) calculations I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view) I found that the expected value was epsilon and realized that my model was flawed because it didn't even have terms for negative value outcomes like "loss of value in 'some other context' because of cryonics/simulationist interactions".
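(To make the shape of such a calculation concrete, here is a minimal sketch of the kind of multiplicative expected-value model being described. Every probability and dollar figure below is an illustrative placeholder of my own, not a number from the comment above; and as the comment notes, a fuller model would also need terms for negative-value outcomes.)

```python
# A minimal sketch of a "Drake equation for cryonics": the expected value of
# signing up, modeled as a product of conditional probabilities times the value
# placed on revival, minus the cost. All numbers are illustrative placeholders.

def cryonics_expected_value(
    p_preserved_well,       # you are cryopreserved without fatal damage
    p_org_survives,         # the cryonics organization lasts long enough
    p_revival_tech,         # revival technology is eventually developed
    p_someone_revives_you,  # someone actually chooses to revive you
    value_of_revival,       # dollar-equivalent value placed on being revived
    total_cost,             # lifetime premiums / membership fees
):
    p_success = (p_preserved_well * p_org_survives
                 * p_revival_tech * p_someone_revives_you)
    return p_success * value_of_revival - total_cost

# Optimistic "inside view" placeholders vs. pessimistic "outside view" ones:
print(cryonics_expected_value(0.9, 0.7, 0.5, 0.5, 10_000_000_000, 100_000))
print(cryonics_expected_value(0.2, 0.2, 0.05, 0.1, 10_000_000_000, 100_000))

# A fuller model would also subtract terms for negative-value outcomes
# (e.g. revival into a bad situation), which this sketch deliberately omits.
```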

So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.

Replies from: RichardW, Eneasz
comment by RichardW · 2010-05-28T12:34:04.900Z · LW(p) · GW(p)

Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.

No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.

Replies from: Blueberry
comment by Blueberry · 2010-05-28T14:33:10.512Z · LW(p) · GW(p)

I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)

Replies from: RichardW, byrnema
comment by RichardW · 2010-05-28T17:30:49.465Z · LW(p) · GW(p)

Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.

We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.

Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.

Replies from: JenniferRM
comment by JenniferRM · 2010-06-01T05:17:10.516Z · LW(p) · GW(p)

Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.

If a proposed intrinsic value is questioned and justified with another value statement, then the supposed "intrinsic value" is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of "simple" fact. And you are quite right that these facts (by definition as "non value statements") will not be motivating.

We fundamentally like vanilla (if we do) "because we like vanilla" as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P

On the other hand... basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create "flow" where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.

As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. The statement "I don't care what I will experience tomorrow" can be interpreted as a prediction that "Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions". This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you're really experiencing anhedonia), or else you are a non-human intelligence and I have to start from scratch trying to figure you out.

Taking it for granted that you and I can both safely predict that you will continue to enjoy life tomorrow, an inductive proof can be developed that "unless something important changes from one day to the next" you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long-term values it is the "something important changing" issue that they bring up.

For example, many people think that they only care about their children... until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.

Other people can't (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.

But you are not making any of these points so that they can even be objected to by myself or others... You're deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet "learned how to lose an argument gracefully and become smarter by doing so".

And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or "rationality") that a person has cultivated up to the point when they stake out one of the more "normal" anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising - issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(

Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn't make them bad people - in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.

That you are persisting in your position is a good sign, because you're clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don't push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.

On the other hand, I have limited time and limited resources and I can't afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent "childhood" :-)

comment by byrnema · 2010-05-28T16:41:22.129Z · LW(p) · GW(p)

Why is psychological continuity important? (I can see that it's very important for an identity to have psychological continuity, but I don't see the intrinsic value of an identity existing if it is promised to have psychological continuity.)

In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.

In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.

Less unique, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.

Replies from: RichardW
comment by RichardW · 2010-05-30T12:05:51.817Z · LW(p) · GW(p)

I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html

He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.

Replies from: byrnema
comment by byrnema · 2010-05-30T16:23:21.138Z · LW(p) · GW(p)

Interesting. However, I don't see any problems with the nature of personal identity. My hunch is that I'm actually not confused about it.

In a lifetime, there is continuity of memories and continuity of values and goals even as they slowly change over time. I can trust that the person who wakes up tomorrow will be 'me' in this sense. She may be more refreshed and have more information, but I trust her to act as "I" would. On the other hand, she might be excessively grouchy or suffer a brain injury, in which case this trust is misplaced. However, she is not me, personal-identity-wise, for a variety of reasons:

  • I do not have access to her stream of consciousness.
  • I do not have operative control of her body.

[In both cases, the reason is because her thoughts and actions take place in the future. Eventually, I will have access to her thoughts and control of her body and then she becomes "me".]

  • Personal identity exists only for a moment. It is the running of some type of mental thought process.

Suppose I was duplicated overnight, and two byrnemas woke up in the morning. Both byrnemas would have continuity with the previous byrnema with respect to memories, values and goals. However, neither of them is the personal identity of the byrnema of the night before, just as whenever I wake up I'm not the personal identity of the night before, exactly for the reasons I bulleted.

With the two duplicates, there would be two distinct personal identities. You simply count the number of independent accesses to thoughts and motor control of bodies and arrive at two. Both byrnemas have a subjective experience of personal identity, of course, and consider the other byrnema an "other". However, this "other" is similar to oneself in a way that is unprecedented, a twin sister that also has your memories, goals and values.

I think duplicates would be most problematic for loved ones. They would find themselves in a position of loving both duplicates, and being able to empathize with both, yet not caring so much if one was deleted while being very distraught if both were deleted. That would be strange, because we haven't had any experience with that, but I'm sure we would adjust well enough.

People would take risks with their person, but only after checking and double-checking that their backup was recent and well. People wouldn't care if their person died -- they would understand (now through experience rather than introspection) that what makes them them is their memories, values, goals and a moment. And the moment is transient anyway. The illusion of self existing for more than a moment would be broken.

The post you linked to by Ben Best mentioned the impossibility of a personal identity in two different physical locations. Actually, interestingly, it would be possible to have an identity in two physical locations. To do this, you would need to stream the sensory data of two bodies into a single brain, located anywhere. As long as the brain had access to both bodies' sensory data, and could operate both bodies, and there was a single shared stream of consciousness, then that person would be physically located in two places at once. (But this is completely different from just duplicating a person.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-06-01T18:36:44.477Z · LW(p) · GW(p)

If you care about a person, then while you might not care as much if a recent duplicate or a recently duplicated person were lost, you would still care about as much if either of them suffers.

As is implied by my 'recently', the two will diverge, and you might end up with loyalty to both as distinct individuals, or with a preference for one of them.

Also, I don't think parents value each newborn twin less because they have a spare.

comment by Eneasz · 2010-05-28T19:35:19.298Z · LW(p) · GW(p)

For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that led to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.

I'm in the signing process right now, and I wanted to comment on the "work in progress" aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.

The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).

So yeah, the term "working on it" is not correctly applicable to this situation. Someone who's never climbed a flight of stairs may work out for months in preparation, but they really don't need to, and afterwards might be somewhat annoyed that no one who'd climbed stairs before had bothered to tell them so.

Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked "What's CI?" that it's a place that'll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact - on a daily basis - with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don't die, and realize that you never would have.

Replies from: JenniferRM, Alicorn, SilasBarta
comment by JenniferRM · 2010-05-30T20:39:23.277Z · LW(p) · GW(p)

The hard part (and why this is also a work in progress) involves secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.

SilasBarta identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don't currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.

Secondarily, there are similarly complex social issues that come up because I'm married, love my family, am able to have philosophical conversations with them, and don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?

Finally, I've worked on a personal version of a "drake equation for cryonics" and it honestly wasn't a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family - given that they are generally rational in their own personal ways :-)

Lastly, as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas, so whatever decisions I make now may "stick" for the next forty years. Against this I weigh the issue of "best being the enemy of good" because (in point of fact) I'm not safe in any way at all right now... which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit, and to what degree should I carry that "sloppiness calibration" over to the rest of my life?

So, yeah, it's a work in progress.

I'm pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that's their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I'll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.

Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I'd need to cut out of my life (and none of it was coffee from Starbucks or philanthropy or other fluffy BS like that - it was more like the simple quality of my food and whether I'd be able to afford one bedroom vs half a bedroom) and they honestly didn't seem to be worth it. As I've gotten older and richer and more influential (and partly due to influence from this community) I've decided I should review the decision again.

The hard part for me is dotting the i's and crossing the t's (and trying to figure out where it's safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.

Replies from: Eneasz
comment by Eneasz · 2010-06-01T17:47:56.348Z · LW(p) · GW(p)

don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?

You can't hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can't view yourself as the primary source for their actions.

Replies from: DSimon
comment by DSimon · 2010-09-14T14:51:33.229Z · LW(p) · GW(p)

It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.

The question is: to what degree is failing to sign up for cryonics like suicide by negligence?

comment by Alicorn · 2010-05-28T19:39:00.471Z · LW(p) · GW(p)

getting life insurance is trivially easy

I'm not finding this. Can you refer me to your trivially easy agency?

Replies from: Eneasz
comment by Eneasz · 2010-05-28T21:02:08.134Z · LW(p) · GW(p)

I used State Farm, because I've had car insurance with them since I could drive, and renters/owner's insurance since I moved out on my own. I had discounts both for multi-line and loyalty.

Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.

Heck, my agent had to do much more work than I did; previous to this she didn't know that you can designate someone other than yourself as the owner of the policy, which required some training.

Replies from: Alicorn
comment by Alicorn · 2010-05-28T23:02:50.773Z · LW(p) · GW(p)

I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn't complete it for me. That was spooky. I don't want to do that.

Replies from: Eneasz
comment by Eneasz · 2010-05-29T07:06:47.554Z · LW(p) · GW(p)

Huh. That is weird. I don't blame you.

Come to think of it, I didn't even bother asking what the blood sample was for. But I tend to be exceptionally un-private. I don't expect privacy to be a part of life among beings who regularly share their source code.

Replies from: Alicorn
comment by Alicorn · 2010-05-29T19:03:22.088Z · LW(p) · GW(p)

It's not a matter of privacy. I can't think of much they'd put on the list that I wouldn't be willing to let them have. (The agent acted like I could only possibly be worried that they were going to do genetic testing, but I'd let them do that as long as they, you know, told me, and gave me a copy of the results.) It was just really not okay with me that they wanted it for undisclosed purposes. Lack of privacy and secrets shouldn't be unilateral.

comment by SilasBarta · 2010-05-28T19:46:04.813Z · LW(p) · GW(p)

and getting life insurance is trivially easy

Disagree. What's this trivially easy part? You can't buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you'll have to get a medical exam, but still...)

Of course, in fairness, I'm trying to combine it with "infinite banking" by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don't want to limit the policy to a specific term, risking that you'll die afterward and not be able to afford the preservation, when the take-off hasn't happened.)

Replies from: Blueberry
comment by Blueberry · 2010-05-28T19:53:36.045Z · LW(p) · GW(p)

I would think whole life would make more sense than term anyway

Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you'll end up way ahead.

Replies from: SilasBarta, RobinZ
comment by SilasBarta · 2010-05-28T20:04:44.147Z · LW(p) · GW(p)

Yes, I'm intimately familiar with the argument. And while I'm not committed to whole life, this particular point is extremely unpersuasive to me.

For one thing, the extra cost for whole is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.

That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.

If you "buy term and invest the difference", you either have to pay significant taxes on any gains (or even, in some cases, the principle) or lock it up the money until you're ~60. The optimistic "long term" returns of the stock market have shown to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in '08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds -- and especially not after taxes.

Furthermore, if the tax advantages of IRAs are reneged on (which, given developed countries' fiscal situations, is looking more likely every day), they'll most likely be hit before life insurance policies.

So yes, I'm aware of the argument, but there's a lot about the calculation that people miss.

Replies from: HughRistik, Blueberry
comment by HughRistik · 2010-05-28T23:47:40.950Z · LW(p) · GW(p)

It's really hard to understand insurance products with the information available on the internet, and you are right that it is extremely unfriendly to online research. When I investigated whole life vs. term a few years ago, I came to the conclusions that there are a lot of problems with whole life and I wouldn't touch it with a ten foot pole.

For one thing, the extra cost for whole is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.

Actually, there is something far weirder and more insidious going on. By "extra cost," I assume you are referring to the extra premium that goes into the insurance company's cash value investment account, beyond the amount of premium that goes towards your death benefit (aka "face amount," aka "what the insurance company pays to your beneficiary if you die while the policy is in force"). Wait, what? Didn't I mean your cash value account, and were my words "the insurance company's cash value account" a slip of the tongue? Read on...

Let's take a look at the FAQ of the NY Dept. of Insurance, which explains the difference between the face amount of your policy (aka "death benefit," aka "what the insurance company pays to your beneficiary if you die while the policy is in force") and the cash value:

The face amount is the amount of coverage you wish to provide your beneficiaries in the event of death. The cash value is the value that builds up in the policy. The minimum cash values are set by the Insurance Law and reflect an accumulation of your premiums after allowances for company expenses and claims. When you are young, your premiums are more than the cost of insuring your life at that time. Over time the cash value grows, usually tax-deferred, and the owner may be allowed access to that money in the form of a policy loan or payment of the cash value. The face amount of your policy will be higher than your cash value especially in the early years of your policy. If you surrender your policy you will receive the cash value not the face amount. If you die your beneficiaries will receive the face amount.

So, you have a $1 million face amount insurance policy. The premiums are set so that by age 100, "your" cash value investment account will have a value of $1 million. If you die right before turning 100, how much money will your beneficiary get?

If you guessed $1 million face amount + $1 million cash value account = $2 million, you guessed wrong. See the last quoted sentence: "If you die your beneficiaries will receive the face amount." Your beneficiary gets the $1 million face amount, but the insurance company keeps the $1 million investment account to offset their loss (which would instead go to your beneficiary if you had done "buy term and invest the difference").

This is because the cash value account is not your money anymore. The account belongs to the insurance company; I've read whole life policies and seen this stated in the fine print that people don't read. Now, you may think you can access this account, right? Yes and no. It's true that the money in it grows tax-free, but getting your money from the account isn't as simple as you might think.

You can't just take money out of a cash value account. If you want to take money out of the cash value account without surrendering the entire policy, it is not usually a withdrawal; it's a loan. The reason it's called a "loan" is because, as we've established, the account is not really yours, it's the insurance company's! According to the FAQ, here is what happens when you try to take a loan on a cash value account (emphasis mine):

There may be a waiting period of up to three years before a loan is available. If the policy owner borrows from the policy, the cash value is used as collateral, and interest is charged at the rate specified or described in the policy. Any money owed on an outstanding policy loan is deducted from the benefits upon the insured's death or from the cash value if the policy owner surrenders the policy for cash.

As it says, you can get the money out of the cash value account by surrendering your policy... but then you have no life insurance anymore (whereas with buy term and invest the difference, taking money out of an investment account may incur taxes if they are not already paid, but you don't have to cancel your life insurance to do so). See the penultimate sentence of the first quote: "If you surrender your policy you will receive the cash value not the face amount." Your coverage (the "face amount") is gone if you surrender your policy to get the cash values. Here is what happens when you surrender the policy:

When a policy is surrendered, the owner is entitled to at least a portion of the cash value. The actual amount that the owner receives will depend whether there are any outstanding loans or unpaid premiums that can be deducted from the cash value.

With "buy term and invest the difference," if you take money out of your investment account, it doesn't decrease the death benefit of your policy. Another article claims that you can do a partial withdrawal from the cash value account without it being a loan, but it can decrease the death benefit:

You can also make a full or partial withdrawal of your cash value. Depending on your policy and level of cash value, a withdrawal might reduce your death benefit. Exactly how much varies by policy, but in the case of universal life insurance your death benefit would be reduced on a dollar-for-dollar basis. For example, if you had a $100,000 death benefit with a $20,000 cash value and you withdrew $10,000, your resulting death benefit would be $90,000.

In some cases, partially withdrawing your cash value could decimate your death benefit. For some traditional whole life insurance policies, the death benefit could be reduced by more than the amount you withdraw.

The cash surrender values will be spelled out in a schedule in a whole life contract. And for the first 3-5 years, they can be dismal (and would be less than if you had invested the difference and withdrawn it, paying taxes). From the insure.com article (emphasis mine):

Also important to note is the fluctuating rate of return on cash value in this particular whole life insurance policy. Your first year's premium disappears into fees and expenses without a penny into your cash value account. Only at year 4 does the cash value rate of return go positive. That means if you drop this policy within the first few years, you've made a terrible investment. [...] The chart at the right summarizes the estimated average rate of return if you kept this particular life insurance policy 5, 10, 15 or 20 years. Even if you held this policy for 10 years, your estimated cash value average rate of return works out to only 2 percent because you're still making up ground for those expensive first few years. You should be prepared to hold a whole life insurance policy for the long haul in order to make a potentially good investment.

That whole article is a good read. Notice that even though a cash value account can match a "buy term and invest the difference" strategy that accumulates 4.6% a year, your beneficiary does not get the cash value investment if you die:

You may be looking at this example and adding up cash value plus death benefit, but remember: With ordinary whole life insurance policies like this one, your beneficiaries do not receive the cash value when you die; they receive only the death benefit.

So if you die holding the whole life policy, your beneficiary gets $100,000, but if you die with the term strategy, your beneficiary gets $100,000 + the value of the investment account. If you die in year 20, that is $28,000 (I don't know if those dollars are taxed yet or not, but the difference is still stark), making the total gain by your beneficiary $128,000, instead of $100,000 with whole life.
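To make the arithmetic concrete, here is a minimal Python sketch of the two payouts. The $1,000 premium difference and the 4.6% return are illustrative assumptions, not figures from any actual policy (the article's own numbers imply a smaller gap); the point is only the structure: under "buy term and invest the difference" the beneficiary gets the face amount plus the side account, while under whole life they get the face amount alone.

    # Hypothetical numbers for illustration only -- not from a real policy.
    def beneficiary_payout_term(face, annual_difference, rate, years):
        """Buy term, invest the premium difference at `rate`: heirs get the
        death benefit plus whatever has accumulated in the side account."""
        account = 0.0
        for _ in range(years):
            account = (account + annual_difference) * (1 + rate)
        return face + account

    def beneficiary_payout_whole_life(face):
        """Whole life: heirs get the face amount; the insurer keeps the cash value."""
        return face

    face = 100_000       # death benefit in both cases
    difference = 1_000   # assumed extra annual premium charged for whole life
    rate = 0.046         # assumed after-cost return on the side account
    years = 20

    print(round(beneficiary_payout_term(face, difference, rate, years)))  # ~133,000 with these inputs
    print(beneficiary_payout_whole_life(face))                            # 100,000

The printed values depend entirely on the assumed inputs; only the shape of the comparison carries over.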

So, what's the deal with cash value accounts and why are they so wacky? To understand, realize that the cash value account is not an investment vehicle for you; it is a protection for the insurance company. From this article:

Whole life was the name of the original policy form. Premiums were payable for the whole of life (hence the name) or some shorter premium paying period (e.g., to age 65 or for 20 years). Regardless of the premium paying period, a guaranteed cash value was built up such that, at the terminal age of the policy (typically age 95 or 100), the cash value would equal the face amount. Thus, as policyholders got older, the "net amount at risk" to the insurance company (the difference between the cash value and the face amount) would decline while the reserve built up tax free. The true objective was not to build up a "savings account," but rather to create a reserve against a known future claim.

Cash value accounts are for mitigating the risk of insurance companies, so they can make money even though they are insuring you your "whole life" (well, up to age 95-100). In contrast, the way term life insurance policies make money is that a certain percentage of policies expire and are not renewed before the insured dies, so the insurance company keeps those premiums... but this is how insurance in general works, and it's far more straightforward. You can always get a guaranteed renewable term policy, and then actually renew it.

It's very dangerous to bundle life insurance and investments in whole life policies.

Replies from: NoMLM
comment by NoMLM · 2010-05-29T00:56:22.319Z · LW(p) · GW(p)

I believe "buy term and invest the difference" is the slogan of the Amway-like Multi Level Marketer (MLM, legal pyramid scheme) Primerica.

Replies from: HughRistik
comment by HughRistik · 2010-05-29T04:44:22.179Z · LW(p) · GW(p)

That's how I first encountered it, too. But it seems to be mainstream and widely accepted advice that is confirmed independently.

comment by Blueberry · 2010-05-28T20:10:45.823Z · LW(p) · GW(p)

Wow, thanks for all that! Upvoted. I'm biased in favor of DIY, but those are really good points and I didn't realize some of that.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-28T21:47:24.042Z · LW(p) · GW(p)

Hey, glad to help, and sorry if I came off as impatient (more than I usually do, anyway). And I'm in favor of DIY too, which is how I do my mutual fund/IRA investing, and why I complained about how online-unfriendly life insurance is. But the idea behind "infinite banking" (basically, using a mutual whole life insurance plan, which has been around for hundreds of years and endured very hard times robustly, as a savings account) is very much DIY, once you get it set up.

Again, take it with a grain of salt because I'm still researching this...

comment by RobinZ · 2010-05-28T20:10:07.813Z · LW(p) · GW(p)

It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.

comment by byrnema · 2010-05-27T17:05:31.947Z · LW(p) · GW(p)

I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.

Well said.

Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.

I think this is true. Cryonics being the "correct choice" doesn't just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can't imagine people not valuing life extension in this way.

In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time.

I wouldn't pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)

comment by red75 · 2010-06-10T19:01:01.066Z · LW(p) · GW(p)

Well, they say that cryonics works whether you believe in it or not. Why not give it a try?

comment by Liron · 2010-05-26T10:14:34.267Z · LW(p) · GW(p)

I think cryonics is used as a rationality test because most people reason about it from within the mental category "weird far-future stuff". The arguments in the post seem like appropriate justifications for choices within that category. The rationality test is whether you can compensate for your anti-weirdness bias and realize that cryonics is actually a more logical fit for the mental category "health care".

comment by Morendil · 2010-05-28T07:11:35.854Z · LW(p) · GW(p)

This post, like many others around this theme, revolves around the rationality of cryonics from the subjective standpoint of a potential cryopatient, and it seems to assume a certain set of circumstances for that patient: relatively young, healthy, functional in society.

I've been wondering for a while about the rationality of cryonics from a societal standpoint, as applied to potential cryopatients in significantly different circumstances; two categories specifically stand out, death row inmates and terminal patients.

This article puts the cost of a death row inmate (over serving a life sentence) at $90K. This is a case where we already allow that society may drastically curtail an individual's right to control their own destiny. It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

As for terminal patients, this article says:

Aggressive treatments attempting to prolong life in terminally ill people typically continue far too long. Reflecting this overaggressive end-of-life treatment, the Health Care Finance Administration reported that about 25% of Medicare funds are spent in the last 6 months of life (about $68 billion in 2003 or $42,000 per dying patient). Actually, the last 6 months of a Medicare recipient's life consumes about $80,000 for medical services, since Medicare pays only 53% of the bill. Dying cancer patients cost twice the average amount or about $160,000.

These costs are comparable to that charged for cryopreservation. It seems to me that it would be rational (as a cost reduction measure) to offer patients diagnosed with a likely terminal illness the voluntary option of being cryopreserved. At worst, if cryonics doesn't work, this amounts to an "assisted suicide", something that many progressive groups are already lobbying for.
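As a back-of-the-envelope check of the quoted figures (a minimal sketch; the cryopreservation prices below are rough ballpark assumptions, not quotes from any provider):

    # Figures from the quoted article; the cryopreservation prices below are
    # rough ballpark assumptions, not quotes from any provider.
    per_patient_medicare = 42_000         # Medicare's share for the last 6 months of life
    medicare_pays_fraction = 0.53
    medicare_spend_2003 = 68e9            # the quoted 25% of Medicare funds, 2003

    total_end_of_life_cost = per_patient_medicare / medicare_pays_fraction
    implied_dying_patients = medicare_spend_2003 / per_patient_medicare

    print(round(total_end_of_life_cost))    # ~79,000, matching the quoted ~$80,000
    print(round(implied_dying_patients))    # ~1.6 million dying Medicare patients per year

    # Assumed ballpark cryopreservation prices (neuro-only vs. whole body):
    cryo_neuro, cryo_whole_body = 80_000, 200_000
    print(cryo_neuro <= round(total_end_of_life_cost))  # True: the sums are of the same order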

Replies from: cjb, byrnema
comment by cjb · 2010-05-28T19:16:08.514Z · LW(p) · GW(p)

It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

Hm, I don't think that works -- the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right? If you want to suspend the inmate before those appeals then you've curtailed their right to put together a strong defence against being killed, and if you want to suspend the inmate after those appeals then you haven't actually saved any of that money.

.. or did I miss something?

Replies from: Morendil
comment by Morendil · 2010-05-28T19:48:28.469Z · LW(p) · GW(p)

the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right?

Some of it is from more expensive incarceration, but you're right. This has one detailed breakdown:

  • Extra defense costs for capital cases in trial phase $13,180,385
  • Extra payments to jurors $224,640
  • Capital post-conviction costs $7,473,556
  • Resentencing hearings $594,216
  • Prison system $169,617

However, we're assuming that with cryonics as an option the entire process would stay the same. That needn't be the case.

comment by byrnema · 2010-05-28T11:59:55.603Z · LW(p) · GW(p)

It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

Also, depending upon advances in psychology, there could be the opportunity for real rehabilitation in the future. A remorseful criminal afraid they cannot change may prefer cryopreservation.

comment by byrnema · 2010-05-27T21:25:35.909Z · LW(p) · GW(p)

This comment is a more fleshed-out response to Vladimir_M's comment.

This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.

Whether cryonics is the right choice depends on your values. There are suggestions that people who don't think they value revival in the distant future are mistaken about their real values. I think it might be the complete opposite: advocacy of cryonics may be completely missing what it is that people value about their lives.

The reason for this mistake could be that cryonics is such a new idea that we are culturally a step or two behind in identifying what it is that we value about existence. So people think about cryonics a while and just conclude they don’t want to do it. (For example, the stories herein.) Why? We call this a ‘weirdness’ or ‘creep’ factor, but we haven’t identified the reason.

When someone values their life, what is it that they value? When we worry about dying, we worry about a variety of obligations unmet (values not optimized), and people we love abandoned. It seems to me that people are attached to a network of interactions (and value-responsibilities) in the immediate present. There is also an element of wanting more experience and more pleasure, and this may be what cryonics advocates are over-emphasizing. But after some reflection, how do you think most people would answer this question: when it comes to experiencing 5 minutes of pleasure, does it matter if it is you or someone else, if neither of you remembers it?

A lot of the desperation we feel when faced with death is probably a sense of responsibility for our immediate values. We are a bundle of volition that is directed towards shaping an immediate network of experience. I don't really care about anything 200 years from now, and enjoy the lack of responsibility I feel for the concerns I would have if I were revived then. As soon as I was revived, however, I know I would become a bundle of volition directed towards shaping that immediate network of experience.

Considering what we do value about life -- immediate connections, attachments, and interactions -- it makes much more sense to invest in figuring out technology to increase lifespan and prevent accidental death. Once the technology of cryonics is established, I think that there could be a healthy market for people undergoing cryonics in groups. (Not just signing up in groups, but choosing to be vitrified simultaneously in order to preserve a network of special importance to them.)

Replies from: Roko
comment by Roko · 2010-05-29T16:43:30.599Z · LW(p) · GW(p)

This comment may be a case of other-optimizing

e.g.

Considering what we do value about life -- immediate connections, attachments and interactions, it makes much more sense to invest in figuring out technology to increase lifespan and prevent accidental death.

That may be what you value -- but how do you know whether that applies to me?

Replies from: byrnema
comment by byrnema · 2010-05-30T00:45:35.871Z · LW(p) · GW(p)

The 'we' population I was referring to was deliberately vague. I don't know how many people have values as described, or what fraction of people who have thought about cryonics and don't choose cryonics this would account for. My main point, all along, is that whether cryonics is the "correct" choice depends on your values.

Anti-cryonics "values" can sometimes be easily criticized as rationalizations or baseless religious objections. ('Death is natural', for example.) However, this doesn't mean that a person couldn't have true anti-cryonics values (even very similar-sounding ones).

Value-wise, I don't even know whether cryonics is the correct choice for much more than half or much less than half of all persons, but given all the variation in people, I'm pretty sure it's going to be the right choice for at least a handful and the wrong choice for at least a handful.

Replies from: Roko
comment by Roko · 2010-05-30T13:48:34.115Z · LW(p) · GW(p)

My main point, all along, is that whether cryonics is the "correct" choice depends on your values.

Sure. If you don't value your life that much, then cryo is not for you, but I think that many people who refuse cryo don't say "I don't care if I die, my life is worthless to me", and if they were put in a near-mode situation where many of their close friends and relatives had died, but they had the option to make a new start in a society of unprecedentedly high quality of life, they wouldn't choose to die instead.

Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo where revival was as a billionaire in contemporary society, with an appropriate level of professional support and rehab from the cryo company? She would have to be at an extreme of low self-worth to say "my life with my medieval peasant friends was the only thing that mattered to me", and turn down the opportunity to live a new life of learning, comfort, and freedom from constant pain and hunger.

Replies from: Vladimir_M, byrnema
comment by Vladimir_M · 2010-05-31T01:41:42.518Z · LW(p) · GW(p)

Roko:

Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo where revival was as a billionaire in contemporary society, with an appropriate level of professional support and rehab from the cryo company?

This is another issue where, in my view, pro-cryonics people often make unwarranted assumptions. They imagine a future with a level of technology sufficient to revive frozen people, and assume that this will probably mean a great increase in per-capita wealth and comfort, like today's developed world compared to primitive societies, only even more splendid. Yet I see no grounds at all for such a conclusion.

What I find much more plausible are the Malthusian scenarios of the sort predicted by Robin Hanson. If technology becomes advanced enough to revive frozen brains in some way, it probably means that it will be also advanced enough to create and copy artificial intelligent minds and dexterous robots for a very cheap price. [Edit to avoid misunderstanding: the remainder of the comment is inspired by Hanson's vision, but based on my speculation, not a reflection of his views.]

This seems to imply a Malthusian world where selling labor commands only the most meager subsistence necessary to keep the cheapest artificial mind running, and biological humans are out-competed out of existence altogether. I'm not at all sure I'd like to wake up in such a world, even if rich -- and I also see some highly questionable assumptions in the plans of people who expect that they can simply leave a posthumous investment, let the interest accumulate while they're frozen, and be revived rich. Even if your investments remain safe and grow at an immense rate, which is itself questionable, the price of lifestyle that would be considered tolerable by today's human standards may well grow even more rapidly as the Malthusian scenario unfolds.

Replies from: Roko, Roko
comment by Roko · 2010-05-31T13:35:35.400Z · LW(p) · GW(p)

The honest answer to this question is that it is possible that you'll get revived into a world that is not worth living in, in which case you can go for suicide.

And then there's a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.

Replies from: jimrandomh, Vladimir_M
comment by jimrandomh · 2010-05-31T13:42:42.963Z · LW(p) · GW(p)

And then there's a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.

That's a risk for regular death, too, albeit a very unlikely one. This possibility seems like Pascal's wager with a minus sign.

comment by Vladimir_M · 2010-05-31T18:19:39.032Z · LW(p) · GW(p)

That said, I am nowhere near certain that a bad future awaits us, nor that the above-mentioned Malthusian scenario is inevitable. However, it does seem to me the most plausible course of affairs given a cheap technology for making and copying minds, and it seems reasonable to expect that such technology would follow from more or less the same breakthroughs that would be necessary to revive people from cryonics.

Replies from: Roko
comment by Roko · 2010-05-31T18:47:14.504Z · LW(p) · GW(p)

I think that we wouldn't actually end up in a Malthusian regime -- we'd coordinate so that that didn't happen. Especially compelling is the fact that in these regimes of high copy fidelity, you could end up with upload "clans" that acted as one decision-theoretic entity, and would quickly gobble up lone uploads by the power that their cooperation gave them.

comment by Roko · 2010-05-31T13:24:25.580Z · LW(p) · GW(p)

the price of lifestyle that would be considered tolerable by today's human standards may well grow even more rapidly as the Malthusian scenario unfolds.

I think that this is the exact opposite of what Robin predicts: he predicts that if the economy grows at a faster rate because of ems, the best strategy for a human is to hold investments, which would make you fabulously rich in a very short time.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-31T17:59:39.787Z · LW(p) · GW(p)

That is true -- my comment was worded badly and open to misreading on this point. What I meant is that I agree with Hanson that ems likely imply a Malthusian scenario, but I'm skeptical of the feasibility of the investment strategy, unless it involves ditching the biological body altogether and identifying yourself with a future em, in which case you (or "you"?) might feasibly end up as a wealthy em. (From Hanson's writing I've seen, it isn't clear to me if he automatically assumes the latter, or if he actually believes that biological survival might be an option for prudent investors.)

The reason is that in a Malthusian world of cheap AIs, it seems to me that the prices of resources necessary to keep biological humans alive would far outrun any returns on investments, no matter how extraordinary they might be. Moreover, I'm also skeptical if humans could realistically expect their property rights to be respected in a Malthusian world populated by countless numbers of far more intelligent entities.

Replies from: Roko
comment by Roko · 2010-05-31T18:43:15.083Z · LW(p) · GW(p)

Suppose that my biological survival today costs 2,000 MJ of energy per year and 5,000 kg of matter, and that I can spend (say) $50,000 today to buy 10,000 MJ of energy and 5,000 kg of matter. I invest my $50,000 and get cryo. Then the em revolution happens, and the price of these commodities becomes very high, at the same time as the economy (total amount of wealth) grows at, say, 100% per week, corrected for inflation.

That means that every week my investment of 10,000 MJ of energy and 5,000 kg of matter becomes twice as valuable, so after one week I own 20,000 MJ of energy and 10,000 kg of matter. Though, at the same time, the dollar price of these commodities has also increased a lot.

The end result: I get very very large amounts of energy/matter very quickly, limited only by the speed of light limit of how quickly earth-based civilization can grow.

The above all assumes preservation of property rights.
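Here is a minimal sketch of that compounding, taking the assumptions above at face value (notably, that the doubling applies to these particular commodities, which is exactly the step disputed in the reply below):

    # Assumptions from the comment above: the investment's real value doubles weekly.
    energy_mj, matter_kg = 10_000, 5_000      # initial endowment bought for $50,000
    weekly_real_growth = 1.0                  # 100% per week, per the stated assumption
    survival_cost_mj_per_year = 2_000

    def real_holdings(weeks):
        factor = (1 + weekly_real_growth) ** weeks
        return energy_mj * factor, matter_kg * factor

    for w in (1, 4, 10):
        e, m = real_holdings(w)
        print(w, e, m, e / survival_cost_mj_per_year)  # last value: years of survival energy afforded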

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-31T19:52:36.708Z · LW(p) · GW(p)

Roko:

That means that every week my investment of 10,000 MJ of energy and 5,000 kg of matter becomes twice as valuable, so after one week I own 20,000 MJ of energy and 10,000 kg of matter. Though, at the same time, the dollar price of these commodities has also increased a lot.

This is a fallacious step. The fact that risk-free return on investment over a certain period is X% above inflation does not mean that you can pick any arbitrary thing and expect that if you can afford a quantity Y of it today, you'll be able to afford (1+X/100)Y of it after that period. It merely means that if you're wealthy enough today to afford a particular well-defined basket of goods -- whose contents are selected by convention as a necessary part of defining inflation, and may correspond to your personal needs and wants completely, partly, or not at all -- then investing your present wealth will get you the power to purchase a similar basket (1+X/100) times larger after that period. [*] When it comes to any particular good, the ratio can be in any direction -- even assuming a perfect laissez-faire market, let alone all sorts of market-distorting things that may happen.

Therefore, if you have peculiar needs and wants that don't correspond very well to the standard basket used to define the price index, then the inflation and growth numbers calculated using this basket are meaningless for all your practical purposes. Trouble is, in an economy populated primarily by ems, biological humans will be such outliers. It's enough that one factor critical for human survival gets bid up exorbitantly and it's adios amigos. I can easily think of more than one candidate.
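To make the distinction concrete, here is a toy sketch with made-up numbers: a portfolio can beat the basket index every period, and so show a tidy "real" gain, while still losing purchasing power over the one good a biological human actually needs.

    # Made-up numbers: two goods, a conventional basket index, and an investor
    # whose portfolio beats the basket index by 2% per period.
    basket_weights = {"em_compute": 0.99, "human_habitat": 0.01}
    price_growth  = {"em_compute": 0.50, "human_habitat": 5.00}   # per period

    index_growth = sum(basket_weights[g] * price_growth[g] for g in basket_weights)

    wealth, index_level, habitat_price = 1.0, 1.0, 1.0
    for _ in range(10):
        index_level   *= 1 + index_growth
        wealth        *= (1 + index_growth) * 1.02
        habitat_price *= 1 + price_growth["human_habitat"]

    print(wealth / index_level)    # > 1: a comfortable "real" gain measured against the basket
    print(wealth / habitat_price)  # << 1: purchasing power over the one needed good has collapsed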

The above all assumes preservation of property rights.

From the perspective of an em barely scraping a virtual or robotic existence, a surviving human wealthy enough to keep their biological body alive would seem as if, from our perspective, a whole rich continent's worth of land, capital, and resources was owned by a being whose mind is so limited and slow that it takes a year to do one second's worth of human thinking, while we toil 24/7, barely able to make ends meet. I don't know with how much confidence we should expect that property rights would be stable in such a situation.


[*] - To be precise, the contents of the basket will also change during that period if it's of any significant length. This however gets us into the nebulous realm of Fisher's chain indexes and similar numerological tricks on which the dubious edifice of macroeconomic statistics rests to a large degree.

Replies from: Roko
comment by Roko · 2010-05-31T21:34:42.957Z · LW(p) · GW(p)

If the growth above inflation isn't defined in terms of today's standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991, and thereafter was based exclusively upon the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs.

I.e. Robin's prediction of fast growth rates is presumably in terms of today's basket of goods, right?

The point of ems is that they will do work that is useful by today's standard, rather than just creating a multiplicity of some (by our standard) useless commodity like digits of pi that they then consume.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-31T23:49:05.656Z · LW(p) · GW(p)

Roko:

If the growth above inflation isn't defined in terms of today's standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991, and thereafter was based exclusively upon the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs.

You're asking some very good questions indeed! Now think about it a bit more.

Even nowadays, you simply cannot maintain the exact same basket of goods as the standard for any period much longer than a year or so. Old things are no longer produced, and more modern equivalents will (and sometimes won't) replace them. New things appear that become part of the consumption basket of a typical person, often starting as luxury but gradually becoming necessary to live as a normal, well-adjusted member of society. Certain things are no longer available simply because the world has changed to the point where their existence is no longer physically or logically possible. So what sense does it make to compare the "price index" between 2010 and 1950, let alone 1900, and express this ratio as some exact and unique number?

The answer is that it doesn't make any sense. What happens is that government economists define new standard baskets each year, using formalized and complex, but ultimately completely arbitrary criteria for selecting their composition and determining the "real value" of new goods and services relative to the old. Those estimates are then chained to make comparisons between more distant epochs. While this does make some limited sense for short-term comparisons, in the long run, these numbers are devoid of any sensible meaning.

Not to even mention how much the whole thing is a subject of large political and bureaucratic pressures. For example, in 1996, the relevant bodies of the U.S. government concluded that the official inflation figures were making the social security payments grow too fast for their taste, so they promptly summoned a committee of experts, who then produced an elaborate argument that the methodology hitherto used had unsoundly overstated the growth in CPI relative to some phantom "true" value. And so the methodology was revised, and inflation obediently went down. (I wouldn't be surprised if the new CPI math indeed gives much more prominence to the cost of sending emails!)

Now, if such is the state of things even when it comes to the fairly slow technological and economic changes undertaken by humans in recent decades, what sense does it make to project these numbers into an em-based economy that develops and changes at a speed hardly imaginable for us today, and whose production is largely aimed at creatures altogether different from us? Hardly any, I would say, which is why I don't find the attempts to talk about long-term "real growth" as a well-defined number meaningful.

I.e. Robin's prediction of fast growth rates is presumably in terms of today's basket of goods, right?

I don't know what he thinks about how affordable biological human life would be in an em economy, but I'm pretty sure he doesn't define his growth numbers tied to the current CPI basket. From the attitudes he typically displays in his writing, I would be surprised if he would treat things valued by ems and other AIs as essentially different from things valued by humans and unworthy of inclusion into the growth figures, even if humans find them irrelevant or even outright undesirable.

Replies from: Roko
comment by Roko · 2010-05-31T23:54:32.478Z · LW(p) · GW(p)

You're asking some very good questions indeed!

Thank you, it's a pleasure to chat with you; we should meet up in real life sometime!

comment by byrnema · 2010-05-31T01:14:08.332Z · LW(p) · GW(p)

I don't think it's a matter of whether you value your life but why. We don't value life unconditionally (say, just a metabolism, or just having consciousness -- both would be considered useless).

if they were put in a near-mode situation where many of their close friends and relatives had died, but they had the option to make a new start in a society of unprecedentedly high quality of life, they wouldn't choose to die instead.

I wouldn't expect anyone to choose to die, no, but I would predict some people would be depressed if everyone they cared about died and would not be too concerned about whether they lived or not. [I'll add that the truth of this depends upon personality and generational age.]

Regarding the medieval peasant, I would expect her to accept the offer, but I don't think she would be irrational for refusing. In fact, if she refused, I would just decide she was a very incurious person and she couldn't think of anything special to bring to the future (like her religion or a type of music she felt passionate about). But I don't think lacking curiosity or any goals for the far impersonal future is having low self-esteem. [Later, I'm adding that if she decided not to take the offer, I would fear she was doing so due to a transient lack of goals. I would rather she had made her decision when all was well.]

(If it was free, I definitely would take the offer and feel like I had a great bargain. I wonder if I can estimate how much I would pay for a cryopreservation that was certain to work? I think $10 to $50 thousand, in the case of no one I knew coming with me, but it's difficult to estimate.)

comment by CronoDAS · 2010-05-26T21:29:56.845Z · LW(p) · GW(p)

Reason #6 not to sign up: Cryonics is not compatible with organ donation. If you get frozen, you can't be an organ donor.

Replies from: Sniffnoy, Blueberry, magfrump, taw
comment by Sniffnoy · 2010-05-26T22:31:38.480Z · LW(p) · GW(p)

Is that true in general, or only for organizations that insist on full-body cryo?

Replies from: CronoDAS
comment by CronoDAS · 2010-05-27T02:59:02.971Z · LW(p) · GW(p)

AFAICT (from reading a few cryonics websites), it seems to be true in general, but the circumstances under which your brain can be successfully cryopreserved tend to be ones that make you not suitable for being an organ donor anyway.

Replies from: None
comment by [deleted] · 2010-05-27T03:12:45.567Z · LW(p) · GW(p)

Could you elaborate on that? Is cryonic suspension inherently incompatible with organ donation, even when you are going with the neuro option or does the incompatibility stem from current obscurity of cryonics? I imagine that organ harvesting could be combined with early stages of cryonic suspension if the latter was more widely practiced.

Replies from: Matt_Duing
comment by Matt_Duing · 2010-05-27T06:00:58.221Z · LW(p) · GW(p)

The cause of death of people suitable to be organ donors is usually head trauma.

comment by Blueberry · 2010-05-27T03:30:28.585Z · LW(p) · GW(p)

Alternatively, that's a good reason not to sign up for organ donation. Organ donation won't increase my well-being or happiness any, while cryonics might.

In addition, there's the problem that being an organ donor creates perverse incentives for your death.

Replies from: Jack
comment by Jack · 2010-06-05T10:37:02.397Z · LW(p) · GW(p)

You get no happiness knowing there is a decent chance your death could save the lives of others?

Would you turn down a donated organ if you needed one?

Replies from: Blueberry
comment by Blueberry · 2010-06-05T17:27:04.335Z · LW(p) · GW(p)

You get no happiness knowing there is a decent chance your death could save the lives of others?

It's a nice thought, I guess, but I'd rather not die in the first place. And any happiness I might get from that is balanced out by the risks of organ donation: cryonic preservation becomes slightly less likely, and my death becomes slightly more likely (perverse incentives). If people benefit from my death, they have less of an incentive to make sure I don't die.

Would you turn down a donated organ if you needed one?

No. But I'd vote to make post-death organ donation illegal, and I'd encourage people not to donate their organs after they die. (I don't see a problem with donating a kidney while you're still alive.)

Replies from: Jack
comment by Jack · 2010-06-05T18:13:20.418Z · LW(p) · GW(p)

It's a nice thought, I guess, but I'd rather not die in the first place. And any happiness I might get from that is balanced out by the risks of organ donation: cryonic preservation becomes slightly less likely,

Well I understand that you will be so much more happy if you avoid death for the foreseeable future that cryonics outweighs organ donation. I'm just saying that the happiness from organ donation can't be zero.

and my death becomes slightly more likely (perverse incentives). If people benefit from my death, they have less of an incentive to make sure I don't die.

The incentives seem to me so tiny as to be a laughable concern. I presume you're talking about doctors not treating you as effectively because they want your organs? Do you have this argument further developed elsewhere? It seems to me a doctor's aversion to letting someone die, fear of malpractice lawsuits and ethics boards are more than sufficient to counter whatever benefit they would get from your organs (which would be what precisely?). Like I would be more worried about the doctors not liking me or thinking I was weird because I wanted to be frozen and not working as hard to save me because of that. (ETA: If you're right there should be studies saying as much.)

Would you turn down a donated organ if you needed one?

No.

It seems to me legislation to punish defectors in this cooperative action problem would make sense. Organ donors should go to the top of the transplant lists if they don't already. Am I right that appealing to your sense of justice regarding your defection would be a waste of time?

But I'd vote to make post-death organ donation illegal, and I'd encourage people not to donate their organs after they die. (I don't see a problem with donating a kidney while you're still alive.)

If your arguments are right I can see how it would be a bad individual choice to be a organ donor (at least if you were signed up for cryonics). But those arguments don't at all entail that banning post-death organ donation would be the best public policy, especially since very few people will sign up for cryonics in the near future. Do you think that the perverse incentives lead to more deaths than the organs save?

And from a public interest perspective an organ donor is more valuable than a frozen head. It might be in the public interest to have some representatives from our generation in the future, but there is a huge economic cost to losing 20 years of work from an experienced and trained employee -- a cost which is mitigated little by the economic value of a revived cryonics patient who would likely have no marketable skills for his time period. So the social benefit to people signing up for cryonics diminishes rapidly.

comment by magfrump · 2010-05-26T22:54:30.539Z · LW(p) · GW(p)

There was a short discussion previously about how cryonics is most useful in cases of degenerative diseases, whereas organ donation is most successful in cases of quick deaths such as due to car accidents; which is to say that cryonics and organ donation are not necessarily mutually exclusive preparations because they may emerge from mutually exclusive deaths.

Though maybe not, which is why I had asked about organ donation in the first place.

comment by taw · 2010-05-26T22:32:21.559Z · LW(p) · GW(p)

This is the reason I wouldn't sign up even for free (and I am a registered organ donor). Even if it weren't for that, it would still be too expensive, all the bullshit creative accounting I've seen on this site notwithstanding.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:15:37.526Z · LW(p) · GW(p)

Would you consider Alicorn trustworthy enough to determine whether or not the accounting is actually bullshit? She's going through the financial stuff right now, and I could ask her about any hidden fees the cryonauts on Less Wrong have been quiet about.

Replies from: Alicorn, taw
comment by Alicorn · 2010-05-27T22:21:22.183Z · LW(p) · GW(p)

Um, I'm not a good person to go to for financial advice of any kind. Mostly I'm going to shop around until I find an insurance agent who isn't creepy and wants a non-crippling sum of money.

comment by taw · 2010-05-28T20:33:20.074Z · LW(p) · GW(p)

How can I estimate if Alicorn is trustworthy or not? Eliezer has been outright lying about the cost of cryonics in the past.

Replies from: radical_negative_one
comment by radical_negative_one · 2010-05-28T20:36:04.922Z · LW(p) · GW(p)

Eliezer has been outright lying about the cost of cryonics in the past.

A link or explanation would be relevant here.

(ETA: the link)

Replies from: taw
comment by taw · 2010-05-28T20:58:56.382Z · LW(p) · GW(p)

I would link it if Less Wrong had a reasonable search engine. It does not, so feel free to spend an evening searching past articles about the cost of cryonics.

EDIT: this one

Replies from: cupholder, radical_negative_one, Oscar_Cunningham, Kazuo_Thow
comment by cupholder · 2010-05-28T23:23:27.370Z · LW(p) · GW(p)

Are you using the Google sidebar to search this site? It doesn't work for me, so I'm guessing it doesn't work for you? An alternative I prefer is doing Google searches with the 'site:lesswrong.com' term; the Googlebot digs deep enough into the site that it works well.

comment by radical_negative_one · 2010-05-28T21:58:33.227Z · LW(p) · GW(p)

Looks like this is the previous discussion of the topic, for anyone who's interested.

comment by Kazuo_Thow · 2010-05-28T21:07:40.218Z · LW(p) · GW(p)

Eliezer has been outright lying about the cost of cryonics in the past.

We would find it helpful if you could provide some insight into why you think this.

comment by PhilGoetz · 2010-05-26T20:08:11.342Z · LW(p) · GW(p)

The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong.

Thus triggering the common irrational inference, "If something is attacked with many spurious arguments, especially by religious people, it is probably true."

(It is probably more subtle than this: when you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)

Replies from: Mardonius, RobinZ
comment by Mardonius · 2010-05-26T21:19:02.679Z · LW(p) · GW(p)

Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or, at least, has currently been discovered)?

I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when it is often not an argument against X from someone who takes the position, but a strawman argument they have been taught by others for the specific purposes of matching up more sophisticated arguments to.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-26T21:24:52.471Z · LW(p) · GW(p)

Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or, at least, has currently been discovered)?

Yes. This is discussed well in the comments on What Evidence Filtered Evidence?.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-27T17:11:22.601Z · LW(p) · GW(p)

No, because that assumes that the desire to argue about a proposition is the same among rational and insane people. The situation I observe is just the opposite: There are a large number of propositions and topics that most people are agnostic about or aren't even interested in, but that religious people spend tremendous effort arguing for (circumcision, defense of Israel) or against (evolution, life extension, abortion, condoms, cryonics, artificial intelligence).

This isn't confined to religion; it's a general principle that when some group of people has an extreme viewpoint, they will A) attract lots of people with poor reasoning skills, B) take positions on otherwise non-controversial issues based on incorrect beliefs, and C) spend lots of time arguing against things that nobody else spends time arguing against, using arguments based on the very flaws in their beliefs that make them outliers to begin with.

Therefore, there is a large class of controversial issues on which one side has been argued almost exclusively by people whose reasoning is especially corrupt on that particular issue.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-27T17:32:51.346Z · LW(p) · GW(p)

I don't think many religious people spend "tremendous effort" arguing against life extension, cryonics or artificial intelligence. For the vast majority of the population, whether religious or not, these issues simply aren't prominent enough to think about. To be sure, when religious individuals do think about these, they more often than not seem to come down on the against side (look, for example, at computer scientist David Gelernter arguing against the possibility of AI). And that may be explainable by general tendencies in religion (especially the level at which religion promotes cached thoughts about the soul and the value of death).

But even that is only true to a limited extent. For example, consider the case of life extension: if we look at Judaism, some Orthodox ethicists have taken very positive views about life extension. Indeed, my impression is that the Orthodox are more likely to favor life extension than non-Orthodox Jews. My tentative hypothesis for this is that Orthodox Judaism places a very high value on human life and downplays the afterlife, at least compared to Christianity and Islam. (Some specific strains of Orthodoxy, some chassidic sects for example, do emphasize the afterlife a bit more.) However, Conservative and Reform Judaism have been more directly influenced by Christian values and therefore have picked up a stronger connection to the Christian values and cached thoughts about death.

I don't think, however, that this issue can be exclusively explained by Christianity, since I've encountered Muslims, neopagans, Buddhists and Hindus who have similar attitudes. (The neopagans all grew up in Christian cultures, so one could say that they were being influenced by that, but that argument doesn't carry much weight given how much neopaganism seems to be a reaction against Christianity.)

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-28T16:54:52.111Z · LW(p) · GW(p)

All I mean to say is this: Suppose you say, "100 people have made arguments against proposition X, and all of them were bad arguments; therefore the probability of finding a good argument against X is some (monotonically increasing) function of 1/100."

If X is a proposition that is particularly important to people in cult C because they believe something very strange related to X, and 90 of those 100 arguments were made by people in cult C, then you should believe that the probability of finding a good argument against X is a function of something between 1/10 and 1/100.
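As a minimal sketch of that adjustment, with made-up weights: discounting arguments from a source that would argue against X no matter what shrinks the effective number of independent attempts, and the probability estimate should track that smaller number.

    # Made-up weights for illustration: how much each observed bad argument should
    # count as an independent attempt to find a good argument against X.
    observed_bad_arguments = {"cult_C": 90, "everyone_else": 10}
    evidential_weight = {"cult_C": 0.05, "everyone_else": 1.0}   # assumed discount

    effective_attempts = sum(observed_bad_arguments[s] * evidential_weight[s]
                             for s in observed_bad_arguments)

    print(effective_attempts)   # 14.5: between the naive 100 and the cult-free 10
    # So the probability that a good argument exists should be treated as a
    # (monotonically increasing) function of roughly 1/14.5, not 1/100.
    print(1 / effective_attempts, 1 / 100, 1 / 10)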

comment by RobinZ · 2010-05-26T21:10:42.443Z · LW(p) · GW(p)

(It is probably more subtle than this: when you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)

This problem is endemic in the affirmative atheism community. It's a sort of Imaginary Positions error.

comment by PhilGoetz · 2010-05-27T16:50:06.171Z · LW(p) · GW(p)

I told Kenneth Storey, who studies various animals that can be frozen and thawed, about a new $60M government initiative (mentioned in Wired) to find ways of storing cells that don't destroy their RNA. He mentioned that he's now studying the Gray Mouse Lemur, which can go into a low-metabolism state at room temperature.

If the goal is to keep you alive for about 10 years while someone develops a cure for what you have, then this room-temperature low-metabolism hibernation may be easier than cryonics.

(Natural cryonics, BTW, is very different from liquid-nitrogen cryonics. There are animals that can be frozen and thawed; but most die if frozen to below -4C. IMHO natural cryonics will be much easier than liquid-nitrogen cryonics.)

comment by alyssavance · 2010-05-26T17:37:26.764Z · LW(p) · GW(p)

I object to many of your points, though I express slight agreement with your main thesis (that cryonics is not rational all of the time).

"Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are."

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death". This seems to me absurd. For more, read eg. http://yudkowsky.net/other/yehuda .

"If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying:"

If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I'm only eighteen.
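
For what it's worth, a figure like this can be sanity-checked with a Gompertz-style mortality model; the sketch below uses illustrative, uncalibrated parameters for a young male in a developed country, not real life-table values:

```python
import math

# Illustrative Gompertz-Makeham hazard: mu(x) = A + B * exp(b * x)
# (parameters are rough guesses, not taken from an actual life table)
A, B, b = 0.0005, 0.00005, 0.09

def prob_die_between(age_from: float, age_to: float) -> float:
    """P(death in [age_from, age_to] | alive at age_from) under this model."""
    cumulative_hazard = (A * (age_to - age_from)
                         + (B / b) * (math.exp(b * age_to) - math.exp(b * age_from)))
    return 1 - math.exp(-cumulative_hazard)

# An 18-year-old's chance of dying before 2050 (age 58), per this crude model:
print(round(prob_die_between(18, 58), 3))  # ~0.11, same ballpark as the quoted 13%
```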

"A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI3 than by spending that money on pursuing a small chance of eternal life."

If you already donate more than 5% of your income or time to one of these organizations, I'll buy that. Otherwise (and this "otherwise" will apply to the vast majority of LW commenters), it's invalid. You can't say "alternative X would be better than Y, therefore we shouldn't do Y" if you're not actually doing X.

"Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere"

Why? Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn't signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.

"People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious "

This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a "fringe subject" seriously are two entirely separate and to some extent mutually exclusive goals. Maintaining high epistemic standards internally requires that you call people on it if you think they are being stupid. Becoming friends with a person who sees you as a kook requires not telling them about every time they're being stupid.

"Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone."

If people are having kids who they can't afford (cryonics is extremely cheap; someone who can't afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it's a stupid decision is a separate question, but it probably is.

"One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways."

99% of the world's population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can't see a credible argument for why it wouldn't be the case a large percentage of the time.

Replies from: Will_Newsome, Gavin
comment by Will_Newsome · 2010-05-26T19:34:52.110Z · LW(p) · GW(p)

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death".

No. It more accurately reduces to "we don't really know what the heck existence is, so we should worry even more about these fundamental questions and not presume their answers are inconsequential; taking precautions like signing up for cryonics may be a good idea, but we should not presume our philosophical conclusions will be correct upon reflection."

If you assume the median date for Singularity is 2050, Wolfram Alpha says I have a 13% chance of dying before then (cite: http://www.wolframalpha.com/input/?i=life+expectancy+18yo+male), and I'm only eighteen.

Alright, but I would argue that a date of 2050 is pretty damn late. I'm very much in the 'singularity is near' crowd among SIAI folk, with 2050 as an upper bound. I suspect there are many who would also assign a date much sooner than 2050, but perhaps this was simply typical mind fallacy on my part. At any rate, your 13% is my 5%, probably not the biggest consideration in the scheme of things; but your implicit point is correct that people who are much older than us should give more pause before dismissing this very important conditional probability as irrelevant.

If you already donate more than 5% of your income or time to one of these organizations, I'll buy that. Otherwise (and this "otherwise" will apply to the vast majority of LW commenters), it's invalid. You can't say "alternative X would be better than Y, therefore we shouldn't do Y" if you're not actually doing X.

Maybe, but a major point of this post is that it is bad epistemic hygiene to use generalizations like 'the vast majority of LW commenters' in a rhetorical argument. You and I both know many people who donate much more than 5% of their income to these kinds of organizations.

Having a good epistemic atmosphere demands that there be some mechanism for letting people know if they are being irrational. You should be nice about it and not nasty, but if someone isn't signing up for cryonics for a stupid reason, maintaining a high intellectual standard requires that someone or something identify the reason as stupid.

But I'm talking specifically about assuming that any given argument against cryonics is stupid. Yes, correct people when they're wrong about something, and do so emphatically if need be, but do not assume, just because weak arguments against your idea are more common, that strong arguments do not exist or that your audience does not possess them.

This is true, but maintaining a good epistemic atmosphere and getting people to take what they see as a "fringe subject" seriously are two entirely separate and to some extent mutually exclusive goals.

If the atmosphere is primarily based on memetics and rhetoric, then yes; but if it is founded in rationality, then the two should go hand in hand. (At least, my intuitions say so, but I could just be plain idealistic about the power of group epistemic rationality here.)

If people are having kids who they can't afford (cryonics is extremely cheap; someone who can't afford cryonics is unlikely to be able to afford even a moderately comfortable life), it probably is, in fact, a stupid decision. Whether we should tell them that it's a stupid decision is a separate question, but it probably is.

It's not a separate question, it's the question I was addressing. You raised the separate question. :P

99% of the world's population is disagreeing with us because they are irrational in simple, obviously flawed ways! This is certainly not always the case, but I can't see a credible argument for why it wouldn't be the case a large percentage of the time.

What about 99% of Less Wrong readers? 99% of the people you're trying to reach with your rhetoric? What about the many people I know at SIAI that have perfectly reasonable arguments against signing up for cryonics and yet consistently contribute to or read Less Wrong? You're not actually addressing the world's population when you write a comment on Less Wrong. You're addressing a group with a reasonably high standard of thinking ability and rationality. You should not assume their possible objections are stupid! I think it should be the duty of the author not to generalize when making in-group out-group distinctions; not to paint things as black and white, and not to fall into (or let readers unnecessarily fall into) groupthink.

comment by Gavin · 2010-05-26T18:54:42.476Z · LW(p) · GW(p)

This argument basically reduces to, once you remove the aura of philosophical sophistication, "we don't really know whether death is bad, so we should worry less about death". This seems to me absurd. For more, read eg. http://yudkowsky.net/other/yehuda .

Death is bad. The question is whether being revived is good. I'm not sure whether or not I particularly care about the guy who gets unfrozen. I'm not sure how much more he matters to me than anyone else. Does he count as "me?" Is that a meaningful question?

I'm genuinely unsure about this. It's not a decisive factor (it only adds uncertainty), but to me it is a meaningful one.

comment by Dagon · 2010-05-26T17:35:39.864Z · LW(p) · GW(p)

I don't know if this is a self-defense mechanism or actually related to the motives of those promoting cryonics in this group, but I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement. If the intent is to remind me that things I do may later turn out to be not just wrong, but extremely wrong, it works pretty well.

It's a good topic for exploring agreement theory, as different declared-intended-rationalists reach different conclusions and can talk somewhat dispassionately about such disagreement.

I have trouble believing that anyone means it literally, that for most humans a failure to sign up for cryonics at the earliest opportunity is as wrong as believing there's a giant man in the sky who'll punish or reward you after you die.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T20:05:16.745Z · LW(p) · GW(p)

I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement.

I hadn't thought of this, but if so, it's dangerous rhetoric and just begging to be misunderstood.

comment by ShardPhoenix · 2010-05-26T12:52:32.968Z · LW(p) · GW(p)

On a side note, speaking of "abnormal" and cryonics, apparently Britney Spears wants to sign up with Alcor: http://www.thaindian.com/newsportal/entertainment/britney-spears-wants-to-be-frozen-after-death_100369339.html

I think this can be filed under "any publicity is good publicity".

Replies from: Unnamed, JoshuaZ, steven0461
comment by Unnamed · 2010-05-26T17:26:57.883Z · LW(p) · GW(p)

Is there any way that we could get Britney Spears interested in existential risk mitigation?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T17:34:01.740Z · LW(p) · GW(p)

It's not obvious that this would be good: it could very well make existential risks research appear less credible to the relevant people (current or future scientists).

comment by JoshuaZ · 2010-05-26T15:08:00.933Z · LW(p) · GW(p)

I was thinking of filing this as an example of Reversed stupidity is not intelligence.

comment by steven0461 · 2010-05-26T19:31:30.753Z · LW(p) · GW(p)

I'm surprised. Last time it was Paris Hilton and it turned out not to be true, but it looks like there's more detail this time.

Replies from: steven0461
comment by steven0461 · 2010-05-26T20:29:29.539Z · LW(p) · GW(p)

This claims it's a false rumor.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-05-27T07:15:54.484Z · LW(p) · GW(p)

That only cites a "source close to the singer" compared to the detail given by the original rumour. However given the small prior probability of this being true I guess it's probably still more likely to be false.

comment by blogospheroid · 2010-05-27T10:53:33.871Z · LW(p) · GW(p)

I'm not sure if this is the right place to ask this, or even if it is possible to procure the data, but who is the highest-status person who has opted for cryonics? The wealthiest or the most famous?

Having high status persons adopt cryonics can be a huge boost to the cause, right?

Replies from: apophenia, RomanDavis
comment by apophenia · 2010-05-28T06:16:19.583Z · LW(p) · GW(p)

It certainly boosts publicity, but most of the people I know of who have signed up for cryonics are either various sorts of transhumanists or celebrities. The celebrities generally seem to do it for publicity or as a status symbol. From the reactions I've gotten telling people about cryonics, I feel it has been mostly a negative social impact. I say this not because people I meet are creeped out by cryonics, but because they specifically mention various celebrities. I think if more scientists or doctors (basically, experts) opted for cryonics it might add credibility. I can only assume that lack of customers for companies like Alcor decreases the chance of surviving cryonics.

comment by RomanDavis · 2010-05-28T17:25:55.077Z · LW(p) · GW(p)

Uhhh... no. People developed the Urban legend about Walt Disney for a reason. It's easy to take rich, creative, ingenious, successful people and portray them as eccentric, isolated and out of touch.

Think about the dissonance between "How crazy those Scientologists are" and "How successful those celebrities are." We don't want to create a similar dissonance with cryonics.

Replies from: Jack
comment by Jack · 2010-06-05T10:50:48.904Z · LW(p) · GW(p)

It depends on the celebrity. Michael Jackson, not so helpful. But Oprah would be.

comment by ShardPhoenix · 2010-05-26T12:51:43.351Z · LW(p) · GW(p)

Probably my biggest concern with cryonics is that if I was to die at my age (25), it would probably be in a way where I would be highly unlikely to be preserved before a large amount of decay had already occurred. If there was a law in this country (Australia) mandating immediate cryopreservation of the head for those contracted, I'd be much more interested.

Replies from: Jordan
comment by Jordan · 2010-05-26T22:18:57.203Z · LW(p) · GW(p)

Agreed. On the other hand, in order to get laws into effect it may be necessary to first have sufficient numbers of people signed up for cryonics. In that sense, signing up for cryonics might not only save your life, it might spur changes that will allow others to be preserved better (faster), potentially saving more lives.

comment by Roko · 2010-05-27T15:08:42.091Z · LW(p) · GW(p)

I get the feeling that this discussion [on various threads] is fast becoming motivated cognition aiming to reach a conclusion that will reduce social tension between people who want to sign up for cryo and people who don't. I.e. "Surely there's some contrived way we can leverage our uncertainties so that you can not sign up and still be defensibly rational, and sign up and be defensibly rational".

E.g. no interest in reaching agreement on cryo success probabilities, when this seems like an absolutely crucial consideration. Is this indicative of people who genuinely want to get to the truth of the matter?

Replies from: JoshuaZ, PhilGoetz
comment by JoshuaZ · 2010-05-27T15:37:50.795Z · LW(p) · GW(p)

This is a valid point, but it is slightly OT to discuss a precise probability for cryonics here. I think that one reason people might not be trying to reach a consensus about the actual probability of success is that it may simply require so much background knowledge that one would need to be an expert to reasonably evaluate the subject. (Incidentally, I'm not aware of any sequence discussing what the proper thing to do is when one has to depend heavily on experts. We need more discussion of that.) The fact that there are genuine subject matter experts like de Magalhaes who have thought about this issue a lot and concluded that it is extremely unlikely, while others who have thought about it consider it likely, makes it very hard to estimate. (Consider for example if someone asks me whether string theory is correct. The most I'm going to be able to do is shrug my shoulders. And I'm a mathematician. Some issues are just much too complicated for non-experts to work out a reliable likelihood estimate based on their own data.)

It might however be useful to start a subthread discussing pro and anti arguments. To keep the question narrow, I suggest that we simply focus on the technical feasibility question, not on the probability that a society would decide to revive people.

I'll start by listing a few:

For:

1) Non-brain animal organs have been successfully vitrified and revived. See e.g. here

2) Humans have been revived from low-oxygen, very cold circumstances with no apparent loss of memory. This has been duplicated in dogs and other small mammals in controlled conditions for upwards of two hours. (However, the temperatures reached are still above freezing.)

Against:

1) Vitrification denatures and damages proteins. This may permanently damage neurons in a way that makes their information content not recoverable. If glial cells have a non-trivial role in thought then this issue becomes even more severe. There's a fair bit of circumstantial evidence for glial cells having some role in cognition, including the fact that they often behave abnormally in severe mental illness. See for example this paper discussing glial cells and schizophrenia. We also know that in some limited circumstances glial cells can release neurotransmitters.

2) Even today's vitrification procedures do not necessarily penetrate every brain cell, so there may be severe ice-crystal formation in a lot of neurons.

3) Acoustic fracturing is still a major issue. Since acoustic fracturing occurs even when one is just preserving the head, there's likely severe macroscopic brain damage occurring. This also likely can cause permanent damage to memory and other basic functions in a non-recoverable way. Moreover, acoustic fracturing is only the fracturing from cooling that is so bad that we hear it. There's likely a lot of much smaller fracturing going on. (No one seems to have put a sensitive microphone right near a body or a neuro when cooling. The results could be disconcerting).

Replies from: Roko
comment by Roko · 2010-05-27T21:32:02.375Z · LW(p) · GW(p)

Yeah, this is a good list.

Note Eliezer's argument that partial damage is not necessarily a problem.

Also note my post: Rationality, Cryonics and Pascal's Wager.

comment by PhilGoetz · 2010-05-27T16:57:03.440Z · LW(p) · GW(p)

No interest in reaching agreement on cryo success probabilities, when this seems like an absolutely crucial consideration. Is this indicative of people who genuinely want to get to the truth of the matter?

You're trying to get to the truth of a different matter. You need to go one level meta. This post is arguing that either position is plausible. There's no need to refine the probabilities beyond saying something like "The expected reward/cost ratio of signing up for cryonics is somewhere between .1 and 10, including opportunity costs."

comment by Will_Newsome · 2010-05-26T14:49:55.458Z · LW(p) · GW(p)

EDIT: Nick Tarleton makes a good point in reply to this comment, which I have moved to be footnote 2 in the text.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-26T20:13:38.162Z · LW(p) · GW(p)

This distinction might warrant noting in the post, since it might not be clear that you're only criticizing one position, or that the distinction is really important to keep in mind.

comment by MartinB · 2010-11-04T20:12:08.418Z · LW(p) · GW(p)

As yet another media reference: I just rewatched the Star Trek TNG episode 'The Neutral Zone', which deals with the recovery of three frozen humans from our time. It was really surprising to me how much disregard for human life is shown in this episode: "Why did you recover them, they were already dead"; "Oh bugger, now that you revived/healed them we have to treat them as humans". Also surprising is how much insensitivity is shown in dealing with them. When you wake someone from an earlier time, you might send the aliens and the robots out of the room.

comment by FraserOrr · 2010-05-27T14:27:42.985Z · LW(p) · GW(p)

Question for the advocates of cryonics: I have heard talk in the news and various places that organ donor organizations are talking about giving priority to people who have signed up to donate their organs. That is to say, if you sign up to be an organ donor, you are more likely to receive a donated organ from someone else should you need one. There is some logic in that in the absence of a market in organs; free riders have their priority reduced.

I have no idea if such an idea is politically feasible (and, let me be clear, I don't advocate it); however, were it to become law in your country, would that tilt the cost-benefit analysis away from cryonics sufficiently that you would cancel your contract? (There is a new cost imposed by cryonics: namely, that the procedure prevents you from being an organ donor, and consequently reduces your chance of a life-saving organ transplant.)

Replies from: gregconen, Roko
comment by gregconen · 2010-05-28T02:38:57.128Z · LW(p) · GW(p)

In most cases, signing up for cryonics and signing up as an organ donor are not mutually exclusive. The manner of death most suited to organ donation (rapid brain death with (parts of) the body still in good condition, generally caused by head trauma) is not well suited to cryonic preservation. You'd probably need a directive in case the two do conflict, but such a conflict is unlikely.

Alternatively, neuropreservation can, at least in theory, be combined with organ donation.

comment by Roko · 2010-05-27T21:40:15.501Z · LW(p) · GW(p)

No, the reasoning being that by the time you're decrepit enough to be in need of an organ, you have relatively little to gain from it (perhaps 15 years of medium-low quality life), and the probability of needing an organ is low ( < 1%), whereas Cryo promises a much larger gain (thousands? of years of life) and a much larger probability of success (perhaps 10%).
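
A back-of-the-envelope version of that comparison, using the rough numbers above (the 0.6 quality weight and the 2000-year stand-in for "thousands of years" are my assumptions):

```python
# Expected (quality-adjusted) life-years from each option, using the rough
# numbers in the comment above; the 0.6 quality weight and the 2000-year
# figure are assumptions, not part of the original comment.
organ_ev = 0.01 * 15 * 0.6   # <1% chance of ~15 medium-low-quality years
cryo_ev = 0.10 * 2000        # ~10% chance of (say) 2000 years

print(organ_ev)  # 0.09
print(cryo_ev)   # 200.0
```

On these numbers the comparison isn't close; the real disagreement, taken up in the replies, is whether a ~10% success probability for cryonics is defensible.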

Replies from: FraserOrr, steven0461
comment by FraserOrr · 2010-05-28T01:32:07.516Z · LW(p) · GW(p)

The 15 year gain may be enough to get you over the tipping point where medicine can cure all your ails, which is to say, 15 years might buy you 1000 years.

I think you are being pretty optimistic if you think the probability of success of cryonics is 10%. Obviously, no one has any data to go on for this, so we can only guess. However, there are a lot of strikes against cryonics, especially so if only your head gets frozen. In the future, will they be able to recreate a whole body from the head only? In the future, will your cryonics company still be in business? If they go out of business, does your frozen head have any rights? If technology is designed to restore you, will it be used? Will the government allow it to be used? Will you be one of the first guinea pigs to be tested, and be one of the inevitable failures? Will anyone want an old fuddy duddy from the far past to come back to life? In the interim, has there been an accident, war, or malicious action by eco-terrorists that unfroze your head? And so forth.

It seems to me that preserving actual life as long as possible is the best bet.

comment by steven0461 · 2010-05-27T22:07:17.331Z · LW(p) · GW(p)

In those 15 years, indefinite life extension may be invented, so the calculation is less obvious than that. I haven't done any explicit calculations, but if the mid-21st century is a plausible time for such inventions, then the chances of indefinite life extension through cryonics, though non-negligible, shouldn't be of a different order of magnitude than the chances of indefinite life extension through e.g. quitting smoking or being female.

comment by utilitymonster · 2010-05-27T11:19:04.915Z · LW(p) · GW(p)

Thanks for this post. I tend to lurk, and I had some similar questions about the LW enthusiasm for cryo.

Here's something that puzzles me. Many people here, it seems to me, have the following preference order:

pay for my cryo > donation: x-risk reduction (through SIAI, FHI, or SENS) > paying for cryo for others

Of course, for the utilitarians among us, the question arises: why pay for my cryo over risk reduction? (If you just care about others way less than you care about yourself, fine.) Some answer by arguing that paying for your own cryo reduces x-risk more than the other alternatives do, because of its indirect effects. This reeks of wishful thinking and doesn't fit well with the preference order above. There are plenty of LWers, I assume, who haven't signed up for cryo but would if someone else paid for the life insurance policy. If you really think that paying for your own cryo is the best way to reduce x-risk, shouldn't you also think the same of getting others signed up for cryo? (There are some differences, sure. Maybe the indirect effects aren't as substantial if others don't pay their own way in full. But I doubt this justifies the preference.) If so, it would seem that rather than funding x-risk reduction through donating to these organizations, you should fund the cryopreservation of LWers and other willing people.

So which is it utilitarians: you shouldn't pay for your own cryo or you should be working on paying for the cryo of others as well?

If you think paying for cryo is better, want to pay for mine first?

Replies from: Baughn
comment by Baughn · 2010-05-28T12:13:01.621Z · LW(p) · GW(p)

I care more about myself than about others. This is what would be expected from evolution and - frankly - I see no need to alter it. Well, I wouldn't.

I suspect that many people who claim they don't are mistaken, as the above preference ordering seems to illustrate. Maximize utility, yes; but utility is a subjective function, as my utility function makes explicit reference to myself.

comment by CarlShulman · 2010-05-26T21:37:05.939Z · LW(p) · GW(p)

If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying

This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you've signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we're talking about insurance policies.

Replies from: Roko
comment by Roko · 2010-05-26T23:48:00.080Z · LW(p) · GW(p)

extreme (dubious) confidence in quick technological advance

To put it another way, if you correctly take into account structural uncertainty about the future of the world, you can't be that confident that the singularity will happen in your lifetime.

Replies from: Will_Newsome, JoshuaZ
comment by Will_Newsome · 2010-05-27T00:27:55.405Z · LW(p) · GW(p)

Note that I did not make any arguments against the technological feasibility of cryonics, because they all suck. Likewise, and I'm going to be blunt here, all arguments against the feasibility of a singularity that I've seen also suck. Taking into account structural uncertainty around nebulous concepts like identity, subjective experience, measure, et cetera, does not lead to any different predictions around whether or not a singularity will occur (but it probably does have strong implications on what type of singularity will occur!). I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies, but this doesn't change whether or not an AGI in such a universe looking at its source code can go FOOM.

Replies from: CarlShulman, steven0461, Jordan
comment by CarlShulman · 2010-05-27T03:52:45.996Z · LW(p) · GW(p)

Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.

With reasonable fixed costs, that means something like assigning 95%+ probability to a singularity in less than five years. Unless one has incredible private info (e.g. working on a secret government project with a functional human-level AI) that would require an insane prior.

Replies from: Will_Newsome, steven0461
comment by Will_Newsome · 2010-05-27T04:07:25.699Z · LW(p) · GW(p)

Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.

I never argued that this objection alone is enough to tip the scales in favor of not signing up. It is mostly this argument combined with the idea that loss of measure on the order of 5-50% really isn't all that important when you're talking about multiverse-affecting technologies; no, really, I'm not sure 5% of my measure is worth having to give up half a Hershey's bar every day, when we're talking crazy post-singularity decision theoretic scenarios from one of Escher's worst nightmares. This is even more salient if those Hershey bars (or airport parking tickets or shoes or whatever) end up helping me increase the chance of getting access to infinite computational power.

Replies from: steven0461
comment by steven0461 · 2010-05-27T05:00:13.078Z · LW(p) · GW(p)

Wut. Is this a quantum immortality thing?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:09:00.193Z · LW(p) · GW(p)

No, unfortunately, it's much more complicated and much more fuzzy. Unfortunately it's a Pascalian thing. Basically, if post-singularity (or pre-singularity if I got insanely lucky for some reason - in which case this point becomes a lot more feasible) I get access to infinite computing power, it doesn't matter how much of my measure gets through, because I'll be able to take over any 'branches' I could have reached with my measure otherwise. This relies on some horribly twisted ideas in cosmology / game theory / decision theory that will, once again, not fit in the margin. Outside view, there's over a 99% chance these ideas are totally wrong, or 'not even wrong'.

comment by steven0461 · 2010-05-27T04:10:51.568Z · LW(p) · GW(p)

Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.

My understanding was that in policies like Roko was describing, you're not paying year by year; you're paying for a lifetime thing where in the early years you're mostly paying for the rate not to go up in later years. Is this inaccurate? If it's year by year, $1/day seems expensive on a per-life basis, given that the population-wide annual rate of death is something like 1 in 1000 for young people, probably much less for LWers and much less still if you only count the ones leaving preservable brains.
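
A rough version of that per-life arithmetic, under the year-by-year framing being questioned here (the 1-in-1000 annual death rate is the figure from the comment):

```python
# Implied cost per expected preservation if ~$1/day bought only one year
# of coverage at a time (an assumption this comment is questioning).
annual_premium = 365.0          # ~$1/day
annual_death_rate = 1 / 1000    # rough annual figure for young people

cost_per_expected_preservation = annual_premium / annual_death_rate
print(f"${cost_per_expected_preservation:,.0f}")  # $365,000
```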

comment by steven0461 · 2010-05-27T01:30:45.709Z · LW(p) · GW(p)

I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies

How serious 0-10, and what's a decision theoretic zombie?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T01:39:53.459Z · LW(p) · GW(p)

A being that has so little decision theoretic measure across the multiverse as to be nearly non-existent due to a proportionally infinitesimal amount of observer-moment-like-things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea, which Steve thinks is incorrect, arguing that information theoretic measure correlates roughly with the reciprocal of decision theoretic measure, which itself is very well-correlated with Eliezer's idea of optimization power. This is all probably stupid and wrong, but it's interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).)

I'm going to say that I'm an 8 out of 10 serious that things will turn out to really probably not add up to 'normality', whatever your average rationalist thinks 'normality' is. Some of the implications of decision theory really are legitimately weird.

Replies from: steven0461
comment by steven0461 · 2010-05-27T01:45:18.412Z · LW(p) · GW(p)

What do you mean by decision theoretic and information theoretic measure? You don't come across as ontologically fundamental IRL.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T01:57:11.988Z · LW(p) · GW(p)

Hm, I was hoping to magically get at the same concepts you had cached but it seems like I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the significantness you told me to talk to Steve Rayhawk about: the idea that one shouldn't care about events one has no control over, combined with the (my own?) idea that having oneself cared about by a lot of agent-computations and thus made more salient to more decisions is another completely viable way of increasing one's measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, 'probability as preference', and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I'm trying to convey?

"You don't come across as ontologically fundamental IRL." Ha, I was kind of trolling there, but something along the lines of 'I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse'. It's one of many possible explanations I toy with as to why I exist. Decision theory really does give one the tools to blow one's philosophical foot off. I don't take any of my ideas too seriously, but collectively, I feel like they're representative of a confusion that not only I have.

comment by Jordan · 2010-05-27T00:33:51.819Z · LW(p) · GW(p)

If you were really the only non-zombie in a Fun Theory universe then you would be the AGI going FOOM. What could be funner than that?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T00:40:20.688Z · LW(p) · GW(p)

Yeah, that seems like a necessary plot point, but I think it'd be more fun to have a challenge first. I feel like the main character(s) should experience the human condition or whatever before they get a taste of true power, or else they'd be corrupted. First they gotta find something to protect. A classic story of humble beginnings.

Replies from: Jordan
comment by Jordan · 2010-05-27T03:24:14.018Z · LW(p) · GW(p)

Agreed. Funnest scenario is experiencing the human condition, then being the first upload to go FOOM. The psychological mind games of a transcending human. Understanding fully the triviality of human emotions that once defined you, while at the same moment modifying your own soul in an attempt to grasp onto your lingering sanity, knowing full well that the fate of the universe and billions of lives rests on the balance. Sounds like a hell of a rollercoaster.

comment by JoshuaZ · 2010-05-27T00:08:52.392Z · LW(p) · GW(p)

Not necessarily. Someone may for example put a very high confidence in an upcoming technological singularity but put a very low confidence on some other technologies. To use one obvious example, it is easy to see how someone would estimate the chance of a singularity in the near future to be much higher than the chance that we will have room temperature superconductors. And you could easily assign a high confidence to one estimate for one technology and not a high confidence in your estimate for another. (Thus for example, a solid state physicist might be much more confident in their estimate for the superconductors). I'm not sure what estimates one would use to reach this class of conclusion with cryonics and the singularity, but at first glance this is a consistent approach.

Replies from: Roko
comment by Roko · 2010-05-27T10:43:34.393Z · LW(p) · GW(p)

Logical consistency, whilst admirably defensible, is way too weak a condition for a belief to satisfy before I call it rational.

It is logically consistent to assign probability 1-10^-10 to the singularity happening next year.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-27T14:49:46.689Z · LW(p) · GW(p)

Right, but if it satisfies only minimal logical consistency, that means there's some thinking that needs to go on. And having slept on this, I can now give other plausible scenarios for someone to have this sort of position. For example, someone might put a high probability on a coming singularity but a low probability on effective nanotech ever being good enough to restore brain function. Likewise, if you believe that the vitrification procedure damages neurons in a fashion that is likely to permanently erase memory, then this sort of attitude would make sense.

comment by cousin_it · 2010-05-26T12:27:57.111Z · LW(p) · GW(p)

Not signing up for cryonics is a rationality error on my part. What stops me is an irrational impulse I can't defeat: I seem to subconsciously value "being normal" more than winning in this particular game. It is similar to byrnema's situation with religion a while ago. That said, I don't think any of the enumerated arguments against cryonics actually work. All such posts feel like they're writing the bottom line in advance.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T12:45:33.006Z · LW(p) · GW(p)

Quite embarrassingly, my immediate reaction was 'What? Trying to be normal? That doesn't make sense. Europeans can't be normal anyway.' I am entirely unsure as to what cognitive process managed to create that gem of an observation.

Replies from: cousin_it
comment by cousin_it · 2010-05-26T12:57:11.961Z · LW(p) · GW(p)

I'm a Russian living in Moscow, so I hardly count as a European. But as perceptions of normality go, the most "normal" people in the world to me are those from the poor parts of Europe and the rich parts of the 3rd world, followed by richer Europeans (internal nickname "aliens"), followed by Americans (internal nickname "robots"). So if the scale works both ways, I'd probably look even weirder to you than the average European.

Replies from: Blueberry, Will_Newsome
comment by Blueberry · 2010-05-26T14:05:01.453Z · LW(p) · GW(p)

followed by Americans (internal nickname "robots")

I would love to hear more about how you see the behavior of Americans, and why you see us as "robots"!

Replies from: cousin_it
comment by cousin_it · 2010-05-26T14:15:36.971Z · LW(p) · GW(p)

I feel that Americans are more "professional": they can perform a more complete context-switch into the job they have to do and the rules they have to follow. In contrast, a Russian at work is usually the same slacker self as the Russian at home, or sometimes the same unbalanced work-obsessed self.

comment by Will_Newsome · 2010-05-26T13:14:43.136Z · LW(p) · GW(p)

What is your impression of the 'weirdness' of the Japanese culture? 'Cuz it's pretty high up there for me.

Replies from: cousin_it
comment by cousin_it · 2010-05-26T13:30:33.185Z · LW(p) · GW(p)

I'm not judging culture, I'm judging people. Don't personally know anyone from Japan. Know some Filipinos and they seemed very "normal" and understandable to me, moreso than Americans.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T13:54:12.526Z · LW(p) · GW(p)

I wanted to visit Russia and Ukraine anyway, but this conversation has made me update in favor of the importance of doing so. I've never come into contact with an alien before. I've heard, however, that ex-Soviets tend to have a more live-and-let-live style of interacting with people who look touristy than, for example, Brazil or Greece, so perhaps it will take an extra effort on my part to discover if there really is a tangible aspect of alienness.

comment by NaN · 2010-05-26T10:07:45.862Z · LW(p) · GW(p)

I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing that cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate was more centred on the probability of cyronics working, rather than the utility of it.

Replies from: Blueberry, ShardPhoenix
comment by Blueberry · 2010-05-26T14:03:37.291Z · LW(p) · GW(p)

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice

If I didn't explicitly say so before: signing up for cryonics is the obvious correct choice.

comment by ShardPhoenix · 2010-05-26T13:05:48.994Z · LW(p) · GW(p)

At one point Eliezer was literally accusing people who don't sign their kids up for cryonics of "child abuse".

Replies from: timtyler, cupholder
comment by timtyler · 2010-05-26T13:37:03.579Z · LW(p) · GW(p)

"If you don't sign up your kids for cryonics then you are a lousy parent." - E.Y.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-05-27T07:17:57.416Z · LW(p) · GW(p)

Yeah looks like I misremembered, but it's essentially the same thing for purposes of illustrating to the OP that some people apparently do think that cryonics is the obvious correct choice.

comment by cupholder · 2010-05-26T19:44:26.980Z · LW(p) · GW(p)

Literally? (link to a site search for "child abuse", cryonics, and Eliezer Yudkowsky)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T20:02:51.103Z · LW(p) · GW(p)

Um, why would anyone vote this down? It's bad juju to put quote marks around things someone didn't actually say, especially when you disagree with the person you're mischaracterizing. Anyway, thanks for the correction, cupholder.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-05-27T07:19:21.119Z · LW(p) · GW(p)

Oops, I knew I should have actually looked that up. The difference between "lousy parent" and "child abuse" is only a matter of degree though - Eliezer is still claiming that cryonics is obviously right, which was the point of contention.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-05-27T09:04:44.446Z · LW(p) · GW(p)

It's a difference of degree which matters, especially since people are apt to remember insults and extreme statements.

comment by Jowibou · 2010-05-29T11:25:51.720Z · LW(p) · GW(p)

Is it so irrational to not fear death?

Replies from: mistercow, ciphergoth, ata, Vladimir_Nesov, timtyler
comment by mistercow · 2010-05-31T03:43:33.710Z · LW(p) · GW(p)

Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.

comment by Paul Crowley (ciphergoth) · 2010-05-29T14:08:55.150Z · LW(p) · GW(p)

No, that could be perfectly rational, but many who claim not to fear death tend to look before crossing the road, take medicine when sick and so on.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-29T16:19:27.329Z · LW(p) · GW(p)

It is rational for a being-who-has-no-preference-for-survival, but it's not obvious that any human, however unusual or deformed, can actually have this sort of preference.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-05-29T23:28:47.654Z · LW(p) · GW(p)

Lots of people demonstrate a revealed preference for non-survival by committing suicide and a variety of other self-destructive acts; others willingly choose non-survival as the means towards an altruistic (or some other sort of) goal. Or do you mean that it is not obvious that humans could lack the preference for survival even under the most favorable state of affairs?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T10:54:11.606Z · LW(p) · GW(p)

Revealed preference as opposed to actual preference: what they would prefer if they were much smarter, knew much more, and had unlimited time to think about it. We typically don't know our actual preference, and don't act on it.

Replies from: jasticE
comment by jasticE · 2010-05-30T11:52:25.539Z · LW(p) · GW(p)

If the actual preference is neither acted upon, nor believed in, how is it a preference?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T12:24:01.107Z · LW(p) · GW(p)

If the actual preference is neither acted upon, nor believed in, how is it a preference?

It is something you won't regret giving as a goal to an obsessive world-rewriting robot that takes what you say its goals are really seriously and very literally, without any way for you to make corrections later. Most revealed preferences, you will regret, exactly for the reasons they differ from the actual preferences: on reflection, you'll find that you'd rather go with something different.

See also this thread.

Replies from: jasticE, Vladimir_M
comment by jasticE · 2010-05-30T20:09:53.415Z · LW(p) · GW(p)

That definition may be problematic in respect to life-and-death decisions such as cryonics: Once I am dead, I am not around to regret any decision. So any choice that leads to my death could not be considered bad.

For instance, I will never regret not having signed up for cryonics. I may, however, regret doing it if I get awakened in the future and my quality of life is too low. On the other hand, I am thinking about it out of sheer curiosity for the future. Thus, signing up would simply increase my current utility by giving me hope of more future utility. I just noticed that this makes the decision accessible to your definition of preference again, by posing the question to myself: "If I signed up for cryonics today, would I regret the [cost of the] decision tomorrow?"

comment by Vladimir_M · 2010-05-30T17:55:35.334Z · LW(p) · GW(p)

This, however, is not the usual meaning of the term "preference." In the standard usage, this word refers to one's favored option in a given set of available alternatives, not to the hypothetical most favorable physically possible state of the world (which, as you correctly note, is unlikely to be readily imaginable). If you insist on using the term with this meaning, fair enough; it's just that your claims sound confusing when you don't include an explanation about your non-standard usage.

That said, one problem I see with your concept of preference is that, presumably, the actions of the "obsessive world-rewriting robot" are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-30T22:13:20.934Z · LW(p) · GW(p)

That said, one problem I see with your concept of preference is that, presumably, the actions of the "obsessive world-rewriting robot" are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.

Preference in this sense is a rigid designator, defined over the world but not determined by anything in the world, so modifying my mind couldn't make my preference consistent with the world; a robot implementing my preference would have to understand this.

comment by ata · 2010-05-29T11:46:24.982Z · LW(p) · GW(p)

As with most (all?) questions of whether an emotion is rational, it depends on what you value and what situation you're facing. If you can save a hundred lives by risking yours, and there's no less risky way nor (hypothetically) any way for you to save more people by other means while continuing to live, and you want to save lives, and if fear of death would stop you from going through with it, then it's irrational to fear death in that case. But in general, when you're not in a situation like that, you should feel as strongly as necessary whatever emotion best motivates you to keep living and avoid things that would stop you from living (assuming you like living). Whether that's fear of death or love of life or whatever else, feel it.

If you're talking about "fear of death" as in constant paranoia over things that might kill you, then that's probably irrational for most people's purposes. Or if you're not too attached to being alive, then it's not too irrational to not fear death, though that's an unfortunate state of affairs. But for most people, generally speaking, I don't see anything irrational about normal levels of fear of death.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-29T12:09:20.763Z · LW(p) · GW(p)

Or if you're not too attached to being alive

(Keeping in mind the distinction between believing that you are not too attached to being alive and actually not having a strong preference for being alive, and the possibility of the belief being incorrect.)

comment by Vladimir_Nesov · 2010-05-29T11:45:10.370Z · LW(p) · GW(p)

Is it so irrational to not fear death?

Yes, it seems to be irrational, even if you talk about fear in particular and not preferring-to-avoid in general. (See also: Emotion, Reversal test.)

Replies from: Jowibou
comment by Jowibou · 2010-05-30T11:05:25.612Z · LW(p) · GW(p)

Since I can see literally nothing to fear in death - in nonexistence itself - I don't really understand why cryonics is seen by so many here as such an essentially "rational" choice. Isn't a calm acceptance of death's inevitability preferable to grasping at a probably empty hope of renewed life simply to mollify one's instinct for survival? I live and value my life, but since post-death I won't be around to feel one way or another about it, I really don't see why I should not seek to accept death rather than counter it. In its promise of "eternal" life, cryonics has the whiff of religion to me.

Replies from: Morendil, Vladimir_Nesov
comment by Morendil · 2010-05-30T14:48:34.281Z · LW(p) · GW(p)

It's certainly best to accept that death is inevitable if you know for a fact that death is inevitable. Which emotion should accompany that acceptance (calm, depression, etc.) depends on particular facts about death - and perhaps some subjective evaluation.

However, the premise seems very much open to question. Death is not "inevitable"; it strikes me as something very much evitable, that is, something which "can be avoided". People used to die when their teeth went bad: dental care has provided ways to avoid that kind of death. People used to die when they suffered an infarction, the consequences of which were by and large unavoidable. Defibrillators are a way to avoid that. And so on.

Historically, every person who ever lived has died before reaching two hundred years of age; but that provides no rational grounds for assuming a zero probability that a person can enjoy a lifespan vastly exceeding that number.

Is it "inevitable" that my life shall be confined to a historical lifespan? Not (by definition) if there is a way to avoid it. Is there a way to avoid it? Given certain reasonable assumptions as to what consciousness and personal identity consist of, there could well be. I am not primarily the cells in my body, I am still me if these cells die and get replaced by functional equivalents. I suspect that I am not even primarily my brain, i.e. that I would still be me if the abstract computation that my brain implements were reproduced on some other substrate.

This insight - "I am a substrate independent computation" - builds on relatively recent scientific discoveries, so it's not surprising it is at odds with historical culture. But it certainly seems to undermine the old saw "death comes to all".

Is it rational to feel hopeful once one has assigned substantial probability to this insight being correct? Yes.

The corollary of this insight is that death, by which I mean information theoretical death (which historically has always followed physical death), holds no particular horrors. It is nothing more and nothing less than the termination of the abstract computation I identify with "being me". I am much more afraid of pain than I am of death, and I view my own death now with something approaching equanimity.

So it seems to me that you're setting up a false opposition here. One can live in calm acceptance of what death entails yet fervently (and rationally) hope for much longer and better life.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T15:40:52.400Z · LW(p) · GW(p)

Good arguments, and I largely agree. However, postponable does not equal evitable. At some point any clear-minded self (regardless of the substratum) is probably going to have to accept that it is either going to end or be transformed to the point where the definition of the word "self" is getting pretty moot. I guess my point remains that post-death nonexistence contains absolute zero horrors in any case. In a weirdly aesthetic sense, the only possible perfect state is non-existence. To paraphrase Sophocles, perhaps the best thing is never to have been born at all. Now, given a healthy love of life and a bit of optimism, it feels best to soldier on, but to hope really to defeat death is a delusional escape from the mature acceptance of death. None of those people who now survive their bad teeth or infarctions have had their lives "saved" (an idiotic metaphor), merely prolonged. Now if that's what you want, fine - but it strikes me as irrational as a way to deal with death itself.

Replies from: Morendil
comment by Morendil · 2010-05-30T16:25:15.209Z · LW(p) · GW(p)

to hope really to defeat death is a delusional escape from the mature acceptance of death

Let's rephrase this with the troublesome terms unpacked as per the points you "largely agree" with: "to hope for a life measured in millenia is a delusional escape from the mature acceptance of a hundred-year lifespan".

In a nutshell: no! Hoping to see a hundred was not, in retrospect, a delusional escape from the mature acceptance of dying at forty-something, which was the lot of prehistoric humans. We don't know yet what changes in technology are going to make the next "normal" lifespan, but we know more about it than our ancestors did.

it strikes me as irrational as a way to deal with death itself

I can believe that it strikes you as weird, and I understand why it could be so. A claim that some argument is irrational is a stronger and less subjective claim. You need to substantiate it.

Your newly introduced arguments are: a) if you don't die you will be transformed beyond any current sense of identity, and b) "the only possible perfect state is non-existence". The latter I won't even claim to understand - given that you choose to continue this discussion rather than go jump off a tall building I can only assume your life isn't a quest for a "perfect state" in that sense.

As to the former, I don't really believe it. I'm reasonably certain I could live for millenia and still choose, for reasons that belong only to me, to hold on to some memories from (say) the year 2000 or so. Those memories are mine, no one else on this planet has them, and I have no reason to suppose that someone else would choose to falsely believe the memories are theirs.

I view identity as being, to a rough approximation, memories and plans. Someone who has (some of) my memories and shares (some of) my current plans, including plans for a long and fun-filled life, is someone I'd identify as "me" in a straightforward sense, roughly the same sense that I expect I'll be the same person in a year's time, or the same sense that makes it reasonable for me to consider plans for my retirement.

Replies from: Jowibou, NancyLebovitz
comment by Jowibou · 2010-05-30T17:33:24.277Z · LW(p) · GW(p)

Perhaps my discomfort with all this lies in cryonics' seeming affinity with the sort of fear-mongering about death that's been the bread and butter of religion for millennia. It just takes it as a fundamental law of the universe that life is better than non-life - not just in practice, not just in terms of our very real, human, animal desire to survive (which I share) - but in some sort of essential, objective, rational, blindingly obvious way. A way that smacks of dogma to my ears.

If you really want to live for millennia, go ahead. Who knows I might decide to join you. But in practice I think cryonics for many people is more a matter of escaping death, of putting our terrified, self-centered, hubristic fear of mortality at the disposal of another dubious enterprise.

As for my own view of "identity": I see it as a kind of metapattern, a largely fictional story we tell ourselves about the patterns of our experience as actors, minds and bodies. I can't quite bring myself to take it so seiously that I'm willing to invest in all kinds of extraordinary measures aimed at its survival. If I found myself desperately wanting to live for millennia, I'd probably just think "for chrissakes get over yourself".

Replies from: Will_Newsome
comment by Will_Newsome · 2010-08-08T15:42:57.425Z · LW(p) · GW(p)

Please, please, please don't let distaste for a certain epistemic disposition interfere with a decision that has a very clear potential for vast sums of positive or negative utility. Argument should screen off that kind of perceived signaling. Maybe it's true that there is a legion of evil Randian cryonauts that only care about sucking every last bit out of their mortal lives because the Christian background they've almost but not quite forgotten raised them with an almost pitiable but mostly contemptible fear of death. Folks like you are much more enlightened and have read up on your Hofstadter and Buddhism and Epicureanism; you're offended that these death-fearing creatures that are so like you didn't put in the extra effort to go farther along the path of becoming wiser. But that shouldn't matter: if you kinda sorta like living (even if death would be okay too), and you can see how cryonics isn't magical and that it has at least a small chance of letting you live for a long time (long enough to decide if you want to keep living, at least!), then you don't have to refrain from duly considering those facts out of a desire to signal distaste for the seemingly bad epistemic or moral status of those who are also interested in cryonics and the way their preachings sound like the dogma of a forgotten faith. Not when your life probabilistically hangs in the balance.

(By the way, I'm not a cryonaut and don't intend to become one; I think there are strong arguments against cryonics, but I think the ones you've given are not good.)

comment by NancyLebovitz · 2010-06-01T18:41:44.208Z · LW(p) · GW(p)

I'm not so sure that, if it's possible to choose to keep specific memories, it will be impossible to record and replay memories from one person to another. It might be a challenge to do so from one organic brain to another, but it seems unlikely to be problematic between uploads of different people unless you get Robin Hanson's uneditable spaghetti-code uploads.

There still might be some difference in experiencing the memory because different people would notice different things in it.

Replies from: Morendil
comment by Morendil · 2010-06-01T19:16:36.741Z · LW(p) · GW(p)

Perhaps "replay" memories has the wrong connotations - the image it evokes for me is that of a partly transparent overlay over my own memories, like a movie overlaid on top of another. That is too exact.

What I mean by keeping such memories is more like being able, if people ask me to tell them stories about what it was like back in 2010, to answer somewhat the same as I would now - updating to conform to the times and the audience.

This is an active process, not a passive one. Next year I'll say things like "last year when we were discussing memory on LW". In ten years I might say "back in 2010 there was this site called LessWrong, and I remember arguing this and that way about memory, but of course I've learned a few things since so I'd now say this other". In a thousand years perhaps I'd say "back in those times our conversations took place in plain text over Web browsers, and as we only approximately understood the mind, I had these strange ideas about 'memory' - to use a then-current word".

Keeping a memory is a lot like passing on a story you like. It changes in the retelling, though it remains recognizable.

comment by Vladimir_Nesov · 2010-05-30T11:45:10.616Z · LW(p) · GW(p)

I live and value my life, but since post-death I won't be around to feel one way or another about it, I really don't see why I should not seek to accept death rather than counter it.

Apply this argument to drug addiction: "I value not being an addict, but since post-addiction I will want to continue experiencing drugs, and I-who-doesn't-want-to-be-an-addict won't be around, I really don't see why I should stay away from becoming an addict". See the problem? Your preferences are about the whole world, with all of its past, present and future, including the time when you are dead. These preferences determine your current decisions; the preferences of future-you or of someone else are not what makes you make decisions at present.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T11:52:41.192Z · LW(p) · GW(p)

I suppose I'd see your point if I believed that drug addiction was inevitable and knew that everyone in the history of everything had eventually become a drug addict. In short, I'm not sure the analogy is valid. Death is a special case, especially since "the time when you are dead" is from one's point of view not a "time" at all. It's something of an oxymoron. After death there IS no time - past present or future.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T12:27:02.335Z · LW(p) · GW(p)

I suppose I'd see your point if I believed that drug addiction was inevitable and knew that everyone in the history of everything had eventually become a drug addict.

Whether something is inevitable is not an argument about its moral value. Have you read the reversal test reference?

After death there IS no time - past present or future.

Please believe in physics.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T12:40:13.275Z · LW(p) · GW(p)

1) Who said anything about morality? I'm asking for a defence of the essential rationality of cryonics. 2) Please read the whole paragraph and try to understand subjective point of view - or lack thereof post-death. (Which strikes me as the essential point of reference when talking about fear of death)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T13:01:41.736Z · LW(p) · GW(p)

1) Who said anything about morality? I'm asking for a defence of the essential rationality of cryonics.

See What Do We Mean By "Rationality"?. When you ask about a decision, its rationality is defined by how well it allows you to achieve your goals, and "moral value" refers to the way your goals evaluate specific options, with the options of higher "moral value" being the same as the options preferred according to your goals.

2) Please read the whole paragraph and try to understand subjective point of view - or lack thereof post-death.

Consider the subjective point of view of yourself-now, on the situation of yourself dying, or someone else dying for that matter, not the point of view of yourself-in-the-future or the subjective point of view of someone-else. It's you-now who needs to make the decision, and whose decisions' rationality we are discussing.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T13:25:46.052Z · LW(p) · GW(p)

Clearly, I'm going to need to level up about this. I really would like to understand it in a satisfactory way; not just play a rhetorical game. That said the phrase "the situation of yourself dying" strikes me as an emotional ploy. The relevant (non)"situation" is complete subjective and objective non-existence, post death. The difficulty and pain etc of "dying" is not at issue here. I will read your suggestions and see if I can reconcile all this. Thanks.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T13:37:32.082Z · LW(p) · GW(p)

That said the phrase "the situation of yourself dying" strikes me as an emotional ploy.

This wasn't my intention. You can substitute that phrase with, say, "Consider the subjective point of view of yourself-now, on the situation of yourself being dead for a long time, or someone else being dead for a long time for that matter." The salient part was supposed to be the point of view, not what you look at from it.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T14:08:26.670Z · LW(p) · GW(p)

Fair enough, but I still think that the "situation of yourself being dead" is ploy-like in that it imagines non-existence as a state or situation rather than an absence of state or situation. Like mistaking a map for an entirely imaginary territory.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T14:24:23.757Z · LW(p) · GW(p)

You can think about a world that doesn't contain any minds, and yours in particular. The property of a world to not contain your mind does not say "nothing exists in this world", it says "your mind doesn't exist in this world". Quite different concepts.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T14:34:51.682Z · LW(p) · GW(p)

Of course I can think about such a world. Where people get into trouble is where they think of themselves as "being dead" in such a world rather than simply "not being" i.e. having no more existence than anything else that doesn't exist. It's a distinction that has huge implications and rarely finds its way into the discussion. No matter how rational people try to be, they often seem to argue about death as if it were a state of being - and something to be afraid of.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-30T14:41:53.632Z · LW(p) · GW(p)

I give up for now, and suggest reading the sequences, maybe in particular the guide to words and map-territory.

Replies from: Jowibou
comment by Jowibou · 2010-05-30T14:50:39.214Z · LW(p) · GW(p)

Clearly some of my underlying assumptions are flawed. There's no doubt I could be more rigorous in my use of the terminology. Still, I can't help but feel that some of the concepts in the sequences obfuscate as much as they clarify on this issue. Sorry if I have wasted your time. Thanks again for trying.

comment by timtyler · 2010-05-30T11:23:41.500Z · LW(p) · GW(p)

Re: "Is it so irrational to not fear death?"

Fear of death should be "manageable":

http://en.wikipedia.org/wiki/Terror_management_theory#Criticism

comment by Violet · 2010-05-27T08:58:20.004Z · LW(p) · GW(p)

I don't like long-term cryonics for the following reasons:

1) If an unmodified Violet were revived, she would not be happy in the far future.
2) If a sufficiently modified Violet were revived, she would not be me.
3) I don't place a large value on there being a "Violet" in the far future.
4) There is a risk that my values and the values of the beings waking Violet up would be incompatible, and avoiding a possible "fixing" of my brain is a very high priority.
5) Thus I don't want to be revived by the far future, and death without cryonics seems a safe way to ensure that.

Replies from: DSimon
comment by DSimon · 2010-09-14T14:47:40.717Z · LW(p) · GW(p)

If an unmodified Violet were revived, she would not be happy in the far future

What makes you sure of this?

comment by Roko · 2010-05-26T18:34:07.450Z · LW(p) · GW(p)

Just noting that buried in the comments Will has stated that he thinks the probability that cryo will actually save your life is one in a million -- 10^-6 -- (with some confusion surrounding the technicalities of how to actually assign that and deal with structural uncertainty).

I think that we need to iron out a consensus probability before this discussion continues.

Edit: especially since if this probability is correct, then the post no longer makes sense...

Replies from: Will_Newsome, PhilGoetz
comment by Will_Newsome · 2010-05-26T18:43:55.197Z · LW(p) · GW(p)

Correction: not 'you', me specifically. I'm young, physically and psychologically healthy, and rarely find myself in situations where my life is in danger (the most obvious danger is of course car accidents). It should also be noted that I think a singularity is a lot nearer than your average singularitarian does, and I think the chance of me dying a non-accidental/non-gory death is really low.

I'm afraid that 'this discussion' is not the one I originally intended with this post: do you think it is best to have it here? I'm afraid that people are reading my post as taking a side (perhaps due to a poor title choice) when in fact it is making a comment about the unfortunate certainty people seem to consistently have on both sides of the issue. (Edit: Of course, this post does not present arguments for both sides, but simply attempts to balance the overall debate in a more fair direction.)

Replies from: Roko
comment by Roko · 2010-05-26T19:48:41.809Z · LW(p) · GW(p)

Indeed, perhaps not the best place to discuss. But it is worth thinking about this as it does make a difference to the point at issue.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T19:58:42.523Z · LW(p) · GW(p)

Should we nominate a victim to write a post summarizing various good points either for or against signing up for cryonics (not the feasibility of cryonics technologies!) while taking care to realize that preferences vary and various arguments have different weights dependent on subjective interpretations? I would love to nominate Steve Rayhawk because it seems right up his alley, but I'm afraid he wouldn't like to be spotlighted. I would like to nominate Steven Kaas if he were willing. (Carl Shulman also comes to mind but I suspect he's much too busy.)

Replies from: steven0461
comment by steven0461 · 2010-05-26T20:51:20.683Z · LW(p) · GW(p)

(edit) I guess I don't fully understand how the proposed post would differ from this one (doesn't it already cover some of the "good points against" part?), and I've also always come down on the "no" side more than most people here.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T21:07:01.747Z · LW(p) · GW(p)

I think I missed some decent points against (one of which is yours) and the 'good arguments for' do not seem to have been collected in a coherent fashion. If they were in the same post, written by the same person, then there's less of a chance that two arguments addressing the same point would talk past each other. I think that you wouldn't have to suggest a conclusion, and could leave it completely open to debate. I'm willing to bet most people will trust you to unbiasedly and effectively put forth the arguments for both sides. (I mean, what with that great quote about reconstruction from corpses and all.)

comment by PhilGoetz · 2010-05-26T20:06:40.578Z · LW(p) · GW(p)

I don't think so - the points in the post stand regardless of the probability Will assigns. Bringing up other beliefs of Will is an ad hominem argument. Ad hominem is a pretty good argument in the absence of other evidence, but we don't need to go there today.

Replies from: Roko
comment by Roko · 2010-05-26T23:54:02.230Z · LW(p) · GW(p)

It wasn't intended as an ad-hom argument.

The point is simply that if people have widely varying estimates of how likely cryo is to work (0.000001 versus say 0.05 for Robin Hanson and say 0.1 for me), we should straighten those out before getting on to other stuff, like whether it is plausible to rationally reject it. It just seems silly to me that the debate goes on in spite of no effort to agree on this crucial parameter.

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Replies from: Will_Newsome, Will_Newsome, JoshuaZ, timtyler
comment by Will_Newsome · 2010-05-27T00:12:41.510Z · LW(p) · GW(p)

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Once again, my probability estimate was for myself. There are important subjective considerations, such as age and definition of identity, and important sub-disagreements to be navigated, such as AI takeoff speed or likelihood of Friendliness. If I were 65 years old, and not 18 like I am, and cared a lot about a very specific me living far into the future, which I don't, and believed that a singularity was in the distant future, instead of the near-mid future as I actually believe, then signing up for cryonics would look a lot more appealing, and might be the obviously rational decision to make.

Replies from: Roko, Vladimir_Nesov
comment by Roko · 2010-05-27T10:53:27.598Z · LW(p) · GW(p)

Most people who are considering cryo here are within 10 years of your age. In particular, I am only 7 years older. Seven years doesn't add up to moving from 0.000001 to 0.1, so one of us has a false belief.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T21:54:22.133Z · LW(p) · GW(p)

What?! Roko, did you seriously not see the two points I had directly after the one about age? Especially the second one?! How is my lack of a strong preference to stay alive into the distant future a false belief? It's a preference, not a belief.

Replies from: Roko
comment by Roko · 2010-05-27T22:04:30.509Z · LW(p) · GW(p)

I agree with you that not wanting to be alive in the distant future is a valid reason to not sign up for cryo, and I think that if that's what you want, then you're correct to not sign up.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:11:31.364Z · LW(p) · GW(p)

Okay. Like I said, the one in a million thing is for myself. I think that most people, upon reflection (but not so much reflection as something like CEV requires), really would like to live far into the future, and thus should have probabilities much higher than 1 in a million.

Replies from: Roko
comment by Roko · 2010-05-27T22:24:15.424Z · LW(p) · GW(p)

How is the probability dependent upon whether you want to live into the future? Surely either you get revived or not? Or do you mean something different than I do by this probability? Do you mean something different than I do by the term "probability"?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:33:27.801Z · LW(p) · GW(p)

We were talking about the probability of getting 'saved', and 'saved' to me requires that the future be such that I will, upon reflection, be thankful that I was revived instead of those resources being used for something else I would have liked to happen. In the vast majority of post-singularity worlds I do not think this will be the case. In fact, in the vast majority of post-singularity worlds, I think cryonics becomes plain irrelevant. And hence my sorta-extreme views on the subject.

I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote.

Replies from: Roko
comment by Roko · 2010-05-27T22:37:09.360Z · LW(p) · GW(p)

Oh, I see, my bad, apologies for the misunderstanding.

In which case, I ask: what is your probability that if you sign up for cryo now, you will be cryopreserved and revived (i.e. that your brain-state will be faithfully restored)? (This being something that you and I ought to agree on, and ought to be roughly the same replacing "Will" with "Roko")

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T23:27:40.903Z · LW(p) · GW(p)

Cool, I'm glad to be talking about the same thing now! (I guess any sort of misunderstanding/argument causes me a decent amount of cognitive burden that I don't realize was there until after it is removed. Maybe a fear of missing an important point that I will be embarrassed about having ignored upon reflection. I wonder if Steve Rayhawk experiences similar feelings on a normal basis?)

Well here's a really simple, mostly qualitative analysis, with the hope that "Will" and "Roko" should be totally interchangeable.

Option 1: Will signs up for cryonics.

  • uFAI is developed before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • uFAI is developed after Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • FAI is developed before Will is cryopreserved. Signing up for cryonics never gets a chance to work for Will specifically.

  • FAI is developed after Will is cryopreserved. Cryonics might work, depending on the implementation and results of things like CEV. This is a huge question mark for me. Something close to 50% is probably appropriate, but at times I have been known to say something closer to 5%, based on considerations like 'An FAI is not going to waste resources reviving you: rather, it will spend resources on fulfilling what it expects your preferences probably were. If your preferences mandate you being alive, then it will do so, but I suspect that most humans upon much reflection and moral evolution won't care as much about their specific existence.' Anna Salamon and (I think) Eliezer suspect that personal identity is closer to human-ness than e.g. Steve Rayhawk and I do, for what it's worth.

  • An existential risk occurs before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • An existential risk occurs after Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

Option 2: Will does not sign up for cryonics.

  • uFAI is developed before Will dies. This situation is irrelevant to our decision theory.

  • uFAI is developed after Will dies. This situation is irrelevant to our decision theory.

  • FAI is developed before Will dies. This situation is irrelevant to our decision theory.

  • FAI is developed after Will dies. Because Will was not cryopreserved the FAI does not revive him in the typical sense. However, perhaps it can faithfully restore Will's brain-state from recordings of Will in the minds of humanity anyway, if that's what humanity would want. Alternatively Will is revived in ancestor simulations done by the FAI or any other FAI that is curious about humanity's history around the time right before its singularity. Measure is really important here, so I'm confused. I suspect it's less, but not orders of magnitude less, than the 50% figure above? This is an important point.

  • An existential risk occurs before Will dies. This possibility has no significance in our decision theory anyway.

  • An existential risk occurs after Will dies. This possibility has no significance in our decision theory anyway.

Basically, the point is that the most important factor by far is what an FAI does after going FOOM, and we don't really know what's going to happen there. So cryonics becomes a matter of preference more than a matter of probability. But if you're thinking about worlds that our decision theory discounts, e.g. where a uFAI is developed or rogue MNT is developed, then the probability of being revived drops a lot.
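
To make the branch comparison concrete, here is a minimal sketch of how the discounting works. Apart from the figures noted in the code comments (the ~50% question mark, the "less but not orders of magnitude less" guess, and the 15% chance of dying before the singularity), every number is an illustrative placeholder rather than anything stated in the thread:

```python
# A minimal sketch of the scenario comparison above. Probabilities are
# illustrative placeholders except where the comments note stated figures.

# Chance an FAI restores a cryopreserved Will (the "huge question mark" above;
# stated as somewhere between roughly 5% and 50%).
p_revive_given_cryo = 0.50

# Chance an FAI reconstructs Will without cryonics (from others' memories,
# ancestor simulations, etc.); described above as "less but not orders of
# magnitude less" than the figure for cryopreservation. Placeholder value.
p_revive_given_no_cryo = 0.30

# Worlds with uFAI or an existential catastrophe are discounted: the decision
# makes no difference in them. Only "FAI arrives after Will dies" worlds matter.
p_die_before_singularity = 0.15   # stated figure "for most people"
p_positive_singularity = 1.0      # assumed for simplicity, as in the thread

weight = p_positive_singularity * p_die_before_singularity
marginal_gain = weight * (p_revive_given_cryo - p_revive_given_no_cryo)
print(f"Marginal probability that signing up changes the outcome: {marginal_gain:.3f}")
```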

Replies from: Roko, jimrandomh, Roko
comment by Roko · 2010-05-29T15:30:12.926Z · LW(p) · GW(p)

You could still actually give a probability that you'll get revived. Yes, I agree that knowing what the outcome of AGI will be is extremely important, but you should still just have a probability for that.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-30T01:26:13.834Z · LW(p) · GW(p)

Well, that gets tricky, because I have weak subjective evidence that I can't share with anyone else, and really odd ideas about it, which together make me think that an FAI is the likely outcome. (Basically, I suspect something sorta kinda a little along the lines of me living in a fun-theory universe. Or more precisely, I am a sub-computation of a longer computation that is optimized for fun, so that even though my life is sub-optimal at the moment I expect it to get a lot better in the future, and that the average of the whole computation's fun will turn out to be argmaxed. And my life right now rocks pretty hard anyway. I suspect other people have weaker versions of this [with different evidence from mine] with correspondingly weaker probability estimates for this kind of thing happening.)

So if we assume, for the sake of ease, that a positive singularity will occur with p=1, that leaves about 2% that cryonics will work (5% that an FAI raises the cryonic dead minus 3% that an FAI raises all the dead) if you die, times the probability that you die before the singularity (about 15% for most people [but about 2% for me]), which leads to 0.3% as my figure for someone with a sense of identity far stronger than mine, Kaj's, and many others', who would adjust downward from there (an FAI can be expected to extrapolate our minds and discover it should use the resources on making 10 people with values similar to ourselves instead, or something). If you say something like 5% positive singularity instead, then it comes out to 0.015%, or very roughly 1 in 7000 (although of course your decision theory should discount worlds in which you die no matter what anyway, so the probability of actually living past the singularity shouldn't change your decision to sign up all that much). I suspect someone with different intuitions would give a very different answer, but it'll be hard to make headway in debate because it really is so non-technical. The reason I give extremely low probabilities for myself is due to considerations that apply to me only and that I'd rather not go into.
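
For concreteness, the arithmetic in the previous paragraph can be spelled out as a short sketch, using only the figures stated above (the code is illustrative, not an endorsement of the numbers):

```python
# The arithmetic from the comment above, using the stated figures.
p_fai_revives_cryonauts = 0.05   # FAI raises the cryonically preserved dead
p_fai_revives_everyone  = 0.03   # FAI raises all the dead regardless
p_cryonics_matters = p_fai_revives_cryonauts - p_fai_revives_everyone   # 0.02

p_die_before_singularity = 0.15  # "for most people"; Will's own figure is ~0.02

# Assuming a positive singularity with certainty (p = 1):
print(p_cryonics_matters * p_die_before_singularity)         # 0.003  -> 0.3%

# Assuming a 5% chance of a positive singularity instead:
print(p_cryonics_matters * p_die_before_singularity * 0.05)  # 0.00015 -> ~1 in 7000
```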

Replies from: Vladimir_Nesov, Roko, Roko
comment by Vladimir_Nesov · 2010-05-30T14:01:58.972Z · LW(p) · GW(p)

Hmm... Seems like crazy talk to me. It's your mind, tread softly.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-30T20:50:15.142Z · LW(p) · GW(p)

The ideas about fun theory are crazy talk indeed, but they're sort of tangential to my main points. I have much crazier ideas peppered throughout the comments of this post (very silly implications of decision theory in a level 4 multiverse that are almost assuredly wrong but interesting intuition pumps) and even crazier ideas in the notes I write to myself. Are you worried that this will lead to some sort of mental health danger, or what? I don't know how often high shock levels damage one's sanity to an appreciable degree.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-01T11:40:51.563Z · LW(p) · GW(p)

I have much crazier ideas peppered throughout the comments of this post (very silly implications of decision theory in a level 4 multiverse that are almost assuredly wrong but interesting intuition pumps) and even crazier ideas in the notes I write to myself. Are you worried that this will lead to some sort of mental health danger, or what? I don't know how often high shock levels damage one's sanity to an appreciable degree.

It's not "shock levels" which are a problem, it's working in the "almost assuredly wrong" mode. If you yourself believe ideas you develop to be wrong, are they knowledge, are they progress? Do crackpots have "damaged sanity"?

It's usually better to develop ideas on as firm ground as possible, working towards the unknown from statements you can rely on. Even in this mode you will often fail, but you'd be able to make gradual progress that won't be illusory. Not all questions are ready to be answered (or even asked).

comment by Roko · 2010-05-30T13:46:44.421Z · LW(p) · GW(p)

times the probability that you die before the singularity [...] about 2% for me

98% certain that the singularity will happen before you die (which could easily be 2070)? This seems like an unjustifiably high level of confidence.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-30T20:41:11.078Z · LW(p) · GW(p)

For what it's worth, the uncertain future application gives me 99% chance of a singularity before 2070, if I recall correctly. The mean of my distribution is 2028.

I really wish more SIAI members talked to each other about this! Estimates vary wildly, and I'm never sure if people are giving estimates taking into account their decision theory or not (that is, thinking 'We couldn't prevent a negative singularity if it were to occur in the next 10 years, so let's discount those worlds and exclude them from our probability estimates'). I'm also not sure if people are giving far-off estimates because they don't want to think about the implications otherwise, or because they tried to build an FAI and it didn't work, or because they want to signal sophistication and sophisticated people don't predict crazy things happening very soon, or because they are taking an outside view of the problem, or because they've read the recent publications at the AGI conferences and various journals, thought about the advances that need to be made, estimated the rate of progress, and determined a date using the inside view (like Steve Rayhawk, who gives a shorter time estimate than anyone else; or Shane Legg, who I've heard also gives a short estimate, but I am not sure about that; or Ben Goertzel, who I am again not entirely sure about; or Juergen Schmidhuber, who seems to be predicting it soonish; or Eliezer, who used to have a soonish estimate with very wide tails, but I have no idea what his thoughts are now). I've heard the guys at FHI also have distant estimates, and a lot of narrow-AI people predict far-off AGI as well. Where are the 'singularity is far' people getting their predictions?

Replies from: Roko
comment by Roko · 2010-05-31T13:55:41.019Z · LW(p) · GW(p)

uncertain future application gives me 99% chance of a singularity before 2070

UF is not accurate!

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-31T22:46:32.535Z · LW(p) · GW(p)

True. But the mean of my distribution is still 2028 regardless of the inaccuracy of UF.

Replies from: Roko
comment by Roko · 2010-05-31T23:20:39.754Z · LW(p) · GW(p)

The problem with the Uncertain Future is that it is a model of reality which allows you to play with the parameters of the model, but not the structure. For example, it has no option for "model uncertainty", e.g. the possibility that the assumptions it makes about the forms of probability distributions are incorrect. And a lot of these assumptions were made for the sake of tractability rather than realism. I think that the best way to use it is as an intuition pump for your own model, which you could make in Excel or in your head.

Giving probabilities of 99% is a classic symptom of not having any model uncertainty.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-31T23:37:22.711Z · LW(p) · GW(p)

Giving probabilities of 99% is a classic symptom of not having any model uncertainty.

If Nick and I write some more posts I think this would be the theme. Structural uncertainty is hard to think around.

Anyway, I got my singularity estimates by listening to lots of people working at SIAI and seeing whose points I found compelling. When I arrived at Benton I was thinking something like 2055. It's a little unsettling that the more arguments I hear from both sides, the nearer in the future my predictions get. I think my estimates are probably too biased towards Steve Rayhawk's, but this is because everyone else's estimates seem to take the form of outside-view considerations that I find weak.

comment by Roko · 2010-05-30T13:44:18.375Z · LW(p) · GW(p)

(5% that an FAI raises the cryonic dead minus 3% that an FAI raises all the dead)

This seems to rely on your idea that, on reflection, humans probably don't care about themselves, i.e. if I reflected sufficiently hard, I would place zero terminal value on my own life.

I wonder how you're so confident about this? Like, 95% confident that all humans would place zero terminal value on their own lives?

Note also that it is possible that some but not all people would, on reflection, place zero value on their own lives.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-30T20:45:02.084Z · LW(p) · GW(p)

if I reflected sufficiently hard, I would place zero terminal value on my own life.

Not even close to zero, but less terminal value than you would assign to other things that an FAI could optimize for. I'm not sure how much extrapolated unity of mankind there would be in this regard. I suspect Eliezer or Anna would counter my 5% with a 95%, and I would Aumann to some extent, but I was giving my impression and not my belief. (I think that this is better practice at the start of a 'debate': otherwise you might update on the wrong expected evidence. EDIT: To be more clear, I wouldn't want to update on Eliezer's evidence if it was some sort of generalization from fictional evidence from Brennan's world or something, but I would want to update if he had a strong argument that identity has proven to be extremely important to all of human affairs since the dawn of civilization, which is entirely plausible.)

Replies from: Roko
comment by Roko · 2010-05-31T13:41:43.416Z · LW(p) · GW(p)

It seems odd to me that out of the roughly 10^57 atoms in the solar system, there would not be any left to revive cryo patients. My impression is that an FAI would revive cryo patients, with probability 80%, the remaining 20% being for very odd scenarios that I just can't think of.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-31T22:53:30.996Z · LW(p) · GW(p)

I guess I'm saying that the atoms it takes to revive a cryo patient are used vastly more wastefully than the same mass of computronium would be. You're trading off one life for a huge number of potential lives. A few people, like Alicorn if I understand her correctly, think that people who are already alive are worth a huge number of potential lives, but I don't quite understand that intuition. Is this a point of disagreement for us?

Replies from: Roko
comment by Roko · 2010-05-31T23:11:41.942Z · LW(p) · GW(p)

Yeah, but the cryo patient could be run in software rather than in hardware, which would mean that it would be a rather insignificant amount of extra effort.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-31T23:30:12.947Z · LW(p) · GW(p)

Gah, sorry, I keep leaving things out. I'm thinking about the actual physical work of finding out where cryo patients are, scanning their brains, repairing the damage, and then running them. Mike Blume had a good argument against this point: proportionally, the startup cost of scanning a brain is not much at all compared to the infinity of years of actually running the computation. This is where I should be doing the math... so I'm going to think about it more and try to figure things out. Another point is that an AGI could gain access to infinite computing power in finite time, during which it could do everything, but I think I'm just confused about the nature of computations in a Tegmark multiverse here.

Replies from: Roko
comment by Roko · 2010-05-31T23:47:41.573Z · LW(p) · GW(p)

actual physical work of finding out where cryo patients are, scanning their brains, repairing the damage, and then running them.

I hadn't thought of that; certainly if the AI's mission was to run as many experience-moments as possible in the amount of space-time-energy it had, then it wouldn't revive cryo patients.

Note that the same argument says that it would kill all existing persons rather than upload them, and re-use their mass and energy to run ems of generic happy people (maximizing experience moments without regard to any deontological constraints has some weird implications...)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-06-01T00:06:48.126Z · LW(p) · GW(p)

Yes, but this makes people flustered, so I prefer not to bring it up as a possibility. I'm not sure if it was Bostrom or just generic SIAI thinking where I heard that an FAI might deconstruct us in order to go out into the universe, solve the problem of astronomical waste, and then run computations of us (or in this case generic transhumans) far in the future.

Replies from: Roko
comment by Roko · 2010-06-01T01:18:31.334Z · LW(p) · GW(p)

Of course, at this point the terminology "Friendly" becomes misleading, and we should talk about a Goal-X-controlled AGI, where Goal X is a variable for the goal that the AGI would optimize for.

There is no unique value for X. Some have suggested the output of CEV as the goal system, but if you look at CEV in detail, you see that it is jam-packed with parameters, all of which make a difference to the actual output.

I would personally lobby against the idea of an AGI that did crazy shit like killing existing people to save a few nanoseconds.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-06-01T01:26:48.160Z · LW(p) · GW(p)

Hm, I've noticed before that the term 'Friendly' is sort of vague. What would I call an AI that optimizes strictly for my goals (and if I care about others' goals, so be it)? A Will-AI? I've said a few times 'your Friendly is not my Friendly' but I think I was just redefining Friendliness in an incorrect way that Eliezer wouldn't endorse.

Replies from: Douglas_Knight, Roko, Vladimir_Nesov
comment by Douglas_Knight · 2010-06-01T16:36:49.220Z · LW(p) · GW(p)

What would I call an AI that optimizes strictly for my goals...A Will-AI?

One could say "Friendly towards Will."

But the problem of nailing down your goals seems to me much harder than the problem of negotiating goals between different people. Thus I don't see a problem of being vague about the target of Friendliness.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-01T16:57:24.583Z · LW(p) · GW(p)

But the problem of nailing down your goals seems to me much harder than the problem of negotiating goals between different people. Thus I don't see a problem of being vague about the target of Friendliness.

Agreed. And asking the question of what is preference of a specific person, represented in some formal language, seems to be a natural simplification of the problem statement, something that needs to be understood before the problem of preference aggregation can be approached.

comment by Roko · 2010-06-01T14:45:56.282Z · LW(p) · GW(p)

but I think I was just redefining Friendliness in an incorrect way that Eliezer wouldn't endorse.

Beware of the urge to censor thoughts that disagree with authority. I personally agree that there is a serious issue here -- the issue of moral antirealism, which implies that there is no "canonical human notion of goodness", so the terminology "Friendly AI" is actually somewhat misleading, and it might be better to say "average human extrapolated morality AGI" when that's what we want to talk about, e.g.

"an average human extrapolated morality AGI would oppose a paperclip maximizer".

Then it sounds less onerous to say that you disagree with what an average human extrapolated morality AGI would do than that you disagree with what a "Friendly AI" would do, because most people on this forum disagree with averaged-out human morality (for example, the average human is a theist). Contrast:

"What, you disagree with the FAI? Are you a bad guy then?"

comment by Vladimir_Nesov · 2010-06-01T15:10:13.867Z · LW(p) · GW(p)

"Friendly AI" is about as specific/ambiguous as "morality" - something humans mostly have in common, allowing for normal variation, not referring to details about specific people. As with preference (morality) of specific people, we can speak of FAI optimizing the world to preference of specific people. Naturally, for each given person it's preferable to launch a personal-FAI to a consensus-FAI.

comment by jimrandomh · 2010-05-27T23:40:15.034Z · LW(p) · GW(p)

However, perhaps it can faithfully restore Will's brain-state from recordings of Will in the minds of humanity anyway, if that's what humanity would want. Alternatively Will is revived in ancestor simulations done by the FAI or any other FAI that is curious about humanity's history around the time right before its singularity.

I am reasonably confident that no such process can produce an entity that I would identify as myself. Being reconstructed from other people's memories means losing the memories of all inner thoughts, all times spent alone, and all times spent with people who have died or forgotten the occasion. That's too much lost for any sort of continuity of consciousness.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T23:44:29.781Z · LW(p) · GW(p)

Hm, well we can debate the magic powers a superintelligence possesses (whether or not it can raise the dead), but I think this would make Eliezer sad. I for one am not reasonably confident either way. I am not willing to put bounds on an entity that I am not sure won't get access to an infinite amount of computation in finite time. At any rate, it seems we have different boundaries around identity. I'm having trouble removing the confusion about identity from my calculations.

comment by Roko · 2010-05-29T15:32:23.017Z · LW(p) · GW(p)

I suspect that most humans upon much reflection and moral evolution won't care as much about their specific existence

You suspect that most people, upon reflection, won't care whether they live or die? I'm intrigued: what makes you think this?

comment by Vladimir_Nesov · 2010-05-27T09:40:50.726Z · LW(p) · GW(p)

There are important subjective considerations, such as age and definition of identity,

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

You are not really trying to figure out "How likely am I to survive as a result of signing up?"; that's just an instrumental question that is supposed to be helpful. You are trying to figure out which decision you should make.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:02:16.828Z · LW(p) · GW(p)

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

Simply wrong. I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you?

EDIT: Ah, I think I know what happened: Roko and I were talking about the probability of me being 'saved' by cryonics in the thread he linked to, but perhaps you missed that. Let me copy/paste something I said from this thread: "I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote." I don't think I emphasized this enough. My apologies. (I feel silly, because without this distinction you've probably been thinking I've been committing the mind projection fallacy this whole time, and I didn't notice.)

You are not really trying to figure out "How likely am I to survive as a result of signing up?"; that's just an instrumental question that is supposed to be helpful. You are trying to figure out which decision you should make.

Not sure I'm parsing this right. Yes, I am determining what decision I should make. The instrumental question is a part of that, but it is not the only consideration.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-27T23:17:28.860Z · LW(p) · GW(p)

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you?

You haven't misunderstood me, but you need to pay attention to this question, because it's more or less a consensus on Less Wrong that your position expressed in the above quote is wrong. You should maybe ask around for clarification of this point, if you don't get a change of mind from discussion with me.

You may try the metaethics sequence, and also/in particular these posts:

The fact that preference is computed in the mind doesn't make it any less a part of the territory than anything else. This is just a piece of territory that happens to be currently located in human minds. (Well, not quite, but to a first approximation.)

Your map may easily change even if the territory stays the same. This changes your belief, but this change doesn't influence what's true about the territory. Likewise, your estimate of how good situation X is may change, once you process new arguments or change your understanding of the situation, for example by observing new data, but that change of your belief doesn't influence how good X actually is. Morality is not a matter of interpretation.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T23:41:14.474Z · LW(p) · GW(p)

Before I spend a lot of effort trying to figure out where I went wrong (which I'm completely willing to do, because I read all of those posts and the metaethics sequence and figured I understood them), can you confirm that you read my EDIT above, and that the misunderstanding addressed there does not encompass the problem?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-27T23:52:56.959Z · LW(p) · GW(p)

Now I have read the edit, but it doesn't seem to address the problem. Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection". If you expect to believe something, you should just believe it right away. See Conservation of expected evidence. But then, "probability this decision is right" is not something you can use for making the decision, not directly.

Replies from: Nick_Tarleton, Will_Newsome
comment by Nick_Tarleton · 2010-05-28T04:36:28.797Z · LW(p) · GW(p)

Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection".

This might not be the most useful concept, true, but the issue at hand is the meta-level one of people's possible overconfidence about it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T11:51:01.176Z · LW(p) · GW(p)

"Probability of signing up being good", especially obfuscated with "justified upon infinite reflection", being subtly similar to "probability of the decision to sign up being correct", is too much of a ruse to use without very careful elaboration. A decision can be absolutely, 99.999999% correct, while the probability of it being good remains at 1%, both known to the decider.

comment by Will_Newsome · 2010-05-28T00:11:14.167Z · LW(p) · GW(p)

So you read footnote 2 of the post and do not think it is a relevant and necessary distinction? And you read Steven's comment in the other thread where it seems he dissolved our disagreement and determined we were talking about different things?

I know about the conservation of expected evidence. I understand and have demonstrated understanding of the content in the various links you've given me. I really doubt I've been making the obvious errors you accuse me of for the many months I've been conversing with people at SIAI (and at Less Wrong meetups and at the decision theory workshop) without anyone noticing.

Here's a basic summary of what you seem to think I'm confused about: There is a broad concept of identity in my head. Given this concept of identity I do not want to sign up for cryonics. If this concept of identity changed such that the set of computations I identified with became smaller, then cryonics would become more appealing. I am talking about the probability of expected utility, not the probability of an event. The first is in the map (even if the map is in the territory, which I realize, of course); the second is in the territory.

EDIT: I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T00:25:26.246Z · LW(p) · GW(p)

I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

That the preference is yours and yours alone, without any community to share it, doesn't make its content any less of a fact than if you'd had a whole humanity of identical people to back it up. (This identity/probability discussion is tangential to the more focused question of the correctness of choice.)

comment by Vladimir_Nesov · 2010-05-28T00:20:19.089Z · LW(p) · GW(p)

The easiest step is for you to look over the last two paragraphs of this comment and see if you agree with them. (Say in what sense you agree or disagree, if you suspect essential interpretational ambiguity.)

I don't know why you brought up the concept of identity (or indeed cryonics) in the above; it wasn't part of this particular discussion.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-28T00:26:36.683Z · LW(p) · GW(p)

At first glance, and after 15 seconds of thinking, I agree, but: "but that change of your belief doesn't influence how good X actually is" is to me more like "but that change of your belief doesn't influence how good X will be considered upon an infinite amount of infinitely good reflection".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T00:40:05.629Z · LW(p) · GW(p)

Now try to figure out what the question "What color is the sky, actually?" means, when compared with "How good is X, actually?" and your interpretation "How good will X seem after an infinite amount of infinitely good reflection?". The "infinitely good reflection" thing is a surrogate for the fact itself, no less in the first case, and no more in the second.

If you essentially agree that there is a fact of the matter about whether a given decision is the right one, what did you mean by the following?

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory.

You can't "assign utility as you please", this is not a matter of choice. The decision is either correct or it isn't, and you can't make it correct or incorrect by willing so. You may only work on figuring out which way it is, like with any other fact.

Replies from: Will_Newsome, Nick_Tarleton
comment by Will_Newsome · 2010-05-28T02:17:06.500Z · LW(p) · GW(p)

Edit: adding a sentence in bold that is really important but that I failed to notice the first time. (Nick Tarleton alerted me to an error in this comment that I needed to fix.)

Any intelligent agent will discover that the sky is blue. Not every intelligent agent will think that the blue sky is equally beautiful. Me, I like grey skies and rainy days. If I discover that I actually like blue skies at a later point, then that changes the perceived utility of seeing a grey sky relative to a blue one. The simple change in preference also changes my expected utility. Yes, maybe the new utility was the 'correct' utility all along, but how is that an argument against anything I've said in my posts or comments? I get the impression you consistently take the territory view where I take the map view, and I further think that the map view is way more useful for agents like me that are neither infinitely intelligent nor infinitely reflective. (Nick Tarleton disagrees about taking the map view and I am now reconsidering. He raises the important point that taking the territory view doesn't mean throwing out the map, and gives the map something to be about. I think he's probably right.)

You may only work on figuring out which way it is, like with any other fact.

And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T12:08:02.466Z · LW(p) · GW(p)

And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.

This reads to me like

To figure out the weight of a person, we need to develop experimental procedures, make observations, and so on. Yes, maybe it turns out that "weight of a person" is a universal constant and that all experimenters will agree that it's exactly 80 kg in all cases, and weighing people will thus turn out really easy, but I don't buy this assertion.

See the error? That there are moral facts doesn't imply that everyone's preference is identical, that "all intelligent agents" will value the same thing. Every sane agent should agree on what is moral, but not every sane agent is moved by what is moral; some may be moved by what is prime or something, while agreeing with you that what is prime is often not moral. (See also this comment.)

Replies from: Blueberry
comment by Blueberry · 2010-05-28T14:57:45.492Z · LW(p) · GW(p)

I'm a little confused about your "weight of a person" example because 'a' is ambiguous in English. Did you mean one specific person, or the weighing of different people?

Every sane agent should agree on what is moral

What if CEV doesn't exist, and there really are different groups of humans with different values? Is one set of values "moral" and the other "that other human thing that's analogous to morality but isn't morality"? Primeness is so different from morality that it's clear we're talking about two different things. But say we take what you're calling morality and modify it very slightly, only to the point where many humans still hold to the modified view. It's not clear to me that the agents will say "I'm moved by this modified view, not morality". Why wouldn't they say "No, this modification is the correct morality, and I am moved by morality!"

I have read the metaethics sequence but don't claim to fully understand it, so feel free to point me to a particular part of it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T16:33:43.186Z · LW(p) · GW(p)

What if CEV doesn't exist, and there really are different groups of humans with different values?

Of course different people have different values. These values might be similar, but they won't be identical.

Primeness is so different from morality that it's clear we're talking about two different things.

Yes, but what is "prime number"? Is it 5, or is it 7? 5 is clearly different from 7, although it's very similar to it in that it's also prime. Use the analogy of prime=moral and 5=Blueberry's values, 7=Will's values.

It's not clear to me that the agents will say "I'm moved by this modified view, not morality". Why wouldn't they say "No, this modification is the correct morality, and I am moved by morality!"

Because that would be pointless disputing of definitions - clearly, different things are meant by the word "morality" in your example.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T21:45:55.744Z · LW(p) · GW(p)

Yes, but what is "prime number"? Is it 5, or is it 7? 5 is clearly different from 7, although it's very similar to it in that it's also prime. Use the analogy of prime=moral and 5=Blueberry's values, 7=Will's values.

I see your point, but there is an obvious problem with this analogy: prime and nonprime are two discrete categories. But we can consider a continuum of values, ranging from something almost everyone agrees is moral, through values that are unusual or uncommon but still recognized as human values, all the way to completely alien values like paperclipping.

My concern is that it's not clear where in the continuum the values stop being "moral" values, unlike with prime numbers.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-02T22:21:21.150Z · LW(p) · GW(p)

My concern is that it's not clear where in the continuum the values stop being "moral" values, unlike with prime numbers.

It might be unclear where the line lies, but it shouldn't make the concept itself "fuzzy", merely not understood. What we talk about when we refer to a certain idea is always something specific, but it's not always clear what is implied by what we talk about. That different people can interpret the same words as referring to different ideas doesn't make any of these different ideas undefined. The failure to interpret the words in the same way is a failure of communication, not a characterization of the idea that failed to be communicated.

I of course agree that "morality" admits a lot of similar interpretations, but I'd venture to say that "Blueberry's preference" does as well. It's an unsolved problem - a core question of Friendly AI - to formally define any of the concepts interpreting these words in a satisfactory way. The fuzziness in communication and elusiveness in formal understanding are relevant equally for the aggregate morality and personal preference, and so the individual/aggregate divide is not the point that particularly opposes the analogy.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T22:39:06.240Z · LW(p) · GW(p)

I'm still very confused.

Do you think there is a clear line between what humans in general value (morality) and what other entities might value, and we just don't know where it is? Let's call the other side of the line 'schmorality'. So a paperclipper's values are schmoral.

Is it possible that a human could have values on the other side of the line (schmoral values)?

Suppose another entity, who is on the other side of the line, has a conversation with a human about a moral issue. Both entities engage in the same kind of reasoning, use the same kind of arguments and examples, so why is one reasoning called "moral reasoning" and the other just about values (schmoral reasoning)?

Suppose I am right on the edge of the line. So my values are moral values, but a slight change makes these values schmoral values. From my point of view, these two sets of values are very close. Why do you give them completely different categories? And suppose my values change slightly over time, so I cross the line and back within a day. Do I suddenly stop caring about morality, then start again? This discontinuity seems very strange to me.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-06-02T22:56:02.035Z · LW(p) · GW(p)

I don't say that any given concept is reasonable for all purposes, just that any concept has a very specific intended meaning, at the moment it's considered. The concept of morality can be characterized as, roughly, referring to human-like preference, or aggregate preference of humanity-like collections of individual preferences - this is a characterization resilient to some measure of ambiguity in interpretation. The concepts themselves can't be negotiated, they are set in stone by their intended meaning, though a different concept may be better for a given purpose.

Replies from: Blueberry
comment by Blueberry · 2010-06-02T23:02:57.477Z · LW(p) · GW(p)

I don't say that any given concept is reasonable for all purposes, just that any concept has a very specific intended meaning, at the moment it's considered. The concept of morality can be characterized as, roughly, referring to human-like preference

Thanks! That actually helped a lot.

comment by Nick_Tarleton · 2010-05-28T04:29:55.562Z · LW(p) · GW(p)

If you essentially agree that there is fact of the matter about whether a given decision is the right one, what did you mean by the following?

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory.

In this exchange

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

There are important subjective considerations, such as age and definition of identity,

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

Will, by "definition of identity", meant a part of preference, making the point that people might have varying preferences (this being the sense in which preference is "subjective") that make cryonics a good idea for some but not others. He read your response as a statement of something like moral realism/externalism; he intended his response to address this, though it was phrased confusingly.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-28T12:11:18.816Z · LW(p) · GW(p)

That would be a potentially defensible view (What are the causes of variation? How do we know it's there?), but I'm not sure it's Will's (and using the word "definition" in this sense goes very much against the definition of "definition").

comment by Will_Newsome · 2010-05-27T01:04:38.595Z · LW(p) · GW(p)

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Similar to what I think JoshuaZ was getting at, signing up for cryonics is a decently cheap signal of your rationality and willingness to take weird ideas seriously, and it's especially cheap for young people like me who might never take advantage of the 'real' use of cryonics.

comment by JoshuaZ · 2010-05-27T00:05:38.158Z · LW(p) · GW(p)

Really? Even if you buy into Will's estimate, there are at least three arguments that are not weak:

1) The expected utility argument (I presented above arguments for why this fails, but it isn't completely clear that those rebuttals are valid)

2) One might think that buying into cryonics helps force people (including oneself) to think about the future in a way that produces positive utility.

3) One gets a positive utility from the hope that one might survive using cryonics.

Note that all three of these are fairly standard pro-cryonics arguments that all are valid even with the low probability estimate made by Will.

Replies from: Roko
comment by Roko · 2010-05-27T10:55:55.315Z · LW(p) · GW(p)

none of those hold for p = 1 in a million.

Expected utility doesn't hold because you can use the money to give yourself more than an additional 1-in-a-million chance of survival to the singularity, for example by buying 9000 lottery tickets and funding SIAI if you win.

1 in a million is really small.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-27T14:17:47.212Z · LW(p) · GW(p)

none of those hold for p = 1 in a million.

That really depends a lot on the expected utility. Moreover, argument 2 above (getting people to think about long-term prospects) has little connection to the value of p.

Replies from: Roko
comment by Roko · 2010-05-27T15:03:17.082Z · LW(p) · GW(p)

The point about thinking more about the future with cryo is that you expect to be there.

p=1 in 1 million means you don't expect to be there.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-27T15:09:04.837Z · LW(p) · GW(p)

Even a small chance that you will be there helps put people in the mind-set to think long-term.

comment by timtyler · 2010-05-30T11:54:08.438Z · LW(p) · GW(p)

Re: "whether it is plausible to rationally reject it"

Of course people can plausibly rationally reject cryonics!

Surely nobody has been silly enough to argue that cryonics makes good financial sense - irrespective of your goals and circumstances.

Replies from: Roko
comment by Roko · 2010-05-30T14:16:27.851Z · LW(p) · GW(p)

If your goals don't include self-preservation, then it is not for you.

Replies from: timtyler
comment by timtyler · 2010-05-30T14:58:55.368Z · LW(p) · GW(p)

In biology, individual self-preservation is an emergent subsidiary goal - what is really important is genetic self-preservation.

Organisms face a constant trade-off - whether to use resources now to reproduce, or whether to invest them in self-perpetuation - in the hope of finding a better chance to reproduce in the future.

Calorie restriction and cryonics are examples of this second option - sacrificing current potential for the sake of possible future gains.

Replies from: Morendil
comment by Morendil · 2010-05-30T15:11:09.385Z · LW(p) · GW(p)

Organisms face a constant trade-off - whether to use resources now to reproduce, or whether to invest them in self-perpetuation - in the hope of finding a better chance to reproduce in the future.

Evolution faces this trade-off. Individual organisms are just stuck with trade-offs already made, and (if they happen to be endowed with explicit motivations) may be motivated by something quite other than "a better chance to reproduce in the future".

Replies from: timtyler
comment by timtyler · 2010-05-30T15:24:20.290Z · LW(p) · GW(p)

Organisms choose - e.g. they choose whether to do calorie restriction - which diverts resources from reproductive programs to maintenance ones. They choose whether to divert resources in the direction of cryonics companies as well.

Replies from: Morendil
comment by Morendil · 2010-05-30T15:39:33.131Z · LW(p) · GW(p)

I'm not disputing that organisms choose. I'm disputing that organisms necessarily have reproductive programs. (You can only face a trade-off between two goals if you value both goals to start with.) Some organisms may value self-preservation, and value reproduction not at all (or only insofar as they view it as a form of self-preservation).

Replies from: timtyler
comment by timtyler · 2010-05-30T15:42:46.142Z · LW(p) · GW(p)

Not all organisms choose - for example, some have strategies hard-wired into them - and others are broken.

comment by JoshuaZ · 2010-05-26T14:34:19.090Z · LW(p) · GW(p)

This post seems to focus too much on Singularity related issues as alternative arguments. Thus, one might think that if one assigns the Singularity a low probability one should definitely take cryonics. I'm going to therefore suggest a few arguments against cryonics that may be relevant:

First, there are other serious existential threats to humans. Many don't even arise from our technology. Large asteroids would be an obvious example. Gamma ray bursts and nearby stars going supernova are other risks. (Betelgeuse is a likely candidate for a nearby supernova making our lives unpleasant. If current estimates are correct there will be substantial radiation from Betelgeuse in that situation but not so much as to wipe out humanity. But we could be wrong.)

Second, one may see a high negative utility if one gets cryonics and one's friends and relatives do not. The abnormal after-death result could substantially interfere with their grieving processes. Similarly, there's a direct opportunity cost to paying and preparing for cryonics.

The above argument about lost utility is normally responded to by claiming that the expected utility for cryonics is infinite. If this were actually the case, this would be a valid response.

This leads neatly to my third argument: The claim that my expected utility from cryonics is infinite fails. Even in the future, there will be some probability that I die at any given point. If that probability is never reduced below a certain fixed amount, then my expected life-span is still finite even if I assume cryonics succeeds. (Fun little exercise: suppose that my probability of dying is x on any given day. What is my expected number of days of life? Note that no matter how small x is, as long as x > 0, you still get a finite number.) Thus, even if one agrees that an infinite lifespan can give infinite utility, it doesn't follow that cryonics gives an expected value that is infinite. (Edit: What happens in a MWI situation is more complicated, but similar arguments can be made, as the fraction of universes where you exist declines at a geometric rate, so the total sum of utility over all universes is still finite.)
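
For the exercise in parentheses, the standard geometric-distribution result makes the point concrete (a sketch, stated in the comment's own terms):

```latex
% Expected lifespan with a constant daily death probability x (0 < x <= 1):
% the number of days lived is geometrically distributed, so
\mathbb{E}[\text{days}] = \sum_{k=1}^{\infty} k\, x (1-x)^{k-1} = \frac{1}{x}
% e.g. x = 10^{-6} per day gives an expected 10^6 days (roughly 2{,}700 years):
% large, but finite, so expected utility stays finite whenever per-day utility is bounded.
```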

Fourth, it isn't even clear that one can meaningfully talk about infinite utility. For example, consider the situation where you are given two choices (probably given to you by Omega, because that's the standard genie equivalent on LW). In one of them, you are guaranteed immortality with no costs. In the other you are guaranteed immortality but are first tortured for a thousand years. The expected utility for both is infinite, but I'm pretty sure that no one is indifferent between the two choices. This is closely connected to the fact that economists, when using utility, make an effort to show that their claims remain true under monotonic transformations of total utility. This cannot hold when one has infinite utility being bandied about (it isn't even clear that such transformations are meaningful in such contexts). So much of what we take for granted about utility breaks down.

Replies from: orthonormal
comment by orthonormal · 2010-05-27T01:42:55.581Z · LW(p) · GW(p)

And if the expected utility of cryonics is simply a very large yet finite positive quantity?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-27T02:00:13.908Z · LW(p) · GW(p)

In that case, arguments that cryonics is intrinsically the better choice become much more dependent on specific estimates of utility and probability.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-27T09:42:44.220Z · LW(p) · GW(p)

In that case, arguments that cryonics is intrinsically the better choice become much more dependent on specific estimates of utility and probability.

And so they should.

comment by Roko · 2010-05-26T11:59:44.131Z · LW(p) · GW(p)

It would be interesting to see a more thorough analysis of whether the "rational" objections to cryo actually work.

For example, the idea that money is better spent donated to some x-risk org than to your own preservation deserves closer scrutiny. Consider that cryo is cheap ($1 a day) for the young, and that getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned.

Replies from: timtyler, Will_Newsome, steven0461
comment by timtyler · 2010-05-26T13:46:45.774Z · LW(p) · GW(p)

Most people already have a reason to care about the future - since it contains their relatives and descendants - and those are among the things that they say they care about.

If you are totally sterile - and have no living relatives - cryonics might seem like a reasonable way of perpetuating your essence - but for most others, there are more conventional options.

Replies from: Roko
comment by Roko · 2010-05-26T18:16:41.750Z · LW(p) · GW(p)

Most people already have a reason to care about the future - since it contains their relatives and descendants - and those are among the things that they say they care about.

Interest rates over the past 20 years have been about 7%, implying that people's half-life of concern for the future is only about 15 years.

I think the reason people say they care about their children's future, while actual interest rates set a concern half-life of 15 years, is that people's far-mode verbalizations do not govern their behavior that much.

Cryo would give people a strong selfish interest in the future, and since psychological time between freezing and revival is zero, discount rates wouldn't hurt so much.

Let me throw out the figure of 100 years as the kind of timescale of concern that's required.
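
The arithmetic behind treating an interest rate as a "half-life of concern" can be made explicit. A minimal sketch, assuming simple exponential discounting (the exact half-life naturally depends on which real rate you think is the right one):

```python
import math

def concern_half_life(annual_rate: float) -> float:
    """Years until a future payoff is discounted to half its present value,
    assuming exponential discounting at the given annual rate."""
    return math.log(2) / math.log(1 + annual_rate)

for r in (0.03, 0.05, 0.07):
    print(f"{r:.0%} -> half-life of concern ~{concern_half_life(r):.1f} years")
# 3% -> ~23.4 years, 5% -> ~14.2 years, 7% -> ~10.2 years
```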

Replies from: taw, timtyler
comment by taw · 2010-05-26T22:28:01.544Z · LW(p) · GW(p)

Interest rates over the past 20 years have been about 7%, implying that people's half-life of concern for the future is only about 15 years.

This is plain wrong. Most of that rate is inflation premium (the inflation premium you need to pay is higher than actual inflation because you also bear the entire risk if inflation comes in higher than predicted, and it cannot really come in lower than predicted - it's not normally distributed).

Inflation-adjusted US treasury bonds have had rates of about 1.68% a year over the last 12 years, and never really got much higher than 3%.

For most interest rates, like the UK ones you quote, there's non-negligible currency-exchange risk and default risk in addition to all that.

Replies from: Vladimir_M, Roko
comment by Vladimir_M · 2010-05-26T23:04:28.183Z · LW(p) · GW(p)

taw:

Inflation-adjusted US treasury bonds have had rates of about 1.68% a year over the last 12 years, and never really got much higher than 3%.

Not to mention that even these figures are suspect. There is no single obvious or objectively correct way to calculate the numbers for inflation-adjustment, and the methods actually used are by no means clear, transparent, and free from political pressures. Ultimately, over a longer period of time, these numbers have little to no coherent meaning in any case.

comment by Roko · 2010-05-26T23:44:25.549Z · LW(p) · GW(p)

It is true that you have to adjust for inflation. 1.68% seems low to me. Remember that those bonds may sell at less than their face value, muddying the calculation.

This article quotes 7% above inflation for equity.

Replies from: taw
comment by taw · 2010-05-27T18:36:56.089Z · LW(p) · GW(p)

It seems low, but it's correct. Risk-free interest rates are very, very low.

Individual stocks carry very high risk, so that is nowhere near a correct calculation.

And even if you want to invest in an S&P index - notice the date: 2007. This is a typical survivorship-bias article from that time. In many countries stock markets crashed hard and failed to rise for decades - not just tiny countries, but huge economies like Japan too. And by 2010 the same is true of the United States (and it would be even worse if it weren't for de facto massive taxpayer subsidies).

Here's Wikipedia:

  • Empirically, over the past 40 years (1969–2009), there has been no significant equity premium in (US) stocks.

This wasn't true back in 2007.

Replies from: Roko, Roko
comment by Roko · 2010-05-27T19:27:40.326Z · LW(p) · GW(p)

Actually, yes, there is such a web app.

It comes out at a rate of 4.79% PA if you reinvest dividends, and 1.6% if you don't, after adjustment for inflation. If you're aiming to save efficiently for the future, you would reinvest dividends.

1.0479^41 ≈ 6.81

So your discount factor over 41 years is pretty huge. For 82 years that would be a factor of 46, and for 100 years that's a factor of 107.
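
To make the compounding explicit, here is a sketch that simply takes the 4.79% real total-return figure at face value:

```python
def growth_factor(real_rate: float, years: int) -> float:
    """Cumulative real growth factor from reinvesting at a constant real rate."""
    return (1 + real_rate) ** years

for years in (41, 82, 100):
    print(years, round(growth_factor(0.0479, years), 1))
# 41 -> 6.8, 82 -> 46.4, 100 -> 107.7
```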

Replies from: taw
comment by taw · 2010-05-27T19:45:57.321Z · LW(p) · GW(p)

This is all survivorship bias and nothing more; many other stock exchanges crashed completely or had much lower returns, like the Japanese one.

Replies from: SilasBarta, Roko
comment by SilasBarta · 2010-05-27T19:53:18.451Z · LW(p) · GW(p)

And I should add that markets are wickedly anti-inductive. With all the people being prodded into the stock market by tax policies and "finance gurus" ... yeah, the risk is being underpriced.

Also, there needs to be a big shift, probably involving a crisis, before risk-free rates actually make up for taxation, inflation, and sovereign risk. After that happens, I'll be confident the return on capital will be reasonable again.

comment by Roko · 2010-05-27T20:40:37.259Z · LW(p) · GW(p)

This is all survivorship bias and nothing more; many other stock exchanges crashed completely

I presume that you mean cases where some violent upheaval caused property-rights violations, followed by the closing of a relevant exchange?

I agree that this is a significant problem. What is the real survival ratio for exchanges between 1870 and 2010?

However, let us return to the original point: that cryo would make people invest more in the future. Suppose I get a cryo contract and expect to be reanimated 300 years hence. Suppose that I am considering whether to invest in stocks, and I expect 33% of major exchanges to actually return my money if I am reanimated. I split my money between, say, 10 exchanges, and in those that survive, I get 1.05^300 or 2,200,000 times more than I invested - amply making up for exchanges that don't survive.
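
A minimal sketch of that expected-value calculation, using the hypothetical numbers from the comment above (a 33% exchange survival rate and a 5% real return):

```python
def expected_multiple(real_rate: float, years: int, survival_prob: float) -> float:
    """Expected multiple on money spread across exchanges when only a fraction
    of them survive and surviving holdings compound at the given real rate."""
    return survival_prob * (1 + real_rate) ** years

print(f"{expected_multiple(0.05, 300, 1/3):,.0f}x")  # roughly 758,000x
```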

comment by Roko · 2010-05-27T19:21:31.231Z · LW(p) · GW(p)

So are you saying that the S&P returned 1.0168^41 times more than you invested, if you invested in 1969 and pulled out today? Is there a web app that we can test that on?

comment by timtyler · 2010-05-26T20:53:38.700Z · LW(p) · GW(p)

Levels of concern about the future vary between individuals - whereas interest rates are a property of society. Surely these things are not connected!

High interest rates do not reflect a lack of concern about the future. They just illustrate how much money your government is printing. Provided you don't invest in that currency, that matters rather little.

I agree that cryonics would make people care about the future more. Though IMO most of the problems with lack of planning are more to do with the shortcomings of modern political systems than they are to do with voters not caring about the future.

The problem with cryonics is the cost. You might care more, but you can influence less - because you no longer have the cryonics money. If you can't think of any more worthwhile things to spend your money on, go for it.

Replies from: Larks
comment by Larks · 2010-05-26T21:12:28.343Z · LW(p) · GW(p)

Real interest rates should be fairly constant (nominal interest rates will of course change with inflation), and reflect the price the marginal saver needs to postpone consumption, and the highest price the marginal borrower will pay to bring his consumption forward. If everyone had very low discount rates, you wouldn't need to offer savers so much, and borrowers would consider the costs more prohibitive, so rates would fall.

Replies from: taw, timtyler
comment by taw · 2010-05-26T22:30:47.400Z · LW(p) · GW(p)

Real interest rates should be fairly constant

They're nothing of the kind. See this. Inflation-adjusted, as-risk-free-as-it-gets rates vary from 0.2%/year to 3.4%/year.

This isn't about discount rates; it's about the supply and demand of investment money, and the financial sector essentially erases any connection with people's discount rates.

Replies from: Larks
comment by Larks · 2010-05-26T22:46:33.475Z · LW(p) · GW(p)

Point taken; I concede. Evidently saving/borrowing rates are sticky, or low enough not to be relevant.

comment by timtyler · 2010-05-26T21:47:00.585Z · LW(p) · GW(p)

Perhaps decide to use gold, then. Your society's interest rate then becomes irrelevant to you - and you are free to care about the future as much - or as little - as you like.

Interest rates just do not reflect people's level of concern about the future. Your money might be worth a lot less in 50 years - but the same is not necessarily true of your investments. So - despite all the discussion of interest rates - the topic is an irrelevant digression, apparently introduced through fallacious reasoning.

comment by Will_Newsome · 2010-05-26T12:13:32.836Z · LW(p) · GW(p)

Good point: mainstream cryonics would be a big step towards raising the sanity waterline, which may end up being a prerequisite to reducing various kinds of existential risk. However, I think that the causal relationship goes the other way, and that raising the sanity waterline comes first, and cryonics second: if you can get the average person across the inferential distance to seeing cryonics as reasonable, you can most likely get them across the inferential distance to seeing existential risk as really flippin' important. (I should take the advice of my own post here and note that I am sure there are really strong arguments against the idea that working to reduce existential risk is important, or at least against having much certainty that reducing existential risk will have been the correct thing to do upon reflection, at the very least on a personal level.) Nonetheless, I agree further analysis is necessary, though difficult.

Replies from: Roko
comment by Roko · 2010-05-26T13:15:58.609Z · LW(p) · GW(p)

that raising the sanity waterline comes first, and cryonics second:

But how do we know that's the way it will pan out? Raising the sanity waterline is HARD. SUPER-DUPER HARD. Like, you probably couldn't make much of a dent even if you had a cool $10 million in your pocket.

An alternative scenario is that cryonics gets popular without any "increases in general sanity", for example because the LW/OB communities give the cryo companies a huge increase in sales and a larger flow of philanthropy, which allows them to employ a marketing consultancy to market cryonics to exactly the demographic that is already signing up - so that additional signups come not from increased population sanity, but just from marketing cryo so that 20% of those who are sane enough to sign up hear about it, rather than 1%.

I claim that your $10M would be able to increase cryo signup by a factor of 20, but probably not dent sanity.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T13:41:34.644Z · LW(p) · GW(p)

Your original point was that "getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned", in which case your above comment is interesting, but tangential to what we were discussing previously. I agree that getting people to sign up for cryonics will almost assuredly get more people to sign up for cryonics (barring legal issues becoming more salient and thus potentially more restrictive as cryonics becomes more popular, or bad stories publicized whether true or false), but "because then the public at large would have a reason to care about the future" does not seem to be a strong reason to expect existential risk reduction as a result (one counterargument being the one raised by timtyler in this thread). You have to connect cryonics with existential risk reduction, and the key isn't futurism, but strong epistemic rationality. Sure, you could also get interest sparked via memetics, but I don't think the most cost-effective way to do so would be investment in cryonics as opposed to, say, billboards proclaiming 'Existential risks are even more bad than marijuana: talk to your kids.' Again, my intuitions are totally uncertain about this point, but it seems to me that the option a) 10 million dollars -> cryonics investment -> increased awareness in futurism -> increased awareness in existential risk reduction, is most likely inferior to option b) 10 million dollars -> any other memetic strategy -> increased awareness in existential risk reduction.

Replies from: Roko
comment by Roko · 2010-05-26T14:15:50.004Z · LW(p) · GW(p)

a) 10 million dollars -> cryonics investment -> increased awareness in futurism -> increased awareness in existential risk reduction, is most likely inferior to option b) 10 million dollars -> any other memetic strategy -> increased awareness in existential risk reduction.

It is true that there are probably better ways out there to reduce x-risk than via cryo, i.e. the first $10M you have should go into other stuff, so the argument would hold for a strict altruist not to get cryo.

However, the fact that cryo is both cheap and useful in and of itself means that the degree of self-sacrificingness required to decide against it is pretty high.

For example, your $1 a day on cryo provides the following benefits to x-risk:

  • potentially increased personal commitment from you
  • network effects causing others to be more likely to sign up and therefore not die and potentially be more concerned and committed
  • revenue and increased numbers/credibility for cryo companies
  • potentially increased rationality because you expect more to actually experience the future

Now you could sacrifice your $1 a day and get more x-risk reduction by spending it on direct x-risk efforts (in addition to the existing time and money you are putting that way), BUT if you're going to do that, then why not sacrifice another marginal $1 a day of food/entertainment money?

Benton House has not yet reached the level of eating the very cheapest possible food and doesn't yet spend $0 per person per day on luxuries.

And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day?

I think that there is another explanation: people are using extreme altruism as a cover for their own irrationality, and if a situation came up where they could either contribute net +$9000 (cost of cryo) to x-risk right now but die OR not die, they would choose to not die. In fact, I believe that a LW commenter has worked out how to sacrifice your life for a gain of a whole $1,000,000 to x-risk using life insurance and suicide. As far as I know, people who don't sign up for cryo for altruistic reasons are not exactly flocking to this option.

(EDIT: I'll note that this comment does constitute a changing argument in response to the fact that Will's counterargument quoted at the top defeats the argument I was pursuing before)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T14:25:17.234Z · LW(p) · GW(p)

And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day?

I think the correct question here is instead "Do you really value a very, very small chance of your having been signed up for cryonics leading to huge changes in your expected utility in some distant future across unfathomable multiverses more than an assured small amount of utility 30 minutes from now?" I do not think the answer is obvious, but I lean towards avoiding long-term commitments until I better understand the issues. Yes, a very, very, very tiny amount of me is dying every day due to freak kitchen accidents, but that much of my measure is so seemingly negligible that I don't feel too horrible trading it off for more thinking time and half a Hershey's bar.

The reasons you gave for spending a dollar a day on cryonics seem perfectly reasonable and I have spent a considerable amount of time thinking about them. Nonetheless, I have yet to be convinced that I would want to sign up for cryonics as anything more than a credible signal of extreme rationality. From a purely intuitive standpoint this seems justified. I'm 18 years old and the singularity seems near. I have measure to burn.

Replies from: Roko, kpreid
comment by Roko · 2010-05-26T14:28:57.129Z · LW(p) · GW(p)

very, very small chance

Can you give me a number? Maybe we disagree because of differing probability estimates that cryo will save you.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T14:40:30.591Z · LW(p) · GW(p)

Perhaps. I think a singularity is more likely to occur before I die (in most universes, anyway). With advancing life extension technology, good genes, and a disposition to be reasonably careful with my life, I plan on living pretty much indefinitely. I doubt cryonics has any effect at all on these universes for me personally. Beyond that, I do not have a strong sense of identity, and my preferences are not mostly about personal gain, and so universes where I do die do not seem horribly tragic, especially if I can write down a list of my values for future generations (or a future FAI) to consider and do with as they wish.

So basically... (far) less than a 1% chance of saving 'me', but even then, I don't have strong preferences for being saved. I think that the technologies are totally feasible, and I am less pessimistic than others about whether Alcor and CI will survive the next few decades and do well. However, I think larger considerations like life extension technology, uFAI or FAI, MNT, bioweaponry, et cetera, simply render the cryopreservation / no cryopreservation question both difficult and insignificant for me personally. (Again, I'm 18; these arguments do not hold equally well for people who are older than me.)

Replies from: Airedale, Roko
comment by Airedale · 2010-05-26T19:16:48.262Z · LW(p) · GW(p)

a disposition to be reasonably careful with my life

When I read this, two images popped unbidden into my mind: 1) you wanting to walk over the not-that-stable log over the stream with the jagged rocks in it and 2) you wanting to climb out on the ledge at Benton House to get the ball. I suppose one person's "reasonably careful" is another person's "needlessly risky."

Replies from: Will_Newsome, Will_Newsome
comment by Will_Newsome · 2010-05-27T22:40:28.353Z · LW(p) · GW(p)

This comment inspired me to draft a post about how much quantum measure is lost doing various things, so that people can more easily see whether or not a certain activity (like driving to the store for food once a week instead of having it delivered) is 'worth it'.

comment by Will_Newsome · 2010-05-26T19:47:07.905Z · LW(p) · GW(p)

Ha, good times. :) But being careful with one's life and being careful with one's limb are two very different things. I may be stupid, but I'm not stupid.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-05-27T02:59:58.001Z · LW(p) · GW(p)

Unless you're wearing a helmet, moderate falls that 99+% of the time just result in a few sprains or breaks may, <1% of the time, cause permanent brain damage (mostly I'm thinking of hard objects' edges striking the head). Maybe my estimate is skewed by fictional evidence.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T03:06:26.737Z · LW(p) · GW(p)

So a 1 in 100 chance of falling and a roughly 1 in 1,000 chance of brain damage conditional on that (I'd be really surprised if it was higher than that; biased reporting and whatnot) gives about a 1 in 100,000 chance of severe brain damage. I have put myself in such situations roughly... 10 times in my life. I think car accidents when constantly driving between SFO and Silicon Valley are a more likely cause of death, but I don't have the statistics on hand.
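
The cumulative version of that estimate, for the roughly ten exposures mentioned (a sketch; the per-event figures are the guesses above):

```python
p_per_event = (1 / 100) * (1 / 1000)   # chance of a fall times chance of brain damage given a fall
exposures = 10
p_cumulative = 1 - (1 - p_per_event) ** exposures
print(p_per_event, round(p_cumulative, 6))   # 1e-05 per event, ~0.0001 (about 1 in 10,000) overall
```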

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-05-27T08:41:23.434Z · LW(p) · GW(p)

Good point about car risks. Sadly, I was considerably less cautious when I was younger - when I had more to lose. I imagine this is often the case.

comment by Roko · 2010-05-26T14:49:36.173Z · LW(p) · GW(p)

(far) less than a 1% chance

How much less? 0? 10^-1000?

[It is perfectly OK for you to endorse the position of not caring much about yourself whilst still acknowledging the objective facts about cryo, even if they seem to imply that cryo could be used relatively effectively to save you ... facts != values ...]

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T14:56:14.655Z · LW(p) · GW(p)

Hm, thanks for making me really think about it, and not letting me slide by without doing the calculation. It seems to me, given my preferences, about which I am not logically omniscient, and given my structural uncertainty around these issues, of which there is much, that my 50 percent confidence interval runs from .00001% (1 in 10 million) to .01% (1 in ten thousand).

Replies from: Roko, Vladimir_Nesov
comment by Roko · 2010-05-26T14:58:46.185Z · LW(p) · GW(p)

shouldn't probabilities just be numbers?

i.e. just integrate over the probability distribution of what you think the probability is.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T15:18:25.942Z · LW(p) · GW(p)

Oh, should they? I'm the first to admit that I sorely lack in knowledge of probability theory. I thought it was better to give a distribution here to indicate my level of uncertainty as well as my best guess (precision as well as accuracy).

Replies from: orthonormal, Roko
comment by orthonormal · 2010-05-27T01:23:42.880Z · LW(p) · GW(p)

Contra Roko, it's OK for a Bayesian to talk in terms of a probability distribution on the probability of an event. (However, Roko is right that in decision problems, the mean value of that probability distribution is quite an important thing.)

comment by Roko · 2010-05-26T15:22:19.089Z · LW(p) · GW(p)

This would be true if you were estimating the value of a real-world parameter like the length of a rod. However, for a probability, you just give a single number, which is representative of the odds you would bet at. If you have several conflicting intuitions about what that number should be, form a weighted average of them, weighted by how much you trust each intuition or method for getting the number.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T15:33:08.384Z · LW(p) · GW(p)

Ahhh, makes sense, thanks. In that case I'd put my best guess at around 1 in a million.

Replies from: Roko
comment by Roko · 2010-05-26T15:41:31.719Z · LW(p) · GW(p)

For small probabilities, the weighted average calculation is dominated by the high-probability possibilities - if your 50% confidence interval went up to 1 in 10,000, then 25% of the probability mass is to the right of 1 in 10,000, so you can't say anything less than (0.75) × 0 + (0.25) × (1 in 10,000) = 1 in 40,000.
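
A sketch of the same lower-bound calculation in code (the two bins are the ones Roko describes; an actual estimate would average over the full distribution):

```python
# Turning a distribution over "the probability" into a single betting number: take its mean.
# Even assigning zero to everything below the 75th percentile, the 25% of probability mass
# at or above 1 in 10,000 already forces the mean up to at least 1 in 40,000:
lower_bound = 0.75 * 0 + 0.25 * (1 / 10_000)
print(lower_bound)       # 2.5e-05
print(1 / lower_bound)   # 40000.0
```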

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T15:46:32.662Z · LW(p) · GW(p)

I wasn't using a normal distribution in my original formulation, though: the mean of the picture in my head was around 1 in a million with a longer tail to the right (towards 100%) and a shorter tail to the left (towards 0%) (on a log scale?). It could be that I was doing something stupid by making one tail longer than the other?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-05-27T03:03:13.435Z · LW(p) · GW(p)

It would only be suspicious if your resulting probability were a sum of very many independent, similarly probable alternatives (such sums do look normal even if the individual alternatives aren't).

comment by Vladimir_Nesov · 2010-05-26T17:16:47.585Z · LW(p) · GW(p)

It seems to me, given my preferences, about which I am not logically omniscient, [...]

I'd say your preference can't possibly influence the probability of this event. To clear the air, can you explain how taking your preference into account influences the estimate? Better, how does the estimate break down over the different defeaters (events making the positive outcome impossible)?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T17:22:27.636Z · LW(p) · GW(p)

Sorry, I should have been more clear: my preferences influence the possible interpretations of the word 'save'. I wouldn't consider surviving indefinitely but without my preferences being systematically fulfilled 'saved', for instance; more like damned.

comment by kpreid · 2010-05-26T21:13:33.044Z · LW(p) · GW(p)

I have measure to burn.

I like this turn of phrase.

comment by steven0461 · 2010-05-26T19:23:50.753Z · LW(p) · GW(p)

cryo is cheap ($1 a day) for the young

It's cheap because you will not actually die in the near future. ETA: though it sounds as if you're paying mostly to be allowed to keep having cheap life insurance in the future?

comment by dripgrind · 2010-05-27T08:41:58.264Z · LW(p) · GW(p)

Here's another possible objection to cryonics:

If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.

"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:

Suppose the Singularity develops from an AI that was initially based on a human upload. When it becomes clear that there is a real possibility of uploading and gaining immortality in some sense, many people will compete for upload slots. The winners will likely be the rich and powerful. Billionaires tend not to be known for their public-spirited natures - in general, they lobby to reorder society for their benefit and to the detriment of the rest of us. So, the core of the AI is likely to be someone ruthless and maybe even frankly sociopathic.

Imagine being revived into a world controlled by a massively overclocked Dick Cheney or Vladimir Putin or Marquis De Sade. You might well envy the dead.

Unless you are certain that no Singularity will occur before cryonics patients can be revived, or that Friendly AI will be developed and enforced before the Singularity, cryonics might be a ticket to Hell.

Replies from: humpolec
comment by humpolec · 2010-05-27T10:23:42.565Z · LW(p) · GW(p)

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

Replies from: dripgrind, wedrifid
comment by dripgrind · 2010-05-27T11:01:26.604Z · LW(p) · GW(p)

An unFriendly AI doesn't necessarily care about human values - but I can't see why, if it was based on human neural architecture, it might not exhibit good old-fashioned human values like empathy - or sadism.

I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

Replies from: humpolec
comment by humpolec · 2010-05-27T17:30:59.649Z · LW(p) · GW(p)

Agreed, an AI based on a human upload gives no guarantee about its values... actually, right now I have no idea how the Friendliness of such an AI could be ensured.

Why do you think that an evil AI would be harder to achieve than a Friendly one?

Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of friendliness than AI wanting to torture humans forever.

I have to admit I haven't thought much about this, though.

Replies from: Baughn
comment by Baughn · 2010-05-28T12:20:31.388Z · LW(p) · GW(p)

Paperclipping is a relatively simple failure. The difference between paperclipping and evil is mainly just that - a matter of complexity. Evil is complex, turning the universe into tuna is decidedly not.

On the scale of friendliness, I ironically see an "evil" failure (meaning, among other things, that we're still in some sense around to notice it being evil) becoming more likely as friendliness increases. As we try to implement our own values, failures become more complex, and less likely to be total - thus letting us stick around to see them.

comment by wedrifid · 2012-06-02T03:21:40.922Z · LW(p) · GW(p)

What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?

"Where in this code do I need to put this "-ve" sign again?"

The two are approximately equal in difficulty, assuming equivalent flexibility in how "Evil" or "Friendly" it would have to be to qualify for the definition.

comment by steven0461 · 2010-05-26T19:15:39.843Z · LW(p) · GW(p)

Good post. People focus only on the monetary cost of cryonics, but my impression is there are also substantial costs from hassle and perceived weirdness.

Replies from: Torben
comment by Torben · 2010-05-27T09:28:52.762Z · LW(p) · GW(p)

Really? I may be lucky, but I have quite the opposite experience. Of course, I haven't signed up due to my place of residence but I have mentioned it to friends and family and they don't seem to think much about it.

comment by Vladimir_Nesov · 2010-05-26T11:08:00.252Z · LW(p) · GW(p)

One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

Harder or not, which is actually right? This is not about signaling one's ability to do the harder thing.

The reasons you listed are not ones moving most people to not sign up for cryonics. Most people, as you mention at the beginning, simply don't take the possibility seriously enough to even consider it in detail.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T11:30:54.269Z · LW(p) · GW(p)

I agree, but there exists a non-negligible number of people who have not-obviously-illegitimate reasons for not being signed up: not most of the people in the world, and maybe not most of Less Wrong, but at least a sizable portion of Less Wrongers (and most of the people I interact with on a daily basis at SIAI). It seems that somewhere along the line people started to misinterpret Eliezer (or something) and group the reasonable and unreasonable non-cryonauts together.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-26T11:39:30.277Z · LW(p) · GW(p)

Then state the scope of the claim explicitly in the post.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T11:52:17.163Z · LW(p) · GW(p)

Bolded and italicized; thanks for the criticism, especially as this is my first post on Less Wrong.

comment by Kevin · 2010-05-26T10:39:09.726Z · LW(p) · GW(p)

I think cryonics is a great idea and should be part of health care. However, $50,000 is a lot of money to me and I'm reluctant to spend money on life insurance, which except in the case of cryonics is almost always a bad bet.

I would like my brain to be vitrified if I am dead, but I would prefer not to pay $50,000 for cryonics in the universes where I live forever, die to existential catastrophe, or where cryonics just doesn't work.

What if I specify in my (currently non-existent) cryonics optimized living will that up to $100,000 from my estate is to be used to pay for cryonics? It's not nearly as secure as a real cryonics contract, but it has the benefit of not costing $50,000.

Replies from: khafra, Will_Newsome, Blueberry
comment by khafra · 2010-05-26T14:39:40.561Z · LW(p) · GW(p)

Alcor recommends not funding out of your estate, because in the current legal system any living person with the slightest claim will take precedence over the decedent's wishes. Even if the money eventually goes to Alcor, it'll be after 8 months in probate court; and your grey matter's unlikely to be in very good condition for preservation at that point.

Replies from: Kevin
comment by Kevin · 2010-05-26T22:41:39.722Z · LW(p) · GW(p)

I know they don't recommend this, but I suspect a sufficiently good will and trust setup would have a significant probability of working, and the legal precedent set by that would be beneficial to other potential cryonauts.

comment by Will_Newsome · 2010-05-26T10:48:12.410Z · LW(p) · GW(p)

This sounds like a great practical plan if you can pull it off, and, given your values, possibly an obviously correct course of action. However, it does not answer the question of whether being vitrified after death will be seen as correct upon reflection. The distinction here is important.

comment by Blueberry · 2010-05-26T14:14:12.357Z · LW(p) · GW(p)

I'm not sure if cryonics organizations would support that option, as it would be easier for potential opponents to defeat. Also, it wouldn't protect you against accidental death, if I'm understanding correctly, only against an illness that incapacitated you.

comment by thezeus18 · 2010-05-27T06:17:21.683Z · LW(p) · GW(p)

I'm surprised that you didn't bring up what I find to be a fairly obvious problem with cryonics: what if nobody feels like thawing you out? Of course, not having followed this dialogue I'm probably missing some equally obvious counter to this argument.

Replies from: Bo102010
comment by Bo102010 · 2010-05-27T07:25:36.427Z · LW(p) · GW(p)

If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down.

It sounds like Pascal's Wager (small chance at success, potentially infinite payoff), but it doesn't fail for the same reasons Pascal's Wager does (Pascal's gambit for one religion would work just as well for any other one) - discussed here a while back.

Replies from: timtyler
comment by timtyler · 2010-05-30T11:25:53.661Z · LW(p) · GW(p)

Re: "If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down."

That's what advocates usually say. It assumes that the goal of organisms is not to die - which is not a biologically realistic assumption.

comment by cjb · 2010-05-27T03:38:05.653Z · LW(p) · GW(p)

Hi, I'm pretty new here too. I hope I'm not repeating an old argument, but suspect I am; feel free to answer with a pointer instead of a direct rebuttal.

I'm surprised that no-one's mentioned the cost of cryonics in relation to the reduction in net human suffering that could come from spending the money on poverty relief instead. For (say) USD $50k, I could save around 100 lives ($500/life is a current rough estimate of the cost of lifesaving aid for people in extreme poverty), or could dramatically increase the quality of life of 1000 people (for example, cataract operations to restore sight to a blind person cost around $50).

How can we say it's moral to value such a long shot at elongating my own life as being worth more than 100-1000 lives of other humans who happened to do worse in the birth wealth lottery than I did?

Replies from: knb, nazgulnarsil, Will_Newsome
comment by knb · 2010-05-27T08:23:00.944Z · LW(p) · GW(p)

This is also an argument against going to movies, buying coffee, owning a car, or having a child. In fact, this is an argument against doing anything beyond living at the absolute minimum threshold of life, while donating the rest of your income to charity.

How can you say it's moral to value your own comfort as being worth more than 100-1000 other humans? They just did worse at the birth lottery, right?

Replies from: cjb
comment by cjb · 2010-05-28T01:49:16.449Z · LW(p) · GW(p)

It's not really an argument against those other things, although I do indeed try to avoid some luxuries, or to match the amount I spend on them with a donation to an effective aid organization.

What I think you've missed is that many of the items you mention are essential for me to continue having and being motivated in a job that pays me well -- well enough to make donations to aid organizations that accomplish far more than I could if I just took a plane to a place of extreme poverty and attempted to help using my own skills directly.

If there's a better way to help alleviate poverty than donating a percentage of my developed-world salary to effective charities every year, I haven't found it yet.

Replies from: knb
comment by knb · 2010-05-28T02:51:06.604Z · LW(p) · GW(p)

Ah, I see. So when you spend money on yourself, it's just to motivate yourself for more charitable labor. But when those weird cryonauts spend money on themselves, they're being selfish!

How wonderful to be you.

Replies from: cjb
comment by cjb · 2010-05-28T15:02:49.361Z · LW(p) · GW(p)

Ah, I see. So when you spend money on yourself, it's just to motivate yourself for more charitable labor. But when those weird cryonauts spend money on themselves, they're being selfish!

No, I'm arguing that it would be selfish for me to spend money on myself if that money went to cryonics, where selfishness is defined as (a) spending an amount of money that could relieve a great amount of suffering, (b) on something that doesn't relate to retaining my ability to get a paycheck.

One weakness in this argument is that there could be a person who is so fearful of death that they can't live effectively without the comfort that signing up for cryonics gives them. In that circumstance, I couldn't use this criticism.

Replies from: Blueberry
comment by Blueberry · 2010-05-28T15:10:46.458Z · LW(p) · GW(p)

Cryonics is comparable to CPR or other emergency medical care, in that it gives you extra life after you might otherwise die. Of course it's selfish, in the sense that you're taking care of yourself first, to spend money on your medical care, but cryonics does relate to your ability to get a paycheck (after your revival).

To be consistent, are you reducing your medical expenses in other ways?

Replies from: cjb
comment by cjb · 2010-05-28T16:03:04.127Z · LW(p) · GW(p)

Cryonics is comparable to CPR or other emergency medical care

.. at a probability of (for the sake of argument) one in a million.

Do I participate in other examples of medical care that might save my life with probability one in a million (even if they don't cost any money)? No, not that I can think of.

Replies from: Morendil
comment by Morendil · 2010-05-28T16:06:39.786Z · LW(p) · GW(p)

Did you ever get any vaccination shots? Some of these are for diseases that have become quite rare.

Replies from: cjb
comment by cjb · 2010-05-28T16:37:08.431Z · LW(p) · GW(p)

That's true. I didn't spend my own money on them (I grew up in the UK), and they didn't cost very much in comparison, but I agree that it's a good example of a medical long shot.

Replies from: Morendil
comment by Morendil · 2010-05-28T16:50:00.551Z · LW(p) · GW(p)

Yep, the cost and especially the administrative hassles are, in comparison to the probability considerations, closer to the true reason I (for instance) am not signed up yet, in spite of seeing it as my best shot at ensuring a long life.

To be fair, vaccination is also a long shot in terms of frequency, but definitely proven to work with close to certainty on any given patient. Cryonics is a long shot intrinsically.

But it might not be if more was invested in researching it, and more might be invested if cryonics was already used on a precautionary basis in situations where it would also save money (e.g. death row inmates and terminal patients) and risk nothing of significance (since no better outcome than death can be expected).

In that sense it seems obviously rational to advocate cryonics as a method of assisted suicide, and only the "weirdness factor", religious-moralistic hangups and legislative inertia can explain the reluctance to adopt it more broadly.

comment by nazgulnarsil · 2010-05-27T04:46:54.161Z · LW(p) · GW(p)

like this: I value my subjective experience more than even hundreds of thousands of other similar-but-not-me subjective experiences.

additionally, your argument applies to generic goods you choose over saving people, not just cryonics.

Replies from: cjb
comment by cjb · 2010-05-28T01:27:26.688Z · LW(p) · GW(p)

Well, sure, but I asked how it could be moral, not how you can evade the question by deciding that you don't have any responsibilities to anyone.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-06-08T18:42:25.140Z · LW(p) · GW(p)

what are morals? I have preferences. sometimes they coincide with other people's preferences and sometimes they conflict. when they conflict in socially unacceptable ways I seek ways to hide or downplay them.

comment by Will_Newsome · 2010-05-27T23:03:51.468Z · LW(p) · GW(p)

One can expect to live a life at least 100-1000 times longer than those other poor people, or live a life that has at least 100-1000 times as much positive utility, as well as the points in the other comments.

Although this argument is a decent one for some people, it's much more often the product of motivated cognition than carefully looking at the issues, so I did not include it in the post.

Replies from: cjb
comment by cjb · 2010-05-28T01:36:17.063Z · LW(p) · GW(p)

Thanks for the reply.

One can expect to live a life at least 100-1000 times longer than those other poor people

.. when you say "can expect to", what do you mean? Do you mean "it is extremely likely that.."? That's the problem. If it was a sure deal, it would be logical to spend the money on it -- but in fact it's extremely uncertain, whereas the $50 being asked for by a group like Aravind Eye Hospital to directly fund a cataract operation is (close to) relieving significant suffering with a probability of 1.

comment by Unnamed · 2010-05-26T17:24:48.293Z · LW(p) · GW(p)

Another argument against cryonics is just that it's relatively unlikely to work (= lead to your happy revival) since it requires several things to go right. Robin's net present value calculation of the expected benefits of cryonic preservation isn't all that different from the cost of cryonics. With slightly different estimates for some of the numbers, it would be easy to end up with an expected benefit that's less than the cost.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-27T22:55:38.182Z · LW(p) · GW(p)

Given his future predictions, maybe, but the future predictions of a lot of smart people (especially singularitarians) can lead to drastically different expected values, which often gives the proposition of signing up for cryonics a Pascalian flavor.

comment by Vladimir_Nesov · 2010-05-26T10:59:21.844Z · LW(p) · GW(p)

et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.

This argument from confusion doesn't shift the decision either way, so it could as well be an argument for signing up as against signing up; similarly for immediate suicide, or against that. On net, this argument doesn't move anything, because there is no default to fall back to once you get more confused.

Replies from: steven0461, Will_Newsome
comment by steven0461 · 2010-05-26T18:59:05.495Z · LW(p) · GW(p)

I'd say the argument from confusion argues more strongly against benefits that are more inferential steps away. E.g., maybe it supports eating ice cream over cryonics but not necessarily existential risk reduction over cryonics.

comment by Will_Newsome · 2010-05-26T11:02:41.770Z · LW(p) · GW(p)

Correct: it is simply an argument against certainty in either direction. It is the certainty that I find worrisome, not the conclusion. Now that I look back, I think I failed to duly emphasize the symmetry of my arguments.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-26T11:18:26.035Z · LW(p) · GW(p)

And which way is certainty? There is no baseline for beliefs, no magical "50%". When a given belief diminishes, its opposite grows in strength. At which point are they in balance? Is the "normal" level of belief the same for everything? For Russell's teapot? For "the sky is blue"?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T11:41:31.002Z · LW(p) · GW(p)

Here I show my ignorance. I thought that I was describing the flattening of a probability distribution for both the propositions 'I will reflectively endorse that signing up for cryonics was the best thing to do' and 'I will reflectively endorse that not signing up for cryonics was the best thing to do'. (This is very different from the binary distinction between 'Signing up for cryonics is the current best course of action' and 'Not signing up for cryonics is the current best course of action'.) You seem to be saying that this is meaningless because I am not flattening the distributions relative to anything else, whereas I have the intuition that I should be flattening them towards the shape of some ignorance prior. (I would like to point out that I am using technical terms I do not fully understand here: I am a mere novice in Bayesian probability theory, as distinct from Bayesianism.) I feel like you have made a valid point but that I am failing to see it.

Replies from: steven0461, Will_Newsome
comment by steven0461 · 2010-05-26T20:35:28.353Z · LW(p) · GW(p)

So it looks like what's going on is you have estimates for U(cryonics) and U(not cryonics), and structural confusion increases the variance for both these utilities, and Vladimir is saying this doesn't change the estimate of U(cryonics) - U(not cryonics), and you're saying it increases P(U(not cryonics) > U(cryonics)) if your estimate of U(cryonics) starts out higher, and both of you are right?
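A minimal Monte Carlo sketch of that reading, treating the structural confusion as independent noise on each utility estimate; the gap and noise levels are hypothetical:

```python
# A minimal sketch of this reading of the disagreement, treating the
# "structural confusion" as independent noise added to each utility
# estimate. The gap and the noise levels are hypothetical.

import random

def simulate(gap, sigma, n=100_000):
    """Return the average of U_cryo - U_not and the fraction of samples
    in which the sign flips, when the mean gap is `gap` and each utility
    gets independent Gaussian noise of spread `sigma`."""
    diffs = [gap + random.gauss(0, sigma) - random.gauss(0, sigma)
             for _ in range(n)]
    mean_diff = sum(diffs) / n
    p_flip = sum(d < 0 for d in diffs) / n
    return mean_diff, p_flip

random.seed(0)
for sigma in (1, 5, 20):
    mean_diff, p_flip = simulate(gap=3, sigma=sigma)
    print(f"sigma={sigma:2d}: mean(U_cryo - U_not) = {mean_diff:5.2f}, "
          f"P(U_not > U_cryo) = {p_flip:.2f}")

# The mean difference stays near 3 no matter how large sigma gets
# (Vladimir's point), while the probability that the sign flips climbs
# towards 0.5 as the confusion grows (Will's point).
```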

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T20:42:46.735Z · LW(p) · GW(p)

That seems correct to me.

comment by Will_Newsome · 2010-05-26T12:27:12.891Z · LW(p) · GW(p)

This is a try at resolving my own confusion:

Suppose there is a coin that is going to be flipped, and I have been told that it is biased towards heads, so I bet on heads. Suppose that I am then informed that it is in fact biased in a random direction: all of a sudden I should reconsider whether betting on heads is the best strategy. I might not switch to tails (there is a cost to switching, and anyway I had some evidence that heads was the direction of bias, even if that evidence later turned out to be less-than-totally-informative), but I will move my estimate of my chance of success a lot closer to 50%.

I seem to be arguing that when there's a lot of uncertainty about the model, I should assume any given P and not-P are equally likely, because this seems like the best ignorance prior for a binary event about which I have very little information. When one learns there is a lot of structural/metaphysical uncertainty around the universe, identity, et cetera, one should revise one's probabilities for any given obviously relevant P/not-P pair towards 50% each, and note that one would not be too surprised by either result being true (since one is expecting almost anything to happen).
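A minimal sketch of the coin example above, with a hypothetical bias strength, showing how learning that the direction of bias was random moves the estimate back towards 50%:

```python
# A minimal sketch of the coin example, with a hypothetical bias strength b.
# The coin's P(heads) is either 0.5 + b or 0.5 - b, depending on which way
# it is biased.

b = 0.3  # hypothetical bias strength

def p_heads(credence_bias_favours_heads):
    """My P(heads) given my credence that the bias favours heads."""
    return (credence_bias_favours_heads * (0.5 + b)
            + (1 - credence_bias_favours_heads) * (0.5 - b))

# Before: I'm told the coin is biased towards heads, so I'm fairly confident.
print(round(p_heads(0.9), 2))  # 0.74 -- betting on heads looks clearly right

# After: I learn the direction of bias was chosen at random (50/50).
print(round(p_heads(0.5), 2))  # 0.5 -- the structural uncertainty flattens
                               # my estimate back towards the ignorance prior
```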

comment by Houshalter · 2010-05-30T02:01:55.265Z · LW(p) · GW(p)

I am kind of disturbed by the idea of cryonics. Wouldn't it be theoretically possible to prove it doesn't work, assuming that it really doesn't? If the connections between neurons are lost in the process, then you have died.

Replies from: ata
comment by ata · 2010-05-30T04:02:00.995Z · LW(p) · GW(p)

I am kind of disturbed by the idea of cryonics.

Why?

Wouldn't it be theoretically possible to prove it doesn't work, assuming that it really doesn't?

If it cannot work, then we would expect to find evidence that it cannot work, yes. But it sounds like you're starting from a specific conclusion and working backwards. Why do you want to "prove it doesn't work"?

If the connections between neurons are lost in the process, then you have died.

Alcor's FAQ has some information on the evidence indicating that cryonics preserves the relevant information. That depends on the preservation process starting quickly enough, though.

Replies from: Houshalter
comment by Houshalter · 2010-05-30T13:32:53.265Z · LW(p) · GW(p)

Why do you want to "prove it doesn't work"?

Because if it doesn't, it's a waste of time.

comment by Nanani · 2010-05-28T00:54:54.937Z · LW(p) · GW(p)

Interesting post, but perhaps too much is being compressed into a single expression.

The niceness and weirdness factors of thinking about cryonics do not actually affect the correctness of cryonics itself. The correctness factor depends only on one's values and the weight of probability.

Not thinking one's own values through thoroughly enough to make an accurate evaluation is both irrational and a common failure mode. Miscalculating the probabilities is also a mistake, though perhaps more a mathematical error than a rationality error.

When these are the reasons for rejecting cryonics, then that rejection is obviously incorrect.

That said, you are quite correct to point out that differing values are not automatically a rationality failure, and it is definitely good to consider the image problem associated with the niceness issues.

Perhaps the niceness and weirdness ought to not be jumbled together with the correctness evaluation question.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-05-28T03:28:12.980Z · LW(p) · GW(p)

Perhaps the niceness and weirdness ought to not be jumbled together with the correctness evaluation question.

On niceness, good point. On weirdness, I'm not sure what you mean; if you mean "weird stuff and ontological confusion", that is uncertainty about one's values and truths.

comment by VijayKrishnan · 2010-05-27T05:46:11.656Z · LW(p) · GW(p)

I have been heavily leaning towards the anti-cryonics stance, at least for myself, given the current state of information and technology. My reasons are mostly the following.

I can see it being very plausible that somewhere along the line I would be subject to immense suffering, to which death would be a far better option, but that I would be either unable to take my life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous to a case where I am very near death and am faced with the following two options:

(a) Have my life support system turned off and die peacefully.

(b) Keep the life support system going, but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering, either due to technical glitches, which are very likely since this is a very nascent area, or due to willful malevolence.

In this case I would very likely choose (a).

Note that in addition to prolonged suffering during which I am effectively incapable of pulling the plug on myself, there is also the chance that I would be an oddity as far as future generations are concerned. Perhaps I would be made a circus or museum exhibit to entertain that generation. Our race is highly speciesist, and I would not trust future generations, with their bionic implants and so on, to necessarily consider me to be of the same species and offer me the same rights and moral consideration.

Last but not least is a point I made as a comment in response to Robin Hanson's post. Robin Hanson expressed a preference for a world filled with more people and scarce per-capita resources over a world with fewer people and significantly better living conditions. His point was that this gives many people the opportunity to "be born" who would otherwise not have come into existence, and that this was for some reason a good thing.

I couldn't care less if I had never been born. As the saying goes, I have been dead/nonexistent for billions of years and haven't suffered the slightest inconvenience. I see cryonics and a successful recovery as no different from dying and being re-born. Thus I assign virtually zero positives to being re-born, while I assign huge negatives to the first two points above. This is probably related to the sense of identity mentioned in this post.

We are evolutionarily driven to dislike dying and to try to postpone it for as long as possible. However, I don't think we are particularly hardwired to prefer this form of weird cryonic rebirth over never waking up at all. Given that our general preference not to die has nothing fundamental about it, but is rather a case of us following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing? Some form of longevity research that extends our lives to, say, 200 years, without going the cryonic route with all the above risks (especially for the first few generations of cryonic guinea pigs), seems much harder to argue against.

comment by CronoDAS · 2010-05-26T21:34:07.051Z · LW(p) · GW(p)

Reason #7 not to sign up: There is a significant chance that you will suffer information-theoretic death before your brain can be subjected to the preservation process. Your brain could be destroyed by whatever it is that causes you to die (such as a head injury or massive stroke) or you could succumb to age-related dementia before the rest of your body stops functioning.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-26T21:42:34.160Z · LW(p) · GW(p)

Regarding dementia, it isn't at all clear that it will necessarily lead to information-theoretic death. We don't have a good enough understanding of dementia to know if the information is genuinely lost or just difficult to recover. The fact that many forms of dementia involve more or less lucid periods, times when patients can remember who people are and other times when they cannot, is tentative evidence that the information is recoverable.

Also, this isn't that strong an argument. It isn't going to alter whether or not it makes sense to sign up by more than, at the very most, an order of magnitude or so (depending on the chance of violent death and the chance that one will have dementia late in life).

comment by CronoDAS · 2010-05-26T21:25:33.168Z · LW(p) · GW(p)

Reason #5 to not sign up: Because life sucks.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-26T21:29:11.607Z · LW(p) · GW(p)

Huh, I think I may have messed up, because (whether I should admit it or not is unclear to me) I was thinking of you specifically when I wrote the second half of reason 4. Did I not adequately describe your position there?

Replies from: CronoDAS
comment by CronoDAS · 2010-05-26T21:35:43.575Z · LW(p) · GW(p)

You came pretty close.

comment by Daniel_Burfoot · 2010-05-28T03:27:10.338Z · LW(p) · GW(p)

Anyone else here more interested in cloning than cryonics?

Seems 100x more feasible.

Replies from: JoshuaZ, timtyler, Nick_Tarleton, Nick_Tarleton
comment by JoshuaZ · 2010-05-28T03:37:18.129Z · LW(p) · GW(p)

More feasible, yes, but not nearly as interesting a technology. What will cloning do? If we clone to make new organs, then it is a helpful medical technique, one among many. If we are talking about reproductive cloning, then that individual has no closer identity to me than an identical twin (indeed a bit less, since the clone won't share the same environment growing up). The other major advantage of cloning is that we could potentially use it to deliberately clone copies of smart people. But that's a pretty minor use, and fraught with its own ethical problems. And it would still take a long time to be useful: let's say we get practical cloning tomorrow. Even if some smart person agreed to be cloned, we'd still need to wait around 12 years at a very minimum before the clone could be of much use.

Cryonics is a much larger game changer than cloning.

comment by timtyler · 2010-05-30T11:50:10.326Z · LW(p) · GW(p)

Re: "Anyone else here more interested in cloning than cryonics?"

Sure. Sexual reproduction is good too.

comment by Nick_Tarleton · 2010-05-28T03:50:44.666Z · LW(p) · GW(p)

Interested in what way? Do you see it as a plausible substitute good from the perspective of your values?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-28T04:50:30.490Z · LW(p) · GW(p)

Yes. If cloning were an option today, and I were forced to choose cloning vs. cryonics, I would choose the former.

Replies from: Nick_Tarleton, Sniffnoy
comment by Nick_Tarleton · 2010-05-28T05:27:10.351Z · LW(p) · GW(p)

What benefit do you see in having a clone of you?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-28T16:14:07.981Z · LW(p) · GW(p)

I think by raising my own clone, I could produce a "more perfect" version of myself. He would have the same values, but an improved skill set and better life experiences.

Replies from: DanielVarga, Emile
comment by DanielVarga · 2010-05-29T19:00:06.635Z · LW(p) · GW(p)

You know what, I am quite content with a 50% faithful clone of myself. It is even possible that there is some useful stuff in that other 50%.

comment by Emile · 2010-05-30T14:13:51.932Z · LW(p) · GW(p)

I think by raising my own clone, I could produce a "more perfect" version of myself. He would have the same values, but an improved skill set and better life experiences.

Do you have any convincing reasons to believe that? How do you account for environmental differences?

comment by Sniffnoy · 2010-05-28T04:54:49.212Z · LW(p) · GW(p)

What exactly would "choosing cloning" consist of?

comment by Nick_Tarleton · 2010-05-28T03:48:15.519Z · LW(p) · GW(p)

Interested in what way? Do you highly value the existence of organisms with your genome?

comment by DanielLC · 2010-05-28T20:53:28.038Z · LW(p) · GW(p)

I don't understand the big deal with this. Is it just selfishness? You don't care how good the world will be, unless you're there to enjoy it?

comment by zero_call · 2010-05-28T06:27:46.011Z · LW(p) · GW(p)

There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same: there is no proof that cryonics will work, whether right now, or 20 or 50 years in the future. I hate to sound so cynical, and I don't mean to rain on anyone's parade, but I'm just stating the facts.

Bear in mind they don't just have to prove it will work. They also need to show you can be uploaded, reverse-aged, or whatever else comes next. (Now awaiting hordes of flabbergasted replies and accusations.)

Replies from: CronoDAS, JoshuaZ, Morendil, Blueberry
comment by JoshuaZ · 2010-05-28T14:03:10.334Z · LW(p) · GW(p)

There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there.

This is a very bad argument. First, all claims are probabilistic, so it isn't even clear what you mean by proof. Second, by the exact same logic I could say that one shouldn't try anything that involves technology that doesn't exist yet, because we don't know if it will actually work. So the argument has to fail.

comment by Morendil · 2010-05-28T06:46:42.620Z · LW(p) · GW(p)

There's a much better, simpler reason to reject cryonics: it isn't proven.

That's a widely acknowledged fact. And if you make that your actual reason for rejecting cryonics, some implications follow from it: for instance, that we should be investing massively more in research aiming to provide proof than we currently are.

The arguments we tend to hear are more along the lines of "it's not proven, it's an expensive eccentricity, it's morally wrong, and besides even if it were proved to work I don't believe I'd wake up as me so I wouldn't want it".

comment by Blueberry · 2010-05-28T06:35:35.074Z · LW(p) · GW(p)

I have no idea whether it will work, but right now, the only alternative is death. I actually think it's unlikely that people preserved now will ever be revived, more for social and economic reasons than technical ones.

Replies from: Baughn
comment by Baughn · 2010-05-28T12:05:53.864Z · LW(p) · GW(p)

How much do you believe it would cost?

Inasmuch as I'm for cryopreservation (though I'm having some trouble finding a way to do it in Norway - well, I'll figure something out), I've also decided to be the kind of person who would, if still alive once reviving cryopreserved people becomes technically possible, pay for reviving as many as I can afford.

I tend to assume that other cryopreservationists think the same way. This means the chance of being revived, assuming nobody else wants to pay for it (including a possible FAI), is roughly proportional to the fraction of cryopreservationists who are still alive, divided by the cost of reviving someone as a fraction of their average income at the time.

Thus, I wonder - how costly will it be?
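One way to make that back-of-the-envelope estimate concrete, with every number hypothetical:

```python
# A back-of-the-envelope sketch of the estimate above; every number is
# hypothetical.

def revival_chance(n_frozen, n_survivors, revival_cost, avg_income,
                   income_share=0.1, years=10):
    """Crude chance that a given frozen person is revived, if each surviving
    cryopreservationist spends `income_share` of their income on revivals
    for `years` years."""
    total_budget = n_survivors * avg_income * income_share * years
    revivals_affordable = total_budget / revival_cost
    return min(1.0, revivals_affordable / n_frozen)

# 10,000 people preserved, 2,000 cryopreservationists still alive,
# revival costs 50,000 in then-current money, average income 100,000.
print(revival_chance(n_frozen=10_000, n_survivors=2_000,
                     revival_cost=50_000, avg_income=100_000))  # 0.4
```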

Replies from: Blueberry
comment by Blueberry · 2010-05-28T15:19:42.080Z · LW(p) · GW(p)

Once the infrastructure and technology for revival are established, revival itself won't be very costly. The economic problem is getting that infrastructure and technology established in the first place.

I would guess you're far more altruistic than most people. Really, as many as you can afford?

Replies from: Baughn
comment by Baughn · 2010-05-28T16:40:56.597Z · LW(p) · GW(p)

It's not altruism, it's selfishness.

I'm precommitting to reviving others, if I have the opportunity; on the assumption that others do the same, the marginal benefit to me of signing up for cryopreservation goes up.

And, admittedly, I expect to have a considerable amount of disposable income. "As many as I can afford" means "While maintaining a reasonable standard of living", but "reasonable" is relative; by deliberately not increasing it too much from what I'm used to as a student, I can get more slack without really losing utilons.

It helps that my hobbies are, by and large, very cheap. Hiking and such. ;)