Normal Cryonics

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T19:08:48.301Z · LW · GW · Legacy · 964 comments


I recently attended a small gathering whose purpose was to let young people signed up for cryonics meet older people signed up for cryonics - a matter of some concern to the old guard, for obvious reasons.

The young cryonicists' travel was subsidized.  I suspect this led to a greatly different selection filter than usually prevails at conferences of what Robin Hanson would call "contrarians".  At an ordinary conference of transhumanists - or libertarians, or atheists - you get activists who want to meet their own kind, strongly enough to pay conference fees and travel expenses.  This conference was just young people who took the action of signing up for cryonics, and who were willing to spend a couple of paid days in Florida meeting older cryonicists.

The gathering was 34% female, around half of them single, and there were a few kids.  This may sound normal enough, unless you've been to a lot of contrarian-cluster conferences, in which case you just spit coffee all over your computer screen and shouted "WHAT?"  I did sometimes hear "my husband persuaded me to sign up", but no more frequently than "I persuaded my husband to sign up".  Around 25% of the people present were from the computer world, 25% from science, and 15% were doing something in music or entertainment - with possible overlap, since I'm working from a show of hands.

I was expecting there to be some nutcases in that room, people who'd signed up for cryonics for just the same reason they subscribed to homeopathy or astrology, i.e., that it sounded cool.  None of the younger cryonicists showed any sign of it.  There were a couple of older cryonicists who'd gone strange, but none of the young ones that I saw.  Only three hands went up that did not identify as atheist/agnostic, and I think those also might have all been old cryonicists.  (This is surprising enough to be worth explaining, considering the base rate of insanity versus sanity.  Maybe if you're into woo, there is so much more woo that is better optimized for being woo, that no one into woo would give cryonics a second glance.)

The part about actually signing up may also be key - that's probably a ten-to-one or worse filter among people who "get" cryonics.  (I put to Bill Faloon of the old guard that probably twice as many people had died while planning to sign up for cryonics eventually as had actually been suspended; and he said "Way more than that.")  Actually signing up is an intense filter for Conscientiousness, since it's mildly tedious (requires multiple copies of papers signed and notarized with witnesses) and there's no peer pressure.

For whatever reason, those young cryonicists seemed really normal - except for one thing, which I'll get to tomorrow.  Except for that, then, they seemed like very ordinary people: the couples and the singles, the husbands and the wives and the kids, scientists and programmers and sound studio technicians.

It tears my heart out.

At some future point I ought to post on the notion of belief hysteresis, where you get locked into whatever belief hits you first.  So it had previously occurred to me (though I didn't write the post) to argue for cryonics via a conformity reversal test:

If you found yourself in a world where everyone was signed up for cryonics as a matter of routine - including everyone who works at your office - you wouldn't be the first lonely dissenter to earn the incredulous stares of your coworkers by unchecking the box that kept you signed up for cryonics, in exchange for an extra $300 per year.

(Actually it would probably be a lot cheaper, more like $30/year or a free government program, with that economy of scale; but we should ignore that for purposes of the reversal test.)

The point being that if cryonics were taken for granted, it would go on being taken for granted; it is only the state of non-cryonics that is unstable, subject to being disrupted by rational argument.

And this cryonics meetup was that world.  It was the world of the ordinary scientists and programmers and sound studio technicians who had signed up for cryonics as a matter of simple common sense.

It tears my heart out.

Those young cryonicists weren't heroes.  Most of the older cryonicists were heroes, and of course there were a couple of other heroes among us young folk, like a former employee of Methuselah who'd left to try to put together a startup/nonprofit around a bright idea he'd had for curing cancer (note: even I think this is an acceptable excuse).  But most of the younger cryonicists weren't there to fight a desperate battle against Death, they were people who'd signed up for cryonics because it was the obvious thing to do.

And it tears my heart out, because I am a hero and this was like seeing a ray of sunlight from a normal world, some alternate Everett branch of humanity where things really were normal instead of crazy all the goddamned time, a world that was everything this world could be and isn't.

Then there were the children, some of whom had been signed up for cryonics since the day they were born.

It tears my heart out.  I'm having trouble remembering to breathe as I write this.  My own little brother isn't breathing and never will again.

You know what?  I'm going to come out and say it.  I've been unsure about saying it, but after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I'm going to come out and say it:  If you don't sign up your kids for cryonics then you are a lousy parent.

If you aren't choosing between textbooks and food, then you can afford to sign up your kids for cryonics.  I don't know if it's more important than a home without lead paint, or omega-3 fish oil supplements while their brains are maturing, but it's certainly more important than you going to the movies or eating at nice restaurants.  That's part of the bargain you signed up for when you became a parent.  If you can afford kids at all, you can afford to sign up your kids for cryonics, and if you don't, you are a lousy parent.  I'm just back from an event where the normal parents signed their normal kids up for cryonics, and that is the way things are supposed to be and should be, and whatever excuses you're using or thinking of right now, I don't believe in them any more, you're just a lousy parent.

964 comments

Comments sorted by top scores.

comment by sbharris · 2010-01-21T09:39:29.956Z · LW(p) · GW(p)

January 21, 2010

Eliezer Yudkowsky writes (in Normal Cryonics):

The part about actually signing up may also be key - that's probably a ten-to-one or worse filter among people who "get" cryonics. (I put to Bill Faloon of the old guard that probably twice as many people had died while planning to sign up for cryonics eventually as had actually been suspended; and he said "Way more than that.") Actually signing up is an intense filter for Conscientiousness, since it's mildly tedious (requires multiple copies of papers signed and notarized with witnesses) and there's no peer pressure.

Comment: there’s that, but if that was all it was, it wouldn’t be harder than doing your own income taxes by hand. A lot more people manage that than atheists who can afford it manage to sign up for cryonics.

So what’s the problem? A major one is what I might term the “creep factor.” Even if you have no fears of being alone in the future, or being experimented upon by denizens of the future, there’s still the problem that you have to think about your own physical mortality in a very concrete way. A way which requires choices, for hours and perhaps even days.

And they aren’t comforting choices, either, such as planning your own funeral. The conventional funeral is an event where you can imagine yourself in a comfortable nice casket, surrounded by people either eulogizing you, or kicking themselves because they weren’t nicer to you while you were alive. These thoughts may comfort those contemplating suicide, but they don’t comfort cryonicists.

No, you won’t be in any slumber-chamber. Instead they’ll cut your head off and it will push up bubbles, not daisies. At the very least they’ll fill your vessels with cold dehydrating solution and you’ll end up upside down and naked at 321 °F below zero, like some shriveled-up old vampire.

Will you feel any of this? No. Is it any more gruesome than the alternatives of skeletonizing in a flame, or by slow decay? No. But the average person manages to mostly avoid thinking of the alternatives, and the funeral industry helps them do it. But there’s no avoiding thinking hard about this nitty-gritty physical death stuff, when you sign up for cryonics.

There’s even some primal primate fear involved, something like the fear of snakes. Except that cryonics taps into fears about being alone and alienated in the future, along with primal fears of decapitation (monkeys hate seeing monkey parts, particularly monkey heads). My illustration of the power of these memes is Washington Irving’s short stories: out of the very many he wrote, only two are now remembered, and yet, at the same time, remarkably almost everyone knows those two. They are Rip Van Winkle and The Legend of Sleepy Hollow. There’s a reason for this.

The psychological factors can surprise the most dyed-in-the-wool atheists who have experience with death. I myself came to cryonics as a physician, already having spent most of a year dissecting corpses, and later seeing much real-time dying. It didn’t completely fix the problem of my own physical mortality. When I came to actually signing up for cryonics, already having been convinced of it for some time, I felt significant psychological resistance, even so. There’s a difference between what you know intellectually and what your gut tells you. Cryonics is like skydiving in that regard.

At this point, it’s worth repeating two of my favorite cryonics stories (the intellectual world is composed of stories, as somebody said, in the same way the physical world is composed of atoms).

Story #1 involves the winner of the Omni magazine essay contest on "Why I Want To Be Cryonically Suspended." The prize: a free sign-up to Alcor, no money needed. The young man who won with the best essay about why he wanted to do it was duly offered the prize he’d eloquently convinced himself, and everyone else, that he wanted. And when it came down to doing it, he couldn’t make himself do it. Interesting.

Story #2 is about Frederik Pohl, atheist S.F. writer of a lot of good tales, including one of the better cryonics stories, The Age of the Pussyfoot. Thirty years ago Pohl was approached by a cryonics organization about signing up, on the basis of his novel and known beliefs. He gave the usual counterargument about the chance not being worth the expense. The return was an offer to cryopreserve him free, for the publicity. He was taken aback, and said he’d have to think about it. Later, after much prodding, he produced what he admitted (and hadn’t realized before) was the real reason: he couldn’t get past the creep factor. Pohl is still alive as of this writing (he’s 90), but he’ll eventually die and won’t be cryopreserved, even though his intellect tells him (and has long told him) that he should.

So, in summary, I’m happy that Eliezer spent some time in Florida socializing with happy yuppies who had already made it past the barrier to signing up for cryonics. But for those out in the world who haven’t actually done that yet--- signed and notarized--- there is one more test of mettle for the Hero, which even they may not realize yet awaits them. This is a test of the power of will over emotion, and it’s not for the faint of spirit. In some ways it’s like the scene from the Book of the Dead where the dead person’s heart is weighed, except that this is where the would-be cryonicist finds that his or her courage is being weighed. It’s like doing the long tax return while signing yourself up for organ donation or medical school dissection, or the like.

I wish them luck. I wonder if anybody asked people at the conference what their own experiences had been, in getting past the tests of the underworld, or the under-MIND, to gain that strange chance to be your own Osiris.

Steve Harris, M.D., Alcor member since 1987

Replies from: Blueberry, ciphergoth, ShannonVyff, Normal_Anomaly
comment by Blueberry · 2010-01-21T23:36:05.774Z · LW(p) · GW(p)

there’s still the problem that you have to think about your own physical mortality in a very concrete way. A way which requires choices, for hours and perhaps even days.

I'm baffled that this is the stumbling block for so many people. I can understand being worried about the cost/uncertainty trade-off, but I really don't understand why it's any less troublesome than buying life insurance, planning a funeral, picking a cemetery plot, writing a will, or planning for cremation. People make choices that involve contemplating their death all the time, and people make choices about unpleasant-sounding medical treatments all the time.

Is it any more gruesome than the alternatives of skeletonizing in a flame, or by slow decay? No. But the average person manages to mostly avoid thinking of the alternatives, and the funeral industry helps them do it.

Well, maybe more people would sign up if Alcor's process didn't involve as much thinking about the alternatives? I had thought that the process was just signing papers and arranging life insurance. But if Alcor's process is turning people away, maybe that needs to change.

Maybe I'm just deluding myself: I'm not in a financial position to sign up yet, and I plan on signing up when I am. But I can't see the "creep factor" being an issue for me at all. I have no idea what that would feel like.

Replies from: Technologos, Dustin, Zian
comment by Technologos · 2010-01-22T15:40:41.672Z · LW(p) · GW(p)

buying life insurance

For what it's worth, I've heard people initially had many of the same hang-ups about life insurance, saying that they didn't want to gamble on death. The way that salespeople got around that was by emphasizing that the contracts would protect the family in the event of the breadwinner's death, thus making it less of a selfish thing.

I wonder if cryo needs a similar marketing parallel. "Don't you want to see your parents again?"

comment by Dustin · 2010-01-27T01:08:32.182Z · LW(p) · GW(p)

I have no idea what that would feel like.

This is the exact sentence that crossed my mind upon reading the original comment.

I often find that my reactions and feelings are completely different from other people's, though.

comment by Zian · 2013-08-11T20:23:39.706Z · LW(p) · GW(p)

so much thinking about alternatives

Speaking as someone who tried getting a concrete price estimate, the process can stand to be much improved. I had/will have to (if I follow through):

  1. Get convinced that cryo is worth digging into (maybe call this step "0").
  2. Figure out where to get price info (this took another chunk of time until I ran across some useful Less Wrong posts) for life insurance related stuff.
  3. Contact a life insurance person (as a cold call).
  4. Hand over some personal info.
  5. Get a pile of PDFs in return, along with finding out that I still have to...
  6. Decide between different cryo organizations.
     a) Find out info about the organizations' recurring fees.
     b) Do research into each organization.
  7. Decide which cryo approach to take.
  8. Read over all the stuff from step 5.
  9. Talk to the organization from step 6 about the physical logistics such as the wrist band thingy.
  10. Make a final Y/N decision.
  11. Hunt down notary(ies) and witness(es) (?)
      a) Make appointments with everyone.
  12. Fill out the papers from the life insurance people.
  13. Fill out the papers from the cryo organization.
  14. Sign stuff.
  15. Sign more stuff.
  16. Mail everything.

At any time between steps 1 and 16, the process can fall completely apart.

comment by Paul Crowley (ciphergoth) · 2010-01-27T08:46:02.398Z · LW(p) · GW(p)

there’s that, but if that was all it was, it wouldn’t be harder than doing your own income taxes by hand. A lot more people manage that than atheists who can afford it manage to sign up for cryonics.

I live in the UK, and when I was self-employed I had an accountant do my taxes. I'm looking into signing up, and it looks to be much, much harder than that; not an "oh, must get around to it" thing but a long and continuing period of investigation to even find out what I need to sort out. This bar currently seems very, very high to me; if it were as simple as getting a mortgage I'd probably already be signed up.

Replies from: Morendil
comment by Morendil · 2010-01-29T16:11:15.425Z · LW(p) · GW(p)

Rudi Hoffman has sent word back.

The quote I was given for whole-life (constant coverage, constant premiums, no time limit) is $1900 per year (I'm 40, male and healthy), for a payout of $200K.

The more problematic news is that the life insurance company may start requiring a US Social Security number.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-29T16:16:19.354Z · LW(p) · GW(p)

Wow, that's a lot. Thanks!

Replies from: Morendil
comment by Morendil · 2010-01-29T16:34:29.521Z · LW(p) · GW(p)

Yes. The major conclusion here is - if you are going to sign up, sign up early.

comment by ShannonVyff · 2010-01-21T22:30:30.160Z · LW(p) · GW(p)

Steve, I didn't know that story about Frederik Pohl - thank you for posting it, fascinating. Also, they weren't all yuppies at the FL teens & twenties cryonicist conference; there were representatives from all sorts of backgrounds/classes. Personally, my motivation in signing up for cryonics is that my short natural lifespan pales in comparison to the amount of knowledge we have yet to learn about the Universe; that keeps me in awe as I currently learn all that I can, and all the new things I realize I don't know. That said, I'm perfectly happy with my own life, with my family, friends and community work. If I get more time - an "extreme lifespan" to see what is out there in the billions of light years of space, to help end inequality if it still exists, or to move on to other goals - then so be it: I got lucky that cryonics worked ;-)

comment by Normal_Anomaly · 2010-11-28T22:18:21.932Z · LW(p) · GW(p)

This is an awesome comment. I plan to sign up for cryonics when I can, and I'm really hoping I have the guts to go through with it for myself and any children I may someday have. I hope it really is comparable to signing up to be an organ donor, because I did that without a second thought. On the other hand, that was just one of many boxes on the driver's license paperwork.

comment by Alicorn · 2010-01-19T21:17:33.812Z · LW(p) · GW(p)

I'm still trying to convince my friends.

It's still not working.

Maybe I'm doing it backwards. Who is already signed up and wants to be my friend?

Replies from: scotherns, MichaelGR, roland, MichaelGR, Psy-Kosh, AngryParsley
comment by scotherns · 2010-01-21T13:47:13.249Z · LW(p) · GW(p)

I find it rather odd that no one has answered the original question.

I'm signed up, and I'll be your friend.

Replies from: elityre, Alicorn
comment by Eli Tyre (elityre) · 2021-06-30T09:41:14.432Z · LW(p) · GW(p)

This made me smile. : )

comment by Alicorn · 2010-01-21T14:02:19.369Z · LW(p) · GW(p)

Someone did answer via PM, but the more, the merrier. Preferred mode of offsite contact?

Replies from: scotherns
comment by scotherns · 2010-01-22T08:27:36.240Z · LW(p) · GW(p)

PM sent with details.

comment by MichaelGR · 2010-01-20T04:10:45.555Z · LW(p) · GW(p)

What's the difference between making friends now and making friends after you wake up? What's the difference between making a family now, and making a new family then? (here I'm referencing both this comment about finding new friends, and your comment in the other thread about starting a new family)

If a friendly singularity happens, I think it's likely that the desire of extroverts like you for companionship and close relationship will have been taken into account along the way and that forming these bonds will still be possible.

Of course right now I'd want to be with my current fiancée, and I'm planning to try to convince her to sign up for cryonics, but if I lost her, I'd still rather live and have to figure out another way to get companionship in the far future than to die.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T04:14:41.620Z · LW(p) · GW(p)

First of all, my friends aren't interchangeable. It's already a big step for me to be willing to make a presorted cryonics-friendly friend as a substitute for getting my entire existing cohort of companions on board, or even just one. Second of all, waiting until after revival introduces another chain of "ifs" - particularly dreadful ifs - into what's already a long, tenuous chain of ifs.

Replies from: MichaelGR, RulerofBenthos
comment by MichaelGR · 2010-01-20T04:33:21.400Z · LW(p) · GW(p)

First of all, my friends aren't interchangeable.

Of course they aren't. I'm just saying that I'd prefer making new friends to death, and that despite the fact that I love my friends very much, there's nothing that says that they are the "best friends I can ever make" and that anybody else can only provide an inferior relationship.

Second of all, waiting until after revival introduces another chain of "ifs" - particularly dreadful ifs - into what's already a long, tenuous chain of ifs.

Once again, between the certitude of death and the possibility of life in a post-friendly-singularity world, I'll take the "ifs" even if it means doing hard things like re-building a social circle (not something easy for me).

I'm just having a really hard time imagining myself making the decision to die because I lost someone (or even everyone). In fact, I just lost my uncle (brain cancer), and I loved him dearly, he was like a second father to me. His death just made me feel even more strongly that I want to live.

But I suppose we could be at opposite ends of the spectrum when it comes to these kinds of things.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T04:36:14.986Z · LW(p) · GW(p)

I guess I'm just more dependent on ready access to deeply connected others than you? This sounds like a matter of preferences, not a matter of correctly turning those preferences into plans.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-21T07:14:41.247Z · LW(p) · GW(p)

If you need friends post-suspension, you can pay for my suspension (currently my budget goes to X-risk) and I will promise to spend a total of at least one subjective current human lifetime sincerely trying to be the best friend I can for you, unless the revived get a total of less than 100 subjective human lifetimes of run-time, in which case I will give you 1% of my total run-time instead. If that's not enough, you can also share your run-time with me. I will even grant you the right to modify my reward centers to directly make me like you in any copy running on run-time you give me. This offer doesn't allow your volition to replace mine in any other respect if the issue is important.

Replies from: orthonormal
comment by orthonormal · 2010-01-21T07:26:32.239Z · LW(p) · GW(p)

I'd bet karma at 4 to 1 odds that Alicorn finds this proposal deeply disturbing rather than helpful.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T07:35:19.938Z · LW(p) · GW(p)

You're on. Alicorn, would you be so kind as to arbitrate? We need you to evaluate which of these three categories Michael's offer fits in to:

  1. Deeply Disturbing
  2. Helpful
  3. Just 'somewhat' disturbing all the way through to indifference.

Would 'slightly amusing' count as helpful if it served to create slightly more confidence in the prospect of actively seeking out the friendship of the potentially cryonically inclined?

Replies from: Alicorn
comment by Alicorn · 2010-01-21T14:11:02.611Z · LW(p) · GW(p)

Yep, disturbing. "Deeply" might be pushing it a little. But a) I'll have to mess with my budget to afford one suspension, let alone two, and while I'd chip in for my sister if she'd let me, people I do not yet know and love are not extended the same disposition. b) There's presently no way to enforce such a promise. c) Even if there were, that kind of enforcement would itself be creepy, since my ethics would ordinarily oblige me to abide by any later change of mind. d) This arrangement does nothing to ensure that I will enjoy MichaelVassar's company; I'm sure he's a great person, but there are plenty of great people I just don't click with. e) I do not like the idea of friendships with built-in time quotas, I mean, ew.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T14:16:52.331Z · LW(p) · GW(p)

Yep, disturbing. "Deeply" might be pushing it a little.

"Deeply" seemed unlikely given that 'deeply disturbing' would have to be reserved in case Michael had seriously offered his services as a mercenary to carry out a kidnapping, decapitation, and non-consensual vitrification.

I do not like the idea of friendships with built-in time quotas, I mean, ew.

But it is so efficient! Surely Robin has made a post advocating such arrangements somewhere. ;)

Replies from: orthonormal, Alicorn
comment by orthonormal · 2010-01-22T01:03:47.571Z · LW(p) · GW(p)

So I guess that's a "push" on the original terms of the bet, falling between "helpful" and "deeply disturbing".

Replies from: wedrifid
comment by wedrifid · 2010-01-22T02:07:21.818Z · LW(p) · GW(p)

Yes, the bookmaker loses his overheads. That's what the bookie gets for accepting bets with ties.

comment by Alicorn · 2010-01-21T14:19:19.843Z · LW(p) · GW(p)

Now, Robin, there's a person who regularly deeply disturbs me.

comment by RulerofBenthos · 2018-05-10T20:39:40.758Z · LW(p) · GW(p)

You're forgetting the part where they revive you only when there is a cure for whatever you died from. You may be revived long before or after they are revived. And if that happens, there's also the chance they can die again and not be stored before you're revived. You'd probably have to give instructions to hold off on revival; otherwise, you risk the missed connection.

comment by roland · 2010-01-19T23:20:39.347Z · LW(p) · GW(p)

EDIT:

I found all the information I need here: http://www.cryonics.org/become.html

comment by MichaelGR · 2010-01-22T21:09:59.809Z · LW(p) · GW(p)

I'm in the process of signing up (yeah, I know, they're all saying that... But I really am! and plan to post about my experience on LW once it's all over) and I'll be your friend too, if you'll have me as a friend.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T21:31:01.005Z · LW(p) · GW(p)

Even if you were not signed up and never planned to be, I can always use more friends! What's your preferred offsite contact method?

Replies from: komponisto, MichaelGR
comment by komponisto · 2010-01-22T21:57:59.570Z · LW(p) · GW(p)

I can always use more friends!

I've always wondered what the "Add to Friends" button on LW does, so I'm trying it out on you. (I hope you don't mind!)

Replies from: RobinZ, MichaelGR, Alicorn
comment by RobinZ · 2010-01-22T22:09:26.985Z · LW(p) · GW(p)

It's a feed aggregator. There used to be a link on LessWrong to view all contributions by "Friends", but it was removed some time ago.

comment by MichaelGR · 2010-01-22T22:41:50.904Z · LW(p) · GW(p)

I had never noticed that button. I'll try it too.

comment by Alicorn · 2010-01-22T21:59:16.010Z · LW(p) · GW(p)

I don't mind at all, but I haven't found it to do anything much when I've tried it.

Replies from: komponisto
comment by komponisto · 2010-01-22T22:02:59.207Z · LW(p) · GW(p)

Indeed not; all it seemed to do (at least on my end) was transform itself into a "Remove from Friends" button. Did anything happen on your end?

Replies from: Alicorn
comment by Alicorn · 2010-01-22T22:05:38.192Z · LW(p) · GW(p)

I detected no change.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T22:08:45.819Z · LW(p) · GW(p)

On his overview page, can you see which articles he liked/disliked?

Replies from: Alicorn
comment by Alicorn · 2010-01-22T22:09:18.549Z · LW(p) · GW(p)

Doesn't look like it.

Replies from: RobinZ
comment by RobinZ · 2010-01-22T22:12:58.069Z · LW(p) · GW(p)

I can see bgrah449's - I think that's what "Make my votes public" does.

comment by MichaelGR · 2010-01-22T21:57:18.935Z · LW(p) · GW(p)

I sent you a private message.

comment by Psy-Kosh · 2010-01-21T14:36:23.289Z · LW(p) · GW(p)

I'm working on it. Is taking a bit longer than planned because insurance company seemed to throw a few extra hoops for me to jump through. (including some stuff from some samples they took from me that they don't like. Need to see a doc and have them look at the data and pass judgement on it for the insurance company). Hence need to make doc appointment.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T15:05:21.285Z · LW(p) · GW(p)

Actually having the process underway is probably close enough. Preferred mode of offsite contact?

Replies from: Psy-Kosh
comment by Psy-Kosh · 2010-01-21T15:34:44.194Z · LW(p) · GW(p)

Am available via email, IM, phone or online voice chat. (Any direct meetup depends on where you live, of course.)

The first two though would probably be the main ones for me.

Anyways, will PM you specifics (e-addy, phone number, other stuff if you want; as far as IM, lemme know which IM service you use, if any).

Hrm... LWbook: Where giving (or getting) the (extremely) cold shoulder is a plus. ;)

comment by AngryParsley · 2010-01-20T01:13:19.128Z · LW(p) · GW(p)

I'll say it again: It's much easier for you to sign up alone than it is to convince your friends to sign up with you.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:14:16.547Z · LW(p) · GW(p)

I will sign up when I have a reasonable expectation that I'm not buying myself a one-way ticket to Extrovert Hell.

Replies from: wedrifid, AngryParsley
comment by wedrifid · 2010-01-20T01:43:42.648Z · LW(p) · GW(p)

Given the opening post I am not sure I understand what you are saying. What about being resurrected with the people described would be an Extrovert Hell? That you don't have any pre-revival friends?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:46:42.482Z · LW(p) · GW(p)

I'm referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.

Replies from: wedrifid, Kevin, Vladimir_Nesov
comment by wedrifid · 2010-01-20T01:52:10.797Z · LW(p) · GW(p)

With those values, the 'find friends who are signed up for cryonics' plan sounds like the obvious one. (Well, less obvious than the one where you kidnap your friends, cut off their heads and preserve them against their will. But more sane.)

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:54:47.146Z · LW(p) · GW(p)

I don't think most of my friendships would survive kidnapping, decapitation, and non-consensual vitrification, even if my friends survived it.

Replies from: wedrifid
comment by wedrifid · 2010-01-20T02:00:43.181Z · LW(p) · GW(p)

A friend will help you move. A good friend will help you move a body. A great friend is the body.

Replies from: Eliezer_Yudkowsky, Bindbreaker
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T02:28:47.793Z · LW(p) · GW(p)

That sounded pretty odd until I looked up the parent comment, I gotta tell you.

comment by Bindbreaker · 2010-02-09T04:14:48.486Z · LW(p) · GW(p)

This is an incredibly good joke.

comment by Kevin · 2010-01-20T11:12:52.534Z · LW(p) · GW(p)

I bet that online dating and friend making will work a lot better in the future. Can you elaborate about what is so dreadful about waking up without knowing anyone?

comment by Vladimir_Nesov · 2010-01-20T02:30:58.812Z · LW(p) · GW(p)

I'm referencing a prior thread. Pre-revival friends or family are a prerequisite for me not looking at the prospect of revival with dread instead of hope.

But, but!..

You know what? This isn't about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn't even a feather in the scales, when a life is at stake. Just shut up and multiply.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:35:02.682Z · LW(p) · GW(p)

Okay, 1) I dislike the "shut up and multiply" sentiment anyway, since it's so distinctly consequentialist. I will not shut up, and I will only multiply when everything I'm multiplying is really commensurate including in a deontic sense. I will walk away from Omelas should I have occasion. And 2) it's my freakin' life. I'm not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.

Is there some other significance to the links and quote that you hoped I'd extract?

Replies from: wedrifid, Vladimir_Nesov
comment by wedrifid · 2010-01-20T03:03:45.313Z · LW(p) · GW(p)

Is there some other significance to the links and quote that you hoped I'd extract?

The significant claim seems to be that it is often necessary to quell an instinctive reaction in order to best meet your own preferences. There are some reflectively consistent preference systems in which it is better to die than to suffer the distress of a lonely revival, but there are many more in which it is not. I take Vladimir's suggestion to be "make sure this is what you really want, not just akrasia magnified a thousand times".

And 2) it's my freakin' life. I'm not deciding to deny someone else the chance to be ferried to the future on the basis of it sounding lonely.

Often claims of the shape of Vladimir's are intended to enforce a norm upon the recipient. In this case the implied 'should' is of the kind "action X may best give Y what they want" which is at least slightly less objectionable.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T03:13:13.689Z · LW(p) · GW(p)

I did a reversal test on the preference; if everybody I cared about disappeared from my life all at once and everybody who remained was as alien as the people of the future will likely be, I would probably want to die, no cryonics required.

Replies from: Kevin, Bindbreaker
comment by Kevin · 2010-01-20T11:26:32.421Z · LW(p) · GW(p)

I bet that online dating and friend making will work a lot better in the future. There probably exist many people in the future that appreciate your unique knowledge and want to get to know you better.

When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future.

Can you talk more about why your future is so dreadful? Stating that all possible futures are worse than death is a strong statement. In this reversal test, it even assigns a "probably" to being suicidal. I think your flaw in reasoning lies there. I don't think that being "probably" suicidal in the future is sufficient reason to not visit the future.

In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren't suicidal anymore. With Friendly AI, this moral justification may remain true in the future, and once you're on drugs or other brain enhancements, you'll probably love life and think your self from your first life absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don't think there exist very many wake-up futures in which your existential misery can not be fixed.

To me, it seems like in nearly all cases it is worth waiting until the future to decide whether or not it is worth living.

Replies from: Aurini, Alicorn, wedrifid
comment by Aurini · 2010-01-21T03:24:48.994Z · LW(p) · GW(p)

"When you wake up in the future, you will probably immediately meet people from a time not so unlike our own. Going through physical and mental rehab with them could be a good way to form lifelong friendships. You are never going to be the only person from the 20th and 21st century in the future."

Woman: You're from 1999? I'm from 2029! Say, remember when we got invaded by the cybernetic ape army?

Fry: Uh... yeah. Those were some crazy times!

comment by Alicorn · 2010-01-20T14:46:23.419Z · LW(p) · GW(p)

Yeah, uh... threatening me with psychoactive medication is not a good way to make me buy a ticket to the future.

comment by wedrifid · 2010-01-20T12:41:31.716Z · LW(p) · GW(p)

In our time, we morally justify the forcible hospitalization and medication of suicidal people until they aren't suicidal anymore. With Friendly AI, this moral justification may remain true in the future, and once you're on drugs or other brain enhancements, you'll probably love life and think your self from your first life absolutely insane for preferring death to glorious existence. Again, I think your desire for deep connections with other people is likely to be nearly immediately fixable in the future. This does sound a little dystopian, but I don't think there exist very many wake-up futures in which your existential misery can not be fixed.

Resistance is illogical, you will be upgraded.

comment by Bindbreaker · 2010-01-20T04:29:36.931Z · LW(p) · GW(p)

I take it you read "Transmetropolitan?" I don't think that particular reference case is very likely.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T04:33:04.735Z · LW(p) · GW(p)

I have not read that (*googles*) series of comic books.

comment by Vladimir_Nesov · 2010-01-20T02:54:56.634Z · LW(p) · GW(p)

it's my freakin' life

I believe that you are not entitled to your choice of values. Preferences and priors are not up for grabs.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:56:00.678Z · LW(p) · GW(p)

I cannot make heads nor tails of what you're trying to convey.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T03:03:26.324Z · LW(p) · GW(p)

Hmm... At least the content of my position seems to have been rehashed a lot, even if you won't agree with it.

I believe that your opinion about what your values are has very little influence on what your values actually are, which at their backbone are human-universal values plus a lot of person-specific detail that is so far below the level of conscious understanding that it isn't even worth speculating about. Whenever someone states an opinion about their values being extreme, they are seriously wrong about their actual values. Consequently, acting on the misconstrued values is against the person's own actual values.

Replies from: Alicorn, Kaj_Sotala, loqi
comment by Alicorn · 2010-01-20T03:10:17.591Z · LW(p) · GW(p)

I don't grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.

Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don't want you anywhere near any decisions about my life. Back! Back! The power of my value of self-determination compels you!

Replies from: wedrifid, Vladimir_Nesov
comment by wedrifid · 2010-01-20T03:36:37.481Z · LW(p) · GW(p)

Also, if you have an idea about what my values Really Are that is unconnected to what I tell you about them, I don't want you anywhere near any decisions about my life.

I get my ideas about what people's values Really Are based on their decisions. How much weight I place on what they tell me about their values varies based on their behaviour and what they say. I don't make it my business to be anywhere near any decisions about other people's lives except to the extent that they could impact me and I need to protect my interests.

I don't grant nearly as much credence to the idea that there are human-universal values as most people around here seem to. People are a wacky, diverse bunch.

That assumption (and presumption!) of human-universal values scares me at times. It triggers my instinctive "if you actually had the power to act on that belief I would have to kill you" instinct.

Even with that kind of ruthless self-determination in mind, it is true that "acting on the misconstrued values is against the person's own actual values". Vladimir's point is not particularly controversial; whether it applies to you or not is for you to decide and Vladimir to speculate on if he happens to be curious.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T03:41:00.392Z · LW(p) · GW(p)

My decision to tell you about my values counts as a decision, doesn't it?

Replies from: wedrifid
comment by wedrifid · 2010-01-20T03:55:23.189Z · LW(p) · GW(p)

Absolutely. And I weigh that information higher coming from yourself than from many people, given my observations of apparent self-awareness and maturity somewhat beyond what I expect given your self-reported age. Obviously such judgements also vary based on topic and context.

In general, however, my life has been a lot simpler and more successful since realising what people say about their values is not always a reliable indicator.

comment by Vladimir_Nesov · 2010-01-20T03:30:37.265Z · LW(p) · GW(p)

Back! Back! The power of my value of self-determination compels you!

Friendly AI be the judge (I'm working on that). :-)

By the way, this reminds me of Not Taking Over the World (the world is mad and is afraid of getting saved, of course, in the hypothetical scenario where the idea gets taken seriously to begin with!).

Replies from: wedrifid
comment by wedrifid · 2010-01-20T11:14:21.795Z · LW(p) · GW(p)

Friendly AI be the judge (I'm working on that). :-)

Be sure to keep us posted on your progress. It's always good to know who may need a dose of Sword of Good ahead of time. ;)

comment by Kaj_Sotala · 2010-01-20T09:15:14.010Z · LW(p) · GW(p)

I don't recall hearing that kind of an argument presented here anywhere. Yes, there have been arguments about your values shifting when you happen to achieve power, as well as seemingly altruistic behavior actually working to promote individual fitness. But I don't think anybody has yet claimed that whenever somebody feels they have extreme values, they are wrong about them.

Furthermore - if the discussion in those referenced posts is the one you're referring to - I'd be hesitant to claim that the consciously held values are false values. People might actually end up acting on the non-conscious values more than they do on the conscious ones, but that's no grounds for simply saying "your declared values are false and not worth attention". If you went down that route, you might as well start saying that since all ethics is rationalization anyway, any consequentialist arguments that didn't aim at promoting the maximum fitness of your genes were irrelevant. Not to mention that I would be very, very skeptical of any attempts to claim you knew someone else's values better than they did.

There have also been posts specifically arguing that those non-conscious values might not actually be your true values.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-29T19:59:39.843Z · LW(p) · GW(p)

I'm not arguing for the supremacy of non-conscious values: in many cases, people have a good sense of their actual values and consciously resolve their implications, which is what I see as the topic of Which Parts Are "Me"?. The inborn values are not a fixed form, although they are a fixed seed, and their contradictions need to be resolved.

If you went down that route, you might as well start saying that since all ethics is rationalization anyway, any consequentialist arguments that didn't aim at promoting the maximum fitness of your genes were irrelevant.

Genes? The expression of that evil alien elder god? They don't write a default morality.

The links relevant to my argument:

Human universal (we all share the bulk of our values), Complexity of value (there is a lot of stuff coded in the inborn values; one can't explain away huge chunks of this complexity by asserting them not present in one's particular values), Fake simplicity (it's easy to find simple arguments that gloss over a complex phenomenon), No, Really, I've Deceived Myself (it's not a given that one even appreciates the connection of the belief with the asserted content of that belief)

These obviously don't form a consistent argument, but may give an idea of where I'm coming from. I'm only declining to believe particularly outrageous claims, where I assume the claims are being made because of error and not because of a connection to reality; where the claims are not outrageous, they might well indicate the particular ways in which the person's values deviate from the typical.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-30T16:55:33.867Z · LW(p) · GW(p)

I suspect this community overemphasizes the extent to which human universals are applicable to individuals (as opposed to cultures), and underemphasizes individual variation. I should probably write a post regarding this at some point.

comment by loqi · 2010-01-20T19:09:50.312Z · LW(p) · GW(p)

Well put. My own uncertainty with regard to my values is the main reason I'm reluctant to take "mind hacks" out for casual spins - I've been quite surprised in the past by how sophisticated subconscious reactions can be. That said, I don't think I could bring myself to ignore my consciously-held values to the point of doing something as significant as signing up for cryonics, were that necessary.

comment by AngryParsley · 2010-01-20T01:45:29.174Z · LW(p) · GW(p)

I thought "I'm so embarrassed I could die" was just a figure of speech.

You weren't convinced by Eliezer's post? Do you think signing up for cryonics will get you ostracized from your social circles? Besides the two witnesses on some of the forms, nobody will know unless you tell them or flaunt your ID tags. Are there no two people who you are willing to trust with a secret?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:47:48.286Z · LW(p) · GW(p)

...This has nothing to do with embarrassment. The problem isn't that people will stop being my friend over it, the problem is that they will all die and then the best case scenario will be that I will wake up in a bright new future completely alone.

Replies from: wedrifid, gwern, AngryParsley, mattnewport, Peter_de_Blanc
comment by wedrifid · 2010-01-20T01:54:52.130Z · LW(p) · GW(p)

I'm actually still confused. That doesn't sound like 'Extrovert Hell'. Extroverts would just make a ton of new friends straight away. A lone introvert would have more trouble. Sure, it would be an Extrovert Very Distressing Two Weeks, but death is like that. (Adjust 'two weeks' to anything up to a decade depending on how vulnerable to depression you believe you will be after you are revived.)

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:56:33.344Z · LW(p) · GW(p)

I honestly do not think I'd last two weeks. If I go five conscious hours without having a substantial conversation with somebody I care about, I feel like I got hit by a brick wall. I'm pretty sure I only survived my teens because I had a pesky sister who prevented me from spending too long in psychologically self-destructive seclusion.

Replies from: Eliezer_Yudkowsky, pdf23ds, wedrifid, Kutta
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:01:01.312Z · LW(p) · GW(p)

This sounds like an unrealistically huge discount rate. To be precise, you anticipate:

(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who's really unhappy for similar reasons). I assume here that you do not find the process of "making a new friend" to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let's suppose it actually does take until you've made a friend deep enough to have intimate conversations with, and let's suppose that this does take a whole week.

(b) N years of living happily ever after.

It's really hard to see how the former observer-moments outweigh the latter observer-moments.

I think it's this that commenters are probably trying to express when they wonder if you're thinking in the mode we name "rational": it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario, than any choice made by weighing and balancing.

I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I'd rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn't even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you're not making your decision around the most important stakes in the balance.

I know you don't endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot - when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn't the most important stake, where here the stakes are "how I (imagine) feeling in the very first instant of waking up" versus "how I feel for the rest of my entire second life". Deontologist or not, I don't see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias - for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
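
A minimal toy sketch of that weighing, assuming made-up utility weights, a one-week/fifty-year horizon, and a per-day discount factor (none of these numbers come from the thread; they are purely illustrative):

```python
# Toy arithmetic for "one bad week vs. N years of life", with made-up utility weights.
DAYS_OF_DISTRESS = 7        # assumed: one miserable week while making a first post-revival friend
DISTRESS_PER_DAY = -50.0    # assumed: a bad day weighted fifty times as heavily as a good day
YEARS_OF_LIFE = 50          # assumed: N = 50 years of ordinary life afterwards
GOOD_DAY_UTILITY = 1.0

def net_utility(daily_discount: float) -> float:
    """Sum discounted daily utilities: a bad first week, then N years of good days."""
    total = 0.0
    day = 0
    for _ in range(DAYS_OF_DISTRESS):
        total += DISTRESS_PER_DAY * daily_discount ** day
        day += 1
    for _ in range(YEARS_OF_LIFE * 365):
        total += GOOD_DAY_UTILITY * daily_discount ** day
        day += 1
    return total

print(net_utility(1.00))  # no discounting: -350 + 18,250 = +17,900, so the bad week is swamped
print(net_utility(0.99))  # about -246: it takes a ~1%-per-day discount (valuing next year at
                          # roughly 2.5% of this year) before the bad week wins - the
                          # "unrealistically huge discount rate" described above
```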

Replies from: Alicorn
comment by Alicorn · 2010-01-21T20:50:39.646Z · LW(p) · GW(p)

I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T21:17:21.993Z · LW(p) · GW(p)

Um... would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you're saying here is that if you wake up without friends, you'll be so shocked and traumatized that you'll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You're saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn't seem like THAT LEVEL of difficulty. Or you're saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn't seem THAT BAD. And I've already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.

Are you sure this is your true rejection or is there something even worse waiting in the wings?

Replies from: Alicorn, thomblake
comment by Alicorn · 2010-01-21T21:34:27.699Z · LW(p) · GW(p)

I'm making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don't think I would be able to bring to bear these clever solutions if that happened (to the appropriate greater magnitude).

I do consider this a problem, so I am actively trying to arrange to have someone I'd find suitable signed up (either direction would work). This is probably a matter of time, since my top comment here did yield responses. I'd bet you money, if you like, that (barring financial disaster on my part) I'll be signed up within the next two years.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-22T03:24:20.504Z · LW(p) · GW(p)

I asked this elsewhere, but I'll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.

comment by thomblake · 2010-01-21T21:30:00.936Z · LW(p) · GW(p)

This is an interesting point to consider, and I'm one who's offered a lot of reasons to not sign up for cryonics.

For the record, a lower bound on my "true rejection" is "I'd sign up if it was free".

comment by pdf23ds · 2010-01-21T07:18:08.511Z · LW(p) · GW(p)

What about this: leave instructions with your body to not revive you until there is technology that would allow you to temporarily voluntarily suppress your isolation anxiety until you got adjusted to the new situation and made some friends.

If you don't like how extraverted you are, you don't have to put up with it after you get revived.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T14:02:56.415Z · LW(p) · GW(p)

But the availability of such technology would not coincide with my volunteering to use it.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-22T03:32:22.652Z · LW(p) · GW(p)

Would you be opposed to using it? Would you be opposed to not returning to consciousness until the technology had been set up for you (i.e. installed in your mind), so it would be immediately available?

Replies from: Alicorn
comment by Alicorn · 2010-01-22T03:39:02.202Z · LW(p) · GW(p)

I assign a negligible probability that there exists some way I'd find acceptable of achieving this result. It sounds way creepy to me.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-22T03:52:42.601Z · LW(p) · GW(p)

I find that surprising. (I don't mean to pass judgment at all. Values are values.) Would you call yourself a transhumanist? I wonder how many such people have creepy feelings about mind modifications like that. I would have thought it's pretty small, but now I'm not sure. I wonder if reading certain fiction tends to change that attitude.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T03:56:40.811Z · LW(p) · GW(p)

I would call myself a transhumanist, yes. Humans suck, let's be something else - but I would want such changes to myself to be very carefully understood by me first, and if at all possible, directed from the inside. I mentioned elsewhere that I'd try cognitive exercises if someone proposed them. Brain surgery or drugs or equivalents, though, I am not open to without actually learning what the heck they'd entail (which would take more than the critical time period absent other unwelcome intervention), and these are the ones that seem captured by "technology".

Replies from: pdf23ds, AdeleneDawner
comment by pdf23ds · 2010-01-22T04:16:27.579Z · LW(p) · GW(p)

Hmm. What I had in mind isn't something I would call brain surgery. It would be closer to a drug. My idea (pretty much an "outlook" from Egan's Diaspora) is that your mind would be running in software, in a huge neuron simulator, and that the tech would simply inhibit the output of certain, targeted networks in your brain or enhance others. This would obviously be much more targeted than inert drugs could achieve. (I guess you might be able to achieve this in a physical brain with nanotech.)

I'm not sure if this changes your intuition any. Perhaps you would still be uncomfortable with it without understanding it first. But if you trust the people who would be reviving you to not torture and enslave you, you could conceivably leave enough detailed information about your preferences for you to trust them as a first-cut proxy on the mind modification decision. (Though that could easily be infeasible.) Or perhaps you could instruct them to extrapolate from your brain whether you would eventually approve of the modification, if the extrapolation wouldn't create a sentient copy of you. (I'm not sure if that's possible, but it might be.)

Replies from: Alicorn
comment by Alicorn · 2010-01-22T04:21:24.279Z · LW(p) · GW(p)

I trust the inhabitants of the future not to torture and enslave me. I don't trust them not to be well-intentioned evil utilitarians who think nothing of overriding my instructions and preferences if that will make me happy. So I'd like to have the resources to be happy without anybody having to be evil to me.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-22T04:46:57.727Z · LW(p) · GW(p)

But that wouldn't be making you happy. It'd be making someone very much like you happy, but someone you wouldn't have ever matured into. (You may still care that the latter person isn't created, or not want to pay for cryonics just for the latter person to be created; that's not the point.) I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you. Do you think they would prevent someone from committing suicide? If they would make unwanted modifications to you before reviving you, why wouldn't they be willing to make modifications to unconsenting living people*? They would see your "do not revive unless..." instructions as a suicide note.

* Perhaps because they view you as a lower life form for which more paternalism is warranted than for a normal transhuman.

Of course that's not a strong argument. If you want to be that cautious, you can.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T04:52:56.727Z · LW(p) · GW(p)

I doubt that people in the future will have so much disregard for personal identity and autonomy that they would make such modifications to you.

I don't. I wouldn't be very surprised to wake up modified in some popular way. I'm protecting the bits of me that I especially want safe.

Do you think they would prevent someone from committing suicide?

Maybe.

why wouldn't they be willing to make modifications to unconsenting living people*?

Who says they're not? (Or: Maybe living people are easier to convince.)

comment by AdeleneDawner · 2010-01-22T04:07:12.547Z · LW(p) · GW(p)

How about a scenario where they gave you something equivalent to a USB port, and the option to plug in an external, trivially removable module that gave you more conscious control over your emotional state but didn't otherwise affect your emotions? That still involves brain surgery (to install the port), but it doesn't really seem to be in the same category as current brain surgery at all.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T04:14:01.786Z · LW(p) · GW(p)

Hmmm. That might work. However, the ability to conceptualize one way to achieve the necessary effect doesn't guarantee that it's ever going to be technically feasible. I can conceptualize various means of faster-than-light travel, too; it isn't obliged to be physically possible.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-22T04:30:31.147Z · LW(p) · GW(p)

I suspect I have a more complete and reality-connected model of how such a system might work than you have of ftl. :)

I'm basically positing a combination of more advanced biofeedback and non-pleasure-center-based wireheading, for the module: You plug it in, and it starts showing you readings for various systems, like biofeedback does, so that you can pinpoint what's causing the problem on a physical level. Actually using the device would stimulate relevant brain-regions, or possibly regulate more body-based components of emotion like heart- and breathing-rate and muscle tension (via the brain regions that normally do that), or both.

I'm also assuming that there would be considerable protection against accidentally stimulating either the pleasure center or the wanting center, to preclude abuse, if they even make those regions stimulateable in the first place.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T04:41:55.690Z · LW(p) · GW(p)

Of course I know how FTL works! It involves hyperspace! One gets there via hyperdrive! Then one can get from place to place hyper-fast! It's all very hyper!

*ahem*

You have a point. But my more emotionally satisfying solution seems to be fairly promising. I'll turn this over in my head more and it may serve as a fallback.

comment by wedrifid · 2010-01-20T02:19:35.585Z · LW(p) · GW(p)

Wow. That isn't an exaggeration? Is that what normal extraverts are like, or are you an outlier? So hard to imagine.

Replies from: Bindbreaker, Alicorn
comment by Bindbreaker · 2010-01-20T02:46:37.658Z · LW(p) · GW(p)

That seems like a fairly extreme outlier to me. I'm an extrovert, and for me that appears to mean simply that I prefer activities in which I interact with people to activities where I don't interact with people.

comment by Alicorn · 2010-01-20T02:20:48.374Z · LW(p) · GW(p)

Nope, not exaggerating. I say "five hours" because I timed it. I don't know if I'm an outlier or not; most of my friends are introverts themselves.

Replies from: GuySrinivasan
comment by SarahSrinivasan (GuySrinivasan) · 2010-01-20T08:02:48.652Z · LW(p) · GW(p)

Sounds like "five hours" might be something worth the pain of practicing to extend. Maybe not for you, but outlier time-brittle properties like that in me worry me.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T15:06:26.604Z · LW(p) · GW(p)

Refraining from pushing the five hour limit harder than I have to is a very important part of my mood maintenance, which lets me not be on drugs, in danger of hurting myself, or just plain unhappy all the time. The farther I let myself get, the harder it is to muster the motivation to use my recovery strategies, and the longer they take to work.

Replies from: Dustin
comment by Dustin · 2010-01-20T19:29:59.941Z · LW(p) · GW(p)

From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind be so reliant on others.

I love having a good conversation with a friend. But I could also probably go for weeks without having such a thing. Probably the longest I've been alone is a week and I enjoyed it.

I can't see from your viewpoint, but from my viewpoint you should do everything in your power to change how reliant you are on others. It seems like, if you are so reliant on others, you are going to, consciously or not, change your values and beliefs merely to ensure that you have people you can associate with.

Replies from: Alicorn, wedrifid
comment by Alicorn · 2010-01-20T19:44:40.012Z · LW(p) · GW(p)

I'm dependent on many things, and the ability to chat with people is one of the easiest to ensure among them. If I decide that I'm too dependent on external factors, I think I'll kick the brie habit before I try to make my friends unnecessary.

I'm not sure whence your concern that I'll change my values and beliefs to ensure that I have people I can associate with. I'd consider it really valuable evidence that something was wrong with my values and beliefs if nobody would speak to me because of them. That's not the case - I have plenty of friends and little trouble making more when the opportunity presents itself - so I'm not sure why my beliefs and values might need to shift to ensure my supply.

Replies from: Dustin
comment by Dustin · 2010-01-21T19:49:02.445Z · LW(p) · GW(p)

Perhaps I misunderstood what your "dependency" actually is. If your dependency was that you really need people to approve of you (a classic dependency and the one I apparently wrongly assumed), then it seems like your psyche is going to be vastly molded by those around you.

If your dependency is one of human contact, then the pressure to conform would probably be much less of a thing to worry about.

I would like to address your first paragraph..."making your friends unnecessary" isn't what I suggested. What I had in mind was making them not so necessary that you have to have contact with them every few hours.

Anyway, it's all academic now, because if you don't think it's a problem, I certainly don't think it's a problem.

ETA: I did want to point out that I have changed over time. During my teenage years I was constantly trying to be popular and get others to like me. Now, I'm completely comfortable with being alone and others thinking I'm wrong or weird.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T20:47:20.255Z · LW(p) · GW(p)

Well, I like approval. But for the purposes of not being lonely, a heated argument will do!

comment by wedrifid · 2010-01-20T23:38:51.597Z · LW(p) · GW(p)

From my point of view this state of being seems unstable and unhealthy. I cannot imagine having my personal state of mind be so reliant on others.

If you cannot so imagine, then perhaps your judgements about what is 'unhealthy' for a person who does rely so acutely on others may not be entirely reliable. If someone clearly has a different neurological makeup, it can be objectionable either to say they should act as you do or to say they should have a different neurological makeup.

It is absolutely fascinating to me to see the 'be more like me' come from the less extroverted to the extrovert.

Replies from: Alicorn, Dustin
comment by Alicorn · 2010-01-20T23:43:42.534Z · LW(p) · GW(p)

It is absolutely fascinating to me to see the 'be more like me' come from the less extroverted to the extrovert.

Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill. The fact that I need contact has made me, through sheer desperation and resulting time devoted to practice, okay at getting contact; but that's something that was forced, not enabled, by my being an extrovert.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T00:20:03.595Z · LW(p) · GW(p)

Well, in fairness, my particular brand of extroversion really is more like a handicap than a skill.

Definitely. It could get you killed. It had me wondering, for example, if the ~5 hours figure is highly context dependent: You are on a hike with a friend and 12 hours from civilisation. Your friend breaks a leg. He is ok, but unable to move far and in need of medical attention. You need to get help. Does the fact that every step you take is bound up in your dear friend's very survival help at all? Or is the brain like "No! Heroic symbolic connection sucks. Gimme talking or physical intimacy now. 5 hours I say!"? (No offence meant by mentioning a quirk of your personality as a matter of speculative curiosity. I just know the context and nature of isolation does make a difference to me, even though it takes around 5 weeks for such isolation to cause noticeable degradation of my sanity.)

If it was my handicap I would be perfectly fine with an FAI capping any distress at, say, the level you have after 3 hours. Similarly, if I was someone who was unable to endure 5 consecutive hours of high stimulus social exposure without discombobulating I would want to have that weakness removed. But many people object to being told that their natural state is unhealthy or otherwise defective and in need of repair and I consider that objection a valid one.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T00:34:58.514Z · LW(p) · GW(p)

I would certainly endure the discomfort involved in saving my friend in the scenario you describe. I'd do the same thing if saving my friend involved an uncomfortable but non-fatal period of time without, say, water, food, or sleep. That doesn't mean my brain wouldn't report on its displeasure with the deprivation while I did so.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T00:52:27.510Z · LW(p) · GW(p)

water ~ a few days
food ~ a few weeks
sleep ~ a few days
social contact ~ a handful of hours

Water depends on temperature, food on exertion both mental and physical. I speculate whether the context influences the rate of depletion in a similar manner.

comment by Dustin · 2010-01-21T00:38:33.446Z · LW(p) · GW(p)

I very intentionally had qualifiers a-many in my comment to try and make it apparent that I wasn't "judging" Alicorn. "I cannot imagine" is perhaps the wrong phrase. "I find it hard to imagine" would be better, I think.

Perhaps I'm crazy, but I don't think pointing out the disadvantages of the way someone thinks/feels is or should be objectionable.

If someone differs from me in what kind of vegetables taste good, or if they like dry humor, or whatever, I'm not going to try and tell them they may want to rethink their position. There are no salient disadvantages to that sort of thing.

If Alicorn had said, "I really prefer human contact and I just get a little uncomfortable without it after 5 hours" I wouldn't have even brought it up.

If someone has a trait that does have particular disadvantages, I just don't see how discussing it with them is objectionable.

Perhaps the person to say whether it's objectionable would be Alicorn. :)

comment by Kutta · 2010-01-20T07:53:31.444Z · LW(p) · GW(p)

I also think it's extremely disproportionate to die because your old friends are gone. A post-FAI world would be a Nice Enough Place that they will not even remotely mistreat you, and you will not remotely regret signing up.

comment by gwern · 2010-01-20T01:53:55.695Z · LW(p) · GW(p)

the best case scenario will be that I will wake up in a bright new future completely alone.

Because the last time you woke up in a brand-new world with no friends turned out so badly?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T01:58:22.742Z · LW(p) · GW(p)

If you're talking about how I have no prior experience with revival, all I can say is that I have to make plans for the future based on what predictions (however poor) I can make now. If you're talking about how I was born and that turned out okay, I have... y'know.. parents.

Replies from: gwern
comment by gwern · 2010-01-20T14:41:31.858Z · LW(p) · GW(p)

If you're talking about how I was born and that turned out okay, I have... y'know.. parents.

For many people, parents are a neutral or net negative presence. But alright.

If you had to choose between being born into an orphanage and not being born - a situation which, as far as I can see, is symmetrical to your objection to cryonics - would you choose not to be born?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T14:50:16.444Z · LW(p) · GW(p)

That depends on the circumstances which would have led to me being born to an orphanage. If somebody is going around creating people willy-nilly out of genetic material they found lying around, uh, no, please stop them, I'd be okay with not having been born. If I'm an accident and happened to have a pro-life mother in this hypothetical... well, the emphasis in pro-choice is "choice", so in that case it depends whether someone would swoop in and prevent my birth against her will or whether she would change her mind. In the latter case, the abortion doctor has my blessing. In the former case, (s)he hasn't, but only because I don't think medically elective surgery should be performed on unwilling patients, not because I think the lives of accidental fetuses are particularly valuable. If I was conceived by a stable, loving, child-wanting couple and my hypothetical dad was hit by a bus during my gestation and my mom died in childbirth, then I'd be okay with being born as opposed to not being born.

comment by AngryParsley · 2010-01-20T01:59:54.487Z · LW(p) · GW(p)

If you don't like being alone in the bright new future you can always off yourself.

Or try to make friends with other recently-revived cryonicists. That's what extroverts are good at, right?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:02:40.436Z · LW(p) · GW(p)

That would be a fine way to spend money, wouldn't it, paying them to not let me die only for me to predictably undo their work?

Replies from: AngryParsley
comment by AngryParsley · 2010-01-20T02:20:39.289Z · LW(p) · GW(p)

My comment about suicide was a joke, to contrast with my recommendation: make friends.

I think you assign high probability to all of the following:

  1. None of your current friends will ever sign up for cryonics.
  2. You won't make friends with any current cryonicists.
  3. You won't make friends after being revived.
  4. Your suicidal neediness will be incurable by future medicine.

Please correct me if I'm wrong. If you think any of those are unlikely and you think cryonics will work, then you should sign up by yourself.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:23:12.086Z · LW(p) · GW(p)
  1. Yeah. Even though a couple of them have expressed interest, there is a huge leap from being interested to actually signing up.

  2. This is my present plan. We'll see if it works.

  3. I'm not willing to bet on this.

  4. I do not want my brain messed with. If I expected to arrive in a future that would mess with my brain without my permission, I would not want to go there.

Replies from: Eliezer_Yudkowsky, AngryParsley
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T02:27:53.350Z · LW(p) · GW(p)

I have to say, if 3 fails, I would tend to downvote that future pretty strongly. We seem to have very different ideas of what a revival-world will and should look like, conditional on revival working at all.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:36:37.908Z · LW(p) · GW(p)

I was including a "promptly enough" in the "will make friends" thing. I'm sure that, if I could stay alive and sane long enough, I'd make friends. I don't think I could stay alive and sane and lonely long enough to make close enough friends without my brain being messed with (not okay) or me being forcibly prevented from offing myself (not fond of this either).

Replies from: Eliezer_Yudkowsky, blogospheroid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T04:06:07.751Z · LW(p) · GW(p)

If your life were literally at stake and I were a Friendly AI, I bet I could wake you up next to someone who could become fast friends with you within five hours. It doesn't seem like a weak link in the chain, let alone the weakest one.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T04:10:09.568Z · LW(p) · GW(p)

It is the most terrifying link in the chain. Most of the other links, if they break, just look like a dead Alicorn, not a dead Alicorn who killed herself in a fit of devastating, miserable starvation for personal connection.

If you thought it was reasonably likely that, given the success of cryonics, you'd be obliged to live without something you'd presently feel suicidal without (I'm inclined to bring up your past analogy of sex and heroin fix here, but substitute whatever works for you), would you be so gung-ho?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T04:20:52.153Z · LW(p) · GW(p)

I could sorta understand this if we were talking about one person you couldn't live without; it's the idea of worrying about not having any deep friends in general that's making me blink.

Some people are convinced they'll have to live without the strangest things after the Singularity... having encountered something possibly similar before, I do seriously wonder if you might be suffering from a general hope-in-the-future deficiency.

PS/Edit: Spider Robinson's analogy, not mine.

Replies from: Kevin, Alicorn
comment by Kevin · 2010-01-20T12:23:47.091Z · LW(p) · GW(p)

If you were the friendly AI and Alicorn failed to make a fast friend as predicted and that resulted in suicidal depression, would that depression be defined as mental illness and treated as such? Would recent wake-ups have the right to commit suicide? I think that's an incredibly hard question so please don't answer if you don't want to.

Have you written anything on suicide in the metaethics sequence or elsewhere?

Replies from: wedrifid
comment by wedrifid · 2010-01-20T12:34:38.924Z · LW(p) · GW(p)

would that depression be defined as mental illness and treated as such?

And the relevant question extends to the assumption behind the phrase 'and treated as such'. Do people have the right to be nuts in general?

Replies from: Kevin
comment by Kevin · 2010-01-20T12:39:36.561Z · LW(p) · GW(p)

I suppose having to rigorously prove the mathematics behind these questions is why Eliezer is so much more pessimistic about the probability of AI killing us than I am.

comment by Alicorn · 2010-01-20T04:27:08.171Z · LW(p) · GW(p)

I have only managed to live without particular persons who've departed from my life for any reason by virtue of already having other persons to console me.

That said, there are a handful of people whose loss would trouble me especially terribly, but I could survive it with someone else around to grieve with.

comment by blogospheroid · 2010-01-20T08:00:31.296Z · LW(p) · GW(p)

I would think that the corporation reviving you would be either a foundation of your family, a general charity organization, or a fan club of yours. (Don't laugh! There are fan clubs for superstars in India. Extend it further into the future and each LW commenter might have a fan club.) Since you will be, relatively speaking, an early adopter of cryonics, you will be, relatively speaking, a late riser. Cryonics goes LIFO, if I understand it correctly.

I'm pretty sure now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity and they will be given sufficient consideration by those reviving you.

Eliezer has already presented one solution. A make-do best friend who can be upgraded to sentience whenever need be.

A simpler solution would be a human child, holding your hand and saying "I'm your great great grandchild". Are you sure you still won't care enough? (Dirty mind hack, I understand, but terribly easy to implement.)

Replies from: Larks, Alicorn
comment by Larks · 2010-01-20T09:51:12.237Z · LW(p) · GW(p)

I'm pretty sure now that your fears are explicitly stated in a public forum, they are on the record for almost all eternity and they will be given sufficient consideration by those reviving you.

Probably worth backing up though, in the form of a stone tablet adjacent to your body.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-20T09:54:10.996Z · LW(p) · GW(p)

Alcor do keep some of your stuff in a secret location, but given problems with data retrieval from old media it might be good if they offered an explicit service to store your data - which I'd expect them to defer to providers like Amazon, but handle the long-term problems of moving to new providers as the need arises, and of decryption only on revival.
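
To make the encrypt-locally, store-anywhere idea concrete, here is a minimal sketch, not anything Alcor or any provider actually offers: the member encrypts the data before it ever reaches a commodity storage provider, the ciphertext can be migrated between providers freely over the decades, and the key is escrowed separately, to be released only on revival. It assumes the Python `cryptography` package; `upload_to_provider` is a hypothetical placeholder for whatever backend is in use.

```python
# Minimal sketch: client-side encryption before handing data to any storage provider.
# Assumes the 'cryptography' package; upload_to_provider() is a hypothetical placeholder.
from cryptography.fernet import Fernet


def prepare_for_storage(personal_data: bytes):
    key = Fernet.generate_key()            # escrowed separately (e.g. with the cryonics org)
    ciphertext = Fernet(key).encrypt(personal_data)
    return key, ciphertext                 # ciphertext alone is safe to hand to a commodity provider


def restore_on_revival(key: bytes, ciphertext: bytes) -> bytes:
    return Fernet(key).decrypt(ciphertext)


if __name__ == "__main__":
    key, blob = prepare_for_storage(b"letters, photos, instructions...")
    # upload_to_provider(blob)             # hypothetical: the blob can move between providers as needed
    assert restore_on_revival(key, blob) == b"letters, photos, instructions..."
```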

comment by Alicorn · 2010-01-20T14:45:28.180Z · LW(p) · GW(p)

I would take the "I'm your great great grandchild" solution in a heartbeat - but I do not already have children, and something could still come up to prevent me from having them (and hence great great grandchildren).

Replies from: Blueberry
comment by Blueberry · 2010-01-21T17:08:59.338Z · LW(p) · GW(p)

If you'd take that solution, why not a great great ... great grand niece? Or distant cousin? Any human child of that time will be related to you at some remove.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T20:46:30.476Z · LW(p) · GW(p)

My sister doesn't have children yet either, and may or may not in the future. It does matter if they're a relation I'd ever be disposed to see at Christmas, which has historically bottomed out with second cousins.

Replies from: Blueberry
comment by Blueberry · 2010-01-21T21:04:12.008Z · LW(p) · GW(p)

It does matter if they're a relation I'd ever be disposed to see at Christmas

Then it looks like I misunderstood. Say you have a child, then get preserved (though no one else you know does). Then say you wake up, it's 500 years in the future, and you meet your great (great ... great) great grandchild, someone you would never have seen at Christmas otherwise. Would this satisfy you?

If so, then you don't have to worry. You will have relatives alive when you're revived. Even if they're descendants of cousins or second cousins. And since it will be 500 years in the future, you are equally likely to see your cousin's 2510 descendant and your 2510 descendant at Christmas (that is, not at all).

Replies from: Alicorn
comment by Alicorn · 2010-01-21T21:09:50.639Z · LW(p) · GW(p)

If I had a child, I'd sign up me and said child simultaneously - problem solved right there. There's no need to postulate any additional descendants to fix my dilemma.

I can't get enthusiastic about second cousins 30 times removed. I wouldn't expect to have even as much in common with them as I have in common with my second cousins now (with whom I can at least swap reminiscences about prior Christmases and various relatives when the situation calls for it).

Replies from: Blueberry
comment by Blueberry · 2010-01-21T23:39:30.902Z · LW(p) · GW(p)

You can't guarantee that your child will go through with it, even if you sign em up.

I can't get enthusiastic about second cousins 30 times removed.

Then why can you get enthusiastic about a great great grandchild born after you get frozen?

Replies from: Alicorn
comment by Alicorn · 2010-01-21T23:49:22.944Z · LW(p) · GW(p)

I can't guarantee it, no, but I can be reasonably sure - someone signed up from birth (with a parent) would not have the usual defense mechanisms blocking the idea.

Then why can you get enthusiastic about a great great grandchild born after you get frozen?

Direct descent seems special to me.

Replies from: Dustin
comment by Dustin · 2010-01-27T01:56:07.596Z · LW(p) · GW(p)

I find this thread fascinating.

I can usually think about something enough and change my feelings about it through reason.

For example, if I thought "direct descent seems special", I could think about all the different ideas like the questions Blueberry asks and change my actual emotions about the subject.

I suspect this comes from my guilty pleasure...I glee at biting-the-bullet.

Is this not the case with you?

Replies from: Alicorn
comment by Alicorn · 2010-01-27T02:08:01.353Z · LW(p) · GW(p)

I do not have a reliable ability to change my emotional reactions to things in a practically useful time frame.

comment by AngryParsley · 2010-01-20T03:46:14.430Z · LW(p) · GW(p)

If you want to make friends with cryonicists, sign up. For every one person I meet who is signed up, I hear excuses from ten others: It won't work. It will work but I could be revived and tortured by an evil AI. The freezing process could cause insanity. It'll probably work but I've been too lazy to sign up. I'm so needy I'll kill myself without friends. Etc.

It gets old really fast.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T03:48:37.934Z · LW(p) · GW(p)

Wow, calling me names has made me really inclined to take advice from you. I'll get right on that, since you're so insightful about my personal qualities and must know the best thing to do in this case, too.

comment by mattnewport · 2010-01-20T01:55:39.465Z · LW(p) · GW(p)

Are you supposed to be the extrovert in the 'extrovert hell' scenario? Extroverts generally don't have trouble finding new friends, or fear a situation where they find themselves surrounded by strangers.

Replies from: Alicorn
comment by Alicorn · 2010-01-20T02:01:17.887Z · LW(p) · GW(p)

I'm the extrovert, yes. In the sense of needing people, not in the sense of finding them easy to be around (I have a friend who finds it fantastically amusing to call herself a social introvert and me an antisocial extrovert, which is a fair enough description). I actually get very little value from interacting with strangers, especially in large groups. I need people who I'm reasonably close to in order to accomplish anything, and that takes some time to build up to. None of my strategies for making new friends will be present in a no-pre-revival-friends-or-family wake-up scenario.

Replies from: Richard_Kennaway, mattnewport
comment by Richard_Kennaway · 2010-01-20T14:59:49.165Z · LW(p) · GW(p)

I actually get very little value from interacting with strangers, especially in large groups. I need people who I'm reasonably close to in order to accomplish anything

If the choice were available, would you change any of that?

Replies from: Alicorn
comment by Alicorn · 2010-01-20T15:01:47.016Z · LW(p) · GW(p)

I think that would depend heavily on the mechanism by which it'd be changed. I'd try cognitive exercises or something to adjust the value I get from strangers and large groups; I don't want to be drugged.

comment by mattnewport · 2010-01-20T02:08:57.233Z · LW(p) · GW(p)

Hmm, ok. I'd say you're using 'extrovert' in a fairly non-standard way but I think I understand what you're saying now.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T02:11:31.186Z · LW(p) · GW(p)

I think of an extrovert as someone who recharges by being around other people, and an introvert as someone who recharges by being alone, regardless of social proclivity or ability.

Replies from: mattnewport, wedrifid
comment by mattnewport · 2010-01-20T02:16:30.683Z · LW(p) · GW(p)

"I make new friends easily" is one of the standard agree/disagree statements used to test for extraversion which is why I find this usage a little unusual.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T02:21:23.244Z · LW(p) · GW(p)

But it's not the only agree/disagree statement on the test, right?

Replies from: mattnewport
comment by mattnewport · 2010-01-20T02:31:24.990Z · LW(p) · GW(p)

No, it seems Alicorn's usage of extrovert is valid. It is just not what I'd previously understood by the word. The 'makes friends easily' part is the salient feature of extraversion for me.

Replies from: Kevin
comment by Kevin · 2010-01-20T12:35:42.980Z · LW(p) · GW(p)

It's all on an introvert/extrovert test, but to me the salient feature of extroversion is finding interaction with others energizing and finding being alone draining. Introverts find it tiring to interact with others and they find being alone energizing, on a continuous spectrum.

I fall in the dead center on an introvert/extrovert test; I'm not sure how uncommon that is.

comment by wedrifid · 2010-01-20T02:39:55.415Z · LW(p) · GW(p)

(Although naturally there tends to be a correlation with the latter two.)

comment by Peter_de_Blanc · 2010-01-21T21:27:18.895Z · LW(p) · GW(p)

Maybe you could specify that you only want to be revived if some of your friends are alive.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T21:36:38.995Z · LW(p) · GW(p)

I'll certainly do that on signup; but if I don't think that condition will ever obtain, it'd be a waste.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-01-22T01:30:17.870Z · LW(p) · GW(p)

I'm pretty sure you will have friends and relatives living in 2070. Do you think it'll be more than 60 years before cryonics patients are revived? Do you think it'll be more than 60 years before we can reverse aging?

Replies from: Alicorn
comment by Alicorn · 2010-01-22T01:37:29.099Z · LW(p) · GW(p)

I think it is reasonably likely that those tasks will take longer than that, yes.

comment by TheNerd · 2010-01-20T17:37:41.526Z · LW(p) · GW(p)

"If you don't sign up your kids for cryonics then you are a lousy parent. If you aren't choosing between textbooks and food, then you can afford to sign up your kids for cryonics."

This is flat-out classism. The fact is, the only reason I'm not choosing between textbooks and food is that the US government has deemed me poor enough to qualify for government grant money for my higher education. And even that doesn't leave me with enough money to afford a nice place to live AND a car with functioning turn signals AND quality day-care for my child while I'm at work AND health insurance for myself.

Shaming parents into considering cryonics is a low blow indeed. Instead of sneering at those of us who cannot be supermom/dad, why not spend your time preparing a persuasive case for the scientific community to push for a government-sponsored cryonics program? Otherwise the future will be full of those lucky enough to be born into privileged society: the Caucasian, white-collar, English-speaking segment of the population, and little else. What a bland vision for humanity.

Replies from: Eliezer_Yudkowsky, LucasSloan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T19:47:25.633Z · LW(p) · GW(p)

Response voted up in the hopes that it shames comfortable middle-class parents into signing up their kids for cryonics. Which will, if enough people do it, make cryonics cheaper even if there is no government program. Or eventually get a private charity started to help make it affordable, which is far more likely than a government program, though still unlikely.

comment by LucasSloan · 2010-01-20T19:08:03.062Z · LW(p) · GW(p)

Then why did you have a kid? The consequences of an action are the same regardless of the circumstances in which it occurs. If you knew that you couldn't afford to prevent your child's death, why did you have one at all? It isn't classist at all to say "don't live beyond your means." Is it acceptable for the father in the ghetto to beat his child to death because he's too poor to afford a psychologist? Is it acceptable for a single parent to drive drunk with their child because they're too poor to afford a babysitter or a cab fare when they want to drink? Eliezer has said that he consciously doesn't have children, so as not to expose them to the enormous risk and endemic suffering that life today entails.

And to your last paragraph, there are better things to spend time, money and energy doing, especially given the absolute impossibility of convincing even 26% of the population to go for it.

Replies from: Kaj_Sotala, TheNerd, bradmo
comment by Kaj_Sotala · 2010-01-20T19:26:36.482Z · LW(p) · GW(p)

If you knew that you couldn't afford to prevent your child's death why did you have one at all?

Considering that all parents so far have had children in the knowledge that they can't prevent the kid's eventual death, this question feels kinda absurd.

Most people would say that they prefer being alive regardless of the fact that they might one day die. Having a child who'll die is, arguably, better than having no child at all.

comment by TheNerd · 2010-01-20T19:36:16.931Z · LW(p) · GW(p)

"Then why did you have a kid? If you knew that you couldn't afford to prevent your child's death why did you have one at all?"

Who said I knew that? When I was pregnant, I had a job which seemed to be secure at the time. Then the recession happened.

Also, what do you have to say to the 88% of the world's population who make less per year than I do?

Replies from: LucasSloan
comment by LucasSloan · 2010-01-21T01:04:55.437Z · LW(p) · GW(p)

What do I say to that 88%? They are setting up their offspring for near-certain death, and they are ignorant of the fact. I can call them negligent. I can partially excuse them for circumstances. Regardless, they are endangering the lives of children. It is truly unfortunate, and the only response is to work harder.

Replies from: isacki, isacki, isacki
comment by isacki · 2010-01-25T06:32:23.967Z · LW(p) · GW(p)

Is your position a kind of unfunny joke, like you were put up to say this? It is only because I am open enough to the possibility that this is actually your opinion that I feel forced to bother with a rebuttal.

It is unreasonable in the extreme, given current knowledge about cryonics, to force your own beliefs about what every child born into the world should have - almost as unreasonable as your comparisons above: "Is it acceptable for the father in the ghetto to beat his child to death, because he's too poor to afford a psychologist?" Why? Because cryonics is not even remotely a proven technique - its advocates explicitly acknowledge as much, placing their hope in a smarter future - you are not entitled to go about slinging moral outrage based on the presupposition that it is proven. For the average person, there are a million things they could spend that money on for a kid, and you can bet the certainty of seeing a return on 99% of them is better.

To suggest that people having kids are "endangering the lives of children" is so ironic that humour seems the only explanation to me. In addition to the fact that everyone, regardless of cryonics, will have to die, you appear to have myopically discounted the entire value of a life once lived.

I am not discounting cryonics being theoretically possible. I am saying that it remains exactly that, unproven, and until it is, you can implore people to try it, but you are ridiculous to -demand- that they do.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-25T22:41:31.369Z · LW(p) · GW(p)

I believe that the acts of creation and destruction are not equivalent. Creating a life in the instant you murder does not absolve you of the latter. I do not believe that it is okay to eat meat merely because doing so allows an animal to live, if only for a short while. Does that make sense? Maybe it is necessary to have children, and certainly I cannot prevent children from being born, but that does not mean that I have to like the fact that children are being born into intolerable situations, where they can never rise to the level of achievement, fulfillment and happiness I think all humans should reach. I was not joking when I said that, but I was comparing this world to a nowhere-place (utopia). Does that clarify my position?

Replies from: byrnema, antibole
comment by byrnema · 2010-01-26T00:14:15.900Z · LW(p) · GW(p)

I can afford cryonics, but I think I wouldn't want to vitrify children for the same reasons you are criticizing parents for having children. If it is ethical to bring children into the world only if you can care for them, protect them and provide for them, how could it be ethical to send a helpless, dependent child to an indeterminate future? We can make a decision to have a child in the present with lots of relevant information about the present. Sending a child to the future might be negligent.

Replies from: XFrequentist, Vladimir_Nesov, LucasSloan
comment by XFrequentist · 2010-01-26T03:01:12.906Z · LW(p) · GW(p)

Are they better off dead?

Replies from: byrnema
comment by byrnema · 2010-01-26T03:37:59.520Z · LW(p) · GW(p)

Yeah, maybe.

I would like to imagine a post-cryonic life for my child that is positive.

However, what if it isn't positive? What if my child thinks I abandoned her, as she is exploited or abused or neglected? Better to know that she experienced a few happy years, and accept that that is all there is, than risk a horrible future she can't get away from.

If there was one person I trusted that she would be in the custody of, it would make a difference. If she was old enough to reason on her own, and know the difference between right and wrong, it would make a difference. She's just so helpless. I shouldn't send her there without someone who loves her, but I can't guarantee that someone who loves her would be there.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T03:47:43.489Z · LW(p) · GW(p)

Can't you sign yourself up too, and go with her?

Replies from: byrnema
comment by byrnema · 2010-01-26T04:07:39.665Z · LW(p) · GW(p)

Yes, of course. My husband would sign up too, and the grandparents, and aunts and uncles and grown siblings and their descendants. However, in this future beyond my control, they may not have any meaningful custody or be woken up at all.

I might offer that what I am imagining most vividly is a splintered, transhumanist society that might value small human children but not the things that human children need to be happy.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T04:17:59.920Z · LW(p) · GW(p)

So what you're concerned about is that if your entire family signed up, they might wake up your child but not any of her relatives, or wake all of you up and then not let you actually take care of her?

Replies from: byrnema
comment by byrnema · 2010-01-26T04:43:35.796Z · LW(p) · GW(p)

Yes.

I should add that I don't think my husband and I think cryonics is "creepy". We would sign up, whatever that means.* And if my kids want to sign up when they're old enough to make that decision, then I would let them sign up. It's just not something I feel comfortable doing to a small child; sending them someplace I haven't been and can't imagine.

* I think the "would" means that so far it sounds OK, but we realize we haven't worked through all the angles and anticipate some oscillations in our POV.

Replies from: Eliezer_Yudkowsky, Alicorn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T05:50:52.354Z · LW(p) · GW(p)

It's just not something I feel comfortable doing to a small child; sending them someplace I haven't been and can't imagine.

If your children were about to leave for a strange country without you - or for that matter with you, to some place that none of you had ever been - would you, in your pity, shoot them?

WHAT IS WRONG WITH YOU PEOPLE? WHY IS YOUR BRAIN NOT PROCESSING THIS? IT'S YOUR KIDS' FUCKING LIVES NOT A FAIRY TALE YOU'RE WRITING. You don't get to be uncomfortable with the fairy tale and so refuse to write it. All you can do is kill your kids. That's it. That's all refusal means.

Replies from: thomblake, byrnema, LucasSloan
comment by thomblake · 2010-01-26T17:16:55.104Z · LW(p) · GW(p)

All you can do is kill your kids.

The visceral reaction to "kill your kids" comes from imagining that you're actually killing them, not letting them go about a normal life. You can argue that it comes down to the same thing, but if they were really the same thing, you could use the less emotionally-loaded language.

What you're saying: What kind of terrible parent lets their kids live a life slightly better than they had?

Replies from: Eliezer_Yudkowsky, byrnema
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T18:10:44.995Z · LW(p) · GW(p)

Mere framing, depending simply on what your brain thinks is normal. Visit a convention of cryonicists and talk to the kids signed up for cryonics. Those parents wouldn't think very highly of themselves if they didn't pay to sign up their kids. If their children died and were lost, they would hold themselves at fault. They're right.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T20:16:11.204Z · LW(p) · GW(p)

(The obvious metaphor - so obvious, in fact, that it is not even a metaphor - is withholding lifesaving medical care. Consider how we feel about parents who refuse to treat their kid's cancer, for example.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T20:42:36.474Z · LW(p) · GW(p)

Yes, that is indeed the analogy - pardon me, classification - that I was looking for.

comment by byrnema · 2010-01-26T17:29:23.391Z · LW(p) · GW(p)

What kind of terrible parent [doesn't] let their kids live a life slightly better than they had?

Huh? How about:

What kind of terrible parent isn't willing to make a small gamble for a substantially better life for their kids?

seems more fair.

Replies from: thomblake, AdeleneDawner
comment by thomblake · 2010-01-26T17:48:51.766Z · LW(p) · GW(p)

What kind of terrible parent [doesn't] let their kids live a life slightly better than they had?

Not quite. If my phrasing was confusing, try instead:

What kind of terrible parent lets their kids [merely] live a life slightly better than they had?

comment by AdeleneDawner · 2010-01-26T17:35:24.640Z · LW(p) · GW(p)

Exactly. Or "What kind of parent settles for letting their kids have merely a slightly better life than they had when a dramatically better life might be possible?"

comment by byrnema · 2010-01-26T06:07:03.156Z · LW(p) · GW(p)

The world is largely a pretty normal place. I've lived in Africa and Europe and have spent time in Central America and almost every type of place in the United States. I feel like I could begin to assess the risk to some extent.

What do I know about a future with alien minds? I thought it was you who argued that we can't possibly know their motives and values.

(Take the horribleness/awfulness of me wanting to kill my kids and project that onto the future society that might revive them. If it's in me, why can't it be in them?)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T07:12:17.748Z · LW(p) · GW(p)

Your children are standing in front of the boat. You can send them on the boat. You can go with them on the boat. Or you can cut their throats. That's it. There's nothing else.

I hand you the knife.

What do you do?

I think I'm starting to understand what the absence of clicking is. People who click process problems as if they're in the real world. If they wouldn't cut their child's throat, then they sign their kid up for cryonics.

People who don't click don't process the problem like it's the real world. Strange reactions rise up in them, fears of the unknown, fears of the known, and they react to these fears by running away within the landscape of their minds, and somewhere on the outside words come out of their lips like "But who knows what will happen? How can I send my kids into that?" It's an expression of that inner fear, an expression of that running away, words coming out of the lips that match up to what's going on inside their heads somehow... the dread of losing control, the feeling of not understanding, the horror of thinking about mortality, all of these are expressed in a flinch away from the uncomfortable thought and put stumblingly into words.

So they kill their children, because they aren't processing a real world, they're processing words connected to words, ways of flinching and running away and giving vent to those odd internal feelings.

And the clickers are standing in front of that boat.

Replies from: byrnema, AdeleneDawner
comment by byrnema · 2010-01-26T16:42:07.198Z · LW(p) · GW(p)
  • Yes, I’m not a “clicker”. I realize this wasn’t addressed to me but about me; still, I don’t see how this should make me feel ashamed or even inadequate. I need to make ethical/moral decisions and I have no choice but to think through them on my own and make my own decision. When I was 16, I was certain that Proof by Induction would not work, and ever since I understood that it did work, I’ve never claimed certainty based on intuition. However, some arrogance remains in that if something doesn’t convince me, I think: why should I be convinced, if I’m not convinced? I haven’t had any feedback from life that my ability to make decisions isn’t working. I have some problems, but they don’t seem related in any way to not clicking. (Well, maybe I need to “click” on you guys just being too culturally different from me.)

  • I wonder if in response to your hypothetical you expect a reasonable me to suddenly realize, “oh no! I would never kill them!” and thus find the contradiction in my far-mode reasoning about cryonics. But I would. (Filling in drastic and dire reasons for why the children were being taken on a boat against my will.) So would you, I think, slip a deadly but painless pill to a young boy about to be tortured and killed in a religious ceremony if you were certain it was going to happen. Perhaps you were trying to identify an ethical failing: that at one probability of risk I “let them” live, but at a higher level I arbitrarily, cruelly kill them. I don’t think even this is correct; I don’t know where to begin to know how to reason where the ‘killing’ probability would be, and don’t claim that I do. I only know that it would be an agonizing thing for a parent to ever have to decide, but one they can’t escape from just by glibly pretending such scenarios cannot happen, if the scenario does happen.

  • I submit that I’m an open-minded and curious person that isn’t afraid of new ideas. (I might be afraid of a lion, but I’m not afraid of thinking about lions.) One problem that I seem to have – though I actually like it – is that I tend to forget what my reasoning on any topic is after a while, and I’m more or less a blank slate again. If I have a negative view of cryonics, when I never even heard of it outside of LW, I think it is because I found some inconsistency in your own world view about it.

For example, it hadn’t really occurred to me at first that 'somebody strange' might revive my daughter. My concerns were “near-concerns” – how in the world would I ever get an ambulance in time, much less get her frozen in time, in this backwater place I live in where they aren’t even competent enough to insert a child catheter correctly? But then I read several times this suspiciously repetitive chant that ‘they’re not worried’ about negative-value futures because being revived would select for positive futures.

Well, that’s clearly not dependable optimism. We might get revived just because they want to cut down on energy costs in Arizona, and keeping 20 million people frozen takes a lot of power. Maybe they have a penchant for realistic theater and want to simulate the Holocaust with real non-genetically modified humans.

In my mind, previous to hearing the chant, was that all of these scenarios were unlikely because the world is normal. Obama and byrnema and Joe 6-pack and maybe Eliezer have children, and then their children have children, and then the children of these revive us and we live in a world that is essentially the same or somewhat better. But when I process people talking about the set of possible futures like it’s actually really large enough to include all kinds of horrors with non-negligible probability, then unwarranted optimism in the direction of the probability of something I or they know nothing about does not comfort me.

That is the outcome of the group applying epistemic hygiene to only arguments that lead to conclusions they disagree with. The bad arguments for the views they agree with, left untouched, will sway a person like me who does not think in a linear way, but organically assimilates assumptions and hypotheses as I encounter them.

comment by AdeleneDawner · 2010-01-26T12:42:49.728Z · LW(p) · GW(p)

Your description of not-clicking sounds functionally similar to what Amanda Baggs calls 'widgets', though she uses the term in a more political than personal context.

comment by LucasSloan · 2010-01-26T06:10:24.675Z · LW(p) · GW(p)

This. This so god-damn hard.

comment by Alicorn · 2010-01-26T05:00:51.027Z · LW(p) · GW(p)

It looks to me like you have the choice between running a small risk of your daughter thinking you abandoned her (to a scary future that won't leave you in a satisfactory family unit)... or running a slightly larger risk of actually abandoning her (to the gaping maw of death). The ideal is that she gets to be 18 without dying and then decides she wants to sign up, of course (and you and other relatives are still alive and ready to join her with stacks of paperwork at the ready), but we're talking about managing risks, here, not the best case.

Replies from: byrnema
comment by byrnema · 2010-01-26T05:24:45.653Z · LW(p) · GW(p)

I hope you don't mind the clarification, but I think you've underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I'm not too concerned about the satisfactory family unit, as long as my daughter is psychologically healthy.)

This compared to death, which is terrible for reasons other than "death" itself: terrible because I will miss her, because of all the relationships disconnected, and because her potential in this life won't be fulfilled -- none of which cryonics will give back.

It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you/someone write something to help me relate to that?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T05:32:53.965Z · LW(p) · GW(p)

I hope you don't mind the clarification, but I think you've underestimated the extent to which I negatively value a scenario in which my daughter comes to mental anguish that I cannot experience with her. (For example, I'm not too concerned about a satisfactory family unit, as long as my daughter is psychologically healthy.)

I realize this is probably weird coming from me, considering my own cryonics hangup, but we're already assuming they won't revive anyone they can't render passably physically healthy - I think they'd make some effort to take the same precautions regarding psychological health. My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory; generic needs for care and affection in a small child are so obvious I would be astounded if the future didn't have an arrangement in place before they revived any frozen children.

It seems like the stream of consciousness of a person is greatly valued here on Less Wrong, for its own sake independent of relationships. Could you write something to help me relate to that?

I'll try, but I'm not sure exactly what you mean by "the stream of consciousness" or "independent of relationships". I value me (my software), I value you (your software), I prefer that these softwares be executed in pleasant environments rather than sitting around statically - but then, I'd probably cease to value my software in an awful hurry if it had no relationships with other software, and I'd respect a preference on your part to end your own software execution if that seemed to be your real and reasoned desire.

Why do I have these values? Well, people are just so darned special, that's all I can say.

Replies from: Eliezer_Yudkowsky, byrnema
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T05:44:50.865Z · LW(p) · GW(p)

My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory

No it's not. It's just scary.

generic needs for care and affection in a small child are so obvious

You really, really think that this, on the one hand, is "obvious", but on the other hand, a superintelligence is going to look inside your head and go, "Huh, I just can't figure that out."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

Replies from: Alicorn, AdeleneDawner
comment by Alicorn · 2010-01-26T06:07:26.001Z · LW(p) · GW(p)

No it's not. It's just scary.

I'm going to outright ignore you on this one. I have been met with incredulity, not mere curiosity ("Can you tell us more about the experiences you've had that let you model this extreme need?"), let alone commiseration ("wow, me too! let's make friends and sign up together and solve each other's problems!") when I have described this need here. This tells me that what I have going on is really weird and nobody here has accurately modeled it. I do not think you can make predictions about this characteristic of mine when you are still so confused about it. A FAI probably could. You aren't one. And since I know more about the phenomenon than you, I'm going to trust my predictions about what the FAI would say on inspecting my brain over yours. I think it'd say "wow, she would not hold up well without any loved ones nearby for longer than a few hours, unless I messed with her in ways she would not approve."

YOU ARE A SMALL CHILD. We all are. I know that, why can't everyone see it?

You're raving. Perhaps you are deficient in a vitamin or mineral.

Replies from: Eliezer_Yudkowsky, LucasSloan, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T07:36:48.860Z · LW(p) · GW(p)

I am not incredulous that you want friends! I am incredulous that you think not even a superintelligence could get them for you! This has nothing to do with you and your needs and your private inner life and everything to do with superintelligence! It wouldn't even have to do anything creepy! Human beings are simply not that complicated!

Replies from: thomblake
comment by thomblake · 2010-01-27T18:53:37.620Z · LW(p) · GW(p)

Upvoted because: with that many exclamation points, how could you be wrong?

comment by LucasSloan · 2010-01-26T06:20:49.527Z · LW(p) · GW(p)

You think the best thing an FAI could do would be to throw up its hands and say, "welp, she's screwed"?

Replies from: Jordan, Alicorn
comment by Jordan · 2010-01-26T06:29:09.491Z · LW(p) · GW(p)

Why not? There are likely problems we think are impossible that a superintelligence will be able to solve. But there are also likely problems we think impossible which turn out to actually be impossible.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T06:33:42.391Z · LW(p) · GW(p)

I am very confident that an FAI could, if necessary, create a person to order who would be perfectly tuned to becoming someone's friend in a few hours. How often does this kind of thing happen by accident in kindergarten?

Impossibility should be reserved for things like FTL and reversal of entropy, not straightforward problems of human interaction.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:35:13.200Z · LW(p) · GW(p)

an FAI could, if necessary, create a person to order who would be perfectly tuned to becoming someone's friend in a few hours.

Dude, creeeeeeeeeeepy.

Replies from: LucasSloan, JGWeissman
comment by LucasSloan · 2010-01-26T06:38:39.632Z · LW(p) · GW(p)

That's a worst-case scenario. Even if it were necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don't you value your life? Why are you so willing to assume that a superintelligence can't think of any better solutions than you can?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:41:12.113Z · LW(p) · GW(p)

In principle, I'm willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T06:45:06.354Z · LW(p) · GW(p)

Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:50:59.438Z · LW(p) · GW(p)

I think that people should be created by other persons who are motivated, at least in part, by an expectation to intrinsically value the person so created. If a FAI created a person for the express purpose of being my friend, it would presumably expect to value the person intrinsically, but that wouldn't be its motivation in creating the person; its motivation in creating the person would have to do with valuing me. And if it modified its motivations to avoid annoying me in this way before it created the person, that would probably have other consequences on its actions that I wouldn't care for, like motivating it to go around creating lots of persons left and right because people are just so darned intrinsically valuable and more are needed.

Replies from: LucasSloan, JGWeissman
comment by LucasSloan · 2010-01-26T06:55:37.826Z · LW(p) · GW(p)

I'm sorry, but I'm going to have to call bollocks on this. Jesus Christ, don't you want to live? Why aren't you currently opting for euthanasia on the risk you end up friendless tomorrow?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:58:02.310Z · LW(p) · GW(p)

Why aren't you currently opting for euthanasia on the risk you end up friendless tomorrow?

Well, I probably won't end up friendless tomorrow; and most of the mechanisms by which that could happen would not prohibit me from "opting for euthanasia".

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T07:02:00.801Z · LW(p) · GW(p)

You probably won't end up friendless in the event of a recovery from cryo storage. There is no reason you couldn't choose to opt for euthanasia then, either.

comment by JGWeissman · 2010-01-26T06:56:52.110Z · LW(p) · GW(p)

But in this case, it would be you that creates the person, with the purpose of intrinsically valuing em, and the FAI is just a tool you use to do it.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:59:39.836Z · LW(p) · GW(p)

If we modify the case so the FAI isn't autonomously creating the person, but rather waking me up and quizzing me on what I want em to be like, a) I really doubt I could do that in a timely fashion, and b) I think the creepiness might prevent me from wanting to do it at all.

comment by JGWeissman · 2010-01-26T06:39:57.836Z · LW(p) · GW(p)

Would it be less creepy if the FAI found an existing person, out of the billions available, with whom you would be very likely to make friends in a few hours?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:42:46.162Z · LW(p) · GW(p)

That would be fine, and the possibility has already been covered (it was described, I think, as "super-Facebook") but I wouldn't bet on it. Frankly, I'm not even sure I'm comfortable with the level of mind-reading the AI would have to do to implement any of these finer-tuned solutions. I like my mental privacy.

Replies from: Jordan, LucasSloan, JGWeissman, MichaelGR
comment by Jordan · 2010-01-26T07:35:42.076Z · LW(p) · GW(p)

I'm not sure mind reading would be necessary. I hear Netflix does a pretty good job of guessing which movies people would like.

comment by LucasSloan · 2010-01-26T06:47:39.179Z · LW(p) · GW(p)

You like your mental privacy vis-a-vis an (effectively) omnipotent, perfectly moral being, more than you value your life?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:55:48.058Z · LW(p) · GW(p)

*thinks*

I value the ability to consciously control which of my preferences are acted on that much. Mental privacy qua mental privacy, perhaps not.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T06:59:31.451Z · LW(p) · GW(p)

You prefer that the hardware inside your head, with its known (and unknown) limitations, compute your utility function, rather than having it computed inside the aforementioned omniscient being? Why?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T07:02:23.110Z · LW(p) · GW(p)

hardware

No. I'm software. My preferences stand even if you hypothetically implement me in silico.

your utility function

No. Geez, can we drop the "utility functions" and all the other consequentialism debris for like a week sometime? It would be a welcome respite.

Why?

It's a terminal value. We have a convention of not having to answer "why" about those.

Replies from: komponisto, LucasSloan
comment by komponisto · 2010-01-26T18:22:42.984Z · LW(p) · GW(p)

Geez, can we drop the "utility functions" and all the other consequentialism debris for like a week sometime? It would be a welcome respite.

Utility functions describe your preferences. Their existence doesn't presuppose consequentialism, I don't think.

Replies from: Eliezer_Yudkowsky, thomblake
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T18:37:50.620Z · LW(p) · GW(p)

Utility functions are actually an extreme of consequentialism; they state that your actions should be based not just on consequences, but on a weighted probability distribution over outcomes.

Replies from: komponisto
comment by komponisto · 2010-01-26T18:47:56.203Z · LW(p) · GW(p)

In that case, how could you be said to have preferences about outcomes without being a consequentialist?

Replies from: Jack, thomblake
comment by Jack · 2010-01-26T19:13:44.256Z · LW(p) · GW(p)

Can we not have preferences without a utility function?

comment by thomblake · 2010-01-26T19:06:46.264Z · LW(p) · GW(p)

Hmm... I think Eliezer might have overstated his case a little (for the lay audience). If you take a utility function to be normative with respect to your actions, it's not merely descriptive of your preferences, for some meanings of "preference" - not including, I would think, the definition Eliezer would use.

Using more ordinary language, a Kantian might have preferences about the outcomes of his actions, but doesn't think such preferences are the primary concern in what one ought to do.

Replies from: komponisto
comment by komponisto · 2010-01-26T19:18:21.412Z · LW(p) · GW(p)

Using more ordinary language, a Kantian might have preferences about the outcomes of his actions, but doesn't think such preferences are the primary concern in what one ought to do.

Oh. Well, that's not a distinction that seems terribly important to me. I'm happy to talk about "preferences" as being (necessarily) causally related to one's actions.

comment by thomblake · 2010-01-26T18:53:40.538Z · LW(p) · GW(p)

Utility functions describe your preferences. Their existence doesn't presuppose consequentialism, I don't think.

There are a few things meant by "consequentialism". It can be as general as "outcomes/consequences are what's important when making decisions" to as specific as "Mill's Utilitarianism". The term was only coined mid-20th century and it's not-very-technical jargon, so it hasn't quite settled yet. I'm pretty sure the use here is more on the general side.

Other theories about what's important when making decisions (deontology, virtue ethics) could possibly be expressed as utility functions, but are not amenable to it.

Replies from: komponisto
comment by komponisto · 2010-01-26T19:12:57.625Z · LW(p) · GW(p)

Other theories about what's important when making decisions (deontology, virtue ethics) could possibly be expressed as utility functions, but are not amenable to it.

Why not, if they're about preferences?

My understanding is that a utility function is nothing but a scaled preference ordering, and I interpret ethical debates as being disputes about what one's preferences --i.e. one's utility function -- ought to be.

For example (to oversimplify and caricature): the "consequentialist" might argue that one should be willing to torture one person to save 1000 from certain death, while the "deontologist" argues that one should not because Torture is Wrong. Both sides of this argument are asserting preferences about the state of the world: the "consequentialist" assigns higher utility to the situation in which 1000 people are alive and you're guilty of torture, and the "deontologist" assigns higher utility to the situation in which the 1000 have perished but your hands are clean.

Replies from: Alicorn, Blueberry
comment by Alicorn · 2010-01-26T19:20:04.066Z · LW(p) · GW(p)

This is called the "consequentialist doppelganger" phenomenon, when I've heard it described, and it's very, very annoying to non-consequentialists. Yes, you can turn any ethical system into a consequentialism by applying the following transformation:

  1. What would the world be like if everyone followed Non-Consequentialism X?
  2. You should act to achieve the outcome yielded by Step 1.

But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.

Replies from: komponisto
comment by komponisto · 2010-01-26T19:33:20.999Z · LW(p) · GW(p)

But this ignores what we might call the point of Non-Consequentialism X, which holds that you should follow it for reasons unrelated to how it will make the world be.

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

Mainly, I just wanted to point out that when whoever-it-was above mentioned "your utility function", you probably should have interpreted that as "your preferences".

Replies from: Blueberry, Jack
comment by Blueberry · 2010-01-26T19:38:28.829Z · LW(p) · GW(p)

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

There should be a "Deontology for Consequentialists" post, if there isn't already.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T19:49:29.078Z · LW(p) · GW(p)

I might write that.

Replies from: thomblake, Blueberry, komponisto
comment by thomblake · 2010-01-26T20:01:16.093Z · LW(p) · GW(p)

Perhaps I should write "Utilitarianism for Deontologists". Here goes:

"Follow the maxim: 'Maximize utility'".

Replies from: ciphergoth, Alicorn
comment by Paul Crowley (ciphergoth) · 2010-01-27T08:26:25.555Z · LW(p) · GW(p)

Actually, it was exactly the problems with this formulation that I was talking about in the pub with LessWrongers on Saturday. Consequentialism isn't about maximizing anything; that's a deontologist's way of looking at it. Consequentialism says that if action A has an outcome better than action B's by Y, then action A is better than action B by Y. It follows that the best action is the one with the best outcome, but there isn't some bright crown on the best action compared to which all other actions are dull and tarnished; other actions are worse to exactly the extent to which they bring about worse consequences, that's all.

comment by Alicorn · 2010-01-26T20:03:05.403Z · LW(p) · GW(p)

I'd like to see you write Virtue Ethics for Consequentialists, or for Deontologists.

Replies from: Jack, Eliezer_Yudkowsky
comment by Jack · 2010-01-26T20:36:18.896Z · LW(p) · GW(p)

or for Deontologists.

"Being virtuous is obligatory, being vicious is forbidden."

This feels like cheating.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T20:12:09.386Z · LW(p) · GW(p)

Virtue Ethics for Consequentialists

"Do that which leads to people being virtuous."

Replies from: Jack, thomblake
comment by Jack · 2010-01-26T20:42:50.503Z · LW(p) · GW(p)

I don't think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.

How about, "Maximize your virtue."

Replies from: Eliezer_Yudkowsky, RobinZ
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T20:51:45.021Z · LW(p) · GW(p)

So other people's virtue is worth nothing?

Replies from: Jack
comment by Jack · 2010-01-26T21:15:11.820Z · LW(p) · GW(p)

Strictly, no. Virtue ethics is self-regarding that way. But it isn't like virtue ethics says you shouldn't care about other people's virtue. It just isn't calculated at that level of the theory. Helping other people be virtuous is the compassionate and generous thing to do.

Replies from: thomblake
comment by thomblake · 2010-01-26T21:31:35.128Z · LW(p) · GW(p)

Agreed, at least on the common (recent American) ethical egoist reading of virtue ethics.

comment by RobinZ · 2010-01-26T20:47:19.086Z · LW(p) · GW(p)

I don't think this is right. This would seem to indicate that one could do the ethical thing by being a paragon of viciousness if people learned from your example.

Such a person is sometimes called a "Mad Bodhisattva".

comment by thomblake · 2010-01-26T20:39:16.739Z · LW(p) · GW(p)

Certainly a way I've framed it in the past (and it sounds perfectly in line with the Confucian conception of virtue ethics) but I don't think it's quite right. At the very least, it's worth mentioning that a lot of virtue ethicists don't believe a theory of right action is appropriately part of virtue ethics.

comment by Blueberry · 2010-01-26T19:53:04.875Z · LW(p) · GW(p)

Please do. I'd love to read it.

comment by komponisto · 2010-01-26T19:51:17.549Z · LW(p) · GW(p)

Ha! I was about to say, "I wonder if Alicorn might be interested in writing such a post".

comment by Jack · 2010-01-26T19:45:00.879Z · LW(p) · GW(p)

I'm tempted to ask what kind of reasons could possibly fall into such a category -- but we don't have to have that discussion now unless you particularly want to.

Not to butt in but "x is morally obligatory" is a perfectly good reason to do any x. That is the case where x is exhibiting some virtue, following some rule or maximizing some end.

comment by Blueberry · 2010-01-26T19:17:46.782Z · LW(p) · GW(p)

You may run into problems trying to create a utility function for some forms of deontology, at least if you're mapping into the real numbers. For instance, some deontologists would say that killing a person has infinite negative utility which can't be cancelled out by any number of positive utility outcomes.

Replies from: komponisto
comment by komponisto · 2010-01-26T19:23:38.794Z · LW(p) · GW(p)

That wouldn't be mapping into the real numbers, of course, since infinity isn't a real number.

As I understand it, utility functions are supposed to be equivalence classes of mappings into the real numbers, where two such mappings are said to be equivalent if they are related by a (positive) affine transformation (x -> ax + b where a>0).

Replies from: wnoise, Blueberry
comment by wnoise · 2010-02-02T00:20:38.352Z · LW(p) · GW(p)

Why do you think this restricts to positive affine transformations, rather than any strictly monotonic transformation?

Replies from: Nick_Tarleton, Jordan
comment by Nick_Tarleton · 2010-02-02T00:23:54.575Z · LW(p) · GW(p)

Other monotonic transformations don't preserve preferences over gambles.

Replies from: wnoise
comment by wnoise · 2010-02-02T00:45:21.848Z · LW(p) · GW(p)

Ah, right, that's what I was missing. Thanks.

comment by Jordan · 2010-02-02T00:29:26.090Z · LW(p) · GW(p)

A strictly monotonic transformation will preserve your preference ordering over states, but not your preference ordering over the actions that achieve those states. That is, only positive affine transformations preserve the ordering of the expected values of different actions.
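
For concreteness, here is a minimal numeric sketch of that point (the gambles and utility numbers are made up purely for illustration): a positive affine transformation leaves the expected-utility ordering of two gambles untouched, while a strictly monotonic but non-affine transformation, such as cubing, can reverse it.

```python
def expected_utility(gamble, u):
    """Expected utility of a gamble given as [(probability, utility_of_outcome), ...]."""
    return sum(p * u(x) for p, x in gamble)

gamble_a = [(1.0, 5.0)]               # a sure outcome worth 5 utils
gamble_b = [(0.5, 0.0), (0.5, 9.0)]   # 50/50 between 0 and 9 utils

transforms = [
    ("identity", lambda x: x),
    ("affine",   lambda x: 3.0 * x + 7.0),  # positive affine: ordering over gambles preserved
    ("cubed",    lambda x: x ** 3),         # strictly monotonic, but not affine
]

for name, u in transforms:
    ea, eb = expected_utility(gamble_a, u), expected_utility(gamble_b, u)
    print(f"{name:8s} EU(A)={ea:7.2f} EU(B)={eb:7.2f} -> prefers {'A' if ea > eb else 'B'}")

# identity and affine both prefer A (5 > 4.5 and 22 > 20.5),
# but cubing flips the preference (125 < 364.5).
```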

comment by Blueberry · 2010-01-26T19:27:27.583Z · LW(p) · GW(p)

Right, which is why I was saying that some ethical theories can't be expressed by a utility function. And there could be many such incomparable qualities: even adding in infinity and negative infinity may not be enough (though the transfinite ordinals, or the surreal numbers, might be).

I'm surprised at that +b, because that doesn't preserve utility ratios.

Replies from: komponisto
comment by komponisto · 2010-01-26T19:48:11.434Z · LW(p) · GW(p)

Right, which is why I was saying that some ethical theories can't be expressed by a utility function.

Ah, I see. But I'm still not actually sure that's true, though...see below.

I'm surprised at that +b, because that doesn't preserve utility ratios.

Indeed not; utilities are measured on an interval scale, not a ratio scale. There's no "absolute zero". (I believe Eliezer made a youthful mistake along these lines, IIRC.) This expresses the fact that utility functions are just (scaled) preference orderings.

comment by LucasSloan · 2010-01-26T07:06:43.341Z · LW(p) · GW(p)

You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error prone substrate be used to compute your preferences. This is a contradiction. Why are you currently endorsing stupid "terminal" values?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T07:11:02.543Z · LW(p) · GW(p)

You say you are software, which could be implemented on other computational substrates. You deny the preferability of having a more knowledgeable, less error prone substrate be used to compute your preferences.

Wait, are you suggesting that I be uploaded into something with really excellent computational power so I myself would become a superintelligence? As opposed to an external agent that happened to be superintelligent? That might actually work. I will have to think about that. You could have been less rude in proposing it, though.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T07:13:43.701Z · LW(p) · GW(p)

No. I am suggesting that the situation I described is what you would find in an FAI. You really should be deferring to Eliezer's expertise in this case.

What about my statements was rude? How can I present these arguments without making you feel uncomfortable?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T07:18:58.071Z · LW(p) · GW(p)

No. I am suggesting that the situation I described is what you would find in an FAI.

Then I don't understand what you said.

You really should be deferring to Eliezer's expertise in this case.

I will not do that as long as he seems confused about the psychology he's trying to predict things for.

What about my statements was rude? How can I present these arguments without making you feel uncomfortable?

I think calling my terminal values "stupid" was probably the most egregious bit. It is wise to avoid that word as applied to people and things they care about. I would appreciate it if people who want to help me would react with curiosity, not screeching incredulity and metaphorical tearing out of hair, when they find my statements about myself or other things puzzling or apparently inconsistent.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-26T07:23:34.066Z · LW(p) · GW(p)

If he and I are confused, you are seriously failing to describe your situation. You are a human brain. Brains work by physical laws. Bayesian super-intelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.

I understand your antipathy for the word stupid. I shall try to avoid it in the future.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T07:27:54.161Z · LW(p) · GW(p)

If he and I are confused, you are seriously failing to describe your situation.

Yes, this is very likely. I don't think I ever claimed that the problem wasn't in how I was explaining myself; but a fact about my explanation isn't a fact about the (poorly) explained phenomenon.

Bayesian super-intelligences can figure out how to fix the issues you have, even with the handicap of making sure their intervention is acceptable to you.

I can figure out how to fix the issues I have too: I'm in the process of befriending some more cryonics-friendly people. Why do people think this isn't going to work? Or does it just seem like a bad way to approach the problem for some reason? Or do people think I won't follow through on signing up should I acquire a suitable friend, even though I've offered to bet money on my being signed up within two years barring immense financial disaster?

Replies from: Kevin, LucasSloan
comment by Kevin · 2010-01-26T07:34:56.321Z · LW(p) · GW(p)

Your second paragraph clears up my lingering misunderstandings; that was the missing piece of information for me. We were (or at least I was) arguing about a hypothetical situation instead of the actual situation. What you're doing sounds perfectly reasonable to me.

comment by LucasSloan · 2010-01-27T00:28:37.093Z · LW(p) · GW(p)

If you are willing to take the 1 in 500 chance, my best wishes.

Replies from: Alicorn
comment by Alicorn · 2010-01-27T00:31:39.568Z · LW(p) · GW(p)

Where did that number come from and what does it refer to?

Replies from: LucasSloan
comment by LucasSloan · 2010-01-27T00:34:37.402Z · LW(p) · GW(p)

Actuarial tables, odds of death over a two-year period for someone in their twenties (unless I misread the table, which is not at all impossible).

Replies from: Alicorn
comment by Alicorn · 2010-01-27T00:42:35.145Z · LW(p) · GW(p)

It's really that likely? Can I see the tables? The number sounds too pessimistic to me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-27T00:46:16.354Z · LW(p) · GW(p)

http://www.socialsecurity.gov/OACT/STATS/table4c6.html

Looks like it should be 1/1000 for two years to me.

Replies from: Blueberry
comment by Blueberry · 2010-01-27T01:13:14.510Z · LW(p) · GW(p)

It should be around 1 in 400 for males in their 20s and 1 in 1000 for females in their 20s.
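
For concreteness, a quick back-of-the-envelope sketch of where such figures come from. The annual death probabilities below are rounded assumptions, roughly in line with the linked SSA period life table for people in their mid-20s; consult the table itself for exact values.

```python
# Convert assumed annual death probabilities into a two-year probability.
annual_q = {"male, mid-20s": 0.0014, "female, mid-20s": 0.0005}

for group, q in annual_q.items():
    two_year = 1 - (1 - q) ** 2   # probability of dying at some point within two years
    print(f"{group}: about 1 in {round(1 / two_year)}")

# With these inputs: about 1 in 357 for males and 1 in 1000 for females,
# in the same ballpark as the "1 in 400" and "1 in 1000" quoted in the thread.
```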

comment by JGWeissman · 2010-01-26T06:50:03.043Z · LW(p) · GW(p)

I like my mental privacy too, but I am OK with the idea of a non-sentient FAI reading my mind to better predict what it can do for me.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:56:16.320Z · LW(p) · GW(p)

I don't have much expectation of non-sentience in a sufficiently smart AI.

comment by MichaelGR · 2010-01-26T06:47:08.193Z · LW(p) · GW(p)

A "user-friendly" way to do this would be for the FAI to send an avatar/proxy to act as a guide when you wake up. Explain how things work, introduce you to others who you might enjoy the company off, answer any question you might have, help you get set up in a way that works for you, help you locate people who you know that might be alive, etc.

A FAI would know better than we do what we find creepy/uncomfortable/etc, and would probably avoid it as much as possible.

comment by Alicorn · 2010-01-26T06:25:30.695Z · LW(p) · GW(p)

Nope. The best thing it could do would be to retrieve my dead friends and family. But if we're talking about whether I should sign up for cryonics, I'm assuming that's the only way somebody gets to be not dead after having died a while ago. If we have an AI that's so brilliant that it can reconstruct people accurately just by looking at the causal history of the universe and extrapolating backwards, I'm safe whether I sign up or not! And if we have one that can't, I think I'm only safe if I am signed up with at least one loved one.

Replies from: Kaj_Sotala, LucasSloan
comment by Kaj_Sotala · 2010-01-26T17:36:04.014Z · LW(p) · GW(p)

The best thing it could do would be to retrieve my dead friends and family.

Out of curiosity - how accurate would the retrieval need to be? For instance, suppose the FAI accessed your memories and reconstructed your friends based on the information found there, extrapolating the bits you didn't know. Obviously they wouldn't be the same people, since the FAI had to make up a lot of stuff neither you nor it knew. But since the main model was a fit to your memories, they'd still seem just like your friends to you. Would you find that acceptable?

Replies from: Alicorn, ciphergoth
comment by Alicorn · 2010-01-26T17:48:31.100Z · LW(p) · GW(p)

No. That would not be okay with me, assuming I knew this about the process.

comment by Paul Crowley (ciphergoth) · 2010-01-26T17:40:52.994Z · LW(p) · GW(p)

My initial reaction is that I would really hate this. It's one of the things that makes me really uneasy about extreme "neural archaeology"-style cryonics: I want an actual reconstruction, not just a plausible one.

comment by LucasSloan · 2010-01-26T06:27:50.727Z · LW(p) · GW(p)

You can think of no scenarios between those two that would entice you to sign up? Your arguments seem really specious to me.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T06:30:34.412Z · LW(p) · GW(p)

You can think of no scenarios between those two that would entice you to sign up?

Nope. You're welcome to try, though, if you value my life and don't want to try the "befriend me while signed up or on track to become so" route via which several wonderful people are helping.

comment by Vladimir_Nesov · 2010-01-27T10:40:33.779Z · LW(p) · GW(p)

I think the right context for Eliezer's comment is Expected Creative Surprises.

comment by AdeleneDawner · 2010-01-26T05:59:59.209Z · LW(p) · GW(p)

My psychological need is weird and might be very hard to arrange to satisfy or predict what would be satisfactory

No it's not. It's just scary.

Am I parsing this correctly? You're intending to say that Alicorn isn't really experiencing what she's reporting that she is, but is instead just making it up to avoid acknowledging a fear of cryonics?

That's fairly obviously wrong: If Alicorn really was scared of cryonics, the easiest thing for her to do would be to ignore the discussions, not try to solve her stated problem.

It's also pretty offensive for you to keep suggesting that. Do you really think you're in a better position to know about her than she is to know about herself? You're implying a severe lack of insight on her part when you say things like that.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T07:05:24.448Z · LW(p) · GW(p)

I am not suggesting that Alicorn is anything other than what she thinks she is.

But when she suggests that she has psychological problems a superintelligence can't solve, she is treading upon my territory. It is not minimizing her problem to suggest that, honestly, human brains and their emotions would just not be that hard for a superintelligence to understand, predict, or place in a situation where happiness is attainable.

There simply isn't anything Alicorn could feel, or any human brain could feel, which justifies the sequitur, "a superintelligence couldn't understand or handle my problems!" You get to say that to your friends, your sister, your mother, and certainly to me, but you don't get to shout it at a superintelligence because that is silly.

Human brains just don't have that kind of complicated in them.

I am not suggesting any lack of self-insight whatsoever. I am suggesting that Alicorn lacks insight into superintelligences.

Replies from: AdeleneDawner, Alicorn
comment by AdeleneDawner · 2010-01-26T09:11:26.192Z · LW(p) · GW(p)

I see at least one plausible case where an AI couldn't solve the problem: All it takes is for none of Alicorn's friends to be cryopreserved and for it to require significantly more than 5 hours for her brain to naturally perform the neurological changes involved in going from considering someone a stranger to considering them a friend. (I'm assuming that she'd consider speeding up that process to be an unacceptable brain modification. ETA: And that being asked if a particular solution would be acceptable is a significant part of making that solution acceptable, such that suggested solutions would not be acceptable if they hadn't already been suggested. (This is true for me, but may not be similarly true for Alicorn.))

comment by Alicorn · 2010-01-26T07:12:27.444Z · LW(p) · GW(p)

psychological problems

That's a... nasty way to describe one of my thousand shards of desire that I want to ensure gets satisfied.

Replies from: Eliezer_Yudkowsky, Kevin
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T07:20:30.084Z · LW(p) · GW(p)

Your desire isn't the problem. Maybe it was poorly phrased; "psychological challenge" or "psychological task for superintelligence to perform" or something like that. The problem is finding you a friend, not eliminating your desire for one. Sorry that this happened to match a common phrase with a different meaning.

comment by Kevin · 2010-01-26T07:26:43.437Z · LW(p) · GW(p)

It's just a phrase. If someone isn't being intentionally hurtful, you should remind yourself that a lot of what we are doing here is linguistic games.

This argument might have already gone on too long, but I'm going to try stating what I see as your main objection, to see if I actually understand your true objection.

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value. You think you'll probably be miserable in the future and you find it hard to believe that the FAI will find you a friend comparable to your current friends. You won't want to accept any type of brain modification or enhancement that would make you not miserable. If you're sufficiently miserable, it's likely that an FAI could change you without your consent, and you prefer death to the chance of that happening.

Replies from: Alicorn
comment by Alicorn · 2010-01-26T07:32:27.385Z · LW(p) · GW(p)

You hold not having your consciousness altered or manipulated or otherwise tinkered with as an extremely high value.

Insert "without my conscious, deliberate, informed consent, and ideally agency".

You think you'll probably be miserable in the future

Replace "you'll probably" with "you are reasonably likely to".

and you find it hard to believe that the FAI will find you a friend comparable to your current friends.

Add "with whom I could become sufficiently close within a brief and critical time period".

You won't want to accept any type of brain modification or enhancement that would make you not miserable.

See first adjustment. n.b.: without my already having been modified, the "informed" part would probably take longer than the brief, critical time period.

If you're sufficiently miserable, it's likely that an FAI could change you without your consent

Yes. Or, perhaps not change me, but prevent me from acting to end my misery in a non-brain-tinkery way.

and you prefer death to the chance of that happening.

For certain subvalues of "that", yes.

comment by byrnema · 2010-01-26T05:56:45.678Z · LW(p) · GW(p)

I like people too. :)

I agree with Eliezer that any benevolent reviver would be able to figure out how to create conditions that would make a child (and you) happy.

I definitely have in mind a non-benevolent reviver.

Replies from: Jordan
comment by Jordan · 2010-01-26T06:21:44.002Z · LW(p) · GW(p)

Consider this hypothetical situation:

Medical-grade nanobots capable of rendering people immortal exist. They're a one-time injection that protects you from all disease forever. Do you and your family accept the treatment? If so, you're essentially guaranteeing your family will survive until the singularity, at which point a malevolent singleton might take over the universe and do all sorts of nasty things to you.

I agree that cryonics is scarier than the hypothetical, but the issue at hand isn't actually different.

Replies from: byrnema
comment by byrnema · 2010-01-26T06:34:01.576Z · LW(p) · GW(p)

Children are only helpless for about 10 years. If the singleton came within 10 years of my child being born without warning, it would be awful but not my fault. If I had any warning of it coming, and I still chose to have children that then came to harm, it would be my fault.

Replies from: Jordan
comment by Jordan · 2010-01-26T07:06:59.866Z · LW(p) · GW(p)

Why does fault matter?

Replies from: byrnema
comment by byrnema · 2010-01-26T17:38:11.489Z · LW(p) · GW(p)

Good question. The reason is that this has recently become an ethical problem for me rather than an optimization problem. Perhaps that is why I think of it in far mode, if that is what I'm doing. But I do know that in ethical mode, it can be the case that you're no longer allowed to base a decision on the computed "average value" ... even small risks or compromises might be unacceptable. If I allow my child to come to harm, and I'm not allowed to do that, then it doesn't matter what advantage I'm gambling for. I perceive that at a certain age they can make their own decision, and then with relief I may sign them up for cryonics at their request.

comment by Vladimir_Nesov · 2010-01-27T10:08:38.374Z · LW(p) · GW(p)

Sending a child to the future might be negligent.

Only if letting them die is worse.

comment by LucasSloan · 2010-01-26T00:48:41.753Z · LW(p) · GW(p)

First, I doubt that a future which would revive my child would be any worse than today. Second, my position is that cryonics can ameliorate the inherent problems of creating a child, not obviate them. I would ask you to read all of the replies about the preferability of cryo over dying - if it's good enough for me, then it's good enough for my child.

comment by antibole · 2010-01-25T23:57:11.858Z · LW(p) · GW(p)

I can afford cryonics, but I think I wouldn't want to vitrify children for the same reasons you are criticizing parents for having children. If it is ethical to bring children into the world only if I can care for them, protect them and provide for them, how could it be ethical to send a helpless, dependent child to an even more indeterminate future -- a future in which I have less means and knowledge to care for them than I do now?

Replies from: XFrequentist
comment by XFrequentist · 2010-01-26T00:10:00.041Z · LW(p) · GW(p)

Right, if they're just dead there's nothing to feel guilty about.

comment by isacki · 2010-01-25T06:30:54.518Z · LW(p) · GW(p)

Is your position a kind of unfunny joke, like you were put up to say this? It is only because I am open enough to the possibility that this is actually your opinion that I feel forced to bother with a rebuttal.

It is unreasonable in the extreme, given current knowledge about cryonics, to force your own beliefs about what every child born into the world should have, almost as unreasonable as your comparisons above: "Is it acceptable for the father in the ghetto to beat his child to death, because he's too poor to afford a psychologist?" Why? Because cryonics is not yet even remotely a proven technique, and explicitly acknowledges as much in its hope for a smarter future, you are not to go about slinging moral outrage based on the presupposition that it is. For the average person, there are a million things they could spend the money on for a kid, and you can bet that the certainty of them seeing a return on 99% of them is better.

To suggest that people having kids are "endangering the lives of children" is so ironic that humour seems the only explanation to me. In addition to the fact that everyone, regardless of cryonics, will have to die, you appear to have myopically discounted the entire value of a life once lived.

I am not discounting cryonics being theoretically possible. I am saying that it remains exactly that, unproven, and until it is, you can implore people to try it, but you are ridiculous to -demand- that they do.

comment by bradmo · 2010-01-20T19:47:29.945Z · LW(p) · GW(p)

You don't have any children, do you, Lucas?

Non-parents weighing in on parental issues is like men deciding abortion issues.

Replies from: thomblake
comment by thomblake · 2010-01-20T20:21:06.163Z · LW(p) · GW(p)

is like men deciding abortion issues.

A truly Godwinesque objection. "You aren't x and I'm x so you can't judge me" seems a little bit too all-purpose.

That said, I generally agree with the sentiment. As Postrel might say, in general an individual making a decision has access to local, distributed information that is not accessible to anyone else, and so (all else being equal) is more likely to be a better judge than anyone else.

Replies from: bogdanb
comment by bogdanb · 2010-01-20T22:22:57.922Z · LW(p) · GW(p)

is more likely to be a better judge than anyone else

That's far too grand a generalization for me to agree with. Big pieces of the justice system (and more) in most places are built on the basis that it's not true, by the way.

That said, Lucas' comment—despite being opinionated—started with a couple of questions, not judgments. (Not explicit ones, at least.)

Replies from: thomblake
comment by thomblake · 2010-01-20T23:18:13.663Z · LW(p) · GW(p)

That's far too grand a generalization for me to agree with.

And here I thought I had put in enough qualifiers to make it nearly a tautology.

Big pieces of the justice system (and more) in most places are built on the basis that it's not true, by the way.

I'd need to know what you're thinking of to dispute this, but I can think of one thing that might qualify: In justice, we don't want people to judge their own cases, since they'll act in their own interest. This doesn't apply to the general case, however, since acting in one's own interest is usually acceptable.

Replies from: bogdanb
comment by bogdanb · 2010-02-08T20:22:02.572Z · LW(p) · GW(p)

That's far too grand a generalization for me to agree with.

And here I thought I had put in enough qualifiers to make it nearly a tautology.

It seems quite a specific statement to me. Reading some of the qualifiers liberally ("all else being equal" in particular can mean lots of things), this might become tautological, but I reflexively interpreted them as what I think you meant (since I didn't think you had just made a useless statement).

About the justice system, you got it right. Justice systems try to correct for lots of sources of bias, not only the one you mentioned; but the "own interest" problem is especially pertinent to the origin of this thread (the example of "men deciding abortion issues").

comment by Dustin · 2010-01-19T19:59:22.442Z · LW(p) · GW(p)

Well, crap. That's something I hadn't even thought of yet.

I'm currently struggling with actually signing up for cryonics myself so this angle hadn't even crossed my mind.

I'll face very strong opposition from my wife, family, and friends when I finally do sign up. I can't imagine what kind of opposition I'll face when I attempt to sign my 3-month old daughter up.

I've been planning a top-level post about the rational thing to do when you're in my position. What position you ask? You'll find out in my post. Suffice it to say for now that I don't think I'm a typical member of the LW community.

Replies from: MichaelGR, soreff
comment by MichaelGR · 2010-01-19T22:03:01.347Z · LW(p) · GW(p)

I, for one, look forward to reading your post.

If Eliezer's post has motivated you, I encourage you to write it soon before that motivation fades.

Replies from: Dustin, Dustin
comment by Dustin · 2010-01-20T00:31:54.004Z · LW(p) · GW(p)

Point taken. Writing commenced.

Replies from: Grognor
comment by Grognor · 2011-09-30T02:35:34.735Z · LW(p) · GW(p)

If you've written this article already [and I imagine you have], it would be helpful to put a link to it in this comment thread.

comment by Dustin · 2010-02-02T20:56:28.303Z · LW(p) · GW(p)

I'm still working on this post, but writing it has become more difficult than I anticipated.

Much of what I want to say is things that I would like to remain private for now.

When I say "private", I mean I don't mind them being connected with my LW user account, but I'd rather they weren't connected with my real life and since they're unique sort of things, and my real name is also my LW user name, I'm having difficulty with anonymizing the content of the post.

comment by soreff · 2010-01-19T23:31:02.811Z · LW(p) · GW(p)

What flavor of opposition do you anticipate? "selfish", "won't work/wasteful", or "weird"? If it is the former, you might consider the tactic of signing up your daughter first.

(I have comments further down in this thread about the odds for cryonics and changes in my views over time.)

Replies from: Tiiba
comment by Tiiba · 2010-01-20T03:37:35.080Z · LW(p) · GW(p)

The opposition I got when I told my parents that there is such a thing was that they didn't want to wake up as machines. I think they didn't agree that they'd be the same person.

Combine that with the uncertainty that you'll be frozen, the uncertainty that you'll wake up, the chance of Blue Gender, and of course, the cost, and it stops being such an obvious decision. Blue Gender is probably the biggest factor for me.

*Blue Gender is an anime about a kid who signed up for cryonics, and woke up while being evacuated from fugly giant insects. God forbid. But even you guys suggest that FAI has a 1% chance of success or so. Is it so great to die, be reborn, and die AGAIN?

Replies from: MichaelGR
comment by MichaelGR · 2010-01-20T04:00:51.229Z · LW(p) · GW(p)

Be careful about evidence from fiction.

Let's see...

What are the chances of you being revived without AGI? It's possible, but probably less likely for a variety of reasons (without AGI, it's harder to reach that technological level, and without AGI, it's harder for humanity to survive long enough (because of existential risks) to get to that technological level in the first place, etc).

But that's not all. If this AGI isn't Certified Friendly, the chances of humanity surviving for very long after it starts recursively improving are also pretty slim.

So chances are, if you are woken up, it'll be in a world with FAI. If things go really bad, you'd probably never find out...

Am I making up a just-so story here? Do others think this makes sense?

Replies from: Mitchell_Porter, Tiiba
comment by Mitchell_Porter · 2010-01-20T05:31:37.684Z · LW(p) · GW(p)

chances are, if you are woken up, it'll be in a world with FAI. If things go really bad, you'd probably never find out...

The possibility of being woken up by an UFAI might be regarded as a good reason to avoid cryonics.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-20T05:53:14.145Z · LW(p) · GW(p)

From what I know, the danger of UFAI isn't that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn't care about us and would want to use resources to achieve goals other than what humans would want ("all that energy and those atoms, I need them to make more computronium, sorry").

I suppose it's possible to invent many scenarios where such an evil AI would be possible, but it seems unlikely enough based on the information that I have now that I wouldn't gamble a chance at life (versus a certain death) based on this sci-fi plot.

But if you are scared of UFAI, you can do something now by supporting FAI research. It might actually be more likely for us to face a UFAI within our current lives than after being woken up from cryonic preservation (since just the fact of being woken up is probably a positive sign of FAI).

Replies from: wedrifid
comment by wedrifid · 2010-01-20T06:14:37.410Z · LW(p) · GW(p)

From what I know, the danger of UFAI isn't that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn't care about us and would want to use resources to achieve goals other than what humans would want ("all that energy and those atoms, I need them to make more computronium, sorry").

I presume he was referring to dystopias and wireheading scenarios that he could hypothetically consider worse than death.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-20T15:10:44.418Z · LW(p) · GW(p)

That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable. If that happens, I think it's much more likely that humanity will be destroyed quickly and you won't be woken up than that a stable but "worse than death" world will form and decide to wake you up.

But maybe I'm missing something that makes such "worse than death" worlds plausible.

Replies from: wedrifid
comment by wedrifid · 2010-01-20T15:34:27.032Z · LW(p) · GW(p)

That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable.

I think you're right. The main risk would be Friendly to Someone Else AI.

comment by Tiiba · 2010-01-20T04:51:55.947Z · LW(p) · GW(p)

I hope so. Most UFAI scenarios so far suggested, IIRC, end with everyone either dead or as mindless blobs of endless joy (which may or may not be the same thing, but I'd pick wireheading over death). But remember that the UFAI's designers, stupid though they may be, will be unlikely to forget "thou shalt not kill featherless bipeds with straight nails". So there's a disturbing and non-negligible chance of waking up in Christian heaven.

Edit: So after all this, does cryonics still sound like a good idea? If yes, why? I really, really WANT there to be reasons to sign up. I want to see that world without traffic jams or copyright lawyers. But I'm just not convinced, and that's depressing.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-20T04:57:05.913Z · LW(p) · GW(p)

Or in "The Metamorphosis of Prime Intellect".

Replies from: Alicorn
comment by Alicorn · 2010-01-20T05:08:20.896Z · LW(p) · GW(p)

Prime Intellect was like this close to being Friendly.

Replies from: Eliezer_Yudkowsky, bogdanb
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T23:42:18.685Z · LW(p) · GW(p)

Yep, you've got to get your AI like 99.8% right for it to go wrong that way.

Replies from: wedrifid
comment by wedrifid · 2010-01-20T23:56:06.076Z · LW(p) · GW(p)

And given Lawrence's 42 years of life after reverting the change, why ever did he not work on getting another 0.199% right? In fact, what was Caroline thinking, reverting the change before they had a solid plan for post-Prime-Intellect survival?

Fictional characters and their mortality fetishes. Pah.

Replies from: Kevin
comment by Kevin · 2010-01-21T06:09:27.173Z · LW(p) · GW(p)

The correct interpretation of the ending (based on the excerpt from the sequel posted and an interview localroger did with an Australian radio/podcast host) is that Caroline did not really revert the change; Prime Intellect remained in control of the universe.

http://www.kuro5hin.org/prime-intellect/mopidnf.html

Replies from: CronoDAS
comment by CronoDAS · 2010-01-23T03:51:08.734Z · LW(p) · GW(p)

"The Change" was keeping humans in a simulation of the universe (and turning the actual universe into computronium) instead of in the universe itself. So when it "reversed the Change" it was still as powerful as it was before the Change. What had happened was that Prime Intellect had been convinced that the post-Change world it created was not the best way of achieving its goals, so it set up a different universe. (I imagine that, as of chapter 8, Prime Intellect's current plan for humanity is something like Buddhist-style reincarnation - after all, its highest priority is to prevent human deaths.)

comment by bogdanb · 2010-01-20T23:25:33.260Z · LW(p) · GW(p)

Actually, I'm more tempted to say that he was friendly, just not generally intelligent enough. Some of the humans seemed really silly, though...

I've no idea what extrapolated volition would mean in a population with that many freaks :-)

Replies from: Kevin
comment by Kevin · 2010-01-22T06:30:22.428Z · LW(p) · GW(p)

I agree. Prime Intellect is absolutely friendly in that most important sense of caring about the continued existence and well-being of humans.

It was a good story, but I'm not sure that humans would have actually behaved as in that universe. Or we only saw a small subset of that universe. For example, we saw no one make themselves exponentially smarter. No one cloned themselves. No people merged consciousnesses. No one tried to convince Prime Intellect to reactivate the aliens inside of a zoo that allowed them to exist and for humanity to interact with them, without the danger of the aliens gaining control of Technology.

If I could choose between waiting around for Eliezer to make Friendly AI (or fail) and living in the universe of Prime Intellect, I would choose the universe of Prime Intellect in a heartbeat. I don't see why Fun Theory doesn't apply there.

comment by Paul Crowley (ciphergoth) · 2010-02-07T20:53:58.956Z · LW(p) · GW(p)

I've written a 2000 word blog article on my efforts to find the best anti-cryonics writing I can:

A survey of anti-cryonics writing

Edit: now a top level article

Replies from: orthonormal, Kevin, Cyan
comment by orthonormal · 2010-02-07T23:08:18.704Z · LW(p) · GW(p)

An excellent post.

I have one issue, though. It may be poor form to alter your opening paragraph at this stage, Paul, but I'd appreciate it if you did. While it makes a very good 'hook' for those of us inclined to take cryonics seriously, it means that posting a link for other friends (as I'd otherwise do) will have the opposite effect to the one it should. (I am fairly sure that a person inclined to be suspicious of cryonics would read the first few lines only, departing in the knowledge that their suspicions were confirmed.)

An introduction that is at first glance equivocal would be a great improvement over one that is at first glance committed to the anti-cryonics viewpoint, for that reason.

Replies from: ciphergoth, Eliezer_Yudkowsky
comment by Paul Crowley (ciphergoth) · 2010-02-07T23:17:11.042Z · LW(p) · GW(p)

Thanks!

I am inclined to agree, but I can't work out how. I found it pretty difficult to get started writing that, and that way seemed to work. If you can give me any more specific ideas on how best to fix it, I might well try. Have saved first version in version control!

Replies from: Morendil, JGWeissman, orthonormal
comment by Morendil · 2010-02-07T23:32:11.937Z · LW(p) · GW(p)

Suggested edits:

  • turn the opening sentence into a question, "Is cryonics pseudoscience?"
  • change "is nothing more than wishful thinking" into "could be nothing more", etc.
  • change "If you don't believe that, you can read" into "This is the point of view argued in"
  • strike "This makes me sad, because", so the sentence starts "To my naive eyes"

This way your hook is neutral enough to draw everyone in.

ETA: I would be careful with the £25/mo quote, until and unless you get a quote from Rudi Hoffman or elsewhere. At least mention that the pricing is one of those logistical issues you've promised to cover in further posts.

comment by JGWeissman · 2010-02-07T23:25:28.570Z · LW(p) · GW(p)

Suggested edit:

Cryonics is controversial. Critics claim the idea that we could freeze someone today in such a way that future technology might be able to re-animate them is nothing more than wishful thinking on the desire to avoid death, dressed up in scientific-sounding language. Criticisms include ...

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-08T00:18:50.597Z · LW(p) · GW(p)

I did something like this in the end.

comment by orthonormal · 2010-02-07T23:36:52.089Z · LW(p) · GW(p)

The Feynman anecdote actually seems to me like the best place to begin, both for literary interest and for a clearer introduction. If you started there, you could take the rest of that paragraph almost unchanged (inserting the parenthetical definition of cryonics from your current first paragraph) before introducing the skeptics' links and continuing as before?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-07T23:47:27.622Z · LW(p) · GW(p)

Just had a go, but I can't quite make it work; the skeptics' links seem hard to introduce.

comment by Kevin · 2010-02-07T21:40:08.402Z · LW(p) · GW(p)

Top-level post it

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-07T23:16:10.491Z · LW(p) · GW(p)

Me too. This is not just about cryonics. It is not remotely just about cryonics. It is about the general quality of published argument that you can expect to find against a true contrarian idea, as opposed to a false contrarian idea.

Replies from: whpearson, ciphergoth
comment by whpearson · 2010-02-07T23:39:15.661Z · LW(p) · GW(p)

I now want to go and look for the pre-chemistry arguments against alchemy.

I don't think that cryonics is inherently wrong, but equally I don't think we have a theory or language of identity and mind sufficiently advanced to refute it.

Replies from: gwern
comment by gwern · 2011-07-31T21:27:03.670Z · LW(p) · GW(p)

You know, I did a lot of reading about alchemy when I was younger, and when I try to think back to contemporary criticisms (and there was a lot, alchemy was very disreputable), they all seem to boil down to 1) no alchemists have yet succeeded despite lavish funding, and they are all either failures or outright conmen like Casanova; and 2) alchemical immortality is overreaching and against God.

#1 is pretty convincing but not directly applicable (cryonics since the 1970s has met its self-defined goal of keeping patients cold); #2 strikes me as false, but I also regard the similar anti-cryonics arguments as false.

Replies from: david-m-brown
comment by David M. Brown (david-m-brown) · 2019-02-01T04:58:25.036Z · LW(p) · GW(p)

The goal is to stay cold? I thought it was to be resurrected by science that will be able to cure all.

comment by Paul Crowley (ciphergoth) · 2010-02-07T23:37:42.960Z · LW(p) · GW(p)

I've posted a link to my blog at the moment; do you think it's better that the entire article be included here?

Replies from: Eliezer_Yudkowsky
comment by Cyan · 2010-02-07T21:20:22.238Z · LW(p) · GW(p)

Have a karma point.

comment by CassandraR · 2010-01-20T14:10:03.114Z · LW(p) · GW(p)

To me, cryonics causes a stark, panic-inducing terror that is only a little less than death itself, and I would never in a million years do it if I used my own judgment on the matter. But I decided that Eliezer probably knows more than me on this subject and that I should trust his judgement above my own. So I am in the process of signing up now. It also seems much less expensive than I imagined.

This is at least one skill I have tried to cultivate until I grow more educated myself: the ability to consciously export my judgement to another person. Thinking for yourself is great for learning new things and practicing thinking skills, but since I am just starting out, I am trying to build a solid mindset, so it's kind of silly for me to think I can provide one to myself, by myself, without tons of wasted effort, when I could just use one of the good ones that are already available.

I would probably be more likely to try such a thing if I were younger, but I am getting started a bit late and need a leg up. I do guess the idea is a bit risky, but on an intuitive level it seems less risky than trusting my own judgement, which is generally scared of everything. Yep.

Replies from: aausch, XiXiDu
comment by aausch · 2010-01-23T04:52:03.501Z · LW(p) · GW(p)

This is at least one skill I have tried to cultivate until I grow more educated myself: the ability to consciously export my judgement to another person. Thinking for yourself is great for learning new things and practicing thinking skills, but since I am just starting out, I am trying to build a solid mindset, so it's kind of silly for me to think I can provide one to myself, by myself, without tons of wasted effort, when I could just use one of the good ones that are already available.

I believe Eliezer has been assimilated.

comment by XiXiDu · 2010-01-20T15:22:46.804Z · LW(p) · GW(p)

"The only two legitimate occupations in our current world are (1) working directly on Singularity-related issues, and (2) donate a substantial fraction of your money to the Singularity Institute for Artificial Intelligence."* -- "If you don't sign up your kids for cryonics then you are a lousy parent." -- Eliezer Yudkowsky

Hah! If only I had more faith in Mr. Yudkowsky's judgment. Or were otherwise an educated smart-ass, like most people on lesswrong.com, so I could estimate how credible these extraordinary statements are.

Anyway, I'll probably donate something to the SIAI sooner or later. I'd also sign up for cryonics if I didn't live in Germany and weren't as 'lazy' as I am. Though at some point I might try to do so.

comment by byrnema · 2010-01-21T16:33:01.782Z · LW(p) · GW(p)

Curiously -- not indignantly -- how should I interpret your statement that all but a handful of parents are "lousy"? Does this mean that your values are different from theirs? This might be what is usually meant when someone says someone is "lousy".

Your explicit argument seems to be that they're selfish if they're purchasing fleeting entertainment when they could invest that money in cryonics for their children. However, if they don't buy cryonics for themselves, either, it seems like cryonics is something they don't value, not that they're too selfish to buy it for their children.

Replies from: Unknowns, Jess_Riedel
comment by Unknowns · 2010-01-22T15:26:11.091Z · LW(p) · GW(p)

Eliezer is criticizing parents who in principle think that cryonics is a good thing, but don't get it for their children, whether or not they get it for themselves.

My guess is that such parents are much more common than parents who buy it for themselves but not for their children, just because "thinking that cryonics is good in principle" is much more common than actually buying it for yourself.

comment by Jess_Riedel · 2010-01-22T15:14:35.915Z · LW(p) · GW(p)

Exactly. If a parent doesn't think cryonics makes sense, then they wouldn't get it for their kids anyway. Eliezer's statement can only criticize parents who get cryonics for themselves but not their children. This is a small group, and I assume it is not the one he was targeting.

comment by ShannonVyff · 2010-01-21T13:09:25.672Z · LW(p) · GW(p)

Eliezer--I don't know how many people reading this had the same response I did, but you tore my heart out.

As Nick Bostrom, Ph.D., Director of the Future of Humanity Institute at Oxford and co-founder of the World Transhumanist Association, said about my book "21st Century Kids": "Childhood should be fun and so should the future. Read this to your children, and next you know they'll demand a cryonics contract for Christmas."

You know, I do what I can to educate others to the fact that cryonics is possible, and thus that there is a common-sense obligation to try. For me it is a noble endeavor that humans are attempting, and I'm proud to help that effort. If you do a search on "teaching kids cryonics" you'll get: http://www.depressedmetabolism.com/2008/07/04/teaching-children-about-cryonics/ from a few years ago. I still do classes when I can, and I've been talking to my children's friends and parents here in the UK after moving from Austin this past summer. The reception I get over here from parents and kids is generally the same as what I heard in the States--people express interest, but never really go through the effort of signing up.

I will be writing more; in the meantime, I love hearing from fans of http://www.amazon.com/21st-Century-Kids-Shannon-Vyff/dp/1886057001 It was a thrill to get pictures and feedback from kids who got the book this Christmas and loved it!

Thank you for writing about the Teens & Twenties conference, Eliezer; I sincerely look forward to further analysis from you. I'll be attending with my teens in the future; my 13-year-old daughter actually had wanted to go this year, but we were not able to work it in. She'll be more mature, and my son will be a teen, by the time the next event occurs. It is great to have the heroes who have devoted their lives to cryonics meet the "normal folk" who sign up--and for the kids to make friends with other cryonicists.

I'm sorry about your brother, Eliezer; your writing tore my heart out. I agree that parents should sign their kids up; my own were raised with it and plan on "talking their spouse" into doing it (that will be interesting ;-) ). I've seen other older cryonicists who have raised their kids with cryonics, and the kids kept up the arrangements. I've also seen it go the other way. We need more books written for kids :-)

Thanks for all you do.

comment by Paul Crowley (ciphergoth) · 2010-01-20T20:24:24.570Z · LW(p) · GW(p)

Sorry if this is a tedious question. Just started the conversation with my family in a more serious way after looking up life insurance prices (think it's going OK so far), and there's something I wanted to ask so that I know the answer if they ask. Do you have shares in Alcor or CI, or any other interests to declare?

Thanks!

Replies from: Eliezer_Yudkowsky, AngryParsley
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:31:28.052Z · LW(p) · GW(p)

As far as I know, there's currently no one on Earth who gets paid when another cryonicist signs up, except Rudi Hoffman who sells the life insurance. I'll go ahead and state specifically that I have no shares in either of those nonprofits (nor does anyone, but they have paid employees) and I do not get paid a commission when anyone signs up (nor does anyone AFAIK except Rudi, and he's paid by the life insurance company).

Replies from: ciphergoth, ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-22T12:42:20.259Z · LW(p) · GW(p)

BTW, thanks for the reference to Hoffman. Looking at Hoffman's page about life assurance for non-US people, it looks like for me cryonics is much, much more expensive than your estimates - he quotes $1500-$3000 a year. Talking to my friends reveals no cheaper options in the UK, and big legal problems. I definitely will not be able to afford this barring a big change in my circumstances :(

Replies from: pdf23ds, MichaelGR, Tom_Talbot, Morendil
comment by pdf23ds · 2010-01-22T13:56:43.978Z · LW(p) · GW(p)

I would really like someone to expand upon this:

Understanding and complying with ownership and beneficiary requirements of cryonics vendors is often confusing to insurance companies, and most insurance companies will consequently not allow the protocols required by cryonics vendors. Understanding and complying with your cryonics organization requirements is confusing and often simply will not be done by most insurance companies.

comment by MichaelGR · 2010-01-22T23:56:39.071Z · LW(p) · GW(p)

Don't let the prices on that page discourage you from doing independent research.

There might be life insurance providers in your area that would have no problem naming Alcor or CI as the beneficiary and that could sell you enough life insurance to cover all costs for a lot less money than that.

edit: I've just had a look, and I could get a 10-year term insurance for $200,000 for about $200/year. Definitely doesn't have to be many thousands.

comment by Tom_Talbot · 2010-01-23T23:47:42.222Z · LW(p) · GW(p)

This ExtroBritannia video contains some financial details about cryonics in the UK.

comment by Morendil · 2010-01-22T18:56:54.454Z · LW(p) · GW(p)

The estimate on that page is just that, an estimate. I'm awaiting an actual quote before making up my mind on the matter. Suggest you fill out the quote request form and contact him; if you do, I'd be interested in what you learned. I'm still waiting for word back myself.

In France cryonics is actually illegal, which is more of a challenge. Bodies are to be buried or cremated within 6 days of death; I don't know if that's a reasonable window of opportunity for transport. What is the UK's position?

The decision tree for cryonics is complex, to say the least. I was briefly tempted earlier today to look into software for argument mapping to expose my reasoning more clearly, if only for myself. Even if I ultimately confirm my intuition that it's what I want to do, the mapping would show more clearly which steps are critical to tackle and in what order.

Replies from: ciphergoth, ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-23T10:13:38.401Z · LW(p) · GW(p)

Oh, and an argument map for cryonics would be fantastic.

Replies from: Morendil, Morendil, Morendil
comment by Morendil · 2010-01-23T11:27:38.489Z · LW(p) · GW(p)

Existing cryonics argument maps: here.

(I'll update this comment if I find more.)

comment by Morendil · 2010-01-23T10:57:47.715Z · LW(p) · GW(p)

I'm up for creating a more complete one than can currently be found on the Web.

I'd appreciate some help in selecting some form of software support.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-23T11:10:52.404Z · LW(p) · GW(p)

Actually the difficulty is going to be in finding the opposition.

I have scoured Google as best I can, asked my friends on my blog for help, and even emailed some prominent people who've spoken out against cryonics, looking for the best anti-cryonics articles I can find. It is really astonishing that the pickings are so slim. You'd think there would be at least one blogger with medical knowledge who occasionally posted articles that tried to rebut things that cryonicists actually say, for example; I haven't found it.

Replies from: Morendil
comment by Morendil · 2010-01-23T11:53:59.335Z · LW(p) · GW(p)

I'm not sure what a broad search for objections really buys you. From my perspective there is a "basic cryonics scenario" and a smallish number of variants. If you pick a scenario which is a good compromise between maximally plausible and maximally inconvenient, you should flush out most of the key points where things can go wrong.

The basic scenario might be something like this:

  • I sign up for cryonics and life insurance
  • I keep up with my payments for a few years
  • I get run over by a car
  • I am rushed to the hospital and die there
  • I am transferred to the care of a funeral director in France
  • the required paperwork gets completed
  • my body is packed in ice and shipped by air to the US
  • I am prepared for suspension, sustaining inevitable damage
  • years pass, during which the facility stays viable
  • a revival procedure is developed and becomes cheap
  • surviving relatives fund my revival
  • I blink, smile and say "OK, let's go see what's changed"
  • I turn out to be the same person, continuous with the old me
  • that new life turns out to be enjoyable enough

There are shorter alternative scenarios, such as the ones in which you pay for the insurance but never need it owing to life extension and other technology catching up faster than expected, so that you never actually execute your suspension contract. You'd turn up fewer reasons not to do it if you only examined those, so it makes sense to look at the scenario that exercises the greater number of options for things to go wrong. On the other hand, we shouldn't burden the scenario with extraneous details, such as major changes in the legal status of cryonics facilities, etc. These should be accounted for by a "background uncertainty" about what the relatively far future holds in store.

The backbone of our argument map is that outline above, perhaps with more "near" details filled in as we go back over that insanely long discussion thread.

The research articles are only likely to help us out with the major theoretical issue, which is "How much of your personality is erased through damage done by death, suspension, and revival?" Answers range from "all" to "none" and hinge partly on philosophical stances, such as whether you believe personality is equivalent to information encoded in the brain.

All that is definitely part of the decision tree, but seems only a small part of the story. They are things we won't be able to do much about. The interesting part of the tree is the things we could do something about. As you noted, if cryonics works in principle but is unaffordable or runs into tricky practical problems such as getting your body moved about, you're no longer weighing just a money cost against a philosophical possibility; you're weighing the much larger hassle cost of changing your life plans (e.g. moving to the US sooner or later, or setting yourself a goal of getting rich), and that changes the equation drastically.

comment by Morendil · 2010-01-23T15:01:16.601Z · LW(p) · GW(p)

Here is a sketch of one made with bCisive online, the best of what I have evaluated so far. If you sign up it looks as if I can invite you in to edit collaboratively.

It's easy to use and I find the visualization useful, but it bothers me that it's basically just a mind map; you can't add semantic information about how plausible you find various arguments. Another app with much the same characteristics is Debategraph.

I have tried ArguNet, which is supposed to reconstruct logical structure, but the UI is unusable.

comment by Paul Crowley (ciphergoth) · 2010-01-22T23:34:31.920Z · LW(p) · GW(p)

The estimate is a range with a lower end. I may be able to afford it one day, in which case I don't want to piss him off by mucking him about before then.

There's some evidence other options may be closer to reach; I haven't entirely given up yet. At the very least I'll have cleared a lot of the hurdles that people "cryocrastinate" about, like investigating the options and talking to family, making a later signup more likely.

Replies from: Morendil
comment by Morendil · 2010-01-23T11:09:23.259Z · LW(p) · GW(p)

Let me rephrase. Rudi Hoffman says it costs a minimum of $1500 a year. The quotes I have seen for term life insurance work out to less than $300 a year for a $100K payout and a 30-year period. There is a discrepancy here which is puzzling, and one of the best ways I see to resolve the discrepancy is to ask the man himself, which I have done.

He is taking way more time to respond than I was expecting, which is messing up my feelings about the whole thing. You would help me if you were to contact him yourself and share your info. We don't know each other much, so I won't feel bad if you aren't interested in helping me out. Having said that: will you help me?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-23T11:13:40.297Z · LW(p) · GW(p)

You should be looking at whole life insurance, not term life insurance.

Let me know if he takes more than a week to reply...

Replies from: MichaelGR, Morendil
comment by MichaelGR · 2010-01-25T15:44:47.751Z · LW(p) · GW(p)

You should be looking at whole life insurance, not term life insurance.

Could you elaborate on why you think that?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-25T15:52:49.811Z · LW(p) · GW(p)

If I get 40-year term insurance now but live to 78, I'll then be uninsured and unable to afford more insurance, so I won't be covered when I most need it.

Replies from: Cyan, Kevin, MichaelGR
comment by Cyan · 2010-01-25T16:20:43.958Z · LW(p) · GW(p)

The general idea of term insurance is to ensure that your heirs get something if you die during the n years it takes you to build them an inheritance. The cryonics equivalent of this idea is that you don't need whole life insurance if you expect over the term of the insurance to save enough money to pay for cryonics out of pocket.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-01-25T18:06:57.777Z · LW(p) · GW(p)

The general idea of term insurance is to ensure that your heirs get something if you die during the n years it takes you to build them an inheritance.

I think the idea is to ensure an income during the term of childhood; when the term ends, one expects that they are capable of supporting themselves. If it were about inheritance, then it would be comparable to cryonics, but I don't think it is.

Replies from: Cyan
comment by Cyan · 2010-01-25T18:26:42.564Z · LW(p) · GW(p)

From Wikipedia: "Many financial advisors or other experts commonly recommend term life insurance as a means to cover potential expenses until such time that there are sufficient funds available from savings to protect those whom the insurance coverage was intended to protect." That's the idea I meant to convey; the above phrasing nicely covers both the "inheritance" and "funding cryonics" cases.

comment by Kevin · 2010-01-25T16:37:27.703Z · LW(p) · GW(p)

I suspect you are under-estimating your future earnings ability, unless you plan on going into something that pays poorly, like trying to solve existential risk.

comment by MichaelGR · 2010-01-25T16:34:53.206Z · LW(p) · GW(p)

I'm still not sure what to get. I hear that whole life is significantly more expensive than term, so the savings from term could be put aside to later pay for the higher premiums? Hmm, but maybe whole life makes more sense. Or since I'm 27, maybe I could get a 10-year term and then switch to whole life.

Replies from: Blueberry, ciphergoth, Kevin
comment by Blueberry · 2010-01-25T22:55:58.646Z · LW(p) · GW(p)

I hear that whole life is significantly more expensive than term, so the savings from term could be put aside to later pay for the higher premiums?

Yes. In fact, that's exactly what the insurance company does with your premiums when you buy whole life. Except they take a bunch out for themselves. There's no good reason to buy whole life when you could just buy term and invest the difference until you have enough saved to pay for the cryofund. Except if you don't think you will be disciplined enough to regularly invest the difference, and even then, you can have money automatically taken from a bank account into your cryonics account.
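
To make the comparison concrete, here is a minimal sketch of the "buy term and invest the difference" arithmetic. The premiums, return rate, and horizon below are illustrative assumptions, not quotes from any insurer.

```python
# Illustrative only: premiums, return rate, and horizon are assumptions, not quotes.

years = 40
whole_life_premium = 1_500   # assumed fixed annual whole-life premium
term_premium = 300           # assumed annual term premium for the same face value
annual_return = 0.04         # assumed return on the invested difference

invested = 0.0
for _ in range(years):
    # Each year, invest the premium difference and let the balance compound.
    invested = (invested + (whole_life_premium - term_premium)) * (1 + annual_return)

print(f"Difference invested for {years} years: ${invested:,.0f}")
# If this ends up above the cryonics funding requirement, term-plus-saving covers
# the cost even after the term policy expires; if not, the guaranteed whole-life
# payout is doing work that the savings alone would not.
```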

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-25T23:36:11.567Z · LW(p) · GW(p)

Albeit that if the money is in your name, those who might otherwise be your heirs will have a motive to try and stop your cryonic preservation to get their hands on the money.

It's happened.

Replies from: Blueberry
comment by Blueberry · 2010-01-25T23:46:46.087Z · LW(p) · GW(p)

Albeit that if the money is in your name, those who might otherwise be your heirs will have a motive to try and stop your cryonic preservation to get their hands on the money.

True, but you can set up an irrevocable trust to prevent that.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-26T05:39:19.827Z · LW(p) · GW(p)

It's a lot easier to just buy a life insurance policy.

comment by Paul Crowley (ciphergoth) · 2010-01-25T17:42:22.698Z · LW(p) · GW(p)

Yes, I found whole life to be 4-5 times more expensive than term.

I'm currently contemplating going for 40-year term and betting that either the world will end before then, or if not that the Singularity will take place, or if not that cryopreservation will be much cheaper by then.

comment by Kevin · 2010-01-25T16:38:11.441Z · LW(p) · GW(p)

Is there an actuary in the house?

comment by Morendil · 2010-01-23T11:55:11.011Z · LW(p) · GW(p)

Ten days and counting.

comment by Paul Crowley (ciphergoth) · 2010-01-21T20:11:25.272Z · LW(p) · GW(p)

Magic, thanks! As it turns out, people's default assumption isn't that I've joined a cult, it's that this is my mid-life crisis. What I find very odd is that some of this is from people who knew me ten years ago!

comment by AngryParsley · 2010-01-21T04:22:53.549Z · LW(p) · GW(p)

Alcor and CI are both 501(c)(3) nonprofits. From the IRS guide to applying for tax-exempt status:

A 501(c)(3) organization:

...

  • must ensure that its earnings do not inure to the benefit of any private shareholder or individual;

  • must not operate for the benefit of private interests such as those of its founder, the founder’s family, its shareholders or persons controlled by such interests;

The only people making money off this are the employees (all 15 of them between CI and Alcor) and the life insurance companies. The rest of us have to settle for a warm fuzzy feeling when people sign up.

ETA: Correction. CI is not a 501(c)(3), just a regular nonprofit. Thanks ciphergoth.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-21T12:34:09.667Z · LW(p) · GW(p)

Actually, the CI is not a 501(c)(3) though it is a non-profit.

comment by CronoDAS · 2010-01-19T21:36:36.194Z · LW(p) · GW(p)

I see a disturbing surface similarity.

"If you don't teach your children the One True Religion, you're a lousy parent."

My own excuse for not signing up for cryonics is not that I don't think it will work, it's that I don't particularly value my own existence. I'm much more concerned about the effects of my death on other people than its effects on me; I've resolved not to die before my parents do, because I don't want them to suffer the grief my death would cause.

Incidentally, is it possible to sign someone else up for cryonics, if they don't object?

Replies from: alyssavance, wedrifid, MichaelGR, akshatrathi, gwern
comment by alyssavance · 2010-01-19T22:04:52.781Z · LW(p) · GW(p)

"If you don't teach your children the One True Religion, you're a lousy parent."

Given that the One True Religion is actually correct, wouldn't you, in fact, be a lousy parent if you did not teach it? Someone who claims to be a Christian and yet doesn't teach their kids about Christianity is, under their incorrect belief system, condemning them to an eternity of torture, which surely qualifies as being a lousy parent in my book.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-01-19T22:59:24.628Z · LW(p) · GW(p)

IAWYC, but to nitpick, not all Christians believe in an eternity of torture for nonbelievers. Though of course the conclusion follows for any belief in a substantially better afterlife for believers.

(I feel like this is important to point out, to avoid demonizing an outgroup, but don't trust that feeling very much. What do others think?)

Replies from: AllanCrossman, Kaj_Sotala
comment by AllanCrossman · 2010-01-21T21:32:32.469Z · LW(p) · GW(p)

IAWYC, but to nitpick, not all Christians believe in an eternity of torture for nonbelievers.

Indeed, but I wonder how they deal with passages like Revelation 14:11, Matthew 25:41, or Mark 9:43.

It's conceptually possible to believe that the Bible is full of nonsense yet Jesus really did die for our sins. But nobody ever seems to actually hold this position. Or if they do, they never seem to come out and say it.

Replies from: Nick_Tarleton, Richard_Kennaway, Christian_Szegedy, thomblake
comment by Nick_Tarleton · 2010-01-21T21:42:28.483Z · LW(p) · GW(p)

Indeed, but I wonder how they deal with passages like Revelation 14:11, Matthew 25:41, or Mark 9:43.

Frequently, by not knowing about them.

It's conceptually possible to believe that the Bible is full of nonsense yet Jesus really did die for our sins. But nobody ever seems to actually hold this position. Or if they do, they never seem to come out and say it.

They do, but they express it as either "the Bible was written by fallible men" or "it's all Deep Metaphor".

comment by Richard_Kennaway · 2010-01-23T23:37:04.914Z · LW(p) · GW(p)

Indeed, but I wonder how they deal with passages like Revelation 14:11, Matthew 25:41, or Mark 9:43.

If you really want to know, you could try asking them. Or reading their books, if you don't know any. You could even think up good arguments yourself for reconciling the belief with the verses.

I have no book recommendations. My point is that flaunting Biblical quotations and going "nyah! nyah!" does not make a good argument, even if the conclusion is correct. Zombie-hunting requires better instruments than that.

Replies from: AllanCrossman
comment by AllanCrossman · 2010-01-24T09:42:54.574Z · LW(p) · GW(p)

you could try asking them

I have. You point out the verses to them and they say things like "Well all I know is that God is just." Or they just say "Hmm." What I want to know is what a thinking sort of hell-denying Christian says.

Or reading their books

Since this is essentially a heretical position, I'm not sure how heavily it's defended in the literature. Still, I do have on my bookshelf an anthology containing a universalist essay by Marilyn McCord Adams, where she states that "I do not regard Scripture as infallible [... but ...] I do not regard my universalist theology as un-Scriptural, because I believe the theme of definitive divine triumph is central to the Bible". She seems to want to reject the Bible and accept it too.

You could even think up good arguments yourself for reconciling the belief with the verses.

I think the most coherent Christian position would be: There is a God. Various interesting things happened at God's doing, including Jesus and his miracles. The people who witnessed all these events wrote about them, but invariably these accounts are half fiction or worse. Paul is clearly a charlatan.

But nobody seems to believe this: Christians who think the Bible is fallible nevertheless act as if it is mostly right.

flaunting Biblical quotations [...] does not make a good argument

It's necessary when dealing with the doublethink of people who want to take the Bible as divine yet reject key parts of it.

going "nyah! nyah!"

Note that this sort of comment provokes an automatic reaction to fight back, rather than to consider whether you might be correct.

Replies from: Richard_Kennaway, RobinZ
comment by Richard_Kennaway · 2010-01-24T12:46:57.601Z · LW(p) · GW(p)

What I want to know is what a thinking sort of hell-denying Christian says.

Many doctrines are collected here. Not all have the damned eternally waterboarded with boiling lead. For example, the Orthodox churches teach that hell is the response to the direct presence of God by the soul which has rejected Him. It is no more a punishment than the pain you feel if you cut a finger.

And then, whatever hell is, who goes there, and do they stay there for eternity? Doctrines differ on this as well -- the issue of works vs. faith, or the issue of those who have never encountered the Word and have not been in a position to accept or reject it.

How do they explain Biblical passages? By interpreting them (as they would say) correctly. Unless you look to extreme fringe groups who think that the King James Bible was a new revelation whose every letter is to be as meticulously preserved and revered as Moslems do the Koran, every Christian doctrine allows that the text needs interpretation. As well, the Catholic and Orthodox churches do not regard the Bible as the sole source of the Word, regarding the settled doctrine of the church as another source of divine revelation. There is also the Book of Nature, which God also wrote.

With multiple sources of divine revelation, but an axiomatic unity of that revelation, any conflicts must result from imperfect human understanding. Given the axiom, it is really not difficult to come up with resolutions of apparent conflicts. Confabulating stories in order to maintain an immovable idea is something the brain is very good at. Watch me confabulate a Bayesian justification of confabulation! Strong evidence can always defeat strong priors, and vice versa. So if the unity of God's Word is as unshakeable as 2+2=4, a mere difficult passage is less than a feather on the scales.

I say this not to teach Christian doctrines (I'm as atheist as anyone, and my Church of Scotland upbringing was as unzealous as it could possibly be and still be called a religion), but to point out that Christians do actually have answers to these questions. Ok, bad answers if you like, but if you want to argue against them you need to either tackle those answers, or find a weapon so awesome it blows the entire religious enterprise out of the water. (I'm sure there's a perfect LW link for the latter, but I can't at the moment recall where. This is rather diffuse.) Just quoting the Bible is like creationists smugly telling each other that evolutionists think a monkey gave birth to a man. It's an exercise in pouring scorn on Them. You know, those Others, over There.

As Nick Tarleton warned, upthread.

Replies from: AllanCrossman
comment by AllanCrossman · 2010-01-24T13:39:42.088Z · LW(p) · GW(p)

Just quoting the Bible is like creationists smugly telling each other that evolutionists think a monkey gave birth to a man.

It's not like that at all. Many Bible passages dealing with Hell are perfectly clear, whereas it takes a great distortion of evolutionary theory to get to "a monkey gave birth to a man".

comment by RobinZ · 2010-01-24T15:12:07.858Z · LW(p) · GW(p)

Speaking of thinking Christians makes me think of Fred Clark: some clue might be found in his interpretation of Genesis 6-9.

Replies from: AllanCrossman
comment by AllanCrossman · 2010-01-24T17:21:48.869Z · LW(p) · GW(p)

It would be easier to accept texts as mere teaching stories if they were clearly intended as such. A few are, like the Book of Job, and possibly, Jonah. Parts of Genesis, maybe (though I doubt it). But it can't be right to dismiss as a mere story everything that doesn't seem likely or decent. Much of it is surely intended literally.

Replies from: RobinZ
comment by RobinZ · 2010-01-24T19:40:05.602Z · LW(p) · GW(p)

I would agree, which is part of why I found the linked post so strange.

comment by Christian_Szegedy · 2010-01-21T21:45:41.894Z · LW(p) · GW(p)

A very common argument taught by the traditional churches (as opposed to the neo-evangelical churches in America) is that the notions of "eternal fire" and "hell" are just symbols to express the pain caused by distance from God. Therefore, the punishment is self-inflicted, not something imposed by God directly, but rather a logical consequence.

comment by thomblake · 2010-01-21T21:41:00.878Z · LW(p) · GW(p)

It's not too hard to interpret these passages to mean that hell exists, and is only for certain kinds of sins. There's a difference between rejecting God and never having heard of him, for instance.

I'm always astounded when Protestants do actually believe the Bible is not full of nonsense. The Catholic Church did a lot of editing / selection of what went in there, using "Sacred Tradition" as their primary justification. Given that Protestants reject Sacred Tradition, it should follow that they have no basis for choosing which apocrypha should have been included in the first place, and shouldn't just take the Catholics' word for it.

Replies from: Christian_Szegedy, orthonormal, CronoDAS
comment by Christian_Szegedy · 2010-01-21T22:11:44.932Z · LW(p) · GW(p)

Protestant religions are mostly political constructs. They tried to make a few theological changes, but mostly at the cosmetic level, only to justify their political independence from the Pope.

Even if that were not the case, religions need something sacrosanct, which in this case is the scripture. It would have been politically very unwise to try to compromise the apparent sanctity of that source, especially since it was very easy to put their own interpretation on it. Even modern evangelical religions don't try to modify the wording of the actual scripture.

Additionally, since the language of religion was Latin for more than 1500 years, the actual text of the Bible changed practically not at all after around 400. One could argue that the church and its ideology at that time were more different from the current Catholic church and its teachings than the current Protestant churches and theirs are.

Replies from: thomblake
comment by thomblake · 2010-01-21T22:29:02.903Z · LW(p) · GW(p)

the actual text of the Bible changed practically not at all after around 400.

I'd agree with you there, but the period before 400ish was not negligible. Before that time the New Testament wasn't even a book, but rather a collection of different books, many of which did not make it into the canon. Clearly, people were actually concerned about the issue of canonicity at around the time of the Reformation; it was touched on at Trent, as well as at various non-RC Christian councils, in the 16th century or so.

That said, while your political explanation seems correct, it should not be comforting to Protestant theologians.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-01-21T22:53:56.784Z · LW(p) · GW(p)

That said, while your political explanation seems correct, it should not be comforting to Protestant theologians.

To be fair: one of the main cornerstones of a lot of Christian religions, the divinity of Christ, was itself quite a political decision, made in the fourth century.

Theologians learned to live with it as well.

comment by orthonormal · 2010-01-23T21:00:43.049Z · LW(p) · GW(p)

The Catholic Church did a lot of editing / selection of what went in there, using "Sacred Tradition" as their primary justification.

Literary quality and coherence were actually optimized pretty well in the selection process; if you don't believe me, read an apocryphal gospel sometime. They're basically Jesus fanfic of various stripes, much more ridiculous than the ones deemed canonical, and the vast (secular) scholarly consensus has them all written in the second or third centuries (excepting the Gospel of Thomas).

Then again, since many apocryphal gospels were written to buttress theologies different from the mainline one, it was easy to have them rejected for that reason alone.

comment by CronoDAS · 2010-01-23T04:16:25.247Z · LW(p) · GW(p)

Some Protestant sects do, indeed, use a slightly different Bible than the Catholic one. (Or so I heard.)

Replies from: orthonormal
comment by orthonormal · 2010-01-23T21:01:54.501Z · LW(p) · GW(p)

That's correct; they drop some late-written Old Testament books, which they call the "Catholic Apocrypha".

comment by Kaj_Sotala · 2010-01-20T10:36:00.933Z · LW(p) · GW(p)

Also, there are some Christian denominations which think that nonbelievers simply die and don't get revived after the world has ended, unlike the believers, who are.

IIRC some also put more weight on doing good works during your life than on whether you are actually a believer or not.

Replies from: Dustin, Kevin
comment by Dustin · 2010-01-20T19:53:01.000Z · LW(p) · GW(p)

This is what Jehovah's Witnesses believe.

comment by Kevin · 2010-01-20T10:50:36.283Z · LW(p) · GW(p)

That was also a belief of some of the most important Jewish scholars. Orthodox Judaism holds it as a truth, and the other sects of Judaism don't believe it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:25:32.528Z · LW(p) · GW(p)

Not true AFAIK; last time I checked I was told that sinners got a maximum of twelve months in Gehenna, or eleven months if someone says Kaddish for them, and Saturdays off.

Replies from: Kevin
comment by Kevin · 2010-01-21T19:32:56.162Z · LW(p) · GW(p)

Does this change from Orthodox sect to Orthodox sect or even rabbi to rabbi? I glanced at Wikipedia and assumed that quote from the Talmud applied, but maybe it is interpreted differently, quoted out of context, or just selectively ignored. I think I just underestimated the ability of Orthodox Jews to rationalize away their actual belief system, especially the most negative aspects. http://en.wikipedia.org/wiki/Resurrection#Orthodox_Judaism

I would guess that the interpretation changed when Sheol stopped being interpreted as "grave" and started being interpreted as "hell." I don't know which meaning of Sheol the Talmudic scholars had.

comment by wedrifid · 2010-01-20T01:40:27.621Z · LW(p) · GW(p)

I see a disturbing surface similarity.

"If you don't teach your children the One True Religion, you're a lousy parent."

It's good reasoning (from respective premises) in both cases. It is believing that the One Religion is True that is stupid. We have further negative associations with that kind of statement because we expect most 'stable' religious people to compartmentalise their beliefs such that the stupidity doesn't leak out into their actual judgements.

comment by MichaelGR · 2010-01-19T22:08:38.678Z · LW(p) · GW(p)

My own excuse for not signing up for cryonics is not that I don't think it will work, it's that I don't particularly value my own existence.

Could you elaborate on this?

If you are depressed, or not enjoying life, or not satisfied with who you are for some reason or other, have you considered that if we get to a future where technology is vastly more advanced than it is now, there might be ways to fix that and at least bring you to the level of "life enjoyment" that others who want to sign up for cryonics have (if not much more than that, since we are currently very limited)?

Because of that possibility, maybe it would make sense to sign up, and if you get to the "other side" and realize that you still don't value your existence and there's no way to change that, then commit suicide.

Replies from: Kaj_Sotala, CronoDAS, CronoDAS
comment by Kaj_Sotala · 2010-01-20T10:48:35.720Z · LW(p) · GW(p)

Personally, I have a mild preference towards being alive rather than dead, but it's not strong enough to motivate me to look at cryonics options. (Especially since their availability in Europe is rather bad.) This is partially motivated by the fact that I consider continuity of consciousness to be an illusion in any case - yes, there might be a person tomorrow who remembers thinking the thoughts of me today, but that's a different person from the one typing these words now.

Of course, I'm evolutionarily hardwired to succumb to that illusion to some degree. Postulating a period of cryonic suspension after which I'm rebuilt, however, feels enough like being effectively killed and then reborn that it breaks the illusion. Also, that illusion is mostly something that operates in 'near' mode. Evoking the far, post-revival future gets me into 'far' mode, where I'm much less inclined to attach particular value to the survival of this particular being.

Finally, there's also the fact that I consider our chances of actually building FAI and not getting destroyed by UFAI to be rather vanishingly small.

Replies from: Dustin, MichaelGR, Kevin
comment by Dustin · 2010-01-27T02:16:41.542Z · LW(p) · GW(p)

This is partially motivated by the fact that I consider continuity of consciousness to be an illusion in any case - yes, there might be a person tomorrow who remembers thinking the thoughts of me today, but that's a different person from the one typing these words now.

Interesting. That thought process is how I made a case for cryonics to a friend recently. Their objection was that they didn't think it would be them, and I countered with the fact that the you of tomorrow isn't really the same as the you of today...and yet you still want to live till tomorrow.

comment by MichaelGR · 2010-01-20T15:02:39.471Z · LW(p) · GW(p)

Personally, I have a mild preference towards being alive rather than dead, but it's not strong enough to motivate me to look at cryonics options. (Especially since their availability in Europe is rather bad.)

Do you think that there might be a link between these two things?

Aubrey de Grey often talks about the "pro-death trance", and says that as long as people think that death from the diseases of aging is inevitable, they'll find ways to rationalize why "it's a good thing" or at least "not so bad".

Do you think that if cryonics were widely available where you are and affordable (a hundred euros a year for life insurance, for example), this would increase your interest in it?

Replies from: whpearson, Kaj_Sotala
comment by whpearson · 2010-01-21T11:43:54.638Z · LW(p) · GW(p)

I have pretty much the same view as Kaj: I'd get cryonics if it were cheap.

If I did I'd want to put a note that I'd be okay with people using my brain for science when they needed it to test scanning equipment and the like. For some reason I can associate better and feel more positive about imagining papers being published about my brain than being reincarnated in silicon (or carbon nanotubes).

comment by Kaj_Sotala · 2010-01-20T15:06:45.905Z · LW(p) · GW(p)

Do you think that if cryonics were widely available where you are and affordable (a hundred euros a year for life insurance, for example), this would increase your interest in it?

Probably, yes.

Replies from: UnholySmoke
comment by UnholySmoke · 2010-01-21T15:09:08.162Z · LW(p) · GW(p)

I often have this thought, and then get a nasty sick feeling along the lines of 'what the hell kind of expected utility calculation am I doing that weighs a second shot at life against some amount of cash?' Argument rejected!

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-21T15:17:11.170Z · LW(p) · GW(p)

This has to be a rationality error. Given that it's far from guaranteed to work, there has to be an amount that cryonics could cost such that it wouldn't be worth signing up. I'm not saying that the real costs are that high, just that if you're making a rational decision such an amount will exist.

Replies from: UnholySmoke
comment by UnholySmoke · 2010-01-21T16:52:36.653Z · LW(p) · GW(p)

Sorry, should have given more context.

Given the sky-high utility I'd place on living, I wouldn't expect to see the numbers crunch down to a place where a non-huge sum of money is the difference between signing up and not.

So when someone says 'if it were half the price maybe I'd sign up' I'm always interested to know exactly what calculations they're performing, and exactly what it is that reduces the billions of utilons of living down to a marginal cash sum. The (tiny?) chance of cryonics working? Serious coincidence if those factors cancel comfortably. Just smacks of bottom-line to me.

Put it this way - imagine cryonics had been seriously, prohibitively expensive for many years after its introduction. Say it still was today, for some reason, and then tomorrow, after much debate and hand-wringing about immortality for the uber-rich, it suddenly and very publicly dropped to current levels. I'd expect to see a huge upswing in signing up. Such is the human being!
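
For what it's worth, here is a minimal sketch of the kind of calculation being asked about. Every number below is an illustrative placeholder, not anyone's actual estimate; the point is just the structure of the trade-off.

```python
# Toy expected-value comparison; all inputs are made-up placeholders.

p_revival = 0.05          # assumed overall probability that cryonics works for you
value_of_revival = 1e9    # assumed utilons assigned to the extra life
utilons_per_dollar = 1.0  # assumed marginal utility of money

annual_cost = 300         # assumed yearly cost of membership plus insurance
years_paying = 50
total_cost = annual_cost * years_paying

expected_gain = p_revival * value_of_revival
net = expected_gain - total_cost * utilons_per_dollar

print(f"Expected gain: {expected_gain:,.0f} utilons; cost: {total_cost * utilons_per_dollar:,.0f} utilons")
print(f"Net: {net:,.0f} (sign up iff positive)")
# With a sky-high value_of_revival, the price or the probability has to change by
# orders of magnitude before the sign flips - which is the point being made above.
```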

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-21T18:31:43.869Z · LW(p) · GW(p)

I agree with all of this.

comment by Kevin · 2010-01-20T13:26:41.794Z · LW(p) · GW(p)

Do you agree with the quantum physics sequence? This is the big reveal:

If you can see the moments of now braided into time, the causal dependencies of future states on past states, the high-level pattern of synapses and the internal narrative as a computation within it - if you can viscerally dispel the classical hallucination of a little billiard ball that is you, and see your nows strung out in the river that never flows - then you can see that signing up for cryonics, being vitrified in liquid nitrogen when you die, and having your brain nanotechnologically reconstructed fifty years later, is actually less of a change than going to sleep, dreaming, and forgetting your dreams when you wake up.

You should be able to see that, now, if you've followed through this whole series. You should be able to get it on a gut level - that being vitrified in liquid nitrogen for fifty years (around 3e52 Planck intervals) is not very different from waiting an average of 2e26 Planck intervals between neurons firing, on the generous assumption that there are a hundred trillion synapses firing a thousand times per second. You should be able to see that there is nothing preserved from one night's sleep to the morning's waking, which cryonic suspension does not preserve also. Assuming the vitrification technology is good enough for a sufficiently powerful Bayesian superintelligence to look at your frozen brain, and figure out "who you were" to the same resolution that your morning's waking self resembles the person who went to sleep that night.

http://lesswrong.com/lw/qx/timeless_identity/
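
For anyone who wants to check the arithmetic in that passage, here is a quick back-of-the-envelope sketch; the synapse count and firing rate are the quote's own generous assumptions, and the Planck time is the standard approximate constant.

```python
# Back-of-the-envelope check of the Planck-interval figures in the quoted passage.

PLANCK_TIME = 5.39e-44              # seconds (approximate)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

suspension_seconds = 50 * SECONDS_PER_YEAR
suspension_planck = suspension_seconds / PLANCK_TIME       # ~3e52

synapses = 1e14                     # "a hundred trillion synapses"
firings_per_second = 1e3            # "firing a thousand times per second"
mean_gap_seconds = 1 / (synapses * firings_per_second)     # brain-wide mean gap between firings
mean_gap_planck = mean_gap_seconds / PLANCK_TIME           # ~2e26

print(f"50 years suspended:  {suspension_planck:.1e} Planck intervals")
print(f"Gap between firings: {mean_gap_planck:.1e} Planck intervals")
```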

Replies from: Kaj_Sotala, wedrifid
comment by Kaj_Sotala · 2010-01-20T13:59:00.685Z · LW(p) · GW(p)

Assuming the vitrification technology is good enough for a sufficiently powerful Bayesian superintelligence to look at your frozen brain, and figure out "who you were" to the same resolution that your morning's waking self resembles the person who went to sleep that night.

But I don't think the person tomorrow is the same person as me today, either.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-21T07:22:01.428Z · LW(p) · GW(p)

Point taken. Any interest in having your volition realized? This seems much more likely to me to matter, and I do happen to run an organization aimed at providing it whether you pay us or not - but we'd still appreciate your help.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-21T11:23:11.347Z · LW(p) · GW(p)

Well, I am a monthly donor, and unless something unexpected happens I'll be coming over in a few months to see what I can do for SIAI, so yes. :)

comment by wedrifid · 2010-01-20T14:19:34.395Z · LW(p) · GW(p)

being vitrified in liquid nitrogen when you die, and having your brain nanotechnologically reconstructed fifty years later, is actually less of a change than going to sleep, dreaming, and forgetting your dreams when you wake up.

I haven't been entirely convinced on that note. The process of dying and the time it takes from heart stopping to head frozen in a jar seems like it would give plenty of opportunity for minor disruptions even granted that a superintelligence could put it back together.

Replies from: blogospheroid
comment by blogospheroid · 2010-01-23T20:09:18.994Z · LW(p) · GW(p)

I'm not sure if this has ever been presented as a scenario, but even if you are looking at many minor disruptions, physically speaking, there aren't that many places that your neurons would have gone.

So it is possible that many versions of you might be woken up: wedrifid1, wedrifid2, etc., each the result of a different extrapolation that was minor enough to still count as an extrapolation of you, yet major enough to deserve a separate version. This would only happen if the damage had occurred in a place critical to your sense of self. I simply don't know enough neurology and neurochemistry to say how much damage that would be and where, but I'm sure that the superintelligences would be able to crack that one.

And your great-grandchildren, being the nice, sweet posthumans that we expect them to be (they did recover you, didn't they?), will spend time with all versions of their great-grandparents. Their brains would be running at higher clock speeds, and keeping up intelligent conversations with 10 versions of you would be trivial for them.

comment by CronoDAS · 2010-01-19T23:26:19.274Z · LW(p) · GW(p)

Could you elaborate on this?

Most of my desires seem to take the form "I don't want to do/experience X". Those desires of the form "I want to do/experience X" seem to be much weaker. Being dead means that I will have no experiences, and will therefore never have an experience I don't want, at the cost of never being able to have an experience I do want. Because I want to avoid bad experiences much more than I want to have good experiences, being dead doesn't seem like all that bad a deal.

I'm also incredibly lazy. I hate doing things that seem like they take work or effort. If I'm dead, I'll never have to do anything at all, ever again, and that has a kind of perverse appeal to it.

Replies from: Dustin, Vladimir_Nesov, UnholySmoke
comment by Dustin · 2010-01-20T01:01:23.755Z · LW(p) · GW(p)

I just wanted to note that your post seems completely alien to me.

Replies from: Bongo
comment by Bongo · 2010-01-20T17:24:17.784Z · LW(p) · GW(p)

Not to me.

comment by Vladimir_Nesov · 2010-01-20T01:40:26.165Z · LW(p) · GW(p)

Because I want to avoid bad experiences much more than I want to have good experiences, being dead doesn't seem like all that bad a deal.

This rejection doesn't work: if the world of the future changes so that bad experiences don't happen, and good experiences are better, it's in your interest to see it. Furthermore, do you prefer your current disposition, or would you rather it change?

Replies from: CronoDAS
comment by CronoDAS · 2010-01-20T02:26:28.053Z · LW(p) · GW(p)

I don't know if I want it to change or not, but that doesn't seem like something to worry about, because I don't know how to change my disposition and I don't know how to go about figuring out how to change my disposition.

Replies from: wedrifid
comment by wedrifid · 2010-01-20T02:37:10.858Z · LW(p) · GW(p)

You know what? Someone should just go hunt down CronoDAS and forcibly cryo-suspend him. It'd be doing everyone a favour. He'd get to live in a future where he doesn't have to be geek-emo, a perceived 'murder' would be less shameful than a suicide for his parents, and we wouldn't have the same old hand wringing conversation all the time.

See you on the other side. (Or not, as the case may be.)

Replies from: Bindbreaker, CronoDAS
comment by Bindbreaker · 2010-01-20T02:47:32.245Z · LW(p) · GW(p)

This post was obviously a joke, but "we should kill this guy so as to avoid social awkwardness" is probably a bad sentiment, revival or no revival.

Replies from: wedrifid
comment by wedrifid · 2010-01-20T03:08:37.481Z · LW(p) · GW(p)

On the other hand, "we should (legally) kill this guy so as to save his life" is unethical and I would never do it. But it is a significant question and the kind of reasoning that is relevant to all sorts of situations.

comment by CronoDAS · 2010-01-20T04:13:30.728Z · LW(p) · GW(p)

we wouldn't have the same old hand wringing conversation all the time

Should I stop talking about this here?

Replies from: wedrifid
comment by wedrifid · 2010-01-20T04:24:25.429Z · LW(p) · GW(p)

No, I don't mind at all. As long as you don't mind that I don't treat this specific desire of yours with sombre dignity. I do, after all, think a death wish as an alternative to cryonic revival, where your mental health can be restored, is silly and something to laugh at (and so something to lower in status and discourage, without being actually aggressive).

Replies from: CronoDAS
comment by CronoDAS · 2010-01-20T04:34:58.085Z · LW(p) · GW(p)

Well, as long as I'm being funny...

Replies from: bogdanb
comment by bogdanb · 2010-01-20T22:12:18.369Z · LW(p) · GW(p)

Not to nitpick, but I think wedrifid was implying “ridiculous” rather than “funny”.

;-p

comment by UnholySmoke · 2010-01-21T15:11:53.831Z · LW(p) · GW(p)

Being dead != Not doing anything

Not doing something because you're lazy != Not existing

I don't believe that you put low utility on life. You're just putting low utility on doing stuff you don't like.

comment by CronoDAS · 2010-01-19T22:25:20.991Z · LW(p) · GW(p)

I don't know if I can be "fixed" without changing me to the point where I'm effectively somebody else. And that's not much different than someone in the future simply having a baby and raising it to be a better person than I am. Furthermore, if the future has to choose between resurrecting me and somebody raising a child from scratch, I prefer that somebody raise a child; I'd rather the future have someone better than "me" instead of someone that I would recognize as "me".

(Additionally, the argument you just made is also an argument for getting frozen right now instead of having to wait until you die a natural death before you get to be revived in a better future. "If the afterlife is so great, why not kill yourself and get there right now?")

Replies from: Vladimir_Nesov, MichaelGR
comment by Vladimir_Nesov · 2010-01-20T01:18:50.519Z · LW(p) · GW(p)

The future will have this choice (not to revive you), and will make it against you if this turns out to be a better option, but if you don't make it to the future, you won't give it the chance of doing this particular thing (your revival) in case it turns out to be a good thing.

Again, you can't be certain of what your preference actually says in not-clear-cut cases like this; you can't know for sure that you prefer some child to be raised in place of yourself. And for this particular question it seems to be a false dilemma, since it's likely that there will be no resource limitation of this kind, only moral optimization.

comment by MichaelGR · 2010-01-19T22:46:02.166Z · LW(p) · GW(p)

I don't know if I can be "fixed" without changing me to the point where I'm effectively somebody else.

I don't want to get into a whole other discussion here, but I think people change a lot throughout their lives - I know I sure did - and I'm not sure if this would be such a problem. Maybe it would be, but comparing the certainty of death to that potential problem, I know I'd take the risk.

Furthermore, if the future has to choose between resurrecting me and somebody raising a child from scratch, I prefer that somebody raise a child; I'd rather the future have someone better than "me" instead of someone that I would recognize as "me".

The cost of supporting another individual might be so low in the future that there would be no need to choose between you and someone else.

(Additionally, the argument you just made is also an argument for getting frozen right now instead of having to wait until you die a natural death before you get to be revived in a better future. "If the afterlife is so great, why not kill yourself and get there right now?")

For someone who doesn't want to live at all right now and would commit suicide anyway, then yes, I'd recommend getting cryo'ed instead.

But for someone who enjoys life, then no, I wouldn't recommend it because it might not work (though having that possibility is still better than the certainty of annihilation).

Life > Cryo uncertainty > Death

Replies from: dclayh
comment by dclayh · 2010-01-19T23:04:16.411Z · LW(p) · GW(p)

This leads directly into the morbid subject of "What is the optimal way to kill oneself, for purposes of cryo?"

Replies from: MichaelGR
comment by MichaelGR · 2010-01-20T00:14:20.652Z · LW(p) · GW(p)

I've actually been thinking about something similar:

What if I found out I had an incurable degenerative brain disease? At what point would I decide to get vitrified, to improve my chances of being successfully revived by keeping my brain in better condition at the time of my death?

Now that's a tough decision to make...

Replies from: AngryParsley
comment by AngryParsley · 2010-01-20T01:01:37.188Z · LW(p) · GW(p)

If you live in the US, make sure you have had life insurance for at least two years. Then move to Oregon or Washington.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:26:58.037Z · LW(p) · GW(p)

Suicide is automatic grounds for autopsy; if this is not true in the assisted-suicide states, I haven't heard about it.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T18:44:48.334Z · LW(p) · GW(p)

Technically, neither state considers it suicide. I don't know if that rules out autopsy in practice though.

From the Oregon Death with Dignity Act:

Nothing in ORS 127.800 to 127.897 shall be construed to authorize a physician or any other person to end a patient's life by lethal injection, mercy killing or active euthanasia. Actions taken in accordance with ORS 127.800 to 127.897 shall not, for any purpose, constitute suicide, assisted suicide, mercy killing or homicide, under the law. [1995 c.3 s.3.14]

From Washington Initiative 1000:

Nothing in this chapter authorizes a physician or any other person to end a patient’s life by lethal injection, mercy killing, or active euthanasia. Actions taken in accordance with this chapter do not, for any purpose, constitute suicide, assisted suicide, mercy killing, or homicide, under the law. State reports shall not refer to practice under this chapter as “suicide” or “assisted suicide.” Consistent with sections 1 (7), (11), and (12), 2(1), 4(1)(k), 6, 7, 9, 12 (1) and (2), 16 (1) and (2), 17, 19(1) (a) and (d), and 20(2) of this act, state reports shall refer to practice under this chapter as obtaining and self-administering life-ending medication.

comment by akshatrathi · 2010-01-20T00:07:48.838Z · LW(p) · GW(p)

I've resolved not to die before my parents do, because I don't want them to suffer the grief my death would cause.

How would you make sure that will not happen?

Replies from: CronoDAS
comment by CronoDAS · 2010-01-20T00:26:08.308Z · LW(p) · GW(p)

I'll rephrase.

I've resolved not to die voluntarily before my parents do.

comment by gwern · 2010-01-20T01:39:55.559Z · LW(p) · GW(p)

Incidentally, is it possible to sign someone else up for cryonics, if they don't object?

Obviously they have to actively consent at some point - even if only to sign the papers you shove in front of them. And then they need to cooperate while dying.

But I suppose you could do the research and fill out the form and pay for their insurance policy, yeah. But I wouldn't do that for someone who might screw it all up at the end.

comment by taw · 2010-01-20T07:01:26.414Z · LW(p) · GW(p)

in exchange for an extra $300 per year.

I'm inclined to believe this number is a lie, as I refuse to believe you are stupid enough to make mistakes of this order of magnitude.

The claimed $180/year (the claimed $300 figure minus membership costs) * 50 or so more years people will live only gives $9k. Safe investment barely keeps up with inflation, so you cannot use the exponential-growth argument.

Real costs are around $100k-$200k (reference).

Real life insurance costs increase drastically as you age, and as your chance of death increases. Surely you must know that. If you paid the same amount of money each year, you'd need to pay $2k-$5k depending on your cryonics provider and insurance company overhead.

What will very likely happen is people paying for life insurance, then finding out at age 70 that their life insurance costs have increased so much that they cannot afford it any more, and so they won't get any cryonics even though they paid big money for it all their lives. (Not that the chances of cryonics working are significant enough for it to make much difference.)
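
A minimal sketch of the arithmetic in the first two paragraphs (the 3% real return is an assumption for illustration; this deliberately ignores the risk pooling that the replies below point to):

```python
# Back-of-the-envelope check of the $180/year figure (illustrative only).
def future_value(annual_payment, years, real_return=0.0):
    """Value of a level annual payment stream, compounded at a real rate of return."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + real_return) + annual_payment
    return total

print(round(future_value(180, 50)))        # 9000   - no growth, as stated above
print(round(future_value(180, 50, 0.03)))  # ~20300 - even at a 3% real return
# Either way, far short of a $100k-$200k preservation fee if you had to save it yourself.
```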

Replies from: bgrah449, Morendil, Dustin, ciphergoth
comment by bgrah449 · 2010-01-20T08:27:15.863Z · LW(p) · GW(p)

taw, real life insurance costs increase drastically as you age, but only if you are beginning the policy. They don't readjust the rates on a life insurance policy every year; that's just buying a series of one-year term-life policies.

I.e., if I buy whole-life insurance coverage at 25, my rate gets locked in. My monthly/annual premium does not increase as I age due to the risk of dying increasing.

Replies from: ciphergoth, taw
comment by Paul Crowley (ciphergoth) · 2010-01-20T08:39:07.720Z · LW(p) · GW(p)

How does the insurer hope to make a profit, given that the death they're betting against is inevitable?

Replies from: Richard_Kennaway, bgrah449
comment by Richard_Kennaway · 2010-01-20T11:43:24.442Z · LW(p) · GW(p)

In the UK these are called life assurance policies. Assurance, because the event (death) will assuredly happen. You pay a fixed annual sum every year; the insurance company pays out a lump sum when you die. It is a combination of insurance and investment. Insurance, because the death payout happens even if you don't live long enough for your payments to cover the lump sum. Investment, because if you live long enough the final payment is funded by what you put in, plus the proceeds of the insurance company's investments, minus their charges -- part of which is the cost of early payouts to less fortunate people.

Some versions have a maturity date: if you're still alive then, you collect the lump sum yourself and the policy terminates. At that point the lump sum will be less than what you could have made by investing those payments yourself. The difference is what you are paying in order to protect against dying early.

As always, remember that investments may plummet as well as fall.
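
A toy numerical sketch of that insurance-plus-investment structure (every figure is invented for illustration; real policies differ in premiums, charges, returns and mortality assumptions):

```python
# How one policyholder's accumulated premiums compare to a fixed payout,
# depending on how long they live (illustrative numbers only).
def accumulated(annual_premium, years, return_rate=0.04):
    pot = 0.0
    for _ in range(years):
        pot = pot * (1 + return_rate) + annual_premium
    return pot

payout, premium = 100_000, 1_500
for years_survived in (5, 20, 40, 60):
    pot = accumulated(premium, years_survived)
    print(f"dies after {years_survived:2d} years: pool holds ~${pot:9,.0f} of their money vs ${payout:,} payout")
# Early deaths are covered mostly by the pooled premiums of those who live long (insurance);
# late deaths are funded largely by the policyholder's own compounded payments (investment).
```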

comment by bgrah449 · 2010-01-20T09:19:46.374Z · LW(p) · GW(p)

AngryParsley did a good job summing it up below.

1) While death is inevitable, payout is not.

2) Investment income.

3) Inflation eroding the true cost of the payout.

comment by taw · 2010-01-20T08:28:37.708Z · LW(p) · GW(p)

I'd love to see which insurer is stupid enough to offer something like that. Care to provide links?

Replies from: AngryParsley, Jack
comment by AngryParsley · 2010-01-20T08:43:46.500Z · LW(p) · GW(p)

I have a policy with Kansas City Life Insurance:

All these benefits come with a guarantee that your premium won’t change. The basic premium you agree to now will remain the same throughout the life of your policy.

So umm... yeah. That's how life insurance usually works.

ETA: This is the first time I've heard, "life insurance doesn't work" as an objection to cryonics.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-20T08:51:22.618Z · LW(p) · GW(p)

If you don't mind my asking, how much goes in, and how much comes out? taw's math that no policy is going to pay out more than fifty times the annual pay-in seems like it has to be right.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-20T09:06:09.291Z · LW(p) · GW(p)

Remember that a lot of people who get life insurance policies cancel them before they die, or fall on hard times and can't pay the premiums. I'm 24 and healthy. I went the more expensive route and got whole life insurance, so my premiums are $64/month. With Alcor dues I end up spending about a grand per year on cryonics. Did I mention I picked what is basically the most expensive option? (Alcor whole body preservation with whole life insurance). You could easily cut that down to $300/year if you went with CI and term life insurance.

$64 * 12 * 50 = $38,400, which is well under the $200k policy amount. If that money were invested every month, it would end up being significantly more than the policy amount.
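
A quick sanity check of that last sentence (the return rates below are assumptions for illustration, not anyone's actual investment results):

```python
# $64/month for 50 years, with and without an assumed investment return.
def future_value(monthly_payment, months, annual_rate):
    r = annual_rate / 12
    balance = 0.0
    for _ in range(months):
        balance = balance * (1 + r) + monthly_payment
    return balance

months = 12 * 50
print("paid in:", 64 * months)   # 38400 nominal dollars
for rate in (0.04, 0.06, 0.08):
    print(f"{rate:.0%} return -> ~${future_value(64, months, rate):,.0f}")
# Roughly $122k, $242k and $508k respectively - whether it beats the $200k payout
# depends heavily on the assumed return.
```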

comment by Jack · 2010-01-20T08:56:48.496Z · LW(p) · GW(p)

Why is it stupid? Young people are the best customers - they aren't likely to die anytime soon, so insurers make a bundle off them even at the lower rates.

comment by Morendil · 2010-01-20T08:39:38.113Z · LW(p) · GW(p)

"Funded by life insurance" strikes me as an oversimplified summary of a strategy that must necessarily be more sophisticated. Plus "life insurance" actually means several different things, only some of which actually insure you against loss of life.

I'm still trying to find out more, but it seems the most effective plan would be "term" life insurance (about $30 a month for 20 years at my age, 40ish and healthy), which lasts a limited duration and isn't an investment, but does pay out a large sum to designated beneficiaries in the event of death. (I haven't done the math on inflation yet.) You would combine that with actual long-term investments earmarked for funding the actual costs of the procedure if you need it after 20 years. These investments may be "life insurance" of the usual kind, or stocks, or whatever.

Doing that mitigates the scenario I'm really worried about: learning in 2 to 10 years that in spite of being (relatively) young, healthy and wealthy I have a fatal disease (cancer, Lou Gehrig's, whatever) and having to choose between my family's stability and dying forever. Cryonics as insurance against feeling dreadfully stupid.

In twenty years I expect I will have obtained more information, and gotten richer, and might make different choices.

I'm interested enough in cryo that I'm trying to get actual quotes, as opposed to merely speculating; I have gotten in touch with Rudi Hoffman, who was recommended earlier on LW. My situation - non-US resident - might mean that whatever results I get are not really representative, but I'm willing to report back here with whatever info I get.

comment by Dustin · 2010-01-20T20:17:36.471Z · LW(p) · GW(p)

My current life insurance policy is what is called "term life insurance". It is good for a term of 20 years.

The payout if I die within those 20 years is $500,000.

My monthly premium is $40 for that whole 20 years.

You can get an instant online quote here. You don't have to put in a real name and email address.

Replies from: taw
comment by taw · 2010-01-21T07:03:44.410Z · LW(p) · GW(p)

Even assuming the best health class - something which won't happen as you age.

  • Age 27, quotes: 250-600
  • Age 47, quotes: 720-1970
  • Age 67, quotes: 6550-13890
  • Age 77: nobody willing to provide insurance

In other words, this is exactly what I was talking about - it's a big fat lie to pretend your premium won't change as you age.

Replies from: AngryParsley, Dustin
comment by AngryParsley · 2010-01-21T07:07:53.144Z · LW(p) · GW(p)

Term life insurance is not the only type available. Most people who get term life insurance plan on having enough money saved up by the time the term runs out. Whole life insurance has no change in premium for the entire lifetime of the insured.

Replies from: taw
comment by taw · 2010-01-21T07:19:17.455Z · LW(p) · GW(p)

And where are quotes for that, proving my point again?

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T07:22:49.323Z · LW(p) · GW(p)

You asked before and I replied:

I have a policy with Kansas City Life Insurance:

All these benefits come with a guarantee that your premium won’t change. The basic premium you agree to now will remain the same throughout the life of your policy.

So umm... yeah. That's how life insurance usually works.

ETA: This is the first time I've heard, "life insurance doesn't work" as an objection to cryonics.

Replies from: wedrifid, taw
comment by wedrifid · 2010-01-21T07:27:41.427Z · LW(p) · GW(p)

Downvoted for going against taw's Outside the influence of evidence View?

comment by taw · 2010-01-21T11:27:25.045Z · LW(p) · GW(p)

And what are their premiums? They simply have to be far higher than for a 25-year-old's 20-year term life insurance if their business is to make any profit. The only low premiums I've seen so far are for young people's term life insurance, which people keep naively extrapolating while ignoring aging.

I'm really disappointed that this supposedly rational community keeps failing basic math. The entire "cryonics is cheap" argument relies on failing basic math.

Replies from: AngryParsley, Technologos, wedrifid
comment by AngryParsley · 2010-01-21T18:04:32.060Z · LW(p) · GW(p)

Again, I replied in the very same thread I linked to.

Remember that a lot of people who get life insurance policies cancel them before they die, or fall on hard times and can't pay the premiums. I'm 24 and healthy. I went the more expensive route and got whole life insurance, so my premiums are $64/month. With Alcor dues I end up spending about a grand per year on cryonics. Did I mention I picked what is basically the most expensive option? (Alcor whole body preservation with whole life insurance). You could easily cut that down to $300/year if you went with CI and term life insurance.

$64 * 12 * 50 = $38,400, which is well under the $200k policy amount. If that money were invested every month, it would end up being significantly more than the policy amount.

Term life insurance is not a bad idea for most people. Lots of people save up money over their careers. $80,000 is barely a down payment on a condo in the Bay Area.

comment by Technologos · 2010-01-21T19:31:55.415Z · LW(p) · GW(p)

I should note that most of the organizations we are talking about (Alcor, ACS, CI) are non-profits.

comment by wedrifid · 2010-01-21T12:00:33.152Z · LW(p) · GW(p)

I'm really disappointed that this supposedly rational community keeps failing basic math. The entire "cryonics is cheap" argument relies on failing basic math.

$80,000 plus $500 a year in membership dues is cheap. (Alcor). I can multiply, divide and find various integrals and derivatives of those figures if it makes you happy. Perhaps it is my English skills that are my flaw? You dispute my understanding of 'cheap'?

Replies from: taw
comment by taw · 2010-01-21T14:39:34.624Z · LW(p) · GW(p)

80k is for neuro-preservation, full body is 150k. Neither of them counts as "cheap" by any definition of "cheap". It's also at least an order of magnitude more expensive than what Eliezer keeps talking about ($300/year).

Replies from: Eliezer_Yudkowsky, wedrifid, RobinZ
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:39:28.988Z · LW(p) · GW(p)

CI is $50K for whole-body.

Replies from: taw, CronoDAS
comment by taw · 2010-01-21T21:48:59.433Z · LW(p) · GW(p)

This reference says it's much more if you include all costs:

The Cryonics Institute charges $28,000 for perfusion and storage of a Lifetime Member and $35,000 for a Yearly Member. [...] For service more comparable to what Alcor provides — including Standby and Transport — a Lifetime Member pays $88,000 and a Yearly Member pays $95,000. For details on CI pricing see Membership and Details Concerning SA Standby and Transport for CI Members.

Replies from: queensblade
comment by queensblade · 2010-01-21T21:55:14.146Z · LW(p) · GW(p)

$28,000 means enough vitrification solution for neuro only.

comment by CronoDAS · 2010-01-21T18:58:47.758Z · LW(p) · GW(p)

I thought their current website said $30K, if you don't contract for standby services?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T19:17:00.688Z · LW(p) · GW(p)

That's not including the cost of transportation to CI.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-21T19:21:19.871Z · LW(p) · GW(p)

Oh.

comment by wedrifid · 2010-01-21T23:09:11.842Z · LW(p) · GW(p)

Neither of them counts as "cheap" by any definition of "cheap".

Dollars per expected day of life extension. Applying the same definition to other health investments makes cryonics the 'cheap' option. I agree that this is probably not the definition used by some advocates.
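
For what it's worth, here is that definition made concrete; every number below is an assumption invented for illustration, not a claim about actual revival odds or lifespans:

```python
# Cost per expected day of life gained, on made-up assumptions.
total_cost = 80_000 + 500 * 40      # Alcor neuro fee plus 40 years of membership dues
p_revival = 0.05                     # assumed probability cryonics works for you
years_gained = 1_000                 # assumed extra lifespan if revived
expected_days = p_revival * years_gained * 365
print(f"${total_cost / expected_days:.2f} per expected day of life")   # ~$5.48 on these assumptions
```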

It's also at least an order of magnitude more expensive than what Eliezer keeps talking about ($300/year).

Yes. (Well, if you use binary or base 4.)

comment by RobinZ · 2010-01-21T16:40:14.423Z · LW(p) · GW(p)

The $300/year is supposed to be the insurance rate for a $100k policy, but I agree - that is not cheap. I am likely to make enough money to afford it if I stay on my current career plan (mechanical engineering), but it's not a negligible sum.

comment by Dustin · 2010-01-21T19:42:00.169Z · LW(p) · GW(p)

I'm 32. I fit in the "Preferred" health group. 30-year term life insurance with a $100k payout is $168/year, as per the quotes page I provided above.

As AngryParsley mentions, if you purchase term life insurance you're planning on having savings to cover your needs after your policy expires. This is my plan.

However, I suppose in say 15 years, I could purchase another 30 year/100k term insurance policy. Let's say I slip a category to "Standard Plus".

My premium will be $550/year. That of course assumes I didn't save anything during those 15 years (not to mention the remaining 15 years on the original policy) and still need a $100k policy.
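
Adding up the figures in this plan (the rates are the ones quoted above; the renewal in 15 years is of course hypothetical):

```python
# Total premiums for roughly 45 years of $100k coverage under the plan described above.
first_policy = 168 * 30    # 30-year term bought now at $168/year    -> $5,040
second_policy = 550 * 30   # another 30-year term bought in 15 years -> $16,500
print(first_policy, second_policy, first_policy + second_policy)      # 5040 16500 21540
```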

Replies from: taw
comment by taw · 2010-01-21T21:45:56.454Z · LW(p) · GW(p)

So the plan changed from "only $300/month for cryonics" to $300/month for term life insurance and membership fees + somehow save the entire $100k anyway + obviously save even more money for retirement and all the other things normal people save for?

This is a very expensive and risky proposition; most people will not be able to afford it. Most people who think they'll save that much won't.

Replies from: AngryParsley, Dustin
comment by AngryParsley · 2010-01-21T22:00:49.963Z · LW(p) · GW(p)

Are you a finite state machine? I have told you twice that whole life insurance is available and costs about the same as cable television if you want to pay for the most expensive cryopreservation available. CI + whole life insurance would probably be $500/year.

comment by Dustin · 2010-01-21T22:29:28.480Z · LW(p) · GW(p)

This will be my last post on the issue as it seems like maybe I'm being trolled.

I never claimed it was $300/month.

My claim is exactly what I stated.

That is: as per the quotes page linked, my term insurance for $100k/30 years is $168/year. If I were concerned about not having the savings at the end of those 30 years, I would probably, in 15 years, buy another $100k/30-year term policy for $550/year.

Which is more likely...that everyone else at LW fails at basic math, or that you fail at basic math?

Replies from: RobinZ
comment by RobinZ · 2010-01-21T23:33:53.951Z · LW(p) · GW(p)

FYI: I misread your comment as comparing an option where you buy the policy now to an option where you buy the policy later.

comment by Paul Crowley (ciphergoth) · 2010-01-20T08:03:51.263Z · LW(p) · GW(p)

"Lie" is much too strong a term, but I get the same result when I multiply 180 by 50, and I'm curious to understand the discrepancy.

comment by EphemeralNight · 2011-09-24T08:45:55.533Z · LW(p) · GW(p)

Since learning, from Less Wrong, of Alcor and vitrification tech and such, I seriously considered cryonics for the first time in my life, and really, the conclusion was obvious. However slim, it is an actual chance to live beyond the meager handful of decades we get naturally, an actual chance to not die, and in the world as it is today, the only option. Even if the chance of it actually working as advertised (waking up after however long with a brand new perfectly healthy youthful nanotechnologically-grown immortal body) is vanishingly tiny, it is still the optimal action in today's world, is it not?

I should mention that (despite my efforts to hack it out of myself) I have a powerful neurotic phobia of medication, mind-altering substances, surgeries, and basically anything past or current technology can do to a human body that leaves traces, however beneficial. The idea of my still-active brain being pumped full of cryoprotectant upon my heart's last beat is more subjectively disturbing to me than eating flesh cut from my own body.

And I fully intend to sign up anyway.

(I'm currently living on a fixed income that has me occasionally going hungry in order to keep myself in air conditioning and internet, or I would have signed up already.)

The thing is, I want to convince my dad to sign himself up as well, and I think punctuating with this article, if presented in the proper context, could go a long way towards convincing him to sign us up all at once if I can just get past his excessive skepticism. I'm not at all confident in my ability to sell him on it, though, so are there any good arguments to use when your primary obstacle is the other person's deep-seated irrational pride in their own skepticism? He could easily afford it, and he already lives in Arizona. I know he would totally go for it if I could just find a way to get past his initial dismissal of it (a decade ago) as false hope for gullible cowards. I think he's been numbing himself to his mortality, and accepting the existence of a genuine hope against death would be difficult because it dispels the numbness. (This might even be why the majority of otherwise-sane people dismiss cryonics out of hand, now that I think of it: for some, accepting a tenuous hope is more painful than having no hope at all.)

I'd appreciate any good advice on how to present my case to him, if anyone has any insights.

Replies from: lessdazed, ciphergoth
comment by lessdazed · 2011-09-24T21:19:24.507Z · LW(p) · GW(p)

the chance of it actually working as advertised (waking up after however long with a brand new perfectly healthy youthful nanotechnologically-grown immortal body) is vanishingly tiny

Am I the only one who thinks it is far more likely that the institution will fail than that the technology is never developed or is never applied?

Corporations, nation-states - few things have lasted hundreds of years with their cores intact. Laws change, market prices of commodities change, wars happen. The United States is not immune.

Replies from: katydee
comment by katydee · 2011-09-24T21:20:45.518Z · LW(p) · GW(p)

Concur. I am moderately confident that cryonics will eventually be a viable technology, assuming normal conditions-- but I am very much not confident that Alcor and the like will live to see that day.

Replies from: lessdazed
comment by lessdazed · 2011-09-24T21:22:16.440Z · LW(p) · GW(p)

Maybe we can vitrify them?

comment by Paul Crowley (ciphergoth) · 2011-09-24T18:22:32.244Z · LW(p) · GW(p)

I wish you the best of luck in signing up, and persuading your dad to. I think that the technical plausibility of cryonics (not "cyronics" btw) is much higher than you seem to imply here, incidentally - I'd put it over 50%.

Replies from: EphemeralNight
comment by EphemeralNight · 2011-09-27T12:18:19.510Z · LW(p) · GW(p)

I wasn't actually expressing an estimation of the plausibility of it working; merely that an uncertainty of death is preferable to a certainty of death.

comment by Kevin · 2010-01-21T21:52:41.141Z · LW(p) · GW(p)

Are they going to keep having this conference? $300 a year seems like an outright bargain if I get a free trip to Florida every year out of it.

comment by James_K · 2010-01-20T05:38:26.247Z · LW(p) · GW(p)

I have a cryonics related question, and this seems as good a thread as any to ask it.

I'm a New Zealander and most discussions of cryonics that I've been exposed to focus on the United States, or failing that Europe. If I have to have my head packed in ice and shipped to the US for preservation, it's going to degrade a fair bit before it gets there (best case scenario it's a 12-hour flight, and that's just to LA; in practice, time from death to preservation could be days). This is not a pleasant prospect for me, since it could lower the probability of successful revival by a large margin.

Since there are about 24 million people in Australia and New Zealand I'm sure I'm not the first person to realise this. Is anyone out there aware of any reputable cryonics organisations that are a bit closer to home? Alternatively, can anyone point me to sources that contradict my belief that the distance my head would have to travel would make cryonics a poor bet?

Replies from: AndrewH, Mitchell_Porter
comment by AndrewH · 2010-01-20T09:06:18.804Z · LW(p) · GW(p)

I am also a New Zealander, AND I am signed up with Cryonics Institute. You might be interested in contacting the Cryonics Association of Australasia but I'm sure there is no actual suspension and storage nearby.

Besides, you are missing the main point: if you don't sign up now and you die tomorrow, you are annihilated - no questions asked. I would be wary of this question, as it can be an excuse not to sign up.

Replies from: James_K
comment by James_K · 2010-01-21T05:18:31.315Z · LW(p) · GW(p)

Thanks Andrew, it looks like this Cryonics Association is a good first point of contact for me.

comment by Mitchell_Porter · 2010-01-20T05:57:23.316Z · LW(p) · GW(p)

There has been talk of a cryosuspension facility in Australia. But flying the body to North America is the only way it's been done so far.

Replies from: James_K
comment by James_K · 2010-01-21T05:25:10.758Z · LW(p) · GW(p)

Thanks Mitchell. This actually raises a dilemma for me: do I sign up for US cryonics now, or wait and see if the Australian facility pans out? I'm in my 20s, so the odds of me dying in the next decade are pretty slim; I could afford to wait for a few years at least. And preservation in Australia would increase the odds of successful revival.

On the other hand, the odds of my death in that time aren't zero, and nothing might come of an Australian facility, meaning I could be risking my future existence for nothing.

So the optimal solution would be to sign up for cryonics in the US now, and switch to Australia at a later time if that becomes an option. How easy is it to break a cryonics contract?

Replies from: Eliezer_Yudkowsky, MichaelGR, anonymoushero, RobinZ, byrnema
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:41:50.652Z · LW(p) · GW(p)

How easy is it to break a cryonics contract?

The concept doesn't apply; you stop paying dues to one org, pay dues to a different org instead, fill out new paperwork and change the beneficiary on your life insurance.

The main thing, I would say, is to get the life insurance now - though signing up for cryo at the last minute is also difficult, so again, sign up with CI for now (membership dues and costs are cheaper) and then worry later about switching to a local Australian org if one gets started.

Replies from: James_K, queensblade
comment by James_K · 2010-01-22T05:45:59.725Z · LW(p) · GW(p)

Thanks Eliezer, that's exactly what I wanted to know. With no barrier to exit, there's no reason for me not to sign up now.

I'll think I'll be checking out the Cryonics Association of Australasia over the weekend.

comment by queensblade · 2010-01-21T23:03:38.515Z · LW(p) · GW(p)

How easy is it to break a cryonics contract?

It's my perception that if you pay up front at CI, you can get your money back, no problem. But that's bad if your spouse wants the money and has you cremated. Can't speak for Alcor...

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-22T01:25:50.499Z · LW(p) · GW(p)

You make the cryonics org a beneficiary of the life insurance, or even have the life insurance in their name, so the spouse can't get hold of it - a situation that has arisen a few times, apparently.

comment by MichaelGR · 2010-01-21T05:53:11.334Z · LW(p) · GW(p)

How does signing up for cryonics in another country work exactly?

Do they keep your body refrigerated after your death and then ship it to the other country? What are the legalities?

Or were you talking about signing up in the US and then going there if you learn that you are sick (but that wouldn't work with unexpected death)?

Replies from: James_K
comment by James_K · 2010-01-21T08:00:52.696Z · LW(p) · GW(p)

I'm not really sure, hence all the questions.

The best case would be if I had enough notice of my death that I could relocate to the US a month or so beforehand, preferably somewhere close to the facility.

But I might not get that kind of notice, in which case I have to face the prospect of my remains being refrigerated or packed in ice or something and shipped to the US as fast as I can arrange. If I put it in my will that my remains are to be transported, the legalities at my end should be OK, but what if there's some kind of quarantine issue or something in bringing a human body into the US?

Australia's a quick trip from New Zealand - about a 2-hour flight across the Tasman Sea. The US is a lot further away, and we don't have a Free Trade Agreement with them at this time, so I'm less confident of being able to get a package into the country without delay.

comment by anonymoushero · 2010-01-22T16:37:10.106Z · LW(p) · GW(p)

Does anyone know what the progress is like on the Australian facility? The article about Rhoades is four years old and there's nothing on the CAA site about it.

Google Trends suggests that the region has a lot of latent interest. Check out the top search origins. http://www.google.com/trends?q=cryonics

FYI Here are Rhoades' cryonet messages. http://www.cryonet.org/cgi-bin/findmsgs.cgi?author=philip%20rhoades

comment by RobinZ · 2010-01-21T05:29:02.699Z · LW(p) · GW(p)

I believe CI has a yearly membership option.

comment by byrnema · 2010-01-21T12:46:59.927Z · LW(p) · GW(p)

If it is difficult to break a cryonics contract -- something I hadn't considered -- then this greatly increases the utility of waiting to see what technology brings over the next 10-20 years versus signing up now. If a government program is developed, for example, I'd want to go with them.

comment by ata · 2010-01-19T20:50:11.371Z · LW(p) · GW(p)

Just so I understand this part of your point, what do you mean by "hero" (as in "I am a hero" but also the previous paragraph where you talk about who is and isn't a hero)? Is that a reference to some earlier article I missed, maybe?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T22:15:48.287Z · LW(p) · GW(p)

You're not missing any context. I thought there was a pretty clear divide at the gathering between people living their ordinary lives as sound studio technicians or scientists or whatever; and people trying to change life as we know it, like me or the cancer-cure guy or the old guard who'd spent years trying to do something about the insane loss of life. I'm not sure how I could make it any clearer. Some were in class "heroes", some were in class "the ordinary lives that heroes protect".

Replies from: dclayh
comment by dclayh · 2010-01-19T22:58:52.714Z · LW(p) · GW(p)

Presumably some would reserve the word "hero" for those who actually succeed in changing life as we know it (for the better), and thus would be confused by your usage.

Replies from: Kevin, ata, CronoDAS
comment by Kevin · 2010-01-20T12:46:51.742Z · LW(p) · GW(p)

I chose to interpret it as hero in the literary sense. There is something epic about Eliezer's life mission, no?

Let's just hope he isn't a tragic hero. You don't need to succeed at your mission to be a hero; you just need to be the protagonist in the story. It's all very absurd, but surely more so for Eliezer than you and me...

comment by ata · 2010-01-20T00:04:37.839Z · LW(p) · GW(p)

Yeah, maybe a better term to use in this context would be something like "revolutionary" (a bit aggrandizing, but so is "hero", and I'd say it's well-deserved). That would be for those who are actively trying, whether or not they have personally made any significant, lasting contributions — the heroes would be those who have.

(Not that we'd want this to turn into a status game, of course. The only point of debate here is whether clearer terminology could be used.)

Replies from: CronoDAS, bgrah449
comment by CronoDAS · 2010-01-20T00:06:34.609Z · LW(p) · GW(p)

"Aspiring hero" is good enough, I think.

Replies from: ata
comment by ata · 2010-01-20T05:43:46.117Z · LW(p) · GW(p)

I could accept that. That's really the only point I was trying to make; that trying to do something noble, like curing cancer, is praiseworthy, but does not automatically make someone a hero. Lots of people try to cure cancer; most of them are well-intentioned kooks or quacks... and even of those who aren't, those who work on possible cancer cures within a rigorous scientific/rational framework, most of them will fail. As I said, they are worthy of praise and recognition for their efforts, but they are not automatically heroes. But I would be fine with calling them "aspiring heroes".

I'm trying to make sure I'm not arguing about definitions here, but I'm not sure if the disagreement is over the definition of the word "hero" or over what we value enough to consider heroic. I might be persuaded that even trying to cure cancer is a heroic act, but I'm not sure how we could avoid having that include the well-intentioned kooks too.

Edit: Actually, I think I just persuaded myself: the well-intentioned kooks tend to promote their kookery without sufficient evidence, possibly giving people false hope or even leading people to choose an ineffective treatment over one relatively likely to be effective. That is not heroic regardless of intent. I can accept that if a person is working on cancer treatments with rationality, scientific rigour, and intellectual honesty, then they can reasonably be described as heroic.

comment by bgrah449 · 2010-01-20T01:01:02.938Z · LW(p) · GW(p)

I think revolutionary is still a little lofty. These people consumed a product. They were among the first people to consume a product. The whole post reads like, "I liked Nirvana before they were popular, and I want special status conveyed on me to recognize that fact."

Replies from: Eliezer_Yudkowsky, ata
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T01:18:43.206Z · LW(p) · GW(p)

Like I said, most of the people at that gathering were not heroes; they were people who signed up out of common sense. Some of them were trying to cure cancer (literally) or get the whole world signed up for cryonics; those were the heroes.

comment by ata · 2010-01-20T05:32:36.999Z · LW(p) · GW(p)

I meant "revolutionary" would be for "people trying to change life as we know it" (as Eliezer put it), not just anybody signed up for cryonics. (And "hero" would be for those who succeed at changing life as we know it (for the better).) But maybe it's not the best term anyway, it was just an example.

comment by CronoDAS · 2010-01-19T23:11:07.862Z · LW(p) · GW(p)

Indeed.

comment by Kutta · 2010-01-19T20:47:29.862Z · LW(p) · GW(p)

OP upvoted for displaying emotions that fit the facts.

comment by byrnema · 2010-01-28T18:46:26.469Z · LW(p) · GW(p)

I’ve mentioned already in comments to this post that parents don’t have access to cryonics. I would like to describe in more detail what I mean by ‘access’. I think that childless adults often don’t realize the extent to which parents depend upon embedded social structures, though I’m sure they’ve noticed things like children’s menus, stroller parking stations and priority airplane seating. (One of my worst experiences as a parent was spending 14 hours with an 11 month old in Chicago-O’Hare ...)

Access certainly includes affordability. $300 per year is what it might minimally cost for one person. If it scales linearly and you have 3 kids, that would be $1,500 per year to cover the whole family. Consider that many families are struggling to cover health insurance or save for college tuition, and have already relinquished all but very occasional movies and dinners out.

Also, access includes general societal acceptance and a certain level of background participation.

  • I depend upon society to help me explore what the ethical issues are so that I can make up my own mind in an informed way. I’m not an ethicist or a pastor, I’ve specialized in a different area.

  • If there is a certain level of background participation, I can possibly count on any number of aunts and uncles and grandparents and godparents to sign up, in case my husband and I can’t be preserved at all or resuscitated as early as our child. (For example, my child might die of leukemia that they have a cure for in 100 years, but my husband might die of a cancer that won’t have a cure for another 150 years, and I might die of brain degeneration that makes it impossible to ever revive me.)

  • Placing a child in cryonics may ostracize a grieving parent from important components of community support. Without informed consideration of the issues and a collection of common experiences, society won’t know how to help the parents of a cryonically preserved child grieve. Even if you believe the child has a good chance of being revived, the parent still needs to come to terms with the terrible pain of not being able to care for their child every day anymore. When they’re reunited, the parent may have only a vague memory of their child. So an example of societal cryonics infrastructure would be the option of freezing a healthy parent with their child, if they choose to.

Access also includes the same elements that are of concern to childless adults – proximity to a large airport, access to hospitals that know about cryonics and that are willing to comply with initial steps, access to an ambulance that comes prepared with a big vat of ice.

Can you imagine a grieving parent, at the sight of the wreck, trying to arrange for someone to go to a grocery store and buy a bucket and 12 bags of ice?

One last remark about “access”. What happens once cryonics is more commonplace? Perhaps 1% of the population signs up for cryonics and societal norms shift over a couple of years. Certainly this number would rise to well above 75%. Could cryonics companies keep up with the 7,000 people who die per day in the U.S., 2.5 million per year? How many billions of people will be cryo-preserved before revival is possible? Perhaps these are trivial issues, I don’t know, but they are nevertheless relevant to whether there is any real access to the 150 million parents in the US.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-28T19:19:57.998Z · LW(p) · GW(p)

Eventually it might be easy, painless, and cheap for parents to save their children's lives. The more people sign up for cryonics, the closer we get to that world.

Meanwhile, though, we don't live in that world, and for now, only parents who actually care about their children's lives will bother.

See also: http://www.overcomingbias.com/2009/08/pick-one-sick-kids-or-look-poor.html

Replies from: byrnema
comment by byrnema · 2010-01-28T20:35:58.967Z · LW(p) · GW(p)

See also: http://www.overcomingbias.com/2009/08/pick-one-sick-kids-or-look-poor.html

I've lived with families in third and second world countries, as a guest, and have my own and different ideas about this story. For example, depending upon some economic and cultural variables, it could be more likely that they don't fully trust Western solutions and didn't want to appear uppity or as though they were rejecting their native societal support structure, which they depend upon. Still, I wouldn't assert that anything is the case without actually talking to a Bolivian family. Doesn't it matter what they think of their experience?

Likewise, by insisting that it's a cut-and-dried, settled issue, you missed an opportunity at your conference to ask the couple with the kids how they overcame their initial qualms, if any, and what advice they would give to other parents thinking about cryonics. I think that I would have gotten along with this couple, and after speaking together for a few minutes, neither one of us would have walked away thinking that the other set of parents were 'bad parents'.

comment by Paul Crowley (ciphergoth) · 2010-01-25T23:44:33.731Z · LW(p) · GW(p)

I'm trying to avoid confirmation bias on this one, and I'm asking everywhere I can for links to the best anti-cryonics writing online. Thanks!

comment by Halfwit · 2013-01-06T19:01:23.660Z · LW(p) · GW(p)

I thought this was rather tasteful media coverage: http://www.telegraph.co.uk/science/8691489/Robert-Ettinger-the-father-of-cryonics-is-gone-for-now.html

Replies from: gwern
comment by gwern · 2013-01-06T19:51:56.647Z · LW(p) · GW(p)

Very positive too. Hard to ask for more favorable coverage than that.

Replies from: shminux
comment by Shmi (shminux) · 2013-01-06T20:20:40.125Z · LW(p) · GW(p)

Maybe too favorable, given that the author does not question the time frame of only decades until revival.

comment by CryoMan · 2011-06-23T05:36:52.331Z · LW(p) · GW(p)

I am applauding this article. You have moved me. I am A-2561 neuro and I'm proud to be a member of Alcor. I am 16 years old. I got into the field myself when I watched Dr. de Grey's documentary Do You Want To Live Forever. I am unbelievably lucky. The average American has a better chance of being an A-list celebrity than of being a cryonicist. As Mike Perry said: cryonicists are born, not made. When I watched it, something just clicked and I decided to devote my life to it. A lousy parent also doesn't get memberships for the pets of their children. Cupcake and Snugglemuffins have memberships because I begged my parents to get it for them. No child should see their dog rot in the ground, followed by the 'Heaven' story that we've had shoved at us for thousands of years.

Replies from: ArisKatsaris, Dorikka
comment by ArisKatsaris · 2011-07-21T16:15:18.367Z · LW(p) · GW(p)

I want to downvote you for naming a pet "snugglemuffins", but that would probably be an abuse of the system. :-)

comment by Dorikka · 2011-07-21T17:44:17.402Z · LW(p) · GW(p)

The average American has a better chance of being an A-list celebrity than being a cryonicist. As Mike Perry said; Cryonicists are born, not made.

I'm reading the first sentence as being an implication of the fact that there are more celebrities than cryonicists, and the second one as meaning that how much one is exposed to information that may motivate one to be a cryonicist significantly impacts the chance that they will be one. Is this right, or am I missing something?

Replies from: KPier
comment by KPier · 2011-07-21T18:34:57.462Z · LW(p) · GW(p)

I interpreted the second one to mean the opposite: that exposure to information really doesn't help turn people into cryonicists (discussed in That Magical Click). All the rational arguments in the world really haven't persuaded that many people, while most of the ones who sign up just get it.

comment by byrnema · 2010-01-21T00:02:13.462Z · LW(p) · GW(p)

It seems obvious to me that if cryonics companies wanted more people to sign up, all they'd need to do is advertise a little. An ad campaign quelling the top 10 parental fears would probably start causing people to sign up in droves. However, they remain quite quiet, so I do assume that there's some kind of techno-elitist thing going on ... they don't want everyone signing up.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T00:22:03.942Z · LW(p) · GW(p)

Doesn't, um, thinking about that for like 30 seconds tell you how unlikely it is?

Replies from: wedrifid, byrnema
comment by wedrifid · 2010-01-21T00:30:47.168Z · LW(p) · GW(p)

Do you know why cryonics is not more heavily advertised? Thinking about it for 30 seconds gives me some hypotheses but I'm too socially distant to make a reliable guess.

Replies from: gwern
comment by gwern · 2010-01-21T01:00:03.485Z · LW(p) · GW(p)

I like the mockery explanation. Cryonics is about as socially acceptable as furry fandom; if furries scraped up a few million for some TV spots, do you think they would get more or fewer members in the long run? There is such a thing as bad publicity.

And existing cryonics members might be exasperated - money used for advertising is money not used for research or long-term sustainability (I hear Alcor runs at a loss).

Replies from: JamesAndrix, byrnema, Morendil
comment by JamesAndrix · 2010-01-21T21:26:02.058Z · LW(p) · GW(p)

I'm pretty sure that at the root, most furries are furries because of anthropomorphized animal cartoon shows. I think a well designed commercial could push a lot of people over the edge.

Thanks, now I have an entertaining conspiracy theory about Avatar.

comment by byrnema · 2010-01-21T01:47:23.995Z · LW(p) · GW(p)

Cryonics is about as socially acceptable as furry fandom

This is a myth. Technophilia is very much part of our culture. Science fiction dominates our movies. People would scramble to sign up for cryonics if the infrastructure were there and they were certain it wasn't a scam. But that's a big IF. And that's the IF -- this idea of parents not choosing cryonics because they're lousy parents is a huge MYTH invented right on the spot. Parents don't have access to cryonics.

(If a cryonics company is reading this: I do suggest an ad campaign. I think the image you project should be 'safe household product': something completely established and solid that people can sign up for and sign out of easily -- just a basic, mundane service. No complications and lots of options. People aren't signing their life away; they're buying a service. And it's just suspension till a later date -- I'd stay well clear of any utopian pseudo-religious stuff.)

Replies from: Eliezer_Yudkowsky, Furcas
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T01:49:15.865Z · LW(p) · GW(p)

People would scramble to sign up for cryonics if the infrastructure was there and they were certain it wasn't a scam

AFAICT your statement is simply false.

Replies from: gwillen, Blueberry, byrnema
comment by gwillen · 2010-02-05T02:15:05.330Z · LW(p) · GW(p)

I won't try to judge the original statement, but I do think that people believing cryonics to be a scam is a serious problem -- much more serious than I would have believed. I have talked to some friends (very bright friends with computer science backgrounds, in the process of getting college degrees) about the idea, and a shockingly large number of them seemed quite certain that Alcor was a scam. I managed to dissuade maybe one of those, but in the process I think I convinced at least one more that I was a sucker.

Replies from: Eliezer_Yudkowsky, mattnewport, ciphergoth, gwillen
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T09:00:11.845Z · LW(p) · GW(p)

Reasoning by perceptual recognition. Cryonics seems weird and involves money, therefore it's perceptually recognized as a scam. The fact that it would be immensely labor-intensive to develop the suspension tech, isn't marketed well or at all really, and would have a very poor payoff on invested labor as scams go, will have little impact on this. The lightning-fast perceptual system hath spoken.

I'm surprised that you say your friends are computer programmers. Programmers need to be capable of abstract thought.

comment by mattnewport · 2010-02-05T06:56:16.568Z · LW(p) · GW(p)

It has struck me that if you wanted to set out to create a profitable scam, cryonics looks like quite a good idea. I don't have any particular reason to think that actual cryonics companies are a scam but it does seem like something of a perfect crime. It's almost like a perfect Ponzi scheme.

Replies from: thomblake, Eliezer_Yudkowsky
comment by thomblake · 2010-02-05T14:35:44.021Z · LW(p) · GW(p)

Currently it is set up as a bit of a Ponzi scheme; without new people coming in (and donations) these companies wouldn't survive very long. But then, with a little tweaking you could apply that analysis to any business with customers to make it look like a Ponzi scheme.

Replies from: ciphergoth, mattnewport
comment by Paul Crowley (ciphergoth) · 2010-02-05T15:07:39.764Z · LW(p) · GW(p)

Could you write this up in more detail somewhere? The claim is that the "patient care trust" doesn't need new customers to be financially viable, and should keep going even if the primary business fails. If this isn't true it would be worth drawing attention to.

Replies from: thomblake
comment by thomblake · 2010-02-05T15:19:20.964Z · LW(p) · GW(p)

Alcor is running at a loss

I do believe they would be capable of running within their means if they had to.

Replies from: Morendil
comment by Morendil · 2010-02-05T16:14:49.503Z · LW(p) · GW(p)

For some value of "running at a loss", i.e. where you interpret that as "would run at a loss if it weren't for donations and bequests".

Given the nature of their business, donations and bequests do not strike me as an anomalous source of revenue. I do plan on asking for more information on the nature of these revenues before signing up.

However, this is an issue quite separate from the viability of the patient care trust, which is set up to keep suspendees as they are even in the case of a failure of the "main business".

comment by mattnewport · 2010-02-05T17:16:13.834Z · LW(p) · GW(p)

Most businesses deliver a product or service to their customers much sooner after receiving their money than a cryonics company does. Those customers also tend to be alive and so in a position to complain if they are not satisfied with their purchase.

Replies from: Morendil
comment by Morendil · 2010-02-05T17:31:49.935Z · LW(p) · GW(p)

Let's try to make this concrete.

Suppose I choose CI, and pay up now for a lifetime membership. I will pay $1250 once, and in parallel build up a $200K insurance policy designating CI as the beneficiary. The only part of the money CI sees now is the $1.2K. No small sum, but neither is it more than a tiny fraction of the salaries and costs CI verifiably pays.

At 40, I can reasonably expect to go 30 to 40 years before I die. At any time during this period, if it becomes apparent that CI is up to anything screwy, I can (so I understand) change my insurance policy back; or at any rate contest their claim to it.

If you want to defraud customers, there are quicker, cheaper, more reliable ways to do it.

Replies from: mattnewport
comment by mattnewport · 2010-02-05T19:23:37.579Z · LW(p) · GW(p)

There are people currently being stored, are there not?

Replies from: Morendil
comment by Morendil · 2010-02-05T19:31:09.352Z · LW(p) · GW(p)

Indeed there are.

As the reasoning above suggests, they tend to be people who have known and watched the cryonics organizations for a long time, up close and personal.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T07:40:44.628Z · LW(p) · GW(p)

This would require cryonics companies to lie about their finances. Otherwise they have no way to extract money from their reserves without alarming customers.

Replies from: mattnewport
comment by mattnewport · 2010-02-05T08:33:47.706Z · LW(p) · GW(p)

Banks have been lying about their finances for years. Cryonics companies would hardly be unusual in the current economic climate if they were lying about their finances. I have some AAA rated mortgage backed securities for sale if anyone's interested.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T08:37:10.842Z · LW(p) · GW(p)

Banks hide their deception not only in actual secrecy but also in overwhelming complexity.

Replies from: mattnewport
comment by mattnewport · 2010-02-05T08:44:45.160Z · LW(p) · GW(p)

To be clear, I actually don't think cryonics companies are scams. I just think that if you wanted to set up a scam, cryonics would be a promising avenue.

I think the complexity thing is overblown for banks, to be honest. If you believe the MSM you might get that impression, but credit default swaps, collateralized debt obligations, mortgage-backed securities and the rest of the TLAs behind the financial crisis are not actually difficult to understand for anyone with a basic grasp of maths. The idea that such instruments are fundamentally complex largely stems from the mathematical ineptitude of most people in the media. If you have trouble understanding the concept of percentages, then a credit default swap probably seems quite confusing.

Replies from: ciphergoth, Morendil
comment by Paul Crowley (ciphergoth) · 2010-02-05T08:54:41.766Z · LW(p) · GW(p)

cryonics would be a promising avenue.

It seems very unpromising indeed to me. Could you explain how you'd pull it off? Would you publish falsified accounts, for example? Bear in mind that you're competing with existing providers and operating in a community which talks to each other; if existing providers think you're a scammer, they will say so, and they are polite about each other.

Replies from: mattnewport
comment by mattnewport · 2010-02-05T09:11:57.961Z · LW(p) · GW(p)

Scam is perhaps a little strong, but it does seem like a perfect Ponzi scheme. The basic idea of a Ponzi scheme is that you can pay off your existing investors with the proceeds from new investors, as long as your existing investors are happy with your annual reports of profits.

Cryonics promises an indefinitely deferred payoff - you pay into the fund now for a chance at a huge payoff sometime after you die. As long as you can sustain a positive cash-flow you never have to pay out from the fund. People won't get suspicious for quite a while - you can always claim you don't want to risk damaging your charges by subjecting them to experimental revival procedures. If you don't believe in what you're selling you'll be (permanently) dead before anybody gets suspicious. Meanwhile you will have enthusiastic customers evangelizing you based on their huge 'paper profits' - they've paid you a paltry $300 a year for the promise of eternal life.

If Bernie Madoff could get away with cooking the books for 20 years in one of the most heavily regulated industries in the US, with the relative handicap (compared to cryonics) of actually having to pay out on occasion, then it is at least plausible that a cryonics company could be a profitable enterprise for someone who did not believe in cryonics. Of course, if it is easy to verify that they are storing brains in liquid nitrogen, it doesn't necessarily matter if it's a scam.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T09:19:13.182Z · LW(p) · GW(p)

So you'd publish honest, or at least non-fraudulent, accounts?

Replies from: mattnewport
comment by mattnewport · 2010-02-05T09:26:24.486Z · LW(p) · GW(p)

I'd publish legal accounts. If I get to play by the same accounting rules as a 'too big to fail' bank then I wouldn't call them honest accounts.

You actually don't need to do anything illegal or even that morally questionable to make good money from an insurance business. People pay you to hold their money. It's why Warren Buffett loves insurance companies:

The Power of Float

The source of our insurance funds is "float," which is money that doesn't belong to us but that we temporarily hold. Most of our float arises because (1) premiums are paid upfront though the service we provide - insurance protection - is delivered over a period that usually covers a year and; (2) loss events that occur today do not always result in our immediately paying claims, because it sometimes takes many years for losses to be reported (asbestos losses would be an example), negotiated and settled. The $20 million of float that came with our 1967 purchase (National Indemnity- NICO) has now increased - both by way of internal growth and acquisitions - to $46.1 billion.

Float is wonderful - if it doesn't come at a high price. Its cost is determined by underwriting results, meaning how the expenses and losses we will ultimately pay compare with the premiums we have received. When an underwriting profit is achieved - as has been the case at Berkshire in about half of the 38 years we have been in the insurance business - float is better than free. In such years, we are actually paid for holding other people's money.

(emphasis mine)

Cryonics seems like a pretty great source of 'float'.

comment by Morendil · 2010-02-05T08:50:37.820Z · LW(p) · GW(p)

I just think that if you wanted to set up a scam, cryonics would be a promising avenue.

You've said so once. This second instance is thus an attempt at proof by repeated assertion. Actual reasoning would be preferred.

comment by Paul Crowley (ciphergoth) · 2010-02-05T07:52:31.593Z · LW(p) · GW(p)

Yes, I encountered this too from several of my friends. One was almost mockingly certain that I was considering giving money to a group of scamsters, though they had no specific comments on Alcor or CI's published financial information.

comment by gwillen · 2010-02-05T02:19:27.289Z · LW(p) · GW(p)

(For the record for when people I know find this post -- I have not actually overcome the inertia and signed up. This is largely due to the fact that my living relatives are likely to have control over the disposition of my remains, so there is little point in signing up for cryonics unless I can get up the nerve to talk to them about it.)

comment by Blueberry · 2010-01-21T19:01:38.627Z · LW(p) · GW(p)

Why? It'll be a huge boost for cryonics when the first person is brought back. People will be able to see with their own eyes, for the first time, that it actually works. Until then, it's still speculative.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T19:16:39.111Z · LW(p) · GW(p)

"certain it would work" != "certain it's not a scam"

Replies from: byrnema
comment by byrnema · 2010-01-22T19:41:24.314Z · LW(p) · GW(p)

Things that work are usually not called scams.

Pills that cure cancer for $5000 a month are scams. People who can contact deceased loved ones are offering scams.

Whether the people who provide the service believe in their service or not, services that rely on technology we don't have available yet are scams. I'm sure that there are people in Alcor who feel extra pressure, knowing that if cryonics doesn't work, they're essentially scamming their members.

Replies from: ciphergoth, thomblake, bgrah449
comment by Paul Crowley (ciphergoth) · 2010-02-05T08:52:36.227Z · LW(p) · GW(p)

Things that work are usually not called scams.

The point is the other side of the implication: things that are not scams don't always work.

Replies from: byrnema
comment by byrnema · 2010-02-05T12:14:23.268Z · LW(p) · GW(p)

You are thinking of scam in the sense of 'deliberate fraud'. A quick survey of definitions on the web supports your sense as by far the dominant one, and mine as more or less non-existent. I meant scam in the sense of wasting your money, certainly including the case of deliberate fraud.

Think about it from the point of view of a mother who must make smart economic decisions in order to make sure the bills are paid each month; if she told me that cryonics was a 'scam', I would understand her meaning.

I think Eliezer describes this sense of scam quite well here, because indeed it doesn't make a difference for this sense if the cryonics companies have good intentions, and are working really intensively, and are in the hole financially. I just disagree there is any problem with this quick perception, from that mother's point of view. She's still thinking, 'a fool and his money are easily parted'.

I'm not such a mother. I bought two of those "One Laptop Per Child" OLPC laptops for $400 two years ago. I was willing to invest in an idea I cared about, even though it didn't seem like it was going to work.

Were they a scam? I think they had great intentions ... but if there isn't a child somewhere with a laptop because of my purchase, then, yes, they were. Even if this is just because OLPC hadn't anticipated that adults would take the laptops and resell them.

And, finally, I don't know for certain, but I suspect that many of the mediums who contact deceased relatives and tell fortunes have sincere intentions of some kind.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T12:21:36.333Z · LW(p) · GW(p)

Now that you've discovered the standard meaning of the phrase "scam", I think it would be best if we stuck to it rather than gratuitously switching to a private language. Perhaps there is another term that covers the whole category of expenditures that don't work out the way you want.

Replies from: byrnema, whpearson
comment by byrnema · 2010-02-05T13:14:06.313Z · LW(p) · GW(p)

Perhaps we're coming from different perspectives, but my point of view is that you're being gratuitously aggressive. (Consider the wording of your first two sentences and imagine it read with a snarl, as I did.) Is that going to be the general result of this post here on Less Wrong?

I don't make big sweeping apologies unless (a) it actually matters and I feel bad, or (b) the polite context of the exchange is established so that it is not an unfair status hit.

If you insist on making me take a status hit that I think is unfair -- even though I've lost karma for this whole exchange, and MichaelGR already told me he didn't agree with my use of the word, and I already sound like a jerk throughout the whole exchange because I keep changing my mind about whether or not people think cryonics is a deliberate scam -- then I'll have to admit that I just don't think my broader usage of 'scam' is so uncommon.

Here are two examples of people using 'scam' in the sense I mean.

The Bottled Water Scam

Whole Life Insurance is a Scam

So I only want to reply sarcastically: so sorry I used a word that wasn't immediately agreed with by everyone.

I am including all of this as an immediate-case-study response relevant to the post Logical Rudeness, to write down what goes through my head when I'm pressed for a formal statement of defeat after I felt I had already made polite concessions. I think otherwise -- without the reason to call attention to these thoughts -- I would have just written something slightly passive-aggressive, but mostly even more concessionary than the latest concession.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T13:40:10.050Z · LW(p) · GW(p)

Everyone's in a mood on LW today, it seems, and I don't exclude myself. I meant to come across with a much lighter tone than that, to be sure, and I don't mean to commit the sin that C S Lewis describes so well in "The Screwtape Letters" of insisting that one's own words be taken strictly at face value while reading every possible connotation and side meaning into the words of others.

But I really do think that using the term "scam" in this way is inadvisable, and that the links you provide are using the term in a hyperbolic way, to smuggle in the implication of insincerity on the part of the providers without proof. I really think that "scam" denotes the wrong concept and certainly strongly carries the wrong connotations. whpearson's suggestion of "boondoggle" is a good one.

I'm not sure how to address what you say about "status". But I like to think that one of the things we're better at here is conceding gracefully and accepting it gracefully. If I've given the discussion an emotional charge that makes that difficult, that wasn't my intent.

Replies from: byrnema
comment by byrnema · 2010-02-05T18:32:02.662Z · LW(p) · GW(p)

I voted you up. Your latest comment left me feeling expansive rather than defensive, and that feels like a much better place to be rationally.

So I'm not sure why, in this expansive mood, I'm still not willing to fully agree. For now I'll call it "stubbornness of purpose" -- I do want to 'smuggle in' those negative connotations while describing the negative feelings people have for cryonics -- and think about whether this is a flaw in character or rationality, or something more positive or neutral.

comment by whpearson · 2010-02-05T12:25:18.674Z · LW(p) · GW(p)

How about Boondoggle?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T12:26:27.626Z · LW(p) · GW(p)

Excellent!

comment by thomblake · 2010-01-22T19:45:24.316Z · LW(p) · GW(p)

Even in a world where cryonics works, we could imagine a "cryonics scam" where a company took money for cryonics and then didn't freeze/revive people.

Replies from: byrnema
comment by byrnema · 2010-01-22T20:31:35.617Z · LW(p) · GW(p)

I guess it depends on what you mean by "work". If I gave my money to a cryonics company and they purposely didn't freeze me or revive me, I would say that it didn't work.

But we're talking about whether or not people would trust it wasn't a scam, even if it wasn't.

If the infrastructure for something is in place then people usually do trust that it isn't a scam. (Infrastructure often means safeguards against scamming anyway.) Most people trust hospitals to provide medical care.

Well, actually that's a good example. Even though hospitals have a lot of infrastructure throughout the country, people still have a limited trust in them. There are often good reasons for this. And then people are supposed to turn around and have boundless faith in the operations of a tiny, private, nearly secret company?

comment by bgrah449 · 2010-01-22T19:49:49.271Z · LW(p) · GW(p)

I'm sure that there are people in Alcor who feel extra pressure, knowing that if cryonics doesn't work, they're essentially scamming their members.

Why?

EDIT: Why are you sure of this, I mean.

Replies from: byrnema
comment by byrnema · 2010-01-22T20:21:50.220Z · LW(p) · GW(p)

Because I'm sure that some of them have good intentions. They might know that they're doing their best to give people a chance, but if they're human (?) they would also feel the responsibility of all these people depending upon them.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-22T20:35:56.552Z · LW(p) · GW(p)

All you say in this comment seems true, but not the part in your previous comment about "if cryonics doesn't work, they're essentially scamming their members."

If I pay firefighters to extinguish the fire that is burning down my house, and they try, do the best they can under the conditions they have to work in, but my house still burns down in the end, have they scammed me?

I don't think "scam" is the right word.

I'm sure the employees of cryonics organizations would be extremely disappointed if cryonics somehow didn't work, and they would probably feel sad for the loss of many potential lives, but if they actually tried their best, I highly doubt that they'd feel like they did something morally wrong or scam-like.

AFAIK, no serious Cryonics organization with actual facilities is guaranteeing a result (being revived). In legalese, it's a "best efforts obligation" rather than an "obligation to achieve a specific result".

Replies from: byrnema
comment by byrnema · 2010-01-22T20:56:22.766Z · LW(p) · GW(p)

I concede that the service that they're actually providing is an opportunity for revival only. That has a value, and people are willing to pay for that value.

The cryonics facility owner who thinks of it exactly like this will sleep well at night. However, people usually have more complex relationships with reality. The cryonics owner knows he is selling optimism about cryonics. Do you think he would feel that it was moral to continue selling memberships if he thought the probability was virtually zero?

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T21:08:25.344Z · LW(p) · GW(p)

Unless the seller is withholding information that would change the buyers' estimates, how he feels about the product is immaterial.

comment by byrnema · 2010-01-21T02:06:06.132Z · LW(p) · GW(p)

AFAYCT? You think most Americans are irrational. Why would you expect to have a good model of how they think?

Take learning a baby's sex before birth. At first, 'people' were vocal about how it wouldn't be natural to know the baby's sex, and people still extol the virtues of 'being surprised' when the baby is born. But it was something doctors offered, and over time pragmatic people ignored the critical voices and started doing it, and culture changed.

Culture is changed by being normal. 'People' probably dislike the idea of cryonics because you connect it with singularity concepts -- your utopia is not everyone's utopia. Let them imagine their own future.

Replies from: wedrifid
comment by wedrifid · 2010-01-21T02:16:34.215Z · LW(p) · GW(p)

Your comment is not internally consistent. You present a model which predicts that people will not sign up for cryonics even if they think it is not a scam.

Replies from: byrnema
comment by byrnema · 2010-01-21T02:23:59.829Z · LW(p) · GW(p)

It's a generally true thing, not a worked out linear argument.

I contend that parents don't have access to cryonics. The rest is just random bullets of indignation.

Replies from: Eliezer_Yudkowsky, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:49:42.051Z · LW(p) · GW(p)

It's a generally true thing, not a worked out linear argument.

Which just goes to say, you see what I'm saying? There you go.

comment by wedrifid · 2010-01-21T02:25:50.651Z · LW(p) · GW(p)

The rest is just random bullets of indignation.

Yes, and the fact that they contradict one another is significant to me.

Replies from: byrnema
comment by byrnema · 2010-01-21T02:36:42.944Z · LW(p) · GW(p)

Well, the first contradiction was that I was giving advertising advice to a company I accused of being elitist. The contradiction was not lost on me; but if I have a probability that they're doing X, I can hedge by also betting on Y. Anyway, a cryonics company isn't a monolith; I'm sure they've got their different internal perspectives. Which leads to my second set of contradictions about people -- but people aren't a monolith either.

Young husbands will go along with cryonics because they like Terminator, and 40-something mothers will go along with cryonics because there isn't a cure for that chromosomal anomaly now but there might be in 5 years. Or maybe it will buy someone extra time to have another baby to provide cord blood. The problem is trying to sell them a vision of a weird far distant future instead of just providing a service.

comment by Furcas · 2010-01-21T02:21:55.346Z · LW(p) · GW(p)

I think byrnema has a point. I don't think most people are even aware that cryonics isn't sci-fi anymore.

Replies from: pengvado, ciphergoth
comment by pengvado · 2010-01-21T03:41:55.169Z · LW(p) · GW(p)

Anecdote: I read sci-fi as a kid, learned of the concept of cryonics, thought it was a good idea if it worked... and then it never occurred to me to research whether it was a real technology. Surely I would have heard of it if it was?
Then years later I ran into a mention on OvercomingBias and signed up pretty much immediately.

comment by Paul Crowley (ciphergoth) · 2010-02-05T08:56:12.523Z · LW(p) · GW(p)

The way people say "it's science fiction" as if it tells you anything at all about the plausibility of what's under discussion drives me crazy. Doctor Who and communications satellites are both science fiction.

comment by Morendil · 2010-01-21T01:16:08.124Z · LW(p) · GW(p)

Source? A non-sustainable cryonics organization is one you don't want to be signed up with. These dewars use electricity (EDIT: oops, no they don't; substitute "rental for the space to store them").

Replies from: gwern
comment by gwern · 2010-01-21T14:41:52.892Z · LW(p) · GW(p)

You still need to create the nitrogen in the first place.

But you can read the financial statements yourself: http://www.alcor.org/Library/html/financial.html (Seriously, am I the only person here who can look things up? The answers are on, like, page 10.)

I should mention that I define 'running at a loss' as not being able to pay all bills out of either investment income or out of fees (membership dues, freezing fees, etc.); if there is a gap between expenses and the former, then they are running at a loss and depending on the charity of others to make it up.

And this is the case. In 2008, they spent $1.7 million - but they got 622k for freezing, and ~300k in fees & income, for a total of $990,999. In other words, Alcor is not currently self-sustaining.

(Why aren't they bankrupt? Because of $1,357,239 in 'contributions, gifts, and grants', and 'noncash contributions' of $753,979.)
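
To make the arithmetic explicit, here is a minimal sketch of the "running at a loss" test described above, using the 2008 figures quoted in this comment (rounded; the exact numbers are in the linked statements):

```python
# Sketch of the sustainability check, with figures quoted above (approximate).
expenses         = 1_700_000            # total 2008 spending
operating_income =   990_999            # freezing fees plus dues and other income
donations        = 1_357_239 + 753_979  # cash gifts/grants plus noncash contributions

shortfall = expenses - operating_income
print(f"Shortfall before donations: ${shortfall:,}")       # $709,001
print(f"Covered by donations? {donations >= shortfall}")   # True
```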

Replies from: Morendil
comment by Morendil · 2010-01-21T19:45:19.981Z · LW(p) · GW(p)

Thanks, that's useful info.

comment by byrnema · 2010-01-21T00:39:27.965Z · LW(p) · GW(p)

Not advertising is a clear signal. If any company wants the masses, they advertise to the masses. If Procter & Gamble comes out with a great new detergent, they're not going to wait for people to do the research and find out about them.

Replies from: magfrump, byrnema
comment by magfrump · 2010-01-21T00:47:18.272Z · LW(p) · GW(p)

A clear signal that cryonics companies don't have an advertising budget?

comment by byrnema · 2010-01-21T00:48:04.995Z · LW(p) · GW(p)

thinking about that for like 30 seconds tell you how unlikely it is?

The more I think about it the more likely it seems... So: finding out about them is the first barrier to entry.

comment by LauraABJ · 2010-01-20T15:56:41.468Z · LW(p) · GW(p)

A question for Eliezer and anyone else with an opinion: what is your probability estimate of cryonics working? Why? An actual number is important, since otherwise cryonics is an instance of Pascal's mugging. "Well, it's infinitely more than zero and you can multiply it by infinity if it does work" doesn't cut it for me. Since I place the probability of a positive singularity at a vanishingly small value (p < 0.0001), I don't see a point in wasting money I could be enjoying now on lottery tickets, or spending the social capital and energy on something that will make me seem insane.
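
To be concrete about the kind of number I mean, here is the shape of the calculation; every input below is an arbitrary placeholder, not anyone's stated estimate:

```python
# Toy expected-value calculation; all inputs are placeholders, not claims.
p_preservation   = 0.5   # brain information survives vitrification
p_org_survives   = 0.3   # the organization keeps you suspended long enough
p_revival_tech   = 0.5   # revival/uploading technology is eventually developed
p_no_catastrophe = 0.5   # civilization survives in the meantime

p_success = p_preservation * p_org_survives * p_revival_tech * p_no_catastrophe

cost             = 300 * 40    # rough lifetime cost: ~$300/year for 40 years
value_of_revival = 2_000_000   # placeholder dollar value placed on revival

print(f"P(success) ~ {p_success:.3f}")                                  # ~0.04 here
print(f"EV ~ ${p_success * value_of_revival:,.0f} vs. cost ${cost:,}")  # ~$75,000 vs. $12,000
```

With inputs in that range it's an ordinary cost-benefit question; with my own p < 0.0001 for a positive singularity, the product collapses and the purchase doesn't clear the bar.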

Replies from: Eliezer_Yudkowsky, drimshnick
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T16:19:35.595Z · LW(p) · GW(p)

My estimate of the core technology working would be "it simply looks like it should work", which in terms of calibration should probably go to 90% or 80% or something like that.

Estimates of cryonics organizations staying alive are outside the range of my comparative advantage in predictions, but I'll note that I tend to think in terms of them staying around for 30 years, not 300 years.

The weakest link in the chain is humankind's overall probability of surviving. This is generally something I've refused to put a number on, with the excuse that I don't know how to estimate the probability of doing the "impossible" - though for those who insist on using silly reference classes, I should note that my success rate on the AI-Box Experiment is 60%. (It's at least possible, though, that once you're frozen, you would have no way of noticing all the Everett branches where you died - there wouldn't be anyone who experienced that death.)

Replies from: loqi, LauraABJ, zero_call, byrnema
comment by loqi · 2010-01-20T18:45:56.988Z · LW(p) · GW(p)

It's at least possible, though, that once you're frozen, you would have no way of noticing all the Everett branches where you died - there wouldn't be anyone who experienced that death.

Ha, Cryonics as an outcome lens for quantum immortality? I find that surprisingly intuitive.

comment by LauraABJ · 2010-01-20T20:20:20.556Z · LW(p) · GW(p)

Well, I look at it this way:

I place the odds of humans ever being able to directly revive a frozen corpse at near zero.

Therefore, in order for cryonics to work, we would need some form of information-capture technology that would scan the intact frozen brain and model the synaptic information in a form that could be 'played.' This is equivalent to the technology needed for uploading.

Given the complicated nature of whole brain simulations, some form of 'easier' quick and dirty AI is vastly more likely to come into being before this could take place.

I place the odds of this AI being friendly near zero. This might be where our calculations diverge.

In terms of 'Everett branches', one can never 'experience' being dead, so if we're going to go that route, we might as well say that we all live on in some branch where FAI was developed in time to save us... needless to say, this gets a bit silly as an argument for real decisions.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-21T06:57:05.836Z · LW(p) · GW(p)

By default AI isn't friendly. But, independent of SIAI succeeding, does it really make sense to have 99% confidence either that humanity as a whole will fail to do correctly a given thing critical for our survival, or that FAI is impossibly difficult not merely for humans but for the gradually enhanced transhumans humanity could technologically self-modify into if we don't wipe ourselves out? If we knew how to cheaply and synthetically create 'clicks' of the type discussed in this post, we would already have the tech to avoid UFAI indefinitely, enabling massive self-enhancement prior to work on FAI.

Replies from: LauraABJ
comment by LauraABJ · 2010-01-21T15:55:59.706Z · LW(p) · GW(p)

I actually did reflect after posting that my probability estimate was 'overconfident,' but since I don't mind being embarrassed if I'm wrong, I'm placing it at where I actually believe it to be. Many posts on this blog have been dedicated to explaining how completely difficult the task of FAI is and how few people are capable of making meaningful contributions to the problem. There seems to be a panoply of ways for things to go horribly wrong in even minute ways. I think 1 in 10,000, or even 1 in a million is being generous enough with the odds that the problem is still worth looking at (given what's at stake). Perhaps you have a problem with the mind-set of low probabilities, like it's pessimistic and self-defeating? Also, do you really believe uploading could occur before AI?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-22T04:27:01.055Z · LW(p) · GW(p)

I would be very surprised if uploading were easier than AI -- maybe slightly more surprised than I would be by cold fusion being real -- but with the sort of broad probabilities I use that's still a bit over 1%. AGI is terribly difficult too. It's not as hard as FAI or uploading, but very high-caliber people have failed over and over.

The status quo points to AGI before FAI, but the status quo continually changes, both due to trends and due to radical surprises. The world wouldn't have to change more radically than it has numerous times in the past for the sanity waterline to rise far enough that people capable of making significant progress towards AGI reliably understood that they needed to aim for FAI or for uploading instead. Once, Newton could unsurprisingly be a Christian theist and an alchemist. By the mid 20th century the priors against Einstein being a theist were phenomenal, and in fact he wasn't one (his Spinozism is closer to what we call atheism than what most people call atheism is). I don't think that extremely low probabilities are self-defeating for me, though they might be for some people; I just disagree with them.

Replies from: pdf23ds, komponisto, LauraABJ
comment by pdf23ds · 2010-01-22T07:57:36.725Z · LW(p) · GW(p)

I would be very surprised if uploading was easier than AI

Do you mean "easier than AGI"? Why? With enough computing power, the hardest thing to do would probably be to supply the sensory inputs and do something useful with the motor outputs. With destructive uploading you don't even need nanotech. It doesn't seem like it requires any incredible new insights into the brain or intelligence in general.

Replies from: MichaelVassar, MichaelGR, AngryParsley
comment by MichaelVassar · 2010-01-22T19:59:19.788Z · LW(p) · GW(p)

Uploading is likely to require a lot of basic science, though not the depth of insight required for AGI. That same science will also make AGI much easier while most progress towards AGI contributes less though not nothing to uploading.

With all the science done there is still a HUGE engineering project. Engineering is done in near mode but is very easy to talk about in far mode. People hand-wave the details and assume that it's a matter of throwing money at the problem, but large, technically demanding engineering projects fail or are greatly delayed all the time even when they have money, and large novel projects have a great deal of difficulty attracting large amounts of funding in the first place.

GOFAI is like trying to fly by flapping giant bird wings with your arms. Magical thinking.

Evolutionary approaches to AI are like platinum jet-packs. Simple, easy to make, inordinately expensive and stupidly hard to control.

Uploading is like building a bird from scratch. It would definitely work really well if people could just get all the bugs out, but it's a big, complicated, insanely expensive project, and judging by history there will be lots of bugs.

Neuromorphic AI is like trying to build a bird while looking for insights and then building an airplane when you understand how birds work.

FAI is like trying to build a floating magnetic airship. It sounds, casually, like something that is significantly more likely than not to be possible, but we have very little idea in practice how it would be done, nothing in nature to imitate, and no promise that the necessary high-level insights are humanly achievable. OTOH, since we haven't looked very hard as a species, we also have no good reason to think they aren't, so it basically falls to your priors.

Replies from: Jordan
comment by Jordan · 2010-01-22T20:28:36.651Z · LW(p) · GW(p)

I think the primary point overlooked when thinking about uploads is that there are milestones along the way that will greatly increase funding and overall motivation. I'm confident that if a rough mouse brain could be uploaded then the response from governments and the private sector would be tremendous. There are plenty of smart people and organizations in the world that would understand the potential of human uploads once basic feasibility had been demonstrated. The engineering project would still be daunting, of course, but the economic incentive would plainly be seen as the greatest in history.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-23T17:49:05.298Z · LW(p) · GW(p)

Sorry, but with today's industry and government sectors I don't buy it. Not for uploads, not for aging. This awareness already happened with MNT, but it didn't have the effect in question.

Replies from: Kaj_Sotala, ciphergoth, Jordan
comment by Kaj_Sotala · 2010-01-23T20:47:33.409Z · LW(p) · GW(p)

Successfully uploading a mouse brain - and possibly also the radical extension of the lifespan of a mouse - would seem to me like it'd get as much media attention as Dolly the Sheep did. Has there been some MNT demonstration that would've gotten an equivalent amount of publicity?

Though judging from the reaction to Dolly, the reaction might be an anti-uploading backlash just as well as a positive one.

comment by Paul Crowley (ciphergoth) · 2010-01-24T10:18:38.144Z · LW(p) · GW(p)

MNT == molecular nanotechnology?

Replies from: Jordan
comment by Jordan · 2010-01-25T09:45:00.267Z · LW(p) · GW(p)

Ayup.

comment by Jordan · 2010-01-23T22:29:41.498Z · LW(p) · GW(p)

There's awareness of MNT, but feasibility of the more extreme possibilities hasn't been demonstrated adequately for heavy investment. The roadmap from mouse brain to human brain is also much, much clearer than the roadmap from here to full fledged MNT.

comment by MichaelGR · 2010-01-22T20:11:34.829Z · LW(p) · GW(p)

If you want to learn more about WBE and the challenges ahead, this is probably the best place to start:

Whole Brain Emulation: A Roadmap by Nick Bostrom and Anders Sandberg

comment by AngryParsley · 2010-01-22T08:05:32.525Z · LW(p) · GW(p)

It doesn't seem like it requires any incredible new insights into the brain or intelligence in general.

I think that's why Vassar is betting on AGI: it requires insight, but the rest of the necessary technology is already here. Uploading requires an engineering project involving advances in cryobiology, ultramicrotomes, scanning electron microscopes, and computer processors. There's no need for new insight, but the required technology advances are significant.

comment by komponisto · 2010-01-22T21:51:41.340Z · LW(p) · GW(p)

The status quo points to AGI before FAI

Who are the people capable of making significant progress on AGI who aren't already aware of (and indeed working on) FAI? My impression was that the really smart "MIT-type" AI people were basically all working on narrow AI.

comment by LauraABJ · 2010-01-22T21:29:25.516Z · LW(p) · GW(p)

Your argument is interesting, but I'm not sure if you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic 'surprises' occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.

I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-political reasons that will throw a wrench in tech-development in general. Should I assign each of those a 1% probability just because they are possible?

Also, no offense meant to you or anyone else here, but I frequently wonder how much bias there is in this in-group of people who like to think about uploading/FAI towards believing that it will actually occur. It's a difficult thing to gauge, since it seems the people best qualified to answer questions about these topics are the ones most excited by and invested in the positive outcomes. I mean, if someone looks at the evidence and becomes convinced that the situation is hopeless, they are much less likely to get involved in bringing about a positive outcome, and more likely to rationalize all this away as either crazy or likely to occur so far in the future that it won't bother them. Where do you go for an outside view?

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-25T05:36:20.320Z · LW(p) · GW(p)

Paradigmatic surprises vary a lot in how dramatic they are. X-rays and the double slit deserved WAY lower probabilities than 1%. I'm basically going on how convincing I find the arguments for uploading first, while trying to maintain calibrated confidence intervals. I would not bet 99:1 against uploading happening first. I would bet 9:1 without qualm; I would probably bet 49:1. I find it very easy to tell personally credible stories (no outlandish steps) where uploading happens first for good reasons. The probability of any one of those stories happening may be much less than 1%, but they are probably exemplars of a large class.

Assigning a 1% probability to uploading not happening in a given decade when it could happen, due to politics and/or revulsion, seems much too low. Decade-to-decade correlations could be pretty high but not plausibly near 1, so given civilization's long term survival uploading is inevitable once the required tech is in place, but it's silly to assume civilization's long-term survival.

I don't really think that outside views are that widely applicable a methodology and if there isn't an obvious place to look for one there probably isn't one. The buck for judgment and decision-making has to stop somewhere, and stopping with deciding on reference classes seems silly in most situations. That said, I share your concern. I'm sure that there is a bias in the community of interested people, but I think that the community's most careful thinkers can and do largely avoid it. I certainly think bad outcomes are more likely than good ones, but I think that the odds are around 2:1 rather than 100:1.
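
(Since this thread mixes odds and probabilities freely, a quick conversion sketch may help; the specific odds below are just the ones mentioned above.)

```python
# Converting "a:b against" odds to probabilities: p = b / (a + b).
for a, b in [(99, 1), (49, 1), (9, 1), (2, 1)]:
    print(f"{a}:{b} against -> p = {b / (a + b):.3f}")
# 99:1 -> 0.010, 49:1 -> 0.020, 9:1 -> 0.100, 2:1 -> 0.333
```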

Replies from: Eliezer_Yudkowsky, LauraABJ
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-25T06:30:13.712Z · LW(p) · GW(p)

double slit deserved WAY lower probabilities than 1%

I think that was probably the greatest single surprise in the entire history of time.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-25T11:35:20.684Z · LW(p) · GW(p)

Outside of pure math at least. Irrational numbers were a big deal.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-25T16:55:14.680Z · LW(p) · GW(p)

Measured in the prior probability that was assigned or could justly have been assigned beforehand, I don't think irrational numbers come close.

comment by LauraABJ · 2010-01-25T16:38:19.966Z · LW(p) · GW(p)

I'd be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI-before-uploading 9:1, but I'm sure you have your reasons. Explaining a few of these 'personally credible stories,' and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.

Also, I believe I used the phrase 'outside view' incorrectly, since I didn't mean reference classes. I was interested to know if there are people who are not part of your community that help you with number crunching on the tech-side. An 'unbiased' source of probabilities, if you will.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-26T05:11:02.130Z · LW(p) · GW(p)

I think of my community as essentially consisting of the people who are willing to do this sort of analysis, so almost axiomatically no.

The simplest reason for thinking that FAI is (relatively) likely to succeed is the same reason for thinking that slavery ending or world peace are more likely than one might assume from psychology or from economics, namely that people who think about them are unusually motivated to try to bring them about.

comment by zero_call · 2010-01-21T01:42:20.861Z · LW(p) · GW(p)

AI-Box: 60% success? I have it that you lost twice, won twice.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T18:45:33.788Z · LW(p) · GW(p)

Don't know where you got your numbers; there were two experiments with small AI handicaps ($10 and $20; 2 wins) and three experiments for $2K-$5K, with 1 win and 2 losses -- 3 wins out of 5, hence the 60% figure.

comment by byrnema · 2010-01-22T02:49:47.251Z · LW(p) · GW(p)

My estimate of the core technology working would be "it simply looks like it should work", which in terms of calibration should probably go to 90% or 80% or something like that.

I don't think this probability is too high if by 'core technology working' you mean ever working. However, would you modify this probability if we're talking specifically about people vitrified in the next ten years? As we learn more about how to unvitrify people, we'll learn more about the right way to vitrify them.

Alcor writes that cryonics should work

If foreseeable technology can repair injuries of the preservation process;

so that's probably the probability I'm talking about.

comment by drimshnick · 2010-01-20T18:37:36.115Z · LW(p) · GW(p)

And there is also the downside risk even if it does work - what if you are reanimated to be a slave of some non-FAI warlord! From this example, we can see that the probability of successful cryo actually resulting in a negative outcome is at least as big as the probability of non-FAI winning out.*

So the question actually reduces to a classic heaven-and-hell or nothing argument - would you rather the chance of heaven with the possibility of hell, or neither?

(*Of course, if the non-FAI sees no need to even have you around and doesn't even bother thawing you out and just kills you, this is a negative outcome also, as you've wasted lots of money.)

Replies from: Corey_Newsome
comment by Corey_Newsome · 2010-01-20T22:56:22.352Z · LW(p) · GW(p)

...because the first thing warlords do when they take over Scottsdale, Arizona, is invest great amounts of money in technology to revive old people, then use their highly advanced mind-controlling powers to turn them into mentally aware but vicariously controlled slaves, or otherwise coerce their few dozens of old computer scientists and physicists to kick babies and spit on puppies. Because warlords and UnFriendly AIs are evil for the sake of being evil. Makes perfect sense.

(Your parenthetical point is an argument for donating to FAI research, not an argument against getting frozen.)

comment by magfrump · 2010-01-20T04:56:58.085Z · LW(p) · GW(p)

I was raised to consider organ donation to be the moral thing to do on my death.

I am less skeptical than average of cryonics, and nervous about "neuro" options since I'd prefer to be revived earlier and with a body. On the other hand, it still seems to me that organ donation is the more effective option for more-people-being-alive-and-happy, even if it's not me.

Am I stuck with the "neuro" option for myself? How should that translate to my children?

What do most people on LW think about organ donation?

ETA: the Cryonics Institute (the only page I've seen linked here) doesn't have that option, so am I stuck paying much more? Informative links would be appreciated.

Replies from: James_Miller, D_Alex, CronoDAS
comment by James_Miller · 2010-01-20T05:29:17.308Z · LW(p) · GW(p)

To make up for not being an organ donor, cut back on some area of personal consumption and donate the money to a charity. Since the probability of someone actually getting your organs if you agreed to be an organ donor is, I think, very low, you wouldn't have to give that much to a charity to be doing more social good through the charitable contribution than you would have done as a potential organ donor.

Replies from: magfrump
comment by magfrump · 2010-01-20T06:26:36.499Z · LW(p) · GW(p)

So does the lack of discussion of organ donation stem from its perceived lack of efficacy? If so, why discuss cryonics so heavily, when it costs money? I remember Robin Hanson assigning it around a 5% chance of success (for his personal setup, not cryonics eventually working at all, which I assume would be much higher), and I would naively assume that I have a greater than 5% chance of my organs helping someone (any statistics on this?).

I agree that donating to charity is likely to be more effective, but it is also likely to be more effective than cryonics (as discussed elsewhere) and donating organs doesn't actually take away from my charity funds.

I don't mean to speak to making agreements for children, I think that stands as the right thing to do.

Replies from: James_Miller
comment by James_Miller · 2010-01-20T17:24:00.523Z · LW(p) · GW(p)

You don't have a fixed amount of charity funds; you have the amount you choose to give.

Fly to a really poor country. Seek out a very poor family that has lots of kids. Give this family $1,000, which could easily be five years' income for them. On average you will have done much more good than if you had spent this $1,000 on yourself and signed up to be an organ donor. Make sure this $1,000 does not reduce your other charitable giving.

If you had cancer would you forgo treatment because the out-of-pocket amount you would have to pay for the treatment would have been better spent helping others?

Replies from: byrnema
comment by byrnema · 2010-01-22T19:06:44.348Z · LW(p) · GW(p)

This is not a rational answer to the question of whether it is more ethical to sign up for cryonics or to donate organs; it is a rationalization. If magfrump decides it is more ethical to sign up for cryonics than to donate his organs, he must decide this based on the ethics of those two choices. Someone might rationalize that it's OK to be less ethical 'over here' if they're more ethical 'over there', but it still doesn't change the ethics of those two choices.

The exception is if the ethics 'over here' and the ethics 'over there' are interdependent. So donating money to a family in a poor country would be ethically relevant if one choice facilitates donating to the family and one does not. Here, we have the exact opposite of what was suggested: magfrump can use the money he saves by not signing up for cryonics to help the family, and so he can consider this an argument in favor of the ethicality of not signing up for cryonics.

(But, magfrump, I would add that we also have an ethical obligation to value our own lives. The symmetry in ethics-space can usually be found. Here, I can identify it in the hypothetical space where cryonics works: the person who needs an organ can also be cryonically suspended, perhaps until an organ is available. Then you both could live. In the space where cryonics doesn't work, organ donation is more ethical, since at least one of you can live.)

comment by D_Alex · 2010-01-20T08:55:34.624Z · LW(p) · GW(p)

The "best" organ donors are young people who suffered a massive head trauma, typically in a motor vehicle accident... If you die in a situation where cryopreservation can proceed, you will probably be too old or too diseased for your organs to be of use. So perhaps the two options are not exclusive after all.

Replies from: magfrump
comment by magfrump · 2010-01-20T18:18:40.066Z · LW(p) · GW(p)

So as a young person with very little chance of dying from disease and very little money it would be better to stay an organ donor now and sign up for cryonics when I'm older and have more money?

This was the intuitive conclusion I reached, but I wasn't aware that the "best" organ donors tend to be certain demographics. BTW that intuitively seems right, but I'm curious where you got the information.

comment by CronoDAS · 2010-01-20T06:41:45.519Z · LW(p) · GW(p)

Well, I do know that cryopreservation and organ donation are currently mutually exclusive, even if you go with a "neuro" option.

I don't know whether it's better to sign up for cryonics or to be an organ donor.

comment by CronoDAS · 2010-01-20T03:55:13.833Z · LW(p) · GW(p)

For the record, does anyone have a good website I can link my father to containing a reasonably persuasive case for signing up for cryonics? He's a smart guy and skilled at Traditional Rationality; I think he can be persuaded to sign up, but I don't know if I can persuade him. (When I told him the actual price of cryonics, his response was something like "Sure, you can preserve someone for that amount, but revival, once it exists, would probably cost the equivalent of millions of dollars, and who would pay for that?")

Replies from: Morendil, ciphergoth, Eneasz, byrnema
comment by Morendil · 2010-01-20T18:35:22.547Z · LW(p) · GW(p)

The technology to revive suspendees will likely cost billions to develop, but who cares about development costs? What matters is the cost per procedure, and we already have "magical" technologies which carry only a reasonable cost per use, for instance MRI scanning.

The Future of Humanity Institute has a technological roadmap for Whole Brain Emulation which tantalizingly mentions MRI as a technology already close to the resolution required to scan brains for emulation.

Freezing is itself a primitive technology; it's only the small scale at which it is currently practiced that keeps the costs high. You don't need to look very far to see how cheap advanced-to-the-point-of-magical technology can get, given economies of scale; it's sitting on your desk, or in your pocket.

If it is feasible at all, and if it is ever done at scale, it will be cheap. This last could be a very big if: current levels of adoption are not encouraging. However, you can expect that as soon as the technical feasibility is proven many more people are going to develop an interest in cryonics.

Even assuming no singularity and no nanotech, a relatively modest extrapolation from current technology would be enough to get us to "uploads" from frozen brains. Of course, reaching that tech level is only half the story - you'd still have to prove that in practice the emulated brains are "the same people". Our understanding of how the brain implements consciousness might be flawed, perhaps Penrose turns out to be right after all, etc.

Replies from: CronoDAS
comment by CronoDAS · 2010-01-21T06:17:19.588Z · LW(p) · GW(p)

Yeah, Penrose's position that the human brain is a hypercomputer isn't really supported by known physics, but there's still enough unknown and poorly understood physics that it can't be ruled out. His "proof" that human brains are hypercomputers based on applying Godel's incompleteness theorem to human mathematical reasoning, however, missed the obvious loophole: Godel's theorem only applies to consistent systems, and human reasoning is anything but consistent!

Replies from: pdf23ds
comment by pdf23ds · 2010-01-21T07:59:45.929Z · LW(p) · GW(p)

His "proof" that human brains are hypercomputers based on applying Godel's incompleteness theorem to human mathematical reasoning, however, missed the obvious loophole: Godel's theorem only applies to consistent systems, and human reasoning is anything but consistent!

I thought the obvious loophole was that brains aren't formal systems.

Replies from: CronoDAS, Cyan
comment by CronoDAS · 2010-01-21T08:40:22.919Z · LW(p) · GW(p)

If you can simulate them in a Turing machine, then they might as well be.

comment by Cyan · 2010-01-21T15:11:21.854Z · LW(p) · GW(p)

I thought the obvious loophole was that one can construct statements of the form "Cyan's brain can't prove this statement is true". (The statement is true, but you'll have to prove it for yourself -- you can't take my word for it.)

comment by Paul Crowley (ciphergoth) · 2010-01-20T06:52:13.775Z · LW(p) · GW(p)

Revival when first developed will probably cost the equivalent of hundreds of millions. You won't be revived until the cost is much lower; if progress continues and UFAI is avoided, I can't see how that can fail to happen.

comment by Eneasz · 2010-01-20T18:00:32.148Z · LW(p) · GW(p)

Try You Only Live Twice ( http://lesswrong.com/lw/wq/you_only_live_twice/ ) perhaps?

comment by byrnema · 2010-01-22T19:34:07.053Z · LW(p) · GW(p)

Sure, you can preserve someone for that amount, but revival, once it exists, would probably cost the equivalent of millions of dollars, and who would pay for that?

It's trite but true that you usually 'get what you pay for'. So if cryonics really only cost $300 a year, I wouldn't go anywhere near it. But then I know that it is hugely subsidized by people who obviously have a genuine interest in it working, so this is inconclusive.

comment by RolfAndreassen · 2010-01-19T22:22:49.967Z · LW(p) · GW(p)

My estimate of the probabilities involved in calculating the payoff from cryonics differs from your estimates. I do not think it follows that I am a bad parent.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T22:38:43.065Z · LW(p) · GW(p)

Suppose your child dies. Afterward, everyone alive at the time of a Friendly intelligence explosion plus the tiny handful signed up for cryonics, live happily ever after. Would you say in retrospect that you'd been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?

After all, your child could die in a car crash on a shopping trip, and yet taking them along on that shopping trip could still have been the best possible choice given the statistical information that you had. Is that the plea you would make in the above event? What probabilities do you assign?

Replies from: RolfAndreassen, Bindbreaker
comment by RolfAndreassen · 2010-01-19T22:55:36.919Z · LW(p) · GW(p)

Would you say in retrospect that you'd been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?

I reject your framing. I would say that I had made a bad mistake. Errors do not a bad parent make. Or, to put it another way, suppose you woke up in the Christian Hell; would you plead that you had made the best decision on the available information? Scary what-ifs are no argument. You cannot make me reconsider a probability assignment by pointing out the bad consequences if my assessment is wrong; you can only do so by adding information. I understand that you believe you're trying to save my life, but please be aware that turning to the Dark Side to do so is not likely to impress me; if you need the power of the Dark Side, how good can your argument be, anyway?

What probabilities do you assign?

The brain's functioning depends on electric and chemical potentials internal to the cells as well as connections between the cells. I believe that cryonics can maintain the network, but not the internal state of the nodes; consequently I assign "too low to meaningfully consider" to the probability of restoring my personality from my frozen brain. If the technology improves, I will reconsider.

Edit: I should specify that right now I have no children, lest I be misunderstood. It seems quite possible I will have some in the near future, though.

Replies from: Eliezer_Yudkowsky, soreff, MichaelVassar, Andy_McKenzie
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T23:10:29.828Z · LW(p) · GW(p)

Errors do not a bad parent make.

Predictable errors do.

Or, to put it another way, suppose you woke up in the Christian Hell; would you plead that you had made the best decision on the available information?

Hell yes.

You cannot make me reconsider a probability assignment by pointing out the bad consequences if my assessment is wrong; you can only do so by adding information.

One way of assessing probabilities is to ask how indignant we have a right to be if reality contradicts us. I would be really indignant if contradicted by reality about Christianity being correct. How indignant would you be if Reality comes back and says, "Sorry, cryonics worked"? My understanding is that dogs have been cooled to the point of cessation of brain activity and revived with no detected loss of memory, though I'd have to look up the reference... if that will actually convince you to sign up for cryonics; otherwise, please state your true rejection.

Replies from: loqi, MichaelVassar, mattnewport
comment by loqi · 2010-01-19T23:59:15.071Z · LW(p) · GW(p)

http://74.125.155.132/scholar?q=cache:ZNOvlaxp0p8J:scholar.google.com/&hl=en&as_sdt=2000

Conclusions: In a systematic series of studies in dogs, the rapid induction of profound cerebral hypothermia (tympanic temperature 10°C) by aortic flush of cold saline immediately after the start of exsanguination cardiac arrest-which rarely can be resuscitated effectively with current methods-can achieve survival without functional or histologic brain damage, after cardiac arrest no-flow of 60 or 90 mins and possibly 120 mins. The use of additional preservation strategies should be pursued in the 120-min arrest model.

comment by MichaelVassar · 2010-01-21T07:17:03.023Z · LW(p) · GW(p)

If even a percent or two of parents didn't make predictable errors, we would probably have reached a Friendly Singularity ages ago. That's a very high standard. If only parents who met it reproduced, the species would rapidly have gone extinct.

comment by mattnewport · 2010-01-19T23:21:21.997Z · LW(p) · GW(p)

How indignant would you be if Reality comes back and says, "Sorry, cryonics worked"?

I don't think this is really the issue. If I make a bet in poker believing (correctly given the available information) that the odds are in my favour but I go on to lose the hand I am not indignant - I was perfectly aware I was taking a calculated risk. In retrospect I should have folded but I still made the right decision at the time. Making the best decision given the available information doesn't mean making the retrospectively correct decision.

I haven't yet reached the point where cryonics crosses my risk/reward threshold. It is on my list of 'things to keep an eye on and potentially change my position in light of new information' however.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-19T23:40:44.296Z · LW(p) · GW(p)

If you make a bet in poker believing that you have a .6 chance of winning, and you lose, I believe your claim that you will not be indignant. In this case you have a weak belief that you will win. But if you lose bets with the same probability 10 times in a row, would you feel indignant? Would you question the assumptions and calculations that led to the .6 probability?
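
(To put a rough number on that intuition, a minimal sketch:)

```python
# Probability of losing ten independent bets you believe you win 60% of the time.
p_streak = 0.4 ** 10
print(f"{p_streak:.6f}")   # 0.000105 -- roughly 1 in 9,500
```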

If it turns out that cryonics works, would you be surprised? Would you have to question any beliefs that influence your current view of it?

Replies from: mattnewport
comment by mattnewport · 2010-01-19T23:49:24.417Z · LW(p) · GW(p)

Yes, at some point if I kept seeing unexpected outcomes in poker I would begin to wonder if the game was fixed somehow. I'm open to changing my view of whether cryonics is worthwhile in light of new evidence as well.

I wouldn't be hugely surprised if at some point in the next 50 years someone is revived after dying and being frozen. My doubts are less related to the theoretical possibilities of reviving someone and more to the practical realities and cost/benefit vs. other uses of my available resources.

comment by soreff · 2010-01-19T23:41:14.157Z · LW(p) · GW(p)

I believe that cryonics can maintain the network, but not the internal state of the nodes; consequently I assign "too low to meaningfully consider" to the probability of restoring my personality from my frozen brain.

There is experimental evidence to allay that specific concern. People have had flat EEGs (from barbiturate poisoning, and from (non-cryogenic!) hypothermia). They've been revived with memories and personalities intact. The network, not the transient electrical state, holds long-term information. (Oops, partial duplication of Eliezer's post below - I'm reasonably sure this has happened to humans as well, though...) (Found the canine article: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1476969/)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T23:43:38.262Z · LW(p) · GW(p)

So, how indignant are you feeling right now? Serious question.

Will you suspect the forces that previously led you to come up with this objection, since they've been proven wrong?

Will you hesitate to make a similar snap decision without looking up sources or FAQs the next time your child's life is at stake?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-20T20:28:19.251Z · LW(p) · GW(p)

So, how indignant are you feeling right now? Serious question.

Not at all, on the grounds that I do not agree with this sentence:

Will you suspect the forces that previously led you to come up with this objection, since they've been proven wrong?

You are way overestimating the strength of your evidence, here; and I'm sorry, but this is not a subject I trust you to be rational about, because you clearly care far too much. There is a vast difference between "cold enough for cessation of brain activity" (not even below freezing!) and "liquid bloody nitrogen"; there is a difference between human brains and dog brains; there is a difference between 120 minutes and 120 years; there is a difference between the controlled conditions of a laboratory, and real-life accident or injury.

That said, this is a promising direction of research for convincing me. How's this? If a dog is cooled below freezing, left there for 24 hours, and then revived, I will sign up for cryonics. Cross my heart and hope not to die.

If it turns out the cryonics works, would you be surprised?

If it turns out that cryonics as practised in 2010 works, then yes, I would be surprised. I would not be particularly surprised if a similar technology can be made to work in the future; I don't object to the proposition that information is information and the brain is un-magical, only to the overconfidence in today's methods of preserving that information. In any case, though, I can't very well update on predicted future surprises, can I now?

Replies from: Cyan
comment by Cyan · 2010-01-20T20:38:40.375Z · LW(p) · GW(p)

Since you expect some future cryonics tech to be successful, there's a strong argument that you should sign up now: you can expect to be frozen with the state of the art at the time of your brain death, not 2010 technology, and if you put it off, your window of opportunity may close.

Disclosure: I am not signed up for cryonics (but the discussion of the past few days has convinced me that I ought to).

Replies from: Cyan
comment by Cyan · 2010-01-20T21:04:41.894Z · LW(p) · GW(p)

I'm curious as to whether the upvotes are for the argument or just the disclosure. Transfer karma here to indicate upvotes just for the disclosure.

comment by MichaelVassar · 2010-01-21T00:14:56.845Z · LW(p) · GW(p)

How high a probability do you place on the information content of the brain depending on maintaining electrochemical potentials? Why? Why do you think your information and analysis are better than those of those who disagree?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-26T00:25:25.198Z · LW(p) · GW(p)

In order: 90%; because personality seems to me state-ful (that is, there is clearly some sort of long-term storage with quite rapid (relative to nerve growth) writing going on, which seems to me hard to explain purely in terms of the interconnections), and a neural network with no activation information in the nodes will not respond to a given input in the same way as the same network with some excited nodes; and because you have not given a convincing counterargument nor a convincing appeal to expertise.
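
To illustrate just that last point in isolation (a toy sketch; nothing here is meant as a model of real neurons, only of "fixed wiring plus transient activations"):

```python
import numpy as np

# The same fixed "wiring" W responds differently to the same input
# depending on the transient activation state it starts from.
rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((5, 5))   # fixed connectivity (the "network")
x = rng.standard_normal(5)              # one fixed input

def respond(state, steps=3):
    for _ in range(steps):
        state = np.tanh(W @ state + x)  # simple recurrent update
    return state

cold = respond(np.zeros(5))             # no activation information
warm = respond(rng.standard_normal(5))  # some "excited" nodes
print(np.round(cold - warm, 3))         # generally nonzero: the responses differ
```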

Replies from: MichaelVassar
comment by MichaelVassar · 2010-01-26T05:06:42.547Z · LW(p) · GW(p)

Certainly the internal state of a neuron includes things that are preserved by uploading other than the wiring diagram. Anyway, are you doing a calculation where another factor of 10 makes a critical difference?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-26T19:00:32.959Z · LW(p) · GW(p)

Uploading, yes; but we were discussing cryonics. Uploading is a completely different question. Indeed, I would assign a rather higher probability to uploading preserving personality, than to cryonics doing so.

And yes, I generally expect orders of magnitude to make a difference. If they don't, then your uncertainty is so large anyway that attempting a fake precision is just fooling yourself.

Although... actually... it occurs to me that you could move the order of magnitude somewhere else. Suppose I kept your probability estimate of cryonics working, and multiplied the price by ten? Even by twenty? ... That does make a pretty fair chunk of my budget, but still. I think I'll have to revisit that calculation.

comment by Andy_McKenzie · 2010-01-20T06:55:24.646Z · LW(p) · GW(p)

Not sure what exactly you mean by the "internal state of the nodes." If you are referring to inside the individual brain cells, then I think you're mistaken. We can already peer into the inside of neurons. Transmission electron microscopy is a powerful technology! Combine it with serial sectioning with a diamond knife and you can get quite a lot of detail in quite a large amount of tissue.

For example, consider Ragsdale et al.'s recent study, to pick the first Scopus result. They looked at some sensory neurons in C. elegans, and were able to identify not just internal receptors but also which cells (sheath cells) contain abundant endoplasmic reticulum, secretory granules, and/or lipid globules.

This whole discussion comes down to what level of scale separation you might need to recapitulate the function of the brain and the specific characteristics that make you you. Going down to, say, the atomic level would probably be very difficult, for instance. But there's good reason to think that we won't have to go nearly that far down to reproduce human characteristics. Have you read the PDF roadmap? No reason to form beliefs without the relevant knowledge! :)

Replies from: RolfAndreassen
comment by RolfAndreassen · 2010-01-20T20:30:03.261Z · LW(p) · GW(p)

You are responding to a point somewhat at angles to the one I made. Yes, we can learn a lot about the internal state of brain cells using modern technology. It does not follow that such state survives long-term storage at liquid-nitrogen temperatures.

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2010-01-20T23:43:58.359Z · LW(p) · GW(p)

Is it the immediate effects of the freezing process that trouble you or the long-term effects of staying frozen for years / decades / centuries?

comment by Bindbreaker · 2010-01-19T22:47:08.838Z · LW(p) · GW(p)

Suppose your child dies. Afterward, everyone alive at the time of an unFriendly intelligence explosion, plus the tiny handful signed up for cryonics (including your child), also dies. Would you say in retrospect that you'd been a bad parent, or would you plead that, in retrospect, you made the best possible decision given the information that you had?

I, personally, will allocate any resources that I would otherwise use for cryonics to the prevention of existential risks.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T23:12:49.220Z · LW(p) · GW(p)

I have no child; this is not coincidence. If I did have a kid you can damn well better believe that kid would be signed up for cryonics or I wouldn't be able to sleep.

I, personally, will allocate any resources that I would otherwise use for cryonics to the prevention of existential risks.

I'll accept that excuse for your not being signed up yourself - though I'm rather skeptical until I see the donation receipt. I will not accept that excuse for your child not being signed up. I'll accept it as an excuse for not having a child, but not as an excuse for having a child and then not signing them up for cryonics. Take it out of the movie budget, not the existential risks budget.

Replies from: Bindbreaker
comment by Bindbreaker · 2010-01-20T00:08:35.344Z · LW(p) · GW(p)

I don't believe in excuses, I believe that signing up for cryonics is less rational than donating to prevent existential risks. For somewhat related reasons, I do not intend to have children.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T00:30:57.425Z · LW(p) · GW(p)

Sounds like you could be in a consistent state of heroism, then. May I ask to which existential risk(s) you are currently donating?

Replies from: Bindbreaker
comment by Bindbreaker · 2010-01-20T00:44:08.306Z · LW(p) · GW(p)

I'm in the "amassing resources" phase at present. Part of the reason I'm on this site is to try and find out what organizations are worth donating to.

I am in no way a hero. I'm just a guy who did the math, and at least part of my motivation is selfish anyway.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T00:48:43.509Z · LW(p) · GW(p)

I strongly advise you to immediately start donating something to somewhere, even if it's $10/year to Methuselah. If there's one thing you learn working in the nonprofit world, it's that people who donated last year are likely to also donate this year, and people who last year planned to donate "next year" will this year be planning to donate "next year".

Replies from: alyssavance, GuySrinivasan, Bindbreaker
comment by alyssavance · 2010-01-20T06:51:22.056Z · LW(p) · GW(p)

Upon hearing this advice, I just donated $10 to SIAI, even though I consider this amount totally insignificant relative to my expected future donations. I will upvote anyone who does the same for any transhumanist charity.

Replies from: Liron
comment by Liron · 2010-01-21T04:07:24.252Z · LW(p) · GW(p)

Way to turn a correlation into causality

comment by SarahSrinivasan (GuySrinivasan) · 2010-01-20T08:10:13.233Z · LW(p) · GW(p)

Do you have an estimate of how much a new donor to SIAI is worth above and beyond their initial donation? How about given that I ask them to donate with money they were about to repay me anyway?

If it's significant it could be well worth the social capital to spread your own donations among non-donor friends.

comment by Bindbreaker · 2010-01-20T01:03:53.952Z · LW(p) · GW(p)

I plan to donate once I have X dollars of nonessential income, and yes, I have a specific value for X.

Replies from: gwern, Eliezer_Yudkowsky
comment by gwern · 2010-01-20T01:44:21.194Z · LW(p) · GW(p)

Did your calculations for X take into account discounting at 0-10%? Money for research years from now does much less good than money now.
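A minimal sketch of what constant-rate discounting does to a delayed donation, for anyone adjusting a savings target this way; the 5% rate and $1,000 amount below are placeholders, not figures from this thread.

```python
# Present value of money given `years` from now, discounted at a constant
# annual rate. The 5% rate and $1,000 amount are placeholder assumptions.
def present_value(amount, annual_rate, years):
    return amount / (1 + annual_rate) ** years

for years in (0, 5, 10, 20):
    pv = present_value(1000, annual_rate=0.05, years=years)
    print(f"$1,000 given in {years:>2} years is worth ~${pv:,.0f} given today")
```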

Replies from: Bindbreaker, bgrah449
comment by Bindbreaker · 2010-01-20T02:53:32.854Z · LW(p) · GW(p)

No-- thanks for the tip! I will adjust my calculations accordingly.

comment by bgrah449 · 2010-01-20T01:45:31.326Z · LW(p) · GW(p)

Or the cost of the research being delayed.

Replies from: gwern
comment by gwern · 2010-01-20T01:48:37.861Z · LW(p) · GW(p)

I figured that was covered by 'much less good'; there are a lot of costs to delaying, if we wanted to enumerate them - risks of good charities going under, inflation and catastrophic economic events gnawing away at one's stored value, the ever-present existential risks each year, etc.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-20T04:04:56.015Z · LW(p) · GW(p)

Anti-akrasia, future-self-influencing recommendation: if you can afford $10/year today, make sure your current level of giving is not zero.

comment by AngryParsley · 2010-01-20T00:55:29.961Z · LW(p) · GW(p)

Was this the get-together in Florida from the 8th to the 10th? I decided not to go since I assumed everyone would be from the atheist/libertarian/male/nerd/singularity/etc group. I'm glad to see I was wrong.

Replies from: Eliezer_Yudkowsky, righteousreason
comment by righteousreason · 2010-01-23T17:59:08.344Z · LW(p) · GW(p)

where the hell would you find a group like that!?

Replies from: AngryParsley
comment by AngryParsley · 2010-01-24T02:40:27.940Z · LW(p) · GW(p)

It's not hard since I live in the bay area.

comment by thomblake · 2010-01-19T19:38:10.933Z · LW(p) · GW(p)

I was going to leave a comment simply stating:

"Eliezer Yudkowsky - the man who can make a blatantly off-topic post and be upvoted for it."

But it occurs to me I might be missing something, so explanation please.

Replies from: Eliezer_Yudkowsky, Furcas, bgrah449
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T20:12:12.730Z · LW(p) · GW(p)

There's a final filter in rationality where you take your ideas seriously, and a critical sub-filter is where you're willing to take ideas seriously even though the people around you don't.

Going to a group where cryonics was normal was a shift of perspective even for me, and here I thought I had conformity beat. It was what caused me to realize - no, parents who don't sign their kids up for cryonics really are doing something inexcusable; the mistake is not inevitable, it's just them.

Replies from: Roko, Kevin
comment by Roko · 2010-01-19T22:06:58.257Z · LW(p) · GW(p)

I dunno, I think that we smart people have a tendency to look for perfectionism in ourselves, and demand it from others. I have spoken to many ordinary people about cryo, some quite smart, and their brains just go walla-walla-bonk crazy. In this regard, I see them as being rather like children who cannot help but eat the nice marshmallow in front of them.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-19T22:26:51.661Z · LW(p) · GW(p)

their brains just go walla-walla-bonk crazy

Anything specific you can share?

I'm thinking about mentioning cryo to a few people, and am curious to know what kind of reaction to expect.

Replies from: Roko, akshatrathi
comment by Roko · 2010-01-19T23:03:26.496Z · LW(p) · GW(p)

Strong negative emotional reactions, lots of psychological defense mechanisms activate, smart people say silly silly things. I'll never forget my best friend's girlfriend, a Cambridge medical student, saying that whilst cryonics might save you from death, it was not certain to work and therefore "too risky".

comment by akshatrathi · 2010-01-20T00:14:31.317Z · LW(p) · GW(p)

I second Michael's question.

comment by Kevin · 2010-01-20T13:29:28.712Z · LW(p) · GW(p)

I blame the education system.

comment by Furcas · 2010-01-19T19:47:40.874Z · LW(p) · GW(p)

Signing up for cryonics is kind of the textbook example of applied rationality around here, much as theism is the textbook example of applied irrationality, so I think it's interesting to know what kind of people did it, and why.

Replies from: nerzhin
comment by nerzhin · 2010-01-19T20:09:23.400Z · LW(p) · GW(p)

Quibble: "theism" itself isn't so much applied irrationality - that would be something more like wasting time at church, or buying lottery tickets - an action with a tangible cost.

comment by bgrah449 · 2010-01-19T20:41:25.689Z · LW(p) · GW(p)

This is a sly way of still saying that but not taking the karma hit. (Upvoted, btw)

Replies from: Cyan
comment by Cyan · 2010-01-19T20:49:27.685Z · LW(p) · GW(p)

Proslepsis.

Replies from: bgrah449
comment by bgrah449 · 2010-01-19T20:55:03.756Z · LW(p) · GW(p)

Learning new very-specific words that completely nail a phenomenon I'm trying to describe is something I really enjoy, and it doesn't happen too often. Thanks!

Replies from: Cyan
comment by Cyan · 2010-01-19T21:23:12.244Z · LW(p) · GW(p)

My pleasure. (It was a joint effort: my vague recollection that there was a term that means "mentioning without mentioning" plus Google equals... a lot of karma points, apparently.)

comment by RulerofBenthos · 2018-05-10T20:27:32.222Z · LW(p) · GW(p)

Please provide links to the cheap cryonics you speak of. When I looked into it for my kids it was not affordable (this was 20 years ago, though....). Is it still Alcor providing cryonics? Or have other people gotten into the game? I've been out of it for a bit...

comment by Christian_Szegedy · 2010-01-21T23:13:24.261Z · LW(p) · GW(p)

Can anyone post a comparison of the services (and other pros and cons) between Alcor and CI?

What are the arguments for either? Paying something like 2X the price for a similar-looking service implies that there should be a difference in the quality of service, perhaps the quality of the cryopreservation procedure. Maybe the most important one is the financial health of the company: the probability that they manage to exist long enough.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-22T02:01:42.795Z · LW(p) · GW(p)

There's some information here.

Alcor takes care of standby (they'll send a team to camp out at your deathbed) and transportation. CI requires contracting with Suspended Animation for that level of care. Alcor invests more money per patient to fund liquid nitrogen and other maintenance expenses ($25,000 for neuros and $65,000 for whole body).

On the other hand, CI keeps a relatively low profile while Alcor doesn't.

Replies from: Christian_Szegedy
comment by Christian_Szegedy · 2010-01-22T02:37:39.782Z · LW(p) · GW(p)

Thanks for the effort!

Of course I had already come across the above link and CI's comparison page; still, it would be nice to see some independent review of both companies, especially of their financial prospects.

The information on trust is interesting. Does it mean that Alcor patients have a safer future in case the operation of Alcor is endangered?

What I found strange is that Alcor advertises itself as the only company using "perfusion technology" (whatever that means) as opposed to CI, whereas CI also insists on using the "best techniques currently available" and cites the same paper as Alcor. It is definitely unclear. The only clear distinction seems to be that Alcor uses full-body vitrification if a premium is paid. Still, the price difference between 88K and 35K is staggering, even if the transportation/suspension costs are taken into account.

comment by Bindbreaker · 2010-01-19T22:06:08.630Z · LW(p) · GW(p)

This might get me blasted off the face of the Internet, but by my (admittedly primitive) calculations, there is a >95% chance that I will live to see the end of the world as we know it, whether that be a positive or negative end. I do not see any reason to sign up for cryonics, as it will merely constitute a drain on my currently available resources with no tangible benefit. I am further unconvinced that cryonics is a legitimate industry. I am, of course, open to argument, but I really can't see cryonics as something that would rationally inspire this sort of reaction.

Replies from: soreff, Nick_Tarleton, mattnewport
comment by soreff · 2010-01-19T23:21:09.945Z · LW(p) · GW(p)

I'm curious as to how you calculate that >95%. I ask because I, personally, overestimated the threats from what amounts to unfriendly AI at two points in time (during the Japanese 5th generation computer project, and during Lenat's CYC project), and I overestimated the threat from y2k (and I thought I had a solid lower bound on its effects from unprepared sectors of the economy at the time). Might you be doing something similar?

Full disclosure: I have cryonics arrangements in place (with Alcor), but I'm unsure whether the odds of actually being revived or uploaded justify the (admittedly small) costs. Since I've signed up (around 1990 or so) I've revised my guess as to the odds downwards for a couple of reasons: (a) full Drexler/Merkle nanotech is taking much longer to be developed than I'd have guessed - "never" is still a distinct possibility (b) If we do get full nanotech, Robin Hanson's malthusian scenario of exploding upload replication looks chillingly plausible (c) During the Bush years, biodeathicists like Leon Kass actually got positions in high places. I'd anticipated that life extension might be a very hard technical problem - but not that there would be people in power actively trying to stop it.

Replies from: blogospheroid
comment by blogospheroid · 2010-01-20T08:16:23.678Z · LW(p) · GW(p)

Think global, Soreff.

Japan and China have huge aging populations. Their incentive to develop life extension treatments will be much greater than the biodeathicists' ability to impede the same in the United States.

China is facing a huge aging problem. They are probably the first country to get old before getting rich. If I were in the Chinese Politburo, I'd be POURING money into life extension research.

Though why Japan hasn't already done so seems surprising from this viewpoint. Any ideas why Japan hasn't poured money into healthspan extension?

Replies from: anonymoushero
comment by anonymoushero · 2010-01-22T17:16:55.005Z · LW(p) · GW(p)

Chinese cryonics? There are rumors, but nothing concrete. http://www.cryonics.org/immortalist/january05/letters.htm

There are better results searching for "人体冷冻法 ", "人体冷冻学" or "人体冷冻技术": An article about Alcor ("ah-er-ke") http://news.xinhuanet.com/world/2005-12/13/content_3913137.htm

On a related note, prospects for AGI research in China: http://www.hplusmagazine.com/articles/ai/chinese-singularity

Someone with working knowledge of hiragana/katakana might try the same for Japanese cryonics?

Replies from: anonymoushero, thomblake
comment by anonymoushero · 2010-01-25T14:48:55.500Z · LW(p) · GW(p)

So who is this "Zheng Kuifei (郑奎飞), President of the Beijing Yong Sheng Academy" from the cryonics.org archives?

One investigative article from the Chinese media is not too flattering. (http://paper.people.com.cn/hqrw/html/2006-11/16/content_12065967.htm) Quite a colorful character - claims to be 'secretly engaged' to a famous actress. Right... His interview is also interesting (http://www.people.com.cn/GB/paper447/16692/1469113.html). Google translate should get you the gist of it. He's filed for lots of singularity-relevant patents too. (http://www.ipexl.com/directory/en/APPLICANT_Zheng_Kuifei.html)

As far as I can tell, this so-called Beijing Yong Sheng Academy or Beijing Immortality-Era Economic Research Institute (北京永生时代经济研究院) does not exist either.

The saddest thing about all this is that this guy's antics have probably poisoned the water for cryonics in China.

comment by thomblake · 2010-01-22T17:22:39.721Z · LW(p) · GW(p)

Someone with working knowledge of hiragana/katakana might try the same for Japanese cryonics?

For reference, Alcor in Japanese is (predictably) アルコル, according to ja.wikipedia.org.

ETA: relevant resource: http://www.cryonics.jp/index-e.html

comment by Nick_Tarleton · 2010-01-19T22:53:59.537Z · LW(p) · GW(p)

This might get me blasted off the face of the Internet, but by my (admittedly primitive) calculations, there is a >95% chance that I will live to see the end of the world as we know it, whether that be a positive or negative end. I do not see any reason to sign up for cryonics, as it will merely constitute a drain on my currently available resources with no tangible benefit.

Probably no tangible benefit, but expected utility? Those few percent, or tenths of a percent, where cryonics saves you are worth a lot (assuming you have values that make cryonics worth considering in the first place).

(Full disclosure: I'm not signed up, but only because I think cryonics costs would come from the same "far-mode speculative futurism" mental account as better uses of money, rather than "luxury consumption". If not for that consideration — which I'm not all that sure about in any case — the decision would be massively overdetermined.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T01:24:23.422Z · LW(p) · GW(p)

(assuming you have values that make cryonics worth considering in the first place)

Do you believe in having non-human-universal values, just because of believing you do or even because of having learned to follow them? Can you elaborate?

comment by mattnewport · 2010-01-19T22:24:57.546Z · LW(p) · GW(p)

I've yet to be convinced by the arguments for cryonics either. Given my age and health there's a < 1% chance that I will die in the next 20 years. There are numerous reasons why cryonics could fail and I estimate the chances of it succeeding at < 10%. The events that would make it more likely to succeed will also tend to make my survival without cryonics more likely. Overall I don't find the cost/benefit very compelling. The weirdness of it (contra the theme of Eliezer's post) is a factor as well.

Replies from: jimmy, akshatrathi
comment by jimmy · 2010-01-19T23:27:52.896Z · LW(p) · GW(p)

Given my age and health there's a < 1% chance that I will die in the next 20 years.

But with life insurance you only pay that <1% worth, so it balances out.

If the weirdness is a negative factor, then just don't tell anybody.
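To put a rough number on the first point: an actuarially fair term premium is roughly P(death during the term) times the benefit. The <1% figure is the one quoted above; the $150,000 benefit is just the funding level discussed elsewhere in this thread, and insurer loading and discounting are ignored, so this is only a sketch.

```python
# Actuarially fair total premium ~= P(death during term) * benefit.
# The <1% probability comes from the quoted comment; the $150,000 benefit is a
# hypothetical cryonics funding level; insurer loading and discounting ignored.
p_death_20yr = 0.01
benefit = 150_000

fair_total = p_death_20yr * benefit
print(f"Fair total premium over 20 years: ~${fair_total:,.0f}")
print(f"Roughly ${fair_total / 20:,.0f}/year before insurer overhead")
```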

Replies from: mattnewport
comment by mattnewport · 2010-01-19T23:38:34.428Z · LW(p) · GW(p)

Well, life insurance by necessity does not give fair odds but I take your point.

Not telling anybody doesn't solve the weirdness factor. I'd feel weird wearing a tin foil hat to prevent the government controlling my mind even if I only did it in secret.

Replies from: jimmy, Eliezer_Yudkowsky
comment by jimmy · 2010-01-20T03:13:17.741Z · LW(p) · GW(p)

If you know that the weirdness feeling is due to bad reasons, then tell it to go to hell :p

In a world full of crazies the right answer is going to feel weird, so you might as well get used to the feeling.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-19T23:46:33.200Z · LW(p) · GW(p)

Life insurance companies need to make a profit, but there's a large gain from trade when you swap the life insurance proceeds for a cryonic suspension(1). The net return on the whole transaction is not necessarily negative.

(1) Though technically, the gain from trade isn't just from trading money for cryonics, since money has no intrinsic value, just opportunity costs. The gain-from-trade comes from the three steps of (a) working some hours to get money, (b) trade small amounts of money in most Everett branches for large amounts of money in Everett branches where you die, which involves paying some overhead (c) trade life insurance proceeds for many hours in those branches. The gains emerge in steps (a) and (c).

Replies from: mattnewport
comment by mattnewport · 2010-01-20T00:09:56.797Z · LW(p) · GW(p)

The reason I'm not currently persuaded that cryonics is worth it for me is that it is one of quite a large number of things I could do that have a very low probability of a very large benefit. With cryonics there's a high level of uncertainty around both the probability of success and the magnitude of the benefit. I don't have the time or resources to sign up for all such things, nor do I currently have the inclination to devote the resources to investigating all of them thoroughly enough to narrow the uncertainty. Cryonics is hovering around the level where additional research may seem worthwhile whereas, say, buddhism is not. It hasn't quite crossed the threshold yet however.

comment by akshatrathi · 2010-01-20T00:17:25.954Z · LW(p) · GW(p)

Say you survive the next 20 years, and say your probability of dying in the 20 years after that is < 10%. Would you sign up for cryonics then? If not, what probability of death would make you sign up for cryonics?

PS: How did you come up with the < 1% probability of your own death?

Replies from: mattnewport
comment by mariz · 2010-01-25T13:50:11.216Z · LW(p) · GW(p)

Here's a simple metric to demonstrate why alternatives to cryonics could be preferred:

Suppose we calculate the overall value of living as the quantity of life multiplied by the quality of life. For lack of a better metric, we can rate our quality of life from 1 to 100. Thus one really good year (quality = 100) is equal to 100 really bad years (ql = 1). If you think quality of life is more important, you can use a larger metric, like 1 to 1000. But for our purposes, let's use a scale to 100.

Some transhumanists have calculated that your life expectancy without aging is about 1300 years (because there's still an annual probability that you will die from an accident, homicide, etc.). Conservatively, let's assume that if cryonics and revivification are successful, you can expect to live for another 1000 years. Also, knowing nothing else about the future, your quality of life will be ~50. Thus your total life-index points gained is 50,000. But suppose that the probability that cryonics/revivification will be successful is 1 in 10,000, or .0001. Thus the expected utility points gained is .0001 * 50,000 = 5.

It will cost you $300/year for the rest of your life to gain those expected 5 points. But suppose you could spend that $300 a year on something that is 80% likely to increase your quality of life by 5 points a year (only 5%) for the rest of your life (let's say another 50 years). There are all kinds of things that could do that: vacations, games, lovers, whatever. That's .80 * 5 * 50 = 200 expected utility points.

You're better off spending your money on things that are highly likely to increase your quality of life here and now, than on things that are highly unlikely or unknown to increase your quantity and quality of life in the future.
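For anyone who wants to play with these figures, here is a minimal Python sketch of the comparison above. All the numbers (1,000 extra years at quality ~50, 1-in-10,000 odds of success, an 80% chance of a 5-point boost over 50 years) are just the assumptions stated in this comment, not established estimates.

```python
# Expected life-index points from cryonics vs. spending the same money now,
# using only the illustrative assumptions from the comment above.

def expected_points(probability, years, quality_gain):
    """Expected utility = P(success) * years * quality points per year."""
    return probability * years * quality_gain

cryonics  = expected_points(probability=0.0001, years=1000, quality_gain=50)
spend_now = expected_points(probability=0.80,   years=50,   quality_gain=5)

print(f"Cryonics:  {cryonics:.0f} expected points")   # 5
print(f"Spend now: {spend_now:.0f} expected points")  # 200
```

On these assumptions the comparison flips once the assumed success probability rises above roughly 1 in 250, since 0.004 * 50,000 = 200; the conclusion is driven almost entirely by that one number.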

Replies from: Morendil, ciphergoth, Richard_Kennaway, AdeleneDawner
comment by Morendil · 2010-01-25T14:46:03.786Z · LW(p) · GW(p)

To put this in perspective, $300/year is the cost of my ACM subscription. That's a rounding error as far as increasing my quality of life is concerned, way below 5%.

Replies from: juliawise
comment by juliawise · 2011-07-21T14:03:31.941Z · LW(p) · GW(p)

For about a billion people in the world, $300 a year (or $500, as it sounds like the numbers probably really are) would double their income, very probably increasing their quality of life dramatically. I'd rather give my money to them.

Replies from: MixedNuts, KPier, multifoliaterose, Morendil
comment by MixedNuts · 2011-07-21T14:15:43.644Z · LW(p) · GW(p)

"Hi, you have cancer. Want an experimental treatment? It works with >5% probability and costs $500/year." "No thanks, I'll die and give the money to charity."

Strangely enough, I don't hear that nearly as often as the one against cryonics. And it's even worse, because signing up for cryonics means more people will be able to (economies of scale, looks less weird, more people hear of it).

Not to mention that most charities suck. But VillageReach does qualify.

comment by KPier · 2011-07-21T16:25:42.955Z · LW(p) · GW(p)

Welcome to LessWrong!

While it's not relevant to Morendil's point (about his own quality of life), this was my major objection to cryonics for a while as well. There are a couple of problems with it: Most people don't currently donate all their disposable income to charity. If you do, then a cryonics subscription would actually trade off with charitable donations; if you're like most people, it probably trades off with eating out, seeing movies and saving for retirement.

As MixedNuts points out below, most people don't hesitate to spend that much on accepted medical treatments that could save their lives; another, related point is that people on cryonics may not feel the need to spend millions on costly end-of-life treatments that will only extend their lives by a few months. A disproportionately high portion of medical costs comes from the last year of life.

Thirdly, if you estimate the money spent on cryonics could save 20 lives in a third world country, you are choosing between extending 20 lives for a few decades and (possibly) extending one life for millions of years. Which side of that tradeoff you prefer depends a lot on your view of immortality.

Finally, ask yourself "If I was offered cryonics for free, would I sign up?" If not, this isn't your true rejection.

Replies from: juliawise
comment by juliawise · 2011-07-23T23:51:06.623Z · LW(p) · GW(p)

Most people don't currently donate all their disposable income to charity.

I do. I give away all my earnings and my husband gives about 20% of his, so we live on a much smaller budget than most people we know.

People on cryonics may not feel the need to spend millions on costly end-of-life treatments

This would be good. But it would be good if people laid off the end-of-life spending even without cryonics.

Finally, ask yourself "If I was offered cryonics for free, would I sign up?"

Maybe. I only heard of the idea a week ago - still thinking.

Replies from: Eliezer_Yudkowsky, jkaufman, utilitymonster
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-07-24T00:04:10.271Z · LW(p) · GW(p)

I give away all my earnings and my husband gives about 20% of his, so we live on a much smaller budget than most people we know.

You have my great respect for this, and if you moreover endorse

But it would be good if people laid off the end-of-life spending even without cryonics.

and you've got some sort of numerical lives-saved estimate on the charities you're donating to, then I will accept "Cryonics is not altruistically maximizing" from you and your husband - and only from you two.

Unless you have kids, in which case you should sign them up.

Replies from: juliawise
comment by juliawise · 2011-07-24T00:41:55.701Z · LW(p) · GW(p)

numerical lives-saved estimate on the charities you're donating to

The metric I care more about is more like quality-adjusted life years than lives saved. We've been giving to Oxfam because they seem to be doing good work on changing systems (e.g. agricultural policy) that keep people in miserable situations, in addition to more micro, and thus measurable, stuff (e.g. mosquito nets). The lack of measurement does bother us, and our last donation was to their evaluation and monitoring department. I do understand that restricted donations aren't really restricted, but Oxfam indicated that having donors give specifically to something as unpopular as evaluation does increase their willingness to increase its budget.

We may go with a more GiveWell-y choice next year.

Unless you have kids, in which case you should sign them up.

Only if I believe my (currently non-existing) children's lives are more valuable than other lives. Otherwise, I should fund a cryonics scholarship for someone who definitely wants it. Assuming I even think cryonics is a good use of money, which I'm currently not sure about.

The ethics of allocating lots of resources to our own children instead of other people's, and of making our own vs. adopting, is another thing I'm not sure about. If there are writings on LW about this topic, I haven't found them.

Replies from: multifoliaterose
comment by multifoliaterose · 2011-07-24T01:08:05.330Z · LW(p) · GW(p)

The ethics of allocating lots of resources to our own children instead of other people's, and of making our own vs. adopting, is another thing I'm not sure about. If there are writings on LW about this topic, I haven't found them.

In light of the sustainability concerns that Carl Shulman raises in paragraphs 2, 3 and 4 here, I'm not sure that it's advisable to base the (major) life choice of having or adopting children on ethical considerations.

That being said, if one is looking at the situation bloodlessly and without regard for personal satisfaction & sustainability, I'm reasonably sure that having or adopting children does not count as effective philanthropy. There are two relevant points here:

(a) If one is committed to global welfare, the expected commitment to global welfare of one's (biological or adopted) children is lower than one's own. On the biological level there's regression to the mean, and on the environmental level, though one's values do influence those of one's children, there's also a general tendency for children to rebel against their parents.

(b) The philanthropic opportunity cost of having or adopting children is (in my opinion) so large as to eclipse the added value of a life in the developed world. The financial cost alone has been estimated as a quarter million dollars per child.

And even if one considers the quality of life in the developed world to be so high that one extra person living in the developed world is more important than hundreds of people in the developing world, to the extent that there are good existential risk reduction charities the calculation still comes out against having children (if the human race thrives in the future then our descendants will have much higher quality of life than people in the contemporary developed world).

comment by jefftk (jkaufman) · 2011-07-25T20:20:35.675Z · LW(p) · GW(p)

Most people don't currently donate all their disposable income to charity.

I do. I give away all my earnings and my husband gives about 20% of his, so we live on a much smaller budget than most people we know.

While we live on a much smaller budget than many people, we still have disposable income that we could choose to spend on cryonics instead of other things. If cryonics cost $500/year you would still have $28/week in discretionary money after the cryonics spending. Whether this makes sense depends on whether you think that you would get more happiness out of cryonics or that $10/week. As for me, I need to read more about cryonics.

(Some background: As she wrote, julia is very unwilling to spend money on herself that could instead be going to helping other people. Because this leads to making yourself miserable, I decided to put $38/week into an account as a conditional gift, where the condition is that it can be spent on herself (or on gifts for people she knows personally) but not given away. So cryonics would not in our case actually mean less money given to charity.)

comment by utilitymonster · 2011-07-26T00:21:05.818Z · LW(p) · GW(p)

Do you know about Giving What We Can? You may be interested in getting to know people in that community. Basically, it's a group of people that pledges to give 10% of their earnings to the most effective charities in the developing world. Feel free to PM me or reply if you want to know more.

Replies from: juliawise
comment by juliawise · 2011-07-26T15:19:07.472Z · LW(p) · GW(p)

I'm familiar with it. Thanks for checking!

comment by Morendil · 2011-07-21T15:53:56.976Z · LW(p) · GW(p)

Irrelevant. My observation was addressed to an argument that those $300 would improve my own QoL.

comment by Paul Crowley (ciphergoth) · 2010-02-09T08:22:29.996Z · LW(p) · GW(p)

I think this hugely underestimates both the probability and utility of reanimation. If I am revived, I expect to live for billions of years, and to eventually know a quality of life that would be off the end of any scale we can imagine.

Replies from: complexmeme
comment by complexmeme · 2010-05-31T03:39:31.824Z · LW(p) · GW(p)

I can't argue that cryonics would strike me as an excellent deal if I believed that, but that seems wildly optimistic.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-05-31T08:10:37.343Z · LW(p) · GW(p)

This seems an odd response. I'd understand a response that said "why on Earth do you anticipate that?" or one that said "I think I know why you anticipate that, here are some arguments against...". But "wildly optimistic" seems to me to make the mistake of offering "a literary criticism, not a scientific one" - as if we knew more about how optimistic a future to expect than what sort of future to expect. These must come the other way around - we must first think about what we anticipate, and our level of optimism must flow from that.

Replies from: Vladimir_Nesov, complexmeme
comment by Vladimir_Nesov · 2010-05-31T09:32:52.623Z · LW(p) · GW(p)

These must come the other way around - we must first think about what we anticipate, and our level of optimism must flow from that.

Not always - minds with the right preference produce surprising outcomes that couldn't be anticipated, of more or less anticipated good quality. (Expected Creative Surprises)

Replies from: complexmeme, ciphergoth
comment by complexmeme · 2010-06-02T18:33:31.876Z · LW(p) · GW(p)

But that property is not limited to outcomes of good quality, correct?

comment by Paul Crowley (ciphergoth) · 2010-05-31T15:12:21.107Z · LW(p) · GW(p)

Agreed - but that caveat doesn't apply in this instance, does it?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-05-31T16:13:00.639Z · LW(p) · GW(p)

It does apply, the argument you attacked is wrong for a different reason. Amusingly, I see your original comment, and the follow-up arguments for incorrectness of the previous arguments as all wrong (under assumptions not widely accepted though). Let's break it up:

(1) "If I am revived, I expect to live for billions of years"
(2) "That seems wildly optimistic"
(3) "We must first think about what we anticipate, and our level of optimism must flow from that"

(3) is wrong because the general pattern of reasoning from how good the postulated outcome is to its plausibility is valid. (2) is wrong because it's not in fact too optimistic, quite the opposite. And (1) is wrong because it's not optimistic enough. If your concepts haven't broken down when the world is optimized for a magical concept of preference, it's not optimized strongly enough. "Revival" and "quality of life" are status quo natural categories which are unlikely to survive strong optimization according to the whole of human preference in a recognizable form.

Replies from: complexmeme
comment by complexmeme · 2010-06-02T18:36:56.821Z · LW(p) · GW(p)

Do you think that if someone frozen in the near future is revived, that's likely to happen after a friendly-AI singularity has occurred? If so, what's your reasoning for that assumption?

comment by complexmeme · 2010-06-02T18:27:13.186Z · LW(p) · GW(p)

Sure, I'm talking about heuristics. Don't think that's a mistake, though, in an instance with so many unknowns. I agree that my comment above is not a counter-argument, per se, just explaining why your statement goes over my head.

Since you prefer specificity: Why on Earth do you anticipate that?

comment by Richard_Kennaway · 2010-01-25T14:47:06.422Z · LW(p) · GW(p)

The best alternative to cryonics is to never need it -- to live long enough to be able to keep living longer, as new ways of living longer are developed. Cryonics is only an emergency lifeboat into the future. If you need the lifeboat you take it, but only when the ship is doomed.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-25T15:28:25.823Z · LW(p) · GW(p)

Or as Ralph Merkle put it, "cryonic suspension is the second-worst thing that can happen to you".

comment by AdeleneDawner · 2010-01-25T14:06:39.751Z · LW(p) · GW(p)
  1. I don't think the 1 to 100 scale works; the scale should allow for negative numbers to accommodate some of the concepts that have been mentioned. For example, would you rather die at 50 years old, or live another decade while being constantly and pointlessly tortured, and then die?

  2. It seems reasonable to assume that selection bias will work in our favor when considering the nature of the world in cases where the revival will work. This is debatable, but the debate shouldn't just be ignored.

  3. Even assuming that your math is right, I'm having a hard time thinking of something that I could spend $300/year on that would give me a quality-of-life increase equivalent to 5% of the difference between the worst possible case (being tortured for a year) and the best possible case (being massively independently wealthy, having an awesome social life and plenty of interesting things to do with my time). I'd rate a week's vacation as less than 0.5% of the difference between those two, for example, and you can barely get plane tickets to somewhere interesting for $300.

Edit: Flubbed the math. Point still stands, but not as strongly as I originally thought.

Replies from: jhuffman
comment by jhuffman · 2010-01-26T13:08:07.501Z · LW(p) · GW(p)

For a single individual the cost is much more than $300. Alcor's website says membership is $478 annually, plus another $120 a year if you elect the stand-by option. Also you need $150K worth of life insurance, which will add a bit more.

Peanuts! You say...

I really don't see the point of signing up now, because I really don't see how you can avoid losing all the information in your mind to autolysis unless you get a standby or at least a very quick (within an hour or two) vitrification. That means I have to be in the right place, at the right time, when I die, and I simply don't think that's likely now - when any death I experience would almost certainly be sudden and it would be hours and hours before I'm vitrified.

I mean, if I get a disease and have some warning then sure I'll consider a move to Phoenix and pay them their $20k surcharge (about a lifetime's worth of dues anyway) and pay for the procedure in cash up-front. There is no reason for me to put money into dues now when the net present value of those payments exceeds the surcharge they charge if you are a "last minute" patient.

I understand this isn't an option if you don't have at least that much liquidity, but since I happen to, it makes sense to me to keep it all (and future payments) under my control.

Hopefully that decision is a long time from now and I'll be more optimistic about the whole business at that time. I'll also have a better picture of my overall financial outlook and whether I'd rather spend that money on my children's future than my doubtful one.
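A minimal sketch of that net-present-value comparison, using the dues and surcharge figures quoted above; the 3% discount rate and 30-year horizon are placeholder assumptions, and the procedure cost itself is assumed to be the same either way and is omitted.

```python
# NPV of paying annual dues now versus a one-time "last minute" surcharge later.
# Dues ($478 + $120) and the $20k surcharge are the figures quoted above;
# the 3% rate and 30-year horizon are placeholder assumptions.
dues = 478 + 120
surcharge = 20_000
rate = 0.03
years = 30

npv_dues = sum(dues / (1 + rate) ** t for t in range(years))
npv_surcharge = surcharge / (1 + rate) ** years

print(f"NPV of {years} years of dues: ${npv_dues:,.0f}")
print(f"NPV of the surcharge paid in year {years}: ${npv_surcharge:,.0f}")
```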

Replies from: dilaudid, Vladimir_Nesov
comment by dilaudid · 2010-02-01T12:52:39.703Z · LW(p) · GW(p)

jhuffman's point made me think of the following devil's advocacy: If someone is very confident of cryonics, say more than 99% confident, then they should have themselves preserved before death. They should really have themselves preserved immediately - otherwise there is a higher risk that they will die in a way that causes the destruction of their mind, than there is that cryonics will fail. The amount that they will be willing to pay would also be irrelevant - they won't need the money until after they are preserved. I appreciate that there are probably laws against preserving healthy adults, so this is strictly a thought experiment.

As people get older their risk of death or brain damage increases. This means that as someone gets older the confidence level at which they should seek early preservation will decrease. Also as someone gets older their expected "natural" survival time decreases, by definition. This means the payoff for not seeking early preservation is reducing all the time. This seems to bring some force to the argument - if there is a 10% probability that cryonics will succeed, then I really can't see why anyone would let themselves get within 6 years of likely death - they are putting a second lifetime at risk for 6 years of less and less healthy life.

Finally the confidence level relates to cost. If people can be shown to have a low level of confidence in cryonics, then their willingness to pay money should be lower. The figures I've seen quoted require a sum of $150,000. (Whether this is paid in life insurance or not is irrelevant - you must pay for it in the premium since, if you're going to keep the insurance until you die, the probability of the insurer paying out is 100%.) If the probability of cryonics working is 10%, then the average cost for a successful re-animation is $1.5 million. This is a pretty conservative cost I think - doubtless for some who read this blog it is small change. Not for me sadly though :)
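A quick sketch of that per-success figure and how it moves with the assumed odds, using the $150,000 sum quoted above; the alternative probabilities are just for illustration.

```python
# Average cost per successful reanimation = total paid / P(success),
# using the $150,000 figure from the comment above.
cost = 150_000
for p in (0.10, 0.25, 0.50):
    print(f"P(success) = {p:.0%}: ${cost / p:,.0f} per successful reanimation")
```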

Replies from: jhuffman
comment by jhuffman · 2010-02-01T14:44:33.619Z · LW(p) · GW(p)

I don't think anyone is that confident...at least I hope that they are not. Even if cryonics itself works there are so many other reasons revival would never happen; I outlined them near the bottom of the thread related to my original reply to this post already so I won't do so again. Suffice it to say, even if you had 100% confidence in both cryonics and future revival technology, you cannot have nearly 100% confidence in actually being revived.

But if you are young and healthy and want to be preserved intact you can probably figure out how to do it; but it is risky and you need to take precautions which I don't know the least thing about... The last thing you want is to end up under a scalpel on a medical examiner's table, which is what often happens to people who die suddenly or violently.

comment by Vladimir_Nesov · 2010-01-27T15:15:12.530Z · LW(p) · GW(p)

I really don't see how you can avoid losing all the information in your mind to autolysis unless you get a standby or at least a very quick (within an hour or two) vitrification.

It's information, it doesn't need to be found in the same form that is necessary for normal brain's function. If there is something else correlated with the info in question, or some kind of residue from which it's possible to infer what was there before, it's enough. Also, concepts in the brain seem to be coded holographically, so even a bullet in the brain may be recoverable from.

(For the same reason, importance of vitrification seems overemphasised. I can't imagine information getting lost because of freezing damage. Of course, brain becomes broken, but info doesn't magically disappear because of that.)

Replies from: ciphergoth, jhuffman
comment by Paul Crowley (ciphergoth) · 2010-01-27T16:13:25.441Z · LW(p) · GW(p)

Most cryonics literature concentrates on the possibility of direct bodily reanimation, not scanning and WBE.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-27T16:23:17.047Z · LW(p) · GW(p)

And yet even vitrified brains still fracture during freezing, and even pre-vitrification bodies are stored. What happens in modern practice to bodies that were impossible to vitrify, or that weren't frozen for "too long" (e.g. because of autopsy)? Are they thrown away? (I believe they are.) I'm a little worried about these scenarios (and consistency of decision-making that goes into them).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-27T16:40:13.757Z · LW(p) · GW(p)

Lots of non-vitrified bodies are stored. I don't think any are discarded because they were impossible to vitrify, but some are discarded because they weren't frozen for too long, and Alcor note that this is controversial: see Neural Archaeology and Ethics of Non-ideal Cryonics Cases.

I'm expecting to have to wait a long subjective time before I get to meet James Bedford, put it that way!

(Updated to add "Ethics..." link)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-27T16:53:06.097Z · LW(p) · GW(p)

Thanks for the links! From the first one:

The really key idea in cryonics is the idea of freezing (or otherwise preserving) people when we don't know if we can ever revive them. Of course, we intend to figure out later whether we can do this. We intend to succeed in reviving them. But before we've actually done so, we certainly can't prove we will succeed. And funny thing, after we've done so, the proof will be irrelevant.

[...]

We can say that a condition is incurable (meaning permanently incurable, not just incurable by present technology) if the information is permanently lost. Without any means to put patients in stasis, doctors must decide what is curable and incurable in a hasty fashion. Nobody can afford to wait. But with cryonic suspension, there is no hurry at all. We simply don't have to decide that someone is gone until we have full and complete understanding of what happened to them. Before cryonics, the patient was assumed dead unless proven otherwise; after cryonics, we assume that the patient is alive unless proven otherwise.

From the second link:

The most serious ethical problems of non-ideal cases arise in the context of “last minute” cases. A “last minute” case is a case in which a cryonics organization is contacted when legal death is imminent, or has already occurred, for a non-member of the organization.

These cases typically involve distraught families, high emotion, lack of informed consent, and even lack of patient consent when the patient is unconscious or already legally deceased. Families are faced with the decision of paying a large amount of money for something they do not understand, is not likely to work, and that cryonics organizations can barely defend. Such cases conform to the worst negative stereotypes of cryonics preying on grieving families for financial gain. “Last minute” cases are rarely accepted by Alcor for many of these reasons.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-27T17:17:27.988Z · LW(p) · GW(p)

Sure, but if I was Alcor or CI, I'd be wary of being seen to be over-eager to preserve (and so get the money).

The best solution is probably to ask when you sign up.

comment by jhuffman · 2010-01-27T20:31:27.346Z · LW(p) · GW(p)

It's information, it doesn't need to be found in the same form that is necessary for normal brain's function. If there is something else correlated with the info in question, or some kind of residue from which it's possible to infer what was there before, it's enough.

That's a pretty big if. The self-destruction and consumption of neural cells is certainly going to leave a residue, but this would be like trying to figure out what sort of information was carved into an apple slice before I ate it, digested it and excreted the remains.

Replies from: pdf23ds
comment by pdf23ds · 2010-01-27T22:08:53.367Z · LW(p) · GW(p)

I am fairly sure, though I haven't been able to refind a link, that there's some solid evidence that autolysis isn't nearly that quick or severe.

Replies from: jhuffman
comment by jhuffman · 2010-01-28T02:19:36.151Z · LW(p) · GW(p)

I am fairly sure, though I haven't been able to refind a link, that there's some solid evidence that autolysis isn't nearly that quick or severe.

We can watch neural cells dying underneath a microscope. The destruction looks pretty complete. Structure is dissolved in what are essentially digestive enzymes.

If you read Alcor's FAQ for Scientists, you'll notice that they are the most careful to point out that there is considerable doubt about the possibility of ever reviving anyone who's gone several hours without vitrification. Maybe this is because they want more "stand-by" revenue. Maybe it's because they know there is no basis for speculation; by our current understanding of things it's a serious problem. There are those who hope it is not a fatal problem. There are those who hope there is a heaven, too.

Replies from: Eliezer_Yudkowsky, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-28T02:27:30.226Z · LW(p) · GW(p)

What has to be done ASAP is not vitrification, it is cooling. Just dropping the body in a bath of icewater will prevent that kind of damage for days; and Suspended Animation or Alcor - either of which will be waiting right next to your bedside as fast as they can fly - have much more effective ways of cooling than a bath of icewater, and they're working on better ways yet. They also use a portable thumper to perform CPR while cooling (their special waterproof version has been patented by cryo orgs for marketing for other medical uses), just to make sure your blood stays oxygenated while you're in the metabolic danger zone, and I believe they pump you full of other protectants as well (using intraosseous access, which is much faster than intravenous).

Local cryonics groups in faraway lands may not have thumpers and complicated blood medications and interosseous access, but they can at least dump you in a bathtub of ice water and perform CPR for a few minutes.

Also, with a bit more life insurance you can get the air ambulance option at Suspended Animation.

Replies from: jhuffman
comment by jhuffman · 2010-01-28T13:13:23.805Z · LW(p) · GW(p)

What has to be done ASAP is not vitrification, it is cooling.

You are right - what I should be saying is not that I'm concerned about the likelihood of hours without vitrification but that I am concerned about hours of autolysis occurring.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-28T14:10:33.800Z · LW(p) · GW(p)

Yes, well, I think it's safe to say that quite a few cryonics patients and orgs are concerned about that. There are complex technologies to prevent it, but also a simple one, widely available if you have any local cryonics group. It's called crushed ice, and a lot of stores will sell it to you.

Replies from: nawitus
comment by nawitus · 2010-01-28T14:26:21.938Z · LW(p) · GW(p)

What we need are studies of damage from vitrification when the operation was not done immediately after death, but after a few hours, as usually happens.

comment by Paul Crowley (ciphergoth) · 2010-01-28T08:33:30.887Z · LW(p) · GW(p)

Please write up these objections into a blog post or article somewhere. From the searches I've been doing, you only have to clear a very low bar to write the most clearly argued and well informed criticism of cryonics in the world.

comment by MichaelGR · 2010-01-26T20:51:20.692Z · LW(p) · GW(p)

Here's an audio interview from 5 days ago with Ben Best, the president of the Cryonics Institute:

http://itsrainmakingtime.com/2010/cryonics/

comment by MichaelGR · 2010-01-22T23:52:52.975Z · LW(p) · GW(p)

I have found this article which is about the same event (they even mention Eliezer as "an authority in the field of artificial intelligence"):

http://www.immortalhumans.com/what-type-of-personality-thinks-immortality-is-possible/

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-22T23:57:23.722Z · LW(p) · GW(p)

'twarn't me

Replies from: MichaelGR
comment by MichaelGR · 2010-01-23T00:42:59.789Z · LW(p) · GW(p)

If I understand that contraction properly, are you saying you were instead the "science fiction scribe" they mention?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-23T01:15:03.049Z · LW(p) · GW(p)

I was not at that conference. It is a different conference.

Replies from: MichaelGR, MichaelGR
comment by MichaelGR · 2010-01-24T07:52:46.489Z · LW(p) · GW(p)

Okay, I got it wrong the first time. But this one is about the conference you attended (and the author mentions you by name):

http://www.depressedmetabolism.com/2010/01/15/teens-twenties-cryonicist-event-2010/

comment by MichaelGR · 2010-01-23T01:20:06.598Z · LW(p) · GW(p)

Ah, well. What are the odds? I didn't even think to double check the name of the conference in your post since it so obviously seemed to be the same one.

I guess there's a lesson in there...

comment by byrnema · 2010-01-22T03:27:04.817Z · LW(p) · GW(p)

OK, here's a rationality test.

To test in some measure whether your assumptions and hypotheses about cryonics are calibrated, how many people do you estimate were cryopreserved at Alcor in 2009? (Don't look! And don't answer if you already know or knew for 2008 or another recent year.)

Later edit: The number of people who signed up for cryonics in 2009 is available as well if you want to estimate that too.

If you want, provide your estimate and something about your calculation.

Replies from: RobinZ
comment by RobinZ · 2010-01-22T03:46:24.864Z · LW(p) · GW(p)

Normal distribution, truncated at zero ... mean 10, standard deviation 50.

And that's ten and fifty I mean, not one hundred or five.

Edit: Justification is a sense that there are less than 1e4 people on ice so far, suspected to be based on flipping past such a statistic on a webpage without consciously reading. Expect number of people signed up is between twice and thirty times number stored.

Edit mk. 2: Or between zero and twenty, whichever is greater.

Edit mk. 3: I have now checked my numbers. ...Interesting.
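As an aside, a prior like the one described here (a normal with mean 10 and standard deviation 50, truncated at zero) can be written down directly; this is just a sketch of that distribution using scipy, not anything from the original comment.

```python
# A normal(mean=10, sd=50) truncated below at zero, as described above.
# truncnorm takes its bounds in units of standard deviations from loc.
import numpy as np
from scipy.stats import truncnorm

loc, scale = 10, 50
a, b = (0 - loc) / scale, np.inf   # truncate at 0, no upper bound
prior = truncnorm(a, b, loc=loc, scale=scale)

print(f"Mean after truncation: {prior.mean():.1f}")
print(f"P(more than 100 preserved in 2009): {prior.sf(100):.3f}")
```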

comment by drimshnick · 2010-01-19T20:38:50.609Z · LW(p) · GW(p)

Forgive my ignorance, but aren't the real costs of cryonics much higher than their nominal fees, given the need to ensure that the preserved are financially secure post re-animation? What is the relative utility of perhaps having the chance of being re-animated as compared to not having a poor lifestyle (i.e. "going to the movies or eating at nice restaurants") now?

Replies from: AngryParsley, Jonathan_Graehl
comment by AngryParsley · 2010-01-20T19:07:11.379Z · LW(p) · GW(p)

The costs are anywhere from $300-$1500 per year depending on your choice of life insurance policy and provider. Most people would rather be alive but poor in the future than dead.

If you're really concerned about being poor in the future, there are financial instruments that can be (ab)used. Really though, any society that researches and implements the technology to revive dead people will probably treat poor people kindly as well.

comment by Jonathan_Graehl · 2010-01-19T23:57:36.997Z · LW(p) · GW(p)

Good point. Childless retirees often convert their entire wealth into a whole-life annuity, with the expected profit to the issuer possibly going to a charity - Charitable Gift Annuity.

comment by David M. Brown (david-m-brown) · 2019-02-01T04:30:00.815Z · LW(p) · GW(p)

"If you don't sign up your kids for cryonics then you are a lousy parent."

What happened to the "aim to explain" thing? The above is not an obvious statement of what constitutes lousy parenting. In fact, it is a patently ridiculous assertion.

comment by PaulAlmond · 2010-11-14T01:34:34.111Z · LW(p) · GW(p)

I'll raise an issue here, without taking a position on it myself right now. I'm not saying there is no answer (in fact, I can think of at least one), but I think one is needed.

If you sign up for cryonics, and it is going to work and give you a very long life in a posthuman future, given that such a long life would involve a huge number of observer moments, almost all of which will be far in the future, why are you experiencing such a rare (i.e. extraordinarily early) observer moment right now? In other words, why not apply the Doomsday argument's logic to a human life as an argument against the feasibility of cryonics?

Replies from: Furcas
comment by Furcas · 2010-11-14T02:23:11.354Z · LW(p) · GW(p)

Because that logic is flawed.

If I (the Furcas typing these words) lived in 3010, 'I' would have different memories and 'I' would be experiencing different things and thus I (the Furcas typing these words) would not exist. Thus there is no likelihood whatsoever that I (the Furcas typing these words) could have existed in 3010*.

There may be something left of me in 3010, just as there is something left today of the boy I 'was' in 1990, but it won't be me: The memories will be different and the observations will be different, therefore the experience will be different. Asking why I don't exist in 3010 is asking why experience X is not experience Y. X is not Y because X does not equal Y. It's as simple as that.

*Except, of course, if I were put in a simulation that very closely replicates the environment that I (believe I) experience in 2010.

comment by byrnema · 2010-01-23T01:49:24.914Z · LW(p) · GW(p)

I'm still so baffled by this post.

Eliezer, do you like human beings? As they are, or do you want to change them?

Please: recognize the motive in asking this question, and give me a square answer.

Replies from: Eliezer_Yudkowsky, Tyrrell_McAllister
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-23T02:47:20.459Z · LW(p) · GW(p)

http://books.google.com/books?id=MK0-GY5Ak7MC&pg=PA88&lpg=PA88&dq=%22sock+full+of%22+%22ozy+and+millie%22&source=bl&ots=f0DznsbBEK&sig=oXYyvFEDclrOajLIz7kdKI6oC0k&hl=en&ei=GGNaS-exFISksgPps6zMBA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CAcQ6AEwAA#v=onepage&q=&f=false

Second comic from the top.

Backup link: http://dergeis.livejournal.com/319819.html

Replies from: RobinZ, Alicorn, CronoDAS
comment by Alicorn · 2010-01-23T03:17:42.048Z · LW(p) · GW(p)

I'm rather the opposite. My feelings can best be summed up by a Pirates of Penzance quote rather than a webcomic:

Individually, I love you all with affection unspeakable. But collectively, I look upon you with a disgust that amounts to absolute detestation.

Replies from: juliawise, Bindbreaker
comment by juliawise · 2011-07-21T14:11:05.214Z · LW(p) · GW(p)

Or Mrs. Banks in Mary Poppins: "Though we adore men individually, we agree that as a group they're rather stupid."

comment by Bindbreaker · 2010-01-23T03:24:55.780Z · LW(p) · GW(p)

What does that mean in practical terms?

Replies from: Alicorn
comment by Alicorn · 2010-01-23T03:27:01.048Z · LW(p) · GW(p)

I adore many individual humans, and considering even complete strangers one at a time, I can offer the benefit of the doubt to a considerable degree. I abhor us as a species, and when large groups of humans do stupid or evil things, my benefit-of-doubt mechanisms stop working and I fall back on "we suck".

Replies from: CronoDAS
comment by CronoDAS · 2010-01-23T05:51:42.478Z · LW(p) · GW(p)

"A person is smart. People are dumb, panicky, dangerous animals and you know it." - Agent K, Men in Black

comment by CronoDAS · 2010-01-23T05:34:42.756Z · LW(p) · GW(p)

(In other words: "I love humanity. It's people I can't stand." - Linus van Pelt)?

comment by Tyrrell_McAllister · 2010-01-23T02:39:58.874Z · LW(p) · GW(p)

There's no question that Eliezer wants humans to change. But he wants them to change in accordance with a coherent extrapolation of their values as they are now.

comment by Paul Crowley (ciphergoth) · 2010-01-21T13:53:05.617Z · LW(p) · GW(p)

Wow, the opposition I'm getting on my blog for suggesting this! (No link, because I want to avoid a pile-on.) Still, two friends say they're also considering it. Will push :-)

comment by Rlive · 2010-01-20T19:57:51.353Z · LW(p) · GW(p)

Only three hands went up that did not identify as atheist/agnostic, and I think those also might have all been old cryonicists.

Actually, I believe the question was "would you not describe yourself as atheist/agnostic" rather than "identify as", which is a very different question.

comment by byrnema · 2010-01-25T22:43:31.196Z · LW(p) · GW(p)

I must have a good imagination because I can think of lots of reasons for reviving people to a future they would rather not be revived in. If the future is "transhumanist" it could be something reviving us that we wouldn't even recognize as human. (Isn't thinking that only minds with values like ours would revive us another version of the error of thinking that alien minds are in any way comprehensible to us?)

If there's a possibility that a revived future would be unpleasant, how can a parent abandon their child to that future, knowing that they would have no control over whether they are also revived or whether they would even have any protective, useful custody over their child?

comment by patrissimo · 2010-01-25T01:14:50.430Z · LW(p) · GW(p)

Your final paragraph is a very limited list of the ways parents can spend money on their children. For example, what if the choice is between spending more money on your current kids (like by signing them up for cryonics), and having more kids? By giving kid 1 immortality, you snuff out kid 2's chance at life. There are more life or not-life tradeoffs going on here than merely cryonics.

Anyway, there are a bunch of things mixed up in your (understandably) emotional paragraph. Like: what do parents owe their children? And: is cryonics a cost-effective benefit? Both of these links seem somewhat suspect to me.

I'm still a few million in net worth away from thinking cryonics is worth the cost.

Replies from: MichaelGR, MichaelR
comment by MichaelGR · 2010-01-25T03:35:39.717Z · LW(p) · GW(p)

For example, what if the choice is between spending more money on your current kids (like by signing them up for cryonics), and having more kids? By giving kid 1 immortality, you snuff out kid 2's chance at life. There are more life or not-life tradeoffs going on here than merely cryonics.

Maybe there are better examples out there, but this isn't very convincing to me.

The limiting factor on the number of kids that people have very rarely seems to be money, despite what some people will say. Actions speak louder than words, and the poor have more kids than the rich.

And if cryonics is a problem because it makes people have fewer kids (which remains to be seen), it's pretty low on the list of things that produce that effect (f.ex. cheap birth control, careers, and the desire for a social life have certainly "snuffed out" many more potential kids than cryonics ever did (if any)).

I'm still a few million in net worth away from thinking cryonics is worth the cost.

How do you figure that?

Are you aware that cryonics paid for via life insurance usually costs a few hundred dollars a year for someone your age, and probably less for a young child? You've probably played bigger poker hands than that. If money's a limiting factor, it should be easy to trim that much fat from somewhere else in the budget.

comment by MichaelR · 2010-01-25T13:50:20.068Z · LW(p) · GW(p)

The tradeoff between kid 1 and kid 2 doesn't exist, because kid 2 doesn't exist. There is no kid 2 to whom to give life, any more than there is a kid 2 to whom to give a popsicle. To do good or ill by kid 2, kid 2 has first to exist; bringing kid 2 into existence is not a good for kid 2, nor is denying kid 2 existence a wrong, because kid 2 has no prior existence to grant hir good or ill. You can't harm a hypothesis.

comment by tpebop · 2010-01-22T07:01:35.710Z · LW(p) · GW(p)

I do not believe I'll be signing up for cryonics. Not because I think it's too expensive, or impossible to be reanimated. The reason I won't be signing up is that I have no interest in living forever.

Replies from: ciphergoth, Blueberry, AngryParsley
comment by Paul Crowley (ciphergoth) · 2010-01-22T15:08:59.212Z · LW(p) · GW(p)

Can you be more precise about the age at which you wish to die?

Replies from: tpebop
comment by tpebop · 2010-01-22T16:10:05.941Z · LW(p) · GW(p)

Edit: I'd like to live to maybe 150~200. I don't find that impossible given current medical/technological advances. The leading causes of death in old age tend to be organ failure and disease. I imagine that in the near future, if any of my organs fail, I'll be able to have them replaced with prostheses or cloned organs.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-22T16:20:29.321Z · LW(p) · GW(p)

And once 120, you'd like to die, even if you find you're in better health at 120 than you are now?

Replies from: tpebop
comment by tpebop · 2010-01-22T16:40:25.525Z · LW(p) · GW(p)

I guess it depends on how well I'm able to sustain my own existence. If at age 120 (150~200) I'm unable to feed or financially support myself, then yes, I'd like to die. If I'm at the high point of my life, successful (this is relative; assume my evaluation of success is the same as yours), healthy, and have enough activities to keep me entertained, then I'd like to continue living.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-22T17:03:56.161Z · LW(p) · GW(p)

So at what point would you like to die no matter how well you're doing?

Replies from: tpebop
comment by tpebop · 2010-01-22T17:17:55.921Z · LW(p) · GW(p)

If I am lucky enough to be eternally financially secure and healthy, then I'd like to live to the life expectancy of everyone else. This is taking cryonics into account. If it becomes ubiquitous to live very long, via any means, then I'd like to live just as long. If in 2010 the average adult male lived to be 30 years old, I wouldn't want to live to be 200.

Replies from: Alicorn
comment by Alicorn · 2010-01-22T17:27:41.918Z · LW(p) · GW(p)

This is really interesting. Do you have a dispreference for uniqueness in other things, too? Do you think that societies are optimized for the average lifespans of their inhabitants and wouldn't be able to deal with a longer-lived outlying specimen? You specified "male" - if males lived 30 years and females got to be a thousand, would you still want to live to be only 30?

Replies from: tpebop
comment by tpebop · 2010-01-22T22:09:49.486Z · LW(p) · GW(p)

I personally just think that everyone should be given the same chance to live a long and happy life. I don't think anyone should be "privileged" enough to live longer than anyone else, simply because they have the financial means to do so.

I do think societies are optimized for their average inhabitant lifespan. If a group of "super humans" came about, I think they'd be met with extreme opposition from other "normal" people. If you've ever read a history book, or watched the news, then you're already aware of the numerous examples of prejudice (which often leads to violence or genocide) against those who are different, be it in ethnicity, creed, gender, or sexual preference.

Probably, I wouldn't want to go around being the "freakishly immortal" male; I imagine it'd reduce my chances of finding adequate mates and/or fitting into society. As irrational as that sounds, I quite like being social/normal.

Replies from: CronoDAS, Alicorn
comment by CronoDAS · 2010-01-26T11:50:46.394Z · LW(p) · GW(p)

I personally just think that everyone should be given the same chance to live a long and happy life. I don't think anyone should be "privileged" enough to live longer than anyone else, simply because they have the financial means to do so.

Should we give up antibiotics because some people can't afford them?

comment by Alicorn · 2010-01-22T22:11:14.115Z · LW(p) · GW(p)

Well, if you were the freakishly immortal male, nobody would (probably) be able to tell until you were on the far side of thirty; so while it might or might not help, it doesn't seem like it'd hurt in the finding-mates department.

comment by Blueberry · 2010-01-22T07:16:19.928Z · LW(p) · GW(p)

I have no interest in living forever.

Well, even if you were preserved, you still wouldn't live "forever". You could still die in an accident in some way that wouldn't allow you to be preserved and revived. You could still die from an illness that hasn't been cured yet. And you wouldn't survive the heat death of the universe.

But, why wouldn't you want to keep living? I hear this sentiment often and really don't understand it. I've always wanted to be immortal.

Replies from: AdeleneDawner, tpebop
comment by AdeleneDawner · 2010-01-22T13:44:26.360Z · LW(p) · GW(p)

I actually feel the same way. It's not going to stop me from signing up for cryo, though.

The feeling isn't rational; if anything, I'd describe it as instinctual, since it seems to be fairly free-floating and I don't remember having believed anything that seems likely to have spawned it. I try not to build on it, but it has acquired some cruft over the years. The main component is the feeling that anything over about 1,000 years' lifetime is just crazy talk - literally unthinkable in a way that my brain classifies as 'impossible' for no good reason. A recurring crufty rationalization of this is the idea that I wouldn't be able to handle 1,000 years' worth of cultural change. Another component of the issue is the feeling that I have that eventually I'll be 'done' - I'll run out of interesting things to do, or just not want to continue for whatever reason. For no apparent reason, my brain attaches the idea '200 years old' to this bit.

Of course, I also rather strongly suspect that if I live to be 1,000 (subjective) years old, I'll feel the same way, just with the numbers '10,000' and '2,000' where I have '1,000' and '200' now. Either way, it seems like living that long and finding out is a good solution to this problem.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-01-22T13:57:08.384Z · LW(p) · GW(p)

A recurring crufty rationalization of this is the idea that I wouldn't be able to handle 1,000 years' worth of cultural change.

You've already handled ~50,000 years of cultural change.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-22T14:00:26.048Z · LW(p) · GW(p)

I was counting cultural change since I was born, though perhaps it'd make more sense to count cultural change since I started participating in the world - no more than 30 years' worth, by either count.

How are you counting it, and why?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-01-22T14:36:00.566Z · LW(p) · GW(p)

You've gone from no culture at all (which I somewhat arbitrarily placed as the equivalent of 50kyrs ago) to the present in only as many years as you are old. A mere 1,000 more years of change, experienced in real time, should be easy in comparison.

Think back to whenever your earliest memories are, and the person you were then. Think of that magnitude of change as being just the beginning.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-22T14:51:38.915Z · LW(p) · GW(p)

I already had a reasonably good grasp of some parts of our culture as far back as I remember. I'm also already having trouble really keeping up with the world as it is now - I have trouble remembering that cell phones and laptops are commonplace, for example.

My model of how comprehension of culture works is based on the Critical Period Hypothesis - I suspect that we get one burst of really good ability to pick that kind of concept up, and then have a much lower ability for the rest of our lives.

It does occur to me that the kind of advanced science that would be able to reverse cryo preservation might well be able to recreate that kind of ability, though, now that I've thought enough about it to spell it out.

Replies from: gwern
comment by gwern · 2011-07-31T21:22:11.142Z · LW(p) · GW(p)

It does occur to me that the kind of advanced science that would be able to reverse cryo preservation might well be able to recreate that kind of ability, though, now that I've thought enough about it to spell it out.

I'd put a much higher probability on that kind of 'advanced science' than on cryonics working, FWIW. The key ingredients like BDNF are already loosely known, and we already know a lot of drugs (like piracetam) that have effects on BDNF.

comment by tpebop · 2010-01-22T17:13:39.674Z · LW(p) · GW(p)

I phrased my original statement from the point of view of a person who lives in a world where people live to be 80. I'd like to live as long as everyone else; if cryonics, prosthetics, nanotechnology, or some unforeseen technology comes along that allows people to live to be thousands of years old, then I'd like to live as long as them too. I'm afraid of being alone, and I wouldn't like to be the last person, or one of the last people, alive.

comment by AngryParsley · 2010-01-22T08:51:44.095Z · LW(p) · GW(p)

I have no interest in living forever.

OK, but do you have an interest in living for, say... 100 years? 200 years? 1,000,000 years? All of these lifespans are significantly shorter than forever, yet longer than current lifespans.

Replies from: tpebop
comment by tpebop · 2010-01-22T17:06:20.609Z · LW(p) · GW(p)

My opinion has changed. I'd like to live as long as everyone else. No more, no less. If the average life expectancy of a healthy American male were 1,000 years, then I'd like to live to the ripe old age of 1,000. I would not like to rely on cryonics to extend my life. I would, however, use cloned organs, full-body prostheses, or nanotechnology to extend my life.

My biggest concern with cryonics is whether my consciousness could be transferred to a new body. I'm still unsure exactly how consciousness is formed. I suspect that even if an exact (down to the atomic scale) replica of my brain were created, it might not be me. I'm not willing to bet money that could go toward my (future) child's college expenses, a house, or emergency medical expenses. I'd also note that I am currently trying to decide between textbooks, food, and rent. Perhaps if I were more financially secure my opinion would be different.

If consciousness is proven to be concrete and/or easily transferable, then I'll sign up for cryonics. Until then I'll live my current life to the fullest, by wasting my money on tangible, menial activities like watching movies and playing laser tag with my brother.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-22T19:16:23.985Z · LW(p) · GW(p)

My biggest concern with cryonics is whether my consciousness could be transferred to a new body. I'm still unsure exactly how consciousness is formed. I suspect that even if an exact (down to the atomic scale) replica of my brain were created, it might not be me.

It sounds like you are worried about philosophical zombies.

The key point of the linked article is that an atom-for-atom replica of your brain would direct its body to talk about its conscious experiences for the same reason you talk about your conscious experiences, so it would be an astounding coincidence if your reports of consciousness corresponded with your experience if that conscious experience were not part of the causal physics governing the atoms.

Replies from: tpebop
comment by tpebop · 2010-01-22T22:50:21.197Z · LW(p) · GW(p)

I don't understand. Are you saying that if an exact replica of my brain were created, then it wouldn't be me? If that's the case, then why sign up for cryonics?

Replies from: JGWeissman
comment by JGWeissman · 2010-01-22T22:56:21.757Z · LW(p) · GW(p)

No, I am saying the opposite: that the exact replica of your brain would be you, complete with your consciousness.

comment by ChrisBrown · 2010-01-21T22:50:04.051Z · LW(p) · GW(p)

It is fairly irrational, but the reason I haven't signed up is that it seems that, in order to get insurance, you normally need to have a blood test. Basically, I have a phobia when it comes to those; I recognize it is stupid, but I can't seem to get over it (maybe if I could have it done while knocked out, but it seems unlikely the people taking the blood would go for that). I've heard you can get term insurance without a medical test in the amounts required for cryonics (I think the premiums would probably cost me a few hundred dollars more per year; I'm 25 and in the US, so my risk of dying in the next 25 years should be low enough not to merit too large an increase). Does anyone know if there are carriers that would do this?

Replies from: Alicorn
comment by Alicorn · 2010-01-21T22:51:20.563Z · LW(p) · GW(p)

What would you do if it were medically necessary for you to get a blood test for some other reason (or what have you done in the past)?

Replies from: ChrisBrown
comment by ChrisBrown · 2010-01-21T23:54:56.331Z · LW(p) · GW(p)

I've had one done in the past when I was younger, and that probably made my phobia a bit worse. I believe they had to hold my arm down and basically force it. I'd like to think that if I needed it for some other medical reason, I would be able to get it done without a similar incident; but realistically, I figure that won't be the case.

Replies from: byrnema
comment by byrnema · 2010-01-22T00:34:56.488Z · LW(p) · GW(p)

Have you tried desensitization? Could you prick yourself with a med-lance? Even though you could feel weird doing it at home by yourself, you might feel more in control if it's just you doing it. Then being analytical and curious about your response might make it go away.

Replies from: ChrisBrown
comment by ChrisBrown · 2010-01-22T15:14:18.042Z · LW(p) · GW(p)

Although your suggestion did sound strange initially, desensitization might be a good way to get over it. I'll have to consider trying that as an option.

comment by zero_call · 2010-01-21T01:46:30.267Z · LW(p) · GW(p)

You're doing your argument an injustice by not linking to (or expounding on) a great reason why cryonics should work. If you already have, then please just cite your prior work.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T01:51:51.334Z · LW(p) · GW(p)

http://lesswrong.com/lw/wq/you_only_live_twice/

Replies from: zero_call
comment by zero_call · 2010-01-21T02:12:23.541Z · LW(p) · GW(p)

It's clear then that cryonic preservation and resuscitation is a largely untested, unknown hypothesis. Let's talk about it again in 10 or 20 years. Until then, you are getting carried away IMO.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T02:31:57.636Z · LW(p) · GW(p)

Untested? Mammalian organs have already been successfully cryopreserved, thawed, and transplanted. Cryopreserving organic material (including small multicellular life such as embryos) is commonplace now.

Unknown? Saturating a brain with cryoprotectant and preserving it in liquid nitrogen is going to preserve the information in it a lot better than burning it or burying it in the ground. Did you look at the electron micrographs of cryopreserved brain tissue?

If you're going to wait until you're confident cryonics will work, you'll have waited too long. The cost-benefit analysis favors signing up if there's even a 5% chance of success.
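
A minimal sketch of the cost-benefit arithmetic being gestured at here, with hypothetical numbers: the $300/year premium echoes the low end of the range quoted earlier in the thread, the 5% chance is the figure mentioned above, and the value placed on revival is purely a placeholder, not anyone's actual estimate.

```python
# Hypothetical cost-benefit sketch; all numbers are placeholders, not
# actual cryonics prices or anyone's real valuation of revival.
annual_premium = 300          # USD/year, echoing the low-end figure above
years_paying = 40             # assumed years of premiums before death
p_success = 0.05              # the 5% chance mentioned above
value_of_revival = 2_000_000  # placeholder subjective value of revival, USD

total_cost = annual_premium * years_paying        # 12,000
expected_benefit = p_success * value_of_revival   # 100,000
print(expected_benefit > total_cost)              # True at these numbers
```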

Replies from: zero_call, Vladimir_Nesov
comment by zero_call · 2010-01-21T02:55:56.678Z · LW(p) · GW(p)

By "cryogenic preservation", I mean to say, "long term cryogenic preservation and brain/body reawakening". This should be clear from context. Anyways...

Yes, it is untested. For this concept to be tested, they would (obviously) need to cryopreserve a human brain/body and then attempt to successfully re-awaken it.

And yes, it is unknown. It is true that cryopreservation does a better job than, say, "dirt preservation", i.e., "worm food preservation". Nevertheless, it is unknown how the resuscitation and repair of the brain would work, let alone the concept of "brain scanning", which remains pure (albeit alluring) science-fiction speculation.

EDIT: I'm sorry, I must admit to being somewhat ignorant about the subject. I've just found links to some archives of prior tests. However, by the standard of "tests with a reasonable chance of success", I stand by my argument.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T03:49:34.899Z · LW(p) · GW(p)

For this concept to be tested, they would (obviously) need to cryopreserve a human brain/body and then attempt to successfully re-awaken it.

And the fact that mammalian organs have already been successfully cryopreserved and revived doesn't cause you to reevaluate the chance of revival for humans in the future?

Nevertheless, it is unknown how the resuscitation and repair of the brain would work.

Unknown in the sense that there are many candidate methods that look like they'll work, but they require advances in computer hardware, materials science, and other fields. The method doesn't matter. All that matters is that enough information is preserved today so that some future technology can eventually recover you.

Let alone the concept of "brain scanning", which remains only a pure (albeit alluring) science fiction speculation.

HM's brain was cryopreserved and microtomed so that scientists could study it. Better microtomes and microscopy equipment would allow for a brain scan at high enough fidelity for emulation. This example doesn't require any new inventions, just improvements on existing devices. Are you willing to bet that no future technology will ever be able to reconstruct a mind from a cryopreserved brain?

Replies from: zero_call
comment by zero_call · 2010-01-21T04:49:06.851Z · LW(p) · GW(p)

All the evidence you're suggesting is indirect and far removed from the actual goal. And the successes of organ/dog/etc. cryopreservation you keep mentioning have, on inspection, been subject to much more extreme constraints (e.g., extremely limited time, extensive pharmacological life support) than actual cryonics procedures involve. However you might want to qualify it, the cryonics establishment is just speculating.

There are many scientific fields that do huge amounts of unsuccessful tests, and never (or after fifty years and counting) solve the problem (e.g., in plasma confinement.) Cryonics hasn't even been tested.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T05:32:20.899Z · LW(p) · GW(p)

All the evidence you're suggesting is indirect and far removed from the actual goal. And the successes of organ/dog/etc. cryonics you keep mentioning, on inspection, have been subject to much more extreme constraints...

You have to test airfoils in wind tunnels before you can build a plane. Isn't all this stuff evidence in favor of cryonics? And "far removed"? I doubt either of us could tell the difference between rabbit kidney cells and human brain cells under a microscope.

You didn't answer any of the questions I asked. Maybe I should be more specific. Please tell me which parts of cryonics you think make it unlikely that cryopreserved people will ever be revived:

  • Do you think that the cryopreservation process itself damages the brain cells enough to destroy the mind?

  • Do you think that long-term storage at low temperatures damages the brain enough to destroy the mind?

  • Do you think that no future technology will be able to recover the mind from a cryopreserved brain?

  • Also, what part of my example do you have a problem with? Do you think there will not be any advances in microtome, microscope, and computing technology?

There are many scientific fields that do huge amounts of unsuccessful tests, and never (or after fifty years and counting) solve the problem (e.g., in plasma confinement.)

Never is a lot longer than 50 years. You mention plasma confinement. JET has a gain of 0.7 and ITER is designed to have a gain of 10. Both are experimental reactors, but they should cause you to doubt that plasma confinement will never work. You can argue that it won't be as cost-effective as other power generation methods, but if fusion plants brought people back from the dead, they'd be built eventually.

Replies from: zero_call
comment by zero_call · 2010-01-21T06:26:00.562Z · LW(p) · GW(p)

Plasma physics is actually my research area. :) I was just using that as an example to show that you can't assume something will work just because it seems plausible.

I haven't answered your questions because the burden of proof is on you. It doesn't show anything to refute my arguments; I don't claim to be an expert on the subject and your refutation wouldn't even prove anything, because in the end, you still don't have any evidence.

It's been good discussing this (genuine comment) but from my side, this discussion is over. I'm just repeating myself now.

Replies from: orthonormal, AngryParsley
comment by orthonormal · 2010-01-23T02:09:22.954Z · LW(p) · GW(p)

you still don't have any evidence

This is a larger issue here than just cryonics, which might be why you're getting some downvotes. While few of the things we've been referring to would be admissible in a courtroom, the sense in which we use the word "evidence" is a bit more general than that. (Do, in particular, read the post linked within the wiki article.)

And it's not just a matter of idiosyncratic word usage— the way we think of evidence really is better suited to figuring things out than the working definition most people use in order to say things like "There's no real evidence that people evolved from animals, because you weren't there to see it happen!"

What we mean is that there are plenty of facts about our physics and our biology that fit into "how we'd expect the world to look if it's true cryonics works", but don't fit into any stated version of "how we'd expect the world to look if cryonics is doomed to failure". These are, in fact, pieces of evidence that cryonics should work.

It's the same principle behind Feynman's lecture "There's Plenty of Room at the Bottom", when he discussed nanotechnology in theory. Of course he couldn't point to full examples of what he was talking about, but it was very valid to say that 'if physics works the way we think it does, then these things should be quite possible to do'.

Replies from: Tyrrell_McAllister, zero_call
comment by Tyrrell_McAllister · 2010-01-23T02:34:46.849Z · LW(p) · GW(p)

It's the same principle behind Feynman's lecture "There's Plenty of Room at the Bottom", when he discussed nanotechnology in theory. Of course he couldn't point to full examples of what he was talking about, but it was very valid to say that 'if physics works the way we think it does, then these things should be quite possible to do'.

See also Eliezer's post Is Molecular Nanotechnology "Scientific"? (also found at the wiki article you gave — that wiki's really starting to shape up).

comment by zero_call · 2010-01-23T04:01:40.676Z · LW(p) · GW(p)

I agree that this argument depends a lot on how you look at the idea of "evidence". But it's not just in the court-room evidence-set that the cryonics argument wouldn't pass. And it's unfair to compare evolutionary evidence with cryonics evidence, where the former has been explicitly, clearly documented and tested (as in dog breeding), whereas the latter (as I keep repeating myself) has had absolutely no testing whatsoever.

Evidence is required to be explicitly linked, in the general scientific community. Testing like "bringing a dog back from a low-blood-pressure state for 15 minutes while under intensive pharmacological life support" does not establish that human cryonic regeneration will work. It justifies a certain optimism, but (in my mind) this optimism appears to have taken over in the context of the discussion we're having here.

In the physics community, or in the mainstream scientific community in general, the cryonics argument doesn't pass, for these reasons. (You might argue that the mainstream scientific community is just ignorant to the cryonics idea, but again, that's a testament to the same conclusion.)

The reason for the difference in "evidence" is because test-evidence is crucial for scientific justification. There's a deep, wide difference between strong plausibility and demonstrated truth. That's why, for example, the whole GAI and singularity science receives so little attention. It's all extremely plausible, but plausibility alone doesn't count. It doesn't actually count for very much, and that's why this is a forum, not an academic conference. To me, it definitely doesn't count for paying hundreds or thousands of dollars -- and that's assuming that the price doesn't jump.

For the cryonics argument, the leaps of assumption you require to go from plausibility to fact include, for example, nanotechnological repair techniques, which don't even exist in any form today. You can argue about plausibility all you want (as AngryParsley has done), but that's only effective insofar as it demonstrates further plausibility. It doesn't actually take you anywhere, and it doesn't provide evidence of the sort that you need to do real science (i.e., publish papers).

And I appreciate your concern about my down votes, but it's OK, I think I'm happily doomed to a constant zero karma status.

Replies from: Eliezer_Yudkowsky, Tyrrell_McAllister, orthonormal, pdf23ds
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-23T04:10:31.331Z · LW(p) · GW(p)

Then you are, in a technical sense, crazy. The rules of probability theory are rules, not suggestions. No one gets to modify what counts as "rational evidence", not even science; it's given by Bayes's Theorem. If you make judgments based on other criteria like "not believing anything strange can happen, regardless of what existing theories predict when extrapolated, unless I see definite confirmation, regardless of the reasonableness of expecting definite confirmation right now even given that the underlying theory is correct", you'll end up with systematically wrong answers. Same thing if you try to decide in defiance of expected utility, using a rule like "ignore the stakes if they get large enough" or "ignore any outcome that has not received definite advance confirmation regardless of its probability under a straight extrapolation of modern science".

Replies from: zero_call
comment by zero_call · 2010-01-23T04:22:04.246Z · LW(p) · GW(p)

I very easily believe that you think I'm crazy. Then again, you're the one paying thousands of dollars to freeze yourself, devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal, and believing that the universe is instantaneously creating infinitely many new worlds at every instant.

Your idea of "data extrapolation" is drastically different from mine. I'm using the accepted modus operandi of data extrapolation in physics, biology, and all other hard sciences, where we extrapolate conclusions from direct evidence. While my form of extrapolation is in no way "superior", it does have a much higher degree of certainty, and I certainly don't feel that that's somehow "crazy".

If I'm crazy, then the entire scientific community is crazy. All I'm doing is requiring the standard level of data before I believe someone's claim to be correct. And this standard is very, very fruitful, I might add.

Replies from: AndyWood, Jordan, Kevin
comment by AndyWood · 2010-01-23T04:44:07.949Z · LW(p) · GW(p)

Your repeated references to your own background in physics as a way of explaining your own thinking suggest to me that you may not be hearing what other people are saying, but rather mistranslating.

I don't see anybody saying they think unfreezing will definitely work. By and large, I see people saying that if you greatly value extending your life into the far future, then, given all the courses of action that we know of, signing up for cryonics is one of the best bets.

Evidence is knowledge that supports a conclusion. It isn't anything like proof. In that sense, there is evidence that unfreezing will work someday, which is not to say that anybody knows that it will work.

Replies from: zero_call
comment by zero_call · 2010-01-23T04:52:03.269Z · LW(p) · GW(p)

Ironically, you're mistaking my assertion in the same manner that you think I'm mistaking others'. It's correct that people aren't claiming certainty of belief, but neither am I. In fact, I'm supporting uncertainty in the probabilities, and that's all.

I haven't once claimed that cryonics won't work, or can't work. My degree of skepticism was engendered by the extreme degree of support that was shown in the top-post. That being said, I haven't even expressed that much skepticism. I merely suggested that I would wait a while before coming to such a strong conclusion as EY, e.g., so strong as to pass strong value judgements on others for not making my same decision.

Replies from: orthonormal, Jordan
comment by orthonormal · 2010-01-23T20:47:10.392Z · LW(p) · GW(p)

You do, however, seem to be making the claim that given the current uncertainties, one should not sign up for cryonics; and this is the point on which we seem to really disagree.

One's actions are a statement of probability, or they should be if one is thinking and acting somewhat rationally under a state of uncertainty.

It's not known with certainty whether the current cryonics procedure will suffice to actually extend my life, but that doesn't excuse me from making a decision about it now since I might well die before the feasibility of cryonics is definitively settled, and since I legitimately care about things like "dying needlessly" on the one hand and "wasting money pointlessly" on the other.

You might legitimately come to the conclusion that you don't like the odds when compared with the present cost and with respect to your priorities in life. But saying "I'm not sure of the chances, therefore I'll go with the less weird-sounding choice, or the one that doesn't require any present effort on my part" is an unsound heuristic in general.

Replies from: Vladimir_Nesov, thomblake
comment by Vladimir_Nesov · 2010-01-23T21:27:23.724Z · LW(p) · GW(p)

But saying "I'm not sure of the chances, therefore I'll go with the less weird-sounding choice, or the one that doesn't require any present effort on my part" is an unsound heuristic in general.

A good point.

comment by thomblake · 2010-01-25T22:00:20.220Z · LW(p) · GW(p)

But saying "I'm not sure of the chances, therefore I'll go with the less weird-sounding choice, or the one that doesn't require any present effort on my part" is an unsound heuristic in general.

I might be missing a bit of jargon here, but what would it mean for a heuristic to be "unsound"? Do you mean it performs badly against other competing heuristics?

It sounds to me like a heuristic that should work pretty well, and I'm not sure of any heuristic that is as simple but works better for general cases. It seems to be the same caliber as "gambling money is bad".

comment by Jordan · 2010-01-23T05:03:19.677Z · LW(p) · GW(p)

Uncertainties in the probabilities can't be absorbed into a single, more conservative probability? If I'm 10% sure that someone's estimate that cryonics has a 50% likelihood of working is well calibrated, isn't that the same as being 5% sure that cryonics is likely to work?
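
As a minimal sketch of the arithmetic in this comment (the 10% and 50% figures are Jordan's hypotheticals, not anyone's actual estimates):

```python
# Jordan's hypothetical figures, treating the "miscalibrated" branch as
# contributing nothing -- the simplification RobinZ objects to below.
p_well_calibrated = 0.10      # chance the source's estimate is trustworthy
p_works_per_source = 0.50     # the source's own estimate that cryonics works

p_works_conservative = p_well_calibrated * p_works_per_source
print(p_works_conservative)   # 0.05
```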

Replies from: RobinZ, bgrah449
comment by RobinZ · 2010-01-23T23:36:36.720Z · LW(p) · GW(p)

You're assigning 0% probability to (cryonics_working|estimate_miscalibrated). Therefore you should buy the lottery ticket.

Replies from: Jordan
comment by Jordan · 2010-01-24T03:11:41.249Z · LW(p) · GW(p)

My calculation was simplistic, you're right, but it's still useful for arriving at a conservative estimate. To add to the nitpickiness, we should mention that the probability that someone is completely well calibrated on something like cryonics is almost surely 0. The 10% estimate should instead be for the chance that the person's estimate is well calibrated or underconfident.

ETA: applying the same conservative method to the lottery odds would lead you to be even less willing to buy a ticket.

Replies from: RobinZ
comment by RobinZ · 2010-01-24T04:17:17.463Z · LW(p) · GW(p)

The point I'm making is that there is an additional parameter in the equation: your probability that cryonics is possible independent of that source. This needn't be epsilon any more than the expectation of the lottery value need be epsilon.

Replies from: Jordan
comment by Jordan · 2010-01-24T05:35:35.281Z · LW(p) · GW(p)

I agree. That's why I said so previously =P

comment by bgrah449 · 2010-01-23T05:25:50.174Z · LW(p) · GW(p)

Potentially more - perhaps their process for calibration is poor, but the answer coincidentally happens to be right.

comment by Jordan · 2010-01-23T04:49:46.593Z · LW(p) · GW(p)

Suppose someone offered you 1,000,000 : 1 odds against string theory being correct. You can buy in with $5. If string theory is ever confirmed (or at least shown to be more accurate than the Standard Model) then you make $5,000,000, otherwise you've wasted your $5. Do you take the option?

There is -- as far as I know -- no experimental evidence in favor of string theory, but it certainly seems plausible to many physicists. In fact, based on that plausibility, many physicists have done the expected value calculation and decided they should take the gamble and devote their lives to studying string theory -- a decision very similar to the hypothetical monetary option.
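
A rough sketch of the expected-value arithmetic behind the hypothetical bet above; the payout and stake are Jordan's figures, and the probabilities plugged in below are arbitrary illustrations.

```python
# Expected value of the hypothetical 1,000,000:1 bet on string theory.
def expected_value(p_correct, payout=5_000_000, stake=5):
    """Net expected gain from buying in at the stated odds."""
    return p_correct * payout - (1 - p_correct) * stake

# The bet is worth taking whenever the expected value is positive,
# i.e. for roughly p_correct > 1e-6 at these odds.
print(expected_value(1e-5))   # ~45.0, positive
print(expected_value(1e-7))   # ~-4.5, negative
```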

comment by Kevin · 2010-01-23T08:10:47.051Z · LW(p) · GW(p)

devoting your life to philosophy in the quest of achieving a technical, non-philosophical goal

Less Wrong reads like philosophy, but that's because it's made up of words. As I understand it, Eliezer's real quest is writing an ideal decision theory and set of goals, and rigorously proving the safety of self-modifying code for optimizing that decision theory. All of this philosophy is necessary in order to settle on what it even means to have an ideal decision theory or set of goals.

comment by Tyrrell_McAllister · 2010-01-23T04:29:04.130Z · LW(p) · GW(p)

There's a deep, wide difference between strong plausibility and demonstrated truth.

This right here is the root of the disagreement. There is no qualitative difference between a "strong plausibility" and a "demonstrated truth". Beliefs can only have degrees of plausibility (aka probability) and these degrees never attain certainty. A "demonstrated truth" is just a belief whose negation is extremely implausible.

Replies from: zero_call
comment by zero_call · 2010-01-23T04:37:17.294Z · LW(p) · GW(p)

The problem is that what you call "strong plausibility" is entirely subjective until you bring evidence onto the table, and that's the point I'm trying to make.

Edit: In other words, I might think cryonics is strongly plausible; for example, I might also think the existence of gnomes is pretty plausible; but I don't believe either one until I see some evidence.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-01-23T05:39:13.356Z · LW(p) · GW(p)

The problem is that what you call "strong plausibility" is entirely subjective until you bring evidence onto the table, and that's the point I'm trying to make.

A piece of evidence regarding a claim is any information, of whatever sort, that ought to affect the plausibility (aka probability) that you assign to the claim. If that evidence makes the claim "strongly plausible", then that's an objective fact about how the laws of probability require you to update on the evidence. There is nothing subjective about it.

In other words, I might think cryonics is strongly plausible, but for example, I might also think the existence of gnomes seems pretty plausible, but I don't believe either one until I see some evidence.

If you thought that garden gnomes were strongly plausible, then either

  1. you had the misfortune to start with a very inaccurate prior,

  2. you had the misfortune to be exposed to highly misleading evidence, or, most likely,

  3. you failed to apply the rules of probable inference correctly to the evidence you received.

None of these need be the case for you to think that cryonics is plausible enough to justify the expense. That is the difference between cryonics and garden gnomes.

comment by orthonormal · 2010-01-24T00:39:17.006Z · LW(p) · GW(p)

And I appreciate your concern about my down votes, but it's OK, I think I'm happily doomed to a constant zero karma status.

So that's why you chose your moniker...

Seriously, though, I doubt it. You're a contrarian here on cryonics and AI, but ISTM most of the downvotes have been due to misinterpretation or miscommunication (in both directions, but it's easier for you to learn our conversational idiosyncrasies than vice versa) rather than mere disagreement. As soon as you get involved in some discussions on other topics and grow more accustomed to the way we think and write, your karma will probably drift up whether you like it or not.

comment by pdf23ds · 2010-01-23T13:22:56.155Z · LW(p) · GW(p)

I agree that this argument depends a lot on how you look at the idea of "evidence". But it's not just in the court-room evidence-set that the cryonics argument wouldn't pass.

Yes, that's very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.

But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique that says we shouldn't be using Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scientific evidence. The consensus around here is that Bayesian evidence is much more effective on an individual level, even though with current humans science is still very much necessary for overall progress in knowledge.

comment by AngryParsley · 2010-01-21T07:32:52.969Z · LW(p) · GW(p)

Burden of proof? Did you look at the giant amount of information already written on cryonics?

Fine, here are my answers to my own questions.

  • Do you think that the cryopreservation process itself damages the brain cells enough to destroy the mind?

No. Cryopreserving some tissues is common today. Organs from mammals have already been cryopreserved and transplanted. Electron micrographs of cryopreserved brain tissue show almost no degradation. There is an issue with microfractures, but the amount of information destroyed by them is minimal. The fractures are very few and translate tissue along a plane by a few microns. The other problem is ischemic injury. This is mitigated by having standby procedures and ID tags with instructions on how to begin cooling the body. Brain cells don't immediately die when deprived of oxygen, but they do start an ischemic cascade that can't be prevented currently. The cascade takes several hours at normal body temperature, and is drastically extended if the body is brought close to freezing.

  • Do you think that long-term storage at low temperatures damages the brain enough to destroy the mind?

No. Biochemistry is basically stopped at 77K. The only degradation comes from free radicals caused by cosmic rays. About 0.3 millisieverts is absorbed each year. The LD50 for acute radiation poisoning is 3-4 sieverts with current medicine. That's at least 10,000 years of cosmic rays (the arithmetic is sketched after this list). The human body itself contains some radioactive isotopes, but if you do the math it's still well over 1,000 years before radiation poisoning is an issue.

  • Do you think that no future technology will be able to recover the mind from a cryopreserved brain?

No. See my comment about microtomes, microscopes, and brain emulation. That's just using slightly improved technology. A superintelligence with molecular nanotechnology certainly wouldn't have a problem reviving a corpsicle.

Whew, that was a lot of text and links. Once you said the discussion was over I couldn't resist.
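
A quick back-of-envelope check of the cosmic-ray arithmetic in the second answer above; the dose figures are the ones quoted there, and whether cosmic rays are really the only source of degradation at 77K is the commenter's claim, not something this sketch establishes.

```python
# Check of the quoted figures: ~0.3 mSv/year background dose versus an
# acute LD50 of 3-4 Sv.
annual_dose_sv = 0.3e-3   # ~0.3 millisieverts absorbed per year
acute_ld50_sv = 3.0       # lower end of the quoted 3-4 sievert LD50

years_to_ld50 = acute_ld50_sv / annual_dose_sv
print(years_to_ld50)      # 10,000 years, matching the "at least 10,000 years" claim
```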

Replies from: wedrifid, byrnema
comment by wedrifid · 2010-01-21T07:42:38.751Z · LW(p) · GW(p)

corpsicle

Nice term. I hadn't heard it before. I read now that I am evidently supposed to find the term offensively pejorative. But I'll take tongue in cheek humour over dignity any day of the week.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-21T07:52:18.972Z · LW(p) · GW(p)

I guess as with other epithets, it's acceptable for members of the slighted group to use the term amongst themselves. :)

Replies from: wedrifid
comment by wedrifid · 2010-01-21T08:00:05.273Z · LW(p) · GW(p)

There's a minimum budget Sci. Fic. movie for you right there!

"What up Corpsicle?"

"Me! Mwahahahaha!"

comment by byrnema · 2010-01-21T12:36:10.042Z · LW(p) · GW(p)

I would like to check something. I naively imagine* that if you freeze a person cryonics-style, no damage is done to their brain immediately. So if you froze a person who was alive, and then thawed them 5 minutes after you froze them, they'd wake up virtually the same -- alive and unharmed. Is this true?

*This idea is based on stories about children frozen in lakes and optimism.

Replies from: tut, Kevin
comment by tut · 2010-01-21T12:47:55.430Z · LW(p) · GW(p)

"Children frozen in lakes" are not frozen, only hypothermic. If you actually get ice in your brain cells you die.

"Freezing" a person "cryonics-style" begins with drainging all the blood from their head, and the process takes more than a few minutes. So it is pretty much guaranteed to cause severe brain damage or death if you do it to a live person. Which means that it would not be legal, even if somebody had done that experiment they could not publish it.

comment by Kevin · 2010-01-21T12:42:16.458Z · LW(p) · GW(p)

I don't think so. The chemicals used for vitrification are poisonous, but fixing the toxic damage is presumed to be one of the easier steps in reviving someone's vitrified brain.

This might be true for some definition of freezing a person but not with the protocols currently used by Alcor and CI.

comment by Vladimir_Nesov · 2010-01-22T07:15:32.809Z · LW(p) · GW(p)

Mammalian organs have already been successfully cryopreserved, thawed, and transplanted.

As far as I can see from reading the abstract of this citation, there is no actual cooling down in liquid nitrogen involved, only perfusion with a cryoprotectant. Please give a quote, find another citation, or retract the claim (I don't follow the literature, so don't know whether the claim is true; the cited paper is from 1994, so a lot could've changed).

Replies from: AngryParsley
comment by AngryParsley · 2010-01-22T07:48:46.979Z · LW(p) · GW(p)

From Greg Fahy's wikipedia article:

In the summer of 2005, where he was a keynote speaker at the annual Society for Cryobiology meeting, he announced that Twenty-First Century Medicine had successfully cryopreserved a rabbit kidney at −130°C by vitrification and transplanted it into a rabbit after rewarming, with subsequent long-term life support by the vitrified-rewarmed kidney as the sole kidney.

Here's an abstract of the relevant paper with a link to the PDF.

I knew it had been done but I linked to the wrong abstract in my original post.

comment by byrnema · 2010-01-20T22:30:26.119Z · LW(p) · GW(p)

I keep hearing that once everyone starts signing up for cryonics, I'm going to want to sign up too.

Well, yeah. Is there going to be any room for me or am I going to be out of luck?

Replies from: Morendil, ciphergoth
comment by Morendil · 2010-01-21T00:21:43.589Z · LW(p) · GW(p)

No, there's no penalty to be expected for signing up late. In fact, it ought to be cheaper by then, owing to economies of scale.

The only drawback is you might die ill-prepared while you're waiting.

comment by Paul Crowley (ciphergoth) · 2010-01-21T08:39:30.172Z · LW(p) · GW(p)

It will only get cheaper, easier and more reliable as more people sign up.

However, you won't be able to say that you signed up before it was popular :-)

Replies from: byrnema
comment by byrnema · 2010-01-21T13:19:44.739Z · LW(p) · GW(p)

It will only get cheaper, easier and more reliable as more people sign up.

I hope so.

I've ordered my top fears about cryonics. My fourth greatest concern is that as soon as cryonics starts really getting underway, there are going to be stressful issues with people trying to sign up but there not being enough room. Then I will have something like survivor guilt.

comment by byrnema · 2010-01-20T13:00:16.992Z · LW(p) · GW(p)

in exchange for an extra $300 per year.

What is the actual cost per year?

I don't know how the cost is figured by the company... But consider the following scenario: a child is signed up for 10 years, then dies, and then needs to be suspended for 150 years. Assuming no inflation, how much would that cost?

Whatever amount the parent needs to come up with, we can divide that by 45, the number of years a parent can expect to be working.
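
A minimal sketch of the amortization described above, using placeholder figures; the flat fee and annual dues below are hypothetical illustrations, not actual provider prices.

```python
# Placeholder figures only -- not actual cryonics provider prices.
flat_fee = 30_000       # assumed one-time suspension fee covering storage, USD
annual_dues = 300       # assumed yearly membership while signed up, USD
years_signed_up = 10    # the child in the scenario is signed up for 10 years
working_years = 45      # years a parent can expect to be working

total_cost = flat_fee + annual_dues * years_signed_up   # 33,000
cost_per_working_year = total_cost / working_years      # ~733
print(round(cost_per_working_year))
```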

Replies from: Morendil, bogdanb
comment by Morendil · 2010-01-21T00:53:18.721Z · LW(p) · GW(p)

The cost is different from the fee. You pay a flat fee to be put into a dewar after various unappetizing things are done to your body and brain to minimize damage.

The upkeep costs of the dewars are paid by the cryonics organization out of investment funds they have set up for the express purpose of being sustainable over the very long term. (A very good reason to lower your expectations of cryonics a notch would be to learn that some reputable cryonics company was hard hit by the subprime crisis or had given their money to Madoff to invest.)

Suggested reading: The First Immortal by Halperin.

comment by bogdanb · 2010-01-20T23:30:20.760Z · LW(p) · GW(p)

As far as I understand it, nothing is expected to be paid after the child dies. In fact, the organizations seem quite emphatically against accepting promises of “future payments”. Every cost is upfront.

(The idea is that if they'd accepted “future payments”, and the parent stopped paying at some point, they'd have to unfreeze the kid, which they'd dislike; it probably wouldn't do wonders for their reputation, either.)

Replies from: byrnema
comment by byrnema · 2010-01-20T23:58:05.767Z · LW(p) · GW(p)

Well, that sounds like a good thing.

Replies from: RobinZ
comment by RobinZ · 2010-01-21T00:55:48.597Z · LW(p) · GW(p)

There's some sad history behind that attitude, I regret to say. There was a high-profile story some years ago about the now-gone Cryonics Society of California, at which several cryonics patients were allowed to thaw - to a major degree because of financial problems.

(Yes, I listen to This American Life occasionally.)

comment by mariz · 2010-01-20T12:54:43.793Z · LW(p) · GW(p)

Taking the cryonics mindset to its logical conclusion, the most "rational" thing to do is commit suicide at age 30 and have yourself cryopreserved. If you wait for a natural death at a ripe old age, there may be too much neural damage to reconstitute the mind/brain. And since you're destined to die anyway, isn't the loss of 50 years of life a rational trade-off for the minuscule chance of infinite life?

NO.

Replies from: Morendil, Kaj_Sotala, Kevin, wedrifid, mariz
comment by Morendil · 2010-01-20T13:52:33.623Z · LW(p) · GW(p)

Please explain how suicide "logically" follows from what you call the "cyronics mindset".

One possible motivation for being interested in cryonics (mine, for instance) is that you value having enjoyable and novel experiences. There is a small probability that by having my brain preserved, I will gain access to a very large supply of these experiences. And as I currently judge such things, dying and having my brain rot would put a definite and irrevocable stop to having such experiences.

It would be stupid to commit suicide now, even if I had arranged for cryonics, because the evidence is largely in favor of my being able to arrange for 20 years of novel enjoyable experiences starting now, while successful suspension and revival remains a long shot. I do not feel confident enough in calculations which multiply a very large utility of future life after revival by a very small probability of eventual revival.

However, there is a small but non-negligible chance that I will be diagnosed with a fatal disease during that period. The moment the diagnosis is established, my options for funding suspension vanish, and most of my capital at that time should rationally be invested in fighting the disease (and making plans for my family). My capacity to arrange for future enjoyable experiences will effectively plummet to near zero as a result; I will have lost an option I now have, which appears to be the only option of its kind.

As long as my brain remains capable of novel enjoyable experiences, and I have plenty of evidence around me that older people are so capable, there is no "neural damage" to protect against. I would reason differently if I were diagnosed with, say, Alzheimer's. I would prefer not to grapple with the question "how much of myself can I lose and still be myself".

It may seem odd to care that much about my 10-year-removed future self. But "caring about future selves" and "not committing suicide" are at least consistent choices, and they both seem logically consistent with an investment in cryonics.

comment by Kaj_Sotala · 2010-01-20T13:10:58.928Z · LW(p) · GW(p)

Taking the cryonics mindset to its logical conclusion, the most "rational" thing to do is commit suicide at age 30 and have yourself cryopreserved.

That might follow if you assign certain probabilities, utilities, and discount factors, but it certainly isn't the obvious logical conclusion. Even for most cryonics advocates, very likely living for at least 40 more years beats a small extra chance of being revived in the future. "Paying a bit extra for the chance of being revived later on is worth it" does not equal "killing yourself for the chance of being revived later on is worth it".

(Not even if we assumed the most inconvenient possible world where committing suicide at the age of 30 actually did improve your chances of getting successfully cryopreserved - in the world we live in, the following police investigation etc. would probably just reduce the odds.)

Replies from: mariz
comment by mariz · 2010-01-20T13:23:10.898Z · LW(p) · GW(p)

What is the calculated utility of signing up for cryonics? I've never seen a figure.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-20T14:02:21.796Z · LW(p) · GW(p)

It'll vary drastically depending on who you ask. Hanson puts the worth of cryonic suspension at $125,000, assuming a $50K/year income.
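For concreteness, here is a minimal back-of-envelope sketch of the kind of expected-value calculation people mean. This is not Hanson's actual model; every number in it is an assumption chosen purely for illustration.

```python
# Illustrative only -- not Hanson's model; all numbers are assumptions.
p_revival = 0.05          # assumed probability that suspension + revival works
extra_years = 100         # assumed years of extra life if it does work
value_per_year = 50_000   # value a year of life at the assumed $50K/year income
cost = 30_000             # rough assumed lifetime cost of a cryonics arrangement

expected_benefit = p_revival * extra_years * value_per_year  # 250,000
net_expected_value = expected_benefit - cost                 # 220,000
print(expected_benefit, net_expected_value)
```

Change any of the inputs and the answer swings by orders of magnitude, which is why the figure varies so drastically depending on who you ask.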

comment by Kevin · 2010-01-20T13:04:25.750Z · LW(p) · GW(p)

No, because cryonics is expected to improve dramatically during our lifetimes. So the longer you wait to be preserved, the more likely it will work.

comment by wedrifid · 2010-01-20T14:43:29.697Z · LW(p) · GW(p)

Capital letters don't change math. Something is either a logical, rational conclusion given what you know or it isn't.

comment by mariz · 2010-01-20T13:00:56.354Z · LW(p) · GW(p)

Plus, suicide allows you to make a controlled exit and a controlled delivery in the cryopreserved state. You could die in a car accident, trapped in the wreckage for hours before they extract you, while your brain degenerates. You could be shot in the head. You could develop a neural disease or a brain tumor.

You just can't take these chances. The rational solution is suicide at an early age.

Replies from: Dustin
comment by Dustin · 2010-01-20T20:21:32.240Z · LW(p) · GW(p)

Depends on how high a probability you assign to cryonics working.

comment by Jonii · 2010-01-20T11:05:12.664Z · LW(p) · GW(p)

This post was beautiful, thank you for it.

That said, I'm not signed up for cryonics. I'm still unsure how it would work out, given that I live in Northern Europe.

Replies from: ChrisPine
comment by ChrisPine · 2010-01-20T13:50:58.289Z · LW(p) · GW(p)

Same here, so if anyone has any info...

It's not as easy if you don't live in California.

Replies from: James_Miller
comment by James_Miller · 2010-01-20T17:10:08.592Z · LW(p) · GW(p)

Sign up with a U.S. provider. Chances are you will die of some non-sudden illness and have the ability to fly to the U.S. at the end stage of your life.

Replies from: bogdanb
comment by bogdanb · 2010-01-20T23:33:04.798Z · LW(p) · GW(p)

Do you have any idea how hard it is for some of us to get a US Visa? What would I put on the application as purpose of visit? I guess tourism (it is a kind of time travel, isn't it?), but if I'm critically ill that might not work...

comment by CronoDAS · 2010-01-20T00:22:33.156Z · LW(p) · GW(p)

I just came up with another excuse: I can't afford to pay for it.

I don't have $29,250 in savings. I also have no income and don't expect to have one in the future. Given the nature of the insurance business, the expected value of buying life insurance should be negative; I can't buy the insurance with my savings and expect to get a larger payout unless I take steps to hasten my own death.

Replies from: orthonormal, Vladimir_Nesov, bgrah449
comment by orthonormal · 2010-01-20T01:47:43.082Z · LW(p) · GW(p)

You should know by now that "I just came up with another excuse" is a red flag for motivated cognition. We might not have even found your true reason for rejection yet...

Replies from: CronoDAS, CronoDAS
comment by CronoDAS · 2010-01-20T02:29:17.655Z · LW(p) · GW(p)

Well, my true reason might indeed be something more along the lines of "my parents wouldn't approve of it".

And the original post referred to reasons not to sign up for cryonics as "excuses" so I copied the terminology. ;)

comment by CronoDAS · 2010-01-20T03:57:05.023Z · LW(p) · GW(p)

Yet another possible "true rejection":

I don't feel as though I deserve to be revived in the future. I suspect that my existence has been a net loss for the world so far. I make garbage. I've been educated at taxpayer expense. I've done very little that anyone would consider a service worth paying for. I'm a leech, a parasite, a (figurative) basement dweller, a near-hikikomori, a lazy bum, a loser, and plenty of other negative terms. And this isn't going to change. So why should I leave the future with the burden of dealing with me?

Replies from: LucasSloan, wedrifid, bogdanb
comment by LucasSloan · 2010-01-20T06:31:19.744Z · LW(p) · GW(p)

Tell you what. If I make it (to the creation of an FAI), and no one else has already done it, I will personally spend the resources to revive you and pay for your upkeep. I further make this pledge for anyone who is cryopreserved and unwanted.

Replies from: CronoDAS, Bindbreaker
comment by CronoDAS · 2010-01-26T11:39:43.112Z · LW(p) · GW(p)

Then you're part of the problem. I'm sick of being a charity case.

comment by Bindbreaker · 2010-01-20T06:35:47.927Z · LW(p) · GW(p)

I'm pretty sure most people are concerned more with the scenario where revival comes before FAI.

Replies from: LucasSloan
comment by LucasSloan · 2010-01-20T06:46:04.710Z · LW(p) · GW(p)

I think most people who are concerned about revival aren't really considering FAI on an emotional level at all. I'd considered making the same promise regardless of FAI, but I think that it would be negligent of me to do so, with such important investment opportunities available. Also, I'm not sure I'd have that much money, even for just CronoDAS.

comment by wedrifid · 2010-01-20T04:02:58.252Z · LW(p) · GW(p)

That sounds like it could be closer to home.

comment by bogdanb · 2010-01-20T23:37:02.319Z · LW(p) · GW(p)

And this isn't going to change.

How do you know? Or, in other words, why do you assign a lower probability to this than to cryopreservation actually working?

(If it didn't work, then it doesn't matter if you deserved it or not, any money you had would still be redistributed in the society, and you wouldn't cause any significant expense anymore.)

comment by Vladimir_Nesov · 2010-01-20T01:55:50.753Z · LW(p) · GW(p)

Given the nature of the insurance business, the expected value of buying life insurance should be negative

This is thoroughly confused. The expected amount of money out of the deal is negative, but the expected value of the deal can still be positive (otherwise people shouldn't buy insurance), and in this particular case you need to think about the expected value of your post-revival life, not money.

Replies from: bgrah449, CronoDAS
comment by bgrah449 · 2010-01-20T02:28:23.508Z · LW(p) · GW(p)

Life insurance is purchased more for signaling than as a financial instrument. (Life insurance was unsellable when the product was invented; the concept of your family profiting from your death was morbid. Salesmen eventually realized they had to market it as something a man purchases to provide for his family in the unlikely event of his death; buying it was buying the identity of a successful family man.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T02:51:34.619Z · LW(p) · GW(p)

Life insurance is purchased more for signaling than as a financial instrument.

I originally wrote "otherwise people won't buy insurance", then recognized the difference, and posted the phrasing "otherwise people shouldn't buy insurance". A lot of insurance really does have positive expected value.

comment by CronoDAS · 2010-01-20T02:21:27.568Z · LW(p) · GW(p)

The expected amount of money out of the deal is negative

That's what I meant to say.

If I were to try to buy cryonics with a life insurance policy, I'd probably run out of savings with which to pay the insurance premiums before I die of natural causes.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-01-20T02:38:44.551Z · LW(p) · GW(p)

Ah, OK. Assuming the insane premises that you keep stating, this conclusion makes sense.

comment by bgrah449 · 2010-01-20T01:25:55.639Z · LW(p) · GW(p)

The expected monetary value of life insurance to the insured is, if used as directed, always zero!

Also, I'm not sure if anyone has told you this before, but cognitive dissonance is supposed to be a private thing, like going to the bathroom or popping a zit.

Edit: Since my point is that the insured will have no gain, the expected monetary value of life insurance to the insured is always negative - thanks mattnewport.

Replies from: Kazuo_Thow, CronoDAS, wedrifid, Kaj_Sotala, mattnewport
comment by Kazuo_Thow · 2010-01-20T09:59:22.797Z · LW(p) · GW(p)

but cognitive dissonance is supposed to be a private thing, like going to the bathroom or popping a zit.

I see no compelling reason to care about another person's mundane, unavoidable bodily functions. But I can see a number of compelling reasons to care about another person's sanity.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T10:12:07.452Z · LW(p) · GW(p)

Mentally healthy, well-adjusted people cognitively dissonate privately. EDIT: When they can help it.

Replies from: loqi, wedrifid
comment by loqi · 2010-01-20T19:29:31.472Z · LW(p) · GW(p)

So if I suspect I'm mentally unhealthy or ill-adjusted, I should just keep it to myself, rather than communicating honestly about my situation with a group of folks on the internet and running the risk of... making bgrah449 feel uncomfortable?

Got it.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T20:52:59.684Z · LW(p) · GW(p)

What's the upside for you? The Internet coming back with a prescription of well-adjustment?

comment by wedrifid · 2010-01-20T11:08:54.872Z · LW(p) · GW(p)

Mentally healthy, well-adjusted people don't tend to freely admit negative things about themselves at all, cognitively dissonating or not. (With a few exceptions along the lines of demonstrating lower value to a significantly lower status other in order to promote comfort.)

comment by CronoDAS · 2010-01-20T01:33:25.409Z · LW(p) · GW(p)

The expected monetary value of life insurance to the insured is, if used as directed, always zero!

I don't understand this sentence. Are you saying that money is of no use to the dead? There's a very real sense in which this is not true: people do have preferences as to what happens to their money after they die. If they didn't, they wouldn't write wills.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T01:41:16.784Z · LW(p) · GW(p)

Money is of no use to the dead. But reliable guarantees about how wealth will be distributed after death are something the living value.

comment by wedrifid · 2010-01-20T01:33:24.245Z · LW(p) · GW(p)

Also, I'm not sure if anyone has told you this before, but cognitive dissonance is supposed to be a private thing, like going to the bathroom or popping a zit.

I don't see why and didn't want the imagery.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T01:43:17.090Z · LW(p) · GW(p)

Would the fox be happier with an audience?

Replies from: wedrifid
comment by wedrifid · 2010-01-20T01:48:59.246Z · LW(p) · GW(p)

Thanks for that link. I hadn't heard that one.

comment by Kaj_Sotala · 2010-01-20T11:01:08.590Z · LW(p) · GW(p)

Also, I'm not sure if anyone has told you this before, but cognitive dissonance is supposed to be a private thing

Umm, why? If you're experiencing cognitive dissonance, you should let others know of it, so they can help you consider the issue and hopefully resolve the cause of the dissonance.

Also, it's perfectly fine to show your irrationality here.

comment by mattnewport · 2010-01-20T01:34:23.016Z · LW(p) · GW(p)

Surely the expected monetary value is always negative (the insurance company has to make a profit)? The expected utility is presumably positive if the decision to purchase life insurance was rational.
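A toy numeric example (with all numbers made up) of how both statements can hold at once: the insurer prices the premium above the expected loss, so the expected dollars are negative, but with a concave (risk-averse) utility function the insured is still better off in expected utility.

```python
import math

# All numbers below are assumptions for illustration.
wealth = 100_000   # current wealth
loss = 80_000      # potential loss
p_loss = 0.01      # probability of the loss
premium = 1_000    # premium, priced above the $800 expected loss so the insurer profits

def utility(w):
    return math.log(w)  # concave utility: each extra dollar matters less when you have more

# Expected money: buying insurance is strictly worse.
ev_uninsured = -p_loss * loss   # -800.0
ev_insured = -premium           # -1000

# Expected utility: buying insurance comes out ahead.
eu_uninsured = (1 - p_loss) * utility(wealth) + p_loss * utility(wealth - loss)
eu_insured = utility(wealth - premium)
print(ev_uninsured, ev_insured)    # -800.0 -1000
print(eu_uninsured < eu_insured)   # True
```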

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T01:40:05.912Z · LW(p) · GW(p)

The insured won't be cashing any checks; his monetary gain is zero.

Replies from: mattnewport
comment by mattnewport · 2010-01-20T01:52:36.370Z · LW(p) · GW(p)

Well from that perspective the monetary value is even more negative - you pay out a premium but you are guaranteed never to personally receive the payout. The monetary value doesn't depend on you being alive to collect the payout though. The expected monetary value of insurance is always negative (absent insurance fraud) but the expected utility may be positive.

Replies from: bgrah449
comment by bgrah449 · 2010-01-20T02:04:09.149Z · LW(p) · GW(p)

Your first statement is right; I'm making a correction in the original comment. My point was that the dead can't spend money.

comment by clay · 2010-01-19T22:57:25.461Z · LW(p) · GW(p)

Was there video taken of this event? It seems like a normal video with a bunch of normal people who signed up for cryonics might do a lot of good.

comment by jasonmcdowell · 2010-01-19T21:14:20.017Z · LW(p) · GW(p)

I agree with the morality of signing up your children for cryonics, but something is tickling my mind that I am unsure of.

If they die as children and cryonics works, they may wake up in a very different situation and their parents may not have survived. This still seems much better than dying, and perhaps children would be better able to adapt to any future shock, but cryonics at this point entails the risk of a discontinuous leap into an alien future. Adults at least know what they are getting themselves into and make the choice for themselves.

At the present time I am not signed up for cryonics, but I intend to. I will influence my wife to sign up (and presumably our kids, once we have kids). I plan to suggest to my parents that they sign up for cryonics, but they probably won't. I'm not planning to try to push them, or particularly mount an effort to change their minds. At the moment I feel like they need to come to it themselves if they are going to be happy about it later.

Replies from: thomblake, Vladimir_Nesov
comment by thomblake · 2010-01-19T21:25:23.343Z · LW(p) · GW(p)

a discontinuous leap into an alien future

Compare: death.

Would you kill them to prevent it?

Replies from: jasonmcdowell
comment by jasonmcdowell · 2010-01-19T22:13:08.854Z · LW(p) · GW(p)

The morality of the issue is clear and intuitive to me: uncertainty is hugely preferable to death.

However, I also see a blind spot in my reasoning: I don't understand the implications of making that kind of decision for someone else, including someone (a child) who is incapable of making decisions for themselves yet. I'm not sure if this is significantly different than all the other decisions parents make for their children. Maybe I'll understand it better when I actually have kids - but right now I know I don't understand the implications of making big decisions for a non-consenting person.

Replies from: MichaelGR, Nick_Tarleton
comment by MichaelGR · 2010-01-19T22:34:03.497Z · LW(p) · GW(p)

I wonder if it's really so different from all the other decisions that parents take and that often end up saving kids' lives (and thus giving them some responsibility for exposing them to an uncertain future).

Would someone face the same moral dilemma with seatbelts? I don't think so, but what's the difference?

It is possible that a far future would be a bad place for that child that you saved, but it is also possible for the near future to be a bad place (from poverty all the way to surviving a nuclear holocaust and living through The Road), yet we seem to be fine making that choice to save them.

I'm not sure the difference is quite as big as we might first think. There is more uncertainty, but it goes both ways (could be much worse, but could also be much better), and maybe it cancels out.

comment by Nick_Tarleton · 2010-01-19T23:03:22.440Z · LW(p) · GW(p)

Choosing the status quo is still making a decision.

comment by Vladimir_Nesov · 2010-01-20T00:59:12.429Z · LW(p) · GW(p)

This still seems much better than dying, and perhaps children would be better able to adapt to any future shock, but cryonics at this point entails the risk of a discontinuous leap into an alien future.

There is no "but". If it's better, you should do it. If the "risk of a discontinuous leap into an alien future" is so serious, then you should admit that it's actually worse if the children survive, which I don't buy.

See also: Reversal test, Shut up and multiply.

Replies from: jasonmcdowell
comment by jasonmcdowell · 2010-01-20T22:01:02.111Z · LW(p) · GW(p)

The part that interests me is whether or not I have the right to make these decisions for my (future) child. I think I probably do, and in absence of knowing, I will assume I do.

However, control over the decision feels a little funny and I don't know exactly why. It has something to do with consent and something to do with my not caring to push my parents toward it.

Replies from: scotherns
comment by scotherns · 2010-01-21T14:42:19.222Z · LW(p) · GW(p)

As a parent you make a great many decisions for your children that affect their lives in ways great and small. This is not simply your right, but your duty. Cryonics is just one of the many choices you will have to make.

Not pushing your parents towards it is another issue, but have you even discussed the possibility of it with them? My parents were surprisingly positive of the idea when I discussed it with them, and are now actively researching it. Previously, they were not aware that it was even a serious option.

comment by Zack_M_Davis · 2010-01-28T19:35:55.904Z · LW(p) · GW(p)

Bah. Any FAI worthy of the name will reconstruct a plausible approximation of me from my notebooks and blog comments.

comment by quanticle · 2010-01-20T01:22:09.470Z · LW(p) · GW(p)

I'm seeing a disturbing amount of groupthink here. We're all assuming that cryonics is a good thing, and that the only thing in dispute is whether the amount of good that cryonics generates is worth the cost. However, given that no one who has been cryogenically frozen has yet been revived, how do we know that cryonics is a good thing at all? I mean, what if the freezing process somehow changed neurochemistry so that everyone who came back was a psychopath? Given that we don't have any evidence either way, why are we all jumping to the conclusion that cryonics is something that we'd all sign up for if only we had the means?

Replies from: ciphergoth, wedrifid, loqi, RobinZ, gwern, ata, bgrah449
comment by Paul Crowley (ciphergoth) · 2010-01-20T06:56:44.971Z · LW(p) · GW(p)

Never say "groupthink" unless you have better evidence than people agreeing.

Replies from: tpebop, tpebop
comment by tpebop · 2010-01-22T06:41:07.009Z · LW(p) · GW(p)

I'm only posting this to play devil's advocate, if not to stir up the debate a bit. I apologize for any spelling or grammatical errors. English isn't my first language.

To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink (1977).

(My interpretations may be flawed, feel free to point out any flaws in my logic)

> 1. Illusions of invulnerability creating excessive optimism and encouraging risk taking.

Cryonics = eternal life in the future, relatively high financial risk, relatively low chance of being revived. The risk is still worth it if you could possibly be alive again.

> 2. Rationalizing warnings that might challenge the group's assumptions.

Reanimation in the future might be expensive, reanimation might not be possible, Alcor may go bankrupt, consciousness may not be transferable, reanimation is not possible now.

> 3. Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions.

The diehards of the group seem to have no hesitation in calling another person out of their name if they simply do not agree with those who support cryonics.

> 4. Stereotyping those who are opposed to the group as weak, evil, biased, spiteful, disfigured, impotent, or stupid.

"If you don't sign your child up for cryonics you're a lousy parent."

> 5. Direct pressure to conform placed on any member who questions the group, couched in terms of "disloyalty".

Not so much pressure as people questioning those who aren't sold on cryonics just yet, or those who don't believe in it altogether.

> 6. Self-censorship of ideas that deviate from the apparent group consensus.

This obviously can't be proven; I'm assuming some have omitted statements from their replies to this article to avoid conflict.

> 7. Illusions of unanimity among group members; silence is viewed as agreement.

I'm not so sure there is an illusion of unanimity; it seems that everyone is in agreement that cryonics is a logical/rational choice. This may be an illusion, I don't know.

> 8. Mind guards — self-appointed members who shield the group from dissenting information.

Hello Eliezer.

I'd like to state that I have no intentions of attacking anyone discussing this topic. I'm only trying to stir up friendly debate.

Replies from: Jack, Dan_Moore
comment by Jack · 2010-02-15T22:47:37.932Z · LW(p) · GW(p)

Actually, devil's advocacy is probably the best way to prevent groupthink (outside of earnest dissent). So well done.

It also occurs to me that some people holding a belief as a result of groupthink is entirely consistent with the belief being true and even justified, which is an interesting feature that isn't always obvious. I think I represent a partial data point against groupthink in this case, because I have something of a revulsion against the aesthetics of cryonics, some of the social implications, and some of the arrogance I see in its promotion, but nonetheless conclude that it is probably a worthwhile gamble.

comment by Dan_Moore · 2010-02-18T23:41:39.778Z · LW(p) · GW(p)

Re: Groupthink symptom #1 - illusions of invulnerability or infallibility

The fact that the subject matter of cryonics is about an extended lifespan or second lifespan does not automatically confer this symptom of groupthink.

An example of groupthink often given is the decision process of the Bush Administration which led to the invasion of Iraq in 2003. Much of the information used to reach that decision was treated as a 'slam dunk' pre-invasion, but ultimately proved spurious or unverifiable.

comment by tpebop · 2010-01-22T06:34:29.722Z · LW(p) · GW(p)

I'd like to state that I'm neutral on the subject of cryonics; I'm only posting this to play devil's advocate, if not to stir up the debate a bit. I apologize for any spelling or grammatical errors. English isn't my first language.

To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink (1977). (My interpretations may be flawed, feel free to point out any flaws in my logic)

  1. Illusions of invulnerability creating excessive optimism and encouraging risk taking.

Cryonics = eternal life in the future, relatively high financial risk, relatively low chance of being revived. The risk is still worth it if you could possibly be alive again.

  2. Rationalizing warnings that might challenge the group's assumptions.

Reanimation in the future might be expensive, reanimation might not be possible, Alcor may go bankrupt, consciousness may not be transferable.

  3. Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions.

The diehards of the group seem to have no hesitation in calling another person out of their name if they simply do not agree with those who support cryonics.

  4. Stereotyping those who are opposed to the group as weak, evil, biased, spiteful, disfigured, impotent, or stupid.

"If you don't sign your child up for cryonics you're a lousy parent."

  5. Direct pressure to conform placed on any member who questions the group, couched in terms of "disloyalty".

Not so much pressure as people questioning those who aren't sold on cryonics just yet, or those who don't believe in it altogether.

  6. Self-censorship of ideas that deviate from the apparent group consensus.

This obviously can't be proven; I'm assuming some have omitted statements from their replies to this article to avoid conflict.

  7. Illusions of unanimity among group members; silence is viewed as agreement.

I'm not so sure there is an illusion of unanimity; it seems that everyone is in agreement that cryonics is a logical/rational choice. This may be an illusion, I don't know.

  8. Mind guards — self-appointed members who shield the group from dissenting information.

Hello Eliezer.

I'd like to state that I have no intentions of attacking anyone discussing this topic. I'm only trying to stir up friendly debate.

Replies from: pjeby
comment by pjeby · 2010-01-22T06:58:34.357Z · LW(p) · GW(p)

This appears to be a duplicate posting, and you should probably delete it.

comment by wedrifid · 2010-01-20T01:32:11.028Z · LW(p) · GW(p)

what if the freezing process somehow changed neurochemistry so that everyone who came back was a psychopath?

Or not.

Replies from: quanticle
comment by quanticle · 2010-01-20T03:12:18.390Z · LW(p) · GW(p)

Well, aren't you privileging the hypothesis that cryonics works? I mean, I look at Eliezer's argument above, and the unstated assumption is, "Cryonics works and has no ill side effects." Well, let's question that assumption. What if cryonics doesn't work? What if it works, but leaves you disabled? I know several people who have "living wills" - they'd rather be dead than disabled. Unless you're saying that your hypothetical thawing process will be nearly perfectly safe, I'd argue that there is a risk of disability, an outcome which may rank below death (depending on your individual value function, of course).

Given the above, would you say, "Anyone who doesn't buy cryonics for their children is a bad parent"? After all, aren't you imposing your value function vis-à-vis potential disability onto your children? Shouldn't we let them decide their own values regarding such a significant issue?

Replies from: wedrifid, Cyan, Nick_Tarleton
comment by wedrifid · 2010-01-20T03:22:31.021Z · LW(p) · GW(p)

Well, aren't you privileging the hypothesis that cryonics works?

Making people into psychopaths would be extremely difficult even if you were trying to do it. Cryonics working is a hypothesis that I would put as 'very slightly more likely than a desirable technological singularity'. It is worth the $300 a year because it is one of very few things that can actually save your life in the long term.

Unless you're saying that your hypothetical thawing process will be nearly perfectly safe, I'd argue that there is a risk of disability, an outcome which may rank below death (depending on your individual value function, of course).

I count all those scenarios as 'cryonics not working'.

comment by Cyan · 2010-01-20T03:25:44.260Z · LW(p) · GW(p)

I mean, I look at Eliezer's argument above, and the unstated assumption is, "Cryonics works and has no ill side effects."

That's not his assumption. The assumption is that there is a non-negligible chance that cryonics will work -- one chance in ten would be more than sufficient. Another assumption is that the opportunity to spend more time alive is far more desirable than death. It then follows that it's nuts not to sign up.

Replies from: quanticle, knb
comment by quanticle · 2010-01-20T03:35:41.490Z · LW(p) · GW(p)

Yeah, as wedrifid pointed out in a sibling post, I think Eliezer and I have different conceptions of what it means for cryonics to "work". I was defining "works" as having a thawing process that doesn't kill you, but has a risk of disability. Eliezer, I now realize, has a much more stringent definition of the term.

Now, one more question, if you will humor me. What sort of incentives can we use to ensure that we are not used as guinea pigs for an experimental thawing process? For example, our descendants may want us thawed as soon as possible, even when the thawing process may not have been made sufficiently safe by our own criteria. How can we set up the incentives so that our descendants don't thaw us using a procedure that we consider unnecessarily risky?

Replies from: knb, AngryParsley, Cyan, wedrifid
comment by knb · 2010-01-20T05:10:51.773Z · LW(p) · GW(p)
  1. Only a much wealthier, more technologically advanced society would unfreeze corpses. Less technologically advanced societies couldn't do it, and poorer societies wouldn't bother.

  2. Over time, wealth eventually causes the cultural changes we call "moral progress".

  3. Almost all bad scenarios lead to cryopreserved people never being revived. They either become "gray goo", are eaten by roving bands of cannibals, are converted into paperclips, etc.

So anyway, I think in most scenarios reanimation will be better than death.

Replies from: Theist
comment by Theist · 2010-01-20T23:18:35.856Z · LW(p) · GW(p)

Over time, wealth eventually causes the cultural changes we call "moral progress".

This seems a non-sequitur to me. There are a number of examples where wealth and moral progress are found together, but there are also examples where they are not. China and oil-rich Arab states come to mind.

Replies from: knb
comment by knb · 2010-01-21T04:25:16.057Z · LW(p) · GW(p)

Culture changes slowly, but economic growth can happen quickly. China is still quite poor, first of all, but it still seems that significant moral progress has occurred in China, and in only 30 years or so.

The wealthier Arab states are still pretty regressive, but we must consider how bad they used to be. For instance, as recently as the 1950s, 20% of the population of Saudi Arabia were slaves.

comment by AngryParsley · 2010-01-20T04:57:22.966Z · LW(p) · GW(p)

Alcor's patient care trust board is composed of people who are signed up for cryonics. A majority of members on the board must have a cryopreserved relative or significant other. They could try to use people they don't care about as guinea pigs, but there are also bylaws about ethically reviving people.

comment by Cyan · 2010-01-20T03:46:33.199Z · LW(p) · GW(p)

Dammit quanticle, I'm an engineer/biochemist/statistician, not an economist!

comment by wedrifid · 2010-01-20T04:28:36.736Z · LW(p) · GW(p)

How can we set up the incentives so that our descendants don't thaw us using a procedure that we consider unnecessarily risky?

I suppose we extend whichever incentives we use to make our descendants even bother with us at all. (Outside my field too, I am afraid. I'm more of a 'take direct action myself' kind of guy than a 'find some way to make people do stuff even when I am dead' kind of guy.)

comment by knb · 2010-01-20T04:50:39.707Z · LW(p) · GW(p)

Cool man
comment by Nick_Tarleton · 2010-01-20T03:14:42.810Z · LW(p) · GW(p)

Well, aren't you privileging the hypothesis that cryonics works?

We have actual object-level reasons to believe that.

What if it works, but leaves you disabled? I know several people who have "living wills" - they'd rather be dead than disabled. Unless you're saying that your hypothetical thawing process will be nearly perfectly safe

I think it would be, since a safe process appears to be possible and you can (and presumably would) just be left frozen until it was sufficiently developed.

Shouldn't we let them decide their own values regarding such a significant issue?

You can't not decide. Not signing them up is still deciding.

comment by loqi · 2010-01-20T01:44:40.619Z · LW(p) · GW(p)

Cryonics is a regular topic here and on OB. The conclusion that's being "jumped to" has been argued at length elsewhere. It appears you're mistaking inferential distance for groupthink.

comment by RobinZ · 2010-01-20T03:18:56.465Z · LW(p) · GW(p)

It doesn't look like a particularly strong consensus to me - the survey a while back had a sizeable minority of cryonics skeptics, and all of three people actually signed up. And, of course, all the argument in the comments to this post.

comment by gwern · 2010-01-20T01:41:32.648Z · LW(p) · GW(p)

This is one of the old standard objections; I won't spoonfeed you, but try looking through the pro-cryonics literature. (I have yet to think of a decent argument against cryonics which hasn't been at least discussed.)

comment by ata · 2010-01-20T07:19:21.439Z · LW(p) · GW(p)

I agree with ciphergoth (or perhaps I'm groupthinking with him/her :P). As for the part of your post that came after the first sentence: when we develop the technology to revive cryopreserved people, we will see if it has any recurring, statistically-significant undesirable effects on people's psychology. If it does, we'll stop reviving people until we get it sorted out.

The very small risk of accidentally turning a few people into psychopaths before we notice the pattern (I say the risk is small because we don't have any particular reasons to privilege that or any other non-null hypothesis) is, I think, worth the large potential benefits to the individuals and to society.

comment by bgrah449 · 2010-01-20T09:37:52.944Z · LW(p) · GW(p)

Not as drastic, but there are other negative possibilities.

comment by georgepennellmartin · 2010-11-13T23:50:01.494Z · LW(p) · GW(p)

Speaking as a childless teenager, I'm a cryonics atheist: I don't believe it will ever be possible to revive a deceased, frozen human being. The human mind is too complex and fragile. The only reason I would ever sign up for cryonics would be in a Pascal's wager sort of way, in which case I may as well accept Jesus Christ as my lord and saviour at the same time. It's all false hope.

Replies from: lsparrish, Furcas, JGWeissman
comment by lsparrish · 2010-11-14T00:16:36.491Z · LW(p) · GW(p)

How much time did you spend researching the question prior to concluding that it was false hope?

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T00:38:29.754Z · LW(p) · GW(p)

I have read a few articles, but mostly it was pure common sense. The death and freezing of your brain for probably over a century would be traumatic. Information would inevitably be lost.

Replies from: ata
comment by ata · 2010-11-14T01:00:40.565Z · LW(p) · GW(p)

The death and freezing of your brain for probably over a century would be traumatic. Information would inevitably be lost.

This is incorrect. Modern cryonics does not use "freezing", but rather vitrification at liquid nitrogen temperatures (below -124°C), such that chemical reactions almost completely stop. (See the table at the bottom of this page and the section about the claim that "cryonics freezes people" on the Cryonics myths page.)

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T01:25:28.297Z · LW(p) · GW(p)

That's very interesting; it's obvious that cryonics isn't just a pseudoscience. But I can't see how a brain's electrical impulses and ongoing chemical reactions could be preserved and restarted once they have ceased.

Replies from: Perplexed
comment by Perplexed · 2010-11-14T01:49:21.248Z · LW(p) · GW(p)

I can't see how a brain's electrical impulses and ongoing chemical reactions could be preserved and restarted once they have ceased.

I don't see why you think there would be a problem. Raising the temperature restarts chemical reactions. Shine a light in the eyes or tickle the feet - that is all it takes to start nerve pulses flowing if the metabolic support is working. Restarting the heart is going to be more difficult than restarting the brain. That is to say, not difficult at all.

Replies from: jimrandomh
comment by jimrandomh · 2010-11-14T01:55:25.739Z · LW(p) · GW(p)

This is slightly misleading, since the difficulty is not in restarting the reactions, but in repairing the damage sustained between death and preservation, repairing damage caused by the preservation process, and undoing the vitrification itself. These are hard problems, but they are well enough understood that we think we can predict which research paths will eventually lead to solutions, and what those solutions will look like in broad terms.

Replies from: lsparrish, Perplexed
comment by lsparrish · 2010-11-14T03:05:00.849Z · LW(p) · GW(p)

The original comment didn't say anything about structural damage or toxicity, just electrical activity and ongoing chemical reactions, which are non-issues.

comment by Perplexed · 2010-11-14T02:17:31.802Z · LW(p) · GW(p)

Right. I was assuming essentially no damage between death and preservation. Current practice is far from this ideal, as I understand it.

Replies from: lsparrish
comment by lsparrish · 2010-11-14T02:44:16.407Z · LW(p) · GW(p)

Yes, cryonics is a much more complex subject than many people give it credit for and many aspects get confused. Whenever someone mentions the brain's electrical activity being switched off as a sign of irreversible death I think they must be a newbie to the topic. Hypothermia patients frequently lose electrical activity and recover just fine. Structure is the key.

There is in reality a spectrum of cryonics. On the "soft" side would be a future invention (e.g. a very nontoxic cryoprotectant, or a means of rapid perfusion that lets you lower temperatures quickly enough) that permits zero chemical and structural damage, much like is currently only achievable in thin slices. On the "hard" side there are sub-ideal vitrifications and hard freezes.

There's a spectrum of probabilities of success. Zero damage would be about 100% likely to succeed, whereas hard freeze is probably less than 1%. (Perhaps the chance is higher than that, but the person would be almost completely amnesiac -- like a clone but with macroscopic features of the brain preserved.) Ideal conditions achievable today have a significantly higher probability (or percentage of memories preserved) than hard freezing. Unfortunately the unpopularity of cryonics means there's hardly any infrastructure for it, which means an ideal case is relatively unlikely to actually occur.

comment by Furcas · 2010-11-13T23:54:53.935Z · LW(p) · GW(p)

By becoming a Christian, you'd be dooming yourself with a variety of other possible gods. Signing up for cryonics doesn't have a chance of making you deader than you'll be otherwise.

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-13T23:59:12.287Z · LW(p) · GW(p)

It would make me a lot poorer while alive, though; that's money I could have used to better enjoy what little time I have on earth, or even to make life better for others.

Replies from: lsparrish
comment by lsparrish · 2010-11-14T00:20:14.537Z · LW(p) · GW(p)

Would you do it then if someone else was paying for it? Or if it was too cheap to be worth worrying about?

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T00:57:31.236Z · LW(p) · GW(p)

Probably, if only because it would make a great conversation starter

Replies from: lsparrish
comment by lsparrish · 2010-11-14T01:32:59.121Z · LW(p) · GW(p)

Yeah... My thought is that since it becomes dramatically cheaper at larger scales (and more likely to work with more research and interest in it) my chances are helped most by promoting interest in the idea.

Also I find the thought of saving billions of lives more intriguing than saving a few hundred nerds who happened to research it for themselves. Basically it seems like an under-appreciated topic given the possibility (even slight) of saving such huge numbers of lives.

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T01:54:52.686Z · LW(p) · GW(p)

The problem is that this would also decrease the chances of you ultimately being revived: why would they bring you back to life if they have billions to choose from? Also, the price of cryonics would probably skyrocket as fridge space ran out, etc., through the law of diminishing returns, meaning your corpse would be turfed out by new rich clients as soon as cryonics became popular. Think about it: what moral obligations would future generations have to revive you anyway? You'd be nothing but a resource sink with antiquated skills. No offence.

Replies from: lsparrish
comment by lsparrish · 2010-11-14T02:56:34.658Z · LW(p) · GW(p)

The source of plausible moral obligation becomes much more obvious when you stop referring to the patients as "corpses". Corpses are associated with irreversible death -- we don't traditionally have a duty to revive corpses, but that is only because doing so would be impossible by definition.

If there are billions in need of revival, more resources will go towards finding a way to do it in the first place. Also, revival mechanisms that can only pay for themselves with greater economies of scale can also be employed.

If I have to learn a new set of skills, language, customs, etc. to live again, that is a sacrifice I'm more than willing to make. If the people of the future are non-sociopathic humans, they will be willing to revive and reeducate me. However, I see no harm in setting up a trust that creates financial incentives as well, and covers any expenses. A few hundred years of compound interest can add up to a lot. The more people are involved in this, the more economies of scale (e.g. group schooling, revivee communities, specialists trained to deal with us) are possible and profitable.
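To make the compound-interest point concrete, here is a minimal sketch with assumed numbers; it ignores fees, taxes, legal limits on trust duration, and whether any institution survives that long.

```python
# Illustrative only: growth of a hypothetical revival trust. All inputs are assumptions.
principal = 10_000   # assumed initial funding, in today's dollars
real_rate = 0.02     # assumed real (inflation-adjusted) annual return
years = 300          # assumed time until revival

future_value = principal * (1 + real_rate) ** years
print(f"${future_value:,.0f}")  # roughly $3.8 million in today's dollars
```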

comment by JGWeissman · 2010-11-14T00:06:25.554Z · LW(p) · GW(p)

How surprised are you, using your theory of the fragility and complexity of the human mind, that human minds exist at all?

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T00:41:02.792Z · LW(p) · GW(p)

I wouldn't say surprising so much as amazing and awe-inspiring, definitely. That the human mind could be created without intent, by simple trial and error, is (ironically) miraculous.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-14T00:59:18.408Z · LW(p) · GW(p)

Then, since your theory calls something that did in fact happen "miraculous" (you would not have expected it to happen), you should consider that the complexity and fragility of the human mind may be more manageable than you previously thought.

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T01:37:26.956Z · LW(p) · GW(p)

Yes, you're right; however, it could also be less 'manageable' than I thought, and I don't believe science has reached the stage where we can yet know which it is. Perhaps I'm being a bit too pessimistic, however. In the meantime I'll try and keep an open mind.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-14T01:47:49.169Z · LW(p) · GW(p)

however, it could also be less 'manageable' than I thought

That is countering evidence with an appeal to ignorance. The point is that theories claiming the complexity and fragility are more manageable assign a higher prior probability to the event of human minds evolving, and thus, by Bayes' Theorem, having observed that human minds did in fact evolve, you should assign a higher probability to the theories that claim more manageability.
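(A toy numerical version of this update, with every probability invented purely for illustration: suppose a "manageable complexity" hypothesis assigns a higher likelihood to minds evolving than an "unmanageable complexity" hypothesis does.)

```python
# Toy Bayes update; every number here is an assumption for illustration.
prior_manageable = 0.5
prior_unmanageable = 0.5

# Assumed likelihoods of "human minds evolved" under each hypothesis:
p_evolved_given_manageable = 0.10
p_evolved_given_unmanageable = 0.01

evidence = (prior_manageable * p_evolved_given_manageable
            + prior_unmanageable * p_evolved_given_unmanageable)

posterior_manageable = prior_manageable * p_evolved_given_manageable / evidence
print(round(posterior_manageable, 3))  # ~0.909, up from the 0.5 prior
```

Observing that minds did evolve shifts probability toward the hypotheses under which that observation was less surprising.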

Replies from: georgepennellmartin
comment by georgepennellmartin · 2010-11-14T02:11:36.266Z · LW(p) · GW(p)

I would, but I keep remembering Eliezer Yudkowsky's anecdote about the professor who set his student the task of creating robotic vision; it seems to me that at every turn science has underestimated the challenge ahead. Ultimately I do believe the mind will be understood completely, just that it will be too late for us.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-14T02:37:16.900Z · LW(p) · GW(p)

Ultimately I do believe the mind will be understood completely, just that it will be too late for us.

The whole point of cryonics is to push back when it will be too late, by preserving all the information about you that someone with a general understanding of the human mind could use to reinstantiate your specific human mind. You don't need to understand the revival process at the time you are frozen.