Robin Hanson's Cryonics Hour

post by orthonormal · 2013-03-29T17:20:23.897Z · score: 29 (34 votes) · LW · GW · Legacy · 27 comments


I'm writing to recommend something awesome to anyone who's recently signed up for cryonics (and to the future self of anyone who's about to do so). Robin Hanson has a longstanding offer that anyone who's newly signed up for cryonics can have an hour's discussion with him on any topic, and I took him up on that last week.

I expected to have a fascinating and wide-ranging discussion on various facets of futurism. My expectations were exceeded. Even if you've been reading Overcoming Bias for a long time, talking with Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.

So if you've recently signed up, email Robin; and if you're intending to sign up, let this be one more incentive to quit procrastinating!

Relevant links:

The LessWrong Wiki article on cryonics is a good place to start if you have a bunch of questions about the topic.

If you want to argue about whether signing up for cryonics is a good idea, two good and relatively recent threads on that subject are under the posts on A survey of anti-cryonics writing and More Cryonics Probability Estimates.

And if you are cryocrastinating (you've decided that you should sign up for cryonics, but you haven't yet), here's a LW thread about taking the first step.

27 comments


comment by orthonormal · 2013-03-29T17:25:12.742Z · score: 14 (16 votes) · LW · GW

I don't think the following belonged in the OP, but it's worth saying:

Why was there such a difference for me between a conversation with RH and his more public outputs? My opinion is that he's very good at pointing out specific gaps in reasoning, which is extremely productive when it's your own reasoning. But when you're reading or watching Robin's exchange with someone else, it's all too tempting to think that he's picking nits and that the other person is just failing to respond in the correct way (i.e. the exact way that you'd respond, to which you don't see a counterargument from RH).

There are argumentative devices to circumvent this problem and make oneself more persuasive to an audience, but Robin doesn't seem to employ them as much as most debaters do.

comment by Douglas_Knight · 2013-03-29T19:18:59.013Z · score: 9 (9 votes) · LW · GW

My experience is exactly the opposite.

comment by Dorikka · 2013-03-29T22:45:14.912Z · score: 10 (10 votes) · LW · GW

Thanks for the data point. If you want to give some more detail, that might be helpful.

comment by Paul Crowley (ciphergoth) · 2013-03-29T18:09:06.610Z · score: 9 (9 votes) · LW · GW

This certainly accords with my experience. I didn't find his posts on FOOM persuasive, but after speaking to him in person I've shifted significantly towards the idea that his side of the debate is closer to the truth.

comment by moridinamael · 2013-03-29T18:53:58.543Z · score: 4 (6 votes) · LW · GW

Was it a matter of him explaining points he had made publicly in a different way, or did he provide an entirely new approach when talking with you?

Also, I know a few people who are devastatingly persuasive in a one-on-one conversation, regardless of whether they are right, who can't necessarily write or publicly debate as well as they speak in a private, relaxed context. Maybe Hanson is more charismatic in person and so you are giving him more credit?

comment by orthonormal · 2013-03-29T19:02:10.848Z · score: 7 (7 votes) · LW · GW

It's not the usual kind of charisma—I didn't feel a strong need to win his approval, relative to how much I do with other smart people. It's rather that he was extremely quick to understand my arguments and point out important aspects I hadn't considered, which makes it easier for me to consider that my argument might be flawed. So that's an aptitude, but it's one better correlated with good argument than the aptitude of charisma is.

comment by Paul Crowley (ciphergoth) · 2013-03-31T09:11:07.545Z · score: 4 (4 votes) · LW · GW

I don't think he's publicly made the argument he made with me - it feels like until I spoke to him, I couldn't see a way that his broad "outside view" predictions could translate into any specific outcome you might predict with an inside view. Now I can see how it might work.

comment by jsteinhardt · 2013-03-31T07:50:19.427Z · score: 2 (2 votes) · LW · GW

FWIW, while I've never talked to Robin in person, my experience with talking to Eliezer was pretty similar.

comment by mwengler · 2013-03-31T12:03:21.179Z · score: 2 (2 votes) · LW · GW

The very first LessWrong meetup I ever went to (in Orange County) was attended by Yvain, Anna Salamon, and Luke (before he worked for whatever the institute is called these days). It was significantly more awesome than reading them on their blogs.

comment by gwern · 2013-11-17T03:34:33.333Z · score: 2 (2 votes) · LW · GW

Well, while we're trading personal evaluations... when I met Yvain in person, I found him to be not quite as awesome as his writings. I suspect I come off the same way (although I have a good excuse).

comment by mwengler · 2013-11-18T19:01:34.862Z · score: 1 (1 votes) · LW · GW

I'd still really look forward to spending some time with you in a small group or alone. Maybe it is a kind-of-person thing but I have NEVER been disappointed meeting someone in person that I admire from printed word. At some level, I think I am at least as fascinated trying to understand what kind of person produces ideas like that, and the in person meetings are just chock full of information that I will never get no matter how much I read.

comment by Paul Crowley (ciphergoth) · 2013-03-29T18:07:50.019Z · score: 12 (12 votes) · LW · GW

I strongly second this. I recently had the chance to have a drink with Robin and Katja Grace in London, and it is a candidate for the most interesting conversation I have had in my entire life.

comment by [deleted] · 2013-03-29T17:26:20.700Z · score: 10 (10 votes) · LW · GW

Would you feel comfortable with sharing some of the things you talked about, and/or some of the topics you're now reconsidering? I think they might be pretty interesting.

comment by orthonormal · 2013-03-29T18:56:27.937Z · score: 12 (12 votes) · LW · GW

We also talked about the relative likelihood of burning the cosmic commons, what would be required for a stable singleton in the future, mangled worlds and the Born probabilities, cryonics trusts and other incentives for revival, and some particulars of his projections about an em-driven world; but the topic that I'm most reconsidering afterward is the best approach to working on existential risk.

Essentially, Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario—kind of like the way Bruce Schneier talks about better security via building resilient structures rather than concentrating on foiling specific "Hollywood" terror scenarios.

comment by John_Maxwell (John_Maxwell_IV) · 2013-03-30T05:49:02.144Z · score: 10 (10 votes) · LW · GW

Robin made the case that it's much more marginally useful now to work on analyzing the potentially long tail of x-risks than to focus on one very salient scenario

Seems worth its own post from him or you, IMO.

comment by Will_Newsome · 2013-03-30T04:52:06.238Z · score: 5 (5 votes) · LW · GW

(Kneejerk response: If only we could engineer some kind of intelligence that could analyze the potentially long tail of x-risk, or could prudentially decide how to make trade offs between that and other ways of reducing x-risk, or could prudentially reconsider all the considerations that went into focusing on x-risk in the first place instead of some other focus of moral significance, or...)

comment by orthonormal · 2013-03-30T15:48:57.701Z · score: 11 (11 votes) · LW · GW

Yes, one of the nice features of FAI is that success there helps immensely with all other x-risks. However, it's an open question whether creating FAI is possible before other x-risks become critical.

That is, the kneejerk response has the same template as saying, "if only we could engineer cold fusion, our other energy worries would be moot, so clearly we should devote most of the energy budget to cold fusion research". Some such arguments carry through on expected utility, while others don't; so I actually need to sit down and do my best reckoning.

comment by [deleted] · 2013-03-29T19:53:30.664Z · score: 3 (3 votes) · LW · GW

Am I right in thinking this is the answer given by Bostrom, Baum, and others? i.e. something like "Research a broad range and their inter-relationships rather than focusing on one (or engaging in policy advocacy)"

That viewpoint seems very different to MIRI's. I guess in practice there's less of a gap - Bostrom's writing an AI book, LW and MIRI people are interested in other xrisks. Nevertheless that's a fundamental difference between MIRI and FHI or CSER.

Edit: Also, thank you for sharing, that sounds fascinating - in particular I've never come across 'mangled worlds', how interesting.

comment by shminux · 2013-03-29T23:42:38.174Z · score: 6 (8 votes) · LW · GW

Having looked through the cryonics insurance options, I am having trouble justifying one over a regular life insurance policy.

I think that an accident resulting in death makes cryo insurance a loss both to the insured and the estate, as the odds of both brain remaining intact and timely freezing are quite bad. So all you have left is the altruistic feeling of financing a cryo organization. If that's what you are after, donate explicitly.

If you get too demented or brain-damaged to handle your affairs, getting frozen is probably not a good idea anyway, since most of your personality is gone by then, and the odds of recovery are almost non-existent.

If you have a life insurance and get terminally ill, there are several ways to draw cash against the policy's value while you are still alive, and fund your cryosuspension that way.

If you want to guard against greedy relatives, (Rudi Hoffman's example), then drawing cash from your life insurance policy while still alive seems like a way to do it.

In summary, I am hard pressed to find a probable situation where cryo insurance is preferable to a general whole life or universal life policy, unless you have no one but yourself to care about. What am I missing here?

comment by orthonormal · 2013-03-30T16:10:49.182Z · score: 8 (8 votes) · LW · GW

After the failure of the Cryonics Society of New York (CSNY, not to be confused with this or this), due in part to their acceptance of cases whose families promised to pay in installments but later reneged (causing them to run out of money for keeping their other patients cryopreserved), the remaining cryonics organizations require ironclad assurance of payment for suspension. That's really hard to arrange if you die without a few months' notice, even if you have an insurance policy, since your beneficiaries won't have the money to give to the organization for a few weeks or months after your death (for which time you'd be on dry ice, and undergoing a small but worrisome amount of degradation). Naming the organization as a beneficiary gives them 100% assurance that the suspension will be paid for, and without that they won't send out the suspension team.

(Someone correct me if I'm mistaken in this account.)

comment by JGWeissman · 2013-03-30T06:53:05.240Z · score: 2 (2 votes) · LW · GW

Cryonics insurance is regular life insurance. What makes it cryonics insurance is that the beneficiary is a cryonics organization. You can give your cryonics organization instructions about what to do with excess funds (or the entire amount, if you are not preserved), and about the conditions under which you should be preserved.

comment by shminux · 2013-03-30T07:24:13.687Z · score: 0 (0 votes) · LW · GW

What makes it cryonics insurance is that the beneficiary is a cryonics organization.

Right, but my question is, why bother?

comment by Kawoomba · 2013-03-30T07:48:16.326Z · score: 0 (0 votes) · LW · GW

What am I missing here?

Think about who benefits from such a precommitment.

(Doesn't imply it's a scam, it's allowed to provide valuable services and to try to maximize income at the same time.)

comment by CronoDAS · 2013-03-30T19:37:46.601Z · score: 4 (6 votes) · LW · GW

What I'd really like is a YouTube video of Robin Hanson singing a particular Gilbert and Sullivan song. ;)

Yet everybody says I'm such a disagreeable man
And I can't think why!

comment by shminux · 2013-03-29T23:17:07.790Z · score: 1 (5 votes) · LW · GW

Robin is an order of magnitude more stimulating/persuasive/informative than reading OB or even watching him debate someone else, and I'm now reconsidering my thinking on a number of topics as a result.

Would you let him out of the box, were he an AI?

comment by RomeoStevens · 2013-03-30T02:49:19.518Z · score: 6 (6 votes) · LW · GW

He's not?

comment by orthonormal · 2013-03-30T15:51:51.778Z · score: 3 (3 votes) · LW · GW

AI DESTROYED. (Sorry, Robin.)