Why I haven't signed up for cryonics

post by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-12T05:16:55.458Z · LW · GW · Legacy · 252 comments

Contents

  How I'm now on the fence about whether to sign up for cryonics
  Putting some numbers in that
    What's my final number?
  Fine-tuning and next steps
252 comments


How I'm now on the fence about whether to sign up for cryonics

I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid. 

My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a pleasure-vs-suffering standpoint for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on. 

(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)

I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can be, and are being, done by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive and thinking that I deserve to is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount Givewell thinks is reasonable to save a life–about $5,000. My revealed preference is that I obviously value my life more than this.)  

However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying. 

When queried, my brain tells me that it's doing an expected-value calculation and the expected value of cryonics to me is too low to justify the costs; it's unlikely to succeed, and the only reason some people have positive expected value for it is that they're multiplying that tiny number by the huge, huge number that they place on the value of their own lives. And my number doesn't feel big enough to outweigh those odds at that price. 

Putting some numbers in that

If my brain thinks this is a matter of expected-value calculations, I ought to do one. With actual numbers, even if they're made-up, and actual multiplication.

So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value. Through a variety of helpful thought experiments (how much would I pay to cure a fatal illness if I were the only person in the world with it and research wouldn't help anyone but me and I could otherwise donate the money to EA charities; does the awesomeness of 3 million dewormings outweigh the suckiness of my death; is my death more or less sucky than the destruction of a high-end MRI machine), I've converged on a subjective value for my life of about $1 million. Like, give or take a lot. 

Cryonics feels unlikely to work for me. I think the basic principle is sound, but if someone were to tell me that cryonics had been shown to work for a human, I would be surprised. That's not a number, though, so I took the final result of Steve Harris' calculations here (inspired by the Sagan-Drake equation). His optimistic number is a 0.15 chance of success, or 1 in 7; his pessimistic number is 0.0023, or less than 1/400. My brain thinks 15% is too high and 0.23% sounds reasonable, but I'll use his numbers for upper and lower bounds. 

I started out trying to calculate the expected cost by some convoluted method where I was going to estimate my expected chance of dying each year, repeatedly subtract it from one, and multiply by the amount I'd pay each year to calculate how much I could expect to pay in total. Benquo pointed out to me that calculations like this are usually done using perpetuities, or PV (present value) calculations, so I made one in Excel and plugged in some numbers, approximating the Alcor annual membership fee as $600. Assuming my own discount rate is somewhere between 2% and 5%, I ran two calculations with those numbers. For a 2% discount rate, the total expected, time-discounted cost would be $30,000; for a 5% discount rate, $12,000.

Excel also lets you do calculations on perpetuities that aren't actually perpetual (i.e. annuities), so I plugged in 62 years, the time by which I'll have a 50% chance of dying according to this actuarial table. It didn't change the final results much: $11,417 for a 5% discount rate and $21,000 for the 2% discount rate. 
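For anyone who wants to reproduce these numbers without Excel, here's a minimal sketch of the same present-value calculations in Python, assuming the $600/year fee approximation and the 2% and 5% discount rates used above; everything else is just the standard PV formulas:

```python
# Present value of ~$600/year in membership fees at the 2% and 5% discount rates.
# Perpetuity: PV = payment / rate.
# Finite annuity over n years: PV = payment * (1 - (1 + rate) ** -n) / rate.

def pv_perpetuity(payment, rate):
    return payment / rate

def pv_annuity(payment, rate, years):
    return payment * (1 - (1 + rate) ** -years) / rate

fee = 600  # approximate Alcor annual membership fee
for rate in (0.02, 0.05):
    print(f"{rate:.0%}: perpetuity ${pv_perpetuity(fee, rate):,.0f}, "
          f"62-year annuity ${pv_annuity(fee, rate, 62):,.0f}")

# 2%: perpetuity $30,000, 62-year annuity $21,212 (rounded above to $21,000)
# 5%: perpetuity $12,000, 62-year annuity $11,417
```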

That's not including the life insurance payout you need to pay for the actual freezing. So, life insurance premiums. Benquo's plan is five years of $2,200 a year and then nothing from then on, which apparently isn't uncommon among plans for young healthy people. I could probably get something as good or better; I'm younger. So, $11,000 for total life insurance premiums. If I went with permanent annual payments, I could do a perpetuity calculation instead. 

In short: around $40,000 total, rounding up.
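Spelling out that rollup, as a rough sketch: the $11,000 is the five years of $2,200 in premiums, and which fee figure you add depends on the discount rate and horizon chosen above.

```python
# Rough total: discounted membership fees plus life insurance premiums.
premiums = 5 * 2200                    # five years at $2,200/year = $11,000
fees_low = 11_417                      # 5% discount rate, 62-year annuity (from above)
fees_high = 30_000                     # 2% discount rate, perpetuity (from above)
print(premiums + fees_low, premiums + fees_high)   # 22417 41000 -- call it ~$40,000
```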

What's my final number?

There are two numbers I can output. When I started this article, one of them seemed like the obvious end product, so I calculated that. When I went back to finish this article days later, I walked through all the calculations again while writing the actual paragraphs, did what seemed obvious, ended up with a different number, and realized I'd calculated a different thing. So I'm not sure which one is right, although I suspect they're symmetrical. 

If I multiply the value of my life by the success chance of cryonics, I get a number that represents (I think) the monetary value of cryonics to me, given my factual beliefs and values. It would go up if the value of my life to me went up, or if the chances of cryonics succeeding went up. I can compare it directly to the actual cost of cryonics.

I take $1 million and plug in either 0.15 or 0.0023, and I get $150,000 as an upper bound and $2,300 as a lower bound, to compare to a total cost somewhere in the ballpark of $40,000.

If I take the price of cryonics and divide it by the chance of success (because if I sign up, I'm optimistically paying for 100 worlds of which I survive in 15, or pessimistically paying for 10,000 worlds of which I survive in 23), I get the total expected cost per life saved (mine), which I can compare to the figure I place on the value of my life. It goes down if the cost of cryonics goes down or the chances of success go up. 

I plug in my numbers and get a lower bound of $267,000 and an upper bound of $17 million. 
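As a sanity check, here are both framings as a short sketch, using the $1 million life value, the 0.15 and 0.0023 success probabilities, and the ~$40,000 cost from above:

```python
value_of_life = 1_000_000
total_cost = 40_000

for p in (0.15, 0.0023):
    ev = value_of_life * p          # framing 1: expected value of cryonics to me
    cost_per_life = total_cost / p  # framing 2: expected cost per life saved
    print(f"p={p}: EV ${ev:,.0f} vs cost ${total_cost:,}; "
          f"cost per life saved ${cost_per_life:,.0f} vs value ${value_of_life:,}")

# p=0.15:   EV $150,000 vs cost $40,000; cost per life saved $266,667
# p=0.0023: EV $2,300 vs cost $40,000; cost per life saved $17,391,304
```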

In both those cases, the optimistic success estimates make it seem worthwhile and the pessimistic success estimates don't, and my personal estimate of cryonics succeeding falls closer to pessimism. But it's close. It's a lot closer than I thought it would be. 

Updating somewhat in favour of the hypothesis that I'll end up signing up for cryonics. 

Fine-tuning and next steps

I could get better numbers for the value of my life to me. It's kind of squicky to think about, but that's a bad reason. I could ask other people about their numbers and compare what they're accomplishing in their lives to my own life. I could do more thought experiments to better acquaint my brain with how much value $1 million actually is, because scope insensitivity. I could do upper and lower bounds.

I could include the cost of organizations cheaper than Alcor as a lower bound; the info is all here, and the calculation wouldn't be too nasty, but I have work in 7 hours and need to get to bed. 

I could do my own version of the cryonics success equation, plugging in my own estimates. (Although I suspect this data is less informed and less valuable than what's already there).

I could ask what other people think. Hence, this post. 

 

252 comments

Comments sorted by top scores.

comment by advancedatheist · 2014-01-12T14:49:48.235Z · LW(p) · GW(p)

Cryonics has a more serious problem which I seldom see addressed. I've noticed a weird cognitive dissonance among cryonicists where they talk a good game about how much they believe in scientific progress, technological acceleration and so forth - yet they seem totally unconcerned about the fact that we just don't see this alleged trend happening in cryonics technology, despite its numerous inadequacies. In fact, Mike Darwin argues that the quality of cryopreservations has probably regressed since the 1980's.

In other words, attempting the cryogenic preservation of the human brain in a way which makes sense to neuroscientists, which should become the real focus of the cryonics movement, has a set of solvable, or at least describable, problems which current techniques could go a long way towards solving without having to invoke speculative future technologies or friendly AI's. Yet these problems have gone unsolved for decades, and not for the lack of financial resources. Just look at some wealthy cryonicists' plans to waste $100 million or more building that ridiculous Timeship (a.k.a. the Saulsoleum) in Comfort, Texas.

What brought about this situation? I've made myself unpopular by suggesting that we can blame cryonics' association with transhumanism, and especially with the now discredited capital-N Nanotechnology cultism Eric Drexler created in the 1980's. Transhumanists and their precursors have a history of publishing nonsensical predictions about how we'll "become immortal" by arbitrary dates within the life expectancies of the transhumanists who make these forecasts. (James D. Miller does this in his Singularity Rising book. I leave articulating the logical problem with this claim as an exercise to the reader). Then one morning we read in our email that one of these transhumanists has died according to actuarial expectations, and possibly went into cryo, like FM-2030; or simply died in the ordinary way, like the Extropian Robert Bradbury.

In other words, transhumanism promotes a way of thinking which tends to make transhumanists spectators of, instead of active participants in, creating the sort of future they want to see. And cryonics has become a casualty of this screwed up world view, when it didn't have to turn out that way. Why exert yourself to improve cryonics' scientific credibility - again, in ways which neuroscientists would have to take seriously - when you believe that friendly AI's, Drexler's genie-like nanomachines and the technological singularity will solve your problems in the next 20-30 years? And as a bonus, this wonderful world in 2045 or so will also revive almost all the cryonauts, no matter how badly damaged their brains.

Well, I don't consider this a feasible "business plan" for my survival by cryotransport. And I know some other cryonicists who feel similarly. Cryonics needs some serious rebooting, and I've started to give some thought to how I can get involved in the effort once I can find the people who look like they can make a go of it.

Replies from: James_Miller, Swimmer963, Vaniver, jpaulson, None
comment by James_Miller · 2014-01-12T23:42:48.816Z · LW(p) · GW(p)

James D. Miller does this in his Singularity Rising book. I leave articulating the logical problem with this claim as an exercise to the reader.

I would be grateful if you would tell me what the logical problem is.

Replies from: Icehawk78
comment by Icehawk78 · 2014-01-14T18:31:48.777Z · LW(p) · GW(p)

Presumably, the implication is that these predictions are not based on facts, but had their bottom line written first, and then everything else added later.

[I make no endorsement in support or rejection of this being a valid conclusion, having given it very little personal thought, but this being the issue that advancedatheist was implying seems fairly obvious to me.]

Replies from: James_Miller
comment by James_Miller · 2014-01-14T18:46:15.014Z · LW(p) · GW(p)

Thanks; if this is true, I request that advancedatheist explain why he thinks I did this.

Replies from: Icehawk78
comment by Icehawk78 · 2014-01-16T14:56:02.889Z · LW(p) · GW(p)

I can't speak on behalf of advancedatheist, but others whom I've heard make similar statements generally seem to base them on a kind of factor analysis: namely, that when you're evaluating a statement by a self-proclaimed transhumanist predicting the future development of some technology that currently does not exist, the factor which best predicts the predicted date for that technology is the current age of the predictor.

As I've not read much transhumanist writing, I have no real way to evaluate whether this is an accurate analysis, or simply cherry-picking of the most egregious/popularly published examples (I frequently see Kurzweil and... mostly just Kurzweil, really, popping up when I've heard this argument before).

[As an aside, I just now, after finishing this comment, made the connection that you're the author that he cited as the example, rather than just a random commenter, so I'd assume you're much more familiar with the topic at hand than me.]

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-13T01:56:57.008Z · LW(p) · GW(p)

I've noticed a weird cognitive dissonance among cryonicists where they talk a good game about how much they believe in scientific progress, technological acceleration and so forth - yet they seem totally unconcerned about the fact that we just don't see this alleged trend happening in cryonics technology, despite its numerous inadequacies.

The problem of people compartmentalizing between what they think is valuable and what they ought to be working on is pretty universal. That being said, it does make cryonics less likely to succeed, and thus worth less; it's just a failure mode that might be hard to solve.

comment by Vaniver · 2014-01-12T19:57:49.619Z · LW(p) · GW(p)

In other words, transhumanism promotes a way of thinking which tends to make transhumanists spectators of, instead of active participants in, creating the sort of future they want to see.

I believe I've seen Mike Darwin and others specifically point to Eliezer as an example of a cryonics proponent who is increasing the number and ratio of spectator cryonauts, rather than active cryonauts.

Replies from: tanagrabeast, JoshuaZ
comment by tanagrabeast · 2014-01-12T20:59:16.817Z · LW(p) · GW(p)

As a counterpoint, let me offer my own experience rediscovering cryonics through Eliezer.

Originally, I hadn't seen the point. Like most people, I assumed cryonauts dreamed that one day someone would simply thaw them out, cure whatever killed them, and restart their heart with shock paddles or something. Even the most rudimentary understanding of or experience with biology and freezing temperatures made this idea patently absurd.

It wasn't until I discovered Eliezer's writings circa 2001 or so that I was able to see connections between high shock-level concepts like uploading, nanotech, and superintelligence. I reasoned that a successful outcome of cryonics is not likely to come through direct biological revival, but rather through atomically precise scanning, super-powerful computational reconstruction, and reinstantiation as an upload or in a replacement body.

The upshot of this reasoning is that for cryonics to have any chance of success, a future must be assured in which these technologies would be safely brought to bear on such problems. I continue to have trouble imagining such a future existing if the friendly AI problem is not solved before it is too late. As friendly AI seems unlikely to be solved without careful, deliberate research (which very few people are doing), investing in cryonics without also investing in friendly AI research feels pointless.

In those early years, I could afford to make donations to SIAI (now MIRI), but could not afford a cryonics plan, and certainly could not afford both. As I saw it, I was young. I could afford to wait on the cryonics, but would have the most impact on the future by donating to SIAI immediately. So I did.

That's the effect Eliezer's cryonics activism had on me.

comment by JoshuaZ · 2015-01-20T00:50:25.396Z · LW(p) · GW(p)

I believe I've seen Mike Darwin and others specifically point to Eliezer as an example of a cryonics proponent who is increasing the number and ratio of spectator cryonauts, rather than active cryonauts.

Which should be fine; an increase in spectator cryonauts is fine as long as it isn't stealing from the pool of active cryonauts. Since in this case it is making people who wouldn't have anything to do with cryonics be involved, it is still a good thing.

comment by Jonathan Paulson (jpaulson) · 2014-01-16T07:29:20.021Z · LW(p) · GW(p)

No one is working on cryonics because there's no money/interest because no one is signed up for cryonics. Probably the "easiest" way to solve this problem is to convince the general public that cryonics is a good idea. Then someone will care about making it better.

Some rich patron funding it all sounds good, but I can't think of a recent example where one person funded a significant R&D advance in any field.

Replies from: taryneast
comment by taryneast · 2014-02-02T06:42:44.019Z · LW(p) · GW(p)

"but I can't think of a recent example where one person funded a significant R&D advance in any field."

Christopher Reeve funds research into curing spinal cord injury. Terry Pratchett funds research into Alzheimer's. I'm sure there are others.

Replies from: jpaulson
comment by Jonathan Paulson (jpaulson) · 2014-02-02T09:01:36.162Z · LW(p) · GW(p)

Pratchett's donation appears to account for 1.5 months of the British funding towards Alzheimer's (numbers from http://web.archive.org/web/20080415210729/http://www.alzheimers-research.org.uk/news/article.php?type=News&archive=0&id=205, math from me). Which is great and all, but public funding is way better. So I stand by my claim.

Replies from: taryneast
comment by taryneast · 2014-02-08T22:14:35.854Z · LW(p) · GW(p)

Ok, I stand corrected re: Pratchett. How did you come by the numbers? And can you research Reeve's impact too?

Until then, you've still "heard of one recent example" :)

comment by [deleted] · 2014-01-14T18:07:44.190Z · LW(p) · GW(p)

In other words, transhumanism promotes a way of thinking which tends to make transhumanists spectators of, instead of active participants in, creating the sort of future they want to see. And cryonics has become a casualty of this screwed up world view, when it didn't have to turn out that way. Why exert yourself to improve cryonics' scientific credibility - again, in ways which neuroscientists would have to take seriously - when you believe that friendly AI's, Drexler's genie-like nanomachines and the technological singularity will solve your problems in the next 20-30 years? And as a bonus, this wonderful world in 2045 or so will also revive almost all the cryonauts, no matter how badly damaged their brains.

applause. If there actually existed a cryopreservation technique that had been proven to really work in animal models - or better yet in human volunteers! - I would go ahead and sign up. But it doesn't exist, and instead of telling me who's working on making it exist, people tell me about the chances of successful revival using existing techniques.

I could say the same thing to the FAI effort. Actually, no, I am saying the same thing. Everyone seems to believe that too few people are committed to FAI research, but very few step up to actually volunteer their own efforts, even on a part-time basis, despite much of it still being in the realm of pure mathematics or ethics where you need little more than a good brain, some paper, pens, and lots of spare time to make a possible contribution.

Nu? If everyone has a problem and no-one is doing anything about it... why?

comment by Kaj_Sotala · 2014-01-12T06:27:05.195Z · LW(p) · GW(p)

It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there's no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.

Of course, this is not a new argument, and there are a few standard responses to it. The first one is that I don't actually value my life equally to that of everyone else's life, and that it's inconsistent to appeal to that when I don't appeal to it in my life in general. And it's certainly true that I do actually value my own life more than I value the life of a random stranger, but I do that because I'm human and can't avoid it, not because my values would endorse that as a maximally broad rule. If I get a chance to actually act in accordance to my preferred values and behave more altruistically than normal, I'll take it.

The other standard argument is that cryonics doesn't need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it requires that I'm interested enough in cryonics that I get enough fuzzy points from buying cryonics to make up whatever I lose in exchange. And it feels like once you take the leisure budget route, you're implicitly admitting that this is about purchasing fuzzies, not utilons, which makes it a little odd to apply to all those elaborate calculations which are often made with a strong tone of moral obligation. If one is going to be a utilitarian and use the strong tone of moral obligation, one doesn't get to use it to make the argument that one should invest a lot of money on saving just a single person, and with highly uncertain odds at that.

By going with the leisure budget argument, one is essentially admitting that cryonics isn't about altruism, it's about yourself. And of course, there is nothing wrong with that, since none of us is a 100% complete altruist who cares nothing about themselves, nor should we even try to idealize that kind of a person. And I'm not saying that there's anything wrong with signing up for cryonics - everyone gets to use their fuzzies budget the way they prefer, and if cryonics gives you the most fuzzies, cool. But if one doesn't get major fuzzies out of cryo, then that ought to be considered just as reasonable as well.

Replies from: ChrisHallquist, None, Swimmer963, poiuyt, Richard_Kennaway, Wes_W, Kawoomba, hyporational, passive_fist, lsparrish, lsparrish
comment by ChrisHallquist · 2014-01-12T08:36:55.540Z · LW(p) · GW(p)

I've had thoughts along similar lines. But it seems like there's a "be consistent about your selfishness" principle at work here. In particular, if...

  • ...you are generally willing to spend $X / month for something that has a significant chance of bringing you a very large benefit, like saving your life...
  • ...where $X /month is the cost of being signed up for cryonics (organization membership + life insurance)...
  • ... and you think cryonics has a significant chance of working...

It seems kind of inconsistent to not be signed up for cryonics.

(Caveat: not sure I can make consistent sense of my preferences involving far-future versions of "me".)

Replies from: RobbBB, Kaj_Sotala, None
comment by Rob Bensinger (RobbBB) · 2014-01-12T10:44:50.694Z · LW(p) · GW(p)

Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer '2 + 2 = 5' on a test or sometimes giving '2 + 2 = 5' and other times '2 + 2 = 4', the latter is probably preferable. Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is is mostly only important to the extent it serves the EA goal.

If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it's certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T12:22:23.098Z · LW(p) · GW(p)

Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is is mostly only important to the extent it serves the EA goal.

I endorse this summary.

comment by Kaj_Sotala · 2014-01-12T11:56:14.817Z · LW(p) · GW(p)

While I don't think that there's anything wrong with preferring to be consistent about one's selfishness, I think it's just that: a preference.

The common argument seems to be that you should be consistent about your preferences because that way you'll maximize your expected utility. But that's tautological: expected utility maximization only makes sense if you have preferences that obey the von Neumann-Morgenstern axioms, and you furthermore have a meta-preference for maximizing the satisfaction of your preferences in the sense defined by the math of the axioms. (I've written a partial post about this, which I can try to finish if people are interested.)

For some cases, I do have such meta-preferences: I am interested in the maximization of my altruistic preferences. But I'm not that interested in the maximization of my other preferences. Another way of saying this would be that it is the altruistic faction in my brain which controls the verbal/explicit long-term planning and tends to have goals that would be ordinarily termed as "preferences", while the egoist faction is more motivated by just doing whatever feels good at the moment and isn't that interested in the long-term consequences.

Replies from: Alejandro1
comment by Alejandro1 · 2014-01-12T17:56:39.963Z · LW(p) · GW(p)

Another way of putting this: If you divide the things you do between "selfish" and "altruistic" things, then it seems to make sense to sign up for cryonics as an efficient part of the "selfish" component. But this division does not carve at the joints, and it is more realistic to the way the brain works to slice the things you do between "Near mode decisions" and "Far mode decisions". Then effective altruism wins over cryonics under Far considerations, and neither is on the radar under Near ones.

Replies from: James_Miller
comment by James_Miller · 2014-01-12T20:55:00.084Z · LW(p) · GW(p)

A huge number of people save money for a retirement that won't start for over a decade. For them, both retirement planning and cryonics fall under the selfish, far mode.

Replies from: Alejandro1
comment by Alejandro1 · 2014-01-12T21:41:54.118Z · LW(p) · GW(p)

That is true. On the other hand, saving for retirement is a common or even default thing to do in our society. If it wasn't, then I suspect many of those who currently do it wouldn't do it for similar reasons to those why they don't sign up for cryonics.

Replies from: Jiro
comment by Jiro · 2014-01-13T00:34:13.845Z · LW(p) · GW(p)

I suspect most people's reasons for not signing up for cryonics amount to "I don't think it has a big enough chance of working and paying money for a small chance of working amounts to Pascal's Mugging." I don't see how that would apply to retirement--would people in such a society seriously think they have only a very small chance of surviving until retirement age?

comment by [deleted] · 2014-01-14T17:43:21.170Z · LW(p) · GW(p)

... and you think cryonics has a significant chance of working...

0.23% is not a significant chance.

comment by [deleted] · 2014-01-14T17:39:42.048Z · LW(p) · GW(p)

(Disclaimer: I absolutely promise that I am not evil.)

The first one is that I don't actually value my life equally to that of everyone else's life, and that it's inconsistent to appeal to that when I don't appeal to it in my life in general.

Question: why the hell not? My brain processed this kind of question for the first time around fourth grade, when wanting special privileges to go on a field trip with the other kids despite having gotten in trouble. The answer I came up with then is the one I still use now: "why me? Because of Kant's Categorical Imperative" (that is, I didn't want to live in a world where nobody went on the field trip, therefore I should get to go on it -- though this wasn't exactly clear thinking regarding the problem I really had at the time!). I would not want to live in a world where everyone kept their own and everyone else's lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!

I also do give to charity, do pay my taxes, and do support state-run social-welfare programs. So I'm not advocating total selfishness. I'm just proposing a heuristic: before advocating a certain level of altruism, check whether you're ok with that level of altruism becoming a Categorical Imperative, such that the Altruism Fairy brainwashes everyone into that level precisely.

In which case, yes, one should value one's own life over charity levels. After all, it's exactly what the charity recipients will do!

(Again, disclaimer: I swear I'm not evil.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-15T15:58:06.066Z · LW(p) · GW(p)

I would not want to live in a world where everyone kept their own and everyone else's lifestyle to an absolute minimum in order to act with maximal altruism. Quite to the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!

I think that the argument you're going for here (though I'm not entirely sure, so do correct me if I'm misinterpreting you) is "if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic".

And I agree! I think every altruist should take care of themselves first - for various reasons, including the one you mentioned, and also the fact that miserable people aren't usually very effective at helping others, and because you can inspire more people to become altruistic if they see that it's possible to have an awesome time while being an altruist.

But of course, "I should invest in myself because having an awesome life lets me help others more effectively" is still completely compatible with the claim of "I shouldn't place more intrinsic value on others than on myself". It just means you're not being short-sighted about it.

Replies from: None, TheOtherDave
comment by [deleted] · 2014-01-15T18:02:08.478Z · LW(p) · GW(p)

I think that the argument you're going for here (though I'm not entirely sure, so do correct me if I'm misinterpreting you) is "if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, and thus a dedication to full altruism that makes you miserable is counterproductive to being altruistic".

More like, "If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn't follow it in the first place."

Or, put simply, "Your supposed all-altruism is self-contradictory in the limit." Hence my having to put a disclaimer saying I'm not evil, since that's one of the most evil-villain-y statements I've ever made.

Of course, there are complications. For one thing, most people don't have the self-destructive messiah complex necessary for total altruism, so you can't apply first-level superrationality (ie: the Categorical Imperative) as including everyone. What I do endorse doing is acting with a high-enough level of altruism to make up for the people who don't act with any altruism while also engaging in some delta of actual non-superrational altruism.

How to figure out what level of altruistic action that implies, I have no idea. But I think it's better to be honest about the logically necessary level of selfishness than to pretend you're being totally altruistic but rationalize reasons to take care of yourself anyway.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-15T18:30:47.985Z · LW(p) · GW(p)

"If everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable, therefore total altruism is an incoherent value insofar as you expect anyone (including yourself) to ever actually follow it to its logical conclusion, therefore you shouldn't follow it in the first place."

Sorry, I don't follow. If the logical result of accepting full misery to oneself would be everyone being miserable, why wouldn't the altruists just reason this out and not accept full misery to themselves? "Valuing everyone the same as yourself" doesn't mean you'd have to let others treat you any way they like, it just means you'd in principle be ready for it, if it was necessary.

(I think we're just debating semantics rather than disagreeing now, do you agree?)

Replies from: None
comment by [deleted] · 2014-01-15T18:37:41.255Z · LW(p) · GW(p)

I think we have slightly different values, but are coming to identical practical conclusions, so we're agreeing violently.

EDIT: Besides, I totally get warm fuzzies from being nice to people, so it's not like I don't have a "selfish" motivation towards a higher level of altruism, anyway. SWEAR I'M NOT EVIL.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-16T06:49:53.274Z · LW(p) · GW(p)

You said you'd prefer everyone to live awesome lives, I'm not sure how that could be construed as evil. :)

Replies from: None
comment by [deleted] · 2014-01-16T07:38:33.264Z · LW(p) · GW(p)

Serious answer: Even if I don't endorse it, I do feel a pang of guilt/envy/low-status at being less than 100% a self-impoverishing Effective Altruist, which has been coming out as an urge to declare myself not-evil, even by comparison.

Joke answer: eyes flash white, sips tea. SOON.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-16T08:45:40.019Z · LW(p) · GW(p)

Okay, in that case you should stop feeling those negative emotions right now. :) Nobody here is a 100% self-impoverishing EA, and we ended up agreeing that it wouldn't even be a useful goal to have, so go indulge yourself in something not-at-all-useful-nor-altruistic and do feel good about it. :)

comment by TheOtherDave · 2014-01-15T16:19:11.356Z · LW(p) · GW(p)

"if everyone decided to dedicate their lives to altruism while accepting full misery to themselves, then everyone would be miserable,"

How confident of this are we?

I mean, there are many tasks which can lead to my happiness. If I perform a large subset of those tasks for my own benefit, they lead to a certain happiness-level for me... call that H1. If I perform a small subset of those tasks for everyone's benefit, they lead to a different happiness-level, H2, for everyone including me. H2 is, of course, much lower than H1... in fact, H2 is indistinguishable from zero, really, unless I'm some kind of superstar. (I'm not aggregating across people, here, I'm just measuring how happy I am personally.)

So far, so good.

But if everyone else is also performing a small subset of those tasks for everyone's benefit, then my happiness is N*H2. H2 is negligible, but N is large. Is (N*H2) > H1?

I really have no idea. On the face of it, it seems implausible. On the other hand, comparative advantage is a powerful force. We've discovered that when it comes to producing goods and services, for example, having one person performing a single task for everyone does much better than having everyone do everything for themselves.

Perhaps the same is true for producing happiness?

Which is not necessarily an argument for altruism in the real world, but in this hypothetical world where everyone acts with maximal altruism, maybe the end result is everyone is having a much more awesome life... they're simply having it thanks to the efforts of a huge community, rather than entirely due to their own efforts.

Then again, that sounds like a pretty good description of the real world I live in, also.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-12T11:16:26.770Z · LW(p) · GW(p)

It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world.

I think this is why it feels squicky trying to assign a monetary value to my life; part of me thinks it's selfish to assign any more value to my life than Givewell's stated cost to save a stranger's life ($1700-ish??) But I know I value it more than that. I wouldn't risk my life for a paycheck.

Replies from: None, hyporational, James_Miller, MugaSofer
comment by [deleted] · 2014-01-12T21:04:19.179Z · LW(p) · GW(p)

I wouldn't risk my life for a paycheck.

Do you drive to work?

Replies from: Swimmer963, None
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-13T22:43:24.009Z · LW(p) · GW(p)

I bike, which might be worse but also might be better; it depends on how much the added lifespan from physical fitness trades off against the risk of an accident. And the risk is very likely less than 1/1000, given the years that I've been biking accident-free, so there's a multiplication there.

Replies from: Lumifer, Nornagest
comment by Lumifer · 2014-01-14T17:55:35.598Z · LW(p) · GW(p)

I bike, which might be worse but also might be better; depends how much the added lifespan from physical fitness trades off against the risk of an accident.

I rather suspect it depends primarily on where you bike. Biking through streets of Manhattan has different risk than biking on rural Wyoming roads.

Replies from: None
comment by [deleted] · 2014-01-16T22:30:33.349Z · LW(p) · GW(p)

Driving under the same conditions has similar risk disparity.

Replies from: Lumifer
comment by Lumifer · 2014-01-17T01:31:43.250Z · LW(p) · GW(p)

Driving under the same conditions has similar risk disparity.

I rather doubt that -- do you have data?

comment by Nornagest · 2014-01-13T22:46:23.239Z · LW(p) · GW(p)

I seem to remember the answer being that cycling is more dangerous per mile than driving, but that the increase in physical fitness more than compensates in all-cause mortality terms. The first paper I found seems to point to the same conclusion.

I don't know how that would be adjusted in someone that already has fitness habits. It probably also depends on how well developed the cycling infrastructure in your town is, but I've never seen any actual data on that either.

Replies from: Lethalmud
comment by Lethalmud · 2014-01-14T11:42:58.228Z · LW(p) · GW(p)

In my experience bicycling is much safer. I have been cycling more or less every day since I was at least 8, and have never been in a life-threatening accident. However, while traveling by car, I have been in 2 or 3 potentially life-threatening crashes. But this will be very dependent on location, culture, and personal variables.

comment by [deleted] · 2014-01-14T17:41:29.916Z · LW(p) · GW(p)

Do you know of a safer way to commute that lets you keep the same range of possible jobs?

comment by hyporational · 2014-01-12T20:36:01.584Z · LW(p) · GW(p)

If you got a lethal disease with a very expensive treatment, and you could afford it, would you refuse the treatment? What would the threshold price be? Does this idea feel as squicky as spending on cryonics?

Replies from: None
comment by [deleted] · 2014-01-14T17:42:47.968Z · LW(p) · GW(p)

Depends: has the treatment been proven to work before?

(Yes, I've heard the probability calculations. I don't make medical decisions based on plausibility figures when it has simply never been seen to work before, even in animal models.)

Replies from: Vulture
comment by Vulture · 2014-02-05T00:46:44.768Z · LW(p) · GW(p)

Part of shutting up and multiplying is multiplying through the probability of a payoff with the value of the payoff, and then treating it as a guaranteed gain of that much utility. This is a basic property of rational utility functions.

(I think. People who know what they're talking about, feel free to correct me)

Replies from: None
comment by [deleted] · 2014-02-05T11:56:45.323Z · LW(p) · GW(p)

You are correct regarding expected-utility calculations, but I make an epistemic separation between plausibilities and probabilities. Plausible means something could happen without contradicting the other things I know about reality. Probable means there is actually evidence that something will happen. Expected value deals in probabilities, not plausibilities.

Now, given that cryonics has not been seen to work on, say, rats, I don't see why I should expect it to already be working on humans. I am willing to reevaluate based on any evidence someone can present to me.

Of course, then there's the question of what happens on the other side, so to speak, of who is restoring your preserved self and what they're doing with you. Generally, every answer I've heard to that question made my skin crawl.

comment by James_Miller · 2014-01-12T21:02:34.932Z · LW(p) · GW(p)

I wouldn't risk my life for a paycheck.

I bet you would. Lots of jobs have components (such as extra stress, less physical activity, or living in a dangerous or dirty city) that reduce life expectancy. Unless you pick the job which maximizes your life span, you would effectively be risking your life for a paycheck. Tradeoffs are impossible to escape, even if you don't explicitly think about them.

Replies from: Wes_W
comment by Wes_W · 2014-01-12T21:27:23.184Z · LW(p) · GW(p)

In context, it seems uncharitable to read "risk my life" to include any risk small enough that taking it would still be consistent with valuing one's own life far above $1700.

comment by MugaSofer · 2014-01-12T19:37:51.928Z · LW(p) · GW(p)

Remember, your life has instrumental value others don't; if you risk your life for a paycheck, you're risking all future paychecks as well as your own life-value. The same applies to stressing yourself out obsessively working multiple jobs, robbing banks, selling your redundant organs ... even simply attempting to spend all your money on charity and the cheapest of foods tends to be a fairly bad suggestion for the average human (although if you think you can pull it off, great!)

comment by poiuyt · 2014-01-14T21:23:05.823Z · LW(p) · GW(p)

The other standard argument is that cryonics doesn't need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it requires that I'm interested enough in cryonics that I get enough fuzzy points from buying cryonics to make up whatever I lose in exchange. And it feels like once you take the leisure budget route, you're implicitly admitting that this is about purchasing fuzzies, not utilons, which makes it a little odd to apply to all those elaborate calculations which are often made with a strong tone of moral obligation. If one is going to be a utilitarian and use the strong tone of moral obligation, one doesn't get to use it to make the argument that one should invest a lot of money on saving just a single person, and with highly uncertain odds at that.

I imagine that a lot of people on Less Wrong get off on having someone tell them "with a strong tone of moral obligation" that death can be defeated and that they simply must invest their money in securing their own immortality. Even if it isn't a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and increases the number of warm fuzzies they get from the thought that some day they'll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.

Replies from: None, Kawoomba
comment by [deleted] · 2014-01-15T09:31:03.613Z · LW(p) · GW(p)

Even if it isn't a valid moral argument, per se, phrasing it as one makes cryonics buyers feel better about their choice and increases the number of warm fuzzies they get from the thought that some day they'll wake up in the future, alive and healthy with everyone congratulating them on being so very brave and clever and daring to escape death like that.

Just asking, were you trying to make that sound awful and smug? Because that honestly sounds like a future I don't want to wake up in.

I want to wake up in the future where people have genuine compassion for the past, and are happy to welcome the "formerly dead" to a grand new life, hopefully even including their friends and loved ones who also made it successfully to "the Future". If the post-cryonic psychological counsellors of the future woke me up with, "Congratulations, you made the right business decision!", then I would infer that things had gone horribly wrong.

Replies from: ciphergoth, Brillyant, poiuyt
comment by Paul Crowley (ciphergoth) · 2014-01-17T07:17:43.758Z · LW(p) · GW(p)

Lost in the wilderness, I think we should go North; you, South. If I find help, but learn that you died, my first thought will not be "neener neener told you so".

comment by Brillyant · 2014-01-15T21:12:55.114Z · LW(p) · GW(p)

Interesting...

Is it possible cryonic wakers might be treated very poorly? Perhaps stigmatized?

I'm very ignorant of what all is involved in either "end" of cryonics, but what if, say, the cost of resurrecting the frozen person is prohibitively high and future people lobby to stop their waking up? And even the ones who do wake up are treated like pariahs?

It might play out like the immigration situation in the US: A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants.

I can already hear the arguments now...

"They won't know if we don't wake them up. Besides every one we wake costs us X resources which damages Y lives by Z%."

Replies from: Jiro
comment by Jiro · 2014-01-15T22:03:02.318Z · LW(p) · GW(p)

A nation, founded by immigrants, that is now composed of a big chunk of citizens who hate immigrants.

How is that any different from saying "a nation, founded by slaveowners, that is now composed of a big chunk of citizens who hate slaveowners"? Certainly the fact that your ancestors benefited from being slaveowners is no reason why you should support slaveowners now.

comment by poiuyt · 2014-01-15T20:39:04.830Z · LW(p) · GW(p)

Just asking, were you trying to make that sound awful and smug?

Yep.

While genuine compassion is probably the ideal emotion for a post-cryonic counselor to actually show, it's the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now. There's nothing wrong with that: "Everyone who said I was stupid is wrong and gets forced to admit it." is probably one of the top ten most common fantasies and there's nothing wrong with spending your leisure budget on indulging a fantasy. Especially if it has real world benefits too.

Replies from: None
comment by [deleted] · 2014-01-15T21:33:20.795Z · LW(p) · GW(p)

it's the anticipation of their currently ridiculed beliefs being validated, with a side order of justified smugness that gets people going in the here and now.

That's... actually kinda sad, and I think I'm going to go feed my brain some warm fuzzies to counter it.

Trying to live forever out of spite instead of living well in the here and now that's available? Silly humans.

Replies from: Eliezer_Yudkowsky, blacktrance
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-16T06:30:31.193Z · LW(p) · GW(p)

Don't worry, poiuyt is making all of this up. I don't personally know of anyone to whom this imaginary scenario applies. The most common sentiment about cryonics is "God dammit I have to stop procrastinating", hence the enjoinders are welcome; as for their origin point, well, have you read HPMOR up to Ch. 96?

Replies from: poiuyt, None
comment by poiuyt · 2014-01-17T01:46:12.693Z · LW(p) · GW(p)

I feel that I am being misunderstood: I do not suggest that people sign up for cryonics out of spite. I imagine that almost everyone signed up for cryonics does so because they actually believe it will work. That is as it should be.

I am only pointing out that being told that I am stupid for signing up for cryonics is disheartening. Even if it is not a rational argument against cryonics, the disapproval of others still affects me. I know this because my friends and family make it a point to regularly inform me of the fact that cryonics is "a cult", that I am being "scammed out of my money" by Alcor and that even if it did work, I am "evil and wrong" for wanting it. Being told those things fills me with doubts and saps my willpower. Hearing someone on the pro-cryonics side of things reminding me of my reasons for signing up is reassuring. It restores the willpower I lose hearing those around me insulting my belief. Hearing that cryonics is good and I am good for signing up isn't evidence that cryonics will work. Hearing that non-cryonicists will "regret" their choice certainly isn't evidence that cryonics is the most effective way to save lives. But it is what I need to hear in order to not cave in to peer pressure and cancel my policy.

I get my beliefs from the evidence, but I'll take my motivation from wherever I can find it.

comment by [deleted] · 2014-01-16T07:57:59.254Z · LW(p) · GW(p)

Eliezer, I have been a frequent and enthusiastic participant on /r/hpmor for years before I decided to buck up and make a LessWrong account.

The most common sentiment about cryonics is "God dammit I have to stop procrastinating",

I don't recall someone answering my question in the other place I posted it, so I might as well ask you (since you would know): provided I am unwilling to believe current cryonic techniques actually work (even given a Friendly superintelligence that wants to bring people back), where can I be putting money towards other means of preserving people or life-extension in general?

Gwern had a posting once on something called "brain plastination", which supposedly works "better" in some sense than freezing in liquid nitrogen, even though that still relies on em'ing you to bring you back, which frankly I find frightening as all hell. Is there active research into that? Into improved cryonics techniques?

Or should I just donate to anti-aging research on grounds that keeping people alive and healthy for longer before they die is a safer bet than, you know, finding ways to preserve the dead such that they can be brought back to life later?

Replies from: somervta
comment by somervta · 2014-01-18T12:38:07.858Z · LW(p) · GW(p)

The Brain Preservation Foundation may be what you're looking for.

comment by blacktrance · 2014-02-05T22:05:23.244Z · LW(p) · GW(p)

There's good and bad spite. Good spite is something like, "They call me mad! But I was right all along. Muahahaha!" and feeling proud and happy that you made the right choice despite opposition from others. Bad spite is something like, "I was right and they were wrong, and now they're suffering for their mistakes. Serves them right". One is accomplishment, the other is schadenfreude.

comment by Kawoomba · 2014-01-16T07:16:34.180Z · LW(p) · GW(p)

Yes, it is a great psychological coping mechanism. Death is such a deeply personal topic that it would be folly to assume fuzzies, or the avoidance of frighties, didn't factor in.

However, such is the case with any measure or intervention explicitly relating to lifespan extension. So while extra guarding against motivated cognition is in order when dealing with one's personal future non-existence and the postponing thereof, saying "you're doing it because of the warm fuzzies!" isn't a sufficient rejection of death escapism.

The cryonics buyer may well answer "well, yes, that, and also, you know, the whole 'potential future reanimation' part". You still have to engage with the object level.

comment by Richard_Kennaway · 2014-01-12T09:44:11.867Z · LW(p) · GW(p)

By going with the leisure budget argument

Should a monk who has taken vows have a sin budget, because the flesh is weak?

You seem conflicted, believing you should not value your own life over others', but continuing to do so; then justifying yielding to temptation on the grounds that you are tempted.

one is essentially admitting that cryonics isn't about altruism, it's about yourself.

Of course it is. Has it ever been presented as anything else, as "Escape death so you can do more for other people"? Support for cryonics is for the sake of everyone, but signing up to it is for oneself alone.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T12:17:37.512Z · LW(p) · GW(p)

Should a monk who has taken vows have a sin budget, because the flesh is weak?

If that helps them achieve their vows overall.

I did try valuing the lives of others equally before. It only succeeded in making me feel miserable and preventing me from getting any good done. Tried that approach, doesn't work. Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.

Of course it is. Has it ever been presented as anything else

Once people start saying things like "It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now" or "If you don't sign up your kids for cryonics then you are a lousy parent", it's hard to avoid reading a moral tone into them.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-01-12T17:02:34.557Z · LW(p) · GW(p)

Should a monk who has taken vows have a sin budget, because the flesh is weak?

If that helps them achieve their vows overall.

The opportunity for self-serving application of this principle casts a shadow over all applications. I believe this hypothetical monk's spiritual guide would have little truck with such excuses, rest and food, both in strict moderation, being all the body requires. (I have recently been reading the Sayings of the Desert Fathers and St John Climacus' "Ladder of Divine Ascent", works from the first few centuries of Christianity, and the rigours of the lives described there are quite extraordinary.)

Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.

"It's not me that wants this, it's this other thing I share this body with." Personally, that sounds to me like thinking gone wrong, whether you yield to or suppress this imaginary person. You appear to be identifying with the altruist faction when you write all this, but is that really the altruist faction speaking, or just the egoist faction pretending not to be? Recognising a conflict should be a first step towards resolving it.

Of course it is. Has it ever been presented as anything else

Once people start saying things like "It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now" or "If you don't sign up your kids for cryonics then you are a lousy parent", it's hard to avoid reading a moral tone into them.

These are moral arguments for supporting cryonics, rather than for signing up oneself. BTW, if it's sinfully self-indulgent to sign up oneself, how can you persuade anyone else to? Does a monk preach "eat, drink, and be merry"?

Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?

Replies from: Kaj_Sotala, MugaSofer
comment by Kaj_Sotala · 2014-01-13T13:19:02.479Z · LW(p) · GW(p)

The opportunity for self-serving application of this principle casts a shadow over all applications.

[…]

Finally, when I look at the world, I see almost no-one who values others above themselves. What, then, will the CEV of humanity have to say on the subject?

I'm confused over what exactly your position is. The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism, whereas the second excerpt seems like a criticism of that position.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-01-13T15:27:46.220Z · LW(p) · GW(p)

My position is that (1) the universal practice of valuing oneself over others is right and proper (and I expect others to rightly and properly value themselves over me, it being up to me to earn any above-baseline favour I may receive), (2) there is room for discussion about what base level of compassion one should have towards distant strangers (I certainly don't put it at zero), and (3) I take the injunction to love one's neighbour as oneself as a corrective to a too low level of (2) rather than as a literal requirement, a practical rule of thumb for debiasing rather than a moral axiom. Perfect altruism is not even what I would want to want.

The first bit I quoted seems to imply that you think that one should sacrifice everything in favor of altruism

I'm drawing out what I see as the implications of holding (which I don't) that we ought to be perfectly altruistic, while finding (as I do) that in practice it is impossible. It leads, as you have found, to uneasy compromises guiltily taken.

Replies from: Kaj_Sotala, TheAncientGeek
comment by Kaj_Sotala · 2014-01-13T15:33:21.185Z · LW(p) · GW(p)

I did say right in my original comment (emphasis added):

By going with the leisure budget argument, one is essentially admitting that cryonics isn't about altruism, it's about yourself. And of course, there is nothing wrong with that, since none of us is a 100% complete altruist who cares nothing about themselves, nor should we even try to idealize that kind of a person.

comment by TheAncientGeek · 2014-01-13T17:02:40.817Z · LW(p) · GW(p)

I will attempt a resolution: other people are as important as me, in principle, since I am not objectively anything special -- but I should concentrate my efforts on myself and those close to me, because I understand my and their needs better, and can therefore be more effective.

Replies from: None, None
comment by [deleted] · 2014-01-13T17:45:36.226Z · LW(p) · GW(p)

I don't think that's a sufficient or effective compromise. If I'm given a choice between saving the life of my child, or the lives of a 1000 other children, I will always save my child. And I will only feel guilt to the extent that I was unable to come up with a 3rd option that saves everybody.

I don't do it for some indirect reason such as that I understand my children's needs better or such. I do it because I value my own child's life more, plain and simple.

comment by [deleted] · 2014-01-14T20:32:45.806Z · LW(p) · GW(p)

Important to whom?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-01-15T09:42:36.704Z · LW(p) · GW(p)

You might as well have asked: special to whom? Even if there is no objective importance or specialness anywhere, it still follows that I have no objective importance or specialness.

comment by MugaSofer · 2014-01-12T19:43:59.202Z · LW(p) · GW(p)

For the record, you do have a limited supply of willpower. I'm guessing those monks either had extraordinary willpower reserves or nonstandard worldviews that made abstinence actually easier than sin.

Replies from: hyporational, Richard_Kennaway, Richard_Kennaway
comment by hyporational · 2014-01-12T20:47:30.659Z · LW(p) · GW(p)

It seems they exercise that willpower muscle very explicitly for hours every day. Abstinence should actually be pretty easy, considering you have very little else to drain your willpower with.

comment by Richard_Kennaway · 2014-01-14T12:09:38.649Z · LW(p) · GW(p)

For the record, you do have a limited supply of willpower.

If you think so.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-14T12:37:48.456Z · LW(p) · GW(p)

Looking into your link now, but it was my understanding that the effect was weaker if the participant didn't believe in it, not nonexistent (i.e. disbelieving in ego depletion has a placebo effect.)

Wikipedia, Font Of All Knowledge concurs:

An individual’s perceived level of fatigue has been shown to influence their subsequent performance on a task requiring self-regulation, independent of their actual state of depletion.[14] This effect is known as illusory fatigue. This was shown in an experiment in which participants engaged in a task that was either depleting or non-depleting, which determined each individual’s true state of depletion. Ultimately, when participants were led to believe their level of depletion was lower than their true state of depletion, they performed much better on a difficult working memory task. This indicates that an increased perceived level of fatigue can hinder self-regulatory performance independent of the actual state of depletion.

[...]

An experiment by Carol Dweck and subsequent work by Roy Baumeister and Kathleen Vohs has shown that beliefs in unlimited self-control helps mitigate ego depletion for a short while, but not for long. Participants that were led to believe that they will not get fatigued performed well on a second task but were fully depleted on a third task.[16]

ETA: It seems the Wikipedia citation is to a replication attempt of your link. They found the effect was real, but it only lessened ego depletion - subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.

Replies from: EHeller
comment by EHeller · 2014-01-14T16:44:34.776Z · LW(p) · GW(p)

They found the effect was real, but it only lessened ego depletion - subjects who were told they had unlimited willpower still suffered ego depletion, just less strongly. So yup, placebo.

I'm not sure the word "placebo" makes sense when you are discussing purely psychological phenomena. Obviously any effects will be related to psychology - it's not like they gave them a pill.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-14T19:32:17.099Z · LW(p) · GW(p)

I ... think it's supposed to be regulated at least partially by glucose levels? So in some of the experiments, they were giving them sugar pills, or sugar water or something? I'm afraid this isn't actually my field :(

But of course, no phenomenon is purely psychological (unless the patient is a ghost.) For example, I expect antidepressant medication is susceptible to the placebo effect.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-15T16:02:10.129Z · LW(p) · GW(p)

See here.

Take, for example, the reaction to our claim that the glucose version of the resource argument is false (Kurzban 2010a). Inzlicht & Schmeichel, scholars who have published widely in the willpower-as-resource literature, more or less casually bury the model with the remark in their commentary that the “mounting evidence points to the conclusion that blood glucose is not the proximate mechanism of depletion.” (Malecek & Poldrack express a similar view.) Not a single voice has been raised to defend the glucose model, and, given the evidence that we advanced to support our view that this model is unlikely to be correct, we hope that researchers will take the fact that none of the impressive array of scholars submitting comments defended the view to be a good indication that perhaps the model is, in fact, indefensible. Even if the opportunity cost account of effort turns out not to be correct, we are pleased that the evidence from the commentaries – or the absence of evidence – will stand as an indication to audiences that it might be time to move to more profitable explanations of subjective effort.

While the silence on the glucose model is perhaps most obvious, we are similarly surprised by the remarkably light defense of the resource view more generally. As Kool & Botvinick put it, quite correctly in our perception: “Research on the dynamics of cognitive effort have been dominated, over recent decades, by accounts centering on the notion of a limited and depletable ‘resource’” (italics ours). It would seem to be quite surprising, then, that in the context of our critique of the dominant view, arguably the strongest pertinent remarks come from Carter & McCullough, who imply that the strength of the key phenomenon that underlies the resource model – two-task “ego-depletion” studies – might be considerably less than previously thought or perhaps even nonexistent. Despite the confidence voiced by Inzlicht & Schmeichel about the two-task findings, the strongest voices surrounding the model, then, are raised against it, rather than for it. (See also Monterosso & Luo, who are similarly skeptical of the resource account.)

Indeed, what defenses there are of the resource account are not nearly as adamant as we had expected. Hagger wonders if there is “still room for a ‘resource’ account,” given the evidence that cuts against it, conceding that “[t]he ego-depletion literature is problematic.” Further, he relies largely on the argument that the opportunity cost model we offer might be incomplete, thus “leaving room” for other ideas.

comment by Richard_Kennaway · 2014-01-13T15:31:06.646Z · LW(p) · GW(p)

nonstandard worldviews that made abstinence actually easier than sin

If it isn't, you're doing something wrong.

ETA: By which I don't mean that it is easy to do it right. Practicing anything involves a lot of doing it wrong while learning to do it right.

comment by Wes_W · 2014-01-12T18:11:42.093Z · LW(p) · GW(p)

It seems to me that, even valuing your own life and the lives of others equally, it's not necessarily inconsistent to pay much more for cryonics than it would cost to save a life by normal altruist means. Cryonics could save your life, and malaria nets could save somebody else's life, but these two life-savings are not equal. If you're willing to pay more to save a 5-year-old than an 85-year-old, then for some possible values of cryonics effectiveness, expectation of life quality post-resuscitation, and actual cost ratios, shutting up and multiplying could still favor cryonics.
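
(A minimal sketch of that multiplication, with every number invented purely for illustration: the cryonics cost, the success probability, the expected life-years, and the per-life figure for nets are all assumptions, not claims made in this thread.)

```python
# Back-of-the-envelope: cost per expected life-year under assumed inputs.
# All numbers below are illustrative placeholders, not estimates.

def cost_per_expected_life_year(cost, p_success, years_gained):
    """Dollars spent per expected life-year, discounted by probability of success."""
    return cost / (p_success * years_gained)

# Malaria nets: assume ~$5,000 reliably saves a young child with ~60 years ahead.
nets = cost_per_expected_life_year(cost=5_000, p_success=1.0, years_gained=60)

# Cryonics: assume ~$80,000 total, a 5% chance of working, and a long
# post-resuscitation lifespan if it does.
cryo = cost_per_expected_life_year(cost=80_000, p_success=0.05, years_gained=1_000)

print(f"Nets: ~${nets:,.0f} per expected life-year")      # ~$83
print(f"Cryonics: ~${cryo:,.0f} per expected life-year")  # ~$1,600
```

Under these particular assumptions the nets still come out ahead; the point is only that the answer is sensitive to the assumed probability and to how many post-resuscitation years you credit, which is exactly the "for some possible values" caveat above.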

If this argument carries, it would also mean that you should be spending money on buying cryonics for other people, in preference to any other form of altruism. But in practice, you might have a hard time finding people who would be willing to sign up for cryonics and aren't already willing/able to pay for it themselves, so you'd probably have to default back to regular altruism.

If you do have opportunities to buy cryonics for other people, and you value all lives equally, then you've still got the problem of whether you should sign yourself up rather than somebody else. But multiplying doesn't say you can't save yourself first there, just that you have no obligation to do so.

comment by Kawoomba · 2014-01-12T09:50:57.741Z · LW(p) · GW(p)

If I value my life equally to the lives of others (...)

Edit: Since you don't in terms of your revealed preferences, are you aspiring to actually reach such a state? Would an equal valuation of your life versus a random other life (say, in terms of QALYs) be a desirable Schelling point, or is "more altruistic" always preferable even at that point (race to the bottom)?

Replies from: Kaj_Sotala, army1987
comment by Kaj_Sotala · 2014-01-12T12:09:09.980Z · LW(p) · GW(p)

Depends on which part of my brain you ask. The altruistic faction does aspire to it, but the purely egoist faction doesn't want to be eradicated, and is (at least currently) powerful enough to block attempts to eradicate it entirely. The altruist faction is also not completely united, as different parts of my brain have differing opinions on which ethical system is best, so e.g. my positive utilitarian and deontological groups might join the egoist faction in blocking moves that led to the installation of values that were purely negative utilitarian.

comment by A1987dM (army1987) · 2014-01-12T09:57:17.338Z · LW(p) · GW(p)

Have you read the second paragraph of the comment you're replying to?

Replies from: Kawoomba
comment by Kawoomba · 2014-01-12T10:04:45.946Z · LW(p) · GW(p)

Clarified in grandparent.

comment by hyporational · 2014-01-12T08:42:42.077Z · LW(p) · GW(p)

If I get a chance to actually act in accordance to my preferred values and behave more altruistically than normal, I'll take it.

I don't understand this values vs preferred values thing. It sounds like "if I get a chance to go against my actual values in favor of some fictional values, I'll take it" which seems like a painful strategy. If you get to change your values in some direction permanently, it might work and I would understand why you'd want to change your cognition so that altruism felt better, to make your values more consistent.

Replies from: RobbBB, Kaj_Sotala, None
comment by Rob Bensinger (RobbBB) · 2014-01-12T10:58:56.533Z · LW(p) · GW(p)

It sounds like "if I get a chance to go against my actual values in favor of some fictional values, I'll take it" which seems like a painful strategy.

This isn't really different than any other situation where people wish they had a different characteristic than they do. Sometimes such preferences are healthy and benign in the case of other mental states, e.g., preferring to acquire more accurate beliefs. I don't see any reason to think they can't be healthy and benign in the case of preferring to change one's preferences (e.g., to make them form a more consistent system, or to subordinate them to reflective and long-term preferences).

I would understand why you'd want to change your cognition so that altruism felt better, to make your values more consistent.

As I noted to Chris above, consistency isn't necessarily the highest goal here. The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.

Replies from: hyporational
comment by hyporational · 2014-01-12T12:57:02.855Z · LW(p) · GW(p)

This isn't really different than any other situation where people wish they had a different characteristic than they do.

I disagree. In most cases like this people wish they were more empathetic to their future selves, which isn't relevant in the case of tricking yourself to do radical altruism, if your future self won't value it more than your current self.

The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.

This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-13T05:51:02.674Z · LW(p) · GW(p)

isn't relevant in the case of tricking yourself to do radical altruism, if your future self won't value it more than your current self.

I don't see the relevance. In prudential cases (e.g., getting yourself to go on a diet), the goal isn't to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it's not the only possible one. Similarly, in moral cases (e.g., getting yourself to donate to GiveWell), the goal isn't to feel more empathy toward strangers. The goal is to help strangers suffer and die less.

This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.

Suppose you see a child drowning in your neighbor's pool, and you can save the child without incurring risk. But, a twist: You have a fear of water.

Kaj and I aren't saying: If you're completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child. If that's your precondition for an interesting or compelling moral argument, then you're bound to be disappointed.

Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren't averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold. This is true even if you don't care at all about your aversion to bodies of water in other contexts (e.g., you aren't pining to join any swim teams). For the same reason, it can make sense to wish that you weren't selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.

Replies from: hyporational
comment by hyporational · 2014-01-13T06:10:17.472Z · LW(p) · GW(p)

the goal isn't to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it's not the only possible one.

Sorry, I used empathy a bit loosely. Anyway, the goal is to generate utility for my future self. Empathy is one mechanism for that, and there are others. The only reason to lose weight and get healthy, at least for me, is that I know for sure my future self will appreciate that. Otherwise I would just binge to satisfy my current self.

Kaj and I aren't saying: If you're completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child

What I'm saying is that if the child was random and I had a high risk of dying when trying to save them then there's no argument that would make me take that risk although I'm probably much more altruistic than average already. If I had an irrational aversion to water that actually reflected none of my values then of course I'd like to get rid of that.

Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren't averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold.

It seems to me more like you're saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.

For the same reason, it can make sense to wish that you weren't selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.

I really don't understand. Either you are that selfish, or you aren't. I'm that selfish, but also happily donate money. There's no argument that could change that. I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-13T10:26:38.426Z · LW(p) · GW(p)

The only reason to lose weight and get healthy, at least for me, is that I know for sure my future self will appreciate that.

No. There are also important things that my present self desires be true of my future self, to some extent independently of what my future self wants. For instance, I don't want to take a pill that will turn me into a murderer who loves that he's a murderer, even though if I took such a pill I'd be happy I did.

if the child was random and I had a high risk of dying when trying to save them then there's no argument that would make me take that risk

If your risk of dying is high enough, then you shouldn't try to save the child, since if you're sure to die the expected value may well be negative. Still, I don't see how this is relevant to any claim that anyone else on this thread (or in the OP) is making. 'My altruism is limited, and I'm perfectly OK with how limited it is and wouldn't take a pill to become more altruistic if one were freely available' is a coherent position, though it's not one I happen to find myself in.

If I had an irrational aversion to water that actually reflected none of my values then of course I'd like to get rid of that.

Then you understand the thing you were confused about initially: "I don't understand this values vs preferred values thing." Whether you call hydrophobia a 'value' or not, it's clearly a preference; what Kaj and I are talking about is privileging some preferences over others, having meta-preferences, etc. This is pretty ordinary, I think.

It seems to me more like you're saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.

Well, of course you should; when I say the word 'should', I'm building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn't murder people. If you're inclined to murder people, then it's unlikely that my saying 'please don't do that, it would be a breach of your moral obligations' is going to have a large effect in dissuading you. Yet, all the same, it is bad to kill people, by the facts on the ground and the meaning of 'bad' (and of 'kill', and of 'to'...). And it's bad to strongly desire to kill people; and it's bad to be satisfied with a strong desire to kill people; etc. Acts and their consequences can be judged morally even when the actors don't themselves adhere to the moral system being used for judging.

I really don't understand. Either you are that selfish, or you aren't.

People aren't any level of selfish consistently; they exhibit more selfishness in some situations than others. Kaj's argument is that if I prize being altruistic over being egoistic, then it's reasonable for me to put no effort into eliminating my aversion to cryonics, even though signing up for cryonics would exhibit no more egoism than the amount of egoism revealed in a lot of my other behaviors.

'You ate those seventeen pancakes, therefore you should eat this muffin' shouldn't hold sway as an argument against someone who wants to go on a diet. For the same reason, 'You would spend thousands of dollars on heart surgery if you needed it to live, therefore you should spend comparable amounts of money on cryonics to get a chance at continued life' shouldn't hold sway as an argument against someone who wants above all else to optimize for the happiness of the whole human species. (And who therefore wants to want to optimize for everyone's aggregate happiness.)

I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.

I'd love to see someone try to pick units with which to compare those two values. :)

Replies from: hyporational
comment by hyporational · 2014-01-13T12:42:05.269Z · LW(p) · GW(p)

Well, of course you should; when I say the word 'should', I'm building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn't murder people. [...] Acts and their consequences can be judged morally even when the actors don't themselves adhere to the moral system being used for judging.

You should be more careful when thinking of examples and judging people explicitly. A true utilitarian would probably not want to make EA look as bad as you just did there, and would also understand that allies are useful to have even if their values aren't in perfect alignment with yours. Because of that paragraph, it's pretty difficult for me to look at anything else you said rationally.

Here's some discussion by another person on why the social pressure applied by some EA people might be damaging to the movement.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-13T13:06:02.661Z · LW(p) · GW(p)

I'm not trying to browbeat you into changing your values. (Your own self-descriptions make it sound like that would be a waste of time, and I'm really more into the Socratic approach than the Crusader approach.) I'm making two points about the structure of utilitarian reasoning:

  1. 'It's better for people to have preferences that cause them to do better things.' is nearly a tautology for consequentialists, because the goodness of things that aren't intrinsically good is always a function of their effects. It's not a bold or interesting claim; I could equally well have said 'it's good for polar bears to have preferences that cause them to do good things'. Ditto for Clippy. If any voluntary behavior can be good or bad, then the volitions causing such behavior can also be good or bad.

  2. 'Should' can't be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.

Do you take something about 1 or 2 to be unduly aggressive or dismissive? Maybe it would help if you said more about what your own views on these questions are.

I'll also say (equally non-facetiously): I don't endorse making yourself miserable with guilt, forbidding yourself to go to weddings, or obsessing over the fact that you aren't exactly 100% the person you wish you were. Those aren't good for personal or altruistic goals. (And I think both of those matter, even if I think altruistic goals matter more.) I don't want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.

It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them, so I do hope we can find ways to live with the fact that our everyday moral heuristics don't have to be (indeed, as a matter of psychological realism, cannot be) the same as our rock-bottom moral algorithm.

Replies from: hyporational
comment by hyporational · 2014-01-13T13:52:53.085Z · LW(p) · GW(p)

'It's better for people to have preferences that cause them to do better things.' is nearly a tautology for consequentialists, because the goodness of things that aren't intrinsically good is always a function of their effects.

Consequentialism makes no sense without a system that judges which consequences are good. By the way, I don't understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.

'Should' can't be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.

I don't think I voluntarily do bad things according to my values, ever. I also don't understand why other people would voluntarily do bad things according to their own values. My values change though, and I might think I did something bad in the past.

Other people do bad things according to my values, but if their actions are truly voluntary and I can't point out a relevant contradiction in their thinking, saying they should do something else is useless, and working to restrict their behavior by other means would be more effective. Connotatively comparing them to murderers and completely ignoring that values have a spectrum would be one of the least effective strategies that come to mind.

Do you take something about 1 or 2 to be unduly aggressive or dismissive?

No.

I don't want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.

To me that seems like you're ignoring what's normally persuasive to people out of plain stubbornness. The reason I'm bringing this up is because I have altruistic goals too, and I find such talk damaging to them.

It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them

Having ideals is fine if you make it absolutely clear that's all that they are. If thinking about them in a certain way motivates you, then great, but if it just makes some people pissed off then it would make sense to be more careful about what you say. Consider also that some people might have laxer ideals than you do, and still do more good according to your values. Ideals don't make or break a good person.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-13T14:35:29.970Z · LW(p) · GW(p)

I don't understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.

I'm not conflating the two. There are non-utilitarian moral consequentialisms. I'm not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call 'morality'. But that's just a terminological issue.

If an egoist did choose to adopt moral terminology like 'ought' and 'good', and to cash those terms out using egoism, then the egoist would agree with my claim 'It's better for people to have preferences that cause them to do better things.' But the egoist would mean by that 'It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy', whereas what I mean by the sentence is something more like 'It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents'.

I don't think I voluntarily do bad things according to my values, ever.

Interesting! Then your usage of 'bad' is very unusual. (Or your preferences and general psychological makeup are very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.

Connotatively comparing them to murderers

Sorry, I don't think I was clear about why I drew this comparison. 'Murder' just means 'bad killing'. It's trivial to say that murder is bad. I was saying that it's nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn't think anyone would think I was honestly saying something almost unsurpassably silly.

I find such talk damaging to them.

What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse 'you should feel horribly guilty and hate yourself if you haven't 100% maximized your impact'? Or should we drop the idea that maximization is even a good thing?

Having ideals is fine if you make it absolutely clear that's all that they are.

I don't know what you mean by 'that's all they are'. Core preferences, ideals, values, goals... I'm using all these terms to pick out pretty much the same thing. I'm not using 'ideal' in any sense in which ideals are mere. They're an encoding of the most important things in human life, by reference to optima.

Replies from: blacktrance, hyporational
comment by blacktrance · 2014-01-13T17:21:08.653Z · LW(p) · GW(p)

Egoism is usually not the claim that everyone should act in the egoist's self-interest, but that everyone should act in their own self-interest, i.e. "It better fits the goal of my egoism for people to have preferences that cause them do to things that make them happy".

Replies from: RobbBB, hyporational
comment by Rob Bensinger (RobbBB) · 2014-01-13T18:26:46.294Z · LW(p) · GW(p)

That's true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say 'I endorse egoism' in that sense I'm really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former 'goal' is the truer one, in that it's the one that actually guides my actions to the extent I'm a 'good' egoist; the latter goal is a weird hanger-on that doesn't seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don't come into conflict that often, but it's clear which one is more important when they do.

This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.

Replies from: blacktrance
comment by blacktrance · 2014-01-13T18:37:21.322Z · LW(p) · GW(p)

when I say 'I endorse egoism' in that sense I'm really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires

The two goals don't conflict, or, more precisely, (2) isn't a goal, it's a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one's own desires. It's similar to how in the prisoner's dilemma, each prisoner wants the other to cooperate, but doesn't believe that the other prisoner should cooperate.
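
(As a minimal illustration of that decision rule, here is the standard one-shot prisoner's dilemma with conventional years-in-prison payoffs; the numbers are the usual textbook ones, chosen only for illustration.)

```python
# One-shot prisoner's dilemma, payoffs as years in prison (lower is better).
# payoff[(my_move, their_move)] = (my_years, their_years)
payoff = {
    ("C", "C"): (1, 1),
    ("C", "D"): (3, 0),
    ("D", "C"): (0, 3),
    ("D", "D"): (2, 2),
}

def best_response(their_move):
    """The self-interested decision rule: minimize my own prison years."""
    return min(("C", "D"), key=lambda my_move: payoff[(my_move, their_move)][0])

# The rule recommends defecting whatever the other player does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet I am still better off if the *other* player cooperates:
assert payoff[("D", "C")][0] < payoff[("D", "D")][0]
```

Each player wants the other to cooperate while endorsing, for both of them, the rule that recommends defection; the goal and the decision rule come apart without contradicting each other.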

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-01-14T02:04:10.309Z · LW(p) · GW(p)

There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one's own desires.

I think it depends on what's meant by 'correct decision rule'. Suppose I came up to you and said that intuitionistic mathematics is 'correct', and conventional mathematics is 'incorrect'; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else's goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional is wrong?

It's similar to how in the prisoner's dilemma, each prisoner wants the other to cooperate, but doesn't believe that the other prisoner should cooperate.

I don't think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don't know what it means to add to that that the other player 'shouldn't' cooperate. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there's no normative demand that it do so. I don't think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn't conclude that this was a bad or wrong or 'incorrect' thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.

Replies from: blacktrance
comment by blacktrance · 2014-01-14T02:20:17.068Z · LW(p) · GW(p)

Sorry, I don't know much about the philosophy of mathematics, so your analogy goes over my head.

I don't know what it means to add to that that the other player 'shouldn't' cooperate.

It means that it is optimal for the other player to defect, from the other player's point of view, if they're following the same decision rule that you're following. Given that you've endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn't use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn't have.

comment by hyporational · 2014-01-13T17:38:55.938Z · LW(p) · GW(p)

It seems various things are meant by egoism.

Begins with "Egoism can be a descriptive or a normative position."

Replies from: Lumifer
comment by Lumifer · 2014-01-13T17:41:50.474Z · LW(p) · GW(p)

It's also a common attack term :-/

Replies from: hyporational
comment by hyporational · 2014-01-13T17:42:51.551Z · LW(p) · GW(p)

I better stop using it. In fact, I better stop using any label for my value system.

comment by hyporational · 2014-01-13T17:00:57.095Z · LW(p) · GW(p)

I'm not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call 'morality'. But that's just a terminological issue.

I'd have no problem calling Clippy a consequentialist, but a polar bear would probably lack sufficient introspection. You have to have some inkling about what your values are to have morality. You're right that it's a terminology issue, and a difficult one at that.

It's better for people to have preferences that cause them to do better things.' But the egoist would mean by that 'It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy

Disclaimer: I use "pleasure" as an umbrella term for various forms of experiential goodness. Say there's some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the amount of pleasure I get from this activity is capped in such a way that I can only get under 50% of the maximum possible pleasure from altruism. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can't call myself an egoist, then I'm at a loss here. Perhaps "egoism" is a reputation hit anyway and I should ditch the word, huh?

Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I'm making much more money than I use, and I'm looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.

Then your usage of 'bad' is very unusual.

Most people don't do much introspection, so I would expect that. However you saying this surprises me, since I didn't expect to be unusual in this crowd.

mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.

These are all bad only in retrospect, and explicable by having insufficient information or different values compared to now, except for "normative progress", which I don't understand. Acting badly voluntarily would mean making a choice which I expect to have bad consequences. It might help your understanding to know what part of my decision process I usually identify with.

This brings up another terminological problem. See, I totally understand I better use the word "bad" in a way that other people understand me, but if I used it while I'm describing my own decision process, that would lead me to scold myself unnecessarily. I don't think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don't really have access to my decision processes. I also have very different private and public meanings for the word "I". In my private considerations, the role of "I" in my brain is very limited.

I just didn't think anyone would think I was honestly saying something almost unsurpassably silly.

I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason why I got angry at the murder example was the perceived social cost of my actions being associated with murder. Toe stubbing is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I'll make a different mistake next time.

What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse 'you should feel horribly guilty and hate yourself if you haven't 100% maximized your impact'? Or should we drop the idea that maximization is even a good thing?

When I first learned how little a life costs, my reaction wasn't guilt, at least not for long. This led me to think "wow, apparently I care about people suffering much less than I previously thought, wonder why that is", not "I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values".

As I previously described, motivation for altruism is purely positive for me, and I'm pretty sure that if I associated EA with guilt, that would make me ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.

I'm not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. One person's idea of what's motivational can be another person's idea of what's demotivational. I think we should try to identify our audience to maximize impact. As a default I'd still try to motivate people positively, not to associate crappy feelings with the important ideas. Human brains are predictably irrational, and there's a difference between saying you can save several lives in a month and be a superhero by donating, and saying you can be a serial killer by spending the money on yourself.

comment by Kaj_Sotala · 2014-01-12T11:44:22.038Z · LW(p) · GW(p)

I don't understand this values vs preferred values thing.

In Yvain's liking/wanting/endorsing categorization, "preferred values" corresponds to any values that I approve of. Another way of saying it would be that there are modules in my brain which execute one set of behaviors, whereas another set of modules would prefer to be engaging in some other set of behaviors. Not really different from any situation where you end up doing something that you think that you shouldn't.

Replies from: blacktrance, hyporational
comment by blacktrance · 2014-01-13T15:53:25.425Z · LW(p) · GW(p)

If you approve of these values, why don't you practice them? It seems to me that approving of a value means you want others to practice it, regardless of whether you want it for yourself.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-13T16:10:36.142Z · LW(p) · GW(p)

Did I say I don't? I'm not signed up for cryonics, for instance.

Replies from: blacktrance
comment by blacktrance · 2014-01-13T16:12:17.027Z · LW(p) · GW(p)

I mean valuing people equally.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-13T16:20:30.652Z · LW(p) · GW(p)

Yes, that's what my above comment was a reference to. I do my best to practice it as well as I can.

comment by hyporational · 2014-01-12T13:09:51.213Z · LW(p) · GW(p)

It seems to me you're looking for temporal consistency. My problem understanding you stems from the fact that I don't expect my future self to wish I had been any more altruistic than I am right now. I don't think being conflicted makes much sense without considering temporal differences in preference, and I think Yvain's descriptions fit this picture.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T13:16:44.421Z · LW(p) · GW(p)

I guess you could frame it as a temporal inconsistency as well, since it does often lead to regret afterwards, but it's more an "I'm doing this thing even though I know it's wrong" thing: not a conflict between one's current and future self, but rather a conflict between the good of myself and the good of others.

Replies from: hyporational
comment by hyporational · 2014-01-12T13:55:05.442Z · LW(p) · GW(p)

Interesting. I wonder if we have some fundamental difference in perceived identity at play here. It makes no sense to me to have a narrative where I do things I don't actually want to do.

Say I attach my identity to my whole body. There will be no conflict here, since whatever I do is the result of a resolved conflict hidden in the body, and therefore I must want to do whatever I'm doing.

Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do, will be a result of a resolved conflict hidden inside the brain and I will tell my body to do whatever I want my body to do. Whatever conflict of preferences arises will be a confusion of identity between the brain and the body.

Say I attach my identity to a part of my brain, to this consciousness thing that seems to be in charge of some executive functions, probably residing in the frontal cortex. Whatever this part of the brain tells the rest of the brain will be a result of a resolved conflict hidden inside this part of the brain and again whatever I tell the rest of my brain to do will necessarily have to be what I want to tell it to do, but I can't expect the rest of my brain to do something it cannot do. Whatever conflict arises will be a confusion of identity between this part and the rest of the brain.

I can think of several reasons why I'd want to assume a conflicted identity and almost all of them involve signalling and social convenience.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T15:16:15.217Z · LW(p) · GW(p)

Say I attach my identity to my brain. My brain can want things that my body cannot do, but whatever the brain tells the body to do, will be a result of a resolved conflict hidden inside the brain and I will tell my body to do whatever I want my body to do.

I think the difference here is that, from the inside, it often doesn't feel like my actions were the result of a resolved conflict. Well, in a sense they were, since otherwise I'd have been paralyzed with inaction. But when I'm considering some decision that I'm conflicted over, it very literally feels like there's an actual struggle between different parts of my brain, and when I do reach a decision, the struggle usually isn't resolved in the sense of one part making a decisive argument and the other part acknowledging that they were wrong. (Though that does happen sometimes.)

Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn't really resolved in any sense - if the circumstances were to change and I'd have to make the same decision again, the loser of this "round" might still end up winning the next one. Or the winner might get me started on the action but the loser might then make a comeback and block the action after all.

That's also why it doesn't seem right to talk about this as a conflict between current and future selves. That would seem to imply that I wanted thing X at time T, and some other thing Y at T+1. If you equated "wanting" with "the desire of the brain-faction that happens to be the strongest at the time when one's brain is sampled", then you could kind of frame it like a temporal conflict... but it feels like that description is losing information, since actually what happens is that I want both X and Y at both times: it's just the relative strength of those wants that varies.

Replies from: hyporational
comment by hyporational · 2014-01-12T16:27:22.381Z · LW(p) · GW(p)

when I'm considering some decision that I'm conflicted over, it very literally feels like there's an actual struggle between different parts of my brain

Ok. To me it most often feels like I'm observing that some parts of my brain struggle and that I'm there to tip the scales, so to speak. This doesn't necessarily lead to a desirable outcome if my influence isn't strong enough. I can't say I feel conflicted about in what direction to tip the scales, but I assume this is just because I'm identifying with a part of my brain that can't monitor its inner conflicts. I might have identified with several conflicting parts of my brain at once in the past, but don't remember what it felt like, nor would I be able to tell you how this transformation might have happened.

Rather it feels like one part managed to get the upper hand and could temporarily force the other part into accepting the decision that was made, but the conflict isn't really resolved in any sense

This sounds like tipping the scales. Are you identifying with several conflicting processes, or are you just expressing yourself in a socially convenient manner? If you're X that's trying to make process A win over process B in your brain, and process B wins in a way that leads to undesirable action, does it make any sense to say that you did something you didn't want to do?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-01-12T19:21:38.867Z · LW(p) · GW(p)

Your description of tipping the scale sounds about right, but I think that it only covers two of the three kinds of scenarios that I experience:

  1. I can easily or semi-easily tip the scale in some direction, possibly with an expenditure of willpower. I would mostly not classify this as a struggle: instead I just make a decision.
  2. I would like to tip the scale in some direction, but fail (and instead end up procrastinating or whatever), or succeed but only by a thin margin. I would classify this as a struggle.
  3. I could tip the scale if I just decided what direction I wanted to tip them in, but I'm genuinely unsure of what direction I should tip them in. If scenario #1 feels like an expenditure of willpower in order to override a short-term impulse in favor of a long-term goal, and #2 like a failed or barely successful attempt to do so, then #3 feels like trying to decide what the long-term goal should be. Putting it differently, #3 feels like a situation where the set of processes that do the tipping do not necessarily have any preferences of their own, but rather act as the "carriers" of a set of preferences that multiple competing lower-level systems are trying to install in them. (Actually, that description doesn't feel quite right, but it's the best I can manage right now.)

I now realize that I hadn't previously clearly made the distinction between those different scenarios, and may have been conflating them to some extent. I'll have to rethink what I've said here in light of that.

I think that I identify with each brain-faction that has managed to "install" "its" preferences in the scale-tipping system at some point. So if there is any short-term impulse that all the factions think should be overridden given the chance, then I don't identify with that short-term impulse, but since e.g. both the negative utilitarian and deontological factions manage to take control at times, I identify with both to some extent.

comment by [deleted] · 2014-01-14T20:37:02.747Z · LW(p) · GW(p)

It means different "modules" of your mind have different values, and on reflection you favor one module over the other.

Part of why this still sounds problematic is that we have a hard time unravelling the "superego" (the metaphorical mental module responsible for enforcing nonselfish/pro-social values) from full and complete moral cognition. Thus, many people believe they believe they should be selfless to the point of self-sacrificing, even though, if you cloned them and actually made the clone that selfless, they would not endorse the clone as being a superior version of themselves.

comment by passive_fist · 2014-01-17T02:44:09.175Z · LW(p) · GW(p)

By going with the leisure budget argument, one is essentially admitting that cryonics isn't about altruism, it's about yourself.

I don't remember any non-crazy cryonics advocate ever saying otherwise.

comment by lsparrish · 2014-01-13T21:35:11.830Z · LW(p) · GW(p)

It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there's no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.

I think the scale on which it is done is the main thing here. Currently, cryonics is performed so infrequently that there isn't much infrastructure for it. So it is still fairly expensive compared to the amount of expected utility -- probably close to the value implied by regulatory tradeoffs ($5 million per life). On a large, industrial scale I expect it to be far better value than anything Givewell is going to find.
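
(A hedged sketch of the scale point: implied cost per expected life is just price divided by probability of success. The prices and probabilities below are placeholders I've invented for illustration, not figures from this comment.)

```python
# Implied cost per expected life saved = price / P(success).
# All inputs are illustrative assumptions.

def cost_per_expected_life(price, p_success):
    return price / p_success

small_scale = cost_per_expected_life(price=80_000, p_success=0.05)   # ~$1,600,000
industrial = cost_per_expected_life(price=10_000, p_success=0.10)    # ~$100,000

print(f"Small-scale cryonics: ~${small_scale:,.0f} per expected life")
print(f"Industrial-scale guess: ~${industrial:,.0f} per expected life")
# For comparison: ~$5,000,000 per statistical life implied by regulatory
# tradeoffs, vs. the few-thousand-dollar per-life figures GiveWell cites.
```

Whether the bottom line beats GiveWell's numbers turns entirely on those assumed inputs, which is what the rest of this sub-thread argues about.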

Replies from: Calvin
comment by Calvin · 2014-01-13T21:45:42.787Z · LW(p) · GW(p)

This is a good argument, capable of convincing me into a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.

If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).

Replies from: lsparrish
comment by lsparrish · 2014-01-14T04:47:52.570Z · LW(p) · GW(p)

This is a good argument, capable of convincing me into a pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.

At some point, you will have to specialize in cryobiology and neuroscience (with some information science in there too) in order to process the data. I can understand wanting to see the data for yourself, but expecting everyone to process it rationally and in depth before they get on board isn't necessarily realistic for a large movement. Brian Wowk has written a lot of good papers on the challenges and mechanisms of cryopreservation, including cryoprotectant toxicity. Definitely worth reading up on. Even if you don't decide to be pro-cryonics, you could use a lot of the information to support something related, like cryopreservation of organs.

If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).

Until you have enough information to know, with very high confidence, that information-theoretic death has happened in the best cases, you can't really assign it all a $0 value in advance. You could perhaps assign a lower value than the cost of the project, but you would have to have enough information to do so justifiably. Ignorance cuts both ways here, and cryonics has traditionally been presented as an exercise in decision-making under conditions of uncertainty. I don't see a reason that logic would change if there are millions of patients under consideration. (Although it does imply more people with an interest in resolving the question one way or another, if possible.)

I don't quite agree that the value would be zero if it failed. It would probably displace various end-of-life medical and funeral options that are net-harmful, reduce religious fundamentalism, and increase investment in reanimation-relevant science (regenerative medicine, programmable nanodevices, etc). It would be interesting to see a comprehensive analysis of the positive and negative effects of cryonics becoming more popular. More organs for transplantation could be one effect worth accounting for, since it does not seem likely that we will need our original organs for reanimation. There would certainly be more pressure towards assisted suicide, so that could be positive or negative depending how you look at it.

comment by lsparrish · 2014-01-13T19:43:22.972Z · LW(p) · GW(p)

If I value my life equally to the lives of others, it seems pretty obvious that there's no way by which the money spent on cryonics would be a better investment than spending it on general do-gooding.

This just shifts the question to whether promoting cryonics is an effective form of general consequentialist do-gooding. There are a lot of factors to consider, in regards to large-scale cryonics:

  1. Effects on funding/enthusiasm for new technologies due to alignment of incentives.
  2. Effects on mitigation of existential risks, long-term economic policies, and investments.
  3. How much cheaper it gets when practiced on a large industrial scale.
  4. How much more reliable it becomes when practiced on a large industrial scale.
  5. Displacement of wasteful funeral practices.
  6. Displacement of wasteful end-of-life medical practices.
  7. Reduced religious fundamentalism, due to less belief in innate immortality.
  8. Reduced luxury purchases due to altered time preferences.
  9. Relative number of people who could be saved by cryonics but not by any other available technology.

There are some plausible negative effects to consider as well:

  • A larger industry has more opportunities for corruption and mistakes, so it would probably be more regulated on a larger scale, resulting in higher administrative costs and restrictions on experimentation.
  • People might be less concerned with preventing some health problems (while being more concerned with others, including traffic fatalities and heart disease) as the result of risk compensation.
  • The pressure to cure diseases in the short term could be reduced. Some patients with terminal cases might decide to die earlier than they otherwise would (which would turn out to be permanent if cryonics fails to work for them).

However, the costs aren't likely to outweigh (or even significantly approach) the savings and benefits in my estimation. In many cases the apparent negatives (e.g. people checking out early, or reducing the overt pressure on scientists to cure cancer ASAP) could be a blessing in disguise (less suffering, less bad data). The regulation aspect probably actually benefits from cryonics being a larger and more visible industry, as the alternative is for regulations on the topic to be passed by non-sympathetic outside industries such as death care (funeral directors associations) and tissue banking (nontransplant anatomical donation organizations).

As it stands, LN2 storage costs are fairly minimal (around $10/year per neuro patient, going by CI figures, or about $1,000 in principal per patient assuming 1% interest on a long-term deposit), and can be dramatically reduced by larger-scale storage spaces. Most of the money is going into standby, administration, equipment, and so forth, which are also likely to be a) scale-friendly and b) already heavily invested in by the conventional medical community.
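
A minimal sketch of that endowment arithmetic, taking the $10/year figure quoted above as given and treating a 1% long-term interest rate as a perpetuity (both assumptions, not official figures):

    # Back-of-envelope endowment arithmetic for long-term LN2 storage.
    # Assumptions: $10/year per neuro patient (the CI ballpark quoted above)
    # and a 1% long-term interest rate, treated as a perpetuity.

    annual_ln2_cost = 10.0   # dollars per neuro patient per year
    interest_rate = 0.01     # assumed long-term rate

    principal_needed = annual_ln2_cost / interest_rate
    print(f"Endowment per neuro patient: ${principal_needed:,.0f}")  # -> $1,000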

There's also the long-term financial services aspect. A large chunk is going into long-term savings / low-risk investment. My understanding is that this promotes economic growth.

The funds set aside for cryonics reanimation will eventually go to medical research and infrastructure to reanimate patients. This could take more than one form. Programmed nanorepair and/or uploading are the currently expected forms for today's patients, but that expectation does not necessarily hold for all future forms of cryonics. We might, at some point in the next few decades, reduce the brain damage factor to a point where biologically based regenerative techniques (tissue printing, stem cells, synthetic microbes, etc.) are plausible enough on their own. These technologies, or at least the basic science needed to achieve them, would obviously have uses outside the domain of cryonics.

So the direct and indirect results of cryonics seem to me to be good enough that a non-hypocritical EA might plausibly think it is a good idea to promote by whatever means they can. Signing up for it oneself might be useful to boost the credibility of discussing it with friends, especially if you have a social group that includes wealthy people who might donate to cryonics research or assist the transition to a larger infrastructure down the road somewhere. The question is whether this can beat something less expensive like an Adwords campaign of equivalent ongoing cost (say $80/month).

comment by brazil84 · 2014-01-12T21:26:23.204Z · LW(p) · GW(p)

As I mentioned in a private message to Hallquist, I favor a wait and see approach to cryonics.

This is based on a couple observations:

  1. There is an excellent chance that when (if?) I die, it will either (1) happen in a way that gives me enough advance warning to sign up for cryonics, or (2) be sufficiently sudden that even if I had been signed up for cryonics it wouldn't have made a difference.
  2. It's not too hard to get cash out of a life insurance policy if you are terminally ill.

So it seems there isn't a huge downside to simply carrying life insurance and waiting to make the decision about cryonics. From an actuarial perspective, I have a pretty good chance of living another 30 or 40 years so there is also the possibility that more information may come out helping me to make a better decision.

Edit: As I mentioned to Hallquist, I am a bit concerned that my argument is basically a rationalization for cryocrastination. So feel free to point it out if I am missing something important.

comment by shminux · 2014-01-12T19:27:14.431Z · LW(p) · GW(p)

Initially I wanted to mention that there is one more factor: the odds of being effectively cryopreserved upon dying, i.e. being in a hospital amenable to cryonics, with a cryo team standing by and enough of your brain intact to preserve your identity. This excludes most accidental deaths, massive strokes, etc. However, the CDC data for the US http://www.cdc.gov/nchs/fastats/deaths.htm show that currently over 85% of all deaths appear to be cryo-compatible:

  • Number of deaths: 2,468,435
  • Death rate: 799.5 deaths per 100,000 population
  • Life expectancy: 78.7 years
  • Infant Mortality rate: 6.15 deaths per 1,000 live births

Number of deaths for leading causes of death:

  • Heart disease: 597,689
  • Cancer: 574,743
  • Chronic lower respiratory diseases: 138,080
  • Stroke (cerebrovascular diseases): 129,476
  • Accidents (unintentional injuries): 120,859
  • Alzheimer's disease: 83,494
  • Diabetes: 69,071
  • Nephritis, nephrotic syndrome, and nephrosis: 50,476
  • Influenza and Pneumonia: 50,097
  • Intentional self-harm (suicide): 38,364

So, given how ballpark all your estimates are, this one factor is probably irrelevant.
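
A rough check of the "over 85%" figure from the numbers above; which causes count as cryo-incompatible (here accidents, suicide, and, pessimistically, all strokes) is my own assumption for illustration, not something in the CDC data:

    # Rough check of the "over 85% cryo-compatible" claim using the CDC figures above.
    # Which causes to treat as cryo-incompatible is an illustrative assumption.

    total_deaths = 2_468_435
    cryo_incompatible = {
        "Accidents (unintentional injuries)": 120_859,
        "Intentional self-harm (suicide)": 38_364,
        "Stroke (cerebrovascular diseases)": 129_476,  # pessimistically exclude all strokes
    }

    compatible_share = 1 - sum(cryo_incompatible.values()) / total_deaths
    print(f"Cryo-compatible share of deaths: {compatible_share:.1%}")  # about 88%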

Replies from: roystgnr
comment by roystgnr · 2014-01-15T18:56:53.295Z · LW(p) · GW(p)

What percent of young people's deaths are cryo-compatible? Hypothetically, if most 70-80 year old people who die are in a hospital bed weeks after a terminal diagnosis, but most 30-40 year old people who die are in a wrecked car with paramedics far away, it might make sense for a 34 year old on the fence to forgo the cryonics membership and extra life insurance now but save the money he would have spent on premiums to sign up later in life.

comment by Fossegrimen · 2014-02-04T16:31:55.507Z · LW(p) · GW(p)

I have a view on this that I didn't find by quickly skimming the replies here. Apologies if it's been hashed to death elsewhere.

I simply can't get the numbers to add up when it comes to cryonics.

Let's assume a probability of 1 that cryonics works, and that the resulting expected lifespan runs until the sun goes out. That would equal a net gain of around 4 billion years or so. Now, investing the same amount of money in life extension research and getting, say, a 25% chance of gaining a modest increase in lifespan of 10 years for everyone would equal (7 billion people × 10 years) / 4 = 17.5 billion expected years total.
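
Spelling that comparison out (a sketch only, taking the ~7 billion world population implicit in the original total as given):

    # The two expected-value figures from the paragraph above, made explicit.
    # Assumes ~7 billion people, as implied by the original total.

    p_cryonics = 1.0                 # deliberately generous assumption
    cryonics_payoff_years = 4e9      # "until the sun goes out", roughly

    p_life_extension = 0.25
    extra_years_each = 10
    world_population = 7e9

    ev_cryonics = p_cryonics * cryonics_payoff_years                      # 4.00e9 years (one person)
    ev_life_ext = p_life_extension * extra_years_each * world_population  # 1.75e10 years (everyone)
    print(f"{ev_cryonics:.2e} vs {ev_life_ext:.2e} expected years")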

If you think (like I do) that even if cryonics ends up working, there will probably be a hard limit significantly less than 4bn years (my uneducated guess is that it will be around the lifespan of a ponderosa pine, or 6,000 years or so), or that cryonics has a probability of less than one, the figures only get worse.

Of course, there may be a lower chance of extending everyone's life by even a little, but on the other hand, my grandfather has already gained a decade partly as a result of my investment in a cancer research startup some years back, so I'm not willing to lower those odds by enough to make cryonics come out on top :)

Replies from: Eliezer_Yudkowsky, TheOtherDave
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-05T00:11:19.646Z · LW(p) · GW(p)

How do you invest $50,000 to get a 25% chance of increasing everyone's lifespan by 10 years? John Schloendorn himself couldn't do that on $50K.

Reviewing the numbers you made up for sanity is an important part of making decisions after making up numbers.

Replies from: Fossegrimen, shminux
comment by Fossegrimen · 2014-02-05T16:03:55.742Z · LW(p) · GW(p)

You're right. Those numbers weren't just slightly coloured by hindsight bias but thoroughly coated in several layers of metallic paint and polished. They need to be adjusted drastically down. The reasons I originally considered them to be reasonable are:

  • The field of cancer research seems to be a lot like software in the 80s, in that our technical ability to produce new treatments is increasing faster than the actual number of treatments produced. This means that any money thrown at small groups of people with a garage and a good idea is almost certain to yield good results. (I still think this and am investing accordingly)

  • I have made one such investment which turned out to be a significant contribution in developing a treatment for prostate cancer that gives most patients about 10 extra years.

  • There are far too many similarities between cancer cells and ageing cells for me to readily accept that it is a coincidence. This means that investing in cancer research startups has the added bonus of a tiny but non-zero chance of someone solving the entire problem as a side effect.

  • In retrospect, I also went: "prostate cancer patients == many => everyone == many" (I know, scope insensitivity :( )

On the other hand, my numbers for cryonics were also absurdly optimistic, so I'm not yet convinced that the qualitative point I was trying (ineptly) to make is invalid. The point was: Even a large chance of extending one life by a lot should be outweighed by a smaller chance of extending a lot of lives by a little, especially if the difference in total expected number of years is significant.

Also: Thanks for the pushback. I am far too used to spending time with people who accept whatever I say at face value and the feeling I get on here of being the dumbest person in the room is very welcome.

comment by shminux · 2014-02-05T00:28:36.812Z · LW(p) · GW(p)

The numbers are indeed optimistic, but they are based on empirical evidence:

my grandfather has already gained a decade partly as a result of my investment in a cancer research startup some years back

More conservatively, but still vastly optimistically: suppose $50k has a 1% chance of creating a remedy giving long-term remission (say, 10 extra QALYs) in a lethal disease which strikes 1% of the population, and that almost every sufferer can get the cure. This reduces the total expected years gained down to some 2 million, which is still nothing to sneeze at.
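
Spelled out, with the population reached by the cure as the free parameter (around 2 billion people reproduces the "some 2 million" figure; the full world population would give about 7 million):

    # Expected life-years from the conservative scenario: a 1% chance of a cure
    # that gives 10 extra QALYs to the 1% of the population who get the disease.
    # The population reached is the free parameter.

    p_cure = 0.01
    p_disease = 0.01
    extra_qalys = 10

    for population in (2e9, 7e9):  # ~2e9 reproduces "some 2 million"; 7e9 is the whole world
        expected_years = p_cure * p_disease * extra_qalys * population
        print(f"population {population:.0e}: ~{expected_years:,.0f} expected life-years")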

comment by TheOtherDave · 2014-02-04T18:02:12.334Z · LW(p) · GW(p)

This math only works if I value a year of someone else's life approximately the same as a year of my life.

If instead I value a year of someone else's life (on average), say, a tenth as much as I value a year of my own life, then if I use your numbers to compare the EV of cryonics at 4 GDY (giga-Dave-years) to the EV of life-extension research at 1.75 GDY, I conclude that cryonics is a better deal.
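
A one-liner for that weighting (the 0.1 factor is the illustrative assumption, as above):

    # Comparing the two options in "Dave-years": a year of someone else's life is
    # weighted at 0.1 of a year of one's own (illustrative assumption).

    own_weight, other_weight = 1.0, 0.1

    ev_cryonics_gdy = 4e9 * own_weight        # 4.00e9 Dave-years
    ev_life_ext_gdy = 17.5e9 * other_weight   # 1.75e9 Dave-years
    print(ev_cryonics_gdy > ev_life_ext_gdy)  # True: cryonics wins under this weighting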

Approached the other way... if I don't value any given life significantly more than any other, there's no particular reason for me to sign up for cryonics or research life extension. Sure, currently living people will die, but other people will be alive, so the total number of life-years is more or less the same either way... which is what, in this hypothetical, I actually care about. The important thing in that hypothetical is increasing the carrying capacity of the environment, so the population can be maximized.

It turns out to matter what we value.

Replies from: Fossegrimen, Fossegrimen
comment by Fossegrimen · 2014-02-05T17:15:49.287Z · LW(p) · GW(p)

Your first point is of course valid. My algorithm for determining value of a life is probably a bit different from yours because I end up with a very different result. I determine the value of a life in the following manner:

Value = Current contribution to making this ball of rock a better place + (Quality of life + Unrealised potential) * Number of remaining years.

If we consider extended life spans, the first element of that equation is dwarfed by the rest so we can consider that to be zero for the purpose of this discussion.

Quality of life involves a lot of parameters, and many are worth improving for a lot of people. Low hanging fruit includes: Water supply and sanitation in low-income countries, local pollution in the same countries, easily treatable diseases, Women's lib. All of these are in my opinion worthy alternatives to cryonics, but maybe not relevant for this particular discussion.

The remaining parameter is Unrealised Potential which I think of as (Intelligence * conscientiousness). I am brighter than most, but more lazy than many, so the result, if interpreted generously, is that I may be worth somewhat more than the median but certainly not by a factor of 10, so if we still go with the numbers above (even if Eliezer pointed out that they were crazy), my stance is still that cryonics is a poor investment. (It may be fun but not necessarily productive to come up with some better numbers.)
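
A toy rendering of that formula (all numbers are placeholders in arbitrary units, chosen only to show its shape):

    # A toy rendering of the value formula above; every number is a placeholder
    # in arbitrary units, chosen only to show the shape of the formula.
    def life_value(current_contribution, quality_of_life, unrealised_potential, remaining_years):
        return current_contribution + (quality_of_life + unrealised_potential) * remaining_years

    # For extended lifespans the first term is dwarfed by the rest, as noted above:
    print(life_value(current_contribution=1.0, quality_of_life=1.0,
                     unrealised_potential=1.2, remaining_years=1000))  # -> 2201.0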

Also: I have absolutely no problem accepting that other people have different algorithms and priors for determining value of life, I am just explaining mine.

Your other point was more of a surprise and I have spent a significant amount of time considering it and doing rudimentary research on the subject, because it seems like a very valid point. The main problem is that it does not seem that the total number of high quality life-years is limited by the carrying capacity of the planet, especially if we accept women's lib as a core requirement for attaining high quality.

Declining fertility rate seems to be extremely well correlated with higher quality of life, so once we sort out the poverty problem, the planet's population will decline. Singapore already has a fertility rate of less than 1 child per woman, and China's population is expected to peak in 2020.

In the short term however, the carrying capacity may very well be a limiting factor and may be worth increasing, also because a larger carrying capacity will indirectly help with sorting out the poverty problem, so it's a double win. In fact I am seriously considering moving some of my retirement savings from index funds to aquaculture, because that seems to be where the most low-hanging fruit is. Suggestions are welcome.

Again thanks for the pushback, having to actually think through my arguments is a new and welcome experience.

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2014-02-05T17:37:39.410Z · LW(p) · GW(p)

The main problem is that it does not seem that the total number of high quality life-years is limited by the carrying capacity of the planet

Sure. And it not seeming that way is a reason to lower our confidence that the hypothetical I described actually characterizes our values in the real world.

Which is no surprise, really.

comment by TheOtherDave · 2014-02-05T17:33:34.915Z · LW(p) · GW(p)

You're welcome.

You might also find that actually thinking through your arguments in the absence of pushback is a good habit to train.

For example, how did you arrive at your formula for the value of life? If someone were to push back on it for you, how would you support it? If you were going to challenge it, what would be its weakest spots?

comment by Fossegrimen · 2014-02-04T18:16:21.621Z · LW(p) · GW(p)

Actually, that last bit was an entirely new thought to me, thanks

comment by somervta · 2014-01-12T07:42:57.243Z · LW(p) · GW(p)

The obvious assumption to question is this:

Given that cryonics succeeds, is what you purchase really equal to what you purchase by saving yourself from a life-threatening disease? You say that you don't place an extremely high value on your own life, but is it the case that the extra life you purchase with cryonics (which takes place in the far future*, and is likely significantly longer) is worth more than the extra life you are purchasing in your visualization (likely near-future, maybe shorter [presumably 62 years?])? Relevant considerations:

The length difference depends on how optimistic you are about life extension, but I think it a pretty conservative assumption given what the success of cryonics would require; advanced nanotech and AI is standard.

Cryonics success implies a world where humans are able to do significantly more things than we are now - this usually equates to a significantly better world, although this depends on your predictions about the social state of revivees.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-12T10:00:26.176Z · LW(p) · GW(p)

Cryonics success implies a world where humans are able to do significantly more things than we are now

Not necessarily -- it's possible that an uFAI would revive cryo patients.

Replies from: None
comment by [deleted] · 2014-01-14T20:39:42.947Z · LW(p) · GW(p)

Why? Dead humans turn to paperclips much more easily than live ones, and the point in the design space where an AI wants to deliberately torture people or deliberately wirehead people is still much, much harder to hit than the point where it doesn't care at all.

Replies from: army1987
comment by A1987dM (army1987) · 2014-01-15T08:22:41.255Z · LW(p) · GW(p)

I'm thinking of cases where the programmers tried to write a FAI but they did something slightly wrong. I agree an AI created with no friendliness considerations in mind at all would be very unlikely to revive people.

Replies from: None
comment by [deleted] · 2014-01-15T09:20:12.513Z · LW(p) · GW(p)

I'm thinking of cases where the programmers tried to write a FAI but they did something slightly wrong.

I'm having trouble coming up with a realistic model of what that would look like. I'm also wondering why aspiring FAI designers didn't bother to test-run their utility function before actually "running" it in a real optimization process.

Replies from: Kaj_Sotala, Lumifer, MugaSofer
comment by Kaj_Sotala · 2014-01-15T16:08:52.974Z · LW(p) · GW(p)

Have you read Failed Utopia #4-2?

Replies from: None
comment by [deleted] · 2014-01-15T17:53:44.841Z · LW(p) · GW(p)

I have, but it's running with the dramatic-but-unrealistic "genie model" of AI, in which you could simply command the machine, "Be a Friendly AI!" or "Be the CEV of humanity!", and it would do it. In real life, verbal descriptions are mere shorthand for actual mental structures, and porting the necessary mental structures for even the slightest act of direct normativity over from one mind-architecture to another is (I believe) actually harder than just using some form of indirect normativity.

(That doesn't mean any form of indirect normativity will work rightly, but it does mean that Evil Genie AI is a generalization from fictional evidence.)

Hence my saying I have trouble coming up with a realistic model.

comment by Lumifer · 2014-01-15T15:39:18.874Z · LW(p) · GW(p)

I'm also wondering why aspiring FAI designers didn't bother to test-run their utility function before actually "running" it in a real optimization process.

Because if you don't construct a FAI but only construct a seed out of which a FAI will build itself, it's not obvious that you'll have the ability to do test runs.

Replies from: None
comment by [deleted] · 2014-01-15T17:55:16.389Z · LW(p) · GW(p)

Well, that sounds like a new area of AI safety engineering to explore, no? How to check your work before doing something potentially dangerous?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-01-16T06:10:18.997Z · LW(p) · GW(p)

I believe that is MIRI's stated purpose.

Replies from: None
comment by [deleted] · 2014-01-16T08:06:44.681Z · LW(p) · GW(p)

Quite so, which is why I support MIRI despite their marketing techniques being much too fearmongering-laden, in my opinion.

Even though I do understand why they are: Eliezer believes he was dangerously close to actually building an AI before he realized it would destroy the human race, back in the SIAI days. Fair enough on him, being afraid of what all the other People Like Eliezer might do, but without being able to see his AI designs from that period, there's really no way for the rest of us to judge whether it would have destroyed the human race or just gone kaput like so many other supposed AGI designs. Private experience, however, does not serve as persuasive marketing material.

comment by MugaSofer · 2014-01-15T09:50:25.674Z · LW(p) · GW(p)

Perhaps it had implications that only became clear to a superintelligence?

Replies from: None
comment by [deleted] · 2014-01-15T12:57:47.118Z · LW(p) · GW(p)

Hmmm... Upon thinking it over in my spare brain-cycles for a few hours, I'd say the most likely failure mode of an attempted FAI is to extrapolate from the wrong valuation machinery in humans. For instance, you could end up with a world full of things people want and like, but don't approve. You would thus end up having a lot of fun while simultaneously knowing that everything about it is all wrong and it's never, ever going to stop.

Of course, that's just one cell in a 2^3-cell grid, and that's assuming Yvain's model of human motivations is accurate enough that FAI designers actually tried to use it, and then hit a very wrong square out of 8 possible squares.

Within that model, I'd say "approving" is what we're calling the motivational system that imposes moral limits on our behavior, so I would say if you manage to combine wanting and/or liking with a definite +approving, you've got a solid shot at something people would consider moral. Ideally, I'd say Friendliness should shoot for +liking/+approving while letting wanting vary. That is, an AI should do things people both like and approve of without regard to whether those people would actually feel motivated enough to do them.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-16T06:32:09.494Z · LW(p) · GW(p)

You would thus end up having a lot of fun while simultaneously knowing that everything about it is all wrong and it's never, ever going to stop.

Are we totally sure this is not what utopia initially feels like from the inside? Because I have to say, that sentence sounded kinda attractive for a second.

Replies from: MugaSofer, None
comment by MugaSofer · 2014-01-16T23:45:58.679Z · LW(p) · GW(p)

What kinds of weirdtopias are you imagining that would fulfill those criteria?

Because the ones that first sprung to mind for me (this might make an interesting exercise for people, actually) were all emphatically, well, wrong. Bad. Unethical. Evil... could you give some examples?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-17T00:59:57.836Z · LW(p) · GW(p)

I of course don't speak for EY, but what I would mean if I made a similar comment would hinge on expecting my experience of "I know that everything about this is all wrong" to correlate with anything that's radically different from what I was expecting and am accustomed to, whether or not they are bad, unethical, or evil, and even if I would endorse it (on sufficient reflection) more than any alternatives.

Given that I expect my ideal utopia to be radically different from what I was expecting and am accustomed to (because, really, how likely is the opposite?), I should therefore expect to react that way to it initially.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-17T01:27:20.930Z · LW(p) · GW(p)

Although I don't usually include a description of the various models of the other speaker I'm juggling during conversation, that's my current best guess. However, principle of charity and so forth.

(Plus Eliezer is very good at coming up with weirdtopias - probably better than I am.)

comment by [deleted] · 2014-01-16T07:37:10.420Z · LW(p) · GW(p)

It's what an ill-designed "utopia" might feel like. Note the link to Yvain's posting: I'm referring to a "utopia" that basically consists of enforced heroin usage, or its equivalent. Surely you can come up with better things to do than that in five minutes' thinking.

comment by hyporational · 2014-01-12T09:21:44.251Z · LW(p) · GW(p)

I'd probably sign up if I were a US citizen. This makes me wonder if it's rational to stay in Finland. Has there been any fruitful discussion on this factor here before? Promoting cryonics in my home country doesn't seem like a great career move.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-12T09:59:52.521Z · LW(p) · GW(p)

Promoting cryonics in my home country doesn't seem like a great career move.

Try promoting rationality instead. If you succeed, then maybe someone else will take care of cryonics. And even if they don't, you still did something good.

Replies from: army1987, hyporational
comment by A1987dM (army1987) · 2014-01-12T14:28:30.515Z · LW(p) · GW(p)

Well, Finland already is the country with the most LWers per capita (as of the 2012 survey). :-)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-01-12T17:46:40.076Z · LW(p) · GW(p)

Now the question is whether having more LWers makes it easier or harder to recruit new ones.

If the model is "only certain % of population is the LW type", then it should be harder, because the low-hanging fruit is already picked. If the model is "rationality is a learned skill", then it should be easier, because the existing group can provide better support.

I already think Finland is a very smart country (school system, healthy lifestyle), so if it's the latter model, your local rationalist group should have a great chance to expand. It's probably important how many of the 15 Finnish LWers live near each other.

comment by hyporational · 2014-01-12T20:21:58.234Z · LW(p) · GW(p)

If CFAR becomes a success and Finland starts to develop its own branch, I'll probably donate some money, but working there myself would be like buying fuzzies in a soup kitchen with my inferior cooking skills. Some other kinds of relevant local movements might also get my vocal and monetary support in the future.

At this point marketing our brand of rationality to anyone I don't know seems like a risky bet. They might get exposed to the wrong kinds of material at the wrong time, and that wouldn't do my reputation any good.

comment by Douglas_Reay · 2014-01-19T01:56:22.219Z · LW(p) · GW(p)

For me, there's another factor: I have children.

I do value my own life. But I also value the lives of my children (and, by extension, their descendants).

So the calculation I look at is that I have $X, which I can either spend to obtain a particular chance of extending/improving my own life OR spend to obtain improvements in the lives of my children (by spending it on their education, passing it to them in my will, etc).

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-19T04:44:20.242Z · LW(p) · GW(p)

Excellent point. This isn't a consideration for me right now, but I expect it will be in the future.

comment by [deleted] · 2014-01-12T16:20:29.388Z · LW(p) · GW(p)

.

Replies from: Gunnar_Zarncke, Decius
comment by Gunnar_Zarncke · 2014-01-12T20:20:59.364Z · LW(p) · GW(p)

But the value of your life in comparison to other people's lives isn't changed by this. You'd have to inflation-adjust the value of other people's lives accordingly.

Only if you don't value other people's lives at all can you get away with this, but the OP made it sufficiently clear that this wasn't the case.

comment by Decius · 2014-01-12T17:59:51.755Z · LW(p) · GW(p)

It's reasonable to believe that the area under the curve with "QALYs of life" on the X axis and "Probability of having a life this good or better, given cryonics" on the Y axis is finite, even if there is no upper bound on lifespan. Given a chance of dying each year that has a lower bound, due to accident, murder, or existential hazard, I think it is provable that the total expected lifetime is finite.
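
A quick way to see the finiteness claim, assuming a lower bound p > 0 on the annual chance of death:

    % If each year carries death probability at least p > 0, the chance of
    % surviving t more years is at most (1-p)^t, so
    \mathbb{E}[\text{additional years}] \;\le\; \sum_{t=1}^{\infty} (1-p)^{t} \;=\; \frac{1-p}{p} \;<\; \infty
    % With bounded QALYs per year, the area under the QALY curve is finite as well.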

You make a good point that the expected lifetime of a successfully revived cryonicist might be more valuable than the life of someone who didn't sign up.

comment by christopherj · 2014-01-25T19:20:10.362Z · LW(p) · GW(p)

I suppose I belong to that group that would like to see more people signing up for cryonics but have not done so myself. For myself, I am young and expect to live quite a while longer. I expect the chance of dying without warning, in a way that would still allow cryopreservation, to be rather low, whereas if I had much warning I could decide then to be cryopreserved (so the loss is the chance of my losing consciousness and then dying in a hospital without regaining consciousness). I am not currently signed up for life insurance, which also means the costs of cryopreservation are higher for me. I don't doubt the current or future technology, but I do doubt the circumstances and politics (even if I'm successfully preserved, who will want to revive an obsolete person via an expensive procedure into an overpopulated world?). My current lack of significant accomplishment both increases the relative cost to me of cryopreservation, and decreases the chances people would choose to revive me.

Current spending on cryonics also presents an opportunity cost. There is a possibility that death and aging are solved in my lifetime, but that might require money. There is a possibility that with more money I could later increase my probability of cryopreservation and revival. The ideal time to get cryopreserved would be now, via cryosuicide, so I could maximize my odds of successful preservation and minimize my loss of mental power due to aging; yet this seems like a terrible idea.

As for worries about spending on myself instead of on altruism, that does not bother me. Like pretty much everyone, I value my comfort and future and present life more than that of others. I feel it is right and proper to focus my spending on myself and those close to me genetically, culturally, spatially, morally, etc., as well as those with more resources and skills, over random strangers. Those who feel tempted to donate all they have, reducing themselves to the level of the poorest people in third-world countries, seem short-sighted and in particular are undervaluing their position in society and the indirect contribution their career makes.

comment by [deleted] · 2014-01-14T17:57:11.203Z · LW(p) · GW(p)

(The following assumes that you don't actually want to die. My honest assessment is that I think you might want to die. I don't believe there's anything actually wrong with just living out your life as it comes and then dying, even if living longer might be nicer, and particularly when living longer might totally suck. So don't assume I'm passing judgement on a particular decision to sign up or not; in fact, the thought that life might suck forever drives me damn near favoring suicide myself.)

Let's tackle the question from another angle.

I do not believe cryonics will work. I have seen the probability calculations, and I prefer to call them plausibility calculations: cryonics has never been seen to work before, even in animal models, so I can't assign a strictly evidential probability to its working on me or anyone else. Since I don't have good priors or astronomical sums of mental computing power, I can't use pure Bayesianism, personally.

So, let's say I consider myself to have valid reason to believe cryonics simply doesn't work. Everyone cryo-preserved right now is dead, and is never coming back.

How do you fix that? That is, if you really believe cryonics can work, or probably does work, then who is performing the research to improve preservation techniques so that future cryopreservation procedures will be known to work? Who is doing things like freezing animal specimens and then attempting to resuscitate them? And more importantly: who is working to ensure that everyone successfully cryopreserved has a good life to come back to when you do resuscitate them (the answers "MIRI" and "FHI" will not be accepted, on grounds that I already support them)?

comment by hyporational · 2014-01-12T09:03:44.193Z · LW(p) · GW(p)

I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.

While I'm definitely desensitized to the suffering of others, seeing dead and dying people has made my own mortality all the more palpable. Constantly seeing sick people has also made scenarios of personal disability more available, which generally makes me avoid bad health choices out of fear. End-of-life care where I'm from is in abysmal condition and I don't ever want to experience it myself. I fear it far more than death itself.

comment by kilobug · 2014-01-13T15:44:07.443Z · LW(p) · GW(p)

I'm also on the fence and wondering if cryonics is worth it (especially since I'm in France, where there is no real option for it, so in addition to the costs it would likely mean changing countries), but I think there are two flaws in your (otherwise interesting) reasoning:

It's neutral from a point of pleasure vs suffering for the dead person

It forgets opportunity costs. Dying deprives the person of all the future experiences (s)he could have had, and so of a huge amount of pleasure (and potentially suffering too).

So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value.

Same as above: it depends a lot on the expected pleasure vs suffering of your remaining life. The life of a horribly suffering last-stage cancer patient, or of a last-stage Alzheimer's patient, has much less value than the life of someone expecting decades of healthy life. Cryonics believers think that if it works, they'll get "eternal" life, or at least thousands and thousands of years of healthy life. That gives life a much higher value than the one it has in our current world (forgetting cryonics).

Replies from: byrnema
comment by byrnema · 2014-01-13T17:28:41.014Z · LW(p) · GW(p)

It's neutral from a point of pleasure vs suffering for the dead person

It forgets opportunity costs. Dying deprives the person of all the future experiences (s)he could have had, and so of a huge amount of pleasure (and potentially suffering too).

I feel like being revived in the future would be a new project I am not yet emotionally committed to.

I think I would be / will be very motivated to extend my life, but when it comes to expending effort to "come back", I realize I feel some relief with just letting my identity go.

The main reason behind this is that what gives my life value are my social connections; without them I am just another 'I', no different than any other. It seems just as good for there to be another, independent birth as for me to be revived. One reason I feel this way is from reading books -- being the 'I' in the story always feels the same.

This would all of course change if my family was signing up.

Replies from: Richard_Kennaway, kilobug
comment by Richard_Kennaway · 2014-01-15T12:52:43.875Z · LW(p) · GW(p)

The main reason behind this is that what gives my life value are my social connections; without them I am just another 'I', no different than any other.

Suppose that due to political upheavals you suddenly had to emigrate on your own. If you stay you will die, and if you leave you will lose your connections. Would you not leave, with regret certainly, but make new connections in your new home? In the present day world, many people have to do this.

Cryonics is like emigration. You leave this time and place because otherwise you die, get into a flimsy boat that may well sink on the trip, and possibly emerge into a new land of which you know nothing. To some it is even a desirable adventure.

Replies from: byrnema
comment by byrnema · 2014-01-16T00:53:18.665Z · LW(p) · GW(p)

Hmm...I wonder to what extent having emigrated a relative 'lot' has shaped my ideas about identity. Especially when I was younger, I did not feel like my identity was very robust to abrupt and discordant changes, usually geographic, and just accepted that different parts of my life felt different.

I did enjoy change, exactly as an adventure, and I have no wish to end experience.

However, with a change as discontinuous as cryonics (over time and social networks), I find that I'm not attached to particular components of my identity (such as gender and profession and enjoying blogging on Less Wrong, etc) and in the end, there's not much left save the universal feeling of experience -- the sense of identity captured by any really good book, the feeling of a voice and a sympathetic perception.

To illustrate, I would be exceptionally interested in a really realistic book about someone being resuscitated from cryonics (I find books more immersive than movies), but I wouldn't feel that 'I' needed to be the main character of that book, and I would be very excited to discover that my recent experience as a human in the 21st century has been a simulation, preparing me in some way for revival tomorrow morning in a brave new world...as a former Czech businessman.

comment by kilobug · 2014-01-15T10:01:38.290Z · LW(p) · GW(p)

The main reason behind this is that what gives my life value are my social connections; without them I am just another 'I', no different than any other.

I think you're going too far when saying it's "no different than any other", but I agree with the core idea - being revived without any of my social connections in an alien world would indeed significantly change "who I am". And it's one of the main reasons why, while I do see some attraction in cryonics, I haven't made any serious move in that direction. It would be all different if a significant part of my family or close friends signed up too.

Replies from: byrnema
comment by byrnema · 2014-01-15T22:51:28.011Z · LW(p) · GW(p)

I think you're going too far when saying it's "no different than any other", but I agree with the core idea - being revived without any of my social connections in an alien world would indeed significantly change "who I am".

Hmm..actually, you have a different point of view.

I feel like I would have the same identity even without my social connections; I would have the specific identity that I currently have if I was revived.

My point was more along the lines that it wouldn't matter which identity I happened to have -- mine or someone else's.

Consider that you have a choice whether to be revived as a particular Czech businessman or as a particular medical doctor from Ohio (assuming, for the hypothetical, that there was some coherent way to map these identities to 'you'). How would you pick?

Maybe you would pick based on the values of your current identity, kilobug. However, that seems rather arbitrary, as these aren't exactly the values of either the Czech businessman or the doctor from Ohio. I imagine either one of them would be happy with being themselves.

Now throw your actual identity in the mix, so that you get to pick from the three. I feel that many people examine their intuition and feel they would prefer that they themselves are picked. However, I examine my intuition and I find I don't care. Is this really so strange?

Replies from: byrnema
comment by byrnema · 2014-01-15T22:55:39.845Z · LW(p) · GW(p)

But I wanted to add ... if the daughter of the person from Ohio is also cryopreserved and revived (somewhat randomly, I based my identities on the 118th and 88th patients at Alcor, though I don't know what their professions were, and the 88th patient did have a daughter), I very much hope that the mother-daughter pair may be revived together. It would, I think, be a lot of fun to wake up together and find out what the new world is like.

comment by Jiro · 2014-01-13T02:29:44.156Z · LW(p) · GW(p)

Just like future cryonics research might be able to revive someone who was frozen now, perhaps future time travellers could revive people simply by rescuing them from before their death. Of course, time travellers can't revive people regardless of the circumstances of their death. Someone who dies in a hospital and has had an autopsy couldn't be rescued without changing the past.

Therefore, we should start a movement where dying people make sure that they die inside hermetically sealed capsules placed in a vault which is rarely opened. If time travel is ever invented, time travellers could travel to a time and place inside the vault right after a dying person has been deposited, rescue the dying person, and take them to the future to cure their disease. Because the vault is closed, nobody can see them appear, and because the capsules are sealed, nobody from our time can tell that the contents of the capsule have been replaced. This would allow the time travellers to rescue such people without changing the past. It would even beat out cryonics, since the capsules could be destroyed after enough time has passed for the patient to either die or be rescued--there's no need for permanent maintenance. (But don't ever destroy them in such a way that lets you tell whether there was a body inside--if you do that, the person can no longer be revived without changing the past.)

This method of revival is not only cheaper than cryonics, it's also incompatible with cryonics since if you allow someone's body to be frozen as they die, you're watching the death, and time travellers can't rescue them from before their death without changing the past--this method requires that they die out of sight of everyone.

In order to work, this should be accompanied by spending money on time travel research.

Replies from: Calvin, MugaSofer
comment by Calvin · 2014-01-13T03:59:04.899Z · LW(p) · GW(p)

Would you like to live forever?

For just a $50 monthly fee, agents of the Time Patrol Institute promise to travel back in time and extract your body a few milliseconds before death. In order to avoid causing temporal "paradoxes", we pledge to replace your body with an (almost) identical, artificially constructed clone. After your body is extracted and moved to the closest non-paradoxical future date, we will reverse the damage caused by aging, increase your lifespan to infinity, and treat you to a cup of coffee.

While we are fully aware that time travel is not yet possible, we believe that recent advances in nanotechnology and quantum physics, matched by your generous donations, will hopefully allow us to construct a working time machine at any point in the future.

Why not Cryonics?

For all effective altruists in the audience, please consider that the utility of immortalizing all of humankind is preferable to that of saving only those few of us who underwent cryonic procedures. If you don't sign your parents up for temporal rescue, you are a lousy son. People who tell you otherwise are simply voicing their internalized deathist-presentist prejudices.

For selfish, practically minded agents living in 2014, please consider that while, in order for you to benefit from cryonics, it is mandatory that correct brain preservation techniques are developed and popularized during your lifespan, time travel can be developed at any point in the future; there is no need to hurry.

Regards, Blaise Pascal, CEO of TPI

Replies from: Lumifer
comment by Lumifer · 2014-01-13T04:07:06.553Z · LW(p) · GW(p)

and treat you to a cup of coffee.

What?!!? Not tea? I am unwilling to be reborn into such a barbaric environment.

If you don't sign your parents up for temporal rescue, you are a lousy son.

Wouldn't it be simpler to convert to Mormonism? :-D

comment by MugaSofer · 2014-01-14T12:18:59.342Z · LW(p) · GW(p)

Someone who dies in a hospital and has had an autopsy couldn't be rescued without changing the past.

Why not? Just replace them with a remote-controlled clone, or upload them from afar (using magic, obviously), or rewrite everyone's memories of them ...

Replies from: Jiro
comment by Jiro · 2014-01-14T18:07:29.700Z · LW(p) · GW(p)

Putting a body in a sealed capsule in a vault requires no magical technology (not counting the time travel itself). Someone who has time travel but otherwise our present level of technology could rescue someone who is allowed to die using the vault method. (Although hopefully they do have better medical technology or rescuing the dying person wouldn't do much good.)

It's true, of course, that if the time travellers have other forms of advanced technology they might be able to rescue a wider range of people, but the safest way to go is to use my method. Note that time travel interacts with cryonics in the same way: perhaps you don't need to freeze someone because a future person could time-travel and upload them from afar. Besides, why would you take the risk of the time travellers not being able to do this, considering that being put in a capsule for a few days is pretty cheap? You're a lousy parent if you don't sign your kids up for it....

Replies from: MugaSofer, TheOtherDave
comment by MugaSofer · 2014-01-14T19:26:48.165Z · LW(p) · GW(p)

Well, obviously here you run into your priors as to what sort of tech level is implied by time travel - much as discussions of cryonics run into the different intuitions people have about what a future where we can bring back the frozen will look like.

("The only economic incentive for such an expensive project - except perhaps for a few lucky individuals who would be used as entertainment, like animals in a zoo - would be to use them as an enslaved labor force.")

With that said ...

... once time travel has come into play, surely all that matters is whether the magical technology in question will eventually be developed?

Replies from: Jiro
comment by Jiro · 2014-01-15T00:32:49.855Z · LW(p) · GW(p)

once time travel has come into play, surely all that matters is whether the magical technology in question will eventually be developed?

Not only do you need to have time travellers, you need to have time travellers who are interested in reviving you. The farther in the future you get the less the chance that any time travellers would want to revive you (although they might always want someone for historical interest so the chance might never go down to zero.) The more advanced the technology required, the longer it'll be and the less the chance they'll want to bother.

Perhaps you could go the cryonics-like route and have a foundation set up whose express purpose is to revive people some time in the future in exchange for payments now. While unlike cryonics there is no ongoing cost to keep you in a position where you can be saved, the cost to keep someone wanting to save you is still ongoing. This would still be subject to the same objections used for cryonics foundations. Of course, like cryonics, you can always hope that someone creates a friendly AI who wants to save as many people as it can.

There's also the possibility that some technology simply will not be developed. Perhaps there are some fundamental quantum limits that prevent getting an accurate remote scan of you. Perhaps civilization has a 50% chance of dying out before they invent the magical technology.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-15T09:30:15.961Z · LW(p) · GW(p)

Not only do you need to have time travellers, you need to have time travellers who are interested in reviving you. The farther in the future you get the less the chance that any time travellers would want to revive you (although they might always want someone for historical interest so the chance might never go down to zero.) The more advanced the technology required, the longer it'll be and the less the chance they'll want to bother.

Like I said, I guess this comes down to how you imagine such a future looking beyond "has time travel". I tend to assume some sort of post-scarcity omnibenevolent utopia, myself ...

There's also the possibility that some technology simply will not be developed. Perhaps there are some fundamental quantum limits that prevent getting an accurate remote scan of you. Perhaps civilization has a 50% chance of dying out before they invent the magical technology.

Before they invent any magical technology, you mean. There's more than one conceivable approach to such a last-second rescue.

comment by TheOtherDave · 2014-01-14T19:45:43.155Z · LW(p) · GW(p)

What is your estimate of the ratio of the probability of my being "rescued" given a sealed capsule, to that of my being rescued absent a sealed capsule?

Replies from: Jiro
comment by Jiro · 2014-01-15T00:22:00.240Z · LW(p) · GW(p)

I have no idea. I'm sure I could come up with an estimate in a similar manner to how people make estimates for cryonics, though.

comment by Gunnar_Zarncke · 2014-01-12T21:01:11.537Z · LW(p) · GW(p)

Some context googled together from earlier LW posts about this topic:

From ChrisHallquist's recent $500 thread, a comment that takes the outside view and comes to a devastating conclusion: http://lesswrong.com/r/discussion/lw/jgu/i_will_pay_500_to_anyone_who_can_convince_me_to/acd5

In the discussion of a relevant blog post we have this critical comment: http://lesswrong.com/user/V_V/overview/

In the Neil deGrasse Tyson on Cryonics post, a real neuroscientist gave his very negative input: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryonics/6krm

As a general note I'd urge you to question the probabilities you took from Will Cryonics Work as those numbers seem to

  • lack references (to me they look made up, esp. because they are not qualified by e.g. a time until revival) and

  • they are from a pro-cryonics site.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-13T01:54:54.549Z · LW(p) · GW(p)

As a general note I'd urge you to question the probabilities you took from Will Cryonics Work as those numbers seem to lack references (to me they look made up, esp. because they are not qualified by e.g. a time until revival) and they are from a pro-cryonics site.

Yes. My calculations are lazy. I cobbled together the ideas of this post in a conversation that took place when I was supposed to be sleeping, and when I wrote it a few days later, it was by carving 2 hours out after my bedtime. Which won't be happening again tonight because I can only work so many 12 hour shifts on five hours of sleep a night. The alternative wasn't doing better calculations; it was not doing any calculations at all and sticking with my bottom line that cryonics doesn't feel like something I want to do, just because.

Also: the reason I posted this publicly almost as soon as I had the thought of writing it at all was to get feedback. So thank you. I will hopefully read through all the feedback and take it into account the next time I would rather do that than sleep.

comment by James_Miller · 2014-01-12T06:41:11.000Z · LW(p) · GW(p)

The possibility of a friendly ultra-AI greatly raises the expected value of cryonics. Such an AI would likely create a utopia that you would very much want to live in. Also, this possibility reduces the time interval before you would be brought back, and so makes it less likely that your brain would be destroyed before cryonics revival becomes possible. If you believe in the likelihood of a singularity by, say, 2100 then you can't trust calculations of the success of cryonics that don't factor in the singularity.

Replies from: Benito, V_V, hyporational
comment by Ben Pace (Benito) · 2014-01-12T08:44:04.743Z · LW(p) · GW(p)

Which causes me to think of another argument: if you attach a high probability to an Ultra-AI which doesn't quite have a perfectly aligned utility function, do you want to be brought back into a world which has or could have an UFAI?

Replies from: James_Miller, None
comment by James_Miller · 2014-01-12T17:26:44.044Z · LW(p) · GW(p)

Because there is a limited amount of free energy in the universe, unless the AI's goals incorporated your utility function, it wouldn't bring you back and indeed would use the atoms in your body to further whatever goal it had. With very high probability, the only way we get an UFAI that would (1) bring you back and (2) make our lives have less value than they do today is if evil humans deliberately put a huge amount of effort into making their AI unfriendly, programming in torturing humans as a terminal value.

Replies from: ialdabaoth, Decius
comment by ialdabaoth · 2014-01-12T18:13:51.426Z · LW(p) · GW(p)

Alternate scenario 1: AI wants to find out something that only human beings from a particular era would know, brings them back as simulations as a side-effect of the process it uses to extract their memories, and then doesn't particularly care about giving them a pleasant environment to exist in.

Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Replies from: None, James_Miller
comment by [deleted] · 2014-01-14T20:48:03.218Z · LW(p) · GW(p)

Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Good news: this one's remarkably unlikely, since almost all existing Friendly AI approaches are indirect ("look at some samples of real humans and optimize for the output of some formally-specified epistemic procedure for determining their values") rather than direct ("choirs of angels sing to the Throne of God").

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T20:53:10.017Z · LW(p) · GW(p)

Not sure how that helps. Would you prefer scenario 2b, with "[..] because its formally-specified epistemic procedure for determining the values of its samples of real humans results in a concept of value-maximization that has a hideously unfortunate implication."?

Replies from: None
comment by [deleted] · 2014-01-14T21:06:42.342Z · LW(p) · GW(p)

You're saying that enacting the endorsed values of real people taken at reflective equilibrium has an unfortunate implication? To whom? Surely not to the people whose values you're enacting. Which does leave population-ethics a biiiiig open question for FAI development, but it at least means the people whose values you feed to the Seed AI get what they want.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T21:15:08.057Z · LW(p) · GW(p)

No, I'm saying that (in scenario 2b) enacting the result of a formally-specified epistemic procedure has an unfortunate implication. Unfortunate to everyone, including the people who were used as the sample against which that procedure ran.

Replies from: None
comment by [deleted] · 2014-01-14T22:24:14.447Z · LW(p) · GW(p)

Why? The whole point of a formally-specified epistemic procedure is that, with respect to the people taken as samples, it is right by definition.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T22:41:25.740Z · LW(p) · GW(p)

Wonderful. Then the unfortunate implication will be right, by definition.

So what?

Replies from: None
comment by [deleted] · 2014-01-14T23:00:53.145Z · LW(p) · GW(p)

I'm not sure what the communication failure here is. The whole point is to construct algorithms that extrapolate the value-set of the input people. By doing so, you thus extrapolate a moral code that the input people can definitely endorse, hence the phrase "right by definition". So where is the unfortunate implication coming from?

Replies from: VAuroch, TheOtherDave
comment by VAuroch · 2014-01-15T01:14:45.980Z · LW(p) · GW(p)

A third-party guess: It's coming from a flaw in the formal specification of the epistemic procedure. That it is formally specified is not a guarantee that it is the specification we would want. It could rest on a faulty assumption, or take a step that appears justified but in actuality is slightly wrong.

Basically, formal specification is a good idea, but not a get-out-of-trouble-free card.

Replies from: None, TheOtherDave, None
comment by [deleted] · 2014-01-15T06:11:45.911Z · LW(p) · GW(p)

Replying elsewhere. Suffice to say, nobody would call it a "get out of trouble free" card. More like, get out of trouble after decades of prerequisite hard work, which is precisely why various forms of the hard work are being done now, decades before any kind of AGI is invented, let alone foom-flavored ultra-AI.


comment by TheOtherDave · 2014-01-15T01:18:26.817Z · LW(p) · GW(p)

I have no idea if this is the communication failure, but I certainly would agree with this comment.

comment by [deleted] · 2014-01-15T05:49:18.194Z · LW(p) · GW(p)

Thanks!

comment by TheOtherDave · 2014-01-15T01:14:08.378Z · LW(p) · GW(p)

I'm not sure either. Let me back up a little... from my perspective, the exchange looks something like this:

ialdabaoth: what if failed FAI is incorrectly implemented and fucks things up?
eli_sennesh: that won't happen, because the way we produce FAI will involve an algorithm that looks at human brains and reverse-engineers their values, which then get implemented.
theOtherDave: just because the target specification is being produced by an algorithm doesn't mean its results won't fuck things up
e_s: yes it does, because the algorithm is a formally-specified epistemic procedure, which means its results are right by definition.
tOD: wtf?

So perhaps the problem is that I simply don't understand why it is that a formally-specified epistemic procedure running on my brain to extract the target specification for a powerful optimization process should be guaranteed not to fuck things up.

Replies from: None
comment by [deleted] · 2014-01-15T06:08:03.438Z · LW(p) · GW(p)

Ah, ok. I'm going to have to double-reply here, and my answer should be taken as a personal perspective. This is actually an issue I've been thinking about and conversing over with an FHI guy; I'd like to hear any thoughts someone might have.

Basically, we want to extract a coherent set of terminal goals from human beings. So far, the approach to this problem is from two angles:

1) Neuroscience/neuroethics/neuroeconomics: look at how the human brain actually makes choices, and attempt to describe where and how in the brain terminal values are rooted. See: Paul Christiano's "indirect normativity" write-up.

2) Pure ethics: there are lots of impulses in the brain that feed into choice, so instead of just picking one of those, let's sit down and do the moral philosophy on how to "think out" our terminal values. See: CEV, "reflective equilibrium", "what we want to want", concepts like that.

My personal opinion is that we also need to add:

3) Population ethics: given the ability to extract values from one human, we now need to sample lots of humans and come up with an ethically sound way of combining the resulting goal functions ("where our wishes cohere rather than interfere", blah blah blah) to make an optimization metric that works for everyone, even if it's not quite maximally perfect for every single individual (that is, Shlomo might prefer everyone be Jewish, Abed might prefer everyone be Muslim, John likes being secular just fine, the combined and extrapolated goal function doesn't perform mandatory religious conversions on anyone).

Now! Here's where we get to the part where we avoid fucking things up! At least in my opinion, and as a proposal I've put forth myself, if we really have an accurate model of human morality, then we should be able to implement the value-extraction process on some experimental subjects, predictively generate a course of action through our model behind closed doors, run an experiment on serious moral decision-making, and then find afterwards that (without having seen the generated proposals before) our subjects' real decisions either matched the predicted ones, or our subjects endorse the predicted ones.

That is, ideally, we should be able to test our notion of how to epistemically describe morality before we ever make that epistemic procedure or its outputs the goal metric for a Really Powerful Optimization Process. Short of things like bugs in the code or cosmic rays, we would thus (assuming we have time to carry out all the research before $YOUR_GEOPOLITICAL_ENEMY unleashes a paper-clipper For the Evulz) have a good idea what's going to happen before we take a serious risk.
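
To make that test concrete, here is a minimal sketch of the validation loop, assuming a hypothetical value_model with a predict method and subjects who can both decide a dilemma and say whether they endorse a proposed decision; none of these names come from any existing system:

```python
# Hypothetical sketch only: score a value-extraction model against held-out
# human judgments. The model, dilemmas, and subject interface are placeholders.

def validate_value_model(value_model, dilemmas, subjects):
    """Return the fraction of cases where a subject either made the predicted
    choice themselves or endorsed it when shown it afterwards."""
    agreements = 0
    total = 0
    for dilemma in dilemmas:
        predicted = value_model.predict(dilemma)             # generated "behind closed doors"
        for subject in subjects:
            actual = subject.decide(dilemma)                 # the subject's real decision
            endorsed = subject.endorses(predicted, dilemma)  # shown only afterwards
            agreements += int(actual == predicted or endorsed)
            total += 1
    return agreements / total if total else 0.0
```

Only if agreement stayed high across repeated runs and fresh samples would the extracted values be worth considering as an optimization target.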

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T15:59:50.553Z · LW(p) · GW(p)

So, if I've understood your proposal, we could summarize it as:
Step 1: we run the value-extractor (seed AI, whatever) on group G and get V.
Step 2: we run a simulation of using V as the target for our optimizer.
Step 3: we show the detailed log of that simulation to G, and/or we ask G various questions about their preferences and see whether their answers match the simulation.
Step 4: based on the results of step 3, we decide whether to actually run our optimizer on V.

Have I basically understood you?

If so, I have two points, one simple and boring, one more complicated and interesting.

The simple one is that this process depends critically on our simulation mechanism being reliable. If there's a design flaw in the simulator such that the simulation is wonderful but the actual result of running our optimizer is awful, the result of this process is that we endorse a wonderful world and create a completely different awful world and say "oops."

So I still don't see how this avoids the possibility of unfortunate implications. More generally, I don't think anything we can do will avoid that possibility. We simply have to accept that we might get it wrong, and do it anyway, because the probability of disaster if we don't do it is even higher.

The more interesting one... well, let's assume that we do steps 1-3.
Step 4 is where I get lost. I've been stuck on this point for years.

I see step 4 going like this:

  • Some members of G (G1) say "Hey, awesome, sign me up!"
  • Other members of G (G2) say "I guess? I mean, I kind of thought there would be more $currently_held_sacred_value, but if your computer says this is what I actually want, well, who am I to argue with a computer?"
  • G3 says "You know, that's not bad, but what would make it even better is if the bikeshed were painted yellow."
  • G4 says "Wait, what? You're telling me that my values, extrapolated and integrated with everyone else's and implemented in the actual world, look like that?!? But... but... that's awful! I mean, that world doesn't have any $currently_held_sacred_value! No, I can't accept that."
  • G5 says "Yeah, whatever. When's lunch?" ...and so on.

Then we stare at all that and pull out our hair. Is that a successful test? Who knows? What were we expecting, anyway... that all of G would be in G1? Why would we expect that? Even if V is perfectly correct... why would we expect mere humans to reliably endorse it?

Similarly, if we ask G a bunch of questions to elicit their revealed preferences/decisions and compare those to the results of the simulation, I expect that we'll find conflicting answers. Some things match up, others don't, some things depend on who we ask or how we ask them or whether they've eaten lunch recently.

Actually, I think the real situation is even more extreme than that. This whole enterprise depends on the idea that we have actual values... the so-called "terminal" ones, which we mostly aren't aware of right now, but are what we would want if we "learned together and grew together and yadda yadda"... which are more mutually reconcilable than the surface values that we claim to want or think we want (e.g., "everyone in the world embraces Jesus Christ in their hearts," "everybody suffers as I've suffered," "I rule the world!").

If that's true, it seems to me we should expect the result of a superhuman intelligence optimizing the world for our terminal values and ignoring our surface values to seem alien and incomprehensible and probably kind of disgusting to the people we are right now.

And at that point we have to ask, what do we trust more? Our own brains, which say "BOO!", or the value-extractor/optimizer/simulator we've built, which says "no, really, it turns out this is what you actually want; trust me."?

If the answer to that is not immediately "we trust the software far more than our own fallible brains" we have clearly done something wrong.

But... in that case, why bother with the simulator at all? Just implement V, never mind what we think about it. Our thoughts are merely the reports of an obsolete platform; we have better tools now.

This is a special case of a more general principle: when we build tools that are more reliable than our own brains, and our brains disagree with the tools, we should ignore our own brains and obey the tools. Once a self-driving car is good enough, allowing human drivers to override it is at best unnecessary and at worst stupidly dangerous.

Similarly... this whole enterprise depends on building a machine that's better at knowing what I really want than I am. Once we've built the machine, asking me what I want is at best unnecessary and at worst stupidly dangerous.

Replies from: None
comment by [deleted] · 2014-01-15T18:35:31.899Z · LW(p) · GW(p)

The simple one is that this process depends critically on our simulation mechanism being reliable. If there's a design flaw in the simulator such that the simulation is wonderful but the actual result of running our optimizer is awful, the result of this process is that we endorse a wonderful world and create a completely different awful world and say "oops."

The idea is not to run a simulation of a tiny little universe, merely a simulation of a few people's moral decision processes. Basically, run a program that prints out what our proposed FAI would have done given some situations, show that to our sample people, and check if they actually endorse the proposed course of action.

(There's another related proposal for getting Friendly AI called value learning, which I've been scrawling notes on today. Basically, the idea is that the AI will keep a pool of possible utility functions (which are consistent, VNM-rational utility functions by construction), and we'll use some evidence about humans to rate the probability that a given utility function is Friendly. Depending on the details of this whole process and the math actually working out, you would get a learning agent that steadily refines its utility function to be more and more one that humans can endorse.)
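
A toy sketch of the update step under such a value-learning scheme, with the pool, the likelihood model, and all names purely illustrative rather than drawn from any actual implementation:

```python
# Toy Bayesian value learning: keep a probability distribution over candidate
# utility functions and update it on observed human choices. Illustrative only.

def update_pool(pool, observed_choice, options, likelihood):
    """pool: dict mapping a candidate utility function -> its current probability.
    likelihood(choice, options, u): probability that a human whose true utility
    function is u picks `choice` from `options` (e.g. a noisy-rational model)."""
    posterior = {u: p * likelihood(observed_choice, options, u)
                 for u, p in pool.items()}
    total = sum(posterior.values())
    if total == 0:
        return pool  # degenerate case: the evidence ruled out every candidate
    return {u: p / total for u, p in posterior.items()}
```

The agent's effective utility function is then the probability-weighted mixture of the pool, and it sharpens as evidence about human choices accumulates.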

The more interesting one... well, let's assume that we do steps 1-3. Step 4 is where I get lost. I've been stuck on this point for years.

I see step 4 going like this:

  • Some members of G (G1) say "Hey, awesome, sign me up!"
  • Other members of G (G2) say "I guess? I mean, I kind of thought there would be more $currently_held_sacred_value, but if your computer says this is what I actually want, well, who am I to argue with a computer?"
  • G3 says "You know, that's not bad, but what would make it even better is if the bikeshed were painted yellow."
  • G4 says "Wait, what? You're telling me that my values, extrapolated and integrated with everyone else's and implemented in the actual world, look like that?!? But... but... that's awful! I mean, that world doesn't have any $currently_held_sacred_value! No, I can't accept that."
  • G5 says "Yeah, whatever. When's lunch?" ...and so on.

This is why I did actually say that population ethics is a wide-open problem in machine ethics. Meaning, yes, the population has broken into political factions. Humans have a noted tendency to do that.

Now, the whole point of Coherent Extrapolated Volition on a population-ethics level was to employ a fairly simple population-ethical heuristic: "where our wishes cohere rather than interfere". Which, it seems to me, means: if people's wishes run against each other, do nothing at all; do something only if there exists unanimous/near-unanimous/supermajority agreement. It's very democratic, in its way, but it will probably also end up implementing only the lowest common denominator. The result I expect to see from a naive all-humanity CEV with that population-ethic is something along the lines of, "People's health is vastly improved, mortality becomes optional, food ripens more easily and is tastier, and everyone gets a house. You humans couldn't actually agree on much more."

Which is all pretty well and good, but it's not much more than we could have gotten without going to the trouble of building a Friendly Artificial Intelligence!
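
To make the "cohere rather than interfere" heuristic above concrete, here is a crude sketch; the supermajority threshold and the data format are purely illustrative assumptions:

```python
# Crude "cohere rather than interfere" aggregator: keep only the proposals that
# a supermajority of the sampled (extrapolated) people endorse; drop the rest.

def coherent_proposals(endorsements, threshold=0.9):
    """endorsements: dict mapping a proposal -> list of booleans, one per sampled
    person (True if that person's extrapolated values endorse the proposal).
    Returns only the proposals that clear the supermajority threshold."""
    return [proposal for proposal, votes in endorsements.items()
            if votes and sum(votes) / len(votes) >= threshold]
```

With anything like real human diversity, the surviving set looks like the lowest-common-denominator outcome described above.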

Population ethics will also be quite a problem, I would venture to guess, because our tiny little ape-brains don't have any hardware or instincts for large-scale population ethics. Our default moral machinery, when we are born, is configured to be loyal to our tribe, to help our friends and family, and to kill the shit out of the other tribe.

It will take some solid new intellectual developments in ethics to actually come up with some real way of dealing with the problem you've stated. I mean, sure, you could just go to the other extreme and say an FAI should shoot for population-ethical Pareto optimality, but that requires defining population-ethical Pareto optimality. For instance, is a person with many friends and loved ones worth more to your notion of Pareto-optimality than a hermit with no close associates, because insufficiently catering to the first person's wishes also hurts their friends and loved ones, while nobody is hurt by the hermit's Pareto-optimal misery?

What I can say, at the very least, is that prospective AI designers and an FAI itself should listen hard to groups G4 and G2 and figure out, ideally, how to actually fix the goal function so that they shift over into one of the actively assenting groups, like G1. We should assume endorsement of FAI is like consent to sex: active enthusiasm should be the minimum standard of endorsement, with quiet acquiescence a strong sign of "NO! STOP!".

Actually, I think the real situation is even more extreme than that. This whole enterprise depends on the idea that we have actual values... the so-called "terminal" ones, which we mostly aren't aware of right now, but are what we would want if we "learned together and grew together and yadda yadda"... which are more mutually reconcilable than the surface values that we claim to want or think we want (e.g., "everyone in the world embraces Jesus Christ in their hearts," "everybody suffers as I've suffered," "I rule the world!").

Reality is what it is. We certainly can't build a real FAI by slicing a goat's neck within the bounds of a pentagram at 1am on the Temple Mount in Jerusalem chanting ancient Canaanite prayers that translate as "Come forth and optimize for our values!". A prospective FAI-maker would have to actually know and understand moral psychology and neuro-ethics on a scientific level, which would give them a damn good idea of what sort of preference function they're extracting.

(By the way, the whole point of reflective equilibrium is to wash out really stupid ideas like "everyone suffers as I've suffered", which has never actually done anything for anyone.)

If that's true, it seems to me we should expect the result of a superhuman intelligence optimizing the world for our terminal values and ignoring our surface values to seem alien and incomprehensible and probably kind of disgusting to the people we are right now.

Supposedly, yeah. I would certainly hope that a real FAI would understand about how people prefer gradual transitions and not completely overthrow everything all at once to any degree greater than strictly necessary.

And at that point we have to ask, what do we trust more? Our own brains, which say "BOO!", or the value-extractor/optimizer/simulator we've built, which says "no, really, it turns out this is what you actually want; trust me."?

Well, why are you proposing to trust the program's output? Check with the program's creators, check the program's construction, and check its input. It's not an eldritch tome of lore, it's a series of scientific procedures. The whole great, awesome thing about science is that results are checkable, in principle, by anyone with the resources to do so.

Run the extraction process over again! Do repeated experiments, possibly even on separate samples, yield similar answers? If so, well, it may well be that the procedure is trustworthy, in that it does what its creators have specified that it does.

In which case, check that the specification does definitely correspond, in some way, to "extract what we really want from what we supposedly want."
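
A bare-bones sketch of that test-retest check, assuming a hypothetical extract_values procedure and some similarity measure over its outputs, both of which are placeholders:

```python
# Hypothetical reproducibility check: run the extraction procedure on several
# independent samples and measure how well the outputs agree with each other.
from itertools import combinations

def consistency_score(extract_values, samples, similarity):
    """extract_values(sample) -> some representation of the extracted values.
    similarity(a, b) -> a number in [0, 1]. Returns mean pairwise similarity."""
    outputs = [extract_values(sample) for sample in samples]
    pairs = list(combinations(outputs, 2))
    if not pairs:
        return 1.0
    return sum(similarity(a, b) for a, b in pairs) / len(pairs)
```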

In conclusion, SCIENCE!

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T22:57:50.248Z · LW(p) · GW(p)

run a program that prints out what our proposed FAI would have done given some situations, show that to our sample people, and check if they actually endorse the proposed course of action.

So, suppose we do this, and we conclude that our FAI is in fact capable of reliably proposing courses of actions that, in general terms, people endorse.

It seems clear to me that's not enough to show that it will not fuck things up when it comes time to actually implement changes in the real world. Do you disagree? Because back at the beginning of this conversation, it sounded like you were claiming you had in mind a process that was guaranteed not to fuck up, which is what I was skeptical about.

There's another related proposal [...] we'll use some evidence about humans to rate the probability that a given utility function is Friendly

Well, I certainly expect that to work better than not using evidence. Beyond that, I'm really not sure what to say about it. Here again... suppose this procedure works wonderfully, and as a consequence of climbing that hill we end up with a consistent set of VNM-rational utility functions that humans reliably endorse when they read about them.

It seems clear to me that's not enough to show that it will not fuck things up when it comes time to actually implement changes in the real world. Do you disagree?

Now you might reply "Well it's the best we can do!" and I might agree. As I said earlier, we simply have to accept that we might get it wrong, and do it anyway, because the probability of disaster if we don't do it is even higher. But let's not pretend there's no chance of failure.

yes, the population has broken into political factions

I'm not sure I would describe those subgroups as political factions, necessarily... they're just people expressing opinions at this stage. But sure, I could imagine analogous political factions.

the whole point of CEV was to employ a fairly simple population-ethical heuristic: "where our wishes cohere rather than interfere". [..] You humans couldn't actually agree on much more.

Well, now, this is a different issue. I actually agree with you here, but I was assuming for the sake of argument that the CEV paradigm actually works, and gets a real, worthwhile converged result from G. That is, I'm assuming for the sake of comity that G actually would, if they were "more the people they wished to be" and so on and so forth in all the poetic language of the CEV paper, agree on V, and that our value-extractor somehow figures that out because it's really well-designed.

My point was that it doesn't follow from that that G as they actually are will agree on V.

(By the way, the whole point of reflective equilibrium is to wash out really stupid ideas like "everyone suffers as I've suffered", which has never actually done anything for anyone.)

Sure, I agree -- both that that's the point of RE, and that ESAIS is a really stupid (though popular) idea.

But reflective equilibrium is a method with an endpoint we approach asymptotically. The degree of reflective equilibrium humans can reliably achieve after being put in a quiet, air-conditioned room for twenty minutes, fed nutritious food and played soothing music for that time, and then asked questions is less than that which we can achieve after ten years or two hundred years.

At some point, we have to define a target of how much reflective equilibrium we expect from our input, and from our evaluators. The further we shift our target away from where we are right now, the more really stupid ideas we will wash out, and the less likely we are to endorse the result. The further we shift it towards where we are, the more stupid ideas we keep, and the more likely we are to endorse the result.

I would certainly hope that a real FAI would understand about how people prefer gradual transitions and not completely overthrow everything all at once to any degree greater than strictly necessary.

I feel like we're just talking past each other at this point, actually. I'm not talking about how quickly the FAI optimizes the world, I'm talking about whether we are likely to endorse the result of extracting our actual values.

In conclusion, SCIENCE!

(sigh) Yeah, OK. Tapping out now.

Replies from: None
comment by [deleted] · 2014-01-15T23:54:14.598Z · LW(p) · GW(p)

My point was that it doesn't follow from that that G as they actually are will agree on V.

I'm talking about whether we are likely to endorse the result of extracting our actual values.

OOOOOOOOOOOOOOOOOOOOH. Ah. Ok. That is actually an issue, yes! Sorry I didn't get what you meant before!

My answer is: that is an open problem, in the sense that we kind of need to know much more about neuro-ethics to answer it. It's certainly easy to imagine scenarios in which, for instance, the FAI proposes to make all humans total moral exemplars, and as a result all the real humans who secretly like being sinful, even if they don't endorse it, reject the deal entirely.

Yes, we have several different motivational systems, and the field of machine ethics tends to brush this under the rug by referring to everything as "human values" simply because the machine-ethics folks tend to contrast humans with paper-clippers to make a point about why machine-ethics experts are necessary.

This kind of thing is an example of the consideration that needs to be done to get somewhere. You are correct in saying that if FAI designers want their proposals to be accepted by the public (or even the general body of the educated elite) they need to cater not only to meta-level moral wishes but to actual desires and affections real people feel today. I would certainly argue this is an important component of Friendliness design.

At some point, we have to define a target of how much reflective equilibrium we expect from our input, and from our evaluators. The further we shift our target away from where we are right now, the more really stupid ideas we will wash out, and the less likely we are to endorse the result. The further we shift it towards where we are, the more stupid ideas we keep, and the more likely we are to endorse the result.

This assumes that people are unlikely to endorse smart ideas. I personally disagree: many ideas are difficult to locate in idea-space, but easy to evaluate. Life extension, for example, or marriage for romance.

Because back at the beginning of this conversation, it sounded like you were claiming you had in mind a process that was guaranteed not to fuck up, which is what I was skeptical about.

No, I have not solved AI Friendliness all on my lonesome. That would be a ridiculous claim, a crackpot sort of claim. I just have a bunch of research notes that, even with their best possible outcome, leave lots of open questions and remaining issues.

Now you might reply "Well it's the best we can do!" and I might agree. As I said earlier, we simply have to accept that we might get it wrong, and do it anyway, because the probability of disaster if we don't do it is even higher. But let's not pretend there's no chance of failure.

Certainly there's a chance of failure. I just think there's a lot we can and should do to reduce that chance. The potential rewards are simply too great not to.

comment by James_Miller · 2014-01-12T19:06:52.343Z · LW(p) · GW(p)

For scenario 1, it would almost certainly require less free energy just to get the information directly from the brain without ever bringing the person to consciousness.

For scenario 2, you would want to seriously consider suicide if you feared that a failed Friendly AI might soon be developed. Indeed, since there is a chance you will become incapacitated (say, by falling into a coma), you might want to destroy your brain long before such an AI could arise.

comment by Decius · 2014-01-12T18:01:47.838Z · LW(p) · GW(p)

It's also possible that the AI finds instrumental utility in having humans around, and that reviving cryonics patients is cheaper than using its von Neumann factories.

Replies from: James_Miller
comment by James_Miller · 2014-01-12T18:09:49.261Z · LW(p) · GW(p)

I disagree. Humans almost certainly do not efficiently use free energy compared to the types of production units an ultra-AI could make.

Replies from: Decius
comment by Decius · 2014-01-13T00:47:06.611Z · LW(p) · GW(p)

How expensive is it to make a production unit with the versatility and efficiency of a human? How much of that energy would simply be wasted anyway? Likely, no, but possible.

Rolling all of that into 'cryonics fails' has little effect on the expected value in any case.

comment by [deleted] · 2014-01-14T20:46:15.555Z · LW(p) · GW(p)

There's really not that much margin for error in Super Tengen Toppa AI design. The more powerful it is, the less margin for error.

It's not like you'd be brought back by a near-FAI that otherwise cares about human values but inexplicably thinks chocolate is horrible and eliminates every sign of it.

comment by V_V · 2014-01-13T17:51:51.565Z · LW(p) · GW(p)

I don't think it would make much difference.

Consider my comment in Hallquist's thread:

AI singularity won't affect points 1 and 2: if information about your personality has not been preserved, there is nothing an AI can do to revive you.

It might affect points 3 and 4, but to a limited extent: an AI might be better than vanilla humans at doing research, but it would not be able to develop technologies which are impossible or intrinsically impractical for physical reasons. A truly benevolent AI might be more motivated to revive cryopatients than regular people with selfish desires would be, but it would still have to allocate its resources economically, and cryopatient revival might not be the best use of them.

Points 5 and 6: clearly the sooner the super-duper AI appears and develops revival tech, the higher the probability that your cryoremains are still around, but super AI appearing early and developing revival tech soon is less probable than it appearing late and/or taking a long time to develop revival tech, hence I would think that the two effects roughly cancel out. Also, as other people have noted, super AI appearing and giving you radical life extension within your lifetime would make cryonics a waste of money.

More generally, I think that AI singularity is itself a conjunctive event, with the more extreme and earlier scenarios being less probable than the less extreme and later ones. Therefore I don't think that taking into accounts AIs should significantly affect any estimation of cryonics success.

Replies from: James_Miller
comment by James_Miller · 2014-01-13T18:01:28.068Z · LW(p) · GW(p)

I think that AI singularity is itself a conjunctive event,

The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others. For example, it seems highly likely that (conditional on our high tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed and these people will excel at computer programming compared to non-augmented humans.

Replies from: V_V
comment by V_V · 2014-01-13T18:34:06.246Z · LW(p) · GW(p)

The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others.

Well, I haven't read your book, hence I can't exclude that you might have made some good arguments I'm not aware of, but given the publicly available arguments I know, I don't think this is true.

For example, it seems highly likely that (conditional on our high tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed and these people will excel at computer programming compared to non-augmented humans.

Is it?

There are some neurological arguments that the human brain is near the maximum intelligence limit for a biological brain.
We are probably not going to breed people with IQ >200; perhaps we might breed people with IQ 140-160, but will there be tradeoffs that make it problematic to do so at scale?
Will there be a demand for such humans?
Will they devote their efforts to AI research, or will their comparative advantage drive them to something else?
And how good will they be at developing super AI? As technology becomes more mature, making progress becomes more difficult because the low-hanging fruit has already been picked, and intelligence itself might have diminishing returns (at the very least, I would be surprised to observe an inverse linear correlation between average AI researcher IQ and time to AI).
And, of course, if singularity-inducing AI is impossible/impractical, the point is moot: these genetically enhanced Einsteins will not develop it.

In general, with enough imagination you can envision many highly conjunctive ad hoc scenarios and put them into a disjunction, but I find this type of thinking highly suspicious, because you could use it to justify pretty much everything you wanted to believe.
I think it's better to recognize that we don't have any crystal ball to predict the future, and betting on extreme scenarios is probably not going to be a good deal.

comment by hyporational · 2014-01-12T08:52:50.568Z · LW(p) · GW(p)

How do you factor in uFAI or other bad revival scenarios?

comment by passive_fist · 2014-01-17T02:45:01.150Z · LW(p) · GW(p)

Even though LW is far more open to the idea of cryonics than other places, the general opinion on this site still seems to be that cryonics is unlikely to succeed (e.g. has a 10% chance of success).

How do LW'ers reconcile this with the belief that mind uploading is possible?

Replies from: TheOtherDave, Calvin
comment by TheOtherDave · 2014-01-17T03:45:20.494Z · LW(p) · GW(p)

I can't speak for anyone else, but I don't see a contradiction. Believing that a living brain's data can be successfully uploaded eventually doesn't imply that the same data can necessarily be uploaded from a brain preserved with current-day tech. The usual line I see quoted is that cryonics tech isn't guaranteed to preserve key data, but it has a higher chance than rot-in-a-box tech or burn-to-ash tech.

Replies from: passive_fist
comment by passive_fist · 2014-01-17T21:59:48.404Z · LW(p) · GW(p)

The usual line I see quoted is that cryonics tech isn't guaranteed to preserve key data, but it has a higher chance than rot-in-a-box tech or burn-to-ash tech.

So are you saying that this key data includes delicate fine molecular information, which is why it cannot be preserved with current tech?

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2014-01-18T03:36:34.933Z · LW(p) · GW(p)

Nope, I'm not saying that. There are many systems that both don't depend on fine molecular information, and also are easier to restore from being vitrified than to restore from being burned to ash.

Replies from: passive_fist
comment by passive_fist · 2014-01-18T19:14:11.420Z · LW(p) · GW(p)

Would you agree with shminux's reply then?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-18T20:41:12.937Z · LW(p) · GW(p)

Certainly shminux's reply isn't what I had in mind initially, if that's what you mean.

As for whether I agree with it on its own terms... I'm not sure. Certainly I lack sufficient neurochemical domain knowledge to make meaningful estimates here, but I'm not as sure as they sound that everyone does.

comment by shminux · 2014-01-17T22:12:38.896Z · LW(p) · GW(p)

No one yet knows what the data substrate includes or how much of it has to be preserved for meaningful revival. For all we know, a piece of neocortex dropped into liquid nitrogen might do the trick in a pinch. Or maybe not even the best current cryo techniques would be enough. But it is not really possible to give a meaningful estimate, as cryonics does not appear to be in any reference class for which well-calibrated predictions exist.

comment by Calvin · 2014-01-17T03:53:34.264Z · LW(p) · GW(p)

Here is a parable illustrating relevant difficulty of both problems:

Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.

This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, and you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave and guarded by a couple of hopeful monks:

  • Imagine the manuscript has been preserved using correct means and all letters are still there.

Uploading is easy. There is no data loss, so it is equivalent to uploading the modern manuscript. This means the monks were smart enough to choose the optimal storage procedure (or got there by accident) - very unlikely.

  • Imagine the manuscript has been preserved using decent means and some letters are still there.

Now we have to do a bit of guesswork... is the manuscript we translate the same thing the original author had in mind? EY called it doing intelligent cryptography on a partially preserved brain, as far as I am aware. The monks knew just enough not to screw up the process, but their knowledge of manuscript-preservation techniques was not perfect.

  • Imagine the manuscript has been preserved using poor means and all the letters have vanished without a trace.

Now we are royally screwed, or we can wait a couple of thousand million years so that an oracle computer can deduce the state of the manuscript by reversing entropy. This means the monks knew very little about manuscript preservation.

  • Imagine there is no manuscript. There is a nice wooden box preserved in astonishing detail, but the manuscript crumbled when the monks put it inside.

Well, the monks who wanted to preserve the manuscript didn't know that preserving the box does not help preserve the manuscript, but they tried, right? This means the monks didn't understand the connection between manuscript and box preservation techniques.

  • Imagine there is no manuscript. The box has been damaged as well.

This is what happens when the manuscript-preservation business is run by people with little knowledge of what should be done to store belongings for thousands of years without significant damage.

In other words, uploading is something that can be figured out correctly in the far, far future, while the problem of proper cryo-storage has to be solved correctly right now, as an incorrect procedure may lead to irreversible loss of information for people who want to be preserved now. I don't assign a high prior probability to the claim that we know enough about the brain to preserve minds correctly, and therefore cryonics in the current shape or form is unlikely to succeed.

Replies from: passive_fist
comment by passive_fist · 2014-01-17T04:07:49.624Z · LW(p) · GW(p)

I don't assign a high prior probability to the claim that we know enough about the brain to preserve minds correctly, and therefore cryonics in the current shape or form is unlikely to succeed.

Are you saying that accurate preservation depends on highly delicate molecular states of the brain, and this is the reason they cannot be preserved with current techniques?

Replies from: Calvin
comment by Calvin · 2014-01-17T04:51:26.535Z · LW(p) · GW(p)

I don't know what is required for accurate preservation of the mind, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.

Some people seem to put their faith in structure for an answer, but how do we test this claim in a meaningful way?

Replies from: passive_fist
comment by passive_fist · 2014-01-17T07:17:49.523Z · LW(p) · GW(p)

I don't know what is required for accurate preservation of the mind,

It seems like you're saying you don't know whether cryonics can succeed or not. Whereas in your first reply you said "therefore cryonics in the current shape or form is unlikely to succeed."

Replies from: Calvin
comment by Calvin · 2014-01-17T07:32:15.593Z · LW(p) · GW(p)

Yes.

I don't know if it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds are going to survive till they are revived in the future.

comment by Brillyant · 2014-01-14T19:17:59.593Z · LW(p) · GW(p)

My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.

I'm with you on this. And I hope to see more discussions focused on this aspect of cryonics, transhumanism, etc.

Cryonics seems a bit self-absorbed to me. And it implies death = bad. Of course, death does not = good. But does death really = bad? Or is more accurate to say death = neutral?

Replies from: TheOtherDave, blacktrance
comment by TheOtherDave · 2014-01-14T19:39:23.704Z · LW(p) · GW(p)

Is murder bad, on your view?
If so, why?

Replies from: Brillyant
comment by Brillyant · 2014-01-14T20:02:50.819Z · LW(p) · GW(p)

Yes, murder is bad. It's horribly traumatic to everyone surrounding it. It violates the victim's will. It isn't sustainable in a society. It leads to consequences that I see as a significant net negative.

I'm not really talking about that, though.

As the OP says, "[death] would suck, sure. It would suck a lot. But it wouldn't suck infinitely." This, to me, is the key.

I used to be an Evangelical Christian. We had the hell doctrine, which we took very seriously and literally as eternal conscious torment. That was something I would have considered to "suck infinitely" and therefore would justify a cryonics-styled response (i.e. salvation and the proselytization of the means of salvation). Plain old death by old age certainly sucks, but it's not the end of the world... or the beginning of hell. It isn't something to be "saved" from.

Perhaps a better goal would be X years of life for everyone with zero disease or gratuitous pain. Immortality (or inconceivably long lifespans) seems a bit misguided and tied to our more animal, survive-at-all-costs nature.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T20:26:16.705Z · LW(p) · GW(p)

Agreed that murder is significantly net-negative, but not infinitely so. (This is very different from what I thought you were saying originally, when you suggested death was neutral.)

Is dying of natural causes bad, on your view? (I'm not asking infinitely bad. Just if it's net-negative.)

If so, what level of response do you consider appropriate for the level of net-negative that dying of natural causes in fact is? For example, I gather you believe that cryonics advocates over-shoot that response... do you believe that a typically chosen person on the street is well-calibrated in this respect, or undershoots, or overshoots...?

Replies from: Brillyant
comment by Brillyant · 2014-01-14T21:01:17.747Z · LW(p) · GW(p)

Is dying of natural causes bad, on your view?

I'm not sure, but I don't think so. I don't think death is good -- it makes people sad, etc. But I don't think it is bad enough to lead to the sort of support cryonics gets on here.

Also, "natural causes" would need some clarification in my view. I'm all for medical technology elimination gratuitous suffering cause by naturally occuring diseases. I just think at some point -- 100 years or 1000 years or whatever -- perpetual life extension is moot.

Death is an end to your particular consciousness -- to your own sense of your self. That is all it is. It's just turning the switch off. Falling asleep and not waking.

The process of dying sucks. But being dead vs. being alive seems to me to be inconsequential in some sense. It isn't as if you will miss being alive...

The average person on the street is very afraid (if only subconsciously) of death and "overshoots" their response. Lots of people have religion for this.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T21:25:15.206Z · LW(p) · GW(p)

I'm... confused.

You seem to believe that, if we take into consideration the experiences of others, death is bad. ("It makes people sad, etc.")

I agree, and consider it a short step from there to "therefore death is bad," since I do in fact take into consideration the experiences of others.

But you take a detour I don't understand between those two points, and conclude instead that death is neutral. As near as I can figure it out, your detour involves deciding not to take into consideration the experiences of others, and to evaluate death entirely from the perspective of the dead person.

I understand perfectly well how you conclude that death is no big deal (EDIT: inconsequential) from that perspective. What I don't understand is how you arrive at that perspective, having started out from a perspective that takes the experiences of others into account.

Replies from: Brillyant
comment by Brillyant · 2014-01-15T05:28:49.782Z · LW(p) · GW(p)

I arrive at the conclusion that death is not good, yet not bad, through something like philosophical Buddhism. While I wouldn't say death is "no big deal" (in fact, it is just about the biggest deal we face as humans), I would argue we are wired via evolution, including our self-aware consciousness, to make it into a much, much bigger deal than it need be.

I think we should consider the experience of others, but I don't think it should drive our views in regard to death. People will (and of course should) grieve. But it is important to embrace some sense of solidarity and perspective. No one escapes pain in one form or another.

I actually think it would be helpful to our world to undergo a reformation in regard to how we think about death. We are hardwired to survive-at-all-costs. That is silly and outdated and selfish and irrational. It is a stamp of our lowly origins...

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T15:18:17.010Z · LW(p) · GW(p)

While I wouldn't say death is "no big deal"

I've edited to replace "no big deal" with "inconsequential," which is the word you used. They seem interchangeable to me, but I apologize for putting words in your mouth.

we are wired via evolution, including our self-aware consciousness, to make it into a much, much bigger deal than it need be.

Sure, that's certainly true.

But it is important to embrace some sense of solidarity and perspective.

And that's true, too.

No one escapes pain in one form or another.

Also true... which is not itself a reason to eschew reducing the pain of others, or our own pain.

It's important and beneficial to embrace a sense of solidarity and perspective about all kinds of things... polio, child abuse, mortality, insanity, suffering, traffic jams, tooth decay, pebbles in my shoe, leprosy.

It's also important and beneficial to improve our environment so that those things don't continue to trouble us.

We are hardwired to survive-at-all-costs. That is silly and outdated and selfish and irrational.

(shrugs) Sure. But there's a huge gulf between "don't survive at all costs" and "death is neutral." I understand how you get to the former. I'm trying to understand how you get to the latter.

But, OK, I think I've gotten as much of an answer as I'm going to understand. Thanks for your time.

Replies from: Brillyant
comment by Brillyant · 2014-01-15T16:01:26.316Z · LW(p) · GW(p)

I've edited to replace "no big deal" with "inconsequential," which is the word you used. They seem interchangeable to me, but I apologize for putting words in your mouth.

To be clear, I said the difference to the person "experiencing" being dead vs. being alive is inconsequential. The process of dying, including the goodbyes, sucks.

Also true... which is not itself a reason to eschew reducing the pain of others, or our own pain.

Of course. Though I think it is helpful to temper our expectations. That is all I meant.

But there's a huge gulf between "don't survive at all costs" and "death is neutral." I understand how you get to the former. I'm trying to understand how you get to the latter.

Death is only a thing because life is a thing. It just is. I'd say it's a peculiar (though natural) thing to apply value to it.

Maybe this tack: What if we solve death? What if we find a way to preserve everyone's consciousness and memory (and however else you define death transcendence)? Is that "better"? Why? How? Because you get more utilons and fuzzies? Does the utilon/fuzzy economy collapse sans death?

More than that, it seems very rational people should be able to recognize that someone "dying" is nothing more than the flame of consciousness being extinguished. A flame that existed because of purely natural mechanisms. There is no "self" to die. A localized meat-hardware program (that you were familiar with and that brought you psychological support) shut down. "Your" meat-hardware program is responding in turn with thoughts and emotions.

I mentioned Buddhism... as it pertains here, I see it as this: Death will be "bad" to you to the extent you identify with your "self". I am not my meat-hardware. I notice my meat-hardware via the consciousness it manifests. I notice my meat-hardware is hardwired to hate and fear death. I notice my meat-hardware will break someday -- it may even malfunction significantly and alter the manifest consciousness through which I view it...

In this sort of meditation, death, to me, is neutral.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-15T16:06:56.005Z · LW(p) · GW(p)

OK. Thanks for clarifying your position.

comment by blacktrance · 2014-01-14T19:59:07.468Z · LW(p) · GW(p)

Assuming your life has non-infinitesimal positive value to you, even if losing one year of life would be a minor loss, losing some large number of years would be an enormous loss. Given that you'd otherwise go on living, you lose a lot from death.
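
The arithmetic behind that claim is trivial but worth spelling out; the per-year value below is an arbitrary illustrative number:

```python
# If each additional healthy year is worth some positive amount v to you,
# the loss from death scales linearly with the number of years foregone.
v = 1.0  # arbitrary units of value per year
for years_lost in (1, 50, 10_000):
    print(years_lost, "years lost ->", years_lost * v, "units of value foregone")
```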

Replies from: Brillyant
comment by Brillyant · 2014-01-14T20:10:18.468Z · LW(p) · GW(p)

Is infinite life optimal then?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T20:15:46.385Z · LW(p) · GW(p)

If, on net, it has positive value, yes. If not, it's best to have a life that's infinite unless you choose to terminate it.

Replies from: Brillyant
comment by Brillyant · 2014-01-14T20:19:47.166Z · LW(p) · GW(p)

So, life is valuable until it is no longer valuable?

Replies from: blacktrance
comment by blacktrance · 2014-01-14T20:27:50.208Z · LW(p) · GW(p)

If your life is valuable and adding more of it doesn't make its value negative at any point, then more of your life is better than less of your life.

Replies from: Brillyant
comment by Brillyant · 2014-01-14T21:10:36.024Z · LW(p) · GW(p)

The math seems much clearer to you than to me, so let me ask: Is it possible that immortality as an option would dilute life's value when compared to a more traditional human existence (75 years, dies of natural causes)?

I can imagine a 150-year lifespan being preferable to 75; 300 to 150; 1000 to 300; etc. And even when the numbers get very large and I cannot imagine the longer lifespan being better, I chalk it up to my weak imagination.

But what about infinite life? Does the math break down if you could live -- preserve "your" consciousness, memories, etc. -- forever?

Replies from: Swimmer963, blacktrance
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-01-19T20:44:23.217Z · LW(p) · GW(p)

I can imagine a 150-year lifespan being preferable to 75; 300 to 150; 1000 to 300; etc. And even when the numbers get very large and I cannot imagine the longer lifespan being better, I chalk it up to my weak imagination.

Very large but non-infinite numbers are more likely to be what's on the table, I think. Given that something is likely to catch up with a future human society, even one capable of reviving frozen people–even if it's just the heat death of the universe.

comment by blacktrance · 2014-01-14T21:16:34.197Z · LW(p) · GW(p)

It may be important to explicitly distinguish between "could live forever" and "have to live forever", as the former excludes having to float in outer space for eternity, which would certainly be a life of negative value.

I don't see why the math would break down. As long as you anticipate your life continuing to have a net positive value, you should want to continue it. And I don't see why that would change just from your lifespan increasing, as long as you stay healthy.

Replies from: Brillyant
comment by Brillyant · 2014-01-15T14:22:30.052Z · LW(p) · GW(p)

The distinction you mention is very important, and it is one I tried to communicate I was aware of. Of course we can conceive of lots of circumstances where life "having" to continue would be bad...

The question is whether unlimited life renders everything valueless. It seems to me that some big chunk of life's value lies in its novelty, and another big chunk in relatively rare and unique experiences, and another big chunk in overcoming obstacles... eternal life ruins all of that, I think.

Mathematically, wouldn't every conceivable possibility be bound to occur over and over if you lived forever?

Replies from: blacktrance
comment by blacktrance · 2014-01-15T15:30:55.554Z · LW(p) · GW(p)

It seems to me that some big chunk of life's value lies in its novelty, and another big chunk in relatively rare and unique experiences, and another big chunk in overcoming obstacles...

I doubt that novelty, rarity, or overcoming obstacles have any value by themselves, only that they are associated with good things. But supposing that they had a value of their own - do they encompass all of life's value? If novelty/rarity/obstacles were eliminated, would life be a net negative? It seems implausible.

Mathematically, wouldn't every conceivable possibility be bound to occur over and over if you lived forever?

Not if new possibilities are being created at the same time. In fact, it's probable that an individual's proportion of (things done):(things possible) would decrease as time passes, kind of like now, when the number of books published per year exceeds how much a person would want to read.
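
A toy calculation of that point, with made-up rates: as long as possibilities are created faster than one person can experience them, the backlog of never-yet-experienced things keeps growing, so nothing forces repetition:

```python
# Made-up rates: 100 experiences per year versus 150 newly created possibilities.
experienced, possible = 0, 1_000
for year in range(1, 1_001):
    experienced += 100
    possible += 150
    if year in (1, 10, 100, 1_000):
        print(f"year {year}: never-experienced backlog = {possible - experienced}")
# The backlog grows by 50 a year without bound; if the creation rate itself grows
# over time (as with books published per year), the gap widens even faster.
```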

Replies from: Lumifer
comment by Lumifer · 2014-01-15T15:33:33.667Z · LW(p) · GW(p)

I doubt that novelty, rarity, or overcoming obstacles have any value by themselves, only that they are associated with good things.

Given that curiosity seems to be a hardwired-in biological urge, I would expect that novelty and rare experiences do have value by themselves.

Replies from: blacktrance
comment by blacktrance · 2014-01-15T15:37:02.132Z · LW(p) · GW(p)

Fulfilling a biological urge need not be something of value. For example, eating when you're hungry feels good, but it may be good to abolish eating food altogether.

Replies from: Lumifer
comment by Lumifer · 2014-01-15T15:42:53.854Z · LW(p) · GW(p)

Fulfilling a biological urge need not be something of value.

Your frontal cortex might decide it's not something of value, but the lower levels of your mind will still be quite sure it is. Hardwired is hardwired.

comment by Merkle · 2014-02-02T05:54:32.966Z · LW(p) · GW(p)

Cryonics likely has a probability of success of ~85%, as estimated at Will Cryonics Work?. Lower probability estimates are either unsupported, or supported by arguments with obvious errors. Cryonics deniers sometimes display gross breakdowns in their rational functions, as illustrated by a remarkable quote by Kenneth Storey.

Because deciding against cryonics is a form of suicide, it is selfish for the same reason that suicide is selfish: it causes pain and grief for the survivors, who loved the person who committed cryocide.

Ralph

Replies from: Jiro, None, TheOtherDave
comment by Jiro · 2014-02-02T19:04:34.091Z · LW(p) · GW(p)

This is nonsensical. Even if you're right, most people who don't want cryonics are just mistaken about the probability of success. Being mistaken about something cannot be either suicidal (in the ordinary sense) or selfish, since both of those require conscious decisions.

If you think a bridge is safe, drive over it, and fall through to your death, was your decision to drive over the bridge "selfish"? Of course not. It caused pain and grief for the survivors, but not knowingly.

PS: What's your estimate of the probability of success for being rescued by time travellers if you make sure you die in a closed vault so you can be rescued without changing the past?

comment by [deleted] · 2014-02-05T01:23:10.272Z · LW(p) · GW(p)

85%? Seriously???

The near-certain impossibility of anything resembling the molecular nanotechnology favored on that page alone blows that out of the water, as does the severe apparent institutional incompetence of cryonics providers.

comment by TheOtherDave · 2014-02-02T07:57:38.103Z · LW(p) · GW(p)

deciding against cryonics [..] causes pain and grief for the survivors

I don't expect that my friends and loved ones will experience less pain and grief when I stop breathing, talking, thinking, etc. if my brain has been cryopreserved against a possible future when its information-theoretical content can be extracted and those functions re-enabled, than if it hasn't been.

So if my deciding against cryonics is selfish, it is not for this reason.