Is every life really worth preserving?

post by RationallyOptimistic · 2011-12-23T17:04:22.640Z · LW · GW · Legacy · 79 comments

Singularitarians frequently lament the irrevocably dead and the lack of widespread application of cryonics. Many cryonicists feel that as many lives as possible should be (and in a more rational world, would be) cryopreserved. Eliezer Yudkowsky, in an update to the touching note on the death of his younger brother Yehuda, forcefully expressed this sentiment:

"I stand there, and instead of reciting Tehillim I look at the outline on the grass of my little brother's grave. Beneath this thin rectangle in the dirt lies my brother's coffin, and within that coffin lie his bones, and perhaps decaying flesh if any remains. There is nothing here or anywhere of my little brother's self. His brain's information is destroyed. Yehuda wasn't signed up for cryonics and his body wasn't identified until three days later; but freezing could have been, should have been standard procedure for anonymous patients. The hospital that should have removed Yehuda's head when his heart stopped beating, and preserved him in liquid nitrogen to await rescue, instead laid him out on a slab. Why is the human species still doing this? Why do we still bury our dead? We have all the information we need in order to know better..."

Ignoring the debate concerning the merits of cryopreservation itself and the feasibility of mass cryonics, I would like to question the assumption that every life is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes. In fact, there is evidence that the brains of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly repair them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of cryopreserving them?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If cryopreservation means first and foremost mind preservation, maybe there are some minds that just shouldn't be preserved. Maybe the future would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Personally, I think the assumption of "better safe than sorry" is a good-enough justification for mass cryonics (or for cryonics generally), but I think that assumption, like any, should at least be questioned.

 

79 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2011-12-23T18:08:20.986Z · LW(p) · GW(p)

From a consequentialist perspective, the value of not saving a life is the same as the value of killing someone. In which light, the title of your post becomes, "Is every person really worth not killing?" Try re-reading the argument with this framing in mind.

(Avoiding measures that save lives with certain probability is then equivalent to introducing the corresponding risk of death.)

Replies from: smijer, roystgnr, buybuydandavis, steven0461, Normal_Anomaly, shminux
comment by smijer · 2011-12-24T14:13:00.210Z · LW(p) · GW(p)

If the value of not saving a life is the same as the value of killing someone, that's fine. We can do that exercise and re-frame in terms of killing, and do the consequentialist calculation from there. The math is the same. If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.

In terms of the opening post, the math is going to be similar even for the creation of all possible minds. If we have a good reason to restore every mind that has lived, it seems very probable that we have the exact same reason to create every mind that has not lived.

I'm not sure I see what that value is, though. Even if I want to live forever - and continue to want to live forever right up to the point that I am dead... One second after that point, I no longer care. At that point, only other living minds can find value in having me alive. It's up to them whether they want to invest their resources in preserving and re-animating me, or prefer to invest more of their resources in keeping themselves alive and creating more novel minds through reproduction.

Replies from: wedrifid
comment by wedrifid · 2011-12-24T14:22:19.151Z · LW(p) · GW(p)

If the goal is to bring ourselves to calculate from the heightened emotional perspective associated with killing, though, it is time to drop that frame and just get back to the math.

Well spotted. I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.

Replies from: ArisKatsaris, XiXiDu
comment by ArisKatsaris · 2011-12-24T16:10:00.368Z · LW(p) · GW(p)

I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit trying to harness (largely deontological) moral judgements outside their intended context.

If that was an observation you had already thought of, and you believed it worth mentioning, why didn't you mention it yourself -- instead of waiting to see if anyone else said it? I can conceive of some comments that are best made only by specific individuals, given specific contexts -- but I don't see this being one of them.

I find the attitude of "waiting to see if anyone else does this" and then condemning/praising people collectively for failing/succeeding to do what one didn't do oneself extremely distasteful.

Replies from: wedrifid
comment by wedrifid · 2011-12-24T16:30:55.762Z · LW(p) · GW(p)

If that was an observation you had already thought of, and you believed it worth mentioning, why didn't you mention it yourself -- instead of waiting to see if anyone else said it?

I did write a reply when Vladimir first wrote the comment. But I deleted it, since I decided I couldn't be bothered getting into a potential flamewar about a subject that I know from experience is easy to spin for cheap moral-high-ground points ("you're a murderer!", etc). I long ago realized that it is not (always) my responsibility to fix people who are wrong on the internet.

Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is in the top 20 of the top contributors, and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.

Replies from: ArisKatsaris, fortyeridania, smijer
comment by ArisKatsaris · 2011-12-24T16:44:07.507Z · LW(p) · GW(p)

Since smijer is (as of the time of this comment) a user with 9 votes, while Vladimir is in the top 20 of the top contributors, and the specific comment being corrected is at +19, it does not seem at all inappropriate to lend support to his observation.

Okay, I think I find this a good reason. Thank you for explaining.

Replies from: fortyeridania
comment by fortyeridania · 2011-12-25T12:29:01.481Z · LW(p) · GW(p)

You find this a good reason for what?

(1) For supporting smijer's comment

(2) For not chiming in when he first had the idea

If you mean the first...why? That wasn't the issue. The issue was why wedrifid hadn't chimed in. As for the second, wouldn't this imply that wedrifid was holding out because he expected someone with low karma to speak up first?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-12-25T15:39:02.308Z · LW(p) · GW(p)

You find this a good reason for what?

For the seeming inconsistency I had noticed between (1) and (2).

comment by fortyeridania · 2011-12-25T12:32:21.762Z · LW(p) · GW(p)

Not wanting to get into a flamewar is, of course, reasonable. But daring to be the first to dissent is a valuable service, too.

comment by smijer · 2011-12-24T16:48:41.350Z · LW(p) · GW(p)

I appreciate the support.

comment by XiXiDu · 2011-12-24T15:13:25.242Z · LW(p) · GW(p)

Off topic:

If I remember correctly, you have been taking a quite derogatory stance with respect to people who complained about the voting behavior on this site. In any case, here are some snippets from comments made by you in the past 30 days:

Note: I am at least as shocked by the current downvote of this comment...

I express disgust with specific instances of voting.

Ok, me getting downvoted I can understand - someone has been mass downvoting me across the board.

I'm actually getting concerned here. [...] he has not only been taken seriously but received upvotes while ridicule of the assumptions gets downvotes.

I was wondering if anyone was going to notice that Vladimir's (absurdly highly upvoted) comment was basically just a dark arts exploit...

I predict that within 5 years you will become frequently appalled by the voting behavior on this site, and in another 10 years you'll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong, because it doesn't refine what you deem rational, nor does it provide valuable feedback, but instead lends credence to the arguments of trolls (as you would call them).

Replies from: wedrifid
comment by wedrifid · 2011-12-24T16:14:37.358Z · LW(p) · GW(p)

If I remember correctly, you have been taking a quite derogatory stance with respect to people who complained about the voting behavior on this site.

I doubt I ever took such a broad stance. You seem to have generalized to a large category so that you can fit me into it. In fact, one of those artfully trimmed quotes you made there should have, if parsed for meaning rather than scanned for quotable keywords, given a far more reasonable impression of where my preferences lie on that subject.

I predict that within 5 years you will become frequently appalled by the voting behavior on this site

Quite possible. A few years after that I may well start telling kids to get off my lawn and tell stories about "When I was your age".

and in another 10 years you'll at least partly agree with me that a reputation system is actually a bad idea to have on a site like lesswrong

Money. Make the prediction with money. Because I want to take it.

Counter-prediction: In ten years' time you will not have changed your mind (on this subject) at all.

Replies from: gwern, XiXiDu
comment by gwern · 2011-12-24T22:49:28.983Z · LW(p) · GW(p)

At least for myself, I'm happy to give that a low probability. Even with the lowered quality since Eliezer stopped writing, LW is still much better - thanks to karma - than OB or SL4 were.

Replies from: XiXiDu
comment by XiXiDu · 2011-12-27T18:32:14.079Z · LW(p) · GW(p)

LW is still much better - thanks to karma - than OB or SL4 were.

How do you know this? Would a reputation system cause the Tea Party movement to become less wrong?

The n-Category Café and Timothy Gowers's blog do not employ a reputation system like Less Wrong's. It's the people who make some places better than others.

It is trivially true that the lesswrong reputation system would fail if there were more irrational people here than rational people, where 'rational' is defined according to your criteria (not implying that your criteria are wrong).

I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don't like the idea of being voted down according to unknown criteria rather than being engaged in argumentative discourse.

And as I wrote before, the current reputation system favors non-technical posts. More technical posts often don't receive the same number of upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more extensively. This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

Replies from: shminux, gwern
comment by Shmi (shminux) · 2011-12-27T19:22:07.319Z · LW(p) · GW(p)

A reputation system necessarily favors the status quo.

This community consists mostly of aspiring rationalists, not professionals in philosophy/decision theory/psychology, though there are a number of experts around. The accuracy of technical posts is hard to judge, so people probably go by post quality, their gut feeling, and how well it conforms to what has been agreed upon as correct before. Plus the usual points for humor. Minus a penalty for poor spelling/grammar/style.

An example of a reputation system that works for a technical forum is MathOverflow, though partly because the mods are quite ruthless there about off-topic posts.

I am quite sure that a lot of valuable opinions are lost due to the current reputation system

...which likely means that this forum is not the right one for them. LW is open enough to resist "evaporative cooling", and rapid downvoting inhibits all but expert trolling.

gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

I think that is the idea. Educating people "about basic rationality" is a much more viable goal than doing basic research collaboratively. LW is often used as a sounding board for research write-ups, but that is probably as far as it can go. Anything more would require excluding amateurs from the discussion, to reduce the noise level. I have yet to see a public forum where "important problems" are solved "collaboratively". Feel free to provide counterexamples.

comment by gwern · 2011-12-27T19:55:38.264Z · LW(p) · GW(p)

Would a reputation system cause the Tea Party movement to become less wrong?

Yes. They would still have their major shibboleths like Obama being a Muslim born in Kenya, but reputation systems would at least reduce the most mouth-breathing comments.

The n-Category Café or Timothy Gowers blog do not employ a reputation system like less wrong. It's the people who make places better off than others.

People are a factor. But people are not the sole determining factor. Code is Law.

I am quite sure that a lot of valuable opinions are lost due to the current reputation system, because there are a lot of people who don't like the idea of being voted down according to unknown criteria rather than being engaged in argumentative discourse.

And that is why LW has orders of magnitude fewer comments and posts than OB or SL4 did. Wait, never mind, I meant 'more'.

This does discourage rigor and gives incentive to write posts about basic rationality rather than tackling important problems collaboratively.

Or it discourages attempts to bamboozle with rigor. I don't remember terribly many rigorous proofs on LW, but then, I don't remember terribly many on OB or SL4 either.

comment by XiXiDu · 2011-12-27T18:16:35.043Z · LW(p) · GW(p)

I retracted the comment. Not sure why I made it and why I didn't use my brain more, sorry.

Counter-prediction: In ten years' time you will not have changed your mind (on this subject) at all.

Likely, because I hate reputation systems. Peer pressure is already bad enough as it is. But if a reliable study is conducted showing that reputation systems cause groups to become more rational, I will of course change my mind.

Money. Make the prediction with money. Because I want to take it.

Betting money seems to be a pretty bad idea if the bet depends on the decision of someone participating in the bet.

comment by roystgnr · 2011-12-24T01:58:18.396Z · LW(p) · GW(p)

the value of not saving a life is the same as the value of killing someone

If you found someone in the process of killing another, what actions would you be willing to undertake to stop them? Would you be willing to undertake those same actions every time you found someone whose non-subsistence expenditures exceeded $X, the minimum expenditure necessary to [buy enough malaria nets, etc... to] have an expected outcome of one life saved?

Even consequentialism is supposed to acknowledge that ethical rules need to be evaluated in terms of their long-term consequences rather than just their immediate outcomes.

comment by buybuydandavis · 2011-12-23T18:54:22.421Z · LW(p) · GW(p)

That's just very poor consequentialism in my eyes. Instead of me pointing out the most abominable scenarios that I believe immediately follow from such a consequentialism, why don't you supply one that you think would be objectionable to others, but which you'd be willing to defend?

As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it. Some people are worth killing. That's not to say there isn't something of value in them, but choice is about tradeoffs, and I don't expect that to change with greater technology. The particular tradeoffs will change, but that there are tradeoffs will not.

And in the same way, a great many more people are not worth saving either.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-24T12:25:22.229Z · LW(p) · GW(p)

As for your spin on the question, while I think it is a different question than the original, I see no need to shy away from it.

Sure, assuming we're clear on what the question means.

comment by steven0461 · 2011-12-23T20:24:53.081Z · LW(p) · GW(p)

The reframed version gets much of its psychological strength from 1) intuitions that say killing is bad on top of its bad consequences and 2) intuitions that say killing has bad consequences that letting die does not have. You're taking both of those intuitions as invalid (as you have to for the framing to be equivalent), so you can't rely on conclusions largely caused by them.

comment by Normal_Anomaly · 2011-12-23T18:38:56.366Z · LW(p) · GW(p)

I think you mean "uncertain probability"?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-23T18:44:18.670Z · LW(p) · GW(p)

"Certain" as in a figure of speech, like "ice cream of certain flavor", not indication of precision. (Although probabilities can well be precise...)

comment by Shmi (shminux) · 2011-12-23T18:35:29.217Z · LW(p) · GW(p)

Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.

Replies from: Vladimir_Nesov, Viliam_Bur, None
comment by Vladimir_Nesov · 2011-12-23T18:40:36.853Z · LW(p) · GW(p)

The alternatives I'm comparing are a living person dying vs. not dying. Living vs. never having lived is different and harder to evaluate.

Replies from: shminux
comment by Shmi (shminux) · 2011-12-23T19:22:23.111Z · LW(p) · GW(p)

No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful, once the revival technology is available.

For example, if creating a new mind has a positive utility some day, it's a matter of calculating what to spend (potentially still limited) resources on: creating a new happy mind (trivially easy even now, except for the "happy" part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa stiff in a cryo tank (impossible now, but still probably much harder than the alternative even in the future).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-23T19:33:02.184Z · LW(p) · GW(p)

No, the alternatives you are comparing are reviving a frozen brain vs doing something potentially more useful

My comment is unrelated to cryonics; I posted it as a reminder of the framing effects of saying "not saving lives" as compared to "killing". (Part of my motivation for posting it is that I find the mention of Eliezer's dead brother in the context of an argument for killing people distasteful.)

creating a new happy mind (trivially easy even now, except for the "happy" part) or reviving/rejuvenating/curing/uploading/rehabilitating a grandpa

As I said, harder to evaluate. I'm uncertain on which of these particular alternatives is better (considering a hypothetical tradeoff), particularly where a new mind can be made better in some respects in a more resource-efficient way.

Replies from: shminux
comment by Shmi (shminux) · 2011-12-23T19:38:23.472Z · LW(p) · GW(p)

My comment is unrelated to cryonics; I posted it as a reminder of the framing effects of saying "not saving lives" as compared to "killing"

Ah, OK. I thought you were commenting on the merits of cryopreservation.

comment by Viliam_Bur · 2011-12-25T16:40:00.027Z · LW(p) · GW(p)

Taking this argument ad absurdum: Roe vs Wade is a crime against humanity, since a fetus is potentially a person.

What exactly makes it absurd?

I am not sure what units are best for measuring the value of a human life, so let's just say that the life of an average adult person has value 1. What would be your estimate of the value of a 3-month fetus, 6-month fetus, 9-month fetus, a newborn child, 1/2 year old child, 1 year old child, etc.?

If you say that a fetus has less value than an adult person, but still a nonzero value, for example 0.01, then killing 100 fetuses is like killing 1 adult person, and killing 100 000 fetuses is like killing 1 000 adult people. Calling the killing of 1 000 adult people a "crime against humanity" would perhaps be exaggerated, but not exactly absurd.

If you have strong opinions on this topic, I would like to see your best attempt to estimate the shape of the "human life value" curve for fetuses and small children. At what age does killing a human organism become worse than having a proverbial dust speck in a rationalist's eye?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-25T19:45:50.696Z · LW(p) · GW(p)

Thousands of adults are in fact killed in auto accidents every year, and yet it seems to me very strange indeed to call auto accidents a crime against humanity.

Thousands of adults are killed in street crimes, and it seems very strange to me to call street crime a crime against humanity.

Etc., etc., etc.

I conclude that my intuitions about whether something counts as a "crime against humanity" aren't especially well calibrated, and therefore that I should be reluctant to use those intuitions as evidence when thinking about scales way outside my normal experience.

And of course, the value-to-me of an individual can vary by many orders of magnitude, depending on the individual. I would likely have chosen to allow my nephew's fetal development to continue rather than preserve the life of a randomly chosen adult, for example, but I don't generally value the development of a fetus more than an adult.

But leaving the "crimes against humanity" labeling business aside, and assuming some typical value for a fetus and an adult, then sure, if I value a developing fetus 1/N as much as I value a living adult, then I prefer to allow 1 adult to die rather than allow the development of N fetuses to be terminated.

comment by [deleted] · 2011-12-24T05:38:17.399Z · LW(p) · GW(p)

Actually, much worse: Roe vs Wade effectively enables serial genocide.

comment by David_Gerard · 2011-12-23T19:36:55.503Z · LW(p) · GW(p)

This appears to be a different frame for the death penalty debate.

comment by [deleted] · 2011-12-23T17:42:45.763Z · LW(p) · GW(p)

Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly repair them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of cryopreserving them?

Why not "cure" them by building a mind that can bare them without too much distress? A sufficiently different mind can I think bear any thoughts a human mind "diseased" or not can have. Do such minds necessarily or even probably hold no value to us?

At least 50% of US university students have had a homicidal fantasy this year. Guess how common rape fantasies are. To a more "sensitive" mind something like that could seem horrifying. But how ... can they think such things and still have sympathy and not go around stabbing each other all the time?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If cryopreservation means first and foremost mind preservation, maybe there are some minds that just shouldn't be preserved. Maybe the future would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Feelings, memories, thoughts, if you somehow carve them away from a functioning, running brain, are merely information. Are you speaking of not reviving certain minds with such ideas, or of that information being permanently erased?

BTW Have you read "Beyond the reach of God" before?

Personally, I think the assumption of "better safe than sorry" is a good-enough justification for mass cryonics (or for cryonics generally), but I think that assumption, like any, should at least be questioned.

I agree.

comment by DanielLC · 2011-12-23T20:54:56.785Z · LW(p) · GW(p)

Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes.

What's the difference?

In fact, there is evidence that the brains of serial killers are measurably different from those of normal people.

They are measurably different. The simplest measure is the number of people murdered by them.

We could just keep them as-is and use different methods of keeping them from killing each other.

I don't see any benefit of using an old mind over making a new one, so I definitely don't think every life is really worth preserving.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-23T21:01:30.478Z · LW(p) · GW(p)

I don't see any benefit of using an old mind over making a new one

One consideration is that permanently terminated lives could be significantly undesirable, compared to continuing ones, which could outweigh the benefits of implementing a different, better mind instead.

Replies from: DanielLC
comment by DanielLC · 2011-12-23T21:10:24.949Z · LW(p) · GW(p)

But all lives are permanently terminated immediately after they're created. They're then replaced with slightly different ones.

I don't like the idea of having a utility system complicated enough to distinguish between those things.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-23T21:33:15.637Z · LW(p) · GW(p)

I don't like the idea of having a utility system complicated enough to distinguish between those things.

You already do.

Replies from: DanielLC
comment by DanielLC · 2011-12-24T01:15:00.057Z · LW(p) · GW(p)

No I don't.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-24T01:21:28.558Z · LW(p) · GW(p)

That link describes what you believe, not why those beliefs are true; my point was that you're mistaken.

Replies from: DanielLC
comment by DanielLC · 2011-12-24T07:36:57.728Z · LW(p) · GW(p)

No, I'm not. I know my own values. I know the utopia dictated by my values. Please do not accuse me of being mistaken.

Replies from: FAWS, Vladimir_Nesov
comment by FAWS · 2011-12-24T12:37:02.873Z · LW(p) · GW(p)

I think you and Vladimir are talking about different things. You probably follow your surface-level moral theory as much as any human follows theirs, and unlike most people you seem to be willing to bite the bullets implied, but you don't follow it the way an AI would follow its utility function. You still notice the bullets you bite, you do things for all sorts of other reasons when you don't have the opportunity to think them through in terms of total happiness caused, and you probably eliminate all sorts of strategies that might raise total happiness for other reasons before they rise to conscious attention and you can evaluate them properly.

Replies from: DanielLC
comment by DanielLC · 2011-12-24T20:25:25.376Z · LW(p) · GW(p)

If you want to really get down to it, I am not a utility maximizer. Insomuch as I try to maximize any sort of utility, I try to bring happiness. I may feel bad thinking about things that have a net increase of happiness, but I still try to bring them about.

If imagining a possible future makes me feel bad, this is a fact about me, and not a fact about the possible future. I wish to get rid of the bad feeling. My instinct is to do it by averting that future, but I know better. I just make sure it's not a future in which I feel bad about it.

comment by Vladimir_Nesov · 2011-12-24T12:20:56.084Z · LW(p) · GW(p)

I know my own values. I know the utopia dictated by my values.

Why do you believe you do?

Please do not accuse me of being mistaken.

That's an antiproductive attitude for a rationalist.

Replies from: DanielLC
comment by DanielLC · 2011-12-24T20:34:32.175Z · LW(p) · GW(p)

Why do you believe you do?

I know my own values because they're what I try to maximize. All that's apparent to me is my qualia, and, while I concede that other people have qualia, I see no importance in anything that isn't someone's qualia.

I mentioned that I know the utopia dictated by my values to show that I didn't just convince myself that it's all that I care about and ignore its implications. The utopia is tiling the universe with orgasmium.

That's an antiproductive attitude for a rationalist.

If you have a particular reason to believe that I am mistaken, please say so. If you simply accuse me of being mistaken about my own values, that doesn't help. You are not me, and you can't just assume I am like you. You don't know nearly as much about me as I do.

You gave me a reason why I might not know about my own values. I showed that I had already taken this into account. You did not ask for clarification. You did not find a reason I may have failed to correctly take it into account. You did not give me another reason I might be incorrect. You simply claimed that I was wrong.

Replies from: None
comment by [deleted] · 2011-12-26T02:39:17.041Z · LW(p) · GW(p)

I know my own values because they're what I try to maximize. All that's apparent to me is my qualia, and, while I concede that other people have qualia, I see no importance in anything that isn't someone's qualia.

I'm not disputing your line of thought, but I still wonder about something I touched upon before: if neuroscience or the like were to dissolve qualia into smaller components, and it became apparent to you that "there is no unitary thing such as qualia/mind frames; the momentary experience is reducible, like an anthill, or a screen of pixels", would that prompt you to reassess your utopia?

Replies from: dlthomas
comment by dlthomas · 2011-12-26T05:33:13.623Z · LW(p) · GW(p)

I see no reason that the reducibility of something would deny its potential status as something to be valued. I could value whirlpools without denying that they're made of water, or (for an example closer to reality) literature without denying that it's made up of words which are made up of letters.

Replies from: None
comment by [deleted] · 2012-01-10T13:27:36.071Z · LW(p) · GW(p)

Sorry for taking such a long time to answer.

Agreed. But if you read DanielLC's argument, he seems to think that the reducibility of, for example, personal identity makes it unimportant in terms of value, since it can be reduced to "mind frames" over time. Basically, I wonder: if his understanding of qualia (if such a thing even really exists) turned out to be totally wrong, or if they could be reduced, would he then claim that mind frames are morally unimportant because they can be reduced to something else, or that the concept is misleading?

comment by Shmi (shminux) · 2011-12-23T19:33:01.343Z · LW(p) · GW(p)

I do not understand this obsession with preserving every living mind (it seems to me that EY and LW in general implicitly or explicitly subscribe to the popular notion that a body is a vessel for the mind).

Those who wish to be frozen and can afford it are free to take their chances, those who believe in eternal soul or reincarnation are free to take theirs, those who would rather die forever should not be judged, either.

It sure sucks if you want to get frozen but cannot afford it, and it is a reasonable goal to reduce cost/improve odds of revival, but it is only one of many useful goals to work on.

Replies from: Vladimir_Nesov, Stuart_Armstrong, dlthomas, lessdazed
comment by Vladimir_Nesov · 2011-12-23T19:45:08.951Z · LW(p) · GW(p)

Those who wish to be frozen and can afford it are free to take their chances, those who believe in eternal soul or reincarnation are free to take theirs, those who would rather die forever should not be judged, either.

There are laws of thought, and correct decisions (that we don't know very well). What people believe is mostly irrelevant to what the right thing to do is.

People might have the right (power) to do whatever they believe, they might indeed in practice be free to implement any decision they choose, and they might intrinsically value this power, but this fact is irrelevant for judging the correctness of their decisions.

(In short, I object to the "everyone can make up their own correctness" mindset. We do know better than to let considerations about souls and afterlife determine the right answer.)

Replies from: shminux
comment by Shmi (shminux) · 2011-12-23T21:19:28.825Z · LW(p) · GW(p)

I am not sure what you mean by "correct" and "right answer" in this case.

Life is not math. If your goal is to improve the subjective quality of life of each person, there is no clear-cut answer to how to do that. If your goal is something "bigger", you'd better state what it is upfront, so that it and the means to achieve it can be discussed first.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-12-23T22:48:02.706Z · LW(p) · GW(p)

Life is not math.

It's much harder than the most difficult math that humans are able to do, but the answers are still non-mysterious, and it is your calling and power as a person to seek them.

comment by Stuart_Armstrong · 2011-12-24T10:19:37.568Z · LW(p) · GW(p)

It seems very likely that if cryopreservation were the default option - or even just a rather standard option - many, many, many more people would go for it than do at present. And still while exercising free choice. Also, there seem to be few religious commandments against cryopreservation, so (if it worked) there would always be the option of dying or reincarnating later on.

So, two possible worlds, both with free choice, but with much more death in one than in the other - I can see why we'd want to tilt the balance away from the deadlier one.

Replies from: shminux
comment by Shmi (shminux) · 2011-12-24T19:37:01.102Z · LW(p) · GW(p)

I guess I just don't give as much value to a generic human life as you do.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2011-12-28T14:03:09.568Z · LW(p) · GW(p)

Do you give it any value? Would generic acceptance of cryopreservation be something you'd take if it were free?

Replies from: shminux
comment by Shmi (shminux) · 2011-12-28T21:32:45.771Z · LW(p) · GW(p)

Not sure what you are asking. I would pay for cryo for myself if I could afford it and considered it a worthwhile investment vs other alternatives (such as a nice vacation while still alive).

Presumably, if cryo was affordable and mainstream enough, many people would go for it. After all, people pay more to get buried rather than cremated, and there is precious little rationale for that.

comment by dlthomas · 2011-12-23T19:35:22.948Z · LW(p) · GW(p)

Those who wish to be frozen and can afford it are free to take their chances, those who believe in eternal soul or reincarnation are free to take theirs, those who would rather die forever should not be judged, either.

This seems to intersect non-trivially with positions on suicide.

Replies from: shminux
comment by Shmi (shminux) · 2011-12-23T20:09:17.868Z · LW(p) · GW(p)

This seems to intersect non-trivially with positions on suicide.

There are many grey areas, sure, some more politically/emotionally charged than others. Let's not complicate things by adding the terms like suicide, abortion and euthanasia into the mix.

comment by lessdazed · 2011-12-24T06:53:01.400Z · LW(p) · GW(p)

those who would rather die forever should not be judged, either.

What does "not be judged" mean?

it is a reasonable goal to reduce cost/improve odds of revival, but it is only one of many useful goals to work on.

Who disputes this?

comment by TheOtherDave · 2011-12-23T21:49:54.950Z · LW(p) · GW(p)

To truly save them, they would likely need to have many or all of their memories erased.

This is question-begging. Sure, if my experiences and memories are a net negative, such that I and my surroundings are improved by wiping all of that away and starting fresh, then there's no particular reason to preserve those experiences and memories. Of course.

OTOH, if they're a net positive, then there is.

comment by brilee · 2011-12-24T01:41:17.498Z · LW(p) · GW(p)

I am hesitant, and I think many others may be hesitant, to engage in a debate on eugenics, not because it might trigger strong feelings (I think we as a community are capable of setting those aside), but because of the way it might be perceived by casual visitors to the site.

It would be nice if we could get some sort of agreement to ignore political correctness/face the consequences of political incorrectness and engage in what I think would be a very healthy debate.

comment by Clara (she/they) · 2022-11-18T21:40:08.737Z · LW(p) · GW(p)

If erasing the memories were done by artificially stimulating the mechanism that causes normal forgetting, I think they'd be the same person. After all, I don't consider myself a new person whenever I forget something. But maybe there's something I'm missing. 

comment by codythegreen · 2011-12-24T03:36:10.184Z · LW(p) · GW(p)

To deny any thought, feeling, or memory, or the mind that harbored it... seems a bit extreme. I don't know if there's a term for it? Maybe we should save our imperfections; in many ways they are what make us human.

(I'm sorry if my entries are not as polished as most; I'm a little unrefined.)

Replies from: Karmakaiser
comment by Karmakaiser · 2011-12-26T23:52:23.899Z · LW(p) · GW(p)

Are you sure of this? I could point to many of our technological imperfections that caused great uneasiness in their day but are now considered normal and even natural. Further, I could point to the eradication of diseases which were considered scourges of God and ultimately "part of the deal" of being human. Part of being a transhumanist is a belief that such contracts can and should be rewritten as humanity becomes more technologically advanced.

Now changing a personality through manufactured means is considered an acceptable treatment of Autism, Bi-Polar Disorder, ADHD, Depression and many others. Now, the drugging of conscious agents against their will is a moral problem, but to deny that as an option seems to me to be backward rather than forward thinking. If I were offered a pill that would render my innate biases inert I would have little hesitation in taking it. As I simulate the decision now, the only objections I can think of are social biases: wondering what people would think of a person with a black belt in rationality.

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-01-09T07:31:41.944Z · LW(p) · GW(p)

Now changing a personality through manufactured means is considered an acceptable treatment of Autism, Bi-Polar Disorder, ADHD, Depression and many others.

Actually, with regard to autism, at least, those are fighting words.

comment by [deleted] · 2011-12-23T17:42:25.071Z · LW(p) · GW(p)

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of mental illness and are thus not fully responsible for their crimes. In fact, there is evidence that the brains of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly repair them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of cryopreserving them?

For a sufficiently advanced civilization, all brains are measurably different from each other. I think you should reconsider, or at the very least expand on, what you mean by "fully responsible". As it is used by Western legal systems and, implicitly, by most people, it is not a coherent concept. Neither is "sick" or "healthy", or even "cure", for that matter. To prime you on this and perhaps dissolve some confusion, I recommend reading this article.

Perhaps not being fully responsible means not being responsible because one acted under a sufficiently false map, or even any false map at all, because we can't infer revealed preferences from that, or perhaps because of the golden rule. Perhaps not being fully responsible means having values sufficiently different from most people's to preclude peaceful coexistence, perhaps values different from your own, or from those of someone to whom you outsource your moral autonomy. Perhaps not being fully responsible means having a brain I can't yet change with technology so that it will promote my values, and which I must thus waste through imprisonment or destruction.

As you can see, I can't tell you, nor can I tell what you meant here. :)

Replies from: RationallyOptimistic
comment by RationallyOptimistic · 2011-12-23T18:00:29.325Z · LW(p) · GW(p)

By "not fully responsible" I was trying to sidestep a free will debate. My point was that "bad" people might just have "bad" brains; perhaps they were exposed to too much serotonin while in the womb or inherited a bad set of genes, and that plus some trauma early in life might have damaged them in such a way that they were willing to commit unspeakable acts that "normal" people would not. I think it's not unlikely that whatever makes a serial killer a serial killer will eventually be identified, screened for and cured. But what to do with existing serial killers is different problem.

comment by Bruno_Coelho · 2011-12-24T15:02:35.812Z · LW(p) · GW(p)

I ask a different question: in a time-constrained scenario, which lives are worth losing? Some people are elevating the risk of human extinction -- producing weapons of mass destruction -- perhaps for this they deserve to die?

comment by lessdazed · 2011-12-23T18:35:48.402Z · LW(p) · GW(p)

Is every what worth preserving?

Replies from: shminux, None
comment by Shmi (shminux) · 2011-12-23T19:03:38.117Z · LW(p) · GW(p)

Every mind is sacred,

Every mind is great.

If a mind is wasted,

EY gets quite irate.

Replies from: Normal_Anomaly, codythegreen
comment by Normal_Anomaly · 2011-12-24T17:58:21.210Z · LW(p) · GW(p)

Upvoted, but it would scan better if you took out the "quite". (Assuming you pronounce "EY" as 2 syllables.)

comment by codythegreen · 2011-12-24T03:45:23.139Z · LW(p) · GW(p)

100% Agree.

comment by [deleted] · 2011-12-23T19:24:35.188Z · LW(p) · GW(p)

Sorry, that made me laugh, but in order to preserve the signal-to-noise ratio I downvoted the post. If DMing has taught me anything, it has taught me that Monty Python references need to be nipped in the bud.

Replies from: lessdazed
comment by lessdazed · 2011-12-24T06:59:36.995Z · LW(p) · GW(p)

Robert Heinlein once claimed (tongue-in-cheek, I hope) that the "simplest explanation" is always: "The woman down the street is a witch; she did it." Eleven words - not many physics papers can beat that.

Faced with this challenge, there are two different roads you can take.

First, you can ask: "The woman down the street is a what?" Just because English has one word to indicate a concept, doesn't mean that the concept itself is simple. Suppose you were talking to aliens who didn't know about witches, women, or streets - how long would it take you to explain your theory to them?

All italicization is potentially also an allusion to something by EY.

comment by malthrin · 2011-12-23T19:19:00.261Z · LW(p) · GW(p)

Voted you down. This is deontologist thought in transhumanist wrapping paper.

Ignoring the debate concerning the merits of eternal paradise itself and the question of Heaven's existence, I would like to question the assumption that every soul is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of Satanic corruption and are thus not fully responsible for their crimes. In fact, there is evidence that the souls of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly save them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of saving them?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If eternal salvation means first and foremost soul preservation, maybe there are some souls that just shouldn't be saved. Maybe Heaven would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Replies from: steven0461, dlthomas
comment by steven0461 · 2011-12-23T20:14:41.717Z · LW(p) · GW(p)

would be a better, happier place

Sure sounds like consequentialism to me.

comment by dlthomas · 2011-12-23T19:25:56.115Z · LW(p) · GW(p)

Is consequentialism an essential part of transhumanism?

Replies from: Alicorn, Vladimir_Nesov
comment by Alicorn · 2011-12-23T19:30:29.077Z · LW(p) · GW(p)

No.

comment by Vladimir_Nesov · 2011-12-23T20:20:54.877Z · LW(p) · GW(p)

Why should that be an interesting question? (What's "transhumanism", again?) What matters is whether this allows you to find correct decisions, perhaps whether it's a useful sense of "correct" to rely on when you have something to protect.

Replies from: dlthomas
comment by dlthomas · 2011-12-23T20:22:22.432Z · LW(p) · GW(p)

It seemed relevant to the parent's objection to the original article.