Discussion of LW going on at Felicifia
post by somervta · 2013-01-15T05:59:03.984Z · LW · GW · Legacy · 44 comments
So I recently found myself in a bit of an awkward position.
I frequent several Facebook discussion groups, a number of which are about LW-related issues. In one of these groups, a discussion about identity/cryonics led to one about the end of Massimo Pigliucci's recent post (the bit where he says that the idea of uploading is a form of dualism), which turned into a discussion of LW views in general, at which point I revealed that I was, in fact, a LWer.
This put me in the awkward position of defending/explicating the views of the entirety of LW. Now, I've only been reading LessWrong for <10 months, and I've only recently become a more active part of the community. I've been a transhumanist for less time than that, and seriously thinking about cryonics and identity for even less, and yet I suddenly found myself speaking for the intersection of all those groups.
The discussion was crossposted to Felicifia a few days ago, and I realized that I was possibly out of my depth. I'm hoping I haven't grossly misrepresented anyone, but rereading the comments, I'm no longer sure.
Felicifia:
http://www.felicifia.org/viewtopic.php?f=23&t=801
Original FB Group:
https://www.facebook.com/groups/utilitarianism/permalink/318563281580856/
EDIT: If you're commenting on the topic, please state whether or not you'd mind me quoting you at Felicifia. (If you have a Felicifia account and you'd prefer to post it there yourself, be my guest.)
44 comments
comment by Viliam_Bur · 2013-01-15T10:22:26.216Z · LW(p) · GW(p)
You know, "politics is the mindkiller" is not only about the conventional meaning of the word "politics". It is about tribes and belonging. Right now you are conflicted as a member of two tribes, and you may feel pressured to choose your loyalty, and protect your status in the selected tribe. Which is not a good epistemic state.
Now on the topic:
Cryonics uses up far more resources [than cancer treatment]
Do we have any specific numbers here? I think the values for "cancer treatment" would depend on the exact kind of treatment and also how long the patient survives, but I don't have an estimate.
If cryonics works, [family and friends] still suffer the same [grief].
Wrong alief. Despite saying "if cryonics works" the author in the rest of the sentence still expects that it does not. Otherwise, they would also include the happiness of family and friends after the frozen person is cured. That is what "if cryonics works" means.
Expressed this way, it is like saying (for a conventional treatment of a conventional disease) that whether doctors can or cannot cure the disease there is no difference, because either way family and friends suffer grief for having the person taken to the hospital. Yes, they do. But in one case, the person also returns from the hospital. That's the whole point of taking people to hospitals, isn't it?
trying to integrate [cryonics] better into society uses up time and resources that could have been spent on higher expectation activities
Technically, by following this argument, we also should stop curing cancer, because that money could also be used for GiveWell charities and animal welfare. Suddenly, this argument does not sound so appealing. Why? I guess because cryonics is far; curing cancer (yours, or in your family) is near; and GiveWell charities are also far, but less so than cryonics. Removing a near suffering feels more important than removing a far suffering. That's human; but let's not pretend that we did a utilitarian calculation here when we actually used a completely different decision procedure.
...but you already said that.
I think that this discussion is mostly a waste of time, simply because your opponent's true rejection seems to be "cryonics does not work". Everything else is written under this alief, and under it the arguments make sense: if cryonics does not work, of course wasting money on cryonics is stupid. But instead of saying this openly, there is a rationalization about why utilitarians should do this and shouldn't do that, pretending that we have numbers that prove "utility(cancer cure) > utility(animal welfare) > utility(cryonics)". Also, when discussing cryonics you are supposed to be a perfect utilitarian, willing to sacrifice your life for someone else's greater benefit, but you are allowed to make a selfish exception from perfect utilitarianism when curing your own cancer.
For me, the only interesting argument was the one that a smart human in a pre-Singularity world is more useful than a smart human in a post-Singularity world, therefore curing smart people now is more useful than freezing them and curing them in future.
Replies from: Eliezer_Yudkowsky, Arepo, DanielLC, lavalamp
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-16T16:54:48.157Z · LW(p) · GW(p)
I don't feel grief when somebody gets cryosuspended. Seriously, I don't, so far as I can tell. I feel awful when I read about someone who wasn't cryosuspended.
Replies from: shminux, MugaSofer, DaFranker
↑ comment by Shmi (shminux) · 2013-01-16T17:46:16.707Z · LW(p) · GW(p)
If your estimate of the probability of their eventual revival is p, shouldn't you feel (1-p) fraction of grief?
Replies from: Eliezer_Yudkowsky, DaFranker
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-16T19:57:25.051Z · LW(p) · GW(p)
Would that be useful? I expect cryonics to basically work on a technical level. Most of my probability mass for not seeing them again is concentrated in Everett branches where I and the rest of the human species are dead, and for some odd reason that feels like it should make a difference. If somebody goes to Australia for fifty years, is perfectly healthy, and most of my probability mass for not seeing them again is the Earth being wiped out in the meanwhile, I wouldn't mourn them more than I'd mourn anyone else in danger.
Replies from: CarlShulman, wedrifid, Viliam_Bur, shminux, MugaSofer
↑ comment by CarlShulman · 2013-01-16T20:42:44.459Z · LW(p) · GW(p)
I expect cryonics to basically work on a technical level.
Even given this, I would doubt:
Most of my probability mass for not seeing them again is concentrated in Everett branches where I and the rest of the human species are dead,
E.g. what about the fact that cryonics organizations have the financial structure of precarious defined-benefit pension plans during a demographic decline and massive population aging, save that those currently receiving pensions can't complain if they are cut?
↑ comment by wedrifid · 2013-01-17T18:21:07.577Z · LW(p) · GW(p)
I wouldn't mourn them more than I'd mourn anyone else in danger.
I share and endorse-as-psychologically-healthy your general attitude to grief in this kind of situation. Both the broad principle "Would that be useful?" and the more specific evaluation of the actual loss in expected-deaths with existential considerations in mind. That said, I would suggest that there is in fact reason to mourn more than for anyone else in danger. To the extent that mourning bad things is desirable, in this case you would mourn (1 - p(chance of positive transhumanist future)) * (value of expected life if immediate cause of death wasn't there).
Compare two universes:
- Luna is living contentedly at the age of 150 years. Then someone MESSES WITH TIME, the planet explodes, and so Rocks Fall, Everybody Dies.
- Luna dies at 25, is cryopreserved, and then 125 years later someone MESSES WITH TIME, the planet explodes, and so Rocks Fall, Everybody Dies.
All else being equal, I prefer the first universe to the second one. I would pay more Sickles for the first universe to exist than for the second. If I were a person inclined towards mourning, were in such a universe, the temporary death event occurred unexpectedly, and I happened to assign p(DOOM) = 0.8, then I would mourn the loss 0.8 * (however much I care about Luna's previously expected 125 years). This is in addition to having a much greater preference that the DOOM doesn't occur, both for Luna's sake and everyone else's.
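For concreteness, here is a minimal sketch of that expected-mourning arithmetic. Only the formula comes from the comment above; the function name, the decision to measure the stake in "years cared about", and every concrete number below are illustrative assumptions.

```python
# Sketch of the expected-mourning arithmetic; all concrete numbers are
# illustrative assumptions, only the formula comes from the comment.

def expected_mourning(p_positive_future: float, value_if_revived: float) -> float:
    """Mourn in proportion to the chance the cryopreserved person stays dead,
    times the value of the life they would otherwise have had."""
    return (1 - p_positive_future) * value_if_revived

# Luna example: p(DOOM) = 0.8, so p(positive transhumanist future) = 0.2,
# and the stake is however much one cares about Luna's expected 125 years
# (here measured, arbitrarily, in "years cared about").
luna_years_at_stake = 125
print(expected_mourning(p_positive_future=0.2, value_if_revived=luna_years_at_stake))
# -> 100.0, i.e. mourn 0.8 of what one would mourn if revival were impossible
```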
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-18T17:13:41.893Z · LW(p) · GW(p)
I agree that the first universe is better, but I'd be way too busy mourning the death of the planet to mourn the interval between those two outcomes if the planet was actually dead. You could call that mental accounting, but isn't everything?
↑ comment by Viliam_Bur · 2013-01-16T21:12:46.670Z · LW(p) · GW(p)
Makes sense. I was thinking about the chance of cryonics working in general... but it also makes sense to think about the chances of cryonics working conditional on other things -- such as civilization not collapsing, etc. Those should be higher.
For example, the chance of cryonics working if we have a Friendly AI seems pretty good.
↑ comment by Shmi (shminux) · 2013-01-16T20:48:23.720Z · LW(p) · GW(p)
OK, I think I see your point. You wouldn't grieve over someone who is incommunicado on a perilous journey, even if you are quite sure you will never hear from them again, even though they might well be dead already. As long as there is a non-zero chance of them being alive, you treat them as such. And you obviously expect cryonics to have a fair chance of success, so you treat cryosuspended people as alive.
Replies from: wedrifid
↑ comment by wedrifid · 2013-01-17T17:40:14.807Z · LW(p) · GW(p)
You wouldn't grieve over someone who is incommunicado on a perilous journey, even if you are quite sure you will never hear from them again, even though they might well be dead already. As long as there is a non-zero chance of them being alive, you treat them as such. And you obviously expect cryonics to have a fair chance of success, so you treat cryosuspended people as alive.
There is an additional component to Eliezer's comment that I suggest is important. In particular, your scenario only mentions the peril of the traveler, whereas Eliezer emphasizes that the traveler is in (approximately) the same amount of danger as he and everyone else. So the only additional loss is the lack of communication.
Consider an example of the kind of thing that matches your description but that I infer would result in Eliezer experiencing grief: Eliezer has a counterfactual relative that he loves dearly. The relative isn't especially rational and either indulges false beliefs or uses a biased and flawed decision-making procedure. The biased beliefs or decision making leads the relative to go on an absolutely stupid journey that has a 95% chance of failure and death, and for no particularly good reason. (Maybe climbing Everest despite a medical condition that he is in denial about, or something.) In such a case of highly probable death of a loved one, Eliezer could be expected to grieve for the probable pointless death.
The above is very different from the case where the other person merely ends up on a slightly different perilous path than the one Eliezer is on himself.
↑ comment by DaFranker · 2013-01-16T18:17:41.536Z · LW(p) · GW(p)
If your estimate of the probability of their eventual revival is p, shouldn't you feel (1-p) fraction of grief?
Bwuh. That doesn't seem to add up to normality.
If a loved one who has no intention of ever signing up for life-extension techniques (or suspended animation) departs for a distant country in a final manner with no intention to return or ever contact you again, should you feel 1 grief?
Your system works when one attaches grief to the "currently dead and non-functional" state of a person, but when one attaches it to "mind irrecoverably destroyed such that it will never experience again", things are different. This will vary very dramatically from person to person, AFAIK.
↑ comment by DaFranker · 2013-01-16T17:19:43.465Z · LW(p) · GW(p)
The caveat here is that whether grief activates or not will depend heavily on whether IsFrozen() is closer to IsDead() or to IsSleeping() (or IsOnATrip() or something similar implying a prolonged period of no contact) in the synaptic thought infrastructure* and experience processing of any person's brain.
If learning of someone being cryo'd fires off thoughts and memory-patterns in the brain that are more like those fired off when learning of death than like those fired off when learning of sleep / coma / prolonged absence in a faraway country, then people will likely feel grief when learning of someone being cryo'd.
* Am I using these terms correctly? I'm not a neuro-anything expert (or even serious amateur), so I might be using words that point at completely different places than where I want, or have no real common/established meaning.
↑ comment by Arepo · 2013-01-17T13:39:56.130Z · LW(p) · GW(p)
I've written a full response to your comments on Felicifia (I'm not going to discuss this in three different venues), but...
your opponent's true rejection seems to be "cryonics does not work"
This sort of groundless speculation about my beliefs (and its subsequent upvoting success), a) in a thread where I’ve said nothing about them, b) where I’ve made no arguments to whose soundness the eventual success/failure of cryo would be at all relevant, and c) where the speculator has made remarks that demonstrate he hasn’t even read the arguments he’s dismissing (among other things a reductio ad absurdum to an ‘absurd’ conclusion which I’ve already shown I hold), does not make me more confident that the atmosphere on this site supports proper scepticism.
I.e., you're projecting.
Replies from: Viliam_Bur, lavalamp, somervta
↑ comment by Viliam_Bur · 2013-01-17T23:12:26.052Z · LW(p) · GW(p)
I apologize for misinterpreting your position. I wrote what at the moment seemed to me as the most likely explanation.
↑ comment by DanielLC · 2013-01-15T20:18:37.432Z · LW(p) · GW(p)
Otherwise, they would also include the happiness of family and friends after the frozen person is cured.
That only works if their family and friends are cryopreserved themselves, or live until it becomes possible to wake them up.
That being said, I don't think the pain of grief is comparable to death.
Technically, by following this argument, we also should stop curing cancer, because that money could also be used for GiveWell charities and animal welfare.
And that's exactly why I'm against donating to cancer research. If I had the opportunity to funnel all that money to something useful, I would. If I had the opportunity to choose between cancer, cryonics, or the government using the money in another fairly useless manner, I don't know which I'd pick.
Replies from: ahartell
↑ comment by ahartell · 2013-01-15T21:36:52.491Z · LW(p) · GW(p)
And that's exactly why I'm against donating to cancer research.
I think opposition to donating to cancer research (as opposed to donating to more cost efficient options) is obvious and accepted (here). Still, I'm selfish enough that if I had cancer I would treat it, which is what was actually being considered/compared to cryonics.
I'm sure this has come up before, but I think there are some cases in which cancer research donations make sense. Often donations geared towards curing specific diseases are prompted by some personal emotional connection to the disease (e.g., someone in one's family suffered or died as a result of it), and I expect these kinds of emotional donations don't replace other would-be-efficient charity donations but instead replace general spending or saving. That said, I don't actually know if that's the case.
Replies from: DanielLC
↑ comment by DanielLC · 2013-01-17T00:09:07.316Z · LW(p) · GW(p)
Still, I'm selfish enough that if I had cancer I would treat it, which is what was actually being considered/compared to cryonics.
I often do the selfish thing when I don't have enough willpower to do the right thing, but it doesn't take a whole lot of willpower to not sign up for cryonics. Based on the fact that someone coined the word "cryocrastination", I'm betting it takes quite a bit of willpower to sign up.
↑ comment by lavalamp · 2013-01-17T15:28:51.055Z · LW(p) · GW(p)
If cryonics works, [family and friends] still suffer the same [grief].
Wrong alief. Despite saying "if cryonics works" the author in the rest of the sentence still expects that it does not.
I disagree with this conclusion. The validity of the statement depends on the beliefs of the family and friends, not the beliefs of the author of the sentence or the individual being frozen. For average values of family and friends, it's probably even true.
comment by Peter Wildeford (peter_hurford) · 2013-01-15T19:45:35.858Z · LW(p) · GW(p)
As someone who both frequents LessWrong and is the current "head administrator" of Felicifia, I can say there's really nothing wrong with being a member of both. There's no consensus view of cryonics on either forum. I personally do not have a strong view one way or the other on cryonics (though I have not signed up and currently do not intend to).
I know how it can feel to be debating out of one's depth. Usually, but not always, my being out of depth has meant that I was wrong. I try not to win arguments, because winning arguments typically means you've already written the bottom line and are not open to new reasoning, a strategy not very welcome on either LW or Felicifia. I'm not saying that you're doing this, or even that you're wrong (I'm out of my depth on cryonics relative to you!), but just issuing a general warning.
And I hope you continue to post on Felicifia! We could use more detractors! I find that echo chambers lack a vital safety check against groupthink.
comment by Manfred · 2013-01-15T09:41:29.082Z · LW(p) · GW(p)
Welp, have fun being out of your depth :D
One piece of advice is that trying to win arguments is pretty much pointless. You should be trying to help the other person understand you, understand the other person, or understand yourself.
Replies from: somervta
↑ comment by somervta · 2013-01-15T10:42:06.155Z · LW(p) · GW(p)
I'm not really trying to win the argument. I'm not really sure what I'm trying to do... Originally I was trying to explain my/LWers' point of view; now I'm not so sure. I mean, I love arguments, they're fun, but this doesn't feel like one.
Replies from: Error, Viliam_Bur
↑ comment by Error · 2013-01-15T12:30:33.419Z · LW(p) · GW(p)
Figuring out what you're trying to do -- whatever it is -- might be helpful for its own sake. I, for one, am most prone to say stupid things when I haven't worked out what I'm trying to communicate in advance.
↑ comment by Viliam_Bur · 2013-01-15T14:25:14.285Z · LW(p) · GW(p)
I'm not really sure what I'm trying to do
To make all your friends agree with each other, thus creating a pleasant social environment for yourself? ;-)
comment by Oscar_Cunningham · 2013-01-15T18:34:13.920Z · LW(p) · GW(p)
...I was, in fact, a LWer.
There's no such thing. Keep your identity small, and all that.
Replies from: None, torekp
↑ comment by [deleted] · 2013-01-16T17:44:02.956Z · LW(p) · GW(p)
Denying the effects of group affiliation on psychology won't make 'em go away.
Replies from: Vladimir_Nesov, Oscar_Cunningham
↑ comment by Vladimir_Nesov · 2013-01-16T21:07:58.944Z · LW(p) · GW(p)
Disapproving of this effect will probably reduce it.
Replies from: None, TimS
↑ comment by [deleted] · 2013-01-16T21:32:32.443Z · LW(p) · GW(p)
Based on...?
Replies from: Risto_Saarelma
↑ comment by Risto_Saarelma · 2013-01-18T06:27:26.378Z · LW(p) · GW(p)
Occasionally noticing you're about to do something stupid because you think you're affiliated with a group, now that you're aware it happens all the time, and trying to do better?
Replies from: None
↑ comment by Oscar_Cunningham · 2013-01-16T20:44:04.964Z · LW(p) · GW(p)
Thinking of yourself as not part of the group will help though.
Replies from: None
↑ comment by [deleted] · 2013-01-16T20:46:16.798Z · LW(p) · GW(p)
Will it? How do you know? What do you mean by "help"? Why do you believe this? How confident are you?
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2013-01-16T21:17:37.284Z · LW(p) · GW(p)
Good questions. I'm afraid that's just my intuition, with no experiments backing it up. Do you know of any relevant data? I can't think of a way to structure a good experiment; how would we measure group affiliation except by getting people to report it?
comment by ChristianKl · 2013-01-16T18:05:33.194Z · LW(p) · GW(p)
According to the last survey, the number of LessWrongers who are signed up for cryonics is less than the number of LessWrongers who are theists.
It just seems that the people who advocate cryonics are more vocal than those who do not.
Replies from: JoshuaZ, RolfAndreassen
↑ comment by JoshuaZ · 2013-01-17T05:22:02.343Z · LW(p) · GW(p)
Yes, but keep in mind that many people on LW are fairly young and healthy so signing up for cryonics now isn't necessarily something that makes sense. And there are a fair number of people here who expect to never die or for all of humanity to be wiped out. (Essentially assigning a high probability to a near singularity.)
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-01-17T09:48:07.554Z · LW(p) · GW(p)
The cost of signing up for cryonics is the cost of life insurance. It's cheaper when you are young.
Replies from: gwern
↑ comment by gwern · 2013-01-17T18:57:00.149Z · LW(p) · GW(p)
Opportunity costs are highest when you are young, on top of the serious risk, which JoshuaZ mentions, that you will, ex post*, not want to engage in cryonics and hence your capital will be inefficiently locked up in life insurance.
* Lots of possible reasons to change one's mind. Perhaps research will prove, over the coming decades, that cryonics destroys necessary information. Or perhaps there will be a massive loss of patients, leading to a corresponding spike in the storage part of the cryonics Drake equation. Perhaps there will just be a general hope function-style decay of prospects for cryonics as the decades drag on with no improvement or progress.
Another issue might be that insurance rates are too high right now: interest rates have been at rock bottom for years, so what's the implicit interest rate in life insurance pricing? Rock bottom as well, such that when normality returns you forfeit even more returns?
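To make the opportunity-cost point concrete, here is a rough sketch with entirely made-up numbers; the premium size, the horizon, and both rates of return below are assumptions for illustration, not claims about actual policy pricing.

```python
# Rough illustration of the opportunity cost of paying premiums young instead of
# investing the same cash flow. Every number here is an assumption for the example.

def future_value_of_annuity(annual_payment: float, rate: float, n_years: int) -> float:
    """Future value of an ordinary annuity: what a stream of equal annual
    payments grows to after n_years at the given rate of return."""
    return annual_payment * (((1 + rate) ** n_years - 1) / rate)

annual_premium = 300            # assumed yearly life-insurance premium for a young person
years = 40                      # assumed horizon before cryonics plausibly matters
market_return = 0.05            # assumed return if the money were invested instead
implicit_policy_return = 0.01   # assumed low implicit rate locked in at today's rates

invested = future_value_of_annuity(annual_premium, market_return, years)
in_policy = future_value_of_annuity(annual_premium, implicit_policy_return, years)
print(f"Invested elsewhere: ~${invested:,.0f}; accumulated inside policy: ~${in_policy:,.0f}")
# The gap between the two figures is the forfeited return being gestured at here;
# real policies have fees, guarantees, and payout structures this sketch ignores.
```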
↑ comment by RolfAndreassen · 2013-01-17T00:35:02.833Z · LW(p) · GW(p)
I wonder how that would look if weighted for karma or another measure of participation?
Replies from: gwern
↑ comment by gwern · 2013-01-17T00:56:03.468Z · LW(p) · GW(p)
Yvain's looked at this: http://squid314.livejournal.com/349656.html (see also my & unnamed's comments there).