How to Seem (and Be) Deep

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-14T18:13:09.000Z · LW · GW · Legacy · 122 comments

I recently attended a discussion group whose topic, at that session, was Death.  It brought out deep emotions.  I think that of all the Silicon Valley lunches I've ever attended, this one was the most honest; people talked about the death of family, the death of friends, what they thought about their own deaths.  People really listened to each other.  I wish I knew how to reproduce those conditions reliably.

I was the only transhumanist present, and I was extremely careful not to be obnoxious about it.  ("A fanatic is someone who can't change his mind and won't change the subject."  I endeavor to at least be capable of changing the subject.)  Unsurprisingly, people talked about the meaning that death gives to life, or how death is truly a blessing in disguise.  But I did, very cautiously, explain that transhumanists are generally positive on life but thumbs down on death.

Afterward, several people came up to me and told me I was very "deep".  Well, yes, I am, but this got me thinking about what makes people seem deep. 

At one point in the discussion, a woman said that thinking about death led her to be nice to people because, who knows, she might not see them again.  "When I have a nice thing to say about someone," she said, "now I say it to them right away, instead of waiting."

"That is a beautiful thought," I said, "and even if someday the threat of death is lifted from you, I hope you will keep on doing it—"

Afterward, this woman was one of the people who told me I was deep.

At another point in the discussion, a man spoke of some benefit X of death, I don't recall exactly what.  And I said:  "You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.  But if you took someone who wasn't being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no.  I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no."

Afterward, this man told me I was deep.

Correlation is not causality.  Maybe I was just speaking in a deep voice that day, and so sounded wise.

But my suspicion is that I came across as "deep" because I coherently violated the cached pattern for "deep wisdom" in a way that made immediate sense.

There's a stereotype of Deep Wisdom.  Death: complete the pattern: "Death gives meaning to life."  Everyone knows this standard Deeply Wise response.  And so it takes on some of the characteristics of an applause light.  If you say it, people may nod along, because the brain completes the pattern and they know they're supposed to nod.  They may even say "What deep wisdom!", perhaps in the hope of being thought deep themselves.   But they will not be surprised; they will not have heard anything outside the box; they will not have heard anything they could not have thought of for themselves.  One might call it belief in wisdom—the thought is labeled "deeply wise", and it's the completed standard pattern for "deep wisdom", but it carries no experience of insight.

People who try to seem Deeply Wise often end up seeming hollow, echoing as it were, because they're trying to seem Deeply Wise instead of optimizing.

How much thinking did I need to do, in the course of seeming deep?  Human brains only run at 100Hz and I responded in realtime, so most of the work must have been precomputed.  The part I experienced as effortful was picking a response understandable in one inferential step and then phrasing it for maximum impact.

Philosophically, nearly all of my work was already done.  Complete the pattern: Existing condition X is really justified because it has benefit Y:  "Naturalistic fallacy?" / "Status quo bias?" / "Could we get Y without X?" / "If we had never even heard of X before, would we voluntarily take it on to get Y?"  I think it's fair to say that I execute these thought-patterns at around the same level of automaticity as I breathe.  After all, most of human thought has to be cache lookups if the brain is to work at all.

And I already held to the developed philosophy of transhumanism.  Transhumanism also has cached thoughts about death.  Death: complete the pattern: "Death is a pointless tragedy which people rationalize."  This was a nonstandard cache, one with which my listeners were unfamiliar.  I had several opportunities to use nonstandard cache, and because they were all part of the developed philosophy of transhumanism, they all visibly belonged to the same theme.  This made me seem coherent, as well as original.

I suspect this is one reason Eastern philosophy seems deep to Westerners—it has nonstandard but coherent cache for Deep Wisdom.  Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.  (And sometimes not.)

If I recall correctly an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.

Also crucial was that my listeners could see immediately that my reply made sense.  They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them.  I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know.  If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state.  That's just the way it is.

To seem deep, study nonstandard philosophies.  Seek out discussions on topics that will give you a chance to appear deep.  Do your philosophical thinking in advance, so you can concentrate on explaining well.  Above all, practice staying within the one-inferential-step bound.

To be deep, think for yourself about "wise" or important or emotionally fraught topics.  Thinking for yourself isn't the same as coming up with an unusual answer.  It does mean seeing for yourself, rather than letting your brain complete the pattern.  If you don't stop at the first answer, and cast out replies that seem vaguely unsatisfactory, in time your thoughts will form a coherent whole, flowing from the single source of yourself, rather than being fragmentary repetitions of other people's conclusions.

122 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Anders_Sandberg · 2007-10-14T20:18:27.000Z · LW(p) · GW(p)

I have played with the idea of writing a "wisdom generator" program for a long time. A lot of "wise" statements seem to follow a small set of formulaic rules, and it would not be too hard to make a program that randomly generated wise sayings. A typical rule is to create a paradox ("Seek freedom and become captive of your desires. Seek discipline and find your liberty") or just use a nice chiasm or reversal ("The heart of a fool is in his mouth, but the mouth of the wise man is in his heart"). This seems to fit in with your theory: the structure given by the form is enough to trigger recognition that a wise saying will now arrive. If the conclusion is weird or unfamiliar, so much the better.
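
A minimal sketch of such a generator in Python; the two templates mirror the sayings quoted above, and the noun list is an invented illustration, not anything from Sandberg's actual program idea:

```python
import random

# Toy "wisdom generator": fill formulaic templates (a paradox and a
# chiasmus/reversal) with randomly chosen abstract nouns.
# The noun list and templates are invented for illustration.

NOUNS = ["freedom", "discipline", "silence", "desire", "wisdom",
         "doubt", "strength", "stillness", "hunger", "truth"]

PARADOX = "Seek {a} and become captive of your {b}. Seek {b} and find your {a}."
CHIASMUS = "The {a} of a fool is in his {b}, but the {b} of the wise man is in his {a}."

def wise_saying() -> str:
    a, b = random.sample(NOUNS, 2)  # two distinct nouns
    template = random.choice([PARADOX, CHIASMUS])
    return template.format(a=a, b=b)

if __name__ == "__main__":
    for _ in range(3):
        print(wise_saying())
```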

Currently reading Raymond Smullyan's The Tao is Silent, and I'm struck by how much less wise Taoism seems when it is clearly explained.

Replies from: stcredzero, taryneast, brilee, Evan Rysdam
comment by stcredzero · 2010-05-30T18:31:33.358Z · LW(p) · GW(p)

I suspect that this sort of algorithm was unconsciously internalized by many scriptwriters of Kung Fu films. I did the same thing, unconsciously, during the period I was reading Smullyan's books. That's what I did to come up with, "There's neither heaven nor hell save what we grant ourselves, neither fairness nor justice save what we grant each other."

I suspect that this sort of algorithm was used as a sort of filter by the more savvy Taoist masters -- just sit back and see who gets trapped in this particular local maximum.

comment by taryneast · 2011-01-30T16:19:42.192Z · LW(p) · GW(p)

You may wish to study the "terribly mysterious" sayings of The Sphinx (from the movie "Mystery Men") for inspiration :)

Replies from: Broggly
comment by Broggly · 2011-04-30T07:45:36.177Z · LW(p) · GW(p)

"When you can balance a tack hammer on your head, you will head off your foes with a balanced attack."

comment by brilee · 2011-12-08T02:56:44.439Z · LW(p) · GW(p)

You should work at a fortune cookie company, I'm sure you'd learn some tricks of the trade.

comment by Sunny from QAD (Evan Rysdam) · 2019-08-16T04:55:56.945Z · LW(p) · GW(p)

This probably didn't exist when you wrote this comment, but it does now: https://sebpearce.com/bullshit/

comment by bw2 · 2007-10-14T22:51:51.000Z · LW(p) · GW(p)

Evidently, you know, talking to people of average intelligence we are always going to sound deep, especially on social occasions when we tailor our conversation to the listener. But that has nothing to do with the particular view you defended. Someone defending the claim that death gives meaning to life with better arguments than those people had would elicit the same response.

comment by bw2 · 2007-10-14T23:01:24.000Z · LW(p) · GW(p)

I sometimes feel our lifespan is already too long and could use some reduction through technology. There are too many opportunities to achieve a particular goal, which reduces the intensity and importance that one single opportunity would have. Of course we try to convince ourselves of this importance, but that works very poorly when reality tells us otherwise.

Replies from: faul_sname
comment by faul_sname · 2012-01-16T09:04:17.633Z · LW(p) · GW(p)

Why reduce it through technology? Take more risks. Go out with a bang. There is nothing that limits your ability to reduce your lifespan, only to extend it.

comment by g · 2007-10-14T23:58:49.000Z · LW(p) · GW(p)

See also: the chapter entitled "A different box of tools" in "Surely you're joking, Mr Feynman".

comment by Robin_Hanson2 · 2007-10-15T00:14:46.000Z · LW(p) · GW(p)

g, yes, Feynman's differing calculus tools example came to my mind as well when reading this.

comment by g · 2007-10-15T00:16:46.000Z · LW(p) · GW(p)

bw, I think I concur with Eliezer's diagnosis in another thread of Stockholm syndrome, or something like it. If you find it too easy to achieve all your goals because you have so many opportunities, then find harder goals.

(Perhaps I'm just, like, a seething cauldron of negativity or something, but that particular problem seems to me rather remote from my own experience, or from that of anyone else I know enough about.)

comment by bw2 · 2007-10-15T00:21:07.000Z · LW(p) · GW(p)

g, perhaps I did not make it clear that it is not a question of difficulty or challenge but a question of time: that we should have just one moment in time to do something, or the opportunity will be lost. Immortality will destroy that. But this is not exactly what the original text is about, so I will wait for a better opportunity (see, damn life, there is always a next opportunity).

Replies from: JohnH
comment by JohnH · 2011-04-25T01:11:51.358Z · LW(p) · GW(p)

always a next opportunity

Only if you are immortal.

comment by g · 2007-10-15T00:27:06.000Z · LW(p) · GW(p)

For "harder", please feel free to read "less frequently achievable".

comment by bw2 · 2007-10-15T00:41:00.000Z · LW(p) · GW(p)

I had one opportunity to kiss the girl I loved the most and I blew it. She may be dead now, and if not she is married; and she is married because she may be dead soon, and will be for sure at some point. I have one opportunity in these five or ten years when my brain is at its peak to write a great book in philosophy, and we shall see if I blow it or not. But it seems clear to me that the best thing about life is that we die. Is this rationalizing something I cannot change? Not if I do not just rationalize but explain with absolute clarity and necessity.

comment by g · 2007-10-15T00:55:35.000Z · LW(p) · GW(p)

Well, please feel free to explain with absolute clarity and necessity; perhaps you'll do so in that great philosophy book. I regret that, at least for me, you haven't at all managed to do so yet. I can see that, e.g., writing a good philosophy book might seem more valuable to you if you only have one shot at it (though, er, it seems to me that it's not unheard of for philosophers to write more than one good book in their lives), but I can't imagine how you can think that's not outweighed by being able to write more and better books. And if your expected productive lifespan were a thousand years, there would still be challenges big enough that you'd only get one shot at them. They'd just be bigger, harder challenges.

In other words, you'd get more done, you'd get better things done, you'd have better just-one-shot challenges to meet (perhaps: not "kiss the girl I loved the most" but "find someone I can live with happily for a thousand years"; not "write a really good philosophy book" but "definitively solve such-and-such a very deep philosophical problem" -- though I bet these aren't imaginative enough); what's the downside, here?

Perhaps you think actual immortality would be worse somehow; I think that's a more defensible proposition. But you actually claimed not merely "infinitely extended lives might turn out to be worse" but "even as they are, our lives are quite likely too long". Stockholm syndrome, sorry.

comment by michael_vassar3 · 2007-10-15T00:56:23.000Z · LW(p) · GW(p)

"I had one opportunity to kiss the girl I loved the most and I blew it. " And that makes your life better than if you had more opportunities?

Huh?

It seems noteworthy that the first known story (Gilgamesh), and the second well known one (Eden), and the dominant global religions (Christianity, Islam), are all about yearning for immortality.

comment by bw2 · 2007-10-15T01:05:50.000Z · LW(p) · GW(p)

"And if your expected productive lifespan were a thousand years, there would still be challenges big enough that you'd only get one shot at them. They'd just be bigger, harder challenges."

Good point. I have nothing against extending it to a thousand years after we have very carefully thought up a new life to go with the new lifespan. We haven't even done that for the current four score and ten years, for Christ's sake! Of course that makes my life better. Compare that moment of the missed opportunity with buying a pair of jeans. It is better because it cannot be iterated. And if religion talked about it so much, I would suggest transhumanists smoke out their cached religious thoughts more fully.

comment by Nick_Tarleton · 2007-10-15T01:45:04.000Z · LW(p) · GW(p)

(Sorry to continue dragging off-topic:)

that we should have just one moment in time to do something, or the opportunity will be lost.

Look on the 'bright' side: no matter how long you live, the exact same opportunity will never come again.

But life should never have second chances? How awful!

I have nothing against extending it to a thousand years after we have very carefully thought up a new life to go with the new lifespan.

But so much better to extend your life first and then tackle the challenge of creating meaning! Not only is this more likely to work (who can plan a thousand years ahead in detail?) but you don't have to wait around while billions die.

Compare that moment of the missed opportunity with buying a pair of jeans.

Huh?

More recommended reading: The Hardest Stuff of All, Just What's So Wonderful About This Whole Existence Thing Anyway?

comment by Laura · 2007-10-15T01:51:16.000Z · LW(p) · GW(p)

I don't think anyone is qualified to judge, based on theory alone, whether true immortality is meaningful or worth achieving, since no one has lived much longer than 120 years. Maybe the human consciousness would throw up its hands and scream 'to hell with it all!' after 300 years, maybe not. Maybe our children will be lackadaisical losers because they have no impetus to get off their asses and on with their lives (lord knows how many people get a move on because they fear getting too old for girlfriends/marriage/children). But we don't know that, and it's all a moot point, since nobody's done it before. What is clear is that almost everyone wishes they didn't age, that our bodies and minds did not decay, that our memories did not fade, that we could keep the vigor, curiosity, openness and excitement of our most productive years. Why not try for that and see what living so long is really like? What would we have to lose?

comment by Shakespeare's_Fool · 2007-10-15T01:52:41.000Z · LW(p) · GW(p)

bw,

Had we but world enough, and time,
This coyness, lady, were no crime.

The grave's a fine and private place,
But none, I think, do there embrace.

--Andrew Marvell

John

comment by bw2 · 2007-10-15T02:24:11.000Z · LW(p) · GW(p)

That no matter how long you live, the exact same opportunity will never come again is a view I fear depends on the existence of death and would disappear without it; that is the nub; so there may be some anthropic bias in your view, Nick.

comment by savagehenry · 2007-10-15T06:20:37.000Z · LW(p) · GW(p)

I can understand the reasoning behind the saying that death gives meaning to life. But I've never been able to fully agree with that sentiment. If I could I would live forever. Death certainly gives me reason to want to do as much as I can while I am still able. But that desire doesn't give my life any more meaning than if it was not there. I can agree that death makes life precious, for without death life would be abundant.

I often imagine what it'd be like to live 200 years or 1000 years. I know, like Eliezer, that I would do so if able (assuming my mind was still intact the entire time). I can't even begin to imagine the things I would be able to understand with a lifespan like that. I'm only 22 years old and I know and understand quite a bit, but what I don't know and don't understand is far greater. To me, living a life longer than what is currently natural would be an opportunity to soak up even more knowledge and experience. That's what I'm doing now with my life, and I hope by some advancement in technology I'm able to do so for far longer than 78 years (or whatever my life expectancy is).

I don't think I've ever actually heard anyone say exactly why they think death gives meaning to life. Anyone got a link to something that explains this?

comment by Anders_Sandberg · 2007-10-15T11:09:52.000Z · LW(p) · GW(p)

I think the "death gives meaning to life" meme is a great example of "standard wisdom". It is apparently paradoxical (the right form to be "deep"), and it provides a comfortable consolation for a nasty situation. But I have seldom seen any deep defense of it in the bioethical literature. Even people who strongly support it, and ought to work very hard to demonstrate to fellow philosophers that it is a true statement, seem content to just rattle it off as self-evident (or to hold that people not feeling it in their guts are simply superficial).

Being a hopeless empiricist I would like to check whether people today feel life is less meaningful than a century ago, and whether people in countries with short life expectancy feel more meaning than people in countries with long life expectancy. I'm pretty certain the latter is not true, and the former looks iffy (hard to check, and lots of confounders like changed social and cultural values, though). I did some statistics on the current state, http://www.aleph.se/andart/archives/2006/12/a_long_and_happy_life.html and found no link between longer life and ennui, at least on a national level.

comment by michael_vassar3 · 2007-10-15T13:14:47.000Z · LW(p) · GW(p)

bw: Maybe our disagreement is superficial in that case, except for Nick's proposed change in order of operation and your misuse of the concept of anthropic bias. I have no commitment to the idea of living indefinitely, just to the idea of not committing to dying at any particular time out of custom. There are some things that I'd like to do or see done that seem very hard by human standards. They should provide plenty of challenge for quite a few centuries with my current capabilities, if they are not outright impossible. Maybe there really isn't anything beyond them, that is, no further set of achievements, experiences, or resolutions that I would derive from my current self in any number of centuries. At some point there probably isn't. Even then I don't see much downside to living longer. Boredom, ennui, suffering of all kinds: these are just the consequences of physical processes in my brain that I could easily eliminate if I didn't want them and had greater capabilities; they aren't something fundamental to how the universe is. But if someone else wants my mass/energy and I'm done with it, I don't intend to decide now, as an infant, that my grown-up self should hoard it reflexively.

comment by Robin_Hanson2 · 2007-10-15T13:35:38.000Z · LW(p) · GW(p)

This post suggests another good reason why it is hard to distinguish between intelligence levels above your own.

comment by bw2 · 2007-10-15T15:22:30.000Z · LW(p) · GW(p)

Of course if I could become immortal and not change anything else I would welcome it. There is not even any point arguing about that, and I don't think anyone denies it would be desirable. The question is whether you can do that, because mortality is the most fundamental fact about us. Nothing will be the same afterwards, so it is rather touching but fundamentally misguided to speak of how we would do the same things but have more time to do them well, etc. I for one do not see what the point would be in acquiring knowledge if I never died, which rather speaks against Michael Vassar's utopia. As for the scenarios where we just extend life and do not eliminate death altogether, I think you will agree that after some tipping point the distinction is nugatory, because we no longer conceive of a limit and will direct our efforts to extending the lifespan further. Finally, as for measuring empirically what it would be to live without the prospect of death, I can't see how we can measure something that does not exist yet.

comment by josh · 2007-10-15T16:37:43.000Z · LW(p) · GW(p)

"I for one do not see what the point would be in acquiring knowledge if I never died"

? Why do you acquire knowledge now? I do it because it's fun/interesting/useful toward accomplishing some goal.

comment by bw2 · 2007-10-15T17:42:01.000Z · LW(p) · GW(p)

What is the postulate of objectivity as, for instance, someone like Monod describes it? Seeing the world as if you were dead, as if you were not there to see it, seeing the world as it would be without your seeing it. Without death this possibility is not even conceivable. That, in broad strokes. More generally, you simply cannot talk about what life will be like without death using a concept of life that only makes sense when death exists. Don't you see the bias, the radical bias, in this approach?

comment by josh · 2007-10-15T17:52:30.000Z · LW(p) · GW(p)

"Seeing the world as if you were dead, as if you were not there to see it, seeing the world as it would be without your seeing it."

This is an analogy. You might as well say we couldn't attempt objectivity without "veils" (of ignorance). Without the existence of veils, such a concept is unthinkable.

Does anybody really think of objectivity as seeing the world as if you were dead?

comment by michael_vassar3 · 2007-10-15T18:23:21.000Z · LW(p) · GW(p)

Bw: This is my last response on this thread, but it might be of use to you to consider that several posters here, myself included, already expect to live forever without it detracting from our lives at all. Furthermore, ALL children under some age lack a concept of death without obvious and dramatic impoverishment of their lives. On occasion one can encounter an adult who claims to remember learning about death, though this could be, in every case, a false memory for all I know. What one never encounters is an adult who says that finding out about death was what gave life meaning for them.

comment by bw2 · 2007-10-15T19:25:35.000Z · LW(p) · GW(p)

It is not a question of what you expect. Christians in the past expected to live forever and that did not detract from their lives. Thank you for your kind responses to my views.

comment by Unknown_Healer · 2007-10-15T19:58:32.000Z · LW(p) · GW(p)

Michael,

You 'expect' to live forever, i.e. consider it more likely than not? Outside of quantum immortality and similar views about other multiverse concepts, that seems to go beyond the evidence. Unless thermodynamics can be circumvented we will almost certainly die for lack of resources, even if we don't suffer aging or succumb to existential risks or other sudden dooms.

Replies from: MattPrather
comment by MattPrather · 2010-04-12T06:18:15.936Z · LW(p) · GW(p)

You're killin' me, Smalls!

  1. I'll admit my own ignorance concerning whatever "quantum immortality" and "multiverse" mean.

  2. If this was the TV show "Jeopardy!" and for the Daily Double I was supposed to tell the name of the Law which proves in the textbook that we would almost certainly die for lack of resources, even if we could live forever -- if I was really there -- I would probably guess "The Second Law?", and then pray...

  3. I'm no Einstein, but at least I know the textbooks alone do not suffice to carry any inferences made from them.

  4. Do you have something other than a theory to prove that my mind is fundamentally doomed to extinction by "thermodynamics", and is there really certain "evidence" of that?

Replies from: DanielLC
comment by DanielLC · 2010-09-05T04:55:55.159Z · LW(p) · GW(p)

Do you have anything including theory to prove that your mind isn't fundamentally doomed?

There is plenty of evidence for the second law of thermodynamics. Every time we perform an experiment in which entropy is recorded, there's some chance we notice a decrease. Every time we don't, it's evidence that it just doesn't happen. We've done so many experiments that we can accurately predict what the result of a given experiment will be. Every function used is one-to-one. Once the information is out there, there's nothing we can do to delete it. The universe will only hold so much information.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-09-05T05:50:06.112Z · LW(p) · GW(p)

Once the information is out there, there's nothing we can do to delete it. The universe will only hold so much information.

Agreed with the former, but not so sure about the latter. There are still some loopholes (for the heat death of the universe) that haven't been closed. This PhD thesis seems to contain the most recent review of the issues involved.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-15T20:19:30.000Z · LW(p) · GW(p)

I can only speak for myself, but I think most of us are defining "immortality" as "living for at least a million years" rather than Greg Egan's "Not dying after a very long time; just not dying, ever."

Now I certainly have no moral objection to the latter state of affairs. As I sometimes like to tell people, "I want to live one more day. Tomorrow I will still want to live one more day. Therefore I want to live forever, proof by induction on the positive integers."

But flippant remarks aside, I'm not sure how I feel about real immortality, if such a thing should be physically permissible. Do I want to live longer than a billion years, live longer than a trillion years, live longer than a googolplex years, live longer than Graham's Number, live so long it has to be expressed in Conway chained arrow notation, live longer than Busy_Beaver(100)?

Note that I say "live longer than Graham's Number", not "live longer than Graham's Number years/seconds/millennia", because these are all essentially the same number. Living for this amount of time does not just require the ability to circumvent thermodynamics, it requires the ability to build custom universes with custom laws of physics. And the vast majority of integers are very much larger than that, or even Busy_Beaver(100). Perhaps this is possible. Perhaps not.

The emotional connection that I feel to my future self who's lived for Graham's Number is pretty much nil, on its own. But my self of tomorrow, with whom I identify very strongly, will be just a tiny bit closer. As I fulfill or abandon old goals, I will adopt new ones. The connection may be vicarious, but it is there.

And I certainly see a very great difference between humanity continuing forever, versus humanity continuing to Graham's Number and then halting; a difference very much worth dying for. (It follows that my discount factor is 1.)

So, as I usually tell people:

"Do I want to live forever? I don't know. Ask me again in a million years. Maybe then I'll have decided how I feel about immortality. I am a short-term thinker; I take my life one eon at a time."

Replies from: PeteG
comment by PeteG · 2023-02-01T18:41:26.298Z · LW(p) · GW(p)

And I certainly see a very great difference between humanity continuing forever, versus humanity continuing to Graham's Number and then halting

You can't use "humanity" and "Graham's Number" in the same sentence.

comment by Peter_de_Blanc · 2007-10-15T20:20:50.000Z · LW(p) · GW(p)

Unknown Healer:

Maybe he means that his expected value for his lifespan diverges to +infinity.

(Me too.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-15T20:28:27.000Z · LW(p) · GW(p)

For your expected lifespan to diverge to +infinity, you need only place

.000000000000000000000000000000000000000000000000000000000001

probability on your chance of living forever, and I don't think you can realistically defend assigning a probability lower than that.
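
A quick sketch of why any nonzero probability suffices, via the standard tail bound (here $T$ is lifespan and $\varepsilon$ is the probability above):

```latex
\mathbb{E}[T] \;\ge\; M \cdot \Pr[T \ge M] \;\ge\; M\varepsilon
\quad \text{for every finite } M,
\qquad \text{so } \Pr[T = \infty] = \varepsilon > 0 \;\Rightarrow\; \mathbb{E}[T] = \infty.
```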

Replies from: DanielLC
comment by DanielLC · 2010-09-05T05:01:08.495Z · LW(p) · GW(p)

First off: doomsday argument. If you're going to live that long, you're not going to be in this part.

Second: if you live forever, it gives weird paradoxes involving probability. If you were to look at your watch at a random time, it seems like there's a 50:50 chance that the second hand is on an even number. It's trivial to move around events so that that only happens a quarter of the time. This would mean that the probability of things is influenced by their order. I find assigning zero probability to something less counter-intuitive.
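
The rearrangement here is the standard natural-density trick: enumerate the ticks with three odd seconds for every even one, for example

```latex
1,\;3,\;5,\;2,\qquad 7,\;9,\;11,\;4,\qquad 13,\;15,\;17,\;6,\;\dots
```

Every tick still appears exactly once, but the limiting fraction of even entries is 1/4. With no uniform distribution over an infinite sequence of observations, the "probability" depends on the order of enumeration.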

comment by TGGP4 · 2007-10-15T20:47:43.000Z · LW(p) · GW(p)

Christians may claim to believe in eternal life, but in their behavior they do not act like it. I thought I remembered a good post here mentioning how Muslim armies should never surrender if they actually believed death in battle guaranteed eternal life in paradise, but the historical record shows they were perfectly capable of it. Unfortunately google has not been able to find that page for me.

comment by g · 2007-10-15T20:54:15.000Z · LW(p) · GW(p)

Doesn't that assume that their only motivation is to achieve an optimal eternity for themselves? I'd have thought it quite possible to believe both "Surrendering will slightly decrease my chance of ending up in paradise" and "I should surrender".

I could much more readily believe that Muslims' readiness to surrender appears greater than one would expect if they really believed that. (For this it isn't necessary that the expected readiness to surrender be 0.)

comment by JulianMorrison · 2007-10-15T21:26:44.000Z · LW(p) · GW(p)

Immortality seems to me like a meaningless concept, because the only hypothetical event that could define it (the end of time) would also deny it.

A more useful concept is actuarial life expectancy. To make that middling long, we need to defeat aging. That's all but certain, and I don't expect it to seriously alter humanity. To make it really long, we need to defeat such civilization-ending mischances as a local supernova. That means uploading and backups at minimum. To make it indefinite, we need a way to laugh in the face of entropy and duck away from string-theory universe collisions. No such creature would be recognizably human.

This is neither an argument for nor against, but it is an argument against such trite ideas as "ask me in a million years". You won't be you. There will be historical continuity, but almost no similarity.

I suppose that ties in to a Buddhist way of looking at time. The self is in the moment. You aren't the same you from one instant to the next. Given that, what does dying mean? From one perspective, you've been doing nothing but dying since your remotest ancestor congealed out of loose nucleotides.

comment by Laura · 2007-10-15T21:59:20.000Z · LW(p) · GW(p)

"What one never encounters is an adult who says that finding out about death was what gave life meaning for them." I beg to differ. Many people have had close calls with death that have been pivitol, life-changing experiences. A friend of mine changed careers and got married after his plane nearly crashed. "I realized we don't have that much time on this earth to be wasting it in board meetings," or some such.

I still would NOT argue that this is evidence that life needs death to have meaning, but death certainly IS a strong motivator to get on with life.

comment by Valter · 2007-10-15T22:01:04.000Z · LW(p) · GW(p)

I must be hanging with a very different crowd: I had never heard of anyone saying that death is what gives meaning to life. It seems such an obviously stupid notion that I can only imagine someone cooked it up to make him/herself look deep - and failed because everyone else cried "sour grapes!"

P.S. I do think that I will grow old and die. I don't like it, but there are worse things (eg, I could die before I grow old).

comment by bw2 · 2007-10-15T22:09:45.000Z · LW(p) · GW(p)

I enjoyed this discussion very much and hope that Eliezer will excuse the distraction from the main topic, since he is after all very much interested in this. Will make one last point. What I find extraordinary is that most of you seem to assume the sophisticated, critical, elaborate thesis is that death should not be accepted, and that the contrary opinion is somehow primitive. This is silly. There is no living creature on earth who does not have the level of intelligence necessary to conclude that death is bad. Which of course does not mean the opinion is wrong. It could be right, but what it certainly is not is a testimony to great intelligence.

comment by Laura · 2007-10-15T23:19:11.000Z · LW(p) · GW(p)

Sorry to go on on this topic, but it seems to me that a false dichotomy has been developed in this thread between two ideas:

1) Death gives meaning to life.

2) Immortality is worth attempting/achieving.

I do not see why these ideas are at all mutually exclusive. Of course the idea that death gives ALL of the meaning to life would be incompatible with immortality, but certainly some of the transhumanists here must concede that it gives some meaning. Maybe the confusion is with the word "meaning." Many of the things that humans find meaningful in life, such as getting married, developing a career, and raising children, have developed their societal meaning within the confines of a short and finite life, and might even be absurd to pursue in similar ways given immortality. What would "till death do you part" mean without death? It would be ludicrous to make such a binding promise for an eternity entirely unfathomable. Choosing the one right person to raise children with would be unnecessary if you could reproduce indefinitely, and even your children would not be the same few special people if you had a multitude of them at all different ages.

Not that there is anything wrong or even worse about having infinite partners, children, occupations, etc, but the meaning we ascribe to these events would most definitely change.

Many people might not be receptive to these changes, and their conclusion that their imminent death gives meaning to their life is not so absurd as you all are claiming.

comment by g · 2007-10-15T23:39:26.000Z · LW(p) · GW(p)

Laura, I think you're right that there's a distinction not being made, but I'm not sure it's the one you say it is. Rather, "death gives meaning to life" could mean (1) "some of the things we find meaningful have meanings that are as they are partly because of death" or (2) "it's because of death that life has meaning". #1 is probably true, but doesn't give much grounds for disagreeing with Eliezer about death. #2 is a different matter entirely. There's also (3) "it turns out that for some people the prospect of imminent death is effective at making them think better about their priorities", but dying seems like a rather drastic way of getting such benefits.

bw, who cares which (if either) thesis is sophisticated or critical or elaborate? That seems as pointless as asking which is more "Western" or more "conservative" or more "imaginative". I don't care whether my opinions and attitudes demonstrate my intelligence, I care whether they're right, lead to a fulfilling life, etc. (Well, human nature being what it is, I expect that on some level I do care whether they demonstrate my intelligence, but that's stupid.)

comment by Doug_S. · 2007-10-15T23:48:35.000Z · LW(p) · GW(p)

The movie Mystery Men makes fun of this with a character called The Sphinx. He appears as a mysterious mentor to the group of wannabe comic book heroes that the story focuses on. For a while, he appears to be saying things of great wisdom, but then it becomes apparent that he uses a simple algorithm for generating his profound-seeming sayings.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-16T00:27:04.000Z · LW(p) · GW(p)

There's a meaning that death gives to life, but it's not all that important, and it's not all that happy either.

Replies from: ata
comment by ata · 2010-05-07T23:31:56.725Z · LW(p) · GW(p)

That's going to be my new quick argument for transhumanism. "Listen to this depressing European synthpop! Do you really want the future to be like that??"

(Incidentally, a recent comment on the video states: "Ronan explained at the San Antonio show and San Diego show that this song was about living forever. That living forever was more of a torment than a gift." Sounded like the opposite to me — a song about how much human extinction would suck. But no, everything's gotta be about how the purposeless evolutionary status quo is coincidentally exactly what we should want...)

Replies from: player_03
comment by player_03 · 2011-08-04T22:06:56.137Z · LW(p) · GW(p)

I agree with your interpretation of the song, and to back it up, here's the chorus of "The Farthest Star" (another song by the same band).

We possess the power, if this should start to fall apart,
To mend divides, to change the world, to reach the farthest star.
If we should stay silent, if fear should win our hearts,
Our light will have long diminished before it reaches the farthest star.

This time, the message is pretty clear: we should aspire to overcome both our differences and our limitations, to avoid extinction and to expand throughout the universe. I suppose it doesn't say anything about immortality, but otherwise it seems to match transhumanist philosophy.

Replies from: ata, ata
comment by ata · 2011-08-05T00:00:14.267Z · LW(p) · GW(p)

Since the above comment of mine was posted, I actually became a big fan of VNV Nation (thanks Eliezer! :P) and downloaded the rest of their discography. "The Farthest Star" is definitely a good one. Though I do remember from one live recording of "Further" that Ronan did in fact say that it's about living forever, but given the lyrics, it sounds more like it's about what it would be like for one or two people living forever while the rest of humanity dies, and honestly that probably would suck.

comment by ata · 2011-10-22T21:08:23.028Z · LW(p) · GW(p)

Streamline, from their newest album, also seems fairly transhumanist, and in a more hopeful way than most of their songs.

Also, by the unholy power of confirmation bias, I hereby declare that Testament is about humanity's recklessness and apathy in the face of existential risks, and Tomorrow Never Comes is about our final desperate and ultimately futile efforts to stave off doomsday after having waited too long to act.

comment by bw2 · 2007-10-16T00:30:35.000Z · LW(p) · GW(p)

I agree with you, g, and I hope I made that clear, but some of the comments, and I believe even the mention of Stockholm syndrome, seem to imply that the idea that death is meaningful does not qualify as rational, that it lies at a subrational level, instead of taking it, as I believe it deserves to be taken, as an idea that could be right or wrong.

comment by Laura · 2007-10-16T01:30:34.000Z · LW(p) · GW(p)

Eliezer - Thanks for the links. I think people resort to sour grapes because it's so much easier to recognize what they might lose than to imagine what they could gain through immortality. It's such an unknown. But choosing death to avoid such unknowns would be a poor form of risk minimization, since it's irreversible. Do you have a link to material about why you believe you will achieve immortality?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-16T01:46:55.000Z · LW(p) · GW(p)

Believe is too strong a term, especially for "real immortality".

But one who fully mastered the Way could shape the power to burn outside a human mind, and this would be strength enough to accomplish many things that modern folk consider difficult.

comment by Tiiba2 · 2007-10-16T02:24:21.000Z · LW(p) · GW(p)

"""For your expected lifespan value to diverge to +infinity, it is necessary to place only

.000000000000000000000000000000000000000000000000000000000001

probability on your chance of living forever, and I don't think you can realistically defend assigning a probability lower than that."""

I can assign a nonzero probability to any number less than infinity, but in infinite time, the probability that even a godlike being will earn a Darwin is 1, no matter how unlikely it is in the next year.

comment by anonymous_person · 2007-10-16T02:31:18.000Z · LW(p) · GW(p)

Eliezer, I'm just wondering what the current best available options are for would-be immortals. I've heard about cryonics, but it seems like an obvious pipe dream, because cosmic rays will ravage the body so badly in a short time, which is something we don't have to think about much while alive, since the damage is being constantly repaired.

comment by Kaj_Sotala · 2007-10-16T13:21:54.000Z · LW(p) · GW(p)

Uhm, cosmic rays a threat to cryonics? Where the heck did /that/ come from?

comment by J. · 2007-10-16T18:17:36.000Z · LW(p) · GW(p)

BW,

Based on your comments here, I've significantly increased my subjective probability that you will not write a great philosophy book.

comment by anonymous_person · 2007-10-16T21:45:00.000Z · LW(p) · GW(p)

"Uhm, cosmic rays a threat to cryonics? Where the heck did /that/ come from?"

From my biophysics professor, who is somewhat eccentric and goes off on a lot of random tangents, and who once basically started mocking people who go in for cryonics, for not seeing this obvious problem.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-16T22:00:00.000Z · LW(p) · GW(p)

Cosmic ray damage isn't going to matter except over 10 KYear+ time periods, and even a million years worth would almost certainly be repairable by mature nanotechnology. Cryonics is supposed to get you to 2050 or whenever. I can't even find this question in standard cryonics FAQs, it's so bizarre.

comment by g · 2007-10-16T22:01:00.000Z · LW(p) · GW(p)

bw: Whose opinion does matter?

comment by anonymous_person · 2007-10-16T22:28:00.000Z · LW(p) · GW(p)

Eliezer, I couldn't find it in standard cryonics FAQ either.... that's why I asked you! I have wondered in the past about the accuracy of some of the things that this prof. has said in his freewheeling way, so I'm glad you could clear that up for me. Thanks for answering my question, but you could have left out the condescension. The last sentence of your answer adds nothing of value.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-16T22:31:00.000Z · LW(p) · GW(p)

My condescension was directed toward your biophysics professor. Seriously, what the hell gives senior scientists the idea that they can stop using science and still form accurate beliefs?

comment by anonymous_person · 2007-10-16T22:40:00.000Z · LW(p) · GW(p)

I see ... sorry for taking your answer the wrong way. Thanks again for answering my question.

comment by Nick Hay (nickjhay) · 2007-10-16T22:43:00.000Z · LW(p) · GW(p)

Tiiba:

The hypothesis is actual immortality, to which nonzero probability is being assigned. For example, suppose under some scenario your probability of dying at each time decreases by a factor of 1/2. Then, your total probability of dying is 2 times the probability of dying at the very first step, which we can assume far less than 1/2.
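
Spelling out the geometric series in that example (writing $p$ for the probability of dying at the very first step):

```latex
\Pr[\text{ever dying}] \;=\; \sum_{k=0}^{\infty} p\left(\tfrac{1}{2}\right)^{k}
\;=\; \frac{p}{1-\tfrac{1}{2}} \;=\; 2p,
```

so if $p < 1/2$, the total probability of dying is below 1, and the probability of living forever is at least $1 - 2p > 0$.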

comment by Anders_Sandberg · 2007-10-16T23:31:00.000Z · LW(p) · GW(p)

People have apparently argued for a 300 to 30,000 year storage limit due to free radicals produced by cosmic rays, but the uncertainty is pretty big. Cosmic rays and background radiation are likely not as much of a problem as carbon-14 and potassium-40 atoms anyway, not to mention the freezing damage. http://www.cryonics.org/1chapter2.html has a bit of discussion of this. The quick way of estimating the damage is to assume it is time-compressed, so that the accumulated yearly dose is given as an acute dose.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-16T23:59:00.000Z · LW(p) · GW(p)

In 30,000 years you get 6000 rems worth of cosmic rays. This would be fatal (in a day or an hour) if a living human received it all at once.
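
Unpacking that figure with Sandberg's time-compression rule from above (this is just arithmetic on the numbers as stated, not an independent dose estimate):

```latex
\frac{6000\ \text{rem}}{30{,}000\ \text{yr}} \;=\; 0.2\ \text{rem/yr},
```

accumulated over the whole storage period and then evaluated as if delivered in a single acute dose.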

But it's not nearly as much damage as is done by vitrifying someone to the temperature of liquid nitrogen, which would kill you instantly if it happened all at once.

There's a difference between functional damage to living systems (on which basis the cryonics folk are calculating that it will take at least 3000 years); versus the informational damage required to disrupt the relative structure of neurons frozen at liquid nitrogen temperature sufficiently to permanently erase the information stored therein (probably megayears).

Sorta like the difference between doing enough damage to a hard drive to prevent it from working normally when you plug it in (which is how medical cryobiologists think), versus doing enough damage to a hard drive that not even the NSA can figure out what was once stored in it (you would be strongly advised to vaporize it).

In any case, cosmics are simply not significant over the timeframe of realistic cryonics (<300 years).

comment by Kaj_Sotala · 2007-10-17T03:45:00.000Z · LW(p) · GW(p)

Of course, the professor in question might not have had any idea of when revival is expected to be feasible. Cryonics in popular culture tends to be portrayed as "people get frozen in the hopes that people in the distant, distant future might be able to revive them", making the position a bit more understandable.

comment by Stan · 2007-10-17T16:37:00.000Z · LW(p) · GW(p)

Not to mention unconstrained optimization....

comment by octopod · 2010-08-18T16:08:07.954Z · LW(p) · GW(p)

"I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know. If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state."

This is extremely interesting to me because I am such a person; I have had significant difficulty throughout my life with understanding the existing state of other people. I've luckily found a mate who is much better at it than I am, and who can therefore pull me aside if necessary to tip me off that I'm talking at cross purposes with my interlocutor. However, this is my own problem to solve.

What I want to know, though, is this: Is "a single step" of a particular reliable size, or do people take differently sized steps?

comment by Snowyowl · 2010-11-21T02:16:31.721Z · LW(p) · GW(p)

Thanks for informing me of another bias you are triggering. You're one of the first people (maybe the first person?) I've found who explains in a convincing way how not to be fooled by people speaking in a convincing way.

(Sorry if I got that from you. I know it's a cached thought, but I can't seem to trace it.)

comment by MatthewBaker · 2011-07-05T18:06:19.722Z · LW(p) · GW(p)

Thank you for this post Eliezer, it was deep :). (I will learn to pronounce your name correctly before I meet you, just you wait.)

comment by TheStevenator · 2011-08-03T20:28:36.041Z · LW(p) · GW(p)

Another great post. Much of the philosophical discussion I have with people consists of them 'pretending to be wise'. Whenever I am giving a fragmentary repetition of someone else's conclusion (usually when talking about something complex in science that I know only a little about) I'm at least up front with them. I'll say something like "I don't understand this nearly as well as [insert some experts or a specific field], but here is the little bit I do know."

comment by blacksmith_tb · 2011-10-24T02:42:20.655Z · LW(p) · GW(p)

This deep-seeming by violating expectations reminds me of the great quote from Niels Bohr, that there are "two sorts of truth: trivialities, where opposites are obviously absurd, and profound truths, recognised by the fact that the opposite is also a profound truth."

comment by taelor · 2011-11-21T23:07:22.677Z · LW(p) · GW(p)

I think that this works for two reasons: firstly, people tend to assume that everyone else is working from the same cache as themselves, so when we encounter someone working from a non-standard cache, we often assume that the speaker must have thought up everything on his own; secondly, cached wisdom tends to be polished, self-contained and carefully worded for maximum rhetorical effect, whereas original thinking tends to be... not those things. Consequently, when we encounter an unfamiliar bit of cached wisdom, it seems as though the idea must have burst fully formed, Athena-style, from the speaker's brow, when really he's just repeating something he read in a book somewhere that was gradually refined over time by others.

comment by Spectral_Dragon · 2012-02-16T23:16:08.595Z · LW(p) · GW(p)

Next time around, I'd be more careful about linking to TVTropes - that site is even more addictive than LessWrong! Ah, Eliezer, you continue to find new ways to steal time from me.

Is there any deepness, though, that you can just figure out without previously contemplating it, or is nearly all philosophy something that needs to just be explained later? And isn't then anything deep just regurgitating what we've already thought?

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-04-13T21:30:34.424Z · LW(p) · GW(p)

People have told me I was 'deep' because, in discussions, it's a habit for me to point out opposing points of view to everything that comes up, even if I agree with the original point of view, and to come up with the best arguments I can for the point of view even if I disagree with it, all while being very polite and pleasant about it. Apparently that's a good way to come across as really open to new ideas, which a lot of people seem to equate with being 'deep'.

comment by MrCheeze · 2012-06-04T00:54:08.450Z · LW(p) · GW(p)

"I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no."

This doesn't help against arguments that stable immortality is impossible or incredibly unlikely, of course, but I suppose those aren't the arguments you were countering at the time.

comment by sboo · 2012-08-20T10:06:17.026Z · LW(p) · GW(p)

Have you succeeded in chaining these "one-inference-steps"?

That is, have you found you can take people with different beliefs / less domain knowledge, in casual conversation, and quickly explain things one inference at a time? I've found that I can only pull off a few of those, even if they follow and are delightfully surprised by each one, or else I start sounding too weird.

comment by jooyous · 2013-01-08T06:40:36.793Z · LW(p) · GW(p)

How does a transhumanist respond to a person that wants to die? Like not in the future in a "death has X benefit" way, but an actual concrete "I'm going to finish up these things here and then put on my nice shoes and die" way?

Replies from: Jayson_Virissimo, shminux, nshepperd
comment by Jayson_Virissimo · 2013-01-08T07:27:29.659Z · LW(p) · GW(p)

How does a transhumanist respond to a person that wants to die? Like not in the future in a "death has X benefit" way, but an actual concrete "I'm going to finish up these things here and then put on my nice shoes and die" way?

Not all transhumanists share the same normative ethics and preferences, so the question is underspecified.

Replies from: jooyous
comment by jooyous · 2013-01-08T07:32:04.796Z · LW(p) · GW(p)

Oh, sorry! What various normative ethics and preferences are there? What else should I specify? o.O I guess I'm confused because I agree with the "death bad, health good" idea on the macro level, but I know a number of ... strange individuals on the micro level.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T11:09:25.686Z · LW(p) · GW(p)

Immortalism (probably what you meant by "transhumanist") is the norm here. I'm not sure what the normative response to your query is, though; my response would be "try to persuade them otherwise, forcibly restrain them until you succeed in doing so."

Replies from: Gastogh
comment by Gastogh · 2013-01-09T10:55:11.745Z · LW(p) · GW(p)

I'm not sure how literally I'm supposed to take that last statement, or how general its intended application is. It just doesn't seem practicable.

I'm assuming you wouldn't drop everything else that's going on in your life for an unspecified amount of time in order to personally force a stranger to stay alive, all just as a response to them stating that it would be their preference to die. Was this only meant to apply if it was someone close to you who expressed that desire, or do you actually work full-time in suicide prevention or something?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-09T11:13:47.273Z · LW(p) · GW(p)

Well, that's a best-case scenario. Obviously opportunity costs and such might make it impractical. But if possible you should prevent them from killing themself and work on persuading them not to try.

I don't work in suicide prevention and I don't know anyone who does; this is just my judgement of the hypothetical scenario presented (with a few additional assumptions for details that weren't specified.)

comment by shminux · 2013-01-08T07:34:58.313Z · LW(p) · GW(p)

With respect to their terminal (no pun intended) values.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T11:05:30.699Z · LW(p) · GW(p)

I'm guessing that the "person" in question is human. Do you believe human terminal values are suicidal?

Replies from: shminux
comment by shminux · 2013-01-08T15:52:45.779Z · LW(p) · GW(p)

Nice, you've managed to mix a noncentral fallacy and a typical mind fallacy in just one sentence.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T15:56:27.033Z · LW(p) · GW(p)

Assuming that "person" is referring to a human is not the typical mind fallacy. Asking you a question regarding human terminal values, which is relevant to the discussion at hand, is not the noncentral fallacy.

EDIT: And vice versa, obviously.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-08T16:56:06.765Z · LW(p) · GW(p)

Do you believe human terminal values are suicidal? ...

... is a typical mind fallacy because you're drawing conclusions about / creating constraints on others' terminal values based on what you think is the "norm", or the "average" (chosen relative to your specific culture/your time period in human history, I presume), the judgement of which is based on the norms and values you experienced / encountered, and possibly the discarding of terminal values you deem aberrant.

The question is not "Are human terminal values typically suicidal?" but "Can human terminal values be suicidal?", the answer to which is yes, even if such cases are very rare.

Do you believe human terminal values are suicidal?

The noncentral fallacy is using "suicide" without qualification, which invokes typical images of violent suicides. It's a much weaker case, IMO; maybe shminux can chime in.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-08T18:02:31.052Z · LW(p) · GW(p)

Thanks for the explanation.

Regarding the typical mind fallacy ... I'm not sure about this. The OP didn't specify that you were talking to someone unusual beyond their professing a desire to die, so my assumption that they are a neurotypical human seems valid. OTOH, it is presumably possible to construct a situation where human CEV or whatever would still want them to die, so I guess it depends on how "terminally want" is defined. For the record, I didn't mean that humans could never prefer death, but merely that we do not desire it.

Regarding the noncentral fallacy, I certainly didn't mean violent suicides; if that connotation crept in, I apologize. Note that the OP implied a nonviolent death.

Replies from: Kawoomba
comment by Kawoomba · 2013-01-08T18:16:46.818Z · LW(p) · GW(p)

You're welcome.

For the record, I didn't mean that humans could never prefer death, but merely that we do not desire it.

I think I see the problem now. The quoted sentence only makes sense if by "we do not desire it" you mean "on average, it's not part of a human's terminal values". Much of the disagreement, I think, stems from the computer science crowd automatically checking such a broad assertion ("we do not desire it" / "are human values suicidal") against the extreme cases and, having found cases for which it doesn't hold, returning "This is a false statement".

As with a reductio ad absurdum, you need just one counterexample to falsify such a blanket statement.

I'd advise you, on this forum specifically, to avoid such confusion by saying e.g. "Do you believe neurotypical (current culture/time frame) human terminal values are suicidal?" In that case, a charge of "typical mind fallacy" would be baseless since, well, you are only talking about "typical" humans (whatever that may be).

You won't find much disagreement that most currently living humans do not value suicide for its own sake.

Then again, most currently living humans do not value certain kinds of liquor. Same thing.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-09T10:09:53.129Z · LW(p) · GW(p)

Good points. For various reasons, I tend to use "human" to mean neurotypical human, at least when considering minds. I need to be more careful to correct that.

Replies from: jooyous
comment by jooyous · 2013-01-10T18:00:16.536Z · LW(p) · GW(p)

But this is kind of the point of my question. If someone decides they want to die (when they're not terminally ill and in great pain, so it's not immediately obvious why), do we assume that this is evidence that they're NOT neurotypical, and immediately start treating their desires as weird brain fluctuations and trying to save them from themselves? Or do we let them do what they want, even if this is an indication of mental illness? Or is there a line in the middle somewhere?

If we suppose there is a small batch of humans who profess the desire to die as a thing to do, does a transhumanist immortalist jump in and try to save that batch, or leave them alone?

Replies from: MugaSofer
comment by MugaSofer · 2013-01-11T12:51:49.732Z · LW(p) · GW(p)

Well, some would argue that if they're not neurotypical (as opposed to neurotypical and ~~stupid~~ misguided), then we should respect their terminal values.

comment by nshepperd · 2013-01-08T08:51:02.727Z · LW(p) · GW(p)

Supposedly there exist transhumanists who don't subscribe to immortalism, as the other two commenters seem to be trying to say, but less helpfully. Probably a more precise formulation of your question would thus be "how does a transhumanist immortalist respond to a person that wants to die?"

That out of the way, my direct response would probably be "here's the number for the suicide hotline". If they don't actually seem to be in any real danger of killing themselves any time soon, I might ask them what they hope to gain by dying today.

Replies from: jooyous
comment by jooyous · 2013-01-08T18:09:04.858Z · LW(p) · GW(p)

See, I feel like suicide hotlines are for people who don't want to live, which isn't quite the same thing? What if they do give you a concrete answer? Is there any answer they could give that would pop them out of the "death is bad" bubble? Like, what if they say they feel like their death is part of some weird, creative, performance-art thing?

Thank you for helpfulness! I understand the distinction now. =)

Replies from: MugaSofer
comment by MugaSofer · 2013-01-10T08:39:59.057Z · LW(p) · GW(p)

I feel like suicide hotlines are for people who don't want to live, which isn't quite the same thing

I think suicide hotlines are for anyone who wants to die, although if someone has really thought it through, I doubt they'd be swayed by the advice of someone who was expecting a depressed teenager.

comment by PhilGoetz · 2013-02-13T19:43:22.679Z · LW(p) · GW(p)

I just read Tom Stoppard's "Rosencrantz and Guildenstern Are Dead", which is praised as a deep and intellectual play. It appears to operate primarily by stringing us along with a few lines of boring dialogue, then throwing in something random or meaningless. The unexpected line intrigues us; we feel the thrill of curiosity, undiluted by any interest in dramatic tension, plot, or character. But the dialogue's breakneck speed forces us to leave it behind before we can inspect the line and discover it says nothing we didn't already know. Repeat until curtain.

The play is allegedly "about" destiny vs. free will, significance vs. insignificance, and death. But it merely rambles on about these things, presenting trite, overused metaphors and angstful reactions in pretentious language, without ever making an argument. An argument must begin with facts, and Stoppard's play carefully and deliberately excises all facts from the start.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-13T20:25:23.967Z · LW(p) · GW(p)

To establish some grounds for comparison, can you list three or four plays which do say things we didn't already know, and which make an argument beginning with facts?

Replies from: drethelin
comment by drethelin · 2013-03-26T06:10:52.408Z · LW(p) · GW(p)

The grandparent is either the most amazing missing of the point or a perfect troll. And possibly also this comment? Shit, I've gone too deep.

Replies from: None
comment by [deleted] · 2013-03-26T06:17:37.833Z · LW(p) · GW(p)

.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-26T06:37:01.979Z · LW(p) · GW(p)

You may be right.

That said, I continue to be puzzled by the idea that plays should specify problems and work towards answers (or tell us things we didn't know, or make arguments beginning with facts); objecting to a play on the grounds that it doesn't do this strikes me as about as sensible as objecting to a scientific paper on the grounds that it doesn't rhyme.

That said, it's possible I just have too narrow a scope of what a play is. That's why I asked for examples of plays that do have this property; if pointed at such a thing I might completely rethink my understanding of what makes a play worthwhile. If you have examples handy, I'd be grateful.

Replies from: None
comment by [deleted] · 2013-03-26T06:44:46.853Z · LW(p) · GW(p)

.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-26T13:33:42.556Z · LW(p) · GW(p)

Thanks for clarifying. Of those I've only seen Mindwalk but I understand better what you mean now.

And, sure, I agree that there's a mostly unexplored popular-entertainment niche for this sort of rigorous message film; I originally thought you were supporting a different claim.

Replies from: None
comment by [deleted] · 2013-03-26T14:26:50.967Z · LW(p) · GW(p)

.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-26T15:07:10.053Z · LW(p) · GW(p)

(shrug) This reduces to the question "What are plays for?" Whatever they're for, failing to do that thing is grounds for objection.

I expect "that thing" is a disjunction, and I don't claim to have a full specification. But in much the same way that one doesn't have to be able to articulate precisely what a business plan is for in order to be pretty confident that the fact that it isn't in iambic pentameter isn't grounds for objecting to one, I don't think a full specification of the purpose of theatre is necessary to support the claim I'm making.

That said, if I strip out the implicit context and address your question in isolation... "failing to entertain" is probably a generic enough answer to cover most of the bases.

comment by Algernoq · 2014-08-10T18:49:04.148Z · LW(p) · GW(p)

Calling someone "deep" is like calling someone "articulate"... It's a statement of unwillingness or inability to discuss what was said. I'm not offended by being called "deep" because I'll outlive the deathists who tend to call me that.

comment by peyton.thalman · 2022-01-19T16:00:51.299Z · LW(p) · GW(p)

This is why I love LessWrong. I'm new to this but I feel my thinking ability improve every day. Thank you.

comment by Morgana Vanhooperdink (morgana-vanhooperdink) · 2023-09-02T14:42:56.956Z · LW(p) · GW(p)

I'd never heard of transhumanism until now. After reading this article and doing some quick research, I have to say I absolutely hate it. It doesn't make a lick of sense to me, given that I'm pro death positivity and know death from an anthropological standpoint. If nothing died, the land would become barren: the dead provide nutrient-rich mulch that can fertilize hundreds of plants, and the important bacteria and creatures that live off decay would die out. "Whale fall" is the term for when a whale dies and sinks to the bottom of the sea; its carcass becomes a vital habitat and source of food for thousands of ocean lifeforms, and an animal carcass (humans included) is no different.

Imo, trying to push and justify negativity towards death only perpetuates an unhealthy fear of an event that will happen to all of us; not only can it hurt people mentally, but it can also make the grieving process so much harder for loved ones. Death isn't a bad thing: if more people spoke about it openly and constructively rather than treating it like a taboo, it wouldn't be so scary.

I don't mean to be rude, or to say your views are wrong; I'm just not the right target audience for that philosophy. Demystifying death (seeing it as a natural, unavoidable event and treating it like an everyday thing) did a world of good for my mental health, so I can't imagine being anti-death.