Update on Kim Suozzi (cancer patient in want of cryonics)
post by ahartell · 2013-01-22T09:15:11.369Z · LW · GW · Legacy · 63 comments
Kim Suozzi was a neuroscience student with brain cancer who wanted to be cryonically preserved but lacked the funds. She appealed to Reddit, and a foundation was set up, called the Society for Venturism. Enough money was raised, and when she died on January 17th, she was preserved by Alcor.
I wasn't sure if I should post about this, but I was glad to see that enough money was raised and it was discussed on LessWrong here, here, and here.
Edit: It looks like Alcor actually worked with her to lower the costs, and waived some of the fees.
Edit 2: The Society for Venturism has been around for a while, and wasn't set up just for her.
63 comments
Comments sorted by top scores.
comment by advancedatheist · 2013-01-22T15:40:41.384Z · LW(p) · GW(p)
In 1992 I attended a dinner held by Alcor's people to commemorate the 25th anniversary of the cryosuspension of James Bedford, who has managed to stay frozen after all these years and currently resides at Alcor.
Mike Darwin gave one of his characteristically passionate and learned speeches at this event, where he invoked Joseph Campbell's ideas popular at the time about the Hero's Journey. As I recall it, Mike said that James Bedford, an ordinary man, went on a fantastic journey across time to an unknown future, in effect becoming a new kind of mythic hero. Some day, Mike said, Bedford the myth might contribute to reconstituting Bedford the man.
Bedford hasn't exactly become a household name, but then his suspension happened before most of today's Americans were born. Kim Suozzi's struggle and cryosuspension, by contrast, have happened in our awareness and in a different media environment. She may have the potential to become a kind of mythic heroine for the millennial generation. And I would certainly like to see Suozzi the myth become Suozzi the healthy, whole young woman again.
We just need some poets to tell this myth in compelling ways. Stephenie Meyer has demonstrated that a market exists for stories about ordinary mortal women of Kim's generation who become "reverse Arwens" by rejecting aging and other human limitations.
Steven B. Harris, MD, also wrote about the repurposing of mythological tropes for cryonics purposes years ago in his essay, "Cryonics And The Resurrection Of The Mythic Hero," which you can read by scrolling down on this page:
comment by advancedatheist · 2013-01-22T14:52:30.802Z · LW(p) · GW(p)
I'd like to thank LessWrongers who donated for Kim's suspension. I hope you didn't donate your money in vain.
May science speed you, Miss Suozzi.
comment by Viliam_Bur · 2013-01-22T20:51:48.047Z · LW(p) · GW(p)
These days, publicity can make you literally immortal!
comment by wedrifid · 2013-01-22T14:42:49.351Z · LW(p) · GW(p)
Enough money was raised, and when she died on January 17th, she was preserved by Alcor.
Alcor? That's curious. Given the critical lack of funds I would have expected Cryonics Institute to be used. It seems like enough money and then some was raised!
↑ comment by saturn · 2013-01-22T22:19:06.920Z · LW(p) · GW(p)
Given what I've heard about CI's quality control, I don't blame her for trying to raise enough money for Alcor.
↑ comment by ModusPonies · 2013-02-04T16:08:18.130Z · LW(p) · GW(p)
What have you heard about CI's quality control, and do you happen to have the sources conveniently available? (I'm making the decision between CI and Alcor.)
↑ comment by saturn · 2013-02-06T23:25:40.103Z · LW(p) · GW(p)
I don't have any special insight on this subject, only what I've picked up from reading LW and occasionally talking about it on IRC. Many sources are linked from the comments in this thread (the comments are much more informative than the original post). To sum up, it seems that both CI and Alcor are lamentably bad, but CI is considerably worse.
comment by [deleted] · 2013-01-26T21:32:53.910Z · LW(p) · GW(p)
I am so incredibly glad that she made it.
comment by advancedatheist · 2013-01-23T04:19:32.402Z · LW(p) · GW(p)
I've thought of a way of managing the religious objection to Kim's possible revival versus Abrahamic afterlife beliefs. Eternity doesn't mean endless time like we experience it. Many theologians argue that in eternity, our assumptions and experiences about time don't apply. Kim's soul, whatever that means, could very well exist in eternity in whatever place god assigns it (preferably a tolerable one if god considers her an "anonymous christian," despite her agnosticism), yet this soul could also inhabit the realm of time in Kim's revived and restored body (and she'll literally need a body because she got a neurosuspension) in Future World.
In other words, each outcome doesn't necessarily have to exclude the other. I work with a woman who converted to Orthodox Christianity and has a side business selling icons, and apparently in that tradition a "mystery" doesn't have the meaning of "puzzle" which the human mind can potentially solve and understand, as in our use of the phrase "murder mysteries." Orthodox Christians believe that not only does the human mind not understand god's mysteries; the human mind simply cannot understand them. Instead the Christian has to accept the mystery as a revelation of god's transcendent sovereignty over creation. Kim's revival might find some elbow room in this understanding of "mystery" for certain kinds of religionists who might otherwise consider her demon-possessed or a zombie.
Reference: http://en.wikipedia.org/wiki/Anonymous_Christian
↑ comment by CarlShulman · 2013-01-23T04:32:01.984Z · LW(p) · GW(p)
Mark, I'm curious. I gather you are a supporter of cryonics who is very critical of most proposed routes to reviving or reconstructing cryopreserved people. How would you hope to be revived if you are cryopreserved? And what probabilities would you input into Jeff Kaufman's probability spreadsheet (adding your answers there would be very interesting, if you'd like)?
comment by loup-vaillant · 2013-01-22T17:36:12.050Z · LW(p) · GW(p)
when she died
She's clinically dead for sure, but probably not information-theoretically dead. I'd rather use the latter definition.
Anyway, she did successfully raise her odds, so that looks like good news.
↑ comment by [deleted] · 2013-01-22T19:10:03.277Z · LW(p) · GW(p)
At the end of the day, one only corresponds with the clinically living.
↑ comment by loup-vaillant · 2013-01-23T01:07:07.332Z · LW(p) · GW(p)
Here is how I feel: the odds are not good, I can do close to nothing about it, and I have to wait a lifetime to boot. It sucks, but there's still that small glimmer of hope.
A coffin doesn't feel that way. When I see one, I just want revenge.
(Edit: s/retribution/revenge)
↑ comment by jefftk (jkaufman) · 2013-01-26T20:03:39.903Z · LW(p) · GW(p)
By "probably not" do you mean that her odds of being information-theoretically dead are less than 50%? Where would you put them?
↑ comment by loup-vaillant · 2013-01-27T10:53:59.622Z · LW(p) · GW(p)
I do mean less than 50%. Something below 10%, even. I'm just quite confident that someone who is cryopreserved, especially recently, still contains enough information to be reconstructed. On the other hand, I don't know enough about the physical structure of the human mind to be completely sure. I'd say most of my probability for her actually being information-theoretically dead lies in my ignorance of the subject.
Anyway, that's about a 90% chance that she's still alive. My probability that she will be revived eventually is much lower, of course. I have to account for existential and catastrophic risks, the economic collapse of Alcor, the failure to further our technology… Heck, some religious fanatics may bomb the place for all I know (that one is below 1%).
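[Editor's note: the conjunctive reasoning in this comment — a high probability of information survival, multiplied down by a series of independent failure modes — can be sketched as a toy calculation. Only the ~90% figure comes from the comment itself; every other number below is a hypothetical placeholder, not the commenter's estimate.]

```python
# Toy sketch of a conjunctive revival estimate.
# Only the 0.9 figure is from the comment above; the rest
# are hypothetical placeholders for illustration.
factors = {
    "not information-theoretically dead": 0.9,  # stated above
    "no existential/catastrophic risk":   0.7,  # hypothetical
    "Alcor survives economically":        0.6,  # hypothetical
    "revival technology is developed":    0.5,  # hypothetical
}

p_revival = 1.0
for name, p in factors.items():
    p_revival *= p

print(f"P(revival) = {p_revival:.3f}")  # 0.9 * 0.7 * 0.6 * 0.5 = 0.189
```

The point of the sketch is structural: even with information survival near 90%, a handful of independent downstream risks quickly pulls the overall revival probability well below one in five.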
comment by James_Miller · 2013-01-22T15:44:56.656Z · LW(p) · GW(p)
I interviewed Kim for a potential article for humanity+ magazine about her quest to get charitable funds to pay for cryonics. The article was never published because very shortly after my interview Alcor decided to fully fund Kim. Here is part of the article:
Kim Suozzi’s Cold and Lonely Journey To Outrun Brain Cancer
Like any sensible girl diagnosed with a fatal disease, 23-year-old Kim Suozzi is making arrangements to be cryogenically preserved. Kim has brain cancer, and although she's participating in a clinical trial for an experimental treatment, she told me that without cryonics her chance of survival would be basically zero.
As hard as it should be to believe, there are actually some people in Kim's position who forgo cryonics to accept certain death even though these people don't want to die. There are cancer patients who would spend every penny they have plus a bunch of taxpayer dollars, and then (if it were necessary) crawl across broken glass for a traditional treatment that would give them only a few percentage points chance of survival, but who have no interest in cryonics. Although I don't think all of these people should be forced into cryonics, at the very least they should be compelled to take a sanity test to determine if they're capable of making rational medical decisions. After all, the norm in Western society is to treat a preference for suicide as a sign of mental illness.
Of course, in reality it's people who sign up for cryonics, such as this author, who are considered mentally suspect. Fewer than 3,000 people have ever registered for cryonics despite the fact that over a hundred million have surely heard of it. We consider death, especially when it strikes someone who should be only in the first third of her life, a horrible, heartbreaking tragedy. Yet cryonics, which offers a means of escaping or at least postponing death, is something almost no one opts for, making Kim Suozzi a socially brave pioneer rather than an ordinary cancer patient.
If you think that, as futurist Ray Kurzweil writes, the Singularity is near (Kurzweil estimates 2045) then it won't take too long before cryonics could be used to revive you. Kurzweil has signed up for cryonics, and Kim told me that the plausibility of Kurzweil's analysis is a big part of why she is interested in cryonics.
↑ comment by CarlShulman · 2013-01-22T19:56:35.398Z · LW(p) · GW(p)
If you think that, as futurist Ray Kurzweil writes, the Singularity is near (Kurzweil estimates 2045)
James, you've seen this study of past AI predictions, this independent grading of Kurzweil's predictions, and the stagnation of computer serial speeds and neuroimaging resolution, right? Hans Moravec has already made several predictions of AI progress based on hardware progress that have been falsified too.
↑ comment by Kawoomba · 2013-01-22T21:38:30.225Z · LW(p) · GW(p)
Do you have a Singularity ETA, and if so, may I ask what it is?
↑ comment by CarlShulman · 2013-01-22T22:24:17.993Z · LW(p) · GW(p)
My median timeline estimate for loosely human-level AI (i.e. it is technically feasible to build AI that can do most anything a human can do, although AI performance would be superhuman in many areas, as it already is), conditional on no catastrophes stopping forward progress would be near the end of the century. This is not a very stable or solid estimate, and I would update a lot on seeing the views of folks who had studied the issues and accumulated strong track records in prediction exercises like DAGGRE focused on technological forecasting and other relevant areas, among many other things.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T00:28:23.121Z · LW(p) · GW(p)
Median doom time toward the end of the century? That seems enormously optimistic. If I believed this I'd breathe a huge sigh of relief, upgrade my cryonics coverage, spend almost all current time and funding trying to launch CFAR, and write a whole lot more about the importance of avoiding biocatastrophes and moderating global warming and so on. I might still work on FAI due to comparative advantage, but I'd be writing mostly with an eye to my successors. But it just doesn't seem like ninety more years out is a reasonable median estimate. I'd expect bloody uploads before 2100.
Carl, ???
↑ comment by CarlShulman · 2013-01-23T02:06:37.305Z · LW(p) · GW(p)
AI has had 60 years or more, depending on when you start counting, with (the price-performance cognate of) Moore's law running through that time: the progress we've seen reflects both hardware and software innovation. Hardware progress probably slows dramatically this century, although neuroscience knowledge should get better.
Looking at a lot of software improvement curves for specific domains (games, speech, vision, navigation) big innovations don't seem to be coming much faster than they used to, and trend projection suggests decades to reach human performance on many tasks which seem far from AGI-complete. Technologies like solar panels or electric vehicles can take many decades to become useful enough to compete with rivals.
Intermediate AI progress has fallen short of Kurzweilian predictions, although it's still decent. Among AI people AGI before the middle of the century is a view seen mainly in groups selected for AGI enthusiasm, like the folk at the AGI conference, but less so among the broader AI community. And there's Robin's progress metric (although it still hasn't been done for other fields, especially the ones making the most progress).
Are we halfway there, assuming we can manage to keep up this much progress (when progress in many other technological fields is slowing)?
Intelligence enhancement for researchers, uploads, and other boosts could help a lot, but IA will probably be a long time coming (biology is slow: FDA for drugs, maturation for genetic engineering) and uploads are very demanding of hardware technology and require much better brain models (correlated with AI difficulty).
I didn't say 87 years, but closer to 87 than 32 (or 16, for Kurzweil's prediction of a Turing-Test passing AI).
↑ comment by Kaj_Sotala · 2013-01-23T08:12:12.360Z · LW(p) · GW(p)
The main thing that makes me suspect we might have AGI before 2100 are neuroprostheses: in addition to bionic eyes for humans, we've got working implants that replicate parts of hippocampal and cerebellar function for rats. At least one computational neuroscientist that I know of has told me that we could replicate the human cerebellum as well pretty soon, but the hard problem lies in finding suitable connections that could be used to interface the brain with computers well enough. He was also willing to go on record on neocortex prostheses not being that far away.
If we did have neural prostheses - the installation of which might end up becoming a routine medical procedure - they could no doubt be set to also record any surrounding brain activity, thus helping reverse engineer the parts we don't have figured out yet. Privacy issues might limit the extent to which that was done with humans, but less so for animals. X years to neuroprosthesis-driven cat uploads and then Y years to figuring out their neural algorithms and then creating better versions of those to get more-or-less neuromorphic AGIs.
The main crucial variables for estimating X would be the ability to manufacture sufficiently small chips to replace brain function with, and the ability to reliably interface them with the brain without risk of rejection or infection. I don't know how the latter is currently projected to develop.
↑ comment by Dreaded_Anomaly · 2013-01-25T03:53:19.176Z · LW(p) · GW(p)
The main thing that makes me suspect we might have AGI before 2100 are neuroprostheses: in addition to bionic eyes for humans, we've got working implants that replicate parts of hippocampal and cerebellar function for rats.
The hippocampal implant has been extended to monkeys.
↑ comment by Kaj_Sotala · 2013-01-25T07:09:16.286Z · LW(p) · GW(p)
Thanks, I'd missed that.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-23T03:07:45.763Z · LW(p) · GW(p)
This is a rather important point. How do we get more info on it? You're the first halfway-sane person I've ever heard put the median at 2100.
From my perspective if you told me that in actual fact AGI had been developed in 2120 (a bit of a ways after your median) despite the lack of any great catastrophes, I would update in the direction of believing all of the following:
- Rogue biotech hadn't actually been a danger. You didn't make any strong predictions about this because it was outside your conditional; I don't know much about it either. Basically I'm just noting it down. Also, no total global worse-than-Greece collapse, no nuclear-proliferated war brought on by global warming, etc.
- Moore's Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013.
- AI academia was Great Stagnating (this is relatively easy to believe)
- Machine learning techniques that actually had non-stagnat-y people pushing on them for stock-market trading also plateaued, or weren't published, or never AGI-generalized.
- All the Foresight people were really really optimistic about nanotech, nobody cracked protein folding, or that field Great Stagnated somehow... the nanotech-related news I see, especially about protein folding, doesn't seem to square with this, but perhaps the press releases are exaggerated.
- Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etcetera.
- Biotech stays regulation-locked forever - not too hard to believe.
- Anders Sandberg is wrong about basically everything to do with uploading.
It seems like I'd have to execute a lot of updates. How do we resolve this?
↑ comment by CarlShulman · 2013-01-23T04:06:08.518Z · LW(p) · GW(p)
Moore's Law had come to a nearly complete permanent halt or slowdown no more than 10-20 years after 2013
Well atom-size features are scheduled to come along on that time-scale, believed to mark the end of scaling feature size downwards. That has been an essential part of Moore's law all along the way. Without it, one has to instead do things like use more efficient materials at the same size, new architectural designs, new cooling, etc. That's a big change in the underlying mechanisms of electronics improvement, and a pretty reasonable place for the trend to go awry, although it also wouldn't be surprising if it kept going for some time longer.
AI academia was Great Stagnating (this is relatively easy to believe)
The so-called "Great Stagnation" isn't actually a stagnation, it's mainly just compounding growth at a slower rate. How much of the remaining distance to AGI do you think was covered 2002-2012? 1992-2002?
All the Foresight people were really really optimistic about nanotech
Haven't they been so far?
In any case, nanotechnology can't shrink feature sizes below atomic scale, and that's already coming up via conventional technology. Also, if the world is one where computation is energy-limited, denser computers that use more energy in a smaller space aren't obviously that helpful.
perhaps the press releases are exaggerated
Could you give some examples of what you had in mind?
Large updates in the direction of global economic slowdown, patent wars kill innovation everywhere, corruption of universities even worse than we think, even fewer smart people try to go into real tech innovation, etc.
Well, there is demographic decline: rich country populations are shrinking. China is shrinking even faster, although bringing in its youth into the innovation sectors may help a lot.
Biotech stays regulation-locked forever - not too hard to believe.
Say biotech genetic engineering methods are developed in the next 10-20 years, heavily implemented 10 years later, and the kids hit their productive prime 20 years after that. Then they go faster, but how much faster? That's a fast biotech trajectory to enhanced intelligence, but the fruit mostly fall in the last quarter of the century.
Anders Sandberg is wrong about basically everything to do with uploading.
See 15:30 of this talk, Anders' Monte Carlo simulation (assumptions debatable, obviously) is a wide curve with a center around 2075. Separately Anders expresses nontrivial uncertainty about the brain model/cognitive neuroscience step, setting aside the views of the non-Anders population.
You're the first halfway-sane person I've ever heard put the median at 2100.
vs
I didn't say 87 years, but closer to 87 than 32 (or 16, for Kurzweil's prediction of a Turing-Test passing AI).
I said "near the end of the century" contrasted to a prediction of intelligence explosion in 2045.
↑ comment by RHollerith (rhollerith_dot_com) · 2013-01-27T03:21:42.587Z · LW(p) · GW(p)
Well atom-size features are scheduled to come along on that time-scale, believed to mark the end of scaling feature size downwards.
A very tangential point, but in his 1998 book Robot, Hans Moravec speculates about atoms made from alternative subatomic particles that are smaller and able to absorb, transmit and emit more energy than the versions made from electrons and protons.
↑ comment by Baughn · 2013-01-23T21:11:23.275Z · LW(p) · GW(p)
press releases
Here's one: http://phys.org/news/2012-08-d-wave-quantum-method-protein-problem.html
That doesn't apply to large proteins yet, but it doesn't make me optimistic about the nanotech timeline. (Which is to say, it makes me update in favor of faster R&D.)
↑ comment by CarlShulman · 2013-01-23T21:38:53.479Z · LW(p) · GW(p)
http://blogs.nature.com/news/2012/08/d-wave-quantum-computer-solves-protein-folding-problem.html
It's also worth pointing out that conventional computers could already solve these particular protein folding problems.
You have a computer doing something we could already do, but less efficiently than existing methods, which have not been impressively useful themselves?
ETA: https://plus.google.com/103530621949492999968/posts/U11X8sec1pU
↑ comment by Baughn · 2013-01-24T00:20:50.572Z · LW(p) · GW(p)
The G+ post explains what it's good for pretty well, doesn't it?
It's not a dramatic improvement (yet), but it's a larger potential speedup than anything else I've seen on the protein-folding problem lately.
↑ comment by CarlShulman · 2014-09-08T04:22:04.714Z · LW(p) · GW(p)
You can duplicate that D-Wave machine on a laptop.
↑ comment by Baughn · 2014-09-08T20:26:21.879Z · LW(p) · GW(p)
True, but somewhat beside the point; it's the asymptotic speedup that's interesting.
...you know, assuming the thing actually does what they claim it does. sigh
↑ comment by CarlShulman · 2014-09-09T03:15:55.271Z · LW(p) · GW(p)
Also no asymptotic speedup.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T20:57:21.393Z · LW(p) · GW(p)
Nobody believes in D-Wave.
↑ comment by Shmi (shminux) · 2013-01-24T21:17:51.112Z · LW(p) · GW(p)
That seems like an oversimplification. Clearly some people do.
Scott Aaronson:
“I no longer feel like playing an adversarial role. I really, genuinely hope D-Wave succeeds.” That said, he noted that D-Wave still hadn’t provided proof of a critical test of quantum computing.
I am not qualified to judge whether D-Wave's claim that they use quantum annealing, rather than standard simulated annealing (as Scott suspects), in their adiabatic quantum computing is justified. However, the lack of independent replication of their claims is disconcerting.
↑ comment by JoshuaFox · 2013-01-24T08:59:52.201Z · LW(p) · GW(p)
This is puzzling.
I had thought that the question of AI timelines was so central that the core SI research community would have long since Aumannated and come to a consensus probability distribution.
Anyway, good you're doing it now.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T17:17:56.227Z · LW(p) · GW(p)
Maybe I was absent from the office that day? I hadn't heard Carl's 2083 estimate (I recently asked him in person what the actual median was, and he averaged his last several predictions together to get 2083) until now, and it was indeed outside what I thought was our Aumann-range, hence my surprise.
↑ comment by Paul Crowley (ciphergoth) · 2013-01-26T15:45:50.526Z · LW(p) · GW(p)
It seems like the sort of thing people would plan to do on a day you were going to be in the office.
↑ comment by CarlShulman · 2013-02-05T08:55:29.980Z · LW(p) · GW(p)
We had discussed timelines to this effect last year.
↑ comment by Shmi (shminux) · 2013-01-23T21:07:02.873Z · LW(p) · GW(p)
I'm wondering why this is stated as a conjunction. Would a single failure here really result in an early AGI development?
↑ comment by RHollerith (rhollerith_dot_com) · 2013-01-27T03:36:29.650Z · LW(p) · GW(p)
If I go in the garage and observe that the floor is wet, I would update in the direction of
- it rained last night; and
- Frank left the garage door open again; and
- the tools the boys negligently left outside in the grass all night got rusty.
But that of course does not mean that if the tools had not gotten rusty, the garage floor would not have gotten wet.
In other words, Eliezer was writing about statistical relationships, and you seem to have mistaken them for causal relationships.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-01-24T21:05:42.673Z · LW(p) · GW(p)
BTW regarding Robin's AI progress metric, my reaction is more like Doug's (the first / most upvoted comment).
↑ comment by CarlShulman · 2013-01-24T21:24:17.530Z · LW(p) · GW(p)
I agree with that comment that machine learning has been on a roll, but Robin's reply is important too. We can ask how machine learning shows up in the performance statistics for particular tasks to think about its relative contribution.
comment by Nic_Smith · 2013-01-23T03:05:36.378Z · LW(p) · GW(p)
A small correction: The Society for Venturism has been around for quite a while, although I have a vague impression they've been more active in the last year than in the past. I had a look at their site to see when they were founded (1986), and noticed they're currently raising funds for someone else, Aaron Winborn.
comment by nigerweiss · 2013-01-22T22:25:27.294Z · LW(p) · GW(p)
That's got to be close to a best case suspension. I wish her nothing but the best.
comment by lsparrish · 2013-01-24T05:45:16.470Z · LW(p) · GW(p)
Some mass media coverage here. Also this video features her (religious) mother explaining the reasoning behind the head-only thing and how she came to terms with it.
comment by curiousepic · 2013-01-22T15:21:49.493Z · LW(p) · GW(p)
It will be interesting to read the case report if/when it's posted.
comment by advancedatheist · 2013-01-23T04:31:47.850Z · LW(p) · GW(p)
I say this about relatively few people, especially women, but from what I've learned about Kim, I feel that I approve of her.
Secular, science-oriented, self-interested, passionate about doing something controversial to try to survive, and apparently non-promiscuous. (The last appeals to the Dark Enlightenment conservative in me.)
I wish I had a girlfriend like her when I went to college.
I would have wanted the time to get to know her better. Perhaps I will if we both make it to The Other Side.
↑ comment by [deleted] · 2013-01-26T21:30:29.385Z · LW(p) · GW(p)
You asked elsewhere why this seems to have been downvoted. This seems easy to explain. People are treating this as death, though we might prefer to tag it as "suspension", and as with all deaths there are sacred feelings. You then proceeded to implicitly violate a taboo of modern feelings about sexual sacredness. This was enough for a pattern match to your post being offensive.
Some people may just find commenting about the sexual lives of deceased people and your opinions on them distasteful, which is ironically a cached intuition that their current belief system wouldn't generate de novo, though I'm sure it can generate rationalizations for it if it happens to be present. Don't talk about the sex lives of dead people as an extension of not talking about the bodily functions of dead people. In any case this intuition was a minor factor.
The thing is, I don't find it offensive at all and even understand your sentiment. Is it really such a horrible thing to say that not wanting to have sex with lots of different people can be seen by some as a good characteristic? But being a communication consequentialist, I have to scold you for making the wrong choice if you didn't want to troll. The meaning of a communication is the response it elicits.
comment by advancedatheist · 2013-01-24T14:56:18.937Z · LW(p) · GW(p)
I have to laugh at you people. You downvoted one of my comments because I wrote that I thought Kim had avoided the hookup behavior which damages young women's character and makes them unsuitable for stable relationships with men?
In other words, you don't like the fact that I called her the opposite of a slut?
What a perverse world I live in, where we have both more sluts and more sexually rejected men than ever, yet the political correctness thought-police requires us to accept this situation as an "advanced" society.
↑ comment by ArisKatsaris · 2013-01-24T15:18:38.124Z · LW(p) · GW(p)
Downvoting you now for repeated and offensive use of the word 'slut'.
Even a reactionary can opt for chivalry instead of misogyny. How many times have you seen C. S. Lewis or J. R. R. Tolkien using the word "slut"?
↑ comment by [deleted] · 2013-01-26T21:40:19.160Z · LW(p) · GW(p)
I think the fact that both men were quite religious Christians isn't a coincidence in this case. There is very much a set of values that neither reaction nor liberalism can regenerate out of the old Western tradition they inherited, since they lack the memetic technology to do so, and the socioeconomic circumstances aren't favourable either.
I see this in many different kinds of values, where there is a serious gap between what, say, atheist Christian-raised parents would like to transmit but can't to their children, since they don't have good arguments that would make sense in their framework if the latter is taken seriously. Now, as the one or two conservatives reading this probably realize, people don't take frameworks seriously, so inertia does transmit some of it in a pre-rational manner. I dread to think what my ethics would be like if all of it was derived from some set of axiomatizable first principles, without the unique prejudices and eclectic mix of impressions and infections that I've collected over my life...
Note this probably generalizes to any exposure to ideologies. Once you reject a certain framework cached thoughts from that system of thought remain and influence your values, sometimes even constituting key parts of your value system. But this does not mean you have retained the correct tools needed to infect other people or even your future self with these values.
Humans are broken in this regard; the best we can hope for is setting up resilient traditions and reducing memetic mutation as much as possible... but those seem pretty antithetical to technological progress. So our society is very much like an artificial intelligence that can either choose to stagnate and be certain of keeping its current value set, or self-improve in capability but risk accidentally changing it. This isn't all that's going on of course, but I think it is a real trade-off that modern man is incapable of considering properly.
↑ comment by Multiheaded · 2013-01-27T06:22:36.206Z · LW(p) · GW(p)
I have a secret love of chaos. There should be more of it. Do not believe—and I am dead serious when I say this—do not assume that order and stability are always good, in a society or in a universe. The old, the ossified, must always give way to new life and the birth of new things. Before the new things can be born the old must perish. This is a dangerous realization, because it tells us that we must eventually part with much of what is familiar to us. And that hurts. But that is part of the script of life. Unless we can psychologically accommodate change, we ourselves begin to die, inwardly. What I am saying is that objects, customs, habits, and ways of life must perish so that the authentic human being can live. And it is the authentic human being who matters most, the viable, elastic organism which can bounce back, absorb, and deal with the new.
-PKD, How to Build a Universe That Doesn't Fall Apart Two Days Later
↑ comment by [deleted] · 2013-01-27T13:23:36.103Z · LW(p) · GW(p)
When it comes to people, the LessWrong consensus is that death is bad and that people dying is not the best way to encourage robustness and adaptability of human society.
I find most value deathist arguments unconvincing for much the same reason I find deathist arguments unconvincing.