Why are rationalists not much concerned about mortality?
post by SurvivalBias (alex_lw) · 2022-02-10T00:11:46.068Z · LW · GW · 40 comments
This is a question post.
Contents
Answers: Vladimir_Nesov (18), Matthew Barnett (15), Ben Pace (8), Vaniver (6), Dustin (6), ChristianKl (4), Dagon (3), Razied (2), Martin Randall (1), methree (-3)
40 comments
As of 2022, humans have a life expectancy of ~80 years and a hard limit of ~120. Most rationalists I know agree that dying is a bad thing and that, at minimum, we should have the option to live considerably longer and free of the "diseases of old age", if not indefinitely. It seems to me that this is exactly the kind of problem where rationality skills like "taking things seriously" and "seeing with fresh eyes", plus awareness of time discounting and status quo bias, should help one to notice something is very very wrong and take action. Yet - with the exception of cryonics[1] and a few occasional posts on LW - this topic is largely ignored in the rationality community, with relatively few people doing the available interventions on the personal level, and almost nobody actively working on solving the problem for everyone.
I am genuinely confused: why is this happening? How is it possible that so many people who are equipped with the epistemological tools to understand that they and everyone they love are going to die, who understand it's totally horrible and understand this problem is solvable in principle, keep on doing nothing about it?
There are a number of potential answers to this question I can think of, but none of them is satisfying, and I'm not posting them, to avoid priming.
[ETA: to be clear, I have spent a reasonable amount of time and effort making sure that the premise of the question - that rationalists are insufficiently concerned about mortality - is indeed the case, and my answer is an unequivocal "yes". In case you have evidence to the contrary, please feel free to post it as an answer.]
[1] It's an interesting question exactly how likely cryonics is to work, and I'm planning to publish my analysis of this at some point. But unless you assign a ridiculously optimistic probability to it working, the problem largely remains: even an 80% probability of success would mean your odds are worse than in Russian roulette! Besides, my impression is that only a minority of rationalists are signed up anyway.
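A quick sanity check of that comparison (a minimal sketch; the 80% figure is the footnote's hypothetical, not an estimate of actual cryonics odds):

```python
# Compare a hypothetical 80% cryonics success rate with one round of
# Russian roulette (one bullet, six chambers).
p_cryonics = 0.80      # the footnote's "ridiculously optimistic" hypothetical
p_roulette = 5 / 6     # survival probability of a single trigger pull

print(f"cryonics (optimistic): {p_cryonics:.1%} survival")   # 80.0%
print(f"russian roulette:      {p_roulette:.1%} survival")   # 83.3%
assert p_cryonics < p_roulette  # 80% really is worse than the roulette
```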
Answers
AGI is likely to arrive sooner than any anti-aging intervention, discovered without AGI's help, that adds decades of life. I used to believe that AGI results in either death or an approximately immediate, perfect cure for aging and other forms of mortality (depending on how AI alignment and judgement of morality work out), and that this is a reason to mostly ignore anti-aging. Recently I began to see [LW(p) · GW(p)] less powerful/general (by design) AGI as a plausible way of controlling AI risk, one that isn't easy to safely make more generally useful. If that works out, an immediate cure for aging doesn't follow, even after AI risk is no longer imminent. This makes current anti-aging research less pointless. (In one partial failure mode, with an anti-goodharting, non-corrigible AI, straightforward AI development might even become permanently impossible, thwarted by the AGI that controls AI risk but can't be disabled. In that case any anti-aging must be developed "manually".)
I can only speak for my personal experience, but I think there's a significant minority of rationalists who care about preventing their own personal deaths a lot. I know because I've met them during my own process of figuring out what to do about death.
Personally, I video record most of my life [LW · GW], plan to get cryopreserved (via the best methods available), am interested in and currently pursuing evidence-based strategies to slow aging, and try to avoid excess exposure to the risk of injury. There's not a lot more I can personally do to stop my own death besides these things, so oftentimes I tend to just stop talking about it.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T21:33:58.121Z · LW(p) · GW(p)
My impression is that it's more than most people do! [Although, full disclosure, I myself am signed up with CI and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk-ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two when I'm past 30.]
But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?
↑ comment by Matthew Barnett (matthew-barnett) · 2022-02-10T21:55:59.566Z · LW(p) · GW(p)
But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?
I think of myself as having two parts to my utility function (really just, what I care about). There's a selfish part, and a non-selfish part. As for the selfish component, I'm happy to pursue personal strategies to delay my aging and death. Indeed, I feel that my personal life extension strategies are extreme even by the standards of conventional life extension enthusiasts.
I don't see a compelling selfish reason to donate to or work for life extension organizations. Even if I were a highly skilled biologist (and I'm not), the number of hours or days by which I could realistically hope to hasten the end of aging would be small. In that amount of time, I could have pursued better strategies aimed at helping myself alone.
While delaying death by one day gives a combined sum of millions of years of extra life across everyone, to me it's just one day. That's hardly worth switching careers over.
On the other hand, the non-selfish part of my utility function prefers to do what's best for the world generally, and I don't find life extension research particularly competitive along this axis. In the past, I've contemplated volunteering to help life extension advocacy, but that was more of a personal emotional thing than what I thought would actually be effective.
I have considered whether life extension could turn out to be extremely important for non-selfish reasons in this post [EA · GW]. Ultimately, I do not find the arguments very compelling. Not only am I skeptical that life extension is coming any time soon, but I suspect that by the time it arrives, something even more important (such as AGI) will be here already.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:11:53.365Z · LW(p) · GW(p)
I personally believe exactly the right kind of advocacy may be extremely effective, but that's really a story for a post. Otherwise, yeah, AGI is probably higher impact for those who can and want to work there. However, in my observation the majority of rationalists do not in fact work on AGI, and IMO life extension and adjacent areas have a much wider range of opportunities and so could be a good fit for many of those people.
I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
Also, my rationalist housemate Daniel Filan often reminds me of his basic belief that doing 30 minutes of exercise a few times a week has an expected return of something like 10 hours of life. (I forget the details.) It definitely happens to me a bunch.
Also, right now I'm pretty excited about figuring out more of the micromorts I spend on different things, and getting used to calculating things with them (including diet and exercise, as well as things in the reference class of walking through shady places at night or driving without a seatbelt). Now that I've gotten lots of practice with microcovid estimates, I can do this sort of thing much more easily.
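For readers unfamiliar with the unit: a micromort is a one-in-a-million chance of death. A minimal sketch of what this bookkeeping might look like, with illustrative placeholder values rather than vetted risk figures:

```python
# Toy micromort ledger. All activity values below are illustrative
# placeholders, not vetted estimates.
MICROMORT = 1e-6                   # probability of death per micromort
REMAINING_HOURS = 50 * 365 * 24    # assume ~50 years of remaining life

activities = {
    "one skydiving jump": 8.0,
    "driving 250 miles": 1.0,
    "walking through a shady area at night": 0.5,  # made up for illustration
}

for name, mm in activities.items():
    # expected life lost = P(death) * remaining lifespan
    hours_lost = mm * MICROMORT * REMAINING_HOURS
    print(f"{name}: {mm} micromorts ~ {hours_lost:.2f} expected hours lost")
```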
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T05:29:48.684Z · LW(p) · GW(p)
>I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
That was one of my top guesses, and I'm definitely not implying that longevity is higher or equal priority than AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say above 10%) but since AGI is the greatest risk all resources should be focused on it? If the latter, do you believe it should be the strategy of the community as a whole or just those working on AGI alignment directly?
[Exercising 30 min a few times a week is great, and I'm glad your housemate pushes you to do it! But, well, it's like not going to big concerts in Feb 2020 - it's basic sanity most regular people would also know to follow. Hell, it's literally the FDA advice and has been for decades.]
↑ comment by [deleted] · 2022-02-10T07:48:50.864Z · LW(p) · GW(p)
I'll go out there and say it: longevity is a higher priority than AI alignment. I think this community got nerd sniped on AI alignment and it is simply against the social norms here to prioritize differently.
↑ comment by meedstrom · 2022-02-12T14:12:15.690Z · LW(p) · GW(p)
There's no need for rhetorical devices like "I'll go out there and say it". Please.
Also, the force of norms looks weak to me in this place - it's a herd of cats - so that explanation makes little sense. Also, it's fine to state your understanding of a topic without describing everyone else as "nerd sniped"; no one will needle you for your conclusion. Also, there's little point to commenting if you only state your conclusion - the conclusion is uninteresting; we're looking to learn from the thought process [LW · GW] behind it.
↑ comment by [deleted] · 2022-02-13T00:46:33.649Z · LW(p) · GW(p)
It's not a rhetorical device though? The OP said:
I'm definitely not implying that longevity is higher or equal priority than AI alignment - it's not.
He wrote as if it were an open-and-shut case that needed no argumentation at all. I simply wrote that I am taking the other side.
↑ comment by Ben Pace (Benito) · 2022-02-10T18:22:33.760Z · LW(p) · GW(p)
I mean, the field of AI has been around ~70 years, and it looks to me like we're more than halfway through the route to AGI. So even if we got full life extension today, it wouldn't have that much impact for that many people.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T21:22:09.228Z · LW(p) · GW(p)
Well, about 55 million people die per year, most of them from aging, so solving it for everyone today rather than, say, 50-60 years later with AGI would save 2-3 billion potentially indefinitely long lives. That definitely counts as "much impact for many people" in my book.
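Spelled out (a back-of-the-envelope sketch using the comment's own round numbers, not a demographic model):

```python
# Back-of-the-envelope version of the figure above.
deaths_per_year = 55e6                 # ~55 million deaths per year worldwide
for years_until_agi in (50, 60):       # the hypothetical delay discussed above
    total = deaths_per_year * years_until_agi
    print(f"{years_until_agi} years -> ~{total / 1e9:.1f} billion deaths")
# Prints ~2.8 and ~3.3 billion; with "most" of those deaths being from
# aging, that's the 2-3 billion lives mentioned above.
```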
But also, what's the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it's a hotly debated topic, so I'm asking for your personal best estimate.
↑ comment by Ben Pace (Benito) · 2022-02-12T09:26:05.139Z · LW(p) · GW(p)
Sure, it's a lot compared to most activities, but it's not a lot compared to the total people who could live in the future lightcone. You have to be clear what you're comparing to when you say something is large.
My estimate? Oh I dunno. The future is hard to predict, and crazy shit happens by default. But currently I'd be more surprised if it didn't happen than if it did. So more than 50%, for 50 years. Also more than 50% for 30 years. My guess is there's a lot of very scalable and valuable products to be made with ML, which will put all the smart people and smart money in the world into improving ML, which is a very powerful force. Shrug. I'd have to think more to try to pin it down more.
↑ comment by Matthew Barnett (matthew-barnett) · 2022-02-10T18:16:52.567Z · LW(p) · GW(p)
I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly. Put another way, I think it makes sense to focus on preventing existential risks, but not as a means of preventing one's own death.
One optimistic explanation is that rationalists care more about AI risk because it's an altruistic pursuit. That's one possible way of answering OP's question.
↑ comment by TurnTrout · 2022-02-10T22:04:23.139Z · LW(p) · GW(p)
- I decide both my actions and, to varying extents, the actions of people like me.
On a gut level, I also refuse to live in a world where people like me do nothing about AI risk for your reason of low expected individual impact, because that feels cowardly. (TBC this is a rebuke of that reason, not of you)
- A high enough P(death from AI) screens off the benefits of many other interventions. If I thought myself 90% likely to die to AI before age 50, then I wouldn't care much about living to 90 instead of 80.
↑ comment by Vaniver · 2022-02-12T07:08:59.801Z · LW(p) · GW(p)
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly.
I think this depends a lot on 1) time discounting and 2) whether you think there will be anything like impact certificates / rewards for helping in the future. That is, it may be the case that increasing the chance of a positive singularity by one in a million is worth more than your natural lifespan in EV terms (while, of course, mattering very little under most discount rates). And if you think the existence of Earth is currently worth something like 2 quadrillion dollars (annual world GDP * 20), and you can increase the probability of survival by a millionth, and you'll be compensated something like a thousandth of the value you provided, then you're looking at $2M in present value.
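Spelling out that arithmetic (every input is the comment's own rough assumption, not an established figure):

```python
# The back-of-the-envelope EV calculation above, spelled out.
gross_world_product = 1e14               # ~$100 trillion per year, roughly
earth_value = 20 * gross_world_product   # the "GDP * 20" valuation -> $2 quadrillion
delta_p_survival = 1e-6                  # you raise P(positive singularity) by a millionth
compensation_share = 1e-3                # you're paid ~a thousandth of the value provided

present_value = earth_value * delta_p_survival * compensation_share
print(f"${present_value:,.0f}")          # -> $2,000,000
```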
↑ comment by Martin Randall (martin-randall) · 2022-02-15T03:10:32.335Z · LW(p) · GW(p)
I care about longevity; I donate to longevity research institutions. I also try to live healthily.
That said, I'm also in my early 30s. I just took an actuarial table and my rough probability distribution of when I expect transformative AI to be possible and calculated my probability of dying vs. my probability of seeing transformative AI, and ended up with 23% and 77%. So, like, even if I'm totally selfish, on my beliefs it seems three times more important to do something about the Singularity than all-cause mortality.
This is less true the older someone is, of course.
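A minimal sketch of that kind of calculation, with toy placeholder inputs (a Gompertz-style hazard standing in for a real actuarial table, and a flat made-up TAI forecast; the commenter's actual inputs aren't public):

```python
# Race between mortality and transformative AI (TAI), toy version.
def annual_death_prob(age):
    # Toy Gompertz-like hazard; a real calculation would use a life table.
    return min(1.0, 0.0005 * 1.09 ** (age - 30))

# Unconditional P(TAI arrives in year t): flat over 2030-2079, sums to 1.
p_tai = {year: 1 / 50 for year in range(2030, 2080)}

p_alive, p_see_tai, age = 1.0, 0.0, 32   # "early 30s"
for year in range(2022, 2122):
    p_see_tai += p_alive * p_tai.get(year, 0.0)  # alive when TAI arrives
    p_alive *= 1 - annual_death_prob(age)
    age += 1

print(f"P(see TAI) ~ {p_see_tai:.0%}; P(die first) ~ {1 - p_see_tai:.0%}")
```

With these toy inputs the split comes out in the same ballpark as the 77/23 figure above, but the point is the structure of the calculation, not the numbers.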
↑ comment by Adam Zerner (adamzerner) · 2022-02-13T01:39:47.461Z · LW(p) · GW(p)
Maybe I am misreading this, but when they say "using the mortality rates for 2019", I think they are assuming that there won't be increases in life expectancy. Like, we're currently observing that people born in the 1930s live ~80 years, and so we assume that people born in, e.g., the 1980s will also live ~80 years. But that seems like [LW · GW] a very bad assumption to me.
Speculation here, but if we grant your premise, then the answer to your question might be something like:
Rationalists largely come from engineering backgrounds. Rightly or wrongly, AI is mostly framed in an engineering context and mortality is mostly framed in the context of biologists and medical doctors.
That being said, I think it's really important to suss out whether the premise of your question is correct. If it is, and if the signals we are getting about AI risk organizations having almost too much cash are right, then we should be directing some portion of our funding to organizations like SENS instead of AI risk.
There are plenty of people who have AGI timelines that suggest to them that either AGI will kill them before they reach their natural mortality or AGI will be powerful enough to prevent their natural mortality by that point.
Even without having direct access to AGI new machine learning advances for protein folding and protein design might be more central to longevity than the research that's billed as longevity research.
That said, I do agree that anti-aging is an important topic. One problem is that people who set out to fight it often seem to be searching for the key under the streetlight.
The SENS paradigm seems insular to me. I don't have a charitable explanation of why fascia getting tenser as people age isn't on their list of aging damage.
↑ comment by [deleted] · 2022-02-11T17:18:52.500Z · LW(p) · GW(p)
Attributing magical capabilities to AGI seems to be a common cognitive failure mode :( is there not some way we can encourage people to be more grounded in their expectations?
↑ comment by Matthew Barnett (matthew-barnett) · 2022-02-11T17:41:54.162Z · LW(p) · GW(p)
AI need not be magical for its development to have a profound effect on the progress of science and technology. It is worth understanding the mechanisms that some people have proposed. Here's a blog post series that explains one potential route.
↑ comment by [deleted] · 2022-02-12T06:41:19.955Z · LW(p) · GW(p)
Those posts are a prime example of the magical thinking I’m talking about: the assumption that scaling real world processes is like Factorio. That kind of seamless scaling is only approached in the highly controlled world of software, and even then any software engineer worth their salt can tell you just how unreliable immature automation can be. The real world is extremely messy, stochastic, and disordered, and doesn’t map well into the type of problems that recent advances in AI have been good at solving.
We may soon get to the point where an AGi is able to construct a monumental plan for developing nanotech capabilities… only for that plan to not survive its first contact with the real world. At best we can hope for AI assistants helping to offload certain portions of the R&D effort, like we are currently seeing with AlphaFold. However the problem domains where AI can be effective in finding such useful models are limited. And while I can think of some other areas that would benefit from the same AlphaFold treatment (better molecular dynamics codes, for example), it’s not the kind of stuff that would lead to revolutionary super-exponential advances. The singletarian thinking which pervades the AI x-risk crowd just isn’t reflective of practical reality.
AI development increases the rate of technological advancement by constant factors. That is good and valuable. But right now the rate at which molecular nanotechnology or longevity are being advanced is effectively nil, for reasons that have nothing to do with the technical capabilities AI would add. So there is a strong argument to be made that attacking these problems head on - like how Elon Musk attacked electric cars and space launch capability - would have more of an impact than the meta-level work on AI.
↑ comment by ChristianKl · 2022-02-12T23:02:27.198Z · LW(p) · GW(p)
The real world is extremely messy, stochastic, and disordered, and doesn’t map well into the type of problems that recent advances in AI have been good at solving.
The recent advances in AI have not produced AGIs.
AlphaFold is essentially a tool. It's not a replacement for current scientists in the way that an AGI much smarter than current scientists would be.
↑ comment by [deleted] · 2022-02-13T00:53:53.700Z · LW(p) · GW(p)
You misunderstood my intent of that statement. I was saying that AGI wouldn't be smarter or more capable than the current scientists in solving these particular problems for a very long time, even if architecturally it is able to attack the same problems more efficiently. It's not enough of a constrained problem that a computer running in a box is able to replace the role of humans, not at least until it has human-level effectors to allow it to embody itself in the real world.
AGI wouldn't be categorically different from present-day AI. It's just an AI for writing AIs (hence, "general"), but the AIs it writes are still constrained in much the same way as the AIs we write today. If there is some reason to believe this wouldn't be the case, it is so far unstated.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:26:24.679Z · LW(p) · GW(p)
There are plenty of people who have AGI timelines that suggest to them that either AGI will kill them before they reach their natural mortality or AGI will be powerful enough to prevent their natural mortality by that point.
True, but there are also plenty of people who think otherwise - other comments here being an example.
I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on the list of covid transmission vectors. And in any case, SENS is far from being the only organization, there's many others with different approaches and focus areas, probably one of them covers fascia even if SENS doesn't.
↑ comment by ChristianKl · 2022-02-16T11:04:03.270Z · LW(p) · GW(p)
I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about.
I don't think there's a good reason to make that assumption. There are various factors that lead to fascia getting tense. Substances like fibrin keep the fascia contracted and don't get automatically cleared.
SENS is certainly not the only organization and there are plenty of people who don't believe that aging is as easy as just curing the hallmarks.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T16:16:42.183Z · LW(p) · GW(p)
I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have a negative effect on ~everything. But more importantly, I don't think any significant number of people are dying from fascia stiffness. That's one of the main ideas behind the hallmarks of aging: you don't have to solve the entire problem, in its every minuscule aspect, at once. If you could just forestall all of these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or develop completely new approaches like mind uploading or regenerative medicine or whatever else).
I think many MANY smart people realize something is very wrong. There's been a LOT written about it, including much of the early LessWrong content.
The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild. There is a crisis coming in my own death, but I don't see much to do about it.
to notice something is very very wrong and take action.
...
understand this problem is solvable in principle
I do NOT think that the "and take action" part is trivial, nor that the problem is solvable in principle, certainly not with much likelihood of impacting current rationalists' lives.
In terms of "what can I do to increase the area under the curve of probability-weighted happiness and longevity", working on nearer-term issues has much higher expected value, IMO.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T02:59:45.652Z · LW(p) · GW(p)
The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.
Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?
There are at least two currently ongoing clinical trials with the explicit goal of slowing aging in humans (TAME and PEARL); that's just the most salient example. At some point I'll definitely make a post with a detailed answer to the question of "what can I do". As for the problem not being solvable in principle, I don't believe I've ever seen an argument for this that didn't involve a horrendous strawman or quasi-religion of some sort.
↑ comment by Dagon · 2022-02-16T03:14:33.490Z · LW(p) · GW(p)
I care about my friends and loved ones. I even care about strangers. I'm a fan of life extension research. But I'm not dedicating much of my resources to it - in the big picture, one human's about as good as another, and in the small picture I don't expect to have much chance of success, and don't want to reduce my enjoyment of my remaining time on a crazy longshot.
I have to say that neither of those trials looks particularly promising on the "ending aging" front. They may slightly delay some problems (and that's GREAT - living longer is, in fact, better), but that's not anywhere near solving it in principle. Mind uploading might be a solution eventually, but I think it's more likely that bio-brains continue dying and the immortal are digital from birth.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:54:29.451Z · LW(p) · GW(p)
but that's not anywhere near solving it in principle
Of course they're not; that's not the point. The point is that they can add more time for us to discover more cures - on top of the few decades most rationalists already have, considering the age distribution. During that time new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.
but I think it's more likely for bio-brains to continue dying and the immortal are digital from birth
Why would you think that?
And another question. Imagine you've found yourself with an incurable disease and 3 years to live. Moreover, it's infectious and it has infected everyone you love. Would you try experimental cures and encourage them to try as well, or would you just give up so as not to reduce your enjoyment of the remaining time?
↑ comment by Dagon · 2022-02-16T04:20:29.871Z · LW(p) · GW(p)
Imagine you've found yourself with an incurable disease and 3 years to live.
This is an obvious and common enough analogy that you don't need to frame it as a thought experiment. I understand that I have an incurable disease. It's longer than 3 years, I hope, but not by much more than an order of magnitude, certainly nowhere near 2. I'm not even doing everything I could in terms of lifestyle, exercise, and nutrition to extend it, let alone "experimental" cures. It's not infectious, fortunately - everyone already has it.
Friends I've lost to disease, accident, or suicide ALSO didn't universally commit to "experimental cures" - in all cases I know of, the costs of the long shots (non-monetary costs of side effects more than pure money, but some of that too) outweighed their perceived chance of success.
As Pascal's Wager options go, giving up significant resources or happiness over the next decade for a VERY TINY chance of living longer, seems to be among the less compelling formulations.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T05:26:09.502Z · LW(p) · GW(p)
Equating high-risk/high-reward strategies with Pascal's Wager is an all-too-common failure mode, and putting numbers on your estimates helps avoid it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years?
To be clear, I'm not so much trying to convince you personally, as to get a generally better sense of the inferential distances involved.
↑ comment by Dagon · 2022-02-16T05:59:42.912Z · LW(p) · GW(p)
I'd actually like to be convinced, but I suspect our priors differ by enough that it's unlikely. I currently assign less than a 0.05% probability to living another 50 years (which would put me over 100), and a probability three orders of magnitude smaller to living to 300. These are small enough that I don't have as much precision in my beliefs as that implies, of course.
Conditional on significant lifestyle changes, I can probably raise those chances by 10x, from vanishingly unlikely to ... vanishingly unlikely. Conditional on more money than I'm likely to have (which is already in the top few percent of humanity), maybe another 3x.
I don't believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T15:52:37.983Z · LW(p) · GW(p)
That's, like, a 99.95% probability - one-in-two-thousand odds. You'd have two orders of magnitude higher chances of survival if you were to literally shoot yourself with a literal gun. I'm not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future - definitely not that every single one of dozens of attempts in a technology you're not an expert in fails, and that every single one of hundreds of attempts in another technology you're not an expert in (building aligned AGI) fails too.
I don't believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.
I don't believe it either - it's a thought experiment. I assumed that'd be obvious, since it's a very common technique for estimating how much one should value low probabilities.
The anti-aging field is going great as far as I can see, with billion-dollar investments happening regularly; clinical trials are ongoing, and the field as a whole has started to attract the attention it deserves. I think rationalists are not especially worried because they (or rather, I do) believe that the problem is already well on its way to being solved. If we don't all die from misaligned AI / nuclear war / a biological weapon in the next 20 years, I don't think we'll have to worry about aging too much.
↑ comment by [deleted] · 2022-02-10T07:46:35.127Z · LW(p) · GW(p)
I wish this was the case. However those large scale investments you speak of are mostly being put into things which address the symptoms of growing old, but not the underlying causes. There are very, very few researchers working on permanently ending aging or at least full rejuvenation, and they are chronically underfunded.
↑ comment by Matthew Barnett (matthew-barnett) · 2022-02-10T18:10:46.665Z · LW(p) · GW(p)
I agree that the amount of funding that goes into explicitly anti-aging research is often greatly exaggerated. That said, as you may have heard, Altos Labs recently got started, and rumors indicate that it's being well funded by Jeff Bezos and maybe a few others. My general impression is that anti-aging researchers think this is a big deal.
Karl Pfleger has tried to catalog companies that are trying to address aspects of aging, and his list is quite long, possibly a great deal longer than you might expect. Biological research in fields related to aging, especially stem cell research and cancer research, is not underfunded (at least, in my estimation).
↑ comment by [deleted] · 2022-02-11T11:08:18.725Z · LW(p) · GW(p)
It is a big deal. It is also not as big a deal as work towards full rejuvenation would be. Altos Labs, like Calico and others before it, is attempting to cure diseases of aging. They are not, to my knowledge, attempting to achieve full rejuvenation that would prevent age-related disease by means of eternally maintained youth.
It is, in principle, easier to prevent cancer than to cure it. And the strategies you would use for each are different. There aren't many people outside of SENS who are working on the rejuvenation-as-prevention approach.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T04:33:36.483Z · LW(p) · GW(p)
Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?
And I totally see how it kinda makes sense from a distance, because it's what the most vocal figures of the anti-aging community often claim. The problem is that this was also the case 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with, e.g., the revolution in ML over the same period. And ironically, if you talk to the rank-and-file folks in the longevity community, many of them are stoked about AGI coming and saving us all from death, because they see it as the only hope for aging to be solved within their lifetime. It is certainly possible that we solve aging in the next 20 years, but it's nowhere near guaranteed, and my personal estimate of this happening (without aligned AGI's help) is well below 50%. Are you saying your estimate of it happening soon enough is close to 100%?
I also wouldn't call billion-dollar investments common - the only example I can think of is Altos Labs, and it's recent, and so far nobody seems to know wtf exactly they're doing. And AI safety also has billion-dollar-range players, namely OpenAI.
Most importantly, throwing more money at the problem isn't the only possible approach. Consider how, early in the COVID pandemic, a lot of effort was put into figuring out exactly what the right strategy was on the individual level. Due to various problems, longevity advice suffers from similar levels of uncertainty: there's a huge amount of data gathered, but it's all confusing and contradictory, the models are very incomplete, there are various sources of bias, etc. - and it's a hugely important problem to get right for ~everyone. Sounds like a perfect use case for the methods of rationality to me, yet there's very little effort in this direction - nothing to compare with COVID, which is nowhere near as lethal! And just like with COVID, even if someone is young and optimistic enough to be confident they'll be able to jump on the LEV train, almost everyone has friends or loved ones who are much older.
Mortality is a very old problem, and lots of smart people have spent lots of time thinking about it. Perhaps the best intervention anyone has come up with is harm reduction via acceptance. That's the approach I'm taking personally. Denial is popular, but isn't very rationalist and seems to lead to more overall suffering.
I'm not working on promoting this approach, because it's literally thousands of years old and because that's not a good personal fit. But I support and respect people who do.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T02:33:26.552Z · LW(p) · GW(p)
Smallpox is also a very old problem, and lots of smart people spent lots of time thinking about it - until they figured out a way to fix it. In theory, you could make an argument that no viable approaches exist today or in the foreseeable future, and so harm reduction is the best strategy (from a purely selfish standpoint; working on the problem would still help the people of the future in this scenario). However, I don't think that in practice it would be a very strong argument, and in any case you are not making it.
If you're, say, 60+, then yes, anti-aging is not a realistic option and all you have is cryonics - but most of the people in the community are well below 60. And even for a 60+ year old, I'd say that using the best currently available interventions to get cryopreserved a few years later, and thus have a slightly higher chance of reanimation, would be a high priority.
↑ comment by Martin Randall (martin-randall) · 2022-02-17T03:55:55.975Z · LW(p) · GW(p)
Yes, there are a number of interventions available that could delay death by a few years. For example, my copy of "Ageless: The New Science of Getting Older Without Getting Old", which is almost a year old, ends with a short list:
- Don't smoke
- Don't eat too much
- Get some exercise
- Get seven to eight hours of sleep
- Get vaccinated and wash your hands
- Take care of your teeth
- Wear sunscreen
- Monitor your heart rate and blood pressure
- Don't bother with supplements
- Don't bother with longevity drugs yet
- Be a woman.
Do these count? When you say "relatively few people [are] doing the available interventions on the personal level" are these the interventions you're talking about?
↑ comment by SurvivalBias (alex_lw) · 2022-02-22T01:41:17.993Z · LW(p) · GW(p)
Yes and no. 1-6 are obviously necessary but not sufficient - there's much more to diet and exercise than "not too much" and "some", respectively. 7 and 8 are kinda minor and of dubious utility except in some narrow circumstances, so whatever. And 9 and 10 are hotly debated, and that's exactly what you'd need rationality for, as well as for figuring out the right pattern of diet and exercise. And I mean right for each individual person, not in general - and the same with supplements: a 60-year-old should have a much higher tolerance for the potential risks of a longevity treatment than a 25-year-old, since the latter has less to gain and more to lose.
I'm not sure everyone thinks death is bad. I mean, it's been a "feature" of being human since before there were humans and it has worked quite well so far to have a process of death. Messing with a working system is always a dangerous proposition, so I, personally, wonder if it is wise to remove that feature. Therefore, I do nothing about it (maybe I should be more active in opposition? I don't know).
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T15:41:32.031Z · LW(p) · GW(p)
Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...
↑ comment by Martin Randall (martin-randall) · 2022-02-15T02:26:55.970Z · LW(p) · GW(p)
So many answers here. For example: maybe without death, reproduction rates fall off a cliff, society ages, culture shifts from "explore" to "exploit", we never leave Earth, we waste the vast majority of our potential as a species. Later, our sun dies, everyone survives the experience, we realize that we're in a philosophical hypothetical and the thought experiment ends in bathos.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:40:28.869Z · LW(p) · GW(p)
Oh no, what if I and everyone I care about only got to live 5 billion years instead of 80. And all that only to find out it was a half-assed hypothetical.
↑ comment by Martin Randall (martin-randall) · 2022-02-17T03:08:45.869Z · LW(p) · GW(p)
I would prefer to have this conversation without the sarcasm. Maybe I encouraged it with my "half-assed hypothetical". If so, please consider this an attempt to reset the tone.
Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...
I read this as a claim that it is impossible for the elimination of death, aging, or mortality to be dangerous because it can only decrease the danger of dying. I replied by pointing out that there are other dangers, such as the danger of astronomical waste [? · GW]. Another danger is suffering risk [? · GW]. The story in Surface Detail points in that direction.
If I misread you then you were probably saying something I agree with.
Oh no, what if I and everyone I care about only got to live 5 billion years instead of 80.
I read this as a statement that you aren't concerned about astronomical waste. That's a completely reasonable response, many philosophers agree with you.
40 comments
Comments sorted by top scores.
comment by Dustin · 2022-02-10T01:38:05.224Z · LW(p) · GW(p)
I'm skeptical of the premise of the question.
I do not think your stated basis for believing that rationalists are not concerned with mortality is sufficient to establish that it's true.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T02:19:51.641Z · LW(p) · GW(p)
I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with a focus on life extension (not counting cryonics, it was last Sunday)? Any major figures in the community focused on this area?
To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for present-day community members to die, with the exact proportion between them depending on one's age and AI timeline estimates, I would expect a roughly comparable level of attention, and that is very much not what I observe. Am I looking in the wrong places?
↑ comment by Elizabeth (pktechgirl) · 2022-02-10T04:15:26.477Z · LW(p) · GW(p)
Tags on LW: Longevity [? · GW], Aging [? · GW]
The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent. At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly.
It is not AI-level attention, but it is much more than is given to Ukraine.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T05:05:21.399Z · LW(p) · GW(p)
I agree, Ukraine was an exaggeration. I checked the tags and grants before asking the question, and am well aware of SENS, but I never thought of or heard of it being adjacent - is it? I didn't know of the three defunct institutions either, so I should raise my estimate somewhat.
↑ comment by Dustin · 2022-02-10T02:44:13.028Z · LW(p) · GW(p)
I'm not arguing that you're wrong; I'm just saying that you seem to have assumed it was true without really setting out to prove it or line up convincing evidence. It just struck me that you seemed to be asking "why" before answering "if".
I'm also not sure that the answers to your questions in this comment are as necessarily revealing as they might seem at first glance. For example, more of the low hanging fruit might be picked WRT mortality...not as much to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff, which gets discussed plenty.
It might be that you're right but if I were you I'd like to determine that first.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T04:47:47.361Z · LW(p) · GW(p)
I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". I've edited the question to make this clearer. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples (as Elizabeth, and in a certain stretched sense Ben Pace, did).
>low hanging fruit might be picked WRT mortality
I'm doubtful, but I can certainly see a strong argument for this! However, my point is that, like with existential risks, it is a serious enough problem that it's worth focusing on even after the low-hanging fruit has been picked.
>Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.
↑ comment by Dustin · 2022-02-10T16:30:25.215Z · LW(p) · GW(p)
I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples
Well, I'm not arguing in bad faith. In fact, I'm almost not arguing at all! If your premise is correct, I think it's a very good question to ask!
To the extent I am arguing, it's with the assumption behind the premise. To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk. At least not so readily apparent that it can just be glossed over.
I'm doubtful, but I can certainly see a strong argument for this!
To be clear, here I'm not actually making the low-hanging-fruit argument. I'm just pointing out one of the things that came to mind that makes your premise not so readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community is thinking, or has ever thought, about AI risk. Most people probably don't even acknowledge that AI risk is a thing. Mortality is thought about by everyone, forever. It's almost as if mortality-risk concern is a different reference class than AI-risk concern.
I think if you were to summarize my objection to just glossing over the premise of your question, it's that the relative amounts of rationalist activity surrounding mortality and AI risk are, to me, not sufficiently indicative of concern for you to just gloss over the basis for your question. If you are correct, I think it's very important - but it's not obvious to me that you are correct. And if you are correct, I think it's really important to make that argument rather than glossing it over.
I spend maybe 2 minutes per day ensuring my doors are locked and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc. I don't think that means I'm less concerned about the physical security of my home relative to my physical appearance!
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.
Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.
Anyway, I also think it's likely that the questions I'd want answered are so adjacent to the question you want answered that a good answer to any of them will largely answer all of them.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T21:07:31.009Z · LW(p) · GW(p)
Mortality is thought about by everyone, forever.
Technically probably yes, but the specific position of "this is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures maybe kinda tentatively flirting with it. So to me these are two really very comparable positions: very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. Maybe that's why I sound a bit frustrated or negative - because it feels like the people who clearly should be able to reach this conclusion, for some reason don't. And that's why I'm basically asking this question: to understand why they don't, or what I'm missing, or whatever else is going on.
By the way, can you clarify what's your take on the premise of the question? I'm still not sure whether you think:
- Rationalists are paying comparatively little attention to mortality and it is justified
- Rationalists are paying comparatively little attention to mortality and it is not justified
- Rationalists are paying a comparatively large amount of attention to mortality and I'm just not looking in the right places
- Something else
Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.
Ok, in that case the akrasia etc. debates are very relevant. But even so, not everybody knows [LW · GW]. Maybe the facts themselves - that you should exercise and watch what you eat - are relatively uncontroversial (although I still remember the dark days when EY himself was arguing on Facebook that "calories in / calories out" is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for lack of data but for lack of interpretation - i.e., something we could well try to do on LessWrong. So it'd be cool to see more posts e.g. like this [LW · GW].
↑ comment by Dustin · 2022-02-10T21:55:45.695Z · LW(p) · GW(p)
By the way, can you clarify what's your take on the premise of the question?
I lean towards "little attention and it is not justified", but I'm really just feeling around in the dark here... and thus my bit of frustration at jumping right past the step of determining whether this is actually the case.
I can imagine plausible arguments for each of the options you give (and more) and I'm not entirely convinced by any of them.
↑ comment by [deleted] · 2022-02-10T07:52:36.542Z · LW(p) · GW(p)
Are you aware of SENS? There is massive overlap between them and the rationality community here in the Bay Area. They are, however, surprisingly underfunded and receive relatively little attention on sites like this compared with, say, AI alignment. So I see your point.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T17:38:51.856Z · LW(p) · GW(p)
I'm well aware, but this comment section is the first time I've heard there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?
↑ comment by [deleted] · 2022-02-11T11:09:38.874Z · LW(p) · GW(p)
It is one of the most common charities donated to by effective altruists here. But what I'm also saying is that many of the people working at SENS have had some level of exposure to the less wrong / rationalist community.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:27:45.018Z · LW(p) · GW(p)
Hmm that's interesting, I need to find those people.
comment by Shmi (shminux) · 2022-02-10T08:29:43.185Z · LW(p) · GW(p)
Eternal youth is a tempting goal, and I hate hate hate getting old and eventually dying probably more than anything, but... there is almost nothing I can do about it personally, and in my estimation the chance of any meaningful progress in the next couple of decades (i.e. reaching anything close to escape velocity) is negligible. Cryonics is a hail-Mary option, and I am not sure it's worth spending a sizable chunk of my savings (or income) on. The evaluation of the situation might be similar for others. So what may look like "not being concerned" is in reality giving up on a hopeless if tempting cause.
↑ comment by [deleted] · 2022-02-10T11:22:18.087Z · LW(p) · GW(p)
I find this viewpoint at odds with the evidence. People who are really attacking this issue, like the SENS research foundation, seem to think that longevity escape velocity is achievable within our lifetimes.
Robert Freitas, who knows more than anyone else alive about the medical applications of nanotechnology, believes that our limitations are due to tooling, and that if we had atomically precise manufacturing then all diseases of the body (including aging) would be trivial to solve. He and his partner Ralph Merkle believe that APM could be achieved in 10 years' time with proper funding.
Ray Kurzweil, for all his faults, plots some pretty accurate graphs. Those graphs show us achieving the necessary process technology to manipulate matter at the sub-nanometer scale within 20 years, max.
Are you pushing 80 years old? That's the only reason I can imagine you'd think this is beyond your lifetime. Both the SENS and nanotech approaches are constrained by lack of resources, including people working on the problem. This is an area where you could make a difference, if you put in a lot of effort.
↑ comment by Shmi (shminux) · 2022-02-10T15:42:47.389Z · LW(p) · GW(p)
I've briefly looked into SENS, and it comes across as cultish and not very credible. Nanotech would be neat, but getting it working and usable as nanobot swarms in the human body without extreme adverse effects seems achievable, but on a timeline of half a century or so. Kurzweil has not had a great track record in forecasting. I think the best chance of extending the lifespan of someone alive today until the aging kinks are worked out is figuring out hibernation: slowing down metabolism 10-20 times and keeping the body in the fridge. But I don't see anyone working on that, though there is some discussion of it in the context of months-long interplanetary travel.
↑ comment by [deleted] · 2022-02-13T08:41:30.569Z · LW(p) · GW(p)
Kurzweil is completely inept at making predictions from his graphs. He is usually quite wrong in a very naive way. For example, one of his core predictions of when we will achieve human-level AI is based on (IIRC) nothing more than when a computer with a number of transistors equal to the number of neurons in the human brain could be bought off the shelf for $1000. As if that line in the sand had anything at all to do with making AGI.
But his exponential chart about transistors/$ is simply raw data, and the extrapolation is a straightforward prediction that has held true. He has another chart on the topic of manipulatable feature sizes using various approaches, and that also shows convergence on nanometer-resolution in the 2035-2045 timeframe. I trust this in the same way that I trust his charts about Moore's law: it's not a law of nature, but I wouldn't bet against it either.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T15:48:56.204Z · LW(p) · GW(p)
Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.
With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years, depending on how right you get it, your age, health, and other factors. For another, even if working on the cause, donating to it, or advocating for it won't help you yourself, it can still help many people you know and love, not to mention everyone else. Finally, the whole point of epistemic rationality (arguably) is to work correctly with probabilities. How certain are you that there will be no LEV in 20 years? If there's a 10% chance, isn't it worth giving it a try and increasing it a bit? If you're ~100% certain, where do you get this information?
comment by Adam Zerner (adamzerner) · 2022-02-13T01:43:01.854Z · LW(p) · GW(p)
This seems like a good time to shamelessly plug a post I wrote: How much should we value life? [LW · GW]. I'd love to hear anything that people think or have to say about it.
comment by superads91 · 2022-02-10T02:13:41.840Z · LW(p) · GW(p)
People can make scientific progress over their lifetimes far more easily than ethical progress. Ethical progress is much more dependent on newer generations.
So, imagine a world with the technology of today, but with the people of the year 1200 instead.
And I could give you a hundred more reasons why death is a necessary evil. Immortality = too much power for a flawed mortal. Imagine if Genghis Khan had been immortal.
When one realizes how far life is from the rosy picture that is often painted, one has a much easier time accepting death, even while still fearing it or still wanting to live as long as possible.
↑ comment by Richard_Kennaway · 2022-02-10T17:46:01.478Z · LW(p) · GW(p)
Personally, I've been hearing all my life about the Serious Philosophical Issues posed by life extension, and my attitude has always been that I'm willing to grapple with those issues for as many centuries as it takes.
-- Patrick Nielsen Hayden
https://www.lesswrong.com/posts/TZsXNaJwETWvJPLCE/rationality-quotes-august-2010?commentId=nboCCze5EjRxwHYzn [LW(p) · GW(p)]
↑ comment by Radford Neal · 2022-02-10T04:19:34.846Z · LW(p) · GW(p)
But I don't think we are discussing immortality - just stopping aging. Stopping aging won't prevent someone from killing the next Genghis Khan.
↑ comment by superads91 · 2022-02-10T05:26:04.669Z · LW(p) · GW(p)
You didn't specify that in your post.
-
Stopping aging would also have an effect on what I am saying. You would have a stagnant population, so, again, imagine the technology of today with the population of the year 1200.
-
Even stopping aging creates power imbalances. Who tells you that we would be able to kill Genghis Khan? Or what if we only succeeded after 10,000 years?
-
Today we're discussing stopping aging, tomorrow we'll be discussing physical invulnerability.
(Lol the amount of dislikes on my first comment... It's funny how much people in these circles don't wanna die, never stopping to consider at least 1 bad consequence of that.)
↑ comment by Richard_Kennaway · 2022-02-10T18:10:40.439Z · LW(p) · GW(p)
Death strikes down the good and the bad alike. For every monster stopped, a saint also. Or perhaps it is ten monsters for every saint. Or ten saints for every monster. Who can say? Whence your assurance that it is better to slaughter a billion people every decade than risk the evil that they might do?
BTW, while accounts differ over how Genghis Khan died, he was in his late 60s. Though 67 is not 37, he did not die of old age, and his empire outlived him anyway.
↑ comment by Dustin · 2022-02-10T16:40:11.408Z · LW(p) · GW(p)
>It's funny how much people in these circles don't wanna die, never stopping to consider at least 1 bad consequence of that.
Is that really the only reason you can think of that you got downvoted?
Your comment didn't say there's at least one downside to life extension. It said the downsides outweigh the upsides and it did not make a well-argued case that that was so.
edit: Instead of "well-argued" I originally had the word "convincing". I changed it because a comment does not have to be convincing or correct to get upvotes (or at least to not get downvotes). In fact, I predict that a comment or post that summarized a lot of downsides people should consider when forming their opinions about life extension in a non-preachy "you guys are so wrong about this" sort of way would be highly upvoted.
↑ comment by Richard_Kennaway · 2022-02-10T17:36:29.857Z · LW(p) · GW(p)
>So, imagine a world with the technology of today, but with the people of the year 1200 instead.
Ah, the ancient Greeks getting Science right, and starting a scientific and industrial revolution more than 2000 years ago!
Yes please!
↑ comment by superads91 · 2022-02-15T00:49:48.853Z · LW(p) · GW(p)
"Ah, the ancient Greeks getting Science right, and starting a scientific and industrial revolution more than 2000 years ago!
Yes please!"
Ah, gladiators living thousand-year lifespans with injury-related chronic pain, having taken millions of heads and faced millions of moments of the most extreme torment, yes please!
Being a slave for 50,000 years, yes please!
But of course, this wouldn't happen, right? We all know that life is simple, just office work, eating McDonald's and watching TV.
Or even better, those Greeks then would just naturally "become good people" (because we all know how easyyyyyy it is for people to change morally and/or to give up on their power) and just create utopia for all anyway. Right?
I'll say it again: only such unrealistic times could ever produce such unrealistic thoughts.
↑ comment by Richard_Kennaway · 2022-02-15T10:14:14.266Z · LW(p) · GW(p)
>Ah, gladiators living thousand-year lifespans with injury-related chronic pain, having taken millions of heads and faced millions of moments of the most extreme torment, yes please!
You think that a scientific and industrial revolution would leave all of society fixed in amber?
↑ comment by superads91 · 2022-02-15T19:55:55.502Z · LW(p) · GW(p)
"You think that a scientific and industrial revolution would leave all of society fixed in amber?"
In moral terms, mostly. Unless you're naive enough to think that people easily change morally or easily give up their power. Which you definitely are, I get it.
In fact, you don't even need to travel to ancient Greece to see how horrific your dream would be. Just go anywhere outside our modern Western bubble of comfort and semi-decency, really. There's still plenty of slavery and concentration camps to choose from.
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T03:32:59.997Z · LW(p) · GW(p)
Just a reminder: in this argument we are not the modern people who get to feel all moral and righteous about themselves; we are the Greeks. Do you really want to die for some hypothetical moral improvement of future generations? If so, go ahead and be my guest, but I myself would very much rather not.
↑ comment by superads91 · 2022-02-16T06:56:19.870Z · LW(p) · GW(p)
Like the popular saying goes, you either die a hero or live long enough to become a villain. We are flawed beings, and unfortunately (yes, unfortunately - I would like to live forever as well, or at least my present self would; I'm pretty sure that after a couple of centuries I'd have gone insane even with all the memory-editing and cell-rejuvenating tech you can imagine, though maybe that would extend it to a few millennia) death is a necessary balancer of power.
So, no, I don't want to die for future generations, but I'd better do so someday. Personality needs coherence; that's why we're averse to change (some more, some less). That's why new beings are important to keep the power balance, if there is even any balance in this chaotic world.
One way to accept death is simply to think about how bad things could get beyond this current unusual normalcy (which won't last long). Cancer patients want to die. Slaves want to die. Imagine denying death to those least fortunate. That would be way worse than mortality. (And yes, you could probably cure cancer or any pain in a world with immortality, but the problem is the slaves - those denied the treatment; i.e., the problem is tyranny, which would be greatly amplified in a deathless world, and it is naive not to consider that.)
↑ comment by SurvivalBias (alex_lw) · 2022-02-16T16:01:14.902Z · LW(p) · GW(p)
You're fighting a strawman (nobody's going to deny death to anyone, and except for the seriously ill, most people who truly want to die now have an option to do so; I'm actually pro-euthanasia myself). And, once again, you want to inflict on literally everyone a fate you say you don't want for yourself. Also, I don't accept the premise that there's some innate power balance in the universe that we ought to uphold even at the cost of our lives; we do not inhabit a Marvel movie. And you're assuming knowledge you can't possibly have about exactly how human consciousness functions and what alterations to it we'll be able to make in the next centuries or millennia.
↑ comment by superads91 · 2022-02-16T19:56:54.540Z · LW(p) · GW(p)
"you're assuming the knowledge which you can't possibly have"
Naturally, I can't predict the future (unfortunately). But neither can you:
"nobody's going to deny death to anyone"
You're making just as many assumptions as I am. The only difference is that you want to spin the heaven/hell wheel of fortune (this is a metaphor), while I don't - at least not until we've had a hell of a lot more time to study it (aka no immortality in the foreseeable future).
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T05:59:50.672Z · LW(p) · GW(p)
>When one realizes how far life is from the rosy picture that is often painted, one has a much easier time accepting death, even while still fearing it or still wanting to live as long as possible.
Do you truly estimate your life as not worth or barely worth living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there are many people, myself included, who truly, genuinely love life and enjoy it.
If it's just a comforting lie you believe in believing to make the thought of death more tolerable - well, I can understand that, death really is terrifying - but then maybe consider not using it as an argument.
↑ comment by dashdashdot · 2022-02-10T07:52:34.043Z · LW(p) · GW(p)
I see this argument fairly often, but I don't think it's always fear of death that drives not wanting to live forever.
Can you tell me if there's something wrong with the following metaphor:
I immensely enjoy a mountain hiking trip but after a few weeks it needs to end because my body is aching and even the beauty of the mountains becomes mundane.
Isn't life somehow the same way?
Some burdens seem bearable only because they're temporary, and some beauty is only (or more) beautiful because it's fleeting.
(By the way, I would jump at the opportunity of an increased life span to, say, 200-300 years - 80 seems really short - but not indefinite extension.)
↑ comment by [deleted] · 2022-02-10T11:24:24.820Z · LW(p) · GW(p)
Maybe it's a mind projection fallacy? I can't relate to that at all. I never tire of doing something new, or trying again something I enjoyed before. And there is so much to do in the universe... I could spend millions of years and not run out of things to do.
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T15:52:43.796Z · LW(p) · GW(p)
>By the way, I would jump at the opportunity of an increased life span to, say, 200-300 years - 80 seems really short - but not indefinite extension.
Ok, that's honestly good enough for me. I say let's get there and then argue about whether we need more extension.
I'm no therapist, and not even good as a regular human being at talking about carrying burdens that make one eventually want to die; you should probably seek the advice of someone who can do a better job of it.
↑ comment by superads91 · 2022-02-10T17:08:38.886Z · LW(p) · GW(p)
"Do you truly estimate your life as not worth or barely worth living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there's many people, myself included, who truly genuinely love life and enjoy it."
Nah, I've been lucky myself. But this isn't about me or any individual - it's about life in general. I keep saying this: people today live in a rare modern oasis of comfort, which gives them these naive perspectives. Until they develop an excruciatingly painful chronic disease, at least (and I don't even need to mention the massive modern dark clouds over our heads that anyone in this forum should know about).
↑ comment by SurvivalBias (alex_lw) · 2022-02-10T17:36:27.136Z · LW(p) · GW(p)
So your argument is that people should die for their own good, despite what they themselves think about it? Probably not, since that would make you almost a caricature villain, but I don't see where else you're going with this. And the goal of "not developing an excruciatingly painful chronic disease" is not exactly at odds with the goal of combating aging.