Why are people unkeen on immortality that would come from technological advancements and/or AI?

post by Gabi QUENE · 2024-01-16T14:23:21.108Z · LW · GW · 3 comments

This is a question post.


Nearly every time I have talked about the mind-blowing possibilities of a friendly ASI with people who hadn't heard about it before, I have seen the same reaction: the person is skeptical and rejects the idea of immortality: "Humans are made to die", "What is the value of life then?", and sometimes even "I want to die".

Why does that happen? I can't understand their reaction.

If you are like them, let us know your arguments.

Answers

answer by Lichdar · 2024-01-17T18:30:42.620Z · LW(p) · GW(p)

I want to die so my biological children can replace me: there is something essentially beautiful about it all. It speaks to life and nature, both of which I have a great deal of esteem for.

That said, I don't mind life extension research, but anything that threatens to end all biological life, or to essentially kill a human and replace them with a shadowy undead digital copy, is not worth it.

As another has mentioned, a lot of our fundamental values come from the opportunities and limitations of biology: fundamentally losing that eventually leads to a world without life, love or meaning. As we are holobionts, each change will carry substantial downstream loss, likely not to a good end.

As far as I am concerned, immortality comes from reproduction, and the vast array of behaviors around it is fundamentally beautiful and worthwhile.

comment by Richard_Kennaway · 2024-01-17T18:48:19.626Z · LW(p) · GW(p)

Why not go on living alongside your descendants?

As far as I am concerned, immortality comes from reproduction

I'm with Woody Allen, in preferring immortality to come from not dying.

Replies from: Lichdar
comment by Lichdar · 2024-01-17T19:02:41.083Z · LW(p) · GW(p)

I don't mind it: but not in a way that wipes out my descendants, which is pretty likely with AGI.

I would much rather die than have a world without life and love, and as noted before, I think a lot of our mores and values as a species come from reproduction. Immortality will decrease the value of replacement and thus those values.

Replies from: Jiro
comment by Jiro · 2024-01-18T01:28:37.950Z · LW(p) · GW(p)

By this reasoning, why is the current lifespan perfect, except by astonishingly unlikely chance? If it's so good to have death because it makes replacement valuable, maybe reducing lifespan by 10 years would make replacement even more valuable?

comment by the gears to ascension (lahwran) · 2024-01-19T03:27:17.850Z · LW(p) · GW(p)

what if we could augment reproduction to no longer lose minds, so that when you have kids, they retain your memories in some significant form? I agree with you that current reproduction is special, passing on the informational "soul" of the body, but I want to be able to pass on more of my perspective than just the body directly. of course, it would still need to not set the main self, not like an adult growing up again, but rather a child who grows into having the full memory of all their ancestors.

But then, perhaps, what if those digital copies you mentioned were instead biological copies, biological backups - a brain and mind stored cryonically, and neurally linkable with telepathy to allow sharing significant parts of your memory, your informational "soul" data, with others? what if you could be immortal but become slower over time, as your older perspective is no longer really fitting into the reality of the descendants, and you can be available should they wish to come learn from your perspective, but ultimately leaving it up to them whether to wake you this year?

If we first assume an improving society that can achieve such things, there are so many gradations between current reproduction and "no more reproduction and everyone's immortal" to consider...

I don't think dying is what makes reproduction useful as a strategy, whether we choose to find it beautiful or not - I think the need to reinitialize brains and bodies in order to learn a new way of being for a new context is what makes it valuable. And right now, in exchange for that ongoing reinitialization and clean-slate of children, we are losing the wisdom of the elders every generation. (not to mention how as people get old, their brains break in a more total way and many of them get more cranky and prejudiced, at least on average. that part can probably be fixed with straightforward healing-the-body healthcare, not drastic life extension stuff, people are nicer when their lives suck less.)

Replies from: Lichdar
comment by Lichdar · 2024-01-19T03:52:51.849Z · LW(p) · GW(p)

But you do pass on your consciousness in a significant way to your children through education, communication and relationships, and there is an entire set of admirable behaviors selected around that.

I generally am less opposed to any biological strategy, though the dissolution of the self into copies would definitely bring up issues. But I do think that anything biological has significant advantages in that ultimate relatedness to being, and moreover in the promotion of life: biology is made up of trillions of individual cells, all arguably agentic, which coordinate marvelously into a holobiont and through which endless deaths and waste all transform into more life through nutrient recycling.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-01-19T04:46:24.947Z · LW(p) · GW(p)

yeah, sounds like we're mostly on the same page, I'm just excessively optimistic about the possibilities of technology and how much more we could pass on than current education, I do agree that it is a sliver of passing on consciousness, but generally my view is we should be at least able to end forgetting completely, instead turning all forgetting into moving-knowledge-and-selfhood-to-cold-archive. personally I'd prefer for nearly ~all live computation to be ~biological, I want to become a fully general shapeshifter before I pass on my information. I'm pretty sure the tech will be there in the next 40 years, based on the beginnings that michael levin's research is giving.

(but, also, I'm hoping to live for about 1k to 10k years as a new kind of hyper-efficient deep-space post-carbon "biology" I suspect is possible, so in that respect I am still probably pretty far from your viewpoint! I wanna live in a low gravity superstructure around pluto...)

answer by Andrew Burns · 2024-01-17T04:56:15.806Z · LW(p) · GW(p)

The apprehension of death guides a good deal of human behavior, so the sort of entity that might arise when freed from this fate could be frightening (i.e., undergo substantial value drift in a direction that we would not approve of, like toward something akin to baby-eating). Consider how immortal beings in fiction often have hostile alien values. AI never ends well in fiction, and neither does immortality.

answer by SilverFlame · 2024-01-17T17:11:42.382Z · LW(p) · GW(p)

First, a brief summary of my personal stance on immortality:

- Escaping the effects of aging for myself does not currently rate highly on my "satisfying my core desires" metrics

- Improving my resilience to random chances of dying rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives

- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

- Personal growth can achieve a lot, but there are also parts of your "self" that can be near-impossible to get rid of, and I've noticed they tend to accumulate over time. It isn't too hard to extrapolate from there and expect a future where things have changed so much that the life you want to live just isn't possible anymore, and none of the options available are acceptable.

Some final notes:

- There are other maybe-impossible-maybe-not objectives I personally care more about that can be pursued (I am not ready to speak publicly on most of them)

- I place a decent amount of prioritization pressure on objectives that support a "duty" or "role" that I take up, when relevant, and according to my estimations my stance would change if I somehow took up a role where personal freedom from aging was required to fulfill the duty

- I do not care strongly enough to stop non-"awful" (by my own definitions) people from pursuing immortality; my priorities mostly affect my own allocations of resources

- I mentioned in several places things I'm not willing to fight over, but I am somewhat willing to explain some aspects of my trains of thought. Note, however, that I am a somewhat private person and often elect silence over even acknowledging a boundary was approached.

comment by Andrew Burns (andrew-burns) · 2024-01-17T21:08:09.288Z · LW(p) · GW(p)

You cannot know a person is not secretly awful until they become awful. Humans have an interpretability problem. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, and then does a treacherous left turn and becomes extremely awful and heaps suffering on mortals and other immortals. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except the substrate differences mean immortals operate more slowly at being awful. But suppose this misaligned immortal has an IQ of 180+. Such a being could think up novel ways of inflicting lasting suffering on other immortals, creating substantial s-risk. Moreover, this single misaligned immortal could, with time, devise a misaligned AI, and when the misaligned AI turns on the misaligned immortal and also on the other immortals and the mortals (if any are left), you are left with suffering that would make Hitler blanch.

comment by dr_s · 2024-01-18T18:42:06.168Z · LW(p) · GW(p)

If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

I mean... amazingly good people die too. Sure, a society of immortals would obviously be very weird, and possibly quite static, but I don't see how eventual random death is some kind of saving grace here. Awful people die and new ones are born anyway.

comment by [deleted] · 2024-01-17T18:04:17.624Z · LW(p) · GW(p)

- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?  

Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now?  Why do you think future people will be better people?

If you had some authority to affect the outcome - whether or not current people get to be immortal, or you can reserve the treatment for future people who don't exist yet - does your belief that future people will be better people justify this genocide of current people?

Replies from: SilverFlame
comment by SilverFlame · 2024-01-17T18:51:33.094Z · LW(p) · GW(p)

Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?

I do not estimate the probability to be zero, but other than that my estimation metrics do not have any meaningful data to report.

Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now?

First, I'm not sure I agree that software systems are immortal. I've encountered quite a few tools and programs that are nigh-impossible to use on modern computers without extensive layers of emulation, and I expect that problem to get worse over time.

Second, I mainly track three primary limitations on somebody's "maximum awfulness":

  • In a pre-immortality world, they have only a fixed amount of time to exert direct influence and spread awfulness that way
  • The "society" in which we operate exerts pressure on nearly everyone it encompasses, amplifying the effects of "favored" actions and reducing the effects of "unpopular" actions. This is a massive oversimplification of a very multi-pronged concept, but this isn't the right time to delve into this concept.
  • Nobody is alone in the "game", and there will almost always be someone else whose actions and influence exerts pressure on whatever a given person is trying to do, although the degree of this effect varies wildly.

If immortality enters the picture, the latter two bullet points will still apply, but I estimate that they would not be nearly as effective on their own. Given infinite time, awful people can spread their influence and create awful organizations, especially given that people I consider "awful" tend to more easily acquire influence than people I consider "good" (since they have fewer inhibitions and more willingness to disrespect boundaries), so that would suggest a strong indication towards imbalance in the long term.
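
A minimal toy sketch of that dynamic, assuming a constant yearly compounding rate (the 5% rate and the time horizons are purely illustrative assumptions):

```python
# Toy model: influence compounds at a fixed yearly rate. A finite lifespan
# truncates the exponential curve; removing the cap changes the outcome by
# many orders of magnitude. All numbers are illustrative assumptions.

def influence_after(years: int, start: float = 1.0, growth_rate: float = 0.05) -> float:
    """Influence after compounding for `years` at `growth_rate` per year."""
    return start * (1 + growth_rate) ** years

print(f"50-year (mortal) horizon:    {influence_after(50):,.1f}")   # ~11.5x the start
print(f"500-year (immortal) horizon: {influence_after(500):.3e}")   # ~3.9e10x the start
```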

Why do you think future people will be better people?

I don't necessarily think future people will be better people. I don't feel confident estimating how their "awfulness rating" would compare to current people, but if held at gunpoint I would estimate little to no change. I am curious what made you think that I held such an expectation, but you don't have to answer.

If you had some authority to affect the outcome - whether or not current people get to be immortal, or you can reserve the treatment for future people who don't exist yet - does your belief that future people will be better people justify this genocide of current people?

There would be several factors in a decision to use such authority:

  • If I gained the authority through a specific role or duty, what would the expectations of that role or duty suggest I should do? This would be a calculation in its own right, but this should be a sufficient summary.
  • Do I expect my choice to prevent the spread of immortality to be meaningful long-term? The sub-questions here would look like "If I don't allow the spread, will someone else get to make a similar choice later?"
  • Is this the right time to make the decision? (I often recommend people ask this question during important decision-making)

The first and third factors I feel are self-explanatory, but I will talk a bit more on the second factor.

I would expect others given the same decision to not necessarily make the same choice, so by most statistical distributions even one or two other people facing the same decision would greatly increase my estimation of "likelihood that someone else chooses to hit the 'immortality button'". Therefore, if I expect the chance of "someone else chooses to press the button" to be "likely", I would then have to calculate further on how much I trusted the others I expected to be making such decisions. If I expected awful people to have the opportunity to choose whether to press the button, I would favor pressing it under my own control and circumstances, but if I expected good people to be my "competition", I would likely refrain and let them pursue the matter themselves.
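
The statistical intuition here is just the complement rule; a small sketch with an assumed per-person probability (the 30% figure is purely illustrative):

```python
# If each independent decision-maker presses the "immortality button" with
# probability p, the chance that at least one of n people presses it rises
# quickly with n. The value of p is an arbitrary illustrative assumption.
p = 0.3
for n in (1, 2, 3, 5):
    print(f"n={n}: P(at least one presses) = {1 - (1 - p) ** n:.2f}")
# n=1 -> 0.30, n=2 -> 0.51, n=3 -> 0.66, n=5 -> 0.83
```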

... does your belief that future people will be better people justify this genocide of current people?

I do not currently consider myself to have enough ability to influence the pursuit of immortality, but I have consciously chosen to prioritize other things. I also prefer to frame such matters in the case of "how much change from the expected outcome can you achieve" rather than focusing upon all the perceived badness of the expected outcome. I've found such framing to be more efficient and stabilizing in my work as a software engineer.

 

As a general note to wrap things up, I prefer to avoid exerting major influence on matters where I do not feel strongly. I find that this tends to reduce "backsplash" from such exertions and shows respect for boundaries of people in general. As the topic of pursuing immortality is clearly a strong interest of many people and it is not a strong interest of mine, I tend to refrain from taking action more overt than being willing to discuss my perspective.

Replies from: None
comment by [deleted] · 2024-01-17T20:02:54.353Z · LW(p) · GW(p)

First, I'm not sure I agree that software systems are immortal. I've encountered quite a few tools and programs that are nigh-impossible to use on modern computers without extensive layers of emulation, and I expect that problem to get worse over time.

I'm not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all). It's irrelevant if it isn't economically feasible to do so. I think you and I can both agree that an "immortal" human is a human that will not die of aging or any disease that doesn't cause instant death. It doesn't mean that it will be economically feasible to produce food to feed them in the far future, they could die from that, but they are still biologically immortal. Similarly, software is digitally immortal and eternal... as long as you are willing to keep building emulators or replacement hardware from specs.

There would be several factors in a decision to use such authority:

  • If I gained the authority through a specific role or duty, what would the expectations of that role or duty suggest I should do? This would be a calculation in its own right, but this should be a sufficient summary.
  • Do I expect my choice to prevent the spread of immortality to be meaningful long-term? The sub-questions here would look like "If I don't allow the spread, will someone else get to make a similar choice later?"
  • Is this the right time to make the decision? (I often recommend people ask this question during important decision-making)

While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal.  Meaning that none of these apply.  You hypothetically can 'click the mouse'* and choose no immortality until some later date, but you personally have no authority to influence how worthy future humans are. 

*such as in a computer game like Civilization

Finally, the implicit assumption I make, and I think you should make given the existing evidence that software is immortal, is that: 

"If I don't allow the spread, will someone else get to make a similar choice later?"

There is a slightly less than 100% chance that, within 1000 years and barring a cataclysmic event, some kind of life with the cognitive abilities of humans+ will exist in the solar system that is immortal. There are large practical advantages to having this property, from being able to make longer term plans to simply not losing information with time.

Human lifespans were not evolved in an environment with modern tools and complex technology; they are completely unsuitable for an environment where it takes, say, years to transfer between planets on the most efficient trajectory, and possibly centuries to reach the nearest star, depending on engineering limitations.

Again, though, it's reasonable to have doubt that biological 'meatware' can ever be made eternal, but since software already is, immortality exists the moment software can mimic all the important human cognitive capabilities.

Replies from: Dagon, SilverFlame
comment by Dagon · 2024-01-18T17:49:49.074Z · LW(p) · GW(p)

I think you and I can both agree that an "immortal" human is a human that will not die of aging or any disease that doesn't cause instant death.  It doesn't mean that it will be economically feasible to produce food to feed them in the far future, they could die from that, but they are still biologically immortal.

I don't know about either of you, but I do NOT agree with that definition as the default meaning in this discussion. Human immortality, colloquially in conversations I've had and seen in LW and related circles, means "a representative individual who experiences the universe in ways similar to me has a high probability of continuing to do so for thousands of years."

"pattern-immortal, but probably never going to actually live" is certainly not what most people mean.

comment by SilverFlame · 2024-01-17T22:51:06.875Z · LW(p) · GW(p)

I'm not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all).

Source code and binary files would qualify as "immortal" by most definitions, but my experience using Linux and assisting in software rehosts has made me very dubious of the "immortality" of the software's usability.

Here's a brief summary of factors that contribute to that doubt:

  • Source code is usually not as portable as people think it is, and can be near-impossible to build correctly without access to sufficient documentation or copies of the original workspace(s)
  • Operating systems can be very picky about what executables they'll run, and executables also care a lot about which versions of the libraries they need are present
  • There are a lot of architectures out there for workspaces, networks, and systems nowadays, and information about a lot of them is quietly being lost to brain drain and failures to document; some of that information can be near-impossible to re-acquire afterwards

 It's irrelevant if it isn't economically feasible to do so.

I do not consider economic infeasibility irrelevant when a problem can approach the scope of "a major corporation or government dogpiling the problem might have a 30% chance of solving it, and your reward will be nowhere near the price tag". It is possible that I am underestimating the feasibility of such rehosts after suffering through some painful rehost efforts, but that is an estimate from my intuition and thus there is little that discussion can achieve.

While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal.  Meaning that none of these apply.

First, I make a point of asking those questions even in such a simplified context. I have spent a fair amount of time training my "option generator" and "decision processor" to embed such checklists to minimize the chances of easily-avoided outcomes slipping through. The answer to the first bullet point would easily calculate as "your role has no obligations either way", but the other two questions would still be relevant.

But, to specifically answer within your clarified framing and with the idea of my choice being the governing choice in all resulting timelines, I would currently choose to withhold the information/technology, and very likely would make use of my ability to "lock away" memories to properly control the information.

The rest of your response seems reasonable enough when using the assumption that software is immortal, so I have nothing worth saying about it beyond that.

Replies from: None
comment by [deleted] · 2024-01-17T22:57:48.446Z · LW(p) · GW(p)

But, to specifically answer within your clarified framing and with the idea of my choice being the governing choice in all resulting timelines, I would currently choose to withhold the information/technology, and very likely would make use of my ability to "lock away" memories to properly control the information.

Ok.  So remember, your choices are:

  1.  Lock away the technology for some time
  2. Release it now

1 doesn't mean forever, say the length of the maximum human lifespan.  You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.  The next generation, slightly after everyone alive is dead, will be immortal, and as unethical or not as you believe future people will be.

I am saying that I don't see how 1 is very justifiable; it's also genocide, even though in this hypothetical you will face no legal consequences for committing the atrocity.

I believe this made-up hypothetical is a fairly good model for actual reality. I think people working together even by accident* - simply pretending that immortality is impossible, for example, and not allowing studies on cryonics to ever be published - could in fact delay human indefinite life extension for some time, maybe as long as the maximum human lifespan. But regardless of the length of the delay, there are 'assholes' today, and 'future assholes', and it isn't a valid argument to say you should delay immortality in the hope that future people are less, well, bad.

*the reason this won't last forever is because the technology has immense instrumental utility.  Even a small amount of reliable, proven to work life extension would have almost every person who can afford it purchasing it, and advances in other areas make achieving this more and more likely.

Replies from: SilverFlame
comment by SilverFlame · 2024-01-17T23:18:58.882Z · LW(p) · GW(p)

Ok.  So remember, your choices are:

  1.  Lock away the technology for some time
  2. Release it now

 

You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.

Even with this context, my calculations come out the same. It appears that our estimations of the value (and possibly sacred-ness) of lives are different, as well as our allocations of relative weights for such things. I don't know that I have anything further worth mentioning, and am satisfied with my presentation of the paths my process follows.

Replies from: None
comment by [deleted] · 2024-01-17T23:26:39.636Z · LW(p) · GW(p)

Do you think your process could be explained to others in an "external reasoning" way, or is this just kinda an internal gut feel, like you just value everyone on the planet being dead and you roll the dice on whoever is next?

Replies from: SilverFlame
comment by SilverFlame · 2024-01-18T00:15:02.909Z · LW(p) · GW(p)

The decision was generated by my intuition since I've done the math on this question before, but it did not draw from a specific "gut feeling" beyond me querying the heavily-programmed intuition for a response with the appropriate inputs.

Your question has raised to mind some specific deviations of my perspective I have not explicitly mentioned yet:

  • I spent a large amount of time tracing what virtues I value and what sorts of "value" I care about, and afterwards have spent 5-ish years using that knowledge to "automate" calculations that use such information as input by training my intuition to do as much of the process as is reasonable
    • I know what my value categories are (even if I don't usually share the full list) and why they're on the list (and why some things aren't on the list)
    • My "decision engine" is trained to be capable of adding "research X to improve confidence" options when making decisions
      • If time or resources demand an immediate decision, then I will make a call based on the estimates I can make with minimal hesitation
    • This system is actively maintained
  • I do not consider lives "priceless", I will perform some sort of valuation if they are relevant to a calculation
    • An individual is valued via my estimates of their replacement cost, which can sometimes be alarmingly high in the case of unique individuals
    • Groups I can't easily gather data on are estimated for using intuition-driven distributions of my expectations for density of people capable of gathering/using influence and of awful people
    • My estimations and their underlying metrics are generally kept internal and subject to change because I find it socially detrimental to discuss such things without a pressing need being present
  • Two "value categories" I track are "allows timelines where Super Good Things happen" and "allows timelines where Super Bad Things happen"
    • These categories have some of the strongest weights in the list of categories
    • They specifically cover things I think would be Super Good/Bad to happen, either to myself or others
  • I estimate that skilled awful people having an unlimited lifespan would be a Super Bad Thing, therefore timelines that allow it are heavily weighted against
    • Awful people can convert "normal" people to expand the numbers of awful people, and given a lack of pressure even "average" people can trend towards being awful
    • The influence accumulation curves over time I have personally observed and estimated look to be exponential barring major external intervention and resource limitations, and currently the finite lifespan of humans forces the awful people to each deal with the slow-growth parts of their curves before hitting their stride
answer by Elessar2 · 2024-01-19T01:08:59.852Z · LW(p) · GW(p)

Because there is a very strong possibility that the "I" that achieves this immortality won't be the "I" that I have been in this biological package up to this point: the technology required may very well grossly distort (or even destroy or render irrelevant) my consciousness beyond all recognition or similarity, and I could end up as a slave or addict to the technological AI overmind in question as it subtly morphs my mind into a compromised mess. Even if the key to immortality turns out to be biological, more or less, I'll almost certainly have to navigate the AI gauntlet in any event sooner or later. I'd rather take my chances with transcending this plane altogether for a more benign and less dualistic one.

In a less dire era of history I'd be all in favor, esp. given how healthy I am right now (age 61), and esp. given how much I've honed my mind to overcome as many dualities here as I can, but all bets are off from here on out.

comment by dr_s · 2024-01-19T19:30:50.417Z · LW(p) · GW(p)

I think this is an added layer though - I don't think the responses listed here are responses of people deep enough in the transhumanism/AI rabbit hole to even consider those options. Rather, they sound like the more general kind of answers that you'd hear also in response to a theoretical offer of immortality that means 100% what you expect it to, no catches.

answer by avturchin · 2024-01-16T19:45:31.636Z · LW(p) · GW(p)

Maybe it is part of the system which protects them from the fear of death: they suppress not only thoughts about death but even their own fear of it. Similar to Freudian repression of thoughts about sex. 

answer by the gears to ascension · 2024-01-19T01:06:11.574Z · LW(p) · GW(p)

I suspect a significant portion of what's going on is that there's a core kernel of truth to what they're saying - something along the lines that they're hesitant to stagnate. "not dying", to me, involves also greatly increasing my ability to change, to the point where in 500 years I'm such a different person that I'm more comparable to a descendant of myself than to my current self. I think people rightly recognize that without that level of self-mutability, extending lifespan leads to your "soul"/your informational self getting old.

the other answers have plenty of truth to them too - cope, expecting life to get worse, pessimism about the possibility, etc.

but consider: the old kind of immortality of mammalian life is the non-self-preserving kind, where you have kids, and people are very used to that and sort of intuitively know that if they live a long time they'll be messing with that deeply fundamental dynamic; I think much of what makes the mutability of selfhood of having kids good needs to be ported over to the individual self in order for serious longevity to be at all a good idea, the ability to stay mentally curious and mutable for a much much more extended period so as to continue mentally adapting to new circumstances.

answer by Dagon · 2024-01-16T17:26:29.303Z · LW(p) · GW(p)

I want to live forever.  I think it's vanishingly unlikely  that I will, or that anyone alive today or born in the near future will.  I think it's somewhat possible that other entities (alien or human-descended biological or cyborg, different enough that it's still effectively alien) will have a sufficiently different mechanism and conception of identity so as to be near-immortal.  

For entities of our ego-size (the unit of individual identity for humans and rate of experience-having), I think it will always be the case that replacement is far more efficient than growth and continuation. 

comment by [deleted] · 2024-01-17T18:08:52.514Z · LW(p) · GW(p)

Assuming "born in the near future" means "within 1/2 a human lifespan", you believe that over the next 160 years, humans will not be able to make themselves immortal.

And the obvious means to do it that current science says will eventually work, by life support using https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9088731/ and brain implants to augment and replace slowly dying original neurons, would count as a "cyborg" and not a human.

I'm not sure I buy that: an 'original human' living in a life support system and augmented by artificial systems would still have all the identity properties a current human has. And I think the technical route is pretty plausible, especially over 160 years, especially if you assume robotic systems driven by AI are actually administering the life support, and learning from a very large (millions+) patient pool the various edge cases where it can fail.

Replies from: Dagon
comment by Dagon · 2024-01-18T17:38:08.511Z · LW(p) · GW(p)

I obviously can't give a timeframe.  When framed as "what year will the first lucid, functioning, "normal" human celebrate their 250th birthday?",  I kind of segfault.  It's WAY past the singularity where "normal human" ceases to mean anything.

I give fairly high weights to "collapse or destruction", but excluding those, I think there's a fair probability that mind-expansion-and-extension techniques will need to start before birth to be fully effective. Those born before such things are commonly available will grow old and die.

Replies from: None
comment by [deleted] · 2024-01-18T18:34:44.548Z · LW(p) · GW(p)

Why will they die? Say robots are swapping parts with lab grown organs. The patient lives forevermore in a sealed lab. The robots are driven by AI models that learn from all patients.

  1. What can kill them? What can kill them if engineers take straightforward and obvious precautions? (Redundant power sources, redundant ai models, redundant lab grown organs made different ways)

  2. If you accept that current engineers could at least paper design a system that is unlikely to fail, when did they die? Their heart cannot stop because there are 3+ parallel systems that serve the role. Their immune system can't fail for the same reason. Strokes don't do much damage because the robots react in seconds.

A. Did they die when they entered the biolab, as they cannot experience the world directly until there is a major technology advance?

B. Did they die when 10 percent of their brain rotted by age 110 and procedures to add neural stem cells and brain implants mean more of their cognition is artificial?

C. Did they die because they hit age 200 and have forgotten most of what happened when they could touch grass?

I can see many ways the above could fail and patients could die, but I can't see a way that is very likely for them to all die. Some people could live to 250+ and will if this tech becomes possible, and they will not have pre birth edits.

A valid counterargument would be a grounded way this will always fail. Today, for example, if you tried to do this, the death rate is 100 percent because "eventually the patient dies from a lack of an organ function that current science is aware exists" or "eventually the human workers doing this make a single mistake" or "eventually weird and unexpected stuff happens, like swollen lymph ducts that there is no treatment for, and they die".

But when you imagine "robots driven by AI models who have learned to grow new bodies from scratch" it's hard to see a valid counterargument. Mistakes will be made but generally only once, and eventually a subset of the original cohort reaches 250.

Replies from: Dagon
comment by Dagon · 2024-01-18T19:07:23.204Z · LW(p) · GW(p)

Say robots are swapping parts with lab grown organs.

There's a pretty big gap from swapping non-cognitive organs to swapping whatever goes wrong as brains age.  But take that as solved - there are machines that replace or regenerate degradation on the information-content level. Which means they can distinguish between "bad" change due to damage and aging and "good" change due to learning and experiencing.  I'm skeptical that this will happen before biological humans are obsolete.

Even so, the most likely (IMO) way to die is if the robots find something better to do.  Either caring for more profitable entities, or creating bad art, or creating new entities that the robots love more (or love the same, but are way cheaper because they're not biological and don't degrade in these weird ways).  In the same way that human professional caregivers give a lot more attention to their children than to patients.

They die when the brain stops being conscious, retrieving and forming memories, etc.  They die irrevocably when the patterns in the brain are vanishingly unlikely to ever be re-instantiated and powered up.

Replies from: None
comment by [deleted] · 2024-01-18T23:48:52.555Z · LW(p) · GW(p)

Ok, but there's a wild difference between the claim "humans who are alive now or will be born in the future without genetic edits are all doomed to die of aging" and the claim "humans will not be worth keeping alive once the tech base is adequate to keep them alive 250+ years".

This also kinda simplifies. If you think a technological singularity is an inevitable emergent future event that no choice by humans can do more than delay, then this simplifies to "every human alive now will die soon after the Singularity, which will likely happen somewhere between 2028 and 2060".

Which seems to be the majority view of many posters here. (I think the Singularity will happen but am not confident it will be as deadly to humans as others model)

Either way it seems you would believe that humans will die by 2060 and aging doesn't matter for many of us.

Replies from: Dagon
comment by Dagon · 2024-01-19T03:13:17.586Z · LW(p) · GW(p)

My argument is fractal, though. It's not "there will be lots of investment and progress toward immortality, then it will be ignored because nobody noticed that it's not worth it (to those who control it)".  It's "at each step, the hard problem of brain repair will be ... hard, and not solved, because replacement works so well already that continuation (of others; most of us would continue the self if we could) isn't worth everything".  I do strongly expect a lot of early death due to "simple" organ failure, cancer, heart disease, etc will be reduced or fully solved.  I don't expect that to add up to true immortality, and the last bit of cascade failures involving brain degradation may well never be solved.  

Replies from: None
comment by [deleted] · 2024-01-19T05:23:02.753Z · LW(p) · GW(p)

So ok, this reminds me of a pro-atheist argument. So you concede that straightforward tech advances and automation can probably fix everything but brain degradation. And hypothetically suppose someone demonstrated some method of partial brain repair (neural stem cells, gene edits to turn off aging mechanisms, replacing non-neuron support cells with deaged replacements made by deaging and editing the mutations on a single pluripotent stem cell then differentiating it). So that cuts the problem in half, as at least half the cells in the brain are motile and replaceable.

Then of course there's the implants. Theoretically they can replace any function and there are some successful experiments in rats.

And like ok, so someone's memories are in implants and their brain continues to degrade. Do you think there is some measurable cognitive capacity you can't restore? When I think of this problem, I think of a VR world made with generative AI, and injected into the narrative are continuous cognitive tests for a variety of functions. So as a patient starts to perform poorly on some tests, more implants are installed, and new structures are grown with stem cells (their brain might take on an alien shape and be several times its present size to fit all these modifications). This happens until the scores on all tests reach a target baseline.

It's a continuous process. Because they keep reflecting on their original life and personality as the process happens, at all times the patient retains their original personality, human level cognitive capacity, and most declarative memories, though there will be errors that get corrected by checking records.

So I mean... at age 100 probably every memory from before 50 is a copy made by recall. Nothing is original. And at age 150, the same for the (100, 150) interval. And so on for eternity.

So what I challenge you to find is some definition of death that lets the 250 year old be dead but doesn't let you define a 100 year old...or 50 year old....as deceased.

I will note that my own view is that it's a continuous process: it's possible for someone to be partially dead. With enough technology you can prevent someone from being completely dead for at least a billion years. I just think of it as a number in the interval (0, 1). 0 means their body was incinerated and any journals burned; 1.0 is today. Yesterday is 0.99999... I think humans "die" over time regardless of still breathing, because a small amount of information is being lost. The loss only stops with neural implants and backed-up files.
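
A minimal sketch of that framing, treating "aliveness" as the fraction of information retained under a constant yearly loss rate (the 1% rate is an arbitrary illustrative assumption):

```python
# "Death as a number in (0, 1)": model the retained fraction of a person's
# information as decaying by a constant rate each year. The 1% yearly loss
# rate is purely illustrative; backups would drive the rate toward zero.

def retained_fraction(years: float, yearly_loss: float = 0.01) -> float:
    """Fraction of the original information still retained after `years`."""
    return (1 - yearly_loss) ** years

for years in (1, 10, 50, 100, 250):
    print(f"after {years:>3} years: {retained_fraction(years):.3f} retained")
# ~0.990 after 1 year, ~0.605 after 50, ~0.366 after 100, ~0.081 after 250
```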

Replies from: Dagon
comment by Dagon · 2024-01-19T17:42:57.002Z · LW(p) · GW(p)

So that cuts the problem in half, as at least half the cells in the brain motile and replaceable.

I don't think so.  It may solve for 90% of the body's mass, or even a large percentage of neurons, without making very much progress on the hard part of maintaining cognitive ability and continuity. I (and we) don't know enough detail of what makes human brains work to have any clue whether it's actually solvable in existing brains, which haven't already developed with monitoring and electronic access.

And with that, I think I'll bow out.  Thanks for the discussion - I'll read further posts and rebuttals, but probably won't reply.

Replies from: None
comment by [deleted] · 2024-01-19T17:56:03.334Z · LW(p) · GW(p)

Ok. Just one note, I did address memory later in the same comment above. You can grow new brain structures, digitally connect them, and they will learn over time the traits of the dying "original" networks they are mimicking. Note we do this all the time in ANNs.

Another meta comment: I am, in effect, explaining how you could use a big steam engine made of brass to reach 60 mph in a train. I don't know of better techniques either. I am saying "well, you could bolt the patient's skull to a fixed point, expose the brain, and add additional structures to copy and augment it to restore lost capabilities."

I don't believe such a crude solution will be necessary, I just don't know anything better with today's tech base, and I am saying that this will work eventually. Your belief that "death always wins" would be like people in 1910 believing "aircraft will always crash". Technically true, but the rate matters: with methodical refinement, midair refueling, and component replacement you can make an aircraft fly for centuries or longer before it crashes.

answer by jchan · 2024-01-17T19:51:37.696Z · LW(p) · GW(p)

It could be that people regard the likelihood of being resurrected into a bad situation (e.g. as a zoo exhibit, a tortured worker em, etc.) as outweighing that of a positive outcome.

answer by Lalartu · 2024-01-18T12:56:29.094Z · LW(p) · GW(p)

A lot of people just don't believe it is possible, and for good reasons. Life extension as a scientific field has been around for about a century, with exactly zero results so far. And these "ASI can grant immortality" stories usually assume nanotechnology, which is most likely fundamentally impossible.

If life extension were actually available, I think attitudes would be different.

answer by FireStormOOO · 2024-01-18T02:33:47.789Z · LW(p) · GW(p)

For everyone who gets curious and challenges (or even evaluates on the merits) the approved right answers they learned from their culture, there are dozens more who for whatever reason don't. "Who am I to challenge <insert authority>", "Why should I think I know better?", "How am I supposed to know what's true?" (rhetorically, not expecting an answer exists). And a thousand other rationalizations besides.

And then of those who try, most just find another authority they like better and end their inquiry - independent thinking is hard work, thankless work, lonely work.  Even many groups that supposedly value this adopt the language and trappings without the actual thought and inquiry.  People mostly challenge the approved right answers that the in-group has told them are safe to challenge.  Even here plenty haven't escaped this.

And obviously you already know the safe approved "right" answers from society at large on this question - it's all a trap and you're a fool for considering it. And credit where it's due: historically, they've so far been right.

answer by Shankar Sivarajan · 2024-01-17T06:04:43.182Z · LW(p) · GW(p)

Are you familiar with the concept of "religion"? You might find understanding the beliefs of so-called "death cults" helpful. There are a couple that are so popular and influential that even many who explicitly disavow them have adopted their views regarding death. 

3 comments


comment by Mitchell_Porter · 2024-01-18T01:47:53.531Z · LW(p) · GW(p)

Why are people unkeen on immortality that would come from technological advancements and/or AI?

If only we knew! 

I've been around since the 1990s, so I have personally observed the human race fail to take a serious interest, even just in longevity, for decades. And of course 1990s Internet transhumanism didn't invent the idea, there have been isolated calls for longevity and immortality, for decades and centuries before that. 

One may of course argue that Taoist alchemists and medieval blood-transfusionists and 1990s nanotechnologists were all just too soon, that actually curing aging, for example, objectively requires knowledge that we don't possess even now. 

But what I'm talking about is the failure to organize and prioritize. The reason that no truly major organization or institution has ever made e.g. the reversal of aging a serious priority, is not to be explained just by the incomplete state of human knowledge, although the gatekeepers of knowledge have surely played an outsized role in this state of affairs. 

If someone of the status of Newton or Kant or Oppenheimer had used their position to say the human race should try to conquer death; or even if a group of second-tier scientists or intellectuals had the clarity and audacity to say firmly and repeatedly, that in the age of science, we can and should figure out how to live a thousand years - then perhaps "life extensionism" or "immortalism" would for some time already have existed as a well-known school of thought, alongside all the other philosophies and ideologies that exist in the world of ideas. 

I suppose that, compared to decades ago, things are a lot better. The prospect of immortality is now a regular subject of pop-science documentaries about biotechnology and the study of aging. There are anti-aging radicals scattered throughout world academia, there are a handful of well-funded research groups working on aspects of the aging problem, and there are hundreds of billions of dollars spent annually on biological and medical research, even if it is spent inefficiently. So, culture has shifted greatly. 

Now, your question is "why don't people in general want to live forever via technology", which is a slightly different question to "why didn't the human race organize to make it happen", although they are definitely related. There's probably a dozen reasons that contribute. For example, some proposed modes of immortality involve the abandonment of the human body, and may sound insane or repulsive. 

I think a major reason is that many people already find life miserable or exhausting. Their will-to-live is already fully used up, just to cope with the present. Or even if they have achieved a kind of happiness, they got there by accepting the world as it is, accepting limits, focusing on the positives, and so on. Death is sad but life goes on. 

Also, people are good at thinking of reasons not to do it. If no one dies, do we all just live under the same politicians forever? If no one dies, won't the world fill up and we'll all starve? Aren't there too many people already? What if you get bored? Some of these are powerful reasons. Not everyone is going to think of outer space as an outlet for excess population. But mostly these are ways to deflect an idea that has already been dismissed for other reasons. There aren't many people who are genuinely excited by the idea of thousand-year lifespans and then go, hang on, what about the environment, and reject it for that reason.

comment by Shankar Sivarajan (shankar-sivarajan) · 2024-01-17T05:52:32.845Z · LW(p) · GW(p)

Relevant smbc: https://www.smbc-comics.com/comic/2013-01-29

"When they realized they were in a desert, they built a religion to worship thirst."

comment by BeyondTheBorg · 2024-01-19T04:01:11.388Z · LW(p) · GW(p)

It's learned helplessness. People have seen loved ones die and remember they could do nothing to stop it. Past longevity research has not panned out, and people have grown rightfully skeptical about a cure for what has up to this point just been the human condition. Though I suspect they'd gladly take such a cure if one existed.

We also think of death as a great equalizer that allows new (maybe better) people to succeed the old (bad) people (e.g. Supreme Court justices). Tough questions about labor, retirement, marriages, population, and democracy are currently settled by death, and our existing political institutions are not remotely ready to answer them in its absence.