Age changes what you care about
post by Dentin · 2022-10-16T15:36:36.148Z · LW · GW · 37 comments
[Possible trigger warning for discussion of mortality.]
I've been on LessWrong for a very long time. My first exposure to the power of AI was in the mid-1980s, and I was easily at "shock level 4" at the turn of the millennium thanks to various pieces of fiction, newsgroups, and discussion boards.
(If you're curious, "The Metamorphosis of Prime Intellect", "Autonomy", and "A Fire Upon the Deep" all had a large impact on me.)
My current best guess is that there's a double-digit percentage chance the human race (uploads included) will go extinct in the next century as a result of changes due to AI.
With that background in mind, existential AI risk is not my highest priority, and I am making no meaningful effort to address that risk other than sometimes posting comments here on LW. This post is an attempt to provide some insight into why, to others who might not understand.
Put bluntly, I have bigger things to worry about than double digit odds of extinction in the next century due to AI, and I am a rather selfish individual.
What could possibly be more important than extinction? At the moment, that would be an extremely solid 50% chance of death in the next 3-4 decades. That's not some vague guesswork based on hypothetical technological advances; it's over a hundred thousand people dying per day, every day, as they have been for the last century, each death contributing data to the certainty of that number. That's two decades of watching the health care system be far from adequate, much less efficient.
Consider that right now, in 2022, I am 49 years old. In 2030 I'll be 57, and there's good reason to believe that I'll be rolling 1d100 every year and hoping that I don't get unlucky. By 2040, it's 1d50, and by 2050 it's 1d25. Integrate that over the next three decades and it does not paint a pretty picture.
Allow me a moment to try to convey the horror of what this feels like from the inside: every day, you wonder, "is today the day that my hardware enters catastrophic cascading failure?" Every day, it's getting out a 1d10 and rolling it five times, knowing that if they all come up 1s it's over. I have no ability to make backups of my mind state; I cannot multihost or run multiple replicas; I cannot even export any meaningful amount of data. The high-reliability platform I'm currently running on needs to fail exactly once and I am permanently, forever, dead. My best backup plan is a single-digit probability that I can be frozen on death and revived in the future. (Alcor provides at best only a few percent odds of recovery, but it's better than zero and it's cheap.)
That's what it feels like after you've had your first few real health scares.
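For concreteness, here is what "integrate that over the next three decades" looks like as arithmetic: a minimal sketch in Python, using the dice odds above as waypoints and linearly interpolating between them. The numbers are illustrative stand-ins for the post's dice analogy, not actuarial tables.

```python
# Cumulative survival sketch using the post's dice odds as waypoints:
# ~1/100 per year around age 57, ~1/50 around 67, ~1/25 around 77.
# Rates in between are linearly interpolated; illustrative only.

def annual_mortality(age):
    waypoints = [(57, 1 / 100), (67, 1 / 50), (77, 1 / 25)]
    if age <= waypoints[0][0]:
        return waypoints[0][1]
    if age >= waypoints[-1][0]:
        return waypoints[-1][1]
    for (a0, r0), (a1, r1) in zip(waypoints, waypoints[1:]):
        if a0 <= age <= a1:
            return r0 + (r1 - r0) * (age - a0) / (a1 - a0)

survival = 1.0
for age in range(57, 78):
    survival *= 1 - annual_mortality(age)

print(f"P(surviving age 57 through 77): {survival:.2f}")  # ~0.62 under these rates
```

Pushing the same curve out into the mid-80s, where the annual odds keep climbing, is roughly what turns this into the 50% figure above.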
I'd like to say that I'm noble enough, strong enough as a rationalist that I can "do the math" and multiply things out, take into account those billions of lives at risk from AI, and change my answer. But that's where the selfish aspect of my personality comes in: it turns out that I just don't care that much about those lives. I care a lot more about personally surviving the now than about everyone surviving the later.
It's hard to care about the downfall of society when your stomach is empty and your throat is parched. It's hard to care about a century from now, when death is staring you in the face right now.
37 comments
comment by trevor (TrevorWiesinger) · 2022-10-17T04:14:13.481Z · LW(p) · GW(p)
I think this has pretty noteworthy policy implications. Most policymakers are at least this old, and many are much older, but they have only baseline cultural attitudes towards death to cope with it.
Many of these are totally false or epistemically bad (e.g. returning to nature or living on in others' hearts), so their mentality towards their own existence ends up in a cycle of repeatedly getting broken down by truth until another house of cards of lies and justifications is built up again.
↑ comment by trevor (TrevorWiesinger) · 2022-10-17T23:47:09.334Z · LW(p) · GW(p)
I just want to clarify that my epistemic confidence in the wording of this was low; people seem to cope quite well and only get a shakeup every 5-10 years or something. I do, however, think that this makes it hard to talk about with other people their age, because it might shake one of them up, and of course they can't talk about it with younger people, because they wouldn't understand and shouldn't anyway.
Also noteworthy: Simpler people and more locally-focused people will suffer less.
comment by Sune · 2022-10-16T19:16:03.500Z · LW(p) · GW(p)
I know this isn't really the point of the post, but I don't think "rolling five d10 every day" or even "1d100 a year" are good models for the probability of dying. They make sense statistically when only considering your age and sex, but you yourself know, for example, whether you have been diagnosed with cancer or not. You might say that you get a cancer diagnosis if the first four d10 are all 1s and the fifth is a 2 or 3. Then once you have the cancer diagnosis, your probability is higher, especially when looking months or years ahead.
The usual mortality statistics don't distinguish between unexpected and expected deaths. Does anyone know of a more accurate model of how it is revealed when you die? I'm not looking for exact probabilities, and not necessarily at the resolution of days. Just something more accurate than the simple model that ignores current health.
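(For illustration, a toy version of the two-stage model described above, with made-up numbers chosen only to mirror the dice analogy rather than real medical statistics, might look like this:)

```python
import random

# Toy two-stage daily mortality model: a small chance of sudden death,
# a small chance of a diagnosis, and a much higher daily hazard afterwards.
# All numbers are invented to mirror the dice analogy, not real statistics.

P_SUDDEN_DEATH = 1 / 100_000      # "all five d10 come up 1s"
P_DIAGNOSIS = 2 / 100_000         # "first four are 1s, fifth is a 2 or 3"
P_DEATH_IF_DIAGNOSED = 1 / 2_000  # elevated daily hazard after diagnosis

def simulate_days(n_days):
    diagnosed = False
    for day in range(n_days):
        roll = random.random()
        if diagnosed:
            if roll < P_DEATH_IF_DIAGNOSED:
                return day, "expected death (post-diagnosis)"
        else:
            if roll < P_SUDDEN_DEATH:
                return day, "unexpected death"
            elif roll < P_SUDDEN_DEATH + P_DIAGNOSIS:
                diagnosed = True
    return n_days, "still alive"

print(simulate_days(365 * 30))
```

The point of such a model is that the daily roll is no longer independent of your current health state, which is exactly the distinction between unexpected and expected deaths that plain life-table odds blur.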
comment by Mitchell_Porter · 2022-10-16T22:26:01.962Z · LW(p) · GW(p)
I take the opposite position: the rise of AI is now rapid enough that AI safety should be prioritized over anti-aging research.
↑ comment by dkirmani · 2022-10-17T05:36:12.437Z · LW(p) · GW(p)
Wouldn't curing aging turn people into longtermists?
↑ comment by Mitchell_Porter · 2022-10-17T06:54:35.444Z · LW(p) · GW(p)
I'm saying AI is on track to take over in the short term. That means right now is our one chance to make it something we can coexist with.
comment by Garrett Baker (D0TheMath) · 2022-10-16T16:49:07.558Z · LW(p) · GW(p)
Sounds like you should get into life extension research! Or at least, support it with money.
↑ comment by Dentin · 2022-10-16T17:07:54.218Z · LW(p) · GW(p)
I agree. I've been donating $10k-50k per year for the past decade or so. I determined a couple years ago that it was better for me to acquire money at my current job and spend it hiring professionals, than to go into fundamental research myself.
Most of my hobby time these days goes toward biochem and biomedical research, so that I can be at the cutting edge if it becomes necessary. Being able to get treatments from 5-10 years beyond the official approval timelines may very well make the difference between life and death.
↑ comment by Garrett Baker (D0TheMath) · 2022-10-16T17:09:59.019Z · LW(p) · GW(p)
Nice!
comment by Aiyen · 2022-10-25T15:03:43.027Z · LW(p) · GW(p)
This is one of the reasons why we should be hesitant about the plans for deliberately slowing AI progress that have gotten popular of late. One hundred fifty thousand people die every day. Almost none of them are cryopreserved, and for those who are, it remains extremely uncertain whether they can ever be recovered, even in a positive singularity.
Getting it right matters more than getting it done fast, but fast still matters. The price of half a second is the difference between life and death for someone, probably someone we’d come to love if we knew them.
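(For scale, using the figure above: 150,000 deaths per day divided by 86,400 seconds per day is about 1.7 deaths per second, i.e. roughly one death every 0.6 seconds.)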
comment by Jonas Hallgren · 2022-10-16T22:27:22.695Z · LW(p) · GW(p)
(PSA:)
Hey you, whoever is reading this comment, this post is not an excuse to skip working on alignment. I can fully relate to the fear of death here, and my own tradeoff is focusing hard on instrumental goals such as my own physical health and nutrition (including supplements) to delay death and get some nice productivity benefits. That doesn't mean an AI won't kill you within 15 years, so skipping alignment work is most likely not even a defection in a tragedy of the commons; working on it is, rather, paramount to your future success at being alive.
(Also if we solve alignment, then we can get a pretty op AGI that can help us out with the other stuff so really it's very much a win-win in my mind.)
↑ comment by the gears to ascension (lahwran) · 2022-10-16T22:29:53.359Z · LW(p) · GW(p)
after all, inter-agent and interspecies alignment is simply an instrumental goal on the way to artificial intelligence that can generate biological and information-theoretic immortality
comment by hairyfigment · 2022-10-16T17:47:16.284Z · LW(p) · GW(p)
If we somehow produced the sort of AI which EY wants, then I think you'd have radically underestimated the chance of being reconstructed from cryonically preserved data.
On the other side, you appear comfortable talking about 4 more decades of potential life, which is rather longer than the maximum I can believe in for myself in the absence of a positive singularity. I may also disagree with your take on selfishness, but that isn't even the crux here! Set aside the fact that, in my view, AGI is likely to kill everyone in much less than 40 years from today. Even ignoring that, you would have to be overstating the case when you dismiss "the downfall of society," because obviously that kills you in less than 4 decades with certainty. Nor is AGI the only X-risk we have to worry about.
comment by M. Y. Zuo · 2022-10-16T16:31:02.399Z · LW(p) · GW(p)
If you're interested in overcoming your fear of death, there was another LW post about an excellent book: https://en.wikipedia.org/wiki/The_Denial_of_Death
I wouldn't get too worried about trying to convince others that long-term ambitions are usually sacrificed for shorter-term ones, since the vast majority of the population, if not all of it, likely discounts longer-term things to varying extents as well, LW posters included, regardless of purported claims.
In fact, I've yet to see anyone actually demonstrate an exclusive focus on the long term in a credible manner. So you shouldn't feel deficient regarding that.
↑ comment by Dentin · 2022-10-16T16:41:50.784Z · LW(p) · GW(p)
I've had enough time and exposure that I've largely worked through my fear of death; I put substantial effort into finding a healthy way of managing my mental state in that area. It doesn't significantly impact my day, and hasn't for a while.
But it's still there looming large when I ask myself, "what's the most important thing I can be doing right now?"
comment by plex (ete) · 2022-10-19T23:50:26.086Z · LW(p) · GW(p)
It's good that you're in tune with your values, and able to focus on what feels most important. However, I think your timelines need updating in light of the flood of capabilities advances in recent months.
We're hitting the part of the exponential in AI progress where it seems increasingly clear that we don't get multiple decades, barring some unexpected deviation from the trend. With high P(doom|superintelligence), I would put your odds of personally dying due to misaligned AI significantly higher than those due to aging even at 49, though shifting the needle on your own preparations might be more tractable.
↑ comment by Aiyen · 2022-10-27T20:42:44.991Z · LW(p) · GW(p)
What is your reasoning here? I'm inclined to agree, on the timelines if not the high P(doom|superintelligence), but a lot of rationalists I know have longer timelines. Tbh, I'm a lot more worried about aging than unfriendly superintelligence conditional on AI not being significantly regulated; advances in AI progress seem likely to open the possibility of more rigorous alignment strategies than could be meaningfully planned out earlier.
↑ comment by plex (ete) · 2022-10-27T23:16:05.999Z · LW(p) · GW(p)
I mostly buy the position outlined in:
AGI ruin scenarios are likely (and disjunctive) [LW · GW]
A central AI alignment problem: capabilities generalization, and the sharp left turn [LW · GW]
Why all the fuss about recursive self-improvement? [LW · GW]
Warning Shots Probably Wouldn't Change The Picture Much [LW · GW]
my current outlook on AI risk mitigation [AF · GW]
The intensity of regulation which seems remotely plausible does not help if we don't have an alignment method to mandate which holds up through recursive self-improvement. We don't have one, and it seems pretty likely that we won't get one in time. Humanity is almost certainly incapable of coordinating to not build such a powerful technology given the current strategic, cultural, and geopolitical landscape.
I think we might get lucky in one of a few directions, but the default outcome is doom.
↑ comment by Aiyen · 2022-10-28T00:16:17.020Z · LW(p) · GW(p)
I said conditional on it not being regulated. If it’s regulated, I suspect there’s an extremely high probability of doom.
↑ comment by ErickBall · 2022-10-28T12:02:24.927Z · LW(p) · GW(p)
How does this work?
↑ comment by Aiyen · 2022-10-28T15:39:59.738Z · LW(p) · GW(p)
There's a discussion of this here [LW · GW]. If you think I should write more on the subject I might devote more time to it; this seems like an extremely important point and one that isn't widely acknowledged.
↑ comment by ErickBall · 2022-10-28T16:23:48.146Z · LW(p) · GW(p)
It looks like in that thread you never replied to the people saying they couldn't follow your explanation. Specifically, what bad things could an AI regulator do that would increase the probability of doom?
↑ comment by Aiyen · 2022-10-28T17:10:49.463Z · LW(p) · GW(p)
- Mandate specific architectures to be used because the government is more familiar with them, even if other architectures would be safer.
- Mandate specific "alignment" protocols to be used that do not, in fact, make an AI safer or more legible, and divert resources to them that would otherwise have gone to actual alignment work.
- Declare certain AI "biases" unacceptable, and force the use of AIs that do not display them. If some of these "biases" are in fact real patterns about the world, this could select for AIs with unpredictable blind spots and/or deceptive AIs.
- Increase compliance costs such that fewer people are willing to work on alignment, and smaller teams might be forced out of the field entirely.
- Subsidize unhelpful approaches to alignment, drawing in people more interested in making money than in actually solving the problem, increasing the noise-to-signal ratio.
- Create licensing requirements that force researchers out of the field.
- Create their own AI project under political administrators that have no understanding of alignment, and no real interest in solving it, thereby producing AIs that have an unusually high probability of causing doom and an unusually low probability of producing useful alignment research and/or taking a pivotal act to reduce or end the risk.
- Push research underground, reducing the ability of researchers to collaborate.
- Push research into other jurisdictions with less of a culture of safety. E.g. DeepMind cares enough about alignment to try to quantify how hard a goal can be optimized before degenerate behavior emerges; if they are shut down and some other organization elsewhere takes the lead, they may well not share this goal.
This was just off the top of my head. In real life, regulation tends to cause problems that no one saw coming in advance. The strongest counterargument here is that regulation should at least slow capabilities research down, buying more time for alignment. But regulators do not have either the technical knowledge or the actual desire to distinguish capabilities and alignment research, and alignment research is much more fragile.
comment by Astynax · 2022-10-23T16:25:37.646Z · LW(p) · GW(p)
I'm having a disconnect. I think I'm kind of selfish too. But if it came to a choice between me dying this year and humanity dying 100 years from now, I'll take my death. It's going to happen anyway, and I'm old enough I got mine, or most of it. I'm confident I'd feel the same if I didn't have children, though less intensely. What is causing the difference in these perspectives? IDK. My 90-year-old friend would snort at the question; what difference would a year or two make? The old have less to lose. But the young are usually much more willing to risk their lives. So: IDK.
↑ comment by whestler · 2024-07-31T11:32:40.189Z · LW(p) · GW(p)
I'm in the same boat. I'm not that worried about my own life, in the general scheme of things. I fully expect I'll die, and probably earlier than I would in a world without AI development. What really cuts me up is the idea that there will be no future to speak of, that all my efforts won't contribute to something, some small influence on other people enjoying their lives at a later time. A place people feel happy and safe and fulfilled.
If I had a credible offer to guarantee that future in exchange for my life, I think I'd take it.
(I'm currently healthy, with more than half my life left to live, assuming average life expectancy.)
Sometimes I try to take comfort in many-worlds, that there exist different timelines where humanity manages to regulate AI or align it with human values (whatever those are). Given that I have no capacity to influence those timelines though, it doesn't feel like they are meaningfully there.
comment by JacobW38 (JacobW) · 2022-10-17T06:11:22.388Z · LW(p) · GW(p)
Honestly, even from a purely selfish standpoint, I'd be much more concerned about a plausible extinction scenario than just dying. Figuring out what to do when I'm dead is pretty much my life's work, and if I'm being completely honest and brazenly flouting convention, the stuff I've learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death wasn't inevitable, I'd still want to see it for myself at some point. I definitely wouldn't choose to artificially prolong my lifespan, given the opportunity. So personally, death and I are on pretty amicable terms. On the other hand, in the case of an extinction event... I don't even know what there would be left for me to do at that point. It's just the kind of thing that, as I imagine it, drains all the hope and optimism I had out of me, to the point where even picking up the pieces of whatever remains feels like a monumental task. So my takeaway would be that anyone, no matter their circumstances, who really feels that AI or anything else poses such a threat should absolutely feel no inhibition toward working to prevent such an outcome. But on an individual basis, I think it would pay dividends for all of us to be generally death-positive, if perhaps not as unreservedly so as I am.
↑ comment by Lone Pine (conor-sullivan) · 2022-10-17T10:11:19.564Z · LW(p) · GW(p)
Do you believe in an afterlife?
↑ comment by JacobW38 (JacobW) · 2022-10-18T05:05:21.851Z · LW(p) · GW(p)
I have a taboo on the word "believe", but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.
↑ comment by the gears to ascension (lahwran) · 2022-10-18T09:37:36.667Z · LW(p) · GW(p)
You still haven't actually provided verifiable instances, only referenced them and summarized them as adding up to an insight; if you're interested in extracting the insights for others I'd be interested, but right now I don't estimate high likelihood that doing so will provide evidence that warrants concluding there's hidden-variable soul memory that provides access to passwords or other long facts that someone could not have had classical physical access to. I do agree with you, actually, in contrast to almost everyone else here, that it is warranted to call memetic knowledge "reincarnation" weakly, and kids knowing unexpected things doesn't seem shocking to me - but it doesn't appear to me that there's evidence that implies requirement of physics violations, and it still seems to me that the evidence continues to imply that any memory that is uniquely stored in a person's brain at time of death diffuses irretrievably into heat as the body decays.
I'd sure love to be wrong about that; let us all know when you've got more precise receipts.
↑ comment by JacobW38 (JacobW) · 2022-11-10T06:03:59.591Z · LW(p) · GW(p)
Apologies for the absence; combination of busy/annoyance with downvotes, but I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don't think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I'd wonder what you were thinking. But good information, including primary sources, is openly accessible, and it's something that I encourage those with the interest to take a deep dive into, for sure. Once you go far enough in, in my experience, there's no getting out, unless perhaps you're way more demanding of utter perfection in scientific analysis than I am, and I'm generally seen as one of the most demanding people currently in the PL-memory field, to the point of being a bit of a curmudgeon (not to mention an open sympathizer with skeptics like CSICOP, which is also deeply unpopular). But it takes a commitment to really wanting to know one way or the other. I can't decide for anyone whether or not to have that.
I certainly could summarize the findings and takeaways of afterlife evidence and past-life memory investigations for a broad audience, but I haven't found any reason to assume that it wouldn't just be downvoted. That's not why I came here anyways; I joined to improve my own methods and practice. I feel that if I were interested in doing anything like proselytizing, I would have to have an awfully low opinion of the ability of the evidence to speak for itself, and I don't at all. But you tell me if I'm taking the right approach here, or if an ELI5 on the matter would be appropriate and/or desired. I'd not hesitate to provide such content if invited.
↑ comment by momom2 (amaury-lorin) · 2023-11-21T14:06:03.183Z · LW(p) · GW(p)
I invite you. You can send me this summary in private to avoid downvotes.
↑ comment by Aiyen · 2022-10-27T20:37:45.452Z · LW(p) · GW(p)
If you don't like the word "believe", what is the probability you assign to it?
↑ comment by JacobW38 (JacobW) · 2022-11-10T05:27:36.795Z · LW(p) · GW(p)
Based on evidence I've been presented with to this point - I'd say high enough to confidently bet every dollar I'll ever earn on it. Easily >99% that it'll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I'm basically alone in my field in actually being in favor of the latter, however, so I have no interest in declining to play the long game with it.
↑ comment by Filip Dousek (fidnie) · 2022-10-23T09:14:43.640Z · LW(p) · GW(p)
do you have any published papers on this? or, what are the top papers on the topic?
↑ comment by JacobW38 (JacobW) · 2022-11-10T06:18:50.799Z · LW(p) · GW(p)
Thanks for asking. I'll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former being my specialty as a researcher; as is probably clear from other replies, I'm satisfied with the raw evidence).
As for best published papers for introductory purposes, here you can find one of my personal all-time favorites. https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Deceased-Haraldsson-Abu-Izzeddin/4fb93e1dfb2e353a5f6e8b030cede31064b2536e