The Personal Implications of AGI Realism

post by xizneb · 2024-10-20T16:43:37.870Z · LW · GW · 7 comments

Contents

  Superintelligence Is On The Horizon
  All Possible Views About Our Lifetimes Are Wild
  What Does This Mean On A Personal Level?

Superintelligence Is On The Horizon

It’s widely accepted that powerful general AI, and soon after, superintelligence, may eventually be created.[1] There’s no fundamental law keeping humanity at the top of the intelligence hierarchy. While there are physical limits to intelligence, we can only speculate about where they lie. It’s reasonable to assume that even if we hit an S-curve in progress, that plateau will be far beyond anything even 15 John von Neumann clones could imagine.

Gwern was one of the first to articulate the "scaling hypothesis". While debate continues over whether scaling alone will lead to AI systems capable of self-improvement, it seems likely that scaling, combined with algorithmic progress and hardware advances, will continue to drive the field forward for the foreseeable future. Dwarkesh Patel estimates a "70% chance scaling + algorithmic progress + hardware advances will get us to AGI by 2040". These odds are too high to ignore. Even if there are delays, superintelligence is still coming.

Some argue it's likely to be built by the end of this decade; others think it might take longer. But almost no one doubts that AGI will emerge this century, barring a global catastrophe. Even skeptics like Yann LeCun predict AGI could be reached in “years, if not a decade.” As Stuart Russell noted, estimates have shifted from “30-50 years” to “3-5 years.”

Leopold Aschenbrenner calls this shift "AGI realism." In this post, we focus on one key implication of this view—leaving aside geopolitical concerns:

We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do. 

Of course, this picture could be wrong: AGI might not arrive until much later this century, though that seems increasingly unlikely. Even so, it’s a future we must take seriously.

Even in a scenario where AGI arrives late in the century, many of us alive today will witness it. I was born in the early 2000s, and it’s more probable than not that AGI will be developed within my lifetime. While much attention is paid to the technical, geopolitical, and regulatory consequences of short timelines, the personal implications are less often discussed.

All Possible Views About Our Lifetimes Are Wild

This title riffs on Holden Karnofsky's post "All Possible Views About Humanity's Future Are Wild [LW · GW]." In essence, either we build superintelligence, ushering in a transformative era, or we don't. We may see utopia, catastrophe, or something in between. Perhaps geopolitical conflicts, like a war over Taiwan, will disrupt chip manufacturing, or an unforeseen limitation will prevent us from creating superhuman intelligence. Whatever happens, each scenario is extraordinary: there is no non-wild view of our future.

Personally, I want to be there to witness whatever happens, even if it’s the cause of my demise. It seems only natural to want to see the most pivotal transition since the emergence of intelligent life on Earth. Will we succumb to Moloch? Or will we get our act together? Are we heading toward utopia, catastrophe, or something in between?

Dario Amodei's "Machines of Loving Grace" paints a picture of what a predominantly positive future with highly powerful AI systems could look like. As he notes in a footnote, his view may even be perceived as "pretty tame":

“I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.”

To be clear, what Dario describes as potentially being perceived as "tame" already includes compressing the biological and medical progress of the next 50-100 years into 5-10 years, with the prevention or cure of most diseases and a possible doubling of the human lifespan.

AI researcher Marius Hobbhahn speculates [LW · GW] that the leap from 2020 to 2050 could be as jarring as transporting someone from the Middle Ages to modern-day Times Square, exposing them to smartphones, the internet, and modern medicine.

Or, as Leopold Aschenbrenner points out, we might see massive geopolitical turbulence.

Or, in Eliezer Yudkowsky’s view, we face near-certain doom [LW · GW].

Regardless of which scenario you find most plausible, one thing is abundantly clear: all possible views about our lifetimes are wild.


What Does This Mean On A Personal Level?

It’s dizzying to think that you might be alive when the 24th century comes crashing down on the 21st. If your probability of doom is high, you might be tempted to maximise risk, if you enjoy taking risks, since there would seem to be little to lose. However, I would argue that if there’s even a small chance that doom isn’t inevitable, the focus should be on self-preservation: the downside of doom is the same whatever you do, so your own survival odds are the only lever you control. Imagine getting hit by a truck just years or decades before the birth of superintelligence.
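A toy expected-value sketch makes this concrete. All of the numbers below are illustrative assumptions chosen for the example, not estimates from this post:

```python
# Toy model: does caution pay off even if doom is likely?
# All numbers are illustrative assumptions, not predictions.

p_doom = 0.80              # assumed probability of AI-caused catastrophe
p_survive_careful = 0.95   # assumed chance of reaching AGI alive if cautious
p_survive_reckless = 0.70  # assumed chance of reaching AGI alive if reckless

# Payoff of witnessing a post-AGI world, in arbitrary units.
value_post_agi = 100.0

def expected_value(p_survive: float) -> float:
    # You only collect the post-AGI payoff if doom doesn't happen
    # AND you survive the ordinary risks of the intervening years.
    return (1 - p_doom) * p_survive * value_post_agi

print(round(expected_value(p_survive_careful), 2))   # 19.0
print(round(expected_value(p_survive_reckless), 2))  # 14.0
```

For any assumed p_doom below 1, the cautious strategy comes out ahead, since doom costs you the same either way.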

It makes sense to fully embrace your current human experience. Savor love, emotions—positive and negative—and other unique aspects of human existence. Be grateful. Nurture your relationships. Pursue things you intrinsically value. While future advanced AI systems might also have subjective experiences, for now, feeling is something distinctly human.

For better or for worse, no part of the human condition will remain the same after superintelligence. Biological evolution is slow, but technological progress has been exponential. The modern world itself emerged in the blink of an eye. If we survive this transition, superintelligence might bridge the gap between our biological limitations and technological capabilities.

The best approach, in my view, is to fully experience what it means to be human while minimising your risks. Avoid unnecessary dangers: reckless driving, water hazards, falls, excessive sun exposure, and neglecting your mental health. Look both ways when crossing the street. Focus on becoming as healthy as possible.[2]

This video provides a good summary of how to effectively reduce your risk of death.

Maybe reading science fiction, such as Iain M. Banks’s Culture series, is a good way to prepare for what’s coming.[3] Alternatively, some may prefer to stay grounded in present reality, knowing that the second half of this century might outpace even the wildest sci-fi. In ways we can’t fully predict, the future could be stranger [LW · GW] than anything we imagine.

As AI researcher Katja Grace writes:

I guess there’s maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests—I think because in the case where it doesn’t cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn’t conscious), and I don’t trust that the replacements will be actually good, or good for us, or that anything will be reversible.

Even if we don’t die, it still feels like everything is coming to an end.

Holden Karnofsky has described a “call to vigilance” when thinking about the most important century. Similarly, I believe we should all adopt this mindset when considering the personal implications of AGI. The right reaction isn’t to dismiss this as hype or far-off sci-fi. Instead, it’s the realisation: “…oh… wow… I don’t know what to say, and I think I might vomit… I need to sit down and process this.”

To conclude: 

Utopia is uncertain, doom is uncertain, but radical, unimaginable change is not. 

We stand at the threshold of possibly the most significant transition in the history of intelligence on Earth—and maybe our corner of the universe.

Each of us must find our own way to live meaningfully in the face of such uncertainty, possibility, and responsibility. 

We should all live more intentionally and understand the gravity of the situation we're in.

It’s worth taking the time to seriously and viscerally consider how to live in the years or decades leading up to the dawn of superintelligence.

  1. ^

     For the purpose of this post, we’ll abide by the definition in DeepMind’s paper “Levels of AGI for Operationalizing Progress on the Path to AGI”.

  2. ^

     Maybe you could argue getting maximally healthy isn’t *that* important, as in a best-case scenario for superintelligence, ~all diseases would be solved. But still, it probably makes sense to hedge against longer timelines and stay as healthy as possible.

  3. ^

7 comments

Comments sorted by top scores.

comment by Lalartu · 2024-10-22T14:08:40.757Z · LW(p) · GW(p)

This chain of logic is founded on an assumption that these technologies are possible, which I find highly dubious. If an (aligned) superintelligence is built, and we ask it for life extension, the most probable answer would be that biological immortality (and all stuff requiring nanorobots) is just plain impossible, and brain uploading wouldn't help because your copy is not you.

Replies from: Mitchell_Porter, xizneb
comment by Mitchell_Porter · 2024-10-22T17:36:06.362Z · LW(p) · GW(p)

Who said biological immortality (do you mean a complete cure for ageing?) requires nanobots?

We know individual cell lines can go on indefinitely; the challenge is to have an intelligent multicellular organism that can too.

Replies from: Lalartu, green_leaf
comment by Lalartu · 2024-10-23T09:44:28.304Z · LW(p) · GW(p)

A cell line being immortal doesn't prove that an immortal brain is possible, any more than an immortal microbe strain does.

comment by green_leaf · 2024-10-23T04:35:50.354Z · LW(p) · GW(p)

(Thanks to the Hayflick limit, only some lines can go on indefinitely.)

comment by xizneb · 2024-10-23T02:02:52.453Z · LW(p) · GW(p)

I don't think the assumption is highly dubious. You don't need to believe in the possibility of mind uploading or biological immortality to expect radically transformative changes in the human condition due to advanced AI. The "Neuroscience and Mind" section of Dario Amodei's essay (he has a formal background in biophysics) speculates concretely about what could happen in these areas with the help of advanced AI (even setting aside that mind uploading is probably "possible in principle").

Even if some goals are unattainable, AGI could still (as Dario speculates) drive radical advancements in areas like health, longevity, and cognitive enhancement. The point isn't to guarantee specific outcomes, but to recognise that AGI will likely push the boundaries of what we currently believe is possible and transform the world unrecognisably. And we should be reflecting on that and mentally preparing for it.

Replies from: Lalartu
comment by Lalartu · 2024-10-23T11:30:10.911Z · LW(p) · GW(p)

Amodei’s general argument is this:

"my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years."

This may be correct, but his estimate of what is expected to be achieved in 100 years without AI is likely wildly overoptimistic. In particular, his argument for doubling of lifespan is just an extrapolation from past increase in life expectancy, which is ridiculous because progress in extending maximum human lifespan so far is exactly zero.

Replies from: xizneb
comment by xizneb · 2024-10-23T12:00:07.850Z · LW(p) · GW(p)

I agree that there are significant uncertainties about the specific consequences of AI accelerating bio/medicine R&D, but even without buying into Amodei's specific speculations on life extension, you would still get wildly transformative breakthroughs and unforeseen consequences. And it does seem sensible to be wary of simply extrapolating past increases in life expectancy.

Time will tell!