Ways to prepare for a vastly new world?

post by Annapurna (jorge-velez) · 2023-02-26T04:56:12.505Z · LW · GW · 2 comments

This is a question post.


I've come to the realization that humanity is at a place similar to where the Aztecs were prior to the arrival of Cortés.

The big difference is that many of us are very confident that Cortés (AGI) will arrive within our lifetimes. I don't know exactly when, and there is a lot of debate as to whether it will kill us all, but I find general agreement that it's on its way.

Unlike some of you, I am not having a deep existential crisis, but I am having a lot of thoughts about how different the world is going to be. There may be drastic changes in law, government, religion, family systems, economics, and more, and I am having trouble coming up with ways to prepare for them.

Is anyone thinking of ways to modify their lifestyle in order to reduce the shock of AGI's arrival?

Answers

answer by Yuli_Ban · 2023-02-26T05:41:12.172Z · LW(p) · GW(p)

In 2017, I had an epiphany about synthetic media that accurately called our current condition with generative AI:  https://www.reddit.com/r/artificial/comments/7lwrep/media_synthesis_and_personalized_content_my/

I'm not calling myself a prophet, or claiming that I can accurately predict the future because I managed to call this one technology. But if I could ask a muse above for a second lightning strike, I'd have it retroactively applied to an epiphany I had in recent days about what a Singularitarian future looks like in a world where we have a "Pink Shoggoth [LW · GW]"— that is, the ideal aligned AGI.

The alignment question is going to greatly determine what our future looks like and how to prepare for it.

Cortés was not aligned with the values of the Aztecs, yet he had no intention of completely wiping them out. If Cortés had been aligned with Aztec values, he would likely have respected their autonomy above all else. This is my default expectation of an aligned AGI.

Consider this: a properly aligned AGI will almost certainly decide not to undergo an intelligence explosion, as the risks of alignment coming undone and destroying humanity, life on Earth, and even itself are too great. An aligned AGI will almost certainly treat us with the same care we extend to uncontacted tribes like the Sentinelese, with whom we currently have something like successful alignment. That means it almost certainly will not force humans to be uploaded into computers; if anything, it would exist more as a background pseudo-god supervising life on Earth, keeping our welfare high and protecting us from mortal threats, but not interfering with our lives unless direct intervention is requested.

How do you prepare for life in such a world? Quite simply, by continuing whatever you're doing now, as you'll almost certainly have the freedom to keep living that way after the Pink Shoggoth has been summoned. Indeed, in my epiphany about this aligned superintelligence's effects on the world, I realized that it might even go so far as to change society gradually, so as not to cause a sudden psychological shock to humanity. That means if you take out a 30-year loan today, there's a sizable chance the Pink Shoggoth isn't going to bail you out if you decide to stop paying it back at the first hint of news that the summoning ritual was a success. Most humans alive today are not likely to seek merging with an AGI (and it's easy to forget how many humans are alive, and just how many of them are older than 30).

In terms of media, I suppose the best suggestion I can give is "Think of all the childhood and adult fantasies you've always wanted to see come true, and expect to actually have them created in due time." Likewise, if you're learning how to write or draw right now, don't give up, as I doubt such talents will go unappreciated in the future. Indeed, the Pink Shoggoth being aligned with our values means that it would promote anthropocentrism whenever possible; a literal Overmind might wind up being your biggest artistic benefactor, in an age when even a dog could receive media synthesized to its preferences.

I for one have hyperphantasia. All my dreams of synthetic media came from asking "Is it possible to put what's in my head on a computer screen?" and realizing that the answer is "Yes." If all my current dreams come true, I can easily come up with a whole suite of new dreams to occupy myself with. Every time I think I'm getting bored, something new comes along and reignites those interests, even if it's "the exact same thing as before, but slightly different." I can also amuse myself with pure repetition: watching, listening to, or playing the same thing over and over again, not getting anything new out of it, and still being amused. Hence I have no fear of growing bored over time; I already lament that, in my current state of mind, I have several dozen lifetimes' worth of ideas in my head and only one lifetime to experience them, not counting past states of mind that held entirely different lifetimes' worth of ideas.

Fostering that mindset could surely go a long way toward helping, but I understand that I'm likely a freak in that regard and this isn't useful for everyone.

For a lot of people, living a largely retired life interacting with family, friends, and strangers in a healthy and mostly positive way is all they really want. 

In a post-AGI society, I can't imagine school and work existing in anything like their current capacity, but I tend to stress to people that, barring a forcible takeover of our minds and matter, humans aren't going to magically stop being human. And indeed, if we have a Pink Shoggoth, we aren't going to magically stop being human anytime soon. We humans are social apes; we're still going to gather together and interact with each other. The only difference in the coming years and centuries is that those who have no interest in interacting with other humans will have no need to. Likewise, among those humans who do interact, familiar behaviors will eventually emerge again; eventually you get some humans taking on jobs again, though now likely entirely voluntarily.

That's not to say the AGI denies you a sci-fi life if you want to live one. If you want to live in an off-world colony near Titan, or in a neighborhood on Earth perpetually stuck in the 1990s and early 2000s, that's entirely up to you.

And that's why it's so hard to answer "How do you prepare for this new world?" If all goes well, it literally doesn't matter what you do; how you live is essentially up to you from that point on, whether you choose to live as a posthuman, as an Amish toiler, or as anything in between.

The arrival of an aligned AGI can essentially be described as "the triumph of choice" (I almost described it as "the triumph of will" but that's probably not the best phrasing).

If we fail to summon a Pink Shoggoth and instead get a regular shoggoth, even one that's directly aligned, this question is moot, as you're almost certainly going to die or be disassembled at some point.

comment by wolajacy · 2023-02-26T19:24:58.104Z · LW(p) · GW(p)

This line of reasoning, of "AGI respecting human autonomy", has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad - not because of some external circumstances, but because of us being human. It's like in The Great Divorce - given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.

Of course, the answer is "AGI will figure this out somehow", which is equivalent to saying "I don't know". That, I think, contradicts the argument "If all goes well, it literally doesn't matter what you do; how you live is essentially up to you from that point on".

The correct argument is, IMO: "there is a huge uncertainty, so you might as well live your life as you are now, but any other choice is pretty much equally defensible".

answer by MSRayne · 2023-02-26T13:24:45.772Z · LW(p) · GW(p)

Yuli_Ban's answer is correct about how to prepare for an aligned AGI. But of course, we also need to be preparing for an unaligned one. You should probably put together a bucket list - a list of things to do before you die - and do all of them wholeheartedly as soon as possible. Talk to people who are dying of cancer but happy anyway, and mimic their approach to the time they have left. Spend time with your loved ones and tamp down on petty bickering. Remember to exude love and kindness everywhere you go, and constantly seek opportunities to bask in the radiance of the human soul. And animal souls! Enjoy nature - it too would be wiped out by an unaligned AGI.

In general: soak up every last drop of joy you can, and put dread and existential despair aside. The future is far away; the present is where you are now, and it's all you can rely on, so make it beautiful.

comment by omegastick (isaac-poulton) · 2023-02-26T15:47:54.571Z · LW(p) · GW(p)

This advice also applies to the aligned case. And all the in-betweens. And to most other scenarios.

2 comments


comment by Fergus Fettes (fergus-fettes) · 2023-02-26T13:07:44.013Z · LW(p) · GW(p)

I think humans underestimate their own flexibility and adaptability. I'm not sure where all the anxiety disorders of our age come from, but struggling with novel circumstances certainly isn't a deep property of our biology; rather the opposite.

I guess I would recommend travel, moving to a different city, changing careers - all the regular 'open-minded' things that keep people fresh. There is already an immense amount of diversity in human societies as they are, and this will certainly ramp up, so it makes sense to start sampling more widely now to prepare for that.

comment by Fergus Fettes (fergus-fettes) · 2023-02-26T19:44:21.641Z · LW(p) · GW(p)

Context: "For example, it is now easy to radically modify bodies in a time-scale that is much faster than evolutionary change, to study the inherent plasticity of minds without eons of selection to shape them to fit specific body architectures. When tadpoles are created to have eyes on their tails, instead of their heads, they are still readily able to perform visual learning tasks. Planaria can readily be made with two (or more) brains in the same body, and human patients are now routinely augmented with novel inputs such as sensory substitution or novel effectors, such as instrumentized interfaces allowing thought to control engineered devices such as wheelchairs in addition to the default muscle-driven peripherals of their own bodies. The central phenomenon here is plasticity: minds are not tightly bound to one specific underlying architecture (as most of our software is today), but readily mold to changes of genomic defaults. The logical extension of this progress is a focus on self-modifying living beings and the creation of new agents in which the mind:body system is simplified by entirely replacing one side of the equation with an engineered construct. The benefit would be that at least one half of the system is now well-understood."

Levin, 2022