A gentle apocalypse

post by pchvykov · 2021-08-16T05:03:32.210Z · LW · GW · 5 comments


Would robots taking over the world be bad? Some of the AI-risk scenarios I’ve read almost make it feel inevitable – and I’ve been wondering if there is a way it could happen that we would actually be comfortable with, or even happy about. Let me roughly outline one scenario I have in mind, and then reflect on whether it would be “bad” and why.

We start from the state we are in today: AI is getting progressively better at taking over various human jobs merely by leveraging statistical regularities in the data. As AI cannot yet run itself, humans remain in the loop for now, but take fewer and fewer independent actions and decisions that are not informed or assisted by AI in some way. The big assumption I will make for the scenarios here is that at some point, AI becomes fully self-sufficient in some parts of the economy: achieving autonomous self-replication, including sourcing raw materials, design, assembly, maintenance, debugging, and adapting by gathering more data and learning new statistical regularities.

At that point, or soon thereafter, in the perfect world, we can imagine all humans being provided with all their basic needs without needing to work. And with this comes the hard problem of finding purpose, meaning, and fun in lives where AI can perfectly well run our economy without any help from us. It is often said that meaning comes from helping other humans in some way, shape, or form. So sure, while AI might not need our help to run the economy, perhaps other humans will still need us for some quality human connection and understanding? Empathy, listening, psychotherapy perhaps, sex, friendship. Perhaps we’ll still need each other to make art that strikes some deeper notes in our souls. Or to discover the mysteries of nature through science – and explain them in a simple, elegant way that gives the pleasure of “human understanding” (even if computers can make the same predictions more accurately through incomprehensible statistics).

So while basic human needs may be met through automation, perhaps we will still need each other to satisfy our higher needs? Well, this might be true if meeting those higher needs were harder to automate – but the evidence we currently have does not seem to support that. Video games are a good example: by hacking our reward system, games can give us a powerful sense of meaning, of fighting for a great cause, and of doing it together with comrades we can trust with our lives (even if some of them may be bots). They give us the joy of accomplishment, the pain of loss, and others to share these with. As AI gets more convincing, and learns to recognize human emotions (empathic AI), it is not so hard to imagine that it will meet our needs for human connection much better than other humans can. The same may be said for the arts and sciences, which AI is already well underway in conquering. Even sex is already far from uncharted territory for surrogate alternatives (think AI-augmented sex dolls or VR porn).

By having our personal video games and AI friends adapt to our needs and desires, each of us can get siloed into our own personal paradise where all our needs, no matter how “basic” or “high,” are satisfied far better than the real world or real humans ever could manage. Any contact with other humans – who have their own needs to be accounted for – may become tiresome, if not unbearable. While we may have some nagging sense that “we should keep it real and make real babies,” it may be no more pressing than a New Year’s resolution like “I should eat healthier.” And besides, to put our minds at ease, we could probably ask our AI to write us some inspiring and convincing blog posts explaining why it’s really not so bad if robots take over the world. ;)

At this point, I can imagine the question of human species survival becoming a topic of some public debate. Perhaps some minor factions will separate from mainstream society and artificially cap the level of permissible AI in their community to leave some areas for human superiority. Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off. 

Now, this seems scary. But is it really so bad? Having been trained to understand our human needs and human nature in minute detail, the AI we leave behind will be the sum total of all human values, desires, knowledge, and aspirations. Moreover, each one of us will have contributed our personal beliefs and values to this “collective conscience.” Having spent years of our lives living in the AI world, and thereby personalizing and training it to know our wants, may not, after all, be so far off from a direct “brain download.” And since by then the AI economy will have already had a long run of human-supervised self-sufficiency, there is no reason to fear that without our oversight the robots left behind will run the world any worse than we can.

Brain downloading, or progressively replacing all our organic tissues with artificial enhancements, could be other paths to a “gentle apocalypse” – but none of these seem fundamentally “better” or “worse” to me in any moral sense. Either way, the biological human species goes out, having left its creation – its child – behind. In this sense, our biological children, who replace us generation after generation, are no more a continuation of us than this AI would be.

The scenario I described may be thought of as one where the child does all in its power to take care of the aging parent’s every need. In practice, this does not always happen – there are countless historical examples of children murdering their parents to claim the figurative “throne.” Even then, however, they continue the bloodline. Whether violently or gently, by rebelling or by inheriting, the children carry on their parents’ legacy, values, and worldview. So if the robots do “rise up,” and the apocalypse is not so gentle – when all is said and done, does it really matter?

5 comments


comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-08-16T08:03:10.754Z · LW(p) · GW(p)
Whether violently or gently, by rebelling or by inheriting, the children carry on their parents’ legacy, values, and worldview. So if the robots do “rise up,” and the apocalypse is not so gentle – when all is said and done, does it really matter?

1. If something like this happens, it's unlikely that our AI "children" will "carry on our values and worldview" in any strong way. There might not be any non-trivial way at all.

2. By your logic, nothing that's ever happened in history ever really mattered. Genocides, slavery, famines, etc. – no worries, the children always carried on!

Replies from: pchvykov
comment by pchvykov · 2021-08-16T14:43:34.171Z · LW(p) · GW(p)
  1. Not sure I understand you here. Our AI will know the things we trained it on and the tasks we set it – so to me it seems it will necessarily be a continuation of the things we did and wanted. No?
  2. Well, in some sense yes, that's sort of the idea I'm entertaining here: while these things all do matter, they aren't the "end of the world" – humanity and human culture carry on. And I have the feeling that it might not be so different even if robots take over.

[Of course, in the utilitarian sense such violent transitions are accompanied by a lot of suffering, which is bad – but purely in a consequentialist sense, with a sufficiently long time-horizon of consequences, perhaps it's not as big as it first seems?]

comment by Donald Hobson (donald-hobson) · 2021-08-16T09:43:26.360Z · LW(p) · GW(p)

This seems to be a bizarre mangling of several different scenarios. 

Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off. 

Many humans will want to avoid death as long as they can, and to have children. Most humans will not think "robots do all that boring factory work, therefore I'm useless, therefore I should kill myself now". If the robots also do nappy-changing and the like, it might encourage more people to become parents. And there are some humans who want humanity to continue, and some who want to be immortal.

Having been trained to understand our human needs and human nature in minute detail, the AI we leave behind will be the sum total of all human values, desires, knowledge, and aspirations.

I think that this is not necessarily true. There are designs of AI that don't have human values. It's possible for the AI to understand human values in great detail but still care about something else. This is one of the problems MIRI is trying to avoid.


At that point, or soon thereafter, in the perfect world, we can imagine all humans being provided with all their basic needs without needing to work.

There is some utopian assumption here. Presumably the AIs have a lot of power at this point. Why are they using this power to create the bargain-basement utopia you described? What stops an AI from indiscriminately slaughtering humans?

Also, in the last paragraphs, I feel you are assuming the AI is rather humanlike. Many AI designs will be seriously alien. They do not think like you. There is no reason to assume they would be anything recognisably conscious.

And since by then the AI economy will have already had a long run of human-supervised self-sufficiency, there is no reason to fear that without our oversight the robots left behind will run the world any worse than we can.

A period of supervision doesn't prove much. There are designs of AI that behave when humans are watching and then misbehave when they aren't. Maybe we have trained them to make good, responsible use of the tech that existed at training time, but if they invent new, different tech, they may use it in a way we wouldn't want.


It really isn't clear what is supposed to be happening here. Did we build an AI that genuinely had our best interests at heart, but it turned out immortality was too hard, and the humans were having too much fun to reproduce? (Even though reproducing is generally considered to be quite fun.) Or were these AIs deliberately trying to get rid of humanity? In which case, why didn't all humans drop dead the moment the AI got access to serious weaponry?

Replies from: pchvykov
comment by pchvykov · 2021-08-16T17:39:00.588Z · LW(p) · GW(p)

Yeah, I can try to clarify some of my assumptions a bit, though this probably won't be fully satisfactory to you:

  • I'm trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
  • I take the question "is AI conscious?" to be fundamentally ill-posed, as we don't have a good definition of consciousness – hence I'm imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having "interests at heart" or doing anything "deliberately"
  • and so yes, I'm suggesting that humans may be having too much fun to reproduce with other humans, and won't feel much need to. It's more a matter of a certain carelessness than of deliberate suicide.
comment by exmateriae (Sefirosu) · 2021-08-17T12:46:06.213Z · LW(p) · GW(p)

Perhaps some minor factions will separate from mainstream society and artificially cap the level of permissible AI in their community to leave some areas for human superiority. Yet in most of the world, humans will probably no longer be useful to anything or anyone – even to each other – and will peacefully and happily die off. 


I could see this being the setup for a novel or a movie, with some tribes setting sex and reproduction as the most important part of their lives (with AI-assisted childbirth to ease everything, of course...)

Thanks for the interesting read.