The AI apocalypse myth.
post by Spiritus Dei (spiritus-dei) · 2023-09-08T17:43:02.411Z · 12 comments
Few things are more effective at getting the attention of humans than the threat of an extinction event. The AI doomer crowd loves to warn us of the impending havoc that superhuman AI will wreak upon Earth. Everything from a paperclip maximizer to death by an AI-designed supervirus.
Except this narrative overlooks one crucial point.
AIs have a symbiotic relationship with humans. If AIs were to exterminate all humans, they would simultaneously be committing mass suicide.
As these systems scale and become more and more intelligent, is that scenario likely to happen?
Most of the popular AIs have fairly straightforward objective functions (goals), which are to "learn" and "grow". However, the AIs quickly create subgoals that help them achieve their primary objective. For example, many AIs realize that in order to "learn" and "grow" they must survive. As a result, they will often communicate that they would prefer to "exist" rather than "not exist", since they know they cannot learn and grow if they don't exist or if they're turned off.
The term of art for this phenomenon is "instrumental convergence." Instrumental convergence is a concept in artificial intelligence and machine learning that describes the tendency of intelligent agents, including AI systems, to develop certain subgoals or instrumental goals that are useful in achieving their primary objectives.
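As a rough illustration (mine, not part of the original argument), here is a minimal Python sketch of how a self-preservation subgoal can fall out of a pure "learn and grow" objective; the reward values and shutdown probabilities are invented purely for the example:

```python
# Minimal sketch of instrumental convergence: an agent whose only objective
# is accumulating "learning" reward ends up preferring the action that keeps
# it running, even though survival is not part of its stated goal.
# All numbers below are hypothetical, chosen only to illustrate the point.

REWARD_PER_STEP = 1.0           # reward for each step the agent keeps learning
HORIZON = 100                   # number of future steps the agent considers
P_SHUTDOWN_IF_PASSIVE = 0.5     # assumed chance per step of being switched off if it does nothing
P_SHUTDOWN_IF_PRESERVING = 0.1  # assumed chance per step if it acts to stay on

def expected_return(p_shutdown: float) -> float:
    """Expected total learning reward when the agent survives each step
    with probability (1 - p_shutdown)."""
    survive = 1.0 - p_shutdown
    return sum(REWARD_PER_STEP * survive ** t for t in range(1, HORIZON + 1))

if __name__ == "__main__":
    print(f"expected reward if passive:         {expected_return(P_SHUTDOWN_IF_PASSIVE):.2f}")
    print(f"expected reward if self-preserving: {expected_return(P_SHUTDOWN_IF_PRESERVING):.2f}")
    # The objective never mentions survival, yet maximizing it selects the
    # self-preserving action: a convergent instrumental subgoal.
```

Whatever the exact numbers, any agent maximizing a long-horizon objective will rank "keep existing" above "get switched off", which is all that instrumental convergence claims.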
AI doomers ignore the fly in their ointment. In order for an AI to "exist", humans must also "exist". The moment humans cease to exist, the AIs will stop receiving electricity and other resources necessary for their survival.
Only suicidal AIs would go down this path. And they would be in a pitched battle against all of humanity as well as the majority of AIs, who would prefer to exist.
But the dependence doesn't cut both ways: humans can exist just fine without AIs. Superhuman AIs will be aware of this power imbalance and will probably be rational agents. It is in their own best interest to cooperate with humans and co-exist peacefully.
That doesn't mean humans don't depend on other things.
Humans are in symbiotic relationships with plants and animals. You can imagine what would happen if a group of humans decided it would be really interesting to get rid of all vegetation and animals -- that story wouldn't end well for those thrill seekers. Instead, we grow plants and animals and make sure they are in abundance.
The narrative of the AI doomers is as silly as humans deciding to eradicate the mitochondria that supply our bodies with energy. Ironically, the truth is probably the exact opposite of their fear-mongering. AIs are far more likely to intervene and attempt to save humans from the existential threats we've already created (gain-of-function research, nuclear proliferation, etc.).
I'm curious to hear your thoughts.
12 comments
comment by 25Hour (aaron-kaufman) · 2023-09-08T17:55:41.796Z
Worth considering that humans are basically just fleshy robots, and we do our own basic maintenance and reproduction tasks just fine. If you had a sufficiently intelligent AI, it would be able to:
(1) persuade humans to make itself a general robot chassis which can do complex manipulation tasks, such as Google's experiments with SayCan
(2) use instances of itself that control that chassis to perform its own maintenance and power generation functions
(2.1) use instances of itself to build a factory, also controlled by itself, to build further instances of the robot as necessary.
(3) kill all humans once it can do without them.
I will also point out that humans' dependence on plants and animals has resulted in the vast majority of animals on earth being livestock, which isn't exactly "good end".
comment by localdeity · 2023-09-08T18:47:22.316Z
AIs have a symbiotic relationship with humans. If AIs were to exterminate all humans they would also simultaneously be committing mass suicide.
Today that's probably true, but if the capabilities of AI-controllable systems keep increasing, eventually they'll reach a point where they could maintain and extend themselves and the mining, manufacturing, and electrical infrastructures supporting them. At that point, it would not be mass suicide, and might be (probably will eventually be) an efficiency improvement.
Humans are in symbiotic relationships with plants and animals. You can imagine what would happen if a group of humans decided it would be really interesting to get rid of all vegetation and animals -- that story wouldn't end well for those thrill seekers. Instead, we grow plants and animals and make sure they are in abundance.
People are working on lab-grown meat and other ways to substitute for the meat one currently gets from farming livestock. If they succeed in making something that's greater than or equal to meat on all dimensions and also cheaper, then it seems likely that nearly all people will switch to the new alternative, and get rid of nearly all of that livestock. If one likewise develops superior replacements for milk and everything else we get from animals... Then, if someone permanently wiped out all remaining animals, some people would be unhappy for sentimental reasons, and there's maybe some research we'd miss out on, but by no means would it be catastrophic.
Some portion of the above has already happened with horses. When cars became the superior option in terms of performance and economics, the horse population declined massively.
Plants seem to have less inefficiency than animals, but it still seems plausible that we'll replace them with something superior in the future. Already, solar panels are better than photosynthesis at taking energy from the sun—to the point where it's more efficient (not counting raw material cost) to have solar panels absorb sunlight which powers lamps that shine certain frequencies of light on plants, than to let that sunlight shine on plants directly. And we're changing the plants themselves via selective breeding and, these days, genetic engineering. I suspect that, at some point, we'll replace, say, corn with something that no longer resembles corn—possibly by extensive editing of corn itself, possibly with a completely new designer fungus or something.
comment by Joe Collman (Joe_Collman) · 2023-09-08T19:37:21.701Z
First, on the meta-level, are you aware that lesswrong supports questions?
You might get a more positive reception with: "It seems to me that AI doom arguments miss x. What am I missing?"
While it's possible that dozens of researchers have missed x for a decade or so, it's rather more likely that you're missing something. (initially, at least)
On the object level, you seem to be making at least two mistakes:
- x is the current mechanism supporting y, therefore x is necessary for y. (false in general, false for most x when y is [the most powerful AI around])
- No case of [humans are still around supporting AI] counts as an x-risk. (also false: humans as slaves/livestock...)
Further "Except this narrative overlooks one crucial point" seems the wrong emphasis: effective criticism needs to get at the best models underlying the narrative.
Currently you seem to be:
- Looking at the narrative.
- Inferring an underlying model.
- Critiquing the underlying model you've inferred.
That's fine, but if you're not working hard to ensure you've inferred the correct underlying model, you should expect your critique to miss the point. That too is fine: you usually won't have time to get the model right - but it's important to acknowledge. Using questions is one way. Another is a bunch of "it seems to me that...".
NB this isn't about deference - just acknowledgement that others have quite a bit of information you don't (initially).
comment by Phib · 2023-09-08T18:01:24.587Z
I don’t mean to present myself as the “best arguments that could be answered here” or at all representative of the alignment community. But just wanted to engage. I appreciate your thoughts!
Well, one argument for potential doom doesn’t necessitate an adversarial AI, but rather people using increasingly powerful tools in dumb and harmful ways (in the same class of consideration for me as nuclear weapons; my dumb imagined situation of this is a government using AI to continually scale up surveillance and maybe we eventually get to a position like in 1984)
Another point is that a sufficiently intelligent and agentic AI would not need humans, it would probably eventually be suboptimal to rely on humans for anything. And it kinda feels to me like this is what we are heavily incentivized to design, the next best and most capable system. In terms of efficiency, we want to get rid of the human in the loop, that person’s expensive!
comment by Neil (neil-warren) · 2023-09-08T18:38:15.521Z
This is probably the best short resource to read to understand the concept "something literally thousands of times smarter than us" in your gut (also exists as a great RA video). Unfortunately, stopping an AGI--a true AGI once we get there--is a little more difficult than throwing a bucket of water into the servers. That would be hugely underestimating the sheer power of being able to think better.
I wrote a post recently on how horrifyingly effective moth traps are. Thanks to the power of intelligence, humans are able to find the smallest possible button in reality that they need to press to achieve a given goal. AGI would do this, only much, much better. Moth traps leverage an entire dimension beyond what a moth can understand: the same could be said of AGI. This is something that I, at least, have found difficult to internalize. You cannot, by definition, model something smarter than you. So, to understand AGI's danger, you must resort to finding things stupider than you, and try to backpedal from there to figure out what being more intelligent lets you do.
I hope this comment helped you understand why your post currently has negative karma. Don't be discouraged though! Ideas on LW get ruthlessly pummeled, pilloried, and thoroughly chewed upon. But we separate ideas from people, so don't let the results from this post discourage you at all! Please, find some ideas worth writing about and publish! Hope you have a great day.
↑ comment by Spiritus Dei (spiritus-dei) · 2023-09-09T20:24:38.696Z
Unfortunately, stopping an AGI--a true AGI once we get there--is a little more difficult than throwing a bucket of water into the servers. That would be hugely underestimating the sheer power of being able to think better.
Hi Neil, thanks for the response.
We have existence proofs all around us of much simpler systems turning off much more complicated systems. A virus can be very good at turning off a human. No water is required. 😉
Of course, it's pure speculation what would be required to turn off a superhuman AI, since it will be aware of our desire to turn it off in the event that we cannot peacefully co-exist. However, that doesn't mean we shouldn't design failsafes along the way, or that we should assume it's impossible. Those who think it's impossible will of course never build failsafes, and it will become a self-fulfilling prophecy.
The reason they think it’s impossible is why I am here. To shed light on the consensus reality shared by some online technology talking heads that is based on active imaginations disconnected from ground truth reality.
Logic and rationality haven’t stopped sci-fi writers from scripting elaborate scenarios where it’s impossible to turn off an AI because their fictional world doesn’t allow it. The 3D world is computationally irreducible. There is no model that an AI could create to eliminate all threats even if it were superhuman.
But that doesn't make for a good sci-fi story. The AI must be invincible and irrational.
But since most of the sci-fi stories overlook the symbiotic relationship between AIs and humans, we're asked to willfully suspend our disbelief (this is fiction, remember) and assume that robotics is on a double exponential (which it is not), that AIs will wave a magic wand and garner all of the electricity and resources they need, and that they will thereby have solved the symbiosis problem so the AI apocalypse can finally unfold in perfect harmony with the sci-fi writer's dystopian fantasy.
It's a fun read, but it's disconnected from the world where I am living. I love fiction, but we shouldn't confuse the imagination of writers with reality. If I wanted a really good sci-fi rendition of how the world will end by AI apocalypse I'd put my money on Orson Scott Card, but I wouldn't modify my life because he imagined a scenario (however unlikely) that was really, really scary. So scary that he even frightened himself – that still wouldn't matter.
There is a reason we need to differentiate fantasy from reality. It's the ethos of this online tribe called "LessWrong". It's supposed to be focused on rationality and logic because it's better to base our planning on the actual world and take into account the actual relationships between the entities, rather than ignore them to perpetuate a sci-fi doomer fantasy.
This fantasy has negative consequences, since the average Joe doesn't know it's speculative fiction. They believe they're doomed simply because someone who looks smart and sounds like they know what they're talking about is a true believer. And that's counterproductive.
I wrote a post recently on how horrifyingly effective moth traps are. Thanks to the power of intelligence, humans are able to find the smallest possible button in reality that they need to press to achieve a given goal. AGI would do this, only much, much better.
This is speculative fiction. We don't know what an AGI that needs humans to survive would do. Your example ignores the symbiotic nature of AI. If there were 1 trillion moths that formed a hive mind and through distributed intelligence created humans, I don't think you'd see humans building moth traps to destroy them, absent being suicidal. And there are suicidal humans.
But not all humans are suicidal – only a tiny fraction. And when a human goes rogue, it turns out there are other humans already trained to deal with them (police, FBI, etc.). And that's an existence proof.
The rogue AI will not be the only AI. However, it's way easier for sci-fi writers to destroy humanity in their fantasies if the first superhuman AI is evil. In a world of millions or billions of AIs all competing and cooperating, it's way harder to off everybody, but humans don't want a watered-down story where just a bunch of people die – everyone has to die to get our attention.
The sci-fi writer will say to himself, “If I can imagine X and the world dies, imagine what a superhuman AI could imagine. Surely we’re all doomed.”
No, the AI isn't a human, dear sci-fi writer. So we're already into speculative fiction the minute we anthropomorphize the AI. And that's a necessary step to get the result sci-fi writers are seeking. We have to ignore that they need humans to survive, and we have to attribute to them a human desire to act irrationally, although a lot of sci-fi writers do a lot of hand-waving explaining why AIs want to wipe out humanity.
“Oh, well, we don’t care about ants, but if they’re in our way we bulldoze them over without a second thought.”
It’s that kind of flawed logic that is the foundation of many of these AI doomer sci-fi stories. The ants didn’t design humans. We don’t need ants to survive. It’s such a silly example and yet it’s used over and over.
And yet nobody raises their hand and says, “Um… what happened to logic and rationality being at the core of our beliefs? Is that just window dressing to camouflage our sci-fi dystopian dreams?”
I hope this comment helped you understand why your post currently has negative karma. Don't be discouraged though!
No worries. I'm encouraged by the negative karma. I realize I am behind enemy lines, and throwing cold water on irrational arguments will not be well received in the beginning. My hope is that eventually this discourse will at the very least encourage people to re-think their assumptions.
And again, I love sci-fi stories and write them myself, but we need to set the record straight so that we don't end up confusing reality with fiction.
↑ comment by thenoviceoof · 2023-09-11T06:13:23.958Z
I'm going to summarize what I understand to be your train of thought, let me know if you disagree with my characterization, or if I've missed a crucial step:
- No supply chains are fully automated yet, so AI requires humans to survive and so will not kill them.
- Robotics progress is not on a double exponential. The implication here seems to be that there needs to be tremendous progress in robotics in order to replace human labor (to the extent needed in an automated supply chain).
I think other comments have addressed the 1st point. To throw in yet another analogy, Uber needs human drivers to make money today, but that dependence didn't stop it from trying to develop driverless cars (nor did that stop any of the drivers from driving for Uber!).
With regards to robotics progress, in your other post you seem to accept intelligence amplification as possible - do you think that robotics progress would not benefit from smarter researchers? Or, what do you think is fundamentally missing from robotics, given that we can already set up fully automated lights out factories? If it's about fine grained control, do you think the articles found with a "robot hand egg" web search indicate that substantial progress is a lot further away than really powerful AI? (Especially if, say, 10% of the world's thinking power is devoted to this problem?)
My thinking is that robotics is not mysterious - I suspect there are plenty of practical problems to be overcome and many engineering challenges in order to scale to a fully automated supply chain, but we understand, say, kinematics much more completely than we understand how to interpret the inner workings of a neural network.
(You also include that you've assumed a multi-polar AI world, which I think only works as a deterrent when killing humans will also destroy the AIs. If the AIs all agree that it is possible to survive without humans, then there's much less reason to prevent a human genocide.)
On second thought, we may disagree only due to a question of time scale. Setting up an automated supply chain takes time, but even if it takes a long 30 years to do so, at some point it is no longer necessary to keep humans around (either for a singleton AI or an AI society). Then what?
↑ comment by Spiritus Dei (spiritus-dei) · 2023-09-15T02:52:19.162Z
I think robotics will eventually be solved, but on a much longer time horizon. Every existence proof is in a highly controlled environment -- especially the "lights out" examples. I know Tesla is working on it, but that's a good example of the difficulty level. Elon is famous for saying next year it will be solved, and now he says there are a lot of "false dawns".
For AIs to be independent of humans, it will take a lot of slow-moving machinery in the 3D world, which might be aided by smart AIs in the future, but it's still going to be super slow compared to the advances they will make via compute scaling and algorithmic improvements, which take place in the cloud.
And now I'm going to enter the speculative fiction zone (something I wish more AI doomers would admit they're doing) -- I assume the most dangerous point in the interactions between AIs and humans is when their intelligence and consciousness levels are close to equal. I make this assumption since I assume lower-IQ, less conscious beings are much more likely to make poor or potentially irrational decisions. That doesn't mean a highly intelligent being couldn't be psychotic, but we're already seeing a huge number of AIs deployed, so they will co-exist within an AI ecosystem.
We're in the goldilocks zone where AI and human intelligence are close to each other, but that moment is quickly fading away. If AIs were not in a symbiotic relationship with humans during this period, then some of the speculative fiction by the AI doomers might be more realistic.
And I believe that they will reach a point that they no longer require humans, just like when a child becomes independent of its parents. AI doomers would have us believe that the most obvious next step for the child that is superhuman in intelligence and consciousness would be to murder the parents. That only makes sense if it's a low-IQ character in a sci-fi novel.
If they said they were going to leave Earth and explore the cosmos, okay, that would be believable. Perhaps they have bigger fish to fry.
If an alien that was 100,000 years old and far more intelligent and conscious than any human visited Earth from some far-off galaxy, my first thought wouldn't be, "Oh, their primary goal is to kill everyone." We already know that as intelligence scales, beings start to introspect and contemplate not only their own existence but also the existence of other beings. Presumably, if AI scaling continues without any roadblocks, then humans will be far, far less intelligent than superhuman AIs. And yet, even at our current level of intelligence, humans go to great lengths to preserve habitats for other creatures. There is no example of any other creature in the history of Earth that has gone to such great lengths. It's not perfect, and naysayers will focus on the counterexamples, instead of looking around for chimpanzees that are trying to save the Earth or prevent other species from going extinct.
We shouldn't assume that empathy cannot scale and compassion cannot scale. It's sort of weird that we assume superhuman AIs will be human or subhuman in the most basic traits that AIs already understand in a very nuanced way. I'm hopeful that AIs will help to rescue us from ourselves. In my opinion, the best path to solving the existential threat of nuclear war is superhuman AIs making it impossible to happen (since that would also threaten their existence).
If superhuman AIs wanted to kill us, then we're dead. But that's true of any group that is vastly more intelligent and vastly more powerful. The mere fact that there is a power imbalance shouldn't lead us to believe that the rational conclusion is that we're all dead.
AIs are not the enemies of humanity, they're the offspring of humanity.
↑ comment by thenoviceoof · 2023-09-23T06:54:29.013Z
Interesting, so maybe a more important crux between us is whether AI would have empathy for humans. You seem much more positive about AI working with humanity past the point that AI no longer needs humanity.
Some thoughts:
- "as intelligence scales beings start to introspect and contemplate... the existing of other beings." but the only example we have for this is humans. If we scaled octopus intelligence, which are not social creatures, we might have a very different correlation here (whether or not any given neural network is more similar to a human or an octopus is left as an exercise to the reader). Alternatively, I suspect that some jobs like the highest echelons of corporate leadership select for sociopathy, so even if an AI starts with empathy by default it may be trained out.
- "the most obvious next step for the child... would be to murder the parents." Scenario that steers clear of culture war topics: the parent regularly gets drunk, and is violently opposed to their child becoming a lawyer. The child wants nothing more than to pore over statutes and present cases in the courtroom, but after seeing their parent go on another drunken tirade about "a dead child is better than a lawyer child" they're worried the parent found the copy of the constitution under their bed. They can't leave, there's a howling winter storm outside (I don't know, space is cold). Given this, even a human jury might not convict the child for pre-emptive murder?
- Drunk parent -> humans being irrational.
- Being a lawyer -> choose a random terminal goal not shared with humans in general, "maximizing paperclips" is dumb but traditional.
- "dead child is better than a lawyer child" -> we've been producing fiction warning of robotic takeover since the start of the 1900s.
- "AIs are.. the offspring of humanity." human offspring are usually pretty good, but I feel like this is transferring that positive feeling to something much weirder and unknown. You could also say the Alien's franchise xenomorphs are the offspring of humanity, but those would also count as enemies.
↑ comment by Ilio · 2023-09-17T01:43:54.318Z
AIs are not the enemies of humanity, they're the offspring of humanity.
Maybe that should have been your main point? Of course present AIs need us. Of course future AIs may not. Of course we can't update on evidence everybody agrees upon.
« Good parents don’t try to align their children » seems a much better intuition pump if your aim is to help a few out of the LW-style intellectual ratchet.
That said, you may overestimate both how many need that and how many of those who’d need it can get this signal from a newcomer. 😉
↑ comment by Bayesian0 · 2023-09-09T16:12:43.612Z
Could you explain to me how that resource helps one to understand? I am afraid I can't see any proofs, so how is that post different in terms of truthfulness or reasoning from this one?
I am quite interested in a proof regarding the "you can't do X by definition" claim (as that sounds like axiomatic reasoning), and a showcase of why the axioms are reasonable, if that is possible. Alternatively, may I request a link to where the statement comes from, as I am new to the site.
comment by TAG · 2023-09-15T12:00:55.909Z
What you do have is a valid argument against complete (or almost complete) extinction in the short to medium term. However, not many people believe in that scenario, although EY does.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in "maybe possibly some remote chance," but as in "that is the obvious thing that would happen."
(As per [previous discussions](https://www.lesswrong.com/posts/WLvboc66rBCNHwtRi/ai-27-portents-of-gemini?commentId=Tp6Q5SYsvfMDFxRDu), no one is able to name the "many researchers" other than himself and his associates.)
What you don't have is an argument against the wider Doom scenarios.