Grey Goo Requires AI
post by harsimony · 2021-01-15T04:45:35.905Z · LW · GW · 11 comments
This is a link post for https://harsimony.wordpress.com/2020/10/05/grey-goo-requires-ai/
Summary: Risks from self-replicating machines or nanotechnology rely on the presence of a powerful artificial intelligence within the machines, one capable of overcoming human control and the logistics of self-assembly across many environments.
The grey goo scenario posits that developing self-replicating machines could present an existential risk for society. These replicators would transform all matter on earth into copies of themselves, turning the planet into a swarming, inert mass of identical machines.
I think this scenario is unlikely compared to other existential risks. To see why, let's look at the components of a self-replicating machine.
Energy Source: Because you can’t do anything without a consistent source of energy.
Locomotion: Of course, our machine needs to move from place to place gathering new resources; otherwise it will eventually run out of materials in its local environment. The amount of mobility it has determines how much stuff it can transform into copies. If the machine has wheels, it could plausibly convert an entire continent. With a boat, it could convert the entire earth. With rockets, not even the stars would be safe from our little machine.
Elemental Analysis: Knowing what resources you have nearby is important. Possibilities for what you can build depend heavily on the available elements. A general purpose tool for elemental analysis is needed.
Excavation: Our machine can move to a location, and determine which elements are available. Now it needs to actually pull them out of the ground and convert them into a form which can be processed.
Processing: The raw materials our machine finds are rarely ready to be made into parts directly. Ore needs to be smelted into metal, small organics need to be converted into plastics, and so on.
Subcomponent Assembly: The purified metals and organics can now be converted into machine parts. This is best achieved by having specialized machines for different components. For example, one part of the machine might print plastic housing, another part builds motors, while a third part makes computer chips.
Global Assembly: With all of our subcomponents built, the parent machine needs to assemble everything into a fully functional copy.
Copies of Blueprint: Much like DNA, each copy of the machine must contain a blueprint of the entire structure. Without this, it will not be able to make another copy of itself.
Decision Making: Up to this point, we have a self-replicator with everything needed to build a copy of itself. However, without some decision-making process, the machine would do nothing. Without instructions, our machine is just an expensive Swiss army knife: a bunch of useful tools that just sits there. I am not claiming that these instructions need to be smart (they could simply read “go straight”, for example) but there has to be something.
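To make the architecture concrete, here is a purely hypothetical sketch of how these pieces might fit together as a single control loop. Every class and method name below is invented for illustration; the stubs stand in for the hard engineering each step hides.

```python
# Hypothetical sketch only: names and structure are invented, not a real design.

class Replicator:
    def __init__(self, blueprint: dict):
        self.blueprint = blueprint                       # Copies of Blueprint

    def step(self) -> "Replicator":
        """One generation: gather materials, build parts, assemble a copy."""
        self.move(self.blueprint["instructions"])        # Locomotion + Decision Making
        elements = self.analyze()                        # Elemental Analysis
        raw = self.excavate(elements)                    # Excavation
        materials = self.process(raw)                    # Processing
        parts = [self.build(p, materials)                # Subcomponent Assembly
                 for p in self.blueprint["parts"]]
        return self.assemble(parts)                      # Global Assembly

    # Stubs standing in for the hard engineering problems:
    def move(self, instruction): pass
    def analyze(self): return ["Fe", "C", "Si"]
    def excavate(self, elements): return elements
    def process(self, raw): return raw
    def build(self, part, materials): return part
    def assemble(self, parts): return Replicator(dict(self.blueprint))

parent = Replicator({"instructions": "go straight",
                     "parts": ["housing", "motor", "chip"]})
child = parent.step()   # the copy carries the same blueprint, ready to repeat
```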
So far, this just looks like a bunch of stuff that we already have, glued together. Most of these processes were invented by the mid-1900s. Why haven’t we built this yet? Where is the danger?
Despite looking boring, this system has the capacity to be really dangerous. This is because once you create something with a general ability to self-replicate, the same forces of natural selection which made complex life start acting on your machine. Even a machine with simple instructions and high fidelity copies will have mutations. These mutations can be errors in software, malfunctions in how components are made, errors in the blueprint, and so on. Almost all of these mutations will break the machine. But some will make their offspring better off, and these new machines will come to dominate the population of self-replicators.
Let's look at a simple example.
You build a self-replicator with the instruction “Move West 1 km, make 1 copy, then repeat” which will build a copy of itself every kilometer and move east-to-west, forming a conga line of self-replicators. You start your machine and move to a point a few kilometers directly west of it, ready to turn off the copies that arrive and declare the experiment a success. When the first machine in the line reaches you, it is followed by a tight formation of perfect copies spaced 1 meter apart. Success! Except, weren’t they supposed to be spaced 1 kilometer apart? You quickly turn off all of the machines and look at their code. It turns out that a freak cosmic ray deleted the ‘k’ in ‘km’ in the instructions, changing the spacing of machines to 1 meter and giving the machine 1000 times higher fitness than the others. Strange, you think, but at least you stopped things before they got out of hand! As you drive home with your truckload of defective machines, you notice another copy of the machine, dutifully making copies spaced 1 kilometer apart, but heading north this time. You quickly turn off this new line of machines formed by this mutant and discover that the magnet on their compass wasn’t formed properly, orienting these machines in the wrong direction. You shudder to think what would have happened if this line of replicators reached the nearest town.
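To see how small the change is in code terms, here is a toy sketch (my own illustration, not part of the original scenario) where a single deleted character in a stored instruction changes the spacing by a factor of 1000. The instruction format and parsing are invented for the example.

```python
# Toy illustration: a one-character "mutation" in a stored instruction.
import re

def spacing_in_meters(instruction: str) -> int:
    """Parse the copy spacing out of an instruction like '... 1 km ...'."""
    value, unit = re.search(r"(\d+)\s*(km|m)", instruction).groups()
    return int(value) * (1000 if unit == "km" else 1)

parent = "Move West 1 km, make 1 copy, then repeat"
mutant = parent.replace("km", "m", 1)   # a cosmic ray deletes the 'k'

print(spacing_in_meters(parent))   # 1000 meters between copies, as designed
print(spacing_in_meters(mutant))   # 1 meter: ~1000x more copies per unit distance
```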
This example is contrived, of course, but mistakes like these are bound to happen. This will give your machine very undesirable behavior in the long term, either wiping out all of your replicators or making new machines with complex adaptations whose only goal is self-replication. Life itself formed extremely complex adaptations to favor self-replication from almost nothing, and, given the opportunity, these machines will too. In fact, the possibility of mutation and growth in complexity was a central motivation for the Von Neumann universal constructor.
Fortunately, even after many generations, most of these machines will be pretty dumb; you could pick one up and scrap it for parts without any resistance. There is very little danger of a grey goo scenario here. So where is the danger? Crucially, nature did not only make complex organisms, but general intelligence. With enough time, evolution has created highly intelligent, cooperative, resource-hoarding self-replicators: us! Essentially, people, with their dreams of reaching the stars and populating the universe, are a physical manifestation of the grey goo scenario (“flesh-colored goo” doesn’t really roll off the tongue). Given enough time, there is no reason to think that self-replicating machines won’t do the same. But even before this happens, the machines will already be wreaking havoc: replicating too fast, going to places they aren’t supposed to, and consuming cities to make new machines.
But these issues aren’t fundamental problems with self-replicators. This is an AI alignment issue. The decision process of the new machines has become misaligned with what its original designers intended. Solutions to the alignment problem will immediately apply to these new systems, preventing or eliminating dangerous errors in replication. Like before, this has a precedent in biology. Fundamentally, self-replicators are dangerous, but only because they have the ability to develop intelligence or change their behavior. This means we can focus on AI safety instead of worrying about nanotechnology risks as an independent threat.
Practically, is this scenario likely? No. The previous discussion glossed over a lot of practical hurdles for self-replicating machines. For nanoscale machines, a lot of the components I listed have not yet been demonstrated and might not be possible (I hope to review what progress has been made here in a future post). Besides that, the process of self-replication is very fragile and almost entirely dependent on a certain set of elements: you simply cannot make metal parts if you only have hydrogen, for example. Additionally, these machines will have to make complicated choices about where to find new resources, how to design new machines with different resources, and how to compete with others for resources. Even intelligent machines will face resource and energy shortages, or be destroyed by people once they become a threat.
Overall, the grey goo scenario and the proposed risks of nanotechnology are really just AI safety arguments wrapped in less plausible packaging. Even assuming these things are built, the problem can essentially be solved with whatever comes out of AI alignment research. More importantly, I expect that AI will be developed before general purpose nanotechnology or self-replication, so AI risk should be the focus of research efforts rather than studying nanotechnology risks themselves.
11 comments
Comments sorted by top scores.
comment by Donald Hobson (donald-hobson) · 2021-01-15T18:44:52.068Z · LW(p) · GW(p)
Biology is very pro-mutation. In biology, the data is stored on a medium with a moderate mutation rate. There is no error correction, and many mutations make meaningful changes.
There are error detection schemes that can make the chance of an error going unnoticed exponentially tiny. You can use a storage medium more reliable than DNA. You can encrypt all the data so that a single bitflip will scramble all the instructions. If you are engineering to minimize mutations, and are prepared to pay a moderate performance penalty for it, you can ensure that any mutation which doesn't quickly destroy the bot, or damage it beyond its ability to do anything, simply never happens, not once in a whole universe full of bots.
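For concreteness, here is a minimal sketch of one such error-detection scheme: check a cryptographic checksum of the blueprint before replicating, so that a random corruption has only about a 2^-256 chance of slipping through. The blueprint contents and function names are made up for illustration.

```python
# Minimal sketch: refuse to replicate if the blueprint fails its checksum.
import hashlib

def checksum(blueprint: bytes) -> str:
    return hashlib.sha256(blueprint).hexdigest()

blueprint = b"Move West 1 km, make 1 copy, then repeat"
expected = checksum(blueprint)   # stored copy of the expected hash

def safe_to_replicate(candidate: bytes) -> bool:
    # A random mutation that still matches the hash would need a SHA-256 collision.
    return checksum(candidate) == expected

print(safe_to_replicate(blueprint))                        # True
print(safe_to_replicate(blueprint.replace(b"km", b"m")))   # False: mutation caught
```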
comment by Mitchell_Porter · 2021-01-15T10:46:22.610Z · LW(p) · GW(p)
These replicators would transform all matter on earth into copies of themselves
A replicator doesn't need the capacity to devour literally all matter (with all the chemical diversity that implies), in order to be a threat. Suppose there was a replicator that just needs CO2 and H2O. Those molecules are abundant in the atmosphere and the ocean. There would be no need for onboard AI.
Replies from: avturchin, harsimony
↑ comment by harsimony · 2021-01-15T16:08:25.854Z · LW(p) · GW(p)
Is it fair to say that this is similar to Richard Kennaway's [LW(p) · GW(p)] point? If so, see my response to their comment.
I agree with you and Richard that nanotechnology still presents a catastrophic risk, but I don't believe nanotechnology presents an existential risk independent of AI (which I could have stated more clearly!).
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2021-01-16T02:51:37.237Z · LW(p) · GW(p)
Imagine an airborne "mold" that grows on every surface, and uses up all the atmospheric CO2. You'd need to be hermetically sealed away to escape it, and then the planet would freeze around you anyway.
comment by Donald Hobson (donald-hobson) · 2021-01-15T19:11:50.346Z · LW(p) · GW(p)
Yes, any design of self-replicating bot has physical limits imposed by energy use, resource availability, etc.
However, it is not clear that biological life is close to those limits.
If we assume that a very smart and malevolent human is designing this grey goo, I suspect they could make something world ending.
Fortunately, even after many generations, most of these machines will be pretty dumb, you could pick one up and scrap it for parts without any resistance. There is very little danger of a grey goo scenario here.
Control of biological pests is not that easy.
If we are assuming a malevolent human, the assemblers could:
be very small; hide in hard-to-reach places; be disguised as rocks; start shooting at anyone nearby; run faster than humans through rough terrain; get a head start and be replicating faster than humans can kill them.
Suppose a machine looks like an ant. It can self-replicate in 1 hour given any kind of organic material (human flesh, wood, plastic, grass, etc.), and it has an extremely poisonous bite. It is programmed to find dark and dirty corners and self-replicate there, and to send some copies off somewhere else as soon as a few have been made (so there are a few of these devices everywhere, not one garbage can full of them that can be noticed and dealt with). This is looking hard to deal with. If the machines are programmed to sometimes build a small long-distance drone which sprinkles more "ants", this is looking like something that could spread worldwide and turn most of the biosphere and humanity into more copies of itself.
Active maliciousness is enough to get grey goo (in the absence of blue goo, i.e. self-replicating machinery designed to stop grey goo, or any other grey goo prevention, none of which exists today).
Competent engineering, designed with grey goo in mind as a risk, is enough not to get grey goo.
Incompetent engineering by people not considering grey goo, I don't know.
Replies from: harsimony
↑ comment by harsimony · 2021-01-16T00:59:06.893Z · LW(p) · GW(p)
These are good points.
The scenarios you present would certainly be catastrophic (and are cause for great concern/research about nanotechnology), but could all of humanity really be consumed by self-replicators?
I would argue that even if they were maliciously designed, self replicators would have to outsmart us in some sense in order to become a true existential risk. Simply grabbing resources is not enough to completely eliminate a society which is actively defending a fraction of those resources, especially if they also have access to self-replicators/nanotechnology (blue goo) and other defense mechanisms.
If we assume that a very smart and malevolent human is designing this grey goo, I suspect they could make something world ending.
I agree, conditional on the grey goo having some sort of intelligence which can thwart our countermeasures.
A parallel scenario occurs when a smart and malevolent human designs an AI (which may or may not choose to self-replicate). This post attempts to point out that these two situations are nearly identical, and that the existential risk does not come from self-replication or nanotechnology themselves, but rather from the intelligence of the grey goo. This would mean that we could prevent existential risks from replication by copying over the analogous solution in AI safety: to make sure that the replicators' decision-making remains aligned with human interests, we can apply alignment techniques; to handle malicious replicators, we can apply the same plan we would use to handle malicious AIs.
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2021-01-16T16:24:55.311Z · LW(p) · GW(p)
Simply grabbing resources is not enough to completely eliminate a society which is actively defending a fraction of those resources, especially if they also have access to self-replicators/nanotechnology (blue goo) and other defense mechanisms.
I don't see anything that humans are currently doing that would stop human extinction in this scenario. The goo can't reach the ISS, and maybe submarines, but current submarines are reliant on outside resources like food.
In these scenarios, the self-replication is fast enough that there are only a few days between noticing something wrong and almost all humans being dead, not enough time to do much engineering. In a contest of fast replicators, the side with a small head start can vastly outnumber the other. If a sufficiently well designed blue goo is created in advance, then I expect humanity to be fine.
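To put rough numbers on the head-start point (my own back-of-the-envelope illustration, with assumed figures): with a one-hour doubling time, a 24-hour head start translates into roughly a 2^24, or about 17-million-fold, population advantage, regardless of how long the race runs.

```python
# Assumed numbers for illustration: exponential growth with a fixed doubling time.
doubling_time_hours = 1.0
head_start_hours = 24.0

# Both sides grow as 2**(t / doubling_time); the ratio depends only on the head start.
ratio = 2 ** (head_start_hours / doubling_time_hours)
print(f"early starter outnumbers the other by ~{ratio:,.0f}x")   # ~16,777,216x
```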
I agree, conditional on the grey goo having some sort of intelligence which can thwart our countermeasures.
A modern computer virus is not significantly intelligent. The designers of the virus might have put a lot of thought into searching for security holes, but the virus itself is not (usually) intelligent. The designers might know what SQL injection is and how it works; the virus just repeats a particular hard-coded string into any textbox it sees.
It might be possible to create a blue goo and nanoweapon defense team so sophisticated that no simplistic hard-coded strategy would work. But this does not currently exist, and is only something that would be built if humanity is seriously worried about grey goo. And again, it has to be built in advance of advanced grey goo.
Replies from: harsimony
↑ comment by harsimony · 2021-01-16T17:29:59.891Z · LW(p) · GW(p)
I agree. I think humanity should adopt some sort of grey-goo-resistant shelter and, if/when nanotechnology is advanced enough, create some sort of blue-goo defense system (perhaps it could be designed after-the-fact in the shelter).
The fact that these problems seem tractable, and that, in my estimation, we will achieve dangerous AI before dangerous nanotechnology, suggests to me that preventing AI risk should have priority over preventing nanotechnology risks.
comment by Richard_Kennaway · 2021-01-15T14:40:18.148Z · LW(p) · GW(p)
Green goo doesn't need all that (see: Covid and other plagues). Why would grey goo? Ok, Covid isn't transforming everything into more of itself, but it's doing enough of that to cause serious harm.
Replies from: harsimony
↑ comment by harsimony · 2021-01-15T15:59:28.128Z · LW(p) · GW(p)
That's true. I guess I should have clarified that the argument here doesn't exclude nanotechnology from the category of catastrophic risks (by catastrophic, I mean things like hurricanes which could cause lots of damage but could not eliminate humanity) but rules out nanotechnology as an existential risk independent from AI.
Lots of simple replicators can use up the resources in a specific environment. But in order to present a true existential risk, nanotechnology would have to permanently out-compete humanity for vital resources, which would require outsmarting humanity in some sense.