Keeping self-replicating nanobots in check

post by Knight Lee (Max Lee) · 2024-12-09T05:25:45.898Z · LW · GW · 4 comments

Contents

  Hierarchical mutation prevention
    Conclusion
4 comments

EDIT: after reading a comment by Thomas Kwa [LW · GW], I feel my idea probably isn't necessary. Checksums (plus error correction code) are probably enough. :/

This is a random, unimportant idea to prevent a grey goo scenario, where self-replicating nanobots accidentally go out of control and consume everything.

My idea is that self-replicating nanobots should never replicate their "DNA," or self-replication instructions. Instead, each nanobot can only "download" these self-replication instructions from a higher-level nanobot.

I'm not sure if this idea is new.

Hierarchical mutation prevention

Every complex thing that self-replicates, from humans to bacteria to viruses to computer viruses, has some kind of instruction, be it DNA or computer code, that relies on some general-purpose interpreter, be it gene expression or code execution. This is probably the case for self-replicating nanobots too.

The self-replicating nanobots should never replicate their self-replication instructions, but should instead receive new copies from a "master nanobot" which uses one "master copy" to create new copies.

To ensure a mutated master copy can never create another master copy, the master copy has a different format. For example, the master copy might be a set of instructions for outputting a normal copy.

When a new "master nanobot" is built and needs a new master copy, it must get its master copy from a bigger, second-order master nanobot. This second-order master nanobot uses a "second-order master copy" of the instructions to mint master copies of the instructions.

In this way, no copy of the instructions can create another copy at the same level as itself. It can only create copies at a lower level. So if the self-replication instructions mutate anywhere, the mutated version cannot sustain itself. The self-replication instructions are "downloaded, not replicated."
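Here is a rough sketch of the rule, just to make it concrete. The InstructionCopy type and the function names are made up for illustration; a real design would obviously not be Python, but the constraint is the same: a copy at level n can only mint copies at level n-1, never at its own level.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstructionCopy:
    level: int      # 0 = ordinary working copy, 1 = master copy, 2 = copy that mints masters, ...
    payload: bytes  # the self-replication instructions, stored in a level-specific format

def reformat_for_level(payload: bytes, level: int) -> bytes:
    # Placeholder: in a real design each level would use a structurally different
    # encoding, so a mutated lower-level copy can't pass itself off as a higher one.
    return payload

def mint_lower_copy(source: InstructionCopy) -> InstructionCopy:
    """A copy at level n can only produce copies at level n-1, never at its own level."""
    if source.level == 0:
        raise PermissionError("working copies cannot mint instruction copies at all")
    return InstructionCopy(level=source.level - 1,
                           payload=reformat_for_level(source.payload, source.level - 1))
```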

Conclusion

In theory, this is safer than having the self-replicating nanobots check their instructions for mutations, since a large mutation might disable the checking process while preserving the self-replication process. This hierarchical mutation prevention system only breaks if a large mutation creates an entirely new process for replicating the instructions, which seems less likely.

This idea isn't very important, because I feel an AGI that's good enough at engineering (and inventing) to make self-replicating nanobots can probably think of this itself. The humans using it are probably wise enough to ask it for solutions to the problem.

4 comments


comment by Thomas Kwa (thomas-kwa) · 2024-12-09T19:21:18.649Z · LW(p) · GW(p)

It's likely possible to engineer away mutations just by checking. ECC memory already has an error rate nine orders of magnitude better than human DNA, and with better error correction you could probably get the error rate low enough that less than one error happens in the expected number of nanobots that will ever exist. ECC is not the kind of checking whose checking process can be disabled: the memory module always processes raw bits into error-corrected bits, and this fails unless the bits match a checksum, where a mutation that still matches can be made astronomically unlikely.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2024-12-09T21:43:12.663Z · LW(p) · GW(p)

You're very right! I didn't really think of that. I had the intuition that mutation is very hard to avoid since cancer is very hard to avoid, but maybe that intuition isn't really accurate.

Thinking a bit more, it does seem unlikely that a mutation can disable the checking process itself, if the checking process is well designed with checksums.

One idea is that the meaning of each byte (or "base pair" in our DNA analogy) changes depending on the checksum of the previous bytes. This way, if one byte is mutated, the meaning of every subsequent byte changes (e.g. "hello" becomes "ifmmp"), rendering the entire string of instructions useless. The checking process itself cannot break in any way that compensates for this: it would have to break in such a way that it fails to update its checksum for this one new byte but still updates its checksum for all other bytes, which is very unlikely. If it simply disables checksums, all bytes become illegible (like encryption). I use the word "byte" very abstractly; it could be any unit of information.
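Here's a rough sketch of what I mean, using XOR with a running CRC32 purely for illustration (a real design would use something stronger, and the encode/decode names are just for the example). A single mutated byte garbles everything downstream, and skipping the checksum step makes the whole string unreadable:

```python
import zlib

def encode(instructions: bytes) -> bytes:
    """Each stored byte's meaning depends on the checksum of all previous plain bytes."""
    out, checksum = bytearray(), 0
    for b in instructions:
        out.append(b ^ (checksum & 0xFF))            # shift this byte's meaning by the running checksum
        checksum = zlib.crc32(bytes([b]), checksum)  # roll the checksum forward
    return bytes(out)

def decode(encoded: bytes) -> bytes:
    out, checksum = bytearray(), 0
    for e in encoded:
        b = e ^ (checksum & 0xFF)                    # interpreting a byte requires the running checksum
        out.append(b)
        checksum = zlib.crc32(bytes([b]), checksum)
    return bytes(out)

original = b"build one more nanobot, then stop"
mutated = bytearray(encode(original))
mutated[5] ^= 0x01                    # simulate a single-bit mutation in the middle
print(decode(bytes(mutated)))         # the prefix survives, everything after byte 5 is garbled
```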

And yes, error correction code could further improve things by allowing a few mutations to get corrected without making the nanobot self-destruct.

It's still possible the hierarchical idea in my post has advantages over checksums. In theory, it only slows down self-replication when a nanobot first retrieves its instructions, not every time the nanobot uses its instructions.

Maybe a compromise is that there is only one level of master nanobots, and they are allowed to replicate the master copy provided they use checksums. But they still use these master copies to install simple copies in other nanobots, which do not need checksums.
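Concretely, that compromise might look something like this sketch (the MasterNanobot class and its methods are made-up names, just to illustrate the division of labor):

```python
import zlib

class MasterNanobot:
    """Only master nanobots replicate the instructions, and only after a checksum check."""
    def __init__(self, instructions: bytes, expected_crc: int):
        self.instructions = instructions
        self.expected_crc = expected_crc

    def _verify(self) -> None:
        if zlib.crc32(self.instructions) != self.expected_crc:
            raise ValueError("mutated master copy: refuse to use it")

    def replicate_master(self) -> "MasterNanobot":
        self._verify()
        return MasterNanobot(bytes(self.instructions), self.expected_crc)

    def install_simple_copy(self) -> bytes:
        # Ordinary nanobots receive a plain copy and never re-replicate it,
        # so they don't need their own checksum machinery.
        self._verify()
        return bytes(self.instructions)

instructions = b"how to build one worker nanobot"
master = MasterNanobot(instructions, zlib.crc32(instructions))
worker_copy = master.install_simple_copy()   # plain copy, never re-replicated by the worker
```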

I admit, maybe a slight difference in self-replication efficiency doesn't matter. Exponential growth might be so fast that over-engineering the self-replication speed is a waste of time. Choosing a simpler system that can be engineered and set up sooner might be wiser.

I agree that the hierarchical idea (and any master copy idea) might end up being overkill. I don't see it as a very big idea myself.

comment by Dagon · 2024-12-09T18:22:46.827Z · LW(p) · GW(p)

That's a fair bit of additional system complexity (though perhaps similar code-complexity, and fewer actual circuits).  More importantly, it really just moves the problem out one level - now you worry about runaway or mutated controllers.  You can make a tree of controllers-controlling-controllers, up to a small number of top-level controllers, with "only" logarithmic overhead, but it's still not clear why a supervisor bot is less risk than a distributed set of bots.

Replies from: Max Lee
comment by Knight Lee (Max Lee) · 2024-12-09T21:51:15.313Z · LW(p) · GW(p)

If one level of nanobots mutates, it can pass the mutation on to the nanobots "below" it but not to other nanobots at the same level. So as long as the nanobots "below" it don't travel too far and wide, the mutation won't be able to grow exponentially until it ravages large parts of the world.

Of course, a mutation at a very high level (maybe the top level) will still be a big problem. I kind of forgot to explain this part, but my idea is that the machines at very high levels will be fewer in number and bigger, so they might be easier to control or destroy.

Anyway, I do admit my idea might not be that necessary after reading Thomas Kwa's comment.