grey goo is unlikely
post by bhauth · 2023-04-17T01:59:57.054Z · LW · GW · 117 comments
This is a link post for https://bhauth.com/blog/biology/nanobots.html
Contents
1. localized melting
2. rare materials
3. metal surfaces
4. electric motors
5. inorganic catalysts
6. no liquid
7. no water
8. high temperatures
9. diamond
10. other rigid materials
11. 3d structures
12. active transport
13. combining reaction steps
14. positional nanoassembly
15. everything else
The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer.
— Eliezer Yudkowsky [LW · GW]
To control these atoms you need some sort of molecular chaperone that can also serve as a catalyst. You need a fairly large group of other atoms arranged in a complex, articulated, three-dimensional way to activate the substrate and bring in the reactant, and massage the two until they react in just the desired way. You need something very much like an enzyme.
My understanding is that anyone who can grasp what "orthos wildly attacking the heterodox without reading their stuff and making up positions to attack" looks like, considers that this is what Smalley did with Drexler - made up an unworkable approach and argued against it.
In this post, I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior". Various specific differences from biological cells have been proposed. I've organized this post by those proposed differences.
1. localized melting
Most 3d printers melt material to extrude it through a nozzle. Large heat differences can't be maintained on a small scale.
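As a rough illustration (my numbers, order-of-magnitude only): heat conducts out of a micrometer-sized region in microseconds, so holding a melt zone even ~200 K above ambient would take a continuous power draw far beyond anything a cell-sized machine could supply.

```python
import math

# Assumed, order-of-magnitude parameters
k_water = 0.6    # W/(m*K), thermal conductivity of water
alpha = 1.4e-7   # m^2/s, thermal diffusivity of water
r = 0.5e-6       # m, radius of a 1-micron "melt zone"
dT = 200.0       # K, temperature excess to maintain
P_cell = 1e-12   # W, rough total metabolic power of a bacterium (assumed)

t_cool = r**2 / alpha                      # time for heat to diffuse away
P_loss = 4 * math.pi * k_water * r * dT    # steady conduction loss from a hot sphere

print(f"cooling time ~ {t_cool:.1e} s")                    # ~ 2e-06 s
print(f"power to hold +200 K ~ {P_loss:.1e} W")            # ~ 8e-04 W
print(f"ratio to a bacterium's power budget ~ {P_loss / P_cell:.0e}")
```

Even with large error bars, sustaining a melt spot needs roughly a billion times a bacterium's entire power budget.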
2. rare materials
If a nanobot consists largely of something rare, getting more of that material to replicate is difficult outside controlled environments.
Growth of algae and bacteria is often limited by the availability of iron, even though iron is more abundant than most elements. Iron is the active catalytic site of many enzymes and is needed by all known life. The growth of something made mostly of iron would be far more limited, and most other metals are even less available than iron.
3. metal surfaces
Melting material isn't feasible per (1), so material must be built up by adding to a surface. That means the interior of a structure must be chemically the same as what was once its exterior surface.
Metal objects have a protective oxide layer. In an air or water environment, there's no way to add individual (eg) aluminum atoms to a metal surface and end up with metallic aluminum inside; the whole thing will typically be aluminum oxide or hydroxide.
Corrosion is also a proportionately bigger problem for smaller objects. A micrometer-scale metal structure will rapidly corrode, perhaps doing some Ostwald ripening.
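To put a number on "proportionately bigger" (treating the corrosion-affected layer as a ~1 nm surface shell; the exact thickness is an assumption):

```python
# Fraction of a metal part that sits within a ~1 nm surface shell,
# approximated as 3*t/r for a sphere of radius r (valid for t << r).
shell = 1e-9  # m, assumed thickness of the oxide / corrosion-affected layer

for label, r in (("1 mm bolt", 1e-3), ("1 um part", 1e-6), ("10 nm part", 10e-9)):
    frac = min(1.0, 3 * shell / r)
    print(f"{label:10s}: surface-layer fraction ~ {frac:.1e}")
```

A surface layer that's a negligible fraction of a bolt is a few tenths of a percent of a micrometer-scale part and roughly a third of a 10 nm part.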
4. electric motors
Normal "electric motors" are all electromagnetic motors, typically using ferromagnetic cores for windings. Bigger is better for those, up to at least the point where you can saturate cores.
On a very small scale, it's better to use electrostatic motors, and you can make MEMS electrostatic motors with lithography. (Not just theoretically; people actually do that.) But, per (2) & (3), bulk metals are a problem for a self-replicating system. If you need to have compounds floating around, electrical insulation is also difficult. You also need some way to switch current, and while small semiconductor switches are possible, per (3) building them is difficult.
Instead of electrostatic charge of metal objects, it's better to use ions. Ions could bind to some molecule, and electrostatic forces could cause that to rotate relative to another molecule. Hmm, this is starting to sound rather familiar.
5. inorganic catalysts
Lab chemistry and drug synthesis often use metal catalysts in solution, perhaps with a small ligand. Palladium acetate is used for making drugs, but it's very toxic to humans, because it...catalyzes reactions.
Life requires control over what happens, which means selective catalysis of reactions, which means molecules must be selectively bound, which requires specific arrangements of hydrogen bond donors, acceptors, and so on - and that requires organic compounds. In short, controlled catalysis requires organic compounds.
6. no liquid
Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves.
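A standard colloid-chemistry estimate shows the scale of the problem (generic values assumed for the Hamaker constant and density):

```python
import math

A = 1e-20      # J, Hamaker constant, typical order for organic solids across air/vacuum (assumed)
D = 0.4e-9     # m, closest-approach separation (assumed)
r = 0.5e-6     # m, radius of each component
rho = 1500.0   # kg/m^3, assumed density
g = 9.8        # m/s^2

# Non-retarded van der Waals force between two equal spheres near contact:
# F = A * (r/2) / (6 * D^2)
F_vdw = A * (r / 2) / (6 * D**2)
# Weight of one sphere, just for scale
F_grav = rho * (4 / 3) * math.pi * r**3 * g

print(f"vdW sticking force ~ {F_vdw:.1e} N")
print(f"weight             ~ {F_grav:.1e} N")
print(f"sticking force / weight ~ {F_vdw / F_grav:.0e}")
```

The sticking force is several hundred thousand times the part's weight; in water, screening and hydration layers cut those forces by orders of magnitude, which is much of why a water-filled interior matters.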
Vacuum is even worse. Any self-replicating cell must move material between outside and multiple compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)
7. no water
Most enzymes maintain their shape because the interior is hydrophobic and the exterior is hydrophilic. If some polar solvent is used instead of water, then this stability is weakened; most organic solvents will denature most proteins. If you use a hydrophobic solvent, it can't dissolve ions or facilitate many reactions.
Ester and amide bonds are the best ways to reversibly connect organic molecules. Forming or breaking either one releases or consumes water or an alcohol. Alcohols have no advantage over water in terms of the conditions where they're stable.
Water is by far the best choice of liquid. The effectiveness of water for dissolving ions is unique. Water can help catalyze reactions by donating and accepting hydrogen. Water is common on Earth, easy to get and easy to maintain levels of.
8. high temperatures
Per (5) you need organic molecules to selectively catalyze reactions.
Enzymes need to be able to change shape somewhat. Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary.
Because enzymes must be able to do conformational changes, they need to have some strong interactions and some weaker interactions that can be broken or shifted. Those weaker interactions can't hold molecules together at high temperatures. Some life can grow at 100 C but 200 C isn't possible.
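A crude two-state picture of why (the contact energy here is a representative assumption): the chance that any single weak contact is broken at a given instant scales like exp(-E/RT), and a folded structure's net stability is only the marginal sum of many such contacts.

```python
import math

R = 1.987e-3     # kcal/(mol*K)
E_contact = 5.0  # kcal/mol, a representative hydrogen bond / salt bridge (assumed)

for T_C in (25, 100, 200):
    T = T_C + 273.15
    p_broken = math.exp(-E_contact / (R * T))
    print(f"{T_C:>3} C: relative chance a single contact is broken ~ {p_broken:.1e}")
```

Each contact is roughly 20x more likely to be open at 200 C than at 25 C, and since net folding stability is only a handful of kcal/mol, the structure falls apart.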
This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.
9. diamond
It's possible to make molecules containing diamond structures at ambient temperature. The synthesis involves carbocations, carbanions, or carbon radicals, which are all very unstable. The yields are mediocre, and the compounds involved are reactive enough to destroy any conceivable enzyme.
Some people have simulated structures that could theoretically place carbon atoms on diamond in specific positions at ambient temperature. Here's a paper on that. Because diamond is so kinetically stable, the synthesis must be exothermic, with high-energy intermediates. So, high vacuum is required, which per (6) doesn't work.
Also, the chemicals consumed to make those high-energy intermediates are too reactive to plausibly be made by any enzyme-like system. And per (1) & (8) you can't use high temperatures to make them on a small scale.
Also, there is no way to later remove carbon atoms from the diamond at low temperature. How, then, would a nanobot with a diamond shell replicate?
10. other rigid materials
CaCO3, silica, and apatite are much easier to manipulate than diamond. They're used in (respectively) mollusk shells, diatom frustules, and bone.
If it were advantageous to somehow use structures of those materials inside cells for reactions, then some organisms would already do that. Enzymes generally must undergo conformational changes to catalyze reactions; a completely rigid diamond shape with functional groups would not make a particularly good enzyme.
And of course, just a small solid shape with nothing attached to it - even if you could make arbitrary shapes - isn't useful for much besides cell scaffolding, and even then, building diatom frustules out of linked diamond pieces seems worse than what diatoms do now with silica. Sure, diamond is stronger than silica, but that doesn't matter. And that's assuming you can make interlocking diamond pieces, which you can't.
11. 3d structures
Unlike cells, nanobots could make 3d structures, instead of being limited to a soup of folded linear structures.
Yes, believe it or not, I've seen people say that. But cells have eg microfilaments.
Again, enzymes must be able to do conformational changes to work. At ambient temperature, that means they're shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly.
What you can do is hold onto the end of a linear chain as you extrude it, then fold up that chain into a 3d structure. What you can do is use an enzyme that binds to 2 folded proteins and connects them together. And those are methods that are used by all known life.
12. active transport
Life relies on diffusion and random collisions; nanobots could intentionally move things around.
Yes, I've actually seen people say that, but cells do use myosin to transport proteins sometimes. That uses a lot of energy, so it's only used for large things.
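Rough numbers on that (using Stokes-Einstein drag in water as a lower bound; cytoplasm is several times more viscous, so real diffusion is slower):

```python
import math

kT = 4.1e-21   # J at ~300 K
eta = 1e-3     # Pa*s, viscosity of water (cytoplasm is several times higher)
x = 10e-6      # m, transport distance
step = 8e-9    # m, roughly one motor step per ATP, kinesin-like (assumed)

for name, r in (("small protein", 2.5e-9), ("vesicle", 250e-9)):
    D = kT / (6 * math.pi * eta * r)   # Stokes-Einstein diffusion coefficient
    t_diff = x**2 / (2 * D)            # order-of-magnitude time to diffuse the distance
    print(f"{name:13s}: D ~ {D * 1e12:.2g} um^2/s, free diffusion over 10 um ~ {t_diff:.2g} s")

print(f"motor transport over 10 um: ~{x / step:.0f} ATP per cargo")
```

Small molecules and proteins get where they're going for free, while dragging cargo costs on the order of a thousand ATP per trip, so only big, slow-diffusing cargo is worth it.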
13. combining reaction steps
Nanobots could put all the sequential reaction steps next to each other, making them much more efficient than cells.
Cells have compartments with proteins that do related reactions. Some proteins form complexes that do multiple reaction steps. Existing life already does this to the extent that it makes sense to.
14. positional nanoassembly
The above sections should be enough background to finally cover what's perhaps the most central concept of the genre of proposals called "nanobots".
Some people see 3d printers and CNC routers, don't understand enzymes or what changes on a molecular scale very well, and think that cells that work more like 3d printers or gantry cranes would be better. Now, an FDM 3d printer has several components:
- sensors that detect the current position
- drivers that control motors based on sensors
- 3 motors that do 3-axis movement
- a rigid bed and rigid drive system
- a good connection between the bed and material being printed
- a nozzle that melts material
Protein-sized position sensors don't exist.
Molecular linear motors do exist, but 1 ATP (or other energy carrier) is needed for every step taken.
If you want to catalyze reactions, you need floppy enzymes. Even if you attach them to a rigid bed, they'll flop all over the place. (On a microscopic scale, normal temperatures are like a macroscopic 3d printer being shaken violently.)
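The size of the "shaking" is easy to estimate (the stiffness values here are assumptions, chosen as representative orders of magnitude): anything held by an effective spring of stiffness k has an RMS thermal positional error of sqrt(kT/k).

```python
import math

kT = 4.1e-21  # J at ~300 K

# Assumed effective stiffnesses, N/m (representative orders of magnitude):
cases = [
    ("floppy protein tether", 0.005),
    ("stiff protein domain", 0.1),
    ("what ~0.02 nm placement accuracy needs", kT / (0.02e-9) ** 2),
]

for name, k in cases:
    sigma_nm = math.sqrt(kT / k) * 1e9
    print(f"{name}: k = {k:.3g} N/m -> RMS thermal wobble ~ {sigma_nm:.2g} nm")
```

A carbon-carbon bond is ~0.15 nm, so a protein-stiffness arm wobbles by several bond lengths: flexibility and positional precision trade off directly.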
Suppose you're printing diamond somehow. You need a seed that's rigidly connected to the printing mechanism. The connection would need to be removable in order to detach the product from the printer. In a large 3d printer, you can peel plastic off a metal surface, but that won't work for covalently bonded diamond. You would need a diamond seed with functional groups that allow it to be grabbed, and since you're not starting with a sheet, you'd need a 5-axis printer arm.
Drexler wrote a book that proposed mechanical computers which control positioner arms by lever assemblies. An obvious problem there is mechanical wear - yes, some MEMS devices have adequate lifetimes, but those just vibrate; their sliding friction is negligible. But suppose you can solve this by making everything out of diamond or using something like lubricin.
So, suppose you have a mechanical computer that moves arms that control placement of something. Diamond is impractical, so let's say silica is being placed. Whatever you're placing, you need chemical intermediates that go on the arms, and you need energy to power everything. Making energy from fuel or photosynthesis requires more specific chemicals, not just specific arrangements of some solid. To do the reactions needed for energy and intermediate production, you need things that can do conformational changes - enzymes.
Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary. You can't just use rigid positioners to drive reactions that way, because they have no way to sense that the reaction has happened or not...except through conformational changes of a flexible enzyme-like tooltip on the positioner, which would have the same issues here.
At ambient temperature, enzymes that can do conformational changes are shaking violently, and if proteins are flopping around constantly, you can't have a rigid positioner move to a fixed position and assume you're placing something correctly. Since you need enzymes, you need a ribosome and production of monomers - and amino acids are the best choice; the available chemical elements are limited, and there is no superior alternative.
Since all that is still needed, what are the positioners actually accomplishing? They'd only be needed to build positioners. The whole thing would be a redundant side system to enzymatic life.
OK, maybe you want to build some kind of mechanical computer too. Clearly, life doesn't require that for operation, but does it even work? Consider a mechanical computer indicating a position. It stores some number, and the high bit corresponds to a large positional difference, which means a long lever - and a long lever means the output force is too weak, so you'd need some mechanical amplifier. So that's a problem.
Consider also that as vacuum is impractical per (6), and enzymes and chemical intermediates are needed, you'd have stuff floating around. So you have all these moving parts, they need to interface with the enzymes so they can't just be separated by a solid barrier, and stuff could get in there and jam the system.
The problems are myriad, and I'd be well-positioned to see solutions if any existed. But suppose you solve them and make tiny mechanical computers in cells - what's the hypothetical advantage of that? The ability to "do computation"? Brains are more energy-efficient than semiconductor computers for many tasks, and the total embodied computation in cells is far greater than that of neurons' occasional spikes.
15. everything else
When someone has an idea about something cells could do, it's often reasonable to presume that it's either impossible, useless, or already used by some organism - but there are obviously cases where improvement is possible. It's certainly physically possible to correct harmful mutations with genetic engineering. There are also ongoing arms races between pathogens and hosts where each step is an informational problem.
But what about more basic mechanisms? Have basic mechanisms for typical Earth conditions been optimized to the point that no improvement is possible? That depends on their complexity. For example, glycolysis and the citric acid cycle are optimal, but here's a more-efficient CO2 fixation pathway I designed. (Yes, you'd want to assimilate the glycolaldehyde synthons by (erythrose 4-phosphate -> glucose 6-phosphate -> 2x erythrose 4-phosphate). I left that as a way for people to show they understood my blog.) See also my post on the origin of life for some reasons life works the way it does. (You can see I'm a big blogger - that's a good career plan, right?)
I wrote this post now as a sort of side note to my post on AI risks. But...what if a superintelligence finds something I didn't think of?
I know, right? What if it finds a way to travel faster than light and sets up in Alpha Centauri, then comes back? What if it finds a way to make unlimited free energy? What if it finds a friendly unicorn that grants it 3 wishes?
There's a gap between seeing that something is conceivably possible and seeing how to do it, and that's the only reason that things like research and planning and prediction about the future are possible. I understand Eliezer Yudkowsky thinks that someone a little smarter than von Neumann (who didn't invent the "von Neumann architecture" or half the other stuff he took credit for, but that's off topic) would be able to invent "grey goo" type nanobots. If that was the case, even I would at least be able to see how it would be done.
To be clear, I'm not trying to imply that a superintelligent AI wouldn't have any plausible route to taking over societies or killing most of humanity or various other undesirable outcomes. I'm only saying that worrying about "grey goo" is a waste of time. On the other hand, Smalley was mad at Drexler for scaring people away from research into carbon nanotubes, but carbon nanotubes would be a health hazard if they were used widely, and the applications Smalley hoped for weren't practical. Perhaps I would thank Drexler if he actually pushed people away from working on carbon nanotubes, but he didn't.
117 comments
Comments sorted by top scores.
comment by Thomas Kwa (thomas-kwa) · 2023-04-18T21:54:58.471Z · LW(p) · GW(p)
Not an expert in chemistry or biochemistry, but this post seems to basically not engage with the feasibility studies Drexler has made in Nanosystems, and makes a bunch of assertions without justification, including where Nanosystems has counterarguments. I wish more commenters would engage on the object level because I really don't have the background to, and even I see a bunch of objections. Nevertheless I'll make an attempt. I encourage OP and others to correct me where I am ignorant of some established science.
Points 1, 2, 3, 4 are not relevant to Drexlerian nanotech and seem like reasonable points for other paradigms.
Regarding 5, my understanding is that mechanosynthesis involves precise placement of individual atoms according to blueprints, thus making catalysts that selectively bind to particular molecules unnecessary.
6. no liquid
Any self-replicating nanobot must have many internal components. If the interior is not filled with water, those components will clump together and be unable to move around, because electrostatic & dispersion interactions are proportionately much stronger on a small scale. The same is true to a lesser extent for the nanobots themselves.
Vacuum is even worse. Any self-replicating cell must move material between outside and multiple compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)
- If your components are all fixed in place by covalent bonds, they can't clump together.
- Nanosystems section 11.4 gives arguments against gas leakage being inevitable, and a proposed "turbomolecular pump" design.
- It's not clear why a cellular vacuum pump must have a low efficiency, and the actual work (P*V) that must be done by a vacuum pump should be well below the kcal/gram range that biological cells need to replicate.
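A rough check of the P*V claim with assumed numbers (a ~1 cubic micron interior, 1 atm, and E. coli-like dry mass; this ignores pump inefficiency and the leakage issue the post raises):

```python
P_atm = 101325.0    # Pa
V = 1e-18           # m^3, a ~1 cubic micron interior (assumed)
dry_mass_g = 3e-13  # g, roughly an E. coli's dry mass (assumed)
E_repl = 1.0 * 4184 * dry_mass_g  # J, taking ~1 kcal per gram of dry mass to replicate (assumed)

W_pv = P_atm * V    # minimum thermodynamic work to empty the volume against 1 atm

print(f"ideal P*V work to evacuate the interior ~ {W_pv:.1e} J")
print(f"rough energy budget for replication     ~ {E_repl:.1e} J")
print(f"ratio ~ {W_pv / E_repl:.0e}")
```

So the ideal pumping work is a tiny fraction of the replication budget; the real dispute is whether a leak-free, enzyme-free pump at that scale can be built at all.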
7 is not relevant given that I'm imagining hard vacuum systems.
8. high temperatures
[...] Enzymes need to be able to change shape somewhat. Without conformational changes, enzymes can't grab their substrate well enough. Without conformational changes, there's no way to drive an unfavorable reaction with a favorable reaction, and that's necessary.
Because enzymes must be able to do conformational changes, they need to have some strong interactions and some weaker interactions that can be broken or shifted. Those weaker interactions can't hold molecules together at high temperatures. Some life can grow at 100 C but 200 C isn't possible.
If the "enzymes" are not made of proteins, their weaker interactions can be stronger than the hydrogen bonds that hold proteins together. If they're roughly twice as strong (say by using covalent bonds or twice as many hydrogen bonds), they wouldn't denature even at 200C.
Also, there is no way to later remove carbon atoms from the diamond at low temperature. How, then, would a nanobot with a diamond shell replicate?
The shell could be made of diamond panels with airtight joints. The daughter cell's internal components and membrane are manufactured inside the parent cell, then the membrane is added to the parent cell's membrane, it unfolds in an origami fashion into two membranes of original size, then the daughter cell separates.
This is a pretty obvious idea to me, so unless there's some obvious reason why something like this doesn't work, I get the feeling that the post isn't engaging with the strongest arguments.
10-13 seem pretty reasonable to me.
14. positional nanoassembly
[...] Protein-sized position sensors don't exist.
Molecular linear motors do exist, but 1 ATP (or other energy carrier) is needed for every step taken.
If you want to catalyze reactions, you need floppy enzymes. Even if you attach them to a rigid bed, they'll flop all over the place. (On a microscopic scale, normal temperatures are like a macroscopic 3d printer being shaken violently.)
It seems to me that the "step" for molecular linear motors could be an arbitrarily long distance. The moving part randomly moves about the stator, and there are ratchets every, say, 100 nm that let the moving part pass in one direction when an ATP is consumed. Then when it needs to be fixed in position, a different mechanism does that.
The "floppy enzymes" has the same solution as section 8. In chapter 13 of Nanosystems Drexler also gives three different ways this problem is solved, two of which involve molecular manipulators:
Aside from differences of scale and component properties, molecular manipulators differ from macroscale devices in that they must maintain positional accuracy despite thermal excitation. This problem can be minimized either (1) by operation at reduced temperatures, which receives no further attention here; or (2) by the use of a stiff mechanism, as described in Section 13.4.1; or (3) by use of local nonbonded contacts to align reagent devices to workpieces immediately before reaction, as discussed in Section 13.4.2.
My current impression is that this post is perfectly consistent with the problem of making a self-replicating diamondoid bacterium being easy enough that unamplified humans could do it in <500 years given good research practices, software, and perhaps some narrow AI tools like some future version of AlphaFold. It's true that Drexlerian nanotech is not really optimized for being self-replicating nanomachines; as Drexler envisions it, it's most useful for mass manufacturing and computing. Such machines might be less evolvable, consume more energy, or be specialized to certain environments like the atmosphere or ocean, especially when designed by human-level intelligences. But there are also potentially huge advantages, like being indigestible to biological life and viruses, being impervious to more forms of damage, or having a wider range of metabolic pathways. It requires a different argument to say that either (a) nanotech is impossible, or (b) the disadvantages of nanotech-based life forms outweigh the advantages.
If I were in the mindset I get from this post, I would have a hard time not asserting that powered flight would be impractical for airplanes larger than birds, or that modern semiconductor manufacturing was impossible due to the precision required. I would probably deny other possibilities currently thought extremely plausible, like fusion energy and immortality. Maybe the author isn't making the same mistake, but I'm doubtful nonetheless.
Replies from: bhauth
↑ comment by bhauth · 2023-04-18T22:21:11.366Z · LW(p) · GW(p)
Regarding 5, my understanding is that mechanosynthesis involves precise placement of individual atoms according to blueprints, thus making catalysts that selectively bind to particular molecules unnecessary.
No, that does not follow.
The shell could be made of diamond panels with airtight joints. The daughter cell's internal components and membrane are manufactured inside the parent cell, then the membrane is added to the parent cell's membrane, it unfolds in an origami fashion into two membranes of original size, then the daughter cell separates.
...for one thing, that's not airtight.
It seems to me that the "step" for molecular linear motors could be an arbitrarily long distance.
No: the steps happen by diffusion, so longer steps become slower. That's why slower muscles are more efficient.
The "floppy enzymes" has the same solution as section 8. In chapter 13 of Nanosystems Drexler also gives three different ways this problem is solved, two of which involve molecular manipulators:
see this reply [LW(p) · GW(p)]
Replies from: thomas-kwa
↑ comment by Thomas Kwa (thomas-kwa) · 2023-04-19T19:17:37.160Z · LW(p) · GW(p)
I don't know how to engage with the first two comments. As for diffusion being slow, you need to argue that it's so slow as to be uncompetitive with replication times of biological life, and that no other mechanism for placing individual atoms / small molecules could achieve better speed and energy efficiency, e.g. this one.
I don't have the expertise to evaluate the comment by Muireall, so I made a Manifold market.
Replies from: bhauth
↑ comment by bhauth · 2023-04-20T02:50:01.096Z · LW(p) · GW(p)
Such actuator design specifics aren't relevant to my point. If you want to move a large distance, powered by energy from a chemical reaction, you have to diffuse to the target point, then use the chemical energy to ratchet the position. That's how kinesin works. A chemical reaction doesn't smoothly provide force along a range of movement. Thus, larger movements per reaction take longer.
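To put rough numbers on that (assuming an effective diffusion coefficient of ~1 μm²/s for a tethered motor head; real motors are also limited by their chemistry):

```python
D = 1e-12  # m^2/s, assumed effective diffusion coefficient of a tethered motor head

for step in (8e-9, 100e-9):  # a kinesin-like step vs. a 100 nm ratchet spacing
    t_step = step**2 / (2 * D)   # diffusive search time grows with the square of the step
    v_max = step / t_step        # = 2*D/step: diffusion-limited speed ceiling
    print(f"step {step * 1e9:5.0f} nm: search time ~ {t_step * 1e6:6.0f} us, "
          f"speed ceiling ~ {v_max * 1e6:.0f} um/s")
```

A longer ratchet spacing is cheaper per distance, but the diffusive search time grows with the square of the step, so the speed ceiling drops in proportion to step length.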
Replies from: donald-hobson
↑ comment by Donald Hobson (donald-hobson) · 2024-08-14T12:02:19.100Z · LW(p) · GW(p)
Biological life uses an ATP system. This is an energy currency, but it's discrete - like having batteries that can only be empty or full. It doesn't give a good way to apply smaller amounts of energy than one ATP molecule carries, even if less energy is needed.
Nanobots could have a continuous energy system, or smaller units of energy.
comment by Gunnar_Zarncke · 2023-04-17T13:27:14.886Z · LW(p) · GW(p)
I want to remind everybody how efficient molecular machinery is in terms of thermodynamics:
this molecule [RNA] operates quite near the limit of thermodynamic efficiency [7 kcal/mol] set by the way it is assembled [~10 kcal/mol].
and
these calculations also establish that the E. coli bacterium produces an amount of heat less than six times (220npep/42npep) as large as the absolute physical lower bound dictated by its growth rate, internal entropy production, and durability.
From the article "Statistical Physics of Self-replication" by Jeremy England:
deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
https://aip.scitation.org/doi/10.1063/1.4818538
That said I think that there may be many sweet spots for a combination of macroscopic and microscopic processes. Many industrial chemical processes are such combinations by providing very specialized baths of nutrients and substrates and combining efficient macroscopic flow and transport with microscopic chemical and organic reactions. But there may be more spots that allow for efficiently building up small-scale structures.
comment by Steven Byrnes (steve2152) · 2023-04-17T13:32:21.595Z · LW(p) · GW(p)
In the context of AI x-risk, I’m mainly interested in
- (1) can an AI use nanotech as a central ingredient of a plan to wipe out humanity, and
- (2) can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?
[(2) is obviously possible once you have a few billion human-level-intelligent robots, but the question is “can nanotech dramatically reduce the amount of time that the AI is relying on human help, compared to that baseline?”. Presumably “being able to make arbitrarily more chips or chip-equivalents” would be the most difficult ingredient.]
In both cases it seems to me that the answer is “obviously yes”:
- super-plagues / crop diseases / etc. are an existence proof for (1),
- human brains are an existence proof for (2).
Therefore grey goo as defined in this post doesn’t seem too relevant for my AI-related questions. Like, if the AI doesn’t have a plan to make nanotech things that can exterminate / outcompete microbes living in rocks deep under the seafloor—man, I just don’t care.
None of this is meant to be a criticism of this post, which I’m glad exists, even if I’m not in a position to evaluate it. Indeed, I’m not even sure OP would disagree with my comment here (based on their main AI post).
Replies from: jacob_cannell, tailcalled, fergus-fettes, going-durden
↑ comment by jacob_cannell · 2023-04-17T19:40:12.213Z · LW(p) · GW(p)
The merit of this post is to taboo nanotech. Practical bottom-up nanotech is simply synthetic biology, and practical top-down nanotech is simply modern chip lithography. So:
1.) can an AI use synthetic bio as a central ingredient of a plan to wipe out humanity?
Sure.
2.) can an AI use synthetic bio or chip litho as a central ingredient of a plan to operate perpetually in a world without humans?
Sure
But doesn't sound as exciting? Good.
Replies from: tailcalled
↑ comment by tailcalled · 2023-04-17T20:44:02.039Z · LW(p) · GW(p)
Another merit of the OP might be in pointing out bullshit by Eliezer Yudkowsky/Eric Drexler?
It's kind of unfortunate if key early figures in the rationalist community introduce some bullshit to the memespace and we never get around to purging it and end up tanking our reputation by regularly appealing to it. Having this sort of post around helps get rid of it.
↑ comment by tailcalled · 2023-04-17T14:48:38.308Z · LW(p) · GW(p)
I'd also be interested in:
- (3) could an AI that is developing nanotech without paying attention to the full range of consequences accidentally develop a form of nanotech that is devastating to humanity
(Imagine if e.g. there is some nanotech that does something useful but also creates long-lasting poisonous pollution as a side-effect, for instance.)
I.e. is it sufficient safety that the AI isn't trying to kill us with nanotech? Or must it also be trying to not kill us?
↑ comment by Fergus Fettes (fergus-fettes) · 2023-06-26T10:37:19.328Z · LW(p) · GW(p)
Also worth noting w.r.t. this that an AI leaning on bio-like nano is not one that can reliably maintain control over its own goals-- it will have to gamble a lot more with evolutionary dynamics than many scenarios seem to imply, meaning:
- instrumental goal convergence more likely
- paperclippers more unlikely
So again, tabooing magical nano has a big impact on a lot of scenarios widely discussed.
Replies from: steve2152
↑ comment by Steven Byrnes (steve2152) · 2023-06-26T12:32:13.688Z · LW(p) · GW(p)
I don’t understand why evolution has anything to do with what I wrote.
Evolution designed a genome, and then the genome (plus womb etc.) builds a brain.
By the same token, it's possible that a future AI could design a genome (or genome-like thing), and then that genome could build a brain. Right?
Hmm, I guess a related point is that an AI wanting to take over the world probably needs to be able to either make lots of (ideally exact) copies of itself or solve the alignment problem w.r.t. its successors. And the former is maybe infeasible for a bio-like brain-ish thing in a vat. But not necessarily. And anyway, it might be also infeasible for a non-bio-like computer made from self-assembling nanobots or whatever. So I still don’t really care.
Replies from: fergus-fettes
↑ comment by Fergus Fettes (fergus-fettes) · 2023-06-27T11:57:29.457Z · LW(p) · GW(p)
(2) can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?
In the 'magical nano exists' universe, the AI can do this with well-behaved nanofactories.
In the 'bio-like nano' universe, 'evolutionary dynamics' (aka game theory among replicators under high brownian noise) will make 'operate perpetually' a shaky proposal for any entity that values its goals and identity. No-one 'operates perpetually' under high noise, goals and identity are constantly evolving.
So the answer to the question is likely 'no'-- you need to drop some constraints on 'an AI' or 'operate perpetually'.
Before you say 'I don't care, we all die anyway'-- maybe you don't, but many people (myself included) do care rather a lot about who kills us and why and what they do afterwards.
Replies from: steve2152
↑ comment by Steven Byrnes (steve2152) · 2023-06-27T12:55:32.390Z · LW(p) · GW(p)
I’m imagining an exchange like this.
ME: Imagine a world with chips similar to today’s chips, and robots similar to humans, and no other nano magic. With enough chips and enough robots, such a system could operate perpetually, right? Just as human society does.
THEM: OK sure that could happen but not until there are millions or even billions of human-level robots, because chips are very hard to fabricate, like you need to staff all these high-purity chemical factories and mines and thousands of companies manufacturing precision equipment for the fab etc.
ME: I don’t agree with “millions or even billions”, but I’ll concede that claim for the sake of argument. OK fine, let’s replace the “chips” (top-down nano) with “brains-in-vats” (self-assembling nano). The vats are in a big warehouse with robots supplying nutrients. Each brain-in-vat is grown via a carefully controlled process that starts with a genome (or genome-like thing) that is synthesized in a DNA-synthesis machine and quadruple-checked for errors. Now the infrastructure requirements are much smaller.
~~
OK, so now in this story, do you agree that evolution is not particularly relevant? Like, I guess a brain-in-a-vat might get cancer, if the AI can’t get DNA replication error rates dramatically lower than it is in humans (I imagine it could, because its tradeoffs are different), but I don’t think that’s what you were talking about. A brain-in-a-vat with cancer is not a risk to the AI itself, it could just dump the vat and start over.
(This story does require that the AI solves the alignment problem with respect to the brains-in-vats.)
Replies from: fergus-fettes
↑ comment by Fergus Fettes (fergus-fettes) · 2023-07-04T16:44:38.188Z · LW(p) · GW(p)
If you construct a hypothetical wherein there is obviously no space for evolutionary dynamics, then yes, evolutionary dynamics are unlikely to play a big role.
The case I was thinking of (which would likely be part of the research process towards 'brains in vats'-- essentially a prerequisite) is larger and larger collectives of designed organisms, forming tissues etc.
It may be possible to design a functioning brain in a vat from the ground up with no evolution, but I imagine that
a) you would get there faster verifying hypotheses with in vitro experiments
b) by the time you got to brains-in-vats, you would be able to make lots of other, smaller scale designed organisms that could do interesting, useful things as large assemblies
And since you have to pay a high price for error correction, the group that is more willing to gamble with evolutionary dynamics will likely have MVOs ready to deploy sooner than the one that insists on stripping all the evolutionary dynamics out of their setup.
↑ comment by Going Durden (going-durden) · 2023-04-18T07:18:25.201Z · LW(p) · GW(p)
- (1) can an AI use nanotech as a central ingredient of a plan to wipe out humanity, and
- (2) can an AI use nanotech as a central ingredient of a plan to operate perpetually in a world without humans?
Given the hard limitations on dry nanotech, and pretty underwhelming power of wet nanotech/biotech, both answers should be "...Eh."
We have no plausible evidence that any kind of efficient nanotech that could be used for a Gray Goo scenario is possible, and this post is one of the many arguments against it.
If we focus only on completely plausible versions of nanotech, the worst case scenario is the AI creating a "blight" that could very, very, very slowly damage our agriculture, cause disease in humans, and expand the AI's influence, on the scale of decades or centuries. There is no plausible way to make an exponentially growing nanite cloud that would wipe us out and assemble into an AI God; the worst case scenario is an upjumped artificial slime mold that slowly creeps over everything and can be fended off with a dustpan.
↑ comment by Steven Byrnes (steve2152) · 2023-04-18T11:42:34.675Z · LW(p) · GW(p)
If an AI arranged to release a highly-contagious deadly engineered pathogen in an international airport, it would not take "decades or centuries" to spread. Right????
Replies from: going-durden
↑ comment by Going Durden (going-durden) · 2023-04-19T11:06:44.492Z · LW(p) · GW(p)
a pathogen is not grey nanotech, but biotech. And while it would be very, very dangerous, there is no plausible way for it to wipe out humanity. We already have highly-contagious deadly pathogens all over the planet, and they are sluggish to spread, and their deadliness is inverse to their contagiousness for obvious reasons (dead men don't travel very well).
Replies from: steve2152
↑ comment by Steven Byrnes (steve2152) · 2023-04-19T14:50:01.444Z · LW(p) · GW(p)
I’m finding this conversation frustrating. It seems to me that your grandparent comment [LW(p) · GW(p)] was specifically talking about biotech & pandemics. For example, you said “wet nanotech/biotech”. And then in that context you said “"blight" that could very, very, very slowly damage our agriculture, cause disease in humans, and expand the AI's influence, on the scale of decades or centuries”. This sure sounds to me like a claim that a novel pandemic would spread over the course of decades or centuries. Right? And such a claim is patently absurd. It did not take decades or centuries for COVID to spread around the world. (Even before mass air travel, it did not take decades or centuries for Spanish Flu to spread around the world.) Instead of acknowledging that mistake, your response is “a pathogen is not grey nanotech, but biotech”, which is missing the point—I was disputing a claim that you made about biotech.
their deadliness is inverse to their contagiousness for obvious reasons (dead men don't travel very well).
Famously, when you catch COVID, you can become infectious a day or two before you become symptomatic. (That’s why it was so hard to contain.) And COVID also could cause nerve-damage that presumably had nothing to do with its ability to spread. More generally, it seems perfectly possible for a disease to have a highly-contagious-but-not-too-damaging early phase and then a few days later it turns lethal, perhaps by spreading into a totally different part of the body. So I strongly disbelieve the claim that deadliness and contagiousness of engineered pathogens are inevitably inverse, let alone that this is “obvious”.
I also suggest reading this article.
Replies from: going-durden
↑ comment by Going Durden (going-durden) · 2023-04-20T06:58:36.271Z · LW(p) · GW(p)
sorry if the thread of my comment got messy, I did mention somewhere that a COVID-like pathogen would likely be the worst case scenario, for the reasons you mentioned above (long incubation).
However, I believe that the COVID pandemic actually proves that humanity is robust against such threats. Quarantine worked. Masks worked. Vaccines worked. Soap and disinfectant worked. As the human response would scale up with the danger inherent in any pandemic, I think that anything significantly more deadly than COVID would be stopped even faster, due to far more draconian quarantine responses.
With those in place, I do not see how a pathogen could be used to "wipe out humanity". Decimate, yes. Annihilate? No.
But as I agreed in another thread, we should cut that conversation now. Discussing this online is literally feeding ideas to our potential enemy (be it AI or misaligned humans).
↑ comment by the gears to ascension (lahwran) · 2023-04-22T19:19:24.752Z · LW(p) · GW(p)
did we live through the same pandemic?
Replies from: going-durden
↑ comment by Going Durden (going-durden) · 2023-04-28T06:31:28.957Z · LW(p) · GW(p)
we very likely did not, given the span of it, and various national responses.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2023-04-28T06:33:15.114Z · LW(p) · GW(p)
fair enough. most countries' responses left a lot to be desired. a few countries that are generally known for having their act together overall did for covid too, but that didn't include some critical large-population countries.
↑ comment by Donald Hobson (donald-hobson) · 2024-08-14T12:16:00.320Z · LW(p) · GW(p)
If I were a malicious AI trying to design a pandemic:
- Don't let it start in one place and slowly spread. Get 100 brainwashed humans to drop it in 100 water reservoirs all at once across the world. No warning. No slow spreading, it's all over the earth from day 1.
- Don't make one virus, make at least 100, each with different mechanisms of action. Good luck with the testing and vaccines.
- Spread the finest misinformation. Detailed plausible and subtly wrong scientific papers.
- Generally interfere with humanity's efforts to fix the problem. Top vaccine scientists die in freak accidents.
- Give the diseases a long incubation period where they are harmless but highly contagious, then they turn lethal.
- Make the diseases have mental effects. Make people confident that they aren't infected, or less cautious about infecting others - or, if you can, make everyone infected a high-functioning psychopath plotting to infect as many other people as possible.
comment by Mitchell_Porter · 2023-04-17T07:17:34.935Z · LW(p) · GW(p)
This might be the most erudite chemistry post [? · GW] ever on Less Wrong. @Eric Drexler [LW · GW] actually comments here on occasion; I wonder what he would have to say.
I have been trying to sum up my own thoughts without getting too deeply into it. I think I would emphasize first that the capabilities of plain old DNA-based bacteria are already pretty amazing - bacteria already live everywhere from the clouds to our bloodstreams - and if one is worried about what malevolent intent can accomplish on the nanoscale, they already provide reason to be worried. And I think @bhauth [LW · GW] (author of this post) would agree with that.
Now, regarding the feasibility of an alternative kind of nanobot, with a hard solid exterior, a vacuum interior, and mechanical components... All the physical challenges are real enough, but I'm very wary of supposing that they can't be surmounted. For example, synthesis of diamondoid parts might sound impossibly laborious; then one reads about "direct conversion of CO2 to multi-layer graphene", and thinks, could you have a little nano "sandwich maker" that fills with CO2 (purified by filter), has just the right shape and charge distribution on its inner surfaces to be a substrate for the formation of graphene, and which clamps shut when a particular current flows, opening afterwards to reveal a customized nanosheet that can become a tube or a layer or a surface...
You're right that cells do all this stuff already, but stochastically, with self-assembled floppy parts in an aqueous environment, and so on. You could view the decades of discussion about mechanical nanotechnology, as a long theoretical study of whether and how rigid mechanical paradigms can also be introduced on the nanoscale. What I would say is that, if one is specifically interested in nanobots made of "diamondoid", one can also investigate the extent to which the biological paradigm can be imported into that world, to create hybrid designs which opportunistically use whichever approach works.
Replies from: Metacelsus↑ comment by Metacelsus · 2023-04-17T21:19:29.008Z · LW(p) · GW(p)
>to our bloodstreams
Nitpick: https://www.nature.com/articles/s41564-023-01350-w
"No evidence for a common blood microbiome based on a population study of 9,770 healthy humans"
Of course, skin, digestive tract, reproductive tract, etc. all have lots of bacteria.
Replies from: gilch
comment by DaemonicSigil · 2023-04-17T06:36:51.837Z · LW(p) · GW(p)
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points:
- Rare materials: Yep, this is a real design constraint, but probably not that hard to design around? I'm not expecting nanobots to be made mostly out of iron.
- Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end? The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but Cathodic protection might help in some cases.
- Agree that electrostatic motors are the way to go here. I'm not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot. What about this general scheme for a motor?: Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from a power supply deposits electrons on one wheel, and produces holes on the other wheel. The corresponding sites are attracted to each other and once they get close enough, the electrons jump into the holes, filling them. Switching is determined by the proximity of various sites on the wheels as they move relative to each other. Considering how electrons are able to jump around between sites in the electron transport chain, this doesn't seem impossible.
- An intermediate design that I can imagine is a block with a series of tubes and chambers embedded in it. (The bulk of the block can hold electronics.) Most of the tubes are filled with water, so nanobot components can happily bounce around in them. But lots of components are also mounted to the walls of the tubes. You can't clump together if you're bonded to the wall of your tube. A small minority of tubes can be filled with gas, or even under vacuum, for any weird processes that may require those conditions. Pumping energy is volume times pressure, so the energy requirements could be reasonable as long as the volume is small.
- Yeah, most reactions should probably still be done in water.
- The reason for doing things at high temperatures is to do reactions with a high activation energy. If we're designing custom catalysts (artificial enzymes) for our nanobots, we can probably finesse it so that the enzyme coaxes the reactants into the high-energy intermediate state, even if the ambient temperature is low (via coupling to a more favorable reaction, for example). Also, covalent single-bonds can rotate, so there's nothing preventing the existence of a covalently bonded structure that can also exhibit conformational changes. Finally, I'm skeptical that existing life has found everything there is to be found here. Plenty of chemicals are useful to humans, but not useful to any living thing, and hence lack an evolved synthesis pathway. Also, I'd guess that the stupidity of evolution has left a lot of low hanging fruit for humans. For example, rather than trying to do a reaction with proteins, we can do it with a group of complicated catalysts synthesized by proteins.
- I'd bet on diamond synthesis still being possible somehow, but it does seem like a genuinely complicated question, so I'll have to look into it further.
- Yep, I'm not really even sure how useful diamond would be for lots of nano-stuff. Might mostly just be used for large structures the bots build, rather than being anywhere on the self-replication pathway for the bots. One difference is it's covalent rather than ionic, so there would probably be much less concern about that Ostwald ripening thing for parts made of diamond. So maybe there would be some uses for it because of that.
- Doesn't have to be rigid, can still be connection based. For example, there could be simple protein-based building blocks that act like legos. An assembly head can assemble these, and move around on the surface of the part it's building by accepting signals that correspond to "move 1 block left", etc. Position is always exactly known, not because there's a rigid beam anywhere in the system, but because we know the exact integer number of steps the assembly head has moved since the start.
12, 13. ATP needs to be everywhere because it supplies energy. Okay, fair enough. Ribosomes need to be almost everywhere, since proteins are needed everywhere and then ribosomes come with their own host of supporting infrastructure like tRNAs and stuff, which have to be everywhere too. I'm being a little unfair to Eukaryotic cells here, which have lots of sophisticated stuff going on, but my general picture is that whenever a cell decides to make proteins somewhere and then transport them somewhere else, that costs genome space, which is very limited, so the cell can't do that very often. Nanobots genuinely have different constraints from life here, in particular they have cheaper genome space, and so they can have custom designed pipes for every type of protein they use, and the pipe leads right to the chamber where that protein is being used. Huge information cost, but if it makes things work way better, it's probably worthwhile for nanobots. I totally believe that life is using those techniques exactly as much as is optimal for it, though.
- Mostly covered by my answer to 11 where the assembler head walks around on the surface and counts its steps. Larger structures can have many assembler heads working at once. I also don't know why you're scoffing at the potential for building computers here. The supposed "embodied" computation of existing cells is currently computing things that are keeping us alive, which is great, but you can't exactly solve any other important problems on it. It's not a flexible universal computer, in the sense of a Turing machine that can run any program.
↑ comment by bhauth · 2023-04-17T17:41:23.650Z · LW(p) · GW(p)
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it.
Thanks, glad you liked it. You made quite the comment here, but I'll try to respond to most of it.
Metal surfaces: Why not just build up the metal object in an oxygen-free environment, then add on an external passivation layer at the end?
- To build up metal, you need to carry metal atoms somehow. That requires moving ions, because otherwise there's no motive force for the transfer, plus your carrier would probably be stuck to the metal.
Without proteins carrying ions in water, this is difficult. The best version of what you're proposing is probably directed electrochemical deposition in some solvent that has a wide electrochemical window and can dissolve some metal ions. Such solvents would denature proteins.
- Inputs and outputs need to be transferred between compartments. Cells do use "airlock" type structures for transferring material, but some leakage would be inevitable.
The passivation layer could be engineered to be more stable than the naturally occurring oxidation layer the metal would normally have. There would still be a minimum size for metal objects, of course. (Or more precisely, a minimal curvature.) Corrosion could definitely be a problem, but Cathodic protection might help in some cases.
It's true that proteins can be designed to bind strongly to metal oxide surfaces and inhibit corrosion fairly well. That's actually an interesting research topic that might be useful for steel. But even that isn't good enough on such a small scale, and you'd need to fully cover all exposed surfaces.
The only other option for "engineering" more-stable surfaces is metal nitrides or carbides, but that requires high temperatures; it's not something enzymes can do.
Cathodic protection doesn't help here. It doesn't maintain a perfect equilibrium, and objects would still do Ostwald ripening and tend to become more spherical.
Agree that electrostatic motors are the way to go here. I'm not sure the power supply necessarily has to be an ion gradient, nor that the motor would otherwise need to be made from metal. Metal might be actively bad, because it allows the electrons to slosh around a lot.
I'm not sure what you mean by electrons "sloshing around".
What about this general scheme for a motor?: Wheel-shaped molecules, with sites that can hold electrons. A high voltage coming from a power supply deposits electrons on one wheel, and produces holes on the other wheel. The corresponding sites are attracted to each other and once they get close enough, the electrons jump into the holes, filling them. Switching is determined by the proximity of various sites on the wheels as they move relative to each other. Considering how electrons are able to jump around between sites in the electron transport chain, this doesn't seem impossible.
It's certainly possible to make electromechanical computers with relays. And it's possible to use MEMS electrostatic actuators for relays. They're just not as good as semiconductors for computers. The MEMS relay approach is actually used in some devices for handling high-frequency radio signals.
Consider the analogous versions of ionic and electrostatic motors, and think about what's better. Ionic motors use tubes filled with water instead of conductive wires with insulation; those transmit signals slower but are easier to make. Ionic motors can dump ions into solution instead of needing a conductor at a lower voltage. Ionic motors don't have to deal with possible unintentional electrolysis. Ion gates are much easier to make with proteins than electrical switches.
Electrostatic motors are generally switched for each rotational step, but consider: If you want to compete with the energy usage of ionic motors, you can only use a few electron-volts per rotation. Semiconductor switches and relays are not so good for measuring out individual electrons.
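For scale, using standard values (ATP hydrolysis at roughly 50 kJ/mol under cellular conditions, and about 3 ATP per full rotation of ATP synthase; the conversion is mine):

```python
N_A = 6.022e23     # 1/mol
eV = 1.602e-19     # J

dG_atp = 50e3      # J/mol, ATP hydrolysis under cellular conditions (standard value ~30.5 kJ/mol)
atp_eV = dG_atp / N_A / eV
rotation_eV = 3 * atp_eV   # ATP synthase makes ~3 ATP per full rotation

print(f"ATP hydrolysis ~ {atp_eV:.2f} eV per molecule")
print(f"energy per full rotation of ATP synthase ~ {rotation_eV:.1f} eV")
```

So to match the ionic baseline, the whole budget per rotation - including switching losses - is only an electron-volt or two.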
Instead of sites that hold electrons, why not use sites that hold ions?
An intermediate design that I can imagine is a block with a series of tubes and chambers embedded in it. (The bulk of the block can hold electronics.) Most of the tubes are filled with water, so nanobot components can happily bounce around in them. But lots of components are also mounted to the walls of the tubes. You can't clump together if you're bonded to the wall of your tube. A small minority of tubes can be filled with gas, or even under vacuum, for any weird processes that may require those conditions. Pumping energy is volume times pressure, so the energy requirements could be reasonable as long as the volume is small.
That's basically the same proposal as having gas-filled compartments in large cells, so this applies:
Any self-replicating cell must move material between outside and multiple compartments. Gas leakage by the transporters would be inevitable. Cellular vacuum pumps would require too much energy and may be impossible. Also, strongly binding the compounds used (eg CO2) to carriers at every step would require too much energy. ("Too much energy" means too much to be competitive with normal biological processes.)
If the objects bound to the walls of gas-filled compartments have movable arms, on a small enough scale, those arms would also get stuck to (or away from) the walls by electrostatic and dispersion forces.
The reason for doing things at high temperatures is to do reactions with a high activation energy. If we're designing custom catalysts (artificial enzymes) for our nanobots, we can probably finesse it so that the enzyme coaxes the reactants into the high-energy intermediate state, even if the ambient temperature is low (via coupling to a more favorable reaction, for example).
Yes, enzymes can catalyze some difficult reactions. The main tools they use for that are hydrogen bonding patterns that stabilize specific conformations, and electrostatic fields around the active site. There are also P450 oxidases that put a reactive site in a hydrophobic pocket and use it to oxidize hydrocarbons in a semi-controlled way, as a first step toward processing them.
But enzymes aren't magic. They have limitations. For example, methane can be metabolized by some bacteria, but it's always oxidized to methanol in a reaction that consumes NAD(P)H. Half the energy of the methane is wasted to get it to a state that can be metabolized, and there's just no way around that.
Another notable limitation of enzymes is the difficulty of making aliphatic hydrocarbons. That's why hydrophobic stuff is almost always fatty acids or terpenes from DMAPP.
I'm fairly familiar with protein mechanisms and their limitations. Is there some other type of mechanism you're proposing for low-temperature catalysis, something that enzymes don't already use?
Also, covalent single-bonds can rotate, so there's nothing preventing the existence of a covalently bonded structure that can also exhibit conformational changes.
Yes, proteins are covalently bonded. They're also non-covalently bonded. If all their structure was covalent, then they wouldn't be able to do conformational changes. And because some of it is non-covalent, they denature at high temperature.
Also, I'd guess that the stupidity of evolution has left a lot of low hanging fruit for humans. For example, rather than trying to do a reaction with proteins, we can do it with a group of complicated catalysts synthesized by proteins.
I think the word for that is "cofactor".
I'd bet on diamond synthesis still being possible somehow, but it does seem like a genuinely complicated question, so I'll have to look into it further.
OK. Maybe you'll learn something from the attempt.
Doesn't have to be rigid, can still be connection based. For example, there could be simple protein-based building blocks that act like legos. An assembly head can assemble these, and move around on the surface of the part it's building by accepting signals that correspond to "move 1 block left", etc. Position is always exactly known, not because there's a rigid beam anywhere in the system, but because we know the exact integer number of steps the assembly head has moved since the start.
OK, suppose you have a linear motor (like myosin) which is controlled by a signal (like a DNA sequence) that indicates a series of movements. (Something more computer-like would be less efficient than that). Also remember that on a molecular scale, energy-efficient = reversible. ATPase spins in both directions.
Compared to coding for a protein sequence, you're using more information and more energy to do this. It's also rather difficult to get single-protein-spacing-level control.
So, you're imagining something like a protein with regularly spaced sites that can be attached to, and something that travels along it, with an enzyme-like tooltip that can bind to those sites to do a reaction that connects something. And that is...actually similar to how cytoskeletons work, but obviously they're not directly controlled by DNA or RNA.
My general picture is that whenever a cell decides to make proteins somewhere and then transport them somewhere else, that costs genome space, which is very limited, so the cell can't do that very often. Nanobots genuinely have different constraints from life here; in particular, they have cheaper genome space, and so they can have custom-designed pipes for every type of protein they use, with the pipe leading right to the chamber where that protein is being used. Huge information cost, but if it makes things work way better, it's probably worthwhile for nanobots. I totally believe that life is using those techniques exactly as much as is optimal for it, though.
Specifying positions with positioners would require more bits of information than coding for proteins. DNA has high information density, and something much more compact than that wouldn't have strong enough binding to be read accurately.
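A crude way to see the information-cost comparison (all the per-placement assumptions below, e.g. the part count, grid size, and orientation count, are invented for illustration):

```python
import math

# Crude information-cost comparison: coding a structure as a polymer sequence
# vs. explicitly specifying each placement for a positional assembler.
# All per-placement assumptions (part count, grid, orientations) are invented
# for illustration.
n_units = 300

sequence_bits = n_units * 3 * 2                 # 3 bases per residue, 2 bits per base

part_bits = math.log2(20)                       # which of 20 part types
site_bits = 3 * math.log2(100)                  # x, y, z on a 100^3 grid
orientation_bits = math.log2(24)                # one of 24 orientations
positional_bits = n_units * (part_bits + site_bits + orientation_bits)

print(f"sequence coding:   ~{sequence_bits} bits")
print(f"positional coding: ~{positional_bits:.0f} bits")
```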
In what sense would nanobots have "cheaper genome space" than current cells? What mechanism do you envision being used for information storage?
I also don't know why you're scoffing at the potential for building computers here. The supposed "embodied" computation of existing cells is currently computing things that are keeping us alive, which is great, but you can't exactly solve any other important problems on it. It's not a flexible universal computer, in the sense of a Turing machine that can run any program.
If the point of your nanobots is to be "like current life, but worse, except it also produces a computer" then I think the usual word for that is "neurons". The resulting computer would need to be better than current systems.
You want to grow brains that work more like CPUs. The computational paradigm used by CPUs is used because it's conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that's where the name came from.
Drexler envisioned such mechanical computers being used to control internal processes as well; that's why I made the comparison. According to some people, this would be an advantage over how cells work for controlling internal operations, but I disagree.
Replies from: DaemonicSigil↑ comment by DaemonicSigil · 2023-04-18T06:22:49.487Z · LW(p) · GW(p)
Thanks for the detailed reply. Jumping right in:
Yep, I totally concede that the size and level of detail of exposed metal parts is going to be limited; the discussion would mostly be interesting in terms of whether nanomachines would be able to assemble large metal parts as an external product, or metal parts that are fully embedded in another material (e.g. copper wires embedded in diamond). The discussion about surface coatings and cathodic protection is just "haggling over the price", so to speak.
I'm not sure what you mean by electrons "sloshing around".
The thing where if you stick a piece of metal in an electric field, charges build up on the surface of the metal to oppose the field.
Consider the analogous versions of ionic and electrostatic motors, and think about what's better. Ionic motors use tubes filled with water instead of conductive wires with insulation; those transmit signals slower but are easier to make. Ionic motors can dump ions into solution instead of needing a conductor at a lower voltage. Ionic motors don't have to deal with possible unintentional electrolysis. Ion gates are much easier to make with proteins than electrical switches.
The original drawback I had in mind for ionic motors is that you need to drag a membrane everywhere that you want to use a motor, which is very inconvenient. Tubes are membranes, but they're rolled up, which makes them a lot more convenient. Diffusion of ions is very fast on these scales, so I'd guess that ions and electrons are about equally good as power sources, unless the motor is going to be correspondingly very fast at using up lots of ions. On the other hand, I think you're misunderstanding what I'm saying about the pure electrostatic motor. It doesn't need any external switching electronics to power the motor, and in particular not silicon electronics. It should just spin given DC power of the correct voltage. The switching would work via proximity, and would happen on the wheel molecules themselves. It's easiest for electrons to tunnel between two sites when they're physically close in space. Depending on how the wheels are rotated relative to each other, various sites will be closer to or farther from each other in space, and this changes as the wheel spins.
If the objects bound to the walls of gas-filled compartments have movable arms, on a small enough scale, those arms would also get stuck to (or away from) the walls by electrostatic and dispersion forces.
Aren't there lots of proteins that undergo conformational changes in ways that don't look like "having arms"? Alternatively, I can make my arm negatively charged and put it on a tower-thingy made of lots of covalently bonded carbon so that it doesn't bend. That tower-thingy holds it up away from the surface of the tube so it's physically too far away to stick. But won't it just stick to the tower-thingy? I'm one step ahead of you: I've also made the tower-thingy negatively charged!
I'm fairly familiar with protein mechanisms and their limitations. Is there some other type of mechanism you're proposing for low-temperature catalysis, something that enzymes don't already use?
I guess to start with, let's say we're making diamond. We're building up a block of the stuff from carbon, and the dangling bonds on the edge of the structure connect to hydrogens. My first thought would be a condensation reaction: we stick a methanediol onto the structure, replacing two dangling hydrogens with bonds to a new carbon atom. Two molecules of water are produced, made from the hydroxyl groups and those two hydrogens.
I think existing proteins can do condensation reactions okay, but maybe those become impossible when the carbon you're trying to attach to is already bonded to three other carbons?
Yes, proteins are covalently bonded. They're also non-covalently bonded. If all their structure was covalent, then they wouldn't be able to do conformational changes. And because some of it is non-covalent, they denature at high temperature.
You're correctly pointing out two extremes: very floppy chains with no additional structure, and fully rigid crystals with no flex at all. I'm trying to say that there exists an intermediate zone in between: molecules that have enough covalent bonds to have structure, but with a few pivots where there's just a single covalent bond that can rotate, and this gives the molecule flexibility to change shape. The structure of the molecule isn't a chain; it's a connected graph with lots of cycles.
I think the word for that is "cofactor".
Yep, I know what a cofactor is. :) I'm saying that we could go farther in that direction than nature has. Normally a cofactor is held in a parent enzyme that still has to be just the right shape and everything. What if even the parent enzyme was just some organic molecule synthesized by other enzymes, rather than a protein itself? You complained above that any solvent that wouldn't corrode metals would have to denature proteins. What if we could use molecules that aren't proteins instead?
Also remember that on a molecular scale, energy-efficient = reversible. ATPase spins in both directions.
If we're reading from RNA, near-reversibility seems fine and good. Maybe sometimes the assembly head takes a step backwards to where it came from and its RNA reader correspondingly takes a step backwards to the previous codon in the RNA sequence. If we're working directly from electrical signals, maybe we have to just spend the energy needed to make sure that there's no backtracking. When a tRNA detaches from its amino acid during protein synthesis, that seems pretty darn irreversible, but apparently the energy cost is bearable for the cell. If we're now assembling larger lego bricks, each of which is an individual protein, probably the cost to make all the walking around on the surface irreversible is not too much compared to the cost to assemble each lego brick in the first place.
In what sense would nanobots have "cheaper genome space" than current cells? What mechanism do you envision being used for information storage?
A few senses. The first is that life is amazingly robust to errors, but not infinitely robust. The larger the genome, the more mutations you get per generation. Life does have some error repair mechanisms, but using information theory it's possible to devise an error correcting code of arbitrarily high robustness. Life just can't switch to using such a code because changing things up like that would mess with everything else. That kind of refactor is something that evolution just can't do. If I recall correctly, evolutionary theorists do seem to think that this is a limiting factor on (coding) genome size.
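As a toy illustration of the error-correction point (a sketch only, not a claim about how a real genome would implement it), even a crude repetition code pushes the residual per-base error from O(p) down to O(p^2):

```python
from collections import Counter

# Toy repetition code over DNA bases: store each base 3 times, decode by majority
# vote. If each stored copy mutates independently with probability p per
# generation, an uncorrectable error needs at least 2 of the 3 copies to change,
# so the residual error per base is O(p^2) instead of O(p).
def encode(seq: str, copies: int = 3) -> str:
    return "".join(base * copies for base in seq)

def decode(stored: str, copies: int = 3) -> str:
    blocks = (stored[i:i + copies] for i in range(0, len(stored), copies))
    return "".join(Counter(block).most_common(1)[0][0] for block in blocks)

original = "ATGCGTAC"
stored = encode(original)
corrupted = stored[:4] + "G" + stored[5:]      # one point mutation in one stored copy
assert decode(corrupted) == original           # single hits per block are corrected

p = 1e-8                                       # assumed per-copy mutation probability
print("residual error per base, roughly:", 3 * p**2)
```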
Another sense is that nanobots can be much more cooperative rather than competitive. Each bacterium has to hold its entire genome, along with all the machinery needed to sustain and reproduce itself. Multicellular organisms have it much better in that their cells can cooperate and specialize, but even there, each cell has its own copy of the genome, even the genes it doesn't particularly need at the moment. You could imagine that maybe only 1 in 1000 nanobots has to lug around the master copy of the genome, and the rest can just request the specific parts they need for the particular specialty they're working on. They don't out-compete the bot with the full genome, because the system was designed top-down and the presence of the master-genome-holding bot makes sense from a top-down perspective. No need to worry about the nanobot equivalent of cancer, because see above about error correction.
If the point of your nanobots is to be "like current life, but worse, except it also produces a computer" then I think the usual word for that is "neurons". The resulting computer would need to be better than current systems.
Think about component size. Neurons are huge. Even current transistors are still much larger than individual proteins. The goal here is not "somehow build a computer with nanotechnology". The goal is to build a computer whose logic gates are literally the size of molecules. (And also to make it reversible and therefore super power-efficient.) I know neurons are quite complex and are much more advanced than a simple transistor, but still, the size difference from neurons to molecules is ridiculous. And small components typically switch faster than large ones.
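For a sense of the energy headroom being pointed at (the CMOS figure below is an order-of-magnitude placeholder, not a measurement):

```python
import math

# Energy-scale comparison for the "molecular, ideally reversible computer" goal.
# The CMOS figure is an order-of-magnitude placeholder, not a measurement.
K_B = 1.380649e-23   # J/K
T = 300.0            # K

landauer_j = K_B * T * math.log(2)   # minimum cost of erasing one bit irreversibly
cmos_switch_j = 1e-16                # ballpark energy for one conventional gate switch

print(f"Landauer bound at 300 K: {landauer_j:.2e} J per bit")
print(f"ballpark CMOS switch:    {cmos_switch_j:.0e} J (~{cmos_switch_j / landauer_j:.0f}x the bound)")
```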
Replies from: bhauth↑ comment by bhauth · 2023-04-19T00:25:02.155Z · LW(p) · GW(p)
I guess to start with, let's say we're making diamond. We're building up a block of the stuff from carbon, and the dangling bonds on the edge of the structure connect to hydrogens. My first thought would be a condensation reaction: we stick a methanediol onto the structure, replacing two dangling hydrogens with bonds to a new carbon atom. Two molecules of water are produced, made from the hydroxyl groups and those two hydrogens.
I think existing proteins can do condensation reactions okay, but maybe those become impossible when the carbon you're trying to attach to is already bonded to three other carbons?
Condensation reactions are only possible in certain circumstances. Maybe read about the mechanism of aldol condensation and get back to me. Also, methanediol is in equilibrium with formaldehyde in water.
I realize you don't know my background, but if you want to say I'm wrong about something chemistry-related, you'll have to put in a little more effort than that.
↑ comment by johnswentworth · 2023-04-17T20:26:22.863Z · LW(p) · GW(p)
I really like this post, I hope to see more like it on less wrong, and I strong-upvoted it. That said, let me now go through my thoughts on various points...
+1. I'd add that, besides the specific objections to points here, the overall argument of the post has a major conjunction problem: it only takes one or maybe two of the points to be wrong, in order for the end-to-end argument to fall apart. And a lot of these points do not have the sort of watertight argument which establishes anywhere near 90% confidence, and 90% per-step would already be on the low side for a chain with 10+ mostly-conjunctive steps.
On top of that, the end-to-end argument mostly seems to argue against some rather specific pictures (e.g. diamondoids, nano-3d printing), which are a lot narrower than "grey goo" in general.
So I think the actual headline argument is pretty weak. But even so, I strong-upvoted the post, because I love the object-level analysis of the individual points on their own.
Replies from: david-johnston, jacob_cannell↑ comment by David Johnston (david-johnston) · 2023-04-17T22:12:16.879Z · LW(p) · GW(p)
One of the contentions of this post is that life has thoroughly explored the space of nanotech possibilities. This hypothesis makes the failures of novel nanotech proposals non-independent. That said, I don't think the post offers enough evidence to be highly confident in this proposition (the author might privately know enough to be more confident, but if so, it's not all in the post).
Separately, I can see myself thinking, when all is said and done, that Yudkowsky and Drexler are less reliable about nanotech than I previously thought (which was a modest level of reliability to begin with), even if there are some possibilities for novel nanotech missed or dismissed by this post. Though I think not everything has been said yet.
Replies from: donald-hobson↑ comment by Donald Hobson (donald-hobson) · 2024-08-14T12:22:31.435Z · LW(p) · GW(p)
All life runs on DNA in particular. Scientists have added extra base pairs and made life forms that work fine. Evolution didn't. DNA is a fairly arbitrary molecule amongst a larger set of similar double chain organics.
I think this post is just dismissing everything with weak reasons. I don't think this post is evidence against nanotech at all; by conservation of expected evidence, we should take an unusually bad argument against a position as evidence for that position.
If nanotech really was impossible, it's likely that better impossibility arguments would exist.
↑ comment by jacob_cannell · 2023-04-21T00:49:37.558Z · LW(p) · GW(p)
it only takes one or maybe two of the points to be wrong, in order for the end-to-end argument to fall apart
Regardless of the specific argument here, biological cells are already near-Pareto-optimal robots in terms of thermodynamic efficiency. There is essentially no potential improvement for designs that are better at converting energy into replication of code, or just converting energy into carefully arranged nanostructures in general. This is a much stronger, airtight argument not against the possibility of nanotech, but against the promise of nanotech.
Replies from: donald-hobson↑ comment by Donald Hobson (donald-hobson) · 2024-08-14T12:26:24.448Z · LW(p) · GW(p)
Regardless of the specific argument here, biological cells are already near-Pareto-optimal robots in terms of thermodynamic efficiency.
You can point to little bits of them that are efficient. Photosynthesis still sucks. Modern solar panels are WAY better.
Also, bio cells don't try to build fusion reactors. All that deuterium floating around for the taking and they don't even try.
Nanobots that did build fusion reactors would have a large advantage. Yes this requires the nanobots to work together on macro scale projects.
Replies from: bhauth↑ comment by bhauth · 2024-08-14T13:45:40.523Z · LW(p) · GW(p)
Photosynthesis still sucks. Modern solar panels are WAY better.
Conversion of light to ATP and NADPH is a little more efficient than typical solar panels. If you want to talk about the efficiency of CO2 to sugar vs industrial systems, you should compare that to current direct air capture (with solar power) efficiency - plus water electrolysis and conversion to chemicals, I suppose. As for talking about fusion as an advantage of nanobots specifically, that's retarded. If you want me to debate you seriously, post something that shows you deeply understand something about biology or chemistry. Maybe run it by an expert in the field first so you don't waste everyone's time.
comment by epiphi · 2023-04-17T18:36:12.421Z · LW(p) · GW(p)
This is nice to see, I’ve been generally kind of unimpressed by what have felt like overly generous handwaves re: gray gooey nanobots, and I do think biological cells are probably our best comparison point for how nanobots might work in practice.
That said, I see some of the discussion here veering in the direction of brainstorming novel ways to do harm with biology, which we have a general norm against in the biosecurity community – just wanted to offer a nudge to y’all to consider the cost vs. benefit [LW · GW] of sharing takes in that direction. Feel free to follow up with me over DM!
Replies from: Vladimir_Nesov, cwbakerlee, donald-hobson, gilch↑ comment by Vladimir_Nesov · 2023-04-17T22:52:06.899Z · LW(p) · GW(p)
I don't see specifically gray gooey nanobots having a visible presence on LW. When people gesture at nanotech, it's mostly in the sense of molecular manufacturing, local self-contained infrastructure for producing advanced things like computers, a macroscale activity. This is important for quickly instantiating designs that can't be constructed on existing infrastructure, bootstrapping molecular manufacturing capability starting from things like existing RNA printers.
This way, bringing new things into physical existence only requires having their designs, given a sufficiently versatile manufacturing toolset. If there is no extended delay with incrementally upgrading production facilities all over the world, ability to design machines thousands of times faster than human civilization directly translates into ability to quickly manufacture them.
(The diamondoid bacterium things Yudkowsky keeps mentioning don't particularly need self-replication capabilities to make the same point; they could just as well be pumped out by Zerg queens foraging underground. The details of this don't matter for the point being made; there are many independent ways of eating the world that don't overall become less effective because some of them turn out, on further reflection, to be infeasible.)
↑ comment by cwbakerlee · 2023-04-17T19:30:03.966Z · LW(p) · GW(p)
Strong +1 to this
I'm also happy to discuss stuff about norms further 1 on 1 -- the best way to contact me, anonymously or non-anonymously, is through this short form.
↑ comment by Davidmanheim · 2023-04-18T10:51:47.282Z · LW(p) · GW(p)
I assume the strong +1 was specifically on the infohazards angle? (Which I also strongly agree with.)
Replies from: cwbakerlee↑ comment by cwbakerlee · 2023-04-18T13:45:43.667Z · LW(p) · GW(p)
Yep, that's right -- thanks for clarifying!
↑ comment by Donald Hobson (donald-hobson) · 2024-08-14T12:30:18.529Z · LW(p) · GW(p)
Remember, we are talking about the power of intelligence here.
For nanobots to be possible, there needs to be one plan that works. For them to be impossible, every plan needs to fail.
How unassailably solid did the argument for airplanes look before any were built?
↑ comment by gilch · 2023-04-18T00:22:11.379Z · LW(p) · GW(p)
It's a fair point that this topic touches on potential infohazards [? · GW]. I don't think anything I've said so far is particularly novel, although in the saying I'm perhaps making the ideas less obscure. I also haven't really gone into much depth of detail (mostly because of my relative lack of expertise). My main aim has been to nudge others into taking the threats more seriously, even after seeing a related strawman cut down.
comment by anithite (obserience) · 2023-04-17T19:15:50.114Z · LW(p) · GW(p)
edit: green goo is plausible [LW · GW]
The AI can kill us and then take over with better optimized biotech very easily.
- Doubling times:
  - Plants (i.e. solar-powered wet nanotech): more than single-digit days
  - Algae in ideal conditions: 1.5 days
  - E. coli: 20 minutes
- There are piles of yummy carbohydrates lying around (trees, plants, houses).
- The AI can go full Tyranid.
- The AI can re-use existing cellular machinery. There's no need to rebuild the photosynthesis or protein-building machinery; full digestion and rebuilding at the amino acid level is wasteful.
- Sub-2-minute doubling times are plausible for a system whose rate-limiting step is mechanically infecting plants with a fast-acting subversive virus. The spreading flying things are self-replicators that steal energy and cellular machinery from plants during infection (i.e. mosquito-like). Onset time could be a few hours until construction of shoggoth-like things. Full biosphere assimilation could be limited by flight speed.
Nature can't do these things, since they require substantial non-incremental design changes. Mosquitoes won't simultaneously get plant-adapted needles, biological machinery to sort incoming proteins and cellular contents, and continuous grow/split reproduction that would allow a small starting population to eat a forest in a day. Nature can't design the virus to do post-infection shoggoth construction either.
The only thing that even re-uses existing cellular machinery is viruses and that's because they operate on much faster evolutionary time scales than their victims. Evolution takes so long that winning strategies to eat or subvert existing populations of organisms are self-limiting. The first thing to sort of work wipes out the population and then something else not vulnerable fills the niche.
Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can't flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.
Endgame biotech (IE: can design new proteins/DNA/organisms) is very powerful.
But that doesn't mean dry nanotech is useless.
- Even if production is expensive, it may be worth building some things that way anyway:
  - computers
  - structural components
    - Biology is largely stuck with ~0.15 GPa materials (collagen, cellulose, chitin).
    - Oriented UHMWPE should be wet-synthesizable (6 GPa tensile strength).
    - Graphene/diamondoid may be worth it in some places to hit 30 GPa (e.g. for things that fly or go to space).
- Dry nanotech won't be vulnerable to parasites that can infect a biological system.
  - Even if the AI has to deal with single-day doubling times, that's still enough to cover the planet in roughly a month (see the doubling-time sketch after this list).
  - But with the right design, parasites really shouldn't be a problem.
    - Biological parasite defenses are not optimal.
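For scale, a minimal doubling-time sketch. The starting mass and the biosphere-scale target below are placeholder assumptions, not figures from the comment; with these numbers the answer comes out at several weeks, the same order as the claim above.

```python
import math

# Doubling-time arithmetic for the "cover the planet in a month" claim above.
# Assumed values: 1 kg starting mass and a biosphere-scale target (~550 Gt of
# carbon, i.e. ~5.5e14 kg); both are illustrative placeholders.
doubling_time_days = 1.0
start_mass_kg = 1.0
target_mass_kg = 5.5e14

doublings = math.log2(target_mass_kg / start_mass_kg)
print(f"doublings needed: ~{doublings:.0f}")
print(f"time required:    ~{doublings * doubling_time_days:.0f} days")
```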
↑ comment by Ponder Stibbons · 2023-04-18T13:26:09.372Z · LW(p) · GW(p)
“Design is much more powerful than evolution since individually useless parts can be developed to create a much more effective whole. Evolution can't flip the retina or reroute the recurrent laryngeal nerve even though those would be easy changes a human engineer could make.”
But directed evolution of a polymeric macromolecule (E.g. repurposing an existing enzyme to process a new substrate) is so much easier practically speaking than designing and making a bespoke macromolecule to do the same job. Synthesis and testing of many evolutionary candidates is quick and easy, so many design/make/test cycles can be run quickly. This is what is happening at the forefront of the artificial enzyme field.
Replies from: obserience↑ comment by anithite (obserience) · 2023-04-18T16:38:18.251Z · LW(p) · GW(p)
Yes, designing proteins or RNAzymes or whatever is hard. Immense solution space and difficult physics. Trial and error, or physically implemented genetic algorithms, work well and may be optimal (e.g. provide a fitness incentive to bacteria that succeed: can you metabolize lactose?).
Major flaw in evolution:
- Nature does not assign credit for instrumental value:
  - Assume an enzymatic pathway is needed to perform N steps.
  - All steps must be performed for any benefit to occur.
  - The difficulty of solving each step is a constant C.
  - Evolution has to do O(C^N) work to solve the problem,
    - with an additional small constant-factor improvement from horizontal gene transfer and cooperative solution finding (e.g. bacterial symbiosis).
  - An intelligent agent can solve each step individually for O(C*N) (linear) work (see the toy simulation after this list).
  - This also applies to any combination of structural and biochemical changes.
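A toy simulation of that credit-assignment gap (a sketch only; C and N are arbitrary small values, and this is not meant as a realistic model of evolution):

```python
import random

# Toy model of the credit-assignment gap above: each of N pathway steps is
# solved by guessing the right variant out of C options. Blind search only gets
# credit when all N are right at once (~C**N expected trials); a designer can
# fix one step at a time (~C*N trials).
random.seed(0)
C, N = 8, 4

def blind_search() -> int:
    trials = 0
    while True:
        trials += 1
        if all(random.randrange(C) == 0 for _ in range(N)):
            return trials

def stepwise_search() -> int:
    trials = 0
    for _ in range(N):
        while True:
            trials += 1
            if random.randrange(C) == 0:
                break
    return trials

runs = 20
print("blind (evolution-like):", sum(blind_search() for _ in range(runs)) / runs)     # ~C**N = 4096
print("stepwise (design-like):", sum(stepwise_search() for _ in range(runs)) / runs)  # ~C*N = 32
```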
Also, nature's design language may not be optimal for expressing useful design changes concisely. Biological state machines are hard to change in ways that carry through neatly to the final organism. This shows up in various small ways in organism design. Larger changes don't happen even though they'd be very favorable (e.g. the retina flip would substantially improve low-light vision; it very much did in image sensors), and less valuable changes that neither happen nor vary much over evolutionary time suggest something is in the way. If nature could easily make plumbing changes, organisms wouldn't all have similar topology (i.e. they wouldn't just be warped copies of something else). New part introduction and old part elimination can happen, but it's not quick or clean.
Nature has no mechanisms for making changes at higher levels of abstraction. It can change one part of the DNA string but not "all the start codons at once and the ribosome start codon recognition domain". Each individual genetic change is an independent discovery.
Working in these domains of abstraction reduces the dimensionality of the problem immensely and other such abstractions can be used to further constrain solution space cheaply.
comment by PeterMcCluskey · 2023-04-17T17:31:19.960Z · LW(p) · GW(p)
I'm puzzled that this post is being upvoted. The author does not sound familiar with Drexler's arguments in NanoSystems.
I don't think we should worry much about how nanotech might affect an AI's abilities, but this post does not seem helpful.
Replies from: thomas-kwa↑ comment by Thomas Kwa (thomas-kwa) · 2023-04-18T22:03:17.555Z · LW(p) · GW(p)
I agree and expanded on this in a comment [LW(p) · GW(p)].
Replies from: JenniferRM↑ comment by JenniferRM · 2023-04-21T18:30:58.868Z · LW(p) · GW(p)
Voting is, of necessity, pleiotropically optimized. It loops into reward structures for author motivation, but it also regulates position within default reading suggestion hierarchies for readers seeking educational material, and it also potentially connects to a sense that the content is "agreed to" in some sort of tribal sense.
If someone says something very "important if true and maybe true" that's one possible reason to push the content "UP into attention" rather than DOWN.
Another "attentional" reason might be if some content says "the first wrong idea that occurs to nearly everyone, which also has a high quality rebuttal cleanly and saliently attached to it".
That is, upvotes can and maybe should flow certain places for reasons of active value-of-information and/or pedagogy [? · GW]. Probably there are other reasons, as well! 😉
A) As high-quality highly-upvoted rebuttals like Mr Kwa's have arrived, I've personally been thinking that maybe I should reverse my initial downvote, which would make this jump even higher. I'm a very unusual voter, but I've explained my (tentative) theories of upvoting once or twice, and some people might have started to copy me.
B) I could imagine some voters were hoping (as I might if I thought about it some more and changed my mind on what my voting policy should be in very small ways) to somehow inspire some good rebuttals, by pre-emptively upvoting things in high VoI areas where LW simply hasn't had much discussion lately?
C) An alternative explanation is of course that a lot of LW voters haven't actually looked at nanotech very much, and don't have good independent object level takes, and just agreed with the OP because they don't know any better and it seemed plausible and well written. (This seems the most likely to me, fwiw.)
D) Another possibility is, of course, that there are a lot of object level agreement voters on LW and also all three of us are wrong about how nano could or would "really just work" if the best research directions got enough high-talent interest and supportive funding. I doubt this one... but it feels wise to include an anti-hubristic hypothesis when enumerating possibilities 😇
comment by Roko · 2023-04-21T14:37:46.036Z · LW(p) · GW(p)
Organic Life is Unlikely
(list of reasons why any kind of organic life ought to be impossible, which must to some extent actually be correct because the Fermi Observation shows that it is extremely rare)
I don't really think this approach of listing a bunch of problems is a way to get a high level of certainty about this. In a certain sense, you should treat this like a math problem and insist on a formal proof that nanotech is impossible starting from the Schrodinger Equation. And of course, such a proof would have the very difficult task of ruling out nanotech without ruling out actual bacteria.
comment by gilch · 2023-04-17T04:42:10.124Z · LW(p) · GW(p)
self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior
I think Wet Nanotech might qualify then.
Consider a minor modification to a natural microbe: a different genetic code. I.e., a codon still codes for an amino acid, but which corresponds to which could differ. (This correspondence is universal in natural life, with a few small exceptions.) Such an organism would effectively be immune to all of the viruses that would affect its natural counterpart, and no horizontal gene transfer to natural life would be possible.
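A toy picture of the "same chemistry, different codebook" idea (the mapping below is arbitrary and covers only a tiny subset of the real 64-codon table):

```python
# Toy picture of a permuted genetic code (arbitrary example mapping). A viral
# gene written against the standard table gets mistranslated by a host that
# uses the permuted one.
standard = {"ATG": "Met", "TTT": "Phe", "GGC": "Gly", "TAA": "STOP"}
permuted = {"ATG": "Gly", "TTT": "Met", "GGC": "STOP", "TAA": "Phe"}  # same codons, shuffled meanings

viral_gene = ["ATG", "TTT", "GGC", "TAA"]
print("read with the standard code:", [standard[c] for c in viral_gene])
print("read with the permuted code:", [permuted[c] for c in viral_gene])
```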
One could also imagine further modifications. Greater resistance to mutations, perhaps using a more stable XNA and more repair genes. More types of amino acids. Reversed chirality of various biomolecules as compared to natural life, etc. Such an organism (with the appropriate enzymes) could digest natural life, but not the reverse.
There's nothing here that seems fundamentally incompatible with our understanding of biochemistry, but with enough of these changes, such an organism might then become an invasive species with a massive competitive advantage over natural life, ultimately resulting in an ecophagy scenario.
Replies from: bhauth, avturchin↑ comment by bhauth · 2023-04-17T04:53:17.207Z · LW(p) · GW(p)
That has already happened naturally and also already been done artificially.
See this paper for reasons why codons are almost universal.
Replies from: JenniferRM↑ comment by JenniferRM · 2023-04-17T17:20:15.660Z · LW(p) · GW(p)
That third link seems to be full of woo.
Where was the optimization pressure for better designs supposed to have arisen in the "communal" phase?
Thus, we may speculate that the emergence of life should best be viewed in three phases, distinguished by the nature of their evolutionary dynamics. In the first phase, treated in the present article, life was very robust to ambiguity, but there was no fully unified innovation-sharing protocol. The ambiguity in this stage led inexorably to a dynamic from which a universal and optimized innovation-sharing protocol emerged, through a cooperative mechanism. In the second phase, the community rapidly developed complexity through the frictionless exchange of novelty enabled by the genetic code, a dynamic we recognize to be patently Lamarckian (19). With the increasing level of complexity there arose necessarily a lower tolerance of ambiguity, leading finally to a transition to a state wherein communal dynamics had to be suppressed and refinement superseded innovation. This Darwinian transition led to the third phase, which was dominated by vertical descent and characterized by the slow and tempered accumulation of complexity.
They claim that universal horizontal gene transfer (HGT) arose through a "cooperative" mechanism, without saying what that would have looked like at the level of cells, or at the level of some kind of soupy boundary-free chemostat, or something?
They don't seem to be aware of compensatory mutations or quasi-species or that horizontal transfer is parasitic by default [LW · GW].
Like: where did the new "sloppy but still useful" alleles come from? Why (and how) would any local part of "the communal system" spend energy to generate such things or pass them along starting literally from scratch? This sort of meta-evolutionary cleverness usually requires a long time to arise!
The thing that should be possible (not easy, but possible) only now, with technology, is to invent some new amino acids (leveraging what exists in the biosphere now instead of what was randomly available billions of years ago) AND a new codon system for them, and to bootstrap from there, via directed evolution, towards some kind of cohesively viable neolife that (if it turns out to locate a better local optimum than we did) might voraciously consume the current ecology.
[Image omitted: Figure 2 of Schulze's "Expanding the genetic code", captioned "Examples of genetically encoded noncanonical amino acids with novel functions".]
Compensatory mutation is actually a pretty interesting and key concept, because it suggests a method by which one might prevent a gray goo scenario on purpose, rather than via "mere finger crossing" over limitations that we hope it will be lucky enough to run into by accident.
We could run evolution in silico at the protein level, with large conceptual jumps, and then print something unlikely to be able to evolve.
The "unable to easily evolve" thing might be similar to human telomeres, but more robust.
It could make every generation of neolife almost entirely a degeneration away from "edenic neolife" and toward mutational meltdown.
Note that this is essentially an "alignment" or "corrigibility" strategy, but at the level of chemistry and molecular biology, where the hardware is much much easier to reason about, in comparison to the "software" of "planning and optimization processes themselves".
If you could cause there to be only a 1 in septillion chance of positive or compensatory mutations on purpose (knowing the mechanisms and math to calculate this risk) and put several fully independent booby traps into the system that will fail after a handful of mutations, then you could have the first X generations "eat and double very very efficiently" and then have the colony switch to "doing the task" for Y generations, and then maybe as meltdown became inevitable within ~Z more generations they could, perhaps, actively prepare for recycling?
I can at least IMAGINE this for genomes, because genomes are mostly not Turing Complete.
I know of nothing similar that could be used to make AI with survive-and-spread powers similarly intrinsically safe.
Replies from: bhauth↑ comment by bhauth · 2023-04-17T18:02:47.721Z · LW(p) · GW(p)
You're misunderstanding the point of those proposed amino acids. They're proposals for things to be made by (at least partly) non-enzymatic lab-style chemical processes, processed into proteins by ribosomes, and then used for non-cell purposes. Trying to use azides (!) or photocrosslinkers (?) in amino acids isn't going to make cells work better.
There really isn't much improvement to be had by using different amino acids.
Replies from: JenniferRM↑ comment by JenniferRM · 2023-04-17T18:30:54.054Z · LW(p) · GW(p)
The new amino acids might be "essential" (not manufacturable internally) and have to come in as "vitamins", potentially. This is another possible way to prevent gray goo on purpose, but hypothetically it might be possible to move that synthesis into the genome of neolife itself, if that were cheap and safe. These seem like engineering considerations that could change from project to project.
Mostly I have two fundamental points:
1) Existing life is not necessarily bio-chemically optimal because it currently exists within circumscribed bounds that can be transgressed. Those amino acids are weird and cool and might be helpful for something. Only one amino acid (and not even any of those... just anything) has to work to give "neo-life" some kind of durable competitive advantage over normal life.
2) All designs have to come from somewhere, with the optimization pressure supplied by some source, and it is not safe or wise to rely on random "naturally given" limits in the powers of systems that contain an internal open-ended optimization engine. When trying to do safety engineering, and trying to reconcile inherent safety with the design of something involving autonomous (potentially exponential) growth, either (1) just don't do it, or else (2) add multiple well-tested purposeful independent default shutdown mechanisms. If you're "doing it" then look at all your safety mechanisms in a fault tree analysis and if the chance of an error is 1/N then make sure there will definitely not be anything vaguely close to N opportunities for a catastrophe to occur.
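A quick illustration of that fault-tree arithmetic (N and the k values below are arbitrary):

```python
# Fault-tree arithmetic for the "1/N error chance vs. N opportunities" point:
# with k independent opportunities, each failing with probability 1/N,
# P(at least one failure) = 1 - (1 - 1/N)**k, which climbs toward ~63% as k -> N.
N = 1_000_000
for k in (N // 100, N // 10, N):
    p_any_failure = 1 - (1 - 1 / N) ** k
    print(f"k = {k:>9,}: P(at least one failure) ~ {p_any_failure:.2f}")
```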
↑ comment by avturchin · 2023-04-17T10:12:40.124Z · LW(p) · GW(p)
I am also for Wet Nanotech. But a different genetic code is not needed, or at least it's not the important thing.
The main thing is to put a Turing-complete computer inside a living cell similar to E. coli and to create a way for two-way communication with an external computer. Such a computer should be genetically encoded, so that when the cell replicates, the computer also replicates. The computer has to be able to get data from sensors inside the cell and to output some proteins.
Building such Wet Nanotech is orders of magnitude simpler than real nanotech.
The main obstacle for AI is the need to perform real-world experiments. In the classical EY paradigm, the first AI is so superintelligent that it does not need to perform any experiments, as it can guess everything about the real world and will get everything right on the first attempt. But if the first AI is still limited by the amount of available compute, by its intelligence, or by some critical data, it has to run tests.
Running experiments takes longer, and the AI is more likely to be caught in its first attempts. This will slow its ascent and may force it to choose paths where it cooperates with humans for longer periods of time.
Replies from: bhauth↑ comment by bhauth · 2023-04-17T17:46:51.811Z · LW(p) · GW(p)
You want to grow brains that work more like CPUs. The computational paradigm used by CPUs is used because it's conceptually easy to program, but it has some problems. Error tolerance is very poor; CPUs can be crashed by a single bit-flip from cosmic rays. CPUs also have less computational capacity than GPUs. Brains work more like...neural networks; perhaps that's where the name came from.
Replies from: avturchin↑ comment by avturchin · 2023-04-17T19:36:33.842Z · LW(p) · GW(p)
No, I didn't mean brains. I meant digital computers inside the cell, but ones that can use all the usual error-correction methods, including parallelism.
Replies from: MakoYass, Gunnar_Zarncke↑ comment by mako yass (MakoYass) · 2023-04-18T00:28:22.137Z · LW(p) · GW(p)
Have you heard of the Arc protein? It's conceivable that it's responsible for transmitting digital information in the brain; if that were useful, it would be doing that, so I'd expect to see computation too.
I sometimes wonder if this is openworm's missing piece. But it's not my field.
Replies from: JenniferRM↑ comment by JenniferRM · 2023-04-21T14:26:31.532Z · LW(p) · GW(p)
That is so freakin' cool. Thank you for this link. Hadn't heard about this yet...
...and yes, memory consolidation is on my list as "very important" for uploading people, to get a result where the ems are still definitely "full people" (with all the features that give presumptive confidence of being "sufficient for personhood", because the list has been constructed in a minimalist way, such that if the absence of one of those features "broke personhood", then not even normal healthy humans would be "people").
↑ comment by Gunnar_Zarncke · 2023-04-18T10:34:19.366Z · LW(p) · GW(p)
There is a new paper by Jeremy England that seems relevant:
Self-organized computation in the far-from-equilibrium cell
Recent progress in our understanding of the physics of self-organization in active matter has pointed to the possibility of spontaneous collective behaviors that effectively compute things about the patterns in the surrounding patterned environment.
comment by tailcalled · 2023-04-17T12:39:12.938Z · LW(p) · GW(p)
Just to be clear, a point which the post seems to take for granted, but which people not familiar with the topic might not think about, is:
Life is already selected for inclusive genetic fitness, so if nanobots do not unlock powerful capacities that life does not already have, then you cannot have a gray goo scenario because ordinary life will outcompete your nanobots for resources.
Replies from: Charlie Steiner↑ comment by Charlie Steiner · 2023-04-17T13:15:57.486Z · LW(p) · GW(p)
I dunno, I agree with the post but disagree that this is much of a safety factor for nanotech.
There are things that are easy for design that are impossible for evolution. For example, if you make a cyanobacterium with an alternate genetic code so that it's immune to all current viruses, it would outcompete unmodified cyanobacteria. But evolution is never going to change the entire genome all at once to capture this advantage.
Artificial life can probably do a lot of weird and powerful stuff even if the "diamondoid nanobot" picture is wrong.
Replies from: bhauth, tailcalled↑ comment by tailcalled · 2023-04-17T14:45:26.418Z · LW(p) · GW(p)
I might be wrong but I think the idea you have here of something with immunity to all current viruses would constitute a genuine counterargument to the OP? Possible I'm misunderstanding the scope of what OP is arguing about.
Replies from: steve2152, None, bhauth↑ comment by Steven Byrnes (steve2152) · 2023-04-17T17:00:04.727Z · LW(p) · GW(p)
OP said:
I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior".
I think that there are lots of plausible “invasive species from hell” scenarios where an organism is sufficiently edited so as to have no natural viruses (because its genome is weird) and no natural predators (because its sugars are weird or it has an exotic new toxin) and so on. They would still have ecological niches where they wouldn’t be able to thrive, and they would still presumably get predators and diseases eventually. But a lot of destruction could happen in the meantime, including collapsing critical ecosystems etc., and it could happen fast (years not decades, but also not weeks) if the organism is introduced in lots of places at once, I would assume.
Those scenarios are important, but they’re not “nanobots” by OP’s definition.
↑ comment by [deleted] · 2023-04-17T16:12:28.831Z · LW(p) · GW(p)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3133615/
Here's likely what Steiner is referencing. Take a genome and add 1 base to each codon, something you could do with a Python script that ChatGPT can write in 2 minutes.
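For concreteness, the sort of trivial sequence transformation being gestured at, as a sketch (the padding base is arbitrary; the actual hard part, a ribosome, tRNAs, and synthetases that read 4-base codons, is not addressed by any script):

```python
# Sketch of the "add 1 base to each codon" text transformation mentioned above.
# This only rewrites the sequence; the cell-side machinery that could actually
# read 4-base codons (ribosome, tRNAs, synthetases) is the real problem.
def expand_codons(seq: str, pad_base: str = "A") -> str:
    usable = len(seq) - len(seq) % 3
    codons = [seq[i:i + 3] for i in range(0, usable, 3)]
    return "".join(codon + pad_base for codon in codons)

gene = "ATGTTTGGCTAA"
print(expand_codons(gene))   # ATGATTTAGGCATAAA, i.e. ATGA TTTA GGCA TAAA
```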
But it's effectively impossible for nature to ever do this; evolution would likely hit the time limit (we only have about 1 billion years left on this star, and it took 3 billion years to get this far) before doing it even once.
The reason is that the computational mechanism to do this is complex and single-use, with no evolutionary pressure vector pointing towards it. It will never be found by evolution.
Life based on 4-base codons could be superior to all existing life, because it can access a 4-times-larger library of possible protein components and it automatically becomes immune to all viruses (until new viruses evolve).
Life modified so that it outcompetes existing life is not grey goo; it's "green goo", a totally different scenario. Green goo will also be limited by energy and by barriers protecting existing life; for example, cellulose is hard to break down, and that may not be solvable. So the green goo might grow and outcompete life slowly, taking centuries to cover the planet.
comment by anithite (obserience) · 2023-04-17T18:11:46.028Z · LW(p) · GW(p)
OK, maybe you want to build some kind of mechanical computers too. Clearly, life doesn't require that for operation, but does that even work? Consider a mechanical computer indicating a position. It has some number, and the high bit corresponds to a large positional difference, which means you need a long lever, and then the force is too weak, so you'd need some mechanical amplifier. So that's a problem.
Drexler absolutely considered thermal noise. Rod logic uses rods at right angles whose positions allow or prevent movement of other rods. That's the amplification, since a small force moving one rod can control a larger force applied later to a blocked rod.
http://www.nanoindustries.com/nanojbl/NanoConProc/nanocon2.html#anchor84400
Replies from: Muireall, bhauth↑ comment by Muireall · 2023-04-18T00:06:15.940Z · LW(p) · GW(p)
Drexler's calculations concern the thermal excitation of vibrations in logic rods, not the thermal excitation of their translational motion. Plugging his own numbers for dissipation into the fluctuation-dissipation relation, a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7nm error threshold for his proposed design in Nanosystems.
That dissipation is already at the limit (from Akhiezer damping) of what defect-free bulk diamond could theoretically achieve at the proposed frequency of operation even if somehow all thermoelastic damping, friction, and acoustic radiation could be engineered away. An assembly of non-bonded rods sliding against and colliding with one another ought to have something like 3 orders of magnitude worse noise and dissipation from fundamental processes alone, irrespective of clever engineering, as a lower bound. Assemblies like this in general, not just the nanomechanical computer, aren't going to operate with nanometer precision at room temperature.
Replies from: obserience, thomas-kwa↑ comment by anithite (obserience) · 2023-04-19T20:17:46.697Z · LW(p) · GW(p)
edit: This was uncharitable. Sorry about that.
This comment suggested not leaving rods to flop around if they were vibrating.
The real concern was that positive control of the rods to the needed precision was impossible as described below.
Replies from: Muireall↑ comment by Muireall · 2023-04-19T22:47:11.861Z · LW(p) · GW(p)
I've given it some thought, yes. Nanosystems proposes something like what you describe. During its motion, the rod is supposed to be confined to its trajectory by the drive mechanism, which, in response to deviations from the desired trajectory, rapidly applies forces much stronger than the net force accelerating the rod.
But the drive mechanism is also vibrating. That's why I mentioned the fluctuation-dissipation theorem—very informally, it doesn't matter what the drive mechanism looks like. You can calculate the noise forces based on the dissipation associated with the positional degree of freedom.
There's a second fundamental problem in positional uncertainty due to backaction from the drive mechanism. Very informally, if you want your confining potential to put your rod inside a range Δx with some response speed (bandwidth) Δf, then the fluctuations in the force obey ΔF ≳ ħΔf/Δx, from standard uncertainty principle arguments. But those fluctuations themselves impart positional noise. Getting the imprecision safely below the error threshold in the presence of thermal noise puts backaction in the range of thermal forces.
Replies from: obserience↑ comment by anithite (obserience) · 2023-04-20T05:57:51.285Z · LW(p) · GW(p)
Sorry for the previous comment. I misunderstood your original point.
My original understanding was that the fluctuation-dissipation relation connects lossy dynamic things (e.g. electrical resistance, viscous drag) to related thermal noise (Johnson-Nyquist noise, Brownian force). So Drexler has some figure for viscous damping (essentially) of a rod inside a guide channel, and this predicts some thermal spectral noise power density in W/Hz per meter of rod. That was what I thought initially, and it led to my first comment: if the rods are moving around, then just hold them in position, right?
This is true but incomplete.
But the drive mechanism is also vibrating. That's why I mentioned the fluctuation-dissipation theorem—very informally, it doesn't matter what the drive mechanism looks like. You can calculate the noise forces based on the dissipation associated with the positional degree of freedom.
You pointed out that a similar phenomenon exists in *whatever* controls linear position. Springs have associated damping coefficients, so the damping coefficient in the spring-extension DOF has associated thermal noise. In theory this can be zero, but some practical minimum exists, represented by e.g. "defect-free bulk diamond", which gives some minimum practical noise power per unit force.
Concretely, take a block of diamond and apply the max allowable compressive force. This is the lowest dissipation spring that can provide that much force. Real structures will be much worse.
Going back to the rod logic system: if I "drive" the rod by covalently bonding one end to the structure, will it actually move 0.7 nm? (The C-C bond length is ~0.15 nm; a linear spring model says the bond should break at +0.17 nm extension, given 350 kJ/mol and 40 N/m stiffness.) That *is* a way to control position... so if you're right, the rod should break the covalent bond. My intuition is that thermal energy doesn't usually do that.
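Those numbers check out as a back-of-envelope sketch, using the values assumed in this comment (350 kJ/mol, 40 N/m); the equipartition estimate of thermal displacement is added here for comparison:

```python
import math

# Back-of-envelope check of the covalent-anchor hypothetical, using the values
# assumed in this comment (~350 kJ/mol bond energy, ~40 N/m stiffness). The
# equipartition estimate of thermal displacement is added for comparison.
bond_energy_j = 350e3 / 6.022e23     # per bond
stiffness_n_per_m = 40.0
kT_j = 1.380649e-23 * 300            # room temperature

break_extension_m = math.sqrt(2 * bond_energy_j / stiffness_n_per_m)  # linear-spring break point
thermal_rms_m = math.sqrt(kT_j / stiffness_n_per_m)                   # equipartition: <x^2> = kT/k

print(f"extension at break (spring model): ~{break_extension_m * 1e9:.2f} nm")
print(f"thermal RMS displacement:          ~{thermal_rms_m * 1e9:.3f} nm")
```

On these assumptions, the thermal RMS displacement of a ~40 N/m anchor is about 0.01 nm, which supports the intuition that thermal energy alone doesn't stretch such a bond anywhere near its breaking point.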
What are the numbers you're using (bandwidth, stiffness, etc.)?
Does your math suggest that in the static case rods will vibrate out of position? Maybe I'm misunderstanding things.
During its motion, the rod is supposed to be confined to its trajectory by the drive mechanism, which, in response to deviations from the desired trajectory, rapidly applies forces much stronger than the net force accelerating the rod.
(Nanosystems, p. 344, fig. 12.2)
Having the text in front of me now, the rods supposedly have "alignment knobs" which limit range of motion. The drive springs don't have to define rod position to within the error threshold during motion.
The knob<-->channel contact could be much more rigid than the spring, depending on interatomic repulsion. That's a lot closer to the "covalently bond the rod to the structure" hypothetical suggested above. If the fluctuation-dissipation argument holds, the opposing force and stiffness will be on the order of bond stiffness/strength.
There's a second fundamental problem in positional uncertainty due to backaction from the drive mechanism. Very informally, if you want your confining potential to put your rod inside a range Δx with some response speed (bandwidth) Δf, then the fluctuations in the force obey ΔF ≳ ħΔf/Δx, from standard uncertainty principle arguments. But those fluctuations themselves impart positional noise. Getting the imprecision safely below the error threshold in the presence of thermal noise puts backaction in the range of thermal forces.
When I plug the hypothetical numbers into that equation (10 GHz, 0.7 nm), I get force deviations in the fN range (1.5e-15 N); that's six orders of magnitude below the nanonewton-range forces proposed for actuation. This should accommodate using the pessimistic "characteristic frequency of rod vibration" (10 THz) along with some narrowing of positional uncertainty.
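For reference, the arithmetic behind that figure, assuming the relation quoted above is ΔF ≈ ħ·Δf/Δx (the form that reproduces the quoted value):

```python
# Reproducing the ~1.5e-15 N figure quoted above, assuming the relation is
# roughly dF ~ hbar * bandwidth / dx (the form that gives the quoted value).
HBAR = 1.0546e-34    # J*s
bandwidth_hz = 10e9  # 10 GHz
dx_m = 0.7e-9        # 0.7 nm error threshold

force_n = HBAR * bandwidth_hz / dx_m
print(f"~{force_n:.1e} N  (femtonewton range, ~6 orders below nanonewton actuation forces)")
```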
That aside, these are atoms. The de Broglie wavelength for a single carbon atom at room temperature is ~0.04 nm, and we're dealing with many carbon atoms bonded together. Are quantum mechanical effects still significant?
If you're right, and the numbers are conservative, with real damping coefficients 3 OOM higher, forces would be 1.5 OOM higher, meaning covalent bonds hold things together much less well. This seems wrong; benzyl groups would then regularly fall off rigid molecules, for example. Perhaps the rods are especially rigid, leading to better coupling of thermal noise into the anchoring bond at lower atom counts?
Certainly, if Drexler's design misses by 3 orders of magnitude, rod logic would perform much less well.
Replies from: Muireall↑ comment by Muireall · 2023-04-20T22:36:43.721Z · LW(p) · GW(p)
No worries, my comment didn't give much to go on. I did say "a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7nm error threshold for his proposed design", which isn't true if the mechanism works as described. It might have been better to frame it as: you're in a bad situation when your thermal kinetic energy is on the order of the kinetic energy of the switching motion. There's no clean win to be had.
If the positional uncertainty was close to the error limit, can we just bump up the logic element size (2x, 3x, 10x)? I'd assume scaling things up by some factor would reduce the relative effects of thermal noise and uncertainty.
That's correct, although it increases power requirements and introduces low-frequency resonances to the logic elements.
Also, the expression (ΔF ≳ ħΔf/Δx) suggests the second concern might be clock rate?
In this design, the bandwidth requirement is set by how quickly a blocked rod will pass if the blocker fluctuates out of the way. If slowing the clock rate 10x includes reducing all forces by a factor of 100 to slow everything down proportionally, then yes, this lets you average away backaction noise while permitting more thermal motion. If you keep making everything both larger and slower, it will eventually work, yes. Will it be competitive with field-effect transistors? Practically, I doubt it, but it's harder to find in-principle arguments at that level.
That noted, in this design, (I think) a blocked rod is tensioned with ~10x the switching drive force, so you'd want the response time of the restoring force to be ~10 ps. If your Δx is the same as the error threshold, then you're admitting error rates of order one. Using (100 GHz, 0.07 nm [Drexler seems to claim 0.02 nm in 12.3.7b]), the quantum-limited force noise spectral density is a few times less than the thermal force noise related to the claimed drag on the 1 GHz cycle.
What I'm saying isn't that the numbers in Nanosystems don't keep the rod in place. These noise forces are connected with displacement noise by the stiffness of the mechanism, as you observe. What I'm saying is that these numbers are so close to quantum limits that they can't be right, or even within a couple of orders of magnitude of right. As you say, quantum effects shouldn't be relevant. By the same token, noise and dissipation should be far above quantum limits.
Replies from: obserience↑ comment by anithite (obserience) · 2023-04-21T00:25:23.897Z · LW(p) · GW(p)
Yeah, transistor-based designs also look promising. Insulation on the order of 2-3 nm suffices to prevent tunneling leakage, and speeds are faster. Promises of quasi-reversibility, low power, and the absurdly low element size made rod logic appealing if feasible. I'll settle for clock speeds a factor of 100 higher, even if you can't fit a microcontroller in a microbe.
My instinct is to look for low-hanging design optimizations to salvage performance (e.g. drive system changes to make forces on rods at end of travel and on blocked rods equal, reducing the speed of errors and removing most of that 10x penalty). Maybe enough of those can cut the required scale-up to the point where it's competitive in some areas with transistors.
But we won't know any of this for sure unless it's built. If thermal noise is 3 OOM worse than Drexler's figures, it's all pointless anyway.
I remain skeptical that the system will move significant fractions of a bond length if a rod is held by a potential well formed by interatomic repulsion on one of the "alignment knobs" plus a mostly constant drive spring force. Stiffness and max force should be perhaps half that of a C-C bond, and the energy required to move the rod out of position would be 2-3x that needed to break a C-C bond, since the spring can keep applying force over the error threshold distance. Alternatively, the system *is* built that aggressively, such that thermal noise is enough to break things in normal operation, which is a big point against.
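A rough check of that energy comparison (a sketch; the ~5 nN C-C rupture force and ~3.6 eV bond energy are assumed ballpark values, and 0.7 nm is the error threshold discussed above):

```python
# Energy needed to drag a rod out of its well against a roughly constant restoring force,
# compared to a C-C bond energy. All inputs are rough, illustrative values.
eV = 1.602176634e-19           # J
f_cc_rupture = 5e-9            # N, assumed ballpark C-C bond rupture force
f_restoring = f_cc_rupture / 2 # N, "perhaps half that of a C-C bond"
travel = 0.7e-9                # m, error threshold distance

work = f_restoring * travel    # J, roughly constant force applied over the full travel
cc_bond_energy = 3.6 * eV      # J, approximate C-C bond dissociation energy

print(f"work to displace rod ~ {work / eV:.1f} eV")               # ~11 eV
print(f"ratio to C-C bond energy ~ {work / cc_bond_energy:.1f}x")  # ~3x
```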
Replies from: Muireall↑ comment by Thomas Kwa (thomas-kwa) · 2023-04-19T19:12:19.607Z · LW(p) · GW(p)
I'm not sure how to evaluate this, so I made a Manifold market for it. I'd be excited for you to help me edit the market if you endorse slightly different wording.
https://manifold.markets/ThomasKwa/does-thermal-noise-make-drexlerian
↑ comment by bhauth · 2023-04-17T18:22:34.145Z · LW(p) · GW(p)
Yes, you need some kind of switch for any mechanical computer. My point was that you need multiple mechanical "amplifiers" for each single positioner arm, that the energy usage of that would be substantial, and that if you have a binary mechanical switch controlling a relatively large movement, thermal noise will put it in an intermediate state a lot of the time, so the arm position will be off.
Replies from: obserience↑ comment by anithite (obserience) · 2023-04-17T18:58:36.763Z · LW(p) · GW(p)
That's not how computers work (neither the ones we have today nor the proposed rod logic ones). Each rod or wire represents a single on/off bit.
Yes, doing mechanosynthesis is more complicated, and precise sub-nm control of a tooltip may not be competitive with biology for self-replication. But if the AI wants a substrate to think on that can implement lots of FLOPs, then molecular rod logic will work.
For that matter, protein-based mechanical or hybrid electromechanical computers are plausible, likely with lower energy consumption per erased bit than neurons and certainly with more density. Human-built computers have nanometer-scale transistors. There's no reason to think that neurons and synapses are the most efficient sort of biological computer.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2023-04-17T20:55:18.443Z · LW(p) · GW(p)
There's no reason to think that neurons and synapses are the most efficient sort of biological computer.
Bio-neuron based brains are extremely efficient [LW · GW], and close to Pareto-optimal. We are near the end of Moore's law, and the viable open routes for forward progress in energy efficiency are essentially neuromorphic.
Replies from: obserience, DaemonicSigil↑ comment by anithite (obserience) · 2023-04-18T00:26:39.607Z · LW(p) · GW(p)
edit: continued partially in the original article [LW(p) · GW(p)]
That post makes a fundamental error about wiring energy efficiency by ignoring the ~8 OOM difference in electrical conductivity between neuron saltwater and copper (0.5 S/m vs 50 MS/m).
There's almost certainly a factor of 100 energy efficiency gains to be had by switching from saltwater to copper in the brain and reducing capacitance by thinning the wires. I'll be leaving a comment soon but that had to be said.
The agreement in energy/bit/(linear distance) points to an underlying principle of "if you've thinned the wires, why haven't you packed everything in tighter?", leading to similar capacitance, and therefore similar energy, per unit of linear distance.
Face-to-face die stacking results suggest that computers could be much more efficient if they weren't limited to 2D packing of logic elements. A second logic layer more than halved power consumption at the same performance, and that's with limited interconnect density between the two logic dies.
The Cu<-->saltwater conductivity difference leads to better utilisation of wiring capacitance to reduce thermal noise voltage at transistor gates. Concretely, there are more electrons able to effectively vote on the output voltage. For very short interconnects this matters less, but long-distance or high-fanout nodes have lots of capacitance, and low-resistance wires make the voltage much more stable.
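A minimal sketch of the kT/C intuition behind "more electrons voting" (the node capacitances here are made-up illustrative values):

```python
# Thermal (kT/C) noise voltage on a node: the more capacitance the driver can actually
# charge quickly (i.e. through low-resistance wiring), the more stable the voltage.
import math

k_B = 1.380649e-23  # J/K
T = 310.0           # K, roughly body temperature

def ktc_noise_voltage(capacitance_f):
    """RMS thermal noise voltage on a capacitance at temperature T."""
    return math.sqrt(k_B * T / capacitance_f)

for c in (1e-16, 1e-15, 1e-13):  # 0.1 fF, 1 fF, 100 fF (illustrative node capacitances)
    print(f"C = {c:.0e} F -> Vnoise ~ {ktc_noise_voltage(c) * 1e3:.2f} mV")
```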
Replies from: jacob_cannell↑ comment by jacob_cannell · 2023-04-18T00:30:16.210Z · LW(p) · GW(p)
Electrical conduction through "neuron saltwater" is not how neuronal interconnect works; it's electrochemical. You are simply mistaken, as copper interconnect wire energy limits and neuron wire energy efficiency limits are essentially the same, and both approach the theoretical Landauer minimum, as explained in the article.
↑ comment by DaemonicSigil · 2023-04-18T07:53:58.524Z · LW(p) · GW(p)
Mandatory footnote for this comment:
The Landauer limit puts the energy cost to erase a bit at about 0.02eV at room temperature. For comparison, the energy in a single photon of visible light is about 1eV. Already we can see that the brain is not going to get anywhere close to this. 1eV is a molecular energy scale, not a cellular one.
The brain requires about 20 Watts of power. Running this directly through the Landauer limit, we get roughly 7*10^21 bits erased per second. For comparison, the number of synapses is about 2*10^14 (pulled from jacob_cannell's post linked above) and this gives about 4 MB of data erased per synapse per second. This is not a reasonable number! It's justified in the post by assuming that we're banned from using regular digital logic to implement binary arithmetic and are instead forced into using heaps of "counters" where the size of the heap is the number you're representing, and this comes along with shot noise, of course.
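For what it's worth, here's that arithmetic spelled out (a rough sketch, assuming T ≈ 310 K and the synapse count quoted above):

```python
# Landauer-rate check: how many bit erasures per second would 20 W buy at the
# ideal kT*ln(2) cost, and what does that imply per synapse?
import math

k_B = 1.380649e-23   # J/K
T = 310.0            # K
brain_power = 20.0   # W
synapses = 2e14      # from the post under discussion

landauer_j_per_bit = k_B * T * math.log(2)     # ~3e-21 J per erased bit
bits_per_s = brain_power / landauer_j_per_bit  # ~7e21 erasures/s
per_synapse_bytes = bits_per_s / synapses / 8  # ~4e6 bytes/s per synapse

print(f"{bits_per_s:.1e} bit erasures/s, ~{per_synapse_bytes / 1e6:.1f} MB/s per synapse")
```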
The section on "interconnect" similarly assumes that we're forced to dissipate a certain amount of energy per bit transferred per unit length of interconnection. We're banned from using superconducting interconnect, or any other creative solution here. Also, if we could shrink everything, the required length of interconnect would be shorter, but the post just does the calculation for things being normal brain size.
I'd further argue that, even if interconnect requirements are as a matter of engineering practicality close to the limits of what we can build, we should not confuse that with being "close to the thermodynamic limits". Moving a bit from here to there should have no thermodynamic cost, and if we can't manage it except by dissipating a huge amount of energy, then that's a fact about our engineering skills, not a fact about the amount of computation the brain is doing.
In short, if you assume that you have to do things the way the brain does them, then the brain is somewhat close to "thermodynamic limits", but without those assumptions it's nowhere near the actual Landauer limit.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2023-04-18T16:46:51.758Z · LW(p) · GW(p)
The Landauer limit puts the energy cost to erase a bit at about 0.02eV at room temperature.
No it does not - that is one of many common layman misunderstandings, which the article corrects. The practical Landauer limit (for fast reliable erasures) is closer to 1eV.
It's justified in the post by assuming that we're banned from using regular digital logic to implement binary arithmetic and are instead forced into using heaps of "counters"
Digital multipliers use similar or more energy for a low-precision multiply but are far larger, as discussed in the article with numerous links to the research literature. (And most upcoming advanced designs for approaching brain energy efficiency use analog multipliers - as in memristor crossbar designs.)
The section on "interconnect" similarly assumes that we're forced to dissipate a certain amount of energy per bit transferred per unit length of interconnection.
That is indeed how conventional computing works.
Also, if we could shrink everything, the required length of interconnect would be shorter, but the post just does the calculation for things being normal brain size.
You obviously didn't read the post as indeed it discusses this - see the section on size and temperature.
Moving a bit from here to there should have no thermodynamic cost, and if we can't manage it except by dissipating a huge amount of energy, then that's a fact about our engineering skills,
As discussed in the post - you absolutely can move bits without dissipating much energy using reversible interconnect (ie optics), but this does not come without enormous fundamental disadvantages in size.
Replies from: DaemonicSigil, obserience↑ comment by DaemonicSigil · 2023-04-18T18:39:01.040Z · LW(p) · GW(p)
No it does not - that is one of many common layman misunderstandings, which the article corrects. The practical Landauer limit (for fast reliable erasures) is closer to 1eV.
So this is how the 1eV value is derived, right? Start with a bit that we want to erase. Set things up so there's an energy gap of ΔE between the 0 state and the 1 state. Then couple to the environment, and wait for some length of time, so the probability that the bit has a value of 0 becomes:
P(0) = 1 / (1 + e^(−ΔE/kT))
This is the probability of successful erasure, and if we want to get a really high probability, we need to set ΔE ≈ 40kT or something like that.
But instead imagine that we're trying to erase 100 bits all at once. Now we set things up so that the bit strings that aren't all zeros have an energy of ΔE and the all-zeros bit string has an energy of 0. Now if we couple to the environment, we get the following probability of successful erasure of all the bits:
P(success) = 1 / (1 + (2^100 − 1)·e^(−ΔE/kT))
This is approximately equal to:
P(success) ≈ 1 − 2^100·e^(−ΔE/kT)
Now, to make the probability of successful erasure really high, we can pick:
ΔE = 100·kT·ln(2) + 40kT
The 100·kT·ln(2) is there to cancel the 2^100 in the exponent. This is just the familiar Landauer limit. And the 40kT is there to make sure that we get the same level of reliability as before. But now that 40kT is amortized over 100 bits, so the extra reliability cost per bit is much less. So if I'm not wrong, the theoretical limit per bit should still be kT·ln(2).
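A quick numerical check of that amortization argument (a sketch; N = 100 bits with the ~40kT reliability margin used above):

```python
import math

# Erase N bits at once: the all-zeros string sits at energy 0, every other string at dE.
# Choosing dE = N*kT*ln(2) + 40*kT gives the same failure probability as a single-bit
# erasure with a 40*kT gap, so the reliability margin is amortized over all N bits.
N = 100
margin = 40.0                          # reliability margin, in units of kT

dE = N * math.log(2) + margin          # total energy gap, in units of kT
p_fail = (2**N - 1) * math.exp(-dE)    # ~ e^-40, same as the single-bit case
per_bit = dE / N                       # ~ ln(2) + 0.4, i.e. ~1.1 kT per bit

print(f"total gap = {dE:.1f} kT, failure probability ~ {p_fail:.1e}")
print(f"energy per bit = {per_bit:.2f} kT, vs ln(2) = {math.log(2):.2f} kT")
```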
Replies from: jacob_cannell, Muireall↑ comment by jacob_cannell · 2023-04-18T18:48:55.145Z · LW(p) · GW(p)
So this is how the 1eV value is derived, right?
The article has links to the 3 good sources (Landauer, Zhirnov, Frank) for this derivation. I don't have time to analyze your math in detail but I suspect you are starting with the wrong setup - you need a minimal energy well to represent a bit stably against noise at all, and you pay that price for each bit, otherwise it isn't actually a bit.
My prior that you find an error in the physics lit here is extremely low - this is pretty well established at this point.
Replies from: DaemonicSigil↑ comment by DaemonicSigil · 2023-04-20T06:42:25.080Z · LW(p) · GW(p)
I've taken a look at Michael P. Frank's paper and it doesn't seem like I've found an error in the physics lit. Also, I still 100% endorse my comment above: The physics is correct.
So your priors check out, but how can both be true?
you need a minimal energy well to represent a bit stably against noise at all, and you pay that price for each bit, otherwise it isn't actually a bit.
To use the terminology in Frank, this is E_sig you're talking about. My analysis above applies to E_diss. Now in section 2 of Frank's paper, he says:
With this particular mechanism, we see that E_diss = E_sig; later, we will see that in other mechanisms, E_diss can be made much less than E_sig.
The formula E_diss = E_sig shows up in section 2, before Frank moves on to talking about reversible computing. In section 3, he gives adiabatic switching as an example of a case where E_diss can be made much smaller than E_sig. (Though other mechanisms are also possible.) About midway through section 4, Frank uses the standard kT·ln(2) value, since he's no longer discussing the restricted case where E_diss = E_sig.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2023-04-24T22:28:09.019Z · LW(p) · GW(p)
Adiabatic computing is a form of partial reversible computing.
↑ comment by Muireall · 2023-04-18T21:47:41.123Z · LW(p) · GW(p)
If you can only erase bits 100 at a time, you don't really have 100 bits, do you?
Now we set things up so that the bit strings that aren't all zeros have an energy of ΔE and the all-zeros bit string has an energy of 0.
Now your thermal state just equalizes probabilities across those nonzero bit strings.
↑ comment by anithite (obserience) · 2023-04-18T18:17:22.111Z · LW(p) · GW(p)
You obviously didn't read the post as indeed it discusses this - see the section on size and temperature.
That point (compute energy/system surface area) assumes we can't drop clock speed. If cooling were the binding constraint, we could drop clock speed and then reap gains in efficiency from miniaturization.
Heat dissipation scales linearly with size for a constant ΔT. Shrink a device by a factor of ten and the driving thermal gradient increases in steepness by ten, while the cross-sectional area of the material conducting that heat goes down by 100x. So if thermals are the constraint, then scaling linear dimensions down by 10x requires reducing power by 10x or switching to some exotic cooling solution (which may be limited in the improvement OOMs achievable).
But if we assume constant energy per bit per unit of wire length, reducing wire length by 10x cuts power consumption by 10x. Power only goes back up if you want to increase clock speed by 10x (since propagation velocity is unchanged and signals travel less distance). In fact, wire thinning to reduce propagation speed gets you a small amount of added power savings.
All that assumes the logic will shrink, which is not a given.
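A toy version of the scaling argument above (a sketch; the factors are the idealized ones from this thread, assuming fixed clock rate, fixed ΔT, and constant energy per bit per unit of wire length):

```python
# Idealized scaling when all linear dimensions shrink by a factor s.
def shrink_scaling(s):
    interconnect_power = 1 / s          # wire lengths (and so switching energy) drop by s
    conductive_cooling = 1 / s          # conduction area drops s^2, gradient steepens by s
    surface_power_density = (1 / s) / (1 / s**2)  # power / surface area = s
    return interconnect_power, conductive_cooling, surface_power_density

power, cooling, density = shrink_scaling(10)
print(f"10x shrink: power x{power:.2f}, conductive cooling x{cooling:.2f}, "
      f"surface power density x{density:.0f}")
# Power and conductive cooling capacity both fall ~10x (they stay balanced at constant
# clock rate), but power per unit of surface area rises ~10x, which is where the
# cooling pressure comes from.
```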
Added points regarding cooling improvements:
- brain power density of 20mW/cc is quite low.
- ΔT is pretty small (single digit °C)
- switching to temperature-tolerant materials for higher ΔT gives 1-1.5 OOM
- phase-change cooling gives another 1 OOM
- increasing pump power/coolant volume is the biggie, since even a few MPa is doable without being counterproductive or increasing the power budget much (2-3 OOM)
- even if cooling is the hard binding constraint, if interconnect density increases, we can downsize a bit less and devote more volume to cooling.
↑ comment by jacob_cannell · 2023-04-18T18:43:13.961Z · LW(p) · GW(p)
The brain is already at minimal viable clock rate.
Your comment now seems largely in agreement: reducing wire length 10x cuts interconnect power consumption by 10x but surface area decreases 100x so surface power density increases 10x. That would result in a 3x increase in temp/cooling demands which is completely unviable for a bio brain constrained to room temp and already using active liquid cooling and the entire surface of the skin as a radiator.
Digital computers of course can - and do - go much denser/hotter, but that ends up ultimately costing more energy for cooling.
So anyway the conclusion of that section was:
Conclusion: The brain is perhaps 1 to 2 OOM larger than the physical limits for a computer of equivalent power, but is constrained to its somewhat larger than minimal size due in part to thermodynamic cooling considerations.
Replies from: obserience
↑ comment by anithite (obserience) · 2023-04-18T19:03:29.976Z · LW(p) · GW(p)
What sets the minimal clock rate? Increasing wire resistance and reducing the number of ion channels and pumps proportionally should just work. (ignoring leakage).
It is certainly tempting to run at higher clock speeds (serial thinking speed is a nice feature), but if miniaturization can be done and clock speeds must then be limited for thermal reasons, why can't we just do that?
That aside, is miniaturization out of the question (i.e. logic won't shrink)? Is there a lower limit on the number of charge carriers for synapses to work?
Synapses are around 1µm³ which seems big enough to shrink down a bit without weird quantum effects ruining everything. Humans have certainly made smaller transistors or memristors for that matter. Perhaps some of the learning functionality needs to be stripped but we do inference on models all the time without any continuous learning and that's still quite useful.
Replies from: bhauth↑ comment by bhauth · 2023-04-18T19:09:59.869Z · LW(p) · GW(p)
Signal propagation is faster in larger axons.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2023-04-18T19:37:03.167Z · LW(p) · GW(p)
What sets the minimal clock rate?
Evolutionary arms races: ie the need to think quickly to avoid becoming prey, think fast enough to catch prey, etc.
That aside, is miniaturization out of the question (IE:logic won't shrink)? Is there a lower limit on number of charge carriers for synapses to work?
The prime overall size constraint may be surface/volume ratios and temperature, as we already discussed, but yes, synapses are already pretty minimal for what they do (they are analog multipliers and storage devices).
Synapses are equivalent to entire multipliers + storage devices + some extra functions, far more than transistors.
Replies from: bhauth
comment by Shmi (shminux) · 2023-04-17T03:20:14.036Z · LW(p) · GW(p)
I don't think this is the right mind frame, thinking about how something specific appears too hard or even infeasible. A better frame is "say, you are given $100B/year, can hire the best people in the world, and have 10 years to come up with viable self-replicating nanobots, or else we all die, how would you go about it?"
Replies from: bhauth, tailcalled↑ comment by bhauth · 2023-04-17T03:41:37.031Z · LW(p) · GW(p)
Is that a question? If I'm given an impossible task, I try to find a way around it. The details would depend on the specifics of your hypothetical situation.
Or are you saying that the flaw in my argument is that...I didn't have the right emotional state while writing it? I'm not sure I understand your point.
Replies from: shminux↑ comment by Shmi (shminux) · 2023-04-17T04:30:57.331Z · LW(p) · GW(p)
I guess the latter? But maybe also the former. Trying to solve the problem rather than enumerating all the ways in which it is unsolvable.
Replies from: bhauth↑ comment by bhauth · 2023-04-17T05:25:51.403Z · LW(p) · GW(p)
That framing is unnatural to me. I see "solving a problem" as being more like solving several mazes simultaneously. Finding or seeing dead ends in a maze is both a type of progress towards solving the maze and a type of progress towards knowing if the maze is solvable.
Replies from: JenniferRM↑ comment by JenniferRM · 2023-04-21T17:31:33.897Z · LW(p) · GW(p)
I'd like to say up front that I respect you both, but I think shminux is right that bhauth's article (1) doesn't make the point it needs to make to change the "belief about whether a set of 'mazes' exist whose collective solution gives nano" for many people working on nano, and (2) this is logically connected to the issue of "motivational stuff".
A key question is the "amount of work" necessary to make intellectual progress on nano (which is probably inherently cross-disciplinary), and thus it is implicitly connected to motivating the amount of work a human would have to put in. This could be shortened to just talking about "motivation", which is a complex thing to do for many reasons that the reader can surely imagine for themselves. And yet... I shall step into the puddle, and see how deep it might be!🙃
I. Close To Object Level Nano Stuff
For people who are hunting, intellectually, "among the countably infinite number of mazes whose solution and joining with other solved mazes would constitute a win condition by offering nano-enabling capacities", they have already solved many of the problems raised in the OP, as explained in Thomas Kwa's excellent top level nano-solutions [LW(p) · GW(p)] comment.
One of Kwa's broad overall points is "nano isn't actually going to be a biological system operating on purely aqueous chemistry", and this helps dodge a huge number of these objections, and has been a recognized part of the plan for "real nanotech" (i.e. molecular manufacturing rather than the nanoparticle bullshit that sucked up all the nano grant money) since the 1980s.
If bhauth wants to write an object level follow-up post, I think it might be interesting to read an attempt to defend the claim: "Nanotechnology requires aqueous biological methods... which are incapable of meeting the demand". However, I don't think this is something bhauth actually agrees with, so maybe that point is moot?
II. What Kinds Of Psychologizing Might Even Be Helpful And Why??
I really respect your engagement here, bhauth, whether you:
(1) really want to advance nano and are helping with that and this was truly your best effort, vs whether
(2) you are playing a devil's advocate against nano plans and offered this up as an attempt to "say what a lot of the doubters are thinking quietly, where the doubters can upvote, and then also read the comments, and then realize that their doubts weren't as justified as they might have expected", or
(3) something more complex to explain rhetorically and/or motivationally.
There is a kind of courage in speaking in public, and confident ability to reason about object level systems, and faith in an audience enough that you're willing to engage deeply.
Also, there are skills for assessing the objectively most important ways technology can or should go in the future, and a willingness to work on such things in a publicly visible forum where it also generates educational and political value for many potential readers.
All these are probably virtues, and many are things I see in your efforts here, bhauth!
I can't read your mind, and don't mean to impose advice where it is not welcome, but it does seem like you were missing ideas that you would have had if you had spent the time to collect all the best "solved mazes" floating in the heads of most of the smartest people who want-or-wanted* to make nano happen?
III. Digressing Into A "Motivated Sociology Of Epistemics" For a Bit
All of this is to reiterate the initial point that effortful epistemics turns out to complexly interact with "emotional states" and what we want-or-wanted* when reasoning about the costs of putting in different kinds of epistemic effort.
(* = A thing often happens, motivationally, where an aspiring-Oppenheimer-type really wants something, and starts collecting all the theories and techniques to make it happen... and then loses their desire part way through as they see more and more of the vivid details of what they are really actually likely to create given what they know. Often, as they become more able to apply a gears level analysis [? · GW], and it comes more into near mode [? · GW], and they see how it is likely to vividly interact with "all of everything they also care about in near mode", their awareness of certain "high effort ideas" becomes evidence of what they wanted, rather than what they still "want in the 'near' future".
(A wrinkle about "what 'near' means" arises with variation in age and motivation. Old people whose interests mostly end with their own likely death (like hedonic interests) will have some of the smallest ideas about what "near" is, and old people with externalized interests will have some of the largest ideas in that they might care in a detailed way about the decades or centuries after their death, while having a clearer idea of what that might even mean than would someone getting a PhD or still gunning for tenure. Old externalized people are thus more likely to be "writing for the ages" with clearer/longer timelines. (And if anyone in these planning loops is an immortalist with uncrushed dreams, then what counts as "near in time" gets even more complicated.)))
I think shminux probably didn't have time to write all this out, but might be nodding along to maybe half of it so far? And I think unpacking it might help bhauth (and all the people upvoting bhauth here?) to level up more and faster, which would probably be good!
For myself, I could maybe tell a story, where the reason I engaged here is, maybe, because I'm focused on getting a Win Condition for all of Earth, and all sapient beings (probably including large language model personas and potential-future aliens and so on?) and I think all good sapient beings with enough time and energy probably converge on collectively advancing the collective eudaemonia of all sentient beings (which I put non-zero credence on being a category that includes individual cells themselves).
Given this larger level goal, I think, as a sociological engineering challenge, it would logically fall out of this that it is super important for present day humans to help nucleate-or-grow-or-improve some kind of "long term convergent Win Condition Community" (which may have existed all the way back to Bacon, or even farther (and which probably explicitly needs to be able to converge with all live instances of similar communities that arise independently and stumble across each other)).
And given this, when I see two really smart people not seeming to understand each other and both making good points, in public, on LW, with wildly lopsided voting patterns...
...that is like catnip to a "Socio-Epistemic Progress Frame" which often seems, to me, to generate justifications for being specifically locally helpful and have that redound (via an admittedly very circuitous-seeming path) to extremely large long term benefits for all sentient beings?
I obviously can't mind read either of you, but when I suggested that bhauth might be doing "something even more rhetorically complex" it was out of an awareness that many such cases exist, and are probably helpful, even if wrong, so long as there is a relatively precise kind of good faith happening, where low-latency high-trust error correction [LW · GW] seems to be pretty central to explicit/formal cognitive growth.
A hunch I have is that maybe shminux started in computer science, and maybe bhauth started in biology? Also I think exactly these kinds of collaborations are often very intellectually productive from both sides!
IV. In Praise Of CS/BIO Collaboration
From experience working in computational virology out of a primary interest in studying the mechanisms of the smallest machines nature has so far produced (as a long term attack on being able to work on nano at an object level), I recognize some of the ways that these fields often have wildly different initial intuitions, based on distinctions like engineering/science, algorithms/empiricism, human/alien and design/accretion.
People whose default is to "engineer (and often reverse-engineer) human-designed algorithms" and people whose default is to "empirically study accreted alien [LW · GW] designs" have amazingly different approaches to thinking about "design" 😂
Still, I think there are strong analogies across these fields.
Like to a CS person "you need to patch a 2,000,000 line system written by people who are all now dead, against a new security zero day, as fast as you can" is a very very hard and advanced problem, but like... that's the simple STARTING position for essentially all biologically evolved systems, as a single step in a typical red queen dynamic... see for example hyperparasitic virophages for a tiny and more-likely-tractable example, where the "genetic code base" is relatively small, and generation times are minuscule. But there are a lot of BIO people, I think, who have been dealing with nearly impossible systems for so long that they have "given up" in some deep way on expecting to understand certain things, and I think it would help them to play with code to get more intuitions about how KISS is possible and useful and beautiful.
(And to be clear, this CS/BIO/design thing is just one place where differences occur between these two fields, and it might very well not be the one that is going on here, and a lot of people in those fields are likely to roll their eyes at bothering with the other one, I suspect? Because "frames" or "emotions" or "stances" or "motivations" or just "finite life" maybe? But from a hyper abstract bayesian perspective, such motivational choices mean the data they are updating on has biases [LW · GW], so their posteriors will be predictably uncalibrated outside their "comfort zone", which is an epistemic issue.)
As a final note in praise of BIO/CS collaboration, it is probably useful to notice that current approaches to AI do not involve hand-coding any of it, but rather "summoning" algorithms into relatively-computationally-universal frameworks via SGD over data sets with enough kolmogorov complexity that it becomes worthwhile to simply put the generating algorithm in the weights rather than try to store all the cases in the weights. This is, arguably, a THIRD kind of "summoned design" that neither CS or BIO people are likely to have good intuitions for, but I suspect it is somewhere in the middle, and that mathematicians would be helpful for understanding it.
V. In Closing
If this helps bhauth or shminux, or anyone who upvoted bhauth really hard, or anyone who downvoted shminux, that would make it worth the writing, based on what I'm directly aiming at, so long as the net harms to any such people are smaller and lesser (which is likely, because it is a wall-o-text that few will read unless (hopefully) they like, and are getting something from, reading it). Such is my hope 😇
↑ comment by tailcalled · 2023-04-17T11:51:45.324Z · LW(p) · GW(p)
The hard part isn't self-replicating nanobots. The hard part is self-replicating nanobots that are efficient enough to outcompete life.
comment by ChristianKl · 2023-04-20T02:25:33.852Z · LW(p) · GW(p)
This means that the reactions you can do are limited to what organic compounds can do at relatively low temperatures - and existing life can pretty much do anything useful in that category already.
We find that bacteria sometimes do manage to work at higher temperatures as well. Thermus aquaticus, which gave us Taq polymerase, for example works at higher temperatures than most other bacteria.
Generally, it's very hard for eukaryotes or prokaryotes to evolve the usage of new amino acids. It's unclear what we could do with artificial designed proteins when we open up the range of amino acids further.
Simply changing the way proteins are coded allows immunizing bacteria against existing phages. Phages are a key reason we have the current diversity of bacteria.
I understand Eliezer Yudkowsky thinks that someone a little smarter than von Neumann (who didn't invent the "von Neumann architecture" or half the other stuff he took credit for, but that's off topic) would be able to invent "grey goo" type nanobots.
Eliezer's model is that an AI with that intelligence will self-improve from "a little smarter than von Neumann" to "a lot smarter than von Neumann".
comment by Davidmanheim · 2023-04-18T10:48:30.192Z · LW(p) · GW(p)
None of this argues that creating grey goo is an unlikely outcome, just that it's a hard problem. And we have an existence proof of at least one way to make grey goo that covers a planet: life as we know it, which did exactly that.
But solving hard problems is a thing that happens, and unlike the speed of light, this limit isn't fundamental. It's more like the "proofs" that heavier than air flight is impossible which existed in the 1800s, or the current "proofs" that LLMs won't become AGIs - convincing until the counterexample exists, but not at all indicative that no counterexample does or could exist.
↑ comment by Steven Byrnes (steve2152) · 2023-04-18T13:55:03.716Z · LW(p) · GW(p)
OP said:
I use "nanobots" to mean "self-replicating microscopic machines with some fundamental mechanistic differences from all biological life that make them superior".
(And I believe they’re using “grey goo” the same way.) So I think you’re using a different definition of “grey goo” from OP, and that under OP’s definition, biological life is not an existence proof.
I think the question of “whether grey-goo-as-defined-by-OP is possible” is an interesting question and I’d be curious to know the answer for various reasons, even if it’s not super-central in the context of AI risk.
Replies from: Davidmanheim↑ comment by Davidmanheim · 2023-04-19T15:04:34.548Z · LW(p) · GW(p)
He excludes the only examples we have, which is fine for his purposes, though I'm skeptical it's useful as a definition, especially since "some difference" is an unclear and easily moved bar. However, it doesn't change the way we want to do prediction about whether something different is possible. That is, even if the example is excluded, it is very relevant for the question "is something in the class possible to specify."
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-04-17T11:47:35.443Z · LW(p) · GW(p)
Thanks so much for this post, I've been wishing for something like this for a long time. I kept hearing people grumbling about how EY & Drexler were way too bullish about nanotech, but no one had any actual arguments. Now we have arguments & a comment section. :)
Replies from: PeterMcCluskey↑ comment by PeterMcCluskey · 2023-04-17T17:20:55.289Z · LW(p) · GW(p)
I object to the implication that Eliezer and Drexler have similar positions. Eliezer seems to seriously underestimate how hard nanotech is. Drexler has been pretty cautious about predicting how much research it would require.
Replies from: daniel-kokotajlo↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-04-18T00:48:28.716Z · LW(p) · GW(p)
Huh, interesting. I am skeptical. Drexler seems to have thought that ordinary human scientists could get to nanotech in his lifetime, if they made a great effort. Unless he's changed his mind about that, that means he agrees with Yudkowsky about nanotech, I think. (As I interpret him, Yudkowsky takes that claim and then adds the additional hypothesis that, in general, superintelligences will be able to do research several OOMs faster than human science, and thus e.g. "thirty years" becomes "a few days." If Drexler disagrees with this, fine, but it's not a disagreement about nanotech it's a disagreement about superintelligence.)
Can you say more about what you mean?
↑ comment by PeterMcCluskey · 2023-04-18T17:43:28.614Z · LW(p) · GW(p)
I can't point to anything concrete from Drexler, beyond him being much more cautious than Eliezer about predicting the speed of engineering projects.
Speaking more for myself than for Drexler, it seems unlikely that AI would speed up nanotech development more than 10x. Engineering new arrangements of matter normally has many steps that don't get sped up by more intelligence.
The initial nanotech systems we could realistically build with current technology are likely dependent on unusually pure feedstocks, and still likely to break down frequently. So I expect multiple generations of design before nanotech becomes general-purpose enough to matter.
I expect that developing nanotech via human research would require something like $1 billion in thoughtfully spent resources. Significant fractions of that would involve experiments that would be done serially. Sometimes that's because noise makes interactions hard to predict. Sometimes it's due to an experiment needing a product from a prior experiment.
Observing whether an experiment worked is slow, because the tools for nanoscale images are extremely sensitive to vibration. Headaches like this seem likely to add up.
Replies from: jacob_cannell, sharmake-farah↑ comment by jacob_cannell · 2023-04-18T18:54:20.540Z · LW(p) · GW(p)
Chip litho (practical top-down nanotech) is already approaching the practical physical limits for non-exotic computers (and practical exotic computers seem harder/farther than cold fusion).
Biology is already at the key physical limits (thermodynamic efficiency) for nanoscale robotics. It doesn't matter what materials you use to construct nanobots, they can't have large advantages over bio cells, because bio cells are already near optimal in terms of the primary constraints (which are thermodynamic efficiency for copying and spatially arranging bits).
↑ comment by Noosphere89 (sharmake-farah) · 2023-04-18T18:50:56.661Z · LW(p) · GW(p)
I basically agree with this take, assuming relatively conventional computers and no gigantic size computers like planet computers.
And yeah, I think Eliezer's biggest issue with ideas like nanotechnology, and with his general approach of assuming most limitations away via future technology, isn't that they can't happen, but that he ignores that getting to that abstracted future state takes a lot more time than he thinks, that time matters more than he thinks (especially in AI safety), and that it generally requires more contentious assumptions than he thinks.
comment by ErioirE (erioire) · 2024-06-26T18:25:46.406Z · LW(p) · GW(p)
If it was advantageous to use structures of those inside cells for reactions somehow, then some organisms would already do that.
Not necessarily. The space of advantageous biologically possible structural configurations seems to me to be intuitively larger than the space of useful configurations currently known to be in use.
In order for a structure to be evolutionarily feasible, it must not only be advantageous but also there must be a path of individually beneficial (or at minimum not harmful) small steps in between it and currently existing structures. If an adaptation does not lend itself to linearly realized benefit, e.g. one that works really well but only when 90%+ is 'correct', it has no evolutionary way to piece itself together from 0-90%.
comment by Phib · 2023-04-17T18:16:03.096Z · LW(p) · GW(p)
Low-commitment comment here, but I've previously used nanotech as an example (rather than a probable outcome) of a class of somewhat known unknowns - to portray possible future risks that we can imagine as possible while not being fully conceived. So while grey goo might be unlikely, it seems that the precursor to grey goo - a pretty intelligent system trying to mess us up - is the thing to be focused on, and this is just one of the many possibilities that we can even imagine.
comment by Donald Hobson (donald-hobson) · 2024-08-14T12:42:38.207Z · LW(p) · GW(p)
Yes, I've actually seen people say that, but cells do use myosin to transport proteins sometimes. That uses a lot of energy, so it's only used for large things.
Cells have compartments with proteins that do related reactions. Some proteins form complexes that do multiple reaction steps. Existing life already does this to the extent that it makes sense to.
Humans or AIs designing a transport/compartmentalization system can ask "how many compartments are optimal?" Evolution doesn't work like this. It evolves a transport system to transport one specific thing in one specific organism.
It's like humans invent railways in general. Evolution invents a railway between 2 towns, and if it wants to connect a third town, it needs to invent a railway again from scratch. (Imagine a bunch of very secretive town councils)
comment by Metacelsus · 2023-04-17T21:17:09.777Z · LW(p) · GW(p)
>what if a superintelligence finds something I didn't think of?
I'm not a superintelligence, and I know of at least one plausible "green goo" scenario involving rogue microbes.