How probable is Molecular Nanotech?
post by leplen · 2013-06-29T07:06:38.522Z · LW · GW · Legacy · 56 comments
About a week ago I posted asking whether bringing up molecular nanotechnology (MNT) as a possible threat avenue for an unfriendly artificial intelligence made FAI research seem less credible, because MNT seemed to me to be not obviously possible. I was told, more or less, to put up and address the science of MNT or shut up. A couple of people also expressed an interest in seeing a more fact- and less PR-oriented discussion, so I got the ball rolling, and you all have no one to blame but yourselves. I should note before starting that I do not personally have a strong opinion on whether Drexler-style MNT is possible. This isn't something I've researched previously, and I'm open to being convinced one way or the other. If MNT turns out to be likely at the end of this investigation, then hopefully this discussion can provide a good resource for LW/FAI on the topic for people like myself not yet convinced that MNT is the way of the future. As far as I'm concerned, at this point all paths lead to victory.
Nanosystems was the canonical reference mentioned in the last conversation. I ordered it, but about two-thirds of the way through this post I figured Engines of Creation was giving me enough to work with and cancelled my order. If the science in Nanosystems is really much better than in EoC I can reorder it, but I figured we'd get started for free. Fifty bucks is a lot of money to spend on an internet argument.
Before I begin I would like to post the following disclaimers.
1. I am not an expert in many of the claims that border on MNT. I did work at a Nanotechnology center for a year, but that experience was essentially nothing like what Drexler describes. More relevantly I am in the process of completing a Ph.D. in Physics, and my thesis work is on computational modeling of novel materials. I don't really like squishy things, so I'm very much out of my depth when it comes to discussions as to what ribosomes can and cannot accomplish, and I'll happily defer to other authorities on the more biological subjects. With that being said, several of my colleagues run MD simulations of protein folding all day every day, and if a biology issue is particularly important, I can shoot some emails around the department and try and get a more expert opinion.
2. There are several difficulties in precisely addressing Drexler's arguments, because it's not always clear to me at least exactly what his arguments are. I've been going through Engines of Creation and several of his other works, and I'll present my best guess outline here. If other people would like to contribute specific claims about molecular nanotech, I'll be happy to add them to the list and do my best to address them.
3. This discussion is intended to be scientific. As was pointed out previously, Drexler et al. have made many claims about timetables for when things might be invented. Judging the accuracy of these claims is difficult because of issues with definitions, as mentioned in the previous paragraph. I'm not interested in having this discussion encompass Drexler's general prediction accuracy. Nature is the only authority I'm interested in consulting in this thread. If someone wants to make a Drexler's prediction accuracy thread, they're welcome to do so.
4. If you have any questions about the science underlying anything I say, don't hesitate to ask. This is a fairly technical topic, and I'm happy to bring anyone up to speed on basic physics/chemistry terms and concepts.
Discussion
I'll begin by providing some background and highlighting why exactly I am not already convinced that MNT, and especially AI-assisted rapid MNT is the future, and then I'll try and address some specific claims made by Drexler in various publications.
Conservation of energy:
Modelling is hard:
Solving the Schrodinger equation is essentially impossible. We can solve it more or less exactly for the Hydrogen atom, but things get very very difficult from there. This is because we don't have a simple solution for the three-body problem, much less the n-body problem. Approximately, the difficulty is that because each electron interacts with every other electron, you have a system where to determine the forces on electron 1, you need to know the position of electrons 2 through N, but the position of each of those electrons depends somewhat on electron 1. We have some tricks and approximations to get around this problem, but they're only justified empirically. The only way we know what approximations are good approximations is by testing them in experiments. Experiments are difficult and expensive, and if the AI is using MNT to gain infrastructure, then we can assume it doesn't already have the infrastructure to run its own physics lab.
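To make the scaling concrete, here is a small sketch (the basis-set sizes are rough assumed values, not taken from any particular calculation) of how fast the brute-force problem grows: the number of many-electron basis states an exact, full configuration-interaction treatment would have to handle.

```python
from math import comb

def fci_dim(n_spin_orbitals: int, n_electrons: int) -> int:
    """Number of Slater determinants for n_electrons in n_spin_orbitals."""
    return comb(n_spin_orbitals, n_electrons)

# Illustrative (assumed) basis sizes:
systems = {
    "H atom, minimal basis (2 spin-orbitals, 1 electron)": (2, 1),
    "H2O, modest basis (~48 spin-orbitals, 10 electrons)": (48, 10),
    "benzene, modest basis (~240 spin-orbitals, 42 electrons)": (240, 42),
}
for name, (m, n) in systems.items():
    print(f"{name}: {fci_dim(m, n):.2e} determinants")
```

The exact problem blows up combinatorially, which is why everything in practice rests on the approximations, and the empirical checks of them, described above.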
A factory isn't the right analogy:
The discussion of nanotechnology seems to me to have an enormous emphasis on Assemblers, or nanofactories, but a factory doesn't run unless it has a steady supply of raw materials and energy resources both arriving at the correct time. The evocation of a factory calls to mind the rigid regularity of an assembly line, but the factory only works because it's situated in the larger, more chaotic world of the economy. Designing new nanofactories isn't a problem of building the factory, but a problem of designing an entire economy. There has to be a source of raw material, an energy source, and means of transporting material and energy from place to place. And, with a microscopic factory, Brownian motion may have moved the factory by the time the delivery van gets there. This fact makes the modelling problem orders of magnitude more difficult. Drexler makes a big deal about how his rigid positional world isn't like the chaotic world of the chemists, but it seems like the chaos is still there; building a factory doesn't get rid of the logistics issue.
Chaos
The reason we can't solve the n-body problem, and lots of other problems such as the double pendulum and the weather, is that it turns out to be a rather unfortunate fact of nature that many systems have a very sensitive dependence on initial conditions. This means that ANY error, any unaccounted-for variable, can perturb a system in dramatic ways. Since there will always be some error (at the bare minimum h/4π), this means that our AI is going to have to do Monte Carlo simulations like the rest of us schmucks and try to eliminate as many degrees of freedom as possible.
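As a toy illustration of that sensitivity (using the logistic map, a standard chaotic example, rather than anything molecular), a perturbation far smaller than any realistic modelling error swamps the prediction within a few dozen steps:

```python
# Two copies of the chaotic logistic map x -> 4x(1-x), started 1e-12 apart.
# The separation grows roughly exponentially until it saturates at order one.
x, y = 0.4, 0.4 + 1e-12
for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```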
The laws of physics hold
I didn't think it would be necessary to mention this, but I believe that the laws of physics are pretty much the laws of physics we know right now. I would direct anyone who suggests that an AI has a shot at powering MNT with cold fusion, tachyons, or other physical phenomena not predicted by the standard model to this post. I am not saying there is no new physics, but we understand quantum mechanics really well, and the Standard Model has been confirmed to enough decimal places that anyone who suggests something the Standard Model says can't happen is almost certainly wrong. Even if they have experimental evidence that is supposed to be 99.9999% correct.
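As a rough illustration of the Bayesian point (every number below is an assumption chosen for the example, not an estimate about any real experiment): evidence that is wrong only one time in a million still leaves a sufficiently improbable claim at very long odds.

```python
# Toy posterior calculation with assumed numbers.
prior_p = 1e-12                  # assumed prior that the exotic effect is real
p_pos_given_true = 0.999999      # claimed reliability of the experiment
p_pos_given_false = 1e-6         # corresponding false-positive rate

posterior = (p_pos_given_true * prior_p) / (
    p_pos_given_true * prior_p + p_pos_given_false * (1 - prior_p))
print(f"posterior probability the effect is real: {posterior:.2e}")  # ~1e-6
```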
Specific Claims
Drexler's claims about what we can do now with respect to materials science in general are true. This should be unsurprising. It is not particularly difficult to predict the past. Here are six claims he makes about things we can't currently accomplish, which I'll try to evaluate:
- Building "gear-like" nanostructures is possible (Toward Integrated Nanosystems)
- Predicting crystal structures from first principles is possible (Toward Integrated Nanosystems)
- Genetic engineering is a superior form of chemical synthesis to traditional chemical plants. (EoC 6)
- "Biochemical engineers, then, will construct new enzymes to assemble new patterns of atoms. For example, they might make an enzyme-like machine which will add carbon atoms to a small spot, layer on layer. If bonded correctly, the atoms will build up to form a fine, flexible diamond fiber having over fifty times as much strength as the same weight of aluminum." (EoC 10)
- Proteins can make and break diamond bonds (EoC 11)
- Proteins are "programmable" (EoC 11)
2. True. This isn't true yet, but should be possible. I might even work on this after I graduate, if I don't go hedge fund or into AI research.
3. Not wrong, but misleading. The statement "Genetic engineers have now programmed bacteria to make proteins ranging from human growth hormone to rennin, an enzyme used in making cheese." is true in the same sense that copying and pasting someone else's code constitutes programming. Splicing a gene into a plasmid is sweet, but genetic programming implies more control than we have. Similarly, the statement: "Whereas engineers running a chemical plant must work with vats of reacting chemicals (which often misarrange atoms and make noxious byproducts), engineers working with bacteria can make them absorb chemicals, carefully rearrange the atoms, and store a product or release it into the fluid around them." implies that bacterial synthesis leads to better yields (false), that bacteria are careful (meaningless), and that we have greater control over genetically modified E. coli than we actually do.
4a. False. Flexible diamond doesn't make any sense. Diamond is sp3-bonded carbon, and those bonds are highly directional. They're not going to flex. Metals are flexible because metallic bonds, unlike covalent bonds, don't confine the electrons in space. Whatever this purported carbon fiber is, it either won't be flexible, or it won't be diamond.
4b. False. It isn't clear that this is even remotely possible. Enzymes don't work like this. Enzymes are catalysts for existing reactions. There is no existing reaction that results in a single carbon atom. That's an enormously energetically unfavorable state. Breaking a single carbon-carbon double bond requires something like 636 kJ/mol (6.5 eV) of energy. That's roughly equivalent to burning 30 units of ATP at the same time (a quick unit-conversion sketch follows this list). How? How do you get all that energy into the right place at the right time? How does your enzyme manage to hold on to the carbons strongly enough to pull them apart?
5. "A flexible, programmable protein machine will grasp a large molecule (the workpiece) while bringing a small molecule up against it in just the right place. Like an enzyme, it will then bond the molecules together. By bonding molecule after molecule to the workpiece, the machine will assemble a larger and larger structure while keeping complete control of how its atoms are arranged. This is the key ability that chemists have lacked." I'm no biologist, but this isn't how proteins work. Proteins aren't Turing machines. You don't set the state and ignore them. The conformation of a protein depends intimately on its environment. The really difficult part here is that the thing it's holding, the nanopart you're trying to assemble is a big part of the protein's environment. Drexler complains around how proteins are no good because they're soft and squishy, but then he claims they're strong enough to assemble diamond and metal parts. But if the stiff nanopart that you're assembling has a dangling carbon bond waiting to filled then it's just going to cannibalize the squishy protein that's holding it. What can a protein held together by Van der Waals bonds do to a diamond? How can it control the shape it takes well enough to build a fiber?
6. All of these tiny machines are repeatedly described as programmable, but that doesn't make any sense. What programs are they capable of accepting or executing? What set of instructions can a collection of 50 carbon atoms accept and execute? How are these instructions being delivered? This gets back to my factory vs. economy complaint. If nothing else, this seems an enormously sloppy use of language.
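The unit-conversion sketch promised in 4b (standard constants only; the 636 kJ/mol figure is the one quoted above):

```python
AVOGADRO = 6.022e23                                   # 1/mol
KJ_PER_MOL_TO_EV = 1000.0 / (AVOGADRO * 1.602e-19)    # ~0.0104 eV per kJ/mol

bond_kj_per_mol = 636.0
bond_ev = bond_kj_per_mol * KJ_PER_MOL_TO_EV
print(f"{bond_kj_per_mol} kJ/mol = {bond_ev:.2f} eV per bond")   # ~6.6 eV

kT_300K_ev = 0.0259          # thermal energy scale at room temperature
print(f"that is ~{bond_ev / kT_300K_ev:.0f} kT at 300 K")        # ~250 kT
```

That gap between the bond energy and the thermal energy scale a floppy protein works with is the point of the "how do you get all that energy into the right place" question.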
Some things that are possible
I think we have or will have the technology to build some interesting artificial inorganic structures in very small quantities, primarily using ultra-cold, ultra-high-vacuum laser traps. It's even possible that eventually we could create some functional objects this way, though I can't see any practical way to scale that production up.
"Nanorobots" will be small pieces of metal or dieletric material that we manipulate with lasers or sophisticated magnetic fields, possibly attached to some sort of organic ligand. This isn't much of a prediction, we pretty much do this already. The nanoworld will continue to be statistical and messy.
We will gain some inorganic control over organics like protein and DNA (though not organic control over inorganics). This hasn't really been done yet as far as I'm aware, but stronger bonds > weaker bonds makes sense. I think there are people trying to read DNA/proteins by pushing the strands through tiny silicon windows. I feel like I heard a seminar along those lines, though I'm pretty sure I slept through it.
That brings me through the first 12 pages of EoC or so. More to follow. Let me know if the links don't work or the formatting is terrible or I said something confusing. Also, please contribute any specific MNT claims you'd like evaluated, and any resources or publications you think are relevant. Thank you.
56 comments
comment by Mitchell_Porter · 2013-06-29T07:48:58.653Z · LW(p) · GW(p)
Drexler's MIT thesis (30 Mb) is available for free - link is near the top of this page - and "contains most of the content of the book [Nanosystems] in nearly final form."
How does your enzyme manage to hold on to the carbons strongly enough to pull them apart?
There are real enzymes that do this - cleave carbon-carbon bonds - as you probably know...?
This paper contains an early proposal for how small groups of carbon atoms would be stored, positioned, and pulled apart as necessary. Many more papers are available here.
↑ comment by leplen · 2013-06-29T09:04:03.555Z · LW(p) · GW(p)
Drexler's MIT thesis (30 Mb) is available for free - link is near the top of this page - and "contains most of the content of the book [Nanosystems] in nearly final form."
Great. I'll shift my attention to that until Nanosystems arrives.
This paper contains an early proposal for how small groups of carbon atoms would be stored, positioned, and pulled apart as necessary.
Excellent. Thanks.
comment by passive_fist · 2013-06-29T07:50:38.778Z · LW(p) · GW(p)
I also don't have a stance on MNT. If it were possible that would be great, but what would be even greater is if we could actually foresee what is truly possible within the realm of reality. At the very least, that would allow us to plan our futures.
However, I hope you won't mind me making a counter-argument to your claims, just for sake of discussion.
EoC and Nanosystems aren't comparable. EoC is not even a book about MNT per se, it is more a book about the impact of future technology on society (it has chapters devoted to the internet and other things - it's also notable that he successfully predicted the rise of the internet). Nanosystems on the other hand is an engineering book. It starts out with a quantitative scaling analysis of things like magnetism, static electricity, pressure, velocity etc. at the macroscale and nanoscale and proceeds into detailed engineering computations. It is essentially like a classical text on engineering, except on the nanoscale.
As for the science presented in Nanosystems, I view it as less of a 'blueprint' and more of a theoretical exploration of the most basic nanotechnology that is possible. For example, Drexler presents detailed plans for a nanomechanical computer. He does not make the claim that future computers will be like what he envisions. His nanomechanical computer is simply a theoretical proof-of-concept. It is there to show that computing at the nanoscale is possible. It's unlikely that practical nanocomputers in the future (if they are possible) will look like that at all. They will probably not use mechanical principles to work.
Now about your individual arguments:
Conservation of Energy: In Nanosystems Drexler makes a lot of energy computations. However, in general, it is true that building things on the molecular level is not necessarily more energy-efficient than building them the traditional way i.e. in bulk. In fact, for many things it would probably be far less energy-efficient. It seems to me that even if MNT were possible, most things would still be made using bulk technology. MNT would only be used for the high-tech components such as computers.
Modelling is Hard: You're talking about solving the Schrodinger equation analytically. In practice, a sufficiently-precise numerical simulation is more than adequate. In fact, ab-initio quantum simulations (simulations that make only the most modest of assumptions and compute from first principles) have been carried out for relatively large molecules. I think it is safe to assume that future computers will be able to model at least something as complicated as a nanoassembler entirely from first principles.
A factory isn't the right analogy: I don't understand this argument.
Chaos: You mention chaos but don't explain why it would ruin MNT. The limiting factor in current quantum mechanical simulations is not chaotic dynamics.
The laws of physics hold: Wholeheartedly agree. However, even in the realm of current physics there is a lot of legroom. Cold fusion may be a no-no, but hot fusion is definitely doable, and there is no law of physics (that we know of) that says you can't build a compact fusion reactor.
The simulations of molecular gears and such you find on the internet are of course fanciful. They have been done with molecular dynamics, not ab-initio simulation. You are correct that stability analysis has not been done. However, stability analysis of various diamondoid structures has been carried out, and contrary to the 'common knowledge' that diamond decays to graphite at the surface, defect-free passivated diamond turns out to be perfectly stable at room temperature, even in weird geometries [1]
Agree.
De novo enzymes have been created that perform functions unprecedented in the natural world [2] (this was reported in the journal Nature). Introduction of such proteins into bacteria leads to evolution and refinement of the initial structure. The question is not one of 'doing better than biology'. It's about technology and biology working together to achieve nanotech by any means necessary. You are correct that we are still very very far from reaching the level of mastery over organic chemistry that nature seems to have. Whether organic synthesis remains a plausible route to MNT remains to be seen.
If this is about creating single carbon atoms, you are right. However, it is not mentioned that single carbon atoms will need to exist in isolation. Carbon dimers can exist freely, and in fact ab-initio simulations have shown that they can quite readily be made to react and bond with diamond surfaces [3]. I think it's more plausible that this is what is actually meant. I don't believe Drexler is so ignorant of basic chemistry as to have made this mistake.
I do not have enough knowledge to give an opinion on this.
I also agree that at the present there is no way to know whether such programmable machines are possible. However, they are not strictly necessary for MNT. A nanofactory would be able to achieve MNT without needing any kind of nanocomputer anywhere. Nanorobots are not necessary, so arguments refuting them do not by any means refute the possibility of MNT.
References:
[1] http://www.molecularassembler.com/Papers/TarasovFeb2012.pdf
[2] http://dx.doi.org/10.1038%2Fnature01556
[3] http://www.molecularassembler.com/Papers/AllisHelfrichFreitasMerkle2011.pdf
↑ comment by leplen · 2013-06-29T08:24:20.654Z · LW(p) · GW(p)
However, I hope you won't mind me making a counter-argument to your claims, just for sake of discussion.
Pleased as punch. I'm not an authority, just getting the ball rolling.
EoC and Nanosystems aren't comparable
Noted. Repurchased Nanosystems.
Modelling is Hard: You're talking about solving the Schrodinger equation analytically. In practice, a sufficiently-precise numerical simulation is more than adequate. In fact, ab-initio quantum simulations (simulations that make only the most modest of assumptions and compute from first principles) have been carried out for relatively large molecules. I think it is safe to assume that future computers will be able to model at least something as complicated as a nanoassembler entirely from first principles.
Haha. And what is "ab initio"? That's a fighting word where I'm from. The point I'm striving to make here is that our "ab initio" methods are constantly being tweaked and evolved to fit experimental data. Now, we're making mathematical approximations in the model (there aren't any explicit empirical fitting parameters), but if an AI is going to have a hard time coming up with God's own exchange-correlation functional, then it's not going to be able to leap-frog all the stumbling in the dark we're doing testing different ways to cut corners. If the best ab-initio modeling algorithm the AI has is coupled-cluster or B3LYP, then I can tell you exactly how big a system it can handle, how accurately, and for how many resources. That's a really tight constraint, and I'm curious to see how it goes over. As for modelling assemblers, I can model a nanoassembler from first principles right now if you tell me where the atoms go. Of course "first principles" is up for debate, and I won't have "chemical accuracy". What I'm less sure about is whether I can model it interacting with its environment.
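To put rough numbers on "I can tell you exactly how big a system it can handle": a back-of-the-envelope sketch of how steep polynomial cost scaling caps system size. The reference timing and compute budget below are invented placeholders; only the scaling exponents are the usual textbook ones.

```python
# Approximate cost scaling of common electronic-structure methods.
methods = {
    "CCSD(T)": 7,   # coupled cluster "gold standard", ~O(N^7)
    "CCSD":    6,
    "B3LYP":   4,   # hybrid DFT, roughly O(N^3)-O(N^4) in practice
}

ref_atoms = 20        # assume a 20-atom molecule takes ...
ref_hours = 1.0       # ... one CPU-hour with each method (placeholder)
budget_hours = 1e6    # assumed compute budget

for name, p in methods.items():
    max_atoms = ref_atoms * (budget_hours / ref_hours) ** (1.0 / p)
    print(f"{name}: ~{max_atoms:.0f} atoms within budget (cost ~ N^{p})")
```

A million-fold increase in compute buys surprisingly little extra system size when the exponent is large, which is the constraint being pointed at.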
I also agree that at the present there is no way to know whether such programmable machines are possible. However, they are not strictly necessary for MNT. A nanofactory would be able to achieve MNT without needing any kind of nanocomputer anywhere. Nanorobots are not necessary, so arguments refuting them do not by any means refute the possibility of MNT.
Sure. If I need to dig a hole I'd rather have a shovel than a "programmable" shovel any day. But if you have a whole bunch of different tools, you're back to the problem of how do they get to the work site in the right order. It doesn't have the same determinism as that "programmed protein" machine.
↑ comment by passive_fist · 2013-06-29T21:00:57.717Z · LW(p) · GW(p)
Sure. If I need to dig a hole I'd rather have a shovel than a "programmable" shovel any day. But if you have a whole bunch of different tools, you're back to the problem of how do they get to the work site in the right order. It doesn't have the same determinism as that "programmed protein" machine.
Perhaps you would care to explain what you mean? I admit I'm not quite sure what your argument is here.
↑ comment by leplen · 2013-06-29T21:56:55.759Z · LW(p) · GW(p)
There's a fundamental disconnect between a machine, and a programmable machine. A machine is presumed to do one operation, and do it well. A machine is like a shovel or a lever. It's not unnecessarily complicated, it's not terribly difficult to build, and it can usually work with pretty wide failure tolerances. This is why you just want a shovel and not a combination shovel/toaster when you have to dig a hole.
A programmable machine is like a computer. It is capable of performing many different operations depending on what kinds of inputs it receives. Programmable machines are complicated, difficult to construct, and can fail in both very subtle and very spectacular ways.
We can also imagine the distinction between a set of wood-working tools and a 3d-printer. A hammer is a machine. A reprap is a programmable machine.
If the question we're trying to answer is, can we build a protein hammer, the answer is probably yes. But if we make a bunch of simple protein hammers, then we have to solve the very difficult problem of how to ensure that each tool is in the right place at the right time. A priori, there's no molecular carpenter ensuring that those tools happen to encounter whatever we're trying to build in any consistent order.
That's a very different problem than the problem of "can we make a protein 3-D printer", that has the ability to respond to complicated commands.
I'm not sure which of these situations is the one being advocated for by MNT proponents.
↑ comment by passive_fist · 2013-06-29T22:41:04.175Z · LW(p) · GW(p)
Again, you're trying to argue against nanoassemblers. If you're trying to say that nanoassemblers will be difficult to build, I entirely concede that point! If they weren't, we'd have them already.
Nevertheless, we have today programmable machines that are built with components of nanoscopic size and are subject to weird quantum effects, yet have billions of components and work relatively smoothly. Such devices would have been thought impossible just a few decades ago. So just because something would be immensely complex is no argument for its impossibility.
However, as I said, this is all beside the point, since MNT does not strictly require nanoassemblers. A nanofactory would be built of a large set of simple tools as you describe, each tool only doing its own thing. This is much like biology, where each enzyme is designed to do one thing well. However, unlike biology, the way you would go about designing a nanofactory would be similar to an assembly line. Components would be created in controlled conditions and probably high vacuum (possibly even cryogenic temperatures, especially for components with unstable or metastable intermediates). Power would be delivered electrically or mechanically, not through ATP.
Why not just do it like biology? Well, because of different design constraints. Biological systems need to be able to grow and self-repair. Our nanofactory will have no such constraints. Instead, the focus would be on high throughput and reconfigurability, thus necessitating a more controlled, high-power environment than the Brownian diffusion-reaction processes of biology.
↑ comment by leplen · 2013-06-30T00:35:58.521Z · LW(p) · GW(p)
Great, so this I think captures a lot of the difficulty in this discussion, where there are a lot of different opinions as to what exactly constitutes MNT. In my reading of Drexler so far, he appears to more or less believe that early nanotech will be assembled by co-opting biological assemblers like the ribosome. That's specifically the vision of MNT that I've been trying to address.
Since you seem not to believe in that view of MNT, do you have a concise description of your view of MNT that you could offer, that I could add to the discussion post above? I'm particularly interested in what environment you imagine your nanoassembler operating in.
↑ comment by passive_fist · 2013-06-30T01:09:35.643Z · LW(p) · GW(p)
To add to my reply above, one approach for discussion about the specifics of future technology is to take an approach like Nanosystems does: operate within safe limits of known technology and limit concepts to those that are more-or-less guaranteed to work, even if they are probably inefficient. In this way, even though we acknowledge that our designs could not be built today, and future technology will probably choose to build things in an entirely different way, we can still have a rough picture of what's possible and what isn't.
For example, take this video: http://www.youtube.com/watch?v=vEYN18d7gHg
It shows an 'assembly line for molecules'. Of course, there are many questions that are left unanswered: energy consumption, reconfigurability, throughput. It's not clear at all whether the whole thing would actually be an improvement over current technology. For example, will this nanofactory be able to produce additional nanofactories? If not, it wouldn't make things any cheaper or more efficient.
However, it does serve as a conceptual starting point. And indeed, small-scale versions of the technology exist right now (people have automated AFMs that are capable of producing atomic structures; people have also used AFMs to modify, break, and form chemical bonds).
↑ comment by passive_fist · 2013-06-30T00:49:46.088Z · LW(p) · GW(p)
there's a lot of different opinions as to what exactly constitutes MNT
There's two different discussions here. One is the specific form the technology will take. The other is what it will be capable of doing. About the latter, the idea is to have a technology that will be able to construct things at only marginally higher cost than that of raw materials. If MNT is possible, it will be able to turn dirt into strawberries, coal into diamonds, sand into computers and solar panels, and metal ore into rocket engines. Note that we are capable of accomplishing all of these feats right now; it's just that they take too much time and effort. The promise of MNT and why it is so tantalizing is precisely because it promises, once functional, to reduce this time and effort substantially.
I'm more than willing to debate about the specifics of the technology, although we will both have to admit that any such discussion would be incredibly premature at this point. I don't think a convincing case can be made right now for or against any hypothetical technology that will be able to achieve MNT.
I'm also more than willing to debate about the fundamental physical limits of construction at the nanoscale, but in that case it is much harder to refute the premise of MNT.
↑ comment by darius · 2013-06-29T23:28:48.990Z · LW(p) · GW(p)
it's also notable that he successfully predicted the rise of the internet
Quibble: there was plenty of internet in 1986. He predicted a global hypertext publishing network, and its scale of impact, and starting when (mid-90s). (He didn't give any such timeframe for nanotechnology, I guess it's worth mentioning.)
comment by Kaj_Sotala · 2013-06-29T07:24:48.190Z · LW(p) · GW(p)
Upvoted and I'm very much looking forward to seeing the full discussion, but I'd suggest changing the title to something like "is MNT plausible". "Is MNT science" is bound to invite arguments like "MNT is not scientifically proven but does not contradict existing science either" and other digressions into what exactly "scientific" means, which aren't really very relevant for what you're saying.
comment by gwern · 2013-06-29T15:49:20.483Z · LW(p) · GW(p)
Given that you bring up the n-body problem twice (speciously, IMO, because we're not dealing with infinitesimal points in a Newtonian astronomical context, but with constrained molecules of nonzero size in solids/fluids/atmosphere, you might as well say 'A* pathfinding can't work because it requires solving the n-body problem!'; and you ignore approximations), you may be interested to know that the n-body problem is in fact exactly soluble: see "The Solution of the n-body Problem".
Impossibility proofs are tricky things.
↑ comment by leplen · 2013-06-29T16:06:09.828Z · LW(p) · GW(p)
I was specifically referring to the difficulty of solving it for determining electron position, which, in the classical limit, is exactly analogous to infinitesimal points moving around based on forces that obey an inverse square law; there's a sign difference in the Hamiltonian since electrons repel, but it's essentially the same problem, with essentially the same difficulties. We can solve the Hydrogen atom (2-body problem), we have some solutions for the Helium atom (3-body problem), and we sort of give up after that.
As for the solution to the n-body problem, I assume you're referring to the infinite series solution, which is known to converge very slowly. I'll try and read Qiudong Wang's book and check and see if this is true. We (and by we I mean Poincare) have proven you can't solve it with algebra and integrals, and computers are known to be bad at derivatives. I think this may weaken my argument if calculating an infinite series solution to the S.E. is possible, because it would allow you in principle to numerically solve quantum mechanics problems to arbitrary accuracy, which right now we're incapable of. I'll need to look at how the solution behaves as a function of accuracy and n.
I will say I'm much happier with the tentative statement, "An AI may be able to devise novel solutions for coupled differential equations" than "An AI will get nanotechnology". Reducing the latter statement toward the former I think could give us much tighter bounds on what we expect to happen.
Thanks! Great contribution.
↑ comment by gwern · 2013-06-29T16:45:48.245Z · LW(p) · GW(p)
I was specifically referring to the difficulty of solving it for determining electron position, which, in the classical limit is exactly analogous to infinitesimal points moving around based on forces that obey an inverse square law.
I thought you were talking about quantum difficulties in arranging on the nano scale? I am not sure what your classical limit is, but I am not sure it would apply to the argument you originally made.
Your reference also claims that there is a solution, but that has a million terms and is sensitive to round off error and thus impractical to use in any sort of numerical work, so it does not, off the top of my head, substantially affect my line of reasoning.
Indeed, the exact solution is worse than the known approximation methods. That we know the exact solution, but choose not to use it, is still interesting and relevant... I'll remind you of your first use of the n-body problem:
Solving the Schrodinger equation is essentially impossible. We can solve it more or less exactly for the Hydrogen atom, but things get very very difficult from there. This is because we don't have a simple solution for the three-body problem, much less the n-body problem. Approximately, the difficulty is that because each electron interacts with every other electron, you have a system where to determine the forces on electron 1, you need to know the position of electrons 2 through N, but the position of each of those electrons depends somewhat on electron 1. We have some tricks and approximations to get around this problem, but they're only justified empirically.
To pick an obvious observation: if we have an exact solution, however inefficient, does that not immediately give us both theoretical and empirical ways to justify the fast approximations by comparing them to the exact answers using only large amounts of computing power - and never appealing to experiments?
↑ comment by leplen · 2013-06-29T17:33:14.515Z · LW(p) · GW(p)
I was specifically referring to the difficulty of solving it for determining electron position, which, in the classical limit is exactly analogous to infinitesimal points moving around based on forces that obey an inverse square law.
I thought you were talking about quantum difficulties in arranging on the nano scale? I am not sure what your classical limit is, but I am not sure it would apply to the argument you originally made.
The Hamiltonians for the 2 systems are essentially identical. If you treat electrons as having a well-defined position and momentum (hence the classical limit) then the problem of atomic bonding is exactly the same as the gravitational n-body problem (plus a sign change to handle repulsion). I'll have to sit down and do a bunch of math before I can say how exactly the quantum aspects affect the infinite series solution presented. But my general statement that
is trivially true, and this is why I introduced solving the many-body S.E. as approximately equivalent to the n-body problem.
To pick an obvious observation: if we have an exact solution, however inefficient, does that not immediately give us both theoretical and empirical ways to justify the fast approximations by comparing them to the exact answers using only large amounts of computing power - and never appealing to experiments?
Exactly why I think you've made a good point. I need to look at the approximation and see if it's possible. If it has 10^24 derivatives to get chemical accuracy, and scales poorly with respect to n, then it's probably not useful in practice, but the argument you make here explicitly is exactly the argument I understood implicitly from your previous post.
↑ comment by gwern · 2013-06-29T19:34:13.041Z · LW(p) · GW(p)
is trivially true, and this is why I introduced solving the many-body S.E. as approximately equivalent to the n-body problem.
Alright, I will take your word for it. I had never seen anyone say that the classical Newtonian-mechanical sort of n-body problem was almost identical to a quantum intra-atomic version, though.
Exactly why I think you've made a good point. I need to look at the approximation and see if it's possible. If it has 10^24 derivatives to get chemical accuracy, and scales poorly with respect to n, then it's probably not useful in practice, but the argument you make here explicitly is exactly the argument I understood implicitly from your previous post.
If nothing else, it's an interesting example of a data/computation tradeoff.
(To expand for people not following: in the OP, he claims that an algorithm/AI which wants to design effective MNT must deal with problems equivalent to the n-body problem; however, since there is no solution to the n-body problem, it must use approximations; but by the nature of approximations, it's hard to know whether one has made a mistake, one wants experimental data confirming the accuracy of the approximation in the areas one wants to use it; hence an AI must engage in possibly a great deal of experimentation before it could hope to even design MNT. I pointed out that there is a proven exact solution to the n-body problem contrary to popular belief; however, this solution is itself extremely inefficient and one would never design using it; but since this solution is perfect, it does mean that a few chosen calculations of it can replace the experimental data one is using to test approximations. This means that in theory, with enough computing power, an AI could come up with efficient approximations for the n-body problem and get on with all the other tasks involved in designing MNT without ever running experiments. Of course, whether any of this matters in practice depends on how much experimenting or how much computing power you think is available in realistic scenarios and how wedded you are to a particular hard-takeoff-using-MNT scenario; if you're willing to allow years for takeoff, obviously both experimentation and computing power are much more abundant.)
↑ comment by leplen · 2013-06-29T20:05:24.592Z · LW(p) · GW(p)
Alright, I will take your word for it. I had never seen anyone say that the classical Newtonian-mechanical sort of n-body problem was almost identical to a quantum intra-atomic version, though.
There are differences and complications because of things like Uncertainty, magnetism, and the Pauli exclusion principle, but to first order the dominant effect on an individual atomic particle is the Coulomb force and the form of that is identical to the Gravitational force. The symmetry in the force laws may be more obvious than the Hamiltonian formulation I gave before.
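Written out (standard textbook forms), the parallel is:

\[
\mathbf{F}_{\mathrm{grav}} = -\,G\,\frac{m_1 m_2}{r^2}\,\hat{\mathbf{r}},
\qquad
\mathbf{F}_{\mathrm{Coulomb}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}\,\hat{\mathbf{r}},
\]

both inverse-square pair forces; for two electrons \(q_1 q_2 > 0\), which is the sign flip (repulsion rather than attraction) mentioned above.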
The particularly interesting point is that even without doing any quantum mechanics at all, even if atomic bonding were only a consequence of classical electrostatic forces, we still wouldn't be able to solve the problem. The difficulty generated by the n-body problem is in many ways much greater than the difficulty generated by quantum mechanics.
Also, nice summary.
↑ comment by DanielVarga · 2013-07-02T10:57:36.440Z · LW(p) · GW(p)
I am not a physicist, but this stack exchange answer seems to disagree with your assessment: What are the primary obstacles to solve the many-body problem in quantum mechanics?
↑ comment by leplen · 2013-07-02T14:10:15.026Z · LW(p) · GW(p)
This is sort of true. The fact that it turns into the n-body problem prevents us from being able to do quantum mechanics analytically. Once we're stuck doing it numerically, then all the issues of sampling density of the wave function et al. crop up, and they make it very difficult to solve numerically.
Thanks for pointing this out. These numerical difficulties are also a big part of the problem, albeit less accessible to people who aren't comfortable with the concept of high-dimensional Hilbert spaces. A friend of mine had a really nice write-up in his thesis on this difficulty. I'll see if I can dig it up.
↑ comment by GeraldMonroe · 2013-06-30T07:09:04.190Z · LW(p) · GW(p)
Why do we have to solve it? In his latest book, he states that he calculates you can get the thermal noise down to 1/10 the diameter of a carbon atom or less if you use stiff enough components.
Furthermore, you can solve it empirically. Just build a piece of machinery that tries to accomplish a given task, and measure its success rate. Systematically tweak the design and measure the performance of each variant. Eventually, you find a design that meets spec. That's how chemists do it today, actually.
Edit : to the -1, here's a link where a certain chemist that many know is doing exactly this : http://pipeline.corante.com/archives/2013/06/27/sealed_up_and_ready_to_go.php
↑ comment by Desrtopa · 2013-06-29T19:01:03.519Z · LW(p) · GW(p)
and computers are known to be bad at derivatives
Can you expand on this?
A strong AI should be better than humans at pretty much every facet of reasoning, essentially as a starting premise. It's not like humans aren't computers, we're just wetware computers built very differently from our own current technology. "As good as the best humans" should be the absolute floor if we're positing the abilities of an optimally designed computer.
↑ comment by leplen · 2013-06-29T19:39:16.840Z · LW(p) · GW(p)
Humans are also bad at numerical derivatives. Derivatives are really messy when we don't have a closed analytical form for the function f'. Basically the problem is that the derivative formula f'(x) ≈ (f(x+h) − f(x))/h
involves subtracting nearly equal numbers and then dividing by almost zero. Both of these things destroy numerical accuracy very very quickly, because it takes very tiny errors and turns them into very large numbers. As long as the solution to the n-body problem is expressed in terms of a differential Taylor series without analytic components, it's going to be very very difficult to solve accurately.
For practical problems, where we don't know the initial state of the system to infinite accuracy, this is a big problem. It also forces you to use lots and lots of memory storing all your numbers accurately, because you burn through that accuracy really quickly.
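A quick numerical sketch of that accuracy loss, using sin(x) so the exact derivative is known: shrinking h first helps (truncation error falls) and then hurts (round-off from subtracting nearly equal numbers takes over).

```python
import math

x = 1.0
exact = math.cos(x)                       # exact derivative of sin at x = 1
for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(f"h = {h:.0e}: error = {abs(approx - exact):.2e}")
```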
↑ comment by roystgnr · 2013-06-30T07:14:02.720Z · LW(p) · GW(p)
Side note - finite differencing (which, you're right, typically throws away half of your precision) isn't the only way to get a computer to take a derivative. Automatic differentiation packages will typically get you the derivative of an explicitly defined function to roughly the accuracy with which you can evaluate the function itself.
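For anyone who hasn't seen it, a minimal sketch of the dual-number idea behind forward-mode automatic differentiation (a toy version; real packages handle many more operations): the derivative is propagated exactly through every arithmetic step, so there is no small-h subtraction at all.

```python
class Dual:
    """Carries a value together with its derivative with respect to x."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x            # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

result = f(Dual(1.5, 1.0))                # seed dx/dx = 1
print(result.value, result.deriv)         # 6.375 and 8.75, both to full precision
```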
I'm not familiar with the n-body problem series solution, though; there's lots of other ways that could turn out to be impractical to evaluate.
comment by Paul Crowley (ciphergoth) · 2013-06-29T19:04:03.898Z · LW(p) · GW(p)
Drexler's latest book goes to great lengths to discuss the "Modelling is hard" problem. The key insight is that we do not need a means to model an arbitrary system; we only need to be able to model some systems, such that there's an overlap with useful systems that we can build. And the models don't have to be perfectly accurate; it suffices if the tolerances built into the design are enough to cover the known inaccuracies of the model.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-29T22:04:07.822Z · LW(p) · GW(p)
Which one's the latest book?
↑ comment by darius · 2013-06-29T22:22:08.572Z · LW(p) · GW(p)
Radical Abundance, which came out this past month.
Added: The most relevant things in the book for this post (which I've only skimmed):
There's been lots of progress in molecular-scale engineering and science that isn't called nanotechnology. This progress has been pretty much along the lines Drexler sketched in his 1981 paper and in the how-can-we-get-there sections of Nanosystems, though. This matches what I saw sitting in on Caltech courses in biomolecular engineering last year. Drexler believes the biggest remaining holdup on the engineering work is how it's organized: when diverse scientists study nature their work adds up because nature is a whole, but when they work on bits and pieces of technology infrastructure in the same way, their work can't be expected to coalesce on its own into useful systems.
He gives his latest refinement of the arguments at a lay level.
comment by Manfred · 2013-06-29T08:05:41.933Z · LW(p) · GW(p)
4a. False. Flexible diamond doesn't make any sense. Diamond is sp3-bonded carbon, and those bonds are highly directional. They're not going to flex. Metals are flexible because metallic bonds, unlike covalent bonds, don't confine the electrons in space. Whatever this purported carbon fiber is, it either won't be flexible, or it won't be diamond.
Well you can bend diamond, you just need to be fast about it, like 310 kHz fast. But yeah, "thread" it is not.
Proteins can make and break diamond bonds
Well, this sounds a bit less outlandish if rephrased: "Proteins can make and break carbon-carbon sigma-bonds." Which of course happens all the time in the course of making organic molecules.
For making diamond specifically, you might try starting out with formaldehyde, and releasing water as you stick carbons together. Looks like there should be plenty of spare energy to do it. Hm, but that seems very tricky to do, I wonder how people do it with synthetic diamond. Ah. Very high temperature carbon radicals sticking to a lattice-matched substrate.
comment by GeraldMonroe · 2013-06-30T06:10:29.889Z · LW(p) · GW(p)
From reading Radical Abundance:
Drexler believes that not only are stable gears possible, but that every component of a modern, macroscale assembly line can be shrunk to the nanoscale. He believes this because his calculations, and some experiments, show that this works.
He believes that " Nanomachines made of stiff materials can be engineered to employ familiar kinds of moving parts, using bearings that slide, gears that mesh, and springs that stretch and compress (along with latching mechanisms, planetary gears, constant-speed couplings, four-bar linkages, chain drives, conveyor belts . . .)."
The power to do this comes from 2 sources. First of all, the "feedstock" to a nanoassembly factory always consists of the element in question bonded to other atoms, such that it's an energetically favorable reaction to bond that element to something else. Specifically, if you were building up a part made of covalently bonded carbon (diamond), the atomic intermediate proposed by Drexler is carbon dimers ( C---C ). See http://e-drexler.com/d/05/00/DC10C-mechanosynthesis.pdf
Carbon dimers are unstable, and the carbon in question would rather bond to "graphene-, nanotube-, and diamond-like solids"
The paper I linked shows a proposed tool.
Second, electrostatic motors would be powered by plain old DC current. These would be the driving energy to turn all the mechanical components of an MNT assembly system. Here's the first example of someone getting one to work that I found by googling: http://www.nanowerk.com/spotlight/spotid=19251.php
The control circuitry and sensors for the equipment would be powered the same way.
An actual MNT factory would work like the following. A tool-tip like the one in the paper I linked would be part of just one machine inside this factory. The factory would have hundreds or thousands of separate "assembly lines" that would each pass molecules from station to station, and at each station a single step is performed on the molecule. Once the molecules are "finished", these assembly lines will converge onto assembly stations. These "assembly stations" are dealing with molecules that now have hundreds of atoms in them. Nanoscale robot arms (notice we've already gone up 100x in scale; the robot arms are therefore much bigger and thicker than in the previous steps, and are integrated systems that have guidance circuitry, sensors, and everything you see in large industrial robots today) grab parts from assembly lines and place them into larger assemblies. These larger assemblies move down bigger assembly lines, with parts from hundreds of smaller sub-lines being added to them.
There's several more increases in scale, with the parts growing larger and larger. Some of these steps are programmable. The robots will follow a pattern that can be changed, so what they produce varies. However, the base assembly lines will not be programmable.
In principle, this kind of "assembly line" could produce entire sub-assemblies that are identical to the sub assemblies in this nanoscale factory. Microscale robot arms would grab these sub-assemblies and slot them into place to produce "expansion wings" of the same nanoscale factory, or produce a whole new one.
This is also how the technology would be able to produce things that it cannot already make. When the technology is mature, if someone loads a blueprint into a working MNT replication system, and that blueprint requires parts that the current system cannot manufacture, the system would be able to look up in a library the blueprints for the assembly line that does produce those parts, and automatically translate library instructions to instructions the robots in the factory will follow. Basically, before it could produce the product someone ordered, it would have to build another small factory that can produce the product. A mature, fully developed system is only a "universal replicator" because it can produce the machinery to produce the machinery to make anything.
Please note that this is many, many, many generations of technology away. I'm describing a factory the size and complexity of the biggest factories in the world today, and the "tool tip" that is described in the paper I linked is just one teensy part that might theoretically go onto the tip of one of the smallest and simplest machines in that factory.
Also note that this kind of factory must be in a perfect vacuum. The tiniest contaminant will gum it up and it will seize up.
Another constraint to note is this. In Nanosystems, Drexler computes that the speed of motion for a system that is 10 million times smaller is in fact 10 million times faster. There's a bunch of math to justify this, but basically, scale matters, and for a mechanical system, the operating rate would scale accordingly. Biological enzymes are about this quick.
This means that an MNT factory, if it used convergent assembly, could produce large, macroscale products at 10 million times the rate that a current factory can produce them. Or it could, if every single bonding step that forms a stable bond from unstable intermediates didn't release heat. That heat production is what Drexler thinks will act to "throttle" MNT factories, such that the rate you can get heat out will determine how fast the factory will run. Yes, water cooling was proposed :)
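A crude sketch of that heat-throttle argument (every number here is an assumption for illustration, not a figure from Nanosystems): even a modest amount of excess bonding energy per atom adds up to a serious cooling problem if you want macroscale throughput.

```python
AVOGADRO = 6.022e23
EV_TO_J = 1.602e-19

atoms_per_kg_carbon = 1.0 / 0.012 * AVOGADRO     # ~5e25 atoms in 1 kg of carbon
heat_per_atom_ev = 1.0                           # assumed net heat per atom placed
total_heat_j = atoms_per_kg_carbon * heat_per_atom_ev * EV_TO_J   # ~8 MJ per kg

for build_time_s in (3600.0, 60.0, 1.0):
    print(f"build 1 kg of diamondoid in {build_time_s:>6.0f} s -> "
          f"average heat load ~{total_heat_j / build_time_s / 1e3:.0f} kW")
```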
One final note: biological proteins are only being investigated as a bootstrap. The eventual goal will use no biological components at all, and will not resemble biology in any way. You can mentally compare it to how silk and wood were used to make the first airplanes.
comment by timtyler · 2013-06-29T21:41:18.445Z · LW(p) · GW(p)
It's very unclear in most of the discussions I read about these Nanofactories what's going to power them. What synthetic equivalent of ATP is going to allow us to out-compete the ribosome?
Er, like most factories, nanofactories will probably run off the grid.
comment by pcm · 2013-06-29T19:25:09.087Z · LW(p) · GW(p)
This page has links to 3 of Drexler's designs with pdb files. Can you simulate those?
Building those would require tools that are quite different from what we have now.
↑ comment by leplen · 2013-06-29T20:15:54.811Z · LW(p) · GW(p)
I'll try. 6,000 atoms is an order of magnitude or two more than a typical simulation for me, but I'll make some approximations and see what I can do. Is there a specific property you're interested in? Just metastability? The types of approximations that are valid depend strongly on what you want to know about your system.
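For a sense of scale, a back-of-the-envelope sketch of what that jump in system size means for a mean-field quantum calculation (the per-atom basis size is an assumed round number):

```python
n_atoms = 6000
basis_funcs_per_atom = 15        # assumed, modest Gaussian basis
bytes_per_float = 8

n_basis = n_atoms * basis_funcs_per_atom
matrix_gb = n_basis ** 2 * bytes_per_float / 1e9
print(f"basis functions: {n_basis:,}")            # 90,000
print(f"one dense matrix: {matrix_gb:.0f} GB")    # ~65 GB
# Diagonalization scales ~O(N^3): roughly 7e14 operations per iteration,
# which is why cheaper approximations or classical force fields get used.
```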
comment by Circusfacialdisc · 2013-06-30T12:22:12.381Z · LW(p) · GW(p)
Concerning manipulation of diamond by biological molecules, what exactly is this?
(Not trying to make a point here; I am actively deferring to someone with more chemistry mojo than I have to explain this)
↑ comment by [deleted] · 2013-07-02T03:56:00.572Z · LW(p) · GW(p)
I did a quick little bit of searching and following back of citations, may have time to do so somewhere with easier access to paywalled journals tomorrow and if I misstated anything I will edit.
This protein SP1 (Stable Protein 1, originally from Aspen trees, a 12-part ring that is so stable it goes through boiling intact, hence the name) forms hexagonal rings, and as long as you don't mess with the parts that hold the hexagon together you can tack on other interesting things to the interior of the rings and the exterior. Keep hydrophobic patches around the exterior and you can get it to arrange itself into regular lattices. Keep the inside hydrophobic and you can get it to grab onto and encircle other hydrophobic particles of the proper size in solution.
Here they mixed the protein with protein-sized diamond particles, the rings grabbed those particles that were the proper size, and arranged themselves into flat extremely regular hexagonal arrays carrying the diamond particles along for the ride.
↑ comment by Circusfacialdisc · 2013-07-04T07:17:05.223Z · LW(p) · GW(p)
Ah, thank you. So the structure left at the end was not by any means a solid diamond.
comment by Desrtopa · 2013-06-29T16:16:49.737Z · LW(p) · GW(p)
4a. False. Flexible diamond doesn't make any sense. Diamond is sp3-bonded carbon, and those bonds are highly directional. They're not going to flex. Metals are flexible because metallic bonds, unlike covalent bonds, don't confine the electrons in space. Whatever this purported carbon fiber is, it either won't be flexible, or it won't be diamond.
"Diamond fiber" is a common descriptor for carbon nanotubes, which are indeed flexible. At least as of this year, it's become possible for us to make them on a macroscopic scale.
↑ comment by leplen · 2013-06-29T17:42:50.654Z · LW(p) · GW(p)
But a CNT isn't diamond. Not at all! It's got carbon in it, but CNTs are sp2 bonded, not sp3 bonded. This isn't a diamond fiber any more than the inside of a pencil is diamond fiber. The properties you would expect out of a CNT are radically different from those you would expect out of a diamond fiber.
If we want to replace diamond fiber with CNT I can try and work through the implications of that, but they're completely different physical systems.
↑ comment by Baughn · 2013-07-01T14:59:49.264Z · LW(p) · GW(p)
Speaking as a layman, I would expect he's thinking of some metamaterial made up partially of diamond, and partially of.. not-diamond. Diamond chainmail, to name a rather naive idea.
Is it possible to transition from sp3 to sp2 in a single crystal?
What, if you can place atoms arbitrarily, is the most useful thing you could do to make "flexible diamond clothing"? Ignoring the "diamond" part, just assume it's mostly covalently bonded carbon of some form or other. Would putting nano-scale electrostatic motors and flex sensors in there help?
comment by Desrtopa · 2013-06-29T16:10:07.884Z · LW(p) · GW(p)
For the "modelling is hard" issue, I think it's worth considering that cases like solving complex instantiations of the Schrodinger are probably an area in which we could reasonably predict a superintelligent AI would have a major advantage over humans.
Unlike the laws-of-physics considerations, which are invariant no matter who's doing the engineering, the difficulty of modelling is largely dependent on the capabilities of the modeller.
Replies from: Kaj_Sotala, leplen↑ comment by Kaj_Sotala · 2013-06-29T18:42:08.431Z · LW(p) · GW(p)
Unlike the laws-of-physics considerations, which are invariant regardless of who's doing the engineering, the difficulty of modelling is largely dependent on the capabilities of the modeller.
You can generalize this claim to apply to any phenomenon. What makes you say that this area in particular is one where an AI would have an advantage?
Replies from: Desrtopa↑ comment by Desrtopa · 2013-06-29T18:55:52.112Z · LW(p) · GW(p)
I would think creating accurate models of natural law and extrapolating the consequences would be a pretty obvious example of a task where the intelligence of the agent is highly relevant.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-06-29T19:54:57.301Z · LW(p) · GW(p)
The intelligence of the agent is highly relevant, yes, but showing that AGI would have a major advantage in this area in particular would require showing that there are plausible ways of attacking the domain which humans are currently incapable of exploiting but which an AGI would be capable of exploiting.
Off the top of my head, I would expect that an AGI might have a larger relative advantage in something like social science, where hypothesis testing requires making sense of huge datasets and correctly interpreting complex statistical relationships and causal chains and integrating them with the researcher's existing knowledge - something that humans are quite bad at, due to not having evolved for the task. (Though evolution has spent a long while optimizing humans to have an intuitive understanding of the motives of other humans, one which might take a long while for an AGI to catch up with - but then, that understanding can also be a disadvantage in evaluating correct but counter-intuitive hypotheses.) In contrast, something well-understood like physics seems like a much likelier candidate for a field where you can't come up with any major intelligence enhancement technique that computer-aided humans couldn't exploit equally well.
"The more reliable a field's current predictions, the less likely an AGI is to have a relative advantage" seems like a potentially useful heuristic - having a field of science consistently make correct predictions is a sign of our evolved cognitive faculties already either being relatively good at it or at least effectively aided by computerized tools, suggesting that there is less room for improvement than in other fields. Physics is possibly the most reliable field of science there is, which would suggest that the biggest AGI advantages would lie in other fields.
Of course an AGI could still have a big advantage in physics due to general considerations such as thinking faster, instances of it coordinating better among themselves, etc., but those considerations would apply equally to all fields of science, not just physics. It doesn't seem impossible that an AGI wouldn't have any qualitative advantages over humans when it came to physics.
Replies from: NancyLebovitz, Armok_GoB↑ comment by NancyLebovitz · 2013-06-29T23:26:49.762Z · LW(p) · GW(p)
People might have a temporary but significant advantage in avoiding disastrous mistakes in dealing with other people.
Building from experiments in MNT has got to be easier for an AI than building from experiments with people.
I'm not even sure that social science covers how to not make bad mistakes in fraught political situations.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-06-30T10:14:30.580Z · LW(p) · GW(p)
Yes, physics experiments are easier to interpret than social experiments, but (as you yourself point out), the current state of social science shows that this is also the case when humans are doing the experimenting.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-06-30T20:05:14.798Z · LW(p) · GW(p)
No, I'm not talking about the interpretation of experiments so much as the risks while learning. People grow up (if at all fortunate) with the chance to do a lot of low-stakes attempts while dealing with other people. Even so, very few end up as skilled politicians.
If the AI needs to be able to navigate complex negotiations and signalling, it isn't going to start with the benefit of a child's slack for learning. If it needs to practice with people rather than simulations (I'm guessing it will), it could take years to build up to super-negotiator.
I could be wrong. Perhaps the AI can build on algorithms which aren't obvious to people, and/or use improved sensory abilities, and/or be able to capitalize on a huge bank of information.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2013-06-30T20:27:10.310Z · LW(p) · GW(p)
The Internet provides plenty of opportunities for anonymous interaction with people, perfect for running safe experiments. And the amount of raw information that you could process before ever needing to run your own experiments is just enormous - here I'm not talking about the scientific papers, but all the forum threads, mailing list archives, chatlogs, etc. that exist online. These not only demonstrate how online social interaction works, but also contain plenty of data where people report on and analyze their various real-life social interactions ("hey guys, the funniest thing happened today...", "I was so creeped out on my ride home", "I met the most charming person"). That is just an insane corpus of what works and what all the different failure modes are. I would expect it would take a long time before one could come up with hypotheses that weren't in principle better answered by searching that corpus than by running one's own experiments.
Of course, that is highly unstructured data, and it might be quite hard to effectively categorize it in a way that allows for efficient searches. So doing your own experiments might still be more effective. But again, plenty of anonymous forums exist for that.
↑ comment by leplen · 2013-06-29T17:58:28.023Z · LW(p) · GW(p)
Maybe. I'm still really worried about the sheer number of particles and degrees of freedom. An AI may be able to count to infinity much faster than I can, but it's not going to get there any sooner. It's not clear that the Schrödinger equation is even solvable for systems on that scale.
Unlike the laws-of-physics considerations, which are invariant regardless of who's doing the engineering, the difficulty of modelling is largely dependent on the capabilities of the modeller.
It's not clear to me that there's a difference? The laws of physics dictate the model.
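To put rough numbers on the degrees-of-freedom worry (my own back-of-envelope figures, nothing from Drexler): even just storing a brute-force many-body wavefunction blows up exponentially with particle count, before you ever try to solve anything.

```python
# Toy estimate of why brute-force quantum modelling doesn't scale: with k
# basis states per particle, the product Hilbert space has k^N dimensions,
# so the memory needed just to hold one state vector grows exponentially in N.
# The numbers are purely illustrative, not tied to any particular system.

BYTES_PER_AMPLITUDE = 16  # one complex double-precision number

def state_vector_bytes(n_particles, states_per_particle=10):
    """Memory to store a dense wavefunction in the naive product basis."""
    dimension = states_per_particle ** n_particles
    return dimension * BYTES_PER_AMPLITUDE

for n in (2, 10, 50, 100):
    print(f"{n:>3} particles -> ~{float(state_vector_bytes(n)):.1e} bytes")
```

Approximate methods dodge this, of course, but only by giving up some of the accuracy you would presumably want if you're trying to place individual atoms.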
comment by falenas108 · 2013-06-29T14:18:21.783Z · LW(p) · GW(p)
A lot of these arguments (but not all) boil down to, "This is theoretically possible, but we don't have the technology to do this right now." It seems to me that by the time an AI is developed, some combination of much higher levels of technology and much more processing power will be available, so this isn't as much of an issue.
All of these tiny machines are repeatedly described as programmable, but that doesn't make any sense. What programs are they capable of accepting or executing? What set of instructions can a collection of 50 carbon atoms accept and execute? How are these instructions being delivered?
I think he means programmable in the exact same way that DNA is programmable, that you can specify which amino acids you want and get a matching output.
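As a toy sketch of what that sort of programmability amounts to (my own illustration, not anything from Drexler or the post): the "program" is a codon sequence and the "output" is the corresponding chain of amino acids.

```python
# Minimal sketch of DNA-style "programmability": the program is a codon
# sequence, the output is the amino acid chain it specifies.  Only a handful
# of entries from the standard genetic code are included here for brevity.

CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TTT": "Phe",
    "GGC": "Gly",
    "AAA": "Lys",
    "TGC": "Cys",
    "TAA": "STOP",
}

def translate(dna):
    """Read the DNA string three bases at a time and return amino acids."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("ATGTTTGGCAAATGCTAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Cys']
```

The point is that the "instructions" are just a linear sequence read off by existing machinery (ribosomes), not a general-purpose instruction set that a cluster of 50 atoms could execute.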
comment by Nornagest · 2013-06-29T07:27:18.233Z · LW(p) · GW(p)
I haven't been closely following this debate, and didn't figure out what you meant by MNT until the first time you mentioned Drexler. I suggest spelling it out explicitly at least in the title, for the benefit of people reading the topic list; I'd do title and first mention in the body text, myself.
ETA: Done now. Thanks.
comment by Armok_GoB · 2013-07-03T00:03:57.146Z · LW(p) · GW(p)
How I feel (not actually think) right now: You just attacked an awful strawman of my beliefs! Except that strawman is not only a real person, but someone I considered a genius and a world-leading expert on the subject before learning his exact claims just now. Darn.
comment by roystgnr · 2013-06-30T06:42:49.422Z · LW(p) · GW(p)
"Flexible diamond doesn't make any sense."
Diamond must deform elastically under some forces, because the other two alternatives are "deforms inelastically under all forces" (i.e. not diamond) and "behaves as a rigid body under some forces" (i.e. not anything that obeys relativity).
Whether there can be a useful degree of flex, admittedly, I have no idea.
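One crude way to put numbers on it (a standard beam-bending approximation, with an assumed ~1% elastic strain limit for diamond - my figure, not Drexler's): the peak strain in a bent fiber is roughly its thickness divided by twice the bend radius, so thin enough fibers can curl quite tightly without ever leaving the elastic regime.

```python
# Back-of-envelope estimate: how tightly can a thin diamond fiber bend before
# its surface strain exceeds an assumed elastic limit?  For a fiber of
# diameter d bent to radius R, peak strain ~ d / (2 R), so the minimum safe
# bend radius is d / (2 * strain_limit).  The 1% strain limit is my own
# conservative assumption.

STRAIN_LIMIT = 0.01  # ~1% elastic strain

def min_bend_radius_m(fiber_diameter_m, strain_limit=STRAIN_LIMIT):
    """Smallest bend radius that keeps peak strain under the elastic limit."""
    return fiber_diameter_m / (2.0 * strain_limit)

for d_nm in (10, 100, 1000):
    r = min_bend_radius_m(d_nm * 1e-9)
    print(f"{d_nm:>5} nm fiber -> minimum bend radius ~ {r * 1e6:.1f} micrometres")
```

On that estimate a fiber a few hundred nanometres thick could bend to radii of tens of micrometres, so a textile woven from such fibers could feel flexible at human scales even though each strand is locally very stiff - whether that deserves to be called "flexible diamond" is a separate argument.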
comment by DanielLC · 2013-06-29T18:56:26.236Z · LW(p) · GW(p)
Modelling is hard
It's hard without a quantum computer. That is also hard, but in a different way. Does anyone know how likely it is that we'll ever have quantum computers good enough for this?
Chaos
The issue isn't trying to control it in the face of chaos. It's to make the system stable, or at least make sure the attractor is small enough that it is, for all intents and purposes, stable. You have to eliminate all the degrees of freedom, or at least all the ones that actually matter. You might not care where your factory went off to so long as it keeps running.
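As a toy picture of the distinction (the logistic map, nothing MNT-specific): for some parameter values every nearby trajectory collapses onto the same fixed point, so small perturbations simply don't matter, while for others two trajectories that start out nearly identical end up nowhere near each other.

```python
# Toy illustration of "make the attractor small enough": for r = 2.9 the
# logistic map has a stable fixed point and nearby trajectories converge to
# it, while for r = 3.9 it is chaotic and two starts 1e-9 apart diverge.

def logistic_endpoint(r, x0, steps=200):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

for r in (2.9, 3.9):
    a = logistic_endpoint(r, 0.500000000)
    b = logistic_endpoint(r, 0.500000001)
    print(f"r = {r}: endpoints {a:.6f} and {b:.6f}, separation {abs(a - b):.2e}")
```

The engineering question is whether you can design the chemistry so that it lands in the first regime for the degrees of freedom you actually care about.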
4a. False. Flexible diamond doesn't make any sense.
Pretend that they said carbon nanotubes, and that they described the construction of carbon nanotubes instead of diamond.
What set of instructions can a collection of 50 carbon atoms accept and execute? How are these instructions being delivered?
Either you make it a lot bigger than 50 carbon atoms, with those 50 atoms being just the part that actually moves things, or you don't bother making it programmable and you just send the instructions as needed.
As for how you could make a computer that small, running a Turing machine with a DNA molecule as the tape comes to mind. It's much bigger than 50 carbon atoms, but you could still make the actual factory part that small, so long as you don't mind it moving around as it computes. There's likely a better computer design. I don't really know much about this stuff.
If you can move individual atoms around, you can build an atomic computer that computes by moving individual atoms around.
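Just to show how little of the difficulty is in the logic itself, here is a generic simulator for that sort of machine (my own sketch; the hard part is obviously building molecular hardware that performs the read/write/move steps, not writing the control table).

```python
# Generic Turing machine simulator.  The example program scans right along
# the tape, inverting every bit until it reaches a blank cell, then halts.

from collections import defaultdict

# (state, symbol_read) -> (symbol_to_write, head_move, next_state)
PROGRAM = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", " "): (" ", 0, "halt"),
}

def run(tape_string, program=PROGRAM, state="scan", max_steps=10_000):
    tape = defaultdict(lambda: " ", enumerate(tape_string))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol, move, state = program[(state, tape[head])]
        tape[head] = symbol
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).rstrip()

print(run("010011"))  # -> "101100"
```

A DNA strand would play the role of the tape, and whatever reads and rewrites bases would play the role of the head; the control table above is the only part that comes cheap.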