Failure By Analogy
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T02:54:06.000Z
Previously in series: Logical or Connectionist AI?
Followup to: Surface Analogies and Deep Causes
"One of [the Middle Ages'] characteristics was that 'reasoning by analogy' was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
-- Edsger W. Dijkstra

<geoff> neural nets are over-rated
<starglider> Their potential is overrated.
<geoff> their potential is us
-- #sl4
Wasn't it in some sense reasonable to have high hopes of neural networks? After all, they're just like the human brain, which is also massively parallel, distributed, asynchronous, and -
Hold on. Why not analogize to an earthworm's brain, instead of a human's?
A backprop network with sigmoid units... actually doesn't much resemble biology at all. Around as much as a voodoo doll resembles its victim. The surface shape looks vaguely similar at first glance, but the interiors, the behaviors, and basically everything apart from the surface are nothing alike. All that biological neurons have in common with gradient-optimization ANNs is... the spiderwebby look.
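To make the dissimilarity concrete, here is a minimal sketch (my illustration, not anything from the original post) of what one sigmoid unit under gradient descent actually is: a weighted sum, a squashing function, and a chain-rule update. Nothing in it corresponds to spike timing, neurotransmitters, or dendritic geometry.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit_output(weights, inputs, bias):
    # One "neuron" of a backprop network: a weighted sum, squashed.
    return sigmoid(np.dot(weights, inputs) + bias)

def train_step(weights, bias, inputs, target, lr=0.1):
    # One gradient-descent step on squared error for one training pair.
    # (weights and inputs are numpy arrays; names are illustrative.)
    out = unit_output(weights, inputs, bias)
    # Chain rule for error = (out - target)^2 / 2, through the sigmoid.
    delta = (out - target) * out * (1.0 - out)
    return weights - lr * delta * inputs, bias - lr * delta
```

Whatever biological neurons are doing, it is not this; all that survives the comparison is the spiderwebby look.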
And who says that the spiderwebby look is the important fact about biology? Maybe the performance of biological brains has nothing to do with being made out of neurons, and everything to do with the cumulative selection pressure put into the design. Just like how the performance of biological brains has little to do with proteins being held together by van der Waals forces, instead of the much stronger covalent bonds that hold together silicon. Sometimes evolution gets stuck with poor building material, and it can't refactor because it can't execute simultaneous changes to migrate the design. If biology does some neat tricks with chemistry, it's because of the greater design pressures exerted by natural selection, not because the building materials are so wonderful.
Maybe neurons are just what brains happen to be made out of, because the blind idiot god is too stupid to sit down and invent transistors. All the modules get made out of neurons because that's all there is, even if the cognitive work would be much better-suited to a 2GHz CPU.
"Early attempts to make flying machines often did things like attaching beak onto the front, or trying to make a wing which would flap like a bird's wing. (This extraordinary persistent idea is found in Leonardo's notebooks and in a textbook on airplane design published in 1911.) It is easy for us to smile at such naivete, but one should realize that it made good sense at the time. What birds did was incredible, and nobody really knew how they did it. It always seemed to involve feathers and flapping. Maybe the beak was critical for stability..."
-- Hayes and Ford, "Turing Test Considered Harmful"
So... why didn't the flapping-wing designs work? Birds flap wings and they fly. The flying machine flaps its wings. Why, oh why, doesn't it fly?
Because... well... it just doesn't. This kind of analogical reasoning is not binding on Reality.
One of the basic tests to apply to reasoning that sounds kinda-good is "How shocked can I justifiably be if Reality comes back and says 'So what'?"
For example: Suppose that, after keeping track of the motions of the planets for centuries, and after confirming the underlying theory (General Relativity) to 14 decimal places, we predict where Mercury will be on July 1st, 2009. So we have a prediction, but that's not the same thing as a fact, right? Anyway, we look up in the sky on July 1st, 2009, and Reality says "So what!" - the planet Mercury has shifted outward to the same orbit as Mars.
In a case like this, I would be highly indignant and would probably sue Reality for breach of contract.
But suppose alternatively that, in the last twenty years, real estate prices have never gone down. You say, "Real estate prices have never gone down - therefore, they won't go down next year!" And next year, Reality says "So what?" It seems to me that you have no right to be shocked. You have used an argument to which Reality can easily say "So what?"
"Nature is the ultimate bigot, because it is obstinately and intolerantly devoted to its own prejudices and absolutely refuses to yield to the most persuasive rationalizations of humans."
-- J. R. Molloy
It's actually pretty hard to find arguments so persuasive that even Reality finds them binding. This is why Science is more difficult - why it's harder to successfully predict reality - than medieval scholars once thought.
One class of persuasive arguments that Reality quite often ignores is the Law of Similarity - that is, the argument that things which look similar ought to behave similarly.
A medieval alchemist puts lemon glazing onto a lump of lead. The lemon glazing is yellow, and gold is yellow. It seems like it ought to work... but the lead obstinately refuses to turn into gold. Reality just comes back and says, "So what? Things can be similar in some aspects without being similar in other aspects."
You should be especially suspicious when someone says, "I am building X, which will do P, because it is similar to Y, which also does P."
An abacus performs addition; and the beads of solder on a circuit board bear a certain surface resemblance to the beads on an abacus. Nonetheless, the circuit board does not perform addition because we can find a surface similarity to the abacus. The Law of Similarity and Contagion is not relevant. The circuit board would work in just the same fashion if every abacus upon Earth vanished in a puff of smoke, or if the beads of an abacus looked nothing like solder. A computer chip is not powered by its similarity to anything else, it just is. It exists in its own right, for its own reasons.
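To make that concrete, here is a sketch (my illustration, not from the original post) of how a digital circuit actually adds: logic gates composed into a ripple-carry adder. No step of it refers to an abacus, and it would compute the same sums if abacuses had never existed.

```python
def full_adder(a, b, carry_in):
    # Add two bits plus an incoming carry; return (sum_bit, carry_out).
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_carry_add(x_bits, y_bits):
    # Add two equal-length bit lists, least significant bit first.
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 (LSB-first: [1, 1, 0]) + 5 ([1, 0, 1]) = 8 ([0, 0, 0, 1])
print(ripple_carry_add([1, 1, 0], [1, 0, 1]))
```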
The Wright Brothers calculated that their plane would fly - before it ever flew - using reasoning that took no account whatsoever of their aircraft's similarity to a bird. They did look at birds (and I have looked at neuroscience) but the final calculations did not mention birds (I am fairly confident in asserting). A working airplane does not fly because it has wings "just like a bird". An airplane flies because it is an airplane, a thing that exists in its own right; and it would fly just as high, no more and no less, if no bird had ever existed.
The general form of failing-by-analogy runs something like this:
- You want property P.
- X has property P.
- You build Y, which has one or two surface similarities S to X.
- You argue that Y resembles X, and that Y should therefore also have property P.
- Yet there is no reasoning which you can do on Y as a thing-in-itself to show that it will have property P, regardless of whether or not X had ever existed.
Analogical reasoning of this type is a very weak form of understanding. Reality often says "So what?" and ignores the argument.
The one comes to us and says: "Calculate how many synaptic transmissions per second take place in the human brain. This is the computing power required for human-equivalent intelligence. Raise enough venture capital to buy a supercomputer which performs the same number of floating-point operations per second. Intelligence is bound to emerge from a machine that powerful."
So you reply: "I'm sorry, I've never seen a human brain and I don't know anything about them. So, without talking about a human brain, can you explain how you calculated that 10^17 floating-point operations per second is the exact amount necessary and sufficient to yield human-equivalent intelligence?"
And the one says: "..."
You ask: "Say, what is this property of 'human-equivalent intelligence' which you expect to get? Can you explain it to me without pointing to a human?"
And the one says: "..."
You ask: "What makes you think that large amounts of computing power have something to do with 'intelligence', anyway? Can you answer without pointing to the example of the human brain? Pretend that I've never seen an 'intelligence' and that I have no reason as yet to believe any such thing can exist."
But you get the idea.
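For reference, the back-of-the-envelope arithmetic behind figures like 10^17 usually runs as follows (each input is a rough estimate read off the human brain, which is exactly the dependence the dialogue above is probing):

$$10^{11}\ \text{neurons} \times 10^{4}\ \frac{\text{synapses}}{\text{neuron}} \times 10^{2}\ \frac{\text{spikes}}{\text{second}} \approx 10^{17}\ \frac{\text{synaptic events}}{\text{second}}$$

The final step, equating one synaptic event to one floating-point operation of "intelligence", is the pure Law-of-Similarity move: nothing in the calculation shows that floating-point operations are the right unit, or that the total is either necessary or sufficient.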
Now imagine that you go to the Wright Brothers and say: "I've never seen a bird. Why does your aircraft have 'wings'? And what is it you mean by 'flying'?"
And the Wright Brothers respond: "Well, by flying, we mean that this big heavy object is going to rise off the ground and move through the air without being supported. Once the plane is moving forward, the wings accelerate air downward, which generates lift that keeps the plane aloft."
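To see what a bird-free calculation looks like, here is a sketch using the standard lift equation, L = ½ρv²SC_L, with rough illustrative numbers for a 1903-era flyer (these are my assumed figures, not the Wrights' actual calculations):

```python
def lift_newtons(air_density, airspeed, wing_area, lift_coefficient):
    # Standard lift equation: L = 0.5 * rho * v^2 * S * C_L
    return 0.5 * air_density * airspeed**2 * wing_area * lift_coefficient

rho = 1.2   # kg/m^3: sea-level air density
v = 13.0    # m/s: assumed takeoff airspeed
S = 47.0    # m^2: assumed wing area
C_L = 0.8   # dimensionless: assumed lift coefficient at takeoff attitude

print(lift_newtons(rho, v, S, C_L))  # ~3800 N, vs. ~3300 N for a ~340 kg craft
```

Every term is a property of the machine and the air; the calculation goes through whether or not a bird has ever existed.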
If two processes have forms that are nearly identical, including internal structure that is similar to as many decimal places as you care to reason about, then you may be able to almost-prove results from one to the other. But if there is even one difference in the internal structure, then any number of other similarities may be rendered void. Two deterministic computations with identical data and identical rules will yield identical outputs. But if a single input bit is flipped from zero to one, the outputs are no longer required to have anything in common. The strength of analogical reasoning can be destroyed by a single perturbation.
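A quick illustration (my example): run the same deterministic rule, SHA-256, on two messages that differ in a single bit. The inputs are identical in every other respect, and the outputs share nothing recognizable.

```python
import hashlib

msg_a = b"the quick brown fox"
msg_b = bytes([msg_a[0] ^ 0x01]) + msg_a[1:]  # flip one bit of the first byte

print(hashlib.sha256(msg_a).hexdigest())
print(hashlib.sha256(msg_b).hexdigest())
# Identical rules, inputs one bit apart: the outputs are unrelated for
# any practical purpose.
```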
Yes, sometimes analogy works. But the more complex and dissimilar the objects are, the less likely it is to work. The narrower the conditions required for success, the less likely it is to work. The more complex the machinery doing the job, the less likely it is to work. The shallower your understanding of the object of the analogy, and the more you are looking at its surface characteristics rather than its deep mechanisms, the less likely analogy is to work.
Analogy might work for something on the order of: "I crossed this river using a fallen log last time, so if I push another log across it, I might be able to cross it again." It doesn't work for creating objects of the order of complexity of, say, a toaster oven. (And hunter-gatherer bands face many rivers to cross, but not many toaster ovens to rewire.)
Admittedly, analogy often works in mathematics - much better than it does in science, in fact. In mathematics you can go back and prove the idea which analogy originally suggested. In mathematics, you get quick feedback about which analogies worked and which analogies didn't, and soon you pick up the pattern. And in mathematics you can always see the entire insides of things; you are not stuck examining the surface of an opaque mystery. Mathematical proposition A may be analogous to mathematical proposition B, which suggests the method; but afterward you can go back and prove A in its own right, regardless of whether or not B is true. In some cases you may need proposition B as a lemma, but certainly not in all cases.
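A standard illustration (mine, not the post's): the finite geometric sum $\sum_{k=0}^{n} r^k = \frac{1 - r^{n+1}}{1 - r}$ suggests, by analogy, the infinite claim $\sum_{k=0}^{\infty} r^k = \frac{1}{1 - r}$ for $|r| < 1$. The analogy only generates the conjecture; the infinite case is then proved in its own right, by showing $r^{n+1} \to 0$ and taking the limit of the partial sums - here the finite formula does serve as a lemma, but the proof, not the resemblance, is what makes the result binding.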
Which is to say: despite the misleading surface similarity, the "analogies" which mathematicians use are not analogous to the "analogies" of alchemists, and you cannot reason from the success of one to the success of the other.
Comments
comment by Robin_Hanson2 · 2008-11-18T03:56:22.000Z
Aren't you just saying "prefer better analogies"? The more the features you are comparing are fundamental rather than surface, and the more similar are the objects in fundamental terms, the more likely they are to have other similar characteristics. When you are trying to make inferences about something, and you have other similar things to consider, you'd be a fool to not look for analogies, as well as a fool to expect too much from them.
comment by derekz2 · 2008-11-18T04:50:42.000Z
Analogies are great guess generators, sources of wondrous creativity. It's very cool that the universe works in such a way that analogy often leads to fruitful ideas, and sometimes to truths.
But determining whether a particular analogy has really led anywhere requires more and different work, because analogy is not a valid form of inference. Nothing is ever true because of an analogy.
comment by M_C · 2008-11-18T06:55:38.000Z
We definitely understand how neurons in several areas of the brain (e.g. retina, auditory cortex) do their information processing. Simulations of these areas did produce expected results. So there is reason to believe that simulating a whole human brain with high enough fidelity would lead to similar results - i.e. intelligence.
A simulated bird with high enough fidelity to physical reality will fly correctly in a simulated environment. The plane/bird argument does not work for precise enough models (physical or simulated).
Now, you could claim that a precise simulation would be overkill in terms of required computing power. That could definitely be the case. However, we will probably have enough readout and simulation capability for a human brain in 15-20 years. A suitable "airplane" (de novo) design has that timeframe to succeed before a brute-force simulation gets there first.
comment by Latanius2 · 2008-11-18T06:56:53.000Z
Analogy might work better for recognizing things already optimized in design space, especially if they are a product of evolution, with common ancestors (4 legs, looks like a lion, so run, even if it has stripes). And we only started designing complicated stuff a few thousand years ago at most...
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-11-18T07:44:02.000Z
I agree that if you were willing to (throw ethics out the window and) run flawed uploads to debug them, then the uploading project would eventually succeed to arbitrary precision - even if you (artificially?) blocked off the possibility of understanding any of the information above a certain level.
Since uploading and fully-understood AGI will both get there "eventually", given the indefinite prolongation of the human idiom of scientific progress, the question is which is likely to get there first (or intelligence enhancement via neurohacking, etc.).
I would also question whether any neural system or circuit (such as e.g. the running robot described earlier) has ever reproduced a useful biological function with the researchers involved not understanding how the higher level works, and just studying the individual biological neural behaviors to very fine accuracy. I doubt it has yet happened, but am willing to be corrected.
comment by g · 2008-11-18T09:50:13.000Z
Reasoning by analogy is at the heart of what has been called "the outside view" as opposed to "the inside view" (in the context of, e.g., trying to work out how long some task is going to take). Eliezer is on record as being an advocate of the outside view. The key question, I think, is how deep are the similarities you're appealing to. Unfortunately, that's often controversial.
(So: I agree with Robin's first comment here.)
comment by tndal · 2008-11-18T14:28:00.000Z
What happened to the **geddis comment (it was the first comment posted and has apparently been dropped)? It claimed all reasoning is analogy and also claimed Bayesian inference is analogy at work.
While phrased somewhat irritatingly, I thought the post was a worthwhile generator of ideas.
comment by Nick_Tarleton · 2008-11-18T15:36:23.000Z
Maybe the beak was critical for stability...
I wonder if the reasoning involved was even that sophisticated, or if it was something closer to cargo-cultism in the root sense: a not-really-conscious attempt to trick the air spirits into granting your creation flight, just as they grant it to birds.
comment by Silas · 2008-11-18T18:08:57.000Z
Nick_Tarleton: I think you're going a bit too far there. Stability control theory had by that time been rigorously and scientifically studied, dating back to Watt's flyball governor in the 18th century (which regulated shaft rotation speed: as the shaft sped up, weighted balls swung outward and throttled the steam valve) and probably even before that with the incubator (which used heat to move a valve that allowed just the right amount of cooling air in). Then all throughout the 19th century engineers attacked the problem of "hunting" on trains, where they would unsettlingly lurch faster and slower. Bicycles, a fairly recent invention then, had to tackle the rotational stability problem, somewhat similar (as many bicycle design constraints are) to what aircraft deal with.
Certainly, many inventors grasped at straws in an attempt to replicate functionality, but the idea that they considered the stability implications of the beak isn't too outlandish.
comment by Scott2 · 2008-11-21T11:35:08.000Z
Analogies are great guess generators<< - derekz
I agree. Eliezer finished off by saying how analogies were often useful in mathematics. I agree, and would say that derekz has pinpointed the reason why. In maths, it's fine to guess so long as you then go and prove it. However, I don't see why analogies can't be used in this manner outside mathematics. I'm happy to follow a line of reasoning that says "Birds have wings that flap - and the flapping is associated with flying. Moreover, that flapping wings would help flight fits in with my view of the universe and its physics (however naive that may be). Hence flapping wings are a good thing to try when designing a flying machine." I would feel no shame following that reasoning, and it appears to me to be "analogous" to reasoning via analogy in mathematics. The key point is that you don't assume your analogy has yielded a correct result. You just use it, as derekz said, as a guess generator.