Making Reasoning Obviously Locally Correct
post by JGWeissman · 2011-03-12T19:41:44.756Z
x = y
x^2 = x*y
x^2 - y^2 = x*y - y^2
(x+y)(x-y) = y(x-y)
x+y = y
y+y = y
2*y = y
2 = 1
The above is an incorrect "proof" that 2 = 1. Even for those who know where the flaw is, it might seem reasonable to react to the existence of this "proof" by distrusting mathematical reasoning, which might contain such flaws that lead to erroneous results. But done properly, mathematical reasoning does not look like this "proof". It is more explicit, making each step so obviously correct that an incorrect step cannot meet the standard. Let's take a look at what happens when we attempt to present this "proof" with that level of explicitness:
The set of real numbers is known to be non-empty, so let y be a real number. Let x = y.
By an axiom of the real numbers, a real number multiplied by a real number is a real number, so x*x and x*y are real numbers, and since x = y, x*x = x*y.α
By the same axiom y*y is a real number. By an axiom y*y has an additive inverse -y*y.
By an axiom, the sum of two real numbers is a real number, so x*x + -y*y = x*y + -y*y.
By axioms defining addition, multiplication, and additive inverse, (x+y)*(x+-y) and y*(x+-y) are real numbers.
By the distributive law of multiplication over addition, (x+y)*(x+-y) = (x+y)*x + (x+y)*(-y) = x*x + y*x + x*(-y) + y*(-y)
By commutativity and associativity of addition and multiplication, and distribution of multiplication over additive inverses,
x*x + y*x + x*(-y) + y*(-y) = x*x + (x*y + -x*y) + -y*y = x*x + -y*y
Similarly y*(x+-y) = y*x + y*(-y) = x*y + -y*y
So, by transitivity of equality, (x+y)*(x+-y) = y*(x+-y)
By an axiom, a real number not equal to zero has a multiplicative inverse. I want x+-y to have a multiplicative inverse, so I want to prove that x+-y is not zero, but I can't, since it is not true. By construction x = y, so x+-y = y+-y, and by the definition of an additive inverse, y+-y = 0. The proof fails.
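Restated compactly (the same failure, in standard notation):

\[
(x+y)(x-y) = y(x-y) \;\Longrightarrow\; x+y = y \quad\text{is valid only if}\quad x-y \neq 0,
\]
\[
\text{but } x = y \text{ gives } x-y = 0.
\]

The cancellation, a multiplication by the inverse of x-y, is exactly the step that cannot be explicitly justified.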
Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid. Similarly, there is a virtue in the art of programming that constrains you from making mistakes in code. In Making Wrong Code Look Wrong, Joel Spolsky describes a solution to the problem of keeping track of which strings are already HTML encoded and ready to be written to a web page, and which strings are unformatted and may have come from a malicious user who wants to inject an evil script into your website (or a normal user who wants to display a message with characters that happen to be HTML control characters):
All strings that come from the user must be stored in variables (or database columns) with a name starting with the prefix "us" (for Unsafe String). All strings that have been HTML encoded or which came from a known-safe location must be stored in variables with a name starting with the prefix "s" (for Safe string).
Let me rewrite that same code, changing nothing but the variable names to match our new convention.
us = Request("name")

...pages later...

usName = us

...pages later...

recordset("usName") = usName

...days later...

sName = Encode(recordset("usName"))

...pages or even months later...

Write sName

Every line of code can be inspected by itself, and if every line of code is correct, the entire body of code is correct.
The thing I want you to notice about the new convention is that now, if you make a mistake with an unsafe string, you can always see it on some single line of code, as long as the coding convention is adhered to:
s = Request("name")
is a priori wrong, because you see the result of Request being assigned to a variable whose name begins with s, which is against the rules. The result of Request is always unsafe so it must always be assigned to a variable whose name begins with “us”.
us = Request("name")
is always OK.
usName = us
is always OK.
sName = us
is certainly wrong.
sName = Encode(us)
is certainly correct.
Write usName
is certainly wrong.
Write sName
is OK, as is
Write Encode(usName)
Just like the single step in the attempted mathematical proof that can't be explicitly justified, a single line of code violating Spolsky's convention is obviously wrong. In both cases, it is possible to be sufficiently explicit and detailed that an automated process can verify correctness: a proof verifier for mathematics, and compilers and static analysis tools for computer programs.
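For a concrete taste of that, here is a minimal sketch in Python (the names are hypothetical, not the post's original ASP code) of pushing Spolsky's convention into a form a type checker such as mypy can enforce:

import html

class SafeString:
    """A string that has been HTML-encoded and is safe to write to a page."""
    def __init__(self, value: str) -> None:
        self.value = value

def encode(unsafe: str) -> SafeString:
    # Analogue of Encode(): HTML-escape the raw string and bless the result.
    return SafeString(html.escape(unsafe))

def write(safe: SafeString) -> None:
    # Analogue of Write: accepts only SafeString.
    print(safe.value)

us_name = "<script>alert('evil')</script>"  # raw user input is a plain str
write(encode(us_name))  # OK
# write(us_name)        # a type checker flags this single line as wrong

Here write(us_name) is the analogue of "Write usName is certainly wrong": the mistake is visible on a single line, and mechanically so.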
What about domains where everything is not so clearly defined, where you have to deal with uncertainty? Well, first of all, probability theory is a mathematical description of uncertainty. You can be rigorous in keeping track of the ideal amount of uncertainty for a particular claim, though that is a lot of work, and may be worthwhile only for beliefs that inform important decisions. But even in your everyday reasoning, beware local mistakes. Just as Spolsky would advise you to scream "Error!" whenever an unsafe string is stored in a variable meant for safe strings, even if that unsafe string never ends up written to a web page (until someone makes some locally correct modifications), don't wait for a global failure to hit you over the head with the wrongness of your local mistake. Nip the problem in the bud, back up, and do it right. And try to have a simply verified concept of locally correct that still adds up to globally correct.
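As a minimal sketch of that bookkeeping (the numbers are illustrative only), Bayes' theorem in odds form makes each update a single, locally checkable multiplication:

def bayes_update(prior_odds: float, likelihood_ratio: float) -> float:
    # Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
    return prior_odds * likelihood_ratio

# Prior odds of 1:4 for a claim, then evidence 3 times as likely
# if the claim is true as if it is false.
posterior_odds = bayes_update(0.25, 3.0)                       # 0.75, i.e. 3:4
posterior_probability = posterior_odds / (1 + posterior_odds)  # about 0.43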
α. I am being extra explicit here, avoiding common notations (like subtraction and exponents) whose rules of manipulation are derived from more basic rules of inference. This is for the benefit of non-mathematicians who may have learned these notations but not their justifications.
26 comments
Comments sorted by top scores.
comment by Miller · 2011-03-13T02:17:30.523Z
This is not a simple problem to tackle. The coding example is essentially a human-implemented type system. I generally don't prefer Joel's solution, because if you have identified this as a frequent problem you might as well elevate your solution to it into the type system and get compiler support. But as you see with programming languages, and with the math example provided, people trade rigour for simplicity and speed. I don't think we can trade one off for the other in any generalized fashion. That is, we see an error and we feel we've opted too much for simplicity. We see few errors and a lot of structure and we feel we've overburdened ourselves. We kind of train ourselves on recent memory.
Interestingly it's our basic satisficing around our poor memories that defines where we set our tradeoff. A good argument for AGI beating us is that it painlessly chooses more rigour.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-03-12T21:06:49.369Z
A simpler technique is to just substitute x = y = 1 in each step and look for the first false equation, i.e., the first step where a false statement is derived from a true statement. Clearly that reasoning step is not valid (does not produce only true conclusions from true premises).
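A minimal sketch of this check, with each equation of the "proof" written as a Python predicate:

# Substitute x = y = 1 into each step and report the first false equation.
steps = [
    lambda x, y: x == y,
    lambda x, y: x**2 == x*y,
    lambda x, y: x**2 - y**2 == x*y - y**2,
    lambda x, y: (x+y)*(x-y) == y*(x-y),
    lambda x, y: x+y == y,
    lambda x, y: y+y == y,
    lambda x, y: 2*y == y,
    lambda x, y: 2 == 1,
]

for i, step in enumerate(steps, start=1):
    if not step(1, 1):
        print(f"first false equation: step {i}")  # step 5, x+y = y
        break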
↑ comment by JGWeissman · 2011-03-12T22:23:38.798Z
That is simpler, because you already optimized it for finding the flaw in that particular incorrect "proof". I want a technique that generalizes to handle incorrect "proofs" that show 1 = 3 by setting positive and negative square roots equal, or that all horses are the same color by taking an element from an empty set.
↑ comment by AlephNeil · 2011-03-13T05:08:42.470Z
Eliezer's method works in all of those cases.
The idea is to replace the reasoning as shown with a simple special case which would have to work if the original was valid.
For instance, in the "horses" example we replace the (more general) inductive step with a specific instance, where we try to prove that "if all 1 horses are the same colour then all 2 horses are the same colour".
It's not guaranteed to work, and even when it does it may not yield all the mistakes (see Perplexed's comment), but it's still useful.
The equivalent 'technique' in programming, if there is one, is stepping through the code with something like gdb to see where the 'bug' happens. The difference is that even when this does catch a bug in progress, it may not be determinate which step should be regarded as the 'first mistake'. (But adopting conventions, like this usName and sName business, may help.)
↑ comment by endoself · 2011-03-13T05:45:33.805Z
It serves a purpose, but it fails far too often and does not solve the original problem of making mistakes obvious.
↑ comment by AlephNeil · 2011-03-13T05:51:52.248Z
does not solve the original problem of making mistakes obvious.
Yes it does.
In all three cases mentioned, it highlights mistakes in the proofs. (Using much less effort than applying massive amounts of paraffin to make each step waterproof.)
↑ comment by endoself · 2011-03-13T20:28:50.200Z
I did mean something by this, and not just "making all mistakes obvious".
The method in the post for handling unsafe strings, like a habit of using only reversible operations in some areas of math, makes the error inherent in not following the convention. The notation takes the same form as the thing being analyzed, moving some thought from a conscious level to an automatic level. This kind of thing can save time overall in many situations because, when a mistake is made, you have already ruled out many places where it could be hiding. The paraffin becomes almost invisible, and it eliminates the need to check for leaks.
(I did not downvote you, btw)
↑ comment by Perplexed · 2011-03-12T21:35:07.336Z
Amusingly, try substituting x=y=0. You find a different flawed step than the one Eliezer found.
↑ comment by cousin_it · 2011-03-13T14:06:33.710Z
Nice. So here we have two testcases that catch different bugs, and you can never prove the absence of bugs by adding more testcases. But testing still works well enough in practice that people rarely write fully formal proofs for software, or for math theorems for that matter :-)
↑ comment by Snowyowl · 2011-03-13T15:59:43.872Z
Actually, x=y=0 still catches the same flaw; it just catches another one at the same time.
↑ comment by Perplexed · 2011-03-13T16:27:20.395Z
Our disagreement seems to derive from my use of the words "different flawed step" and your use of "same flaw". Eliezer suggested substituting 1 for x and y in:
(x+y)(x-y) = y(x-y)
x+y = y
yielding
(1+1)(1-1) = 1(1-1) (true)
1+1 = 1 (false)
Thus, since a true equation was transformed into a false one, the step must have been flawed.
Under my suggestion, we have:
(0+0)(0-0) = 0(0-0) (true)
0+0 = 0 (true)
So, under Eliezer's suggested criterion (turning true to false) this is not a flawed step, though if you look carefully enough, you can still notice the flaw - a division by zero.
↑ comment by timtyler · 2011-03-13T18:41:16.132Z
So, under Eliezer's suggested criterion (turning true to false) this is not a flawed step, though if you look carefully enough, you can still notice the flaw - a division by zero.
Hmm. A failure to identify a flawed step doesn't mean that the step isn't flawed.
A true statement turning into a false one does show that you manipulated it badly - but a true statement staying true doesn't show that you manipulated it well.
↑ comment by Miller · 2011-03-13T02:26:51.765Z
For someone who didn't spend much time in school, you've identified one of my chief ways of guessing the teacher's password. It's often the case that one can essentially perform this kind of hack to narrow down the options without otherwise understanding the problem as it was intended.
In this case, for instance, imagine a multiple-choice question whose answers are candidate lines for the error. Your solution would rule out lines after a certain point. You might get lucky as a student and have 3 of the 4 answers ruled out this way. Voila! You've won and know almost nothing.
↑ comment by JoshuaZ · 2011-03-13T02:46:13.384Z
Speaking as a mathematician (ok, grad student, but close enough for this purpose), this isn't learning almost nothing. Once one has gone through an example and can see where an attempted proof breaks down, one can then look at that part in more detail. Of course one won't learn much if one only cares about identifying the problematic line, whether in a school setting or another setting; one always needs to generalize and integrate knowledge. Don't blame the teacher if one isn't going to take steps to do that.
comment by AlephNeil · 2011-03-13T06:15:10.952Z
If the cost of having written down an incorrect proof is much higher than the cost of burning up a lot of time, then sure, making each step rigorous will be worthwhile. In mathematics, however, that generally isn't the case. No one writes fully rigorous proofs - and occasionally someone spots an error in a published proof. Progress is much faster this way.
In programming, the costs and benefits may of course be totally different, depending on the project.
↑ comment by JGWeissman · 2011-03-13T16:18:20.556Z
The example I made of being rigorous was excessively verbose because I was trying to demonstrate rigor to a general audience without making a lot of assumptions about background information. Mathematicians communicating with other mathematicians can rely on a common knowledge of theorems and notations to write more concisely. They can factor a quadratic expression in one easily verifiable step. They do use "division", though they can drop down a layer of abstraction and explain it in terms of multiplicative inverses, and either way, they know they have to make sure the divisor is not zero.
comment by DanielLC · 2011-03-12T20:14:15.699Z
From what I understand, being completely explicit in math is quite rare. I figure it doesn't really count until you can run it through an automated proof checker.
↑ comment by Johnicholas · 2011-03-12T23:18:28.732Z
Learning to use automated proof checkers might be a valuable rationalist-training technique.
Struggling with a compiler or a proof checker (they're very similar, by the Curry-Howard isomorphism) is an interesting (and perhaps depressing) learning experience. The computer reveals how many "details" you can overlook, and how frequently your gut or instinct is just wrong. Often the "details" are the vast majority of the final result.
Some of the appeal that LessWrong has for programmers might be explained by their exposure to this sort of learning experience.
↑ comment by [deleted] · 2011-03-12T21:46:09.444Z
Agreed. In fact, I suspect that unless you actually can run it through an automatic proof checker, humans are more likely to spot an error in the vague proof. People aren't terribly good at reading pages of verbose symbol manipulation without glazing over.
↑ comment by djcb · 2011-03-13T12:05:15.734Z
Indeed. Case in point: Principia Mathematica, as discussed on Stephen Wolfram's blog, with the amusing note about the proof of the proposition that 1 + 1 = 2.
comment by andreas · 2011-03-12T21:29:18.230Z
Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid.
When you are trying to build up a new argument, temporarily accepting steps of uncertain correctness can be helpful (if mentally tagged as such). This strategy can move you out of local optima by prompting you to think about what further assumptions would be required to make the steps correct.
Techniques based on this kind of reasoning are used in the simulation of physical systems and in machine inference more generally (tempering). Instead of exploring the state space of a system using the temperature you are actually interested in, which permits only very particular moves between states ("provably correct reasoning steps"), you explore using a higher temperature that makes it easier to move between different states ("arguments"). Afterwards, you check how probable the state is that you moved to when evaluated using the original temperature.
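A minimal sketch of that idea (Metropolis sampling on a toy one-dimensional energy; the names and constants are illustrative, not from any particular library):

import math
import random

def energy(x: float) -> float:
    # Toy double-well "energy": two local optima, at x = -1 and x = 1.
    return (x*x - 1.0)**2

def metropolis_step(state: float, temperature: float) -> float:
    # Propose a nearby state; accept it with the Metropolis rule.
    # Higher temperature accepts more "bad" moves, easing exploration.
    proposal = state + random.gauss(0.0, 1.0)
    delta = energy(proposal) - energy(state)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return proposal
    return state

# Explore at a high temperature, then judge what was found by the
# original criterion (here, simply the lowest energy visited).
state = -1.0
visited = []
for _ in range(1000):
    state = metropolis_step(state, temperature=5.0)
    visited.append(state)
best = min(visited, key=energy)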
↑ comment by JGWeissman · 2011-03-12T22:29:38.992Z
When you are trying to build up a new argument, temporarily accepting steps of uncertain correctness can be helpful (if mentally tagged as such).
Agreed. You just have to remember that once you've figured out all those steps leading you to your conclusion, you have an outline, not a completed proof. Being able to produce such outlines that can, most of the time, be successfully turned into proofs, or at least yield interesting reasons why the proof failed, is an important skill.
comment by VNKKET · 2011-03-18T06:41:34.240Z
Is it easy to accidentally come up with criteria for "locally correct" that will still let us construct globally wrong results?
This comment was brought to you by a surface analogy with the Penrose triangle.
comment by rabidchicken · 2011-03-12T20:27:51.543Z
Who that has even taken elementary school algebra could actually be convinced by that "proof" that 2 = 1? I am wondering because I have seen it pop up in a few places, so it would seem that there are people who actually can be tricked into accepting it; otherwise it would have died much faster.
comment by Gray · 2011-03-13T14:05:52.605Z
I saw a lady yesterday who was looking for a five dollar bill in her purse. She was certain that the five dollar bill was in there, but the situation made me think. It was obvious to me that if the five dollar bill was in her purse, then either it was directly in her purse, or it was directly inside something else that was (perhaps indirectly) in her purse. So to prove that the five dollar bill was in her purse, she would have to inspect each item in her purse, and then recursively inspect the contents of each item, as in the sketch below.
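(A minimal sketch of that recursive procedure, with hypothetical names:)

def contains(container: list, item: str) -> bool:
    # The item is "in" the container if it is directly inside, or directly
    # inside something that is (recursively) inside.
    for thing in container:
        if thing == item:
            return True
        if isinstance(thing, list) and contains(thing, item):
            return True
    return False

purse = ["keys", "phone", ["receipt", "five dollar bill"]]  # nested list: a wallet
contains(purse, "five dollar bill")  # True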
But given what you've said above:
Now, citing axioms and theorems to justify a step in a proof is not a mere social convention to make mathematicians happy. It is a useful constraint on your cognition, allowing you to make only inferences that are actually valid.
But is the proposition I cited above really an axiom? Is it true? I'm sure it's possible to generalize this spatial relation of containment to the universe as a whole; would the proposition be true then?
I think it's interesting that you cite a programming example. I'm not sure that you've left the laboratory. More importantly, I'm not so sure how useful it is to prove my proposition in order to determine whether the five-dollar bill is in her purse.
But I've been thinking about heuristics lately. Surely you know that, at least outside the laboratory, that we all use heuristics for problem solving and making decisions. And often we engage in heuristics when it really isn't practical to stop and write a formal proof of our answer.
This doesn't mean, however, that we can't analyze our heuristics when we get home. So I guess I'm imagining rationalists as people who regularly refine their heuristics, and have more heuristics, more rational tools, than other people have. You can prove the effectiveness of a heuristic using logic.
I'm not a programmer, but I think programmers probably have a leg up on this over other people. Heuristics for searching and sorting aren't just algorithms for machines, but apply to real life. There are probably other possibilities as well. Just something I've been thinking about.