What are your strategies for avoiding micromistakes?
post by NaiveTortoise (An1lam) · 2019-10-04T18:42:48.777Z · score: 18 (9 votes) · LW · GW · 2 comments
This is a question post.
Contents
Answers: johnswentworth (43) · gwern (24) · ESRogs (9) · Dagon (4)
2 comments
I've recently been spending more time doing things that involve algebra and/or symbol manipulation (after a while of not doing these things by hand very often), and I've noticed that small mistakes cost me a lot of time. Specifically, I can usually catch such mistakes by double-checking my work, but the cost of not being able to trust my initial results, and of having to redo steps, is very high. High enough that I'm willing to spend time working to reduce the number of such mistakes I make, even if it means slowing down quite a bit or adopting some other costly process.
If you've either developed strategies for avoiding such mistakes or were good at avoiding them in the first place, what do you do?
Two notes on the type of answers I'm looking for:
1. I should note that one answer is just to use something like WolframAlpha or Mathematica, which I do. That said, I'm still interested in not having to rely on such tools for things in the general symbol manipulation reference class as I don't like relying on my computer being present to do these sorts of things.
2. I did do some looking around for work addressing this (found this for example), but most of it suggested basic strategies that I already implement like being neat and checking your work.
Answers
A quote from Wheeler:
Never make a calculation until you know the answer. Make an estimate before every calculation, try a simple physical argument (symmetry! invariance! conservation!) before every derivation, guess the answer to every paradox and puzzle.
When you get into more difficult math problems, outside the context of a classroom, it's very easy to push symbols around ad nauseam without making any forward progress. The counter to this is to figure out the intuitive answer before you start pushing symbols around.
When you follow this strategy, the process of writing a proof or solving a problem mostly consists of repeatedly asking "what does my intuition say here, and how do I translate that into the language of math?" This also gives built-in error checks along the way: if you look at the math and it doesn't match what your intuition says, then something has gone wrong. Either there's a mistake in the math, a mistake in your intuition, or (most commonly) a piece was lost in translation.
This also helps to train your intuition, in the cases where careful calculation reveals that in fact the intuitive answer was wrong.
I've been more consciously thinking about / playing with the approach recommended in this comment since you made it (I think it's literally impossible to think "without intuition", but treating intuition as a necessary prerequisite was new for me), and it's been helpful, especially in helping me notice the difference between questions where I can "read off" the answer and ones where I draw a blank.
However, I've also noticed that it's definitely a sort of "hard mode", in the sense that the more I rely on it, the more it forces me to develop intuitions about everything I'm learning before I can think about it effectively. To give an example, I've been learning more statistics, and there are a bunch of concepts about which I currently have no intuition. E.g., something like the Glivenko–Cantelli theorem. Historically, I would have just filed it away in my head as "thing that is used to prove a variant of Hoeffding's inequality that involves a supremum" or, honestly, forgotten about it. But since I've been consciously practicing developing intuitions, I end up spending a bunch of time thinking about what it's really saying, because I no longer trust myself to use it without understanding it.
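As a quick illustration of what the theorem says, here is a small simulation (uniform samples, whose true CDF is F(t) = t; the helper name is arbitrary) showing the sup-norm gap between the empirical and true CDF shrinking as n grows:

```python
import random

def empirical_sup_gap(n, seed=0):
    """Sup-norm distance between the empirical CDF of n Uniform(0,1)
    samples and the true CDF F(t) = t (the Kolmogorov-Smirnov statistic)."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n))
    # At the i-th order statistic, the empirical CDF jumps from i/n to (i+1)/n,
    # so the supremum over all t is attained at one of the sample points.
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

# Glivenko-Cantelli: the gap goes to 0 as n grows (roughly like 1/sqrt(n)).
gaps = {n: empirical_sup_gap(n) for n in (100, 10_000)}
assert gaps[10_000] < gaps[100]
```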
Now, I suspect many people, in particular people from the LW population, would have a response along the lines of "that's good, you're forcing yourself to deeply understand everything you're learning". And that's partly true! On the other hand, I do think there's something to be said for knowing when to (and when not to) spend time deeply grokking things rather than just using them as tools, and by forcing myself to rely heavily on intuitions, it becomes harder to do that.
Related to that, I'd be interested in hearing how you and others go about developing such intuitions. (I've been compiling my own list.)
My main response here is declarative mathematical frameworks [LW · GW], although I don't think that post is actually a very good explanation, so I'll give some examples.
First, the analogy: let's say I'm writing a Python script, and I want it to pull some data over the internet from some API. The various layers of protocols used by the internet (IP, TCP, HTTP, etc.) are specifically designed so that we can move data around without having to think about what's going on under the hood. Our intuition can operate at a high level of abstraction; we can intuitively "know the answer" without having to worry about the low-level details. If we try to pull www.google.com and we get back a string that doesn't look like HTML, then something went wrong - that's part of our intuition for what the answer should look like.
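To make the analogy concrete, here is a minimal sketch (the helper name and thresholds are made up, purely illustrative) of the kind of high-level sanity check we might run on a fetched page, without knowing anything about the TCP/IP machinery underneath:

```python
def looks_like_html(body: str) -> bool:
    """High-level expectation about the answer: does the response
    resemble HTML at all? We don't care how the bytes got here."""
    head = body.lstrip()[:200].lower()
    return "<html" in head or head.startswith("<!doctype html")

# In a real script we would fetch https://www.google.com with
# urllib.request.urlopen and run this check on the decoded body;
# a failure means something went wrong somewhere under the hood.
```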
A couple more mathy examples...
If you look at the way physicists and engineers use delta functions or differentials, it clearly works: they build up an intuition for which operations/expressions are "allowed" and which are not. There's a "calculus of differentials" and a "calculus of delta functions", which define what things we can and cannot do with these objects. It is possible to put those calculi on solid foundations - e.g. nonstandard analysis or distribution theory - but we can apparently intuit the rules pretty well even without studying those underlying details. We "know what the answer should look like". (Personally, I think we could teach the rules better, rather than making physics & engineering students figure them out on the fly, but even that wouldn't require studying nonstandard analysis etc.)
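One way to see that the "calculus of delta functions" gives the right answers without invoking distribution theory: replace the delta with a narrow Gaussian (a "nascent" delta) and check the defining rule numerically. This is an illustrative sketch; the function names and tolerances are arbitrary choices:

```python
import math

def nascent_delta(x, eps):
    """Narrow Gaussian approximation to the Dirac delta (-> delta as eps -> 0)."""
    return math.exp(-(x / eps) ** 2 / 2) / (eps * math.sqrt(2 * math.pi))

def integrate(g, lo, hi, n=200_000):
    """Plain midpoint-rule quadrature."""
    h = (hi - lo) / n
    return h * sum(g(lo + (i + 0.5) * h) for i in range(n))

# The delta-calculus rule "integral of f(x) * delta(x - a) dx = f(a)",
# checked numerically with f(x) = cos(x) and a = 0.3:
a, eps = 0.3, 1e-3
val = integrate(lambda x: math.cos(x) * nascent_delta(x - a, eps), a - 0.1, a + 0.1)
assert abs(val - math.cos(a)) < 1e-4
```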
Different example: information theory offers some really handy declarative interfaces for certain kinds of problems. One great example I've used a lot lately is the data processing inequality (DPI): we have some random variable X which contains some information about Y. We compute some function f(X). The DPI then says that the information in f(X) about Y is no greater than the information in X about Y: processing data cannot add useful information. Garbage in, garbage out. It's extremely intuitive. If I'm working on a problem, and I see a place where I think "hmm, this is just computation on X, it shouldn't add any new info" then I can immediately apply the DPI. I don't have to worry about the details of how the DPI is proven while using it; I can intuit what the answer should look like.
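As a toy numerical check of the DPI (the joint distribution below is made up purely for illustration), we can verify that coarsening X through a function f never increases the mutual information with Y:

```python
import math
from collections import defaultdict

# Hypothetical joint distribution p(x, y) over X in {0,1,2,3}, Y in {0,1}.
joint = {(0, 0): 0.20, (1, 0): 0.05, (2, 0): 0.15, (3, 0): 0.10,
         (0, 1): 0.05, (1, 1): 0.20, (2, 1): 0.10, (3, 1): 0.15}

def mutual_info(joint):
    """I(X;Y) in bits, computed directly from the joint distribution."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def pushforward(joint, f):
    """Joint distribution of (f(X), Y) induced by applying f to X."""
    out = defaultdict(float)
    for (x, y), p in joint.items():
        out[(f(x), y)] += p
    return out

i_xy = mutual_info(joint)
i_fy = mutual_info(pushforward(joint, lambda x: x % 2))
assert i_fy <= i_xy + 1e-12  # data processing inequality: f(X) can't add info
```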
That's typically how declarative frameworks show up: there's a theorem whose statement is very intuitively understandable, though the proof may be quite involved. That's when it makes sense to hang intuition on the theorem, without necessarily needing to think about the proof (although of course we do need to understand the theorem enough to make sure we're formalizing our intuition in a valid way!). One could even argue that this is what makes a "good" theorem in the first place.
This, I think, may be too domain-specific to really be answerable in any useful way. Anyway, more broadly: when you run into errors, it's good to think like pilots or sysadmins dealing with complex system failures. Doing research and making errors is certainly a complex system, with many steps at which errors could be caught. What are the root causes, how did the error start propagating, and what could have been done throughout the stack to reduce it?

1. Constrain the results: Fermi estimates, informative priors, inequalities, and upper/lower bounds are all good for telling you in advance roughly what the result should be, or at least building intuition about what you expect.
2. Implement in code or a theorem-checker; these are excellent for flushing out hidden assumptions or errors. As Pierce puts it, proving a theorem about your code uncovers many bugs - and it doesn't matter what theorem!
3. Solve with alternative methods, particularly brute force: solvers like Mathematica/WolframAlpha are great just for telling you what the right answer is so you can check your work. In statistics/genetics, I often solve something with Monte Carlo (or ABC) or brute-force approaches like dynamic programming, and only then, after looking at the answers to build intuitions (see #1), do I try to tackle an exact solution.
4. Test the results: unit-test critical values like 0 or the small integers, or boundaries, or very large numbers; use property-based checking (I think this is also called 'metamorphic testing'), as in QuickCheck, to establish that basic properties seem to hold (always positive, monotonic, output the same length as the input, etc.).
5. Ensemble yourself: wait a while and sleep on it; try to 'rubber duck' it, activating your adversarial reasoning skills by explaining it; go through it in different modalities.
6. Generalize the results, so you don't have to re-solve it: the most bug-free code or proof is the one you never write.

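To illustrate the "solve with alternative methods" and "test the results" suggestions above, here is a hypothetical example: a closed-form birthday-collision probability cross-checked against a brute-force Monte Carlo estimate, plus unit tests on critical values and a monotonicity property. The function names and tolerances are arbitrary:

```python
import random

def birthday_collision_exact(n, days=365):
    """Closed-form probability that n people share at least one birthday."""
    p_unique = 1.0
    for i in range(n):
        p_unique *= (days - i) / days
    return 1.0 - p_unique

def birthday_collision_mc(n, days=365, trials=20_000, seed=0):
    """Brute-force Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(
        len(set(rng.randrange(days) for _ in range(n))) < n
        for _ in range(trials)
    )
    return hits / trials

# Alternative method as a cross-check: simulation vs. closed form.
assert abs(birthday_collision_exact(23) - birthday_collision_mc(23)) < 0.02

# Unit tests on critical values and basic properties:
assert birthday_collision_exact(0) == 0.0    # boundary: nobody present
assert birthday_collision_exact(366) == 1.0  # pigeonhole: collision is certain
probs = [birthday_collision_exact(n) for n in range(50)]
assert all(a <= b for a, b in zip(probs, probs[1:]))  # monotonic in n
```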
When you run into an error, think about it: how could it have been prevented? If you read something like Site Reliability Engineering: How Google Runs Production Systems or other books about failure in complex systems, you might find useful inspiration. There is a lot of useful advice: for example, a well-functioning system should budget for some degree of failure; you should keep an eye on 'toil' versus genuinely new work, and step back and shave some yaks when 'toil' starts getting out of hand; and you should gradually automate manual workflows, perhaps starting from checklists as skeletons.
Do you need to shave some yaks? Are your tools bad? Is it time to invest in learning to use better programs or formalization methods?
If you keep making an error, how can it be stopped?
If it's a simple error of formula or manipulation, perhaps you could make a bunch of spaced repetition system flashcards with slight variants all stressing that particular error.
Is it machine-checkable? For writing my essays, I flag as many errors or warning signs as possible using two scripts and additional tools like linkchecker.
Can you write a checklist to remind yourself to check for particular errors or problems after finishing?
Follow the 'rule of three': if you've done something 3 times, or argued the same thesis at length 3 times, etc., it may be time to think about it more closely, automate it, or write it down as a full-fledged essay. I find this useful for writing, because if something comes up 3 times, that suggests it's important and underserved, and also that you might save yourself time in the long run by writing it now. (This is my theory for why memory works in a spaced-repetition sort of way: real-world facts seem to follow some sort of long-tailed or perhaps mixture distribution, where there are massed transient facts which can be safely forgotten, and long-term facts which pop up repeatedly at large spacings, so ignoring massed presentation but retaining facts which keep popping up after long intervals is more efficient than simply strengthening memories in proportion to the total number of exposures.)
This is an excellent answer, and I want to highlight that making Anki flashcards is especially useful in this case. I rarely make mistakes when working with mathematics, largely because I have made a lot of Anki cards thoroughly analyzing the concepts I use.
"Using spaced repetition systems to see through a piece of mathematics", an essay by Michael Nielsen, was really useful for me when I first investigated this idea.
Besides this, I have what may be an eccentric idea: when working, I set a special music soundtrack for the specific work I do. See here [LW · GW] for more details. Further, I think this idea of having a "ritual" is closely related to not making mistakes, "getting in the zone", etc.
Just wanted to note this was definitely helpful and not too general. Weirdly enough, I've read parts of the SRE book but for some reason was compartmentalizing it in my "engineering" bucket rather than seeing the connection you pointed out.
One mini-habit I have is to try to check my work in a different way from the way I produced it.
For example, if I'm copying down a large number (or string of characters, etc.), then when I double-check it, I read off the transcribed number backwards. I figure this way my brain is less likely to go "Yes yes, I've seen this already" and skip over any discrepancy.
And in general I look for ways to do the same kind of thing in other situations, such that checking is not just a repeat of the original process.
Incidentally, a similar consideration leads me to want to avoid reusing old metaphors when explaining things. If you use multiple metaphors, you can triangulate on the meaning: errors in the listener's understanding will interfere destructively, leaving something closer to what you actually meant.
For this reason, I've been frustrated that we keep using "maximize paperclips" as the stand-in for a misaligned utility function. And I think reusing the exact same example again and again has contributed to the misunderstanding Eliezer describes here:
Original usage and intended meaning: The problem with turning the future over to just any superintelligence is that its utility function may have its attainable maximum at states we'd see as very low-value, even from the most cosmopolitan standpoint.
Misunderstood and widespread meaning: The first AGI ever to arise could show up in a paperclip factory (instead of a research lab specifically trying to do that). And then because AIs just mechanically carry out orders, it does what the humans had in mind, but too much of it.
If we'd found a bunch of different ways to say the first thing, and hadn't just said, "maximize paperclips" every time, then I think the misunderstanding would have been less likely.
This applies to any sort of coding as well. Trivial mistakes compound and are difficult to find later. My general approach is unit testing. For each small section of work, do the calculation or process or sub-theorem or whatever in two different ways. Many times the check calculation can be an estimate, where you're just looking for "unsurprising" as the comparison rather than strict equality.
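As a minimal made-up illustration of doing the same calculation two different ways:

```python
def triangular_closed_form(n):
    """Closed-form sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# Cross-check the formula against an independent brute-force computation:
for n in range(100):
    assert triangular_closed_form(n) == sum(range(n + 1))
```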
Yeah, with coding, unit testing plus assertions plus checking my intuitions against the code as John Wentworth described does in fact seem to work fairly well for me. I think the difficulty with algebra is that there's not always an obvious secondary check you can do.
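One generic secondary check that works for most algebra, even without a full computer algebra system, is to evaluate the original and simplified expressions at a handful of random points and compare. It can't prove an identity, but it catches most slips. (This sketch and its helper name are illustrative, not from the thread.)

```python
import random

def agree_numerically(f, g, trials=100, seed=0, tol=1e-9):
    """Spot-check that two expressions agree on random inputs."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-10, 10)
        try:
            a, b = f(x), g(x)
        except ZeroDivisionError:  # skip points outside the common domain
            continue
        if abs(a - b) > tol * max(1.0, abs(a), abs(b)):
            return False
    return True

# Claimed hand simplification: (x**2 - 1)/(x - 1) == x + 1 (away from x = 1)
assert agree_numerically(lambda x: (x**2 - 1) / (x - 1), lambda x: x + 1)
# A buggy "simplification" is caught:
assert not agree_numerically(lambda x: (x**2 - 1) / (x - 1), lambda x: x - 1)
```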
2 comments
Comments sorted by top scores.
I would like to hear from people who were "born that way".
It might turn out that they've always just had good intuitions and attitudes that we could learn.
Or if it does turn out that they're just wired differently, that would be fascinating.
This is a great point. Not sure why I phrased it the way I did originally in retrospect. I updated the question to reflect your point.