What are your strategies for avoiding micromistakes?
post by An1lam · 2019-10-04T18:42:48.777Z · score: 18 (9 votes) · LW · GW · 2 comments
This is a question post.
Contents
Answers: johnswentworth (43) · gwern (24) · ESRogs (4) · Dagon (4)
2 comments
I've recently been spending more time doing things that involve algebra and/or symbol manipulation (after a while of not doing these things by hand that often) and have noticed that small mistakes cost me a lot of time. Specifically, I can usually catch such mistakes by double-checking my work, but the cost of not being able to trust my initial results and of redoing steps is very high. High enough that I'm willing to spend time working to reduce the number of such mistakes I make, even if it means slowing down quite a bit or adopting some other costly process.
If you've either developed strategies for avoiding such mistakes or were just good at avoiding them in the first place, what do you do?
Two notes on the type of answers I'm looking for:
1. I should note that one answer is just to use something like WolframAlpha or Mathematica, which I do. That said, I'm still interested in not having to rely on such tools for general symbol manipulation, as I don't like depending on a computer being present for these sorts of things.
2. I did do some looking around for work addressing this (found this for example), but most of it suggested basic strategies that I already implement like being neat and checking your work.
Answers
A quote from Wheeler:
Never make a calculation until you know the answer. Make an estimate before every calculation, try a simple physical argument (symmetry! invariance! conservation!) before every derivation, guess the answer to every paradox and puzzle.
When you get into more difficult math problems, outside the context of a classroom, it's very easy to push symbols around ad nauseam without making any forward progress. The counter to this is to figure out the intuitive answer before starting to push symbols around.
When you follow this strategy, the process of writing a proof or solving a problem mostly consists of repeatedly asking "what does my intuition say here, and how do I translate that into the language of math?" This also gives built-in error checks along the way: if you look at the math and it doesn't match what your intuition says, then something has gone wrong. Either there's a mistake in the math, a mistake in your intuition, or (most commonly) a piece was missed in the translation.
This also helps to train your intuition, in the cases where careful calculation reveals that in fact the intuitive answer was wrong.
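One mechanical version of this error check is to spot-check a symbolic result numerically before trusting it. A minimal sketch in Python, using only the standard library; the derivative of x^x is just an illustrative stand-in for whatever you actually derived:

```python
import math

def derived(x):
    # Result of the symbolic derivation: d/dx[x**x] = x**x * (ln x + 1)
    return x ** x * (math.log(x) + 1)

def numeric_derivative(f, x, h=1e-6):
    # Independent check via central differences
    return (f(x + h) - f(x - h)) / (2 * h)

# If the two disagree, either the algebra or the intuition is wrong
x = 2.0
assert abs(derived(x) - numeric_derivative(lambda t: t ** t, x)) < 1e-4
```

The point is not the particular formula but the habit: the numeric check is a second, independent route to the same answer, so it catches translation errors between intuition and symbols.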
This, I think, may be too domain-specific to really be answerable in any useful way. More broadly: when you run into errors, it's always good to think somewhat like pilots or sysadmins dealing with complex system failures. Doing research is certainly a complex system, with many steps where errors could be caught. What are the root causes, how did the error start propagating, and what could have been done throughout the stack to reduce it?

constrain the results: Fermi estimates, informative priors, inequalities, and upper/lower bounds are all good for telling you in advance roughly what the results should be, or at least building intuition about what you expect
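As a small illustration of constraining a result in code: a derived formula can be checked against a known bound before you trust it. The uniform-distribution variance below is an arbitrary example, with Popoviciu's inequality supplying the bound:

```python
def uniform_variance(a, b):
    # Derived result under test: Var[Uniform(a, b)] = (b - a)**2 / 12
    return (b - a) ** 2 / 12

# Popoviciu's inequality: any variable supported on [a, b] satisfies
# 0 <= Var <= (b - a)**2 / 4, so a formula outside that range is wrong.
for a, b in [(0, 1), (-3, 5), (2, 2.5)]:
    v = uniform_variance(a, b)
    assert 0 <= v <= (b - a) ** 2 / 4, (a, b, v)
```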

implement in code or a theorem-checker; these are excellent for flushing out hidden assumptions or errors. As Pierce puts it, proving a theorem about your code uncovers many bugs, and it doesn't matter what theorem!
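A minimal sketch of implementing a result in code: check a derived closed form against an obviously-correct brute-force version over many inputs. The sum-of-squares formula here is just a placeholder for your own result:

```python
def closed_form(n):
    # Candidate closed form: 1**2 + 2**2 + ... + n**2 = n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

def brute_force(n):
    # Slow but obviously-correct alternative
    return sum(k * k for k in range(1, n + 1))

# Agreement over many inputs flushes out off-by-one and sign errors
for n in range(200):
    assert closed_form(n) == brute_force(n), n
```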

solve with alternative methods, particularly brute force: solvers like Mathematica/Wolfram are great just to tell you what the right answer is so you can check your work. In statistics/genetics, I often solve something with Monte Carlo (or ABC) or brute force approaches like dynamic programming, and only then, after looking at the answers to build intuitions (see: #1), do I try to tackle an exact solution.
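For example, a quick seeded Monte Carlo run can confirm a derived exact answer before you rely on it. Here the derivation being checked is E[max(U1, U2)] = 2/3 for independent Uniform(0, 1) variables, an arbitrary illustration:

```python
import random

def monte_carlo_max_uniform(trials=200_000, seed=0):
    # Brute-force estimate of E[max(U1, U2)] for independent Uniform(0, 1)s
    rng = random.Random(seed)
    return sum(max(rng.random(), rng.random()) for _ in range(trials)) / trials

exact = 2 / 3  # the derived answer being checked
assert abs(monte_carlo_max_uniform() - exact) < 0.01
```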

test the results: unit-test critical values like 0 or the small integers, or boundaries, or very large numbers; use property-based checking (I think also called 'metamorphic testing') like QuickCheck to establish that basic properties seem to hold (like always being positive, monotonic, input the same length as output, etc.)
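A QuickCheck-style check can be sketched with nothing but the standard library: generate random inputs in a loop and assert the properties. The normalize function below is a hypothetical stand-in for whatever routine you're testing:

```python
import random

def normalize(ws):
    # Hypothetical function under test: scale positive weights to sum to 1
    total = sum(ws)
    return [w / total for w in ws]

# Property-based loop: random inputs, invariants asserted as properties
rng = random.Random(1)
for _ in range(500):
    ws = [rng.uniform(0.1, 10.0) for _ in range(rng.randint(1, 20))]
    out = normalize(ws)
    assert len(out) == len(ws)         # output same length as input
    assert all(x > 0 for x in out)     # always positive
    assert abs(sum(out) - 1.0) < 1e-9  # sums to one
```

A dedicated library like Hypothesis adds shrinking of failing cases, but the bare loop already catches most basic violations.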

ensemble yourself: wait a while and sleep on it, try to 'rubber duck' it to activate your adversarial reasoning skills by explaining it, go through it in different modalities

generalize the results, so you don't have to re-solve it: the most bug-free code or proof is the one you never write.

When you run into an error, think about it: how could it have been prevented? If you read something like Site Reliability Engineering: How Google Runs Production Systems or other books about failure in complex systems, you might find some useful inspiration. For example: a well-functioning system should budget for some degree of failure; you should keep an eye on 'toil' versus genuinely new work, and step back and shave some yaks when toil starts getting out of hand; and you should gradually automate manual workflows, perhaps starting from checklists as skeletons.
Do you need to shave some yaks? Are your tools bad? Is it time to invest in learning to use better programs or formalization methods?
If you keep making an error, how can it be stopped?
If it's a simple error of formula or manipulation, perhaps you could make a bunch of spaced repetition system flashcards with slight variants all stressing that particular error.
Is it machine-checkable? For writing my essays, I flag as many errors or warning signs as I can using two scripts and additional tools like linkchecker. Can you write a checklist to remind yourself to check for particular errors or problems after finishing?
Follow the 'rule of three': if you've done something 3 times, or argued the same thesis at length 3 times, etc., it may be time to think about it more closely, to automate it or write down a full-fledged essay. I find this useful for writing because if something comes up 3 times, that suggests it's important and underserved, and also that you might save yourself time in the long run by writing it now. (This is my theory for why memory works in a spaced-repetition sort of way: real-world facts seem to follow some sort of long-tailed or perhaps mixture distribution, where there are massed transient facts which can be safely forgotten, and long-term facts which pop up repeatedly with large spacings; so ignoring massed presentation but retaining facts which keep popping up after long intervals is more efficient than simply strengthening memories in proportion to the total number of exposures.)
This is an excellent answer, and I want to highlight that making Anki flashcards is especially useful in this case. I rarely make a mistake when working with mathematics, largely because I have made myself a lot of Anki cards thoroughly analyzing the concepts I use.
"Using spaced repetition systems to see through a piece of mathematics", an essay by Michael Nielsen, was really useful for me when I was first investigating this idea.
Besides this, I have what may be an eccentric habit: when working, I put on a specific music soundtrack for the specific kind of work I'm doing. See here [LW · GW] for more details. Further, I think this idea of having a "ritual" is closely related to not making mistakes, "getting in the zone", etc.
Just wanted to note this was definitely helpful and not too general. Weirdly enough, I've read parts of the SRE book but for some reason was compartmentalizing it in my "engineering" bucket rather than seeing the connection you pointed out.
One mini-habit I have is to try to check my work in a different way from the way I produced it.
For example, if I'm copying down a large number (or string of characters, etc.), then when I double-check it, I read off the transcribed number backwards. I figure this way my brain is less likely to go "Yes yes, I've seen this already" and skip over any discrepancy.
And in general I look for ways to do the same kind of thing in other situations, such that checking is not just a repeat of the original process.
Incidentally, a similar consideration leads me to want to avoid reusing old metaphors when explaining things. If you use multiple metaphors, you can triangulate on the meaning: errors in the listener's understanding will interfere destructively, leaving something closer to what you actually meant.
For this reason, I've been frustrated that we keep using "maximize paperclips" as the stand-in for a misaligned utility function. And I think reusing the exact same example again and again has contributed to the misunderstanding Eliezer describes here:
Original usage and intended meaning: The problem with turning the future over to just any superintelligence is that its utility function may have its attainable maximum at states we'd see as very low-value, even from the most cosmopolitan standpoint.
Misunderstood and widespread meaning: The first AGI ever to arise could show up in a paperclip factory (instead of in a research lab specifically trying to build one). And then, because AIs just mechanically carry out orders, it does what the humans had in mind, but too much of it.
If we'd found a bunch of different ways to say the first thing, and hadn't just said, "maximize paperclips" every time, then I think the misunderstanding would have been less likely.
This applies to any sort of coding as well. Trivial mistakes compound and are difficult to find later. My general approach is unit testing: for each small section of work, do the calculation or process or sub-theorem or whatever in two different ways. Often the check calculation can be an estimate, where you're just looking for "unsurprising" as the comparison rather than strict equality.
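A sketch of this "estimate as the second method" idea in Python: an exact factorial checked against Stirling's approximation, where the assertion only demands that the ratio be unsurprising, not that the two agree exactly:

```python
import math

def exact_factorial(n):
    # The "real" calculation
    out = 1
    for k in range(2, n + 1):
        out *= k
    return out

def stirling(n):
    # Independent rough estimate: n! ~ sqrt(2*pi*n) * (n/e)**n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in [5, 10, 20]:
    ratio = exact_factorial(n) / stirling(n)
    # Only require "unsurprising": Stirling undershoots by a few percent
    assert 1.0 < ratio < 1.05, (n, ratio)
```

If the exact code had an off-by-one or an extra factor, the ratio would land far outside the expected band, even though the estimate alone could never pin down the exact value.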
Yeah, with coding, unit testing plus assertions plus checking my intuitions against the code as John Wentworth described does in fact seem to work fairly well for me. I think the difficulty with algebra is that there's not always an obvious secondary check you can do.
2 comments
Comments sorted by top scores.
I would like to hear from people who were "born that way".
It might turn out that they've always just had good intuitions and attitudes that we could learn.
Or if it does turn out that they're just wired differently, that would be fascinating.
This is a great point. Not sure why I phrased it the way I did originally in retrospect. I updated the question to reflect your point.