In Defence of Optimizing Routine Tasks
post by leogao · 2021-11-09T05:09:41.595Z · LW · GW · 6 comments
People often quantify whether an optimization to a routine task is worth it by looking at the net amount of time saved. This mindset is exemplified by the oft-cited xkcd "Is It Worth the Time?", which tabulates how long you can afford to spend optimizing a task before the optimization costs more time than it saves.
However, I don't think this gives a full picture of how to decide whether to make a task more efficient. Many other factors can outweigh the raw time cost.
First off, when a task creates friction inside an important loop, that friction can disincentivize you from going around the loop at all, and make the feedback loop significantly more frustrating. For example, creating short aliases for common shell commands makes me much more productive because it lowers the mental cost I assign to running them and makes the "activation energy" easier to overcome. This probably doesn't save me more than an hour or two a year, since typing a few extra characters realistically doesn't take that long. Of course, setting up aliases only takes a few minutes, but even if it took hours I think it would still be worth it overall.
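As a concrete sketch of what this can look like (the alias and function names here are hypothetical - use whatever is memorable to you), a few lines in `~/.bashrc` or `~/.zshrc` suffice:

```shell
# Hypothetical shortcuts for commands run dozens of times a day.
# Two characters instead of a dozen keeps the activation energy low.
alias gs='git status'
alias gd='git diff'
alias gp='git pull'
alias ll='ls -lah'

# For anything that needs an argument in the middle, a small shell
# function works where a plain alias can't.
gcm() { git commit -m "$1"; }
```

The point isn't the keystrokes saved; it's that each command now feels free to run, so you actually run it more often.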
If the task inside the loop takes longer than even a few seconds, it also adds significant context-switching costs. For example, if I'm writing code that needs some simple utilities, having those utilities already implemented and easily usable is worth far more than the reimplementation time it saves. If I need a utility function that takes 5 minutes to reimplement, then by the time I finish implementing it I've already partially lost track of the context where I originally needed it, and I have to spend more time and effort getting back into the flow of the original code. (This is also why building up abstractions is so powerful - even more powerful than the mere code-reuse benefits would imply - it lets you keep the high-level context in short-term memory.) In practice, this means that first building a set of problem-specific tools to abstract away the annoying details, and then solving the problem with those tools in hand, often feels far more tactile, tractable, and fun than the alternative. This seems obvious in retrospect, but I still sometimes find myself not building tools when I should.
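As one sketch of what such small tools can look like at the shell level (these helpers are hypothetical examples, not from the post), even tiny dotfile functions remove a surprising amount of friction:

```shell
# mkcd: make a directory and enter it in one step, instead of
# typing (and possibly mistyping) the same path twice.
mkcd() {
  mkdir -p "$1" && cd "$1"
}

# extract: unpack common archive formats without having to recall
# the right tar/unzip flags every time.
extract() {
  case "$1" in
    *.tar.gz|*.tgz)  tar xzf "$1" ;;
    *.tar.bz2|*.tbz) tar xjf "$1" ;;
    *.zip)           unzip -q "$1" ;;
    *)               echo "extract: unknown format: $1" >&2; return 1 ;;
  esac
}
```

Neither saves much wall-clock time, but each removes a small "look up the incantation" detour that would otherwise break flow.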
If you're especially unlucky, the subtask itself turns out to be more than a 5-minute task, and you have to recurse another layer deeper; after a few iterations of this, you've totally forgotten what you were originally trying to do — the extremely frustrating "falling down the rabbit hole" / "yak shaving" phenomenon. Needless to say, this is really bad for productivity (and mental health). Having a solid foundation that behaves how you expect 99% of the time helps avoid this. Even just knowing that the probability of falling down a rabbit hole is low helps a lot with the activation energy I need to overcome to actually go implement something.
Even if the thing you're optimizing isn't in any critical loop, there's still the cognitive overhead of having to keep track of it at all. Having fewer things to do frees you from worrying about whether the thing is getting done, whether you forgot anything important, and so on, which reduces stress a lot. Plus, I can't count how many times I've thought "this is just going to be temporary/one-off and won't be part of an important feedback loop" about something that ended up in an important feedback loop anyway.
Finally, not all time is created equal. You only get a few productive hours a day, and those hours are worth far more than the hours where you can't get yourself to do anything that requires much cognitive effort or focus. So even if it doesn't save you any time, finding a way to reduce the cognitive effort required can be worth a lot. For example, making the UI of something you check often more intuitive, and displaying the things you care about in easily interpretable formats (rather than showing raw data and forcing you to work things out in your head, even when that computation is trivial), can pay off by freeing up your productive hours for less-automatable things.
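For instance (a hypothetical sketch, not from the post), a tiny formatter that turns raw byte counts into human-readable units saves you from doing powers-of-1024 arithmetic in your head every time you glance at a log line or dashboard:

```shell
# human_bytes: format a raw byte count (e.g. from a log or an API)
# as a human-readable size, so the "trivial" mental conversion
# never has to happen at all.
human_bytes() {
  awk -v b="$1" 'BEGIN {
    split("B KiB MiB GiB TiB", unit, " ")
    i = 1
    while (b >= 1024 && i < 5) { b /= 1024; i++ }
    printf "%.1f %s\n", b, unit[i]
  }'
}

human_bytes 1536        # 1.5 KiB
human_bytes 1073741824  # 1.0 GiB
```

Each individual conversion is trivial, but doing it in your head hundreds of times spends exactly the scarce focus this section is about.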
Of course, premature optimization is still bad. I'm not saying you should go and optimize everything; it's just that there are other factors worth considering beyond the raw time you save. Figuring out when to optimize and when not to is its own rabbit hole, but knowing that time isn't everything is the first step.
Comments sorted by top scores.
comment by localdeity · 2021-11-09T10:31:10.710Z · LW(p) · GW(p)
Things I'd mention:
- An additional benefit of automating small tasks is "getting better at automating small tasks".
- On the "working memory" subject: you say things along these lines already, but I would put it like this: if we imagine that your memory at the relevant level has N slots, and the concepts for your task currently require N+1 slots, then a convenience that shrinks the requirement to N is very helpful. Therefore I would particularly favor conveniences that seem to shrink the "memory footprint" of the actions.
comment by Richard Horvath · 2021-11-10T11:07:59.685Z · LW(p) · GW(p)
I would add that automating a task is often way more fun than doing the task itself. I once spent a lot of time automating something so mind-numbing (simple and boring, but requiring constant focus, since a small mistake would have had negative consequences) that I thought I'd rather shoot myself than do it again.
Although the automation also turned out to save a lot of time in the long run (it wasn't clear at the time how many more times I would have to do the task), I would still have chosen it for the mental health benefits alone.
comment by Drake Morrison (Leviad) · 2022-12-10T21:04:36.146Z · LW(p) · GW(p)
Clearly articulating the extra costs involved is valuable. I had seen the time tradeoff before, but I hadn't thought through the other costs that I, as a human, also incur.
comment by Nicholas / Heather Kross (NicholasKross) · 2022-10-13T19:37:26.694Z · LW(p) · GW(p)
Strongly upvoted, thank you for clarifying this. So many things that seem intractable are only that way because people haven't articulated the "extra costs" involved:
- tasks taking up mental effort, outside their bare "time cost"
- someone sounds dumb when trying to say something, but only because they haven't yet articulated / don't know how to articulate their real objections
- the Kelly criterion, and insurance in general, working by preventing ruin, despite the negative arithmetic returns (see Safe Haven, Spitznagel)
comment by bfinn · 2021-11-15T13:09:11.419Z · LW(p) · GW(p)
It would be handy to provide a list of tasks that many people do often and that can be optimized (and how to) - i.e. normal everyday things, rather than programming. Particularly ones relevant to this post, i.e. where optimizing seems like more trouble than it's worth. (Examples of the former abound - eg buying ready meals or Deliveroo instead of cooking, paying someone to do some of your admin. No good examples of the latter occur to me right now, but there must be some.)
Optimizing also includes dropping. In particular, dropping means not doing something at all - not because it's of negative value, but because it's lower value than other things you could do, i.e. a poor use of your time, and not easily outsourced. There may be things you could and should drop even though dropping them takes time/effort while keeping going takes little.