The 10% Improvement Problem

post by norswap · 2018-07-02T16:27:01.146Z · score: 9 (5 votes) · LW · GW · 3 comments

This is a link post for https://twitter.com/mstrdrmr/status/1012865943578370048

There is something I call the 10% improvement problem. Basically, it's hard to track a change of a couple tens of percent in most abilities / states of being.

In some cases, there is a definition problem. What does it mean for you to be "twice" as happy as yesterday?

Assuming you do know, would you notice a 30% change in happiness? A 10% change?

Can you tell if you're being twice or half as productive? Maybe if you do the same thing every day. But when faced with a new task, it's hard to tell whether the task is inherently difficult or you're being uncharacteristically unproductive.

Even things that I feel should be eminently measurable aren't. The effectiveness of a training program in building muscle, for instance. Heck, it's hard even to measure muscle mass to within 10% accuracy.

And that is saying nothing about confounding factors. You don't live a regimented life in a sterile box. Any comparison is necessarily going to be apples to oranges.

And yet, 10% changes matter. Stack 10 of them and you've doubled whatever you were trying to improve. The problem is, how do you know what to stack?
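For what it's worth, stacked improvements compound multiplicatively, so ten 10% gains actually give a bit more than a doubling (a quick check, with illustrative numbers of my own):

```python
# Quick arithmetic check: ten stacked 10% improvements compound
# multiplicatively, and end up more than doubling the baseline.
gain_10 = 1.10 ** 10
print(round(gain_10, 2))  # -> 2.59

# About seven 10% improvements already come close to a doubling:
gain_7 = 1.10 ** 7
print(round(gain_7, 2))  # -> 1.95
```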

Over a long period of time, these incremental changes are definitely noticeable. If you're on average 30% more effective every day, that will be noteworthy at the end of the year.

But you don't have the time to wait long periods for every single change you want to make. So you stack many of them, see if things improve in the long run, and accept that part of what you're doing may be pure cargo cult self-improvement.

This is an open problem, I have no solution to offer, but any help is appreciated.

3 comments

Comments sorted by top scores.

comment by johnswentworth · 2018-07-03T00:13:21.361Z · score: 3 (2 votes) · LW · GW
And yet, 10% changes matter. Stack 10 of them and you've doubled whatever you were trying to improve.

Counterargument: the 80/20 rule suggests that, most of the time in practice, either the 10% changes won't actually stack, or one of them will contribute most of the value on its own. It's really hard to find 10 changes of 10% each that actually stack.

comment by norswap · 2018-07-08T13:59:34.079Z · score: 1 (1 votes) · LW · GW

Good one. I think maybe that's true for some domains but not others?

Another way to consider this is that there is a small number of low-hanging fruit that yield a lot of improvement. You could even call them "beginner gains" if they are easy. But after that, you have to deal with a long tail of modest improvements - yet there's no doubt in my mind that correctly stacking them can yield further improvement.

comment by johnswentworth · 2018-07-08T23:04:44.061Z · score: 4 (2 votes) · LW · GW

Also sorry I didn't actually answer your main question. It's actually something I've thought about quite a bit, but usually in the context of "not enough data to map out this very-high-dimensional space" rather than "not enough data to detect a small change". The problem is similar in both cases. I'll probably write a post or two on it at some point, but here's a very short summary.

Traditional probability theory relies heavily on large-number approximations; mainstream statistics uses convergence as its main criterion of validity. Small data problems, on the other hand, are much better suited to a Bayesian approach. In particular, if we have a few different models (call them M_1, ..., M_n) and some data D, we can compute the posterior P(M_i | D) without having to talk about convergence or large numbers at all.

The trade-off is that the math tends to be spectacularly hairy; it usually involves high-dimensional integrals. Traditional approaches approximate those integrals in the large-data limit, but the whole point here is that we don't have enough data for those approximations to be valid.