Don't Over-Optimize Things

post by owencb · 2022-06-16T16:33:17.560Z · LW · GW · 6 comments

Contents

  or Optimizing Optimization
    Reflection on purpose vs optimizing for that purpose
    What goes wrong when you over-optimize
    When is lots of optimization for a purpose good?

or Optimizing Optimization

The definition of optimize is:

to make something as good as possible

It's hard to argue with that. It's no coincidence that a lot of us have something of an optimization mindset.

But sometimes trying to optimize can lead to worse outcomes (because we don't fully understand what to aim for). It's worth understanding how this happens. We can try to avoid it by a combination of thinking more about what to aim for, and (sometimes) simply optimizing less hard.

Reflection on purpose vs optimizing for that purpose

What does the activity of making something as good as possible look like in practice? I think often there are two stages:

  1. Reflection on the purpose — thinking about what the point of the thing at hand is, and identifying what counts as "good" in context
  2. Optimizing for that purpose — identifying the option(s) which do best at the identified purpose

Both of these stages are important parts of optimization in the general sense. But I think it's optimization-for-a-given-purpose that feels like optimization. When I say "over-optimization" I mean doing too much optimization for a given purpose.

What goes wrong when you over-optimize

Consider this exaggerated example:

Alice is a busy executive. She needs to get from one important meeting to another in a nearby city; she's definitely going to be late to the second meeting. She asks her assistant Bob to sort things out. "What should I be optimizing for?", Bob asks. "Just get me there as fast as possible", Alice replies, imagining that Bob will work out whether a taxi or train is faster.

Bob is on this. Eager to prove himself an excellent assistant, he first looks into a taxi (about 90 minutes) and a train (about 60 minutes plus 10 minutes travel at each end — but there's a 20 minute wait for the right train). So the taxi looks better.

But wait. Surely he can do better than 90 minutes? OK, so the journey is too short for a private jet to make sense, but what about a helicopter? Yep, 15 minutes to get to a helipad, plus 45 minutes flight time, and it can land on the hotel roof! Even adding in 5 minutes for embarking/disembarking, this is 25 minutes faster.

Or ... was he assuming that the drivers were sticking to the speed limit? Yeah, if he makes the right phone calls he can find someone who can drive door to door in 60 minutes.

Can he get the helicopter to be faster than that? Yeah, the driver can speed to the helipad, and bring it down to 57 minutes. Or what if he doesn't have it take off from a helipad? He just needs to find the closest possible bit of land and pay the owners to allow it to land there (or pay security people to temporarily clear the land even if they don't have permission to land). Surely that will come in under 55 minutes. Actually, if he's not concerned about proper airfields, he can revisit the option of a private jet ... just clear the street outside and use that as a runway, then have a skydiving instructor jump with Alice to land on the roof of the hotel ...

What's going wrong here? It isn't just that Bob is wasting time doing too much optimization, but that his solutions are getting worse as he does more optimization. This is because he has an imperfect understanding of the purpose. Goodhart's law [LW · GW] is biting, hard.

It's also the case that Bob has a bunch of other implicit knowledge baked into how he starts to search for options. He first thinks of taking a taxi or the train. These are unusually good options overall among possible ways to get from one city to the other; they're salient to him because they're common, and they're common because they're often good choices. Too much optimization is liable to throw out the value of this implicit knowledge.

So there are two ways Bob could do a better job:

  1. He could reflect more on the purpose of what he's doing (perhaps consulting Alice to understand that budget starts to matter when it's getting into the thousands of dollars, and that she really doesn't want to do things that bring legal or physical risk)
  2. He could do something other than pure optimization; like "find the first pretty good option and stop searching"[1], or "find a set of pretty good options and then pick the one that he gut-level feels best about"[2] (a toy sketch of such selection rules follows just after this list)
    • It's not obvious which of these will produce better outcomes; it depends how much of his implicit knowledge is known to his gut vs encoded in his search process
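
As a toy illustration of how softer selection rules can compare with hard optimization when the score being maximized is only a proxy for what's actually wanted, here is a small sketch. The option-generating model, the "hidden cost" term, and all the numbers are invented for illustration; none of this is from the post.

```python
import math
import random

random.seed(0)

def make_options(n=200):
    """Candidate travel plans. The proxy score is 'minutes saved' (what Bob
    optimizes); a hidden cost (risk, money, legal trouble) grows rapidly for
    extreme plans, which is what makes the true value diverge from the proxy.
    All numbers are invented."""
    options = []
    for _ in range(n):
        minutes_saved = random.expovariate(1 / 10)    # proxy score, mean 10
        hidden_cost = 0.02 * minutes_saved ** 2       # grows superlinearly
        options.append((minutes_saved, minutes_saved - hidden_cost))
    return options  # list of (proxy, true value) pairs

def argmax_pick(options):
    """Full optimization on the proxy: take the plan with the best score."""
    return max(options, key=lambda o: o[0])

def satisfice_pick(options, threshold=15.0):
    """'Find the first pretty good option and stop searching.'"""
    for o in options:
        if o[0] >= threshold:
            return o
    return options[0]

def softmax_pick(options, temperature=20.0):
    """Sample options with probability increasing in the proxy score
    ("argmax -> softmax", as in footnote 1), rather than always taking the top."""
    top = max(o[0] for o in options)
    weights = [math.exp((o[0] - top) / temperature) for o in options]
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for o, w in zip(options, weights):
        acc += w
        if acc >= r:
            return o
    return options[-1]

def average_true_value(pick, trials=1000):
    return sum(pick(make_options())[1] for _ in range(trials)) / trials

# With these toy numbers, the hardest optimizer on the proxy tends to do worst
# on the true objective, because the top-scoring plans are exactly the ones
# where the proxy and the purpose have come apart.
for name, pick in [("argmax on proxy", argmax_pick),
                   ("satisficing", satisfice_pick),
                   ("softmax sampling", softmax_pick)]:
    print(f"{name:18s} average true value: {average_true_value(pick):6.1f}")
```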

I'm generally a big fan of #1. Of course it's possible to go overboard, but I think it's often worth spending 3-30% of the time you'll spend on an activity reflecting on the purpose.[3] And it doesn't have much downside beyond the time cost.

Of course you'd like to sequence things such that you do the reflection on the purpose first ("premature optimization is the root of all evil"), but even then we're usually acting based on an imperfect understanding of the purpose, which means that more optimization for the purpose doesn't necessarily lead to better things. So some combination of #1 and #2 will often be best.

When is lots of optimization for a purpose good?

Optimization for a purpose is particularly good when:

  • you understand the purpose well enough that doing better on your explicit objective reliably means doing better on what you actually care about
  • your search process isn't quietly discarding the implicit knowledge baked into the default options

See also Perils of optimizing in social contexts [LW · GW] for an important special case where it's worth being wary about optimizing.

(cross-posted [EA · GW] from the EA Forum)

  1. ^

    I owe this general point, which was the inspiration for the post, to Jan Kulveit, who expressed it concisely as "argmax -> softmax".

  2. ^

    This takes advantage of the fact that his gut is often implicitly tracking things, without needing to do the full work of reflecting on the purpose to make them explicit.

  3. ^

    As a toy example, suppose that every doubling of the time you spend reflecting on the purpose helps you do things 10% better; then you should invest about 12% of your time reflecting on purpose [source: scribbled calculation]. 
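
    One way such a calculation might go (my reconstruction of the model, not the author's actual scribble) recovers roughly the 12% figure:

    ```latex
    % Spend a fraction f of the total time reflecting and 1-f executing.
    % Assume each doubling of reflection time multiplies the quality of the
    % execution by 1.1, so quality scales like f^{\log_2 1.1}.
    \[
      V(f) \;\propto\; (1 - f)\, f^{\log_2 1.1},
      \qquad
      \frac{d}{df}\Big[\ln(1-f) + \log_2(1.1)\,\ln f\Big] = 0
      \;\Longrightarrow\;
      f^{*} = \frac{\log_2 1.1}{1 + \log_2 1.1}
            \approx \frac{0.138}{1.138}
            \approx 0.12 .
    \]
    ```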

    Activities will vary a lot on how much you actually get benefits from reflecting on the purpose, but I don't think it's that unusual to see significant returns, particularly if the situation is complicated (& e.g. involving other people very often makes things complicated).

6 comments

Comments sorted by top scores.

comment by anonymousaisafety · 2022-06-16T22:58:44.308Z · LW(p) · GW(p)

My reply is focused on this specific statement:[1]

sometimes trying to [over] optimize can lead to worse outcomes 

There is something known as the performance/robustness tradeoff in control theory. Control theory[2] is the study of dynamic (e.g. autonomous) systems, and I have no idea why it is not more commonly cited on this forum.

The mathematical description of this gets a little unwieldy, so I'm going to simplify. Note that everything I'm about to say describes ideal systems; real systems are actually worse.

Higher-performance systems are less stable than lower-performance systems. For an intuitive idea of why this might be the case, consider a system where you want to hold some variable at a setpoint, like the temperature in a room. If you correct the error gradually, in proportion to how large it currently is, you have what is called a proportional (P) controller.

Consider the following picture of a step response.[3]

You might want a faster response, so you try something clever: add a term to the controller for how quickly the temperature is changing. Now you have a proportional-derivative (PD) controller. There is a tradeoff: by making the system more responsive, we've made it less stable. It is now possible for our controller to oscillate out of control.

Here is a picture of various step responses.[4] Take note of the unstable and marginally stable cases.
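
A minimal numerical sketch of these step responses (an assumed toy room-heater model with invented constants and gains, not taken from the comment): the same proportional controller, tuned gently versus aggressively, on a plant with a modest actuator lag. The aggressive loop reaches the setpoint much sooner but overshoots and rings.

```python
def simulate(kp, t_end=20.0, dt=0.05):
    """Toy heating loop: a proportional controller commands a heater whose
    output lags the command (first-order actuator). Time is in arbitrary
    units; all constants are invented for illustration."""
    setpoint = 20.0
    tau_act = 1.0                 # actuator time constant
    temp, heat = 15.0, 0.0        # start at ambient, heater off
    trace = []
    for _ in range(int(t_end / dt)):
        error = setpoint - temp
        command = kp * error                      # proportional control law
        heat += dt * (command - heat) / tau_act   # heater lags the command
        temp += dt * heat                         # temperature follows heat input
        trace.append(temp)
    return trace

def summarize(kp, dt=0.05):
    trace = simulate(kp, dt=dt)
    overshoot = max(trace) - 20.0
    rise = next((i * dt for i, t in enumerate(trace) if t >= 19.5), None)
    rise_txt = f"{rise:.2f}" if rise is not None else ">20"
    print(f"Kp={kp:5.1f}: reaches 19.5 C at t={rise_txt}, peak overshoot {overshoot:+.2f} C")

summarize(0.2)   # gentle tuning: slow, but no overshoot
summarize(10.0)  # aggressive tuning: much faster, but overshoots and rings
```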

The phrases to know are "gain margin" and "phase margin".

Gain margin is about how robust your system is in magnitude: if the error is larger or smaller than expected, how well does the system correct it? You can think of it like trying to keep a bouncing spring in place by hitting it with a hammer: if you hit it too hard, it'll oscillate in a way you don't like.

Phase margin is about how robust your system is in time. To continue the previous example: you're controlling some external actuator, i.e. the hammer, and there's some delay between when you need to swing and when the swing actually lands. If that delay is too large, the system will respond differently. In fact, if the delay lines up badly with the system's oscillation frequency, the correction arrives in phase with the error, adds energy into the system, and drives it unstable.[5]
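
To make the phase-margin point concrete, here is the same toy loop as in the sketch above with one change: the controller only sees the temperature measurement after a fixed delay. The gently tuned loop barely notices; the aggressively tuned loop, which had little phase margin to spare, goes unstable. Again, the model and numbers are invented for illustration.

```python
from collections import deque

def simulate_with_delay(kp, delay=0.3, t_end=30.0, dt=0.05):
    """Same toy heater loop as before, but the measurement the controller
    acts on is 'delay' time units old. Returns the temperature trace."""
    setpoint, tau_act = 20.0, 1.0
    temp, heat = 15.0, 0.0
    # buffer of stale measurements, initially all at the starting temperature
    delayed = deque([temp] * max(1, int(round(delay / dt))))
    trace = []
    for _ in range(int(t_end / dt)):
        measured = delayed.popleft()              # controller sees old data
        delayed.append(temp)
        command = kp * (setpoint - measured)
        heat += dt * (command - heat) / tau_act
        temp += dt * heat
        trace.append(temp)
    return trace

for kp in (0.2, 10.0):
    trace = simulate_with_delay(kp)
    late_error = max(abs(t - 20.0) for t in trace[len(trace) // 2:])
    print(f"Kp={kp:5.1f} with a 0.3-time-unit measurement delay: "
          f"largest error over the second half of the run = {late_error:.3g} C")
```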

The controllers I described above are members of the PID (proportional, integral, derivative) family; I gave examples of a P controller and a PD controller. Normally you use a PI controller, because the integral term drives the error to zero over time, which is necessary when your system has friction, a dead zone, or some other bias that prevents a pure proportional controller from settling at the setpoint.
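
A sketch of why the integral term matters (again with an assumed toy model and invented numbers): the same kind of toy room, but now with a constant heat leak the controller has to fight. A purely proportional controller settles with a persistent offset; adding an integral term drives the error to zero.

```python
def run(kp, ki, t_end=80.0, dt=0.05):
    """Toy room with passive heat loss plus a constant leak (the 'bias').
    Returns the final temperature error under a PI controller; set ki=0
    for a purely proportional controller. Constants are invented."""
    setpoint, ambient = 20.0, 15.0
    k_loss, leak = 0.1, 2.0
    temp, integral = ambient, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - temp
        integral += error * dt                   # accumulate error over time
        heat = kp * error + ki * integral        # PI control law
        temp += dt * (heat - leak - k_loss * (temp - ambient))
    return setpoint - temp

print(f"P  controller (Kp=2):         final error = {run(kp=2.0, ki=0.0):+.2f} C")
print(f"PI controller (Kp=2, Ki=0.5): final error = {run(kp=2.0, ki=0.5):+.2f} C")
```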

There are various fancy controllers you'll hear about, like feed-forward, or MPC ("model predictive control"). The performance/robustness tradeoff applies to all of them. It is an iron law. It does not matter how fancy your controller gets: the fancier you make it, the more susceptible it is to going unstable. Basically, increasingly complicated controllers buy performance by baking assumptions about the physical world into the control loop. These assumptions are things like: how much bias is in the system, how quickly an error can change, what the largest step response we might need to achieve is. If those assumptions match reality, the controller will have very high performance and seem very stable. But if any of those assumptions are violated, that fancy controller might immediately go unstable. That's the price you pay for performance.

One way to think about this is the following thought experiment. You have a tradeoff between how well you can track a setpoint and how well you can reject disturbances. If you make it very difficult to knock a system off a setpoint, it'll reject disturbances well. However, a change in that setpoint might also look like a disturbance, and the system will be similarly sluggish to respond.

For real systems, a lot of the design effort goes into giving yourself enough gain and phase margin that you have an envelope of safety around the testing you're able to do. Think of it like the factor of safety used in construction. The bridge is built to be, say, 5x stronger than it needs to be. For this reason, and contrary to claims made on this forum, real systems are not engineered to the theoretical limit of performance or "efficiency".
 

  1. ^

    Bob is over-optimizing towards higher performance ("faster arrival") solutions that have increasingly higher risks of catastrophic failure ("death due to crashes from violating speed limits").

  2. ^
  3. ^
  4. ^
  5. ^

    Lack of phase margin is also what stops "I will simply control the robot over the network" ideas from working -- if the phase margin is insufficient, the delay incurred over the network will make it impossible for the remote actuators to be controlled in response to disturbances with any degree of accuracy.

Replies from: katy-kelly, Emrik North
comment by Katy Kelly (katy-kelly) · 2022-06-16T23:44:20.667Z · LW(p) · GW(p)

This was such a good read, I made an account to say that it should be a post in and of itself.

This example gave me a big aha about left/right political divides. 

"One way to think about this is the following thought experiment. You have a tradeoff between how well you can track a setpoint and how well you can reject disturbances. If you make it very difficult to knock a system off a setpoint, it'll reject disturbances well. However, a change in that setpoint might also look like a disturbance, and the system will be similarly sluggish to respond."

Spitballing / overgeneralizing: 

Maybe the right could be seen as the part of society better at rejecting disturbances, and the left the side that's better at tracking changes in the set point. 

Makes sense of why conservative areas often seem to be more stable (and why most cultures have all these weird, unnecessary taboos - they're over-rejecting), and why the left tends to be better at art, and most high performance cities are left leaning (they're tracking the set point), but also generally less stable (they're overly responsive).

comment by Emrik (Emrik North) · 2022-08-31T19:30:18.999Z · LW(p) · GW(p)

Productivity and akrasia are neighbouring valleys in a bistable system. If you're productive, you can keep up behaviour which lets you continue to be productive (e.g. get your tasks done, sleep well, exercise). If you seem to be behind on your tasks one day, it stresses you out a little, so you put some extra effort in to return to equilibrium. But if you're too far behind one day, your stress level shoots through the roof, so you put in a lot of extra effort, so you sleep less, so you have less effort to put in, so your stress level increases--and either you persevere gloriously because you tried really hard, or you fall apart. Make an ill-advised bet and you end up in the akratic equilibrium, and climbing back up will be rough.

But putting in extra effort is not the only response you have in order to decrease stress (sometimes). You can also give up on some of your plans and prioritise within what you can manage. Throwing your plans overboard gives you no chance of success, but it could make your productivity loop more robust. This has to be managed against the risk of degrading the strength of your habits, however. You're a finely-tuned multidimensional control system, and there are pitfalls in every direction.

  • The Pygmalion effect is a psychological phenomenon in which high expectations lead to improved performance in a given area.
  • It always takes longer than you expect, even when you take into account Hofstadter's Law.
  • Work expands so as to fill the time available for its completion.
  • The demand upon a resource tends to expand to match the supply of the resource.

When do you throw out luggage? When do you let out steam? If propositional attitudes are part of your control loop, how do you consciously manage it so conscious management doesn't interfere with the loop? Without resorting to model [LW · GW]-dissolving [LW · GW] outside-view perspectives, I mean.

Replies from: Emrik North
comment by Emrik (Emrik North) · 2022-08-31T19:43:26.417Z · LW(p) · GW(p)

Modest epistemology and hubris are bistable as well. You need hubris so that you have the self-confidence required to produce anything worthwhile. Grr, need a better word for hubris.

comment by Shmi (shminux) · 2022-06-16T21:33:59.051Z · LW(p) · GW(p)

You might be reinventing slack [? · GW].

Replies from: owencb
comment by owencb · 2022-06-16T23:01:06.534Z · LW(p) · GW(p)

Interesting, I think there's some kind of analogy (or maybe generalization) here, but I don't fully see it.

I at least don't think it's a direct reinvention because slack (as I understand it) is a thing that agents have, rather than something which determines what's good or bad about a particular decision.

(I do think I'm open to legit accusations of reinvention, but it's more like reinventing alignment issues.)