Non-Book Review: Patterns of Conflict

post by johnswentworth · 2020-11-30T21:05:24.389Z

Soviet MiG-15 (left) and American F-86 (right)

From 1951 to 1953, the Korean War saw the first major dogfights between jet aircraft - mainly the American F-86 and the Soviet MiG-15. On paper, the MiG-15 looked like the superior aircraft: it could out-climb, out-accelerate, out-turn, and generally out-perform the F-86 on most performance measures.

US Air Force colonel and fighter pilot John Boyd, however, would argue that these are the wrong metrics to consider. Winning a dogfight isn’t about out-accelerating the enemy, it’s about outsmarting the enemy. It’s about being able to predict their next move faster than they can predict your next move. It’s about “getting inside their OODA loop” - observing, orienting, deciding and acting faster than they can, making a decisive move before they have an opportunity to respond.

To evaluate the F-86 vs the MiG-15, we need to look at the whole OODA loop - observation, orientation, decision and action. Looking at aircraft performance metrics only tells us about the “action” component - i.e. which actions the aircraft allows the pilot to take. If we look at the other components, the MiG looks less dominant:

Even in terms of performance, the MiG wasn’t strictly dominant - the F-86 could dive faster, for instance. The MiG could gain energy faster, but the F-86 could lose energy faster - and if the goal is to move in a way the enemy doesn’t expect or can’t match, that’s often just as good.

So under this view, the F-86 had the advantage in many ways. An F-86 pilot could see the enemy better and keep better track of what was going on. Even if the MiG could outmaneuver them in theory, F-86 pilots could make good decisions faster, leaving the MiG pilots confused until the F-86s eventually gunned them down. Statistics from the war back up this picture - both sides wildly exaggerated kill ratios in their favor, but however we slice the data, the F-86 beat the MiG-15 at least as often as it lost, and a kill ratio of 2:1 in favor of the F-86 is a typical estimate. Even against more experienced MiG pilots, the F-86 traded slightly better than 1:1.

Beyond Dogfights

The real insight is that these ideas apply beyond dogfights. Boyd’s real claim to fame - and the topic of his Patterns of Conflict presentation - was turning “get inside the enemy’s OODA loop” into the foundational idea of military doctrine, from the tactical level (i.e. maneuvering on the battlefield) to the strategic (i.e. choosing which battles to fight). Patterns of Conflict illustrates the idea through six hours of examples from military history. In the interest of time, I’ll instead use an abstract example.

A group of soldiers in the Blue Army has been ordered to take a hill currently occupied by Green artillery. Unbeknownst to them, it’s a trap: a larger group of Green soldiers hides in some trees near the hill. Ideally, from the Blue perspective, the trap will be seen through and their soldiers will pull back rather than be wiped out. But there are a lot of ways that can go wrong…

No Observation

The simplest failure mode is that the hidden Greens may not be seen at all, at least not before the Blue forces are past the point of no return. The Greens can camouflage themselves, set up smokescreens, or actively target Blue scouts/surveillance/intelligence.

Lack of information is deadly.

No Integration of Information

A more subtle failure mode is that the Blues may “see” the Green trap, but may not put the pieces together. Perhaps a scout on the other side of the trees notices tracks leading in, but that part of the force is under a separate command chain and the information never gets where it needs to go. Perhaps the Greens are able to interfere with communications, so the information can’t get to anyone in time, or isn’t trusted. Perhaps the Greens leave fake tracks into many different clumps of trees, so the Blues know that there may be some Greens hiding somewhere but have no idea where or whether there’s actually anyone in the trees at all. Perhaps the Blues can’t see into the trees, and know that the Greens like to set up ambushes, but the Blues don’t think about the fact that they can’t see into the trees - they forget to account for the uncertainty in their own map. Or maybe they do account for the uncertainty, but take the risk anyway.

Ambiguity and lack of communication are, from the Greens’ perspective, very suitable substitutes for outright lack of information or deception. Exactly this sort of tactic was prominent in WWII - for instance, the US fielded an entire “ghost army” of 1100 actors and set designers, complete with inflatable tanks and fake radio transmissions.

Inflatable tank. That’s some Scooby-Doo shit right there.

No Decision

Now we get into the classic problem of hierarchical management: the Blue attackers notice the ambush waiting in the trees, but they’ve been ordered to take the hill, so they charge in anyway. The people on the ground don’t have the authority to make decisions and change plans as new information arrives, and passing decisions back up the command chain would be prohibitively slow.

At first glance, this looks like simple inefficiency in the Blues’ organization, but the Greens do actually have some methods to create this sort of inefficiency. In order to counter the problem, the Blues need to delegate significant authority to lower-level officers at the front - but this requires that they trust those lower-level officers to make the right decisions. If the Greens pick off competent officers, or place agents in the Blues’ organization, then the Blues won’t be able to trust their low-level officers’ decisions as much - so they’ll be forced to rely on more centralized, command-and-control decision making.

This sort of strategy plays a major role in guerrilla warfare, a la Mao. One does not pick off the incompetent people within the enemy’s hierarchy; they are assets. And if the enemy knows you have agents within their organization, that’s not a bad thing - their own attempts to tighten control will leave the organization less flexible and responsive, and that lack of flexibility and responsiveness is exactly the weakness which guerrilla warfare attempts to leverage.

On the flip side, the German command structure in WWII provides a positive example of an organizational structure which does this well. Officers were generally given thorough and uniform training, allowing them to understand how other officers thought and coordinate with minimal communication. Low-level officers were given considerable decision latitude - their orders came with a “schwerpunkt”, a general objective, with the details left flexible. One good example of this in everyday life: a group of friends is meeting at a movie theater for a new release. If a few are late, the others know to buy them tickets and save seats - they coordinate even without communication.

No Action

Finally, even if the Blues see the Green ambush, pass that information where it needs to go, and decide to abort the attack, they may be physically incapable of calling it off. Maybe the attackers are already in too deep, or maybe lines of communication have been cut between the decision-makers and the attackers.

At the end of the day, nothing else matters unless there’s at least some action we can take which is less bad than the others.

Lessons for Organizations

Going really abstract for a moment: conflict is a game, in which “players” make moves. The point of an OODA loop is to see what move the other players are making, and choose our own best move in response. We want to make our move a function of their move, rather than just blindly choosing a move independent of what the other players are doing. In order to do that, we need to observe whatever we can about other players’ behavior, integrate that information into a model/map to predict what they’re going to do, decide what to do in response, and then act on it. Observation, orientation, decision, action - and it all has to happen fast enough that we can act after we have enough information to figure out the other players’ actions, while still acting either before they do or at the same time.
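To make the “our move as a function of their move” framing concrete, here’s a minimal toy sketch in Python. Everything in it - the class, the method names, the “counter” moves - is invented for illustration, not taken from Boyd. The point is just that both players run the same observe-orient-decide-act pipeline, but the player whose loop completes faster is always reacting to the other’s newest move, while the slower player is reacting to a move that’s already out of date:

```python
class Player:
    """Toy agent running an OODA loop. All names here are illustrative."""

    def __init__(self, name):
        self.name = name
        self.model = None  # the "map" of the other player built during orientation

    def observe(self, enemy_move):
        return enemy_move  # raw information: the other player's last visible move

    def orient(self, observation):
        self.model = observation  # integrate the observation into a predictive model
        return self.model

    def decide(self, model):
        # the decision is a *function of* the predicted enemy move, not chosen blindly
        return f"counter({model})"

    def act(self, decision):
        return decision


def engagement(fast, slow, rounds=3):
    """The fast player completes a full loop within each round, so it reacts to the
    slow player's newest move; the slow player only ever reacts to the previous one."""
    fast_move = slow_move = "probe"
    for t in range(rounds):
        # slow player's loop finishes late: it responds to last round's fast_move
        slow_move = slow.act(slow.decide(slow.orient(slow.observe(fast_move))))
        # fast player's loop finishes within the round: it responds to the move just made
        fast_move = fast.act(fast.decide(fast.orient(fast.observe(slow_move))))
        print(f"round {t}: {fast.name} plays {fast_move}, {slow.name} plays {slow_move}")


engagement(Player("F-86"), Player("MiG-15"))
```

In game terms, the fast player’s move is a function of the slow player’s actual move, while the slow player’s move is a function of a stale observation - which is roughly what “getting inside the OODA loop” cashes out to in this toy model.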

The “action” parts are usually pretty obvious - we have to be physically able to make a move. The “observation” parts are sometimes overlooked (as in the F-86 vs MiG-15 example, it’s often hard to notice the value of information), but also pretty obvious once you think about it. Personally, I think the real meat of OODA loops is in the “orient” and “decide” steps, especially when applying the idea to organizations.

The point of orientation and decision is to make our own actions a function of the actions of others, while still acting before or at the same time as they do. Drawing an analogy to biology: what’s interesting about living things is that they act differently in response to different environments, performing actions which are tailored to their circumstances. Their actions are a function of their environment (including other organisms in the environment). Even single-celled organisms (think E. coli) observe their environment and act differently depending on what they observe.

How do we make organizations behave like organisms? How do we make the actions of the organization a function of the environment? The hard parts of this, in the OODA framework, are orientation and decision. Orientation: information needs to be passed to the right people, and integrated together to answer decision-relevant questions about the environment. Decision: the people who have the information need to decide what to do in order to further the organization’s goals. And the real challenge: for ideal performance, this all needs to be done locally; a centralized command will have neither the speed nor the capacity to integrate all information and make all decisions about everything.

Boyd provides insights on this mainly by thinking about how to break it. How can we make an organization unable to effectively pass around and integrate information? We can cut lines of communication. We can bombard them with noise - i.e. fake information, like the ghost army. We can take hard-to-predict actions in general - keep our options open, maybe flip a coin every now and then. How can we make an organization unable to make decisions based on their information? Make them paranoid - pick off competent people, insert agents, generally force them to rely less on low-level decision making and more on centralized command-and-control. On our side, we want our people to understand each other well and be able to coordinate without extensive communication or centralized intervention - communication is slow and expensive, and central command is a bottleneck.

Moving beyond military applications, incentives are one piece of the puzzle - we want to incentivize individuals to act in the organization’s interest. But I think the bigger message here is that it’s really an information processing problem. How do we get information where it needs to go, without centralized integration? How can distributed decision-makers coordinate without slow, expensive communication? Even the incentives problem is largely an information problem - how do we communicate a general objective (i.e. schwerpunkt) to low-level decision makers, while still leaving them the freedom to make decisions based on new information as it comes up? Even assuming everyone is acting in the organization’s best interest, it’s still hard to pass information and coordinate when there’s too much information (including background knowledge of different specialists) for everyone to know everything all the time. These are the information-processing problems involved in designing an organization to have a fast, efficient OODA loop.
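To make the schwerpunkt idea slightly more concrete, here’s a minimal sketch in Python - the units, options, numbers, and scoring rule are all made up, and it illustrates the information flow rather than how any real organization works. The center communicates a single objective (here, a scoring function), and each unit picks whichever of its locally visible options scores best, with no round trip back to headquarters:

```python
def objective(option):
    """The communicated intent, expressed as a scoring rule: gain ground cheaply,
    and never take a fight we expect to lose badly. (Numbers are arbitrary.)"""
    if option["expected_losses"] > 10 * option["strength_ratio"]:
        return float("-inf")  # unacceptable no matter how much ground it gains
    return option["ground_gained"] - 2 * option["expected_losses"]


def local_decision(unit_name, local_options):
    """Each unit evaluates only the options it can see from its own position and
    picks the one that best serves the shared objective - no report, no waiting."""
    best = max(local_options, key=objective)
    print(f"{unit_name}: {best['name']}")
    return best


# Two units facing different local situations make different choices,
# yet both choices are a function of the same communicated intent.
local_decision("Blue 1st company", [
    {"name": "take the hill", "ground_gained": 5, "expected_losses": 8, "strength_ratio": 0.5},
    {"name": "pull back and screen", "ground_gained": 0, "expected_losses": 1, "strength_ratio": 1.0},
])
local_decision("Blue 2nd company", [
    {"name": "take the bridge", "ground_gained": 4, "expected_losses": 1, "strength_ratio": 2.0},
    {"name": "hold position", "ground_gained": 0, "expected_losses": 0, "strength_ratio": 1.0},
])
```

In this sketch the first company walks away from the trap on the hill while the second presses its attack, even though headquarters never saw either situation - the shared objective plus local information is enough.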

2 comments


comment by ryan_b · 2020-12-01T20:16:41.098Z

John Boyd is one of those cases where having someone more dedicated to writing down his ideas would have been hugely helpful. This is the largest single deposit of his ideas that he ever produced, and his legacy among the military really does mostly boil down to his giving the same presentation to hundreds of officers.

There are a few books by his understudies and collaborators in the military, but as far as I could tell they don't contain much more detail about the inputs to Boyd's thought process. I picked Science, Strategy and War: The Strategic Theory of John Boyd for that purpose, which investigates these elements more rigorously. I haven't finished it yet; it suffers from the combination of being stiff to read and also put-it-down-and-think-about-it-y.

Turning to the organizational side of things, there are a few spots in the business literature which emphasize decisions as the center of gravity. About two years ago I found one on HBR from 2010 about organizing the whole company around decisions. This is an appealing idea to me, because on reflection company growth seems to be a weirdly thoughtless process, wherein people notice they have problem X, so they hire a team dedicated to X, assign all X-like responsibilities to that team, and then do the same with problem Y. This is fast and simple, but it breaks as soon as X, Y, and Z are sub-problems of a larger problem. As an added benefit, if you commit to decisions up front, you have no choice but to resolve the strategy questions explicitly - otherwise you have nothing to go by.

In general, it seems like specialization of labor never works unless there are very few degrees of freedom.

There's another HBR article which is about Kahneman's forthcoming book on variance in decision making. The pitch there is that even if you have professional experts, with access to the correct information, endowed with the power to make the necessary decisions...the decisions they actually make are all over the map. Even two different experts in the same company. Even the same expert at two different times. That being said, it remains the case:

Make a decision > Abide by decision > Goodness of decision

I think this is why I am so enamored of reasoned rules.

comment by adamShimi · 2020-12-01T13:53:27.987Z

Really cool post.

While reading, I kept thinking back to the theory of distributed computing. It doesn't apply to everything here, but at least for orientation there's a clear analogy: how do you propagate information from one node to the whole system? (One example is the gossip problem.) Some of the failure modes are also clearly related, in that they make communication or synchronization between nodes more uncertain.
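(As a toy illustration of that analogy - the parameters and structure here are entirely made up - a minimal push-style gossip simulation in Python: each round, every node that already has the information forwards it to a few peers chosen at random, and the whole system gets informed within a handful of rounds.)

```python
import random

def push_gossip(num_nodes=100, fanout=3, seed=0):
    """Minimal push-gossip sketch: an illustration, not a real protocol.
    Each round, every informed node pushes the message to `fanout` random peers."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the information
    rounds = 0
    while len(informed) < num_nodes:
        targets = set()
        for _ in informed:
            targets.update(rng.randrange(num_nodes) for _ in range(fanout))
        informed |= targets
        rounds += 1
        print(f"round {rounds}: {len(informed)}/{num_nodes} nodes informed")
    return rounds

push_gossip()
```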

One good example of this in everyday life: a group of friends is meeting at a movie theater for a new release. If a few are late, the others know to buy them tickets and save seats - they coordinate even without communication.

In the same spirit as my previous paragraph, this reminds me of the big simplifying hypothesis in distributed computing that every node is running the same program. It helps with that kind of synchronization.