A Theory of Equilibrium in the Offense-Defense Balance

post by Maxwell Tabarrok (maxwell-tabarrok) · 2024-11-15T13:51:33.376Z · LW · GW · 6 comments

This is a link post for https://www.maximum-progress.com/p/a-theory-of-equilibrium-in-the-offense

Contents

  Peak Oil
  Offense-Defense Balance
6 comments

The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources. For example, autonomous weapons and AI systems might make attacks easier and more scalable compared to defensive measures. The balance matters because when offense has the advantage, it can create instability and increase the likelihood of conflict.

I claim that worries over a massively upset offense-defense balance make the same mistake as peak oil apocalypse prophecies.

Peak Oil

Peak oil hysteria was seeded in the environmentalism and energy crisis of the 1970s and reached its apogee in the early 2000s with a flood of books, documentaries, and movements predicting civilizational collapse.

In some ways, the predictions of peak oil prophets did come true. Oil prices rose to an all-time peak in 2008, and US oil consumption peaked around the same time. Few, if any, of the forecasted societal or economic collapses came to pass, though.

The problem with the peak oil predictions was selective extrapolation: they projected some trends forward while holding everyone's adaptation constant. They projected rising oil prices and squabbles over the remaining finite supply that would force more and more uses for oil to shut down, but they didn't project the new substitutes and oil sources that those same high prices would create.

When oil prices rise from $100 to $200 a barrel, it doesn't double everyone's energy costs. A $200 oil price doesn't force all plastic manufacturers to pay $200 for their chemical inputs, for example. It forces them to substitute away from petroleum and use $150 of vegetable oil instead. The new $200 price is an upper bound on what previous oil consumers pay after the price increase. Any consumer with a substitute that costs less than $200 will use it instead. So even in the most extreme world where oil runs out, most oil consumers aren't faced with infinite costs and forced to shut down. Instead, they are only forced to pay for their next best substitute.
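
To make the substitution logic concrete, here is a toy sketch in Python. The consumers and numbers are hypothetical, chosen only to illustrate that each consumer pays the minimum of the new oil price and its next best substitute:

```python
# Toy model of substitution after a price shock (all numbers hypothetical).
# Each consumer pays min(new oil price, cost of its next best substitute),
# so the new price is only an upper bound on realized costs.

OIL_PRICE_NEW = 200  # dollars per barrel-equivalent after the shock

# Hypothetical next-best-substitute costs, in dollars per barrel-equivalent.
consumers = {
    "plastics manufacturer": 150,   # vegetable-oil feedstock
    "power plant": 120,             # natural gas
    "airline": float("inf"),        # no practical substitute for jet fuel
}

for name, substitute_cost in consumers.items():
    effective_cost = min(OIL_PRICE_NEW, substitute_cost)
    print(f"{name}: pays {effective_cost} per barrel-equivalent")
```

Only the consumer with no substitute pays the full new price; everyone else's cost is capped by their substitute.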

Offense-Defense Balance

As an example of how this mistake applies to the offense-defense balance, consider presidential assassination. In the 19th century, guns were inaccurate at long range, so assassins needed to get close. Security used this defensive advantage to protect the president, but some assassins still slipped through. Accurate, long-range rifles were a massive upset to the offense-defense balance of presidential assassination. Instead of needing to sneak up on the president in a theatre, you could sit half a kilometer away and take shots from there. And these accurate guns are cheap and accessible.

But it would be a mistake to predict some large increase in the rate of presidential assassination. That prediction only makes sense if you extrapolate one trend in offensive capabilities while holding everyone else's adaptation constant. A rise in the offensive capability of assassins doesn't force the Secret Service to accept a higher rate of presidential assassination. It just forces them to buy The Beast and put the president behind bulletproof glass.

Applying this to the future, consider drone-powered assassination. Assassin drones with bombs strapped to them will be small, fast, cheap, and potentially autonomous. This will make it easier to attempt to assassinate world leaders, but again it would be a mistake to project a new era of instability and terrorism based on this change. The death of a world leader is extremely costly, so any adaptation the Secret Service can use to neutralize drone assassins that is cheaper than letting more presidents die will be used. The effect of this shift in the offense-defense balance won't be more deaths; it will be investment in powerful EMPs, counter-drones, or good old bulletproof glass.

The usual extrapolations of the offense-defense balance are upper bounds on the costs that new offensive technologies can impose on defenders. If new technology enables $10,000 of drones to destroy a $10,000,000,000 aircraft carrier, the actual cost imposed on defenders will not be the loss of all their carriers. As long as there is any investment they can make that costs less than $10,000,000,000 and neutralizes the drones, they will make it. Thus, shifts in the offense-defense balance are attenuated and somewhat self-balancing: the more value that is imperiled by a new offensive technology, the more defenders can afford to spend on options that neutralize it.
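
The same threshold logic can be written as a one-line decision rule. Here is a minimal sketch, with made-up countermeasure names and costs, showing that the defender's realized loss is capped by the cheapest effective countermeasure rather than by the value of the target:

```python
# Toy decision rule (all names and costs hypothetical): a defender adopts
# any countermeasure cheaper than the value the new offense puts at risk,
# so realized losses are capped by the cheapest effective option.

value_at_risk = 10_000_000_000  # the aircraft carrier

countermeasures = {
    "point-defense lasers": 50_000_000,
    "escort counter-drones": 20_000_000,
    "accept the loss": value_at_risk,
}

best = min(countermeasures, key=countermeasures.get)
print(f"Defender chooses: {best} at ${countermeasures[best]:,}")
```

Note that the $10,000 cost of the attack never appears in the defender's loss; what matters is the gap between the cheapest countermeasure and the value defended.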

When the offense-defense balance changes due to some technology, prices and investment adjust to maintain a more stable equilibrium.

6 comments

Comments sorted by top scores.

comment by romeostevensit · 2024-11-15T22:41:00.452Z · LW(p) · GW(p)

This elides the original argument by assuming the conclusion: that countermeasures remain cheap relative to the innovations. But the whole point is that significant shifts in the cost of maintaining a given level of defense can substantially change behaviors, and change which plans and supply chains are economically defensible.

Replies from: maxwell-tabarrok
comment by Maxwell Tabarrok (maxwell-tabarrok) · 2024-11-16T15:01:15.742Z · LW(p) · GW(p)

Yeah, there can definitely still be imbalances/extra costs imposed on defenders, but the point I'm making is that the projections people make are very often large overestimates of what those costs will be.

comment by Dagon · 2024-11-15T18:21:41.212Z · LW(p) · GW(p)

I think this is the right way to think of most anti-inductive (planner-adversarial or competitive-exploitation) situations. Where there are multiple dimensions of asymmetric capabilities, any change is likely to shift the equilibrium, but not necessarily by as much as the shift in the component.

That said, tipping points are real, and sometimes a component shift can have a BIGGER effect, because it shifts the search to a new local minimum. In most cases, this is not actually entirely due to that component change; rather, the discovery and reconfiguration is triggered by it. The rise of mass shootings in the US is an example: there are a lot of causes, but the shift happened quite quickly.

Offense-defense is further confused as an example, because there are at least two different equilibria involved. When you say

The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources.

Conquer control vs retain control is a different thing than destroy vs preserve.  Frank Herbert claimed (via fiction) that "The people who can destroy a thing, they control it." but it's actually true in very few cases.  The equilibrium of who gets what share of the value from something can shift very separately from the equilibrium of how much total value that thing provides.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-26T22:47:39.432Z · LW(p) · GW(p)

I think you raise a good point about Offense-Defense balance predictions. There is an equilibrium around effort spent on defense, which reacts to offense on some particular dimension becoming easier. So long as there's free energy which could be spent by the defender to bolster their defense and remain energy positive or neutral, and the defender has the affordances (time, intelligence, optimization pressure, etc.) to make the change, then you should predict that the defender will rebalance the equation.

That's one way things work out, and it happens more often than naive extrapolation predicts because often the defense is in some way novel, changing the game along a new dimension.

On the other hand, there are different equilibria that adversarial dynamics can fall into besides rebalancing. Let's look at some examples.

 

GAN example

A very clean example is Generative Adversarial Networks (GANs), which let us strip away many of the details and look at the patterns that emerge from the math. GANs are inherently unstable because they have three equilibrium states: dynamic fluctuating balance (the goal of the developer, and the situation described in your post), attacker dominates, and defender dominates. Anytime the system gets too far from the central equilibrium of dynamic balance, it falls toward the nearer of the other two states, and then stays there, without hope of recovery.
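
One way to see this in miniature: the bilinear min-max game min_x max_y xy is a standard toy model for adversarial training dynamics (it is not a full GAN; the setup and learning rate below are just illustrative). Naive simultaneous gradient updates spiral away from the balanced equilibrium at the origin instead of settling into it:

```python
# Toy model of adversarial instability: simultaneous gradient
# descent/ascent on f(x, y) = x * y. The balanced equilibrium is
# (0, 0), but the iterates spiral outward instead of converging.

x, y = 0.5, 0.5  # start near, but not at, the balanced equilibrium
lr = 0.2         # illustrative learning rate

for step in range(201):
    grad_x = y   # df/dx; x plays minimizer, so it descends
    grad_y = x   # df/dy; y plays maximizer, so it ascends
    x, y = x - lr * grad_x, y + lr * grad_y
    if step % 40 == 0:
        norm = (x * x + y * y) ** 0.5
        print(f"step {step:3d}: distance from equilibrium = {norm:.3f}")
```

The distance from equilibrium grows on every step, which is the "fall away from the central balance" pattern in its simplest form.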

 

Ecosystem example

Another example I like to use for this situation is predator-prey relationships in ecosystems. The downside of this example is that there is a disanalogy to competition between humans: the predators and prey being examined have relatively little intelligence, and most of the optimization comes from evolutionary selection pressure. On the plus side, we have a very long history with lots and lots of examples, and these examples occur in complex multi-actor dynamic settings with existential stakes. So let's take a look at an example.

Ecosystem example: Foxes and rabbits. Why do the foxes not simply eat all the rabbits? Well, there are multiple reasons. One is that the foxes depend on a supply of rabbits to have enough spare energy to successfully reproduce and raise young. As rabbit supply dwindles, foxes starve or choose not to reproduce, and fox population dwindles.  See: https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations 
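
Here is a minimal sketch of the linked Lotka-Volterra model (parameter values are made up for illustration, not fitted to any real population):

```python
# Naive Euler integration of the Lotka-Volterra predator-prey equations:
#   d(rabbits)/dt = alpha*rabbits - beta*rabbits*foxes
#   d(foxes)/dt   = delta*rabbits*foxes - gamma*foxes
# All parameters are illustrative.

alpha, beta = 1.1, 0.4   # rabbit birth rate, predation rate
delta, gamma = 0.1, 0.4  # fox gain per rabbit eaten, fox death rate
rabbits, foxes = 10.0, 5.0
dt = 0.01

for step in range(5001):
    d_rabbits = (alpha * rabbits - beta * rabbits * foxes) * dt
    d_foxes = (delta * rabbits * foxes - gamma * foxes) * dt
    rabbits += d_rabbits
    foxes += d_foxes
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}: rabbits={rabbits:6.2f}, foxes={foxes:6.2f}")
```

In the idealized model the two populations cycle indefinitely and neither goes extinct; the crude Euler stepping slowly distorts the orbits, but the oscillation is clear.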

In practice, this isn't a perfect model of the dynamics, because more complicated factors are almost always in play. There are other agents involved who interact in weaker but still significant ways. Mice can also be a food source for the foxes, and to some extent compete for some of the same food as rabbits. Viruses in the rabbit population spread more easily and have more opportunity to adversarially optimize against rabbit immune systems as the rabbit population increases, as sick rabbits are culled less, and as rabbit health declines due to competition for food and territory with other rabbits. Nevertheless, you do see Red Queen races occur in long-standing predator-prey relationships, where the two species gradually one-up each other (via evolutionary selection pressure) on offense, then defense, then offense.

The other thing you see happen is that the two species stop interacting. They become either locally extinguished such that they no longer have overlapping territories, or one or both species go completely extinct.

 

Military example

A closer analogy to the dynamics of future human conflict is... past human conflict. Before there were militaries, there were inter-tribal conflicts. Often there were similar armaments and fighting strengths on both sides, and the conflicts had no decisive winner. Instead the conflicts would drag on across many generations, waxing and waning with the pressures of resources, populations, and territory.

Things changed with the advent of agriculture and standing armies. War brought new dynamics, with winners sometimes exacting thorough elimination of losers. War has its own set of equations. When one human group decides to launch a coordinated attack against a weaker one, with the intent of exterminating it, we call this genocide. Relatively rarely do we see one ethnic group entirely exterminate another, because the dominant group usually keeps at least a few of the defeated as slaves and breeds with them. Humanity's history is pretty grim when you dig into the details of the many conflicts we've recorded.

 

Current concern: AI

The concern currently at hand is AI. AI is accelerating, and promises to also accelerate other technologies, such as biotech, which offer offensive potential. I am concerned about the offense-defense balance that the trends suggest. The affordances of modern humanity far exceed those of ancient humanity. I expect it to become possible in the next few years for an AI to produce a plan for a small group of humans to follow, instructing them in covertly gathering supplies, building equipment, and carrying out lab procedures in order to produce a potent bioweapon. This could happen because the humans were omnicidal, or because the AI deceived them about what the results of the plan would be. If it does, humanity may get no chance to adapt defensively. We may just go the way of the dodo and passenger pigeon. The new enemy may simply render us extinct.

 

Conclusion

We should expect the pattern of dynamic balance between adversaries to be maintained when the offense-defense balance changes relatively slowly compared to the population cycles and adaptation rates of the groups. When you anticipate a large rapid shift of the offense-defense balance, you should expect the fragile equilibrium to break and for one or the other group to dominate. The anticipated trends of AI power are exactly the sort of rapid shift that should suggest a risk of extinction.

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2024-11-26T22:55:57.189Z · LW(p) · GW(p)

Notably, in the ecosystem example, while the populations are constantly fluctuating, it's actually pretty difficult to generate a result that ends in one species's total extinction/genocide. So there is a global stability/equilibrium in the offense/defense balance, even if it's locally unstable.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2024-11-26T23:31:02.051Z · LW(p) · GW(p)

Yes, the environments / habitats tend to be relatively large / complex / inaccessible to the agents involved. This allows for hiding, and for isolated niches. If the environment were smaller, or the agents had greater affordances / powers relative to their environments, then we'd expect outcomes to be less intermediate, more extreme.

One can see this in the microhabitats of sealed jars with plants and insects inside. I find it fascinating to watch timelapses of such mini-biospheres play out. Local extinction events are common in such small closed systems.