Insights from 'The Strategy of Conflict'
post by DanielFilan · 2018-01-04T05:05:43.091Z · LW · GW · 13 comments
Cross-posted from my blog.
I recently read Thomas Schelling's book 'The Strategy of Conflict'. Many of the ideas it contains are now pretty widely known, especially in the rationalist community, such as the value of Schelling points when coordination must be obtained without communication, or the value of being able to commit oneself to actions that seem irrational. However, there are a few ideas that I got from the book that I don't think are as embedded in the public consciousness.
Schelling points in bargaining
The first such idea is the value of Schelling points in bargaining situations where communication is possible, as opposed to coordination situations where it is not. For instance, if you and I were dividing up a homogeneous pie that we both wanted as much of as possible, it would be strange if I told you that I demanded at least 52.3% of the pie. If I did, you would probably expect me to give some argument for the number 52.3% that distinguishes it from 51% or 55%. Indeed, it would be more strange than asking for 66.67%, which itself would be more strange than asking for 50%, which would be the most likely outcome were we to really run the experiment. Schelling uses as an example
the remarkable frequency with which long negotiations over complicated quantitative formulas or ad hoc shares in some costs or benefits converge ultimately on something as crudely simple as equal shares, shares proportionate to some common magnitude (gross national product, population, foreign-exchange deficit, and so forth), or the shares agreed on in some previous but logically irrelevant negotiation.
The explanation is basically that in bargaining situations like these, any agreement could be made better for one side or the other, but not for both simultaneously, and any agreement is better than no agreement. Talk is cheap, so it is difficult for either side to credibly commit to accepting only certain arbitrary outcomes. Therefore, as Schelling puts it,
Each party's strategy is guided mainly by what he expects the other to accept or insist on; yet each knows that the other is guided by reciprocal thoughts. The final outcome must be a point from which neither expects the other to retreat; yet the main ingredient of this expectation is what one thinks the other expects the first to expect, and so on. Somehow, out of this fluid and indeterminate situation that seemingly provides no logical reason for anybody to expect anything except what he expects to be expected to expect, a decision is reached. These infinitely reflexive expectations must somehow converge upon a single point, at which each expects the other not to expect to be expected to retreat.
In other words, a Schelling point is a 'natural' outcome that somehow has the intrinsic property that each party can be expected to demand that they do at least as well as they would in that outcome.
Another way of putting this is that once we have been bargained down to a Schelling point, we are not expected to let ourselves be bargained down further. Schelling uses the example of armies fighting over a city. If one side retreats 13 km, it might be expected to retreat even further, unless it retreats to the single river running through the city. The river can serve as a Schelling point, and the attacking force might genuinely expect that its opponents will retreat no further.
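To see how little the payoffs alone determine, here is a minimal sketch of the classic Nash demand game over the pie: each of us names a share, and we get our demands if and only if they are compatible. This is my own construction for illustration, not Schelling's; on a grid of whole percentage points, every exact split of the pie is an equilibrium, so nothing in the payoffs singles out 50/50. That selection is the work of the Schelling point.

```python
# A minimal sketch of the Nash demand game over the pie, assuming demands in
# whole percentage points; an illustration of the argument, not Schelling's model.

def payoffs(demand_a, demand_b):
    """Compatible demands are honoured; incompatible demands mean no deal."""
    if demand_a + demand_b <= 100:
        return demand_a, demand_b
    return 0, 0

def is_equilibrium(a, b, grid):
    """Neither side can gain by unilaterally changing its demand."""
    pa, pb = payoffs(a, b)
    return (all(payoffs(a2, b)[0] <= pa for a2 in grid)
            and all(payoffs(a, b2)[1] <= pb for b2 in grid))

grid = range(101)
equilibria = [(a, b) for a in grid for b in grid if is_equilibrium(a, b, grid)]
print(len(equilibria))  # 102: every exact split of the pie is an equilibrium
                        # (plus mutual refusal), so the payoffs alone pick nothing
```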
Threats and promises
A second interesting idea contained in the book is the distinction between threats and promises. On some level, they are quite similar bargaining moves: in both cases, I make my behaviour dependent on yours by committing to sometimes do things that aren't narrowly rational, so that behaving in the way I want you to becomes profitable for you. When I threaten you, I say that if you don't do what I want, I'll impose a cost on you, even at a cost to myself, perhaps by beating you up, ruining your reputation, or refusing to trade with you. The purpose is to ensure that doing what I want becomes more profitable for you once my threat is taken into account. When I make a promise, I say that if you do do what I want, I'll make your life better, again perhaps at a cost to myself, perhaps by giving you money, recommending that others hire you, or abstaining from behaviour that you dislike. Again, the purpose is to ensure that doing what I want, once you take my promise into account, is better for you than your other options.
There is an important strategic difference between threats and promises, however. If a threat is successful, it is never carried out: its point is to deter the very behaviour that would trigger it. By contrast, a successful promise must be carried out: its point is to induce exactly the behaviour that obliges the promiser to follow through. This means that in the ideal case, threat-making is cheap for the threatener, but promise-making is expensive for the promiser.
This difference has implications for your ability to convince your bargaining partner that you will carry out your threat or promise. If you and I make five bargains in a row, and in the first four I made a promise that I subsequently kept, then you have some reason for confidence that I will keep my fifth promise. However, if I make four threats in a row, all of which successfully deter you from engaging in behaviour that I don't want, then the fifth time I threaten you, you have no more evidence that I will carry out the threat than you did initially. Building a reputation as somebody who carries out their threats is therefore somewhat more difficult than building a reputation for keeping promises. I must either occasionally make threats that fail to deter my bargaining partner, thus incurring both the cost of my partner not behaving in the way I prefer and the cost of carrying out the threat, or visibly make investments that will make it cheap for me to carry out threats when necessary, such as hiring goons or being quick-witted and good at gossiping.
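A toy simulation makes the evidential asymmetry vivid; the model and its numbers are illustrative assumptions of mine, not from the book. A promise that works must be paid every time it works, and each payment is visible evidence of follow-through, while a threat that works is never tested and so never generates evidence.

```python
# A toy model of the evidential asymmetry described above; the compliance
# probability and costs are illustrative assumptions, not Schelling's.
import random

def simulate(kind, rounds, p_comply, follow_through_cost):
    """Return (total cost to the committer, number of times the partner
    actually saw the commitment being honoured)."""
    cost, observations = 0.0, 0
    for _ in range(rounds):
        complied = random.random() < p_comply
        if kind == "promise" and complied:
            cost += follow_through_cost  # a working promise is paid every time
            observations += 1
        elif kind == "threat" and not complied:
            cost += follow_through_cost  # a threat is only paid when it fails
            observations += 1
    return cost, observations

random.seed(0)
print(simulate("promise", rounds=100, p_comply=0.95, follow_through_cost=1.0))
print(simulate("threat", rounds=100, p_comply=0.95, follow_through_cost=1.0))
# The more reliably the threat deters (high p_comply), the less evidence the
# partner ever gets that it would really be carried out.
```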
Mutually Assured Destruction
The final cluster of ideas contained in the book that I will talk about are implications of the model of mutually assured destruction (MAD). In a MAD dynamic, two parties both have the ability, and to some extent the inclination, to destroy the other party, perhaps by exploding a large number of nuclear bombs near them. However, they do not have the ability to destroy the other party immediately: when one party launches their nuclear bombs, the other has some amount of time to launch a second strike, sending nuclear bombs to the first party, before the first party's bombs land and annihilate the second party. Since both parties care about not being destroyed more than they care about destroying the other party, and both parties know this, they each adopt a strategy where they commit to launching a second strike in response to a first strike, and therefore no first strike is ever launched.
Compare the MAD dynamic to the case of two gunslingers in the wild west in a standoff. Each gunslinger knows that if she does not shoot first, she will likely die before being able to shoot back. Therefore, as soon as she thinks that the other is about to shoot, or that the other thinks that she is about to shoot, or that the other thinks that she thinks that the other is about to shoot, et cetera, she needs to shoot before the other does. As a result, the gunslinger dynamic is unstable and likely to result in bloodshed. In contrast, the MAD dynamic is characterised by peacefulness and stability, since each party knows that the other will not launch a first strike for fear of a second strike.
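The contrast can be made concrete with two small games; the payoff numbers below are illustrative assumptions of mine, not Schelling's. In the gunslinger game, shooting is the best reply to shooting, so mutual pre-emption is an equilibrium alongside mutual waiting; with an assured second strike, striking first gains nothing, and mutual waiting is the only equilibrium.

```python
# A rough sketch of the two dynamics as 2x2 games; payoff numbers are
# illustrative assumptions, not taken from the book.

def pure_equilibria(payoffs):
    """payoffs[(a, b)] = (row player's payoff, column player's payoff)."""
    actions = ["strike", "wait"]
    eqs = []
    for a in actions:
        for b in actions:
            best_a = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions)
            best_b = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions)
            if best_a and best_b:
                eqs.append((a, b))
    return eqs

# Gunslingers: shooting first wins, so shooting is the best reply to shooting.
gunslinger = {("strike", "strike"): (1, 1), ("strike", "wait"): (2, 0),
              ("wait", "strike"): (0, 2), ("wait", "wait"): (3, 3)}
# MAD with an assured second strike: any strike means mutual destruction, and
# launching carries a small extra cost, so striking never helps.
mad = {("strike", "strike"): (-1, -1), ("strike", "wait"): (-1, 0),
       ("wait", "strike"): (0, -1), ("wait", "wait"): (3, 3)}

print(pure_equilibria(gunslinger))  # [('strike', 'strike'), ('wait', 'wait')]:
                                    # fear of the other shooting can tip play
print(pure_equilibria(mad))         # [('wait', 'wait')]: waiting is the only equilibrium
```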
In the final few chapters of the book, Schelling discusses what has to happen in order to ensure that MAD remains stable. One perhaps counterintuitive implication of the model is that if you and I are in a MAD dynamic, it is vitally important to me that you know that you have second-strike capability, and that you know that I know that you know that you have it. If you don't have second-strike capability, then you will realise that I could launch a first strike without fear of retaliation. Furthermore, if you think that I know that you know that you don't have second-strike capability, then you'll think that I'll be tempted to launch a first strike myself (since perhaps my favourite outcome is one where you're destroyed). In this case, you'd rather launch a first strike before I do, since you anticipate being destroyed either way. Therefore, I have an incentive to help you invest in technology that will help you accurately perceive whether or not I am striking, as well as technology that will hide your weapons (like ballistic missile submarines) so that I cannot destroy them with a first strike.
A second implication of the MAD model is that the situation is much more stable if both sides have more nuclear weapons. Suppose that I need 100 nuclear weapons to destroy my enemy, and he is thinking of using his nuclear weapons to wipe out mine (since perhaps mine are not hidden), which would allow him to launch a first strike with impunity. Schelling writes:
For illustration suppose his accuracies and abilities are such that one of his missiles has a 50-50 chance of knocking out one of ours. Then, if we have 200, he needs to knock out just over half; at 50 percent reliability he needs to fire just over 200 to cut our residual supply to less than 100. If we had 400, he would need to knock out three-quarters of ours; at a 50 percent discount rate for misses and failures he would need to fire more than twice 400, that is, more than 800. If we had 800, he would have to knock out seven-eighths of ours, and to do it with 50 percent reliability he would need over three times that number, or more than 2400. And so on. The larger the initial number on the "defending" side, the larger the multiple required by the attacker in order to reduce the victim's residual supply to below some "safe" number.
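Schelling's figures follow from a simple attrition model, which the sketch below reconstructs under stated assumptions (shots spread evenly, each killing its target independently with probability one half); it is my reconstruction, not his own calculation. Solving for the number of shots that pushes the defender's expected surviving force below 100 reproduces the quoted numbers.

```python
# A quick check of the arithmetic in the quoted passage, under a reconstructed
# model: each attacking missile independently destroys its target with
# probability `kill_prob`, and shots are spread evenly, so the defender's
# expected surviving force is defenders * (1 - kill_prob)**(shots / defenders).
import math

def shots_needed(defenders, kill_prob=0.5, safe=100):
    """Shots at which expected survivors fall to exactly `safe`; the attacker
    must fire more than this to cut the residual supply below `safe`."""
    return defenders * math.log(defenders / safe) / -math.log(1 - kill_prob)

for n in (200, 400, 800):
    print(n, round(shots_needed(n)))  # 200 -> 200, 400 -> 800, 800 -> 2400
# Matching Schelling's figures: the attacker must fire just over 200, more
# than 800, and more than 2400 missiles respectively.
```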
Consequently, if both sides have many times more nuclear weapons than are needed to destroy the entire world, the situation is much more stable than if each had barely enough to destroy the enemy: each side is secure in its second-strike capability, and doesn't need to respond as aggressively to arms buildups by the other party.
It is important to note that this conclusion is only valid in a 'classic' simplified MAD dynamic. If, for each nuclear weapon that you own, there is some possibility that a rogue actor will steal the weapon and use it for their own ends, the value of large arms buildups becomes much less clear.
The final conclusion I'd like to draw from this model is that it would be preferable to not have weapons that could destroy other weapons. For instance, suppose that both parties were countries with biological weapons that, when released, infected a large proportion of the other country, caused obvious symptoms, and killed the infected a week later, leaving a few days between the onset of symptoms and the loss of the ability to do things effectively. In such a situation, you would know that if I struck first, you would have ample ability to get still-functioning people to your weapons centres and launch a second strike, regardless of your ability to detect the biological weapon before it arrives, or the number of weapons and weapons centres that you or I have. Therefore, you are not tempted to launch first. Since this reasoning holds regardless of what type of weapon you have, it is always better for me to have this type of biological weapon in a MAD dynamic, rather than any nuclear weapons that could destroy weapons centres, so as to preserve your second-strike capability. I speculatively think that this argument should hold for real-life biological weapons, since it seems to me that they could be destructive enough to act as a deterrent, but that authorities could detect their spread early enough to send remaining healthy government officials to launch a second strike.
13 comments
Comments sorted by top scores.
comment by waveman · 2018-01-04T05:31:12.227Z · LW(p) · GW(p)
> The final conclusion I'd like to draw from this model is that it would be preferable to not have weapons that could destroy other weapons. For instance, suppose that both parties were countries with biological weapons that, when released, infected a large proportion of the other country, caused obvious symptoms, and killed the infected a week later, leaving a few days between the onset of symptoms and the loss of the ability to do things effectively. In such a situation, you would know that if I struck first, you would have ample ability to get still-functioning people to your weapons centres and launch a second strike, regardless of your ability to detect the biological weapon before it arrives, or the number of weapons and weapons centres that you or I have. Therefore, you are not tempted to launch first.
This was the case with the San people (formerly Kalahari Bushmen). They had slow-acting poison arrows. This meant that any deadly fight resulted in the death of all the parties. So such fights were few and far between.
comment by DanielFilan · 2018-09-26T22:14:51.522Z · LW(p) · GW(p)
I recently listened to a podcast interview with Daniel Ellsberg about his book, which warns the public about less widely known aspects of US nuclear policy. This made me much more pessimistic about how well the MAD model describes the dynamics of conflicts between nuclear powers. Notes that I took on Ellsberg's claims, in which I have varying levels of doubt:
- There appear to be, and to have been, principal-agent problems within the US and USSR governments that make it unwise to treat them as single agents.
- In practice, parties have not preserved their enemies' second strike capability (which the US could do by e.g. giving Russia some nuclear submarines). [EDIT: actually I think that wouldn't currently work, since Russia's submarines are trackable by US satellites because the US has good satellites and something about Russian harbours?]
- In practice, parties have secretly committed to destructive attacks on other countries, which serve no deterrence purpose (unless we assume that parties are overrating the spying capabilities of their adversaries).
- Any widespread nuclear weapons use would be so devastating to the Earth that no second strike is needed to preserve deterrence (I find myself skeptical of this claim).
comment by Kaj_Sotala · 2018-01-07T16:28:29.484Z · LW(p) · GW(p)
Obligatory link re: the bargaining and Schelling points thing is A Positive Account of Property Rights, where David Friedman takes that insight and builds an entire theory and justification of property rights from it.
comment by Ben Pace (Benito) · 2018-01-05T06:49:08.316Z · LW(p) · GW(p)
A standard example for the threats and promises section is the police and rule of law, who in large part work by threatening violence/imprisonment. However, the ratio of times we choose not to break the law, to times we do and the threat comes through, is massive, and I imagine it looks incredibly cost-effective from the standpoint of government.
Those were also really interesting conclusions regarding MAD: that I'd want to help my opponent build tech that could carry out a second strike, and also that the incentive would be to stockpile nukes. The position of America and Russia atm is no longer obviously bad to me (as opposed to a position where they each reduce to like 10-50 nukes), and might even just be pretty optimal.
Anyhow, I really appreciate people reading valuable books and writing up their new insights in a clear and concise manner, and these were all very interesting, so I’ve curated this post.
↑ comment by ESRogs · 2018-01-06T01:26:50.220Z · LW(p) · GW(p)
> However, the ratio of times we choose not to break the law, to times we do and the threat comes through, is minuscule
Nitpick: I think you have this reversed. The ratio is actually large, right?
↑ comment by Ben Pace (Benito) · 2018-01-06T01:45:42.334Z · LW(p) · GW(p)
Thou art correct, and I’ve edited my comment accordingly.
comment by PDV · 2018-01-10T02:40:06.932Z · LW(p) · GW(p)
I was about to link my blog post on the same book from early last year, but apparently I never published or finished it. I still haven't finished it, but here's my post published anyway, some of it still in outline/note form. I latched onto several of the same insights, so thank you for writing them up properly.
Points, and consequences of them, that I found interesting and compelling in my reading of it and which are not already mentioned above:
The map, or at least the parts of the map known to be shared, is as important as or more important than the territory in multiparty negotiations.
A great deal of how we conduct negotiations is subtly but heavily dependent on our being humans who think in human ways, that is, on our shared context. Consequence: negotiating with an uplifted cat would frazzle a skilled negotiator, because so much of their experience would be rendered unproductive or counterproductive.
Schelling cared far more about Schelling Fences (a term only coined later, by Scott Alexander) than Schelling Points (a term coined shortly after Schelling wrote).
Brinksmanship and the balance of terror never rationally incite an attack, until the chance of someone's finger slipping and starting an attack by accident would incite an attack on its own.
comment by LawrenceC (LawChan) · 2018-01-16T18:15:05.440Z · LW(p) · GW(p)
(Cross-posted from Daniel's FB)
Regarding the Bioweapons MAD point: I think detecting that a novel bioweapon has been deployed might be less trivial than you think.
A (possibly) more serious problem is identifying who deployed the bioweapon: it's easy to tell where land-based missiles come from, but much harder to verify that the weird symptoms reported in one part of the country come from a weapon deployed by this specific adversary.
comment by ryan_b · 2018-01-05T19:20:10.941Z · LW(p) · GW(p)
On the subject of MAD, I would like to mention The Great American Gamble by Keith Payne, about the history of Cold War nuclear deterrence. The author has also written about how deterrence might work in the modern environment, and is a specialist in the field.
Of particular interest to us, the book discusses the failings of Cold War strategic thinking, for example its over-reliance on assumptions of economic rationality in the opponent. I expect this to have some bearing on other x-risk scenarios.
comment by SquirrelInHell · 2018-01-04T21:19:12.593Z · LW(p) · GW(p)
This is an excellent summary and excellent content, thank you!
↑ comment by DanielFilan · 2018-01-08T07:58:20.772Z · LW(p) · GW(p)
You're welcome :) I'd like to point out though that this isn't anything like a summary of the book, just the things that (a) I remembered after reading it, (b) I didn't already know, and (c) I thought had a high insight density.
comment by Anton Fleck (anton-fleck) · 2018-01-04T21:32:37.333Z · LW(p) · GW(p)
Another major point was the effectiveness of inflicting a fraction of the maximum possible harm over a long period of time. It would be more effective in a bargaining process to kill only one hostage / kill 100 POWs / nuke one city at a time, rather than threatening to inflict maximum harm all at once.