Thomas C. Schelling's "Strategy of Conflict"

post by cousin_it · 2009-07-28T16:08:16.244Z · LW · GW · Legacy · 154 comments

It's an old book, I know, and one that many of us have already read. But if you haven't, you should.

If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.

Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.

Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.

And after we split the prize and cash our checks I learn that you broke the radio on purpose.

Schelling's book walks you through numerous conflict situations where an unintuitive and often self-limiting move helps you win, slowly building up to the topic of nuclear deterrence between the US and the Soviets. And it's not idle speculation either: the author worked at the White House at the dawn of the Cold War and his theories eventually found wide military application in deterrence and arms control. Here's a selection of quotes to give you a flavor: the whole book is like this, except interspersed with game theory math.

The use of a professional collecting agency by a business firm for the collection of debts is a means of achieving unilateral rather than bilateral communication with its debtors and of being therefore unavailable to hear pleas or threats from the debtors.

A sufficiently severe and certain penalty on the payment of blackmail can protect a potential victim.

One may have to pay the bribed voter based on whether the election is won, not on how he voted.

I can block your car in the road by placing my car in your way; my deterrent threat is passive, the decision to collide is up to you. If you, however, find me in your way and threaten to collide unless I move, you enjoy no such advantage: the decision to collide is still yours, and I enjoy deterrence. You have to arrange to have to collide unless I move, and that is a degree more complicated.

We have learned that the threat of massive destruction may deter an enemy only if there is a corresponding implicit promise of nondestruction in the event he complies, so that we must consider whether too great a capacity to strike him by surprise may induce him to strike first to avoid being disarmed by a first strike from us.

Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.

I sometimes think of game theory as being roughly divided into three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.
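
(A minimal illustration, not from the book or the post, with a made-up payoff matrix: "you minimax and that's it" for a toy zero-sum game. Payoffs are to the row player; the column player gets the negation.)

    # Toy zero-sum game: payoffs to the row player (made-up numbers).
    payoffs = [
        [2, 4],
        [1, 3],
    ]

    # Row player's security level: best of the worst-case rows.
    maximin = max(min(row) for row in payoffs)

    # Column player's security level: worst of the best-case columns.
    minimax = min(max(row[j] for row in payoffs) for j in range(len(payoffs[0])))

    print(maximin, minimax)  # 2 2 -- equal, so this game has a pure saddle point
    # When the two values differ, each side mixes, but it is still just a
    # security-level calculation; there is nothing to bargain over.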

Some time ago, in my wild and reckless youth that hopefully isn't over yet, a certain ex-girlfriend took to harassing me with suicide threats. (So making her stay alive was presumably our common interest in this variable-sum game.) As soon as I got around to looking at the situation through Schelling goggles, it became clear that ignoring the threats just leads to escalation. The correct solution was making myself unavailable for threats. Blacklist the phone number, block the email, spend a lot of time away from home. If any messages get through, pretend I didn't receive them anyway. It worked. It felt kinda bad, but it worked.

Hopefully you can also find something that works.

154 comments

Comments sorted by top scores.

comment by rwallace · 2009-07-28T21:17:15.350Z · LW(p) · GW(p)

A good reference, but it's worth remembering that if I tried the radio sabotage trick in real life, either I'd accidentally break the transmit capability as well as the receive, or I'd be there until the deadline had come and gone, happily blabbering about how I'm on the hill that looks like a pointy hat, while you were 20 miles away on a different hill that also looked like a pointy hat, cursing me, my radio and my inadequate directions.

In other words, like most things that are counterintuitive, these findings are counterintuitive precisely because their applicability in real life is the exception rather than the rule; by all means let's recognize the exceptions, but without forgetting what they are.

Replies from: cousin_it, wedrifid
comment by cousin_it · 2009-07-28T21:43:18.415Z · LW(p) · GW(p)

In the post I tried pretty hard to show the applicability of the techniques to real life, and so did Schelling. Apparently we haven't succeeded. Maybe some more quotes will tip the scales? Something of a more general nature, not ad hoc trickery?

If one is committed to punish a certain type of behavior when it reaches certain limits, but the limits are not carefully and objectively defined, the party threatened will realize that when the time comes to decide whether the threat must be enforced or not, his interest and that of the threatening party will coincide in an attempt to avoid the mutually unpleasant consequences.

Or what do you say to this:

Among the legal privileges of corporations, two that are mentioned in textbooks are the right to sue and the "right" to be sued. Who wants to be sued! But the right to be sued is the power to make a promise: to borrow money, to enter a contract, to do business with someone who might be damaged. If suit does arise, the "right" seems a liability in retrospect; beforehand it was a prerequisite to doing business.

Or this:

If each party agrees to send a million dollars to the Red Cross on condition the other does, each may be tempted to cheat if the other contributes first, and each one's anticipation of the other's cheating will inhibit agreement. But if the contribution is divided into successive small contributions, each can try the other's good faith for a small price. Furthermore, since each can keep the other on short tether to the finish, no one ever need risk more than one small contribution at a time. Finally, this change in the incentive structure itself takes most of the risk out of the initial contribution; the value of established trust is made obviously visible to both.

Or this:

When there are two objects to negotiate, the decision to negotiate them simultaneously or in separate forums or at separate times is by no means neutral to the outcome, particularly when there is a latent extortionate threat that can be exploited only if it can be attached to some ordinary, legitimate, bargaining situation.

I'm not even being particularly picky on which paragraphs to quote. The whole book is like that. To me the main takeaway was not local trickery, but a general way of thinking about conflict situations; I started seeing them everywhere, all the time.

Replies from: rwallace
comment by rwallace · 2009-07-28T22:21:34.554Z · LW(p) · GW(p)

Thanks, those are better examples.

comment by wedrifid · 2009-07-29T01:02:32.136Z · LW(p) · GW(p)

In other words, like most things that are counterintuitive, these findings are counterintuitive precisely because their applicability in real life is the exception rather than the rule; by all means let's recognize the exceptions, but without forgetting what they are.

The examples in the original post are not exceptions. It just takes a while to recognise them under the veneers of social norms and instinctual behaviours.

The broken radio, for example, is exactly what I see when attempting to communicate with those who would present themselves as higher status. Blatant stupidity (broken receiver) is often a signal, not a weakness. (And I can incorporate this understanding when dealing with said people, which I find incredibly useful.)

Replies from: rwallace, None
comment by rwallace · 2009-07-29T09:42:33.257Z · LW(p) · GW(p)

Good point, though the results of this are frequently as disastrous as in my observation about the broken radio trick. (Much of Dilbert can be seen as examples thereof.)

Replies from: wedrifid
comment by wedrifid · 2009-07-29T19:28:33.743Z · LW(p) · GW(p)

I think you're right. It does seem to me that in the current environment the 'signal status through incomprehension' move gives real losses to people rather frequently, as is the case with PHBs. I wonder, though, how much my observations of the phenomenon are biased by selection. Perhaps I am far more likely to notice this sort of silliness when it is quite obvious that the signaller is going against his own self-interest. That is certainly when it gets on my nerves the most!

comment by [deleted] · 2012-03-03T10:51:44.266Z · LW(p) · GW(p)

Replies from: wedrifid
comment by wedrifid · 2012-03-04T13:54:02.187Z · LW(p) · GW(p)

Not quite. There is an element of cooperation involved, but the payoff structure is qualitatively different, as is the timing. If you defect in the PD, then the other person is better off defecting as well. If you break your radio, the other guy is best off not breaking his. The PD is simultaneous while the radio is not. (So if you break your radio, the other guy is able to hunt you down and bitch slap you.)

Replies from: None
comment by [deleted] · 2012-03-05T01:39:57.695Z · LW(p) · GW(p)

Ah, yeah. Somehow only the notion that "if you don't cooperate, something undesirable will happen (to someone)" remained salient in my mind.

comment by DeevGrape · 2011-11-15T06:47:21.692Z · LW(p) · GW(p)

Having read only a portion of the book so far (thanks for the pdf cousin_it and Alicorn!), I've noticed that the techniques and strategies Schelling goes over are applicable to my struggles with akrasia.

I'm sure it's been said before on lesswrong that when there's a conflict between immediate and delayed gratification, you can think of yourself as two agents: one rational, one emotional; one thinking in the present, one able to plan future moves and regret mistakes. These agents are obviously having a conflict, and I often find Rational Me (RM) losing ground to Irrational Me (IM) in situations that this book describes perfectly.

Say RM wants to work, and IM wants to watch TV online. If RM settles on "some" TV, IM can exploit the vagueness and non-natural settling point, and watch an entire season of a show. The two most stable negotiating points seem to be "no TV" and "unlimited amounts of TV".

Other techniques people use to avoid akrasia map really well onto Schelling's conflict strategies, like breaking up commitments into small chunks ("fifteen minutes of work, then I can have a small reward") and forming a commitment with a third party to force your hand (like using stickk.com or working with friends or classmates).

comment by djcb · 2009-07-31T20:40:14.664Z · LW(p) · GW(p)

Slightly related (talking about game theory): one of the most bizarre things was the 1994 football/soccer match between Grenada and Barbados, in which both teams tried to win the game by deliberately scoring against themselves (and the opponents trying to prevent that).

comment by JoeSchmoe · 2009-07-30T17:37:17.599Z · LW(p) · GW(p)

A search through the comments on this article turns up exactly zero instances of the term "Vietnam".

Taking a hard look at what Schelling tried when faced with the real-world 'game' in Vietnam is enlightening as to the ups and downs of actually putting his theories -- or game theory in general -- into practice.

Fred Kaplan's piece in Slate from when Schelling won the Nobel is a good start:

http://www.slate.com/id/2127862/

Replies from: ciphergoth, Richard_Kennaway
comment by Paul Crowley (ciphergoth) · 2011-01-18T13:28:09.262Z · LW(p) · GW(p)

Terrible article in many ways - this is a very silly thing to say:

Schelling and McNaughton pondered the problem for more than an hour. In the end, they failed to come up with a single plausible answer to these most basic questions. So assured when writing about sending signals with force and inflicting pain to make an opponent behave, Tom Schelling, when faced with a real-life war, was stumped.

BTW, after a conversation with Eliezer at the weekend, I have just asked my employers to buy this book.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-01-18T16:59:21.967Z · LW(p) · GW(p)

BTW, after a conversation with Eliezer at the weekend, I have just asked my employers to buy this book.

What do your employers do, that the book is relevant there? What they (assuming the CV on your web site is up to date) say about themselves on their web site is curiously unspecific.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-01-18T21:29:51.042Z · LW(p) · GW(p)

I work for a computer consultancy; we do all sorts of things. The book is relevant because while we generally enjoy excellent relations with all our clients, it can sometimes happen that they muck us about, for example on rates.

comment by Richard_Kennaway · 2009-07-30T18:37:53.566Z · LW(p) · GW(p)

Thanks for that extra light.

I have the 1980 edition of "The strategy of conflict" from the library at the moment. It's a reissue of the 1960 edition with an added preface by Schelling. Despite the Slate article closing by saying "Tom Schelling didn't write much about war after that [the Vietnam War]. He'd learned the limitations of his craft.", in his 1980 preface he judges the book's content as still "mostly all right".

comment by ajayjetti · 2009-07-29T23:37:25.714Z · LW(p) · GW(p)

So what happens in the broken radio example if both people have already read Schelling's book? Nobody gets the prize? I mean, how is such a situation resolved? If everybody perfects the art of rationality, who wins? And who loses?

Replies from: cousin_it, JJ10DMAN, cousin_it
comment by cousin_it · 2009-07-30T09:59:13.617Z · LW(p) · GW(p)

If it's common knowledge that both have read Schelling's book, the game is isomorphic to Chicken, which has been extensively studied.

Replies from: ajayjetti
comment by ajayjetti · 2009-07-30T23:47:45.413Z · LW(p) · GW(p)

So rationality doesn't always mean "win-win"? In a chicken situation, the best thing for "both" persons is to remain alive, which can be done by one of them (or both) "swerving", right? There is a good chance that one of them is called chicken.

Replies from: cousin_it, Linch
comment by cousin_it · 2009-07-31T08:29:20.057Z · LW(p) · GW(p)

Neither actual human rationality nor its best available game-theoretic formalizations (today) necessarily lead to win-win.

Replies from: Technologos
comment by Technologos · 2009-08-02T00:55:46.410Z · LW(p) · GW(p)

Indeed, the difference between Winning and "win-win" is important. Rationality wouldn't be much of a martial art if we limited the acceptable results to those in which all parties win.

comment by Linch · 2014-01-31T19:23:31.925Z · LW(p) · GW(p)

Hi! First post here. You might be interested in knowing that not only is the broken radio example isomorphic to "Chicken," but there's a real-life solution to the Chicken game that is very close to "destroying your receiver." That is, you can set up a "commitment" that you will, in fact, not swerve. Of course, standard game theory tells us that this is not a credible threat (since dying is bad). Thus, you must make your commitment binding, e.g., by ripping out the steering wheel.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-31T19:25:57.436Z · LW(p) · GW(p)

And it helps to do it first. Being the second player to rip out the steering wheel is a whole other matter.

comment by JJ10DMAN · 2012-03-31T00:07:20.627Z · LW(p) · GW(p)

The example was just to make an illustration, and I wouldn't read into it too much. It has a lot of assumptions like, "I would rather sit around doing absolutely nothing than take a stroll in the wilderness," and, "I have no possible landing position I can claim in order to make my preferred meeting point seem like a fair compromise, and therefore I must break my radio."

comment by cousin_it · 2009-07-30T09:40:56.751Z · LW(p) · GW(p)

I should've asked you to work it out for yourself, 'cause if you can't do that you really have no business commenting here, but... okay.

If it's common knowledge that both have read Schelling's book, the game has a Nash equilibrium in mixed strategies. You break your radio with a certain probability and your buddy does the same.
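
(To make that concrete, here is a minimal sketch with made-up payoffs — the numbers are mine, not cousin_it's or Schelling's — computing the symmetric mixed-strategy equilibrium of a Chicken-like version of the radio game. "Dare" stands for breaking your own radio.)

    from fractions import Fraction

    # Payoffs to me for (my move, his move). Made-up values: mutual swerve 0,
    # backing down alone -1, winning +1, mutual radio-breaking disaster -10.
    payoff = {
        ("swerve", "swerve"): Fraction(0),
        ("swerve", "dare"):   Fraction(-1),
        ("dare",   "swerve"): Fraction(1),
        ("dare",   "dare"):   Fraction(-10),
    }

    # In the symmetric mixed equilibrium the opponent dares with probability q
    # that makes me indifferent between swerving and daring:
    #   q*P(s,d) + (1-q)*P(s,s) == q*P(d,d) + (1-q)*P(d,s)
    a = payoff[("swerve", "dare")] - payoff[("swerve", "swerve")]
    b = payoff[("dare", "dare")] - payoff[("dare", "swerve")]
    q = (payoff[("dare", "swerve")] - payoff[("swerve", "swerve")]) / (a - b)

    print(q)  # 1/10: with these payoffs, each player breaks his radio 10% of the time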

comment by Alicorn · 2009-07-28T17:18:21.046Z · LW(p) · GW(p)

*adds book to list of books to read*

This sounds... ruthless, but in an extremely cool way.

Replies from: James_K, Dustin
comment by James_K · 2009-07-29T05:27:41.204Z · LW(p) · GW(p)

Schelling was actually the less ruthless of the pioneers of game theory. The other pioneer was Von Neumann, who advocated a unilateral nuclear attack on the USSR before they developed their own nuclear weapons.

By contrast, Schelling invented the red hotline between the US and USSR, since more communication meant less chance of WW3.

Basically he was about ruthlessness for the good of humanity.

Replies from: Wei_Dai, Theist
comment by Wei Dai (Wei_Dai) · 2009-07-29T12:32:41.326Z · LW(p) · GW(p)

Schelling was actually the less ruthless of the pioneers of game theory. The other pioneer was Von Neumann, who advocated a unilateral nuclear attack on the USSR before they developed their own nuclear weapons.

One thing I don't understand: why didn't the US announce at the end of World War II that it would nuke any country that attempted to develop a nuclear weapon or conducted a nuclear bomb test? If it had done that, then there would have been no need to actually nuke anyone. Was game theory invented too late?

Replies from: Richard_Kennaway, eirenicon, CronoDAS, cousin_it, khafra
comment by Richard_Kennaway · 2009-07-29T13:30:48.599Z · LW(p) · GW(p)

You are the President of the US. You make this announcement. Two years later, your spies tell you that the UK has a well-advanced nuclear bomb research programme. The world is, nevertheless, as peaceful on the whole as in fact it was in the real timeline.

Do you nuke London?

Replies from: Wei_Dai, BronecianFlyreme
comment by Wei Dai (Wei_Dai) · 2009-07-29T14:03:41.627Z · LW(p) · GW(p)

I'd give the following announcement: "People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date]." Well, I'd go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.

Replies from: UnholySmoke, Larks, Kaj_Sotala, Richard_Kennaway
comment by UnholySmoke · 2009-07-29T14:38:44.275Z · LW(p) · GW(p)

I can think, straight away, of four or five reasons why this would have been very much the wrong thing to do.

  • You make an enemy of your biggest allies. Nukes or no, the US has never been more powerful than the rest of the world put together.
  • You don't react to coming out of one Cold War by initiating another.
  • This strategy is pointless unless you plan to follow through. The regime that laid down that threat would either be strung up when they launched, or voted straight out when they didn't.
  • Mutually assured destruction was what stopped nuclear war happening. Setting one country up as the Guardian of the Nukes is stupid, even if you are that country. I'm not a yank, but I believe this sort of idea is pretty big in the constitution.
  • Attacking London is a shortcut to getting a pounding. This one's just conjecture.

Basically he was about ruthlessness for the good of humanity.

Yeah, I think the clue is in there. Better to be about the good of humanity, and ruthless if that's what's called for. Setting yourself up as 'the guy who has the balls to make the tough decisions' usually marks you as a nutjob. Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Replies from: orthonormal, Vladimir_Nesov
comment by orthonormal · 2009-07-29T18:41:52.864Z · LW(p) · GW(p)

Case in point: von Neumann suggesting launching was the right strategy. I don't think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.

Survivorship bias. There were some very near misses (Cuban Missile Crisis, Stanislav Petrov, etc.), and it seems reasonable to conclude that a substantial fraction of the Everett branches that came out of our 1946 included a global thermonuclear war.

I'm not willing to conclude that von Neumann was right, but the fact that we avoided nuclear war isn't clear proof he was wrong.

comment by Vladimir_Nesov · 2009-07-29T15:28:49.482Z · LW(p) · GW(p)

You make an enemy of your biggest allies.

If the allies are rational, they should agree that it's in their interest to establish this strategy. The enemy of everyone is the all-out nuclear war.

Replies from: James_K
comment by James_K · 2009-07-29T22:13:39.633Z · LW(p) · GW(p)

This strikes me as a variant of the ultimatum game. The allies would have to accept a large asymmetry of power. If even one of them rejects the ultimatum you're stuck with the prospect of giving up your strategy (having burned most or all of your political capital with other nations), or committing mass murder.

When you add in the inability of governments to make binding commitments, this doesn't strike me as a viable strategy.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T22:41:00.282Z · LW(p) · GW(p)

Links in the Markdown syntax are written like this:

[ultimatum game](http://en.wikipedia.org/wiki/Ultimatum_game)

comment by Larks · 2010-12-15T23:37:54.017Z · LW(p) · GW(p)

The UK bomb was developed with the express purpose of providing independence from the US. If the US could keep the USSR nuke-free, there'd be less need for a UK bomb. Also, it's possible that the US could tone down its anti-imperialist rhetoric/covert funding so as not to threaten the Empire.

comment by Kaj_Sotala · 2009-07-29T22:20:21.683Z · LW(p) · GW(p)

I think that, by the time you've reached the point where you're about to kill millions for the sake of the greater good, you'd do well to consider all the ethical injunctions this violates. (Especially given all the different ways this could go wrong that UnholySmoke could come up with off the top of his head.)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-31T07:58:31.477Z · LW(p) · GW(p)

Kaj, I was discussing a hypothetical nuclear strategy. We can't discuss any such strategy without involving the possibility of killing millions. Do the ethical injunctions imply that such discussions shouldn't occur?

Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles. Does MAD also violate ethical injunctions? Should it also not have been discussed? (How many different ways could things have gone wrong with MAD?)

Replies from: Kaj_Sotala, handoflixue
comment by Kaj_Sotala · 2009-08-02T20:14:48.443Z · LW(p) · GW(p)

Do the ethical injunctions imply that such discussions shouldn't occur?

Of course not. I'm not saying the strategy shouldn't be discussed, I'm saying that you seem to be expressing greater certainty of your proposed approach being correct than would be warranted.

(I wouldn't object to people discussing math, but I would object if somebody thought 2 + 2 = 5.)

comment by handoflixue · 2011-08-08T22:44:19.346Z · LW(p) · GW(p)

Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles

And the world as we know it is still around because Stanislav Petrov ignored that order and insisted the US couldn't possibly be stupid enough to actually launch that sort of attack.

I would pray that the US operators were equally sensible, but maybe they just got lucky and never had a technical glitch threaten the existence of humanity.

comment by Richard_Kennaway · 2009-07-29T14:18:25.493Z · LW(p) · GW(p)

The entire civilised world (which at this point does not include anyone who is still a member of the US government) is in uproar. Your attempts at secret diplomacy are leaked immediately. The people of the UK make tea in your general direction. Protesters march on the White House.

When do you push the button, and how will you keep order in your own country afterwards?

What I'm really getting at here is that your bland willingness to murder millions of non-combatants of a friendly power in peacetime because they do not accede to your empire-building unfits you for inclusion in the human race.

Also, that it's easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.

Replies from: loqi, Vladimir_Nesov, Viliam_Bur, cousin_it
comment by loqi · 2011-04-26T01:34:43.610Z · LW(p) · GW(p)

So says the man from his comfy perch in an Everett branch that survived the cold war.

What I'm really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.

Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.

Replies from: shokwave, Richard_Kennaway
comment by shokwave · 2011-04-26T02:21:41.244Z · LW(p) · GW(p)

I doubt RichardKennaway believes Wei_Dai is unfit for inclusion in the human race. What he was saying, and what he received upvotes for, is that anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race - and he's right, that sort of person should not be fit for inclusion in the human race. A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

Replies from: Wei_Dai, loqi
comment by Wei Dai (Wei_Dai) · 2011-04-26T06:38:36.909Z · LW(p) · GW(p)

anyone who's blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race

You do realize that the point of my proposed strategy was to prevent the destruction of Earth (from a potential nuclear war between the US and USSR), and not "empire building"?

I don't understand why Richard and you consider MAD acceptable, but my proposal beyond the pale. Both of you use the words "friendly power in peacetime", which must be relevant somehow but I don't see how. Why would it be ok (i.e., fit for inclusion in the human race) to commit to murdering millions of non-combatants of an enemy power in wartime in order to prevent nuclear war, but not ok to commit to murdering millions of non-combatants of a friendly power in peacetime in service of the same goal?

A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.

I also took Richard's comment personally (he did say "your bland willingness", emphasis added), which is probably why I didn't respond to it.

Replies from: JoshuaZ, shokwave
comment by JoshuaZ · 2011-04-28T04:16:15.251Z · LW(p) · GW(p)

The issue seems to be that nuking a friendly power in peacetime feels to people pretty much like a railroad problem where you need to shove the fat person. In this particular case, since it isn't a hypothetical, the situation has been made all the more complicated by actual discussion of the historical and current geopolitics surrounding the situation (which essentially amounts to trying to find a clever solution to a train problem or arguing that the fat person wouldn't weigh enough.) The reaction is against your apparent strong consequentialism along with the fact that your strategy wouldn't actually work given the geopolitical situation. If one had an explicitly hypothetical geopolitical situation where this would work and then see how they respond it might be interesting.

comment by shokwave · 2011-04-29T02:26:29.087Z · LW(p) · GW(p)

I also took Richard's comment personally (he did say "your bland willingness", emphasis added), which is probably why I didn't respond to it.

Well, this is evidence against using second-person pronouns to avoid "he/she".

Replies from: JoshuaZ
comment by JoshuaZ · 2011-04-29T02:39:03.963Z · LW(p) · GW(p)

He could easily have said "bland willingness to" rather than "your bland willingness" so that doesn't seem to be an example where a pronoun is necessary.

Replies from: shokwave
comment by shokwave · 2011-04-29T02:45:19.606Z · LW(p) · GW(p)

No, it's an example where using "you" has caused someone to take something personally. Given that the "he/she" problem is that some people take it personally, I haven't solved the problem, I've just shifted it onto a different group of people.

comment by loqi · 2011-04-26T06:53:40.189Z · LW(p) · GW(p)

I was commenting on what he said, not guessing at his beliefs.

I don't think you've made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it's not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn't render me unfit for existence.

Anyone willing to deploy a nuclear weapon has a "bland willingness to slaughter". Anyone employing MAD has a "bland willingness to destroy the entire human race".

I suspect that you have no compelling proof that Wei Dai's hypothetical nuclear strategy is in fact wrong, let alone one compelling enough to justify the type of personal attack leveled by RichardKennaway. Would you also accuse Eliezer of displaying a "bland willingness to torture someone for 50 years" and sentence him to exclusion from humanity?

Replies from: shokwave
comment by shokwave · 2011-04-29T02:23:40.267Z · LW(p) · GW(p)

What I was saying was that a horrendous act is not the same as a comment advising a horrendous act in a hypothetical situation. You conflated the two in paraphrasing RichardKennaway's comment as "comment advising horrendous act in hypothetical situation unfits you for inclusion in the human race" when what he was saying was "horrendous act unfits you for inclusion in the human race".

comment by Richard_Kennaway · 2011-04-27T22:00:35.481Z · LW(p) · GW(p)

I was rather intemperate, and on a different day maybe I would have been less so; or maybe I wouldn't. I am sorry that I offended Wei Dai.

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens. This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.

You compare the problem to Eliezer's one of TORTURE vs SPECKS, but there is an important difference between them. TORTURE vs SPECKS is fiction, while Wei Dai spoke of an actual juncture in history in living memory, and actions that actually could have been taken.

What is the TORTURE vs SPECKS problem? The formulation of the problem is at that link, but what sort of thing is this problem? Given the followup posting the very next day, it seems likely to me that the intention was to manifest people's reactions to the problem. Perhaps it is also a touchstone, to see who has and who has not learned the material on which it stands. What it is not is a genuine problem which anyone needs to solve as anything but a thought experiment.

TORTURE vs SPECKS is not going to happen. Other tradeoffs between great evil to one and small evils to many do happen; this one never will. While 50 years of torture is, regrettably, conceivably possible here and now in the real world, and may be happening to someone, somewhere, right now, there is no possibility of 3^^^3 specks. Why 3^^^3? Because that is intended to be a number large enough to produce the desired conclusion. Anyone whose objection is that it isn't a big enough number, besides manifesting a poor grasp of its magnitude, can simply add another uparrow.

The problem is a fictional one, and as such exhibits the reverse meta-causality characteristic of fiction: 3^^^3 is in the problem because the point of the problem is for the solution to be TORTURE; that TORTURE is the solution is not caused by an actual possibility of 3^^^3 specks.

In another posting a year later, Eliezer speaks of ethical rules of the sort that you just don't break, as safety rails on a cliff he didn't see. This does not sit well with the TORTURE vs SPECKS material, but it doesn't have to: TORTURE vs SPECKS is fiction and the later posting is about real (though unspecified) actions.

So, the Cold War. Wei Dai would have the US after WWII threatening to nuke any country attempting to develop or test nuclear weapons. To the scenario of later discovering that (for example) the UK has a well-developed covert nuclear program, he responds:

I'd give the following announcement: "People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date]." Well, I'd go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.

It should, should it? And that, in Wei's mind, is adequate justification for pressing the button to kill millions of people for not doing what he told them to do. Is this rationality, or the politics of two-year-olds with nukes?

I seem to be getting intemperate again.

It's a poor sort of rationality that only works against people rational enough to lose. Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats. And so on. How's TDT/UDT with self-modifying agents modelling themselves and each other coming along?

This is fantasy masquerading as rationality. I stand by this that I said back then:

[I]t's easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.

To make these threats, you must be willing to actually do what you have said you will do if your enemy does not surrender. The moment you think "but rationally he has to surrender so I won't have to do this", you are making an excuse for yourself to not carry it out. Whatever belief you can muster that you would carry it out will evaporate like dew in the desert when the time comes.

How are you going to launch those nukes, anyway?

Replies from: loqi
comment by loqi · 2011-04-28T04:09:01.121Z · LW(p) · GW(p)

But then, Wei Dai's posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens.

Using the word "intemperate" in this way is a remarkable dodge. Wei Dai's comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai's statements. The tone of my response was deliberate and quite restrained relative to how I felt.

This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.

Huh? You're "not excusing" the extremity of your interpersonal behavior on the grounds that the topic was fictional, and fiction is more extreme than reality? And then go on to explain that you don't behave similarly toward Eliezer with respect to his position on TORTURE vs SPECKS because that topic is even more fictional?

Is this rationality, or the politics of two-year-olds with nukes?

Is this a constructive point, or just more gesturing?

As for the rest of your comment: Thank you! This is the discussion I wanted to be reading all along. Aside from a general feeling that you're still not really trying to be fair, my remaining points are mercifully non-meta. To dampen political distractions, I'll refer to the nuke-holding country as H, and a nuke-developing country as D.

You're very focused on Wei Dai's statement about backward induction, but I think you're missing a key point: His strategy does not depend on D reasoning the way he expects them to, it's just heavily optimized for this outcome. I believe he's right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.

Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats.

Don't see how this follows. If both countries precommit, D gets bombed until it halts or otherwise cannot continue development. While this is not H's preferred outcome, H's entire strategy is predicated on weighing irreversible nuclear proliferation and its consequences more heavily than the millions of lives lost in the event of a suicidal failure to comply. In other words, D doesn't wield sufficient power in this scenario to affect H's decision, while H holds sufficient power to skew local incentives toward mutually beneficial outcomes.

Speaking of nuclear proliferation and its consequences, you've been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai's strategy. Talking about "murdering millions" without at least framing it alongside the horror of proliferation is not productive.

How are you going to launch those nukes, anyway?

Practical considerations like this strike me as by far the best arguments against extreme, theory-heavy strategies. Messy real-world noise can easily make a high-stakes gambit more trouble than it's worth.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-04-28T14:26:41.803Z · LW(p) · GW(p)

Is this rationality, or the politics of two-year-olds with nukes?

Is this a constructive point, or just more gesturing?

It is a gesture concluding a constructive point.

You're very focused on Wei Dai's statement about backward induction, but I think you're missing a key point: His strategy does not depend on D reasoning the way he expects them to, it's just heavily optimized for this outcome. I believe he's right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.

This is a distinction without a difference. If H bombs D, H has lost (and D has lost more).

If both countries precommit, D gets bombed until it halts or otherwise cannot continue development.

That depends on who precommits "first". That's a problematic concept for rational actors who have plenty of time to model each other's possible strategies in advance of taking action. If H, without even being informed of it by D, considers this possible precommitment strategy of D, is it still rational for H to persist and threaten D anyway? Or perhaps H can precommit to ignoring such a precommitment by D? Or should D already have anticipated H's original threat and backed down in advance of the threat ever having been made? I am reminded of the Forbidden Topic. Counterfactual blackmail isn't just for superintelligences. As I asked before, does the decision theory exist yet to handle self-modifying agents modelling themselves and others, demonstrating how real actions can arise from this seething mass of virtual possibilities?

Then also, in what you dismiss as "messy real-world noise", there may be a lot of other things D might do, such as fomenting insurrection in H, or sharing their research with every other country besides H (and blaming foreign spies), or assassinating H's leader, or doing any and all of these while overtly appearing to back down.

The moment H makes that threat, the whole world is H's enemy. H has declared a war that it hopes to win by the mere possession of overwhelming force.

Speaking of nuclear proliferation and its consequences, you've been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai's strategy. Talking about "murdering millions" without at least framing it alongside the horror of proliferation is not productive.

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror. loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

Replies from: loqi
comment by loqi · 2011-04-28T22:22:48.073Z · LW(p) · GW(p)

This is a distinction without a difference. If H bombs D, H has lost

This assumption determines (or at least greatly alters) the debate, and you need to make a better case for it. If H really "loses" by bombing D (meaning H considers this outcome less preferable than proliferation), then H's threat is not credible, and the strategy breaks down, no exotic decision theory necessary. Looks like a crucial difference to me.

That depends on who precommits "first". [...]

This entire paragraph depends on the above assumption. If I grant you that assumption and (artificially) hold constant H's intent to precommit, then we've entered the realm of bluffing, and yes, the game tree gets pathological.

loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.

My mention of Everett branches was an indirect (and counter-productive) way of accusing you of hindsight bias.

Your talk of "convincing you" is distractingly binary. Do you admit that the severity and number of close calls in the Cold War is relevant to this discussion, and that these are positively correlated with the underlying justification for Wei Dai's strategy? (Not necessarily its feasibility!)

I look around at the world since WWII and fail to see this horror. I look at Wei Dai's strategy and see the horror.

Let's set aside scale and comparisons for a moment, because your position looks suspiciously one-sided. You fail to see the horror of nuclear proliferation? If I may ask, what is your estimate for the probability that a nuclear weapon will be deployed in the next 100 years? Did you even ask yourself this question, or are you just selectively attending to the low-probability horrors of Wei Dai's strategy?

Then also, in what you dismiss as "messy real-world noise"

Emphasis mine. You are compromised. Please take a deep breath (really!) and re-read my comment. I was not dismissing your point in the slightest, I was in fact stating my belief that it exemplified a class of particularly effective counter-arguments in this context.

comment by Vladimir_Nesov · 2009-07-29T14:24:56.952Z · LW(p) · GW(p)

because they do not accede to your empire-building

Fail.

comment by Viliam_Bur · 2011-09-05T11:11:14.508Z · LW(p) · GW(p)

You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else.

A model of reality, which assumes that an opponent must be rational, is an incorrect model. At best, it is a good approximation that could luckily return a correct answer in some situations.

I think this is a frequent bias for smart people -- assuming that (1) my reasoning is flawless, and (2) my opponent is on the same rationality level as me, therefore (3) my opponent must have the same model of situation as me, therefore (4) if I rationally predict that it is best for my opponent to do X, my opponent will really do X. And then my opponent does non-X, and I am like: WTF?!

comment by cousin_it · 2009-07-29T14:27:23.692Z · LW(p) · GW(p)

Richard, I'm with Nesov on this one. Don't attack the person making the argument.

comment by BronecianFlyreme · 2013-12-10T05:46:36.269Z · LW(p) · GW(p)

Interestingly, it seems to me like the most convenient solution to this problem would be to find some way to make yourself incapable of not nuking anyone who built a nuke. I don't think it's really feasible, but I thought it was worth mentioning just because it matches the article so closely.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-12-10T08:12:19.612Z · LW(p) · GW(p)

I'm sure all extortionists would find it very convenient to be able to say to their victims while breaking their legs, "It's you that's doing this, not me!" And to have the courts accept that as a valid defence, and jail the victim for committing assault on themselves. But the fact is, we cannot conduct brain surgery on ourselves to excise our responsibility. Is it an ability to be desired?

Replies from: Lumifer
comment by Lumifer · 2013-12-10T17:31:57.448Z · LW(p) · GW(p)

I'm sure all extortionists would find it very convenient to be able to say to their victims while breaking their legs, "It's you that's doing this, not me!" And to have the courts accept that as a valid defence, and jail the victim for committing assault on themselves.

You probably thought you were kidding. Not.

comment by eirenicon · 2009-07-29T15:04:06.817Z · LW(p) · GW(p)

At the end of WWII, the US's nuclear arsenal was still small and limited. The declaration of such a threat would have made it worth the risk for the USSR to dramatically ramp up their nuclear weapons research, which had been ongoing since 1942. The Soviets tested their first nuke in 1949; at that point or any time earlier, it would have been too late for the US to follow through. They would've had to drop the Marshall Plan and risk starting another "hot war". With their European allies, especially the UK, still struggling economically, the outcome would have been far from assured.

comment by CronoDAS · 2009-07-31T00:30:03.666Z · LW(p) · GW(p)

As a practical matter, this would not have been possible. At the end of World War II, the U.S. didn't have enough nuclear weapons to do much more than threaten to blow up a city or two. Furthermore, intercontinental ballistic missiles didn't exist yet; the only way to get a nuclear bomb to its target was to put it in an airplane and hope the airplane doesn't get shot down before it gets to its destination.

Replies from: Wei_Dai, thomblake
comment by Wei Dai (Wei_Dai) · 2009-07-31T01:08:27.622Z · LW(p) · GW(p)

According to this book, in May 1949 (months before the Soviets' first bomb test), the US had 133 nuclear bombs and a plan (in case of war) to bomb 70 Soviet cities, but concluded that this was probably insufficient to "bring about capitulation". The book also mentions that the US panicked and sped up the production of nuclear bombs after the Soviet bomb test, so if it had done that earlier, perhaps it would have had enough bombs to deter the Soviets from developing them.

Also, according to this article, the idea of using nuclear weapons to deter the development/testing of fusion weapons was actually proposed by I. I. Rabi and Enrico Fermi:

They believed that any nation that violated such a prohibition would have to test a prototype weapon; this would be detected by the US and retaliation using the world’s largest stock of atomic bombs should follow. Their proposal gained no traction.

comment by thomblake · 2009-07-31T15:44:26.405Z · LW(p) · GW(p)

At the end of the war, the US had developed cybernetic anti-aircraft guns to fight the Pacific War, but the Russians did not have them. They had little chance of shooting down our planes using manual sighting.

Replies from: irarseil
comment by irarseil · 2012-06-30T15:14:23.533Z · LW(p) · GW(p)

I think you should be aware that lesswrong is read in countries other than the USA, and writing about "our planes" in a forum where not everyone is American to mean "American planes" can lead to misunderstandings or can discourage others from taking part in the conversation.

comment by cousin_it · 2009-07-29T12:55:23.503Z · LW(p) · GW(p)

How would the US detect attempts to develop nuclear weapons before any tests took place? Should they have nuked the USSR on a well-founded suspicion?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T13:33:49.038Z · LW(p) · GW(p)

How would the US detect attempts to develop nuclear weapons before any tests took place? Should they have nuked the USSR on a well-founded suspicion?

I think from a rational perspective, the answer must be yes. Under this hypothetical policy, if the USSR didn't want to be nuked, then it would have done whatever was necessary to dispel the US's suspicion (which of course it would have voiced first).

Do you really prefer the alternative that actually happened? That is, allow the USSR and many other countries to develop nuclear weapons and then depend on MAD and luck to prevent world destruction? Even if you personally do prefer this, it's hard to see how that was a rational choice for the US.

BTW, please stop editing so much! You're making me waste all my good retorts. :)

Replies from: eirenicon, Vladimir_Nesov, UnholySmoke, cousin_it
comment by eirenicon · 2009-07-30T02:08:10.590Z · LW(p) · GW(p)

It seems equally rational for the US to have renounced its own nuclear program, thereby rendering it immune to the nuclear attacks of other nations. That is what you're saying, right? The only way for the USSR to be immune from nuclear attack would be to prove to the US that it didn't have a program. Ergo, the US could be immune to nuclear attack if it proved to the USSR that it didn't have a program. Of course, that wouldn't ever deter the nuclear power from nuking the non-nuclear power. If the US prevented the USSR from developing nukes, it could hang the threat of nuclear war over them for as long as it liked in order to get what it wanted. Developing nuclear weapons was the only option the USSR had if it wanted to preserve its sovereignty. Therefore, threatening to nuke the USSR if it developed nukes would guarantee that you would nuke it if they didn't (i.e. use the nuke threat in every scenario, because why not?), which would force the USSR to develop nukes. Expecting the USSR, a country every inch as nationalistic as the US, a country that just won a war against far worse odds than the US ever faced, to bend the knee is simply unrealistic.

Also, what would the long-term outcome be? Either the US rules the world through fear, or it nukes every country that ever inches toward nuclear weaponry and turns the planet into a smoky craphole. I'll take MAD any day; despite its obvious risks, it proved pretty stable.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-30T19:58:10.969Z · LW(p) · GW(p)

I think there is an equilibrium where the US promises not to use the threat of nukes for anything other than enforcing the no-nuclear-development policy and for obvious cases of self-defense, and it keeps this promise because to not do so would be to force other countries to start developing nukes.

Also, I note that many countries do not have nukes today nor enjoy protection by a nuclear power, and the US does not use the threat of nuclear war against them in every scenario.

Replies from: eirenicon
comment by eirenicon · 2009-07-30T20:17:54.893Z · LW(p) · GW(p)

I think that proposed equilibrium would have been extremely unlikely under circumstances where the US (a) had abandoned their pre-war isolationist policies and (b) were about to embark on a mission of bending other nations, often through military force, to their will. Nukes had just been used to end a war with Japan. Why wouldn't the US use them to end the Korean war, for example? Or even to pre-empt it? Or to pre-empt any other conflict it had an interest in? The US acted incredibly aggressively when a single misstep could have sent Soviet missiles in their direction. How aggressive might it have been if there was no such danger? I think you underestimate how much of a show stopper nuclear weapons were in the 40s and 50s. There was no international terrorism or domestic activism that could exact punitive measures on those who threatened to use or used nukes.

Even though the cold war is long over, I am still disturbed by how many nuclear weapons there are in the world. Even so, I would much rather live in this climate than one in which only a single nation - a nation with a long history of interfering with other sovereign countries, a nation that is currently engaged in two wars of aggression - was the only nuclear power around.

comment by Vladimir_Nesov · 2009-07-29T13:55:47.201Z · LW(p) · GW(p)

Given that there is a nontrivial chance that the policy won't be implemented reliably, and partially because of that the other side will fail to fear it properly, the expected utility of trying to implement this policy seems hideously negative (that is, there is a good chance a city will be nuked as a result, after which the policy crumbles under the public pressure, and after that everyone develops the technology).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T15:24:00.429Z · LW(p) · GW(p)

Given that there is a nontrivial chance that the policy won't be implemented reliably, and partially because of that the other side will fail to fear it properly, the expected utility of trying to implement this policy seems hideously negative

Ok, granted, but was the expected utility less than allowing everyone to develop nuclear weapons and then using a policy of MAD? Clearly MAD has a much lower utility if the policy failed, so the only way it could have been superior is if it was considered much more reliable. But why should that be the case? It seems to me that MAD is not very reliable at all because the chance of error in launch detection is high (as illustrated by historical incidents) and the time to react is much shorter.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T15:32:40.363Z · LW(p) · GW(p)

The part you didn't quote addressed that: once this policy doesn't work out as planned, it crumbles and the development of nukes by everyone interested goes on as before. It isn't an alternative to MAD, because it won't actually work.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T16:39:02.574Z · LW(p) · GW(p)

Well, you said that it had a "good chance" of failing. I see your point if by "good chance" you meant probability close to 1. But if "good chance" is more like 50%, then it would still have been worth it. Let's say MAD had a 10% chance of failing:

  • EU(MAD) = 0.1 * U(world destruction)
  • EU(NH) = 0.5 * U(one city destroyed) + 0.05 * U(world destruction)

Then EU(MAD) < EU(NH) if U(world destruction) < 10 * U(one city destroyed).
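
(A quick numerical check of the algebra, with utilities I made up purely for illustration: world destruction at -1000, one destroyed city at -10, so the stated condition holds.)

    # Made-up utilities; both are negative since both outcomes are bad.
    U_world_destruction = -1000.0
    U_one_city = -10.0

    EU_MAD = 0.10 * U_world_destruction
    EU_NH = 0.50 * U_one_city + 0.05 * U_world_destruction

    print(EU_MAD, EU_NH)   # -100.0 -55.0
    print(EU_MAD < EU_NH)  # True: MAD comes out worse with these numbers
    print(U_world_destruction < 10 * U_one_city)  # True: the stated condition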

comment by UnholySmoke · 2009-07-29T14:42:02.758Z · LW(p) · GW(p)

Should they have nuked the USSR on a well-founded suspicion?

I think from a rational perspective, the answer must be yes. [...] Do you really prefer the alternative that actually happened?

Utility function fail?

comment by cousin_it · 2009-07-29T14:00:02.974Z · LW(p) · GW(p)

I'm not sure everything would have happened as you describe, and thus not sure I prefer the alternative that actually happened. But your questions make me curious: do you also think the US was game-theoretically right to attack Iraq and will be right to attack Iran because those countries didn't do "whatever was necessary" to convince you they aren't developing WMDs?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T14:52:25.189Z · LW(p) · GW(p)

My understanding is that the Iraq invasion was done mainly to test the "spread democracy" strategy, which the Bush administration believed in, and WMDs were more or less an excuse. Since that didn't work out so well, there seems to be little chance that Iran will be attacked in a similar way.

Game theoretically, physically invading a country to stop WMDs is much too costly, and not a credible threat, especially since lots of countries have already developed WMDs without being invaded.

comment by khafra · 2012-10-26T19:50:10.673Z · LW(p) · GW(p)

I think that, if we stay out of the least convenient possible world, this is impractical because of the uncertainty of intel. In a world where there was genuine uncertainty whether Saddam Hussein was building WMD, it seems like it would be difficult to gain enough certainty to launch against another country in peacetime. At least, until that other country announced "we have 20 experimental nuclear missiles targeted at major US cities, and we're going to go ahead with our first full-scale test of a nuclear warhead. Your move."

Today, we see this problem with attribution for computer network exploitation from (presumably) state actors. It's a reasonably good parallel to MAD, because we have offensive ability, but little defensive ability. In this environment, we haven't really seen computer network attacks used to control the development of intrusion/exploitation capabilities by state or even private actors (at least, as far as I know).

Replies from: Nornagest
comment by Nornagest · 2012-10-26T21:05:42.789Z · LW(p) · GW(p)

ICBMs didn't exist at the time -- intercontinental capability didn't arrive until the Soviet R-7 missile in 1957, eight years after the first successful Russian nuclear test, and no missiles were tested with nuclear warheads until 1958 -- making the strategic picture dependent at least as much on air superiority as on the state of nuclear tech. Between geography and military focus, that would probably have given the United States a significant advantage if they'd chosen to pursue this avenue in the mid-to-late 1940s. On the other hand, intelligence services were pretty crude in some ways, too; my understanding is that the Russian atomic program was unknown to the American spook establishment until it was nearing completion.

comment by Theist · 2009-07-29T17:21:08.697Z · LW(p) · GW(p)

Basically he was about ruthlessness for the good of humanity.

Sounds like Machiavelli:

Hence we may learn the lesson that on seizing a state, the usurper should make haste to inflict what injuries he must, at a stroke, that he may not have to renew them daily, but be enabled by their discontinuance to reassure men's minds, and afterwards win them over by benefits.

The Prince, Chapter 8

comment by Dustin · 2009-07-28T17:39:50.627Z · LW(p) · GW(p)

Agreed. Unfortunately, my time is short and my book list is long.

Replies from: gjm
comment by gjm · 2009-07-28T22:44:27.661Z · LW(p) · GW(p)

Read it anyway.

comment by Wei Dai (Wei_Dai) · 2009-07-28T17:25:29.614Z · LW(p) · GW(p)

There seems to be a free online version of Schelling's book at http://www.questiaschool.com/read/94434630.

Replies from: roland, cousin_it, roryokane, Alicorn
comment by roland · 2009-07-28T18:51:47.032Z · LW(p) · GW(p)

It is free for a few pages... then you have to sign up, which is not free.

comment by cousin_it · 2009-07-28T19:04:36.833Z · LW(p) · GW(p)

I have a .djvu version that I found ages ago in some torrent; if anyone's interested, PM me and I will email it to you (no webserver handy ATM, sorry).

Replies from: Alicorn, Vladimir_Nesov
comment by Alicorn · 2009-07-28T19:06:55.821Z · LW(p) · GW(p)

Send it to me, I'll host it on my site (for the time being, at least). alicorn@elcenia.com

Replies from: cousin_it, roland
comment by cousin_it · 2009-07-28T19:24:41.974Z · LW(p) · GW(p)

Sent. Did you receive it?

Replies from: Alicorn
comment by Alicorn · 2009-07-28T20:38:48.189Z · LW(p) · GW(p)

I can't open the file myself, apparently, but I've uploaded it here.

Replies from: gworley, saturn
comment by Gordon Seidoh Worley (gworley) · 2009-07-29T17:50:54.910Z · LW(p) · GW(p)

I have converted it to plain text and PDF for everyone's convenience. I don't care much for DjVu, even though it is a better format, because I have a much nicer viewer that lets me quickly annotate a PDF. For now you can download it from here (this probably won't stay up forever, though):

  • edited to remove
  • edited to remove
Replies from: Alicorn
comment by Alicorn · 2009-07-29T17:54:42.123Z · LW(p) · GW(p)

If you don't want to leave them up: text and PDF. Thanks!

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2009-07-29T18:03:59.822Z · LW(p) · GW(p)

Thanks. I have a monthly transfer quota and I already come close. A big PDF might have pushed me over.

comment by saturn · 2009-07-29T17:05:38.111Z · LW(p) · GW(p)

Here's a free djvu viewer for most popular operating systems.

Replies from: Alicorn
comment by Alicorn · 2009-07-29T17:09:53.506Z · LW(p) · GW(p)

It's possible I just fail at downloading things, but the Mac version doesn't seem to be there.

Replies from: saturn, Bo102010
comment by saturn · 2009-07-30T02:12:34.645Z · LW(p) · GW(p)

I've mirrored the latest Mac and Windows versions of the DjVu viewer, along with the original and converted versions of the book, here.

comment by Bo102010 · 2009-07-29T17:23:00.851Z · LW(p) · GW(p)

It's because of Sourceforge's long spiral into uselessness. Google for the filename and you'll find a mirror.

comment by roland · 2009-07-28T19:16:23.069Z · LW(p) · GW(p)

Drop us a note with link when it's online.

comment by Vladimir_Nesov · 2009-07-28T22:18:07.251Z · LW(p) · GW(p)

It's available on the Kad p2p network, as are most of the sufficiently popular technical books.

comment by roryokane · 2012-10-01T11:16:36.829Z · LW(p) · GW(p)

I was able to download a copy from http://www.manyebooks.org/download/The_Strategy_of_Conflict.html, which links to http://www.en8848.com.cn/d/file/soft/Nonfiction/Obooks/201012/8bbb35724dcac415a5ecd74a62b2ba97.rar as the actual download link. That version is a 4.8 MB PDF file. It has equivalent image quality to the 17.4 MB PDF file hosted by Alicorn and is a smaller file.

comment by Alicorn · 2009-07-28T17:27:37.661Z · LW(p) · GW(p)

That link appears to lead to a copy of Nathaniel Hawthorne's "The Scarlet Letter".

comment by DavidAgain · 2013-03-21T23:02:00.253Z · LW(p) · GW(p)

The radio example is strangely apt, given that the most blatant manipulation of this sort I've experienced has involved people texting to say 'I'm already at [my preferred pub] for the evening: meet here? Sorry, but will be out of reception', or people emailing to ask you to deal with something and then their out-of-office appearing on your response.

Replies from: RandomThinker
comment by RandomThinker · 2013-04-19T09:48:09.004Z · LW(p) · GW(p)

It's amazing how good humans are at this sort of thing, by instinct. I'm reading the book Hierarchy in the Forest, which is about tribal bands of humans up to 100k years ago. Without law and social structure, they basically solved all of their social equality problems by game theory. And depending on when precisely you think they evolved this social dynamic, they may have had hundreds of thousands of years to perfect it before we became hierarchical again.

http://www.amazon.com/Hierarchy-Forest-Evolution-Egalitarian-Behavior/dp/0674006917

If you look at rationality on a spectrum, this type of game theory isn't at the most enlightened/sophisticated end of it. Thugs, bullies, despots and drama queens are very good at this sort of manipulation. Rather, it's basically the most primitive, instinctive part of human reasoning.

However, that's not to say it doesn't work. The original post's description of not wanting to look at yourself in the mirror afterwards is very apt.

comment by billswift · 2009-07-28T17:50:26.267Z · LW(p) · GW(p)

"Anyone, no matter how crazy, who you utterly and completely ignore will eventually stop bothering you." quote from memory from Spider Robinson, context was working in a mental hospital so escalation to violence wasn't a risk.

comment by Wei Dai (Wei_Dai) · 2009-07-28T21:37:06.802Z · LW(p) · GW(p)

In the radio example, there is no way for me to convince you that the receive capability is truly broken. Given that, there is no reason for me to actually break the receive capability, and you should distrust any claim on my part that it has been broken.

But Schelling must have been able to follow this reasoning, so what point was he trying to illustrate with the radio example?

Replies from: Alicorn, Technologos, cousin_it
comment by Alicorn · 2009-07-28T21:52:13.993Z · LW(p) · GW(p)

It can be difficult to pretend to be unable to hear someone on the other end of a two way communication. The impulse not to interrupt is strong enough to cause detectable irregularities in speech. Actually breaking, or at least turning off, the receive capability might be essential to maintaining the impression on the other end that it's broken.

Replies from: Jonathan_Graehl, wedrifid
comment by Jonathan_Graehl · 2009-07-29T05:32:42.621Z · LW(p) · GW(p)

A banal observation: everyone is assuming that the radio speaker is disabled while I transmit (or that I use an earpiece that the microphone can't overhear). I'm guessing the first is actually the case with handheld radios.

comment by wedrifid · 2009-07-29T01:05:59.996Z · LW(p) · GW(p)

It can be difficult to pretend to be unable to hear someone on the other end of a two way communication. The impulse not to interrupt is strong enough to cause detectable irregularities in speech. Actually breaking, or at least turning off, the receive capability might be essential to maintaining the impression on the other end that it's broken.

It is difficult to consciously pretend. That's why our brains don't leave this particular gambit up to our consciousness. It does seem that this, as you say, involves genuinely breaking the receive capability, but evidently the actual cost in terms of information wasted is worth the price.

comment by Technologos · 2009-07-30T07:25:06.843Z · LW(p) · GW(p)

Even if I distrust that you have a broken radio, as long as I prefer going to meet you (accepting the additional cost therein entailed) to never meeting you or meeting after an indefinitely long time, I will still go to wherever you say you are. If both people's radios are unbroken after the crash, whoever transmits the "receiver broken" signal probably gets the easier time of it.

This game is essentially the (repeated?) game of chicken, as long as "claim broken receiver and other person capitulates" > "both players admit unbroken" > "capitulate to other person's claim" > "neither player capitulates while both claim broken receivers".
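For concreteness, here's a tiny payoff table (the numbers are mine and arbitrary; only the ordering quoted above matters) showing that this is the chicken structure:

```python
# Payoffs are (mine, yours); "claim" = insist my receiver is broken, "yield" = capitulate.
payoffs = {
    ("claim", "yield"): (3, 1),   # my claim sticks and you come to me
    ("yield", "yield"): (2, 2),   # both admit working radios and meet halfway
    ("yield", "claim"): (1, 3),   # I capitulate to your claim
    ("claim", "claim"): (0, 0),   # mutual stubbornness; we never meet
}
# Chicken's signature ordering: unilateral toughness > mutual cooperation
# > capitulation > mutual toughness.
assert (payoffs[("claim", "yield")][0] > payoffs[("yield", "yield")][0]
        > payoffs[("yield", "claim")][0] > payoffs[("claim", "claim")][0])
```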

Conveniently, this appears to be the broader point Schelling was trying to make. Flamboyant disabling of one's options often puts one in a better negotiating position. Hence, the American garrison in West Berlin.

comment by cousin_it · 2009-07-28T21:50:23.211Z · LW(p) · GW(p)

Actually, the book describes a psychological experiment conducted much like the situation I described, except that the subjects were told outright by a trustworthy experimenter that their partner couldn't learn their whereabouts. But I still think that average-rational humans would often fall for the radio trick. Expected utility suggests you don't have to believe your partner 100%; a small initial doubt, reinforced over a day, could suffice.

And yep, the problem of making your precommitments trustworthy is also discussed in much detail in the book.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T02:11:45.699Z · LW(p) · GW(p)

There may be an interesting connection between this example and AIs knowing each other's source code. The idea is, if one AI can unilaterally prove its source code to another without the receiver being able to credibly deny receipt of the proof, then it should change its source code to commit to an unfair agreement that favors itself, then prove this. If it succeeds in being the first to do so, the other side then has no choice but to accept. So, Freaky Fairness seems to depend on the details of the proof process in some way.

Replies from: orthonormal, Eliezer_Yudkowsky, cousin_it
comment by orthonormal · 2009-07-29T18:52:53.745Z · LW(p) · GW(p)

If it succeeds in being the first to do so, the other side then has no choice but to accept.

This presumes that the other side obeys standard causal decision theory; in fact, it's an illustration of why causal decision theory is vulnerable to exploitation if precommitment is available, and suggests that two selfish rational CDT agents who each have precommitment options will generally wind up sabotaging each other.

This is a reason to reject CDT as the basis for instrumental rationality, even if you're not worried that Omega is lurking around the corner.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T19:59:38.533Z · LW(p) · GW(p)

You can reject CDT but what are you going to replace it with? Until Eliezer publishes his decision theory and I have a chance to review it, I'm sticking with CDT.

I thought cousin_it's result was really interesting because it seems to show that agents using standard CDT can nevertheless convert any game into a cooperative game, as long as they have some way to prove their source code to each other. My comment was made in that context, pointing out that the mechanism for proving source code needs to have a subtle property, which I termed "consensual".

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-29T20:55:08.546Z · LW(p) · GW(p)

One obvious "upgrade" to any decision theory that has such problems is to discard all of your knowledge (data, observations) before making any decisions (save for some structural knowledge to leave the decision algorithm nontrivial). For each decision that you make (using given decision algorithm) while knowing X, you can make a conditional decision (using the same decision algorithm) that says "If X, then A else B", and then recall whether X is actually true. This, for example, mends the particular failure of not being able to precommit (you remember that you are on the losing branch only after you've made the decision to do a certain disadvantageous action if you are on the losing branch).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-30T00:37:57.649Z · LW(p) · GW(p)

You can claim that you are using such a decision theory and hence that I should find your precommitments credible, but if you have no way of proving this, then I shouldn't believe you, since it is to your advantage to have me believe you are using such a decision theory without actually using it.

From your earlier writings I think you might be assuming that AIs would be intelligent enough to just know what decision algorithms others are using, without any explicit proof procedure. I think that's an interesting possibility to consider, but not a very likely one. But maybe I'm missing something. If you wrote down any arguments in favor of this assumption, I'd be interested to see them.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-30T10:16:31.719Z · LW(p) · GW(p)

That was an answer to your question about what you should replace CDT with. If you can't convince other agents that you now run on timeless CDT, you gain a somewhat smaller advantage than otherwise, but that's a separate problem. If you know that your claims of precommitment won't be believed, you don't precommit; it's that easy. But sometimes you'll find a better solution than if you only lived in the moment.

Also note that even if you do convince other agents of the abstract fact that your decision theory is now timeless, it won't help you very much, since it doesn't prove that you'll precommit in a specific situation. You only precommit in a given situation if you know that this action makes the situation better for you, which in the case of cooperation means that the other side must be able to tell whether you actually precommitted, and this is not at all the same as being able to tell what decision theory you use.

Since using a decision theory with precommitment is almost always an advantage, it's easy to assume that a sufficiently intelligent agent always uses something of the sort, but that doesn't allow you to know more about their actions -- in fact, you know less, since such an agent now has more options.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-30T20:17:02.266Z · LW(p) · GW(p)

But sometimes, you'll find a better solution than if you only lived in a moment.

Yes, I see that your decision theory (is it the same as Eliezer's?) gives better solutions in the following circumstances:

  • dealing with Omega
  • dealing with copies of oneself
  • cooperating with a counterpart in another possible world

Do you think it gives better solutions in the case of AIs (who don't initially think they're copies of each other) trying to cooperate? If so, can you give a specific scenario and show how the solution is derived?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-29T04:13:44.405Z · LW(p) · GW(p)

Unless, of course, you already know that most AIs will go ahead and "suicidally" deny the unfair agreement.

comment by cousin_it · 2009-07-29T07:25:00.622Z · LW(p) · GW(p)

Yes. In the original setting of FF the tournament setup enforces that everyone's true source code is common knowledge. Most likely the problem is hard to solve without at least a little common knowledge.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T12:14:26.511Z · LW(p) · GW(p)

Yes. In the original setting of FF the tournament setup enforces that everyone's true source code is common knowledge. Most likely the problem is hard to solve without at least a little common knowledge.

Hmm, I'm not seeing what common knowledge has to do with it. Instead, what seems necessary is that the source code proving process must be consensual rather than unilateral. (The former has to exist, and the latter cannot, in order for FF to work.)

A model for a unilateral proof process would be a trustworthy device that accepts a string from the prover and then sends that string along with the message "1" to the receiver if the string is the prover's source code, and "0" otherwise.

A model for a consensual proof process would be a trustworthy device that accepts from the prover and verifier each a string, and sends a message "1" to both parties if the two strings are identical and represent the prover's source code, and "0" otherwise.
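To make the two models concrete, here's a minimal sketch (the function signatures are mine; the devices are only described informally above):

```python
# Trusted-device models for proving source code. In both, `true_source` is what
# the device independently knows to be the prover's actual code.

def unilateral_proof(prover_submission: str, true_source: str):
    """The prover alone triggers the proof; the verifier receives it whether or not
    they asked for it, so they cannot credibly deny having seen it."""
    verified = prover_submission == true_source
    return prover_submission, 1 if verified else 0  # delivered to the verifier

def consensual_proof(prover_submission: str, verifier_submission: str, true_source: str):
    """The proof succeeds only if both parties submit the same string and it really is
    the prover's code, so the verifier has to opt in before the proof can bind them."""
    verified = prover_submission == verifier_submission == true_source
    result = 1 if verified else 0
    return result, result  # the same bit goes to both parties

# The verifier can always decline a consensual proof by submitting nothing:
assert consensual_proof("def fair(): ...", "", "def fair(): ...") == (0, 0)
```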

Replies from: cousin_it
comment by cousin_it · 2009-07-29T18:04:17.287Z · LW(p) · GW(p)

In your second case one party can still cheat by being out of town when the "1" message arrives. It seems to me that the whole endeavor hinges on the success of the exchange being common knowledge.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T18:15:55.499Z · LW(p) · GW(p)

In your second case one party can still cheat by being out of town when the "1" message arrives.

I'm not getting you. Can you elaborate on which party can cheat, and how? And by "second case" do you mean the "unilateral" one or the "consensual" one?

Replies from: cousin_it
comment by cousin_it · 2009-07-29T21:17:15.089Z · LW(p) · GW(p)

The "consensual" one.

For a rigorous demonstration, imagine this: while preparing to play the Freaky Fairness game, I managed to install a subtle bug into the tournament code that will slightly and randomly distort all source code inputs passed to my algorithm. Then I submit some nice regular quining-cooperative program. In the actual game your program will assume I will cooperate, while mine will see you as a defector and play to win. When the game gives players an incentive to misunderstand, even a slight violation of "you know that I know that you know..." can wreak havoc, hence my emphasis on common knowledge.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-29T23:29:14.458Z · LW(p) · GW(p)

In the actual game your program will assume I will cooperate, while mine will see you as a defector and play to win.

I see what you're saying now, but this seems easy to prevent. Since you have changed your source code to FF, and I know you have, I can simply ask you whether you believe I am a defector, and treat you as a defector if you say "yes". I know your source code so I know you can't lie (specify Freaky Fairness to include this honesty). Doesn't that solve the problem?

ETA: There is still a chance of accidental miscommunication, but you no longer have an incentive to deliberately cheat.

Replies from: cousin_it
comment by cousin_it · 2009-07-30T09:24:56.380Z · LW(p) · GW(p)

In this solution you have an incentive to similarly be out of town when I say "no". Think it through recursively. Related topics: the two generals problem, two-phase commit.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2009-07-30T19:42:50.117Z · LW(p) · GW(p)

Ok, let's say that two FFs can establish a cryptographically secure channel. The two players can each choose to block the channel at any time, but they can't read, inject, delete, or reorder messages. Is that sufficient to make it arbitrarily unlikely for any player to put the FFs into a state where FF1 will treat FF2 as a cooperator, but FF2 will treat FF1 as a defector? I think the answer is yes, using the following protocol:

FF1 will start by sending a 1 or 0 (chosen randomly) to FF2. After that, each FF will send a 1 or 0 after it receives a 1 or 0 from the other, keeping the number of 1s sent no more than the number of 1s received plus one. If an FF receives N 1s before a time limit is reached, it will treat the other as a cooperator, otherwise as a defector. Now, in order to cheat, a player would have to guess when to block the channel, and the probability of guessing the right time goes to 0 as N goes to infinity.
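A toy check of that claim (my own code, and it simplifies the protocol: both sides always send 1s, and the cheater blocks the channel at a point it can't correlate with the counts):

```python
def beliefs_after_cut(N, cut_after):
    """Messages alternate FF1->FF2, FF2->FF1, ...; the channel dies after `cut_after`
    messages. Returns (ff1_sees_cooperator, ff2_sees_cooperator)."""
    ff2_received = (cut_after + 1) // 2  # FF1 sends messages 1, 3, 5, ...
    ff1_received = cut_after // 2        # FF2 sends messages 2, 4, 6, ...
    return ff1_received >= N, ff2_received >= N

N = 100
exploitable = [k for k in range(2 * N + 1)
               if beliefs_after_cut(N, k) == (False, True)]
# Only the single cut point k = 2N - 1 leaves FF2 trusting FF1 while FF1 distrusts FF2,
# so a blocker who can't see the counts succeeds with probability about 1/(2N) -> 0.
assert exploitable == [2 * N - 1]
```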

This is not necessarily the most efficient protocol, but it may be good enough as a proof of concept. On the other hand, the "merger by secure joint construction" approach seems to have the advantage of not having to deal with this problem. Or is there an analogous one that I'm not seeing?

comment by Dagon · 2009-07-28T19:37:14.177Z · LW(p) · GW(p)

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest

Almost all human interactions are of the third type. In fact, I think of this not as three parts but as one thing - strategy - which has (at least) two special cases that have been studied: zero-sum and cooperative positive-sum. These special cases are interesting not because they occur, but because they illuminate aspects of the whole.

comment by Wei Dai (Wei_Dai) · 2009-07-28T17:24:36.983Z · LW(p) · GW(p)

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest.

I'm still trying to figure out a good description of "cooperative game theory". What do you think of this:

Cooperative game theory studies situations where agreements to cooperate can be enforced, and asks which agreements and outcomes will result. This typically involves considerations of individual rationality and fairness.

Replies from: Alicorn
comment by Alicorn · 2009-07-28T17:34:49.534Z · LW(p) · GW(p)

What do you mean by "enforced"?

Replies from: Wei_Dai, Jack
comment by Wei Dai (Wei_Dai) · 2009-07-28T17:50:17.907Z · LW(p) · GW(p)

It means we can assume that once an agreement is made, all the agents will follow it. For example, the agreement may be a contract enforceable by law, or enforced by being provably coded into the agents' decision algorithms, or just by the physics of the situation like in my black hole example.

comment by Jack · 2009-07-28T17:50:40.512Z · LW(p) · GW(p)

I assume he means punishing defectors.

comment by Bayeslisk · 2013-12-10T09:25:47.403Z · LW(p) · GW(p)

This seems interesting in the horrifying way I have been considering excising from myself, due to the prevalence of hostile metastrategic bashes: that is, people find out you are the kind of person who flat-out welcomes game theory making a monster of em, and then refuse to deal with you - good day, enjoy being a sociopath, and without the charm, to boot.

comment by SilasBarta · 2009-07-28T17:40:06.836Z · LW(p) · GW(p)

I haven't read this book, but I can't see how Schelling would convincingly make this argument:

Leo Szilard has even pointed to the paradox that one might wish to confer immunity on foreign spies rather than subject them to prosecution, since they may be the only means by which the enemy can obtain persuasive evidence of the important truth that we are making no preparations for embarking on a surprise attack.

It's true that enemy spies can provide a useful function, in allowing you to credibly signal self-serving information. However, a deliberate, publicly-known policy of aiding enemy spies defeats the purpose, because at that point it's indistinguishable from counterespionage. After all, why not go one step further and feed spies truthful information? The same problem applies here.

Replies from: Psychohistorian, eirenicon
comment by Psychohistorian · 2009-07-28T18:49:45.753Z · LW(p) · GW(p)

I don't quite see how conferring immunity on foreign spies would degrade the information they could access. Deliberately and openly feeding them information is going to be pointless, as they obviously can't trust you. But encouraging foreign spies by not prosecuting them should not negatively affect their ability to obtain and relay information.

Replies from: SilasBarta
comment by SilasBarta · 2009-07-28T19:35:48.384Z · LW(p) · GW(p)

I still don't see it. It doesn't seem like any deliberate action could ever credibly signal "information only a spy could get". It's almost a problem of self-contradiction:

"I'm trying to tell you something that I'm trying to hide from you."

To put it in more concrete terms, what if one day the US lifted all protocols for protecting information at its military bases and defense contractors? Would foreign espionage agencies think, "woot! Motherload!" Or would they think, "Sure, and it's probably all junk ... where are the real secrets?"

Replies from: Cyan
comment by Cyan · 2009-07-28T20:15:22.025Z · LW(p) · GW(p)

The signal isn't to the opposing power -- it's to potential spies. You make recruiting easier for the opponent because you want to establish a fact about your plans and goals. The opponent will always have the problem of determining whether or not you're feeding its spies disinformation, but having more independent spies can help with that.

Replies from: SilasBarta
comment by SilasBarta · 2009-07-29T02:46:36.034Z · LW(p) · GW(p)

So again, take it one step further: what would be wrong with subsidizing foreign spies? Say, pay a stipend to an account of their choice. Wouldn't that make people even more willing to be spies?

Replies from: Cyan
comment by Cyan · 2009-07-29T03:19:29.312Z · LW(p) · GW(p)

That would probably work too, provided you could conclusively demonstrate that the payment system wasn't some kind of trap (to address the concerns of potential spies) or attempt at counter-recruitment (to address the concerns of the opponent). That seems more difficult than simply declaring a policy of immunity and demonstrating it by not prosecuting caught spies.

ETA: Oh yeah, you also have to confirm that the people you are paying are actually doing the job you are paying them for, to wit, conveying accurate information to the opponent. It can't just be a "sign up for anonymous cash payments" scheme. I can't think of a way to simultaneously guarantee all these conditions, but if there is a way I'm not imaginative enough to see, then yeah, subsidizing the opponent's spies would work.

comment by eirenicon · 2009-07-28T21:45:37.631Z · LW(p) · GW(p)

You're not aiding spies in getting information; you're just lowering the risk they take, which encourages more spying. Someone in a high position could leak information, risking only being fired rather than being shot. This does not change the reliability of the information, which, in spying, is always in question anyway.

comment by wedrifid · 2009-07-28T17:20:45.198Z · LW(p) · GW(p)

I sometimes think of game theory as being roughly divided in three parts, like Gaul. There's competitive zero-sum game theory, there's cooperative game theory, and there are games where players compete but also have some shared interest. Except this third part isn't a middle ground. It's actually better thought of as ultra-competitive game theory. Zero-sum settings are relatively harmless: you minimax and that's it. It's the variable-sum games that make you nuke your neighbour.

Could you clarify that last bit for me? You seem to have a valid point but I don't think I can glean it from that wording. I can imagine plenty of scenarios in which competitive zero-sum game theory will suggest that I nuke my neighbour. The most obvious example is if I kill them all and take their stuff and I think I can get away with it. Common interests appear not to be necessary.

Replies from: cousin_it, tuli, christopherj
comment by cousin_it · 2009-07-28T19:19:07.883Z · LW(p) · GW(p)

Of course some real-world zero-sum games are ruthless too, but I haven't seen anything as bad as the nuke game, and it is variable-sum. Peace benefits everyone, but if one side in an arms race starts getting ahead, both sides know there will be war, which harms both. If the game were zero-sum, war would've happened long ago and with weaker weapons.

The book gives an example of both Soviets and Americans expending effort on submarine-detection technologies while both desperately hoping that such technologies don't exist, because undetectable submarines with ICBMs are such a great retaliation guarantee that no one attacks anyone.

Replies from: wedrifid
comment by wedrifid · 2009-07-28T23:11:02.924Z · LW(p) · GW(p)

Thanks, that makes sense. It also brings to mind some key points from Robin's talk on existential risks.

comment by tuli · 2009-07-29T06:23:19.309Z · LW(p) · GW(p)

Just remember that once you nuke (that is, destroy) something, you have left the bounds of a zero-sum game and quite likely entered a negative-sum game (though you may end up with a positive outcome, the sum is negative).

Replies from: bentarm, wedrifid
comment by bentarm · 2009-07-29T11:13:42.190Z · LW(p) · GW(p)

(though you may end up with a positive outcome, the sum is negative).

Well, isn't this exactly the problem cousin_it is referring to when the game is non-zero-sum? It means that I might need to take 1000 utils from you in order to gain 50 utils for myself (or even: I might need to take 1000 utils from you in order to limit my losses to 50 utils).

comment by wedrifid · 2009-07-29T08:03:47.289Z · LW(p) · GW(p)

It's possible that it will be a negative sum. It is also possible in principle that it has become a positive sum. The sign of the 'sum' doesn't actually seem to be the important part of the quoted context here; rather, what matters is the presence or absence of a shared interest.

comment by christopherj · 2013-12-24T22:10:07.412Z · LW(p) · GW(p)

In two player zero sum games, vengeance (hurting self to hurt other more) is impossible, as are threats and destruction in general -- because the total score is always the same. They are ruthless in that to gain score you must take it from the other player (which also eliminates cooperation), but there can be no nuking. If the game is variable sum (or zero sum with extra players), you again gain the ability to unilaterally and unavoidably lower someone's score (the score can be destroyed in variable sum games, or transferred to the other players in zero sum games), allowing for vengeance, punishment, destruction, team cooperation, etc.
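One way to see the constraint (my framing, not from the comment): with the total fixed, any change in one player's score is exactly mirrored in the other's, so no move can lower both.

```python
TOTAL = 100  # the constant total score of a two-player zero-sum (constant-sum) game

def opponent_score(my_score):
    return TOTAL - my_score

# Lowering your score by x necessarily raises mine by x; "hurting myself to hurt
# you more" would require both scores to fall, which the constraint rules out.
for my_score in range(TOTAL + 1):
    assert my_score + opponent_score(my_score) == TOTAL
```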

Replies from: wedrifid, Lumifer
comment by wedrifid · 2014-01-31T18:46:38.194Z · LW(p) · GW(p)

I have reread the context and I find I concur with wedrifid_2009.

In two player zero sum games, vengeance (hurting self to hurt other more) is impossible, as are threats and destruction in general -- because the total score is always the same.

Vengeance is impossible and threats are irrelevant, but destruction most certainly is not impossible. Don't confuse the arbitrary constraint "the total score is always the same" with the notion that nothing 'destructive' can occur in such a game. What is prevented (to rational participants) is destruction for the purpose of game-theoretic influence.

Consider a spherical cow in (a spaceship with me in a) vacuum. We are stranded and have a fixed reserve of energy. I am going to kill the spherical cow. I will dismember her. I will denature the proteins that make up her flesh. Then I will eat her. Because destroying her means I get to use all the energy and oxygen for myself. This includes the energy that was in the cow before I destroyed her. It's nothing personal. There was no threat. I was not retaliating. There was neither punishment nor cooperation. Just destruction.

ie. One of these things is not like the other things, one of these things just doesn't belong:

vengeance, punishment, destruction, team cooperation, etc

comment by Lumifer · 2013-12-25T00:00:08.780Z · LW(p) · GW(p)

In two player zero sum games, vengeance (hurting self to hurt other more) is impossible, as are threats and destruction in general -- because the total score is always the same.

It may be that you're using a restrictive definition of zero-sum games, but generally speaking that is not true because of the difference between the final outcome and the intermediate score-keeping.

Consider e.g. a fight to the death or a computer-game match with a clear winner. The outcome is zero-sum: one player wins, one player loses, the end. But in the process of the fight the score varies and things like hurting self to hurt the other more are perfectly possible and can be rational tactics.

Replies from: Vaniver, christopherj
comment by Vaniver · 2013-12-25T01:28:57.072Z · LW(p) · GW(p)

I think you're mixing levels - in a match with a clear winner, "hurting self" properly means "making my probability of losing higher", not "reducing my in-game resources". I can't reduce my chance of winning in order to reduce my opponent's chance of winning by more - the net effect would be to increase my chance of winning.

Replies from: Lumifer
comment by Lumifer · 2013-12-25T01:43:30.524Z · LW(p) · GW(p)

I am not so much mixing levels as pointing out that different levels exist.

comment by christopherj · 2013-12-25T07:32:55.669Z · LW(p) · GW(p)

You're confusing yourself because you're mixing scoring systems -- first you say that the game is zero-sum, win or lose, then you talk about variable-sum game resources. In a zero-sum game, the total score is always the same; you can either steal points or give them away, but you can never destroy them. If the total score changes throughout the game, then you're not talking about a zero-sum game. There are no different levels, though you can play a zero-sum game as a variable-sum game (I won while at full health!).

comment by beoShaffer · 2011-10-29T18:28:30.408Z · LW(p) · GW(p)

I am in the middle of reading this book, due to this post. I strongly second the recommendation to read it.

comment by Aryn · 2011-10-11T04:07:22.956Z · LW(p) · GW(p)

Why should it be advantageous to break your receiver? You've been dropped in a wild, mountainous region, with only as many supplies as you can carry down on your parachute. Finding and coordinating with another human is to your advantage, even if you don't get extracted immediately upon meeting the objective. The wilderness is no place to sit down on a hilltop. You need to find food, water, shelter and protection from predators, and doing this with someone else to help you is immensely easier. We formed tribes in the first place for exactly this reason.

Replies from: Decius
comment by Decius · 2013-03-20T21:04:11.192Z · LW(p) · GW(p)

Because both people are competent and capable of surviving alone, but it's more work to travel; by having a radio that transmits but doesn't receive, you can transmit your location and the fact that you can't receive, and the other person is forced to come to you rather than meet you at a point halfway between the two of you.

comment by [deleted] · 2010-05-26T02:14:47.814Z · LW(p) · GW(p)

This is where rationality and logic meet. If, upon landing, you knew that the other person had broken their own radio in order to avoid work, you would most likely meet up with them anyway. Given that you are away from civilization and will only be picked up upon rendezvous, it is in your own best interest to meet, and then reveal the other's deception upon pickup. Even if you are not given their share, you still have your own, which was the original goal, and you have not lost anything. Also, the work it takes to reach a coordinate acceptable to both persons may equal or exceed the work of going straight to the other person.