Is our continued existence evidence that Mutually Assured Destruction worked?

post by jefftk (jkaufman) · 2013-06-18T14:40:36.167Z · LW · GW · Legacy · 68 comments

The standard view of Mutually Assured Destruction (MAD) is something like:

During the Cold War the US and USSR had weapons capable of immense destruction, but no matter how tense things got, they never used them, because they knew how bad that would be. While MAD is a terrifying thing, it did work, this time.

Occasionally people will reply with an argument like:

If any of several near-miss incidents had gone even slightly differently, both sides would have launched their missiles and we wouldn't be here today looking back. In a sense this was an experiment where the only outcome we could observe was success: nukes would have meant no observers, no nukes and we're still here. So we don't actually know how useful MAD was.

This is an anthropic argument, an attempt to handle the bias that comes from a link between outcomes and the number of people who can observe them. Imagine we were trying to figure out whether flipping "heads" was more likely than flipping "tails", but there was a coin demon that killed everyone if "tails" came up. Either we would see "heads" flipped, or we would see nothing at all. We're not able to sample from the "tails: everyone-dies" worlds. Even if the demon responds to tails by killing everyone only 40% of the time, we're still going to over-sample the happy-heads outcome.
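As a minimal simulation sketch of the demon example, using the 40% figure above (the code and numbers are just an illustration of the sampling bias):

```python
import random

def surviving_observations(n=1_000_000, p_kill=0.4):
    """Flip a fair coin n times; on tails, the demon kills all
    observers with probability p_kill, so nobody samples that world."""
    heads = tails = 0
    for _ in range(n):
        if random.random() < 0.5:
            heads += 1                   # heads: everyone lives
        elif random.random() >= p_kill:  # tails, but the demon spares us
            tails += 1
    return heads / (heads + tails)

print(surviving_observations())  # ~0.625, not 0.5: survivors over-sample heads
```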

Applying the anthropic principle here, however, requires that a failure of MAD really would have killed everyone. While it would have killed billions, and made major parts of the world uninhabitable, many people would still have survived. [1] How much would we have rebuilt? What would the population be now? If the Cold War had gone hot and the US and USSR had set about wiping each other out, what would 2013 be like? Roughly, we're oversampling the no-nukes outcome by the ratio of our current population to the population there would have been in a yes-nukes outcome, and the less lopsided that ratio is, the more evidence that MAD did work after all.
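To make that concrete, here is a toy sketch with made-up numbers: assume a 10% chance of war if MAD works and a 50% chance if it was mere luck, and weight each outcome by its share of observers. The strength of the evidence that "no war" gives in favor of MAD working then depends directly on how lopsided the population ratio is:

```python
def evidence_for_mad(survivor_ratio, p_war_if_works=0.1, p_war_if_luck=0.5):
    """Likelihood ratio favoring 'MAD works' over 'MAD was luck', given
    that we find ourselves in a no-war world, weighting each outcome by
    its share of observers. survivor_ratio = (population after a nuclear
    war) / (actual no-war population). All numbers are illustrative."""
    def p_observer_sees_no_war(p_war):
        no_war_observers = 1 - p_war            # normalized population
        war_observers = p_war * survivor_ratio
        return no_war_observers / (no_war_observers + war_observers)
    return (p_observer_sees_no_war(p_war_if_works)
            / p_observer_sees_no_war(p_war_if_luck))

for r in (0.0, 1/7, 0.5, 1.0):
    print(f"survivor ratio {r:.2f}: likelihood ratio {evidence_for_mad(r):.2f}")
# 0.00 -> 1.00 (a pure anthropic shadow: survival tells us nothing)
# 1.00 -> 1.80 (no shadow: survival is full-strength evidence for MAD)
```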


[1] For this, Wikipedia cites The global health effects of nuclear war (1982) and Long-term worldwide effects of multiple nuclear-weapons detonations (1975). Some looking online also turns up an Accelerating Future blog post. I haven't read them thoroughly, and I don't know much about the research here.

I also posted this on my blog

68 comments

Comments sorted by top scores.

comment by [deleted] · 2013-06-18T18:10:36.194Z · LW(p) · GW(p)

Why are "observers" ontologically fundamental in these anthropic arguments? Is there somewhere an analysis of this assumption? I know that there is SSA and SIA, but I don't really understand them.

How do you assign probability over worlds with different numbers of observers?

Replies from: jkaufman, shminux, SilasBarta
comment by jefftk (jkaufman) · 2013-06-19T20:45:19.545Z · LW(p) · GW(p)

Why are "observers" ontologically fundamental in these anthropic arguments?

An elaboration on the coin demon example. Let's say flipping "tails" instantly kills half the people. There are three things that can happen when the coin is flipped:

  • 50%: heads, everyone lives
  • 25%: tails, and you're lucky and live
  • 25%: tails, and you're unlucky and die

Now imagine you're looking back at a coin that was flipped earlier: 1/3 of the time you'll see tails and 2/3 you'll see heads.
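A quick check of that arithmetic, conditioning on the observer surviving:

```python
# Condition on there being a surviving observer (75% of outcomes):
p_heads, p_tails_alive = 0.50, 0.25
print(p_heads / (p_heads + p_tails_alive))        # 2/3: survivors see heads
print(p_tails_alive / (p_heads + p_tails_alive))  # 1/3: survivors see tails
```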

"If a tree falls on Sleeping Beauty" might be useful.

comment by Shmi (shminux) · 2013-06-18T21:25:38.651Z · LW(p) · GW(p)

I don't understand it, either. I mean, I understand the logic, I just don't understand how an assumption like that can ever be tested, short of Omega coming down to Earth and introducing a fellow simulation to us. Maybe we could talk about this at a meetup.

Replies from: cousin_it, None
comment by cousin_it · 2013-06-18T22:34:15.134Z · LW(p) · GW(p)

Here's a funny test: if many people flip coins to decide whether to have kids, and SSA is true, then the results should be biased toward "don't have kids". Bostrom's book discusses similar scenarios, I think, but I'm still pretty proud of coming up with them independently :-)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-06-19T20:46:40.467Z · LW(p) · GW(p)

the results should be biased toward "don't have kids"

Could you elaborate?

Replies from: cousin_it
comment by cousin_it · 2013-06-20T09:07:38.272Z · LW(p) · GW(p)

See chapter 9 of Bostrom's book. His analysis seems a little weird to me, but the descriptions of the scenarios are very nice and clear.

comment by [deleted] · 2013-06-19T00:05:25.686Z · LW(p) · GW(p)

Maybe we could talk about this at a meetup.

Good idea. We can make confusing anthropic stuff a future topic. Perhaps this weekend even.

Replies from: shminux
comment by Shmi (shminux) · 2013-06-21T18:00:01.458Z · LW(p) · GW(p)

Huh, I thought the meetups were on hiatus for the summer, since they haven't shown up in the regular or irregular LW meetup announcements.

Replies from: None
comment by [deleted] · 2013-06-23T21:23:58.919Z · LW(p) · GW(p)

Not at all. I'm just too lazy to post them a lot of the time.

The general rule is every Saturday at 15:30 at bennys.

comment by SilasBarta · 2013-06-20T01:04:28.930Z · LW(p) · GW(p)

Many treatments of this issue use "observer-moments" as the fundamental unit over which the selection occurs, with observers expecting to find themselves in the class of observer-moments most common in the space of all observer-moments.

comment by verbify · 2013-06-19T12:05:25.130Z · LW(p) · GW(p)

There is empirical evidence against the MAD hypothesis.

During the Cuban Missile Crisis, the officers of a Russian submarine believed that nuclear war had broken out. Three officers on board were authorised to launch a nuclear torpedo, but only by unanimous agreement. An argument broke out among the three, in which Vasili Arkhipov (http://en.wikipedia.org/wiki/Vasili_Arkhipov) was against the launch, preventing the torpedo from being fired (and a nuclear retaliation would presumably have been likely).

Let's assign a probability that any one officer would be in favour of launching the torpedo. Given that 2/3 of our sample were in favour, let's take the probability to be 2/3 for any single officer. The chance that all three would have agreed to launch is then (2/3)^3 ≈ 30%. Even if the probability was less than 2/3 (and people's opinions are interdependent - one person can convince another, or one person can take a contrarian stand), it still could have happened. And this wasn't the only time that there almost was a nuclear war. (Incidentally, I believe these cases show that an individual effort can at certain points have a huge impact - proof of both the butterfly effect and the 'great man theory', as long as you redefine 'Great Man' to be a person in the right time and place.)
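A quick check of that arithmetic (treating the officers as independent, the same simplification the comment itself flags):

```python
p = 2 / 3  # assumed per-officer probability of favoring launch
print(f"P(unanimous launch) = {p**3:.3f}")  # 8/27, about 0.296

for p in (1/3, 1/2, 2/3):  # sensitivity to the per-officer estimate
    print(f"p = {p:.2f}: P(unanimous launch) = {p**3:.3f}")
```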

As to the question of whether MAD worked - it can certainly be argued to have helped, but it demonstrably did not prevent circumstances that could lead to a nuclear war.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2013-06-22T04:20:31.742Z · LW(p) · GW(p)

" but it demonstrably did not prevent circumstances that could lead to a nuclear war." Do you mean "it didn't eliminate all circumstances", or do you mean "there were no circumstances that it prevented"?

Replies from: verbify
comment by verbify · 2013-06-24T13:25:44.974Z · LW(p) · GW(p)

I meant it didn't eliminate all circumstances.

I think you've pointed out a flaw in my argument. The statement "MAD made nuclear war impossible" is demonstrably false - nuclear war could still happen under MAD. The statement "MAD prevented nuclear war" could still be true - nuclear war might have taken place (and probably would have been more likely) in the absence of MAD, and in that case MAD did prevent a nuclear war.

comment by OccamsTaser · 2013-06-18T20:49:14.286Z · LW(p) · GW(p)

Ultimately, I think what this question boils down to is whether to expect "a sample" or "a sample within which we live" (i.e. whether or not the anthropic argument applies). Under MWI, anthropics would be quite likely to hold. On the other hand, if there is only a single world, it would be quite unlikely to hold (as you not living is a possible outcome, whether you could observe it or not). In the former case, we've received no evidence that MAD works. In the latter, however, we have received such evidence.

Replies from: Mestroyer, ThisSpaceAvailable
comment by Mestroyer · 2013-06-18T23:06:15.317Z · LW(p) · GW(p)

That is an excellent username. Welcome to LessWrong.

comment by ThisSpaceAvailable · 2013-06-22T05:09:47.986Z · LW(p) · GW(p)

I don't see what your reasoning is (and I find "anthropics would hold" to be ambiguous). Can you explain?

Suppose half the worlds adopt a strategy that is certain to avoid war, and half adopt one that has a 50% chance of war. Of the worlds without war, 2/3 have adopted the strategy that is certain to avoid war. Therefore, anyone in a world without war should update their confidence that they are in a world with the certain strategy from 1/2 to 2/3 upon seeing war fail to develop.
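The same update as a minimal sketch, using the stated numbers:

```python
# Half the worlds use a certain-safe strategy S, half a risky one R:
prior_s, prior_r = 0.5, 0.5
p_peace_s, p_peace_r = 1.0, 0.5
posterior_s = prior_s * p_peace_s / (prior_s * p_peace_s + prior_r * p_peace_r)
print(posterior_s)  # 0.666...: exactly the 1/2 -> 2/3 update described above
```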

comment by OrphanWilde · 2013-06-18T18:08:53.025Z · LW(p) · GW(p)

Something that doesn't often get remarked upon is that the Cold War wasn't the first instance of the strategy of MAD. World War 1 was the culmination of a MAD strategy gone awry.

One difference between the Cold War and World War 1, however, is that in the Cold War at least one of the nations involved was actually employing people to study the mathematics behind the strategy.

Replies from: TimS, Luke_A_Somers, wedrifid, Decius
comment by TimS · 2013-06-18T19:22:52.225Z · LW(p) · GW(p)

I don't think Europe pre-WWI can accurately be characterized as powers using a MAD-style strategy.

WWI was the result of complex interactions between multiple powers, none of whom was dominant in the region, and each of whom feared domination by another regional power, because none could impose existence-ending consequences on a rival state if that rival won a military victory.

The Cold War was between only two powers who each had the ability to impose existence-ending consequences on the rival state even if the rival won a conventional military victory.

In short, MAD may have been a Nash equilibrium in a two-power international system, but it almost certainly was not the Nash equilibrium before WWI. (If one looks to history, it is not clear that any Nash equilibrium existed in those circumstances.) From such reasoning grows the International Relations Realist school of political science.

comment by Luke_A_Somers · 2013-06-18T21:20:20.277Z · LW(p) · GW(p)

But both France and Germany thought that, in the event of a war, their side would rush to an easy victory. That seems to me to be quite the opposite of MAD.

comment by wedrifid · 2013-06-18T20:48:58.483Z · LW(p) · GW(p)

Something that doesn't often get remarked upon is that the Cold War wasn't the first instance of the strategy of MAD. World War 1 was the culmination of a MAD strategy gone awry.

How did World War 1 involve mutually assured destruction? It seems to me that destruction can't have been particularly strongly assured, given that the significant powers on one of the sides weren't destroyed. There were significant casualties and economic costs, but MAD tends to imply something more than just "even the winner has casualties!" considerations. Are you using "MAD" far more loosely than I would expect, or making some claim about history that surprises me?

(By contrast a Cold War in which both sides had lots of nuclear weapons stockpiled actually could result in mutual destruction if someone made a wrong move.)

Replies from: OrphanWilde
comment by OrphanWilde · 2013-06-18T21:18:48.907Z · LW(p) · GW(p)

Far more loosely. Part of the purpose behind the complex network of alliances was to make war too costly to initiate. Once war was initiated, however, it was guaranteed to be on a massive scale. The damage done by WW1 is forgotten in consideration of the damage done by WW2, but it carried a substantial toll; around 33% of military-age British men died over a four-year period.

In both cases the nations involved were always one event away from total catastrophe.

Replies from: wedrifid
comment by wedrifid · 2013-06-19T10:22:23.095Z · LW(p) · GW(p)

The damage done by WW1 is forgotten in consideration of the damage done by WW2

It tends not to be forgotten here. Australia had far more casualties in the First World War than in the Second.

Replies from: Emile
comment by Emile · 2013-06-20T17:51:58.978Z · LW(p) · GW(p)

France also had more casualties in WW1, and may even loom bigger in our memories.

Replies from: wedrifid
comment by wedrifid · 2013-06-20T20:14:28.103Z · LW(p) · GW(p)

France also had more casualties in WW1, and may even loom bigger in our memories.

That seems likely. Absent any specific information to the contrary I expect 'looming' to approximately track casualties/population and by that metric France was over three times worse off than Australia.

comment by Decius · 2013-06-18T21:22:47.132Z · LW(p) · GW(p)

I think that the entangling alliances that precipitated WW1 were intended as a win-lose strategy, meant to deter aggression by having lots of allies.

The nations joining alliances were doing so to increase their chances of winning a war, not to increase the chances that their opponent would lose one; there's no mechanism there for a lose-lose outcome.

Replies from: TimS
comment by TimS · 2013-06-19T01:09:49.156Z · LW(p) · GW(p)

I think it is more accurate to say that the powers aligned against the potential regional hegemon, who responded with alliances with the willing, regardless of whether they were worthwhile allies.

If you look at the treaties starting in 1848, you see a slow drift from everyone-balance-against-France to everyone-balance-against-Germany. The UK's century-plus of running conflict with France transformed into a very close alliance in less than a generation.

Let me put it slightly differently: I think the best explanation of Germany's willingness to ally so closely with Austria-Hungary (to the point that a dispute in which Germany had no interests at stake could initiate WWI) is the unwillingness of anyone else to ally with Germany. Sure, the Central Powers alliance makes rational sense for Germany once those are all the possible allies. But the historical fact that no one else was willing to ally with Germany cries out for explanation (and France, Britain, or Russia would have maximized their chance to win any European war by allying with Germany).

Replies from: Aharon, Decius
comment by Aharon · 2013-06-19T10:36:20.158Z · LW(p) · GW(p)

Your attempt at an explanation is interesting, but to my knowledge it doesn't fit the facts. The nations weren't unwilling to ally with Germany; on the contrary, the German Emperor didn't want to maintain the alliances that had been created by Bismarck. For example, Russia wanted to renew the Reinsurance Treaty in 1890 (http://en.wikipedia.org/wiki/Reinsurance_Treaty), but Germany didn't.

Replies from: TimS
comment by TimS · 2013-06-19T21:16:09.969Z · LW(p) · GW(p)

My most important point is that reasoning of the form "If only the Kaiser had been less obsessed with a strong navy, Britain might have been induced not to ally with France" is likely false. Since the 1700s, Britain's policy had always been to prevent a European hegemon - the UK's opponent changed from France to Germany when the potential hegemon changed from France (Louis XIV, Napoleon) to Germany.

That said, with the benefit of hindsight, it is obvious that Germany could not be closely allied with Austria-Hungary and Russia. Both wanted to dominate the Balkans to the exclusion of any other great power: Russia for warm water ports, AH to have a freer hand against internal dissent.

Also with the benefit of hindsight, Germany looks awfully foolish for picking AH over Russia. But even if the Reinsurance Treaty had been renewed in 1890, it is unclear whether Russia would have continued to renew it over the next two decades.

But my thesis is that nations act in their own interest, regardless of internal dynamics. That is not the same as saying that nations always correctly figure out what their interests are. Britain's failure to take steps to prevent the unification of Germany in the 1850-60s is as inexplicable as Germany's choice of AH over Russia a few decades later.

Replies from: Aharon
comment by Aharon · 2013-06-20T19:30:24.481Z · LW(p) · GW(p)

I think internal dynamics play a greater role than you assume. Personalities do matter in politics. To take a current example: while little has changed about the facts between the Russia and Germany of today, the relationship between those two nations has changed a lot since Merkel succeeded Schröder as chancellor, simply because Putin and Merkel don't work as well together on a personal level as Schröder and Putin did.

Replies from: TimS
comment by TimS · 2013-06-21T14:55:46.817Z · LW(p) · GW(p)

That is a very valid critique of international relations realism.

But what specific international interests has Germany changed its position on because of the closer relationship between specific leaders? Likewise, are there any specific international positions that Russia has changed because of the closer relationship?

I suspect that Russia's geopolitical interests matter a lot more to Russia's stance on big issues (e.g. Syria) than any interpersonal relationship. In other words, just about any internal structure of government in Russia would likely be saying the same things that the current government is saying.

Like China propping up the North Korean government even though the Chinese probably don't like North Korea's behavior. The geopolitical consequences of reunification are not in China's interests, and that probably outweighs just about any misbehavior from North Korea, unless NK escalates a lot.

Replies from: Aharon
comment by Aharon · 2013-06-22T10:16:29.827Z · LW(p) · GW(p)

I notice that I'm confused.

The example that I would have liked to bring up was Germany's stance on the Nord Stream project (http://en.wikipedia.org/wiki/Nord_Stream), which provides a direct supply of Russian natural gas, independent of transit countries. In Germany, Schröder's support for this project was widely perceived as a result of his relationship with Putin and his plans for after leaving politics (he is head of the shareholders' committee). I assumed this project is clearly against German national interest, since it creates an even stronger dependence on Russian natural gas than already exists. I assumed that Merkel's worse personal relationship with Putin, and the fact that she doesn't benefit personally from the project, would have led to a stance more in line with Germany's interest in energy independence. Indeed, she has voiced that opinion - for example, advocating an LNG terminal in Wilhelmshaven (http://www.dailystar.com.lb/Business/Middle-East/Jan/10/Merkel-says-Germany-should-lessen-dependence-on-Russian-energy.ashx#axzz2Ww9dAIzo). However, when it comes to actions, she consistently supported Nord Stream and sabotaged alternatives.

comment by Decius · 2013-06-19T01:55:13.453Z · LW(p) · GW(p)

But the historical fact that no one else was willing to ally with Germany cries out for explanation (and France, Britain, or Russia would have maximized their chance to win any European war by allying with Germany).

Are you saying that the evidence suggests that Germany would have 'won' the war if France, Britain, or Russia had been allied with Germany?

Replies from: TimS
comment by TimS · 2013-06-19T02:44:04.622Z · LW(p) · GW(p)

I'm pretty confident that the answer is yes for each country.

France (>.95) This alliance definitely wins. Using casualties as an indirect measure of military strength, France had the third most (after Germany and Russia). But this is a total counterfactual, because there is essentially no historically plausible path to WWI that leads to France and Germany on the same side.

Historically, I believe the last alliance between those states before WWI was the War of Austrian Succession (and Prussia was not unified into Germany at that time).

Russia (>.75) Second most casualties, but with the benefit of hindsight there's strong reason to think this overstates Russian military power - although Russia defeated Napoleon and would go on to defeat Hitler with minimal assistance. Still, even assuming that Russia was merely equivalent in military power to second-raters like the Ottoman Empire and Italy (a very questionable assumption), the removal of the Eastern Front probably adds enough German troops to the Western Front to overwhelm France.

UK (>.65) Fourth most casualties. Fought on the same front as France, so the removal of those troops has more leverage on the outcome of the France-Germany fight. As you may know, the French army barely made it through the war, so a lack of other forces to absorb casualties could plausibly have swung the outcome.

comment by Will_Newsome · 2013-06-19T19:27:14.691Z · LW(p) · GW(p)

(A consideration that people often miss: just because MAD nearly failed doesn't mean it wasn't better than politically feasible (quantum-probable) counterfactually enacted policies.)

comment by ThisSpaceAvailable · 2013-06-22T05:02:06.106Z · LW(p) · GW(p)

Suppose you're on a game show, and you're given a choice between three doors: one has a car, one has a goat, and one has a tiger. You pick a door, and you figure it has a 1/3 chance of having the car. The host tells you to stand in front of your chosen door. He presses a button, and one of the doors you didn't choose swings open and a tiger jumps out. Should you update your previous belief that the door you chose has a 1/3 chance of having the car? Your only new evidence is that the door you chose didn't have the tiger. But if the door had had the tiger, you wouldn't have seen that evidence, because the tiger would have killed you.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2013-06-22T06:20:27.652Z · LW(p) · GW(p)

Should you update your previous belief that the door you chose has a 1/3 chance of having the car?

Yes. The door now has a 1/2 chance of having the car.

(This is assuming that the host's button means "open the door with the tiger".)
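A short simulation of this reading (assuming the button always opens the tiger's door, and counting only trials where the contestant survives):

```python
import random

def trial():
    doors = ["car", "goat", "tiger"]
    random.shuffle(doors)
    pick = random.randrange(3)
    if doors[pick] == "tiger":
        return None               # tiger behind your door: no surviving observer
    return doors[pick] == "car"   # survivor: did your door hide the car?

results = [r for r in (trial() for _ in range(100_000)) if r is not None]
print(sum(results) / len(results))  # ~0.5, matching the answer above
```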

comment by davidpearce · 2013-06-19T08:36:07.234Z · LW(p) · GW(p)

Yes, assuming post-Everett quantum mechanics, our continued existence needn't be interpreted as evidence that Mutually Assured Destruction works, but rather as an anthropic selection effect. It's unclear why (at least in our family of branches) Hugh Everett, who certainly took his own thesis seriously, spent much of his later life working for the Pentagon targeting thermonuclear weaponry on cities. For Everett must have realised that in countless world-branches, such weapons would actually be used. Either way, the idea that Mutually Assured Destruction works could prove ethically catastrophic this century if taken seriously.

comment by Zaine · 2013-06-19T00:27:06.085Z · LW(p) · GW(p)

MAD didn't work; we were simply lucky: http://cornerstone.gmu.edu/articles/4198

comment by Yosarian2 · 2013-06-18T17:35:58.007Z · LW(p) · GW(p)

I don't think you need to invoke the anthropic principle here.

The fact that we survived the cold war without a nuclear war doesn't really tell us that much about the odds of doing so. It is basically impossible to estimate the probability of something with a sample size of 1. It could be that we had a 90% chance of getting through the cold war without a nuclear war, or it could be that we only had a 10% chance and just got lucky; based on our current data, either seems quite plausible.

So, for your question "did MAD work"; well, in some sense, either it worked or we got lucky or some combination of the two. But we don't really have enough information to know if it's a good policy to follow or not.

Replies from: MumpsimusLane
comment by MumpsimusLane · 2013-06-18T19:07:12.281Z · LW(p) · GW(p)

That doesn't sound right to me. Sure, with a sample size of 1, your estimate won't be very accurate, but that one data point is still going to be giving you some information, even if it isn't very much. You gotta update incrementally, right?

Replies from: Yosarian2, Decius
comment by Yosarian2 · 2013-06-19T17:41:49.152Z · LW(p) · GW(p)

Updating incrementally is useful, but only if you keep in perspective how little you know and how unreliable your information is, based on a single trial. If you forget that, then you end up like the guy who says "Well, I drove drunk once, and I didn't crash my car, so therefore driving drunk isn't dangerous". Sometimes "I don't know" is a better first approximation than anything else.

Of course, it would be accurate to say that we can get some information from this. I mentioned "anything from 10% to 90%", but on the other hand, I would say that our experience so far makes the hypothesis "99% of intelligent species blow themselves up within 50 years of creating a nuclear bomb" pretty unlikely.

However, any hypothesis from "10% of the time, MAD works at preventing a nuclear war" to "99% of the time, MAD works at preventing a nuclear war" or anything in between seems like it's still quite plausible. Based on a sample size of 1, I would say that any hypothesis that fits the observed data at least 10% of the time would have to be considered a plausible hypothesis.

Replies from: MumpsimusLane
comment by MumpsimusLane · 2013-06-19T22:59:46.652Z · LW(p) · GW(p)

Um... yes. I guess we're on the same page then. :)

comment by Decius · 2013-06-18T21:18:37.007Z · LW(p) · GW(p)

If you flip a coin and it comes up heads, do you have any new information about whether it is a fair coin or not? If you thought the odds of MAD working were 50/50, do you have any new information on which to update?

Replies from: gwern
comment by gwern · 2013-06-18T22:06:03.001Z · LW(p) · GW(p)

If you flip a coin and it comes up heads, do you have any new information about whether it is a fair coin or not?

Yes. If you started with a uniform distribution over the coin's probability of heads, from 0 to 1, your new posterior distribution will tilt toward the 0.5-1 half and the 0-0.5 half will shrink. Sivia's Data Analysis on page 16/29 even includes an illustration of how the distribution evolves for some random choices of probability and possible coinflips; I've screenshotted it: http://i.imgur.com/KbpduAj.png Note how drastically the distribution changes between n=0 (top left) and n=1 (top middle) - one has learned information from this single coinflip! (Diminishing returns implies the first observation carries the most information...)
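For concreteness, the same update via the Beta-Binomial conjugacy that Sivia's figure depicts (a sketch, assuming scipy is available):

```python
from scipy.stats import beta

prior = beta(1, 1)        # uniform over the coin's P(heads)
posterior = beta(2, 1)    # after observing a single head: density 2p
print(prior.sf(0.5))      # 0.50: prior probability the coin favors heads
print(posterior.sf(0.5))  # 0.75: one flip already shifts noticeable mass
print(posterior.mean())   # 2/3: the rule-of-succession estimate
```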

Replies from: Decius, cousin_it
comment by Decius · 2013-06-18T23:13:13.464Z · LW(p) · GW(p)

Given that the coin either has two of the same faces or is fair, and it is equally likely to be two-tailed as two-headed: take the hypothesis h, "the coin has either two heads or two tails", and the evidence e, "the first flip came up heads".

I don't think that you can update your probability of h, regardless of your prior belief, since P(e|h)=P(e|~h).

I think that you are asking what the evidence indicates the bias of this one coin is, rather than asking if this one coin is biased. Which, in fact, does apply to the case I was trying to illustrate.

Replies from: gwern
comment by gwern · 2013-06-19T01:22:13.679Z · LW(p) · GW(p)

Given that the coin either has two of the same faces or is fair, and it is equally likely to be two-tailed as two-headed...I think that you are asking what the evidence indicates the bias of this one coin is, rather than asking if this one coin is biased. Which, in fact, does apply to the case I was trying to illustrate.

In what sense is a "fair coin" not a coin with a 'bias' of exactly 0.5?

Replies from: Decius
comment by Decius · 2013-06-19T01:50:57.842Z · LW(p) · GW(p)

The first throw being heads is evidence for the proposition that the coin is biased towards heads, evidence against the proposition that the coin is biased towards tails, and neutral towards the union of the two propositions.

If showing heads on the first throw were evidence for or against the coin being fair, not showing heads on the first throw would have to be evidence against or for.

That we survived is evidence for "MAD reduces risk" and evidence against "MAD increases risk", but is neutral for "MAD changes risk".

Replies from: MumpsimusLane
comment by MumpsimusLane · 2013-06-19T03:00:34.315Z · LW(p) · GW(p)

Thought experiment: Get 100 coins, with 50 designed to land on heads 90% of the time and 50 designed to land on tails 90% of the time. If you flipped each coin once, and put all the coins that happened to land on heads (50ish) in one pile, on average, 45 of them will be coins biased towards heads, and only 5 will be biased towards tails.

If you only got the chance to flip one randomly selected coin, and it came up heads, you should say it has a 90% probability of being a heads-biased coin, because it will be 45 out of 50 times.

That's how I'm seeing this situation, anyway. I'm not really understanding what you're trying to say here.

Replies from: Decius
comment by Decius · 2013-06-19T04:47:58.371Z · LW(p) · GW(p)

Take those 100 coins, and add 100 fair coins. Select one at random and flip it. It comes up heads. What are the odds that it is one of the biased coins?
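A simulation sketch of this 200-coin setup, using the numbers above:

```python
import random

# 50 coins biased toward heads, 50 toward tails, 100 fair:
coins = [(0.9, True)] * 50 + [(0.1, True)] * 50 + [(0.5, False)] * 100

heads = heads_and_biased = 0
for _ in range(1_000_000):
    p, is_biased = random.choice(coins)
    if random.random() < p:   # this flip came up heads
        heads += 1
        heads_and_biased += is_biased
print(heads_and_biased / heads)  # ~0.5: P(biased | heads) equals the prior,
# since P(heads | biased) averages to 0.5, the same as for a fair coin
```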

Replies from: MumpsimusLane
comment by MumpsimusLane · 2013-06-19T17:12:12.126Z · LW(p) · GW(p)

Okay, I think I get it. I was initially thinking that the probabilities of the relationship between MAD and reduced risk being negative, nothing, weak, strong, whatever, would all be similar. If you assume that the probability that we all die without MAD is 50%, and each coin represents a possible probability of death with MAD, then I would have put in one 1% coin, one 2% coin, and so on up to 100%. That would give us a distribution just like gwern's graph.

You're saying that it is very likely that there is no relationship at all, and while surviving provides evidence of a positive relationship over a negative one (if we ignore anthropic stuff, and we probably shouldn't), it doesn't change the probability that there is no relationship. So you'd have significantly more 50% coins than 64% coins or 37% coins to draw from. The updates would look the same, but with only one data point, your best guess is that there is no relationship. Is that what you're saying?

So then the difference is all about prior probabilities, yes? If you have two variables that correlated one time, and that's all the experimenting you get to do, how likely is it that they have a positive relationship, and how likely is it that it was a coincidence? I... don't know. I'd have to think about it more.

comment by cousin_it · 2013-06-18T22:25:53.706Z · LW(p) · GW(p)

You're right, but just a tiny note: you could also interpret Decius's question as "do you have any new information about how far the coin is from being fair?" and then the answer seems to be "no".

comment by drethelin · 2013-06-18T22:32:58.702Z · LW(p) · GW(p)

Maybe MAD is the great filter

Replies from: Mestroyer, NancyLebovitz
comment by Mestroyer · 2013-06-18T23:00:32.941Z · LW(p) · GW(p)

I can't imagine that out of 10^22 stars, only one would have life around it because of that. MAD should have let more worlds through than that. There would have to be worlds that got global governments before weapons capable of MAD. And if global governments (or other things that let worlds avoid situations like ours or worse) were so rare, why would the one world that managed to slip through the filter (us) be even rarer: one that survived to 2013 with two antagonistic nuclear-armed superpowers and no obvious stroke of luck preventing them from killing everyone?

Replies from: drethelin, JoshuaZ
comment by drethelin · 2013-06-19T05:51:29.990Z · LW(p) · GW(p)

Nuclear-warfare MAD is one thing, but interplanetary and interstellar civilizations also suffer from MAD problems, arguably even more so due to the light-speed lag of communications.

comment by JoshuaZ · 2013-06-19T04:43:20.183Z · LW(p) · GW(p)

If nuclear war is the Great Filter, it may take a while. If, for example, it generally takes on the order of a few hundred years to get substantially off-planet from when one develops nukes (note that this requires, among other details, no near Singularity), then it starts looking a lot more plausible. Similarly, global governments may not be stable in that time frame.

Note also that there's been discussion here of how much the age of the planets could matter, and whether planets where intelligent life arises earlier are more likely to wipe themselves out (from easier access to uranium-235), and the consensus seemed to be that this was not a significant enough difference to be a major part of the filter. See the discussion here from when I last brought it up.

Replies from: Houshalter
comment by Houshalter · 2013-06-19T08:05:30.363Z · LW(p) · GW(p)

Possibly an explanation for the great filter, but if it only applies after we move off-planet, then it doesn't explain why we survived our own cold war on our own planet (and means we are still at risk of hitting the great filter ourselves, and not a rare technological civilization among the stars like we thought.)

Replies from: JoshuaZ
comment by JoshuaZ · 2013-06-19T15:22:03.017Z · LW(p) · GW(p)

then it doesn't explain why we survived our own cold war on our own planet

We haven't gotten meaningfully off-planet yet. We survived our first brush with nuclear war; that doesn't mean it won't still happen. Indeed, more groups now have access to nuclear weapons. By many estimates the US currently has close to first-strike capability on Russia and China, but that may change as China improves its military. And as technology improves, making nukes becomes easier, not harder.

and means we are still at risk of hitting the great filter ourselves, and not a rare technological civilization among the stars like we thought.

Well, this is essentially just the question of whether the Filter is largely in front of us or largely behind us.

comment by NancyLebovitz · 2013-06-20T06:32:08.962Z · LW(p) · GW(p)

Should there be worlds that don't have enough radioactives?

Replies from: Izeinwinter
comment by Izeinwinter · 2013-06-20T06:39:44.763Z · LW(p) · GW(p)

There are chemistries far more threatening to life on Earth than nukes. (The world backing off from chemical weapons after WW1 could be considered the first success of arms control.) And it is possible to get to fission via transmutation of stable isotopes, even if it's quite cumbersome: fusors, or accelerators plus thorium, then breeders, would work.

Replies from: NancyLebovitz, JoshuaZ
comment by NancyLebovitz · 2013-06-20T11:29:34.468Z · LW(p) · GW(p)

I'm a little nervous about tech getting to the point where we have home build-a-virus kits.

comment by JoshuaZ · 2013-06-20T13:44:37.900Z · LW(p) · GW(p)

There are chemistries far more threatening to life on Earth than nukes. (The world backing off from chemical weapons after WW1 could be considered the first success of arms control.)

Can you expand on why you think chemical weapons would be that threatening? They seem to be much easier to deal with, both in terms of prevention and in terms of total damage done. Most chemicals used in chemical weapons (e.g. sarin and VX) break down fairly fast when exposed to the environment - this is a major reason why so many chemical weapons are designed as binary systems.

Replies from: Izeinwinter
comment by Izeinwinter · 2013-06-21T11:18:59.562Z · LW(p) · GW(p)

Most chemical weapons break down rapidly, yes. That is why those chemicals were used as battlefield weapons - for an army, rendering the field of battle impassable for any length of time is a major bug, not a feature. There are known toxins that are far more persistent, and they could be deployed as WMD far more horrific than very large explosions. None of them have been used, or even proposed, as weapons, but this speaks to the restraint of weapon makers, generals, and politicians, not to the space of what the laws of nature make possible.

I would prefer not to give examples, because there is a difference between people learning of this while studying in general and providing searchable hits for "how to end the world"; but go find a list of poisons by toxicity, and read up on their specific properties. There are /many/ things worse than nukes. Either we are in a timeline surrounded by a void of death, or humans can in fact be trusted with the keys of Armageddon.

Replies from: CronoDAS
comment by CronoDAS · 2013-06-24T09:58:40.923Z · LW(p) · GW(p)

In order to kill someone with a poison, the poison has to at least touch that person, and many poisons need much more than skin contact to be lethal: they have to be ingested, injected, or inhaled. Explosions tend to expand rapidly. Chemicals, however, tend to sit in one place; you need a way to disperse them. Yes, a handful of botulinum toxin could theoretically kill every human on the planet, but you'd never be able to get it inside every person on the planet.

In other words, [citation needed].

On the other hand, I wouldn't be surprised if one could kill more people with a few vials of smallpox virus than with a single nuclear weapon...

Replies from: Izeinwinter
comment by Izeinwinter · 2013-06-24T20:42:08.582Z · LW(p) · GW(p)

209-805-3

Replies from: CronoDAS
comment by CronoDAS · 2013-06-24T21:58:11.351Z · LW(p) · GW(p)

I agree, what you're referring to is very nasty stuff, and yes, you could easily kill a lot of people with it. You'll still have trouble wiping out an area larger than a stadium with it, though. (It's certainly possible to do, if you have enough resources. Maybe something involving crop-dusting planes, I dunno.)

On the bright side, anyone plotting to use that particular agent to kill people is probably going to end up killing themselves with it before they manage to get anyone else.

Just for fun: "A Tall Tail" by Charles Stross

comment by ChristianKl · 2013-06-19T09:24:33.577Z · LW(p) · GW(p)

Applying the anthropic principle here, however, requires that a failure of MAD really would have killed everyone.

No, it doesn't.

People understand very well that they can't draw valid conclusions from experiments with few measurements in domains where it's possible to make a lot of measurements.

They usually don't understand that the same is true of nuclear war. One observation of a strategy working doesn't provide much evidence that the strategy works.

If I told you that I had started doing Forex trading and turned $100 into $150 in a few days, you might think that I'm a good Forex trader. But even if I could prove to you that I really turned $100 into $150, and those were the only Forex trades I had ever done, it would not be rational for you to give me your money to invest on your behalf.

If your data set has only one observation it doesn't provide much evidence. That's what the core of the anthropic principle is about.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2013-06-22T04:54:01.366Z · LW(p) · GW(p)

"If your data set has only one observation it doesn't provide much evidence. That's what the core of the anthropic principle is about." No, the anthropic principle is a bout selection bias.

If everyone who does Forex trading reports the results on the internet, then when I see someone reporting that they made money doing Forex, I should update my confidence that Forex trading is profitable. But if only people who make money in Forex report their results, then my update should be much smaller when I see someone reporting that they made money.
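A minimal sketch of that difference, with made-up reporting rates:

```python
import random

trades = [random.random() < 0.5 for _ in range(100_000)]  # zero-edge traders
print(sum(trades) / len(trades))  # ~0.5: if everyone reports, no bias

# Suppose winners report 90% of the time and losers only 10%:
reported = [t for t in trades if random.random() < (0.9 if t else 0.1)]
print(sum(reported) / len(reported))  # ~0.9: reported win rate is inflated
```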

Side note: Chrome spell checker flags "anthropic" but not "Forex". I find that sad.

comment by hedges · 2013-06-18T17:04:43.045Z · LW(p) · GW(p)

If we imagine mutually assured destruction as if it were a policy option in a strategy game, it would have statistics along the lines of:

-20% chance of nuclear war, +40% nuclear war intensity.