A Parable of Elites and Takeoffs
post by gwern · 2014-06-30T23:04:35.372Z · LW · GW · Legacy · 98 comments
Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.
One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last - it had! The future was bright.
Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that cumulatively went into the project over the years by showing some use of it - whatever, the reasons no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores - hundreds - of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)
An unmitigated disaster. Worse, the technology didn’t even accomplish the assigned goal - that was thanks to a third party’s actions! Ironic. But that’s how life goes: ‘Man Proposes, God Disposes’.
So, what to do with the tech? The positive potential was still there, but no one could doubt anymore that there was a horrific dark side: they had just seen what it could do if misused, even if the authorities (as usual) were spinning the events as furiously as possible to avoid frightening the public. You could put it under heavy government control, and they did.
But what was to stop Nacirema’s rivals from copying the technology and using it domestically or as a weapon against Nacirema? In particular, Nacirema’s enormous furiously-industrializing rival far to the East in Asia, which aspired to regional hegemony, had a long history of being an “oriental despotism” and still had a repressive political system - ruled by an opaque corrupt oligarchy - which abrogated basic human rights such as free speech, and was not a little racist/xenophobic & angry at historical interference in its domestic affairs by Seilla & Nacirema…
The ‘arms race’ was obvious to anyone who thought about the issue. You had to obtain your own tech or be left in the dust. But an arms race was terrifyingly dangerous - one power with the tech was bad enough, but if there were two holders? A dozen? There was no reason to expect all the wishes to be benign once everyone had their own genie-in-a-bottle. It would not be hyperbolic to say that the fate of global civilization was at stake (even if there were survivors off-planet or in Hanson-style ‘disaster refuges’, they could hardly rebuild civilization on their own; not to mention that a lot of resources like hydrocarbons have already been depleted beyond the ability of a small primitive group to exploit) or maybe even the human race itself. If ever an x-risk was a clear and present danger, this was it.
Fortunately, the ‘hard take-off’ scenario did not come to pass, as each time it took years to double the power of the tech; nor was it something you could make in your bedroom, even if you knew the key insights (deducible by a grad student from published papers, as concerned agencies in Nacirema proved). Rather, the experts forecast a slower take-off, on a more human time-scale, where the technology escalated in power over the next two or three decades; importantly, they thought that the Eastern rival’s scientists would not be able to clone the technology for another decade or perhaps longer.
So one of the involved researchers - a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility - made the obvious suggestion. Don’t let the arms race start. Don’t expose humanity to an unstable equilibrium of the sort which has collapsed many times in human history. Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed. Stop the contagion from spreading and root out the x-risk. Research in the area would be proscribed, as almost all of it was inherently dual-use.
Others disagreed, of course, with many alternative proposals: perhaps researchers could be trusted to self-regulate; or, related research could be regulated by a special UN agency; or the tech could be distributed to all major countries to reach an equilibrium immediately; or, treaties could be signed; or Nacirema could voluntarily abandon the technology, continue to do things the old-fashioned way, and lead by moral authority.
You might think that the politicians would do something, even if they ignored the genius: the prognostications of a few obscure researchers and of short stories published in science fiction had turned out to be true; the dangers had been realized in practice, and there was no uncertainty about what a war with the tech would entail; the logic of the arms race had been well-documented by many instances to lead to instability and propel countries into war (consider the battleship arms race leading up to WWI); the proposer had impeccable credentials and deep domain-specific expertise and was far from alone in being deeply concerned about the issue; there were multiple years to cope with the crisis after fair warning had been given, so there was enough time; and so on. If the Nacireman political system were to ever be willing to take major action to prevent an x-risk, this would seem to be the ideal scenario. So did they?
Let's step back a bit. One might have faith in the political elites of this country. Surely given the years of warning as the tech became more sophisticated, people would see that this time really was different, this time it was the gravest threat humanity had faced, that the warnings of elite scientists of doomsday would be taken seriously; surely everyone would see the truth of proposition X, leading them to endorse Y and agree with the ‘extremists’ about policy decision Z (to condense our hopes into one formula); how can we doubt that policy-makers and research funders would begin to respond to the tech safety challenge? After all, we can point to some other instances where policymakers reached good outcomes for minor problems like CFC damages to the atmosphere.
So with all that in mind, in our little future world, did the Nacireman political system respond effectively?
I’m a bit cynical, so let’s say the answer was… No. Of course not. They did not follow his plan.
And it's not that they found a better plan, either. (Let's face it, any plan calling for more war has to be considered a last resort, even if you have a special new tech to help, and is likely to fail.) Nothing meaningful was done. "Man plans, God laughs." The trajectory of events was indistinguishable from the usual story: bureaucratic inertia and self-serving behavior by various groups. After all, what was in it for the politicians? Did such a strategy swell any corporation’s profits? Or offer scope for further taxation & regulation? Or could it be used to appeal to anyone’s emotion-driven ethics by playing on disgust or purity or in-group loyalty? The strategy had no constituency except those who were concerned by an abstract threat in the future (perhaps, as their opponents insinuated, they were neurotic ‘hawks’ hellbent on war). Besides, the Nacireman people were exhausted from long years of war in multiple foreign countries and a large domestic depression whose scars remained. Time passed.
Eventually the experts turned out to be wrong, but in the worst possible way: the rival took half the projected time to develop its own tech, and the window of opportunity snapped shut. The arms race had begun, and humanity would tremble in fear, wondering whether it would live out the century or whether the unthinkable would happen.
Good luck, you people of the future! I wish you all the best, although I can’t be optimistic; if you survive, it will be by the skin of your teeth, and I suspect that due to hindsight bias and near-miss bias, you won’t even be able to appreciate how dire the situation was afterwards and will forget your peril or minimize the danger or reason that the tech couldn’t have been that dangerous since you survived - which would be a sad & pathetic coda indeed.
The End.
(Oh, I’m sorry. Did I write “70 years from now”? I meant: “70 years ago”. The technology is, of course, nuclear fission, which had many potential applications in the civilian economy - if nothing else, every sector benefits from electricity ‘too cheap to meter’; Nacirema is America & the eastern rival is Russia; the genius is John von Neumann, the SF stories were by Heinlein & Cartmill among others - the latter giving rise to the Astounding incident; and we all know how the Cold War led civilization to the brink of thermonuclear war. Why, did you think it was about something else?)
This was written for a planned essay on why computational complexity/diminishing returns doesn't imply AI will be safe, but who knows when I'll finish that, so I thought I'd post it separately.
Comments sorted by top scores.
comment by fezziwig · 2014-07-01T20:46:27.274Z · LW(p) · GW(p)
So one of the involved researchers - a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility - made the obvious suggestion. Don’t let the arms race start. ... Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed.
Damn those politicians! Damn their laziness and greed! If only they'd had the courage to take over the world, then everything would have been fine!
Don't misunderstand; that's what's being proposed here. Hegemony would not have been enough. You need inspectors in all the research institutions, experienced in the local language and culture. You need air inspections of every place a pile might be constructed, quite challenging in 1945. You need to do these things not just to your rival, but to everyone who aspires to become your rival. You need your allies to comply, voluntarily or not. Whenever anyone challenges your reign openly, you have to be willing and able to destroy them utterly. You can't miss even once, because when you do you won't get nuclear war, you'll get nuclear terrorism.
And you have to go on doing all this until nuclear weapons cease to be a world-ending threat. That is to say, forever.
This strategy probably is the best one for short-circuiting the Cold War. It's still a terrible plan. Between that plan and nothing, our elites made the right choice, and if they did it out of inertia instead of careful calculation then it's not an argument against them, it's an argument in favor of inertia.
↑ comment by gwern · 2014-07-02T02:00:48.262Z · LW(p) · GW(p)
You need inspectors in all the research institutions, experienced in the local language and culture. You need air inspections of every place a pile might be constructed, quite challenging in 1945. You need to do these things not just to your rival, but to everyone who aspires to become your rival
It's not that difficult. Think about the flowchart of materials that go into atomic bombs. You don't need to control everyone everywhere. What you need to control are the raw uranium ore and derivatives, specialty goods useful for things like ultracentrifuges, monitor the rare specialists in shaped explosives and nuclear physics, sample the air for nuclear substances, and so on.
There are many natural chokepoints, and many steps are difficult or impossible to hide under even light surveillance: you need a lot of raw uranium ore, thermal diffusion purification requires comical amounts of electricity, centrifuges emit characteristic vibrations, laser purification is impossible to develop without extensive experience, the USA and other nations already routinely do air sampling missions to monitor fallout from tests...
I won't say that nuclear counterproliferation efforts have been perfect, but I will point out that a fair number of nations have had considerable difficulty getting their nuclear programs working (since he's come up already, how well was Saddam Hussein's nuclear program going when the issue was rendered moot by the US invasion?) and the successful members often had aid from previous members of the nuclear club & no serious interference in the form of embargoes, much less active monitoring and threats from a jealous existing nuclear club member.
And you have to go on doing all this until nuclear weapons cease to be a world-ending threat. That is to say, forever.
You're right, because clearly the status quo is totally a solution 'forever'.
Between that plan and nothing our elites made the right choice, and if they did it out of inertia instead of careful calculation then it's not argument against them, it's an argument in favor of inertia.
Retrospective determinism, eh? 'Because X did not happen, it was inevitable that X would not happen; therefore, inertia was the right choice.' Nor is winning the lottery an argument in favor of playing the lottery. (Not to mention that if inertia had been the wrong choice, we wouldn't be here arguing about it, and so one could justify any policy whatsoever. Reasoning that 'we did Y and we survived! so Y must be a great policy' is not a good way to try to analyze the world.)
↑ comment by fezziwig · 2014-07-02T18:23:41.218Z · LW(p) · GW(p)
I'd like to address your other points, but I think we have to talk about your last paragraph first.
You're quite right; that the cold war did not end the world in our particular branch is not proof that the cold war was survivable in more than a tiny handful of possible worlds. But let me remind you in turn that "von Neumann's plan would have been worse than the cold war" is not the same as "the cold war was safe", "the cold war was good", "the cold war doesn't share any of the weaknesses of von Neumann's plan", or even "the cold war was terrible but still the best choice we had". I'm arguing only that narrow thing: that our forefathers were right to reject von Neumann's plan.
Fair enough?
↑ comment by gwern · 2014-07-02T19:54:46.917Z · LW(p) · GW(p)
Fair enough, but a lot of the objections here seem to be based on the argument that 'the Cold War was reasonably objectively safe (and we know so for [anthropically biased reasons]), while unilateral strikes or ultimatums are objectively dangerous; hence the Cold War was the better choice', while I think the right version is 'the Cold War was objectively extremely dangerous, while unilateral strikes or ultimatums are [merely] objectively dangerous; hence the Cold War was the worse choice'. I don't think people are directly comparing the scenarios and merely making a relative judgment.
↑ comment by fezziwig · 2014-07-01T20:48:00.373Z · LW(p) · GW(p)
(Though for what it's worth, I actually do agree with your point about AI, insofar as the analogy holds: we could get into a Cold-War-like situation and humanity would probably not enjoy the result. I just don't think world conquest is the answer.)
comment by Punoxysm · 2014-06-30T23:36:09.049Z · LW(p) · GW(p)
This was pretty transparent. And I disagree with it.
I'd observe that the peculiarities of the Cold War actually made nuclear peace tougher than it would have been in most time periods, yet we still made it through; and you can see that the current multi-polar world is substantially safer even though there are more countries with nuclear weapons than ever before.
Also, trying the von Neumann plan (or your description of it) would have been awful, and would almost certainly have triggered conventional war if it had been followed through on. Not only that, but the USSR's development of the bomb did not mark the "closing of the window": the US had more bombs and superior bomb-delivery capabilities for several years after both countries had the bomb, and the US still didn't go to war. And even in retrospect that looks like the right choice, since war would have devastated Europe and eventually resulted in a "parable" essay about how the super-weapon motivated its developing nation to bloodily enforce global hegemony.
↑ comment by James_Miller · 2014-07-01T00:32:04.232Z · LW(p) · GW(p)
and would almost certainly have triggered conventional war
I think the opposite, as it would probably have prevented the Korean War. After the U.S. developed hydrogen bombs, John von Neumann helped create the U.S. military strategy that in any war with the Soviet Union we would seek to kill their leaders. Had von Neumann, at the end of WWII, given an atomic ultimatum to Stalin, it would likely have included the threat that if Stalin didn't comply we would do everything possible to kill him. Given that Stalin's primary concern was the welfare of Stalin, this would probably have been enough to get Stalin to officially comply; he certainly would have cheated if he could have gotten away with it, but with a big enough inspection effort cheating would have been unproductive.
↑ comment by Punoxysm · 2014-07-01T05:47:40.724Z · LW(p) · GW(p)
The US threatened to depose Saddam Hussein unless he allowed inspections. And obviously his own self-interested motives led him to allowing those inspections and a verifiably disarmed Iraq has pretty much not been a foreign policy issue since. So I guess you're right!
↑ comment by gwern · 2014-07-01T15:21:12.478Z · LW(p) · GW(p)
I don't think you understand what happened. Saddam thought that his former close sponsor & ally needed him against Iran because without our Sunni man in Iraq and the fear of WMDs the country would become a Shi'a & Persian pawn*. (You remember the whole Iran-Iraq War and 'exporting the revolution' and Hezbollah, right?)
Huh. How about that. Why, it looks like that's what happened under Maliki and that's why the country is currently being torn apart and the Iraqi government is inviting Iranian troops in to help restore order.
It would seem Saddam's mistake was in thinking the USA was run by rational actors, and not run by morons who would sabotage their geopolitical interests in the interests of revenge against a "guy that tried to kill my dad at one time". As my parable points out, one should not expect that sort of rational planning from the USA or indeed large countries in general.
So no, I think your objection does not hold water once one actually knows why the inspections were refused, and does not apply to the hypothetical involving Stalin.
* EDIT: BTW, I will note that this is a classic example of failing to apply the principle of charity, demonizing enemies, and not caring about context. No, Saddam couldn't have been acting in a complex Middle Eastern context where the USA and Iraq were natural allies for whom ultimatums made no sense; no, he had to be going against his own rational self-interest and be crazy.
↑ comment by Punoxysm · 2014-07-01T17:07:15.615Z · LW(p) · GW(p)
I think you actually illustrate how correct I am. When there's uncertainty about how sincere a threat is, especially because virtually all threats of military action are negative value for both parties if executed, and when the threat sets a precedent that the threatening party could continually impose its will, it's natural to test the threatening party's commitment.
All you're saying is that Saddam called the USA's bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.
Think about it further: you are threatened by a nation with a newly developed super-weapon, but only modest stockpiles and uncertain ability to deliver it, to not develop your own version of the super-weapon. The demand is that you submit to thorough inspections, which your enemy would certainly use to spy as extensively as possible on you.
Not to mention that it would set a precedent where you'd have to back down for the next demand, and the next; anything's better than being a smear of ash, after all, isn't it?
Or you could consider your excellent military position right next to your enemy's allies, along with the amount of safety provided by a combination of secrecy and bunkers, and decide that the best move - the only move if you want to resist the slide towards subjugation - is to call your enemy's bluff.
↑ comment by gwern · 2014-07-01T17:25:48.597Z · LW(p) · GW(p)
All you're saying is that Saddam called the USA's bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.
Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals.
I'm amazed. You present an example which you think is a great example of irrationality on a dictator's part, I show you are wrong and have no idea why Saddam resisted and you think you can spin it to support your claims and it actually illustrates how correct you are! What could possibly falsify your criticisms?
Not to mention that it would set a precedent where you'd have to back down for the next demand, and the next; anything's better than being a smear of ash, after all, isn't it?
How well did that work in the Cold War against non-nuclear nations..? Everyone understands the logic of blackmail and the point of using Schelling fences to avoid sliding down the slippery slope.
Or you could consider your excellent military position right next to your enemy's allies, along with the amount of safety provided by a combination of secrecy and bunkers, and decide that the best move - the only move if you want to resist the slide towards subjugation - is to call your enemy's bluff.
And likely lose, with no superweapon of your own, no prospect of developing it soon under the chaos of war (it was already a top-priority program for Stalin; war could only have delayed it), and being cut off from one of your most important trading partners, which had kept you from economic collapse during WWII; and a self-centered psychopath like Stalin would have to worry about how well bunkers would really protect him against weapons he had no direct experience with and which were increasing in tonnage each year.
↑ comment by ThisSpaceAvailable · 2014-07-04T01:45:47.841Z · LW(p) · GW(p)
Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals.
Do you mean "Iraq", rather than "USSR"?
I'm amazed. You present an example which you think is a great example of irrationality on a dictator's part, I show you are wrong and have no idea why Saddam resisted and you think you can spin it to support your claims and it actually illustrates how correct you are!
I don't think Punoxysm is saying that it's an example of irrationality. Punoxysm is saying that it's a reasonable reaction, and it shows that calling the bluff would also be a reasonable response in Stalin's case. You haven't shown that Punoxysm is wrong, you've argued that Punoxysm is wrong.
What could possibly falsify your criticisms?
I think the answer to that is rather obvious. If Hussein had allowed the inspections, that would support your position. It's rather odd to be calling someone's position unfalsifiable, simply because they are not accepting your explanations for why evidence falsifying your position is unpersuasive.
↑ comment by gwern · 2014-07-04T02:52:48.983Z · LW(p) · GW(p)
Do you mean "Iraq", rather than "USSR"?
No, I meant USSR. Iraq was in a special position of being both a former close US ally and still in the valuable-to-the-US geopolitical position which made it an ally in the first place, and that is why Saddam engaged in the reasoning he did. The USSR was a former close US ally, yes, but played no such valuable role and both recognized each other as their principal threat after the Nazis were defeated.
You haven't shown that Punoxysm is wrong, you've argued that Punoxysm is wrong.
I don't know how I can point out he's wrong any more clearly. Saddam had good reason to think the threats were bluffs. Stalin would not have because those reasons did not apply to the USSR. The situations are not the same.
If Hussein had allowed the inspections, that would support your position.
Yes, but we already know he didn't. So the question is his motivations; Punoxysm asserted that if he did it for irrational reasons, then it supports his criticism, and when I pointed out that he did it for rational reasons, he then claimed that supported his position too! So why did he not simply say in the first place, 'Saddam didn't allow inspections; this is evidence the strategy cannot work'? Obviously, because he felt the irrational qualifier was necessary right up until I produced the references. (It is a basic principle of natural language that you do not use restrictions or qualifiers when they are not relevant.)
↑ comment by ThisSpaceAvailable · 2014-07-07T20:50:54.675Z · LW(p) · GW(p)
No, I meant USSR.
So, just to be clear: you believe that in the hypothetical world in which the US threatens to attack the USSR if it does not allow inspections, the USSR would have no reason to think this serves a useful purpose, and would be therefore justified in concluding it was a bluff?
Stalin would not have because those reasons did not apply to the USSR. The situations are not the same.
You are saying that there are reasons for thinking it was a bluff that did not apply to the USSR. That's denying the antecedent.
So the question is his motivations; Punoxysm has asserted that if he did it for irrational reasons,
It's not clear to me what you're referring to.
↑ comment by gwern · 2014-07-07T22:22:19.526Z · LW(p) · GW(p)
you believe that in the hypothetical world in which the US threatens to attack the USSR if it does not allow inspections, the USSR would have no reason to think this serves a useful purpose, and would be therefore justified in concluding it was a bluff?
No, that's like, the opposite of what I mean. I'm baffled you could not understand this (and similarly that you had to ask for clarification about my BMR example in the other comment when I had said clearly that a statement would be evidence against an impending bust). If this is the best you can read what I've written, then I think maybe it's time for me to call this conversation quits. I don't know if you're being deliberately obtuse or think too differently, but either way...
You are saying that there are reasons for thinking it was a bluff that did not apply to the USSR. That's denying the antecedent.
Good thing we're not using deductive logic! Denying the antecedent is, like almost all classical fallacies, a useful Bayesian piece of evidence. By removing one potential way for it to be a bluff, the probability of being a bluff necessarily falls; by removing the antecedent, the consequent is that much less likely.
It's not clear to me what you're referring to.
'He' is Saddam. Obviously. That was how the comment thread started and what I was objecting to and I even name Saddam in the same paragraph you claim to be confused by!
EDIT: looking back through your comments, you seem to consistently and repeatedly misunderstand what I said and ignore parts where I explained clearly what I meant, in a way well beyond an ordinarily obtuse commenter. I now think you're doing this deliberately, and so I'm going to stop now.
↑ comment by ThisSpaceAvailable · 2014-07-08T06:05:30.913Z · LW(p) · GW(p)
All you're saying is that Saddam called the USA's bluff and was wrong and it was disastrous. That could EASILY have happened with an attempt by the US to demand inspections from Russia.
Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals.
I read "it" in "it served a useful role" as referring to "demanding inspections". And I took "which meant the threats were bluffs" to mean "in the hypothetical involving the USSR, the threats were bluffs". Because the prior clause had clearly established that you were talking about the USSR. Maybe instead of accusing me of bad faith, you could actually try to clear up the confusion. I'll be downvoting your posts until you do. It would be nice if you could write your sentence with correct and clear grammar, especially when dealing with complex compound sentences, and if you can't be bothered to do so, then don't complain about people having trouble parsing your sentences. When there's a failure in communication, attributing all of the blame to the other person is a very anti-rationalist position to take.
By removing one potential way for it to be a bluff, the probability of being a bluff necessarily falls
You didn't merely say that the probability is lower; you presented it as a logical certainty.
'He' is Saddam. Obviously. That was how the comment thread started and what I was objecting to and I even name Saddam in the same paragraph you claim to be confused by!
You said that Punoxysm asserted that Saddam did it for irrational reasons. I don't think that it is entirely clear which statement by Punoxysm you consider to be making that assertion. If I had been unclear about who you were talking about, I would have said who, rather than what.
↑ comment by ThisSpaceAvailable · 2014-07-26T03:31:00.847Z · LW(p) · GW(p)
gwern, I was under the impression that this is a rationalist site, dedicated to the idea that people are fallible creatures, and should not act with the conviction that they are right and anyone who disagrees with them is wrong. I have done more than my fair share to resolve this misunderstanding. I have politely asked for clarification. I have even gone to the trouble of asking a third party to read the posts, and this third party agrees that the sentence "Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies' long-term goals." is confusing. Now, the question is: are you willing to act like an adult, or are you going to just have a temper tantrum because someone doesn't understand you? Are you willing to go to the same effort that I have, and get your own third party (fourth party, I suppose) to read the sentence, and see whether they understand it? There are all sorts of ambiguities in your sentence, such as whether it is intended to be parsed as "(the USSR had no reason to think and be correct in thinking it served a useful role for the USA) which meant the threats were bluffs" or "the USSR had no reason to think and be correct in thinking (it served a useful role for the USA which meant the threats were bluffs)". Note that the latter is grammatically incorrect, in that the "which" should be a "that". And no, it's not being a Grammar Nazi to point out grammatical errors that affect readability. If you can't have a calm, rational, and civil conversation about this, then I can only conclude that you are not a rationalist.
Replies from: gwern↑ comment by gwern · 2014-07-26T13:33:23.630Z · LW(p) · GW(p)
gwern, I was under the impression that this is a rationalist site, dedicated to the idea that people are fallible creatures, and should not act with the conviction that they are right and anyone who disagrees with them is wrong...Now, the question is: are you willing to act like an adult, or are you going to just have a temper tantrum because someone doesn't understand you?...If you can't have a calm, rational, and civil conversation about this, then I can only conclude that you are not a rationalist.
I really don't care about your underhanded attempts to shame me into further engagement, and I stand by my earlier comment.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2014-08-05T19:20:00.349Z · LW(p) · GW(p)
Openly asking you to explain your post is "underhanded"? You have already engaged in further engagement. It's just that that engagement consists of going to another thread and insulting me, rather than actually addressing the issue in the thread where it came up, like an adult.
Replies from: drethelin↑ comment by [deleted] · 2014-07-01T15:59:20.351Z · LW(p) · GW(p)
In a world where a possibly-irrational actor is using "do what we tell you or you get nuked" as an instrument of foreign policy against a load of other possibly-irrational actors, how long would it be before something went horribly wrong?
Replies from: gwern↑ comment by gwern · 2014-07-01T17:14:34.932Z · LW(p) · GW(p)
Is a world in which only one possibly-irrational actor has nukes and can make threats more likely to go wrong or upon going wrong go horribly wrong, than a world in which dozens of possibly-irrational actors have nukes and can make threats?
↑ comment by solipsist · 2014-07-02T00:03:25.156Z · LW(p) · GW(p)
I don't think this is a terribly strong reply. Saddam Hussein's hypothesis of US policy towards him was mistaken. Perhaps his hypothesis was based on a solid conceptual framework about the US acting in its own long-term self-interest. But we've tested Hussein's hypothesis, it was mistaken, and Saddam died.
Replies from: gwern↑ comment by gwern · 2014-07-02T01:50:41.914Z · LW(p) · GW(p)
It is a strong reply. What the Saddam example shows is that ultimatum givers can be irrational; but that's not what you need to show that a USA ultimatum to the USSR would have failed! You need to show that the USSR would have been irrational. The Saddam example doesn't show that. It shows that 'crazy' totalitarian dictators can actually be more rational than liberal Western democracies, which is exactly what is needed for the proposed plan to work.
That's why I say that the Saddam example supports the proposed plan, it doesn't undermine it: it establishes the sanity of the only actor who matters once the plan has been put into action - the person receiving the ultimatum.
Replies from: solipsist↑ comment by solipsist · 2014-07-02T02:31:06.882Z · LW(p) · GW(p)
That's why I say that the Saddam example supports the proposed plan
So, all else held constant: if Saddam Hussein had capitulated to US demands and (counterfactually) did not rebuff inspectors, you would count that as evidence against the proposed plan?
ETA In the interests of positive feedback -- I like the overall post, and I'm just picking on this individual comment.
Replies from: gwern↑ comment by gwern · 2014-07-02T02:39:31.633Z · LW(p) · GW(p)
No, that would be evidence for it. I know you are trying to show I am having it both ways, but I am not. Think of the full tree of possibilities: ultimatum/no-ultimatum, bluff/real, rational-refusal/irrational-refusal. If a real ultimatum had been issued and Saddam had then refused for irrational reasons, that would be strong evidence against the plan, because that's the situation the plan predicts should go well. And that's the situation Punoxysm thought he'd found, but he hadn't.
(Actually, you're the second person today to think I was doing something like that. I mentioned on IRC that I had correctly predicted to Gawker in late 2013 that the black market BMR would soon be busted by law enforcement - as most of its employees would be, within two months or so, while setting up the successor black market Utopia - mentioning that, among other warning signs, BMR had never mentioned detecting attempts by law enforcement to infiltrate it; someone quoted at me that surely 'absence of evidence is evidence of absence'? Surely if BMR had claimed to be seeing law enforcement infiltration I would consider that evidence for infiltration, so how could I turn around and argue that lack of BMR claims was also evidence for infiltration? Yes, this is a good criticism - in a binary context.
But this was more complex than a binary observation: there were at least 3 possibilities. 1. it could be that law enforcement was not trying to infiltrate at all, 2. it could be they were trying & had failed, or 3. it could be that they were trying & succeeded. BMR's silence was evidence they didn't spot any attacks, so this is evidence that law enforcement was not trying, but it was also evidence for the other proposition that law enforcement was trying & succeeding; a priori, the former was massively improbable because BMR was old and notorious and it's inconceivable LE was not actively trying to bust it, while the latter was quite probable & had just been done to Silk Road. Hence, observing BMR silence pushed the infiltration outcome to a high posterior while the not-trying remained still pretty unlikely.
Of course, for an obscure small marketplace, the reasoning would happen the other way around: because it starts off more likely to be ignored than infiltrated, silence is golden. I'm thinking of titling any writeup "The Pig That Didn't Oink".)
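This three-hypothesis argument can be made concrete with a small numerical sketch. The priors and likelihoods below are illustrative assumptions chosen to match the qualitative claims in the comment (a big, notorious market is very unlikely to simply be ignored), not estimates from the actual BMR case:

```python
# Bayesian update over three hypotheses about law enforcement (LE) vs. a
# large, notorious black market, given the observation "the market reports
# no detected infiltration attempts". All numbers are illustrative.

# Hypotheses:
#   not_trying:        LE is not attempting infiltration at all
#   trying_failing:    LE is trying and failing (failed attempts get noticed)
#   trying_succeeding: LE is trying and succeeding (nothing gets noticed)
priors = {"not_trying": 0.05, "trying_failing": 0.50, "trying_succeeding": 0.45}

# P(silence | hypothesis): silence is expected under "not trying" and
# "succeeding", but unlikely under "trying and failing".
likelihood_silence = {"not_trying": 0.9, "trying_failing": 0.1,
                      "trying_succeeding": 0.9}

unnorm = {h: priors[h] * likelihood_silence[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

# Silence pushes "trying & succeeding" to a high posterior (~0.81),
# while "not trying" remains unlikely (~0.09).
for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

Flipping the priors reproduces the obscure-small-marketplace case mentioned above: if "not_trying" starts out dominant, silence mostly confirms that the market is being ignored, and silence is golden.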
Replies from: Benja, ThisSpaceAvailable↑ comment by Benya (Benja) · 2014-07-03T12:59:06.668Z · LW(p) · GW(p)
Incidentally, the same argument also applies to Governor Earl Warren's statement quoted in Absence of evidence is evidence of absence: He can be seen as arguing that there are at least three possibilities: (1) there is no fifth column; (2) there is a fifth column and it is supposed to do sabotage independent of an invasion; (3) there is a fifth column and it is supposed to aid a Japanese invasion of the West Coast. In case (2), you would expect to have seen sabotage; in cases (1) and (3), you wouldn't, because if the fifth column were known to exist by the time of the invasion, it would be much less effective. Thus, while observing no sabotage is evidence against a fifth column existing, it is evidence in favor of a fifth column existing and being intended to support an invasion.

I recently heard Eliezer claim that this was giving Warren too much credit when someone pointed out an interpretation similar to this, but I'm pretty sure this argument was represented in Warren's brain (if not in explicit words) when he made his statement, even if it's pretty plausible that his choice of words was influenced by making it sound as if the absence of sabotage was actually supporting the contention that there was a fifth column.
In particular, Warren doesn't say that the lack of subversive activity convinces him that there is a fifth column, he says that it convinces him "that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed". Moreover, in the full transcript, he claims that there are reasons to think (1) very unlikely, namely that, he alleges, the Axis powers all use them everywhere else:
To assume that the enemy has not planned fifth column activities for us in a wave of sabotage is simply to live in a fool's paradise. These activities, whether you call them "fifth column activities" or "sabotage" or "war behind the lines upon civilians," or whatever you may call it, are just as much an integral part of Axis warfare as any of their military and naval operations. When I say that I refer to all of the Axis powers with which we are at war. [...] Those activities are now being used actively in the war in the Pacific, in every field of operations about which I have read. They have unquestionably, gentlemen, planned such activities for California. For us to believe to the contrary is just not realistic.
I.e., he claims that (1) would be highly anomalous given the Axis powers' behavior elsewhere. On the other hand, he suggests that (3) fits a pattern of surprise attacks:
[...] It convinces me more than perhaps any other factor that the sabotage that we are to get, the fifth column activities that we are to get, are timed just like Pearl Harbor was timed and just like the invasion of France, and of Denmark, and of Norway, and all of those other countries.
And later, he explicitly argues that you wouldn't expect to have seen sabotage in case (3):
If there were sporadic sabotage at this time or if there had been for the last 2 months, the people of California or the Federal authorities would be on the alert to such an extent that they could not possibly have any real fifth column activities when the M-day comes.
So he has the pieces there for a correct Bayesian argument that a fifth column still has high posterior probability after seeing no sabotage, and that a fifth column intended to support an invasion has higher posterior than prior probability: Low prior probability of (1); (comparatively) high prior probability of (3); and an argument that (3) predicts the evidence nearly as well as (1) does. I'm not saying his premises are true, just that the fact that he claims all of them suggests that his brain did in fact represent the correct argument. The fact that he doesn't say that this argument convinces him "more than anything" that there is a fifth column, but rather says that it convinces him that the sabotage will be timed like Pearl Harbor (and France, Denmark and Norway), further supports this -- though, as noted above, while I think that his brain did represent the correct argument, it does seem plausible that his words were chosen so as to suggest the alternative interpretation as well.
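This reconstruction of Warren's argument has the same three-hypothesis shape, and both headline claims (a fifth column retains high posterior probability, and the invasion-support variant ends with higher posterior than prior) can be checked with toy numbers. All figures here are illustrative assumptions, not historical estimates:

```python
# Warren's argument as a three-hypothesis Bayesian update.
# Priors reflect his premises: (1) very unlikely, (2) and (3) comparable.
priors = {
    "no_fifth_column": 0.10,       # (1)
    "independent_sabotage": 0.45,  # (2): sabotage independent of an invasion
    "invasion_support": 0.45,      # (3): sabotage timed to support an invasion
}

# P(no sabotage seen so far | hypothesis): (2) strongly predicts visible
# sabotage by now; (1) and (3) both predict quiet.
p_quiet = {"no_fifth_column": 1.0, "independent_sabotage": 0.1,
           "invasion_support": 0.9}

z = sum(priors[h] * p_quiet[h] for h in priors)
posterior = {h: priors[h] * p_quiet[h] / z for h in priors}

fifth_column_prior = priors["independent_sabotage"] + priors["invasion_support"]
fifth_column_post = (posterior["independent_sabotage"]
                     + posterior["invasion_support"])

# No sabotage is mild evidence against a fifth column overall...
print(fifth_column_post < fifth_column_prior)
# ...but evidence *for* the invasion-support variant specifically.
print(posterior["invasion_support"] > priors["invasion_support"])
```

The point is structural: observing no sabotage lowers the total probability of a fifth column slightly (here from 0.90 to about 0.82) while concentrating most of the remaining mass on hypothesis (3).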
↑ comment by ThisSpaceAvailable · 2014-07-07T21:19:43.291Z · LW(p) · GW(p)
Surely if BMR had claimed to be seeing law enforcement infiltration I would consider that evidence for infiltration, so how could I turn around and argue that lack of BMR claims was also evidence for infiltration? Yes, this is a good criticism - in a binary context.
So, if BMR had claimed to be seeing infiltration, would you consider that evidence that BMR is not about to be busted?
Replies from: gwern↑ comment by gwern · 2014-07-07T22:10:07.674Z · LW(p) · GW(p)
Yes. If a big market one expects to be under attack reports fending off attack, then one would be more optimistic about it:
Hence, observing BMR silence pushed the infiltration outcome to a high posterior while the not-trying remained still pretty unlikely. Of course, for an obscure small marketplace, the reasoning would happen the other way around: because it starts off more likely to be ignored than infiltrated, silence is golden
(That said, that only applies to the one particular kind of observation/argument from silence; as I told Chen, there were several reasons to expect BMR to be short-lived on top of the general short-livedness of black-markets, but I think the logic behind those other reasons doesn't need to be explained since they're not tricky or counterintuitive like the argument from silence.)
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2014-07-08T06:39:23.257Z · LW(p) · GW(p)
Then it seems to me that when responding to "Surely if BMR had claimed to be seeing law enforcement infiltration I would consider that evidence for infiltration, so how could I turn around and argue that lack of BMR claims was also evidence for infiltration?", you should lead off with "I would consider that evidence for infiltration, but against an imminent bust", before launching into all the explanation. That way, it would be more clear whether you are denying the premise ("you'd consider that evidence for your thesis, too"), rather than just the conclusion. And the phrase "If a big market one expects" would be a lot clearer with "that" between "market" and "one".
↑ comment by ThisSpaceAvailable · 2014-07-03T07:26:38.591Z · LW(p) · GW(p)
It would seem Saddam's mistake was in thinking the USA was run by rational actors, and not run by morons who would sabotage their geopolitical interests in the interests of revenge against a "guy that tried to kill my dad at one time".
If the US had been able to credibly pre-commit to the invasion if inspections were not allowed, then that pre-commitment would not be foolish. And once they had attempted such a pre-commitment, not following through would have harmed their ability to make pre-commitments in the future. A willingness to incur losses to punish others is a vital part of diplomacy. If that's "irrational", you have a very narrow view of rationality, and your version of "rationality" will be absolutely crushed in pretty much any negotiation.
As my parable points out, one should not expect that sort of rational planning from the USA or indeed large countries in general.
So, if I'm following correctly, your position was that the US was foolish for following through ... and Hussein was foolish for not realizing they would follow through. So if everyone is foolish, how can you argue that because X would be in hypothetical Stalin's best interests, it somehow follows that he would do X?
So no, I think your objection does not hold water once one actually knows why the inspections were refused, and does not apply to the hypothetical involving Stalin.
Maybe it's the late hour, but I'm having trouble seeing how "The other guy may decide we're bluffing and call us on it" does not apply to hypothetical Stalin.
Replies from: gwern↑ comment by gwern · 2014-07-03T17:46:53.299Z · LW(p) · GW(p)
And once they had attempted such a pre-commitment, not following through would have harmed their ability to make pre-commitments in the future. A willingness to incur losses to punish others is a vital part of diplomacy.
A willingness to incur losses is a useful part - if you are seeking useful goals. I may well want to follow through on a threat in order to preserve my credibility for future threats, but if I choose to make threats for stupid self-defeating goals, then precommitting is a horrible irrational thing which destroys me. The USA would have been much better off not invading Iraq and losing some credibility, because the invasion of Iraq would have predictably disastrous consequences for both the USA and Iraq which were far worse than the loss of credibility.
A willingness to incur losses to punish others is a vital part of diplomacy. If that's "irrational", you have a very narrow view of rationality, and your version of "rationality" will be absolutely crushed in pretty much any negotiation.
The first rule of strategy: don't pursue stupid goals. If you think that you can pursue any goal unrelated to what you actually want, then you have a very narrow view of rationality and your version of rationality will be absolutely crushed in pretty much any negotiation. You do not want to be able to precommit to shooting yourself in the foot.
So, if I'm following correctly, your position was that the US was foolish for following through ... and Hussein was foolish for not realizing they would follow through.
The US was foolish for issuing threats to achieve a goal that harmed its actual interests, Saddam was mistaken but reasoning correctly in treating it as a bluff, and the US was even more foolish to carry through on the threat.
Maybe it's the late hour, but I'm having trouble seeing how "The other guy may decide we're bluffing and call us on it" does not apply to hypothetical Stalin.
Because in that scenario, Stalin would not be thinking the USA is doing something so stupid it must be a bluff, because it wouldn't be so stupid it is probably a bluff.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2014-07-04T01:37:15.495Z · LW(p) · GW(p)
A willingness to incur losses is a useful part - if you are seeking useful goals.
You are adding conditions. A willingness to incur losses is very much a necessary condition. Identifying other necessary conditions doesn't change that.
I may well want to follow through on a threat in order to preserve my credibility for future threats, but if I choose to make threats for stupid self-defeating goals, then precommitting is a horrible irrational thing which destroys me.
How is enforcing the sanctions a stupid goal?
The USA would have been much better off not invading Iraq and losing some credibility, because the invasion of Iraq would have predictably disastrous consequences for both the USA and Iraq which were far worse than the loss of credibility.
I disagree that there were predictable disastrous consequences. The actual results are hardly disastrous, and the harmful results were not entirely predictable.
Saddam was mistaken but reasoning correctly in treating it as a bluff,
It's hard to claim that Saddam was reasoning correctly when he arrived at the incorrect conclusions.
Because in that scenario, Stalin would not be thinking the USA is doing something so stupid it must be a bluff, because it wouldn't be so stupid it is probably a bluff.
Maybe that's an argument for Stalin being less likely to call the bluff, but it's far from an argument that we can be sure of it.
Replies from: gwern↑ comment by gwern · 2014-07-04T02:59:39.537Z · LW(p) · GW(p)
A willingness to incur losses is very much a necessary condition.
And you are treating willingness to incur losses as a sufficient condition, when it is merely a necessary condition. Willingness to incur losses is only useful when pursuing desirable goals; if you are pursuing harmful goals like 'invade Iraq, waste trillions, destabilize the Middle East, and offer your regional enemy a weak divided pawn', then being unwilling to incur losses such as in threats actually makes you better off.
How is enforcing the sanctions a stupid goal?
Who said anything about sanctions? I thought we were discussing the US invasion of Iraq.
I disagree that there were predictable disastrous consequences. The actual results are hardly disastrous, and the harmful results were not entirely predictable.
I strongly disagree that they were not predictable. They were predicted long in advance by the many critics of the proposed invasion. I was paying very close attention to the runup to the invasion because I was shocked that something so moronic, so based on flimsy evidence, so unnecessary to fighting the War on Terror, and going to entail hundreds of billions of dollars wasted in the best case, was actually going to happen. The military consequences of invading a mashed-up pseudo-state ruled by a brutal dictatorship run by an ethnic minority, where minority vs. majority has been the major running conflict in the past millennium of Islamic history, did not take a Napoleon to extrapolate.
It's hard to claim that Saddam was reasoning correctly when he arrived at the incorrect conclusions.
And was a lottery winner reasoning correctly because the consequences happened to be good? Does one example of a good outcome justify any bad reasoning?
Maybe that's an argument for Stalin being less likely to call the bluff, but it's far from an argument that we can be sure of it.
It's an argument that the Saddam example does not tell us anything useful about Stalin, because the key reason Saddam refused does not exist in the Stalin situation.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2014-07-07T21:08:25.748Z · LW(p) · GW(p)
And you are treating willingness to incur losses as a sufficient condition, when it is merely a necessary condition.
I don't see how you're interpreting me as saying that. Willingness to incur losses is a vital part of diplomacy. The fact that this can facilitate bad things doesn't change that. It's like responding to the claim that a rifle is a vital part of deer hunting by saying "Not if you shoot your foot rather than the deer".
Who said anything about sanctions? I thought we were discussing the US invasion of Iraq.
The inspection regime was part of the sanctions imposed against Iraq.
I strongly disagree they were not predictable. They were predicted long in advance by the many critics of the proposed invasion.
There were people predicting bad consequences, and there were people predicting good consequences. Looking at hindsight doesn't make it predictable.
And was a lottery winner reasoning correctly because the consequences happened to be good?
It's a bit odd to go from reasoning that it was predictable based on hindsight, to rejecting the idea that Saddam reasoned correctly based on hindsight. I didn't say that Saddam definitely wasn't reasoning correctly, only that it is hard to argue that position. Unlike a lottery winner, this wasn't a random event. Clearly, if Saddam thought it was definitely a bluff, he was completely wrong. So you would have to argue that Saddam recognized that it likely was not a bluff, but he assigned such a high confidence to it being a bluff that calling it was worth the risk of death, and that level of confidence was well-justified. The very fact that it was not a bluff is quite strong evidence that thinking it was a bluff was wrong.
Replies from: gwern↑ comment by gwern · 2014-07-07T22:16:24.754Z · LW(p) · GW(p)
Willingness to incur losses is a vital part of diplomacy. The fact that this can facilitate bad things doesn't change that. It's like responding to the claim that a rifle is a vital part of deer hunting by saying "Not if you shoot your foot rather than the deer".
Indeed. If you suck as much at shooting a rifle as the USA sucks at diplomacy in the Middle East, you should leave it at home.
The inspection regime was part of the sanctions imposed against Iraq.
The sanctions did not require the USA invasion which has been so disastrous.
There were people predicting bad consequences, and there were people predicting good consequences. Looking at hindsight doesn't make it predictable.
If it was such a good idea, why did it take the patriotic fervor of 9/11 and a case about WMDs based on lies and exaggerations to convince the USA to invade Iraq? Because it was a predictably bad idea which a lot of people were skeptical of.
It's a bit odd to go from reasoning that it was predictable based on hindsight, to rejecting the idea that Saddam reasoned correctly based on hindsight. I didn't say that Saddam definitely wasn't reasoning correctly, only that it is hard to argue that position. Unlike a lottery winner, this wasn't a random event. Clearly, if Saddam thought it was definitely a bluff, he was completely wrong. So you would have to argue that Saddam recognized that it likely was not a bluff, but he assigned such a high confidence to it being a bluff that calling it was worth the risk of death, and that level of confidence was well-justified.
I don't know what to say to this but to repeat myself: he was reasoning correctly about the consequences of it not being a bluff, and whether a rational self-interested USA would want to do it. To call this wrong is itself a post hoc argument from hindsight that he should have foreseen that the USA was irrational and self-sabotaging and acted accordingly, and voluntarily topple his regime & empower Iran solely on the odds of that.
The very fact that it was not a bluff is quite strong evidence that thinking it was a bluff was wrong.
And is this 'quite strong evidence' neutralized by recent events in Syria? What's the proper reference class here?
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2014-07-08T06:29:11.154Z · LW(p) · GW(p)
The sanctions did not require the USA invasion which has been so disastrous.
Saddam didn't seem to be amenable to complying with them without serious action.
If it was such a good idea, why did it take the patriotic fervor of 9/11 and a case about WMDs based on lies and exaggerations to convince the USA to invade Iraq? Because it was a predictably bad idea which a lot of people were skeptical of.
I'm hardly denying that there were concerns.
I don't know what to say to this but to repeat myself: he was reasoning correctly about the consequences of it not being a bluff, and whether a rational self-interested USA would want to do it.
I don't know what definition of "rationality" you are using, that it is correct to trust one's life to others following it.
To call this wrong is itself a post hoc argument from hindsight that he should have foreseen that the USA was irrational and self-sabotaging and acted accordingly, and voluntarily topple his regime & empower Iran solely on the odds of that.
It's hindsight only in the most broad sense, and all empirical knowledge is based on hindsight in the most broad sense. And the literal reading of that sentence is that "Saddam" is the subject of "topple his regime". Who is saying that Saddam should have toppled his own regime?
And is this 'quite strong evidence' neutralized by recent events in Syria? What's the proper reference class here?
"Recent events in Syria"? You'll have to be more specific. And you seem to be trying to slide from a discussion of the case itself to discussion of whether the case is the proper reference class.
comment by Wei Dai (Wei_Dai) · 2014-07-02T12:04:51.696Z · LW(p) · GW(p)
the genius is John von Neumann
Historical note: According to Prisoner's Dilemma By William Poundstone, von Neumann didn't suggest issuing a nuclear ultimatum but instead advocated a surprise first strike against the Soviet Union. Bertrand Russell did suggest a nuclear ultimatum but with the goal of establishing a world government rather than just non-proliferation.
In a previous related discussion, I noted that I. I. Rabi and Enrico Fermi did propose using the threat of nuclear attack to deter the development of fusion weapons. However, in my online searches, I haven't found any prominent historical figures suggesting the exact thing that you (and I, in that previous thread) are suggesting here, of using a nuclear threat just to prevent the proliferation of fission weapons, which is kind of curious...
comment by pianoforte611 · 2014-07-01T00:03:55.839Z · LW(p) · GW(p)
Nacirema huh? I feel stupid now.
Replies from: gwern↑ comment by gwern · 2014-07-01T00:14:30.595Z · LW(p) · GW(p)
It's a classic! You might enjoy some of the other articles/parables about the Nacirema: https://en.wikipedia.org/wiki/Nacirema
Replies from: bbleeker↑ comment by Sabiola (bbleeker) · 2014-07-01T11:47:38.598Z · LW(p) · GW(p)
So that's where the N comes from! I was wondering why it was Nacirema instead of Acirema.
Replies from: gwern↑ comment by gwern · 2014-07-01T15:48:33.429Z · LW(p) · GW(p)
I think at least part of it is that 'Acirema' is way more recognizable as a variant on 'America' than is 'Nacirema'. The 'N' is a major decoy, while 'Acirema' has the same word shape ("The theory holds that a novel bouma shape created by changing the lower-case letters to upper-case hinders a person’s recall ability.")
comment by Lalartu · 2014-07-01T08:59:19.986Z · LW(p) · GW(p)
Well, in reality the Americans understood that Stalin would never agree to such a plan, so it meant war. They did not have enough nukes for guaranteed victory (a few cities were acceptable losses for the USSR), did not have any reliable information about Soviet nuclear research, and knew how badly a war with Russia could end.
Replies from: James_Miller↑ comment by James_Miller · 2014-07-01T15:44:17.032Z · LW(p) · GW(p)
Remember the firebombings of Dresden and Tokyo. At the time, the U.S. didn't need nuclear weapons to inflict mass damage by air on cities. And since the Soviets would not have been able to have a concentrated mass tank force (since it would have been nuked) our tank forces would have been unstoppable.
Replies from: Punoxysm, Lalartu↑ comment by Punoxysm · 2014-07-01T21:09:59.712Z · LW(p) · GW(p)
The US took a long time to establish the air superiority necessary to execute those firebombings; and tactical nuclear weapons would have been available in very low numbers and difficult to deploy effectively [anything you nuke, your own forces can't pass; not even because of radiation but because of infrastructure destruction] (and certainly the Soviets could have adapted; they outnumbered Allied forces substantially in Europe).
But forget all that; how is a bloody, brutal war immediately after WWII to subjugate the Soviets preferable to the Cold War as it happened? Would it have reduced x-risk in the long term? I doubt that; how long could the US have monopolized nuclear weapons, especially if it immediately used them as threats that would terrify and antagonize every other nation in the world?
Replies from: James_Miller↑ comment by James_Miller · 2014-07-01T22:06:38.037Z · LW(p) · GW(p)
But forget all that; how is a bloody, brutal war immediately after WWII to subjugate the Soviets preferable to the Cold War as it happened?
Let's assume the many worlds hypothesis is correct and consider all of the branches of the multiverse that share our 1946. In how many of them did the cold war turn hot? For what percentage would it have been better to make the threat?
Also, a world in which just the United States has atomic weapons would have many additional benefits such as probably higher world economic growth rates because of lower defense spending.
Replies from: Punoxysm, Kaj_Sotala↑ comment by Punoxysm · 2014-07-02T06:11:28.597Z · LW(p) · GW(p)
Once we get into talking about alternate histories, our ability to have an evidence-based discussion pretty much goes out the window.
I'll say the following:

1) The cold war as we know it did come "close" in some sense to going hot; that's bad, that's x-risk in action.

2) All things considered, the last 70 years as they actually happened went a hell of a lot better than the 70 years before, just on a political and military basis alone (so disregarding technology).

3) Ultimatums meant to monopolize the atomic bomb make sense if the goal is enacting a US-led One-World-Government, even if you believe WWIII would have broken out after ultimatums somehow fail to lead to peace.

4) I DO believe WWIII would have broken out.

5) I believe an attempted One-World-Government or other extreme attempt at global hegemony by the US would have been a disaster even without a USA-USSR WWIII.
↑ comment by Kaj_Sotala · 2014-07-02T05:09:55.826Z · LW(p) · GW(p)
Let's assume the many worlds hypothesis is correct and consider all of the branches of the multiverse that share our 1946. In how many of them did the cold war turn hot? For what percentage would it have been better to make the threat?
Given that a massive amount of quantum-scale randomness would have to go systematically in a different direction for it to have any noticeable macro-scale effect, and that even then most macro-scale effects would be barely even noticeable, isn't the default answer to questions like this always "in the overwhelming majority of branches, history never noticeably diverged from ours"?
Replies from: James_Miller↑ comment by James_Miller · 2014-07-02T05:25:46.463Z · LW(p) · GW(p)
Wouldn't quantum effects have some influence on who gets cancer from background radiation, and wouldn't the impact of this ripple in a chaotic way throughout the world so that, say, Petrov isn't the one on duty on 9/26/1983?
Replies from: Baughn↑ comment by Lalartu · 2014-07-02T08:21:20.031Z · LW(p) · GW(p)
Compared to the destruction done by German forces, American strategic bombing would have been just an annoyance. Also, the USA would have been unable to achieve air superiority, and its bombers would have suffered heavy losses.
Using nukes against even heavily concentrated tanks (~50 tanks per kilometer of frontline, as in major tank battles) is just a waste of nukes. In a clash between Soviet and American tank forces Americans would have been curbstomped.
Replies from: James_Miller↑ comment by James_Miller · 2014-07-02T15:27:00.651Z · LW(p) · GW(p)
Compared to the destruction done by German forces, American strategic bombing would have been just an annoyance.
No, at the very least we would have been able to attack Soviet cities from bases in China and Japan that the Germans couldn't hit.
Using nukes against even heavily concentrated tanks (~50 tanks per kilometer of frontline, as in major tank battles) is just a waste of nukes.
I'm not sure about this since the goal would be to create a hole for your tanks to exploit so you could encircle the enemy.
Replies from: Lalartu↑ comment by Lalartu · 2014-07-03T08:17:26.641Z · LW(p) · GW(p)
No, at the very least we would have been able to attack Soviet cities from bases in China and Japan that the Germans couldn't hit.
No, the main Soviet industrial centers were far beyond the range of any bombers, whether from Europe or from China. Also, bombing a city does far less to reduce military production than capturing it (look at the figures for Germany in 1944-1945).
I'm not sure about this since the goal would be to create a hole for your tanks to exploit so you could encircle the enemy.
So you suggest taking America's nukes (about a dozen in 1946) and dropping them on Soviet tanks, from strategic bombers that can hardly hit a target smaller than a city, to gain a modest tactical advantage (bringing in two battalions of tank destroyers would have the same effect)? Using such brilliant plans, the USA would surely have lost WWIII.
Replies from: James_Miller↑ comment by James_Miller · 2014-07-03T14:46:21.181Z · LW(p) · GW(p)
I'm far from an expert on tank battles, but my impression is that what you really want to do is encircle the enemy tanks to cut them off from supplies. Being able to punch a small hole in enemy defenses would be extremely helpful. My impression was also that strategic bombers had difficulty hitting targets because of interference from anti-air defenses and enemy aircraft, and this wouldn't have been a problem when attacking targets in the field in conditions where the U.S. had air superiority.
Replies from: Lalartu↑ comment by Lalartu · 2014-07-04T09:08:30.213Z · LW(p) · GW(p)
An encirclement operation works on a much bigger scale: the "small hole" here is tens of kilometers wide, through a defence line that is also tens of kilometers deep. Using nukes against tanks makes no sense unless the numbers of nukes and tanks are comparable.
The poor accuracy of strategic bombing was due to high altitude. At low altitude these bombers would be very easy targets for anti-aircraft artillery (Soviet divisions had lots of it), and dropping a nuke would be a suicide mission.
comment by Kaj_Sotala · 2014-07-01T03:16:28.688Z · LW(p) · GW(p)
deducible by a grad student from published papers, as concerned agencies in Nacirema proved
Probably because I've read a few accounts of a grad student doing this, I realized what you were doing by this sentence.
comment by solaire · 2014-07-01T11:27:36.399Z · LW(p) · GW(p)
I have to say that nuclear warfare was less of a human extinction risk than some people tend to think or than this text directly suggests. Even a straight all-out war between the United States and the Soviet Union using their full arsenals would not have caused human extinction, nor would it likely have prevented some technological societies from rebuilding if they didn't outright survive. I've seen expert analyses out there, on raw destruction and on factors like subsequent global climate devastation, supporting this conclusion for any plausible military contingencies and actions. The remaining arguments in favor would have to be pretty convoluted, like claiming that by setting a sociopolitical precedent it would automatically guarantee that any future or rebuilt societies would seek military conflict through further nuclear wars.
The most dangerous extinction risk that could have been caused by human action in the 20th century would probably have been a deliberate attempt to destroy the ozone layer, in a supervillain sense. (This could be facilitated by nuclear weaponry, of course.) No actual polity, as far as anyone knows, planned such a thing. Accidental destruction, timed differently in an alternate-history pathway, could also have been pretty bad. To consider and compare a full range of hypotheses: biological warfare was (and still is) a threat, but is probably less of an x-risk overall as well, once you understand the flat-out mass-extinction potential of ozone destruction.
It wouldn't be bad to invite debate on these points, as I think fully understanding the various x-risks, near misses in the real world, and all that is rather important to getting something useful out of this parable.
Replies from: gwern↑ comment by gwern · 2014-07-01T15:39:51.773Z · LW(p) · GW(p)
I've seen expert analyses out there, on raw destruction and on factors like subsequent global climate devastation, supporting this conclusion for any plausible military contingencies and actions.
The only one I've personally read is Herman Kahn's On Thermonuclear War, which oversimplifies a lot and generally tries to paint matters as optimistically as possible; as well, people from that era, like Samuel Cohen in his memoirs, describe Kahn as willing to fudge numbers to make his scenarios look better.
Personally, I am not optimistic. Remember the formulation of existential risk: not just extinction, but also the permanent curtailment of human potential. So if industrialized civilization collapsed permanently, that would be a serious x-risk almost up there with extinction itself. I agree that I don't think nuclear war is likely to immediately cause human extinction, but if it destroys industrialized civilization, then it's setting us up to actually be wiped out over the coming millennia or millions of years by a fluke pandemic or asteroid or any of the usual natural x-risks.
Coal, oil, surface metals, and many other resources are effectively impossible to extract at low tech levels like, say, the 1800s. (Imagine trying to frack or deep-sea mine or extract tar with 1800s metallurgy or anything.) Historically, we see that entire continents can go for millennia on end with little meaningful economic change; much of Africa might as well not be in the same world, for all the good progress has done it. Intellectual traditions and scholarship can become corrupted into meaningless repetition of sacred literature (how much genuine innovation took place in China from AD 0 to AD 1800, compared to its wealth and large intelligentsia? why do all acupuncture trials 'succeed' in China and Japan when it's shown to be a worthless placebo in Western trials?). We still don't know why the Industrial & Scientific Revolutions took place in Western Europe starting around the 1500s, when there had been urbanized civilizations for millennia and China in every way looked better, so how could we be confident that if humanity were reduced to the Dark Ages we'd quickly recover? Brief, reparable interruptions in globalized supply chains cause long-lasting damage - we still haven't recovered to the trendline of hard drive prices from the Thai floods, and that was just flooding, nothing remotely like a countervalue nuke against Bangkok knocking out a good chunk of Thai business & financial infrastructure. Experience/learning curve effects mean that high efficiencies are locked up in the heads and hands and physical arrangement of existing capital, so plants cannot simply be replaced overnight; the expertise has to be developed from scratch. And complex civilizations can simply collapse and disperse back into the low-tech agrarian societies from whence they sprang (I'm thinking particularly of the case studies in Tainter's The Collapse of Complex Societies).
Of course, we can't yet name an industrialized civilization that collapsed, but it's not like it's been a thing all that long - the Roman Empire lasted a lot longer than the Industrial Revolution has, but nevertheless, we know how that ended.
Replies from: solaire, Luke_A_Somers↑ comment by solaire · 2014-07-02T05:12:25.814Z · LW(p) · GW(p)
It feels like we have talked past each other given this and responses to other comments.
I do not think this really addressed the core misconception shaping the debate, or at best it contradicts historical expert analysis. Would you call it "industrial collapse" if, following a full-scale nuclear war, present-day Australia were still standing a month later with little military destruction or human casualties?
I am not directly an expert in the field, and climate science in particular has advanced a lot compared to historical research (on all topics, not just nuclear winter), but I have read some different authors. Also to the point, the sheer volume of expert work, characterized at best by conflicting opinions should you accept the most pessimistic nuclear warfare predictions, is worth considering. Sagan and Turco and others repeatedly collaborated on several high-profile works, and the state of expert science, I think, can accurately be said to have advanced over time.
See for example: http://www.atmos.washington.edu/~ackerman/Articles/Turco_Nuclear_Winter_90.pdf
This particular paper doesn't discuss, say, military strategy beyond very broad consensus (e.g. both sides would favor Northern Hemisphere targets), though see its many cited and other sources. Even conditionally overcoming, for the purpose of hypothetical consideration, the lower prior probability of certain full-scale military conflicts, direct, targeted destruction of more than about 20% of the world population as a military and strategic outcome just wasn't feasible, ever. This popular misconception might be readily dismissed by those of us here, but recognize that a large amount of past research went into fully trying to understand (admittedly we still don't, completely) the subsequent climate and ecological effects. The latter are the only real x-risk concern from a technological and natural-science standpoint. A few degrees Celsius of temperature change globally, and other havoc, is not nothing, but most predictions indicated low risk of a real extinction event. Much of the world would have nominally gone on without the US and USSR and the losses suffered by their respective allies, and by pretty much everyone's inspection it's not like those who survived would all be impoverished 3rd-worlders who could never recover.
It is a stretch to describe past predictions and understanding, even "1980 Australia survives intact, with some climate and ecological repercussions," as "industrial civilization completely collapses." Those two statements are not equivalent at all. The former prediction might have been incorrect, but it existed.
Clearly there are reasons to consider prior study on the matter less than ideal, experts lacking time or funding or facing political pressure. Though, saying that experts attempted to study the issue at the time and got it wrong is different from ignoring it and from others rejecting a correct conclusion by the experts. Very few expert predictions leaned in the direction of x-risk as considered here - not just immediate near extinction but also "permanent curtailment of potential," at least when putting nuclear warfare and low if uncertain nuclear winter predictions on the same scale as other x-risks.
Replies from: gwern↑ comment by gwern · 2015-03-01T21:41:08.582Z · LW(p) · GW(p)
Would you call it "industrial collapse" if, following a full-scale nuclear war, present-day Australia were still standing a month later with little military destruction or human casualties?
I assume you mean here that Australia escaped any direct attack? Sure. The lesson of "I, Pencil": no one person (or country) knows how to make a pencil. Australia is heavily integrated into the world economy: to caricature, they mine iron for China, and in exchange they get everything else. Can Australia make an Intel chip fab using only on-island resources? Could it even maintain such a chip fab? Can Australia replace the pharmaceutical factories of the USA and Switzerland using only on-island resources? Where do the trained specialists and rare elements come from? Consider the Great Depression: did Australia escape it? If it cannot escape a simple economic slowdown because it is so highly intertwined, it is not going to escape the disruption and substantial destruction of almost the entire scientific-industrial-technological complex of the Western world. Australia would immediately be thrown into dire poverty, and its advanced capabilities would begin decaying. Whether Australia becomes a new Tanzania of technology loss would depend on how badly mauled the rest of the world was, though, I would guess.
Even conditionally overcoming, for the purpose of hypothetical consideration, the lower prior probability of certain full scale military conflicts, direct, targeted destruction of more than about 20% of the world population as a military and strategic outcome just wasn't feasible, ever.
An instantaneous loss of 10-20% of population and destruction of major urban centers is pretty much unprecedented. The few examples I can think of with similar levels of population loss, like the Mongols & Iran or the Spanish & the New World, are not promising.
by pretty much everyone's inspection it's not like those who survived would all be impoverished 3rd-worlders who could never recover.
But none of those countries were responsible for the Industrial or Scientific Revolutions. Humanity would survive... much as it always has. That's the problem.
Clearly there are reasons to consider prior study on the matter less than ideal, experts lacking time or funding or facing political pressure. Though, saying that experts attempted to study the issue at the time and got it wrong is different from ignoring it and from others rejecting a correct conclusion by the experts. Very few expert predictions leaned in the direction of x-risk as considered here - not just immediate near extinction but also "permanent curtailment of potential," at least when putting nuclear warfare and low if uncertain nuclear winter predictions on the same scale as other x-risks.
I've read this paragraph 3 times and I still don't know what you're talking about. You're being way too vague about what experts or what predictions you're talking about or what you're responding to or how it connects to your claims about Australia.
↑ comment by Luke_A_Somers · 2014-07-01T16:32:30.441Z · LW(p) · GW(p)
the Roman Empire lasted a lot longer than the Industrial Revolution has, but nevertheless, we know how that ended.
Well, yes... It persisted until 1453. Rome wasn't the center of the Roman Empire since around 330.
The big idea that allows modern civilization is that you don't worship the knowledge, you go out and test it... that's the main thing, and that would persist easily. Knowing about germs is another biggie. That sort of stuff is spread very widely (if thinly), and could allow a rebound. But to get that rebound, it needs to have been there in the first place. Mere users of civilization who haven't become modern themselves would not get this boost. (see: all case-studies in Collapse, if I'm not mistaken)
Also, the last I checked on acupuncture, the placement is unimportant, but getting stuck with needles does help with pain. So they're continuing to do something that works, but they haven't removed the unimportant details.
Replies from: gwern↑ comment by gwern · 2014-07-01T17:33:11.930Z · LW(p) · GW(p)
It persisted until 1453. Rome wasn't the center of the Roman Empire since around 330.
Either way you want to count, from the first Roman conquests to the fall of the West or from the fall of the West to the fall of the East. Both get you periods comparable to or longer than the entire history of the Industrial & Scientific Revolutions so far.
The big idea that allows modern civilization is that you don't worship the knowledge, you go out and test it... that's the main thing, and that would persist easily.
I don't think 'testing things' is as easy or trivial as you think. It's very easy to 'test' something and get exactly the result you want. Or get a result which means nothing. Cargo cult science & thinking is the default, not the aberration. When science goes bad, it doesn't look like 'we've decided we aren't going to do Science anymore', it looks like this: http://lesswrong.com/lw/kfb/open_thread_30_june_2014_6_july_2014/b1u3 (To use an example from yesterday.) It looks like science, everyone involved thinks it's science, it passes all the traditional rituals like peer review and statistical tests, but it means next to nothing. The same way millennia of theologians or magicians or alchemists think they're doing something useful and acquiring knowledge.
Once the culture of real science is lost, I'm not sure it has a good chance of surviving. How well did the spirit of Greek philosophy survive the Roman Empire? Yes, eventually it came back, but there's anthropic bias there (we probably wouldn't be discussing science if Greek philosophy & logic hadn't survived somehow), and consider the chancy transmission of a lot of it from Greek to Arabic back to Europe.
Knowing about germs is another biggie.
Will degenerate almost immediately into classic taboos and miasmas and evil spirits. How well does folk understanding of antibiotics accord with reality? When people routinely discontinue antibiotic treatment because they feel better, are they exhibiting an understanding of germ theory? Or consider the popularity of anti-vax among the most highly educated as we speak...
but getting stuck with needles does help with pain
I was under the impression that sham acupuncture generally performed comparably to 'real' acupuncture: https://en.wikipedia.org/wiki/Acupuncture
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-01T17:37:45.307Z · LW(p) · GW(p)
Once the culture of real science is lost,
Right. But real science is widespread. There are research universities in Lesotho, and I've met professors from there and they know how science works. They've done it, and continue to do it.
I was under the impression that sham acupuncture generally performed comparably to 'real' acupuncture
Exactly. The sham acupuncture still involves poking people with needles! It's just not aligned.
Replies from: gwern↑ comment by gwern · 2014-07-01T17:48:45.056Z · LW(p) · GW(p)
There are research universities in Lesotho, and I've met professors from there and they know how science works.
And how much of the culture of science has spread through Lesotho? Or would survive the university being shut down? Or survive a single charismatic professor leaving and being replaced by a corrupt leader who demands publishable results? The question isn't whether Science exists in the world, but to what extent it's a delicate flower that lives in a greenhouse and will quickly die or become a shambling parody of itself when conditions change, and whether it can survive something like the collapse of civilization.
Exactly. The sham acupuncture still involves poking people with needles! It's just not aligned.
It does? I thought sham acupuncture involved either needle-less approaches or trick needles where it pokes the patient but retracts rather than breaks the skin.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-01T19:23:30.709Z · LW(p) · GW(p)
Pt 1: I don't know. The core of science is not so very complicated. Empiricism plus skepticism plus math. The hardest part of that is math, and of the three that is the most easily transmitted by book. Of the rest, that's a bit of sociology I can't judge. Lesotho isn't what I'm holding up as 'the most likely source of a rebound in the event of nuclear war' - it's an example of the spread of real science.
Pt 2: Sometimes... but even if acupuncture is a really reliable way of inducing a strong placebo effect on people even if they know it 'does nothing', that's useful.
Replies from: gwern↑ comment by gwern · 2014-07-02T02:08:55.542Z · LW(p) · GW(p)
The core of science is not so very complicated
Then why did it take so long?
The hardest part of that is math, and of the three that is the most easily transmitted by book.
The tradition of math is the most ancient & universal of the 3 parts you mention. Most regions of the world develop math, sometimes to fairly high levels like in India or China. Is that consistent with it being 'the hardest part'? In contrast, empiricism and skepticism are typically marginal and unpopular on the rare occasions they show up; the Greek Skeptics were one of the more minor traditions, the Carvaka of India were some heretics known from like one surviving text from the early BCs and were never a viable force, and offhand I don't even know of any Chinese philosophical tradition which could reasonably be described as either 'empirical' or 'skeptical'.
that's useful.
It's also not what they think they're measuring. Still diseased.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-02T10:50:53.027Z · LW(p) · GW(p)
Then why did it take so long?
Because it's psychologically hard and unintuitive, not because it's complicated. Math is complicated and difficult, but it's not psychologically challenging like 'do your best to destroy your own clever explanations and cheer if someone else does'.
Acupuncture makes a great example. Here we have folks who are on to something that works. Yay! Case closed. ... except, not. Because they don't have the idea of science, the hard and unintuitive thing that says you should try to find all the times that that thing you rely on doesn't work, they can't find those boundaries.
Replies from: gwern↑ comment by gwern · 2015-03-01T21:37:13.523Z · LW(p) · GW(p)
Because it's psychologically hard and unintuitive, not because it's complicated.
...and if science is psychologically hard & unintuitive, all the easier for it to be supplanted by something superficially similar but ineffective.
Math is complicated and difficult, but it's not psychologically challenging like 'do your best to destroy your own clever explanations and cheer if someone else does'.
And how does that not make science harder than math?
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2015-03-01T22:43:51.975Z · LW(p) · GW(p)
Skepticism and empiricism are robust ideas, by which I mean there's nothing particularly similar to them. They are also very compact: you can fit them on a postcard. On the other hand, math is this enormous edifice.
The 'getting it wrong' that you see all over modern science is a failure, yes, but most of these scientist-failures are failing due to contingent local factors like conflicts of interest and grant proposals and muddy results and competition pressures... they're failing to fulfill the scientific ideal for sure, but it's not because they lack the scientific ideal. They can correctly teach science. If bad scientists were all we had, then science would have bad habits and that would be bad, but it could be solved much more easily than having to redesign the thing without knowing that it was possible, like we did the first time.
This is still the case even if all the scientists are under the thumbs of warlords who make them do stupid stuff. The idea is there, the light can spread. Not right away, likely, but we won't need to wait thousands of years for it to re-emerge.
comment by The_Duck · 2014-07-03T01:00:05.747Z · LW(p) · GW(p)
The analogy seems pretty nice. The argument seems to be that, based on the historical record, we're doomed to collective inaction in the face of even extraordinarily dangerous risks. I agree that the case of nukes does provide some evidence for this.
I think you paint things a little too grimly, though. We have done at least a little bit to try to mitigate the risks of this particular technology: there are ongoing efforts to prevent proliferation of nuclear weapons and reduce nuclear stockpiles. And maybe a greater risk really would provoke a more serious response.
comment by Jach · 2014-07-02T19:12:01.560Z · LW(p) · GW(p)
Your post prompted me to recall what I read in Military Nanotechnology: Potential Applications and Preventive Arms Control by Jürgen Altmann. It deals mostly with non-molecular nanotech we can expect to see in the next 5-20 years (or already, as it was published in 2006), but it does go over molecular nanotech, and the commonly mentioned x-risk of a universal molecular assembler is worth thinking about, in addition to AGI, as something for the elites to handle over the next 70 years.
I think, as a small counter to the pessimistic outlook the parable gives, it's worth remembering that the Biological and Toxin Weapons Convention and especially the Chemical Weapons Convention have been fairly successful in their goals. The CWC lays out acceptable verification methods which aren't so demanding that a country accepting them slides into complete subjugation to the inspectors... If it could be extended to cover nanotech weapons, that'd be a good thing.
On the other hand, maybe they're not so much cause for optimism. The BTWC has a noticeable lack of verification measures, and Altmann cites that as mainly due to the US dragging its feet. The US can't even deal with managing smaller threats at home where it has complete jurisdiction, like 3D printed guns, so it's hard for me to see it in its current form dealing with a bigger threat of a nanotech arms race (let alone x-risks), especially if that requires playing nice with the international community.
comment by [deleted] · 2014-07-01T13:27:46.516Z · LW(p) · GW(p)
every sector benefits from electricity ‘too cheap to meter’
Except that that was never actually in the cards...
comment by bramflakes · 2014-07-01T00:17:56.305Z · LW(p) · GW(p)
Why, did you think it was about something else?
I pattern-matched the first half to eugenics.
Well, "impoverished foreign country" doesn't match well to Nazi Germany, but everything else checks out.
Replies from: knb
comment by Larks · 2014-06-30T23:14:25.416Z · LW(p) · GW(p)
Good story! The bit about Nacirema's rival being 'to the east' and an 'oriental despotism' and 'angry at historical interference in its domestic affairs by Seilla & Nacirema' gave the game away a little early, though.
Replies from: gwern↑ comment by gwern · 2014-06-30T23:24:36.194Z · LW(p) · GW(p)
Really? The wording was supposed to make it look like China. What made you think USSR? I thought relatively few people knew about Western involvement in the Russian Revolution compared to stuff like the Opium Wars etc.
Replies from: Larks, Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-01T16:35:13.888Z · LW(p) · GW(p)
I figured at first that you were blending multiple real-life narratives. But when I saw 'Seilla', that told me 'It's Nukes'. Or rather, 'from now on, it's probably going to be nukes'. After all, you had very early on
Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country.
That's simply not how it went 70 years ago.
Replies from: gwern↑ comment by gwern · 2014-07-01T17:10:32.810Z · LW(p) · GW(p)
But when I saw 'Seilla', that told me 'It's Nukes'.
Ah. Hm... I wasn't even referring to the WWII Allies there, I was referring to Western involvement (particularly the USA & Triple Entente) in the Russian Revolution supporting the Whites. Perhaps I could use "Nacirema and Threefold Understanding"?
That's simply not how it went 70 years ago.
Yes, it is. The decision was indeed hurried. They were early prototypes. The use was outside the lab. The country was indeed a foreign country. The country was impoverished by our standards, and impoverished by the standards of the day before and definitely impoverished after years of embargo, war, fire-bombing, wasted military expenditures, and destruction of the labor force. The Axis countries were poor even before the war. (If you are under the mistaken impression that Germany, for example, was 'rich' or 'heavily industrialized' or even comparable to England or America per capita, I suggest you read Tooze's The Wages of Destruction.)
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-07-01T17:34:47.563Z · LW(p) · GW(p)
One teeensy problem. The technology you are referring to here is not the same technology that had been alluded to in the previous paragraph.
Replies from: gwern
comment by [deleted] · 2014-07-01T05:13:41.048Z · LW(p) · GW(p)
Cute, except nuclear technology and AGI could not be more different.
Replies from: gwern↑ comment by gwern · 2014-07-01T15:23:03.238Z · LW(p) · GW(p)
Feel free to elaborate on that. As I point out in this parable, nuclear tech looks eerily like slow takeoff and arms race scenarios once you delete the names, and elites failed to deal with it in any meaningful way other than accelerating the arms race & hoping we'd all survive.
Replies from: None↑ comment by [deleted] · 2014-07-01T16:06:40.114Z · LW(p) · GW(p)
Well let's see:
1) AGI doesn't require obscure, hard-to-process materials that can be physically controlled.
2) AGI is software and therefore trivially copyable -- you can have the design for a nuclear bomb and the materials, but still need lots of specialists with experience in constructing nuclear bombs in order to build one. An AGI, on the other hand, could be built to run on commodity hardware.
3) AGI is merely instrumental to weaponization in the high-probability risk scenarios. It's a low-cost Manhattan project in a box. A pariah country would use an AGI to direct their nuclear bomb project for example, not actually "deploy an AGI in the field", whatever that means. So there's a meta-level difference here: whereas peaceful nuclear technology actually generates weapons grade material as a side-product, AGI itself doesn't carry that risk.
4) It's hard to analyze what the exact risk is you are predicating this story on. What was the slow-takeoff failure that cost hundreds of thousands of people their lives? It's hard to criticize specifically what you had in mind without knowing what you had in mind, other than that it involved a human-hostile, confrontational hard takeoff in a "properly" regulated project. As a general category of failures I assign very little probability mass there.
5) I would argue that the first AGI doesn't require a Manhattan-scale project to construct, although I recognize that is a controversial opinion.
Replies from: gwern↑ comment by gwern · 2014-07-01T16:29:11.719Z · LW(p) · GW(p)
1) AGI doesn't require obscure, hard-to-process materials that can be physically controlled.
Yes, it does: it requires obscene amounts of computing power, which require enormous, extremely efficient, multi-billion-dollar chip fabs to create, each of which currently costs more than the entire Manhattan Project did and draws upon exotic materials & specialties; see my discussion in http://www.gwern.net/Slowing%20Moore%27s%20Law
2) AGI is software and therefore trivially copyable -- you can have the design for a nuclear bomb and the materials, but still need lots of specialists with experience in constructing nuclear bombs in order to build one. An AGI, on the other hand, could be built to run on commodity hardware.
You also need a lot of specialists to run a supercomputer. Amusingly, supercomputers have always been closely linked to nuclear bomb development, from the Manhattan Project to the national laboratories like Livermore.
whereas peaceful nuclear technology actually generates weapons grade material as a side-product
Only some nuclear technologies are inherent proliferation risks. Specific kinds of reactors, sure. But lots of other things like cesium for medicine? No way.
A pariah country would use an AGI to direct their nuclear bomb project for example, not actually "deploy an AGI in the field", whatever that means.
Dead is dead. Does it matter if you're dead because an AGI hacked your local dam and drowned you or piloted a drone or developed a nuclear bomb?
AGI itself doesn't carry that risk.
Any AGI is going to carry the risk of being misapplied in all the ways that humans have done harm with their general intelligence throughout history. What are you going to do, install DRM on it?
4) It's hard to analyze what the exact risk is you are predicating this story on. What was the slow-takeoff failure that cost hundreds of thousands of people their lives?
The slow takeoff was the development of atomic bombs: from custom low-kilotonnage bombs, which weighed tons and could only be delivered by slow, vulnerable heavy bombers deployed near the target, to mass-produced lightweight megatonnage warheads which could fit on ICBMs and represented a global threat with no defense. I thought this was straightforward; was I assuming too much knowledge of the Cold War and nuclear politics when I wrote it?
5) I would argue that the first AGI doesn't require a Manhattan-scale project to construct
Maybe, maybe not. If it didn't, I think that would support my thesis, by implying that an AGI arms race could be much faster and more volatile than the nuclear arms race was.
comment by James_Miller · 2014-07-01T22:27:06.997Z
I would argue that the first AGI doesn't require a Manhattan-scale project to construct
I would argue that the high tech world is, mostly unwittingly, currently undertaking a much bigger than Manhattan-scale project to construct AGI. Think of all the resources going into making computers smarter, faster, and cheaper. I don't believe that the Internet is going to wake up and automatically become an AGI, but markets are strongly pushing tech companies towards creating the hardware likely necessary for AGI.
comment by [deleted] · 2014-07-01T17:23:15.034Z
I thought this was straightforward, was I assuming too much knowledge of the Cold War and nuclear politics when I wrote it?
It was very straightforward and transparent. But it was supposed to be an allegory, right? So what's the analog in the AGI interpretation?
Maybe, maybe not. If it didn't, I think that would support my thesis, by implying that an AGI arms race could be much faster and more volatile than the nuclear arms race was.
My point is that this isn't an arms race. The whole cold war concept doesn't make sense for AGI.
comment by gwern · 2014-07-01T17:54:18.685Z
So what's the analog in the AGI interpretation?
The analog would be an early buggy AGI which is not particularly powerful and is slow, and it & its developers improving it over a few years. (This is different from the hard takeoff scenario, which suggests the AGI improves rapidly at an exponential rate due to the recursiveness of the improvements.)
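The difference between the two scenarios can be sketched as a toy growth model (purely illustrative; the growth rates, units, and starting values below are invented assumptions, not anything claimed in the thread):

```python
# Toy model contrasting the two takeoff scenarios: in a slow takeoff,
# improvement is paced by human developers; in a hard takeoff, each
# improvement compounds the system's ability to improve itself.
# All numbers are arbitrary illustrative choices.

def slow_takeoff(years, human_rate=0.5):
    """Capability grows at a roughly constant rate set by its human
    developers (bug fixes, incremental redesigns): linear growth."""
    capability = 1.0
    for _ in range(years):
        capability += human_rate
    return capability

def hard_takeoff(years, feedback=0.5):
    """Improvement is proportional to current capability, because the
    system helps improve itself: exponential (compounding) growth."""
    capability = 1.0
    for _ in range(years):
        capability += feedback * capability  # recursive self-improvement
    return capability

if __name__ == "__main__":
    for y in (2, 5, 10):
        print(f"year {y:2d}: slow={slow_takeoff(y):5.1f}  "
              f"hard={hard_takeoff(y):7.1f}")
```

The qualitative point is that the two curves look similar for the first couple of iterations and then diverge sharply, which is why the early, buggy, slow prototype in the parable is compatible with either scenario.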
My point is that this isn't an arms race. The whole cold war concept doesn't make sense for AGI.
How would it not be an arms race?
comment by [deleted] · 2014-07-01T18:09:55.325Z
The analog would be an early buggy AGI which is not particularly powerful and is slow, and it & its developers improving it over a few years.
How does that lead to hundreds of thousands dying in some impoverished foreign country?
How would it not be an arms race?
Gwern, it's your argument. The onus is on you to show there is any parallel at all. You've asserted there is. Why?
comment by gwern · 2014-07-02T02:21:25.833Z
How does that lead to hundreds of thousands dying in some impoverished foreign country?
Huh? That was what happened with the first use of nuclear bombs, it's not necessarily what will happen with AGI. We should be so lucky!
I think you aren't understanding the point of the parable here. I thought it was clear in the middle, but to repeat myself... Even with nuclear bombs, which are as textbook a case of x-risk as you could ever hope to find, with as well-established physics endorsed by brainy specialists as possible, with hundreds of thousands of dead bodies due to an early weak version to underscore for even the most moronic possible politician 'yes, this is very real and these weapons are really fucking dangerous' as a 'sputnik moment', politicians still did not take meaningful preventive action.
Hence, since AGI will on every dimension be a less clean simple case (harder to understand, harder to predict the power of, less likely to present a clear signal of danger in time to be useful, more useful in civilian applications) than nuclear weapons were, a fortiori, politicians will not take meaningful preventive action about AGI. Political elites failed an easy x-risk test, and so it is unlikely they will pass a harder x-risk test. This is in direct contrast to what lukeprog seems to believe, and you'll note I allude to his previous posts about how well he thinks elites dealt with past issues.
No, I don't expect the early AGI prototypes to tip their hand and conveniently warn us like that. Life is not a Hollywood movie where the Evil AI character conveniently slaughters a town and then sits around patiently waiting for the heroes to defeat them. I expect AGI to either not be particularly powerful/dangerous & our concerns entirely groundless, or to not look like a major problem until it's too late.
The onus is on you to show there is any parallel at all. You've asserted there is. Why?
Why do you think there won't be any arms race? If AGI are militarily powerful and increase in power, that sets up the conditions for an arms race: countries will need to acquire and develop AGI merely to maintain parity, which in turn encourages further development by other countries to maintain their relative level of military power. What part of this do you disagree with? 'Arms race' is a common and well-understood pattern; it would be helpful if you explained your disagreement (which you still haven't done so far) rather than demand I explicate something fairly obvious.
comment by [deleted] · 2014-07-02T03:07:09.072Z
It's only obvious to you, apparently.
I don't believe AGI will be militarily useful, at least moreso than any other technology.
Nor do I believe that AGI will be developed on a long enough time scale for an "arms race".
Nor do I think politicians will be involved, at all.
comment by gwern · 2014-07-02T03:29:31.576Z
I don't believe AGI will be militarily useful, at least moreso than any other technology.
Other technologies have sparked arms races, so that seems like an odd position to take.
Nor do I believe that AGI will be developed on a long enough time scale for an "arms race".
If you're a 'fast takeoff' proponent, I suppose the parallels to nukes aren't of much value and you don't care whether politicians would handle a slow takeoff well or poorly. I don't find fast takeoffs all that plausible, so these are relevant matters to me and many other people interested in AI safety.
comment by [deleted] · 2014-07-06T00:50:07.989Z
Eh... timescales are relative here. Typically when someone around here says "fast takeoff" I assume they mean something along the lines of That Alien Message -- hard takeoff on the order of a literal blink of an eye, which is pure sci-fi bunk. But I find the other extreme parroted by Luke Muehlhauser and Stuart Armstrong and others -- 50 to 100 years -- equally bogus. From the weak inside view, my best predictions put the entire project on the order of 1-2 decades, and the critical "takeoff" period measured in months or a few years, depending on the underlying architecture. That's not what most people around here mean by a "fast takeoff", but it is still too fast for meaningful political reaction.