Will the world's elites navigate the creation of AI just fine?
post by lukeprog · 2013-05-31T18:49:10.861Z · LW · GW · Legacy · 266 comments
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
- Otherwise smart people say unreasonable things about AI safety.
- Many people who believed AI was around the corner didn't take safety very seriously.
- Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
- AI may arrive rather suddenly, leaving little time for preparation.
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
- If AI is preceded by visible signals, elites are likely to take safety measures. Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not for climate change. Availability of information is increasing over time.
- AI is likely to be preceded by visible signals. Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. "Human-level performance at X" benchmarks influence perceptions and should be more exhaustive and come more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible by the AI researchers of the time.
- Therefore, safety measures will likely be taken.
- If safety measures are taken, then elites will navigate the creation of AI just fine. Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs whose tendencies to act can be easily tailored may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." Arms races are not insurmountable.
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
- Elites often fail to take effective action despite plenty of warning.
- I think there's a >10% chance AI will not be preceded by visible signals.
- I think the elites' safety measures will likely be insufficient.
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I'd like to know:
- Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
- What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purposes of illuminating elites' likely response to AI risk)?
- What are some good studies on elites' decision-making abilities in general?
- Has the increasing availability of information in the past century noticeably improved elite decision-making?
266 comments
Comments sorted by top scores.
comment by lukeprog · 2013-10-31T22:27:12.869Z · LW(p) · GW(p)
Lately I've been listening to audiobooks (at 2x speed) in my down time, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but only doing the "collection" stage, not the "analysis" stage.
I'll post quotes from the audiobooks I listen to as replies to this comment.
↑ comment by lukeprog · 2013-11-21T17:32:21.487Z · LW(p) · GW(p)
From Watts' Everything is Obvious:
A management scientist named Steven Schnaars tried to quantify the accuracy of technology-trend predictions by combing through a large collection of books, magazines, and industry reports, and recording hundreds of predictions that had been made during the 1970s. He concluded that roughly 80 percent of all predictions were wrong, whether they were made by experts or not.
Nor is it just forecasters of long-term social and technology trends that have lousy records. Publishers, producers, and marketers—experienced and motivated professionals in business with plenty of skin in the game—have just as much difficulty predicting which books, movies, and products will become the next big hit as political experts have in predicting the next revolution. In fact, the history of cultural markets is crowded with examples of future blockbusters—Elvis, Star Wars, Seinfeld, Harry Potter, American Idol—that publishers and movie studios left for dead while simultaneously betting big on total failures. And whether we consider the most spectacular business meltdowns of recent times—Long-Term Capital Management in 1998, Enron in 2001, WorldCom in 2002, the near-collapse of the entire financial system in 2008 — or spectacular success stories like the rise of Google and Facebook, what is perhaps most striking about them is that virtually nobody seems to have had any idea what was about to happen. In September 2008, for example, even as Lehman Brothers’ collapse was imminent, Treasury and Federal Reserve officials — who arguably had the best information available to anyone in the world — failed to anticipate the devastating freeze in global credit markets that followed. Conversely, in the late 1990s the founders of Google, Sergey Brin and Larry Page, tried to sell their company for $1.6M. Fortunately for them, nobody was interested, because Google went on to attain a market value of over $160 billion, or about 100,000 times what they and everybody else apparently thought it was worth only a few years earlier.
↑ comment by lukeprog · 2013-11-21T19:05:28.054Z · LW(p) · GW(p)
More (#1) from Everything is Obvious:
Problems like this one have led some skeptics to claim that prediction markets are not necessarily superior to other less sophisticated methods, such as opinion polls, that are harder to manipulate in practice. However, little attention has been paid to evaluating the relative performance of different methods, so nobody really knows for sure. To try to settle the matter, my colleagues at Yahoo! Research and I conducted a systematic comparison of several different prediction methods, where the predictions in question were the outcomes of NFL football games. To begin with, for each of the fourteen to sixteen games taking place each weekend over the course of the 2008 season, we conducted a poll in which we asked respondents to state the probability that the home team would win as well as their confidence in their prediction. We also collected similar data from the website Probability Sports, an online contest where participants can win cash prizes by predicting the outcomes of sporting events. Next, we compared the performance of these two polls with the Vegas sports betting market—one of the oldest and most popular betting markets in the world—as well as with another prediction market, TradeSports. And finally, we compared the prediction of both the markets and the polls against two simple statistical models. The first model relied only on the historical probability that home teams win — which they do 58 percent of the time — while the second model also factored in the recent win-loss records of the two teams in question. In this way, we set up a six-way comparison between different prediction methods — two statistical models, two markets, and two polls.
Given how different these methods were, what we found was surprising: All of them performed about the same. To be fair, the two prediction markets performed a little better than the other methods, which is consistent with the theoretical argument above. But the very best performing method—the Las Vegas Market—was only about 3 percentage points more accurate than the worst-performing method, which was the model that always predicted the home team would win with 58 percent probability. All the other methods were somewhere in between. In fact, the model that also included recent win-loss records was so close to the Vegas market that if you used both methods to predict the actual point differences between the teams, the average error in their predictions would differ by less than a tenth of a point. Now, if you’re betting on the outcomes of hundreds or thousands of games, these tiny differences may still be the difference between making and losing money. At the same time, however, it’s surprising that the aggregated wisdom of thousands of market participants, who collectively devote countless hours to analyzing upcoming games for any shred of useful information, is only incrementally better than a simple statistical model that relies only on historical averages.
When we first told some prediction market researchers about this result, their reaction was that it must reflect some special feature of football. The NFL, they argued, has lots of rules like salary caps and draft picks that help to keep teams as equal as possible. And football, of course, is a game where the result can be decided by tiny random acts, like the wide receiver dragging in the quarterback’s desperate pass with his fingertips as he runs full tilt across the goal line to win the game in its closing seconds. Football games, in other words, have a lot of randomness built into them — arguably, in fact, that’s what makes them exciting. Perhaps it’s not so surprising after all, then, that all the information and analysis that is generated by the small army of football pundits who bombard fans with predictions every week is not superhelpful (although it might be surprising to the pundits). In order to be persuaded, our colleagues insisted, we would have to find the same result in some other domain for which the signal-to-noise ratio might be considerably higher than it is in the specific case of football.
OK, what about baseball? Baseball fans pride themselves on their near-fanatical attention to every measurable detail of the game, from batting averages to pitching rotations. Indeed, an entire field of research called sabermetrics has developed specifically for the purpose of analyzing baseball statistics, even spawning its own journal, the Baseball Research Journal. One might think, therefore, that prediction markets, with their far greater capacity to factor in different sorts of information, would outperform simplistic statistical models by a much wider margin for baseball than they do for football. But that turns out not to be true either. We compared the predictions of the Las Vegas sports betting markets over nearly twenty thousand Major League baseball games played from 1999 to 2006 with a simple statistical model based again on home-team advantage and the recent win-loss records of the two teams. This time, the difference between the two was even smaller — in fact, the performance of the market and the model were indistinguishable. In spite of all the statistics and analysis, in other words, and in spite of the absence of meaningful salary caps in baseball and the resulting concentration of superstar players on teams like the New York Yankees and Boston Red Sox, the outcomes of baseball games are even closer to random events than football games.
Since then, we have either found or learned about the same kind of result for other kinds of events that prediction markets have been used to predict, from the opening weekend box office revenues for feature films to the outcomes of presidential elections. Unlike sports, these events occur without any of the rules or conditions that are designed to make sports competitive. There is also a lot of relevant information that prediction markets could conceivably exploit to boost their performance well beyond that of a simple model or a poll of relatively uninformed individuals. Yet when we compared the Hollywood Stock Exchange (HSX) — one of the most popular prediction markets, which has a reputation for accurate prediction—with a simple statistical model, the HSX did only slightly better. And in a separate study of the outcomes of five US presidential elections from 1988 to 2004, political scientists Robert Erikson and Christopher Wlezien found that a simple statistical correction of ordinary opinion polls outperformed even the vaunted Iowa Electronic Markets.
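To make the comparison concrete, here is a minimal sketch of the kind of head-to-head evaluation Watts describes, assuming invented game outcomes and forecast probabilities (not the study's data): it scores a constant 58%-home-win baseline against any richer probabilistic forecast using the Brier score (mean squared error of the predicted probabilities, lower is better).

```python
# Minimal sketch with invented data: compare a constant "home team wins 58%
# of the time" baseline against some other set of probabilistic forecasts,
# using the Brier score.

def brier_score(probs, outcomes):
    """Average squared error between predicted home-win probabilities and outcomes (1 = home win, 0 = loss)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

# Hypothetical slate of games: 1 means the home team won, 0 means it lost.
outcomes = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]

# Method 1: the trivial model, which always predicts a 58 percent home-team win.
baseline = [0.58] * len(outcomes)

# Method 2: some richer forecast (a poll, a prediction market, a model that
# also uses win-loss records); these probabilities are invented for the sketch.
forecasts = [0.71, 0.45, 0.62, 0.80, 0.35, 0.66, 0.59, 0.40, 0.75, 0.52, 0.68, 0.64, 0.30, 0.73]

print("Baseline Brier score: %.4f" % brier_score(baseline, outcomes))
print("Forecast Brier score: %.4f" % brier_score(forecasts, outcomes))
```

Watts's finding, in effect, is that over thousands of real games the two printed scores end up nearly identical.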
↑ comment by lukeprog · 2013-11-21T19:13:09.346Z · LW(p) · GW(p)
More (#2) from Everything is Obvious:
Ironically, in fact, the organizations that embody what would seem to be the best practices in strategy planning—organizations, for example, that possess great clarity of vision and that act decisively—can also be the most vulnerable to planning errors. The problem is what strategy consultant and author Michael Raynor calls the strategy paradox. In his book of the same name, Raynor illustrates the paradox by revisiting the case of Sony’s Betamax videocassette, which famously lost out to the cheaper, lower-quality VHS technology developed by Matsushita. According to conventional wisdom, Sony’s blunder was twofold: First, they focused on image quality over running time, thereby conceding VHS the advantage of being able to tape full-length movies. And second, they designed Betamax to be a standalone format, whereas VHS was “open,” meaning that multiple manufacturers could compete to make the devices, thereby driving down the price. As the video-rental market exploded, VHS gained a small but inevitable lead in market share, and this small lead then grew rapidly through a process of cumulative advantage. The more people bought VHS recorders, the more stores stocked VHS tapes, and vice versa. The result over time was near-total saturation of the market by the VHS format and a humiliating defeat for Sony.
What the conventional wisdom overlooks, however, is that Sony’s vision of the VCR wasn’t as a device for watching rented movies at all. Rather, Sony expected people to use VCRs to tape TV shows, allowing them to watch their favorite shows at their leisure. Considering the exploding popularity of digital VCRs that are now used for precisely this purpose, Sony’s view of the future wasn’t implausible at all. And if it had come to pass, the superior picture quality of Betamax might well have made up for the extra cost, while the shorter taping time may have been irrelevant. Nor was it the case that Matsushita had any better inkling than Sony how fast the video-rental market would take off—indeed, an earlier experiment in movie rentals by the Palo Alto–based firm CTI had failed dramatically. Regardless, by the time it had become clear that home movie viewing, not taping TV shows, would be the killer app of the VCR, it was too late. Sony did their best to correct course, and in fact very quickly produced a longer-playing BII version, eliminating the initial advantage held by Matsushita. But it was all to no avail. Once VHS got a sufficient market lead, the resulting network effects were impossible to overcome. Sony’s failure, in other words, was not really the strategic blunder it is often made out to be, resulting instead from a shift in consumer demand that happened far more rapidly than anyone in the industry had anticipated.
Shortly after their debacle with Betamax, Sony made another big strategic bet on recording technology — this time with their MiniDisc players. Determined not to make the same mistake twice, Sony paid careful attention to where Betamax had gone wrong, and did their best to learn the appropriate lessons. In contrast with Betamax, Sony made sure that MiniDiscs had ample capacity to record whole albums. And mindful of the importance of content distribution to the outcome of the VCR wars, they acquired their own content repository in the form of Sony Music. At the time they were introduced in the early 1990s, MiniDiscs held clear technical advantages over the then-dominant CD format. In particular, the MiniDiscs could record as well as play, and because they were smaller and more resistant to jolts they were better suited to portable devices. Recordable CDs, by contrast, required entirely new machines, which at the time were extremely expensive.
By all reasonable measures the MiniDisc should have been an outrageous success. And yet it bombed. What happened? In a nutshell, the Internet happened. The cost of memory plummeted, allowing people to store entire libraries of music on their personal computers. High-speed Internet connections allowed for peer-to-peer file sharing. Flash drive memory allowed for easy downloading to portable devices. And new websites for finding and downloading music abounded. The explosive growth of the Internet was not driven by the music business in particular, nor was Sony the only company that failed to anticipate the profound effect that the Internet would have on production, distribution, and consumption of music. Nobody did. Sony, in other words, really was doing the best that anyone could have done to learn from the past and to anticipate the future—but they got rolled anyway, by forces beyond anyone’s ability to predict or control.
Surprisingly, the company that “got it right” in the music industry was Apple, with their combination of the iPod player and their iTunes store. In retrospect, Apple’s strategy looks visionary, and analysts and consumers alike fall over themselves to pay homage to Apple’s dedication to design and quality. Yet the iPod was exactly the kind of strategic play that the lessons of Betamax, not to mention Apple’s own experience in the PC market, should have taught them would fail. The iPod was large and expensive. It was based on closed architecture that Apple refused to license, ran on proprietary software, and was actively resisted by the major content providers. Nevertheless, it was a smashing success. So in what sense was Apple’s strategy better than Sony’s? Yes, Apple had made a great product, but so had Sony. Yes, they looked ahead and did their best to see which way the technological winds were blowing, but so did Sony. And yes, once they made their choices, they stuck to them and executed brilliantly; but that’s exactly what Sony did as well. The only important difference, in Raynor’s view, was that Sony’s choices happened to be wrong while Apple’s happened to be right.
This is the strategy paradox. The main cause of strategic failure, Raynor argues, is not bad strategy, but great strategy that just happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution—not the stuff of success for sure, but more likely to lead to persistent mediocrity than colossal failure. Great strategy, by contrast, is marked by clarity of vision, bold leadership, and laser-focused execution. When applied to just the right set of commitments, great strategy can lead to resounding success—as it did for Apple with the iPod—but it can also lead to resounding failure. Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.
↑ comment by lukeprog · 2013-11-21T19:23:04.068Z · LW(p) · GW(p)
More (#4) from Everything is Obvious:
Another nonmarket approach to harnessing local knowledge that is increasingly popular among governments and foundations alike is the prize competition. Rather than allocating resources ahead of time to preselected recipients, prize competitions reverse the funding mechanism, allowing anyone to work on the problem, but only rewarding solutions that satisfy prespecified objectives. Prize competitions have attracted a lot of attention in recent years for the incredible amount of creativity they have managed to leverage out of relatively small prize pools. The funding agency DARPA, for example, was able to harness the collective creativity of dozens of university research labs to build self-driving robot vehicles by offering just a few million dollars in prize money—far less than it would have cost to fund the same amount of work with conventional research grants. Likewise, the $10 million Ansari X Prize elicited more than $100 million worth of research and development in pursuit of building a reusable spacecraft. And the video rental company Netflix got some of the world’s most talented computer scientists to help it improve its movie recommendation algorithms for just a $1 million prize.
Inspired by these examples—along with “open innovation” companies like Innocentive, which conducts hundreds of prize competitions in engineering, computer science, math, chemistry, life sciences, physical sciences, and business—governments are wondering if the same approach can be used to solve otherwise intractable policy problems. In the past year, for example, the Obama administration has generated shock waves throughout the education establishment by announcing its “Race to the Top”—effectively a prize competition among US states for public education resources allocated on the basis of plans that the states must submit, which are scored on a variety of dimensions, including student performance measurement, teacher accountability, and labor contract reforms. Much of the controversy around the Race to the Top takes issue with its emphasis on teacher quality as the primary determinant of student performance and on standardized testing as a way to measure it. These legitimate critiques notwithstanding, however, the Race to the Top remains an interesting policy experiment for the simple reason that, like cap and trade, it specifies the “solution” only at the highest level, while leaving the specifics up to the states themselves.
↑ comment by lukeprog · 2013-11-21T19:21:34.051Z · LW(p) · GW(p)
More (#3) from Everything is Obvious:
Of all the prognosticators, forecasters, and fortune-tellers, few are at once more confident and yet less accountable than those in the business of predicting fashion trends. Every year, the various industries in the business of designing, producing, selling, and commenting on shoes, clothing, and apparel are awash in predictions for what could be, might be, should be, and surely will be the next big thing. That these predictions are almost never checked for accuracy, that so many trends arrive unforeseen, and that the explanations given for them are only possible in hindsight, seems to have little effect on the breezy air of self-assurance that the arbiters of fashion so often exude. So it’s encouraging that at least one successful fashion company pays no attention to any of it.
That company is Zara, the Spanish clothing retailer that has made business press headlines for over a decade with its novel approach to satisfying consumer demand. Rather than trying to anticipate what shoppers will buy next season, Zara effectively acknowledges that it has no idea. Instead, it adopts what we might call a measure-and-react strategy. First, it sends out agents to scour shopping malls, town centers, and other gathering places to observe what people are already wearing, thereby generating lots of ideas about what might work. Second, drawing on these and other sources of inspiration, it produces an extraordinarily large portfolio of styles, fabrics, and colors—where each combination is initially made in only a small batch—and sends them out to stores, where it can then measure directly what is selling and what isn’t. And finally, it has a very flexible manufacturing and distribution operation that can react quickly to the information that is coming directly from stores, dropping those styles that aren’t selling (with relatively little left-over inventory) and scaling up those that are. All this depends on Zara’s ability to design, produce, ship, and sell a new garment anywhere in the world in just over two weeks—a stunning accomplishment to anyone who has waited in limbo for just about any designer good that isn’t on the shelf.
↑ comment by lukeprog · 2014-03-02T18:58:56.177Z · LW(p) · GW(p)
From Rhodes' Arsenals of Folly:
In the 1950s, when the RBMK design was developed and approved, Soviet industry had not yet mastered the technology necessary to manufacture steel pressure vessels capacious enough to surround such large reactor cores. For that reason, among others, scientists, engineers, and managers in the Soviet nuclear-power industry had pretended for years that a loss-of-coolant accident was unlikely to the point of impossibility in an RBMK. They knew better. The industry had been plagued with disasters and near-disasters since its earliest days. All of them had been covered up, treated as state secrets; information about them was denied not only to the Soviet public but even to the industry’s managers and operators. Engineering is based on experience, including operating experience; treating design flaws and accidents as state secrets meant that every other similar nuclear-power station remained vulnerable and unprepared.
Unknown to the Soviet public and the world, at least thirteen serious power-reactor accidents had occurred in the Soviet Union before the one at Chernobyl. Between 1964 and 1979, for example, repeated fuel-assembly fires plagued Reactor Number One at the Beloyarsk nuclear-power plant east of the Urals near Novosibirsk. In 1975, the core of an RBMK reactor at the Leningrad plant partly melted down; cooling the core by flooding it with liquid nitrogen led to a discharge of radiation into the environment equivalent to about one-twentieth the amount that was released at Chernobyl in 1986. In 1982, a rupture of the central fuel assembly of Chernobyl Reactor Number One released radioactivity over the nearby bedroom community of Pripyat, now in 1986 once again exposed and at risk. In 1985, a steam relief valve burst during a shaky startup of Reactor Number One at the Balakovo nuclear-power plant, on the Volga River about 150 miles southwest of Samara, jetting 500-degree steam that scalded to death fourteen members of the start-up staff; despite the accident, the responsible official, Balakovo’s plant director, Viktor Bryukhanov, was promoted to supervise construction at Chernobyl and direct its operation.
↑ comment by lukeprog · 2014-03-02T19:18:57.028Z · LW(p) · GW(p)
More (#3) from Arsenals of Folly:
On 25 July 1980, Carter added further to Soviet fears by promulgating a new presidential directive, PD-59, that included an argument for fighting extended nuclear wars rather than attacking at the outset with everything in the arsenal, the early LeMay strategy that was still enshrined in the SIOP. “If deterrence fails initially,” PD-59 argued, “we must be capable of fighting successfully so that the adversary would not achieve his war aims and would suffer costs that are unacceptable, or in any event greater than his gains, from having initiated an attack.” The Republican National Convention that had just nominated Ronald Reagan as its candidate for the presidency had also endorsed preparing to fight prolonged nuclear wars. The Republican platform and PD-59 together presented the Soviet Union with a solid front in favor of a new and more threatening U.S. nuclear posture.
And:
Reagan, not yet aware of the developing Soviet war scare, ratcheted his rhetoric higher in a March 1983 speech to the annual convention of the National Association of Evangelicals in Orlando, Florida. There he named the Soviet Union “the focus of evil in the modern world” and, famously, “an evil empire.” The speech... won the support as well of Vladimir Slipchenko, a member of the Soviet general staff: "The military, the armed forces… used this [speech] as a reason to begin a very intense preparation inside the military for a state of war…. We started to run huge strategic exercises…. These were the first military exercises in which we really tested our mobilization. We didn’t just exercise the ground forces but also the strategic [nuclear] arms…. For the military, the period when we were called the evil empire was actually very good and useful, because we achieved a very high military readiness…. We also rehearsed the situation when a nonnuclear war might turn into a nuclear war."
And:
All this evidence points to the same conclusion: that the United States and the Soviet Union, apes on a treadmill, inadvertently blundered close to nuclear war in November 1983. That, and not the decline and fall of the Soviet Union, was the return on the neoconservatives’ long, cynical, and radically partisan investment in threat inflation and arms-race escalation.
During the Cuban confrontation, when American nuclear weapons were ready to launch or already aloft and moving toward their Soviet targets on hundreds of SAC bombers, both sides were at least aware of the danger and working intensely to resolve the dispute. During ABLE ARCHER 83, in contrast, an American renewal of high Cold War rhetoric, aggressive and perilous threat displays, and naïve incredulity were combined with Soviet arms-race and surprise-attack insecurities and heavy-handed war-scare propaganda in a nearly lethal mix.
And:
Cannon found that “most of his aides thought of [Ronald Reagan] as intelligent, but many also considered him intellectually lazy.” In fact, they laughed at him behind his back. He was Joe Six-pack, they told each other, his opinions and judgments exactly those guileless truisms you would expect to find among patrons of a neighborhood bar. “The sad, shared secret of the Reagan White House,” Cannon writes, “was that no one in the presidential entourage had confidence in the judgment or capacities of the president. Often, they took advantage of Reagan’s niceness and naïveté to indulge competing concepts of the presidency and advance their own ambitions. Pragmatists and conservatives alike treated Reagan as if he were a child monarch in need of constant protection. They paid homage to him, but gave him no respect.” A book in his hand was more likely to be a Tom Clancy novel than a Henry Kissinger memoir— though the same could be said for many Americans. “Not one of the friends and aides” Leslie Gelb interviewed “suggested that the President was, in any conventional sense, analytical, intellectually curious or well-informed— even though it would have been easy and natural for them to say so. They clearly did not think it necessary. Time and again, they painted a picture of a man who had serious intellectual shortcomings but was a political heavyweight, a leader whose instincts and intuition were right more often than their own analyses. His mind, they said, is shaped almost entirely by his own personal history, not by pondering on history books.” For George Shultz, in Cannon’s paraphrase, “Reagan’s seemingly irrelevant anecdotes were tools that the president used to comprehend the world. ‘He often reduced his thinking to a joke,’ Shultz said. ‘That doesn’t mean it didn’t have a heavy element to it.’” Cannon counters that Reagan “sometimes used humor to avoid facing issues he ought to have faced, particularly the reality that it was impossible to increase military spending, reduce taxes and balance the budget simultaneously.”
...Less politely, the political scientist Richard M. Pious, reviewing Cannon’s biography and other studies of the president, reduced their findings to three parallel axioms: “Reagan could only understand things if they were presented as a story; he could only explain something if he narrated it; he could only think about principles if they involved metaphor and analogy.”
And:
Reagan’s fundamentalist mentation encouraged him to find the supernatural as credible as the natural. He had been convinced since at least his days as governor of California that the end of the world was approaching. He believed that the Bible predicted the future. “Everything is in place for the battle of Armageddon and the second coming of Christ,” he told a surprised California state senator one day in 1971, citing as a sign his understanding that Libya had gone Communist. The founding of Israel in 1948, the Jews thus reclaiming their homeland, was another sign Reagan credited as meaning that a great final battle between good and evil would soon be fought on the plain of Armageddon. The atomic bombings of Hiroshima and Nagasaki, he believed, fulfilled the prediction in Revelation of an army out of Asia of “twice ten thousand times ten thousand” routed by plagues of “fire and smoke and sulfur.” He added Chernobyl to his list when he learned that the name of the old town was the Byelorussian word for wormwood, fulfilling the prophecy of “a great star [that] fell from heaven, blazing like a torch, and it fell on a third of the rivers and on the fountains of water. The name of the star is Wormwood.”
↑ comment by Shmi (shminux) · 2014-03-02T20:02:44.220Z · LW(p) · GW(p)
Amazing stuff. Was the world really as close to a nuclear war in 1983 as in 1962?
↑ comment by lukeprog · 2014-03-02T19:12:23.628Z · LW(p) · GW(p)
More (#2) from Arsenals of Folly:
From the beginning, and throughout all the years of the Cold War, the United States led the Soviet Union in total numbers of strategic nuclear bombs and warheads. The bitter U.S. political debates of the 1970s and early 1980s about nuclear strategy, nuclear force levels, supposed Soviet first-strike capabilities, and strategic defense hinged on arguments as divorced from reality as the debates of medieval scholars about the characteristics of seraphim and cherubim.
And:
The boldest prediction of impending Soviet collapse during this period, however, was the work of a young and previously unknown French historical demographer named Emmanuel Todd, reported in a book titled The Final Fall, published in France in 1976 and in translation in the United States in 1979. (Demography is the branch of anthropology that concerns statistics of health and disease, birth and death; historical demography uses demographic tools to study the past— or, in Todd’s case, to investigate a closed society that deliberately obfuscated its demographics.) Todd had written his remarkable book while still a graduate student. It was reviewed in English primarily in journals of Russian studies, exactly where it needed to be noticed to alert the community of experts on which the U.S. government relied for information about Soviet trends. Unfortunately, almost without exception, professional Sovietologists— Richard Pipes was a typical specimen— were the last to recognize the decline and fall of the political system on whose leviathan enigmas they had built their careers. The reviewers praised Todd’s innovative approach, but his prediction of impending Soviet collapse was dismissed as a “penchant for dramatic prophesying.”
“Internal pressures are pushing the Soviet system to the breaking point,” Todd dramatically— but also accurately— prophesied on the opening page of his book. “In ten, twenty, or thirty years, an astonished world will be witness to the dissolution or the collapse of this, the first of the Communist systems.” To explain how he came to such a radical conclusion in an era when the Committee on the Present Danger was claiming that the Soviet Union was growing in strength and malevolence, he demonstrated that Soviet statistics, otherwise “shabby and false,” could still be mined for valuable information on the state of society. Even censored statistics, such as rates of birth and death missing from the charts for the Terror famine years 1931 to 1935, “indicate the abuses of Stalinism, especially when they succeed a period marked by a relatively large volume of data.” Age pyramids, he pointed out— graphs in which stacked horizontal bars represent the percentage of the population in each age group—“ have fixed for everyone to see the errors of Stalinism, Maoism, or any other totalitarian alternative which declares war upon a human community…. Rather belatedly, it is apparent that 30 to 60 million inhabitants in the USSR are missing. In 1975, it was clear that about 150 million were missing in China. Given population, the proportions are nearly the same.”
And, a blockquote from the writings of Robert Gates:
As he recounted to me, [Carter’s national security adviser Zbigniew] Brzezinski was awakened at three in the morning by [his military assistant, William] Odom, who told him that some 220 Soviet missiles had been launched against the United States. Brzezinski knew that the President’s decision time to order retaliation was from three to seven minutes after a Soviet launch. Thus he told Odom he would stand by for a further call to confirm a Soviet launch and the intended targets before calling the President. Brzezinski was convinced we had to hit back and told Odom to confirm that the Strategic Air Command was launching its planes. When Odom called back, he reported that he had further confirmation, but that 2,200 missiles had been launched— it was an all-out attack. One minute before Brzezinski intended to telephone the President, Odom called a third time to say that other warning systems were not reporting Soviet launches. Sitting alone in the middle of the night, Brzezinski had not awakened his wife, reckoning that everyone would be dead in half an hour. It had been a false alarm. Someone had mistakenly put military exercise tapes into the computer system. When it was over, Zbig just went back to bed. I doubt he slept much, though.
↑ comment by lukeprog · 2014-03-02T19:05:32.787Z · LW(p) · GW(p)
More (#1) from Arsenals of Folly:
Through Saturday and Sunday, despite the frantic efforts at Chernobyl, the evacuation of Pripyat’s entire population, the extensive casualties, and the plume of radiation advancing into Finland and Sweden, no public announcement issued from the Kremlin. In his memoirs, Gorbachev implicitly blames the government commission for the delay, writing that its reports “consisted mainly of preliminary fact-finding, with all kinds of cautious remarks but without any conclusions at all.” Whether or not Gorbachev was misled, a better measure of the Soviet government’s initial response is that sometime on Sunday, the editors of Izvestia, the government-controlled newspaper, were ordered to suppress a story about the accident. Kiev went unwarned. So did Minsk. So did Europe. “In those first days,” a village teacher in Byelorussia wrote later, “there were mixed feelings. I remember two: fear and insult. Everything had happened and there was no information: the government was silent, the doctors were silent. The regions waited for directions from the oblast [i.e., province], the oblast from Minsk, and Minsk from Moscow. It was a long, long chain, and at the end of it a few people made the decisions. We turned out to be defenseless. That was the main feeling in those days. Just a few people were deciding our fate, the fate of millions.”
Curiously, a U.S. spy satellite had passed over the Chernobyl complex on Saturday morning only twenty-eight seconds after the explosions and had imaged it. American intelligence thought at first that a missile had been fired, reports health physicist and Chernobyl expert Richard Mould. When the image remained stationary, “opinion changed to a missile had blown up in its silo.” Consulting a map corrected the mistake. By Sunday the British government had been informed, but neither the United States nor Britain warned the public.
And:
The RBMK reactor was a dual-use design. It was developed in the 1950s as a production reactor to produce plutonium for nuclear weapons, then adapted for civilian power operation in the 1970s. Like its graphite core, its pyatachok was punctured with multiple channels from which irradiated fuel rods could be removed via an overhead crane while the reactor was operating. If the military needed plutonium, on-line refueling would allow fuel rods to be removed early to maximize their bloom of military-grade plutonium. A safety containment structure around such a reactor, which would probably have prevented an accident like the one at Chernobyl, would have also greatly reduced its military value. Military needs thus competed with civilian needs in the choice of the RBMK design when the Soviet Union decided to greatly expand electricity production with nuclear power in the early 1970s; a competing light-water reactor design, the Soviet VVER, was safer but less suitable for the production of military-grade plutonium.
And:
Blix emphasized twice that he had not gone to Moscow to “scream” at the Soviet leaders but to help them. Certainly the rest of the world was screaming by then, with minimal but measurable quantities of Chernobyl radionuclides falling out around the world and particularly on Western Europe; the Chernobyl fallout was roughly equivalent to the fallout from a twelve-megaton nuclear explosion (the explosions themselves had been equivalent to about thirty to forty tons of TNT). Blix’s statements did help, and in exchange for them he extracted historic agreements from Gorbachev to make available timely information about the accident and its aftermath. “The Soviet authorities agreed,” he said later, “to provide daily information on radiation levels from seven measurement stations, one close to Chernobyl and the other six along the Western border of the USSR.” They agreed as well to participate in a post-accident review meeting and to increase cooperation in the field of nuclear safety. “It is sad, but a common experience,” Blix concludes, “that only big accidents or other setbacks will provide the necessary impetus to move governments and authorities to act.”
And:
Why should it matter whether people were killed by fire or blast? The answer began to emerge only in the 1980s, when a few independent scientists looked into the neglected subject of mass fires from nuclear weapons. As one of them, Theodore Postol, found, even a very limited attack on enemy industry “might actually result in about two to three times more fatalities than that predicted by the government for the [all-out] anti-population attack” if mass fires were included in casualty predictions. Two to three times the 285 million Soviet and Chinese dead that SIOP-62 predicted based on blast damage alone would raise that number close to 1 billion.
↑ comment by lukeprog · 2014-03-02T19:23:04.981Z · LW(p) · GW(p)
More (#4) from Arsenals of Folly:
Gorbachev had read the Palme Commission report, Common Security: A Blueprint for Survival, had reviewed its ideas with Arbatov as well as with Brandt, Bahr, and Palme himself, and had seized on common security as a more realistic national-security policy than those of his predecessors for dealing with the hard realities of the nuclear age.
Before Gorbachev, even during the years of détente, the Soviet military had operated on the assumption (however unrealistic) that it should plan to win a nuclear war should one be fought— a strategy built on the Soviet experience of fighting Germany during the Second World War. Partly because a massive surprise attack had initiated that nearly fatal conflict, the Soviet military had been and still was deeply skeptical of relying on deterrence to prevent an enemy attack. For different reasons, so were the proponents of common security. Brandt, who followed the Palme Commission’s deliberations closely, wrote that he “shared the conclusions [the commission] came to: collective security as an essential political task in the nuclear age, and partnership in security as a military concept to take over gradually from the strategy of nuclear deterrence; [because] deterrence threatens to destroy what it is supposed to be defending, and thereby increasingly loses credibility.”
↑ comment by lukeprog · 2014-04-13T02:15:59.824Z · LW(p) · GW(p)
From Lewis' Flash Boys:
Like every other trader on the Chicago exchanges, [Spivey] saw how much money could be made trading futures contracts in Chicago against the present prices of the individual stocks trading in New York and New Jersey. Every day there were thousands of moments when the prices were out of whack — when, for instance you could sell the futures contract for more than the price of the stocks that comprised it. To capture the profits, you had to be fast to both markets at once... The exchanges, by 2007, were simply stacks of computers in data centers. The speed with which trades occurred on them was no longer constrained by people. The only constraint was how fast an electronic signal could travel between Chicago and New York...
What Spivey had realized, by 2008, was that there was a big difference between the trading speed that was available between these exchanges and the trading speed that was theoretically possible... Incredibly to Spivey, the telecom carriers were not set up to understand the new demand for speed. Not only did Verizon fail to see that it could sell its special route to traders for a fortune; Verizon didn’t even seem aware it owned anything of special value. “You would have to order up several lines and hope that you got it,” says Spivey. “They didn’t know what they had.” As late as 2008, major telecom carriers were unaware that the financial markets had changed, radically, the value of a millisecond.
...The construction guy [driving the route] with him clearly suspected he might be out of his mind. Yet when Spivey pressed him, even he couldn’t come up with a reason why the plan wasn’t at least theoretically possible. That’s what Spivey had been after: a reason not to do it. “I was just trying to find the reason no [telecom] carrier had done it,” he says. “I was thinking: Surely I’ll see some roadblock.” Aside from the construction engineer’s opinion that no one in his right mind wanted to cut through the hard Allegheny rock, he couldn’t find one.
So Spivey began digging the line, keeping it secret for two years. He didn't start trying to sell the line to banks and traders until a couple of months before the line was complete. And then:
The biggest question about the line — Why? — remained imperfectly explored. All its creators knew was that the Wall Street people who wanted it wanted it very badly — and also wanted to find ways for others not to have it. In one of his first meetings with a big Wall Street firm, Spivey had told the firm’s boss the price of his line: $10.6 million plus costs if he paid up front, $20 million or so if he paid in installments. The boss said he’d like to go away and think about it. He returned with a single question: “Can you double the price?”
↑ comment by lukeprog · 2014-04-13T02:46:38.917Z · LW(p) · GW(p)
More (#1) from Flash Boys:
...why did the market in any given stock dry up only when he was trying to trade in it? To make his point, he asked the developers to stand behind him and watch while he traded. “I’d say, ‘Watch closely. I am about to buy one hundred thousand shares of AMD. I am willing to pay forty-eight dollars a share. There are currently one hundred thousand shares of AMD being offered at forty-eight dollars a share—ten thousand on BATS, thirty-five thousand on the New York Stock Exchange, thirty thousand on Nasdaq, and twenty-five thousand on Direct Edge.’ You could see it all on the screens. We’d all sit there and stare at the screen and I’d have my finger over the Enter button. I’d count out loud to five... Then I’d hit the Enter button and — boom! — all hell would break loose. The offerings would all disappear, and the stock would pop higher.” At which point he turned to the guys standing behind him and said, “You see, I’m the event. I am the news.”
And:
The deep problem with the system [high-frequency trading] was a kind of moral inertia. So long as it served the narrow self-interests of everyone inside it, no one on the inside would ever seek to change it, no matter how corrupt or sinister it became — though even to use words like “corrupt” and “sinister” made serious people uncomfortable, and so Brad avoided them. Maybe his biggest concern, when he spoke to investors, was that he’d be seen as just another nut with a conspiracy theory. One of the compliments that made him happiest was when a big investor said, “Thank God, finally there’s someone who knows something about high-frequency trading who isn’t an Area 51 guy.” Because he wasn’t a radical, it took him a while to figure out that fate and circumstance had created for him a dramatic role, which he was obliged to play. One night he actually turned to Ashley, now his wife, and said, “It feels like I’m an expert in something that badly needs to be changed. I think there’s only a few people in the world who can do anything about this. If I don’t do something right now — me, Brad Katsuyama — there’s no one to call.”
And:
Like a lot of regulations, Reg NMS was well-meaning and sensible. If everyone on Wall Street abided by the rule’s spirit, the rule would have established a new fairness in the U.S. stock market. The rule, however, contained a loophole: It failed to specify the speed of the SIP. To gather and organize the stock prices from all the exchanges took milliseconds. It took milliseconds more to disseminate those calculations. The technology used to perform these calculations was old and slow, and the exchanges apparently had little interest in improving it. There was no rule against high-frequency traders setting up computers inside the exchanges and building their own, much faster, better cared for version of the SIP. That’s exactly what they’d done, so well that there were times when the gap between the high-frequency traders’ view of the market and that of ordinary investors could be twenty-five milliseconds, or twice the time it now took to travel from New York to Chicago and back again.
Reg NMS was intended to create equality of opportunity in the U.S. stock market. Instead it institutionalized a more pernicious inequality. A small class of insiders with the resources to create speed were now allowed to preview the market and trade on what they had seen.
...By complying with Reg NMS, [Schwall] now understood, the smart order routers simply marched investors into various traps laid for them by high-frequency traders. “At that point I just got very, very pissed off,” he said. “That they are ripping off the retirement savings of the entire country through systematic fraud and people don’t even realize it. That just drives me up the fucking wall.”
His anger expressed itself in a search for greater detail. When he saw that Reg NMS had been created to correct for the market manipulations of the old NYSE specialists, he wanted to know: How had that corruption come about? He began another search. He discovered that the New York Stock Exchange specialists had been exploiting a loophole in some earlier regulation—which of course just led Schwall to ask: What event had led the SEC to create that regulation? Many hours later he’d clawed his way back to the 1987 stock market crash, which, as it turned out, gave rise to the first, albeit crude, form of high-frequency trading. During the 1987 crash, Wall Street brokers, to avoid having to buy stock, had stopped answering their phones, and small investors were unable to enter their orders into the market. In response, the government regulators had mandated the creation of an electronic Small Order Execution System so that the little guy’s order could be sent into the market with the press of a key on a computer keyboard, without a stockbroker first taking it from him on the phone. Because a computer was able to transmit trades much faster than humans, the system was soon gamed by smart traders, for purposes having nothing to do with the little guy. At which point Schwall naturally asked: From whence came the regulation that had made brokers feel comfortable not answering their phones in the midst of the 1987 stock market crash?
...Several days later he’d worked his way back to the late 1800s. The entire history of Wall Street was the story of scandals, it now seemed to him, linked together tail to trunk like circus elephants. Every systemic market injustice arose from some loophole in a regulation created to correct some prior injustice. “No matter what the regulators did, some other intermediary found a way to react, so there would be another form of front-running,” he said. When he was done in the Staten Island library he returned to work, as if there was nothing unusual at all about the product manager having turned himself into a private eye. He’d learned several important things, he told his colleagues. First, there was nothing new about the behavior they were at war with: The U.S. financial markets had always been either corrupt or about to be corrupted. Second, there was zero chance that the problem would be solved by financial regulators; or, rather, the regulators might solve the narrow problem of front-running in the stock market by high-frequency traders, but whatever they did to solve the problem would create yet another opportunity for financial intermediaries to make money at the expense of investors.
Schwall’s final point was more aspiration than insight. For the first time in Wall Street history, the technology existed that eliminated entirely the need for financial intermediaries. Buyers and sellers in the U.S. stock market were now able to connect with each other without any need of a third party. “The way that the technology had evolved gave me the conviction that we had a unique opportunity to solve the problem,” he said. “There was no longer any need for any human intervention.” If they were going to somehow eliminate the Wall Street middlemen who had flourished for centuries, they needed to enlarge the frame of the picture they were creating. “I was so concerned that we were talking about what we were doing as a solution to high-frequency trading,” he said. “It was bigger than that. The goal had to be to eliminate any unnecessary intermediation.”
↑ comment by lukeprog · 2013-11-21T11:39:25.037Z · LW(p) · GW(p)
There was so much worth quoting from Better Angels of Our Nature that I couldn't keep up. I'll share a few quotes anyway.
Sometimes the advantage of conformity to each individual can lead to pathologies in the group as a whole. A famous example is the way an early technological standard can gain a toehold among a critical mass of users, who use it because so many other people are using it, and thereby lock out superior competitors. According to some theories, these “network externalities” explain the success of English spelling, the QWERTY keyboard, VHS videocassettes, and Microsoft software (though there are doubters in each case). Another example is the unpredictable fortunes of bestsellers, fashions, top-forty singles, and Hollywood blockbusters. The mathematician Duncan Watts set up two versions of a Web site in which users could download garage-band rock music. In one version users could not see how many times a song had already been downloaded. The differences in popularity among songs were slight, and they tended to be stable from one run of the study to another. But in the other version people could see how popular a song had been. These users tended to download the popular songs, making them more popular still, in a runaway positive feedback loop. The amplification of small initial differences led to large chasms between a few smash hits and many duds—and the hits and duds often changed places when the study was rerun.
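A toy simulation of the cumulative-advantage mechanism described above can make the effect vivid. This is not Watts's actual Music Lab design; the song count, user count, and choice rule are arbitrary assumptions. Each simulated user picks a song with probability proportional to its prior downloads plus one, so small early leads snowball, and rerunning the same "world" produces a different winner.

```python
import random

# Toy cumulative-advantage simulation (not the actual Music Lab design):
# each user picks a song with probability proportional to (prior downloads + 1),
# so small early leads tend to snowball into large popularity gaps.

def run_market(n_songs=10, n_users=5000):
    downloads = [0] * n_songs
    for _ in range(n_users):
        weights = [d + 1 for d in downloads]  # social influence: popular songs look better
        song = random.choices(range(n_songs), weights=weights)[0]
        downloads[song] += 1
    return downloads

random.seed(1)
for run in range(2):
    result = run_market()
    print("run %d: downloads per song = %s, winner = song %d"
          % (run, result, result.index(max(result))))
```

In both runs a few songs dominate and most languish, but which song ends up on top changes from run to run, which is the pattern Watts reports.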
↑ comment by lukeprog · 2013-11-21T12:16:04.726Z · LW(p) · GW(p)
More (#3) from Better Angels of Our Nature:
Let's have a look at political discourse, which most people believe has been getting dumb and dumber. There's no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the two are correlated, especially, according to Simonton, among American presidents.
Integrative complexity is related to violence. People whose language is less integratively complex, on average, are more likely to react to frustration with violence and are more likely to go to war in war games. Working with the psychologist Peter Suedfeld, Tetlock tracked the integrative complexity of the speeches of national leaders in a number of political crises of the 20th century that ended peacefully (such as the Berlin blockade in 1948 and the Cuban Missile Crisis) or in war (such as World War I and the Korean War), and found that when the complexity of the leaders’ speeches declined, war followed. In particular, they found a linkage between rhetorical simple-mindedness and military confrontations in speeches by Arabs and Israelis, and by the Americans and Soviets during the Cold War. We don’t know exactly what the correlations mean: whether mule-headed antagonists cannot think their way to an agreement, or bellicose antagonists simplify their rhetoric to stake out an implacable bargaining position. Reviewing both laboratory and real-world studies, Tetlock suggests that both dynamics are in play.
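As a rough illustration of the word-counting heuristic described above, here is a toy scorer. The marker lists come straight from the passage; the scoring itself is a sketch, not Tetlock's actual coding procedure, which relies on trained human raters.

```python
import re

# Illustrative marker lists taken from the passage above.
LOW_IC = {"absolutely", "always", "certainly", "definitively", "entirely",
          "forever", "indisputable", "irrefutable", "undoubtedly", "unquestionably"}
HIGH_IC = {"usually", "almost", "but", "however", "maybe"}

def crude_ic_score(passage: str) -> float:
    """Crude proxy: positive leans nuanced, negative leans absolutist."""
    words = re.findall(r"[a-z]+", passage.lower())
    low = sum(w in LOW_IC for w in words)
    high = sum(w in HIGH_IC for w in words)
    return (high - low) / max(len(words), 1)

print(crude_ic_score("We will certainly and absolutely prevail, forever."))
print(crude_ic_score("Usually this works, but maybe there are tradeoffs."))
```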
↑ comment by [deleted] · 2015-10-17T17:03:49.150Z · LW(p) · GW(p)
Further reading on integrative complexity:
Wikipedia, Psychlopedia, Google book
Now that I've been introduced to the concept, I want to evaluate how useful it is to incorporate into my rhetorical repertoire and vocabulary, and to determine whether it can inform my beliefs about assessing the exfoliating intelligence of others (a term I'll coin for the intelligence/knowledge another person can pass on to me to aid my vocabulary and verbal abstract reasoning - the neuropsychological strengths I try to max out just like an RPG character).
At a less meta level, knowing the strengths and weaknesses of the trait will inform whether I choose to signal it or dampen it from here on, and in what situations. It is important for imitators to remember that whatever IC is associated with, lay observers will not necessarily draw those associations.
Strengths
- conflict resolution (see Luke's post)
As listed in Psychlopedia:
- appreciation of complexity
- scientific proficiency
- stress accommodation
- resistance to persuasion
- prediction ability
- social responsibility
- more initiative, as rated by managers, and more motivation to seek power, as gauged by a projective test
Weaknesses
Based on Psychlopedia:
- low scores on compliance and conscientiousness
- can seem antagonistic and even narcissistic
Based on the Wikipedia article:
- dependence (more likely to defer to others)
- rational expectations (more likely to fallaciously assume they are dealing with rational agents)
Upon reflection, here are my conclusions:
- high integrative complexity dominates low integrative complexity for those who have insight into the concept, are self-aware about how it relates to themselves and others, and have the capacity to use the skill and to hide it.
- the questions psychometricians use to elicit the expert-rated answers that define IC are very crude, and there ought to be a validated tool devised, if that is an achievable feat (estimating its cognitive complexity or the time required is beyond the scope of my time/intelligence at the moment)
- I have been using this trait as my primary estimate of other people's intelligence, but will now demote it to the ordinary psychometric status it had before I became aware of it here, and will restore traditional measures of intelligence to their established place
- I'm interested in learning about the algorithms used to search, say, Twitter and assess IC. Anyone got any info?
- very interested in any research on the association between IC and corporate board performance, share prices, etc. There doesn't seem to be much, but research does generally start with defence implications before going corporate...
- Interested in exploring relations between the assessment of IC and the tools used in CBT, given their structural similarity... and by extension, general relationships between IC and mental health
↑ comment by lukeprog · 2013-11-21T12:19:38.809Z · LW(p) · GW(p)
More (#4) from Better Angels of Our Nature:
Unless two adversaries are locked in a fight to the death, aggression is not zero-sum but negative-sum; they are collectively better off not doing it, despite the advantage to the victor. The advantage to a conqueror in gaining a bit more land is swamped by the disadvantage to the family he kills in stealing it, and the few moments of drive reduction experienced by a rapist are obscenely out of proportion to the suffering he causes his victim. The asymmetry is ultimately a consequence of the law of entropy: an infinitesimal fraction of the states of the universe are orderly enough to support life and happiness, so it’s easier to destroy and cause misery than to cultivate and cause happiness. All of this means that even the most steely-eyed utilitarian calculus, in which a disinterested observer tots up the total happiness and unhappiness, will deem violence undesirable, because it creates more unhappiness in its victims than happiness in its perpetrators, and lowers the aggregate amount of happiness in the world.
↑ comment by [deleted] · 2015-10-17T16:44:18.165Z · LW(p) · GW(p)
Unless two adversaries are locked in a fight to the death, aggression is not zero-sum but negative-sum; they are collectively better off not doing it, despite the advantage to the victor.
Untrue unless you're in a non-sequential game
The advantage to a conqueror in gaining a bit more land is swamped by the disadvantage to the family he kills in stealing it, and the few moments of drive reduction experienced by a rapist are obscenely out of proportion to the suffering he causes his victim.
True under a utilitarian framework and with a few common mind-theoretic assumptions derived from intuitions stemming from most people's empathy
The asymmetry is ultimately a consequence of the law of entropy: an infinitesimal fraction of the states of the universe are orderly enough to support life and happiness, so it’s easier to destroy and cause misery than to cultivate and cause happiness.
Woo
↑ comment by lukeprog · 2013-11-21T11:46:19.768Z · LW(p) · GW(p)
More (#2) from Better Angels of Our Nature:
we can consider the purest model of how abstract reasoning might undermine the temptations of violence, the Prisoner’s Dilemma. In his popular Scientific American column, the computer scientist Douglas Hofstadter once agonized over the fact that the seemingly rational response in a one-shot Prisoner’s Dilemma was to defect. You cannot trust the other player to cooperate, because he has no grounds for trusting you, and cooperating while he defects will bring you the worst outcome. Hofstadter’s agony came from the observation that if both sides looked down on their dilemma from a single Olympian vantage point, stepping out of their parochial stations, they should both deduce that the best outcome is for both to cooperate. If each has confidence that the other realizes that, and that the other realizes that he or she realizes it, ad infinitum, both should cooperate and reap the benefits. Hofstadter envisioned a "superrationality" in which both sides were certain of the other’s rationality, and certain that the other was certain of theirs, and so on, though he wistfully acknowledged that it was not easy to see how to get people to be superrational.
Can higher intelligence at least nudge people in the direction of superrationality? That is, are better reasoners likely to reflect on the fact that mutual cooperation leads to the best joint outcome, assume that the other guy is reflecting on it as well, and profit from the resulting simultaneous leap of trust? No one has given people of different levels of intelligence a true one-shot Prisoner’s Dilemma, but a recent study came close by using a sequential one-shot Prisoner’s Dilemma, in which the second player acts only after seeing the first player’s move. The economist Stephen Burks and his collaborators gave a thousand trainee truck drivers a Matrices IQ test and a Prisoner’s Dilemma, using money for the offers and payoffs. The smarter truckers were more likely to cooperate on the first move, even after controlling for age, race, gender, schooling, and income. The investigators also looked at the response of the second player to the first player’s move. This response has nothing to do with superrationality, but it does reflect a willingness to cooperate in response to the other player’s cooperation in such a way that both players would benefit if the game were iterated. Smarter truckers, it turned out, were more likely to respond to cooperation with cooperation, and to defection with defection.
The economist Garrett Jones connected intelligence to the Prisoner’s Dilemma by a different route. He scoured the literature for all the Iterated Prisoner’s Dilemma experiments that had been conducted in colleges and universities from 1959 to 2003. Across thirty-six experiments involving thousands of participants, he found that the higher a school’s mean SAT score (which is strongly correlated with mean IQ), the more its students cooperated. Two very different studies, then, agree that intelligence enhances mutual cooperation in the quintessential situation in which its benefits can be foreseen. A society that gets smarter, then, may be a society that becomes more cooperative.
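For readers who want the payoff logic spelled out, here is a minimal sketch of the one-shot Prisoner's Dilemma with hypothetical payoffs, showing why defection is each player's best response even though mutual cooperation is jointly better.

```python
# Hypothetical payoffs (row player's payoff, column player's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    # Pick the move that maximizes my payoff given the opponent's move.
    return max(("C", "D"), key=lambda m: PAYOFFS[(m, opponent_move)][0])

for opp in ("C", "D"):
    print(f"If the other player plays {opp}, my best response is {best_response(opp)}")
# Prints D in both cases: one-shot "rational" play lands on (D, D) with payoffs (1, 1),
# even though both players would prefer the (C, C) outcome with payoffs (3, 3).
```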
↑ comment by lukeprog · 2013-11-21T11:41:52.713Z · LW(p) · GW(p)
More (#1) from Better Angels of Our Nature:
Measuring the psychological traits of public figures, to be sure, has a sketchy history, but the psychologist Dean Simonton has developed several historiometric measures that are reliable and valid (in the psychometrician’s technical sense) and politically nonpartisan. He analyzed a dataset of 42 presidents from George Washington to George W. Bush and found that both raw intelligence and openness to new ideas and values are significantly correlated with presidential performance as it has been assessed by nonpartisan historians. Though Bush himself is well above the average of the population in intelligence, he is third-lowest among the presidents, and comes in dead last in openness to experience, with a rock-bottom score of 0.0 on the 0–100 scale. Simonton published his work in 2006, while Bush was still in office, but the three historians’ surveys conducted since then bear out the correlation: Bush was ranked 37th, 36th, and 39th among the 42 presidents.
As for Vietnam, the implication that the United States would have avoided the war if only the advisors of Kennedy and Johnson had been less intelligent seems unlikely in light of the fact that after they left the scene, the war was ferociously prosecuted by Richard Nixon, who was neither the best nor the brightest. The relationship between presidential intelligence and war may also be quantified. Between 1946 (when the PRIO dataset begins) and 2008, a president’s IQ is negatively correlated with the number of battle deaths in wars involving the United States during his presidency, with a coefficient of -0.45. One could say that for every presidential IQ point, 13,440 fewer people die in battle, though it’s more accurate to say that the three smartest postwar presidents, Kennedy, Carter, and Clinton, kept the country out of destructive wars.
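The PRIO figures themselves are not reproduced here, but the mechanics behind a correlation like -0.45 and a "deaths per IQ point" figure are just Pearson's r and an ordinary least-squares slope. The sketch below computes both on made-up numbers, purely to show how the two quantities relate.

```python
# Hypothetical (president IQ, battle deaths) pairs, for illustration only.
data = [(130, 5_000), (125, 90_000), (140, 2_000), (118, 200_000), (135, 10_000)]

n = len(data)
mean_iq = sum(iq for iq, _ in data) / n
mean_d = sum(d for _, d in data) / n
cov = sum((iq - mean_iq) * (d - mean_d) for iq, d in data) / n
var_iq = sum((iq - mean_iq) ** 2 for iq, _ in data) / n
var_d = sum((d - mean_d) ** 2 for _, d in data) / n

r = cov / (var_iq * var_d) ** 0.5   # Pearson correlation
slope = cov / var_iq                # OLS slope: change in deaths per IQ point

print(f"r = {r:.2f}, slope = {slope:,.0f} deaths per IQ point")
```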
↑ comment by lukeprog · 2014-06-21T21:40:22.935Z · LW(p) · GW(p)
From Ariely's The Honest Truth about Dishonesty:
A few years ago I received a letter from a woman named Rhonda who attended the University of California at Berkeley. She told me about a problem she’d had in her house and how a little ethical reminder helped her solve it.
She was living near campus with several other people—none of whom knew one another. When the cleaning people came each weekend, they left several rolls of toilet paper in each of the two bathrooms. However, by Monday all the toilet paper would be gone. It was a classic tragedy-of-the-commons situation: because some people hoarded the toilet paper and took more than their fair share, the public resource was destroyed for everyone else.
After reading about the Ten Commandments experiment on my blog, Rhonda put a note in one of the bathrooms asking people not to remove toilet paper, as it was a shared commodity. To her great satisfaction, one roll reappeared in a few hours, and another the next day. In the other note-free bathroom, however, there was no toilet paper until the following weekend, when the cleaning people returned.
↑ comment by lukeprog · 2014-06-24T23:42:05.098Z · LW(p) · GW(p)
More (#1) from Ariely's The Honest Truth about Dishonesty:
Armed with our evidence that when people sign their names to some kind of pledge, it puts them into a more honest disposition (at least temporarily), we approached the IRS, thinking that Uncle Sam would be glad to hear of ways to boost tax revenues. The interaction with the IRS went something like this:
ME: By the time taxpayers finish entering all the data onto the form, it is too late. The cheating is done and over with, and no one will say, “Oh, I need to sign this thing, let me go back and give honest answers.” You see? If people sign before they enter any data onto the form, they cheat less. What you need is a signature at the top of the form, and this will remind everyone that they are supposed to be telling the truth.
IRS: Yes, that’s interesting. But it would be illegal to ask people to sign at the top of the form. The signature needs to verify the accuracy of the information provided.
ME: How about asking people to sign twice? Once at the top and once at the bottom? That way, the top signature will act as a pledge—reminding people of their patriotism, moral fiber, mother, the flag, homemade apple pie—and the signature at the bottom would be for verification.
IRS: Well, that would be confusing.
ME: Have you looked at the tax code or the tax forms recently?
IRS: [No reaction.]
ME: How about this? What if the first item on the tax form asked if the taxpayer would like to donate twenty-five dollars to a task force to fight corruption? Regardless of the particular answer, the question will force people to contemplate their standing on honesty and its importance for society! And if the taxpayer donates money to this task force, they not only state an opinion, but they also put some money behind their decision, and now they might be even more likely to follow their own example.
IRS: [Stony silence.]
And:
Over the course of many years of teaching, I’ve noticed that there typically seems to be a rash of deaths among students’ relatives at the end of the semester, and it happens mostly in the week before final exams and before papers are due. In an average semester, about 10 percent of my students come to me asking for an extension because someone has died—usually a grandmother. Of course I find it very sad and am always ready to sympathize with my students and give them more time to complete their assignments. But the question remains: what is it about the weeks before finals that is so dangerous to students’ relatives?
Most professors encounter the same puzzling phenomenon, and I’ll guess that we have come to suspect some kind of causal relationship between exams and sudden deaths among grandmothers. In fact, one intrepid researcher has successfully proven it. After collecting data over several years, Mike Adams (a professor of biology at Eastern Connecticut State University) has shown that grandmothers are ten times more likely to die before a midterm and nineteen times more likely to die before a final exam. Moreover, grandmothers of students who aren’t doing so well in class are at even higher risk—students who are failing are fifty times more likely to lose a grandmother compared with non-failing students.
In a paper exploring this sad connection, Adams speculates that the phenomenon is due to intrafamilial dynamics, which is to say, students’ grandmothers care so much about their grandchildren that they worry themselves to death over the outcome of exams. This would indeed explain why fatalities occur more frequently as the stakes rise, especially in cases where a student’s academic future is in peril. With this finding in mind, it is rather clear that from a public policy perspective, grandmothers—particularly those of failing students—should be closely monitored for signs of ill health during the weeks before and during finals. Another recommendation is that their grandchildren, again particularly the ones who are not doing well in class, should not tell their grandmothers anything about the timing of the exams or how they are performing in class.
Though it is likely that intrafamilial dynamics cause this tragic turn of events, there is another possible explanation for the plague that seems to strike grandmothers twice a year. It may have more to do with students’ lack of preparation and their subsequent scramble to buy more time than with any real threat to the safety of those dear old women. If that is the case, we might want to ask why it is that students become so susceptible to “losing” their grandmothers (in e-mails to professors) at semesters’ end.
Perhaps at the end of the semester, the students become so depleted by the months of studying and burning the candle at both ends that they lose some of their morality and in the process also show disregard for their grandmothers’ lives. If the concentration it takes to remember a longer digit can send people running for chocolate cake, it’s not hard to imagine how dealing with months of cumulative material from several classes might lead students to fake a dead grandmother in order to ease the pressure (not that that’s an excuse for lying to one’s professors).
↑ comment by lukeprog · 2014-06-24T23:48:54.767Z · LW(p) · GW(p)
More (#2) from Ariely's The Honest Truth about Dishonesty:
On one particular flight, I was flipping through a magazine and discovered a MENSA quiz (questions that are supposed to measure intelligence). Since I am rather competitive, I naturally had to try it. The directions said that the answers were in the back of the magazine. After I answered the first question, I flipped to the back to see if I was correct, and lo and behold, I was. But as I continued with the quiz, I also noticed that as I was checking the answer to the question I just finished solving, my eyes strayed just a bit to the next answer. Having glanced at the answer to the next question, I found the next problem to be much easier. At the end of the quiz, I was able to correctly solve most of the questions, which made it easier for me to believe that I was some sort of genius. But then I had to wonder whether my score was that high because I was supersmart or because I had seen the answers out of the corner of my eye (my inclination was, of course, to attribute it to my own intelligence).
The same basic process can take place in any test in which the answers are available on another page or are written upside down, as they often are in magazines and SAT study guides. We often use the answers when we practice taking tests to convince ourselves that we’re smart or, if we get an answer wrong, that we’ve made a silly mistake that we would never make during a real exam. Either way, we come away with an inflated idea of how bright we actually are—and that’s something we’re generally happy to accept.
And:
During one unbearably long operation on my hands, the doctors inserted long needles from the tips of my fingers through the joints in order to hold my fingers straight so that the skin could heal properly. At the top of each needle they placed a cork so that I couldn’t unintentionally scratch myself or poke my eyes. After a couple of months of living with this unearthly contraption, I found that it would be removed in the clinic—not under anesthesia. That worried me a lot, because I imagined that the pain would be pretty awful. But the nurses said, “Oh, don’t worry. This is a simple procedure and it’s not even painful.” For the next few weeks I felt much less worried about the procedure.
When the time came to withdraw the needles, one nurse held my elbow and the other slowly pulled out each needle with pliers. Of course, the pain was excruciating and lasted for days—very much in contrast to how they described the procedure. Still, in hindsight, I was very glad they had lied to me. If they had told me the truth about what to expect, I would have spent the weeks before the extraction anticipating the procedure in misery, dread, and stress—which in turn might have compromised my much-needed immune system. So in the end, I came to believe that there are certain circumstances in which white lies are justified.
↑ comment by lukeprog · 2014-02-08T20:27:25.591Z · LW(p) · GW(p)
From Feynman's Surely You're Joking, Mr. Feynman:
When I sat with the philosophers I listened to them discuss very seriously a book called Process and Reality by Whitehead. They were using words in a funny way, and I couldn't quite understand what they were saying...
After some discussion as to what "essential object" meant, the professor leading the seminar said something meant to clarify things and drew something that looked like lightning bolts on the blackboard. "Mr. Feynman," he said, "would you say an electron is an 'essential object'?"
Well, now I was in trouble. I admitted that I hadn't read the book, so I had no idea of what Whitehead meant by the phrase; I had only come to watch. "But," I said, "I'll try to answer the professor's question if you will first answer a question from me, so I can have a better idea of what 'essential object' means. Is a brick an essential object?"
...Then the answers came out. One man stood up and said, "A brick as an individual, specific brick. That is what Whitehead means by an essential object."
Another man said, "No, it isn't the individual brick that is an essential object; it's the general character that all bricks have in common, their 'brickness', that is the essential object."
Another guy got up and said, "No, it's not in the bricks themselves. 'Essential object' means the idea in the mind that you get when you think of bricks."
Another guy got up, and another, and I tell you I have never heard such ingenious different ways of looking at a brick before. And, just like it should in all stories about philosophers, it ended up in complete chaos. In all their previous discussions they hadn't even asked themselves whether such a simple object as a brick, much less an electron, is an "essential object."
↑ comment by lukeprog · 2014-02-08T20:34:23.142Z · LW(p) · GW(p)
More (#1) from Surely You're Joking, Mr. Feynman:
The next paper selected for me [by a seminar for biology students] was by Adrian and Bronk. They demonstrated that nerve impulses were sharp, single pulse phenomena. They had done experiments with cats in which they had measured voltages on nerves.
I began to read the paper. It kept talking about extensors and flexors, the gastrocnemius muscle, and so on. This and that muscle were named, but I hadn't the foggiest idea of where they were located in relation to the nerves or to the cat. So I went to the librarian in the biology section and asked her if she could find me a map of the cat.
"A map of the cat, sir?" she asked, horrified. "You mean a zoological chart!" From then on there were rumors about some dumb biology graduate student who was looking for a "map of the cat."
When it came time for me to give my talk on the subject, I started off by drawing an outline of the cat and began to name the various muscles. The other students in the class interrupt me: "We know all that!"
"Oh," I say, "you do? Then no wonder I can catch up with you so fast after you've had four years of biology." They had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes.
And:
Hildegarde said, "I'll need a lot of ribosomes from bacteria."
Meselson and I had extracted enormous quantities of ribosomes from E. coli for some other experiment. I said, "Hell, I'll just give you the ribosomes we've got. We have plenty of them in my refrigerator at the lab."
It would have been a fantastic and vital discovery if I had been a good biologist. But I wasn't a good biologist. We had a good idea, a good experiment, the right equipment, but I screwed it up: I gave her infected ribosomes, the grossest possible error that you could make in an experiment like that. My ribosomes had been in the refrigerator for almost a month, and had become contaminated with some other living things. Had I prepared those ribosomes promptly over again and given them to her in a serious and careful way, with everything under control, that experiment would have worked, and we would have been the first to demonstrate the uniformity of life: the machinery of making proteins, the ribosomes, is the same in every creature. We were there at the right place, we were doing the right things, but I was doing things as an amateur, stupid and sloppy.
And:
In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas (he's the controller), and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.
...there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school; we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty, a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid, not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked, to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can, if you know anything at all wrong or possibly wrong, to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.
In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.
↑ comment by lukeprog · 2013-11-14T17:50:57.595Z · LW(p) · GW(p)
One quote from Taleb's AntiFragile is here, and here's another:
Practitioners don’t write; they do. Birds fly and those who lecture them are the ones who write their story. So it is easy to see that history is truly written by losers with time on their hands and a protected academic position.
The greatest irony is that we watched firsthand how narratives of thought are made, as we were lucky enough to face another episode of blatant intellectual expropriation. We received an invitation to publish our side of the story—being option practitioners—in the honorable Wiley Encyclopedia of Quantitative Finance. So we wrote a version of the previous paper mixed with our own experiences. Shock: we caught the editor of the historical section, one Barnard College professor, red-handed trying to modify our account. A historian of economic thought, he proceeded to rewrite our story to play down, if not reverse, its message and change the arrow of the formation of knowledge. This was scientific history in the making. The fellow sitting in his office in Barnard College was now dictating to us what we saw as traders—we were supposed to override what we saw with our own eyes with his logic.
I came to notice a few similar inversions of the formation of knowledge. For instance, in his book written in the late 1990s, the Berkeley professor Highly Certified Fragilista Mark Rubinstein attributed to publications by finance professors techniques and heuristics that we practitioners had been extremely familiar with (often in more sophisticated forms) since the 1980s, when I got involved in the business.
No, we don’t put theories into practice. We create theories out of practice. That was our story, and it is easy to infer from it—and from similar stories—that the confusion is generalized. The theory is the child of the cure, not the opposite...
↑ comment by lukeprog · 2013-11-14T18:31:47.869Z · LW(p) · GW(p)
AntiFragile makes lots of interesting points, but it's clear in some cases that Taleb is running roughshod over the truth in order to support his preferred view. I've italicized the particularly lame part:
Now, one can see a possible role for basic science, but not in the way it is intended to be. For an example of a chain of unintended uses, let us start with Phase One, the computer. The mathematical discipline of combinatorics, here basic science, derived from propositional knowledge, led to the building of computers, or so the story goes... But at first, nobody had an idea what to do with these enormous boxes full of circuits as they were cumbersome, expensive, and their applications were not too widespread, outside of database management, only good to process quantities of data. It is as if one needed to invent an application for the thrill of technology. Baby boomers will remember those mysterious punch cards. Then someone introduced the console to input with the aid of a screen monitor, using a keyboard. This led, of course, to word processing, and the computer took off because of its fitness to word processing, particularly with the microcomputer in the early 1980s. It was convenient, but not much more than that until some other unintended consequence came to be mixed into it. Now Phase Two, the Internet. It had been set up as a resilient military communication network device, developed by a research unit of the Department of Defense called DARPA, and got a boost in the days when Ronald Reagan was obsessed with the Soviets. It was meant to allow the United States to survive a generalized military attack. Great idea, but add the personal computer plus Internet and we get social networks, broken marriages, a rise in nerdiness, the ability for a post-Soviet person with social difficulties to find a matching spouse. All that thanks to initial U.S. tax dollars (or rather budget deficit) during Reagan’s anti-Soviet crusade.
So for now we are looking at the forward arrow and at no point, although science was at some use along the way since computer technology relies on science in most of its aspects; at no point did academic science serve in setting its direction, rather it served as a slave to chance discoveries in an opaque environment, with almost no one but college dropouts and overgrown high school students along the way. The process remained self-directed and unpredictable at every step.
↑ comment by lukeprog · 2014-05-29T04:00:19.025Z · LW(p) · GW(p)
From Think Like a Freak:
Now we asked about the print ads. How often did they run? One executive told us, with obvious pride, that the company had bought newspaper inserts every single Sunday for the past twenty years in 250 markets across the United States.
So how could they tell whether these ads were effective? They couldn’t. With no variation whatsoever, it was impossible to know.
What if, we said, the company ran an experiment to find out? In science, the randomized control trial has been the gold standard of learning for hundreds of years—but why should scientists have all the fun? We described an experiment the company might run. They could select 40 major markets across the country and randomly divide them into two groups. In the first group, the company would keep buying newspaper ads every Sunday. In the second group, they’d go totally dark—not a single ad. After three months, it would be easy to compare merchandise sales in the two groups to see how much the print ads mattered.
“Are you crazy?” one marketing executive said. “We can’t possibly go dark in 20 markets. Our CEO would kill us.”
“Yeah,” said someone else, “it’d be like that kid in Pittsburgh.”
What kid in Pittsburgh?
They told us about a summer intern who was supposed to call in the Sunday ad buys for the Pittsburgh newspapers. For whatever reason, he botched his assignment and failed to make the calls. So for the entire summer, the company ran no newspaper ads in a large chunk of Pittsburgh. “Yeah,” one executive said, “we almost got fired for that one.”
So what happened, we asked, to the company’s Pittsburgh sales that summer?
They looked at us, then at each other—and sheepishly admitted it never occurred to them to check the data. When they went back and ran the numbers, they found something shocking: the ad blackout hadn’t affected Pittsburgh sales at all!
Now that, we said, is valuable feedback. The company may well be wasting hundreds of millions of dollars on advertising. How could the executives know for sure? That 40-market experiment would go a long way toward answering the question. And so, we asked them, are you ready to try it now?
“Are you crazy?” the marketing executive said again. “We’ll get fired if we do that!”
To this day, on every single Sunday in every single market, this company still buys newspaper advertising—even though the only real piece of feedback they ever got is that the ads don’t work.
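The experiment proposed above is easy to sketch. Everything in the snippet below (market names, sales figures, and the assumption that the ads add nothing) is simulated placeholder data; only the design, randomly splitting 40 markets and comparing means, comes from the passage.

```python
import random
import statistics

random.seed(0)
markets = [f"market_{i}" for i in range(40)]  # hypothetical market labels
random.shuffle(markets)
ads_on, ads_off = markets[:20], markets[20:]  # random assignment to the two arms

def quarterly_sales(market, ads_running):
    # Simulated outcome; in the real test this would come from actual sales records.
    effect = random.gauss(0, 10_000) if ads_running else 0.0  # assume ads add ~nothing
    return random.gauss(1_000_000, 50_000) + effect

on = [quarterly_sales(m, True) for m in ads_on]
off = [quarterly_sales(m, False) for m in ads_off]
print(f"mean sales with ads:    {statistics.mean(on):,.0f}")
print(f"mean sales without ads: {statistics.mean(off):,.0f}")
```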
↑ comment by lukeprog · 2014-05-29T04:25:04.962Z · LW(p) · GW(p)
More (#1) from Think Like a Freak:
The rules were simple. A contestant ate as many hot dogs and buns (“HDB,” officially) as he could in 12 minutes. Any HDB or portion thereof already in the eater’s mouth when the final bell rang would count toward his total as long as he swallowed it eventually. An eater could be disqualified, however, if during the contest a significant amount of HDB that had gone into his mouth came back out—known in the sport as a “reversal of fortune.” Condiments were allowed but no serious competitor would bother. Beverages were also allowed, any kind in unlimited quantity. In 2001, when Kobi decided to enter the Coney Island contest, the record stood at a mind-boggling 25.125 HDB in 12 minutes.
...How did he do? In his very first Coney Island contest, Kobi smoked the field and set a new world record... he ate 50.
...Kobayashi had observed that most Coney Island eaters used a similar strategy, which was not really much of a strategy at all. It was essentially a sped-up version of how the average person eats a hot dog at a backyard barbecue: pick it up, cram the dog and bun into the mouth, chew from end to end, and glug some water to wash it down. Kobayashi wondered if perhaps there was a better way. Nowhere was it written, for instance, that the dog must be eaten end to end. His first experiment was simple: What would happen if he broke the dog and bun in half before eating? This, he found, afforded more options for chewing and loading, and it also let his hands do some of the work that would otherwise occupy his mouth...
Kobayashi now questioned another conventional practice: eating the dog and bun together. It wasn’t surprising that everyone did this. The dog is nested so comfortably in the bun, and when eating for pleasure, the soft blandness of the bun pairs wonderfully with the slick, seasoned meat. But Kobayashi wasn’t eating for pleasure. Chewing dog and bun together, he discovered, created a density conflict. The dog itself is a compressed tube of dense, salty meat that can practically slide down the gullet on its own. The bun, while airy and less substantial, takes up a lot of space and requires a lot of chewing. So he started removing the dog from the bun. Now he could feed himself a handful of bunless dogs, broken in half, followed by a round of buns.
...As easily as he was able to swallow the hot dogs—like a trained dolphin slorping down herring at the aquarium—the bun was still a problem. (If you want to win a bar bet, challenge someone to eat two hot-dog buns in one minute without a beverage; it is nearly impossible.) So Kobayashi tried something different. As he was feeding himself the bunless, broken hot dogs with one hand, he used the other hand to dunk the bun into his water cup. Then he’d squeeze out most of the excess water and smush the bun into his mouth. This might seem counterintuitive—why put extra liquid in your stomach when you need all available space for buns and dogs?—but the bun-dunking provided a hidden benefit. Eating soggy buns meant Kobayashi grew less thirsty along the way, which meant less time wasted on drinking. He experimented with water temperature and found that warm was best, as it relaxed his chewing muscles. He also spiked the water with vegetable oil, which seemed to help swallowing.
His experimentation was endless. He videotaped his training sessions and recorded all his data in a spreadsheet, hunting for inefficiencies and lost milliseconds. He experimented with pace: Was it better to go hard the first four minutes, ease off during the middle four, and “sprint” toward the end—or maintain a steady pace throughout? (A fast start, he discovered, was best.) He found that getting a lot of sleep was especially important. So was weight training: strong muscles aided in eating and helped resist the urge to throw up. He also discovered that he could make more room in his stomach by jumping and wriggling as he ate—a strange, animalistic dance that came to be known as the Kobayashi Shake.
And:
the United Nations set up an incentive plan to compensate manufacturers for curtailing the pollutants they released into the atmosphere. The payments, in the form of carbon credits that could be sold on the open market, were indexed to the environmental harm of each pollutant.
For every ton of carbon dioxide a factory eliminated, it would receive one credit. Other pollutants were far more remunerative: methane (21 credits), nitrous oxide (310), and, near the top of the list, something called hydrofluorocarbon-23, or HFC-23. It is a “super” greenhouse gas that is a by-product in the manufacture of HCFC-22, a common refrigerant that is itself plenty bad for the environment.
The UN was hoping that factories would switch to a greener refrigerant than HCFC-22. One way to incentivize them, it reasoned, was to reward the factories handsomely for destroying their stock of its waste gas, HFC-23. So the UN offered a whopping bounty of 11,700 carbon credits for every ton of HFC-23 that was destroyed rather than released into the atmosphere.
Can you guess what happened next?
Factories around the world, especially in China and India, began to churn out extra HCFC-22 in order to generate extra HFC-23 so they could rake in the cash. As an official with the Environmental Investigation Agency (EIA) put it: “The evidence is overwhelming that manufacturers are creating excess HFC-23 simply to destroy it and earn carbon credits.” The average factory earned more than $20 million a year by selling carbon credits for HFC-23.
Angry and embarrassed, the UN changed the rules of the program to curb the abuse; several carbon markets banned the HFC-23 credits, making it harder for the factories to find buyers. So what will happen to all those extra tons of harmful HFC-23 that suddenly lost its value? The EIA warns that China and India may well “release vast amounts of . . . HFC-23 into the atmosphere, causing global greenhouse gas emissions to skyrocket.”
Which means the UN wound up paying polluters millions upon millions of dollars to . . . create additional pollution.
Backfiring bounties are, sadly, not as rare as one might hope. This phenomenon is sometimes called “the cobra effect.” As the story goes, a British overlord in colonial India thought there were far too many cobras in Delhi. So he offered a cash bounty for every cobra skin. The incentive worked well—so well, in fact, that it gave rise to a new industry: cobra farming. Indians began to breed, raise, and slaughter the snakes to take advantage of the bounty. Eventually the bounty was rescinded—whereupon the cobra farmers did the logical thing and set their snakes free, as toxic and unwanted as today’s HFC-23.
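The arithmetic behind the incentive is simple. The 11,700-credit bounty is from the passage; the credit price and tonnage below are assumed, purely to show the order of magnitude.

```python
# Rough arithmetic behind the HFC-23 bounty described above.
credits_per_ton_hfc23 = 11_700     # from the passage
price_per_credit_usd = 10          # assumed market price per carbon credit
tons_hfc23_destroyed = 200         # assumed annual by-product at one factory

revenue = credits_per_ton_hfc23 * price_per_credit_usd * tons_hfc23_destroyed
print(f"annual credit revenue: ${revenue:,.0f}")
# At these assumed numbers the bounty is worth roughly $23 million a year,
# which is why producing extra HCFC-22 just to destroy its HFC-23 by-product paid off.
```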
↑ comment by lukeprog · 2014-03-05T20:46:31.836Z · LW(p) · GW(p)
From Rhodes' Twilight of the Bombs:
For the Senate subcommittee, Pavlov reviewed how the several levels of the Soviet control system worked together:
"...Let me describe … one possible scenario of attack under the conditions of the coup. The early warning system detects a missile attack and sends signals to the subsystems that assess the threat. It is a process that immediately involves the president of the country, the minister of defense, chief of the general staff and the commanders in chief of the three branches of strategic nuclear forces.
"Then the chief of the general staff and commanders in chief of strategic nuclear forces form a command and send it down to the subordinate units. In essence, this command is meant to inform troops and weapons systems about a possible nuclear attack, and this command is called a preliminary command.
"The preliminary command opens up access by the launch crews to the equipment directly controlling the use of nuclear weapons and also gives them access to the relevant special documentation. However, launch crews do not [yet] have the full right to use the equipment of direct control over the use of nuclear weapons.
"As a more accurate assessment of the situation is made, a message is received from the early warning systems confirming the fact of nuclear attack, and the decision to use nuclear weapons may be made at that point. It can be carried out according to a two-stage process."
The first stage of this two-stage process, Pavlov continued, once again involved the top leadership in a political decision—whether or not to generate a “permission command” that would be sent to the CICs. Then, during the second stage, the CICs and the chief of the general staff would decide as military leaders whether or not to generate a “direct command” ordering launch crews to fire their weapons. Even then, the direct command had to pass through an ordeal of what Pavlov called “special processing by technical and organizational means to verify its authenticity.” Each of these actions had time limits, and if the time for an action expired, the blocking system that normally prevented weapons from being launched automatically reactivated.
Cumbersome as the Soviet system seemed from their descriptions, Blair pointed out, it was “actually devised … to streamline the command system to ensure that they could release nuclear weapons within the time frame of a ballistic missile attack launched by the United States, that is to say, within 15 to 30 minutes.” And despite its complexity, Blair added, a nuclear launch by the coup leaders might still have been possible had they persuaded the general staff to issue Yanayev a Cheget and had one or more of the CICs gone along. “There is an important lesson here,” Blair concluded. “No system of safeguards can reliably guard against misbehavior at the very apex of government, in any government. There is no adequate answer to the question, ‘Who guards the guards?’”
↑ comment by lukeprog · 2014-03-05T21:21:14.415Z · LW(p) · GW(p)
More (#1) from Twilight of the Bombs:
That November, the Republican Party won a landslide victory in the Clinton midterm elections, the first Republican legislative majority in forty years. Democrats lost fifty-four seats in the House of Representatives. Newt Gingrich, the new House speaker, announced his Contract with America. The new crowd of representatives brought a highly parochial perspective to government, Christian Alfonsi noted:
"Many of the new Republicans37 on Capitol Hill were young enough to have avoided Vietnam entirely; and most of those who had not been young enough had received deferments. Never before had the American people elected a congressional majority so few of whose members had served in the military. Perhaps the most striking attribute of the new House membership, though, was its startling lack of familiarity with the world outside America’s borders. Fully a third of the new Republican House members had never set foot outside the United States. In the main, many of them considered that a good thing; or if not, then certainly not a deficiency to be rectified. The deep suspicion of the UN reflected in the Contract with America was an accurate reflection of these individuals’ deep distrust of the foreign, in all senses of that term."
And:
There was a battle as well within the U.S. weapons and defense bureaucracies over allowing tests up to some yield limit—Perry proposed five hundred tons—but Clinton agreed to support stockpile stewardship with an initial budget of $4 billion in tacit exchange for zero yield. (Not everyone was happy with the quid pro quo. Graham visited Los Alamos and found a hostile if generally polite audience. “Several of them,” he told me, “said the government had betrayed them. ‘They made a deal with us,’ one said, ‘that we would be able to work on nuclear weapons for our entire careers and they betrayed us.’ That didn’t seem like a very rational argument to me. Right at the end of the questioning, Sig Hecker stood up and made a really gracious speech. He said, ‘The CTBT is national policy, a moratorium is national policy, there are good reasons for it, and here’s what they are and we should support it.’ I went to lunch with him afterward and told him, ‘You know, Sig, I strongly support the CTBT, but you may recall that a predecessor of yours, Harold Agnew, used to complain that people had forgotten what nuclear weapons are like because we don’t have atmospheric tests any more. I wouldn’t be against having an internationally supervised atmospheric test once every five years or so. To remind people how awful these things are.’ Sig said, ‘You know, I’ve been thinking the same thing.’”)
And:
Time published an article by Scowcroft and [George Bush Sr.], called “Why We Didn’t Remove Saddam.” In it the two men predicted disastrous results of a cavalier invasion—results that in fact would occur:
"We would have been forced to occupy Baghdad and, in effect, rule Iraq. The coalition would instantly have collapsed, the Arabs deserting it in anger and other allies pulling out as well... Going in and occupying Iraq, thus unilaterally exceeding the U.N.’s mandate, would have destroyed the precedent of international response to aggression we hoped to establish. Had we gone the invasion route, the U.S. could conceivably still be an occupying power in a bitterly hostile land. It would have been a dramatically different—and perhaps barren—outcome."
And:
By mid-October twelve people had been exposed to anthrax, five of whom would eventually die. The U.S. government deployed military units nationwide to guard nuclear power plants, water supplies, oil refineries, airports, railroad terminals, the Empire State Building, the Brooklyn and Golden Gate bridges. Then a letter arrived at the Senate offices of the Democratic majority leader Tom Daschle containing not the low-grade form of anthrax included in the first round of letters mailed to Florida and New York but a highly purified, military-grade aerosolized powder that was ten times as deadly. The Senate shut down the next day, 16 October, the House the day after that. Twenty-eight staffers were found to have been exposed. The Senate office building attacks made front-page world news and sowed panic throughout Washington.
But something else happened that week in Washington that had an even greater impact on George Bush and Dick Cheney’s thinking. Special sensors that detect chemical, biological, or radiological agents had been installed in the White House to protect the president. On Thursday, 18 October, they went off while Cheney and his aides were working in the Situation Room. “Everyone who had entered the Situation Room that day,” the journalist Jane Mayer reported, “was believed to have been exposed, and that included Cheney. ‘They thought there had been a nerve attack,’ a former administration official, who was sworn to secrecy about it, later confided. ‘It was really, really scary. They thought that Cheney was already lethally infected.’” Cheney had recently been briefed about the lack of U.S. defenses against a biowarfare attack, Mayer revealed. Thus, “when the White House sensor registered the presence of such poisons less than a month later, many, including Cheney, believed a nightmare was unfolding. ‘It was a really nerve jangling time,’ the former official said."
And:
No nuclear weapons are known to have been stolen in any country since their first development by the United States in 1945. Whether this fact is testimony to the quality of the security that nuclear weapons are rightly accorded in every country that has them, or whether thieves judge attempting to acquire such complicated, dangerous, and well-guarded explosives not to be worth the risk remains to be seen. U.S. and Russian nuclear weapons are outfitted with complicated physical and electronic locking mechanisms with defensive features that may be deadly; weapons in countries such as Pakistan are protected the way South Africa’s were, by being stored partly disassembled, with their fissile components divided among several locations in guarded bunkers or vaults. The theft of a nuclear weapon anywhere would activate every resource the international community could muster, with shooting on sight the minimum rule of engagement.
Because nuclear weapons are well protected, national-security bureaucracies have postulated that a terrorist group that wants to acquire a nuclear capability will be forced to build its own bomb or bombs. Enriching uranium or breeding plutonium and separating it from its intensely radioactive matrix of spent fuel are both well beyond the capacity of subnational entities. The notion that a government would risk its own security by giving up control of a nuclear weapon to terrorists is nonsensical despite the Bush administration’s use of the argument to justify invading Saddam Hussein’s Iraq. A nuclear attack on United States interests by a terrorist group using a donated bomb would certainly lead to a devastating nuclear counterattack on the country that supplied the weapon, provided the supplier could be determined—a near certainty with nuclear forensics and other means of investigation.
↑ comment by lukeprog · 2014-02-14T17:46:32.336Z · LW(p) · GW(p)
From Harford's The Undercover Economist Strikes Back:
"Are you sure there isn't a much simpler solution to [economic growth] that you're missing?" The world is full of people who will tell you that there is. Tie your currency to gold! Always balance your budget! Protect manufacturing! Eliminate red tape! That kind of thing. You can safely ignore these people. Anyone who insists that running a modern economy is a matter of plain common sense frankly doesn’t understand much about running a modern economy.
And:
Bhutan. The Himalayan mountain kingdom provides the clearest example I can think of that there’s a difference between collecting statistics about happiness and making people happy. Bhutan is venerated by the more naïve among happiness wonks... who seem unaware of its rather dubious human rights record. According to Human Rights Watch, many members of Bhutan’s Nepali minority have been stripped of their citizenship and harassed out of the country. Although, of course, if the Nepalis were miserable to start with, ethnic cleansing, driving them out of the country, might indeed raise average happiness levels—in Bhutan itself, if not in refugee camps across the border in Nepal.
Funnily enough, the “gross national happiness” thing appears to have emerged as a defensive reflex—the then king of Bhutan, Jigme Singye Wangchuck, announced that “Gross National Happiness is more important than Gross Domestic Product” when pressed on the question of Bhutan’s lack of economic progress in an interview with the Financial Times in 1986. His majesty isn’t the last person to turn to alternative measures of progress for consolation. When Nicolas Sarkozy was president of France he commissioned three renowned economists, Joseph Stiglitz (a Nobel laureate), Amartya Sen (another Nobel laureate) and Jean-Paul Fitoussi, to contemplate alternatives to GDP. One possible reason for President Sarkozy’s enthusiasm was surely that the French spend most of their time not working, and this lowers France’s GDP. The country is likely to look better on most alternative indices. It’s not unreasonable to look at those alternatives, but let’s not kid ourselves: politicians are always on the lookout for statistical measures that reflect well on them.
↑ comment by lukeprog · 2014-02-14T18:02:18.969Z · LW(p) · GW(p)
More (#2) from The Undercover Economist Strikes Back:
...forecasting is not the economist’s main job. Unfortunately, economists have managed to stereotype themselves as bad forecasters because investment firms have realized that they can get some publicity by sending someone called a “chief economist” to the studios of Bloomberg Television, where said chief economist will opine about whether shares will go up or down. Most academic economists don’t even try to forecast, because they know that forecasts of complex systems are extremely difficult — if anything, rather than being overconfident in their forecasts, they’re too eager to dismiss forecasting as an activity for fools and frauds.
Keynes famously remarked, “If economists could manage to get themselves thought of as humble, competent people, on a level with dentists, that would be splendid!” It’s a good joke, but it’s not just a joke; you don’t expect your dentist to be able to forecast the pattern of tooth decay, but you expect that she will be able to give you good practical advice on dental health and to intervene to fix problems when they occur. That is what we should demand from economists: useful advice about how to keep the economy working well, and solutions when the economy malfunctions.
And:
When you look at the most exciting, innovative work coming out of economics today, it’s pretty much all from microeconomists, not macroeconomists. Think of Al Roth’s work on market design, in which he uses computer-based algorithms to allocate children to school places, young doctors to their first hospital jobs, and kidney donors to compatible patients. Economists such as Paul Milgrom, Hal Varian and Paul Klemperer are scoring notable successes in auction design, from Google Ads to lucrative spectrum auctions to efforts to support the banking system without giving massive handouts to banks. John List, Esther Duflo and others are designing economic experiments to reveal hidden truths about human behavior. These economists are much more like dentists — or doctors, or engineers. They solve problems.
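Much of the school-choice and residency-matching work attributed to Roth builds on the Gale-Shapley deferred-acceptance algorithm. The sketch below is a generic one-to-one version with made-up preferences, not anyone's production system.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Gale-Shapley deferred acceptance (one-to-one matching).

    Each dict maps a name to a full preference list over the other side.
    Returns a stable matching as {proposer: reviewer}.
    """
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                         # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # propose to next-best reviewer
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:  # reviewer prefers new proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return {p: r for r, p in engaged.items()}

# Hypothetical students and schools, purely for illustration.
students = {"ann": ["x", "y"], "bob": ["y", "x"]}
schools = {"x": ["bob", "ann"], "y": ["ann", "bob"]}
print(deferred_acceptance(students, schools))  # {'ann': 'x', 'bob': 'y'}
```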
And:
Macroeconomic models have become elegant and logically sophisticated, but suffer a serious disconnection from reality. The thinking has been that logical consistency must come first, and hopefully the models will start to look realistic eventually. This is not entirely ridiculous—Robert Lucas’s critique of the Phillips curve and the chastening stagflation of the 1970s showed economists that it wasn’t enough merely to draw conclusions from the data, because the data could change dramatically. But four decades on from the “rational expectations” revolution, there are good reasons to believe that macroeconomics is failing to incorporate some important perspectives.
...Three examples spring to mind: banking, behavioral economics and complexity theory.
And:
behavioral economics, a kind of fusion of economics and psychology, has made big inroads into economic thought in the past fifteen years... Microeconomists were initially skeptical, and many remain skeptical. But skeptical or not, they have paid attention and either embraced behavioral economics or criticized it.
But macroeconomists? They seem to have ignored behavioral economics almost entirely. Robert Shiller told me that while the microeconomists would show up to argue when he gave seminars on behavioral finance, the macroeconomists just haven’t shown up at all.
↑ comment by lukeprog · 2014-02-14T17:56:09.947Z · LW(p) · GW(p)
More (#1) from The Undercover Economist Strikes Back:
In The Wealth of Nations, [Smith] wrote: “A linen shirt, for example, is, strictly speaking, not a necessity of life. The Greeks and Romans lived, I suppose, very comfortably though they had no linen. But in the present times, through the greater part of Europe, a creditable day-laborer would be ashamed to appear in public without a linen shirt. . . .”
Smith’s point is not that poverty is relative, but that it is a social condition. People don’t become poor just because the median citizen receives a pay raise, whatever Eurostat may say. But they may become poor if something they cannot afford—such as a television—becomes viewed as a social essential. A person can lack the money necessary to participate in society, and that, in an important sense, is poverty.
For me, the poverty lines that make the most sense are absolute poverty lines, adjusted over time to reflect social change. Appropriately enough, one of the attempts to do such work is made by a foundation established by Seebohm Rowntree’s father, Joseph. The Joseph Rowntree Foundation uses focus groups to establish what things people feel it’s now necessary to have in order to take part in society—the list includes a vacation, a no-frills mobile phone and enough money to buy a cheap suit every two or three years. Of course, this is all subjective, but so is poverty. I’m not sure we will get anywhere if we believe that some expert, somewhere—even an expert as thoughtful as Mollie Orshansky or Seebohm Rowntree—is going to be able to nail down, permanently and precisely, what it means to be poor.
Even if we accept the simpler idea of a nutrition-based absolute poverty line, there will always be complications. One obvious one is the cost of living: lower in, say, Alabama than in New York. In principle, absolute poverty lines could and should take account of the cost of living, but the U.S. poverty line does not. A second issue is how to deal with short-term loss of income. A middle manager who loses her job and is unemployed for three months before finding another well-paid position might temporarily fall below the poverty line as far as her income is concerned, but with good prospects, a credit card and savings in the bank, she won’t need to live like a poor person—and she is likely to maintain much of her pre-poverty spending pattern. For this reason, some economists prefer to measure poverty not by what a household earns in a given week, month or year—but by how much money that household spends.
And:
According to the official United States government definition, 15 percent of the U.S. population was poor in 2011. That was the highest percentage since the early 1990s, up from 12.3 percent in 2006, just before the recession began. For all its faults, you can see one of the appeals of an absolute poverty line: if poverty goes up during recessions, you are probably measuring something sensible.
The European Union doesn’t use a comparable poverty line, but in the year 2000, researchers at the University of York tried to work out what EU poverty rates would be as measured against U.S. standards. They estimated poverty rates as high as 48 percent in Portugal and as low as 6 percent in Denmark, with France at 12 percent, Germany at 15 percent and the UK at 18 percent. Clearly, national income is a big influence on absolute poverty (Portugal is a fair bit poorer than Denmark), but so, too, is the distribution of income (France and the UK have similar average incomes, but France is more egalitarian).
↑ comment by lukeprog · 2014-01-15T01:13:01.652Z · LW(p) · GW(p)
From Caplan's The Myth of the Rational Voter:
The history of dictatorships creates a strong impression that bad policies exist because the interests of rulers and ruled diverge. A simple solution is to make the rulers and the ruled identical by giving “power to the people.” If the people decide to delegate decisions to full-time politicians, so what? Those who pay the piper—or vote to pay the piper—call the tune.
This optimistic story is, however, often at odds with the facts. Democracies frequently adopt and maintain policies harmful for most people. Protectionism is a classic example. Economists across the political spectrum have pointed out its folly for centuries, but almost every democracy restricts imports. Even when countries negotiate free trade agreements, the subtext is not, “Trade is mutually beneficial,” but, “We’ll do you the favor of buying your imports if you do us the favor of buying ours.” Admittedly, this is less appalling than the Berlin Wall, yet it is more baffling. In theory, democracy is a bulwark against socially harmful policies, but in practice it gives them a safe harbor.
↑ comment by lukeprog · 2014-01-15T01:35:08.819Z · LW(p) · GW(p)
More (#2) from The Myth of the Rational Voter:
Many economists took the [self-interested voter hypothesis] for granted, but few bothered to defend it. After completing my doctorate I read more outside my discipline, and discovered that political scientists have subjected the SIVH to extensive and diverse empirical tests. Their results are impressively uniform: The SIVH fails.
Start with the easiest case: partisan identification. Both economists and the public almost automatically accept the view that poor people are liberal Democrats and rich people are conservative Republicans. The data paint a quite different picture. At least in the United States, there is only a flimsy connection between individuals’ incomes and their ideology or party. The sign fits the stereotype: As your income rises, you are more likely to be conservative and Republican. But the effect is small, and shrinks further after controlling for race. A black millionaire is more likely to be a Democrat than a white janitor. The Republicans might be the party for the rich, but they are not the party of the rich.
We see the same pattern for specific policies. The elderly are not more in favor of Social Security and Medicare than the rest of the population. Seniors strongly favor these programs, but so do the young. Contrary to the SIVH-inspired bumper sticker “If men got pregnant, abortion would be a sacrament,” men appear a little more pro-choice on abortion than women. Compared to the overall population, the unemployed are at most a little more in favor of government-guaranteed jobs, and the uninsured at most a little more supportive of national health insurance. Measures of self-interest predict little about beliefs about economic policy. Even when the stakes are life and death, political self-interest rarely surfaces: Males vulnerable to the draft support it at normal levels, and families and friends of conscripts in Vietnam were in fact more opposed to withdrawal than average.
The broken clock of the SIVH is right twice a day. It fails for party identification, Social Security, Medicare, abortion, job programs, national health insurance, Vietnam, and the draft. But it works tolerably well for a few scattered issues. You might expect to see the exceptions on big questions with a lot of money at stake, but the truth is almost the reverse. The SIVH shines brightest on the banal issue of smoking. Donald Green and Ann Gerken find that smokers and nonsmokers are ideologically and demographically similar, but smokers are a lot more opposed to restrictions and taxes on their favorite vice. Belief in “smokers’ rights” cleanly rises with daily cigarette consumption: fully 61.5% of “heavy” smokers want laxer antismoking policies, but only 13.9% of people who “never smoked” agree. If the SIVH were true, comparable patterns of belief would be everywhere. They are not.
↑ comment by Prismattic · 2014-01-15T02:19:29.247Z · LW(p) · GW(p)
The elderly are not more in favor of Social Security and Medicare than the rest of the population. Seniors strongly favor these programs, but so do the young. Contrary to the SIVH-inspired bumper sticker “If men got pregnant, abortion would be a sacrament,” men appear a little more pro-choice on abortion than women. Compared to the overall population, the unemployed are at most a little more in favor of government-guaranteed jobs, and the uninsured at most a little more supportive of national health insurance.
This is an absurdly narrow definition of self-interest. Many people who are not old have parents who are senior citizens. Men have wives, sisters, and daughters whose well-being is important to them. Etc. Self-interest != solipsistic egoism.
↑ comment by lukeprog · 2014-01-15T01:32:01.032Z · LW(p) · GW(p)
More (#1) from The Myth of the Rational Voter:
Marxist regimes — and Stalin in particular — treated biology and physics asymmetrically.
In biology, Stalin and other prominent Marxist leaders elevated the views of the quack antigeneticist Trofim Lysenko to state-supported orthodoxy, leading to the dismissal of thousands of geneticists and plant biologists. Lysenkoism hurt Soviet agriculture, and helped trigger the deadliest famine in human history during China’s Great Leap Forward.
In physics, on the other hand, leading scientists enjoyed more intellectual autonomy than any other segment of Soviet society. Internationally respected physicists ran the Soviet atomic project, not Marxist ideologues. When their rivals tried to copy Lysenko’s tactics, Stalin balked. A conference intended to start a witch hunt in Soviet physics was abruptly canceled, a decision that had to originate with Stalin. Holloway recounts a telling conversation between Beria, the political leader of the Soviet atomic project, and Kurchatov, its scientific leader: "Beria asked Kurchatov whether it was true that quantum mechanics and relativity theory were idealist, in the sense of antimaterialist. Kurchatov replied that if relativity theory and quantum mechanics were rejected, the bomb would have to be rejected too. Beria was worried by this reply, and may have asked Stalin to call off the conference."
The “Lysenkoization” of Soviet physics never came.
The best explanation for the difference is that modern physics had a practical payoff that Stalin and other Communist leaders highly valued: nuclear weapons.
And:
We encounter the price-sensitivity of irrationality whenever someone unexpectedly offers us a bet based on our professed beliefs. Suppose you insist that poverty in the Third World is sure to get worse in the next decade. A challenger immediately retorts, “Want to bet? If you’re really ‘sure,’ you won’t mind giving me ten-to-one odds.” Why are you unlikely to accept this wager? Perhaps you never believed your own words; your statements were poetry—or lies. But it is implausible to tar all reluctance to bet with insincerity. People often believe that their assertions are true until you make them “put up or shut up.” A bet moderates their views—that is, changes their minds—whether or not they retract their words.
How does this process work? Your default is to believe what makes you feel best. But an offer to bet triggers standby rationality. Two facts then come into focus. First, being wrong endangers your net worth. Second, your belief received little scrutiny before it was adopted. Now you have to ask yourself which is worse: Financial loss in a bet, or psychological loss of self-worth? A few prefer financial loss, but most covertly rethink their views. Almost no one “bets the farm” even if — pre-wager — he felt sure.
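A minimal sketch of the expected-value arithmetic behind “giving ten-to-one odds” (the dollar stakes and probabilities below are assumed purely for illustration; they are not Caplan’s figures):

```python
# Giving ten-to-one odds means risking $10 to win $1 if your prediction comes true.
stake_if_wrong = 10.0   # dollars lost if poverty does NOT get worse
payoff_if_right = 1.0   # dollars won if it does

def expected_value(p_right):
    """Expected dollar value of accepting the bet, given your probability of being right."""
    return p_right * payoff_if_right - (1 - p_right) * stake_if_wrong

# Break-even point: p_right = 10/11, about 0.91. Below that, accepting loses money
# in expectation -- which is why an offer to bet triggers "standby rationality."
for p in (0.70, 0.90, 0.95):
    print(f"P(right) = {p:.2f}: EV = ${expected_value(p):+.2f}")
```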
↑ comment by lukeprog · 2014-01-15T01:38:16.668Z · LW(p) · GW(p)
More (#3) from The Myth of the Rational Voter:
One striking instance of unreasoning deference: Shortly after 9/11, polls strangely found that the nation’s citizens suddenly had more faith in their government. How often can you “trust the government in Washington to do what is right”? In 2000, only 30% of Americans said “just about always” or “most of the time.” Two weeks after 9/11, that number more than doubled to 64%. It is hard to see consumers trusting GM more after a major accident forces a recall. The public’s reaction is akin to that of religious sects who mispredict the end of the world: “We believe now more than ever.”
↑ comment by Prismattic · 2014-01-15T02:16:39.610Z · LW(p) · GW(p)
Allow me to offer an alternative explanation of this phenomenon for consideration. Typically, when polled about their trust in institutions, people tend to trust the executive branch more than the legislature or the courts, and they trust the military far more than they trust civilian government agencies. In the period before 9/11, our long national nightmare of peace and prosperity would generally have made the military less salient in people's minds, and the spectacles of impeachment and Bush v. Gore would have made the legislative and judicial branches more salient in people's minds. After 9/11, the legislative agenda quieted down/the legislature temporarily took a back seat to the executive, and military and national security organs became very high salience. So when people were asked about the government, the most immediate associations would have been to the parts that were viewed as more trustworthy.
↑ comment by lukeprog · 2013-11-03T18:43:39.297Z · LW(p) · GW(p)
From Richard Rhodes' The Making of the Atomic Bomb:
[Experimental physicist Francis William Aston wrote:] "If we were able to transmute [hydrogen] into [helium] nearly 1 percent of the mass would be annihilated. [Because e=mc^2, as Einstein recently proved,] the quantity of energy liberated would be prodigious. Thus to change the hydrogen in a glass of water into helium would release enough energy to drive the Queen Mary across the Atlantic and back at full speed."
Aston goes on in this lecture, delivered in 1936, to speculate about the consequences of that energy release... "There are those about us who say that such research should be stopped by law, alleging that man’s destructive powers are already large enough. So, no doubt, the more elderly and ape-like of our prehistoric ancestors objected to the innovation of cooked food and pointed out the grave dangers attending the use of the newly discovered agency, fire. Personally I think there is no doubt that sub-atomic energy is available all around us, and that one day man will release and control its almost infinite power. We cannot prevent him from doing so and can only hope that he will not use it exclusively in blowing up his next door neighbor."
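A back-of-the-envelope check of Aston’s glass-of-water figure (assumed inputs: a 250 g glass of water and the roughly 0.7 percent mass defect of hydrogen-to-helium fusion; this is an illustration, not Aston’s own calculation):

```python
# Rough check of Aston's claim via E = m * c^2 (assumptions: 250 g of water,
# ~0.7% of the hydrogen's mass annihilated when fused into helium).
water_mass = 0.25                  # kg, a typical glass of water (assumption)
hydrogen_fraction = 2.016 / 18.02  # mass fraction of hydrogen in H2O, about 11%
mass_defect = 0.007                # fraction of fused mass converted to energy
c = 3.0e8                          # speed of light, m/s

energy = water_mass * hydrogen_fraction * mass_defect * c ** 2
print(f"Energy released: {energy:.1e} J")  # ~1.8e13 J, a few kilotons of TNT equivalent
```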
↑ comment by lukeprog · 2013-11-03T19:11:44.909Z · LW(p) · GW(p)
More (#2) from The Making of the Atomic Bomb:
After Alexander Sachs paraphrased the Einstein-Szilard letter to Roosevelt, Roosevelt demanded action, and Edwin Watson set up a meeting with representatives from the Bureau of Standards, the Army, and the Navy...
Szilard began by emphasizing the possibility of a chain reaction in a uranium-graphite system. Whether such a system would work, he said, depended on the capture cross section of carbon and that was not yet sufficiently known. If the value was large, they would know that a large-scale experiment would fail. If the value was extremely small, a large-scale experiment would look highly promising. An intermediate value would necessitate a large-scale experiment to decide. He estimated the destructive potential of a uranium bomb to be as much as twenty thousand tons of high-explosive equivalent. Such a bomb, he had written in the memorandum Sachs carried to Roosevelt, would depend on fast neutrons and might be “too heavy to be transported by airplane,” which meant he was still thinking of exploding natural uranium, not of separating U235.
Upon asking for some money to conduct the relevant experiments, the Army representative launched into a tirade:
"He told us that it was naive to believe that we could make a significant contribution to defense by creating a new weapon. He said that if a new weapon is created, it usually takes two wars before one can know whether the weapon is any good or not. Then he explained rather laboriously that it is in the end not weapons which win the wars, but the morale of the troops. He went on in this vein for a long time until suddenly Wigner, the most polite of us, interrupted him. [Wigner] said in his high-pitched voice that it was very interesting for him to hear this. He always thought that weapons were very important and that this is what costs money, and this is why the Army needs such a large appropriation. But he was very interested to hear that he was wrong: it’s not weapons but the morale which wins the wars. And if this is correct, perhaps one should take a second look at the budget of the Army, and maybe the budget could be cut."
"All right, all right," Adamson snapped, "you'll get your money."
↑ comment by lukeprog · 2013-11-03T19:31:58.057Z · LW(p) · GW(p)
More (#3) from The Making of the Atomic Bomb:
...the British Chemical Society asked [Otto] Frisch to write a review of advances in experimental nuclear physics for its annual report...
Frisch’s review article mentioned the possibility of a chain reaction only to discount it. He based that conclusion on Bohr’s argument that the U238 in natural uranium would scatter fast neutrons, slowing them to capture-resonance energies; the few that escaped capture would not suffice, he thought, to initiate a slow-neutron chain reaction in the scarce U235. Slow neutrons in any case could never produce more than a modest explosion, Frisch pointed out; they took too long slowing down and finding a nucleus. As he explained later: "That process would take times of the order of a sizeable part of a millisecond... and for the whole chain reaction to develop would take several milliseconds; once the material got hot enough to vaporize, it would begin to expand and the reaction would be stopped before it got much further. So the thing might blow up like a pile of gunpowder, but no worse, and that wasn’t worth the trouble."
Not long from Nazi Germany, Frisch found his argument against a violently explosive chain reaction reassuring. It was backed by the work of no less a theoretician than Niels Bohr. With satisfaction he published it.
...Concerned that Hitler might bluff Neville Chamberlain with threats of a new secret weapon, Churchill had collected a briefing from Frederick Lindemann and written to caution the cabinet not to fear “new explosives of devastating power” for at least “several years.” The best authorities, the distinguished M.P. emphasized with a nod to Niels Bohr, held that “only a minor constituent of uranium is effective in these processes.” That constituent would need to be laboriously extracted for any large-scale effects. “The chain process can take place only if the uranium is concentrated in a large mass,” Churchill continued, slightly muddling the point. “As soon as the energy develops, it will explode with a mild detonation before any really violent effects can be produced. It might be as good as our present-day explosives, but it is unlikely to produce anything very much more dangerous.” He concluded optimistically: “Dark hints will be dropped and terrifying whispers will be assiduously circulated, but it is to be hoped that nobody will be taken in by them.”
...[Several months later] Frisch walked home through ominous blackouts so dark that he sometimes stumbled over roadside benches and could distinguish fellow pedestrians only by the glow of the luminous cards they had taken to wearing in their hatbands. Thus reminded of the continuing threat of German bombing, he found himself questioning his confident Chemical Society review: “Is that really true what I have written?”
Sometime in February 1940 he looked again. There had always been four possible mechanisms for an explosive chain reaction in uranium: (1) slow-neutron fission of U238; (2) fast-neutron fission of U238; (3) slow-neutron fission of U235; and (4) fast-neutron fission of U235. Bohr’s logical distinction between U238 and thorium on the one hand and U235 on the other ruled out (1): U238 was not fissioned by slow neutrons. (2) was inefficient because of scattering and the parasitic effects of the capture resonance of U238. (3) was possibly applicable to power production but too slow for a practical weapon. But what about (4)? Apparently no one in Britain, France or the United States had asked the question quite that way before.
If Frisch now glimpsed an opening into those depths he did so because he had looked carefully at isotope separation and had decided it could be accomplished even with so fugitive an isotope as U235. He was therefore prepared to consider the behavior of the pure substance unalloyed with U238, as Bohr, Fermi and even Szilard had not yet been...
...He shared the problem with [Rudolf] Peierls... [and together they worked out that] eighty generations of neutrons — as many as could be expected to multiply before the swelling explosion separated the atoms of U235 enough to stop the chain reaction — still millionths of a second in total, gave temperatures as hot as the interior of the sun, pressures greater than the center of the earth where iron flows as a liquid. “I worked out the results of what such a nuclear explosion would be,” says Peierls. “Both Frisch and I were staggered by them.”
And finally, practically: could even a few pounds of U235 be separated from U238? Frisch writes: "I had worked out the possible efficiency of my separation system with the help of Clusius’s formula, and we came to the conclusion that with something like a hundred thousand similar separation tubes one might produce a pound of reasonably pure uranium-235 in a modest time, measured in weeks. At that point we stared at each other and realized that an atomic bomb might after all be possible."
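To see why “eighty generations” is enough, here is an illustrative sketch of the exponential growth involved (toy assumptions: the chain roughly doubles each generation and each fission releases about 200 MeV; this is not Frisch and Peierls’s actual calculation):

```python
# Toy model of a fast-neutron chain reaction: the number of fissions roughly
# doubles each generation, and each fission releases ~200 MeV.
generations = 80
fissions = 2 ** generations                  # ~1.2e24 fissions, about two moles of U235
energy_per_fission = 200e6 * 1.602e-19       # 200 MeV expressed in joules
total_energy = fissions * energy_per_fission

print(f"Fissions: {fissions:.2e}")
print(f"Energy:   {total_energy:.2e} J (~{total_energy / 4.184e12:.0f} kt of TNT)")
# With a fast-neutron generation time of order ten nanoseconds, eighty generations
# take on the order of a microsecond -- the "millionths of a second" in the text.
```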
Frisch and Peierls wrote a two-part report of their findings:
The first of the two parts they titled “On the construction of a ‘superbomb’; based on a nuclear chain reaction in uranium.” It was intended, they wrote, “to point out and discuss a possibility which seems to have been overlooked in... earlier discussions.” They proceeded to cover the same ground they had previously covered together in private, noting that “the energy liberated by a 5 kg bomb would be equivalent to that of several thousand tons of dynamite.” They described a simple mechanism for arming the weapon: making the uranium sphere in two parts “which are brought together first when the explosion is wanted. Once assembled, the bomb would explode within a second or less.” Springs, they thought, might pull the two small hemispheres together. Assembly would have to be rapid or the chain reaction would begin prematurely, destroying the bomb but not much else. A byproduct of the explosion—about 20 percent of its energy, they thought—would be radiation, the equivalent of “a hundred tons of radium” that would be “fatal to living beings even a long time after the explosion.” Effective protection from the weapon would be “hardly possible.”
The second report, “Memorandum on the properties of a radioactive ‘super-bomb,’” a less technical document, was apparently intended as an alternative presentation for nonscientists. This study explored beyond the technical questions of design and production to the strategic issues of possession and use; it managed at the same time both seemly innocence and extraordinary prescience:
As a weapon, the super-bomb would be practically irresistible. There is no material or structure that could be expected to resist the force of the explosion.
Owing to the spreading of radioactive substances with the wind, the bomb could probably not be used without killing large numbers of civilians, and this may make it unsuitable as a weapon for use by this country.
It is quite conceivable that Germany is, in fact, developing this weapon.
If one works on the assumption that Germany is, or will be, in the possession of this weapon, it must be realised that no shelters are available that would be effective and could be used on a large scale. The most effective reply would be a counter-threat with a similar weapon.
Thus in the first months of 1940 it was already clear to two intelligent observers that nuclear weapons would be weapons of mass destruction against which the only apparent defense would be the deterrent effect of mutual possession. Frisch and Peierls finished their two reports and took them to [Mark] Oliphant. He quizzed the men thoroughly, added a cover letter to their memoranda (“I have considered these suggestions in some detail and have had considerable discussion with the authors, with the result that I am convinced that the whole thing must be taken rather seriously, if only to make sure that the other side are not occupied in the production of such a bomb at the present time”) and sent letter and documents off to Henry Thomas Tizard...
“I have often been asked,” Otto Frisch wrote many years afterward of the moment when he understood that a bomb might be possible after all, before he and Peierls carried the news to Mark Oliphant, “why I didn’t abandon the project there and then, saying nothing to anybody. Why start on a project which, if it was successful, would end with the production of a weapon of unparalleled violence, a weapon of mass destruction such as the world had never seen? The answer was very simple. We were at war, and the idea was reasonably obvious; very probably some German scientists had had the same idea and were working on it.”
Whatever scientists of one warring nation could conceive, the scientists of another warring nation might also conceive — and keep secret. That early, in 1939 and early 1940, the nuclear arms race began.
↑ comment by lukeprog · 2013-11-03T19:01:04.240Z · LW(p) · GW(p)
More (#1) from The Making of the Atomic Bomb:
Fermi and Szilard had both written reports on their secondary-neutron experiments and were ready to send them to the Physical Review. With Pegram’s concurrence they decided to go ahead and mail the reports to the Review, to establish priority, but to ask the editor to delay publishing them until the secrecy issue could be resolved...
...If Bohr could be convinced to swing his prestige behind secrecy, the campaign to isolate German nuclear physics research might work.
They met in the evening in Wigner’s office. “Szilard outlined the Columbia data,” Wheeler reports, “and the preliminary indications from it that at least two secondary neutrons emerge from each neutron-induced fission. Did this not mean that a nuclear explosive was certainly possible?” Not necessarily, Bohr countered. “We tried to convince him,” Teller writes, “that we should go ahead with fission research but we should not publish the results. We should keep the results secret, lest the Nazis learn of them and produce nuclear explosions first. Bohr insisted that we would never succeed in producing nuclear energy and he also insisted that secrecy must never be introduced into physics.”
...[Bohr] had worked for decades to shape physics into an international community, a model within its limited franchise of what a peaceful, politically united world might be. Openness was its fragile, essential charter, an operational necessity, as freedom of speech is an operational necessity to a democracy. Complete openness enforced absolute honesty: the scientist reported all his results, favorable and unfavorable, where all could read them, making possible the ongoing correction of error. Secrecy would revoke that charter and subordinate science as a political system—Polanyi’s “republic”—to the anarchic competition of the nation-states.
...March 17 was a Friday; Szilard traveled down to Washington from Princeton with Teller; Fermi stayed the weekend. They got together, reports Szilard, “to discuss whether or not these things”—the Physical Review papers—“should be published. Both Teller and I thought that they should not. Fermi thought that they should. But after a long discussion, Fermi took the position that after all this was a democracy; if the majority was against publication, he would abide by the wish of the majority.” Within a day or two the issue became moot. The group learned of the Joliot/von Halban/Kowarski paper, published in Nature on March 18. “From that moment on,” Szilard notes, “Fermi was adamant that withholding publication made no sense.”
[About a month later, German physicist] Paul Harteck wrote a letter jointly with his assistant to the German War Office: "We take the liberty of calling to your attention the newest development in nuclear physics, which, in our opinion, will probably make it possible to produce an explosive many orders of magnitude more powerful than the conventional ones... That country which first makes use of it has an unsurpassable advantage over the others."
The Harteck letter reached Kurt Diebner, a competent nuclear physicist stuck unhappily in the Wehrmacht’s ordnance department studying high explosives. Diebner carried it to Hans Geiger. Geiger recommended pursuing the research. The War Office agreed.
On the origins of the Einstein–Szilárd letter:
Szilard told Einstein about the Columbia secondary neutron experiments and his calculations toward a chain reaction in uranium and graphite. Long afterward he would recall his surprise that Einstein had not yet heard of the possibility of a chain reaction. When he mentioned it Einstein interjected... “I never thought of that!” He was nevertheless, says Szilard, “very quick to see the implications and perfectly willing to do anything that needed to be done. He was willing to assume responsibility for sounding the alarm even though it was quite possible that the alarm might prove to be a false alarm. The one thing most scientists are really afraid of is to make fools of themselves. Einstein was free from such a fear and this above all is what made his position unique on this occasion.”
And:
By [August 1939] the Hungarians at least believed they saw major humanitarian benefit inherent in what Eugene Wigner would describe in retrospect as “a horrible military weapon,” explaining: "Although none of us spoke much about it to the authorities [during this early period] — they considered us dreamers enough as it was — we did hope for another effect of the development of atomic weapons in addition to the warding off of imminent disaster. We realized that, should atomic weapons be developed, no two nations would be able to live in peace with each other unless their military forces were controlled by a common higher authority. We expected that these controls, if they were effective enough to abolish atomic warfare, would be effective enough to abolish also all other forms of war. This hope was almost as strong a spur to our endeavors as was our fear of becoming the victims of the enemy’s atomic bombings."
From the horrible weapon which they were about to urge the United States to develop, Szilard, Teller and Wigner — “the Hungarian conspiracy,” Merle Tuve was amused to call them — hoped for more than deterrence against German aggression. They also hoped for world government and world peace, conditions they imagined bombs made of uranium might enforce.
↑ comment by lukeprog · 2013-11-06T03:32:11.982Z · LW(p) · GW(p)
More (#5) from The Making of the Atomic Bomb:
With fifty-three people aboard including the concert violinist the Hydro sailed on time. Forty-five minutes into the crossing, Haukelid’s charge of plastic explosive blew the hull. The captain felt the explosion rather than heard it, and though Tinnsjö is landlocked he thought they might have been torpedoed. The bow swamped first as Haukelid had intended; while the passengers and crew struggled to release the lifeboats, the freight cars with their thirty-nine drums of heavy water — 162 gallons mixed with 800 gallons of dross — broke loose, rolled overboard and sank like stones. Of passengers and crew twenty-six drowned. The concert violinist slipped high and dry into a lifeboat; when his violin case floated by, someone was kind enough to fish it out for him. Kurt Diebner of German Army Ordnance counted the full effect on German fission research of the Vemork bombing and the sinking of the Hydro in a postwar interview:
"When one considers that right up to the end of the war, in 1945, there was virtually no increase in our heavy-water stocks in Germany... it will be seen that it was the elimination of German heavy-water production in Norway that was the main factor in our failure to achieve a self-sustaining atomic reactor before the war ended.
The race to the bomb, such as it was, ended for Germany on a mountain lake in Norway on a cold Sunday morning in February 1944.
↑ comment by lukeprog · 2013-11-06T02:52:51.680Z · LW(p) · GW(p)
More (#4) from The Making of the Atomic Bomb:
Two associates of Soviet physicist Igor Kurchatov reported to the Physical Review in June 1940 that they had observed rare spontaneous fissioning in uranium. “The complete lack of any American response to the publication of the discovery,” writes the American physicist Herbert F. York, “was one of the factors which convinced the Russians that there must be a big secret project under way in the United States.”
And:
[An experiment] left the German project with two possible moderator materials: graphite and heavy water. In January a misleading measurement reduced that number to one. At Heidelberg Walther Bothe, an exceptional experimentalist who would eventually share a Nobel Prize with Max Born, measured the absorption cross section of carbon using a 3.6-foot sphere of high-quality graphite submerged in a tank of water. He found a cross section of 6.4 × 10^-27 cm^2, more than twice Fermi’s value, and concluded that graphite, like ordinary water, would absorb too many neutrons to sustain a chain reaction in natural uranium. Von Halban and Kowarski, now at Cambridge and in contact with the MAUD Committee, similarly overestimated the carbon cross section — the graphite in both experiments was probably contaminated with neutron-absorbing impurities such as boron — but their work was eventually checked against Fermi’s. Bothe could make no such check. The previous fall Szilard had assaulted Fermi with another secrecy appeal:
"When [Fermi] finished his [carbon absorption] measurement the question of secrecy again came up. I went to his office and said that now that we had this value perhaps the value ought not to be made public. And this time Fermi really lost his temper; he really thought this was absurd. There was nothing much more I could say, but next time when I dropped in his office he told me that Pegram had come to see him, and Pegram thought that this value should not be published. From that point the secrecy was on."
It was on just in time to prevent German researchers from pursuing a cheap, effective moderator. Bothe’s measurement ended German experiments on graphite.
And:
Leo Szilard was known by now throughout the American physics community as the leading apostle of secrecy in fission matters. To his mailbox, late in May 1940, came a puzzled note from a Princeton physicist, Louis A. Turner. Turner had written a Letter to the Editor of the Physical Review, a copy of which he enclosed. It was entitled “Atomic energy from U238” and he wondered if it should be withheld from publication. “It seems as if it was wild enough speculation so that it could do no possible harm,” Turner told Szilard, “but that is for someone else to say.”
Turner had published a masterly twenty-nine-page review article on nuclear fission in the January Reviews of Modern Physics, citing nearly one hundred papers that had appeared since Hahn and Strassmann reported their discovery twelve months earlier; the number of papers indicates the impact of the discovery on physics and the rush of physicists to explore it. Turner had also noted the recent Nier/Columbia report confirming the attribution of slow-neutron fission to U235. (He could hardly have missed it; the New York Times and other newspapers publicized the story widely. He wrote Szilard irritably or ingenuously that he found it “a little difficult to figure out the guiding principle [of keeping fission research secret] in view of the recent ample publicity given to the separation of isotopes.”) His reading for the review article and the new Columbia measurements had stimulated him to further thought; the result was his Physical Review letter.
...Szilard... answered Turner’s letter on May 30... [and] told him “it might eventually turn out to be a very important contribution” — and proposed he keep it secret. Szilard saw beyond what Turner had seen. He saw that a fissile element bred in uranium could be chemically separated away: that the relatively easy and relatively inexpensive process of chemical separation could replace the horrendously difficult and expensive process of physical separation of isotopes as a way to a bomb.
And:
“Oppenheimer wanted me to be the associate director,” [I.I. Rabi] told an interviewer many years later. “I thought it over and turned him down. I said, ‘I’m very serious about this war. We could lose it with insufficient radar.’” The Columbia physicist thought radar more immediately important to the defense of his country than the distant prospect of an atomic bomb. Nor did he choose to work full time, he told Oppenheimer, to make “the culmination of three centuries of physics” a weapon of mass destruction. Oppenheimer responded that he would take “a different stand” if he thought the atomic bomb would serve as such a culmination. “To me it is primarily the development in time of war of a military weapon of some consequence.” Either Oppenheimer had not yet thought his way through to a more millenarian view of the new weapon’s implications or he chose to avoid discussing those implications with Rabi. He asked Rabi only to participate in an inaugural physics conference at Los Alamos in April 1943 and to help convince others, particularly Hans Bethe, to sign on. Eventually Rabi would come and go as a visiting consultant, one of the very few exceptions to Groves’ compartmentalization and isolation rules.
And:
Work toward an atomic bomb had begun in the USSR in 1939. A thirty-six-year-old nuclear physicist, Igor Kurchatov, the head of a major laboratory since his late twenties, alerted his government then to the possible military significance of nuclear fission. Kurchatov suspected that fission research might be under way already in Nazi Germany. Soviet physicists realized in 1940 that the United States must also be pursuing a program when the names of prominent physicists, chemists, metallurgists and mathematicians disappeared from international journals: secrecy itself gave the secret away.
↑ comment by lukeprog · 2014-08-02T18:39:32.563Z · LW(p) · GW(p)
From Poor Economics:
As this book has shown, although we have no magic bullets to eradicate poverty, no one-shot cure-all, we do know a number of things about how to improve the lives of the poor. In particular, five key lessons emerge.
First, the poor often lack critical pieces of information and believe things that are not true. They are unsure about the benefits of immunizing children; they think there is little value in what is learned during the first few years of education; they don’t know how much fertilizer they need to use; they don’t know which is the easiest way to get infected with HIV; they don’t know what their politicians do when in office. When their firmly held beliefs turn out to be incorrect, they end up making the wrong decision, sometimes with drastic consequences — think of the girls who have unprotected sex with older men or the farmers who use twice as much fertilizer as they should. Even when they know that they don’t know, the resulting uncertainty can be damaging. For example, the uncertainty about the benefits of immunization combines with the universal tendency to procrastinate, with the result that a lot of children don’t get immunized. Citizens who vote in the dark are more likely to vote for someone of their ethnic group, at the cost of increasing bigotry and corruption.
We saw many instances in which a simple piece of information makes a big difference. However, not every information campaign is effective. It seems that in order to work, an information campaign must have several features: It must say something that people don’t already know (general exhortations like “No sex before marriage” seem to be less effective); it must do so in an attractive and simple way (a film, a play, a TV show, a well-designed report card); and it must come from a credible source (interestingly, the press seems to be viewed as credible). One of the corollaries of this view is that governments pay a huge cost in terms of lost credibility when they say things that are misleading, confusing, or false.
Second, the poor bear responsibility for too many aspects of their lives. The richer you are, the more the “right” decisions are made for you. The poor have no piped water, and therefore do not benefit from the chlorine that the city government puts into the water supply. If they want clean drinking water, they have to purify it themselves. They cannot afford ready-made fortified breakfast cereals and therefore have to make sure that they and their children get enough nutrients. They have no automatic way to save, such as a retirement plan or a contribution to Social Security, so they have to find a way to make sure that they save. These decisions are difficult for everyone because they require some thinking now or some other small cost today, and the benefits are usually reaped in the distant future. As such, procrastination very easily gets in the way. For the poor, this is compounded by the fact that their lives are already much more demanding than ours: Many of them run small businesses in highly competitive industries; most of the rest work as casual laborers and need to constantly worry about where their next job will come from. This means that their lives could be significantly improved by making it as easy as possible to do the right thing — based on everything else we know — using the power of default options and small nudges: Salt fortified with iron and iodine could be made cheap enough that everyone buys it. Savings accounts, the kind that make it easy to put in money and somewhat costlier to take it out, can be made easily available to everyone, if need be, by subsidizing the cost for the bank that offers them. Chlorine could be made available next to every source where piping water is too expensive. There are many similar examples.
Third, there are good reasons that some markets are missing for the poor, or that the poor face unfavorable prices in them. The poor get a negative interest rate from their savings accounts (if they are lucky enough to have an account) and pay exorbitant rates on their loans (if they can get one) because handling even a small quantity of money entails a fixed cost. The market for health insurance for the poor has not developed, despite the devastating effects of serious health problems in their lives, because the limited insurance options that can be sustained in the market (catastrophic health insurance, formulaic weather insurance) are not what the poor want.
In some cases, a technological or an institutional innovation may allow a market to develop where it was missing. This happened in the case of microcredit, which made small loans at more affordable rates available to millions of poor people, although perhaps not the poorest. Electronic money transfer systems (using cell phones and the like) and unique identification for individuals may radically cut the cost of providing savings and remittance services to the poor over the next few years. But we also have to recognize that in some cases, the conditions for a market to emerge on its own are simply not there. In such cases, governments should step in to support the market to provide the necessary conditions, or failing that, consider providing the service themselves.
We should recognize that this may entail giving away goods or services (such as bed nets or visits to a preventive care center) for free or even rewarding people, strange as it might sound, for doing things that are good for them. The mistrust of free distribution of goods and services among various experts has probably gone too far, even from a pure cost-benefit point of view. It often ends up being cheaper, per person served, to distribute a service for free than to try to extract a nominal fee. In some cases, it may involve ensuring that the price of a product sold by the market is attractive enough to allow the market to develop. For example, governments could subsidize insurance premiums, or distribute vouchers that parents can take to any school, private or public, or force banks to offer free “no frills” savings accounts to everyone for a nominal fee. It is important to keep in mind that these subsidized markets need to be carefully regulated to ensure they function well. For example, school vouchers work well when all parents have a way of figuring out the right school for their child; otherwise, they can turn into a way of giving even more of an advantage to savvy parents.
Fourth, poor countries are not doomed to failure because they are poor, or because they have had an unfortunate history. It is true that things often do not work in these countries: Programs intended to help the poor end up in the wrong hands, teachers teach desultorily or not at all, roads weakened by theft of materials collapse under the weight of overburdened trucks, and so forth. But many of these failures have less to do with some grand conspiracy of the elites to maintain their hold on the economy and more to do with some avoidable flaw in the detailed design of policies, and the ubiquitous three Is: ignorance, ideology, and inertia. Nurses are expected to carry out jobs that no ordinary human being would be able to complete, and yet no one feels compelled to change their job description. The fad of the moment (be it dams, barefoot doctors, microcredit, or whatever) is turned into a policy without any attention to the reality within which it is supposed to function. We were once told by a senior government official in India that the village education committees always include the parent of the best student in the school and the parent of the worst student in the school. When we asked how they decided who were the best and worst children, given that there are no tests until fourth grade, she quickly changed subjects. And yet even these absurd rules, once in place, keep going out of sheer inertia.
The good news, if that is the right expression, is that it is possible to improve governance and policy without changing the existing social and political structures. There is tremendous scope for improvement even in “good” institutional environments, and some margin for action even in bad ones. A small revolution can be achieved by making sure that everyone is invited to village meetings; by monitoring government workers and holding them accountable for failures in performing their duties; by monitoring politicians at all levels and sharing this information with voters; and by making clear to users of public services what they should expect—what the exact health center hours are, how much money (or how many bags of rice) they are entitled to.
Finally, expectations about what people are able or unable to do all too often end up turning into self-fulfilling prophecies. Children give up on school when their teachers (and sometimes their parents) signal to them that they are not smart enough to master the curriculum; fruit sellers don’t make the effort to repay their debt because they expect that they will fall back into debt very quickly; nurses stop coming to work because nobody expects them to be there; politicians whom no one expects to perform have no incentive to try improving people’s lives. Changing expectations is not easy, but it is not impossible: After seeing a female pradhan in their village, villagers not only lost their prejudice against women politicians but even started thinking that their daughter might become one, too; teachers who are told that their job is simply to make sure that all the children can read can accomplish that task within the duration of a summer camp. Most important, the role of expectations means that success often feeds on itself. When a situation starts to improve, the improvement itself affects beliefs and behavior. This is one more reason one should not necessarily be afraid of handing things out (including cash) when needed to get a virtuous cycle started.
↑ comment by lukeprog · 2014-05-29T04:31:40.272Z · LW(p) · GW(p)
From The Visioneers:
[Gerard] O’Neill also looked to the technological changes he had seen in his own lifetime as another means of setting boundary conditions. When he was a boy, a DC-3 airliner could carry only a few dozen passengers some thousand miles. Three decades later, a Boeing 747 carried hundreds of passengers on nonstop transoceanic flights. Similarly, O’Neill considered the massive growth in computing power that he had witnessed over his twenty years as a research scientist. O’Neill extended the same sort of extrapolations to the tools of space travel. As a result, projections we might dismiss today as wildly overoptimistic appeared less so circa 1973, in the wake of a decade that saw such spectacular American and Soviet successes in space.
And:
Given its relative simplicity— the underlying physics dated to the late nineteenth century, and there were few moving parts other than the buckets— O’Neill saw the mass driver as an elegant solution. It violated no laws of physics and, after O’Neill carefully worked out its power requirements, dimensions, and performance, he concluded that it could be built with existing or soon-to-be available equipment. But, like the rest of his designs for space settlements, the feasibility of O’Neill’s mass driver was predicated on a series of optimistic assumptions and extrapolations: a lunar outpost could be established, physicists would see continued progress in improving the capabilities of superconducting wires, and so forth. And all this, of course, rested on another, broader set of assumptions about economics, the accuracy of NASA’s long-term projections, and sufficient public support. As O’Neill and other visioneers all discovered, just having a sound set of calculations, some inspiring drawings, and a vision for the future wasn’t enough.
And:
Starting in the early 1990s, a backlash against Drexler and Drexlerian nanotechnology emerged and grew. When Science published a 1991 piece called “The Apostle of Nanotechnology,” its title employed a trope frequently used in attacks on Drexler’s ideas. Journalists regularly used words like “messiah,” “guru,” “prophet,” and “nanoevangelist” to describe Drexler and, displaying a willingness to span the biblical testaments, critics likened him to both Moses and John the Baptist. Phillip Barth, an engineer at computer giant Hewlett-Packard, took this analogy even further when his posting to the Internet discussion group “sci.nanotech” speculated as to whether “nanoism” might become the “next great mass-movement” in the tradition of Christianity, Islam, or communism. When interviewed for Science, Barth dismissed Drexler’s visioneering, saying “you might as well call it nanoreligion.”
And:
scientists have often taken issue with colleagues’ popularizing activities, sometimes expressing the view that one should engage the public only at the end of one’s research career. For instance, Carl Sagan’s “vulgar” works (most notably the television series Cosmos, which he did midcareer) supposedly sabotaged his election to the National Academy of Sciences. Gerard O’Neill, meanwhile, became an advocate for space colonies and a public figure only after two decades of work as a respected physicist at an Ivy League school. Drexler, however, broke from this pattern, publishing his modest oeuvre of “real” research only after promoting nanotechnology in a popular book.
↑ comment by lukeprog · 2014-03-26T01:48:01.986Z · LW(p) · GW(p)
From Priest & Arkin's Top Secret America:
“It’s the soccer ball syndrome. Something happens, and they want to rush to cover it,” said Richard H. Immerman, who, until 2009, was the assistant deputy director of national intelligence for Analytic Integrity, the office that oversees analysis for all the agencies but has little power over how individual agencies conduct their work. “I saw tremendous overlap” in what analysts worked on. “There’s no systematic and rigorous division of labor.” Even the analysts at the gigantic National Counterterrorism Center (NCTC)—established in 2003 as the pinnacle of intelligence, the repository of the most sensitive, most difficult-to-obtain nuggets of information—got low marks from intelligence officials for not producing reports that were original, or even just better than those already written by the CIA, the FBI, the National Security Agency, or the Defense Intelligence Agency.
It’s not an academic insufficiency. When John M. Custer III was the director of intelligence at U.S. Central Command, he grew angry at how little helpful information came out of the NCTC. In 2007, he visited its director at the time, retired vice admiral John Scott Redd, to say so, loudly. “I told him,” Custer explained to me, “that after four and a half years, this organization had never produced one shred of information that helped me prosecute three wars!” Redd was not apologetic. He believed the system worked well, saying it wasn’t designed to serve commanders in the field but policy makers in Washington. That explanation sounded like a poor excuse to Custer. Mediocre information was mediocre information, no matter on whose desk it landed.
Two years later, as head of the army’s intelligence school at Fort Huachuca, Arizona, Custer still got red-faced when he recalled that day and his general frustration with Washington’s bureaucracy. “Who has the mission of reducing redundancy and ensuring everybody doesn’t gravitate to the lowest-hanging fruit?” he asked. “Who orchestrates what is produced so that everybody doesn’t produce the same thing?” The answer in Top Secret America was, dangerously, nobody.
This sort of wasteful redundancy is endemic in Top Secret America, not just in analysis but everywhere. Born of the blank check that Congress first gave national security agencies in the wake of the 9/11 attacks, Top Secret America’s wasteful duplication was cultivated by the bureaucratic instinct that bigger is always better, and by the speed at which big departments like defense allowed their subagencies to grow.
↑ comment by lukeprog · 2014-03-26T02:02:17.401Z · LW(p) · GW(p)
More (#2) from Top Secret America:
The interagency group briefing slide on the status of WMD consequence management again seemed designed to minimize the appearance of any loss on the part of NorthCom, but the truth of the command’s diminished status, even in this, the one area in which it had seemed to have unambiguous leadership, showed up in a final bullet: under the new arrangements, all the response units weren’t even obligated to come to the aid of NorthCom; rather, the military services could make forces available “to the greatest extent possible.”
These developments were heartbreaking to those who had spent years building up Northern Command. But the fact that Northern Command would even continue to exist as a major, four-star-led, geographic military command, with virtually no responsibilities, no competencies, and no unique role to fill, demonstrated the resiliency of institutions created in the wake of 9/11 and just how difficult it would be to ever actually shrink Top Secret America. Northern Command, with its $100 million renovated concrete headquarters, its two dozen generals, its redundant command centers, its gigantic electronic map, and its multitude of contractors, looked as busy as ever, putting together agendas and exercises and PowerPoint briefings in the name of keeping the nation safe.
And, on JSOC:
Besides the damage inflicted on the enemy by the CIA’s killer drones, paramilitary forces killed dozens of al-Qaeda leaders and hundreds of its foot soldiers in the decade after 9/11. But troops from a more mysterious organization, based in North Carolina, have killed easily ten times as many al-Qaeda, and hundreds of Iraqi insurgents as well.
This secretive organization, created in 1980 but completely reinvented in 2003, flies ten times more drones than the CIA. Some are armed with Hellfire missiles; most carry video cameras, sensors, and signals intercept equipment. When the CIA’s paramilitary Special Activities Division needs help, or when the president decides to send agency operatives on a covert mission into a foreign country, it often borrows troops from this same organization, temporarily deputizing them when necessary in order to get the missions done.
The CIA has captured, imprisoned, and interrogated close to a hundred terrorists in secret prisons around the world. Troops from this other secret military unit have captured and interrogated ten times as many. They hold them in prisons in Iraq and Afghanistan that they alone control and, for at least three years after 9/11, they sometimes ignored U.S. military rules for interrogation and used almost whatever means they thought might be most effective.
Of all the top secret units fighting terrorism after 9/11, this is the single organization that has killed and captured more al-Qaeda members around the world and destroyed more of their training camps and safe houses than the rest of the U.S. government forces combined. And although it greatly benefited from the technology produced by Top Secret America, the secret to its success has been otherwise escaping the behemoth created in response to the 9/11 attacks.
And:
The other challenge JSOC faced was a human one: how its troops were interrogating and treating detainees. Shortly after McChrystal took command in September 2003, he visited the JSOC detention facility in Iraq, a place separate from the larger Abu Ghraib prison that would become notorious for prisoner abuse at the hands of low-level army soldiers. There was a skeletal staff of about thirteen people, meaning they had no time to try to cajole detainees into divulging important intelligence. There was little or no information about individual detainees for interrogators to use to question them in a more productive way. As a result, interrogators didn’t know what questions to ask or how to ask them to get a response.
Worse, some JSOC Task Force 121 members were beating prisoners—something that would before long become known to Iraqis and the rest of the world. Indeed, even before the Abu Ghraib prison photos began circulating among investigators, a confidential report warned army generals that some JSOC interrogators were assaulting prisoners and hiding them in secret facilities, and that this could be feeding the Iraqi insurgency by “making gratuitous enemies,” reported the Washington Post’s Josh White, who first obtained a copy of the report by retired colonel Stuart A. Herrington.
That wasn’t the only extreme: in an effort to force insurgents to turn themselves in, some JSOC troops also detained mothers, wives, and daughters when the men in a house they were looking for were not at home. These detentions and other massive sweep operations flooded prisons with terrified, innocent people—some of them were more like hostages than suspects—that was particularly counterproductive to winning Iraqi support, Herrington noted.
And:
Swiftly, Obama declassified Bush-era directives on interrogations and then banned the harsh techniques. He announced that he would close the military prison at Guantánamo, but he backed off on this under political pressure. He promised to try alleged terrorists in criminal courts but backed down on that too. The covert action review proceeded as planned.
When it was finished, the new administration had “changed virtually nothing,” said Rizzo. “Things continued. Authorities were continued that were originally granted by President Bush beginning shortly after 9/11. Those were all picked up, reviewed, and endorsed by the Obama administration.”
Like that of his predecessor, Obama’s Justice Department has also aggressively used the state secrets privilege to quash court challenges to clandestine government actions. The privilege is a rule that permits the executive branch to withhold evidence in a court case when it believes national security would be harmed by its public release. From January 2001 to January 2009, the government invoked the state secrets privilege in more than one hundred cases, which is more than five times the number of cases invoked in all previous administrations, according to a study by the Georgetown Law Center on National Security and the Law. The Obama administration also initiated more leak investigations against national security whistle-blowers and journalists than had the Bush administration, hoping, at the very least, to scare government employees with security clearances into not speaking with reporters.
And the growth of Top Secret America continued, too. In the first month of the administration, four new intelligence and Special Operations organizations that had already been in the works were activated. But by the end of 2009, some thirty-nine new or reorganized counterterrorism organizations came into being. This included seven new counterterrorism and intelligence task forces overseas and ten Special Operations and military intelligence units that were created or substantially reorganized. The next year, 2010, was just as busy: Obama’s Top Secret America added twenty-four new organizations and a dozen new task forces and military units, although the wars in Afghanistan and Iraq were winding down.
↑ comment by Shmi (shminux) · 2014-03-26T02:20:52.769Z · LW(p) · GW(p)
I wonder if the security-industrial complex bureaucracy is any better in other countries.
↑ comment by lukeprog · 2014-03-26T04:08:55.269Z · LW(p) · GW(p)
Stay tuned; The Secret History of MI6 and Defend the Realm are in my audiobook queue. :)
↑ comment by lukeprog · 2014-03-26T01:55:31.105Z · LW(p) · GW(p)
More (#1) from Top Secret America:
As we learned more about Top Secret America, we sometimes thought Osama bin Laden must have been gloating. There was so much for him to take satisfaction from: the chronic elevation of Homeland Security’s color-coded threat warning, the anxious mood and culture of fear that had taken hold of public discussions about al-Qaeda, the complete contortions the government and media went through every time there was a terrorist bombing overseas or a near-miss at home. We imagined bin Laden and his sidekick, Ayman Zawahiri, pleased most by this uncontrollable American spending spree in the midst of an economic downturn. It was evident from the audiotapes secretly released after 9/11 that they both followed the news and would have known that thousands of people had lost their homes, that many more had lost their jobs, that states were cutting back on health care for poor children and on education just to stay afloat and to allow state fusion centers and mini-homeland security offices everywhere to stay open. They would have known, too, that the major American political parties were tearing themselves apart over how to stop deficit spending and reverse the economic free fall, and that they still feared al-Qaeda as a threat more frightening than the Soviet superpower of the cold war.
And this is exactly what a terrorist organization would want. With no hope of defeating a much better equipped and professional nation-state army, terrorists hoped to get their adversary to overreact, to bleed itself dry, and to trample the very values it tried to protect. In this sense, al-Qaeda—though increasingly short on leaders and influence (a fact no one in Top Secret America would ever say publicly, just in case there was another attack)—was doing much more damage to its enemy than it had on 9/11.
And:
Terrorists in Yemen were thought to be actively plotting to strike the American homeland, and, in response, President Obama had signed an order sending dozens of secret commandos there. The commandos had set up a joint operations center in Yemen and packed it with consoles, hard drives, forensic kits, and communications gear. They exchanged thousands of intercepts, agent reports, photographic evidence, and real-time video surveillance with dozens of top-secret organizations serving their needs from the United States. That was the system as it was intended.
But when that dreaded but awaited intelligence about threats originating in Yemen reached the National Counterterrorism Center for analysis, it arrived buried within the daily load of thousands of snippets of general terrorist-related data from around the world that Leiter said all needed to be given equal attention.
Instead of searching one network of computerized intelligence reports, NCTC analysts had to switch from database to database, from hard drive to hard drive, from screen to screen, merely to locate the Yemen material that might be interesting to study further. If they wanted raw material—transcripts of voice intercepts or email exchanges that had not been analyzed and condensed by the CIA or NSA—they had to use liaison officers assigned to those agencies to try to find it, or call people they happened to know there and try to persuade them to locate it. As secret U.S. military operations in Yemen intensified and the chatter about a possible terrorist strike in the United States increased, the intelligence agencies further ramped up their effort. That meant that the flood of information coming into the NCTC became a torrent, a fire hose instead of an eyedropper.
Somewhere in that deluge was Umar Farouk Abdulmutallab. He showed up in bits and pieces. In August, NSA intercepted al-Qaeda conversations about an unidentified “Nigerian.” They had only a partial name. In September, the NSA intercepted a communication about Awlaki—the very same person Major Hasan had contacted—facilitating transportation for someone through Yemen. There was also a report from the CIA station in Nigeria of a father who was worried about his son because he had become interested in radical teachings and had gone to Yemen.
But even at a time of intense secret military operations going on in the country, the many clues to what was about to happen went missing in the immensity and complexity of the counterterrorism system. Abdulmutallab left Yemen, returned to Nigeria, and on December 16 purchased a one-way ticket to the United States. Once again, connections hiding in plain sight went unnoticed.
“There are so many people involved here,” Leiter later told Congress.
“Everyone had the dots to connect,” DNI Blair explained to lawmakers. “But I hadn’t made it clear exactly who had primary responsibility.”
Waltzing through the gaping holes in the security net, Abdulmutallab was able to step aboard Northwest Airlines Flight 253 without any difficulty. As the plane descended toward Detroit, he returned from the bathroom with a pillow over his stomach and tried to ignite explosives hidden in his underwear. And just as the billions of dollars and tens of thousands of security-cleared personnel of the massive 9/11 apparatus hadn’t prevented Abdulmutallab from getting to this moment, it did nothing now to prevent disaster. Instead, a Dutch video producer, Jasper Schuringa, dove across four airplane seats to tackle the twenty-three-year-old when he saw him trying to light something on fire.
The secretary of Homeland Security, Janet Napolitano, was the first to address the public afterward. She was happy to announce that “once the incident occurred, the system worked.” The next day, however, she admitted the system that had allowed him onto the plane with an explosive had “failed miserably.”
“We didn’t follow up and prioritize the stream of intelligence,” White House counterterrorism adviser John O. Brennan explained later, “because no one intelligence entity, or team, or task force, was assigned responsibility for doing that follow-up investigation.”
Incredible as it was, after all this time, after all these reorganizations, after all the money spent to get things right, no one person was actually responsible for counterterrorism. And no one is responsible today, either.
↑ comment by lukeprog · 2014-02-22T23:09:14.694Z · LW(p) · GW(p)
From Pentland's Social Physics:
what can a single individual do to increase rate of idea flow in their part of their social network? Fortunately, there are many ways. In 1985, Bob Kelly of Carnegie Mellon University launched the now famous Bell Stars study. Bell Laboratories, a premier research laboratory, wanted to know more about what separates a star performer from the average performer. Is it something innate or can star performance be learned? Bell Labs already hired the best and the brightest from the world’s most prestigious universities, but only a few lived up to their apparent potential for brilliance. Instead, most hires developed into solid performers but did not contribute substantially to AT&T’s competitive advantage in the marketplace.
What Kelly found was that star producers engage in “preparatory exploration”; that is, they develop dependable two-way streets to experts ahead of time, setting up a relationship that will later help the star producer complete critical tasks. Moreover, the stars’ networks differed from typical workers’ networks in two important respects. First, they maintained stronger engagement with the people in their networks, so that these people responded more quickly and helpfully. As a result, the stars rarely spent time spinning their wheels or going down blind alleys.
Second, star performers’ networks were also more diverse. Average performers saw the world only from the viewpoint of their job, and kept pushing the same points. Stars, on the other hand, had people in their networks with a more diverse set of work roles, so they could adopt the perspectives of customers, competitors, and managers. Because they could see the situation from a variety of viewpoints, they could develop better solutions to problems.
↑ comment by lukeprog · 2014-02-22T23:23:19.970Z · LW(p) · GW(p)
More (#2) from Social Physics:
For the entire community, we measured activity levels by using the accelerometer sensors embedded in their mobile phones. Unlike typical social science experiments, FunFit was conducted out in the real world, with all the complications of daily life. In addition, we collected hundreds of thousands of hours and hundreds of gigabytes of contextual data, so that we could later go back and see which factors had the greatest effect.
On average, it turned out that the social network incentive scheme worked almost four times more efficiently than a traditional individual-incentive market approach. For the buddies who had the most interactions with their assigned target, the social network incentive worked almost eight times better than the standard market approach.
And better yet, it stuck. People who received social network incentives maintained their higher levels of activity even after the incentives disappeared. These small but focused social network incentives generated engagement around new, healthier habits of behavior by creating social pressure for behavior change in the community.
And:
Unexpectedly, we found that the factors most people usually think of as driving group performance—i.e., cohesion, motivation, and satisfaction—were not statistically significant. The largest factor in predicting group intelligence was the equality of conversational turn taking; groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn taking. The second most important factor was the social intelligence of a group’s members, as measured by their ability to read each other’s social signals. Women tend to do better at reading social signals, so groups with more women tended to do better...
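(An aside, not from the book: "equality of conversational turn taking" can be made concrete in several ways. The toy sketch below uses the normalized entropy of each member's turn count as one plausible measure, not necessarily the one used in the study: it is 1.0 when everyone takes equally many turns and drops toward 0 when a few people dominate.)

```python
import math

def turn_taking_equality(turn_counts):
    """Normalized entropy of speaking-turn counts: 1.0 means perfectly equal
    turn taking; values near 0 mean one or two people dominate.
    (Illustrative measure only; not necessarily the study's exact metric.)"""
    total = sum(turn_counts)
    if total == 0 or len(turn_counts) < 2:
        return 0.0
    probs = [c / total for c in turn_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(turn_counts))

# Made-up examples: an egalitarian group vs. one dominated by a single speaker.
print(round(turn_taking_equality([10, 9, 11, 10]), 2))  # close to 1.0
print(round(turn_taking_equality([35, 2, 2, 1]), 2))    # much lower (~0.37)
```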
↑ comment by lukeprog · 2014-02-22T23:15:39.395Z · LW(p) · GW(p)
More (#1) from Social Physics:
When people are behaving independently of their social learning, it is likely that they have independent information and that they believe in that information enough to fight the effects of social influence. Find as many of these “wise guys” as possible and learn from them. Such contrarians sometimes have the best ideas, but sometimes they are just oddballs. How can you know which is which? If you can find many such independent thinkers and discover that there is a consensus among a large subset of them, then a really, really good trading strategy is to follow the contrarian consensus. For instance, in the eToro network the consensus of these independent strategies is reliably more than twice as good as the best human trader.
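(The "follow the contrarian consensus" rule described above is easy to make concrete. The sketch below is not from the book; the data shapes, field names, and thresholds are invented for illustration. It just encodes the two steps in the passage: keep only traders whose behavior looks independent of the crowd, then follow them only when a large share of them agree.)

```python
from collections import Counter

def contrarian_consensus(traders, min_independents=3,
                         independence_threshold=0.7, agreement_threshold=0.6):
    """Toy version of 'follow the contrarian consensus'.

    traders: list of dicts like
        {"name": "alice", "independence": 0.9, "position": "buy"}
    where 'independence' is a hypothetical 0-1 score of how little the
    trader's actions track the crowd. Returns the consensus position of
    the independent traders, or None if there are too few of them or
    they do not agree strongly enough.
    """
    # Step 1: keep only traders who act independently of social influence.
    independents = [t for t in traders if t["independence"] >= independence_threshold]
    if len(independents) < min_independents:
        return None

    # Step 2: follow them only if a large subset of them agree.
    counts = Counter(t["position"] for t in independents)
    position, votes = counts.most_common(1)[0]
    return position if votes / len(independents) >= agreement_threshold else None

# Invented example data:
traders = [
    {"name": "a", "independence": 0.9,  "position": "buy"},
    {"name": "b", "independence": 0.8,  "position": "buy"},
    {"name": "c", "independence": 0.75, "position": "buy"},
    {"name": "d", "independence": 0.2,  "position": "sell"},  # crowd-follower, ignored
]
print(contrarian_consensus(traders))  # -> "buy"
```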
And:
to answer the question of how habits form, my research group studied the spread of health behaviors in a tightly knit undergraduate dorm for one year. In the Social Evolution Study, led by PhD student Anmol Madan and myself, with Professor David Lazer helping with design of the experiment and data analysis, we gave all the participating students smartphones with special software so that we could track their social interactions with both close friends and acquaintances. In total, this study produced more than five hundred thousand hours of data and included face-to-face interactions, phone calls, and texting, as well as extensive surveys and weight measurements. These hundreds of gigabytes of data allowed us to examine what goes into the creation of habits.
One particular health behavior that we focused on was weight change and on whether this was more influenced by the behavior of friends or by peers in the surrounding community...
exposure to the behavior examples that surrounded each individual dominated everything else we examined in this study. It was more important than personal factors, such as weight gain by friends, gender, age, or stress/happiness, and even more than all these other factors combined. Put another way, the effect of exposure to the surrounding set of behavior examples was about as powerful as the effect of IQ on standardized test scores.
It might be asked how we can know that exposure to the surrounding behaviors actually caused the idea flow; perhaps it is merely a correlation. The answer is in this experiment we could make quantitative, time-synchronized predictions, which make other noncausal explanations fairly implausible. Perhaps even more persuasively, we have also been able to use the connection between exposure and behavior to predict outcomes in several different situations, and even to manipulate exposure in order to cause behavior changes. Finally, there also have been careful quantitative laboratory experiments that show similar effects and in which the causality is certain.
Therefore, people seem to pick up at least some habits from exposure to those of peers (and not just friends). When everyone else takes that second slice of pizza, we probably will also. The fact that exposure turned out to be more important for driving idea flow than all the other factors combined highlights the overarching importance of automatic social learning in shaping our lives.
And:
How do we choose who to vote for? Do our preferences also come from exposure to those around us? We tackled this question in the Social Evolution experiment by analyzing students’ political views during the 2008 presidential election. The question we asked was: Do political views reflect the behaviors that people are exposed to or are they formed more by individual reasoning? By giving these students specially equipped smartphones, we monitored their patterns of social interaction by tracking who spent time with whom, who called whom, who spent time at the same places, and so forth.
We also asked the students a wide range of questions about their interest in politics, involvement in politics, political leanings, and finally (after the election), we inquired which candidate had received their vote. In total, this produced more than five hundred thousand hours of automatically generated data about their interaction patterns, which we then combined with survey data about their beliefs, attitudes, personality, and more.
When sifting through these hundreds of gigabytes of data, we found that the amount of exposure to people possessing similar opinions accurately predicted both the students’ level of interest in the presidential race and their liberal-conservative balance. This collective opinion effect was very clear: More exposure to similar views made the students more extreme in their own views.
Most important, though, this meant that the amount of exposure to people with similar views also predicted the students’ eventual voting behavior. For first-year students, the size of this social exposure effect was similar to the weight gain ones I described in the previous section, while for older students, who presumably had more fixed attitudes, the size of the effect was less but still quite significant.
But what did not predict their voting behavior? The views of the people they talked politics with, and the views of their friends. Just as with weight gain, it was the behavior of the surrounding peer group—the set of behavior examples that they were immersed in—that was the most powerful force in driving idea flow and shaping opinion.
↑ comment by lukeprog · 2013-12-21T18:16:46.934Z · LW(p) · GW(p)
From de Mesquita and Smith's The Dictator's Handbook:
How do tyrants hold on to power for so long? For that matter, why is the tenure of successful democratic leaders so brief? How can countries with such misguided and corrupt economic policies survive for so long? Why are countries that are prone to natural disasters so often unprepared when they happen? And how can lands rich with natural resources at the same time support populations stricken with poverty?
Equally, we may well wonder: Why are Wall Street executives so politically tone-deaf that they dole out billions in bonuses while plunging the global economy into recession? Why is the leadership of a corporation, on whose shoulders so much responsibility rests, decided by so few people? Why are failed CEOs retained and paid handsomely even as their company’s shareholders lose their shirts?
In one form or another, these questions of political behavior pop up again and again. Each explanation, each story, treats the errant leader and his or her faulty decision making as a one-off, one-of-a-kind situation. But there is nothing unique about political behavior.
...We look at each case and conclude they are different, uncharacteristic anomalies. Yet they are held together by the logic of politics, the rules ruling rulers.
...To understand politics properly, we must modify one assumption in particular: we must stop thinking that leaders can lead unilaterally.
No leader is monolithic. If we are to make any sense of how power works, we must stop thinking that North Korea’s Kim Jong Il can do whatever he wants. We must stop believing that Adolf Hitler or Joseph Stalin or Genghis Khan or anyone else is in sole control of their respective nation. We must give up the notion that Enron’s Kenneth Lay or British Petroleum’s (BP) Tony Hayward knew about everything that was going on in their companies, or that they could have made all the big decisions. All of these notions are flat out wrong because no emperor, no king, no sheikh, no tyrant, no chief executive officer (CEO), no family head, no leader whatsoever can govern alone.
...For leaders, the political landscape can be broken down into three groups of people: the nominal selectorate, the real selectorate, and the winning coalition.
The nominal selectorate includes every person who has at least some legal say in choosing their leader. In the United States it is everyone eligible to vote, meaning all citizens aged eighteen and over. Of course, as every citizen of the United States must realize, the right to vote is important, but at the end of the day no individual voter has a lot of say over who leads the country. Members of the nominal selectorate in a universal-franchise democracy have a toe in the political door, but not much more. In that way, the nominal selectorate in the United States or Britain or France doesn’t have much more power than its counterparts, the “voters,” in the old Soviet Union. There, too, all adult citizens had the right to vote, although their choice was generally to say Yes or No to the candidates chosen by the Communist Party rather than to pick among candidates. Still, every adult citizen of the Soviet Union, where voting was mandatory, was a member of the nominal selectorate.
The second stratum of politics consists of the real selectorate. This is the group that actually chooses the leader. In today’s China (as in the old Soviet Union), it consists of all voting members of the Communist Party; in Saudi Arabia’s monarchy it is the senior members of the royal family; in Great Britain, the voters backing members of parliament from the majority party.
The most important of these groups is the third, the subset of the real selectorate that makes up a winning coalition. These are the people whose support is essential if a leader is to survive in office. In the USSR the winning coalition consisted of a small group of people inside the Communist Party who chose candidates and who controlled policy. Their support was essential to keep the commissars and general secretary in power. These were the folks with the power to overthrow their boss—and he knew it. In the United States the winning coalition is vastly larger. It consists of the minimal number of voters who give the edge to one presidential candidate (or, at the legislative level in each state or district, to a member of the House or Senate) over another. For Louis XIV, the winning coalition was a handful of members of the court, military officers, and senior civil servants without whom a rival could have replaced the king.
Fundamentally, the nominal selectorate is the pool of potential support for a leader; the real selectorate includes those whose support is truly influential; and the winning coalition extends only to those essential supporters without whom the leader would be finished. A simple way to think of these groups is: interchangeables, influentials, and essentials.
In the United States, the voters are the nominal selectorate — interchangeables. As for the real selectorate — influentials — the electors of the electoral college really choose the president (just like the party faithful picked their general secretary back in the USSR), but the electors nowadays are normatively bound to vote the way their state’s voters voted, so they don’t really have much independent clout in practice. In the United States, the nominal selectorate and real selectorate are therefore pretty closely aligned. This is why, even though you’re only one among many voters, interchangeable with others, you still feel like your vote is influential — that it counts and is counted. The winning coalition — essentials — in the United States is the smallest bunch of voters, properly distributed among the states, whose support for a candidate translates into a presidential win in the electoral college. And while the winning coalition (essentials) is a pretty big fraction of the nominal selectorate (interchangeables), it doesn’t have to be even close to a majority of the US population. In fact, given the federal structure of American elections, it’s possible to control the executive and legislative branches of government with as little as about one fifth of the vote, if the votes are really efficiently placed...
Looking elsewhere we see that there can be a vast range in the size of the nominal selectorate, the real selectorate, and the winning coalition. Some places, like North Korea, have a mass nominal selectorate in which everyone gets to vote — it’s a joke, of course — a tiny real selectorate who actually pick their leader, and a winning coalition that surely is no more than maybe a couple of hundred people (if that) and without whom even North Korea’s first leader, Kim Il Sung, could have been reduced to ashes. Other nations, like Saudi Arabia, have a tiny nominal and real selectorate, made up of the royal family and a few crucial merchants and religious leaders. The Saudi winning coalition is perhaps even smaller than North Korea’s.
...These three groups provide the foundation of all that’s to come in the rest of this book, and, more importantly, the foundation behind the working of politics in all organizations, big and small. Variations in the sizes of these three groups give politics a three-dimensional structure that clarifies the complexity of political life. By working out how these dimensions intersect—that is, each organization’s mix in the size of its interchangeable, influential, and essential groups—we can come to grips with the puzzles of politics. Differences in the size of these groups across states, businesses, and any other organization, as you will see, decide almost everything that happens in politics—what leaders can do, what they can and can’t get away with, to whom they answer, and the relative qualities of life that everyone under them enjoys (or, too often, doesn’t enjoy).
↑ comment by lukeprog · 2013-12-21T18:22:00.231Z · LW(p) · GW(p)
More (#2) from The Dictator's Handbook:
Democratic leaders profess a desire for democratization. Yet the reality is that it is rarely in their interest. As the coalition size grows in a foreign nation, its leader becomes more and more compelled to enact policies that his people want and not the policies desired by the puppeteer’s people. If a democratic leader wants a foreign leader to follow his prescribed policies then he needs to insulate his puppet from domestic pressures. This means reducing coalition size in vanquished states. This makes it cheaper and easier to sustain puppets and buy policy. US foreign policy is awash with examples where the United States overtly or covertly undermines the development of democracy because it promoted the policies counter to US interests. Queen Liliuokalani of Hawaii in 1893, Salvador Allende of Chile in 1973, Mohammad Mosaddegh of Iran in 1953, and Jacobo Arbenz of Guatemala in 1954 all suffered such fates.
And:
Sun Tzu exerted a lasting influence on the study of war precisely because his recommendations are the right recommendations for leaders, like monarchs and autocrats, who rule based on a small coalition. The Weinberger Doctrine—like its more recent replacement, the Powell Doctrine—exerts influence over American security policy precisely because it recommends the most appropriate actions for leaders who are beholden to a large coalition.
We have seen that larger coalition systems are extremely selective in their decisions about waging war and smaller coalition systems are not. Democracies only fight when negotiation proves unfruitful and the democrat’s military advantage is overwhelming, or when, without fighting, the democrat’s chances of political survival are slim to none. Furthermore, when war becomes necessary, large-coalition regimes make an extra effort to win if the fight proves difficult. Small-coalition leaders do not if doing so uses up so much treasure that would be better spent on private rewards that keep their cronies loyal. And finally, when a war is over, larger coalition leaders make more effort to enforce the peace and the policy gains they sought through occupation or the imposition of a puppet regime. Small-coalition leaders mostly take the valuable private goods for which they fought and go home, or take over the territory they conquered so as to enjoy the economic fruits of their victory for a long time.
Clausewitz had war right. War, it seems, truly is just domestic politics as usual. For all the philosophical talk of “a just war,” and all the strategizing about balances of power and national interests, in the end, war, like all politics, is about staying in power and controlling as many resources as possible. It is precisely this predictability and normality of war that makes it, like all the pathologies of politics we have discussed, susceptible to being understood and fixed.
↑ comment by lukeprog · 2013-12-21T18:19:23.439Z · LW(p) · GW(p)
More (#1) from The Dictator's Handbook:
One interesting manifestation of the differences between wealth and poverty in resource-rich lands is the cost of living for expatriates living in these countries. While it is tempting to think that cities like Oslo, Tokyo, or London would top the list as the most expensive places, they don’t. Instead it is Luanda, the capital of the southwestern African state of Angola. It can cost upwards of $10,000 per month for housing in a reasonable neighborhood, and even then water and electricity are intermittent. What makes this so shocking is the surrounding poverty. According to the United Nations Development Program, 68 percent of Angola’s population lives below the poverty line, more than a quarter of children die before their fifth birthday, and male life expectancy is below forty-five years. The most recent year for which income inequality data are available is 2000. These data suggest that the poorest 20 percent of the population have only 2 percent of the wealth. Angola is ranked 143 out of 182 nations in terms of overall human development. Prices in Angola, as in many other West African states, are fueled by oil.
The resource curse enables autocrats to massively reward their supporters and accumulate enormous wealth. This drives prices to the stratospheric heights seen in Luanda, where wealthy expatriates and lucky coalition members can have foie gras flown in from France every day. Yet to make sure the people cannot coordinate, rebel, and take control of the state, leaders endeavor to keep those outside the coalition poor, ignorant, and unorganized. It is ironic that while oil revenues provide the resources to fix societal problems, it creates political incentives to make them far worse.
This effect is much less pernicious in democracies. The trouble is that once a state profits from mineral wealth, it is unlikely to democratize. The easiest way to incentivize the leader to liberalize policy is to force him to rely on tax revenue to generate funds. Once this happens, the incumbent can no longer suppress the population because the people won’t work if he does.
The upshot is that the resource curse can be lifted. If aid organizations want to help the peoples of oil-rich nations, then the logic of our survival-based argument suggests they would achieve more by spending their donations lobbying the governments in the developed world to increase the tax on petroleum than by providing assistance overseas. By raising the price of oil and gas, such taxes would reduce worldwide demand for oil. This in turn would reduce oil revenues and make leaders more reliant on taxation.
↑ comment by lukeprog · 2013-12-14T22:43:26.381Z · LW(p) · GW(p)
From Ferguson's The Ascent of Money:
The libro segreto - literally the secret book - of Giovanni di Bicci de’ Medici sheds fascinating light on the family’s rise. In part, this was simply a story of meticulous bookkeeping. By modern standards, to be sure, there were imperfections. The Medici did not systematically use the double-entry method, though it was known in Genoa as early as the 1340s. Still, the modern researcher cannot fail to be impressed by the neatness and orderliness of the Medici accounts. The archives also contain a number of early Medici balance sheets, with reserves and deposits correctly arranged on one side (as liabilities or vostro) and loans to clients or commercial bills on the other side (as assets or nostro). The Medici did not invent these techniques, but they applied them on a larger scale than had hitherto been seen in Florence. The real key to the Medicis’ success, however, was not so much size as diversification. Whereas earlier Italian banks had been monolithic structures, easily brought down by one defaulting debtor, the Medici bank was in fact multiple related partnerships, each based on a special, regularly renegotiated contract. Branch managers were not employees but junior partners who were remunerated with a share of the profits. It was this decentralization that helped make the Medici bank so profitable. With a capital of around 20,000 florins in 1402 and a payroll of at most seventeen people, it made profits of 151,820 florins between 1397 and 1420 - around 6,326 florins a year, a rate of return of 32 per cent. The Rome branch alone was soon posting returns of over 30 per cent. The proof that the model worked can be seen in the Florentine tax records, which list page after page of Giovanni di Bicci’s assets, totalling some 91,000 florins.
↑ comment by lukeprog · 2013-12-14T22:46:22.777Z · LW(p) · GW(p)
More (#1) from The Ascent of Money:
The Amsterdam Exchange Bank (Wisselbank) was set up in 1609 to resolve the practical problems created for merchants by the circulation of multiple currencies in the United Provinces, where there were no fewer than fourteen different mints and copious quantities of foreign coins. By allowing merchants to set up accounts denominated in a standardized currency, the Exchange Bank pioneered the system of cheques and direct debits or transfers that we take for granted today. This allowed more and more commercial transactions to take place without the need for the sums involved to materialize in actual coins. One merchant could make a payment to another simply by arranging for his account at the bank to be debited and the counterparty’s account to be credited. The limitation on this system was simply that the Exchange Bank maintained something close to a 100 per cent ratio between its deposits and its reserves of precious metal and coin. As late as 1760, when its deposits stood at just under 19 million florins, its metallic reserve was over 16 million. A run on the bank was therefore a virtual impossibility, since it had enough cash on hand to satisfy nearly all of its depositors if, for some reason, they all wanted to liquidate their deposits at once. This made the bank secure, no doubt, but it prevented it performing what would now be seen as the defining characteristic of a bank, credit creation.
And:
If the South had managed to hold on to New Orleans until the cotton harvest had been offloaded to Europe, they might have managed to sell more than £3 million of cotton bonds in London. Maybe even the risk-averse Rothschilds might have come off the financial fence. As it was, they dismissed the Erlanger loan as being ‘of so speculative a nature that it was very likely to attract all wild speculators . . . we do not hear of any respectable people having anything to do with it’. The Confederacy had overplayed its hand. They had turned off the cotton tap, but then lost the ability to turn it back on. By 1863 the mills of Lancashire had found new sources of cotton in China, Egypt and India. And now investors were rapidly losing faith in the South’s cotton-backed bonds. The consequences for the Confederate economy were disastrous.
With its domestic bond market exhausted and only two paltry foreign loans, the Confederate government was forced to print unbacked paper dollars to pay for the war and its other expenses, 1.7 billion dollars’ worth in all. Both sides in the Civil War had to print money, it is true. But by the end of the war the Union’s ‘greenback’ dollars were still worth about 50 cents in gold, whereas the Confederacy’s ‘greybacks’ were worth just one cent, despite a vain attempt at currency reform in 1864. The situation was worsened by the ability of Southern states and municipalities to print paper money of their own; and by rampant forgery, since Confederate notes were crudely made and easy to copy. With ever more paper money chasing ever fewer goods, inflation exploded. Prices in the South rose by around 4,000 per cent during the Civil War. By contrast, prices in the North rose by just 60 per cent. Even before the surrender of the principal Confederate armies in April 1865, the economy of the South was collapsing, with hyperinflation as the sure harbinger of defeat.
The Rothschilds had been right. Those who had invested in Confederate bonds ended up losing everything, since the victorious North pledged not to honour the debts of the South. In the end, there had been no option but to finance the Southern war effort by printing money. It would not be the last time in history that an attempt to buck the bond market would end in ruinous inflation and military humiliation.
↑ comment by gwern · 2013-12-14T23:14:14.804Z · LW(p) · GW(p)
The Medici Bank is pretty interesting. A while ago I wrote https://en.wikipedia.org/wiki/Medici_Bank on the topic; LWers might find it interesting how international finance worked back then.
↑ comment by lukeprog · 2013-12-04T17:22:41.671Z · LW(p) · GW(p)
From Scahill's Dirty Wars:
According to the National Security Act of 1947, the president is required to issue a finding before undertaking a covert action. The law states that the action must comply with US law and the Constitution. The presidential finding signed by Bush on September 17, 2001, was used to create a highly classified, secret program code-named Greystone. GST, as it was referred to in internal documents, would be an umbrella under which many of the most clandestine and legally questionable activities would be authorized and conducted in the early days of the Global War on Terror (GWOT). It relied on the administration’s interpretation of the AUMF passed by Congress, which declared any al Qaeda suspect anywhere in the world a legitimate target. In effect, the presidential finding declared all covert actions to be preauthorized and legal, which critics said violated the spirit of the National Security Act. Under GST, a series of compartmentalized programs were created that, together, effectively formed a global assassination and kidnap operation. Authority for targeted kills was radically streamlined. Such operations no longer needed direct presidential approval on a case-by-case basis. Black, the head of the Counterterrorism Center, could now directly order hits...
GST was also the vehicle for snatch operations, known as extraordinary renditions. Under GST, the CIA began coordinating with intelligence agencies in various countries to establish “Status of Forces” agreements to create secret prisons where detainees could be held, interrogated and kept away from the Red Cross, the US Congress and anything vaguely resembling a justice system. These agreements not only gave immunity to US government personnel, but to private contractors as well. The administration did not want to put terror suspects on trial, “because they would get lawyered up,” said Jose Rodriguez, who at the time ran the CIA’s Directorate of Operations, which was responsible for all of the “action” run by the Agency. “[O]ur job, first and foremost, is to obtain information.” To obtain that information, authorization was given to interrogators to use ghoulish, at times medieval, techniques on detainees, many of which were developed by studying the torture tactics of America’s enemies. The War Council lawyers issued a series of legal documents, later dubbed the “Torture Memos” by human rights and civil liberties organizations, that attempted to rationalize the tactics as necessary and something other than torture...
The CIA began secretly holding prisoners in Afghanistan on the edge of Bagram Airfield, which had been commandeered by US military forces. In the beginning, it was an ad hoc operation with prisoners stuffed into shipping containers. Eventually, it expanded to a handful of other discrete sites, among them an underground prison near the Kabul airport and an old brick factory north of Kabul. Doubling as a CIA substation, the factory became known as the “Salt Pit” and would be used to house prisoners, including those who had been snatched in other countries and brought to Afghanistan. CIA officials who worked on counterterrorism in the early days after 9/11 said that the idea for a network of secret prisons around the world was not initially a big-picture plan, but rather evolved as the scope of operations grew. The CIA had first looked into using naval vessels and remote islands—such as uninhabited islands dotting Lake Kariba in Zambia—as possible detention sites at which to interrogate suspected al Qaeda operatives. Eventually, the CIA would build up its own network of secret “black sites” in at least eight countries, including Thailand, Poland, Romania, Mauritania, Lithuania and Diego Garcia in the Indian Ocean. But in the beginning, lacking its own secret prisons, the Agency began funneling suspects to Egypt, Morocco and Jordan for interrogation. By using foreign intelligence services, prisoners could be freely tortured without any messy congressional inquiries.
In the early stages of the GST program, the Bush administration faced little obstruction from Congress. Democrats and Republicans alike gave tremendous latitude to the administration to prosecute its secret war. For its part, the White House at times refused to provide details of its covert operations to the relevant congressional oversight committees but met little protest for its reticence. The administration also unilaterally decided to reduce the elite Gang of Eight members of Congress to just four: the chairs and ranking members of the House and Senate intelligence committees. Those members are prohibited from discussing these briefings with anyone. In effect, it meant that Congress had no oversight of the GST program. And that was exactly how Cheney wanted it.
↑ comment by lukeprog · 2013-12-04T17:38:37.410Z · LW(p) · GW(p)
More (#2) from Dirty Wars:
According to the ICRC, some of the prisoners were bounced around to different black sites for more than three years, where they were kept in “continuous solitary confinement and incommunicado detention. They had no knowledge of where they were being held, no contact with persons other than their interrogators or guards.” The US personnel guarding them wore masks. None of the prisoners was ever permitted a phone call or to write to inform their families they had been taken. They simply vanished.
During the course of their imprisonment, some of the prisoners were confined in boxes and subjected to prolonged nudity—sometimes lasting for several months. Some of them were kept for days at a time, naked, in “stress standing positions,” with their “arms extended and chained above the head.” During this torture, they were not allowed to use a toilet and “had to defecate and urinate over themselves.” Beatings and kickings were common, as was a practice of placing a collar around a prisoner’s neck and using it to slam him against walls or yank him down hallways. Loud music was used for sleep deprivation, as was temperature manipulation. If prisoners were perceived to be cooperating, they were given clothes to wear. If they were deemed uncooperative, they’d be stripped naked. Dietary manipulation was used—at times the prisoners were put on liquid-only diets for weeks at a time. Three of the prisoners told the ICRC they had been waterboarded. Some of them were moved to as many as ten different sites during their imprisonment. “I was told during this period that I was one of the first to receive these interrogation techniques, so no rules applied,” one prisoner, taken early on in the war on terror, told the ICRC. “I felt like they were experimenting and trying out techniques to be used later on other people.”
And:
While TF-121 was given a mission to kill or capture Osama bin Laden and Saddam Hussein by the spring of 2004, Washington was increasingly focused on Iraq. Veteran intelligence officials identify this period as a turning point in the hunt for bin Laden. At a time when JSOC was asking for more resources and permissions to pursue targets inside of Pakistan and other countries, there was a tectonic shift toward making Iraq the number-one priority.
The heavy costs of that strategic redirection to the larger counterterrorism mission were of deep concern to Lieutenant Colonel Anthony Shaffer, a senior military intelligence officer who was CIA trained and had worked for the DIA and JSOC. Shaffer ran a task force, Stratus Ivy, that was part of a program started in the late 1990s code-named Able Danger. Utilizing what was then cutting-edge “data mining” technology, the program was operated by military intelligence and the Special Operations Command and aimed at identifying al Qaeda cells globally. Shaffer and some of his Able Danger colleagues claimed that they had uncovered several of the 9/11 hijackers a year before the attacks but that no action was taken against them. He told the 9/11 Commission he felt frustrated when the program was shut down and believed it was one of the few effective tools the United States had in the fight against al Qaeda pre-9/11. After the attacks, Shaffer volunteered for active duty and became the commander of the DIA’s Operating Base Alpha, which Shaffer said “conducted clandestine antiterrorist operations” in Africa. Shaffer was running the secret program, targeting al Qaeda figures who might flee Afghanistan and seek shelter in Somalia, Liberia and other African nations. It “was the first DIA covert action of the post–Cold War era, where my officers used an African national military proxy to hunt down and kill al Qaeda terrorists,” Shaffer recalled.
Like many other experienced intelligence officers who had been tracking al Qaeda prior to 9/11, Shaffer believed that the focus was finally placed correctly on destroying the terror network and killing or capturing its leaders. But then all resources were repurposed for the Iraq invasion. “I saw the Bush administration lunacy up close and personal,” Shaffer said. After a year and a half of running the African ops, “I was forced to shut down Operating Base Alpha so that its resources could be used for the Iraq invasion.”
Shaffer was reassigned as an intelligence planner on the DIA team that helped feed information on possible Iraqi WMD sites to the advance JSOC teams that covertly entered Iraq ahead of the invasion. “It yielded nothing,” he alleged. “As we now know, no WMD were ever found.” He believed that shifting the focus and resources to Iraq was a grave error that allowed bin Laden to continue operating for nearly another decade. Shaffer was eventually sent to Afghanistan, where he would clash with US military leaders over his proposals to run operations into Pakistan to target the al Qaeda leaders who were hiding there.
And:
The task force’s operations, Exum said, were “very compartmentalized, very stove-piped.” JSOC was creating a system where its intelligence operations were feeding its action and often that intelligence would not be vetted by anyone outside of the JSOC structure. The priority was to keep hitting targets. “The most serious thing is the abuse of power that that allows you to do,” said Wilkerson, the former chief of staff to Powell. He continued: "You go in and you get some intelligence, and usually your intelligence comes through this apparatus too, and so you say, ‘Oh, this is really good actionable intelligence. Here’s Operation Blue Thunder. Go do it.’ And they go do it, and they kill 27, 30, 40 people, whatever, and they capture seven or eight. Then you find out that the intelligence was bad and you killed a bunch of innocent people and you have a bunch of innocent people on your hands, so you stuff ’em in Guantánamo. No one ever knows anything about that. You don’t have to prove to anyone that you did right. You did it all in secret, so you just go to the next operation. You say, ‘Chalk that one up to experience,’ and you go to the next operation. And, believe me, that happened."
↑ comment by lukeprog · 2013-12-04T17:30:54.862Z · LW(p) · GW(p)
More (#1) from Dirty Wars:
However, just as the FBI believed it was making headway with Libi, CIA operatives, on orders from Cofer Black, showed up at Bagram and demanded to take him into their custody. The FBI agents objected to the CIA taking him, but the White House overruled them. “You know where you are going,” one of the CIA operatives told Libi as he took him from the FBI. “Before you get there, I am going to find your mother and fuck her.”
The CIA flew Libi to the USS Bataan in the Arabian Sea, which was also housing the so-called American Taliban, John Walker Lindh, who had been picked up in Afghanistan, and other foreign fighters. From there, Libi was transferred to Egypt, where he was tortured by Egyptian agents. Libi’s interrogation focused on a goal that would become a centerpiece of the rendition and torture program: proving an Iraq connection to 9/11. Once he was in CIA custody, interrogators pummeled Libi with questions attempting to link the attacks and al Qaeda to Iraq. Even after the interrogators working Libi over had reported that they had broken him and that he was “compliant,” Cheney’s office directly intervened and ordered that he continue to be subjected to enhanced interrogation techniques. “After real macho interrogation—this is enhanced interrogation techniques on steroids—he admitted that al Qaeda and Saddam were working together. He admitted that al Qaeda and Saddam were working together on WMDs,” former senior FBI interrogator Ali Soufan told PBS’s Frontline. But the Defense Intelligence Agency (DIA) cast serious doubt on Libi’s claims at the time, observing in a classified intelligence report that he “lacks specific details” on alleged Iraqi involvement, asserting that it was “likely this individual is intentionally misleading” his interrogators. Noting that he had been “undergoing debriefs for several weeks,” the DIA analysis concluded Libi may have been “describing scenarios to the debriefers that he knows will retain their interest.” Despite such doubts, Libi’s “confession” would later be given to Secretary of State Powell when he made the administration’s fraudulent case at the United Nations for the Iraq War. In that speech Powell would say, “I can trace the story of a senior terrorist operative telling how Iraq provided training in these weapons to al Qaeda.” Later, after these claims were proven false, Libi, according to Soufan, admitted he had lied. “I gave you what you want[ed] to hear,” he said. “I want[ed] the torture to stop. I gave you anything you want[ed] to hear.”
And:
Although part of Rumsfeld’s visit to Fort Bragg was public, he was also there for a secret meeting—with the forces whose units were seldom mentioned in the press and whose operations were entirely shrouded in secrecy: the Joint Special Operations Command, or JSOC. On paper, JSOC appeared to be an almost academic entity, and its official mission was described in bland, bureaucratic terms. Officially, JSOC was the “joint headquarters designed to study special operations requirements and techniques; ensure interoperability and equipment standardization; plan and conduct joint special operations exercises and training; and develop joint special operations tactics.” In reality, JSOC was the most closely guarded secret force in the US national security apparatus. Its members were known within the covert ops community as ninjas, “snake eaters,” or, simply, operators. Of all of the military forces available to the president of the United States, none was as elite as JSOC. When a president of the United States wanted to conduct an operation in total secrecy, away from the prying eyes of Congress, the best bet was not the CIA, but rather JSOC. “Who’s getting ready to deploy?” Rumsfeld asked when he addressed the special operators. The generals pointed to the men on standby. “Good for you. Where you off to? Ahh, you’d have to shoot me if you told me, right?” Rumsfeld joked. “Just checking.”
And:
Colonel Lang said Bush “was so taken with President Saleh as a personable, friendly, chummy kind of guy, that Bush was in fact quite willing to listen to whatever Saleh said about, ‘We like you Americans, we want to help you, we want to cooperate with you,’ that kind of business, and was quite willing to send them foreign aid, including military aid.” During his meeting with President Bush in November 2001, Saleh “expressed his concern and hope that the military action in Afghanistan does not exceed its borders and spread to other parts of the Middle East, igniting further instability in the region,” according to a statement issued by the Yemeni Embassy in Washington at the end of the visit. But to keep Yemen off Washington’s target list, Saleh would have to take action. Or at least give the appearance of doing so.
Saleh’s entourage was given a list of several al Qaeda suspects that the Yemeni regime could target as a show of good faith. The next month, Saleh ordered his forces to raid a village in Marib Province, where Abu Ali al Harithi, a lead suspect in the Cole bombing, and other militants were believed to be residing. The operation by Yemeni special forces was a categorical failure. Local tribesmen took several of the soldiers hostage and the targets of the raid allegedly escaped unharmed. The soldiers were later released through tribal mediators, but the action angered the tribes and served as a warning to Saleh to stay out of Marib. It was the beginning of what would be a complex and dangerous chess match for Saleh as he made his first moves to satisfy Washington’s desire for targeted killing in Yemen while maintaining his own hold on power.
And:
In early July 2002, CIA interrogators began receiving training from SERE instructors and psychologists on extreme interrogation tactics. Later that month, Rumsfeld’s office requested documents from JPRA, “including excerpts from SERE instructor lesson plans, a list of physical and psychological pressures used in SERE resistance training, and a memo from a SERE psychologist assessing the long-term psychological effects of SERE resistance training on students and the effects of waterboarding,” according to a Senate Armed Services Committee investigation. “The list of SERE techniques included such methods as sensory deprivation, sleep disruption, stress positions, waterboarding, and slapping. It also made reference to a section of the JPRA instructor manual that discusses ‘coercive pressures,’ such as keeping the lights on at all times, and treating a person like an animal.” The Pentagon’s deputy general counsel for intelligence, Richard Shiffrin, acknowledged that the Pentagon wanted the documents in order to “reverse-engineer” SERE’s knowledge of enemy torture tactics for use against US detainees. He also described how JPRA provided interrogators with documents about “mind-control experiments” used on US prisoners by North Korean agents. “It was real ‘Manchurian Candidate’ stuff,” Shiffrin said. JPRA’s commander also sent the same information to the CIA.
The use of these new techniques was discussed at the National Security Council, including at meetings attended by Rumsfeld and Condoleezza Rice. By the summer of 2002, the War Council legal team, led by Cheney’s consigliere, David Addington, had developed a legal rationale for redefining torture so narrowly that virtually any tactic that did not result in death was fair game. “For an act to constitute torture as defined in [the federal torture statute], it must inflict pain that is difficult to endure. Physical pain amounting to torture must be equivalent in intensity to the pain accompanying serious physical injury, such as organ failure, impairment of bodily function, or even death,” Assistant Attorney General for the Office of Legal Counsel Jay Bybee asserted in what would become an infamous legal memo rationalizing the torture of US prisoners. “For purely mental pain or suffering to amount to torture under [the federal torture statute], it must result in significant psychological harm of significant duration, e.g., lasting for months or even years.” A second memo signed by Bybee gave legal justification for using a specific series of “enhanced interrogation techniques,” including waterboarding. “There was not gonna be any deniability,” said the CIA’s Rodriguez, who was coordinating the interrogation of prisoners at the black sites. “In August of 2002, I felt I had all the authorities that I needed, all the approvals that I needed. The atmosphere in the country was different. Everybody wanted us to save American lives.” He added, “We went to the border of legality. We went to the border, but that was within legal bounds.”
↑ comment by [deleted] · 2015-11-09T09:38:20.448Z · LW(p) · GW(p)
Foreign fighters show up everywhere. And now there's the whole Islamic State issue. Perhaps all the world needs is more foreign legions doing good things. The FFL is over-recruited, after all. Heck, we could even deal with the refugee crisis by offering visas to those mercenaries. It sure as hell would be more popular than selling visas and citizenship, because people always get antsy about inequality and about having fewer downward social comparisons.
↑ comment by lukeprog · 2013-11-07T18:23:10.239Z · LW(p) · GW(p)
Passage from Patterson's Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market:
In 1994, two finance professors, Bill Christie and Paul Schultz, published a groundbreaking study based on the trading data of Nasdaq stocks such as Apple and Intel.
The two professors had noticed something very odd in the data: Nasdaq market makers rarely, if ever, posted an order at an “odd-eighth” — as in $10⅛, $10⅜, $10⅝, or $10⅞ (recall that this was a time when stocks were quoted in fractions of a dollar, not pennies). Instead, they found that for heavily traded stocks such as Apple, market makers posted odd-eighth quotes roughly 1 percent of the time.
When they looked at spreads for stocks on the NYSE or American Stock Exchange, by comparison, they found a consistent use of odd-eighths. That meant Nasdaq market makers must be deliberately colluding to keep spreads artificially wide. Instead of the minimum spread of 12.5 cents (one-eighth of a dollar), spreads were usually twenty-five or fifty cents wide. That extra 12.5 cents was coming directly out of the pockets of investors. Add it up, and Nasdaq’s market makers were siphoning billions out of the pockets of investors.
...Inside the SEC, the study erupted like a bomb. The Nasdaq investigation was assigned to a staid, low-key attorney in the enforcement division named Leo Wang. Socially awkward, but aggressive as a pit bull, Wang had gained prestige within the commission for handling a high-profile bond-manipulation case against Salomon Brothers in the early 1990s.... [Wang] started hammering Nasdaq dealers with subpoenas, demanding transaction records. He hit the jackpot when he forced the firms to hand over truckloads of tape recordings going back years. Traders had been oblivious to the recordings, which were made as a backup in the event of a dispute over the details of a trade. Inside the SEC, the enormity of the task of reviewing the tapes at first seemed daunting — it could take weeks, if not months, to comb through them for evidence of price fixing.
But it proved all too easy: The very first tape Wang played revealed two dealers fixing prices.
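The check behind the study is easy to reproduce in miniature. Here is a small illustrative sketch of my own, with made-up quotes rather than real market data: count how often quoted prices land on an odd eighth. In the toy samples below, a market that uses every eighth shows about half of its quotes on odd eighths, while a market that shuns them can never quote a spread tighter than 25 cents.

```python
def odd_eighth_fraction(prices_in_eighths):
    """prices_in_eighths: quoted prices as integer multiples of $1/8.
    Returns the share of quotes ending in an odd eighth (1/8, 3/8, 5/8, 7/8)."""
    return sum(1 for p in prices_in_eighths if p % 2 == 1) / len(prices_in_eighths)

# Made-up samples: prices from $10.00 upward, expressed in eighths (80 = $10.00).
nyse_style   = [80 + k for k in range(8)] * 10   # every eighth gets used
nasdaq_style = [80, 82, 84, 86] * 20             # even eighths only

print(odd_eighth_fraction(nyse_style))    # 0.5 -> odd eighths appear routinely
print(odd_eighth_fraction(nasdaq_style))  # 0.0 -> odd eighths avoided, so the minimum
                                          #        quoted spread is 2/8 = $0.25
```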
↑ comment by lukeprog · 2013-10-31T22:56:51.766Z · LW(p) · GW(p)
Some relevant quotes from Schlosser's Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety:
On January 23, 1961, a B-52 bomber took off from Seymour Johnson Air Force Base in Goldsboro, North Carolina, for an airborne alert... [Near] midnight... the boom operator of [a refueling] tanker noticed fuel leaking from the B-52’s right wing. Spray from the leak soon formed a wide plume, and within two minutes about forty thousand gallons of jet fuel had poured from the wing. The command post at Seymour Johnson told the pilot, Major Walter S. Tulloch, to dump the rest of the fuel in the ocean and prepare for an emergency landing. But fuel wouldn’t drain from the tank inside the left wing, creating a weight imbalance. At half past midnight, with the flaps down and the landing gear extended, the B-52 went into an uncontrolled spin...
The B-52 was carrying two Mark 39 hydrogen bombs, each with a yield of 4 megatons. As the aircraft spun downward, centrifugal forces pulled a lanyard in the cockpit. The lanyard was attached to the bomb release mechanism. When the lanyard was pulled, the locking pins were removed from one of the bombs. The Mark 39 fell from the plane. The arming wires were yanked out, and the bomb responded as though it had been deliberately released by the crew above a target. The pulse generator activated the low-voltage thermal batteries. The drogue parachute opened, and then the main chute. The barometric switches closed. The timer ran out, activating the high-voltage thermal batteries. The bomb hit the ground, and the piezoelectric crystals inside the nose crushed. They sent a firing signal. But the weapon didn’t detonate.
Every safety mechanism had failed, except one: the ready/safe switch in the cockpit. The switch was in the SAFE position when the bomb dropped. Had the switch been set to GROUND or AIR, the X-unit would’ve charged, the detonators would’ve triggered, and a thermonuclear weapon would have exploded in a field near Faro, North Carolina...
The other Mark 39 plummeted straight down and landed in a meadow just off Big Daddy’s Road, near the Nahunta Swamp. Its parachutes had failed to open. The high explosives did not detonate, and the primary was largely undamaged...
The Air Force assured the public that the two weapons had been unarmed and that there was never any risk of a nuclear explosion. Those statements were misleading. The T-249 control box and ready/safe switch, installed in every one of SAC’s bombers, had already raised concerns at Sandia. The switch required a low-voltage signal of brief duration to operate — and that kind of signal could easily be provided by a stray wire or a short circuit, as a B-52 full of electronic equipment disintegrated midair.
A year after the North Carolina accident, a SAC ground crew removed four Mark 28 bombs from a B-47 bomber and noticed that all of the weapons were armed. But the seal on the ready/safe switch in the cockpit was intact, and the knob hadn’t been turned to GROUND or AIR. The bombs had not been armed by the crew. A seven-month investigation by Sandia found that a tiny metal nut had come off a screw inside the plane and lodged against an unused radar-heating circuit. The nut had created a new electrical pathway, allowing current to reach an arming line— and bypass the ready/safe switch. A similar glitch on the B-52 that crashed near Goldsboro would have caused a 4-megaton thermonuclear explosion. “It would have been bad news— in spades,” Parker F. Jones, a safety engineer at Sandia, wrote in a memo about the accident. “One simple, dynamo-technology, low-voltage switch stood between the United States and a major catastrophe!”
And:
On January 1, 1960, General Lauris Norstad, the supreme allied commander in Europe, placed all of NATO’s [USA-supplied] nuclear-capable units on a fifteen-minute alert, without consulting Congress. Every NATO air squadron was ordered to keep at least two fighter planes loaded with fuel and a nuclear weapon, parked near a runway. And thermonuclear warheads were mated to the intermediate-range Jupiter missiles in Italy and the Thor missiles in Great Britain. The new alert policy had the full support of President Eisenhower, who thought that NATO should be able to respond promptly to a Soviet attack...
Members of the Joint Committee on Atomic Energy visited fifteen NATO bases in December 1960, eager to see how America’s nuclear weapons were being deployed. The group was accompanied by Harold Agnew, ...an expert on how to design bombs, and how to handle them properly. At a NATO base in Germany, Agnew looked out at the runway and, in his own words, “nearly wet my pants.” The F-84F fighter planes on alert, each carrying a fully assembled Mark 7 bomb, were being guarded by a single American soldier. Agnew walked over and asked the young enlisted man, who carried an old-fashioned, bolt-action rifle, what he’d do if somebody jumped into one of the planes and tried to take off. Would he shoot at the pilot— or the bomb? The soldier had never been told what to do... Agnew realized there was little to prevent a German pilot from taking a plane, flying it to the Soviet Union, and dropping an atomic bomb.
The custody arrangements at the Jupiter missile sites in Italy were even more alarming. Each site had three missiles topped with a 1.4-megaton warhead— a weapon capable of igniting firestorms and flattening every brick structure within thirty square miles. All the security was provided by Italian troops. The launch authentication officer was the only American at the site. Two keys were required to launch the missiles; one was held by the American, the other by an Italian officer. The keys were often worn on a string around the neck, like a dog tag.
Congressman Chet Holifield, the chairman of the joint committee, was amazed to find three ballistic missiles, carrying thermonuclear weapons, in the custody of a single American officer with a handgun. “All [the Italians] have to do is hit him on the head with a blackjack, and they have got his key,” Holifield said, during a closed-door committee hearing after the trip. The Jupiters were located near a forest, without any protective covering, and brightly illuminated at night. They would be sitting ducks for a sniper. “There were three Jupiters setting there in the open— all pointed toward the sky,” Holifield told the committee. “Over $300 million has been spent to set up that little show and it can be knocked out with 3 rifle bullets.”
...Harold Agnew was amazed to see a group of NATO weapon handlers pull the arming wires out of a Mark 7 while unloading it from a plane. When the wires were pulled, the arming sequence began— and if the X-unit charged, a Mark 7 could be detonated by its radar, by its barometric switches, by its timer, or by falling just a few feet from a plane and landing on a runway. A stray cosmic ray could, theoretically, detonate it. The weapon seemed to invite mistakes... And a Mark 7 sometimes contained things it shouldn’t. A screwdriver was found inside one of the bombs; an Allen wrench was somehow left inside another. In both bombs, the loose tools could have caused a short circuit.
↑ comment by lukeprog · 2013-10-31T23:02:50.576Z · LW(p) · GW(p)
More from Command and Control:
Agnew thought that sort of lock would solve many of the custody problems at NATO. A coded switch, installed in every nuclear weapon, would block the crucial arming circuits. It would make a clear distinction between the physical possession of a weapon and the ability to use one. It would become a form of remote control. And the power to exert that control, to prohibit or allow a nuclear detonation, would remain with whoever had the code.
Agnew brought an early version of the electromechanical locking system to Washington, D.C., for a closed-door hearing of the joint committee... To unlock a nuclear weapon, a two-man custodial team would attach a cable to it from the decoder. Then they’d turn the knobs on the decoder to enter a four-digit code. It was a “split-knowledge” code— each custodian would be given only two of the four numbers. Once the correct code was entered, the switch inside the weapon would take anywhere from thirty seconds to two and a half minutes to unlock, as its little gears, cams, and cam followers whirred and spun... everyone in the hearing room agreed that it was absolutely essential for national security.
The American military, however, vehemently opposed putting any locks on nuclear weapons. The Army, the Navy, the Air Force, the Marines, the Joint Chiefs of Staff, General Power at SAC, General Norstad at NATO — all of them agreed that locks were a bad idea. The always/never dilemma lay at the heart of the military’s thinking. “No single device can be expected to increase both safety and readiness,” the Joint Chiefs of Staff argued. And readiness was considered more important: the nuclear weapons in Europe were “adequately safe, within the limits of the operational requirements imposed on them.”
...After reading the joint committee’s report, President Kennedy halted the dispersal of nuclear weapons among America’s NATO allies. Studies on weapon safety and command and control were commissioned. At Sandia, the development of coded, electromechanical locks was begun on a crash basis. Known at first as “Prescribed Action Links,” the locks were given a new name, one that sounded less restrictive, in the hopes of appeasing the military. “Permissive Action Links” sounded more friendly, as did the acronym: PALs.
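The “split-knowledge” arrangement is essentially a two-man rule applied to a code: neither custodian alone holds enough digits to unlock the weapon. A toy sketch of the logic, purely illustrative (the real PAL was an electromechanical device, and the code below is invented):

```python
import hmac

STORED_CODE = "4827"  # invented four-digit code, for illustration only

def unlock(custodian_a_half: str, custodian_b_half: str) -> bool:
    """Each custodian knows only two of the four digits; only the combined
    entry is compared against the stored code."""
    attempt = custodian_a_half + custodian_b_half
    return hmac.compare_digest(attempt, STORED_CODE)  # constant-time comparison

print(unlock("48", "27"))  # True: both halves present and correct
print(unlock("48", "00"))  # False: one custodian alone cannot unlock
```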
And:
Feuds between the Army, the Navy, and the Air Force continued, despite McNamara’s vow that the Pentagon would have “one defense policy, not three conflicting defense policies.” Interservice rivalries once again complicated the effort to develop a rational nuclear strategy. The Joint Chiefs of Staff had been instructed to alter the SIOP, so that President Kennedy would have a number of options during a nuclear war. Studies were under way to make that possible. But the nuclear ambitions of the Army, the Navy, and the Air Force still seemed incompatible— and, at times, incomprehensible.
↑ comment by lukeprog · 2013-11-01T08:12:49.637Z · LW(p) · GW(p)
More (#3) from Command and Control:
President Kennedy and Secretary of Defense McNamara had taken a personal interest in nuclear weapon safety. A few months after Goldsboro, Kennedy gave the Department of Defense “responsibility for identifying and resolving health and safety problems connected with the custody and storage of nuclear weapons.” The Atomic Energy Commission was to play an important, though subsidiary, role. Kennedy’s decision empowered McNamara to do whatever seemed necessary. But it also reinforced military, not civilian, control of the system. At Los Alamos, Livermore, and Sandia, the reliability of nuclear weapons continued to receive far greater attention than their safety. And a dangerous way of thinking, a form of complacency later known as the Titanic Effect took hold among weapon designers: the more impossible an accidental detonation seemed to be, the more likely it became.
And:
Twenty-three years after Sandia became a separate laboratory, it created a nuclear weapon safety department. An assistant to the secretary of defense for atomic energy, Carl Walske, was concerned about the risks of nuclear accidents. He had traveled to Denmark, dealt with the aftermath of the Thule accident, and come to believe that the safety standards of the weapons labs were based on a questionable use of statistics. Before a nuclear weapon could enter the stockpile, the odds of its accidental detonation had to be specified, along with its other “military characteristics.” Those odds were usually said to be one in a million during storage, transportation, and handling. But the dimensions of that probability were rarely defined. Was the risk one in a million for a single weapon — or for an entire weapon system? Was it one in a million per year — or throughout the operational life of a weapon? How the risk was defined made a big difference, at a time when the United States had about thirty thousand nuclear weapons. The permissible risk of an American nuclear weapon detonating inadvertently could range from one in a million to one in twenty thousand, depending on how the statistical parameters were set.
Walske issued new safety standards in March 1968. They said that the “probability of a premature nuclear detonation” should be no greater than one in a billion, amid “normal storage and operational environments,” during the lifetime of a single weapon. And the probability of a detonation amid “abnormal environments” should be no greater than one in a million. An abnormal environment could be anything from the heat of a burning airplane to the water pressure inside a sinking submarine. Walske’s safety standards applied to every nuclear weapon in the American stockpile. They demanded a high level of certainty that an accidental detonation could never occur. But they offered no guidelines on how these strict criteria could be met. And in the memo announcing the new policy, Walske expressed confidence that “the adoption of the attached standards will not result in any increase in weapon development times or costs.”
A few months later, William L. Stevens was chosen to head Sandia’s new Nuclear Safety Department... Stevens looked through the accident reports kept by the Defense Atomic Support Agency, the Pentagon group that had replaced the Armed Forces Special Weapons Project. The military now used Native American terminology to categorize nuclear weapon accidents. The loss, theft, or seizure of a weapon was an Empty Quiver. Damage to a weapon, without any harm to the public or risk of detonation, was a Bent Spear. And an accident that caused the unauthorized launch or jettison of a weapon, a fire, an explosion, a release of radioactivity, or a full-scale detonation was a Broken Arrow. The official list of nuclear accidents, compiled by the Department of Defense and the AEC, included thirteen Broken Arrows. Bill Stevens read reports that secretly described a much larger number of unusual events with nuclear weapons. And a study of abnormal environments commissioned by Sandia soon found that at least 1,200 nuclear weapons had been involved in “significant” incidents and accidents between 1950 and March 1968.
The armed services had done a poor job of reporting nuclear weapon accidents until 1959— and subsequently reported about 130 a year. Many of the accidents were minor: “During loading of a Mk 25 Mod O WR Warhead onto a 6X6 truck, a handler lost his balance . . . the unit tipped and fell approximately four feet from the truck to the pavement.” And some were not: “A C-124 Aircraft carrying eight Mk 28 War reserve Warheads and one Mk 49 Y2 Mod 3 War Reserve Warhead was struck by lightning... Observers noted a large ball of fire pass through the aircraft from nose to tail... The ball of fire was accompanied by a loud noise.”
Reading these accident reports persuaded Stevens that the safety of America’s nuclear weapons couldn’t be assumed. The available data was insufficient for making accurate predictions about the future; a thousand weapon accidents were not enough for any reliable calculation of the odds. Twenty-three weapons had been directly exposed to fires during an accident, without detonating. Did that prove a fire couldn’t detonate a nuclear weapon? Or would the twenty-fourth exposure produce a blinding white flash and a mushroom cloud? The one-in-a-million assurances that Sandia had made for years now seemed questionable. They’d been made without much empirical evidence.
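Walske's definitional point, and Stevens's doubt that the accident record could support one-in-a-million claims, can both be made concrete with rough arithmetic. The figures below come straight from the passage (one in a million, about thirty thousand weapons, twenty-three fire exposures); the calculations are my own back-of-the-envelope additions.

```python
# If "one in a million" is read as per weapon over its lifetime, the chance of at
# least one accidental detonation somewhere in a 30,000-weapon stockpile is much larger:
p_per_weapon = 1e-6
stockpile = 30_000
fleet_risk = 1 - (1 - p_per_weapon) ** stockpile
print(f"stockpile-wide risk: about 1 in {round(1 / fleet_risk)}")  # roughly 1 in 34

# And 23 fire exposures with no detonation say little about such small probabilities:
# the standard "rule of three" puts the 95% upper bound on the per-exposure chance
# at about 3/23 -- around 13 percent, nowhere near one in a million.
print(f"95% upper bound from 23 clean exposures: about {3 / 23:.0%}")
```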
And:
Four Jupiter missiles in Italy had also been hit by lightning. Some of their thermal batteries fired, and in two of the warheads, tritium gas was released into their cores, ready to boost a nuclear detonation. The weapons weren’t designed to sit atop missiles, exposed to the elements, for days at a time. They lacked safety mechanisms to protect against lightning strikes. Instead of removing the warheads or putting safety devices inside them, the Air Force surrounded its Jupiter sites with tall metal towers to draw lightning away from the missiles.
Stan Spray’s group ruthlessly burned, scorched, baked, crushed, and tortured weapon components to find their potential flaws. And in the process Spray helped to overturn the traditional thinking about electrical circuits at Sandia. It had always been taken for granted that if two circuits were kept physically apart, if they weren’t mated or connected in any way— like separate power lines running beside a highway— current couldn’t travel from one to the other. In a normal environment, that might be true. But strange things began to happen when extreme heat and stress were applied.
When circuit boards were bent or crushed, circuits that were supposed to be kept far apart might suddenly meet. The charring of a circuit board could transform its fiberglass from an insulator into a conductor of electricity. The solder of a heat-sensitive fuse was supposed to melt when it reached a certain temperature, blocking the passage of current during a fire. But Spray discovered that solder behaved oddly once it melted. As a liquid it could prevent an electrical connection— or flow back into its original place, reconnect wires, and allow current to travel between them.
The unpredictable behavior of materials and electrical circuits during an accident was compounded by the design of most nuclear weapons. Although fission and fusion were radically new and destructive forces in warfare, the interior layout of bombs hadn’t changed a great deal since the Second World War. The wires from different components still met in a single junction box. Wiring that armed the bomb and wiring that prevented it from being armed often passed through the same junction— making it possible for current to jump from one to the other. And the safety devices were often located far from the bomb’s firing set. The greater the distance between them, Spray realized, the greater the risk that stray electricity could somehow enter an arming line, set off the detonators, and cause a nuclear explosion.
And:
Another Sandia safety effort was being concluded at roughly the same time. Project Crescent had set out to design a “supersafe” bomb — one that wouldn’t detonate “under any conceivable set of accident conditions” or spread plutonium, even after being mistakenly dropped from an altitude of forty thousand feet. At first, the Air Force was “less than enthusiastic about requiring more safety in nuclear weapons,” according to a classified memo on the project. But the Air Force eventually warmed to the idea; a supersafe bomb might permit the resumption of the Strategic Air Command’s airborne alert. After more than two years of research, Project Crescent proposed a weapon design that — like a concept car at an automobile show — was innovative but impractical. To prevent the high explosives from detonating and scattering plutonium after a plane crash, the bomb would have a thick casing and a lot of interior padding. Those features would make it three to four times heavier than most hydrogen bombs. The additional weight would reduce the number of nuclear weapons that a B-52 could carry— and that’s why the supersafe bomb was never built.
↑ comment by lukeprog · 2013-11-01T07:25:49.918Z · LW(p) · GW(p)
More (#2) from Command and Control:
Far from being grounds for celebration, the absence of a missile gap became a potential source of embarrassment for the Kennedy administration. Many of the claims made by the Democrats during the recent presidential campaign now seemed baseless. Although General Power still insisted that the Soviets were hiding their long-range missiles beneath camouflage, the United States clearly had not fallen behind in the nuclear arms race. Public knowledge of that fact would be inconvenient— and so the public wasn’t told. When McNamara admitted that the missile gap was a myth, during an off-the-record briefing with reporters, President Kennedy was displeased.
At a press conference the following day, Kennedy stressed that “it would be premature to reach a judgment as to whether there is a gap or not a gap.” Soon the whole issue was forgotten. Political concerns, not strategic ones, determined how many long-range, land-based missiles the United States would build. Before Sputnik, President Eisenhower had thought that twenty to forty would be enough. Jerome Wiesner advised President Kennedy that roughly ten times that number would be sufficient for deterrence. But General Power wanted the Strategic Air Command to have ten thousand Minuteman missiles, aimed at every military target in the Soviet Union that might threaten the United States. And members of Congress, unaware that the missile gap was a myth, also sought a large, land-based force. After much back and forth, McNamara decided to build a thousand Minuteman missiles. One Pentagon adviser later explained that it was “a round number.”
And:
Amid all the consideration of how to protect the president and the Joint Chiefs, how to gather information in real time, how to transmit war orders, how to devise the technical and administrative means for a flexible response, little thought had been given to an important question: how do you end a nuclear war? Thomas Schelling — a professor of economics at Harvard, a RAND analyst, proponent of game theory, and adviser to the Kennedy administration — began to worry about the issue early in 1961. While heading a committee on the risk of war by accident, miscalculation, or surprise, he was amazed to learn that there was no direct, secure form of communications between the White House and the Kremlin. It seemed almost unbelievable. Schelling had read the novel Red Alert a few years earlier, bought forty copies, and sent them to colleagues. The book gave a good sense of what could go wrong — and yet the president’s ability to call his Soviet counterpart on a “hot line” existed only in fiction. As things stood, AT&T’s telephone lines and Western Union’s telegraph lines were the only direct links between the United States and the Soviet Union. Both of them would be knocked out by a thermonuclear blast, and most radio communications would be, as well. The command-and-control systems of the two countries had no formal, reliable means of interacting. The problem was so serious and so obvious, Schelling thought, everybody must have assumed somebody else had taken care of it. Pauses for negotiation would be a waste of time, if there were no way to negotiate. And once a nuclear war began, no matter how pointless, devastating, and horrific, it might not end until both sides ran out of nuclear weapons.
And:
The lack of direct, secure communications between the White House and the Kremlin, the distrust that Kennedy felt toward the Soviet leader, and Khrushchev’s impulsive, unpredictable behavior complicated efforts to end the [Cuban missile] crisis peacefully. Khrushchev felt relieved, after hearing Kennedy’s speech, that the president hadn’t announced an invasion of Cuba. Well aware that the Soviet Union’s strategic forces were vastly inferior to those of the United States, Khrushchev had no desire to start a nuclear war. He did, however, want to test Kennedy’s mettle and see how much the Soviets could gain from the crisis. Khrushchev secretly ordered his ships loaded with missiles not to violate the quarantine. But in private letters to Kennedy, he vowed that the ships would never turn around, denied that offensive weapons had been placed in Cuba, and denounced the quarantine as “an act of aggression which pushes mankind toward... a world nuclear-missile war.”
...While the Kennedy administration anxiously wondered if the Soviets would back down, Khrushchev maintained a defiant facade. And then on October 26, persuaded by faulty intelligence that an American attack on Cuba was about to begin, he wrote another letter to Kennedy, offering a deal: the Soviet Union would remove the missiles from Cuba, if the United States promised never to invade Cuba.
Khrushchev’s letter arrived at the American embassy in Moscow around five o’clock in the evening, which was ten in the morning, Eastern Standard Time. It took almost eleven hours for the letter to be fully transmitted by cable to the State Department in Washington, D.C. Kennedy and his advisers were encouraged by its conciliatory tone and decided to accept the deal— but went to bed without replying. Seven more hours passed, and Khrushchev started to feel confident that the United States wasn’t about to attack Cuba, after all. He wrote another letter to Kennedy, adding a new demand: the missiles in Cuba would be removed, if the United States removed its Jupiter missiles from Turkey. Instead of being delivered to the American embassy, this letter was broadcast, for the world to hear, on Radio Moscow.
On the morning of October 27, as President Kennedy was drafting a reply to Khrushchev’s first proposal, the White House learned about his second one. Kennedy and his advisers struggled to understand what was happening in the Kremlin. Conflicting messages were now coming not only from Khrushchev, but from various diplomats, journalists, and Soviet intelligence agents who were secretly meeting with members of the administration. Convinced that Khrushchev was being duplicitous, McNamara now pushed for a limited air strike to destroy the missiles. General Maxwell Taylor, now head of the Joint Chiefs of Staff, recommended a large-scale attack. When an American U-2 was shot down over Cuba, killing the pilot, the pressure on Kennedy to launch an air strike increased enormously. A nuclear war with the Soviet Union seemed possible. “As I left the White House... on that beautiful fall evening,” McNamara later recalled, “I feared I might never live to see another Saturday night.”
The Cuban Missile Crisis ended amid the same sort of confusion and miscommunication that had plagued much of its thirteen days. President Kennedy sent the Kremlin a cable accepting the terms of Khrushchev’s first offer, never acknowledging that a second demand had been made. But Kennedy also instructed his brother to meet privately with Ambassador Dobrynin and agree to the demands made in Khrushchev’s second letter— so long as the promise to remove the Jupiters from Turkey was never made public. Giving up dangerous and obsolete American missiles to avert a nuclear holocaust seemed like a good idea. Only a handful of Kennedy’s close advisers were told about this secret agreement.
Meanwhile, at the Kremlin, Khrushchev suddenly became afraid once again that the United States was about to attack Cuba. He decided to remove the Soviet missiles from Cuba— without insisting upon the removal of the Jupiters from Turkey. Before he had a chance to transmit his decision to the Soviet embassy in Washington, word arrived from Dobrynin about Kennedy’s secret promise. Khrushchev was delighted by the president’s unexpected— and unnecessary— concession. But time seemed to be running out, and an American attack might still be pending. Instead of accepting the deal through a diplomatic cable, Khrushchev’s decision to remove the missiles from Cuba was immediately broadcast on Radio Moscow. No mention was made of the American vow to remove its missiles from Turkey.
Both leaders had feared that any military action would quickly escalate to a nuclear exchange. They had good reason to think so. Although Khrushchev never planned to move against Berlin during the crisis, the Joint Chiefs had greatly underestimated the strength of the Soviet military force based in Cuba. In addition to strategic weapons, the Soviet Union had almost one hundred tactical nuclear weapons on the island that would have been used by local commanders to repel an American attack. Some were as powerful as the bomb that destroyed Hiroshima. Had the likely targets of those weapons— the American fleet offshore and the U.S. naval base at Guantánamo— been destroyed, an all-out nuclear war would have been hard to avoid.
↑ comment by lukeprog · 2013-11-01T08:22:35.269Z · LW(p) · GW(p)
More (#4) from Command and Control:
After taking the new job, Peurifoy made a point of reading the classified reports on every... major [nuclear] weapon accident, a lengthy catalog of fires, crashes, and explosions, of near misses and disasters narrowly averted. The fact that an accidental detonation had not yet happened, that a major city had not yet been blanketed with plutonium, offered little comfort. The probabilities remained unknown. What were the odds of a screwdriver, used to repair an alarm system, launching the warhead off a missile, the odds of a rubber seat cushion bringing down a B-52? After reading through the accident reports, Peurifoy reached his own conclusion about the safety of America’s nuclear weapons: “We are living on borrowed time.”
Peurifoy had recently heard about an explosive called [TATB]. It had been invented in 1888 but had been rarely used since then— because TATB was so hard to detonate. Under federal law, it wasn’t even classified as an explosive; it was considered a flammable solid. With the right detonators, however, it could produce a shock wave almost as strong as the high explosives that surrounded the core of a nuclear weapon. TATB soon became known as an “insensitive high explosive.” You could drop it, hammer it, set it on fire, smash it into the ground at a speed of 1,500 feet per second, and it still wouldn’t detonate. The explosives being used in America’s nuclear weapons would go off from an impact one tenth as strong. Harold Agnew was now the director of Los Alamos, and he thought using TATB in hydrogen bombs made a lot more sense— as a means of preventing plutonium dispersal during an accident— than adding two or three thousand extra pounds of steel and padding.
All the necessary elements for nuclear weapon safety were now available: a unique signal, weak link/strong link technology, insensitive high explosives. The only thing missing was the willingness to fight a bureaucratic war on their behalf— and Bob Peurifoy had that quality in abundance. He was no longer a low-level employee, toiling away on the electrical system of a bomb, without a sense of the bigger picture. As the head of weapon development, he now had some authority to make policy at Sandia. And he planned to take advantage of it. Three months into the new job, Peurifoy told his superior, Glenn Fowler, a vice president at the lab, that all the nuclear weapons carried by aircraft had to be retrofitted with new safety devices. Peurifoy didn’t claim that the weapons were unsafe; he said their safety could no longer be presumed. Fowler listened carefully to his arguments and agreed. A briefing for Sandia’s upper management was scheduled for February 1974.
The briefing did not go well. The other vice presidents at Sandia were indifferent, unconvinced, or actively hostile to Peurifoy’s recommendations. The strongest opponents of a retrofit argued that it would harm the lab’s reputation— it would imply that Sandia had been wrong about nuclear weapon safety for years. They said new weapons with improved safety features could eventually replace the old ones. And they made clear that the lab’s research-and-development money would not be spent on bombs already in the stockpile. Sandia couldn’t force the armed services to alter their weapons, and the Department of Defense had the ultimate responsibility for nuclear weapon safety. The lab’s upper management said, essentially, that this was someone else’s problem.
In April 1974, Peurifoy and Fowler went to Washington and met with Major General Ernest Graves, Jr., a top official at the Atomic Energy Commission, whose responsibilities included weapon safety. Sandia reported to the AEC, and Peurifoy was aiming higher on the bureaucratic ladder. Graves listened to the presentation and then did nothing about it. Five months later, unwilling to let the issue drop and ready to escalate the battle, Peurifoy and Fowler put their concerns on the record. A letter to General Graves was drafted— and Glenn Fowler placed his career at risk by signing and sending it. The “Fowler Letter,” as it was soon called, caused a top secret uproar in the nuclear weapon community. It ensured that high-level officials at the weapons labs, the AEC, and the Pentagon couldn’t hide behind claims of plausible deniability, if a serious accident happened. The letter was proof that they had been warned.
“Most of the aircraft delivered weapons now in stockpile were designed to requirements which envisioned... operations consisting mostly of long periods of igloo storage and some brief exposure to transportation environments,” the Fowler letter began. But these weapons were now being used in ways that could subject them to abnormal environments. And none of the weapons had adequate safety mechanisms. Fowler described the “possibility of these safing devices being electrically bypassed through charred organic plastics or melted solder” and warned of their “premature operation from stray voltages and currents.” He listed the weapons that should immediately be retrofitted or retired, including the Genie, the Hound Dog, the 9-megaton Mark 53 bomb— and the weapons that needed to be replaced, notably the Mark 28, SAC’s most widely deployed bomb. He said that the secretary of defense should be told about the risks of using these weapons during ground alerts. And Fowler recommended, due to “the urgency associated with the safety question,” that nuclear weapons should be loaded onto aircraft only for missions “absolutely required for national security reasons.”
And:
Random urine tests of more than two thousand sailors at naval bases in Norfolk, Virginia, and San Diego, California, found that almost half had recently smoked pot. Although nuclear weapons and marijuana had recently become controversial subjects in American society, inspiring angry debates between liberals and conservatives, nobody argued that the two were a good combination.
...At Homestead Air Force Base in Florida, thirty-five members of an Army unit were arrested for using and selling marijuana and LSD. The unit controlled the Nike Hercules antiaircraft missiles on the base, along with their nuclear warheads. The drug use at Homestead was suspected after a fully armed Russian MiG-17 fighter plane, flown by a Cuban defector, landed there unchallenged, while Air Force One was parked on a nearby runway. Nineteen members of an Army detachment were arrested on pot charges at a Nike Hercules base on Mount Gleason, overlooking Los Angeles. One of them had been caught drying a large amount of marijuana on land belonging to the U.S. Forest Service. Three enlisted men at a Nike Hercules base in San Rafael, California, were removed from guard duty for psychiatric reasons. One of them had been charged with pointing a loaded rifle at the head of a sergeant. Although illegal drugs were not involved in the case, the three men were allowed to guard the missiles, despite a history of psychiatric problems. The squadron was understaffed, and its commander feared that hippies—“ people from the Haight-Ashbury”— were trying to steal nuclear weapons.
More than one fourth of the crew on the USS Nathan Hale, a Polaris submarine with sixteen ballistic missiles, were investigated for illegal drug use. Eighteen of the thirty-eight seamen were cleared; the rest were discharged or removed from submarine duty. A former crew member of the Nathan Hale told a reporter that hashish was often smoked when the sub was at sea. The Polaris base at Holy Loch, Scotland, helped turn the Cowal Peninsula into a center for drug dealing in Great Britain. Nine crew members of the USS Casimir Pulaski, a Polaris submarine, were convicted for smoking marijuana at sea. One of the submarine tenders that docked at the base, the USS Canopus, often carried nuclear warheads and ballistic missiles. The widespread marijuana use among its crew earned the ship a local nickname: the USS Cannabis.
Four SAC pilots stationed at Castle Air Force Base near Merced, California, were arrested with marijuana and LSD. The police who raided their house, located off the base, said that it resembled “a hippie type pad with a picture of Ho Chi Minh on the wall.” At Seymour Johnson Air Force Base in Goldsboro, North Carolina, 151 of the 225 security police officers were busted on marijuana charges. The Air Force Office of Special Investigations arrested many of them leaving the base’s nuclear weapon storage area. Marijuana was discovered in one of the underground control centers of a Minuteman missile squadron at Malmstrom Air Force Base near Great Falls, Montana. It was also found in the control center of a Titan II launch complex about forty miles southeast of Tucson, Arizona. The launch crew and security officers at the site were suspended while investigators tried to determine who was responsible for the “two marijuana cigarettes.”
The true extent of drug use among American military personnel with access to nuclear weapons was hard to determine. Of the roughly 114,000 people who’d been cleared to work with nuclear weapons in 1980, only 1.5 percent lost that clearance because of drug abuse. But the Personnel Reliability Program’s 98.5 percent success rate still allowed at least 1,728 “unreliable” drug users near the weapons. And those were just the ones who got caught.
↑ comment by Shmi (shminux) · 2013-10-31T22:37:31.218Z · LW(p) · GW(p)
Do you keep a list of the audiobooks you liked anywhere? I'd love to take a peek.
↑ comment by lukeprog · 2013-10-31T23:10:57.771Z · LW(p) · GW(p)
Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
- Tetlock, Expert Political Judgment
- Pinker, The Better Angels of Our Nature (my clips)
- Schlosser, Command and Control (my clips)
- Yergin, The Quest (my clips)
- Osnos, Age of Ambition (my clips)
Worthwhile if you care about the subject matter:
- Singer, Wired for War (my clips)
- Feinstein, The Shadow World (my clips)
- Venter, Life at the Speed of Light (my clips)
- Rhodes, Arsenals of Folly (my clips)
- Weiner, Enemies: A History of the FBI (my clips)
- Rhodes, The Making of the Atomic Bomb (available here) (my clips)
- Gleick, Chaos (my clips)
- Weiner, Legacy of Ashes: The History of the CIA (my clips)
- Freese, Coal: A Human History (my clips)
- Aid, The Secret Sentry (my clips)
- Scahill, Dirty Wars (my clips)
- Patterson, Dark Pools (my clips)
- Lieberman, The Story of the Human Body
- Pentland, Social Physics (my clips)
- Okasha, Philosophy of Science: VSI
- Mazzetti, The Way of the Knife (my clips)
- Ferguson, The Ascent of Money (my clips)
- Lewis, The Big Short (my clips)
- de Mesquita & Smith, The Dictator's Handbook (my clips)
- Sunstein, Worst-Case Scenarios (available here) (my clips)
- Johnson, Where Good Ideas Come From (my clips)
- Harford, The Undercover Economist Strikes Back (my clips)
- Caplan, The Myth of the Rational Voter (my clips)
- Hawkins & Blakeslee, On Intelligence
- Gleick, The Information (my clips)
- Gleick, Isaac Newton
- Greene, Moral Tribes
- Feynman, Surely You're Joking, Mr. Feynman! (my clips)
- Sabin, The Bet (my clips)
- Watts, Everything Is Obvious: Once You Know the Answer (my clips)
- Greenblatt, The Swerve: How the World Became Modern (my clips)
- Cain, Quiet: The Power of Introverts in a World That Can't Stop Talking
- Dennett, Freedom Evolves
- Kaufman, The First 20 Hours
- Gertner, The Idea Factory (my clips)
- Olen, Pound Foolish
- McArdle, The Up Side of Down
- Rhodes, Twilight of the Bombs (my clips)
- Isaacson, Steve Jobs (my clips)
- Priest & Arkin, Top Secret America (my clips)
- Ayres, Super Crunchers (my clips)
- Lewis, Flash Boys (my clips)
- Dartnell, The Knowledge (my clips)
- Cowen, The Great Stagnation
- Lewis, The New New Thing (my clips)
- McCray, The Visioneers (my clips)
- Jackall, Moral Mazes (my clips)
- Langewiesche, The Atomic Bazaar
- Ariely, The Honest Truth about Dishonesty (my clips)
↑ comment by lukeprog · 2013-11-25T10:46:07.269Z · LW(p) · GW(p)
A process for turning ebooks into audiobooks for personal use, at least on Mac:
- Rip the Kindle ebook to non-DRMed .epub with Calibre and Apprentice Alf.
- Open the .epub in Sigil, merge all the contained HTML files into a single HTML file (select the files, right-click, Merge). Open the Source view for the big HTML file.
- Edit the source so that the ebook begins with the title and author, then jumps right into the foreword or preface or first chapter, and ends with the end of the last chapter or epilogue. (Cut out any table of contents, list of figures, list of tables, appendices, index, bibliography, and endnotes.)
- Remove footnotes if easy to do so, using Sigil's Regex find-and-replace (remember to use Minimal Match so you don't delete too much!). Click through several instances of the Find command to make sure it's going to properly cut out only the footnotes, before you click "Replace All."
- Use find-and-replace to add [[slnc_1000]] at the end of every paragraph (ignore any stray italics here; LW's formatting adds them, and it also swallows the literal HTML tag in this step). Mac's text-to-speech engine reads [[slnc_1000]] as a slight pause, which aids comprehension when I'm listening to the audiobook. In practice this means appending [[slnc_1000]] to each paragraph's closing tag; a script version of these steps is sketched at the end of this comment.
- Copy/paste that entire HTML file into a text file and save it as .html. Open this in your browser, Select All, right-click and choose Services -> Add to iTunes as Spoken Track. (I think "Ava" is the best voice; you'll have to add this voice by upgrading to Mavericks and adding Ava under System Preferences -> Dictation and Speech.) This will take a while, and might even throw up an error even though the track will continue being created and will succeed.
- Now, sync this text-to-speech audiobook to some audio player that can play at 2x or 3x speed, and listen away.
To de-DRM your Audible audiobooks, just use Tune4Mac.
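For the Sigil find-and-replace steps above, the same cleanup can be scripted. This is a rough sketch of my own rather than part of the original recipe: the footnote pattern is hypothetical (footnote markup differs from book to book), and it assumes the merged HTML file produced in Sigil as input.

```python
import re
import sys

def prepare_for_tts(html: str) -> str:
    """Clean merged ebook HTML before handing it to text-to-speech."""
    # Drop footnote references, here assumed to look like <a ... class="noteref">...</a>.
    # The non-greedy .*? plays the same role as Sigil's "Minimal Match" option: each
    # replacement stops at the first closing tag instead of swallowing whole chapters.
    html = re.sub(r'<a[^>]*class="noteref"[^>]*>.*?</a>', '', html, flags=re.S)
    # Append the pause marker after every paragraph, as described in the steps above.
    html = html.replace('</p>', ' [[slnc_1000]]</p>')
    return html

if __name__ == "__main__":
    src, dst = sys.argv[1], sys.argv[2]
    with open(src, encoding="utf-8") as f:
        cleaned = prepare_for_tts(f.read())
    with open(dst, "w", encoding="utf-8") as f:
        f.write(cleaned)
```

Run it as, say, python prepare_tts.py book.html book_clean.html, then open the cleaned file in the browser and use Add to iTunes as Spoken Track as before.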
↑ comment by Dr_Manhattan · 2013-12-08T16:16:05.740Z · LW(p) · GW(p)
VoiceDream for iPhone does a very fine job of text-to-speech; it also syncs your Pocket bookmarks and can read epub files.
↑ comment by lukeprog · 2014-04-03T02:08:27.748Z · LW(p) · GW(p)
Other:
- Roose, Young Money. Too focused on a few individuals for my taste, but still has some interesting content. (my clips)
- Hofstadter & Sander, Surfaces and Essences. Probably a fine book, but I was only interested enough to read the first and last chapters.
- Taleb, AntiFragile. Learned some from it, but it's kinda wrong much of the time. (my clips)
- Acemoglu & Robinson, Why Nations Fail. Lots of handy examples, but too much of "our simple theory explains everything." (my clips)
- Byrne, The Many Worlds of Hugh Everett III (available here). Gave up on it; too much theory, not enough story. (my clips)
- Drexler, Radical Abundance. Gave up on it; too sanitized and basic.
- Mukherjee, The Emperor of All Maladies. Gave up on it; too slow in pace and flowery in language for me.
- Fukuyama, The Origins of Political Order. Gave up on it; the author is more keen on name-dropping theorists than on tracking down data.
- Friedman, The Moral Consequences of Economic Growth (available here). Gave up on it. There are some actual data in chs. 5-7, but the argument is too weak and unclear for my taste.
- Tuchman, The Proud Tower. Gave up on it after a couple chapters. Nothing wrong with it, it just wasn't dense enough in the kind of learning I'm trying to do.
- Foer, Eating Animals. I listened to this not to learn, but to shift my emotions. But it was too slow-moving, so I didn't finish it.
- Caro, The Power Broker. This might end up under "outstanding" if I ever finish it. For now, I've put this one on hold because it's very long and not as highly targeted at the useful learning I want to be doing right now as some other books.
- Rutherfurd, Sarum. This is the furthest I've gotten into any fiction book for the past 5 years at least, including HPMoR. I think it's giving my system 1 an education into what life was like in the historical eras it covers, without getting bogged down in deep characterization, complex plotting, or ornate environmental description. But I've put it on hold for now because it is incredibly long.
- Diamond, Collapse. I listened to several chapters, but it seemed to be mostly about environmental decline, which doesn't interest me much, so I stopped listening.
- Bowler & Morus, Making Modern Science (available here) (my clips). A decent history of modern science but not focused enough on what I wanted to learn, so I gave up.
- Brynjolfsson & McAfee, The Second Machine Age (my clips). Their earlier, shorter Race Against the Machine contained the core arguments; this book expands the material in order to explain things to a lay audience. As with Why Nations Fail, I have too many quibbles with its argument to put this book in the 'Liked' category.
- Clery, A Piece of the Sun. Nothing wrong with it, I just wasn't learning the type of things I was hoping to learn, so I stopped about half way through.
- Schuman, The Miracle. Fairly interesting, but not quite dense enough in the kind of stuff I'm hoping to learn these days.
- Conway & Oreskes, Merchants of Doubt. Fairly interesting, but not dense enough in the kind of things I'm hoping to learn.
- Horowitz, The Hard Thing About Hard Things
- Wessel, Red Ink
- Levitt & Dubner, Think Like a Freak (my clips)
- Gladwell, David and Goliath (my clips)
↑ comment by Shmi (shminux) · 2013-11-01T18:22:40.884Z · LW(p) · GW(p)
Thanks! Your first 3 are not my cup of tea, but I'll keep looking through the top 1000 list. For now, I am listening to MaddAddam, the last part of Margaret Atwood's post-apocalyptic fantasy trilogy, which qrnyf jvgu bar zna qvfnccbvagrq jvgu uvf pbagrzcbenel fbpvrgl ervairagvat naq ercbchyngvat gur rnegu jvgu orggre crbcyr ur qrfvtarq uvzfrys. She also has some very good non-fiction, like her Massey lecture on debt, which I warmly recommend.
↑ comment by Nick_Beckstead · 2014-02-24T14:10:07.286Z · LW(p) · GW(p)
Could you say a bit about your audiobook selection process?
↑ comment by lukeprog · 2014-02-24T16:32:02.100Z · LW(p) · GW(p)
When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to.
These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.
Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process, I'm not sure.
Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
↑ comment by ozziegooen · 2014-02-25T21:31:04.292Z · LW(p) · GW(p)
I definitely found something similar. I've come to believe that most 'popular science', 'popular history', etc. books are on Audible, but almost anything with equations or code is not.
The 'great courses' have been quite fantastic for me for learning about the social sciences. I found out about those recently.
Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them to be rather uninteresting in comparison to full books and courses.
↑ comment by Nick_Beckstead · 2014-02-25T09:47:38.180Z · LW(p) · GW(p)
Thanks!
↑ comment by lukeprog · 2013-11-24T13:27:32.922Z · LW(p) · GW(p)
From Singer's Wired for War:
people have long peered into the future and then gotten it completely and utterly wrong. My favorite example took place on October 9, 1903, when the New York Times predicted that “the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years.” That same day, two brothers who owned a bicycle shop in Ohio started assembling the very first airplane, which would fly just a few weeks later.
Similarly botched predictions frequently happen in the military field. General Giulio Douhet, the commander of Italy’s air force in World War I, is perhaps the most infamous. In 1921, he wrote a best-selling book called The Command of the Air, which argued that the invention of airplanes made all other parts of the military obsolete and unnecessary. Needless to say, this would be news both to my granddaddy, who sailed out to another world war just twenty years later, and to the soldiers slogging through the sand and dust of Iraq and Afghanistan today.
↑ comment by lukeprog · 2013-11-24T13:54:15.324Z · LW(p) · GW(p)
More (#7) from Wired for War:
if a robot vacuum cleaner started sucking up infants as well as dust, because of some programming error or design flaw, we can be sure that the people who made the mistakes would be held liable. That same idea of product liability can be taken from civilian law and applied over to the laws of war. While a system may be autonomous, those who created it still hold some responsibility for its actions. Given the larger stakes of war crimes, though, the punishment shouldn’t be a lawsuit, but criminal prosecution. If a programmer gets an entire village blown up by mistake, the proper punishment is not a monetary fine that the firm’s insurance company will end up paying. Many researchers might balk at this idea and claim it will stand in the way of their work. But as Bill Joy sensibly notes, especially when the consequences are high, “Scientists and technologists must take clear responsibility for the consequences of their discoveries.” Dr. Frankenstein should not get a free pass for his monster’s work, just because he has a doctorate.
The same concept could apply to unmanned systems that commit some war crime not because of manufacturer’s defect, but because of some sort of misuse or failure to take proper precautions. Given the different ways that people are likely to classify robots as “beings” when it comes to expectations of rights we might grant them one day, the same concept might be flipped across to the responsibilities that come with using or owning them. For example, a dog is a living, breathing animal totally separate from a human. That doesn’t mean, however, that the law is silent on the many legal questions that can arise from dogs’ actions. As odd as it sounds, pet law might then be a useful resource in figuring out how to assess the accountability of autonomous systems.
The owner of a pit bull may not be in total control of exactly what the dog does or even who the dog bites. The dog’s autonomy as a “being” doesn’t mean, however, that we just wave our hands and act as if there is no accountability if that dog mauls a little kid. Even if the pit bull’s owner was gone at the time, they still might be criminally prosecuted if the dog was abused or trained (programmed) improperly, or because the owner showed some sort of negligence in putting a dangerous dog into a situation where it was easy for kids to get harmed.
Like the dog owner, some future commander who deploys an autonomous robot may not always be in total control of their robot’s every operation, but that does not necessarily break their chain of accountability. If it turns out that the commands or programs they authorized the robot to operate under somehow contributed to a violation of the laws of war or if their robot was deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold them responsible. Commanders have what is known as responsibility “by negation.” Because they helped set the whole situation in process, commanders are equally responsible for what they didn’t do to avoid a war crime as for what they might have done to cause it.
And:
Today, the concept of machines replacing humans at the top of the food chain is not limited to stories like The Terminator or Maximum Overdrive (the Stephen King movie in which eighteen-wheeler trucks conspire to take over the world, one truck stop at a time). As military robotics expert Robert Finkelstein projects, “within 20 years” the pairing of AI and robotics will reach a point of development where a machine “matches human capabilities. You [will] have endowed it with capabilities that will allow it to outperform humans. It can’t stay static. It will be more than human, different than human. It will change at a pace that humans can’t match.” When technology reaches this point, “the rules change,” says Finkelstein. “On Monday you control it, on Tuesday it is doing things you didn’t anticipate, on Wednesday, God only knows. Is it a good thing or a bad thing, who knows? It could end up causing the end of humanity, or it could end war forever.”
Finkelstein is hardly the only scientist who talks so directly about robots taking over one day. Hans Moravec, director of the Robotics Institute at Carnegie Mellon University, believes that “the robots will eventually succeed us: humans clearly face extinction.” Eric Drexler, the engineer behind many of the basic concepts of nanotechnology, says that “our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short.” Freeman Dyson, the distinguished physicist and mathematician who helped jump-start the field of quantum mechanics (and inspired the character of Dyson in the Terminator movies), states that “humanity looks to me like a magnificent beginning, but not the final word.” His equally distinguished son, the science historian George Dyson, came to the same conclusion, but for different reasons. As he puts it, “In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” Even inventor Ray Kurzweil of Singularity fame gives humanity “a 50 percent chance of survival.” He adds, “But then, I’ve always been accused of being an optimist.”
...Others believe that we must take action now to stave off this kind of future. Bill Joy, the cofounder of Sun Microsystems, describes himself as having had an epiphany a few years ago about his role in humanity’s future. “In designing software and microprocessors, I have never had the feeling I was designing an intelligent machine. The software and hardware is so fragile, and the capabilities of a machine to ‘think’ so clearly absent that, even as a possibility, this has always seemed very far in the future.... But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of technology that may replace our species. How do I feel about this? Very uncomfortable.”
↑ comment by lukeprog · 2013-11-24T13:50:06.651Z · LW(p) · GW(p)
More (#6) from Wired for War:
Perhaps the best illustration of how the bar is being lowered for groups seeking to develop or use such sophisticated systems comes in the form of “Team Gray,” one of the competitors in the 2005 DARPA Grand Challenge. Gray Insurance is a family-owned insurance company from Metairie, Louisiana, just outside New Orleans. As Eric Gray, who owns the firm along with his brother and dad, explained, the firm’s entry into robotics came on a lark. “I read an article in Popular Science about last year’s race and then threw the magazine in the back of my office. Later on, my brother came over and read the article, and he yelled over to me, ‘Hey did you read about this race?’ And I said, ‘Yeah,’ and he said, ‘You wanna try it?’ And I said, ‘Yeah, heck, let’s give it a try.’ ”
The Grays didn’t have PhDs in robotics, billion-dollar military labs backing them, or even much familiarity with computers. Instead, they brought in the head of their insurance company’s ten-person IT department for guidance on what to do. He then went out and bought some of the various parts and components described in the magazine article. They got their ruggedized computer, for example, at a boat show. The Grays then began reading up on video game programming, thinking that programming a robot car to drive through the real-world course had many parallels with “navigating an animated monster through a virtual world.” Everything was loaded into a Ford Escape Hybrid SUV, which they called Kat 5, after the category 5 Hurricane Katrina that hit their hometown just a few months before the race.
When it came time for the race to see who could design the best future automated military vehicle, Team Gray’s entry lined up beside robots made by some of the world’s most prestigious universities and companies. Kat 5 then not only finished the racecourse (recall that no robot contestant had even been able to go more than a few miles the year before), but came in fourth out of the 195 contestants, just thirty-seven minutes behind Sebastian Thrun’s Stanley robot. Said Eric Gray, who spent only $650,000 to make a robot that the Pentagon and nearly every top research university had been unable to build just a year before, “It’s a beautiful thing when people are ignorant that something is impossible.”
And:
When we think of the terrorist risks that emanate from unmanned systems, robotics expert Robert Finkelstein advises that we shouldn’t just look at organizations like al-Qaeda. “They can make a lone actor like Timothy McVeigh even more scary.” He describes a scenario in which “a few amateurs could shut down Manhattan with relative ease.” (Given that my publisher is based in Manhattan, we decided to leave the details out of the book.) Washington Post technology reporter Joel Garreau similarly writes, “One bright but embittered loner or one dissident grad student intent on martyrdom could—in a decent biological lab for example—unleash more death than ever dreamed of in nuclear scenarios. It could even be done by accident.”
In political theory, noted philosophers like Thomas Hobbes argued that individuals have always had to grant their obedience to governments because it was only by banding together and obeying some leader that people could protect themselves. Otherwise, life would be “nasty, brutish and short,” as he famously described a world without governments. But most people forget the rest of the deal that Hobbes laid out. “The obligation of subjects to the sovereign is understood to last as long and no longer than the power lasteth by which he is able to protect them.”
As a variety of scientists and analysts look at such new technologies as robotics, AI and nanotech, they are finding that massive power will no longer be held only by states. Nor will it even be limited to nonstate organizations like Hezbollah or al-Qaeda. It is also within the reach of individuals. The playing field is changing for Hobbes’s sovereign.
Even the eternal optimist Ray Kurzweil believes that with the barriers to entry being lowered for violence, we could see the rise of superempowered individuals who literally hold humanity’s future in their hands. New technologies are allowing individuals with creativity to push the limits of what is possible. He points out how Sergey Brin and Larry Page were just two Stanford kids with a creative idea that turned into Google, a mechanism that makes it easy for anyone to search almost all the world’s knowledge. However, their $100 billion idea is “also empowering for those who are destructive.” Information on how to build your own remote bomb or the genetic code for the 1918 flu bug are as searchable as the latest news on Britney Spears. Kurzweil describes the looming period in human history that we are entering, just before his hoped-for Singularity: “It feels like all ten billion of us are standing in a room up to our knees in flammable fluid, waiting for someone—anyone—to light a match.”
Kurzweil thinks we have enough fire extinguishers to avoid going up in flames before the Singularity arrives, but others aren’t so certain. Bill Joy, the so-called father of the Internet, for example, fears what he calls “KMD,” individuals who wield knowledge-enabled mass destruction. “It is no exaggeration to say that we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation states, on to a surprising and terrible empowerment of individuals.”
The science fiction writers concur. “Single individual mass destruction” is the biggest dilemma we have to worry about with our new technologies, warns Greg Bear. He notes that many high school labs now have greater sophistication and capability than the Pentagon’s top research labs did in the cold war. Vernor Vinge, the computer scientist turned award-winning novelist, agrees: “Historically, warfare has pushed technologies. We are in a situation now, if certain technologies become cheap enough, it’s not just countries that can do terrible things to millions of people, but criminal gangs can do terrible things to millions of people. What if for 50 dollars you buy something that could destroy everybody in a country? Then, basically, anybody who’s having a bad hair day is a threat to national survival.”
↑ comment by lukeprog · 2013-11-24T13:45:52.056Z · LW(p) · GW(p)
More (#5) from Wired for War:
The challenge for the United States is that stories like that of the Blues and Predator, where smart, innovative systems are designed at low costs, are all too rare. The U.S. military is by far the biggest designer and purchaser of weapons in the world. But it is also the most inefficient. As David Walker, the head of the Government Accountability Office (GAO), puts it, “We’re number 1 in the world in military capabilities. But on the business side, the Defense Department gets a D-minus, giving them the benefit of the doubt. If they were a business, they wouldn’t be in business.”
The Department of Justice once found that as much as 5 percent of the government’s annual budget is lost to old-fashioned fraud and theft, most of it in the defense realm. This is not helped by the fact that the Pentagon’s own rules and laws for how it should buy weapons are “routinely broken,” as one report in Defense News put it. One 2007 study of 131 Pentagon purchases found that 117 did not meet federal regulation standards. The Pentagon’s own inspector general also reported that not one person had been fired or otherwise held accountable for these violations.
...Whenever any new weapon is contemplated, the military often adds wave after wave of new requirements, gradually creeping the original concept outward. It builds in new design mandates, asks for various improvements and additions, forgetting that each new addition means another delay in delivery (and for robots, at least, forgetting that the systems were meant to be expendable). In turn, the makers are often only too happy to go along with what transforms into a process of gold-plating, as adding more bells, more whistles, and more design time means more money. These sorts of problems are rife in U.S. military robotics today. The MDARS (Mobile Detection Assessment Response System) is a golf-cart-sized robot that was planned as a cheap sentry at Pentagon warehouses and bases. It is now fifty times more expensive than originally projected. The air force’s unmanned bomber design is already projecting out at more than $2 billion a plane, roughly three times the original $737 million cost of the B-2 bomber it is to replace.
These costs weigh not just in dollars and cents. The more expensive the systems are, the fewer can be bought. The U.S. military becomes more heavily invested in those limited numbers of systems, and becomes less likely to change course and develop or buy alternative systems, even if they turn out to be better. The costs also change what doctrines can be used in battle, as the smaller number makes the military less likely to endanger systems in risky operations. Many worry this is defeating the whole purpose of unmanned systems. “We become prisoners of our very expensive purchases,” explains Ralph Peters. He worries that the United States might potentially lose some future war because of what he calls “quantitative incompetence.” Norm Augustine even jokes, all too seriously, that if the present trend continues, “In the year 2054, the entire defense budget will purchase just one tactical aircraft. This aircraft will have to be shared by the Air Force and Navy, three and one half days per week, except for the leap year, when it will be made available to the Marines for the extra day.”
↑ comment by lukeprog · 2013-11-24T13:43:25.517Z · LW(p) · GW(p)
More (#4) from Wired for War:
The force soon centered on a doctrine that would later be called the blitzkrieg, or “lightning war.” Tanks would be coordinated with air, artillery, and infantry units to create a concentrated force that could punch through enemy lines and spread shock and chaos, ultimately overwhelming the foe. This choice of doctrine influenced the Germans to build tanks that emphasized speed (German tanks were twice as fast) and reliability (the complicated French and British tanks often broke down), and that could communicate and coordinate with each other by radio. When Hitler later took power, he supported this mechanized way of warfare not only because it melded well with his vision of Nazism as the wave of the future, but also because he had a personal fear of horses.
When war returned to Europe, it seemed unlikely that the Germans would win. The French and the British had won the last war in the trenches, and seemed well prepared for this one with the newly constructed Maginot Line of fortifications. They also seemed better off with the new technologies as well. Indeed, the French alone had more tanks than the Germans (3,245 to 2,574). But the Germans chose the better doctrine, and they conquered all of France in just over forty days. In short, both sides had access to roughly the same technology, but made vastly different choices about how to use it, choices that shaped history.
And:
the [air] force will still sometimes put pilots’ career interests ahead of military efficiency, especially when those making the decisions are fighter jocks themselves. For example, many believe that the air force canceled its combat drone, Boeing’s X-45, before it could even be tested, in order to keep it from competing with its manned fighter jet of the future, the Joint Strike Fighter (JSF, a program now $38 billion over its original budget, and twenty-seven months past its schedule). One designer recalls, “The reason that was given was that we were expected to be simply too good in key areas and that we would have caused massive disruption to the efforts to ‘keep . . . JSF sold.’ If we had flown and things like survivability had been evenly assessed on a small scale and Congress had gotten ahold of the data, JSF would have been in serious trouble.”
Military cultural resistance also jibes with problems of technological “lock-in.” This is where change is resisted because of the costs sunk in the old technology, such as the large investment in infrastructure supporting it. Lock-in, for example, is why so many corporate and political interests are fighting the shift away from gas-guzzling cars.
This mix of organizational culture and past investment is why militaries will go to great lengths to keep their old systems relevant and old institutions intact. Cavalry forces were so desperate to keep horses relevant when machine guns and engines entered twentieth-century warfare that they even tried out “battle chariots,” which were basically machine guns mounted on the kind of chariots once used by ancient armies. Today’s equivalent is the development of a two-seat version of the Air Force’s F-22 Raptor (which costs some $360 million per plane, when you count the research and development). A sell of the idea described how the copilot is there to supervise an accompanying UAV that would be sent to strike guarded targets and engage enemy planes in any dogfights, as the drone could “perform high-speed aerobatics that would render a human pilot unconscious.” It’s an interesting concept, but it begs the question of what the human fighter pilot would do.
Akin to the baseball managers who couldn’t adapt to change like Billy Beane, such cultural resistance may prove another reason why the U.S. military could fall behind others in future wars, despite its massive investments in technologies. As General Eric Shinseki, the former U.S. Army chief of staff, once admonished his own service, “If you dislike change, you’re going to dislike irrelevance even more.” It is not a good sign then that the last time Shinseki made such a warning against the general opinion—that the invasion of Iraq would be costly—he was summarily fired by then secretary of defense Rumsfeld.
↑ comment by lukeprog · 2013-11-24T13:39:45.373Z · LW(p) · GW(p)
More (#3) from Wired for War:
Congress ordered the Pentagon to show a “preference for joint unmanned systems in acquisition programs for new systems, including a requirement under any such program for the development of a manned system for a certification that an unmanned system is incapable of meeting program requirements.” If the U.S. military was going to buy a new weapon, it would now have to justify why it was not a robotic one.
And:
In Steven Spielberg’s movie Minority Report, for instance, Tom Cruise wears gloves that turn his fingers into a virtual joystick/mouse, allowing him to call up and control data, including even video, without ever touching a computer. He literally can “point and click” in thin air. Colonel Bruce Sturk, who runs the high-tech battle lab at Langley Air Force Base, liked what he saw in the movie. “As a military person, I said, ‘My goodness, how great would it be if we had something similar to that?’ ” So the defense contractor Raytheon was hired to create a real version for the Pentagon. Bringing it full circle, the company then hired John Underkoffler, the technology guru who had first proposed the fictional idea to Spielberg. The result is the “G-Speak Gestural Technology System,” which lets users type and control images on a projected screen (including even a virtual computer keyboard projected in front of the user). Movie magic is made real via sensors inside the gloves and cameras that track the user’s hand movements.
And:
it is easy to see the attraction of building increasing levels of autonomy into military robots. The more autonomy a robot has, the less human operators have to support it. As one Pentagon report put it, “Having a dedicated operator for each robot will not pass the common sense test.” If robots don’t get higher on the autonomy scale, they don’t yield any cost or manpower savings. Moreover, it is incredibly difficult to operate a robot while trying to interpret and use the information it gathers. It can even get dangerous as it’s hard to operate a complex system while maintaining your own situational awareness in battle. The kid parallel would be like trying to play Madden football on a PlayStation in the middle of an actual game of dodgeball.
With the rise of more sophisticated sensors that better see the world, faster computers that can process information more quickly, and most important, GPS that can give a robot its location and destination instantaneously, higher levels of autonomy are becoming more attainable, as well as cheaper to build into robots. But each level of autonomy means more independence. It is a potential good in moving the human away from danger, but also raises the stakes of the robot’s decisions.
↑ comment by lukeprog · 2013-11-24T13:35:22.944Z · LW(p) · GW(p)
More (#2) from Wired for War:
Beyond just the factor of putting humans into dangerous environments, technology does not have the same limitations as the human body. For example, it used to be that when planes made high-speed turns or accelerations, the same gravitational pressures (g-forces) that knocked the human pilot out would also tear the plane apart. But now, as one study described of the F-16, the machines are pushing far ahead. “The airplane was too good. In fact, it was better than its pilots in one crucial way: It could maneuver so fast and hard that its pilots blacked out.”
If, as an official at DARPA observed, “the human is becoming the weakest link in defense systems,” unmanned systems offer a path around those limitations. They can fly faster and turn harder, without worrying about that squishy part in the middle. Looking forward, a robotics researcher notes that “the UCAV [the unmanned fighter jet] will totally trump the human pilot eventually, purely because of physics.” This may prove equally true at sea, and not just in underwater operations, where humans have to worry about small matters like breathing or suffering ruptured organs from water pressure. For example, small robotic boats (USV) have already operated in “sea state six.” This is when the ocean is so rough that waves are eighteen feet high or more, and human sailors would break their bones from all the tossing about.
Working at digital speed is another unmanned advantage that’s crucial in dangerous situations. Automobile crash avoidance technologies illustrate that a digital system can recognize a danger and react in about the same time that the human driver can only get to mid-curse word. Military analysts see the same thing happening in war, where bullets or even computer-guided missiles come in at Mach speed and defenses must be able to react against them even quicker. Humans can only react to incoming mortar rounds by taking cover at the last second, whereas “R2-D2,” the CRAM system in Baghdad, is able to shoot them down before they even arrive. Some think this is only the start. One army colonel says, “The trend towards the future will be robots reacting to robot attack, especially when operating at technologic speed. . . . As the loop gets shorter and shorter, there won’t be any time in it for humans.”
↑ comment by lukeprog · 2013-11-24T13:32:44.083Z · LW(p) · GW(p)
More (#1) from Wired for War:
nothing in this book is classified information. I only include what is available in the public domain. Of course, a few times in the course of the research I would ask some soldier or scientist about a secretive project or document and they would say, “How did you find out about that? I can’t even talk about it!” “Google” was all too often my answer, which says a lot both about security as well as what AI search programs bring to modern research.
And:
On August 12, 1944, the naval version of one of these planes, a converted B-24 bomber, was sent to take out a suspected Nazi V-3, an experimental 300-foot-long “supercannon” that supposedly could hit London from over 100 miles away (unbeknownst to the Allies, the cannon had already been knocked out of commission in a previous air raid). Before the plane even crossed the English Channel, the volatile Torpex exploded and killed the crew.
The pilot was Joseph Kennedy Jr., older brother of John Fitzgerald Kennedy, thirty-fifth president of the United States. The two had spent much of their youth competing for the attention of their father, the powerful businessman and politician Joseph Sr. While younger brother JFK was often sickly and decidedly bookish, firstborn son Joe Jr. had been the “chosen one” of the family. He was a natural-born athlete and leader, groomed from birth to become the very first Catholic president. Indeed, it is telling that in 1940, just before war broke out, JFK was auditing classes at Stanford Business School, while Joe Jr. was serving as a delegate to the Democratic National Convention. When the war started, Joe Jr. became a navy pilot, perhaps the most glamorous role at the time. John was initially rejected for service by the army because of his bad back. The navy relented and allowed John to join only after his father used his political influence.
When Joe Kennedy Jr. was killed in 1944, two things happened: the army ended the drone program for fear of angering the powerful Joe Sr. (setting the United States back for years in the use of remote systems), and the mantle of “chosen one” fell on JFK. When the congressional seat in Boston opened up in 1946, what had been planned for Joe Jr. was handed to JFK, who had instead been thinking of becoming a journalist. He would spend the rest of his days not only carrying the mantle of leadership, but also trying to live up to his dead brother’s carefree and playboy image.
↑ comment by lukeprog · 2014-06-24T23:55:32.006Z · LW(p) · GW(p)
From Osnos' Age of Ambition:
I lived in China for eight years, and I watched this age of ambition take shape. Above all, it is a time of plenty— the crest of a transformation one hundred times the scale, and ten times the speed, of the first Industrial Revolution, which created modern Britain. The Chinese people no longer want for food— the average citizen eats six times as much meat as in 1976— but this is a ravenous era of a different kind, a period when people have awoken with a hunger for new sensations, ideas, and respect. China is the world’s largest consumer of energy, movies, beer, and platinum; it is building more high-speed railroads and airports than the rest of the world combined.
And:
In the event that these censorship efforts failed, the Party was testing a weapon of last resort: the OFF switch. On July 5, 2009, members of China’s Muslim Uighur minority in the far western city of Urumqi protested police handling of a brawl between Hans and Uighurs. The protests turned violent, and nearly two hundred people died, most of them Han, who had been targeted for their ethnicity. Revenge attacks on Uighur neighborhoods followed, and in an effort to prevent people from communicating and organizing, the government abruptly disabled text messages, cut long-distance phone lines, and shut off Internet access almost entirely. The digital blackout lasted ten months, and the economic effects were dramatic: exports from Xinjiang, the Uighur autonomous region, plummeted more than 44 percent. But the Party was willing to accept immense economic damage to smother what it considered a political threat. In the event of a broader crisis someday, China probably has too many channels in and out to impose so complete a blackout on a national scale, but even a limited version would have a profound effect.
And:
On October 8, 2010, ten months after Liu Xiaobo was convicted, the Nobel Committee awarded him the Peace Prize “for his long and non-violent struggle for fundamental human rights.” He was the first Chinese citizen to receive the award, not counting the Dalai Lama, who had lived for decades in exile. The award to Liu Xiaobo drove Chinese leaders into a rage; the government denounced Liu’s award as a “desecration” of Alfred Nobel’s legacy. For years, China had coveted a Nobel Prize as a validation of the nation’s progress and a measure of the world’s acceptance. The obsession with the prize was so intense that scholars had named it the “Nobel complex,” and each fall they debated China’s odds of winning it, like sports fans in a pennant race. There was once a television debate called “How Far Are We from a Nobel Prize?”
When the award was announced, most Chinese people had never heard of Liu, so the state media made the first impression; it splashed an article across the country reporting that he earned his living “bad-mouthing his own country.” The profile was a classic of the form: it described him as a collector of fine wines and porcelain, and it portrayed him telling fellow prisoners, “I’m not like you. I don’t lack for money. Foreigners pay me every year, even when I’m in prison.” Liu “spared no effort in working for Western anti-China forces” and, in doing so, “crossed the line of freedom of speech into crime.”
For activists, the news of the award was staggering. “Many broke down in tears, even uncontrollable sobbing,” one said later. In Beijing, bloggers, lawyers, and scholars gathered in the back of a restaurant to celebrate, but police arrived and detained twenty of them. When the announcement was made, Han Han, on his blog, toyed with censors and readers; he posted nothing but a pair of quotation marks enclosing an empty space. The post drew one and a half million hits and more than 28,000 comments.
↑ comment by lukeprog · 2014-06-25T00:05:16.510Z · LW(p) · GW(p)
More (#2) from Osnos' Age of Ambition:
In all, authorities executed at least fourteen yuan billionaires in the span of eight years, on charges ranging from pyramid schemes to murder for hire. (Yuan Baojing, a former stockbroker who made three billion yuan before his fortieth birthday, was convicted of arranging the killing of a man who tried to blackmail him.) The annual rich list was nicknamed the “death list.”
And:
As the Party’s monopoly on information gave way, so did its moral credibility. For people such as the philosophy student Tang Jie, the pursuit of truth did not satisfy their skepticism; it led them to deeper questions about who they wanted to be and whom they wanted to believe. In the summer of 2012, people noticed that another search word had been blocked. The anniversary of the Tiananmen Square demonstrations had just passed, and people had been discussing it, in code, by calling it “the truth”— zhenxiang. The censors picked up on this, and when people searched Weibo for anything further, they began receiving a warning: “In accordance with relevant laws, regulations, and policies, search results for ‘the truth’ have not been displayed.”
↑ comment by lukeprog · 2014-06-25T00:02:03.511Z · LW(p) · GW(p)
More (#1) from Osnos' Age of Ambition:
One of the most common rackets was illegal subcontracting. A single contract could be divvied up and sold for kickbacks, then sold again and again, until it reached the bottom of a food chain of labor, where the workers were cheap and unskilled. Railway ministry jobs were bought and sold: $4,500 to be a train attendant, $15,000 to be a supervisor. In November 2011 a former cook with no engineering experience was found to be building a high-speed railway bridge using a crew of unskilled migrant laborers who substituted crushed stones for cement in the bridge’s foundation. In railway circles, the practice of substituting cheap materials for real ones was common enough to rate its own expression: touliang huanzhu—“robbing the beams to put in the pillars.”
With so many kickbacks changing hands, it wasn’t surprising that parts of the railway went wildly over budget. A station in Guangzhou slated to be built for $316 million ended up costing seven times that. The ministry was so large that bureaucrats would create fictional departments and run up expenses for them. A five-minute promotional video that went largely unseen cost nearly $3 million. The video led investigators to the ministry’s deputy propaganda chief, a woman whose home contained $1.5 million in cash and the deeds to nine houses.
Reporters who tried to expose the corruption in the railway world ran into dead ends. Two years before the crash, a journalist named Chen Jieren posted an article about problems in the ministry entitled, “Five Reasons That Liu Zhijun Should Take Blame and Resign,” but the piece was deleted from every major Web portal. Chen was later told that Liu oversaw a slush fund used for buying the loyalty of editors at major media and websites. Other government agencies also had serious financial problems— out of fifty, auditors found problems with forty-nine— but the scale of cash available in the railway world was in a class by itself. Liao Ran, an Asia specialist at Transparency International, told the International Herald Tribune that China’s high-speed railway was shaping up to be “the biggest single financial scandal not just in China, but perhaps in the world.”
And:
In February 2011, five months before the train crash, the Party finally moved on Liu Zhijun. According to Wang Mengshu, investigators concluded that Liu was preparing to use his illegal gains to bribe his way onto the Party Central Committee and, eventually, the Politburo. “He told Ding Shumiao, ‘Put together four hundred million for me. I’m going to need to spread some money around,’” Wang told me. Four hundred million yuan is about sixty-four million dollars. Liu managed to assemble nearly thirteen million yuan before he was stopped, Wang said. “The central government was worried that if he really succeeded in giving out four hundred million in bribes he would essentially have bought a government position. That’s why he was arrested.”
Liu was expelled from the Party the following May, for “severe violations of discipline” and “primary leadership responsibilities for the serious corruption problem within the railway system.” An account in the state press alleged that Liu took a 4 percent kickback on railway deals; another said he netted $152 million in bribes. He was the highest-ranking official to be arrested for corruption in five years. But it was Liu’s private life that caught people by surprise. The ministry accused him of “sexual misconduct,” and the Hong Kong newspaper Ming Pao reported that he had eighteen mistresses. His friend Ding was said to have helped him line up actresses from a television show in which she invested. Chinese officials are routinely discovered indulging in multiple sins of the flesh, prompting President Hu Jintao to give a speech a few years ago warning comrades against the “many temptations of power, wealth, and beautiful women.” But the image of a gallivanting Great Leap Liu, and the sheer logistics of keeping eighteen mistresses, made him into a punch line. When I asked Liu’s colleague if the mistress story was true, he replied, “What is your definition of a mistress?”
By the time the libidinous Liu was deposed, at least eight other senior officials had been removed and placed under investigation, including Zhang, Liu’s bombastic aide. Local media reported that Zhang, on an annual salary of less than five thousand dollars, had acquired a luxury home near Los Angeles, stirring speculation that he had been preparing to join the growing exodus of officials who were taking their fortunes abroad. In recent years, corrupt cadres who sent their families overseas had become known in Chinese as “naked officials.” In 2011 the central bank posted to the Web an internal report estimating that, since 1990, eighteen thousand corrupt officials had fled the country, having stolen $120 billion— a sum large enough to buy Disney or Amazon. (The report was promptly removed.)
And:
in China, people were more inclined to quote a very different statistic: in forty-seven years of service, high-speed trains in Japan had recorded just one fatality, a passenger caught in a closing door. It was becoming clear that parts of the new China had been built too fast for their own good. Three years had been set aside for construction of one of the longest bridges in North China, but it was finished in eighteen months, and nine months later, in August 2012, it collapsed, killing three people and injuring five. Local officials blamed overloaded trucks, though it was the sixth bridge collapse in a single year.
And:
After years of not daring to measure the Gini coefficient, in January 2013 the government finally published a figure, 0.47, but many specialists dismissed it; the economist Xu Xiaonian called it “a fairy tale.” (An independent calculation put the figure at 0.61, higher than the level in Zimbabwe.) Yet, for all the talk about income, it was becoming clear that people cared most of all about the gap in opportunity. When the Harvard sociologist Martin Whyte polled the Chinese public in 2009, he discovered that people had a surprisingly high tolerance for the rise of the plutocracy. What they resented were the obstacles that prevented them from joining it: weak courts, abuses of power, a lack of recourse. Two scholars, Yinqiang Zhang and Tor Eriksson, tracked the paths of Chinese families from 1989 to 2006 and found a “high degree of inequality of opportunity.” They wrote, “The basic idea behind the market reforms was that by enabling some citizens to become rich this would in turn help the rest to become rich as well. Our analysis shows that at least so far there are few traces of the reforms leveling the playing field.” They found that in other developing countries, parents’ education was the most decisive factor in determining how much a child would earn someday. But in China, the decisive factor was “parental connections.” A separate study of parents and children in Chinese cities found “a strikingly low level of intergenerational mobility.” Writing in 2010, the authors ranked “urban China among the least socially mobile places in the world.”
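(For reference on the statistic in that last excerpt: the Gini coefficient runs from 0, perfect equality, to 1, where one person holds everything. A minimal sketch of the computation, on made-up income lists rather than any of the figures above:)

```python
def gini(incomes):
    # Gini coefficient of a list of incomes, via the standard formula
    # G = 2 * sum(rank * x) / (n * sum(x)) - (n + 1) / n,
    # where the incomes are sorted ascending and rank runs from 1 to n.
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(round(gini([10, 10, 10, 10]), 2))  # 0.0  -- perfect equality
print(round(gini([1, 1, 1, 97]), 2))     # 0.72 -- one person has nearly everything
```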
↑ comment by lukeprog · 2014-05-31T18:27:27.015Z · LW(p) · GW(p)
From Soldiers of Reason:
The paper, called NSC-68, warned apocalyptically, in the spirit of Leites, about the "Kremlin's design for world domination" and the constant threat that presented to the United States...
The paper also warned that with the Soviet buildup of atomic capability, the Kremlin might very well stage a surprise attack, with 1954 as the year of maximum danger, unless the United States substantially and immediately increased its armed forces and civil defenses. NSC-68 was forwarded to President Truman around the time that Soviet-backed North Korea launched an invasion of South Korea, an American ally. Nitze's warning about Communist designs worked so well that President Truman adopted NSC-68 as the official policy of the land and increased the national defense budget by almost $40 billion.
↑ comment by lukeprog · 2014-05-31T18:54:54.525Z · LW(p) · GW(p)
More (#2) from Soldiers of Reason:
RAND analysts in the 1990s pointed out that terrorists until then had come in five different categories—revolutionaries, dissatisfied individuals, ethnic minorities, economically disadvantaged groups, and anarchists. They warned that henceforth the greatest danger would come from another group, religious extremists. Their next targets would be Western financial institutions like the World Bank, American and Western corporations, and other religions and their leaders, as prefigured by Mehmet Ali Aga's attempt on the life of Pope John Paul II. Above all, terrorists would concentrate on the symbolic value of their targets, for they would seek not military victory but the psychological defeat of their adversaries through fear. RAND warned that at some point in the future terrorists might resort to weapons of mass destruction, like nuclear, biological, or chemical weapons, especially state-sponsored terrorist groups such as Hezbollah, backed by Iran, and Hamas, sponsored by Syria.
And:
By 2006, following the U.S.- led invasions of Afghanistan and Iraq to smash al Qaeda and to remove terrorist-friendly regimes, Jenkins added a cautionary note: American military responses to terrorism might prove counterproductive. If the United States attempts to eliminate all terrorist groups and attacks all nation-states that host terrorists, the conflicts will only spread the terrorist seed around the globe, much like the mujahideen morphed and scattered after Afghanistan. Terrorism will be defeated by a combination of tactics and weapons, but, above all, by ideas. Armed force alone will not succeed; conviction and ideology will. Jenkins also urged that the drive to eliminate terrorism not trigger a change in American values. Counterterrorism will triumph if America preserves its traditional freedoms, abjuring torture, partisanship, and needless bravado. Should American democracy and the American Constitution be among the victims of terrorism, America's most potent weapons—its traditional freedoms—will be lost for the sake of a Pyrrhic victory. As Jenkins concludes, "Whatever we do must be consistent with our fundamental values. This is no mere matter of morality, it is a strategic calculation, and here we have at times mis-calculated."
↑ comment by lukeprog · 2014-05-31T18:51:46.685Z · LW(p) · GW(p)
More (#1) from Soldiers of Reason:
The prisoner's dilemma is not as arcane or trivial as it might appear, for it addresses the conflict between individual and collective rationality. What is in a player's best interest, and how do you know you have chosen correctly? When applied to societies, the prisoner's dilemma has profound implications that could well determine whether a nation chooses a path to armament, conflict, and war, or disarmament, cooperation, and peace. Witness the case of Oppenheimer, father of the nuclear bomb and head of the general advisory committee to the Atomic Energy Commission. He recommended to Secretary of State Acheson that the United States not develop the hydrogen bomb, so as to provide "limitations on the totality of war and thus eliminating the fear and raising the hope of mankind." In other words, the United States would tell Stalin we will not build it, so you don't have to either. To this idealistic argument, Acheson, ever the wary diplomat, replied, "How can you persuade a paranoid adversary to 'disarm by example'?" The Truman administration echoed Acheson's skepticism and ultimately, in 1950, approved the development of the H-bomb.
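(To make the dilemma in that excerpt concrete: with made-up payoffs, defection is each player's best response no matter what the other does, yet mutual defection leaves both players worse off than mutual cooperation. A minimal sketch:)

```python
# Classic prisoner's dilemma with illustrative payoffs (years in prison,
# written as negative numbers so that higher is better).
PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def best_response(opponent_move):
    # The row player's payoff-maximizing move against a fixed opponent move.
    return max(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

for opp in ("cooperate", "defect"):
    print(f"If the other player will {opp}, my best move is to {best_response(opp)}.")
# Both lines print "defect": individually rational play lands on (-2, -2),
# even though mutual cooperation at (-1, -1) would be better for both.
```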
And:
Jenkins pointed out that terrorists have a limited number of modes of attack. Their repertoire consists of six basic tactics: bombings, assassinations, armed assaults, kidnappings, barricade and hostage situations, and hijackings. More imitative than innovative, terrorists continue using a preferred mode until governments catch on and improve security measures. Thus, airplane hijackings and hostage taking were popular in the 1970s until hostage-rescue units were created and international treaties against hijackings were vigorously enforced. (Thus, New York's World Trade Center was attacked twice—in 1993 and 2001.)
By the mid-1980s, RAND analysts observed a very disturbing trend: terrorism was becoming bloodier. Whereas in 1968 the bombs of groups like the Croatian separatists were disarmed before they could injure anyone, by 1983 Hezbollah followers were ramming trucks full of explosives into U.S. Marine barracks in Lebanon, killing American servicemen by the score. This last incident brought attention to what would become the most worrying trend of all, suicide attacks by extremists in and from the Middle East.
According to RAND analysts, the first record of a suicide attack since ancient times occurred in May of 1972, when Japanese terrorists acting on behalf of Palestinian causes tossed a hand grenade into a group of Christian pilgrims at the airport in Lod, Israel. The attack claimed twenty-six victims but also exposed the terrorists to immediate retribution from security agents at the scene; two of the three terrorists were killed in what amounted to a suicide mission, similar to the "divine wind" or kamikaze immolations of World War II. RAND analysts believed this self-sacrifice shamed the Palestinians into similar action. If Japanese were willing to die for a foreign cause, Palestinians must demonstrate their readiness to sacrifice themselves for their own cause. The inevitable next step was the glorification of death in battle as the bloody gate to paradise.
This transformation in tactics gave terrorists unexpected results. By most accounts, after the bombing of the Beirut barracks, Reagan administration officials decided Lebanon was not worth the American funeral candles; the marines packed up and went home. American withdrawal from Lebanon and the Soviet defeat in Afghanistan, when conjoined to the rise in Muslim fundamentalism fueled by the financial support of Saudi Arabia, created a belief among terrorist groups that they had finally found a way to change the policies of Western powers.
↑ comment by lukeprog · 2014-05-29T04:40:31.202Z · LW(p) · GW(p)
From David and Goliath:
A regulation basketball court is ninety-four feet long. Most of the time, a team would defend only about twenty-four feet of that, conceding the other seventy feet. Occasionally teams played a full-court press—that is, they contested their opponent’s attempt to advance the ball up the court. But they did it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, Ranadivé thought, and that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that they were so good at?
Ranadivé looked at his girls. Morgan and Julia were serious basketball players. But Nicky, Angela, Dani, Holly, Annika, and his own daughter, Anjali, had never played the game before. They weren’t all that tall. They couldn’t shoot. They weren’t particularly adept at dribbling. They were not the sort who played pickup games at the playground every evening. Ranadivé lives in Menlo Park, in the heart of California’s Silicon Valley. His team was made up of, as Ranadivé put it, “little blond girls.” These were the daughters of nerds and computer programmers. They worked on science projects and read long and complicated books and dreamed about growing up to be marine biologists. Ranadivé knew that if they played the conventional way—if they let their opponents dribble the ball up the court without opposition—they would almost certainly lose to the girls for whom basketball was a passion. Ranadivé had come to America as a seventeen-year-old with fifty dollars in his pocket. He was not one to accept losing easily. His second principle, then, was that his team would play a real full-court press—every game, all the time. The team ended up at the national championships. “It was really random,” Anjali Ranadivé said. “I mean, my father had never played basketball before.”
And:
[Lawrence of Arabia's] masterstroke was an assault on the port town of Aqaba. The Turks expected an attack from British ships patrolling the waters of the Gulf of Aqaba to the west. Lawrence decided to attack from the east instead, coming at the city from the unprotected desert, and to do that, he led his men on an audacious, six-hundred-mile loop—up from the Hejaz, north into the Syrian desert, and then back down toward Aqaba. This was in summer, through some of the most inhospitable land in the Middle East, and Lawrence tacked on a side trip to the outskirts of Damascus in order to mislead the Turks about his intentions...
When they finally arrived at Aqaba, Lawrence’s band of several hundred warriors killed or captured twelve hundred Turks and lost only two men. The Turks simply had not thought that their opponent would be crazy enough to come at them from the desert.
↑ comment by lukeprog · 2014-05-29T04:55:25.059Z · LW(p) · GW(p)
More (#2) from David and Goliath:
The stranger Cohn had jumped into the cab with happened to be high up at one of Wall Street’s big brokerage firms. And just that week, the firm had opened a business buying and selling options.
“The guy was running the options business but did not know what an option was,” Cohn went on. He was laughing at the sheer audacity of it all. “I lied to him all the way to the airport. When he said, ‘Do you know what an option is?’ I said, ‘Of course I do, I know everything, I can do anything for you.’ Basically by the time we got out of the taxi, I had his number. He said, ‘Call me Monday.’ I called him Monday, flew back to New York Tuesday or Wednesday, had an interview, and started working the next Monday. In that period of time, I read McMillan’s Options as a Strategic Investment book. It’s like the Bible of options trading.”
It wasn’t easy, of course, since Cohn estimates that on a good day, it takes him six hours to read twenty-two pages. He buried himself in the book, working his way through one word at a time, repeating sentences until he was sure he understood them. When he started at work, he was ready. “I literally stood behind him and said, ‘Buy those, sell those, sell those,’” Cohn said. “I never owned up to him what I did. Or maybe he figured it out, but he didn’t care. I made him tons of money.”
...Today he is the president of Goldman Sachs.
And:
One of the best known case studies in criminology is about what happened in the fall of 1969 when the Montreal police went on strike for sixteen hours. Montreal was—and still is—a world-class city in a country that is considered one of the most law-abiding and stable in the world. So, what happened? Chaos. There were so many bank robberies that day—in broad daylight—that virtually every bank in the city had to close. Looters descended on downtown Montreal, smashing windows. Most shocking of all, a long-standing dispute between the city’s taxi drivers and a local car service called Murray Hill Limousine Service over the right to pick up passengers from the airport exploded into violence, as if the two sides were warring principalities in medieval Europe. The taxi drivers descended on Murray Hill with gasoline bombs. Murray Hill’s security guards opened fire. The taxi drivers then set a bus on fire and sent it crashing through the locked doors of the Murray Hill garage. This is Canada we’re talking about. As soon as the police returned to work, however, order was restored.
↑ comment by lukeprog · 2014-05-18T21:01:24.988Z · LW(p) · GW(p)
From Wade's A Troublesome Inheritance:
Former president Theodore Roosevelt wrote to Davenport in 1913, “We have no business to permit the perpetuation of citizens of the wrong type.” The eugenics program reached a pinnacle of acceptance when it received the imprimatur of the U.S. Supreme Court. The court was considering an appeal by Carrie Buck, a woman whom the State of Virginia wished to sterilize on the grounds that she, her mother and her daughter were mentally impaired.
In the 1927 case, known as Buck v. Bell, the Supreme Court found for the state, with only one dissent. Justice Oliver Wendell Holmes, writing for the majority, endorsed without reservation the eugenicists’ credo that the offspring of the mentally impaired were a menace to society. “It is better for the world,” he wrote, “if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes. Three generations of imbeciles are enough.”
Eugenics, having started out as a politically impractical proposal for encouraging matches among the well-bred, had now become an accepted political movement with grim consequences for the poor and defenseless.
The first of these were sterilization programs. At the urging of Davenport and his disciples, state legislatures passed programs for sterilizing the inmates of their prisons and mental asylums. A common criterion for sterilization was feeblemindedness, an ill-defined diagnostic category that was often identified by knowledge-based questions that put the ill educated at particular disadvantage.
...Up until 1928, fewer than 9,000 people had been sterilized in the United States, even though the eugenicists estimated that up to 400,000 citizens were “feeble minded.” After the Buck v. Bell decision, the floodgates opened. By 1930, 24 states had sterilization laws on their books, and by 1940, 35,878 Americans had been sterilized or castrated.
↑ comment by lukeprog · 2014-05-18T22:12:45.263Z · LW(p) · GW(p)
More (#2) from A Troublesome Inheritance:
China, though for different reasons, developed the same antipathy to modern science as did the Islamic world. One problem in China was the absence of any institutions independent of the emperor. There were no universities. Such academies as existed were essentially crammers for the imperial examination system. Independent thinkers were not encouraged. When Hung-wu, the first emperor of the Ming dynasty, decided that scholars had let things get out of hand, he ordered the death penalty for 68 degree holders and 2 students, and penal servitude for 70 degree holders and 12 students. The problem with Chinese science, Huff writes, was not that it was technically flawed, “but that Chinese authorities neither created or tolerated independent institutions of higher learning within which disinterested scholars could pursue their insights.”
China, unlike the Islamic world, did not ban printing presses, but the books they produced were only for the elite. Another impediment to independent thought was the stultifying education system, which consisted of rote memorization of the more than 500,000 characters that comprised the Confucian classics, and the ability to write a stylized commentary on them. The imperial examination system, which began in 124 BC, took its final form in 1368 AD and remained unchanged until 1905, deterring intellectual innovation for a further five centuries.
↑ comment by lukeprog · 2014-05-18T22:10:57.146Z · LW(p) · GW(p)
More (#1) from A Troublesome Inheritance:
Contrary to widespread belief that the 20th century was more violent than any other, Pinker establishes [in The Better Angels of Our Nature] that both personal violence and deaths in warfare have been in steady decline for as long as records can tell.
...Pinker agrees with Elias that the principal drivers of the civilizing process were the increasing monopoly of force by the state, which reduced the need for interpersonal violence, and the greater levels of interaction with others that were brought about by urbanization and commerce.
The next question of interest is whether the long behavioral shift toward more restrained behavior had a genetic basis. The gracilization of human skulls prior to 15,000 years ago almost certainly did, and Clark makes a strong case that the molding of the English population from rough peasants into industrious citizenry between 1200 and 1800 AD was a continuation of this evolutionary process. On the basis of Pinker’s vast compilation of evidence, natural selection seems to have acted incessantly to soften the human temperament, from the earliest times until the most recent date for which there is meaningful data.
This is the conclusion that Pinker signals strongly to his readers. He notes that mice can be bred to be more aggressive in just five generations, evidence that the reverse process could occur just as speedily. He describes the human genes, such as the violence-promoting MAO-A mutation mentioned in chapter 3, that could easily be modulated so as to reduce aggressiveness. He mentions that violence is quite heritable, on the evidence from studies of twins, and so must have a genetic basis. He states that “nothing rules out the possibility that human populations have undergone some degree of biological evolution in recent millennia, or even centuries, long after races, ethnic groups, and nations diverged.”
But at the last moment, Pinker veers away from the conclusion, which he has so strongly pointed to, that human populations have become less violent in the past few thousand years because of the continuation of the long evolutionary trend toward less violence. He mentions that evolutionary psychologists, of whom he is one, have always held that the human mind is adapted to the conditions of 10,000 years ago and hasn’t changed since.
But since many other traits have evolved more recently than that, why should human behavior be any exception? Well, says Pinker, it would be terribly inconvenient politically if this were so. “It could have the incendiary implication that aboriginal and immigrant populations are less biologically adapted to the demands of modern life than populations that have lived in literate state societies for millennia.”
Whether or not a thesis might be politically incendiary should have no bearing on the estimate of its scientific validity. That Pinker would raise this issue in a last minute diversion of a sustained scientific argument is an explicit acknowledgment to the reader of the political dangers that researchers, even ones of his stature and independence, would face in pursuing the truth too far.
Turning on a dime, Pinker then contends that there is no evidence that the decline in violence over the past 10,000 years is an evolutionary change. To reach this official conclusion, he is obliged to challenge Clark’s evidence that there was indeed such a change. But he does so with an array of arguments that seem less than decisive...
And:
Outside of Europe, the most promising new users of the telescope were in China, whose government had a keen interest in astronomy. Moreover, there was an unusual but vigorous mechanism for pumping the new European astronomical discoveries into China in the form of the Jesuit mission there. The Jesuits figured they had a better chance of converting the Chinese to Christianity if they could show that European astronomy provided more accurate calculations of the celestial events in which the Chinese were interested. Through the Jesuits’ efforts, the Chinese certainly knew of the telescope by 1626, and the emperor probably received a telescope from Cardinal Borromeo of Milan as early as 1618.
The Jesuits invested significant talent in their mission, which was founded by Matteo Ricci, a trained mathematician who also spoke Chinese. Ricci, who died in 1610, and his successors imported the latest European books on math and astronomy and diligently trained Chinese astronomers, who set about reforming the calendar. One of the Jesuits, Adam Schall von Bell, even became head of the Chinese Bureau of Mathematics and Astronomy.
The Jesuits and their Chinese followers several times arranged prediction challenges between themselves and Chinese astronomers following traditional methods, which the Jesuits always won. The Chinese knew, for instance, that there would be a solar eclipse on June 21, 1629, and the emperor asked both sides to submit the day before their predictions of its exact time and duration. The traditional astronomers predicted the eclipse would start at 10:30 AM and last for two hours. Instead it began at 11:30 AM and lasted two minutes, exactly as the Jesuits had calculated.
But these computational victories did not solve the Jesuits’ problem. The Chinese had little curiosity about astronomy itself. Rather, they were interested in divination, in forecasting propitious days for certain events, and astronomy was merely a means to this end. Thus the astronomical bureau was a small unit within the Ministry of Rites. The Jesuits doubted how far they should get into the business of astrological prediction, but their program of converting the Chinese through astronomical excellence compelled them to take the plunge anyway. This led them into confrontation with Chinese officials and to being denounced as foreigners who were interfering in Chinese affairs. In 1661, Schall and the other Jesuits were bound with thick iron chains and thrown into jail. Schall was sentenced to be executed by dismemberment, and only an earthquake that occurred the next day prompted his release.
The puzzle is that throughout this period the Chinese made no improvements on the telescope. Nor did they show any sustained interest in the ferment of European ideas about the theoretical structure of the universe, despite being plied by the Jesuits with the latest European research. Chinese astronomers had behind them a centuries-old tradition of astronomical observation. But it was embedded in a Chinese cosmological system that they were reluctant to abandon. Their latent xenophobia also supported resistance to new ideas. “It is better to have no good astronomy than to have Westerners in China,” wrote the anti-Christian scholar Yang Guangxian.
↑ comment by lukeprog · 2014-05-12T10:55:53.863Z · LW(p) · GW(p)
From Moral Mazes:
Consider, for instance, the case of a large coking plant of the chemical company. Coke making requires a gigantic battery to cook the coke slowly and evenly for long periods; the battery is the most important piece of capital equipment in a coking plant. In 1975, the plant's battery showed signs of weakening and certain managers at corporate headquarters had to decide whether to invest $6 million to restore the battery to top form...
No decision was made. The CEO had sent the word out to defer all unnecessary capital expenditures to give the corporation cash reserves for other investments. So the managers allocated small amounts of money to patch the battery up until 1979, when it collapsed entirely. This brought the company into a breach of contract with a steel producer and into violation of various... EPA pollution regulations. The total bill, including lawsuits and now federally mandated repairs to the battery, exceeded $100 million...
This simple but very typical example gets to the heart of how decision making is intertwined with a company's authority structure and advancement patterns. As Alchemy managers see it, the decisions facing them in 1975 and 1979 were crucially different. Had they acted decisively in 1975 — in hindsight, the only substantively rational course — they would have salvaged the battery and saved their corporation millions of dollars in the long run.
In the short run, however, since even seemingly rational decisions are subject to widely varying interpretations, particularly decisions that run counter to a CEO's stated objectives, they would have been taking serious personal risks in restoring the battery. What is more, their political networks might have unraveled, leaving them vulnerable to attack. They chose short-term safety over long-term gain.
And:
...according to an environmental manager searching for the information required to comply with the Superfund legislation on toxic waste disposal, the whole archives of the Covenant Corporation in 1981 consisted of five or six cardboard boxes of materials. His search for chemical waste sites formerly used or operated by Alchemy Inc. revealed the names of 150 such locations, but no further information. For one 29-year period, there was only one document giving any details about the history of the company.
And:
Although managers see few defenses against being caught in the wrong place at the wrong time except constant wariness and perhaps being shrewd enough to declare the ineptitude of one’s predecessor on first taking a job, they do see safeguards against suffering the consequences of their own errors. Most important, they can “outrun their mistakes” so that when blame-time arrives, the burden will fall on someone else. At the institutional level, the absence of any system for tracking responsibility here becomes crucial. A lawyer explains how this works in the sprawling bureaucracy of Covenant Corporation:
"I look at it this way. See, in a big bureaucracy like this, very few individual people can really change anything. It’s like a big ant colony. I really believe that if most people didn’t come to work, it wouldn’t matter. You could come in one day a week and accomplish the absolutely necessary work. But the whole colony has significance; it’s just the individual that doesn’t count. Somewhere though some actions have to have significance. Now you see this at work with mistakes. You can make mistakes in the work you do and not suffer any consequences. For instance, I could negotiate a contract that might have a phrase that would trigger considerable harm to the company in the event of the occurrence of some set of circumstances. The chances are that no one would ever know. But if something did happen and the company got into trouble, and I had moved on from that job to another, it would never be traced to me. The problem would be that of the guy who presently has responsibility. And it would be his headache. There’s no tracking system in the corporation."
↑ comment by lukeprog · 2014-05-12T10:41:46.709Z · LW(p) · GW(p)
From Lewis' The New New Thing:
Back in 1921 Veblen had predicted that engineers would one day rule the U.S. economy. He argued that since the economy was premised on technology and the engineers were the only ones who actually understood how the technology worked, they would inevitably use their superior knowledge to seize power from the financiers and captains of industry who wound up on top at the end of the first round of the Industrial Revolution. After all, the engineers only needed to refuse to fix anything, and modern industry would grind to a halt. Veblen rejoiced at this prospect. He didn't much care for financiers and captains. He thought they were parasites.
And:
After the retreat Ed McCracken quickly set about making his company less like Jim Clark. This is just how it always went with one of these new Silicon Valley hardware companies: once it showed promise, it ditched its visionary founder, who everyone deep down thought was a psycho anyway, and became a sane, ordinary place. With the support of Glenn Mueller and the other venture capitalists on the board of directors, McCracken brought in layer upon layer of people more like him: indirect, managerial, diplomatic, politically minded. These people could never build the machines of the future, but they could sell the machines of the present. And they did this very well. For the next six years Silicon Graphics was perhaps the most successful company in Silicon Valley. The stock rose from three dollars a share to more than thirty dollars a share. The company grew from two hundred employees to more than six thousand. The annual revenues swelled from a few million to billions.
↑ comment by lukeprog · 2014-05-12T10:26:40.369Z · LW(p) · GW(p)
From Dartnell's The Knowledge:
many inventions seem obvious in retrospect, but sometimes the time of emergence of a key advance or invention doesn’t appear to have followed any particular scientific discovery or enabling technology... The wheelbarrow, for instance, could have occurred centuries before it actually did — if only someone had thought of it. This may seem like a trivial example, combining the operating principles of the wheel and the lever, but it represents an enormous labor saver, and it didn’t appear in Europe until millennia after the wheel (the first depiction of a wheelbarrow appears in an English manuscript written about 1250 AD).
And:
perhaps the most impressive feat of leapfrogging in history was achieved by Japan in the nineteenth century. During the Tokugawa shogunate, Japan isolated itself for two centuries from the rest of the world, forbidding its citizens to leave or foreigners to enter, and permitting only minimal trade with a select few nations. Contact was reestablished in the most persuasive manner in 1853 when the US Navy arrived in the Bay of Edo (Tokyo) with powerfully weaponized steam-powered warships, far superior to anything possessed by the technologically stagnant Japanese civilization. The shock of realization of this technological disparity triggered the Meiji Restoration. Japan’s previously isolated, technologically backward feudal society was transformed by a series of political, economic, and legal reforms, and foreign experts in science, engineering, and education instructed the nation how to build telegraph and railroad networks, textile mills and factories. Japan industrialized in a matter of decades, and by the time of the Second World War was able to take on the might of the US Navy that had forced this process in the first place.
And:
In our history, both compressor and absorption designs for refrigeration were being developed around the same time, but it is the compressor variety that achieved commercial success and now dominates. This is largely due to encouragement by nascent electricity companies keen to ensure growth in demand for their product. Thus the widespread absence of absorber refrigerators today (except for gas-fueled designs for recreation vehicles, where the ability to run without an electrical supply is paramount) is not due to any intrinsic inferiority of the design itself, but far more due to contingencies of social or economic factors. The only products that become available are those the manufacturer believes can be sold at the highest profit margin, and much of that depends on the infrastructure that already happens to be in place. So the reason that the fridge in your kitchen hums— uses an electric compressor rather than a silent absorption design— has less to do with the technological superiority of that mechanism than with quirks of the socioeconomic environment in the early 1900s, when the solution became “locked in.” A recovering post-apocalyptic society may well take a different trajectory in its development.
And:
Whether your garments are stitched from leather or woven fabric, the next problem is how to attach them securely to your body. Disregarding zippers and velcro as too complex to be fabricated by a rebooting civilization, you’re low on options for easily reversible fastenings. The best low-tech solution never occurred to any of the ancient or classical civilizations, yet is now so ubiquitous it has become seemingly invisible. Astoundingly, the humble button didn’t become common in Europe until the mid-1300s. Indeed, it never was developed by Eastern cultures, and the Japanese were absolutely delighted when they first saw buttons sported by Portuguese traders. Despite the simplicity of its design, the new capability offered by the button is transformative. With an easily manufactured and readily reversible fastening, clothes do not need to be loose-fitting and formless so they can be pulled over the top of your head. Instead, they can be put on and then buttoned up at the front, and can be designed to be snugly fitted and comfortable: a true revolution in fashion.
↑ comment by lukeprog · 2014-04-13T00:46:49.006Z · LW(p) · GW(p)
From Ayres' Super Crunchers, speaking of Epagogix, which uses neural nets to predict a movie's box office performance from its screenplay:
Some studios are utterly closed-minded to the idea that statistics could help them decide whether to greenlight a project. Copaken tells the extraordinary story of bringing two hedge fund managers to meet with a studio head. "These hedge fund guys had raised billions of dollars," Copaken explained, "and they were prepared to start with $500 million to fund films that would pass muster by our [neural net] test and be optimized for box office... [But] there was a lot of resistance to this new way of thinking... and finally one of these hedge fund guys sort of jumped into the discussion and said, 'Well, let me ask you a question. If Dick's system here gets it right fifty times out of fifty times, are you telling me that you wouldn't take that into account to change the way you decide which movies to make or how to make them?' And the guys said, 'No, that's absolutely right. We would not even if he were right fifty times out of fifty times... So what if we are leaving a billion dollars of the shareholder's money on the table; that is shareholders' money... Whereas if we change the way we do this, we might antagonize various people. We might not be invited. Our wives wouldn't be invited to the parties. People would get pissed at us. So why mess with a good thing?'"
Copaken was completely depressed when he walked out of the meeting, but when he looked over he noticed that the hedge fund guys were grinning from ear to ear. He asked them why they were so happy. They told him, "You don't understand, Dick. We make our fortunes by identifying small imperfections in the marketplace. They are usually tiny and they are usually fleeting and they are immediately filled by the efficiency of the marketplace. But if we can discover these things... we end up making lots of money before the efficiency of the marketplace closes out that opportunity. What you just showed us here in Hollywood is a ten-lane paved highway of opportunity. It's like they are committed to doing things the wrong way..."
↑ comment by lukeprog · 2014-04-13T01:01:49.214Z · LW(p) · GW(p)
More (#1) from Super Crunchers:
...the Office of Education and the Office of Economic Opportunity sought to determine what types of education models could best break this cycle of failure. The result was Project Follow Through, an ambitious effort that studied 79,000 children in 180 low-income communities for twenty years at a price tag of more than $600 million... At the time it was the largest education study ever done. Project Follow Through looked at the impact of seventeen different teaching methods, ranging from models like DI [direct instruction], where lesson plans are carefully scripted, to unstructured models where students themselves direct their learning by selecting what and how they will study... Project Follow Through's designers wanted to know which model performed the best, not only in developing skills in its area of emphasis, but also across the board.
Direct Instruction won hands down. Education writer Richard Nadler summed it up this way: "When the testing was over, students in DI classrooms had placed first in reading, first in math, first in spelling, and first in language. No other model came close." And DI's dominance wasn't just in basic skill acquisition. DI students could also more easily answer questions that required higher-order thinking... DI even did better in promoting students' self-esteem than several child-centered approaches...
More recent studies by both the American Federation of Teachers and the American Institutes for Research reviewed data on two dozen "whole school" reforms and found once again that the Direct Instruction model had the strongest empirical support.
And:
The news media almost completely ignored the point that Summers was just talking about a difference in variability. It's not nearly as sexy as reporting "Harvard President Says Women Are Innately Deficient in Mathematics." (They might as easily have reported that Summers was claiming that women are innately superior in mathematics, since they are less likely to be really bad in math.) Many reporters simply didn't understand the point or couldn't figure out a way to communicate it to a general audience... At least in small part, Summers may have lost his job because people don't understand standard deviations.
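For readers unfamiliar with the variability point, here is a minimal sketch, using made-up numbers rather than Summers's actual figures, of how two normal distributions with identical means but slightly different standard deviations diverge sharply at both extremes:

```python
# A minimal sketch (illustrative numbers, not Summers's actual figures) of how
# two normal distributions with the same mean but different standard deviations
# yield very different proportions in both tails.
import math

def upper_tail(cutoff, mean, sd):
    """P(X > cutoff) for a normal distribution with the given mean and sd."""
    return 0.5 * math.erfc((cutoff - mean) / (sd * math.sqrt(2)))

mean = 100
sd_more, sd_less = 15, 13   # hypothetical: one group slightly more variable

for cutoff in (145, 55):
    if cutoff > mean:
        p_more = upper_tail(cutoff, mean, sd_more)
        p_less = upper_tail(cutoff, mean, sd_less)
        label = f"above {cutoff}"
    else:
        # by symmetry, the lower tail mirrors the upper tail
        p_more = upper_tail(2 * mean - cutoff, mean, sd_more)
        p_less = upper_tail(2 * mean - cutoff, mean, sd_less)
        label = f"below {cutoff}"
    print(f"{label}: the more-variable group is {p_more / p_less:.1f}x as common")
```

With these hypothetical numbers the more-variable group is several times as common three standard deviations out in either direction, despite identical averages, which is exactly the point about variability that got lost in the coverage.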
And:
I remember when my partner, Jennifer, and I were expecting for the first time — back in 1994. Back then, women were told the probability of Down syndrome based on their age. After sixteen weeks, the mother could have a blood test for measuring her alphafetoprotein (AFP) level, and then they'd give you another probability. I remember asking the doctor if they had a way of combining the different probabilities. He told me flat out, "That's impossible. You just can't combine probabilities like that."
I bit my tongue, but I knew he was dead wrong. It is possible to combine different pieces of evidence, and has been since 1763 when a short essay by the Reverend Thomas Bayes was posthumously published...
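The combination the doctor called impossible is a one-line Bayesian update. Here is a minimal sketch with hypothetical numbers; the age-based prior and the likelihood ratio for the AFP result are illustrative, not real clinical figures:

```python
# A minimal sketch of combining a prior risk with a test result via Bayes'
# rule. All numbers are hypothetical, not real clinical figures.

def combine(prior_prob, likelihood_ratio):
    """Convert the prior to odds, apply the likelihood ratio, convert back."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 1 / 400        # hypothetical age-based risk
lr_afp_result = 5.0    # hypothetical: this AFP level is 5x likelier if affected

print(f"Prior risk:    {prior:.4f}")
print(f"Combined risk: {combine(prior, lr_afp_result):.4f}")
```

With these numbers, a 1-in-400 prior and a test result five times more likely in affected pregnancies combine to roughly a 1-in-80 risk.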
↑ comment by lukeprog · 2014-03-25T20:51:01.245Z · LW(p) · GW(p)
From Isaacson's Steve Jobs:
Even though they were not fervent about their faith, Jobs’s parents wanted him to have a religious upbringing, so they took him to the Lutheran church most Sundays. That came to an end when he was thirteen. In July 1968 Life magazine published a shocking cover showing a pair of starving children in Biafra. Jobs took it to Sunday school and confronted the church’s pastor. “If I raise my finger, will God know which one I’m going to raise even before I do it?”
The pastor answered, “Yes, God knows everything.”
Jobs then pulled out the Life cover and asked, “Well, does God know about this and what’s going to happen to those children?”
“Steve, I know you don’t understand, but yes, God knows about that.”
Jobs announced that he didn’t want to have anything to do with worshipping such a God, and he never went back to church.
And:
[During his Atari days and long after,] Jobs clung to the belief that his fruit-heavy vegetarian diet would prevent not just mucus but also body odor, even if he didn’t use deodorant or shower regularly. It was a flawed theory.
And:
Even after Wozniak became convinced that his new computer design should become the property of the Apple partnership, he felt that he had to offer it first to HP, since he was working there. “I believed it was my duty to tell HP about what I had designed while working for them. That was the right thing and the ethical thing.” So he demonstrated it to his managers in the spring of 1976. The senior executive at the meeting was impressed, and seemed torn, but he finally said it was not something that HP could develop. It was a hobbyist product, at least for now, and didn’t fit into the company’s high-quality market segments. “I was disappointed,” Wozniak recalled, “but now I was free to enter into the Apple partnership.”
On April 1, 1976, Jobs and Wozniak went to Wayne’s apartment in Mountain View to draw up the partnership agreement...
...Wayne then got cold feet. As Jobs started planning to borrow and spend more money, he recalled the failure of his own company. He didn’t want to go through that again. Jobs and Wozniak had no personal assets, but Wayne (who worried about a global financial Armageddon) kept gold coins hidden in his mattress. Because they had structured Apple as a simple partnership rather than a corporation, the partners would be personally liable for the debts, and Wayne was afraid potential creditors would go after him. So he returned to the Santa Clara County office just eleven days later with a “statement of withdrawal” and an amendment to the partnership agreement. “By virtue of a re-assessment of understandings by and between all parties,” it began, “Wayne shall hereinafter cease to function in the status of ‘Partner.’” It noted that in payment for his 10% of the company, he received $800, and shortly afterward $1,500 more.
Had he stayed on and kept his 10% stake, at the end of 2010 it would have been worth approximately $2.6 billion. Instead he was then living alone in a small home in Pahrump, Nevada, where he played the penny slot machines and lived off his social security check.
And:
Now it was necessary to convince Wozniak to come on board full-time. “Why can’t I keep doing this on the side and just have HP as my secure job for life?” he asked. Markkula said that wouldn’t work, and he gave Wozniak a deadline of a few days to decide. “I felt very insecure in starting a company where I would be expected to push people around and control what they did,” Wozniak recalled. “I’d decided long ago that I would never become someone authoritative.” So he went to Markkula’s cabana and announced that he was not leaving HP.
Markkula shrugged and said okay. But Jobs got very upset. He cajoled Wozniak; he got friends to try to convince him; he cried, yelled, and threw a couple of fits. He even went to Wozniak’s parents’ house, burst into tears, and asked Jerry for help. By this point Wozniak’s father had realized there was real money to be made by capitalizing on the Apple II, and he joined forces on Jobs’s behalf. “I started getting phone calls at work and home from my dad, my mom, my brother, and various friends,” Wozniak recalled. “Every one of them told me I’d made the wrong decision.” None of that worked. Then Allen Baum, their Buck Fry Club mate at Homestead High, called. “You really ought to go ahead and do it,” he said. He argued that if he joined Apple full-time, he would not have to go into management or give up being an engineer. “That was exactly what I needed to hear,” Wozniak later said. “I could stay at the bottom of the organization chart, as an engineer.” He called Jobs and declared that he was now ready to come on board.
↑ comment by lukeprog · 2014-03-25T20:54:48.450Z · LW(p) · GW(p)
More (#1) from Steve Jobs:
One of Atkinson’s amazing feats (which we are so accustomed to nowadays that we rarely marvel at it) was to allow the windows on a screen to overlap so that the “top” one clipped into the ones “below” it. Atkinson made it possible to move these windows around, just like shuffling papers on a desk, with those below becoming visible or hidden as you moved the top ones. Of course, on a computer screen there are no layers of pixels underneath the pixels that you see, so there are no windows actually lurking underneath the ones that appear to be on top. To create the illusion of overlapping windows requires complex coding that involves what are called “regions.” Atkinson pushed himself to make this trick work because he thought he had seen this capability during his visit to Xerox PARC. In fact the folks at PARC had never accomplished it, and they later told him they were amazed that he had done so. “I got a feeling for the empowering aspect of naïveté,” Atkinson said. “Because I didn’t know it couldn’t be done, I was enabled to do it.” He was working so hard that one morning, in a daze, he drove his Corvette into a parked truck and nearly killed himself. Jobs immediately drove to the hospital to see him. “We were pretty worried about you,” he said when Atkinson regained consciousness. Atkinson gave him a pained smile and replied, “Don’t worry, I still remember regions.”
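The excerpt doesn't explain what "regions" actually were, and the sketch below bears no relation to QuickDraw's real implementation; it only illustrates the basic idea that the visible part of a lower window can be computed by subtracting the rectangle of the window above it:

```python
# A minimal sketch of the idea behind overlapping windows: the visible area of
# a lower window is its rectangle minus the rectangle of the window on top.
# Rectangles are (left, top, right, bottom); purely illustrative, not how
# QuickDraw's regions were actually implemented.

def subtract(lower, upper):
    """Return the parts of `lower` not covered by `upper`, as a list of rects."""
    l1, t1, r1, b1 = lower
    l2, t2, r2, b2 = upper
    # no overlap: the lower window is entirely visible
    if r2 <= l1 or l2 >= r1 or b2 <= t1 or t2 >= b1:
        return [lower]
    pieces = []
    if t2 > t1:                       # strip above the overlap
        pieces.append((l1, t1, r1, t2))
    if b2 < b1:                       # strip below the overlap
        pieces.append((l1, b2, r1, b1))
    mid_top, mid_bottom = max(t1, t2), min(b1, b2)
    if l2 > l1:                       # strip left of the overlap
        pieces.append((l1, mid_top, l2, mid_bottom))
    if r2 < r1:                       # strip right of the overlap
        pieces.append((r2, mid_top, r1, mid_bottom))
    return pieces

back_window = (0, 0, 300, 200)
front_window = (100, 50, 400, 250)
print(subtract(back_window, front_window))
# -> [(0, 0, 300, 50), (0, 50, 100, 200)]
```

Repeating this subtraction down the window stack gives each window its visible region, which is roughly the capability Atkinson assumed PARC had already built.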
And:
[no more clips, because Audible somehow lost all my bookmarks for the last two parts of the audiobook!]
↑ comment by lukeprog · 2014-03-05T05:04:24.409Z · LW(p) · GW(p)
From Feinstein's The Shadow World:
The £75m Airbus, painted in the colours of the Prince’s beloved Dallas Cowboys, was a gift from the British arms company BAE Systems. It was a token of gratitude for the Prince’s role, as son of the country’s Defence Minister, in the biggest arms deal the world has seen. The Al Yamamah – ‘the dove’ – deal signed between the United Kingdom and Saudi Arabia in 1985 was worth over £40bn. It was also arguably the most corrupt transaction in trading history. Over £1bn was paid into accounts controlled by Bandar. The Airbus – maintained and operated by BAE at least until 2007 – was a little extra, presented to Bandar on his birthday in 1988.
A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.
↑ comment by lukeprog · 2014-03-05T20:37:42.087Z · LW(p) · GW(p)
More (#8) from The Shadow World:
After numerous stories of British arms being used to repress Arab Spring revolutions the UK government suspended arms exports to several countries. These moves, while very late, considering the weapons were already in the hands of repressive governments, were welcome. However, despite de facto arms embargoes on states such as Bahrain, Saudi Arabia was conspicuously not embargoed, in spite of its National Guard’s intervention in Bahrain, which also utilized BAE Tactica armoured vehicles.
And:
It was the criminal corruption and negligence of the Albanian government, the systemic incompetence of the US Department of Defense procurement process, and the naked greed of the American and global arms trade that caused the deaths of Erison Durdaj and twenty-five other entirely innocent people. There has been no attempt by the US to assist the people of Gerdec to gain justice. No one in the US government has been charged in the case, ‘even though’ as Rolling Stone magazine has suggested, ‘officials in both the Pentagon and the State Department knew that AEY was shipping Chinese-made ammunition to Afghanistan. The Bush administration’s push to outsource its wars had sent companies like AEY into the world of illegal arms dealing – but when things turned nasty, the government reacted with righteous indignation.’
No legal action has been permitted in Albania against any of the senior politicians involved in the events that led to the deaths of the villagers.
And:
On conclusion of this book I set about passing on the hundreds of thousands of pages of documents, archives and other source materials I have collected over the past ten years on the arms trade, to the relevant investigative and prosecuting authorities around the world. I don’t hold out much hope that they will be acted on, having witnessed, at first hand, the South African arms deal investigation stymied, the SFO capitulate on BAE, and the closure of investigations into the illicit trade in Italy, Sweden, Germany, India and Albania. Israel, Angola, Russia and China barely, if ever, investigate arms trade corruption.
The arms industry receives unique treatment from government. Many companies were, and some still are, state-owned. Even those that have been privatized continue to be treated, in many ways, as if they were still in the public fold. Physical access to and enormous influence on departments of defence is commonplace. Government officials and ministers act as salespeople for private arms contractors as enthusiastically as they do for state-owned entities. Partly this is because they are seen as contributing to national security and foreign policy, as well as often playing substantial roles in the national economy. In many, if not all, countries of the world, arms companies and dealers play an important role in intelligence gathering and are involved in ‘black’ or secret operations.
The constant movement of staff between government, arms companies, the intelligence agencies and lobbying firms the world over only entrenches this special treatment. As do the contributions of money and support to political parties in both selling and purchasing countries. This also results in the companies and individuals in this industry exercising a disproportionate and usually bellicose influence on all manner of policymaking, be it on economic, foreign or national security issues.
It is for these reasons that arms companies and individuals involved in the trade very seldom face justice, even for transgressions that are wholly unrelated to their strategic contributions to the state. Political interventions, often justified in the name of national security, ensure that the arms trade operates in its own privileged shadow world, largely immune to the legal and economic vagaries experienced by other companies. Even when a brave prosecutor attempts to investigate and bring charges against an arms company or dealer, the matter is invariably settled with little or no public disclosure and seldom any admission of wrongdoing. And the investigator, whistle-blower or prosecutor inevitably finds their career prospects significantly diminished.
↑ comment by lukeprog · 2014-03-05T20:29:09.093Z · LW(p) · GW(p)
More (#7) from The Shadow World:
Meaningful Congressional oversight of the Defense Department and defence contractors is severely undermined by the combination of cronyism, executive pressure on foreign purchasers, the revolving door and elected representatives’ desperate desire for defence companies in their states. In addition, national security is invoked to limit public scrutiny of the relationship between government and the arms industry. The result is an almost total loss of accountability for public money spent on military projects of any sort. As Insight magazine has reported, in 2001 the Deputy Inspector General at the Pentagon ‘admitted that $4.4 trillion in adjustments to the Pentagon’s books had to be cooked to compile required financial statements and that $1.1 trillion was simply gone and no one can be sure of when, where and to whom the money went.’ This exceeds the total amount of money raised in tax revenue in the US for that year.
Remarkably, the Pentagon hasn’t been audited for over twenty years and recently announced that it hopes to be audit-ready by 2017, a claim that a bipartisan group of Senators thought unlikely.
And:
A study by government auditors in 2008 found that dozens of the Pentagon’s weapons systems are billions of dollars over budget and years behind schedule. In fact ninety-five systems have exceeded their budgets by a total of $259bn and are delivered on average two years late.
A defence industry insider with close links to the Pentagon put it to me that ‘the procurement system in the US is a fucking joke. Every administration says we need procurement reform and it never happens.’ Robert Gates on his reappointment as Secretary of Defense stated to Congress: ‘We need to take a very hard look at the way we go about acquisition and procurement.’ However, this is the same official who in June 2008 endorsed a Bush administration proposal to develop a treaty with the UK and Australia that would allow unlicensed trade in arms and services between the US and these countries. The proposal is procedurally scandalous and would lead to even less oversight but has generated little media coverage. In September 2010, with Robert Gates in office, the agreement was passed.
And:
While there has been an increase in prosecutions of individuals – in 2009 there were three trials of four individuals in FCPA cases, equalling the number of trials in the preceding seven years – this still unimpressive figure does not include anyone from the large defence companies, suggesting that bribery and corruption are still more tolerated when it comes to the commanding heights of the weapons business.
The closest a company has come to debarment was the temporary suspension of BAE’s US export privileges while the State Department considered the matter. Specific measures seem to be taken to avoid applying debarment rules to major arms companies, in particular by charging companies with non-FCPA charges as in the case of BAE. A legislative effort was undertaken to debar Blackwater (Xe) from government contracts due to its FCPA violations. Legislation was introduced in May 2010 to debar any company that violates the FCPA, though with a waiver system in place that would require any federal agency to justify the use of a debarred company in a report submitted to Congress.
The mutual dependence between the government, Congress and defence companies means that, in practice, even serial corrupters are ‘too important’ to fail. For example, the US could not practically debar KBR, a company to which it has outsourced billions of dollars of its military functions. Similarly, debarring BAE would threaten its work on new arms projects and the maintenance of BAE products that the US military already uses.
And:
Chuck Spinney probably echoes the views of many who are critical of the MICC, on the subject of Obama: ‘I have been very disappointed. He has been a total disappointment on defense. He is continuing his predecessor’s war-centric foreign policy... he is continuing the establishment’s business-as-usual practices including the grotesque diversion of scarce resources to a bloated defense budget that is leading the United States into ruin.’
I asked Chuck what he thought of the Pentagon now. ‘It’s worse now. Things are worse today than they’ve ever been.’
And:
In March 2009, the International Criminal Court (ICC) issued an arrest warrant against President Omar al-Bashir for crimes against humanity and war crimes. In mid-2010, after initially rejecting the request, the ICC added three counts of genocide to al-Bashir’s list of charges stating that there ‘are reasonable grounds to believe [al-Bashir is] responsible’ for orchestrating a wave of rapes, murders and torture. Al-Bashir, despite travelling widely in the region, has not been arrested as the ICC has no independent enforcement mechanism, relying instead on the prerogative of member states, who baulk at the diplomatic fallout of such an action. In August 2010, al-Bashir controversially attended the signing of the new Kenyan constitution. Kenya, as a signatory to the ICC, was obliged to arrest al-Bashir. Nothing was done.
↑ comment by lukeprog · 2014-03-05T20:18:45.982Z · LW(p) · GW(p)
More (#6) from The Shadow World:
Significant amounts of money continue to be made available to countries buying weapons from the US. So, in addition to the record levels of defence spending and foreign military cooperation funding (that is often used to buy US weapons and totalled around $5bn in 2003), the State Department and Pentagon spend an average of over $15bn per year in security assistance funding, a large share of which goes to finance purchases of US weapons and training. In addition, low-rate, US government-backed loans are made available to potential arms-purchasing nations. Such a loans programme existed in the 1970s and 1980s but was closed down after loans worth $10bn were either forgiven or never repaid, i.e. the programme became a further giveaway for US contractors and their foreign clients. Despite this history, in 1995 another $15bn loan guarantee fund was signed into law by President Clinton. This followed six years of lobbying by the arms industry, led by Lockheed Martin’s CEO, Norm Augustine.
And:
Direct pressure from the Pentagon and the White House is often used to close a sale. For instance, in 2002 the US government demanded that South Korea award a $4.5bn contract to Boeing rather than a French company. Leaks from the South Korean defence ministry indicate that the French plane outperformed its American rival in every area and was $350m cheaper. But the Deputy Defense Secretary, Paul Wolfowitz, told the Koreans that they risked not only losing US political support but the American military would refuse to provide them with cryptographic systems that allow aircraft to identify one another or to supply the American-made air-to-air missiles that the plane uses. Boeing was awarded the contract.
When Colombia considered buying light attack aircraft from Brazil rather than a US manufacturer, the senior American commander in the region wrote to Bogotá that the purchase would have a negative impact on Congressional support for future military aid to Colombia. The deal with Brazil fell through.
And:
The winners of the Deepwater competition were Lockheed Martin and Northrop Grumman. The companies were to work in partnership not only to build their own aspects of the contract but to supervise the work of every other company involved in the programme. This ‘innovative’ approach was touted as a way to reduce bureaucracy and increase efficiency compared with a system in which the Coast Guard itself would retain primary control. What it ended up proving was that contractors can be far less efficient than the government at running major programmes. Anthony D’Armiento, an engineer who worked for both the Coast Guard and Northrop Grumman on the project, called it ‘the fleecing of America. It’s the worst contract I’ve seen in my 20-plus years in naval engineering.’
Initially eight ships were produced for $100m. They were unusable: the hulls cracked and the engines didn’t work properly. The second-largest boat couldn’t even pass a simple water tank test and was put on hold. The largest ship, produced at a cost of over half a billion dollars, was also plagued by cracks in the hull, leading to fears of the hull’s complete collapse.
In May 2005, Congress cut the project’s budget in half, leading to the usual battery of letter writing, lobbying and campaign contributions that resulted in not only the avoidance of cuts to the disastrous programme but an increase to the budget of about $1bn a year, bringing the total project budget to $24bn. Finally, in April 2007, the Coast Guard took back the management of the project from the defence contractors. The first boats are expected to be ready for launch sometime in 2011, ten years after the 9/11 attacks that prompted the modernization effort in the first place.
↑ comment by lukeprog · 2014-03-05T20:13:52.887Z · LW(p) · GW(p)
More (#5) from The Shadow World:
In November 2001, the Air Force had drafted a document detailing what capabilities the new tankers needed. Colonel Mark Donohue, an official in the air mobility office, promptly sent it to Boeing for private comment, and the company sought and received concessions so the requirements matched what the 767 could do. Most importantly and extraordinarily, the Air Force agreed to drop a demand that the new tankers match or exceed the capabilities of the old ones.
And:
In her plea agreement Druyun admitted that, in addition to the tanker case, she had awarded $100m to Boeing as part of a NATO contract in 2002. She admitted that the payment could have been lower, but favoured Boeing because her daughter and son-in-law worked there and she was considering working there as well. She also oversaw a $4bn award to Boeing to modernize the avionics on C-130J aircraft in 2001. In this instance, she favoured Boeing over four competitors because the company had just employed her son-in-law. And she agreed to pay $412m to the company as settlement over a dispute in a C-17 aircraft contract in 2000, at the time when her son-in-law was seeking the job.
And:
In September 2009, the bidding process was restarted once again, this time for 179 aircraft for $35bn over forty years. On this occasion Northrop withdrew in protest, claiming that the set-up of the competition favoured Boeing. Despite Northrop’s departure, EADS continued with the contest. Both sides accused the other of benefiting from illegal subsidies. The World Trade Organization (WTO) first ruled that Airbus had received illegal financial aid and then released an interim ruling that Boeing had also received illegal subsidies, though at a lower level than those received by Airbus.
And:
The Air Force’s intention at this point was to buy 339 planes for a projected cost of over $62bn – up from an initial proposal to buy 750 planes for $25bn. That’s less than half as many planes for more than double the price. This absurd situation arose because initially Lockheed Martin put in a low bid, knowing that the planes would cost far more than their initial estimate. This practice of ‘buying in’ allows a company to get the contract first and then jack up the price later. Then the Air Force engaged in ‘gold plating’ – setting new and ever more difficult performance requirements once the plane is already in development. And finally Lockheed Martin messed up aspects of the plane’s production, while still demanding costs for overheads and spare parts from the Pentagon. As Hartung observes, this is a time-tested approach that virtually guarantees massive cost overruns.
From inside the Pentagon, Chuck Spinney described the process as follows:
"When you start a programme the prime management objective is to make it hard to cancel. The way to think about this is in terms of managing risk: you have performance risk and the bearers of the performance risk are the soldiers who are going to fight with the weapon. You have an economic risk, the bearers of which are the people paying for it, the tax payers. And then you have programmatic risk, that’s the risk that a programme would be cancelled for whatever reasons. Whether you are a private corporation or a public operation you always have those three risks. Now if you look at who bears the programmatic risks it’s the people who are associated with and benefit from the promotion and continuance of that programme. That would include the military and civilians whose careers are attached to its success, and the congressman whose district it may be made in, and of course the companies that make it. If you look at traditional engineering, you start by designing and testing prototypes. To reduce performance risk you test it and redesign it and test it, redesign it. In this way you evolve the most workable design, which in some circumstances may be very different from your original conception. This process also reduces the economic risk because you work bugs out of it beforehand and figure out how to make it efficiently. But the process increases the programmatic risk, or the likelihood of it being cancelled because it doesn’t work properly or is too expensive.
"But the name of the game in the Pentagon is to keep the money flowing to the programme’s constituents. So we bypass the classical prototyping phase and rush a new programme into engineering development before its implications are understood. The subcontractors and jobs are spread all over the country as early as possible to build the programme’s political safety net. But this madness increases performance and economic risk because you’re locking into a design before you understand the future consequences of your decision. It’s insane. If you are spending your own money you would never do it this way but we are spending other people’s money and because we won’t be the ones to use the weapon – so we are risking other people’s blood. So protecting the programme and the money flow takes priority over reducing risk. That’s why we don’t do prototyping and why we lie about costs and why soldiers in the field end up with weapons that do not perform as promised.
"In the US government money is power. The way you preserve that power is to eliminate decision points that might threaten the flow of money. So with the F-22 we should have built a combat capable prototype. But the Cold War was ending, and the Air Force wanted that cow out of the barn door before the door closed."
↑ comment by lukeprog · 2014-03-05T20:06:39.650Z · LW(p) · GW(p)
More (#4) from The Shadow World:
Lockheed was responsible for, and benefited financially from, one of the myriad technologies that comprised the Strategic Defense Initiative (SDI). Its Homing Overlay Experiment – interceptor warheads that would unfurl umbrella-like spokes – was tested successfully in June 1984, after three failed tests had threatened the future of the initiative. To this day the company brags about the test which, it turns out, was rigged. A decade later the GAO reported that the mock warhead used in the test had been ‘enhanced’ to make it easier to hit. By that time $35bn had been spent on Star Wars. So, displaying its customary lack of ethics, the company cooperated with the Army to once again dupe the American taxpayer out of billions of dollars.
And:
Under Augustine, Lockheed had set a goal of doubling its arms exports within five years. A real obstacle to achieving this was that few countries could afford the multibillion-dollar cost of the company’s sophisticated weaponry. As Chairman of the DPACT Augustine led the effort to create a new arms export subsidy; a $15bn fund that would provide low-rate US government-backed loans to potential arms-buying countries. With the arrival of Newt Gingrich's conservative revolution in Congress the fund was approved and signed by President Clinton in December 1995.
Armed with this new ‘open chequebook for arms sales’, Augustine and Lockheed’s vice-president for International Operations, Bruce Jackson, determined that their best hope of new business lay in an extended NATO. New entrants to the military alliance would be required to replace their Soviet-era weapons with systems compatible with NATO’s dominant Western members. Augustine toured Eastern Europe. In Romania he pledged that if the country’s government bought a new radar system from Lockheed Martin, the company would use its considerable clout in Washington to promote Bucharest’s NATO candidacy. In other words, a major defence manufacturer made clear that it was willing to reshape American international security and foreign policy to secure an arms order.
And:
The [Defense Policy Board], like a number of defence-related public bodies, blurs the distinction between the public and the private, resulting in the situation where activities undertaken with public money display minimal transparency and accountability. This is consistent with American capitalism, in which the activities of a corporation are seen as the province of that corporation, and neither the public nor Congress has a fundamental right to access information about them. Most notably, the US Freedom of Information Act doesn’t apply to private companies, leading a Democratic representative from Illinois to suggest that ‘it’s almost as if these private military contractors are involved in a secret war’.
By allocating so much public sector work to private companies, the Bush administration created a condition in which the nature and practice of government activities could be hidden under the cloak of corporate privacy. This severely limits both financial and political accountability. The financial activities of these companies are scrutinized primarily by its shareholders if it is a public company and occasionally by government auditors on a contract-by-contract basis. And of course, at a political level, it is not just feasible but common for the government to claim that a contractor had promised to do one thing but then did another, thus absolving government of responsibility.
This opaque operating environment, in addition to the secrecy afforded by national security, makes it extremely difficult to critically analyse and hold to account the massive military-industrial complex that drives the country’s predisposition to warfare and the increasing militarization of American society. What analysis there is tends to focus on the few corruption scandals that see the light of day.
↑ comment by lukeprog · 2014-03-05T06:03:57.002Z · LW(p) · GW(p)
More (#3) from The Shadow World:
The key agent used by Lockheed in Japan was one Yoshio Kodama, aka ‘The Monster’. After spending three years in prison on war crimes charges after the Second World War, Kodama was set free by the US occupying forces on the grounds that he would make a good ally in the Cold War fight against communism. He then took his fortune – earned by supplying Japanese troops during the war and looting diamonds and platinum from areas conquered by Japan – and put it to work in his country’s politics. Variously described as an organized crime boss and a CIA asset, he helped found and fund the dominant Liberal Democratic Party.
In the late 1950s, Lockheed paid bribes of about $1.5m to $2m to various officials and a fee of $750,000 to Kodama to secure an order for 230 Starfighter planes. The details of the bribes were passed on to the CIA, which confirmed that every move made was approved by Washington. Lockheed was seen to be conducting a deep layer of Washington foreign policy.
This marked the high point of the Starfighter. It was sold to the German air force, and over a ten-year period crashed 178 times, killing a total of eighty-five German pilots. It earned the nickname ‘the Flying Coffin’, and a group of fifty widows of the pilots sued the company.
And:
The SEC offered an amnesty for companies admitting to questionable or illegal payments; over 450 US companies admitted making such payments worth over $300m to government officials, politicians and political parties. Over 117 of the self-reporting entities were Fortune 500 companies. Many of the payments were justified as ‘facilitation payments’ or ‘commissions’. Despite the lurid accounts of not only Lockheed’s activities around the world, but similar schemes by scores of other companies, there was no re-imagining of ethics in the violent, corrupt world of the arms dealers, but there was a dramatic recognition of the scale and damage of corruption in the US. The demand for stronger regulation and banning of bribery was resisted by corporate interests which argued that it would put the US at an economic disadvantage.
And:
By the end of Reagan’s second term military spending doubled, marking the largest peacetime military build-up in US history. This was a massive windfall for the MICC, with, for instance, Lockheed’s Pentagon contracts doubling to $4bn a year from 1980 to 1983.
Resistance to this massive build-up was slow in coming, partly because of its popularity among ordinary Americans. But towards the end of Reagan’s first term, criticism was voiced of both the excessive size of the build-up at a time of growing deficits and social needs, and fear that the massive increase in nuclear weapons could exacerbate the risk of a superpower nuclear confrontation. The latter led to the nuclear freeze campaign, one of the most inspiring citizens’ movements of the twentieth century, while the former forced at least a slow-down in the military build-up.
Among the most effective tools of Reagan’s critics were two vastly overpriced items: a $600 toilet seat and a $7,662 coffeemaker. At a time when Caspar Weinberger was telling Congress that there wasn’t ‘an ounce of waste’ in the largest peacetime military budget in the nation’s history, the spare parts scandal opened the door to a more objective – and damning – assessment of what the tens of billions in new spending was actually paying for. It also opened up Weinberger to ridicule, symbolized most enduringly in a series of cartoons by the Washington Post cartoonist Herblock in which the Defense Secretary was routinely shown with a toilet seat around his neck. Appropriately enough, the coffeemaker was procured for Lockheed’s C-5A transport plane, the poster child for cost overruns and abject performance.
A young journalist, who had been mentored by the Pentagon whistle-blower Ernie Fitzgerald, was central to exposing the scandals. Dina Rasor fingered the aircraft engine makers Pratt & Whitney for thirty-four engine parts that had all increased in price by more than 300 per cent in a year. A procurement official noted in the memo which revealed the scam that ‘Pratt & Whitney has never had to control prices and it will be difficult for them to learn.’
This profiteering at the taxpayer’s expense was surpassed by the Gould Corporation, which provided the Navy with a simple claw hammer, sold in a hardware store for $7, at a price of $435. The Navy suggested the charges – $37 for engineering support, $93 for manufacturing support and a $56 fee that was clear profit – were acceptable. Further revelations included Lockheed charging the Pentagon $591 for a clock for the C-5A and $166,000 for a cowling door to cover the engines. The exorbitant coffeemakers were exposed as poorly made and needing frequent repairs. Lockheed was also billing the taxpayer over $670 for an armrest pad that the Air Force could make itself for between $5 and $25. Finally, it was discovered that a $181 flashlight was built with twenty-year-old technology and a better one could be bought off the shelf for a fraction of the cost.
Lockheed defended itself by pointing out that spare parts were only 1.6 per cent of the defence budget, suggesting that those uncovering the fraud, waste and abuse were the enemies of peace and freedom and should remain silent in the interests of national unity in the face of global adversaries. Ernie Fitzgerald again brought sanity to bear, by suggesting that an overcharge was an overcharge, and that the same procurement practices used with toilet covers and coffeemakers when applied to whole aircraft like the C-5A made the planes ‘a flying collection of spare parts’.
Rasor also revealed that the Air Force planned to pay Lockheed $1.5bn to fix severe problems with the wings on the C-5A that the company itself had created. The wing fix was little more than a multibillion-dollar bailout for Lockheed.
Despite this litany of disasters, the Air Force engaged in illegal lobbying to help Lockheed win the contract to build the next-generation transport plane. In August 1981, a McDonnell Douglas plane was selected for the project, with the Air Force concerned about Lockheed’s proposed C-5B. Two weeks later the Air Force reversed its decision. Rasor could not believe that the Air Force ‘would want to have an updated version of one of its most embarrassing procurements’.
↑ comment by lukeprog · 2014-03-05T05:26:28.930Z · LW(p) · GW(p)
More (#2) from The Shadow World:
BAE had attempted to reinvent itself as an ethical arms company before, and would continue to do so. In 2006, Deborah Allen, director of corporate responsibility, told the BBC that BAE was doing ‘Everything from looking at making a fighter jet more fuel-efficient and looking at the materials that munitions are made of and what their impact on the environment would be.’ The company had plans to manufacture ‘green’ lead-free bullets so that once in the environment they ‘do not cause any additional harm’. Additional, that is, to the harm they’ve caused to the injured or dead target.
BAE also spoke about making a quieter bomb so that the users' exposure to fumes would be reduced. And the company was reported to be making landmines which would turn into manure over time. As Allen put it, they would ‘regenerate the environment that they had initially destroyed’.
She continued: ‘It is very ironic and very contradictory, but I do think, surely, if all the weapons were made in this manner it would be a good thing.’ This green initiative led only to much mirth at the absurd notion of the ethical arms company making weapons and ammunition that would be more caring. The plan to make green bullets was scrapped two years later after BAE discovered that tipping bullets with tungsten instead of lead resulted in higher production costs, making the venture unprofitable.
And:
[There is] a distinct lack of political will to prosecute arms dealers on the part of many countries. The early history of Merex illustrates how dealers are often protected from prosecution by their links to state intelligence agencies or other quasi-state actors. In extreme cases dealers are integral components of organized crime networks that include political actors, while others are or have been useful to powerful politicians or officials, who explicitly or tacitly condone their actions. Their apprehension and prosecution could result in severe embarrassment and politico-legal difficulties for their abettors. With friends in high places some arms dealers have been able to evade arrest and prosecution throughout their illicit careers and beyond.
Viktor Bout’s evasion of justice for many years is an exemplar of how these issues have combined to bedevil the prosecution of arms dealers.
In February 2002, Belgian authorities issued an Interpol ‘red notice’ that they were seeking the arrest of Bout on charges of money laundering and arms dealing. In theory, if he was in a member state, local police authorities were obliged to arrest him and hand him over to Belgium.
...A plan was hatched to arrest him when he landed in Athens and bring him to justice in Belgium. Soon after Bout’s flight took off, British field agents sent an encrypted message to London informing them that ‘the asset’ was in the air. Minutes later the plane changed direction, abandoning its flight plan. It disappeared into mountainous territory out of reach of local radars. The plane re-emerged ninety minutes later and landed in Athens. When police boarded the aircraft it was empty except for the pilots. Twenty-four hours later Bout was spotted 3,000 miles away in the Democratic Republic of Congo. Bout’s crew had been informed of the plan to arrest him in Athens and had arranged to drop him off safely elsewhere. For a European investigator all signs pointed towards US complicity: ‘There were only two intelligence services that could have decrypted the British transmission in so short a time,’ he explained. ‘The Russians and the Americans. And we know for sure it was not the Russians.’
Shortly after Bout’s narrow escape he moved back into the safety of his ‘home territories’ in Russia. Russian officials were reluctant to see Bout prosecuted as he had close contacts within the Russian establishment through whom he had been able to source surplus matériel for years. In 2002, in response to a request to reveal his whereabouts, Russian authorities declared that Bout was definitely not in Russia.
As they were issuing this definitive denial Bout was giving a two-hour interview in the Moscow studios of one of the country’s largest radio stations. Shortly afterwards Russian authorities released a second clarifying statement. It was a thinly veiled message, in classic Orwellian doublespeak, that Bout was now untouchable. With this Russian protection – known locally as krisha – Bout was able to resume operations, albeit with a higher degree of caution. As a consequence, as recently as 2006, Bout was sending weapons to Islamist militants in Somalia and Hezbollah in Lebanon.
↑ comment by lukeprog · 2014-03-05T05:14:34.703Z · LW(p) · GW(p)
More (#1) from The Shadow World:
In addition to the primary moral issue of the destruction caused by their products, there is the related concern of the ‘opportunity cost’ of the arms business. For while a weapons capability is clearly required in our unstable and aggressive world, the scale of defence spending in countries both under threat and peaceable results in the massive diversion of resources from crucial social and development needs, which in itself feeds instability.
A stark example of this cost could be seen in the early years of South Africa’s democracy. With the encouragement of international arms companies and foreign states, the government spent around £6bn on arms and weapons it didn’t require at a time when its President claimed the country could not afford to provide the antiretroviral drugs needed to keep alive the almost 6 million of its citizens living with HIV and Aids. Three hundred million dollars in commissions were paid to middlemen, agents, senior politicians, officials and the African National Congress (ANC – South Africa’s ruling party) itself. In the following five years more than 355,000 South Africans died avoidable deaths because they had no access to the life-saving medication...
And:
Between the world wars, all the large arms companies, including Vickers-Armstrong, agitated against the prospect of a permanent peace. At the Geneva disarmament conference in 1927 an ebullient arms lobbyist, William G. Shearer – employed by three big American shipbuilding companies at huge cost – was instrumental in sabotaging any moves towards international agreements on disarmament by stoking fears and spreading propaganda to encourage the building of warships. Shearer’s lobbying, however, had an unintended consequence, leading to an unprecedented crusade against the arms companies: soon after the Geneva conference, he filed a suit against the three companies that had employed him for $258,000 in unpaid lobbying fees, thus making public not only the exorbitant cost of his employment but also the arms companies’ opposition to disarmament.
And:
As part of the lobbying effort Bandar visited the former Republican Governor Ronald Reagan, then plotting his presidential bid. Bandar had no idea who Reagan was, which highly amused Carter. They hoped Reagan might support the sale, persuading fellow Republicans on the basis of Saudi Arabia’s strong anti-communist credentials. Bandar contacted Thomas Jones, the chairman of the F-5’s maker, Northrop, and a close friend of Reagan’s, and was soon invited to see the Governor in California. As Bandar tells it: "I sat down with Governor Reagan, and we chatted a little bit. Then I explained why we needed the aircraft. He said to me at the end of it, ‘Prince, let me ask you this question. Does this country consider itself a friend of America?’ I said, ‘Yes, since King Abdulaziz, my grandfather, and President Roosevelt met. Until now, we are very close friends.’ Then Reagan asked a second question. ‘Are you anticommunist?’ I said, ‘Mr. Governor, we are the only country in the world that not only does not have relationships with communists, but when a communist comes in an airplane in transit, we don’t allow him to get out of the airplane at our airport.’"
Bandar says he was expecting a long discussion about the sale but "That was it. Two things were important. Are you friends of ours? Are you anticommunist? When I said yes to both, he said, 'I will support it.'" Bandar then asked Reagan to voice his support to a reporter from the Los Angeles Times whom Dutton had tipped off. According to Bandar, the reporter asked: "Do you support the sale of the F-15s to Saudi Arabia that President Carter is proposing?" Reagan responded: "Oh yes, we support our friends and they should have the F-15s. But I disagree with him [Carter] on everything else."
And:
The strategy was further oiled by the Saudis’ legendary schmoozing. King Fahd, as confirmation of his support for the American cause, lavished Arabian horses and diamonds worth $2m on the President and First Lady. Bandar was inventive in ensuring that the gifts became the personal property of the first couple rather than, as protocol demanded, being accepted and registered on behalf of the American people. Bandar, who was particularly close to Nancy, helped the family in countless ways. When Nancy asked him to employ Michael Deaver, the powerful Deputy Chief of Staff to the President who was leaving the White House broke, with legal problems and drinking heavily, Bandar hired him as a consultant for $50,000 a month, even though he had absolutely no contact with him throughout the year that he was on the payroll.
↑ comment by lukeprog · 2014-02-24T02:25:33.486Z · LW(p) · GW(p)
From Weiner's Enemies:
“President Roosevelt directed Bonaparte to create an investigative service within the Department of Justice subject to no other department or bureau, which would report to no one except the Attorney General.” The president’s order “resulted in the formation of the Bureau of Investigation.”
By law, Bonaparte had to ask the House and the Senate to create this new bureau...
On May 27, 1908, the House emphatically said no. It feared the president intended to create an American secret police. The fear was well-founded. Presidents had used private detectives as political spies in the past.
...Congress banned the Justice Department from spending a penny on Bonaparte’s proposal. The attorney general evaded the order. The maneuver might have broken the letter of the law. But it was true to the spirit of the president.
Theodore Roosevelt was “ready to kick the Constitution into the back yard whenever it gets in the way,” as Mark Twain observed. The beginnings of the FBI rose from that bold defiance.
↑ comment by lukeprog · 2014-02-24T02:55:13.021Z · LW(p) · GW(p)
More (#5) from Enemies:
The report bore down hard on the FBI’s intelligence directorate, created by Mueller two years before. It concluded that the directorate had great responsibility but no authority. It did not run intelligence investigations or operations. It performed no analysis. It had little sway over the fifty-six field groups it had created. No one but the director himself had power over any of these fiefs.
“We asked whether the Directorate of Intelligence can ensure that intelligence collection priorities are met,” the report said. “It cannot. We asked whether the directorate directly supervises most of the Bureau’s analysts. It does not.” It did not control the money or the people over whom it appeared to preside. “Can the FBI’s latest effort to build an intelligence capability overcome the resistance that has scuppered past reforms?” the report asked. “The outcome is still in doubt.” These were harsh judgments, all the more stinging because they were true.
If the FBI could not command and control its agents and its authorities, the report concluded, the United States should break up the Bureau and start anew, building a new domestic intelligence agency from the ground up.
With gritted teeth, Mueller began to institute the biggest changes in the command structure of the Bureau since Hoover’s death. A single National Security Service within the FBI would now rule over intelligence, counterintelligence, and counterterrorism. The change was imposed effective in September 2005. As the judge had predicted, it would take the better part of five years before it showed results.
And:
The FBI had more than seven hundred million terrorism-related records in its files. The list of suspected terrorists it oversaw held more than 1.1 million names. Finding real threats in the deluge of secret intelligence remained a nightmarish task. The Bureau’s third attempt to create a computer network for its agents was floundering, costing far more and taking far longer than anyone had feared. It remained a work in progress for years to come; only one-third of the FBI’s agents and analysts were connected to the Internet. Mueller had the authority to hire two dozen senior intelligence officers at headquarters. By 2008, he had found only two. Congress continued to flog the FBI’s counterterrorism managers for their failures of foresight and stamina; Mueller had now seen eight of them come and go.
↑ comment by lukeprog · 2014-02-24T02:51:21.664Z · LW(p) · GW(p)
More (#4) from Enemies:
Six months after the bombing, the FBI’s Lockerbie task force was disbanded. Marquise and a small group of terrorism analysts stayed on the case.
The Scots spent the summer and the fall piecing the hundreds of thousands of shards of evidence together. They got on-the-job training from FBI veterans like Richard Hahn—a man who had been combing through the wreckage of lethal bombings for fifteen years, ever since the unsolved FALN attack on the Fraunces Tavern in New York. They learned how the damage from a blast of Semtex looked different from the scorching from the heat of flame.
The Scots soon determined that bits of clothing with tags saying “Made in Malta” had been contained in a copper Samsonite Silhouette with the radio that held the bomb. But they did not tell the FBI. Then the Germans discovered a computer printout of baggage records from the Frankfurt airport; they showed a single suitcase from an Air Malta flight had been transferred to Pan Am 103 in Frankfurt. But they did not tell the Scots. The international teams of investigators reconvened in Scotland in January 1990. Once again, it was a dialogue of the deaf. Marquise had a terrible feeling that the case would never be solved.
“We’re having tons of problems with CIA. Lots of rivalry,” Marquise said. “Scots are off doing their thing. You’ve got the Germans who are giving the records when they feel like it to the Scots. The FBI’s still doing its thing.… Everybody’s still doing their own thing.”
Then, in June 1990, came small favors that paid big returns. Stuart Henderson, the new senior investigator in Scotland, shared one piece of evidence with Marquise: a photograph of a tiny piece of circuit board blasted into a ragged strip of the Maltese clothing. The Scots had been to fifty-five companies in seventeen countries without identifying the fragment. “They had no idea. No clue,” Marquise said. “So they said, probably tongue-in-cheek, ‘You guys try. Give it a shot.’ ”
The FBI crime laboratory gave the photo to the CIA. An Agency analyst had an image of a nearly identical circuit board, seized four years earlier from two Libyans in transit at the airport in Dakar, Senegal. On the back were four letters: MEBO. Nobody knew what MEBO meant.
And:
The Bureau’s working relationships with the rest of the government remained a constant struggle. The attorney general was appalled when the FBI failed to find a mad scientist sending letters filled with anthrax spores to television newsrooms, newspapers, and United States senators. The FBI focused for four years on the wrong man. The Bureau was drowning in false leads; its networks were crashing; its desktop computers still required twelve clicks to save a document.
The FBI had no connectivity with the rest of American intelligence. Headquarters could not receive reports from the NSA or the CIA classified at the top secret level—and almost everything was classified top secret. Fresh intelligence could not be integrated into the FBI’s databases.
And:
Mueller was caught again between the rule of law and the requisites of secrecy. He agreed with D’Amuro in principle. But he also kept his silence. He put nothing in writing. The argument about whether the FBI could countenance torture went on.
The CIA water-boarded Abu Zubaydah eighty-three times in August and kept him awake for a week or more on end. It did not work. A great deal of what the CIA reported from the black site turned out to be false. The prisoner was not bin Laden’s chief of operations. He was not a terrorist mastermind. He had told the FBI everything he knew. He told the CIA things he did not know.
“You said things to make them stop, and those things were actually untrue, is that correct?” he was asked five years later in a tribunal at Guantánamo.
“Yes,” he replied. “They told me, ‘Sorry, we discover that you are not Number Three, not a partner, not even a fighter.’ ”
↑ comment by lukeprog · 2014-02-24T02:46:11.000Z · LW(p) · GW(p)
More (#3) from Enemies:
In March 1979, Hanssen started a two-year tour at the FBI’s Soviet Counterintelligence Division in New York...
...Hanssen’s supervisors had discovered his one outstanding talent a few weeks after he arrived on duty: he was one of the very few people in the FBI who understood how computers worked. They assigned him to create an automated database about the Soviet contingent of diplomats and suspected spies in New York.
...In November 1979, Hanssen walked undetected into the midtown Manhattan offices of Amtorg, the Soviet trade mission that had served as an espionage front for six decades. The office was run by senior officers of the GRU. Hanssen knew where to go and who to see at Amtorg. That day, he volunteered his services as a spy. He turned over a sheaf of documents on the FBI’s electronic surveillance of the Soviet residential compound in New York, and he set up a system for delivering new secrets every six months through encoded radio communications. Hanssen’s next package contained an up-to-date list of all the Soviets in New York who the FBI suspected were spies. He delivered another revelation that shook the Soviet services to their roots: a GRU major general named Dmitri Polyakov had been working for America since 1961. He had been posted at the United Nations for most of those years. The Soviets recalled Polyakov to Moscow in May 1980. It is likely—though the question is still debated at the FBI—that Polyakov served thereafter as a channel of disinformation intended to mislead and mystify American intelligence.
Hanssen’s responsibilities grew. He was given the task of preparing the budget requests for the Bureau’s intelligence operations in New York. The flow of money showed the FBI’s targets for the next five years—and its plans for projects in collaboration with the CIA and the National Security Agency. His third delivery to the Soviets detailed those plans. And then he decided to lie low.
If Hanssen had stopped spying then and there, the damage he wrought still would have been unequaled in the history of the FBI. William Webster himself would conduct a postmortem after the case came to light in 2001. He called it “an incredible assault,” an epochal disaster, “a five-hundred-year flood” that destroyed everything in its path.
Hanssen suspended his contacts with the Soviets in New York as a major case against an American spy was about to come to light. The investigation had reached across the United States into France, Mexico, and Canada before the FBI began to focus on a retired army code clerk named Joe Helmich in the summer of 1980. He was arrested a year later and sentenced to life in prison after he was convicted of selling the Soviets the codes and operating manual to the KL-7 system, the basic tool of encrypting communications developed by the NSA. He was a lowly army warrant officer with a top secret clearance; his treason had taken place in covert meetings with Soviet intelligence officers in Paris and Mexico City from 1963 to 1966; he was paid $131,000. He had sold the Soviets the equivalent of a skeleton key that let them decode the most highly classified messages of American military and intelligence officers during the Vietnam War.
Hanssen understood one of the most important aspects of the investigation: it had lasted for seventeen years. The FBI could keep a case of counterintelligence alive for a generation. There was no statute of limitations for espionage.
And:
The Miller case was an unsavory affair. He was a twenty-year FBI counterintelligence veteran whose life was falling apart in the months before he became a spy. The father of eight children, he had been excommunicated by the Mormon Church for adultery. He had been suspended by the FBI for two weeks without pay because he was obese. Shortly after that disciplinary action, he had willingly been recruited by a woman he knew to be a KGB agent. Svetlana Ogorodnikov enticed Miller into trading a copy of the FBI’s twenty-five-page manual on foreign counterintelligence investigations in exchange for $15,000 in cash and her sexual favors. Miller was convicted and received a twenty-year sentence.
And:
The secrets spilled because the covert operations of the United States were so badly conceived, and so poorly executed, that they began to break down in public. First the crash of a cargo plane maintained by Southern Air Transport exposed the role of the White House in arming the contras in defiance of the law. Then a newspaper in Beirut revealed that the White House was smuggling weapons into Iran.
The president denied it in public. But Revell knew it was true.
On the afternoon of November 13, 1986, the White House asked Revell to review a speech that President Reagan would deliver to the American people that evening. As he pored over the draft of the speech in North’s office, he pointed out five evident falsehoods.
“We did not—repeat, did not—trade weapons or anything else for hostages, nor will we,” the president’s draft said. The United States would never “strengthen those who support terrorism”; it had only sold “defensive armaments and spare parts” to Iran. It had not violated its stance of neutrality in the scorched-earth war between Iran and Iraq; it had never chartered arms shipments out of Miami.
Revell knew none of this was true. He warned Judge Webster, who alerted Attorney General Meese. He was ignored.
...The president delivered the speech almost precisely as drafted, word for dissembling word.
Colonel North and his superior, the president’s national security adviser, Admiral John Poindexter, began shredding their records and deleting their computer files as fast as they could. But within the White House, one crucial fact emerged: they had skimmed millions of dollars in profits from the weapons sales to Iran and siphoned off the money to support the contras.
↑ comment by lukeprog · 2014-02-24T02:38:10.945Z · LW(p) · GW(p)
More (#2) from Enemies:
Reflecting on the past lives of the British spies at Cambridge in the 1930s, Hoover conflated their communism with their homosexuality.
The connection seemed self-evident to him. Homosexuality and communism were causes for instant dismissal from American government service—and most other categories of employment. Communists and homosexuals both had clandestine and compartmented lives. They inhabited secret underground communities. They used coded language. Hoover believed, as did his peers, that both were uniquely susceptible to sexual entrapment and blackmail by foreign intelligence services.
The FBI’s agents became newly vigilant to this threat. “The Soviets knew, in those days, a government worker, if he was a homosexual, he’d lose his job,” said John T. Conway, who worked on the Soviet espionage squad in the FBI’s Washington field office. Conway investigated a State Department official suspected of meeting a young, blond, handsome KGB officer in a gay bar. “It was a hell of an assignment,” he said. “One night we had him under surveillance and he picked up a young kid, took him up to his apartment, kept him all night. Next day we were able to get the kid and get a statement from him and this guy in the State Department lost his job.”
On June 20, 1951, less than four weeks after the Homer case broke, Hoover escalated the FBI’s Sex Deviates Program. The FBI alerted universities and state and local police to the subversive threat, seeking to drive homosexuals from every institution of government, higher learning, and law enforcement in the nation. The FBI’s files on American homosexuals grew to 300,000 pages over the next twenty-five years before they were destroyed. It took six decades, until 2011, before homosexuals could openly serve in the United States military.
And:
[The Weathermen] carried out thirty-eight bombings. The FBI solved none.
...Dyson had questions about the rule of law: “Can I put an informant in a college classroom? Or even on the campus? Can I penetrate any college organization? What can I do? And nobody had any rules or regulations. There was nothing...”
“This was going to come and destroy us,” he said. “We were going to end up with FBI agents arrested. Not because what they did was wrong. But because nobody knew what was right or wrong.” Not knowing that difference is a legal definition of insanity. Dyson’s premonitions of disaster would prove prophetic. In time, the top commanders of the FBI in Washington and New York would face the prospect of prison time for their work against the threat from the left. So would the president’s closest confidants.
And:
An impassioned diatribe from Sullivan arrived at Hoover’s home on the day that the debate over the director’s future started at the White House. It read like a cross between a Dear John letter and a suicide note. “This complete break with you has been a truly agonizing one for me,” he wrote. But he felt duty-bound to say that “the damage you are doing to the Bureau and its work has brought all this on.”
He laid out his accusations in twenty-seven numbered paragraphs, like the counts of a criminal indictment. Some dealt with Hoover’s racial prejudices; the ranks of FBI agents remained 99.4 percent white (and 100 percent male). Some dealt with Hoover’s use of Bureau funds to dress up his home and decorate his life. Some dealt with the damage he had done to American intelligence by cutting off liaisons with the CIA. Some came close to a charge of treason.
“You abolished our main programs designed to identify and neutralize the enemy,” he wrote, referring to COINTELPRO and the FBI’s black-bag jobs on foreign embassies. “You know the high number of illegal agents operating on the east coast alone. As of this week, the week I am leaving the FBI for good, we have not identified even one of them. These illegal agents, as you know, are engaged, among other things, in securing the secrets of our defense in the event of a military attack so that our defense will amount to nothing. Mr. Hoover, are you thinking? Are you really capable of thinking this through? Don’t you realize we are betraying our government and people?”
Sullivan struck hardest at Hoover’s cult of personality: “As you know you have become a legend in your lifetime with a surrounding mythology linked to incredible power,” he wrote. “We did all possible to build up your legend. We kept away from anything which would disturb you and kept flowing into your office what you wanted to hear … This was all part of the game but it got to be a deadly game that has accomplished no good. All we did was to help put you out of touch with the real world and this could not help but have a bearing on your decisions as the years went by.” He concluded with a plea: “I gently suggest you retire for your own good, that of the Bureau, the intelligence community, and law enforcement.” Sullivan leaked the gist of his letter to his friends at the White House and a handful of reporters and syndicated columnists. The rumors went out across the salons and newsrooms of Washington: the palace revolt was rising at the FBI. The scepter was slipping from Hoover’s grasp.
↑ comment by lukeprog · 2014-02-24T02:32:43.423Z · LW(p) · GW(p)
More (#1) from Enemies:
The biggest slacker raid by far was a three-day roundup set for September 3, the most ambitious operation in the decade-long history of the Bureau of Investigation. Thirty-five agents gathered under the direction of Charles de Woody, the head of the Bureau’s New York office. The Bureau’s men were backed by roughly 2,000 American Protective League members, 2,350 army and navy men, and at least 200 police officers. They hit the streets of Manhattan and Brooklyn at dawn, crossed the Hudson River in ferries, and fanned out across Newark and Jersey City. They arrested somewhere between 50,000 and 65,000 suspects, seizing them off sidewalks, hauling them out of restaurants and bars and hotels, marching them into local jails and national armories. Some 1,500 draft dodgers and deserters were among the accused. But tens of thousands of innocent men had been arrested and imprisoned without cause.
Attorney General Gregory tried to disavow the raids, but the Bureau would not let him. “No one can make a goat of me,” de Woody said defiantly. “Everything I have done in connection with this roundup has been done under the direction of the Attorney General and the chief of the Bureau of Investigation.”
The political storm over the false arrest and imprisonment of the multitudes was brief. But both Attorney General Gregory and the Bureau’s Bielaski soon resigned. Their names and reputations have faded into thin air. Their legacy remains only because it was Hoover’s inheritance.
And:
Venona was one of America’s most secret weapons in the Cold War—so secret that neither President Truman nor the CIA knew about it. On the occasions that Hoover sent intelligence derived from Venona to his superiors, it was scrubbed, sanitized, and attributed only to “a highly sensitive source.” Hoover decreed: “In view of loose methods of CIA & some of its questionable personnel we must be most circumspect. H.”
And:
Coplon was a spy, without question. But the FBI had broken the law trying to convict her. The Bureau illegally wiretapped her telephone conversations with her lawyer. At the first trial, an FBI special agent on the witness stand denied that Coplon’s phone had been tapped, a lie that was later detected.
Then, to Hoover’s dismay, the judge admitted into evidence FBI reports alluding to the search for information on the Soviet atomic spy ring—a threat to the secrecy of Venona.
To protect the intelligence secrets of the FBI from exposure by the court, Hoover instituted a new internal security procedure on July 29, 1949. It was known as June Mail—a new hiding place for records about wiretaps, bugs, break-ins, black-bag jobs, and potentially explosive reports from the most secret sources. June Mail was not stored or indexed in the FBI’s central records but kept in a secret file room, far from the prying eyes of outsiders.
FBI headquarters issued a written order to destroy “all administrative records in the New York field office”—referring to the Coplon wiretaps—“in view of the immediacy of her trial.” The written order contained a note in blue ink: “O.K.—H.”
Despite Hoover’s efforts, the existence of the wiretaps was disclosed at the second trial—another layer of the FBI’s secrecy penetrated. Then the same FBI special agent who had lied at the first trial admitted that he had burned the wiretap records.
...The FBI had been caught breaking the law again. For the first time since the raids of 1920, lawyers, scholars, and journalists openly questioned the powers that Hoover exercised. Almost everyone agreed that the FBI should have the ability to wiretap while investigating treason, espionage, and sabotage. Of course taps would help to catch spies. But so did opening the mails, searching homes and offices, stealing documents, and planting bugs without judicial warrants—all standard conduct for the FBI, and all of it illegal. Even at the height of the Cold War, a free society still looked askance on a secret police.
↑ comment by lukeprog · 2014-02-24T02:17:03.563Z · LW(p) · GW(p)
From Roose's Young Money:
Fashion Meets Finance is a singles mixer series with a simple premise: take several hundred male financiers, put them in a room with several hundred women who work in the fashion industry, and let the magic happen. The series was started in 2007, and it is predicated on the idea that male Wall Streeters and female fashion workers, as the respective alpha ascenders of their tribal clans, deserve to meet and procreate, preserving the dominant line in perpetuity. It’s social Darwinism in its purest, most obnoxious form.
Fashion Meets Finance hit a snag in 2008, when the financial sector nearly collapsed, taking bankers down a few notches on the Manhattan social ladder and necessitating a brief hiatus. But in 2009, it returned with a vengeance. Its organizers proclaimed proudly: "2008 was a confusing time, but we are here to announce the balance is restoring itself to the ecosystem of the New York dating community. We fear that news of shrinking bonuses, banks closing, and the Dow plummeting confused the gorgeous women of the city. … The uncertainty caused panic which caused irrational decisions—there’s going to be a two-year blip in the system where a hot fashion girl might commit to a pharmaceutical salesman. … Fashion Meets Finance has returned to let the women of fashion know that the recession is officially over. It might be a year before bonuses start inflating themselves again, but it will happen. Invest in the future; feel confident in your destiny. Hold on. It will only be a couple more years until you can quit your job and become a tennis mom."
I almost admired the candor with which Fashion Meets Finance accepted noxious social premises as fact. (One early advertisement read, “Ladies, you don’t need to worry that the cute guy at the bar works in advertising!”) But others disagreed. Gawker called one gathering “an event where Manhattan banker-types and fashion slaves meet, consummate, and procreate certain genetics to create lineages of people you’d rather not know.”
...This installment of Fashion Meets Finance, held after a yearlong break, had undergone a significant rebranding. Now, it was being billed as a charity event (proceeds were going to a nonprofit focused on Africa), and the cringe-worthy marketing slogans had been erased. Now, the financiers and fashionistas were joined by a smattering of young professionals from other industries: law, consulting, insurance, even a few female bankers.
After a few hours of drinking and socializing, I had filled my douchebag quota many times over. I had seen and heard the following things:
A banker showing off his expensive watch (which he called “my piece”) to a gaggle of interested-looking women.
A former Lehman Brothers banker explaining his strategy for picking up women. “I use Lehman to break the ice—you know, get their sympathy. Then I tell them I make twice as much as Lehman paid me at my new job. They love my story, and then they end up in my bed.”
A private equity associate using the acronym “P.J.” to refer to his firm’s private jet.
A hedge fund trader giving dating advice to his down-and-out friend: “Girls come in many shapes and sizes. But just remember: when you hold them by the ankles and look down, they all look the same!”
As the night wore on, I identified the two primary strains of Fashion Meets Finance attendees. There were the merely curious, the people who had heard about the event from a friend and were intrigued enough about the premise to pay $25 for a ticket. These people mainly stood or sat on the perimeter of the roof deck, where they could observe (and, in some cases, laugh at) the commingling of the other partygoers.
And then there were the true believers. A portion of the attendees at Fashion Meets Finance seemingly had no idea that the event had become a punch line. They were bankers and fashionistas who were determined to find their matches at a superficial singles mixer, and they had no qualms about it. “I want the real deal!” said one female fashionista, who was sprawled out on a white sofa on Bar Basque’s terrace, sipping a vodka soda and watching the men walk by. “I’m really independent,” she said, “and I don’t want someone who needs to be around me all the time. I want them to work 150 hours a week at Goldman Sachs.”
↑ comment by lukeprog · 2014-02-22T21:57:49.382Z · LW(p) · GW(p)
From Tetlock's Expert Political Judgment:
Skeptics also stress the fine line between success and failure. Churchill's career was almost ruined in 1916 by his sponsorship of the disastrous Gallipoli campaign designed to knock the Ottoman Empire out of World War I. But Churchill insisted, and some historians agree, that the plan "almost worked" and would have if it had been more resolutely implemented. Conversely, Stalin arguably escaped his share of blame for his blunders because, in the end, he was victorious. Stalin nearly lost everything but was saved by Hitler's even bigger blunders.
On close scrutiny, reputations for political genius rest on thin evidential foundations: genius is a matter of being in the right place at the right time. Hero worshippers reveal their own lack of historical imagination: their incapacity to see how easily things could have worked out far worse as a result of contingencies that no mortal could have foreseen. Political geniuses are just a close-call counterfactual away from being permanently pilloried as fools.
↑ comment by lukeprog · 2014-02-22T22:57:42.339Z · LW(p) · GW(p)
More (#2) from Expert Political Judgment:
I have long resonated to classical liberal arguments that stress the efficacy of free-for-all exchanges in stimulating good ideas and screening out bad ones. But I now see many reasons why the routine checks and balances — in society at large as well as in the cloisters of academe — are not up to correcting the judgmental biases documented here. The marketplace of ideas, especially that for political prognostication, has at least three serious imperfections that permit lots of nonsense to persist for long stretches of time.
First, vigorous competition among providers of intellectual products (off-the-shelf opinions) is not enough if the consumers are unmotivated to be discriminating judges of competing claims and counterclaims. This state of affairs most commonly arises when the mass public reacts to intellectuals peddling their wares on op-ed pages or in television studios, but it even arises in academia when harried, hyperspecialized faculty make rapid-fire assessments of scholars whose work is remote from their own. These consumers are rationally ignorant. They do not think it worth their while trying to gauge quality on their own. So, they rely on low-effort heuristics that prize attributes of alleged specialists, such as institutional affiliation, fame, and even physical attractiveness, that are weak predictors of epistemic quality. Indeed, our data—as well as other work—suggest that consumers, especially the emphatically self-confident hedgehogs among them, often rely on low-effort heuristics that are negative predictors of epistemic quality. Many share Harry Truman’s oft-quoted preference for one-armed advisers.
↑ comment by lukeprog · 2014-02-22T22:52:24.737Z · LW(p) · GW(p)
More (#1) from Expert Political Judgment:
For radical skeptics, though, there is a deeper lesson: the impossibility of picking the influential acorns before the fact. Joel Mokyr compares searching for the seeds of the Industrial Revolution to “studying the history of Jewish dissenters between 50 A.D. and 50 B.C. We are looking for something that at its inception was insignificant, even bizarre, but destined to change the life of every man and woman in the West.”
And:
We often want to know why a particular consequence — be it a genocidal bloodbath or financial implosion — happened when and how it did. Examination of the record identifies a host of contributory causes. In the [crash of Western flight 903 in 1979], five factors loom. It is tempting to view each factor by itself as a necessary cause. But the temptation should be resisted. Do we really believe that the crash could not have occurred in the wake of other antecedents? It is also tempting to view the five causes as jointly sufficient. But believing this requires endorsing the equally far-fetched counterfactual that, had something else happened, such as a slightly different location for the truck, the crash would still have occurred.
Exploring these what-if possibilities might seem a gratuitous reminder to families of victims of how unnecessary the deaths were. But the exercise is essential for appreciating why the contributory causes of one accident do not permit the NTSB to predict plane crashes in general. Pilots are often tired; bad weather and cryptic communication are common; radio communication sometimes breaks down; and people facing death frequently panic. The NTSB can pick out, post hoc, the ad hoc combination of causes of any disaster. They can, in this sense, explain the past. But they cannot predict the future. The only generalization that we can extract from airplane accidents may be that, absent sabotage, crashes are the result of a confluence of improbable events compressed into a few terrifying moments.
If a statistician were to conduct a prospective study of how well retrospectively identified causes, either singly or in combination, predict plane crashes, our measure of predictability—say, a squared multiple correlation coefficient—would reveal gross unpredictability. Radical skeptics tell us to expect the same fate for our quantitative models of wars, revolutions, elections, and currency crises. Retrodiction is enormously easier than prediction.
And:
Political observers run the same risk when they look for patterns in random concatenations of events. They would do better by thinking less. When we know the base rates of possible outcomes—say, the incumbent wins 80 percent of the time—and not much else, we should simply predict the more common outcome. But work on base rate neglect suggests that people often insist on attaching high probabilities to low-frequency events. These probabilities are rooted not in observations of relative frequency in relevant reference populations of cases, but rather in case-specific hunches about causality that make some scenarios more “imaginable” than others. A plausible story of how a government might suddenly collapse counts for far more than how often similar outcomes have occurred in the past. Forecasting accuracy suffers when intuitive causal reasoning trumps extensional probabilistic reasoning.
Psychological skeptics are also not surprised when people draw strong lessons from brief runs of forecasting failures or successes. Winning forecasters are often skilled at concocting elaborate stories about why fortune favored their point of view. Academics can quickly spot the speciousness of these stories when the forecaster attributes her success to a divinity heeding a prayer or to planets being in the correct alignment. But even these observers can be gulled if the forecaster invokes an explanation in intellectual vogue.
↑ comment by lukeprog · 2014-02-08T23:43:44.994Z · LW(p) · GW(p)
From Sabin's The Bet:
To make sure The Population Bomb would reach the widest possible audience, Ehrlich paid his twelve-year-old daughter ten dollars to read the draft manuscript and flag any difficult passages.
And:
Ehrlich’s fear of overpopulation and famine reflected broader elite concerns in the mid-1960s. With the world population growing from 2.5 billion people in 1950 to 3.35 billion in 1965, many commentators questioned whether the planet could sustain the growing number of people. The New Republic announced in 1965 that “world population has passed food supply. The famine has started.” The magazine predicted that even “dramatic counter-measures” could not reverse the situation. A “world calamity” would strike within the decade. World hunger, the magazine editors wrote, would be the “single most important fact in the final third of the 20th Century.” US Ambassador to India Chester Bowles concurred, telling a Senate subcommittee in June 1965 that the approaching world famine threatened “the most colossal catastrophe in history.” Starting in January 1968, around the time that Ehrlich was writing The Population Bomb, a group calling itself the “Campaign to Check the Population Explosion” started running full-page advertisements in the Washington Post and the New York Times. The imagery was apocalyptic. One advertisement showed a large stopwatch and announced that someone “dies from starvation” every 8.6 seconds. “World population has already outgrown world food supply,” the advertisement declared. Another advertisement showed the picture of a baby under the headline “Threat to Peace,” warning that “skyrocketing population growth may doom the world we live in.” A third pictured Earth as a bomb about to explode — with population control the only way to defuse the threat.
↑ comment by lukeprog · 2014-02-09T00:05:02.041Z · LW(p) · GW(p)
More (#3) from The Bet:
The stark clash of perspectives that the bet represented suited the divisive environmental politics of the early 1990s. Regulations to protect endangered species, policies to slow global warming, and efforts to protect national forests and rangelands now sharply split Democrats and Republicans. Whereas in the 1970s, major environmental legislation had passed with bipartisan support, by the early 1990s, where a politician stood on environmental policies served as a litmus test of ideology and political affiliation.
↑ comment by lukeprog · 2014-02-08T23:55:59.489Z · LW(p) · GW(p)
More (#2) from The Bet:
“How often does a prophet have to be wrong before we no longer believe that he or she is a true prophet?” Simon goaded. He argued that Ehrlich had been wrong about the “demographic facts of the 1970s,” whereas Simon’s own predictions had been right. Ehrlich had said in 1969, for instance, “If I were a gambler, I would take even money that England will not exist in the year 2000.” Ehrlich had been expressing his view that, without worldwide population control, overpopulation would cause nuclear war, plague, ecological catastrophe, or disastrous resource scarcities. Complaining that Ehrlich made wild statements without ever facing the “consequences of being wrong,” Simon said, “I’ll put my money where my mouth is” and asked Ehrlich to do the same. Rather than betting on the future existence of England, Simon challenged Ehrlich to bet on raw material prices and test their theories about future abundance. Ehrlich’s warnings about limits to economic growth, famines, and declining food harvests suggested rising prices that reflected growing scarcity due to population growth. But Simon argued that prices generally were falling for natural resources because they were becoming less scarce due to increasing productivity and human ingenuity.
And:
Ehrlich, Holdren, and Harte knew about inflation and exchange rates, but soaring nominal prices could not help but encourage their belief that resources were rapidly getting scarcer. Many shared their conviction. The story of Bunker and Herbert Hunt, scions of a leading Texas oil family, might have provided a cautionary tale for the scientists. The Hunts gambled billions of dollars on the rising price of silver. When prices did not increase sufficiently, the Hunt brothers tried to corner the silver market; at one point, they and their partners controlled 77 percent of the silver in private hands. Their effort failed spectacularly in March 1980, however, when government regulators tightened credit and restricted silver purchases. As silver prices collapsed, the Hunt brothers in desperation were forced to borrow more than a billion dollars to extricate themselves from their silver play. Despite such stories from the business pages, Ehrlich and his colleagues believed that the price trends all were in their favor. They felt confident that they would prevail in the bet.
And:
During the late 1970s, tin prices rose sharply. Many observers anticipated shortages. Malaysian producers sought to corner the tin market to force prices even higher in 1981 and 1982. New high-quality tin deposits discovered in the Brazilian Amazon, however, undermined this market-cornering effort. By the end of the 1980s, Brazil, a previously marginal producer, produced roughly one quarter of the world’s tin supply. At the higher prices of the late 1970s, demand for tin also started to fall. Manufacturers substituted aluminum and plastic for tin in packaging. Trying in vain to stabilize prices, the International Tin Agreement held a quarter of the world’s annual tin production off the market. The International Tin Council soon ran out of money, however, and the agreement collapsed. Tin prices went into “free fall,” dropping more than 50 percent from $5.50 per pound in October 1985 to $2.50 per pound in March 1986. Overall, between 1980 and 1990, the gyrations of the tin market supported the arguments of Simon and the economists. New sources of supply, product substitution, and, above all, the breaking of the tin cartel had a far greater impact than population growth on tin prices, ultimately driving them down almost 75 percent.
↑ comment by lukeprog · 2014-02-08T23:48:21.414Z · LW(p) · GW(p)
More (#1) from The Bet:
Fears of a Malthusian calamity that would intensify global conflicts prompted radical prescriptions. In their 1967 book, Famine 1975!, the brothers William and Paul Paddock argued that the United States should apply the concept of military triage to its international food aid. Countries should be separated according to whether they “can’t be saved” (Haiti, Egypt, India), were “walking wounded” (Libya, The Gambia), or “should receive food” (Pakistan, Tunisia). The Paddocks’ ideas of triage and limits to food aid resonated in Washington, DC. President Johnson had refused to send American wheat to India in 1966 until that country adopted a vigorous family planning program. According to presidential adviser Joseph Califano, Johnson told him, “I’m not going to piss away foreign aid in nations where they refuse to deal with their own population problems.” How much Johnson and other American policy-makers believed that India faced a Malthusian crisis—and how much they needed to use the idea of a famine to sell Congress on continuing the Food for Peace export program—is a matter of historical argument. The massive scale of the eventual American relief effort is indisputable: over a two-year period, roughly one-quarter of annual US wheat production was sent to India.
And:
Wading into American politics inevitably meant that Zero Population Growth would have to tackle controversial issues such as birth control, abortion, and women’s rights. Ehrlich’s push for measures to reduce population growth thus drew on the 1960s sexual revolution and efforts to separate the pleasure of sex from reproduction. As a biologist, Ehrlich did not view sexual intercourse as anything sacred. He attacked “sexual repression” and celebrated sex as “mankind’s major and most enduring recreation.” Ehrlich waged an aggressive campaign against Pope Paul VI’s 1968 encyclical “Humanae Vitae,” which affirmed the Catholic Church’s traditional proscription of most forms of birth control. Zero Population Growth continued this fight in the early 1970s, advocating forcefully for abortion rights and access to contraception. In California, Zero Population Growth sought to help pass a proabortion ballot initiative. Ehrlich, who served as the organization’s first president, urged the legalization of abortion and removal of restrictions on contraception in the interest of population control. Ehrlich mocked the association of a fetus with a human being as “confusing a set of blueprints with a building.” After New York passed a liberal abortion law in 1970, Charles Wurster, a founder of the nonprofit Environmental Defense Fund, wrote exultantly to Ehrlich, “I wouldn’t have dreamed this could have happened so fast! This bill is now LAW in the State of New York.”
And:
One summer, a mentally ill woman who had grown obsessed with Ehrlich broke into their house and began living there with her dog, while the family was away at the Colorado field station. When the police came to investigate, they found the Ehrlichs’ house in disarray, with piles of papers and books in apparent chaos. The place had been ransacked, they thought. It turned out, however, in what would become a family joke, that this was just the way Paul and Anne lived.
↑ comment by lukeprog · 2014-01-30T20:31:21.150Z · LW(p) · GW(p)
From Yergin's The Quest:
As Carroll saw it from his vantage point, there were three priorities for the restoration of the Iraqi oil industry—and the rest of the economy—“security, security, and security.” But none of the three was being met. The collapse of the organized state and the inadequacy of the allied forces left large parts of the country very lightly guarded, and the forces that were there were overstretched. And what crippled everything else was the disorder that was the consequence of two decisions haphazardly made by the Coalition Provisional Authority, the entity set up to run the American-led occupation.
The first was “Order #1—De-Baathification of Iraqi Society.” Some two million people had belonged to Saddam’s Baath Party. Some were slavish and brutal followers of Saddam; some were true believers. Many others were compelled to join the Baath Party to get along in their jobs and rise up in the omnipresent bureaucracies and other government institutions that dominated the economy, and to ensure that their children had educational opportunities in a country that had been ruled by the Baathists for decades...
Initially, de-Baathification was meant only to lop off the top of the hierarchy, which needed to be done immediately. But as rewritten and imposed, it reached far down into the country’s institutions and economy, where support for the regime was less ideological and more pragmatic. The country was, as one Iraqi general put it, “a nation of civil servants.” Many schoolteachers were turned out of their jobs and left with no income. The way the purge was applied removed much of the operational capability from government ministries, dismantled the central government, and promoted disorganization. It also eliminated a wide swath of expertise from the oil industry. Broadly, it set the stage for a radicalization of Iraqis—especially Sunnis, stripped of their livelihood, pensions, access to medical care, and so forth—and helped to create conditions for the emergence of Al Qaeda in Iraq. In the oil industry, the result of its almost blanket imposition was to further undermine operations.
...The problem of inadequate troop levels was compounded by Order #2 by the Coalition Provisional Authority—“Dissolution of Entities”—which dismissed the Iraqi Army. Sending or allowing more than 400,000 soldiers, including the largely Sunni officer corps, to go home, with no jobs, no paychecks, no income to support their families, no dignity—but with weapons and growing animus to the American and British forces—was an invitation to disaster. The decision seems to have been made almost off-hand, somewhere between Washington and Baghdad, with little consideration or review. It reversed a decision made ten weeks earlier to use the Iraqi Army to help maintain order. In bluntly criticizing the policy to Bremer, one of the senior U.S. officers used an expletive. Rather than responding to the substance of the objection, Bremer said that he would not tolerate such language in his office and ordered the officer to leave the room.
↑ comment by lukeprog · 2014-01-30T21:21:28.905Z · LW(p) · GW(p)
More (#7) from The Quest:
A few years later, Henry Ford’s grandson, Henry Ford II, acknowledged that “the law requiring greater fuel efficiency in motor vehicle usage has moved us faster toward conservation goals than competitive, free-market forces would have done.” Still he pleaded for Washington to “give up” on pushing for tighter post-1985 fuel-efficiency standards.
↑ comment by lukeprog · 2014-01-30T21:19:48.222Z · LW(p) · GW(p)
More (#6) from The Quest:
The combination of the number of delegations, the overall size of the crowd, and the sharp disagreement on the basic questions—all these led to a chaotic conference that was, as the days went by, becoming more and more frustrating for all involved. It was possible that there would be no agreement at all.
Barack Obama flew in early one morning toward the end of the conference, with the intention of leaving later in the day. Shortly after his arrival, he was told by Secretary of State Hillary Clinton, “Copenhagen was the worst meeting I’ve been to since eighth-grade student council.”
After sitting in a confusing meeting with a group of leaders, Obama turned to his own staff and said he wanted, urgently, to see Premier Wen Jiabao of China. Unfortunately, he was told, the premier was on his way to the airport. But then, no: word came back that Wen was still somewhere in the conference center. Obama and his aides started off at a fast pace to find him. Time was short, for Obama himself was scheduled to leave in a couple of hours, hoping to beat a blizzard that was bearing down on Washington.
At the end of a long corridor, Obama came upon a surprised security guard outside the conference room that was the office of the Chinese delegation. Despite the guard’s panicked efforts, Obama brushed right past him and burst into the room. Not only was Wen there but, to Obama’s surprise, he found that so were the other members of what was now known as the BASIC group—President Luiz Inácio Lula da Silva of Brazil, President Jacob Zuma of South Africa, and Prime Minister Manmohan Singh of India—huddling to find a common position. For their part, they were no less taken aback by the sudden, unexpected appearance of the president of the United States. But they were hardly going to turn Obama away. He took a seat next to Lula and across from Wen. Wen, overcoming his surprise, passed over to Obama the draft they were working on. The president read it quickly and said it was good. But, he said, he had a “couple of points” to add.
Thereupon followed a drafting session with Obama more or less in the role of scribe. At one point the chief Chinese climate negotiator wanted to strenuously disagree with Obama, but Wen instructed that this interjection not be translated.
Finally, after much give-and-take, some of it heated, they came to an agreement. There would be no treaty and no legally binding targets. Instead developed and developing countries would adopt parallel nonbinding pledges to reduce their emissions. That would be accompanied by a parallel understanding that the “mitigation actions” undertaken by developing countries be “subject to international measurement, reporting and verification.” The agreement also crystallized the prime objective of preventing temperatures from rising more than 2°C (3.6°F). The BASIC leaders tossed it to Obama to secure approval from European leaders, Chancellor Angela Merkel of Germany, President Nicolas Sarkozy of France, and Prime Minister Gordon Brown of the UK. The Europeans did so, but only reluctantly, as they wanted something much stronger. Obama then took off, beating the snowstorm back to Washington.
And:
Over drinks before dinner, Piebalgs was asked—in light of the EU’s aggressive 2020 efficiency targets—about the relative popularity of renewables versus efficiency. “Renewables are more popular,” he said. “Renewables are supply side. They provide new energy. Efficiency is something that pays back over the years. Energy efficiency involves a lot of nitty-gritty, a lot of incentives and a lot of regulations.
“And there’s no red ribbon to cut.” Conservation—energy efficiency—may be so obvious as a solution to cost and environmental issues. But there is no photo op, no opening ceremony where government officials and company executives can cut a ribbon, smile broadly into the camera, and inaugurate a grand new facility. He shook his head as he considered one of the most powerful of the life lessons he had learned from his deep immersion in global politics.
“It’s very important to be able to cut a red ribbon.”
And:
The spread of air-conditioning changed the course of global economic development and made possible the expansion of the world economy. Lee Kuan Yew, the founder and former prime minister of modern Singapore, once described air-conditioning as “the most important invention of the twentieth century,” because, he explained, it enabled the people of the tropics to become productive. Singapore’s minister of the environment was a little more explicit, saying that, without air-conditioning, “instead of working in high-tech factories” Singapore’s workers “would probably be sitting under coconut trees.”
And:
“Mottainai is the spirit in which we have approached things over a thousand years because we never really had anything in abundance,” Kawaguchi continued. “So we’ve had to be wise about resources. I was taught at home, every child was taught at home, that you don’t leave a grain of rice on your plate. That’s mottainai. Too precious to waste.”
This sense of mottainai has underpinned Japan’s approach to energy efficiency, which was codified in the Energy Conservation Law of 1979. The law was expanded in 1998 with the introduction of the Top Runner program. It takes the most efficient appliance or motorcar in a particular class—the “top runner”—and then sets a requirement that all appliances and cars must, within a certain number of years, exceed the efficiency of the top runner. This creates a permanent race to keep upping the ante on efficiency. The results are striking: the average efficiency of videocassette recorders increased 74 percent between 1997 and 2003. Even television sets improved by 26 percent between 1997 and 2003. Further amendments to the law mandate improvements by factories and buildings, and require them to adopt efficiency plans.
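A minimal sketch of the ratchet mechanism described above, under a simple reading of the program; the product names and efficiency figures are invented for illustration:

```python
# Illustrative sketch of a "Top Runner"-style efficiency ratchet.
# Products and efficiency numbers are made up; the point is only that
# each compliance period's standard is set by the best performer of the prior period.

def next_standard(efficiencies):
    """The new mandatory minimum is the efficiency of the current top runner."""
    return max(efficiencies)

# Hypothetical efficiencies (e.g., km per liter) for one product class.
round_1 = {"model_a": 12.0, "model_b": 14.5, "model_c": 13.2}
print(next_standard(round_1.values()))  # 14.5 becomes the target all models must exceed

# Once every model meets or beats 14.5, the ratchet repeats from the new top runner.
round_2 = {"model_a": 14.6, "model_b": 15.8, "model_c": 15.0}
print(next_standard(round_2.values()))  # 15.8
```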
And:
By the 1970s Brazil was importing 85 percent of its oil, and its economy was booming. But the 1973 oil crisis abruptly ended what was being called the Brazilian Economic Miracle. Petroleum prices quadrupled, delivering a devastating shock to the economy. The military government responded with what it described as a “wartime economy” to meet the nation’s energy crisis. Brazil, according to the universal consensus, had absolutely no prospects for petroleum. The only energy option was sugar. As part of the “war effort”—and at the strong urging of distraught sugar growers—the government established the national Pro-Alcohol program. It was backed by the slogan “Let’s unite, make alcohol.” As an extra incentive, fuel stations, previously closed on weekends, were granted the right to stay open on Saturdays and Sundays in order to sell ethanol—but not gasoline. Ethanol consumption increased dramatically. Initially ethanol was added to gasoline. But by 1980, in response to the government’s insistence, the Brazilian subsidiaries of the major car companies agreed to manufacture vehicles that ran exclusively on ethanol. In turn, the government made a crucial pledge, both to the companies and consumers, that there would be sufficient ethanol. It was an absolute guarantee. The actual production costs of ethanol in 1980 were three times that of gasoline, but that was hidden from consumers by huge subsidies that were paid for by a tax on gasoline.
By 1985, 95 percent of all new cars sold in Brazil ran exclusively on “alcohol.”
↑ comment by lukeprog · 2014-01-30T21:10:39.481Z · LW(p) · GW(p)
More (#5) from The Quest:
As it was, the Framework Convention on Climate Change—the agreement that came out of Rio—was remarkable. Not because of its targets, for it had none save the “aim” to reduce emissions in 2000 to 1990 levels, but because it existed at all. Four years earlier, climate change had not even been on the political agenda in the United States, nor on that of many other countries. Yet in less than half a decade, what heretofore had been an obscure scientific preoccupation had been turned into something that the international community had gone on record promulgating as an urgent and fundamental challenge to humanity and to the planet’s well-being.
The road to Rio was actually quite long; it had begun more than two centuries earlier, in the Swiss Alps. But what had started as an obsession by a handful of researchers with the past, with glaciers and the mysteries of the Ice Age, was now set to become a dominating energy issue for the future.
And:
On November 15, 1990, George H. W. Bush signed the Clean Air Amendments into law. Title IV established an emissions trading system to reduce acid rain. It was a great victory for something that had been considered beyond-the-pale just a year earlier. Shrinking the caps over time, that is, reducing the total number of allowances or permits year by year, would have the effect of making the permits scarcer and thus more expensive, increasing the incentive to reduce emissions. Many called this system allowance trading. Others, more optimistically, called it the “Grand Policy Experiment.”
After a slow start, the buying and selling of allowances became standard practice among utilities. The results in the years since have been very impressive. Emissions trading delivered much larger reductions, at much lower costs, and much more speedily, than what would have been anticipated with a regulatory system. By 2008, emissions had fallen from the 1980 level by almost 60 percent. As a bonus, the rapid reduction in emissions meant less lung disease and thus significant savings on health care.
The impact on thinking about how to solve environmental problems was enormous. “We are unaware of any other U.S. environmental program that has achieved this much,” concluded a group of MIT researchers, “and we find it impossible to believe that any feasible alternative command-and-control program could have done nearly as well.” Coase’s theorem worked; markets were vindicated. Within a decade, a market-based approach to pollution had gone from immorality and heresy to almost accepted wisdom. The experience would decisively shape the policy responses in the ensuing debate over how to deal with climate change. Overall, the evidence on SO2 was so powerful that it was invoked again and again in the struggles over climate change policy.
And:
What seemed to be the attitude of the Bush administration was captured at a ceremony at the State Department in May 2001, when Secretary of State Colin Powell swore in Paula Dobriansky as Undersecretary of State. Going through her list of responsibilities, he came to climate change. At that point, he paused, and with a small, almost embarrassed grin, laughed, and jokingly put his hand over his mouth as if he had said something slightly naughty.
And:
On April 2, 2007, the Supreme Court delivered its opinion in what has been called “the most important environmental ruling of all times.” In a split 5–4 decision, the Court declared that Massachusetts had standing to bring the suit because of the costly storms and the loss of coastal shore that would result from climate change and that the “risk of harm” to Massachusetts was “both actual and imminent.”
And in the heart of its opinion, the Court said that CO2—even though it was produced not only by burning hydrocarbons but by breathing animals—was indeed a pollutant that “may reasonably be anticipated to endanger public health and welfare.” And just to be sure not to leave any doubt as to how it felt, the majority added that the EPA’s current stance of nonregulation was “arbitrary” and “capricious” and “not in accordance with the law.”
The consequences were enormous; for it meant that if the U.S. Congress did not legislate regulation of carbon, the EPA had the authority—and requirement—to wield its regulatory machinery to achieve the same end by making an “endangerment finding.” Two out of three of the branches of the federal government were now determined that the government should move quickly to control CO2.
↑ comment by lukeprog · 2014-01-30T21:00:44.240Z · LW(p) · GW(p)
More (#4) from The Quest:
in 1978, in Washington, D.C., Rafe Pomerance, president of the environmental group Friends of the Earth, was reading an environmental study when one sentence caught his eye: increasing coal use could warm the earth. “This can’t be true,” Pomerance thought. He started researching the subject, and he soon caught up with a scientist named Gordon MacDonald, who had been a member of Richard Nixon’s Council on Environmental Quality. After a two-hour discussion with MacDonald, Pomerance said, “If I set up briefings around town, will you do them?” MacDonald agreed, and they started making the rounds in Washington, D.C.
The president of the National Academy of Sciences, impressed by the briefing, set up a special task force under Jule Charney. Charney had moved from Princeton to MIT where, arguably, he had become America’s most prominent meteorologist. Issuing its report in 1979, the Charney Committee declared that the risk was very real. A few other influential studies came to similar conclusions, including one by the JASON committee, a panel of leading physicists and other scientists that advised the Department of Defense and other government agencies. It concluded that there was “incontrovertible evidence that the atmosphere is indeed changing and that we ourselves contribute to that change.” The scientists added that the ocean, “the great and ponderous flywheel of the global climate system,” was likely to slow observable climate change. The “JASONs,” as they were sometimes called, said that “a wait-and-see policy may mean waiting until it is too late.”
The campaign “around town” led to highly attended Senate hearings in April 1980. The star of the hearing was Keeling’s Curve. After looking at a map presented by one witness that showed the East Coast of the United States inundated by rising sea waters, the committee chair, Senator Paul Tsongas from Massachusetts, commented with rising irony: “It means good-bye Miami, Corpus Christi . . . good-bye Boston, good-bye New Orleans, good-bye Charleston. . . . On the bright side, it means we can enjoy boating at the foot of the Capitol and fishing on the South Lawn.”
And:
In late August 1990, as the deadline for preparation of the report for the U.N. General Assembly approached, scientists and policymakers met in the northern Swedish town of Sundsvall. A week of acrimonious negotiations ensued, with enormous frustrating arguments even about individual words. What, for instance, did “safe” really mean? By Friday afternoon there was still no agreement. And without agreement they could not go to the United Nations General Assembly with concrete recommendations.
Then came the epic crisis that threatened to scuttle the entire IPCC process: At 6:00 p.m. the U.N. translators walked off the job. They had come to the end of their working day and they were not going to work overtime. This was nonnegotiable. Those were their work rules. But without translators the delegates could not communicate among themselves, the meeting could not go on, there would be no report to the General Assembly and no resolution on climate change. But then the French chairman of the session, who had insisted on speaking French all week, made a huge concession. He agreed to switch to English, in which, it turned out, he was exceedingly fluent.
The discussions and debates now continued in English, and progress was laboriously made. But the chief Russian delegate sat silent, angrily scowling, wreathed in cigarette smoke. Without his assent, there would be no final report, and he gave no sign of coming on board.
Finally one of the scientists from the American delegation who happened to speak Russian approached the scientist. He made a stunning discovery. The Russian did not speak English, and he was certainly not going to sign on to something he did not understand. The American scientist turned himself into a translator, and the Russian finally agreed to the document. Thus consensus was wrought. The IPCC was rescued—just in time.
↑ comment by lukeprog · 2014-01-30T20:54:38.066Z · LW(p) · GW(p)
More (#3) from The Quest:
After World War II, the Navy enlisted Revelle to help understand the oceanographic effects of those tests. Revelle’s assignment was to devise techniques to measure the waves and water pressure from the explosions. This would enable him to track radioactive diffusion through ocean currents. In the course of this work, Revelle’s team discovered “sharp, sudden” variations in water temperatures at different depths. This was the startling insight—the ocean worked differently from what they had thought. In Revelle’s words, the ocean was “a deck of cards.” Revelle concluded that “the ocean is stratified with a lid of warm water on the cold, and the mixing between them is limited.” That constrained the ability of the ocean to accept CO2. It was this period, in the mid-1950s, that Revelle, collaborating with a colleague, Hans Suess, wrote an article that captured this insight and would turn out to be a landmark in climate thinking.
The title made clear what the article was all about: “Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase in Atmospheric CO2 During the Past Decades.” Their paper invoked both Arrhenius and Callendar. Yet the article itself reflected ambiguity. Part of it suggested that the oceans would absorb most of the carbon, just as Revelle’s Ph.D. had argued, meaning that there would be no global warming triggered by carbon. Yet another paragraph suggested the opposite; that, while the ocean would absorb CO2, much of that was only on a temporary basis, owing to the chemistry of sea water, and the lack of interchange between warmer and cooler levels, and that the CO2 would seep back into the atmosphere. In other words, on a net basis, the ocean absorbed much less CO2 than expected. If not in the ocean, there was only one place for the carbon to go, and that was back into the atmosphere. That meant that atmospheric concentration of CO2 was destined, inevitably, to rise. The latter assertion was a late addition by Revelle, literally typed on a different kind of paper and then taped onto the original manuscript.
Before sending off the article, Revelle appended a further last-minute thought: The buildup of CO2 “may become significant during future decades if industrial fuel combustion continues to rise exponentially,” he wrote. “Human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future.” This last sentence would reverberate down through the years in ways that Revelle could not have imagined. Indeed, it would go on to achieve prophetic status—“quoted more than any other statement in the history of global warming.”
Yet it was less a warning and more like a reflection. For Revelle was not worried. Like Svante Arrhenius who had tried 60 years earlier to quantify the effect of CO2 on the atmosphere, Revelle did not foresee that increased concentrations would be dangerous. Rather, it was a very interesting scientific question.
And:
nothing had so forcefully underlined the strategic importance of better comprehension of the weather than D-Day, the invasion of Normandy in June 1944. The “Longest Day,” as it was called, had been preceded by the “longest hours”—hours and hours of soul-wrenching stress, uncertainty, and fear in the headquarters along the southern coast of England, as indecisive hourly briefings followed indecisive hourly briefings, with the “go/no go” decision held hostage to a single factor: the weather.
“The weather in this country is practically unpredictable,” the commander in chief Dwight Eisenhower had complained while anxiously waiting for the next briefing. The forecasts were for very bad weather. How could 175,000 men be put at risk in such dreadful circumstances? At best, the reliability of the weather forecasts went out no more than two days; the stormy weather over the English Channel reduced the reliability to 12 hours. So uncertain was the weather that at the last moment the invasion scheduled for June 5 was postponed, and ships that had already set sail were called back just in time before the Germans could detect them.
Finally, on the morning of June 5, the chief meteorologist said, “I’ll give you some good news.” The forecasts indicated that a brief break of sorts in the weather was at hand. Eisenhower sat silently for 30 or 40 seconds, in his mind balancing success against failure and the risk of making a bad decision. Finally, he stood up and gave the order, “Okay, let’s go.” With that was launched into the barely marginal weather of June 6, 1944, the greatest armada in the history of the world. Fortunately, the German weather forecasters did not see the break and assured the German commander, Erwin Rommel, that he did not have to worry about an invasion.
A decade later, knowing better than anyone else the strategic importance of improved weather knowledge, Eisenhower, now president, gave the “let’s go” order for the International Geophysical Year.
↑ comment by lukeprog · 2014-01-30T20:47:09.622Z · LW(p) · GW(p)
More (#2) from The Quest:
A power crisis that erupted in California in 2000 threw the state into disarray, created a vast economic and political firestorm, and shook the entire nation’s electric power system. The brownouts and economic mayhem that rolled over the Golden State would have been expected in a struggling developing nation, but not in the state that was home to Disneyland, and that had given birth to Silicon Valley, the very embodiment of technology and innovation. After all, California was, if an independent country, the seventh-largest economy in the world.
What unfolded in California graphically exposed the dangers of misdesigning a regulatory system. It was also a case study of how short-term politics can overwhelm the needs of sound policy.
According to popular lore, the crisis was manufactured and manipulated by cynical and wily out-of-state power traders, the worst being Enron, the Houston-based natural gas and energy company. Its traders and those of other companies were accused of creating and then exploiting the crisis with a host of complex strategies. Some traders certainly did blatantly, and even illegally, exploit the system and thus accentuated its flaws. Yet that skims over the fundamental cause of the crisis. For, by then, the system was already broken.
The California crisis resulted from three fundamental factors: The first was an unworkable form of partial deregulation that explicitly rejected the normal power-market stabilizers that could have helped avoid or at least blunt the crisis but instead built instability into the new system. The second was a sharp, adverse turn in supply and demand. The third was a political culture that wanted the benefits of increased electric power but without the costs.
And:
[California] was in an uproar; its economy, disrupted. In April 2001, after listening to Governor Davis threaten the utilities with expropriation, the management of PG&E, the state’s largest utility, serving Northern California, decided that it had no choice but to file for bankruptcy protection. San Diego Gas & Electric teetered on the edge of bankruptcy. The management of one of the state’s major utilities hurriedly put together an analysis of urban disruption to try to prepare for the distress and social breakdown—and potential mayhem—that could result if the blackouts really got out of hand. They foresaw the possibility of riots, looting, and rampant vandalism, and feared for the physical safety of California’s citizens.
But Governor Gray Davis was still dead set against the one thing that would have immediately ameliorated the situation — letting retail prices rise. Instead he had the state step in and negotiate, of all things, long-term contracts, as far out as twenty years. Here the state demonstrated a stunning lack of commercial acumen—buying at the top of the market, committing $40 billion for electricity that would probably be worth only $20 billion in the years to come. With this the state transferred the financial crisis of the utilities to its own books, transforming California’s projected budget surplus of $8 billion into a multibillion-dollar state deficit.
And:
a question troubled Saussure as he traipsed through the Swiss mountains. Why, he asked, did not all the earth’s heat escape into space at night? To try to find an answer, he built in the 1770s what became known as his “hot box”—sort of mini greenhouse. The sides and bottom were covered with darkened cork. The top was glass. As heat and light flowed into the box, it was trapped, and the temperature inside would rise. Perhaps, he mused, the atmosphere did the same thing as the glass. Perhaps the atmosphere was a lid over the earth’s surface, a giant greenhouse, letting the light in but retaining some of the heat, keeping the earth warm even when the sun had disappeared from the sky.
The French mathematician Joseph Fourier — a friend of Napoléon’s and a sometime governor of Egypt — was fascinated by the experiments of Saussure, whom he admiringly described as “the celebrated voyager.” Fourier, who devoted much research to heat flows, was convinced that Saussure was right. The atmosphere, Fourier thought, had to function as some sort of top or lid, retaining heat. Otherwise, the earth’s temperature at night would be well below freezing.
But how to prove it? In the 1820s Fourier set out to do the mathematics. But the work was daunting and extremely inexact, and his inability to work out the calculations left him deeply frustrated. “It is difficult to know up to what point the atmosphere influences the average temperature of the globe,” he lamented, for he could find “no regular mathematical theory” to explain it. With that, he figuratively threw up his hands, leaving the problem to others.
And:
In 1938 an amateur meteorologist stood up to deliver a paper to the Royal Meteorological Society in London. Guy Stewart Callendar was not a professional scientist, but rather a steam engineer. The paper he was about to present would restate Arrhenius’s argument with new documentation. Callendar began by admitting that the CO2 theory had had a “chequered history.” But not for him. He was obsessed with carbon dioxide and its impact on climate; he spent all his spare time collecting and analyzing data on weather patterns and carbon emissions. Amateur though he was, he had more systematically and fully collected the data than anyone else. His work bore out Arrhenius. The results seemed to show that CO2 was indeed increasing in the atmosphere and that would lead to a change in the climate—more specifically, global warming.
While Callendar found this obsessively interesting, he, like Arrhenius, was hardly worried. He too thought this would make for a better, more pleasant world—“beneficial to mankind”—providing, among other things, a boon for agriculture. And there was a great bonus. “The return of the deadly glaciers should be delayed indefinitely.”
But Callendar was an amateur, and the professionals in attendance that night at the Royal Meteorological Society did not take him very seriously. After all, he was a steam engineer.
Yet what Callendar described — the role of CO2 in climate change — eventually became known as the Callendar Effect. “His claims rescued the idea of global warming from obscurity and thrust it into the marketplace of ideas,” wrote one historian. But it was only a temporary recovery. For over a number of years thereafter the idea was roundly dismissed. In 1951 a prominent climatologist observed that the CO2 theory of climate change “was never widely accepted and was abandoned.” No one seemed to take it very seriously.
↑ comment by lukeprog · 2014-01-30T20:39:26.589Z · LW(p) · GW(p)
More (#1) from The Quest:
...auto executives could now see a point on the horizon when China might actually overtake the United States as the world’s largest automobile market. It was inevitable, they said. It was just a matter of time. In 2004 General Motors predicted that it could happen as early as 2025. Some went further and said it could happen as early as 2020. Maybe even 2018. But, they would add, that would be a real stretch.
As things turned out, it happened much sooner — in 2009, amid the Great Recession. That year China, accelerating in the fast lane, not only overtook the United States but pulled into a clear lead.
And:
During World War II, in order to meet the energy needs of factories working two or three shifts a day to supply the war effort, the East Ohio Gas Company built an LNG storage facility in Cleveland. In October 1944 one of the tanks failed. Stored [liquefied natural gas] seeped into the sewer system and ignited, killing 129 people and creating a mile-long fireball. Subsequently, the causes of the accident were identified: poor ventilation, insufficient containment measures, and the improper use of a particular steel alloy that turned brittle at very low temperatures. The design and safety lessons would be seared into the minds of future developers.
And:
It was one thing to build an atomic bomb. It was quite another to harness a controlled chain reaction of fission to generate power. So much had to be invented and developed from scratch—the technology, the engineering, the know-how. It was Rickover who chose the pressurized light-water reactor as the propulsion system. He also imposed “an engineering and technical discipline unknown to industry or, except for his own organization, to government.”
To accomplish his goals, Rickover built a cadre of highly skilled and highly trained officers for the nuclear navy, who were constantly pushed to operate at peak standards of performance...
In Rickover’s tireless campaign to build a nuclear submarine and bulldoze through bureaucracy, he so alienated his superiors that he was twice passed over for promotion to admiral. It took congressional intervention to finally secure him the title.
Rickover’s methods worked. The development of the technology, the engineering, and construction for a nuclear submarine—all these were achieved in record time. The first nuclear submarine, the USS Nautilus, was commissioned in 1954. The whole enterprise had been achieved in seven years—compared with the quarter century that others had predicted. In 1958, to great acclaim, the Nautilus accomplished a formidable, indeed unthinkable, feat—it sailed 1,400 miles under the North Pole and the polar ice cap. The journey was nonstop except for those times when the ship got temporarily stuck between the massive ice cap and the shallow sea bottom. When, on the ship’s return, the Nautilus’s captain was received at the White House, the abrasive Rickover, who was ultimately responsible for the very existence of the Nautilus, was pointedly excluded from the ceremony.
...By the time Rickover finally retired in 1986, 40 percent of the navy’s major combatant ships would be nuclear propelled.
↑ comment by lukeprog · 2014-01-25T20:32:33.802Z · LW(p) · GW(p)
From The Second Machine Age:
Kary Mullis won the 1993 Nobel Prize in Chemistry for the development of the polymerase chain reaction (PCR), a now ubiquitous technique for replicating DNA sequences. When the idea first came to him on a nighttime drive in California, though, he almost dismissed it out of hand. As he recounted in his Nobel Award speech, “Somehow, I thought, it had to be an illusion...It was too easy... There was not a single unknown in the scheme. Every step involved had been done already.” “All” Mullis did was recombine well-understood techniques in biochemistry to generate a new one. And yet it’s obvious Mullis’s recombination is an enormously valuable one.
After examining many examples of invention, innovation, and technological progress, complexity scholar Brian Arthur became convinced that stories like the invention of PCR are the rule, not the exception. As he summarizes in his book The Nature of Technology, “To invent something is to find it in what previously exists.” Economist Paul Romer has argued forcefully in favor of this view, the so-called ‘new growth theory’ within economics, in order to distinguish it from perspectives like Gordon’s. Romer’s inherently optimistic theory stresses the importance of recombinant innovation.
↑ comment by lukeprog · 2014-01-25T20:36:57.441Z · LW(p) · GW(p)
More (#1) from The Second Machine Age:
Quirky, another Web-based startup, enlists people to participate in both phases of Weitzman’s recombinant innovation—first generating new ideas, then filtering them. It does this by harnessing the power of many eyeballs not only to come up with innovations but also to filter them and get them ready for market. Quirky seeks ideas for new consumer products from its crowd, and also relies on the crowd to vote on submissions, conduct research, suggest improvements, figure out how to name and brand the products, and drive sales. Quirky itself makes the final decisions about which products to launch and handles engineering, manufacturing, and distribution. It keeps 70 percent of all revenue made through its website and distributes the remaining 30 percent to all crowd members involved in the development effort; of this 30 percent, the person submitting the original idea gets 42 percent, those who help with pricing share 10 percent, those who contribute to naming share 5 percent, and so on. By the fall of 2012, Quirky had raised over $90 million in venture capital financing and had agreements to sell its products at several major retailers, including Target and Bed Bath & Beyond. One of its most successful products, a flexible electrical power strip called Pivot Power, sold more than 373 thousand units in less than two years and earned the crowd responsible for its development over $400,000.
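For concreteness, here is how the split described above works out on a hypothetical $1,000,000 of product revenue; only the percentages come from the passage, the revenue figure is invented:

```python
# Rough sketch of the Quirky revenue split as described in the quote.
# The $1,000,000 revenue figure is hypothetical.
revenue = 1_000_000

quirky_share = 0.70 * revenue       # Quirky keeps 70%
crowd_pool = 0.30 * revenue         # 30% is distributed to the crowd

idea_submitter = 0.42 * crowd_pool  # 42% of the crowd pool
pricing_helpers = 0.10 * crowd_pool # 10% of the crowd pool
naming_helpers = 0.05 * crowd_pool  # 5% of the crowd pool

print(f"Quirky: ${quirky_share:,.0f}")                 # $700,000
print(f"Crowd pool: ${crowd_pool:,.0f}")               # $300,000
print(f"Original submitter: ${idea_submitter:,.0f}")   # $126,000
```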
↑ comment by lukeprog · 2014-01-25T19:12:50.933Z · LW(p) · GW(p)
From Making Modern Science:
To those who (like Whewell) retained the hope that science and religion could work in harmony, the materialist program of the Enlightenment was a positive danger to science. It encouraged scientists to abandon their objectivity in favor of the arrogant claim that the laws of nature could explain everything. Alfred North Whitehead's Science and the Modern World (1926) urged the scientific community to turn its back on this materialist program and return to an earlier vision in which nature was studied on the assumption that it would reveal evidence of divine purpose. This model of science's history dismisses episodes such as the trial of Galileo as aberrations and portrays the Scientific Revolution as founded on the hope that nature could be seen as the handiwork of a rational and benevolent Creator. For Whitehead and others of his generation, evolution itself could be seen as the unfolding of a divine purpose. This debate between two rival views of science, and hence of its history, is still active today.
In the early twentieth century, the legacy of the rationalist program was transformed in the work of Marxists such as J. D. Bernal. Bernal, an eminent crystallographer, berated the scientific community for selling out to the industrialists. In his Social Function of Science (1939) he called for a renewed commitment to use science for the good of all. His 1954 Science in History was a monumental attempt to depict science as a potential force for good (as in the Enlightenment program) that had been perverted by its absorption into the military-industrial complex. In one important respect, then, the Marxists challenged the assumption that the rise of science represented the progress of human rationality. For them, science had emerged as a byproduct of the search for technical mastery over nature, not a disinterested search for knowledge, and the information it accumulated tended to reflect the interests of the society within which the scientist functioned. The aim of the Marxists was not to create a purely objective science but to reshape society so that the science that was done would benefit everyone, not just the capitalists. They dismissed the program advocated by Whitehead as a smokescreen for covering up science's involvement in the rise of capitalism. Similarly, many intellectual historians reacted furiously to what they regarded as the denigration of science implicit in works such as the Soviet historian Boris Hessen's "The Social and Economic Roots of Newton's 'Principia'" from 1931. The outbreak of World War II highlighted two conflicting visions of science's history, both of which linked it to the dangers revealed in Nazi Germany. The optimistic vision of the Enlightenment had vanished along with the idea of inevitable progress in the calamities that the Western world had now experienced. Science must either turn its back on materialism and renew its links with religion or turn its back on capitalism and begin fighting for the common good.
↑ comment by lukeprog · 2014-01-25T19:18:15.928Z · LW(p) · GW(p)
More (#1) from Making Modern Science:
To some extent, the [history of science] continued and extended the Whiggish approach favored by the scientists themselves, because progress was defined in terms of steps toward what were perceived to be the main components of our modern worldview. In another respect, however, the new historiography of science did go beyond Whiggism: it was willing to admit that scientists were deeply involved with philosophical and religious concerns and often shaped their theories in accordance with their views on these wider questions. A leading influence here was the Russian émigré Alexandre Koyré, working in France and America, who used close textual analysis of classic works in science to demonstrate this wider dimension. Koyré (1978) argued that Galileo was deeply influenced by the Greek philosopher Plato, who had taught that the world of appearances hides an underlying reality structured along mathematical lines. Newton, too, turned out to be a far more complex figure than the old Enlightenment hero, deeply concerned with religious and philosophical issues (Koyré 1965).
↑ comment by lukeprog · 2014-01-18T20:49:47.269Z · LW(p) · GW(p)
From Johnson's Where Good Ideas Come From:
Several years ago, the theoretical physicist Geoffrey West decided to investigate whether Kleiber’s law applied to one of life’s largest creations: the superorganisms of human-built cities. Did the “metabolism” of urban life slow down as cities grew in size? Was there an underlying pattern to the growth and pace of life of metropolitan systems? Working out of the legendary Santa Fe Institute, where he served as president until 2009, West assembled an international team of researchers and advisers to collect data on dozens of cities around the world, measuring everything from crime to household electrical consumption, from new patents to gasoline sales. When they finally crunched the numbers, West and his team were delighted to discover that Kleiber’s negative quarter-power scaling governed the energy and transportation growth of city living. The number of gasoline stations, gasoline sales, road surface area, the length of electrical cables: all these factors follow the exact same power law that governs the speed with which energy is expended in biological organisms. If an elephant was just a scaled-up mouse, then, from an energy perspective, a city was just a scaled-up elephant.
But the most fascinating discovery in West’s research came from the data that didn’t turn out to obey Kleiber’s law. West and his team discovered another power law lurking in their immense database of urban statistics. Every datapoint that involved creativity and innovation — patents, R&D budgets, “supercreative” professions, inventors — also followed a quarter-power law, in a way that was every bit as predictable as Kleiber’s law. But there was one fundamental difference: the quarter-power law governing innovation was positive, not negative. A city that was ten times larger than its neighbor wasn’t ten times more innovative; it was seventeen times more innovative. A metropolis fifty times bigger than a town was 130 times more innovative.
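The two figures quoted here are roughly consistent with a single superlinear power law, with innovation scaling as city size raised to an exponent a bit above 1.2. A quick check; the exponent is back-calculated from the quoted 10x/17x claim, not taken from West's published work:

```python
import math

# Superlinear scaling: innovation ~ size ** beta, with beta > 1.
# Infer beta from the quoted claim that a 10x larger city is ~17x more innovative.
beta = math.log(17) / math.log(10)   # ~1.23

print(f"beta ~ {beta:.2f}")
# Check against the second quoted figure: a metropolis 50x bigger than a town.
print(f"50x larger => about {50 ** beta:.0f}x more innovative")  # ~123, close to the quoted ~130
```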
↑ comment by lukeprog · 2014-01-18T19:59:04.328Z · LW(p) · GW(p)
From Gertner's The Idea Factory:
In Vail’s view, another key to AT&T’s revival was defining it as a technological leader with legions of engineers working unceasingly to improve the system. As the business historian Louis Galambos would later point out, as Vail’s strategy evolved, the company’s executives began to imagine how their company might adapt its technology not only for the near term but for a future far, far away: “Eventually it came to be assumed within the Bell System that there would never be a time when technological innovation would no longer be needed.” The Vail strategy, in short, would measure the company’s progress “in decades instead of years.”
↑ comment by lukeprog · 2014-01-18T20:45:21.537Z · LW(p) · GW(p)
More (#2) from The Idea Factory:
In many respects, says Mathews, a phone monopoly in the early part of the twentieth century made perfect sense. Analog signals — the waves that carry phone calls — are very fragile. “If you’re going to send sound a long way, you have to send it through fifty amplifiers,” he explains, just as the transatlantic cable did. “The only thing that would work is if all the amplifiers in the path were designed and controlled by one entity, being the AT&T company. That was a natural monopoly. The whole system — an analog system — wouldn’t work if it was done by a myriad of companies.” But when Shannon explained how all messages could be classified as information, and all information could be digitally coded, it hinted at the end of this necessary monopoly. Digital information as Shannon envisioned it was durable and portable. In time, any company could code and send a message digitally, and any company could uncode it. And with transistors, which were increasingly cheap and essential to digital transmission, the process would get easier by the year.
Mathews argued that Shannon’s theorem “was the mathematical basis for breaking up the Bell System.” If that was so, then perhaps Shockley’s work would be the technical basis for a breakup. The patents, after all, were now there for the taking. And depending on how it played out, one might attach a corollary to Kelly’s loose formula for innovation—namely, that in any company’s greatest achievements one might, with the clarity of hindsight, locate the beginnings of its own demise.
And:
Pierce let Wells know that one of his science fiction concepts — an atomic bomb — was coming true: America was building one. He had deduced this from the way most of the country’s good physicists were disappearing and being directed to secret laboratories around the country. Pierce told Wells that he and his fellow engineers joked that promising scientists had been “body snatched.” But Wells was largely uninterested in what Pierce was saying. He wanted to talk about politics — among other things, Churchill, Roosevelt, and race in America.
And:
In October 1954, [Pierce] was invited to give a talk about space in Princeton at a convention of the Institute of Radio Engineers. Pierce decided he would discuss an idea he had for communications satellites — that is, orbiting unmanned spaceships that could relay communications (radio, telephone, television, or the like) from one great distance to another. A terrestrial signal could be directed toward the orbiting satellite in space; the satellite, much like a mirror, could in turn direct the signal to another part of the globe. Pierce didn’t consider himself the inventor of this idea; it was, he would later say, “in the air.” In fact, unbeknownst to Pierce, Arthur Clarke had written an obscure paper about ten years before suggesting that a small number of satellites, orbiting the earth at a height of about 22,300 miles, could connect the continents. Clarke never developed the idea any further and quickly lost interest in it. “There seemed nothing more that could be said until technical developments had validated (or invalidated) the basic concept,” he later wrote. In Pierce’s talk, however, he made some detailed calculations about satellites. He concluded that orbiting relays might not be financially viable over land; in the United States, the Bell System already had an intricate system of coaxial cables and microwave links. The oceans were a different story. The new cable that Bell Labs was planning for the Atlantic crossing in 1954 would carry only thirty-six telephone channels at tremendous expense and tremendous risk of mechanical failure. A satellite could satisfy the need for more connections without laying more cable.
One academic in the audience that day in Princeton suggested to Pierce that he publish his talk, which he soon did in the journal Jet Propulsion. “But what could be done about satellite communications in a practical way?” Pierce wondered. “At the time, nothing.” He questioned whether he had fallen into a trap of speculation, something a self-styled pragmatist like Pierce despised. There were no satellites yet of any kind, and there were apparently no rockets capable of launching such devices. It was doubtful, moreover, whether the proper technology even existed yet to operate a useful communications satellite. As Pierce often observed ruefully, “We do what we can, not what we think we should or what we want to do.”
↑ comment by lukeprog · 2014-01-18T20:39:11.030Z · LW(p) · GW(p)
More (#1) from The Idea Factory:
On January 1, 1925, AT&T officially created Bell Telephone Laboratories as a stand-alone company, to be housed in its West Street offices, which would be expanded from 400,000 to 600,000 square feet. The new entity—owned half by AT&T and half by Western Electric—was somewhat perplexing, for you couldn’t buy its stock on any of the exchanges. A new corporate board, led by AT&T’s chief engineer, John J. Carty, and Bell Labs’ new president, Frank Jewett, controlled the laboratory. The Labs would research and develop new equipment for Western Electric, and would conduct switching and transmission planning and invent communications-related devices for AT&T. These organizations would fund Bell Labs’ work. At the start its budget was about $12 million, the equivalent of about $150 million today.
And:
We usually imagine that invention occurs in a flash, with a eureka moment that leads a lone inventor toward a startling epiphany. In truth, large leaps forward in technology rarely have a precise point of origin. At the start, forces that precede an invention merely begin to align, often imperceptibly, as a group of people and ideas converge, until over the course of months or years (or decades) they gain clarity and momentum and the help of additional ideas and actors. Luck seems to matter, and so does timing, for it tends to be the case that the right answers, the right people, the right place—perhaps all three—require a serendipitous encounter with the right problem. And then—sometimes—a leap. Only in retrospect do such leaps look obvious. When Niels Bohr—along with Einstein, the world’s greatest physicist—heard in 1938 that splitting a uranium atom could yield a tremendous burst of energy, he slapped his head and said, “Oh, what idiots we have all been!”
↑ comment by lukeprog · 2014-01-06T21:55:05.103Z · LW(p) · GW(p)
I guess I might as well post quotes from (non-audio) books here as well, when I have no better place to put them.
First up is Revolution in Science.
Starting on page 45:
Very few scientists appear to have described their own work in terms of revolution. Some fifteen years of research on this subject, aided by contributions of many students and friends and the fruits of the investigation of several research assistants, have uncovered only some dozen or so instances of a scientist who said explicitly that his contribution was revolutionary or revolution-making or part of a revolution. These are, in chronological order: Robert Symmer, J.-P. Marat, A.-L. Lavoisier, Justus von Liebig, William Rowan Hamilton, Charles Darwin, Rudolf Virchow, Georg Cantor, Albert Einstein, Hermann Minkowski, Max von Laue, Alfred Wegener, Arthur H. Compton, Ernest Everett Just, James D. Watson, and Benoit Mandelbrot.
Of course, there have been others who have said dramatically that they have produced a new science (Tartaglia, Galileo) or a new astronomy (Kepler) or a "new way of philosophizing" (Gilbert). We would not expect to find many explicit references to a revolution in science prior to the late 1600s. Of the three eighteenth-century scientists who claimed to be producing a revolution, only Lavoisier succeeded in eliciting the same judgment of his work from his contemporaries and from later historians and scientists.
↑ comment by Shmi (shminux) · 2014-01-06T22:29:34.911Z · LW(p) · GW(p)
This amazingly high percentage of self-proclaimed revolutionary scientists (30% or more) seems like a result of selection bias, since most scientists with oversized egos are not even remembered. I wonder what fraction of actual scientists (not your garden-variety crackpots) insist on having produced a revolution in science.
↑ comment by lukeprog · 2014-01-05T18:14:55.631Z · LW(p) · GW(p)
From Sunstein's Worst-Case Scenarios:
Our intuitions can lead to both too much and too little concern with low-probability risks. When judgments are based on analysis, they are more likely to be accurate, certainly if the analysis can be trusted. But for most people, intuitions rooted in actual experience are a much more powerful motivating force. An important task, for individuals and institutions alike, is to ensure that error-prone intuitions do not drive behavior.
↑ comment by lukeprog · 2014-01-08T18:25:01.729Z · LW(p) · GW(p)
More (#2) from Worst-Case Scenarios:
What lessons follow from an understanding of the extraordinary success of the Montreal Protocol and the mixed picture for the Kyoto Protocol? Since we have only two data points here, we must be careful in drawing general conclusions. But two lessons seem both important and indisputable.
The first is that public opinion in the United States greatly matters, at least if it is reflected in actual behavior. When ozone depletion received massive attention in the media, American consumers responded by greatly reducing their consumption of aerosol sprays containing CFCs. This action softened industry opposition to regulation, because product lines containing CFCs were no longer nearly as profitable. In addition, market pressures from consumers spurred technological innovation in developing CFC substitutes. In the environmental domain as elsewhere, markets themselves can be technology-forcing. At the same time, public opinion put a great deal of pressure on public officials, affecting the behavior of legislators and the White House alike.
In Europe, by contrast, those involved in CFC production and use felt little pressure from public opinion, certainly in the early stages. The absence of such pressure, combined with the efforts of well-organized private groups, helped to ensure that European nations would take a weak stand on the question of regulation, at least at the inception of negotiations. In later stages, public opinion and consumer behavior were radically transformed in the United Kingdom and in Europe, and the transformation had large effects on the approach of political leaders there as well.
With respect to climate change, the attitude of the United States remains remarkably close to that of pre-Montreal Europe, urging regulators to "wait and learn"; to date, research and voluntary action rather than emission reduction mandates have been recommended by high-level officials. It is true that since 1990 the problem of climate change has received a great deal of media attention in the United States. But the public has yet to respond to that attention through consumer choices, and the best evidence suggests that most American citizens are not, in fact, alarmed about the risks associated with a warmer climate. American consumers and voters have put little pressure on either markets or officials to respond to the risk.
...The second lesson is that international agreements addressing global environmental problems will be mostly ineffective without the participation of the United States, and the United States is likely to participate only if the domestic benefits are perceived to be at least in the general domain of the domestic costs.
↑ comment by lukeprog · 2014-01-08T18:39:31.644Z · LW(p) · GW(p)
More (#5) from Worst-Case Scenarios:
Objection 4: [Knightian] uncertainty is too infrequent to be a genuine source of concern for purposes of policy and law.
Perhaps regulatory problems, including those mentioned here, hardly ever involve genuine uncertainty. Perhaps regulators are usually able to assign probabilities to outcomes; and where they cannot, perhaps they can instead assign probabilities to probabilities (or where this proves impossible, probabilities to probabilities of probabilities). For example, we have a lot of information about the orbits of asteroids, and good reason to believe that the risk of a devastating collision is very small. In many cases, such as catastrophic terrorist attack, regulators might be able to specify a range of probabilities, say, above 0 percent but below 5 percent. Or they might be able to say that the probability that climate change presents a risk of catastrophe is, at most, 20 percent. Some scientists and economists believe that climate change is unlikely to create catastrophic harm, and that the real costs, human and economic, will be high but not intolerable. In their view, the worst-case scenarios can be responsibly described as improbable.
Perhaps we can agree that pure uncertainty is rare. Perhaps we can agree that, at worst, regulatory problems involve problems of "bounded uncertainty," in which we cannot specify probabilities within particular bands. Maybe the risk of a catastrophic outcome is above 1 percent and below 10 percent, but maybe within that band it is impossible to assign probabilities. A sensible approach, then, would be to ask planners to identify a wide range of possible scenarios and to select approaches that do well for most or all of them. Of course, the pervasiveness of uncertainty depends on what is actually known, and in the case of climate change, people dispute what is actually known. Richard Posner believes that "no probabilities can be attached to the catastrophic global-warming scenarios, and without an estimate of probabilities an expected cost cannot be calculated." A 1994 survey of experts showed an extraordinary range of estimated losses from climate change, varying from no economic loss to a 20 percent decrease in gross world product - a catastrophic decline in the world's well-being.
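A small illustration of what "bounded uncertainty" still lets an analyst do: even when only a probability band is available, expected losses can at least be bracketed. The probability band is taken from the passage; the loss figure is purely hypothetical:

```python
# Sketch of "bounded uncertainty": the probability of catastrophe can be bounded
# within a band but not pinned down. The loss figure is invented for illustration.
loss = 1_000_000_000_000          # hypothetical catastrophic loss, in dollars
p_low, p_high = 0.01, 0.10        # the "above 1 percent and below 10 percent" band

low_bound = p_low * loss
high_bound = p_high * loss
print(f"Expected loss lies somewhere between ${low_bound:,.0f} and ${high_bound:,.0f}")
```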
↑ comment by lukeprog · 2014-01-08T18:34:56.538Z · LW(p) · GW(p)
More (#4) from Worst-Case Scenarios:
If the argument thus far is correct, we need to ask why reasonable people endorse that principle. If precautions themselves create risks, and if no course of action lacks significant worst-case scenarios, it is puzzling why people believe that the Precautionary Principle offers real guidance. The simplest answer is that a weak version is doing the real work. The more interesting answer is that the principle seems to give guidance because people single out a subset of risks that are actually involved. In other words, those who invoke the principle wear blinders. But what kind of blinders do they wear, and what accounts for them? I suggest that two factors are crucial. The first, emphasized in Chapter 1, is availability; the second, which we have not yet encountered, involves loss aversion.
Availability helps to explain the operation of the Precautionary Principle for a simple reason: Sometimes a certain risk, said to call for precautions, is cognitively available, whereas other risks, including the risks associated with regulation itself, are not. For example, everyone knows that nuclear power is potentially dangerous; the associated risks, and the worst-case scenarios, are widely perceived in the culture, because of the Chernobyl disaster and popular films about nuclear catastrophes. By contrast, a relatively complex mental operation is involved in the judgment that restrictions on nuclear power might lead people to depend on less safe alternatives, such as fossil fuels. In many cases where the Precautionary Principle seems to offer guidance, the reason is that some of the relevant risks are available while others are barely visible.
...But there is another factor. Human beings tend to be loss averse, which means that a loss from the status quo is seen as more distressing than a gain is seen as desirable... Because we dislike losses far more than we like corresponding gains, opportunity costs, in the form of forgone gains, often have a small impact on our decisions. When we anticipate a loss of what we already have, we often become genuinely afraid, in a way that greatly exceeds our feelings of pleasurable anticipation when we anticipate some addition to our current holdings.
The implication in the context of danger is clear: People will be closely attuned to the potential losses from any newly introduced risk, or from any aggravation of existing risks, but far less concerned about future gains they might never see if a current risk is reduced. Loss aversion often helps to explain what makes the Precautionary Principle operational. The status quo marks the baseline against which gains and losses are measured, and a loss from the status quo seems much more "bad" than a gain from the status quo seems good.
This is exactly what happens in the case of drug testing. Recall the emphasis, in the United States, on the risks of insufficient testing of medicines as compared with the risks of delaying the availability of those medicines. If there is a lot of testing, people may get sicker, and even die, simply because medicines are not made available. But if the risks of delay are off-screen, the Precautionary Principle will appear to give guidance notwithstanding the objections I have made. At the same time, the lost benefits sometimes present a devastating problem with the use of the Precautionary Principle. In the context of genetic modification of food, this is very much the situation; many people focus on the risks of genetic modification without also attending to the benefits that might be lost by regulation or prohibition. We can find the same problem when the Precautionary Principle is invoked to support bans on nonreproductive cloning. For many people, the possible harms of cloning register more strongly than the potential therapeutic benefits that would be made unattainable by a ban on the practice.
↑ comment by lukeprog · 2014-01-08T18:31:33.123Z · LW(p) · GW(p)
More (#3) from Worst-Case Scenarios:
Notwithstanding the similarities, the Montreal Protocol has proved a stunning success, and the Kyoto Protocol has largely failed. The contrasting outcomes are best explained by reference to the radically different approaches taken by the United States-by far the most significant contributor, per capita, to both ozone depletion and climate change. It is tempting to attribute those different approaches to the political convictions of the relevant administrations. But the Reagan administration, which pressed for the Montreal Protocol, was hardly known for its aggressive pursuit of environmental protection, and the Senate showed no interest in the Kyoto Protocol during the Clinton administration. The American posture, and hence the fate of the two protocols, was largely determined by perceived benefits and costs.
And:
The real problem with the Precautionary Principle, thus understood, is that it offers no guidance-not that it is wrong, but that it forbids all courses of action, including regulation. Taken seriously, it is paralyzing, banning the very steps that it simultaneously requires. If you accepted the strong version, you would not be able to get through a single day, because every action, including inaction, would be forbidden by the principle by which you were attempting to live. You would be banned from going to work; you would be banned from staying at home; you would be banned from taking medications; you would be banned from neglecting to take medications. The same point holds for governments that try to follow the Precautionary Principle.
In some cases, serious precautions would actually run afoul of the Precautionary Principle. Consider the "drug lag," produced whenever the government takes a highly precautionary approach to the introduction of new medicines and drugs onto the market. If a government insists on this approach, it will protect people against harms from inadequately tested drugs, in a way that fits well with the goal of precaution. But it will also prevent people from receiving potential benefits from those very drugs-and hence subject people to serious risks that they would not otherwise face. Is it "precautionary" to require extensive premarket testing, or to do the opposite? In 2006, 50,000 dogs were slaughtered in China, and the slaughter was defended as a precautionary step against the spread of rabies. But the slaughter itself caused a serious harm to many animals, and it inflicted psychological harms on many dog-owners, and even physical injuries on those whose pets were clubbed to death during walks. Is it so clear that the Precautionary Principle justified the slaughter? And even if the Precautionary Principle could be applied, was the slaughter really justified?
Or consider the case of DDT, often banned or regulated in the interest of reducing risks to birds and human beings. The problem with such bans is that, in poor nations, they eliminate what appears to be the most effective way of combating malaria. For this reason, they significantly undermine public health. DDT may well be the best method for combating serious health risks in many countries. With respect to DDT, precautionary steps are both mandated and forbidden by the idea of precaution in its strong forms. To know what to do, we need to identify the probability and magnitude of the harms created and prevented by DDT-not to insist on precaution as such.
Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: "A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented." More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses - and fear is not good for your health.
And:
It has become standard to say that some nations are more precautionary, and more concerned about worst-case scenarios, than are others. European countries, for example, are said to be more precautionary than the United States. If the argument thus far is correct, this conclusion is utterly implausible. First, it is implausible empirically. Some nations take strong precautions against some risks, but no nation takes precautions against every risk. As we have seen, the United States has followed a kind of Precautionary Principle with respect to ozone depletion, and certainly with respect to terrorism, but not for climate change or genetic modification of food. The United Kingdom was not particularly focused on the worst-case scenarios associated with ozone depletion; but it closely attends to those scenarios in the context of climate change. France is not precautionary with respect to nuclear power, and it followed no strong Precautionary Principle with respect to Saddam Hussein. But on many issues of health and safety, France takes aggressive precautionary measures. No nation is precautionary in general; costly precautions are inevitably taken against only those hazards that seem especially salient or insistent.
↑ comment by lukeprog · 2014-01-05T18:20:21.509Z · LW(p) · GW(p)
More (#1) from Worst-Case Scenarios:
For advocates of cost-benefit analysis, a particularly thorny question is how to handle future generations when they are threatened by worst-case scenarios. According to standard practice, money that will come in the future must be "discounted"; a dollar twenty years hence is worth a fraction of a dollar today. (You would almost certainly prefer $1,000 now to $1,000 in twenty years.) Should we discount future lives as well? Is a life twenty years hence worth a fraction of a life today? I will argue in favor of a Principle of Intergenerational Neutrality - one that requires the citizens of every generation to be treated equally. This principle has important implications for many problems, most obviously climate change. Present generations are obliged to take the interests of their threatened descendants as seriously as they take their own.
But the Principle of Intergenerational Neutrality does not mean that the present generation should refuse to discount the future, or should impose great sacrifices on itself for the sake of those who will come later. If human history is any guide, the future will be much richer than the present; and it makes no sense to say that the relatively impoverished present should transfer its resources to the far wealthier future. And if the present generation sacrifices itself by forgoing economic growth, it is likely to hurt the future too, because long-term economic growth is likely to produce citizens who live healthier, longer, and better lives. I shall have something to say about what intergenerational neutrality actually requires, and about the complex relationship between that important ideal and the disputed practice of "discounting" the future.
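For a sense of what standard discounting means in practice, here is a one-line illustration (mine, not Sunstein's; the 5 percent discount rate is an arbitrary assumption):

```python
# Present value of $1,000 received twenty years from now, at an assumed 5% annual discount rate.
future_value = 1_000
rate = 0.05                 # assumed; the choice of rate is exactly what is in dispute
years = 20
present_value = future_value / (1 + rate) ** years
print(round(present_value, 2))   # about 376.89
```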
But at least so far in the book, Sunstein doesn't mention the obvious rejoinder about investing now to prevent existential catastrophe.
Anyway, another quote:
Why was the Montreal Protocol so much more successful than the Kyoto Protocol? I shall suggest here that both the success in Montreal and the mixed picture in Kyoto were driven largely by decisions of the United States, based on a domestic cost-benefit analysis. To the United States, the monetized benefits of the Montreal Protocol dwarfed the monetized costs, and hence the circumstances were extremely promising for American support and even enthusiasm for the agreement. As we will see, the United States had so much to lose from depletion of the ozone layer that it would have been worthwhile for the nation unilaterally to take the steps required by the Montreal Protocol. For the world as a whole, the argument for the Montreal Protocol was overwhelming.
But careful analysis and economic rationality were not the whole story: The nation's attention was also riveted by a vivid image, the ominous and growing "ozone hole" over Antarctica. Ordinary people could easily understand the idea that the earth was losing a kind of "protective shield," one that operated as a safeguard against skin cancer, a dreaded condition.
↑ comment by lukeprog · 2014-01-01T17:13:16.371Z · LW(p) · GW(p)
From Gleick's Chaos:
Every scientist who turned to chaos [theory] early had a story to tell of discouragement or open hostility. Graduate students were warned that their careers could be jeopardized if they wrote theses in an untested discipline, in which their advisors had no expertise. A particle physicist, hearing about this new mathematics, might begin playing with it on his own, thinking it was a beautiful thing, both beautiful and hard — but would feel that he could never tell his colleagues about it. Older professors felt they were suffering a kind of midlife crisis, gambling on a line of research that many colleagues were likely to misunderstand or resent...
Those who recognized chaos in the early days agonized over how to shape their thoughts and findings into publishable form. Work fell between disciplines — for example, too abstract for physicists yet too experimental for mathematicians. To some the difficulty of communicating the new ideas and the ferocious resistance from traditional quarters showed how revolutionary the new science was. Shallow ideas can be assimilated; ideas that require people to reorganize their picture of the world provoke hostility.
↑ comment by lukeprog · 2014-01-01T17:35:46.824Z · LW(p) · GW(p)
More (#3) from Chaos:
Hubbard began using a computer to do what the orthodox techniques had not done. The computer would prove nothing. But at least it might unveil the truth so that a mathematician could know what it was he should try to prove. So Hubbard began to experiment. He treated Newton’s method not as a way of solving problems but as a problem in itself. Hubbard considered the simplest example of a degree-three polynomial, the equation x³ – 1 = 0. That is, find the cube root of 1. In real numbers, of course, there is just the trivial solution: 1. But the polynomial also has two complex solutions: –½ + i√3/2, and –½ – i√3/2. Plotted in the complex plane, these three roots mark an equilateral triangle, with one point at three o’clock, one at seven o’clock, and one at eleven o’clock. Given any complex number as a starting point, the question was to see which of the three solutions Newton’s method would lead to. It was as if Newton’s method were a dynamical system and the three solutions were three attractors. Or it was as if the complex plane were a smooth surface sloping down toward three deep valleys. A marble starting from anywhere on the plane should roll into one of the valleys—but which?
Hubbard set about sampling the infinitude of points that make up the plane. He had his computer sweep from point to point, calculating the flow of Newton’s method for each one, and color-coding the results. Starting points that led to one solution were all colored blue. Points that led to the second solution were red, and points that led to the third were green. In the crudest approximation, he found, the dynamics of Newton’s method did indeed divide the plane into three pie wedges. Generally the points near a particular solution led quickly into that solution. But systematic computer exploration showed complicated underlying organization that could never have been seen by earlier mathematicians, able only to calculate a point here and a point there. While some starting guesses converged quickly to a root, others bounced around seemingly at random before finally converging to a solution. Sometimes it seemed that a point could fall into a cycle that would repeat itself forever—a periodic cycle—without ever reaching one of the three solutions.
As Hubbard pushed his computer to explore the space in finer and finer detail, he and his students were bewildered by the picture that began to emerge. Instead of a neat ridge between the blue and red valleys, for example, he saw blotches of green, strung together like jewels. It was as if a marble, caught between the conflicting tugs of two nearby valleys, would end up in the third and most distant valley instead. A boundary between two colors never quite forms. On even closer inspection, the line between a green blotch and the blue valley proved to have patches of red. And so on—the boundary finally revealed to Hubbard a peculiar property that would seem bewildering even to someone familiar with Mandelbrot’s monstrous fractals: no point serves as a boundary between just two colors. Wherever two colors try to come together, the third always inserts itself, with a series of new, self-similar intrusions. Impossibly, every boundary point borders a region of each of the three colors.
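For concreteness, here is a rough sketch (mine, not Hubbard's actual program) of the experiment Gleick describes: sweep a grid of complex starting points, run Newton's method for z³ – 1 = 0 from each one, and record which root it reaches. The grid bounds, resolution, and iteration cap are arbitrary choices.

```python
# Sketch of Hubbard-style basin coloring for Newton's method on z^3 - 1 = 0.
# (Illustrative only; grid size, tolerance, and iteration limit are arbitrary.)
import numpy as np

ROOTS = np.array([1.0, -0.5 + 0.5j * np.sqrt(3), -0.5 - 0.5j * np.sqrt(3)])  # cube roots of 1

def newton_basin(z, max_iter=100, tol=1e-6):
    """Return the index (0, 1, 2) of the root Newton's method reaches from z, or -1."""
    for _ in range(max_iter):
        if z == 0:                              # derivative vanishes; no Newton step possible
            return -1
        z = z - (z**3 - 1) / (3 * z**2)         # one Newton step for f(z) = z^3 - 1
        close = np.abs(ROOTS - z) < tol
        if close.any():
            return int(np.argmax(close))
    return -1                                   # never converged (e.g. caught in a cycle)

# Sweep a grid of starting points and record which "valley" each one rolls into.
xs = np.linspace(-2, 2, 200)
basins = np.array([[newton_basin(complex(x, y)) for x in xs] for y in xs])
# Rendering `basins` as a three-color image shows the interlocking boundary structure
# described above: every boundary point touches all three basins.
```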
And:
For... Peitgen the study of complexity provided a chance to create new traditions in science instead of just solving problems. “In a brand new area like this one, you can start thinking today and if you are a good scientist you might be able to come up with interesting solutions in a few days or a week or a month,” Peitgen said. The subject is unstructured.
“In a structured subject, it is known what is known, what is unknown, what people have already tried and doesn’t lead anywhere. There you have to work on a problem which is known to be a problem, otherwise you get lost. But a problem which is known to be a problem must be hard, otherwise it would already have been solved.”
Peitgen shared little of the mathematicians’ unease with the use of computers to conduct experiments. Granted, every result must eventually be made rigorous by the standard methods of proof, or it would not be mathematics. To see an image on a graphics screen does not guarantee its existence in the language of theorem and proof. But the very availability of that image was enough to change the evolution of mathematics. Computer exploration was giving mathematicians the freedom to take a more natural path, Peitgen believed. Temporarily, for the moment, a mathematician could suspend the requirement of rigorous proof. He could go wherever experiments might lead him, just as a physicist could. The numerical power of computation and the visual cues to intuition would suggest promising avenues and spare the mathematician blind alleys. Then, new paths having been found and new objects isolated, a mathematician could return to standard proofs. “Rigor is the strength of mathematics,” Peitgen said. “That we can continue a line of thought which is absolutely guaranteed — mathematicians never want to give that up. But you can look at situations that can be understood partially now and with rigor perhaps in future generations. Rigor, yes, but not to the extent that I drop something just because I can’t do it now.”
↑ comment by lukeprog · 2014-01-01T17:29:39.636Z · LW(p) · GW(p)
More (#2) from Chaos:
As a mathematics paper, Lorenz’s climate work would have been a failure—he proved nothing in the axiomatic sense. As a physics paper, too, it was seriously flawed, because he could not justify using such a simple equation to draw conclusions about the earth’s climate. Lorenz knew what he was saying, though. “The writer feels that this resemblance is no mere accident, but that the difference equation captures much of the mathematics, even if not the physics, of the transitions from one regime of flow to another, and, indeed, of the whole phenomenon of instability.” Even twenty years later, no one could understand what intuition justified such a bold claim, published in Tellus, a Swedish meteorology journal. (“Tellus! Nobody reads Tellus,” a physicist exclaimed bitterly.) Lorenz was coming to understand ever more deeply the peculiar possibilities of chaotic systems—more deeply than he could express in the language of meteorology.
And:
Modern economics relies heavily on the efficient market theory. Knowledge is assumed to flow freely from place to place. The people making important decisions are supposed to have access to more or less the same body of information. Of course, pockets of ignorance or inside information remain here and there, but on the whole, once knowledge is public, economists assume that it is known everywhere. Historians of science often take for granted an efficient market theory of their own. When a discovery is made, when an idea is expressed, it is assumed to become the common property of the scientific world. Each discovery and each new insight builds on the last. Science rises like a building, brick by brick. Intellectual chronicles can be, for all practical purposes, linear.
That view of science works best when a well-defined discipline awaits the resolution of a well-defined problem. No one misunderstood the discovery of the molecular structure of DNA, for example. But the history of ideas is not always so neat. As nonlinear science arose in odd corners of different disciplines, the flow of ideas failed to follow the standard logic of historians. The emergence of chaos as an entity unto itself was a story not only of new theories and new discoveries, but also of the belated understanding of old ideas. Many pieces of the puzzle had been seen long before — by Poincaré, by Maxwell, even by Einstein — and then forgotten. Many new pieces were understood at first only by a few insiders. A mathematical discovery was understood by mathematicians, a physics discovery by physicists, a meteorological discovery by no one. The way ideas spread became as important as the way they originated.
Each scientist had a private constellation of intellectual parents. Each had his own picture of the landscape of ideas, and each picture was limited in its own way. Knowledge was imperfect. Scientists were biased by the customs of their disciplines or by the accidental paths of their own educations. The scientific world can be surprisingly finite. No committee of scientists pushed history into a new channel — a handful of individuals did it, with individual perceptions and individual goals.
And:
In the age of computer simulation, when flows in everything from jet turbines to heart valves are modeled on supercomputers, it is hard to remember how easily nature can confound an experimenter. In fact, no computer today can completely simulate even so simple a system as Libchaber’s liquid helium cell. Whenever a good physicist examines a simulation, he must wonder what bit of reality was left out, what potential surprise was sidestepped. Libchaber liked to say that he would not want to fly in a simulated airplane—he would wonder what had been missed. Furthermore, he would say that computer simulations help to build intuition or to refine calculations, but they do not give birth to genuine discovery. This, at any rate, is the experimenter’s creed. His experiment was so immaculate, his scientific goals so abstract, that there were still physicists who considered Libchaber’s work more philosophy or mathematics than physics. He believed, in turn, that the ruling standards of his field were reductionist, giving primacy to the properties of atoms. “A physicist would ask me, How does this atom come here and stick there? And what is the sensitivity to the surface? And can you write the Hamiltonian of the system?
“And if I tell him, I don’t care, what interests me is this shape, the mathematics of the shape and the evolution, the bifurcation from this shape to that shape to this shape, he will tell me, that’s not physics, you are doing mathematics. Even today he will tell me that. Then what can I say? Yes, of course, I am doing mathematics. But it is relevant to what is around us. That is nature, too.”
↑ comment by lukeprog · 2014-01-01T17:21:06.970Z · LW(p) · GW(p)
More (#1) from Chaos:
...for decades, Mandelbrot believes, he had to play games with his work. He had to couch original ideas in terms that would not give offense. He had to delete his visionary-sounding prefaces to get his articles published. When he wrote the first version of his book, published in French in 1975, he felt he was forced to pretend it contained nothing too startling. That was why he wrote the latest version explicitly as “a manifesto and a casebook.” He was coping with the politics of science.
“The politics affected the style in a sense which I later came to regret. I was saying, ‘It’s natural to…, It’s an interesting observation that….’ Now, in fact, it was anything but natural, and the interesting observation was in fact the result of very long investigations and search for proof and self-criticism. It had a philosophical and removed attitude which I felt was necessary to get it accepted. The politics was that, if I said I was proposing a radical departure, that would have been the end of the readers’ interest.
“Later on, I got back some such statements, people saying, ‘It is natural to observe…’ That was not what I had bargained for.”
Looking back, Mandelbrot saw that scientists in various disciplines responded to his approach in sadly predictable stages. The first stage was always the same: Who are you and why are you interested in our field? Second: How does it relate to what we have been doing, and why don’t you explain it on the basis of what we know? Third: Are you sure it’s standard mathematics? (Yes, I’m sure.) Then why don’t we know it? (Because it’s standard but very obscure.)
Mathematics differs from physics and other applied sciences in this respect. A branch of physics, once it becomes obsolete or unproductive, tends to be forever part of the past. It may be a historical curiosity, perhaps the source of some inspiration to a modern scientist, but dead physics is usually dead for good reason. Mathematics, by contrast, is full of channels and byways that seem to lead nowhere in one era and become major areas of study in another. The potential application of a piece of pure thought can never be predicted. That is why mathematicians value work in an aesthetic way, seeking elegance and beauty as artists do. It is also why Mandelbrot, in his antiquarian mode, came across so much good mathematics that was ready to be dusted off.
So the fourth stage was this: What do people in these branches of mathematics think about your work? (They don’t care, because it doesn’t add to the mathematics. In fact, they are surprised that their ideas represent nature.)
↑ comment by lukeprog · 2013-12-14T22:14:54.559Z · LW(p) · GW(p)
From Lewis' The Big Short:
My [first] book was mainly about the bond market, because Wall Street was now making even bigger money packaging and selling and shuffling around America's growing debts. This, too, I assumed was unsustainable. I thought that I was writing a period piece about the 1980s in America, when a great nation lost its financial mind. I expected readers of the future would be appalled that, back in 1986, the CEO of Salomon Brothers, John Gutfreund, was paid $3.1 million as he ran the business into the ground. I expected them to gape in wonder at the story of Howie Rubin, the Salomon mortgage bond trader, who had moved to Merrill Lynch and promptly lost $250 million. I expected them to be shocked that, once upon a time on Wall Street, the CEOs had only the vaguest idea of the complicated risks their bond traders were running.
And that's pretty much how I imagined it; what I never imagined is that the future reader might look back on any of this, or on my own peculiar experience, and say, "How quaint." How innocent. Not for a moment did I suspect that the financial 1980s would last for two full decades longer, or that the difference in degree between Wall Street and ordinary economic life would swell to a difference in kind. That a single bond trader might be paid $47 million a year and feel cheated. That the mortgage bond market invented on the Salomon Brothers trading floor, which seemed like such a good idea at the time, would lead to the most purely financial economic disaster in history. That exactly twenty years after Howie Rubin became a scandalous household name for losing $250 million, another mortgage bond trader named Howie, inside Morgan Stanley, would lose $9 billion on a single mortgage trade, and remain essentially unknown, without anyone beyond a small circle inside Morgan Stanley ever hearing about what he'd done, or why.
...In the two decades after I left, I waited for the end of Wall Street as I had known it. The outrageous bonuses, the endless parade of rogue traders, the scandal that sank Drexel Burnham, the scandal that destroyed John Gutfreund and finished off Salomon Brothers, the crisis following the collapse of my old boss John Meriwether's Long-Term Capital Management, the Internet bubble: Over and over again, the financial system was, in some narrow way, discredited. Yet the big Wall Street banks at the center of it just kept on growing, along with the sums of money that they doled out to twenty-six-year-olds to perform tasks of no obvious social utility. The rebellion by American youth against the money culture never happened. Why bother to overturn your parents' world when you can buy it and sell off the pieces?
At some point, I gave up waiting. There was no scandal or reversal, I assumed, sufficiently great to sink the system.
↑ comment by lukeprog · 2013-12-14T22:41:07.952Z · LW(p) · GW(p)
More (#4) from The Big Short:
[Mike Burry] wasn't wasting a lot of time worrying about why these supposedly shrewd investment bankers were willing to sell him insurance so cheaply. He was worried that others would catch on and the opportunity would vanish. "I would play dumb quite a bit," he said, "making it seem to them like I don't really know what I'm doing. 'How do you do this again?' 'Oh, where can I find that information?' Or, 'Really?'--when they tell me something really obvious." It was one of the fringe benefits of living for so many years essentially alienated from the world around him: He could easily believe that he was right and the world was wrong.
And:
In the second quarter of 2005, credit card delinquencies hit an all-time high--even though house prices had boomed. That is, even with this asset to borrow against, Americans were struggling more than ever to meet their obligations. The Federal Reserve had raised interest rates, but mortgage rates were still effectively falling--because Wall Street was finding ever more clever ways to enable people to borrow money. Burry now had more than a billion-dollar bet on the table and couldn't grow it much more unless he attracted a lot more money. So he just laid it out for his investors: The U.S. mortgage bond market was huge, bigger than the market for U.S. Treasury notes and bonds. The entire economy was premised on its stability, and its stability in turn depended on house prices continuing to rise. "It is ludicrous to believe that asset bubbles can only be recognized in hindsight," he wrote. "There are specific identifiers that are entirely recognizable during the bubble's inflation. One hallmark of mania is the rapid rise in the incidence and complexity of fraud.... The FBI reports mortgage-related fraud is up fivefold since 2000." Bad behavior was no longer on the fringes of an otherwise sound economy; it was its central feature. "The salient point about the modern vintage of housing-related fraud is its integral place within our nation's institutions," he added.
And:
[Eisman] and Vinny and Danny had been making these side bets with Goldman Sachs and Deutsche Bank on the fate of the triple-B tranche of subprime mortgage-backed bonds without fully understanding why those firms were so eager to accept them. Now he was face-to-face with the actual human being on the other side of his credit default swaps. Now he got it: The credit default swaps, filtered through the CDOs, were being used to replicate bonds backed by actual home loans. There weren't enough Americans with shitty credit taking out loans to satisfy investors' appetite for the end product. Wall Street needed his bets in order to synthesize more of them. "They weren't satisfied getting lots of unqualified borrowers to borrow money to buy a house they couldn't afford," said Eisman. "They were creating them out of whole cloth. One hundred times over! That's why the losses in the financial system are so much greater than just the subprime loans. That's when I realized they needed us to keep the machine running. I was like, This is allowed?"
And:
The first half of 2007 was a very strange period in financial history. The facts on the ground in the housing market diverged further and further from the prices on the bonds and the insurance on the bonds. Faced with unpleasant facts, the big Wall Street firms appeared to be choosing simply to ignore them. There were subtle changes in the market, however, and they turned up in Burry's e-mail in-box. On March 19 his salesman at Citigroup sent him, for the first time, serious analysis on a pool of mortgages. The mortgages were not subprime but Alt-A. Still, the guy was trying to explain how much of the pool consisted of interest-only loans, what percentage was owner-occupied, and so on--the way a person might do who actually was thinking about the creditworthiness of the borrowers. "When I was analyzing these back in 2005," Burry wrote in an e-mail, sounding like Stanley watching tourists march through the jungle on a path he had himself hacked, "there was nothing even remotely close to this sort of analysis coming out of brokerage houses. I glommed onto 'silent seconds' as an indicator of a stretched buyer and made it a high-value criterion in my selection process, but at the time no one trading derivatives had any idea what I was talking about and no one thought they mattered." In the long quiet between February and June 2007, they had begun to matter. The market was on edge.
↑ comment by lukeprog · 2013-12-14T22:34:54.885Z · LW(p) · GW(p)
More (#3) from The Big Short:
The original cast of subprime financiers had been sunk by the small fraction of the loans they made that they had kept on their books. The market might have learned a simple lesson: Don't make loans to people who can't repay them. Instead it learned a complicated one: You can keep on making these loans, just don't keep them on your books. Make the loans, then sell them off to the fixed income departments of big Wall Street investment banks, which will in turn package them into bonds and sell them to investors. Long Beach Savings was the first existing bank to adopt what was called the "originate and sell" model. This proved such a hit--Wall Street would buy your loans, even if you would not!--that a new company, called B&C mortgage, was founded to do nothing but originate and sell. Lehman Brothers thought that was such a great idea that they bought B&C mortgage. By early 2005 all the big Wall Street investment banks were deep into the subprime game.
And:
Even in life or death situations, doctors, nurses, and patients all responded to bad incentives. In hospitals in which the reimbursement rates for appendectomies ran higher, for instance, the surgeons removed more appendixes. The evolution of eye surgery was another great example. In the 1990s, the ophthalmologists were building careers on performing cataract procedures. They'd take half an hour or less, and yet Medicare would reimburse them $1,700 a pop. In the late 1990s, Medicare slashed reimbursement levels to around $450 per procedure, and the incomes of the surgically minded ophthalmologists fell. Across America, ophthalmologists rediscovered an obscure and risky procedure called radial keratotomy, and there was a boom in surgery to correct small impairments of vision. The inadequately studied procedure was marketed as a cure for the suffering of contact lens wearers. "In reality," says Burry, "the incentive was to maintain their high, often one-to two-million-dollar incomes, and the justification followed. The industry rushed to come up with something less dangerous than radial keratotomy, and Lasik was eventually born."
And:
In October 2001, [Mike Burry] explained the concept in his letter to investors: "Ick investing means taking a special analytical interest in stocks that inspire a first reaction of 'ick.'"
The alarmingly named Avant! Corporation was a good example. He'd found it searching for the word "accepted" in news stories. He knew that, standing on the edge of the playing field, he needed to find unorthodox ways to tilt it to his advantage, and that usually meant finding unusual situations the world might not be fully aware of. "I wasn't searching for a news report of a scam or fraud per se," he said. "That would have been too backward-looking, and I was looking to get in front of something. I was looking for something happening in the courts that might lead to an investment thesis. An argument being accepted, a plea being accepted, a settlement being accepted by the court." A court had accepted a plea from a software company called the Avant! Corporation. Avant! had been accused of stealing from a competitor the software code that was the whole foundation of Avant!'s business. The company had $100 million in cash in the bank, was still generating $100 million a year of free cash flow--and had a market value of only $250 million! Michael Burry started digging; by the time he was done, he knew more about the Avant! Corporation than any man on earth. He was able to see that even if the executives went to jail (as they did) and the fines were paid (as they were), Avant! would be worth a lot more than the market then assumed. Most of its engineers were Chinese nationals on work visas, and thus trapped--there was no risk that anyone would quit before the lights were out. To make money on Avant!'s stock, however, he'd probably have to stomach short-term losses, as investors puked up shares in horrified response to negative publicity.
And:
[Mike Burry] analyzed the relative importance of the loan-to-value ratios of the home loans, of second liens on the homes, of the location of the homes, of the absence of loan documentation and proof of income of the borrower, and a dozen or so other factors to determine the likelihood that a home loan made in America circa 2005 would go bad. Then he went looking for the bonds backed by the worst of the loans. It surprised him that Deutsche Bank didn't seem to care which bonds he picked to bet against. From their point of view, so far as he could tell, all subprime mortgage bonds were the same. The price of insurance was driven not by any independent analysis but by the ratings placed on the bond by the rating agencies, Moody's and Standard & Poor's. If he wanted to buy insurance on the supposedly riskless triple-A-rated tranche, he might pay 20 basis points (0.20 percent); on the riskier A-rated tranches, he might pay 50 basis points (0.50 percent); and, on the even less safe triple-B-rated tranches, 200 basis points--that is, 2 percent. (A basis point is one-hundredth of one percentage point.) The triple-B-rated tranches--the ones that would be worth zero if the underlying mortgage pool experienced a loss of just 7 percent--were what he was after. He felt this to be a very conservative bet, which he was able, through analysis, to turn into even more of a sure thing. Anyone who even glanced at the prospectuses could see that there were many critical differences between one triple-B bond and the next--the percentage of interest-only loans contained in their underlying pool of mortgages, for example. He set out to cherry-pick the absolute worst ones, and was a bit worried that the investment banks would catch on to just how much he knew about specific mortgage bonds, and adjust their prices.
Once again they shocked and delighted him: Goldman Sachs e-mailed him a great long list of crappy mortgage bonds to choose from. "This was shocking to me, actually," he says. "They were all priced according to the lowest rating from one of the big three ratings agencies." He could pick from the list without alerting them to the depth of his knowledge. It was as if you could buy flood insurance on the house in the valley for the same price as flood insurance on the house on the mountaintop.
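To make the asymmetry concrete, a back-of-the-envelope sketch (my numbers, not the book's): at 200 basis points, protection on a triple-B tranche costs 2 percent of notional per year, while a wipe-out of the tranche pays out roughly the full notional.

```python
# Rough payoff arithmetic for the trade described above (illustrative numbers only).
notional = 100_000_000        # hypothetical notional amount insured
spread_bps = 200              # 200 basis points = 2% per year on the triple-B tranche

annual_premium = notional * spread_bps / 10_000
print(f"annual cost: ${annual_premium:,.0f}")             # $2,000,000

# If the tranche is wiped out, protection pays (roughly) the full notional.
years_of_premium_paid = 3     # assumed holding period before losses hit
profit = notional - years_of_premium_paid * annual_premium
print(f"payout minus premiums: ${profit:,.0f}")            # about $94,000,000
```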
↑ comment by lukeprog · 2013-12-14T22:29:33.511Z · LW(p) · GW(p)
More (#2) from The Big Short:
The year was now 2002. There were no public subprime lending companies left in America. There was, however, an ancient consumer lending giant called Household Finance Corporation. Created in the 1870s, it had long been a leader in the field. Eisman understood the company well, he thought, until he realized that he didn't. In early 2002 he got his hands on Household's new sales document offering home equity loans. The company's CEO, Bill Aldinger, had grown Household even as his competitors went bankrupt. Americans, digesting the Internet bust, seemed in no position to take on new debts, and yet Household was making loans at a faster pace than ever. A big source of its growth had been the second mortgage. The document offered a fifteen-year, fixed-rate loan, but it was bizarrely disguised as a thirty-year loan. It took the stream of payments the homeowner would make to Household over fifteen years, spread it hypothetically over thirty years, and asked: If you were making the same dollar payments over thirty years that you are in fact making over fifteen, what would your "effective rate" of interest be? It was a weird, dishonest sales pitch. The borrower was told he had an "effective interest rate of 7 percent" when he was in fact paying something like 12.5 percent. "It was blatant fraud," said Eisman. "They were tricking their customers."
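The arithmetic of the pitch is easy to reproduce. A minimal sketch (hypothetical principal, pure-Python bisection; the 12.5 percent figure is the one Lewis cites): compute the real payment on a fifteen-year loan, spread the same total dollars over thirty years, and back out the rate a thirty-year loan with that payment would imply.

```python
# Sketch of the "effective rate" sales pitch described above (numbers are illustrative).

def monthly_payment(principal, annual_rate, years):
    """Standard amortizing-loan payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal = 100_000                                       # hypothetical loan size
true_payment = monthly_payment(principal, 0.125, 15)      # what the borrower actually pays

# Pretend the same total dollars are spread over thirty years instead of fifteen...
pretend_payment = true_payment * (15 * 12) / (30 * 12)

# ...and bisect for the rate that would produce that payment on a thirty-year loan.
lo, hi = 1e-6, 0.5
for _ in range(100):
    guess = (lo + hi) / 2
    if monthly_payment(principal, guess, 30) > pretend_payment:
        hi = guess
    else:
        lo = guess
print(f"true rate 12.5%, advertised 'effective rate' about {lo:.1%}")  # lands in the ~6-7% range
```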
And:
...[Eisman] attended a lunch organized by a big Wall Street firm. The guest speaker was Herb Sandler, the CEO of a giant savings and loan called Golden West Financial Corporation. "Someone asked him if he believed in the free checking model," recalls Eisman. "And he said, 'Turn off your tape recorders.' Everyone turned off their tape recorders. And he explained that they avoided free checking because it was really a tax on poor people--in the form of fines for overdrawing their checking accounts. And that banks that used it were really just banking on being able to rip off poor people even more than they could if they charged them for their checks."
Eisman asked, "Are any regulators interested in this?"
"No," said Sandler.
"That's when I decided the system was really, 'Fuck the poor.'"
And:
Instead of money, Eisman attracted people, whose views of the world were as shaded as his own. Vinny, who had just coauthored a gloomy report called "A Home without Equity Is Just a Rental with Debt," came right away. Porter Collins, a two-time Olympic oars-man who had worked with Eisman at Chilton Investment and never really understood why the guy with the bright ideas wasn't given more authority, came along too. Danny Moses, who became Eisman's head trader, came third. Danny had worked as a salesman at Oppenheimer and Co. and had pungent memories of Eisman doing and saying all sorts of things that sell-side analysts seldom did. In the middle of one trading day, for instance, Eisman had walked to the podium at the center of the Oppenheimer trading floor, called for everyone's attention, announced that "the following eight stocks are going to zero," and then listed eight companies that indeed went bankrupt. Raised in Georgia, the son of a finance professor, Danny was less openly fatalistic than Vinny or Steve, but he nevertheless shared a general sense that bad things can and do happen, especially on Wall Street. When a Wall Street firm helped him to get into a trade that seemed perfect in every way, he asked the salesman, "I appreciate this, but I just want to know one thing: How are you going to fuck me?"
Heh-heh-heh, c'mon, we'd never do that, the trader started to say, but Danny, though perfectly polite, was insistent.
We both know that unadulterated good things like this trade don't just happen between little hedge funds and big Wall Street firms. I'll do it, but only after you explain to me how you are going to fuck me. And the salesman explained how he was going to fuck him. And Danny did the trade.
↑ comment by lukeprog · 2013-12-14T22:25:00.609Z · LW(p) · GW(p)
More (#1) from The Big Short:
[Meredith] Whitney was an obscure analyst of financial firms for an obscure financial firm, Oppenheimer and Co., who, on October 31, 2007, ceased to be obscure. On that day she predicted that Citigroup had so mismanaged its affairs that it would need to slash its dividend or go bust. It's never entirely clear on any given day what causes what inside the stock market, but it was pretty clear that, on October 31, Meredith Whitney caused the market in financial stocks to crash. By the end of the trading day, a woman whom basically no one had ever heard of, and who could have been dismissed as a nobody, had shaved 8 percent off the shares of Citigroup and $390 billion off the value of the U.S. stock market. Four days later, Citigroup CEO Chuck Prince resigned. Two weeks later, Citigroup slashed its dividend.
From that moment, Meredith Whitney became E. F. Hutton: When she spoke, people listened. Her message was clear: If you want to know what these Wall Street firms are really worth, take a cold, hard look at these crappy assets they're holding with borrowed money, and imagine what they'd fetch in a fire sale. The vast assemblages of highly paid people inside them were worth, in her view, nothing. All through 2008, she followed the bankers' and brokers' claims that they had put their problems behind them with this write-down or that capital raise with her own claim: You're wrong. You're still not facing up to how badly you have mismanaged your business. You're still not acknowledging billions of dollars in losses on subprime mortgage bonds. The value of your securities is as illusory as the value of your people. Rivals accused Whitney of being overrated; bloggers accused her of being lucky. What she was, mainly, was right. But it's true that she was, in part, guessing. There was no way she could have known what was going to happen to these Wall Street firms, or even the extent of their losses in the subprime mortgage market. The CEOs themselves didn't know. "Either that or they are all liars," she said, "but I assume they really just don't know."
Now, obviously, Meredith Whitney didn't sink Wall Street. She'd just expressed most clearly and most loudly a view that turned out to be far more seditious to the social order than, say, the many campaigns by various New York attorneys general against Wall Street corruption. If mere scandal could have destroyed the big Wall Street investment banks, they would have vanished long ago. This woman wasn't saying that Wall Street bankers were corrupt. She was saying that they were stupid. These people whose job it was to allocate capital apparently didn't even know how to manage their own.
And:
"Here's this database," Eisman said simply. "Go into that room. Don't come out until you've figured out what it means."...
What first caught Vinny's eye were the high prepayments coming in from a sector called "manufactured housing." ("It sounds better than 'mobile homes.'") Mobile homes were different from the wheel-less kind: Their value dropped, like cars', the moment they left the store. The mobile home buyer, unlike the ordinary home buyer, couldn't expect to refinance in two years and take money out. Why were they prepaying so fast? Vinny asked himself. "It made no sense to me. Then I saw that the reason the prepayments were so high is that they were involuntary." "Involuntary prepayment" sounds better than "default." Mobile home buyers were defaulting on their loans, their mobile homes were being repossessed, and the people who had lent them money were receiving fractions of the original loans. "Eventually I saw that all the subprime sectors were either being prepaid or going bad at an incredible rate," said Vinny. "I was just seeing stunningly high delinquency rates in these pools." The interest rate on the loans wasn't high enough to justify the risk of lending to this particular slice of the American population. It was as if the ordinary rules of finance had been suspended in response to a social problem. A thought crossed his mind: How do you make poor people feel wealthy when wages are stagnant? You give them cheap loans.
To sift every pool of subprime mortgage loans took him six months, but when he was done he came out of the room and gave Eisman the news. All these subprime lending companies were growing so rapidly, and using such goofy accounting, that they could mask the fact that they had no real earnings, just illusory, accounting-driven, ones. They had the essential feature of a Ponzi scheme: To maintain the fiction that they were profitable enterprises, they needed more and more capital to create more and more subprime loans. "I wasn't actually a hundred percent sure I was right," said Vinny, "but I go to Steve and say, 'This really doesn't look good.' That was all he needed to know. I think what he needed was evidence to downgrade the stock."
The report Eisman wrote trashed all of the subprime originators; one by one, he exposed the deceptions of a dozen companies. "Here is the difference," he said, "between the view of the world they are presenting to you and the actual numbers." The subprime companies did not appreciate his effort. "He created a shitstorm," said Vinny. "All these subprime companies were calling and hollering at him: You're wrong. Your data's wrong. And he just hollered back at them, 'It's YOUR fucking data!'" One of the reasons Eisman's report disturbed so many is that he'd failed to give the companies he'd insulted fair warning. He'd violated the Wall Street code. "Steve knew this was going to create a shitstorm," said Vinny. "And he wanted to create the shitstorm. And he didn't want to be talked out of it. And if he told them, he'd have had all these people trying to talk him out of it."
"We were never able to evaluate the loans before because we never had the data," said Eisman later. "My name was wedded to this industry. My entire reputation had been built on covering these stocks. If I was wrong, that would be the end of the career of Steve Eisman."
Eisman published his report in September 1997, in the middle of what appeared to be one of the greatest economic booms in U.S. history. Less than a year later, Russia defaulted and a hedge fund called Long-Term Capital Management went bankrupt. In the subsequent flight to safety, the early subprime lenders were denied capital and promptly went bankrupt en masse. Their failure was interpreted as an indictment of their accounting practices, which allowed them to record profits before they were realized. No one but Vinny, so far as Vinny could tell, ever really understood the crappiness of the loans they had made. "It made me feel good that there was such inefficiency to this market," he said. "Because if the market catches on to everything, I probably have the wrong job. You can't add anything by looking at this arcane stuff, so why bother? But I was the only guy I knew who was covering companies that were all going to go bust during the greatest economic boom we'll ever see in my lifetime. I saw how the sausage was made in the economy and it was really freaky."
↑ comment by lukeprog · 2013-12-06T17:40:07.732Z · LW(p) · GW(p)
From Gleick's The Information:
[John Wilkins (1614-1672)] set out to determine how a restricted set of symbols — perhaps just two, three, or five — might be made to stand for a whole alphabet. They would have to be used in combination. For example, a set of five symbols — a, b, c, d, e — used in pairs could replace an alphabet of twenty-five letters...
...So even a small symbol set could be arranged to express any message at all. However, with a small symbol set, a given message requires a longer string of characters — “more Labour and Time,” he wrote. Wilkins did not explain that 25 = 5², nor that three symbols taken in threes (aaa, aab, aac,…) produce twenty-seven possibilities because 3³ = 27. But he clearly understood the underlying mathematics. His last example was a binary code, awkward though this was to express in words:
Two symbols. In groups of five. “Yield thirty two Differences.”
That word, differences, must have struck Wilkins’s readers (few though they were) as an odd choice. But it was deliberate and pregnant with meaning. Wilkins was reaching for a conception of information in its purest, most general form. Writing was only a special case: "For in the general we must note, That whatever is capable of a competent Difference, perceptible to any Sense, may be a sufficient Means whereby to express the Cogitations." A difference could be “two Bells of different Notes”; or “any Object of Sight, whether Flame, Smoak, &c.”; or trumpets, cannons, or drums. Any difference meant a binary choice. Any binary choice began the expressing of cogitations. Here, in this arcane and anonymous treatise of 1641, the essential idea of information theory poked to the surface of human thought, saw its shadow, and disappeared again for four hundred years.
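The counting is easy to verify; a trivial sketch (my illustration, not Wilkins's notation):

```python
# k symbols taken in groups of n yield k**n distinct "differences".
from itertools import product

pairs = ["".join(p) for p in product("abcde", repeat=2)]
print(len(pairs))    # 25 = 5**2: enough two-symbol codes for a 25-letter alphabet

triples = ["".join(p) for p in product("abc", repeat=3)]
print(len(triples))  # 27 = 3**3

fives = ["".join(p) for p in product("ab", repeat=5)]
print(len(fives))    # 32 = 2**5: Wilkins's "thirty two Differences" from two symbols in groups of five
```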
↑ comment by lukeprog · 2013-12-06T17:48:02.583Z · LW(p) · GW(p)
More (#1) from The Information:
The global expansion of the telegraph continued to surprise even its backers. When the first telegraph office opened in New York City on Wall Street, its biggest problem was the Hudson River. The Morse system ran a line sixty miles up the eastern side until it reached a point narrow enough to stretch a wire across. Within a few years, though, an insulated cable was laid under the harbor. Across the English Channel, a submarine cable twenty-five miles long made the connection between Dover and Calais in 1851. Soon after, a knowledgeable authority warned: “All idea of connecting Europe with America, by lines extending directly across the Atlantic, is utterly impracticable and absurd.” That was in 1852; the impossible was accomplished by 1858, at which point Queen Victoria and President Buchanan exchanged pleasantries and The New York Times announced “a result so practical, yet so inconceivable … so full of hopeful prognostics for the future of mankind … one of the grand way-marks in the onward and upward march of the human intellect.” What was the essence of the achievement? “The transmission of thought, the vital impulse of matter.” The excitement was global but the effects were local. Fire brigades and police stations linked their communications. Proud shopkeepers advertised their ability to take telegraph orders.
Information that just two years earlier had taken days to arrive at its destination could now be there—anywhere—in seconds. This was not a doubling or tripling of transmission speed; it was a leap of many orders of magnitude. It was like the bursting of a dam whose presence had not even been known.
And:
Szilárd — who did not yet use the word information — found that, if he accounted exactly for each measurement and memory, then the conversion could be computed exactly. So he computed it. He calculated that each unit of information brings a corresponding increase in entropy—specifically, by k log 2 units. Every time the demon makes a choice between one particle and another, it costs one bit of information. The payback comes at the end of the cycle, when it has to clear its memory (Szilárd did not specify this last detail in words, but in mathematics). Accounting for this properly is the only way to eliminate the paradox of perpetual motion, to bring the universe back into harmony, to “restore concordance with the Second Law."
Szilárd had thus closed a loop leading to Shannon’s conception of entropy as information. For his part, Shannon did not read German and did not follow Zeitschrift für Physik. “I think actually Szilárd was thinking of this,” he said much later, “and he talked to von Neumann about it, and von Neumann may have talked to Wiener about it. But none of these people actually talked to me about it.” Shannon reinvented the mathematics of entropy nonetheless.
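Reading Szilárd's "log" as the natural logarithm and plugging in Boltzmann's constant (my arithmetic, not Gleick's), the entropy cost per bit is tiny but strictly positive:

$$\Delta S = k \ln 2 \approx (1.381 \times 10^{-23}\,\mathrm{J/K}) \times 0.693 \approx 9.6 \times 10^{-24}\,\mathrm{J/K}\ \text{per bit}$$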
And:
Solomonoff, Kolmogorov, and Chaitin tackled three different problems and came up with the same answer. Solomonoff was interested in inductive inference: given a sequence of observations, how can one make the best predictions about what will come next? Kolmogorov was looking for a mathematical definition of randomness: what does it mean to say that one sequence is more random than another, when they have the same probability of emerging from a series of coin flips? And Chaitin was trying to find a deep path into Gödel incompleteness by way of Turing and Shannon—as he said later, “putting Shannon’s information theory and Turing’s computability theory into a cocktail shaker and shaking vigorously.” They all arrived at minimal program size. And they all ended up talking about complexity.
And, an amusing quote:
The key to [quantum] teleportation and to so much of the quantum information science that followed is the phenomenon known as entanglement. Entanglement takes the superposition principle and extends it across space, to a pair of qubits far apart from each other. They have a definite state as a pair even while neither has a measurable state on its own. Before entanglement could be discovered, it had to be invented, in this case by Einstein. Then it had to be named, not by Einstein but by Schrödinger. Einstein invented it for a thought experiment designed to illuminate what he considered flaws in quantum mechanics as it stood in 1935. He publicized it in a famous paper with Boris Podolsky and Nathan Rosen titled “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” It was famous in part for provoking Wolfgang Pauli to write to Werner Heisenberg, “Einstein has once again expressed himself publicly on quantum mechanics... As is well known, this is a catastrophe every time it happens.”
↑ comment by lukeprog · 2013-11-29T15:42:35.553Z · LW(p) · GW(p)
From Acemoglu & Robinson's Why Nations Fail:
The [American] Civil War was bloody and destructive. But both before and after it there were ample economic opportunities for a large fraction of the population, especially in the northern and western United States. The situation in Mexico was very different. If the United States experienced five years of political instability between 1860 and 1865, Mexico experienced almost nonstop instability for the first fifty years of independence. This is best illustrated via the career of Antonio López de Santa Ana.
Santa Ana, son of a colonial official in Veracruz, came to prominence as a soldier fighting for the Spanish in the independence wars. In 1821 he switched sides with Iturbide and never looked back. He became president of Mexico for the first time in May of 1833, though he exercised power for less than a month, preferring to let Valentín Gómez Farías act as president. Gómez Farías’s presidency lasted fifteen days, after which Santa Ana retook power. This was as brief as his first spell, however, and he was again replaced by Gómez Farías, in early July. Santa Ana and Gómez Farías continued this dance until the middle of 1835, when Santa Ana was replaced by Miguel Barragán. But Santa Ana was not a quitter. He was back as president in 1839, 1841, 1844, 1847, and, finally, between 1853 and 1855. In all, he was president eleven times, during which he presided over the loss of the Alamo and Texas and the disastrous Mexican-American War, which led to the loss of what became New Mexico and Arizona. Between 1824 and 1867 there were fifty-two presidents in Mexico, few of whom assumed power according to any constitutionally sanctioned procedure.
↑ comment by lukeprog · 2013-11-29T16:36:23.576Z · LW(p) · GW(p)
More (#2) from Why Nations Fail:
When Sierra Leone became independent in 1961, the British handed power to Sir Milton Margai and his Sierra Leone People’s Party (SLPP), which attracted support primarily in the south, particularly Mendeland, and the east. Sir Milton was followed as prime minister by his brother, Sir Albert Margai, in 1964. In 1967 the SLPP narrowly lost a hotly contested election to the opposition, the All People’s Congress Party (APC), led by Siaka Stevens. Stevens was a Limba, from the north, and the APC got most of their support from northern ethnic groups, the Limba, the Temne, and the Loko.
Though the railway to the south was initially designed by the British to rule Sierra Leone, by 1967 its role was economic, transporting most of the country’s exports: coffee, cocoa, and diamonds. The farmers who grew coffee and cocoa were Mende, and the railway was Mendeland’s window to the world. Mendeland had voted hugely for Albert Margai in the 1967 election. Stevens was much more interested in holding on to power than promoting Mendeland’s exports. His reasoning was simple: whatever was good for the Mende was good for the SLPP, and bad for Stevens. So he pulled up the railway line to Mendeland. He then went ahead and sold off the track and rolling stock to make the change as irreversible as possible.
And:
It is not only that many of the postindependence leaders of Africa moved into the same residences, made use of the same patronage networks, and employed the same ways of manipulating markets and extracting resources as had the colonial regimes and the emperors they replaced; but they also made things worse. It was indeed a farce that the staunchly anticolonial Stevens would be concerned with controlling the same people, the Mende, whom the British had sought to control; that he would rely on the same chiefs whom the British had empowered and then used to control the hinterland; that he would run the economy in the same way, expropriating the farmers with the same marketing boards and controlling the diamonds under a similar monopoly. It was indeed a farce, a very sad farce indeed, that Laurent Kabila, who mobilized an army against Mobutu’s dictatorship with the promise of freeing the people and ending the stifling and impoverishing corruption and repression of Mobutu’s Zaire, would then set up a regime just as corrupt and perhaps even more disastrous. It was certainly farcical that he tried to start a Mobutuesque personality cult aided and abetted by Dominique Sakombi Inongo, previously Mobutu’s minister of information, and that Mobutu’s regime was itself fashioned on patterns of exploitation of the masses that had started more than a century previously with King Leopold’s Congo Free State. It was indeed a farce that the Marxist officer Mengistu would start living in a palace, viewing himself as an emperor, and enriching himself and his entourage just like Haile Selassie and other emperors before him had done.
It was all a farce, but also more tragic than the original tragedy, and not only for the hopes that were dashed. Stevens and Kabila, like many other rulers in Africa, would start murdering their opponents and then innocent citizens. Mengistu and the Derg’s policies would bring recurring famine to Ethiopia’s fertile lands. History was repeating itself, but in a very distorted form. It was a famine in Wollo province in 1973 to which Haile Selassie was apparently indifferent that did so much finally to solidify opposition to his regime. Selassie had at least been only indifferent. Mengistu instead saw famine as a political tool to undermine the strength of his opponents. History was not only farcical and tragic, but also cruel to the citizens of Ethiopia and much of sub-Saharan Africa.
The essence of the iron law of oligarchy, this particular facet of the vicious circle, is that new leaders overthrowing old ones with promises of radical change bring nothing but more of the same.
↑ comment by lukeprog · 2013-11-29T16:30:44.068Z · LW(p) · GW(p)
More (#1) from Why Nations Fail:
Inventors in the United States were once again fortunate. During the nineteenth century there was a rapid expansion of financial intermediation and banking that was a crucial facilitator of the rapid growth and industrialization that the economy experienced. While in 1818 there were 338 banks in operation in the United States, with total assets of $160 million, by 1914 there were 27,864 banks, with total assets of $27.3 billion. Potential inventors in the United States had ready access to capital to start their businesses. Moreover, the intense competition among banks and financial institutions in the United States meant that this capital was available at fairly low interest rates.
The same was not true in Mexico. In fact, in 1910, the year in which the Mexican Revolution started, there were only forty-two banks in Mexico, and two of these controlled 60 percent of total banking assets. Unlike in the United States, where competition was fierce, there was practically no competition among Mexican banks. This lack of competition meant that the banks were able to charge their customers very high interest rates, and typically confined lending to the privileged and the already wealthy, who would then use their access to credit to increase their grip over the various sectors of the economy.
And:
The first president of the United States, George Washington, was also a successful general in the War of Independence. Ulysses S. Grant, one of the victorious Union generals of the Civil War, became president in 1869, and Dwight D. Eisenhower, the supreme commander of the Allied Forces in Europe during the Second World War, was president of the United States between 1953 and 1961. Unlike Iturbide, Santa Ana, and Díaz, however, none of these military men used force to get into power. Nor did they use force to avoid having to relinquish power. They abided by the Constitution. Though Mexico had constitutions in the nineteenth century, they put few constraints on what Iturbide, Santa Ana, and Díaz could do. These men could be removed from power only the same way they had attained it: by the use of force.
Díaz violated people’s property rights, facilitating the expropriation of vast amounts of land, and he granted monopolies and favors to his supporters in all lines of business, including banking. There was nothing new about this behavior. This is exactly what Spanish conquistadors had done, and what Santa Ana did in their footsteps.
And:
Globalization made the large open spaces of the Americas, its “open frontiers,” valuable. Often these frontiers were only mythically open, since they were inhabited by indigenous peoples who were brutally dispossessed. All the same, the scramble for this newly valuable resource was one of the defining processes of the Americas in the second half of the nineteenth century. The sudden opening of this valuable frontier led not to parallel processes in the United States and Latin America, but to a further divergence, shaped by the existing institutional differences, especially those concerning who had access to the land. In the United States a long series of legislative acts, ranging from the Land Ordinance of 1785 to the Homestead Act of 1862, gave broad access to frontier lands. Though indigenous peoples had been sidelined, this created an egalitarian and economically dynamic frontier. In most Latin American countries, however, the political institutions there created a very different outcome. Frontier lands were allocated to the politically powerful and those with wealth and contacts, making such people even more powerful.
And:
The contrast between how Bill Gates and Carlos Slim became the two richest men in the world—Warren Buffett is also a contender—illustrates the forces at work. The rise of Gates and Microsoft is well known, but Gates’s status as the world’s richest person and the founder of one of the most technologically innovative companies did not stop the U.S. Department of Justice from filing civil actions against the Microsoft Corporation on May 8, 1998, claiming that Microsoft had abused monopoly power. Particularly at issue was the way that Microsoft had tied its Web browser, Internet Explorer, to its Windows operating system. The government had been keeping an eye on Gates for quite some time, and as early as 1991, the Federal Trade Commission had launched an inquiry into whether Microsoft was abusing its monopoly on PC operating systems. In November 2001, Microsoft reached a deal with the Justice Department. It had its wings clipped, even if the penalties were less than many demanded.
In Mexico, Carlos Slim did not make his money by innovation. Initially he excelled in stock market deals, and in buying and revamping unprofitable firms. His major coup was the acquisition of Telmex, the Mexican telecommunications monopoly that was privatized by President Carlos Salinas in 1990. The government announced its intention to sell 51 percent of the voting stock (20.4 percent of total stock) in the company in September 1989 and received bids in November 1990. Even though Slim did not put in the highest bid, a consortium led by his Grupo Carso won the auction. Instead of paying for the shares right away, Slim managed to delay payment, using the dividends of Telmex itself to pay for the stock. What was once a public monopoly now became Slim’s monopoly, and it was hugely profitable.
The economic institutions that made Carlos Slim who he is are very different from those in the United States. If you’re a Mexican entrepreneur, entry barriers will play a crucial role at every stage of your career. These barriers include expensive licenses you have to obtain, red tape you have to cut through, politicians and incumbents who will stand in your way, and the difficulty of getting funding from a financial sector often in cahoots with the incumbents you’re trying to compete against. These barriers can be either insurmountable, keeping you out of lucrative areas, or your greatest friend, keeping your competitors at bay. The difference between the two scenarios is of course whom you know and whom you can influence—and yes, whom you can bribe. Carlos Slim, a talented, ambitious man from a relatively modest background of Lebanese immigrants, has been a master at obtaining exclusive contracts; he managed to monopolize the lucrative telecommunications market in Mexico, and then to extend his reach to the rest of Latin America.
There have been challenges to Slim’s Telmex monopoly. But they have not been successful. In 1996 Avantel, a long-distance phone provider, petitioned the Mexican Competition Commission to check whether Telmex had a dominant position in the telecommunications market. In 1997 the commission declared that Telmex had substantial monopoly power with respect to local telephony, national long-distance calls, and international long-distance calls, among other things. But attempts by the regulatory authorities in Mexico to limit these monopolies have come to nothing. One reason is that Slim and Telmex can use what is known as a recurso de amparo, literally an “appeal for protection.” An amparo is in effect a petition to argue that a particular law does not apply to you. The idea of the amparo dates back to the Mexican constitution of 1857 and was originally intended as a safeguard of individual rights and freedoms. In the hands of Telmex and other Mexican monopolies, however, it has become a formidable tool for cementing monopoly power. Rather than protecting people’s rights, the amparo provides a loophole in equality before the law.
Slim has made his money in the Mexican economy in large part thanks to his political connections. When he has ventured into the United States, he has not been successful. In 1999 his Grupo Carso bought the computer retailer CompUSA. At the time, CompUSA had given a franchise to a firm called COC Services to sell its merchandise in Mexico. Slim immediately violated this contract with the intention of setting up his own chain of stores, without any competition from COC. But COC sued CompUSA in a Dallas court. There are no amparos in Dallas, so Slim lost, and was fined $454 million. The lawyer for COC, Mark Werner, noted afterward that “the message of this verdict is that in this global economy, firms have to respect the rules of the United States if they want to come here.” When Slim was subject to the institutions of the United States, his usual tactics for making money didn’t work.
↑ comment by lukeprog · 2013-11-29T12:13:02.695Z · LW(p) · GW(p)
From Greenblatt's The Swerve: How the World Became Modern:
Apart from the charred papyrus fragments recovered in Herculaneum [buried by the Vesuvius eruption that buried Pompeii], there are no surviving contemporary manuscripts from the ancient Greek and Roman world. Everything that has reached us is a copy, most often very far removed in time, place, and culture from the original. And these copies represent only a small portion of the works even of the most celebrated writers of antiquity. Of Aeschylus’ eighty or ninety plays and the roughly one hundred twenty by Sophocles, only seven each have survived; Euripides and Aristophanes did slightly better: eighteen of ninety-two plays by the former have come down to us; eleven of forty-three by the latter.
These are the great success stories. Virtually the entire output of many other writers, famous in antiquity, has disappeared without a trace. Scientists, historians, mathematicians, philosophers, and statesmen have left behind some of their achievements—the invention of trigonometry, for example, or the calculation of position by reference to latitude and longitude, or the rational analysis of political power—but their books are gone. The indefatigable scholar Didymus of Alexandria earned the nickname Bronze-Ass (literally, “Brazen-Bowelled”) for having what it took to write more than 3,500 books; apart from a few fragments, all have vanished. At the end of the fifth century ce an ambitious literary editor known as Stobaeus compiled an anthology of prose and poetry by the ancient world’s best authors: out of 1,430 quotations, 1,115 are from works that are now lost.
↑ comment by lukeprog · 2013-11-29T13:21:09.663Z · LW(p) · GW(p)
More (#1) from The Swerve:
It is almost impossible for jokes that are centuries old to retain any life. The fact that a few of the jokes of Shakespeare or Rabelais or Cervantes continue to make us smile is something of a miracle. Almost six hundred years old, Poggio’s Facetiae is by now largely interesting only as a symptom. These relics, like the remains of long-dead insects, tell us what once buzzed about in the air of the Vatican. Some of the jokes are professional complaints, of the sort secretaries must always have had: the boss routinely claims to detect mistakes and demands rewriting, but, if you bring him the identical document, which you pretend to have corrected, he will take it into his hand, as if to peruse it, give it a glance, and then say, “Now it is all right: go, seal it up...” Some are stories, half-skeptical, half-credulous, about popular miracles and prodigies of nature. A few reflect wryly on Church politics, as when Poggio compares the pope who conveniently forgot his promise to end the schism to a quack from Bologna who announced that he was going to fly: “At the end of the day, when the crowd had watched and waited, he had to do something, so he exposed himself and showed his ass.”
Most of the stories in the Facetiae are about sex, and they convey, in their clubroom smuttiness, misogyny mingled with both an insider’s contempt for yokels and, on occasion, a distinct anticlerical streak. There is the woman who tells her husband that she has two cunts (duos cunnos), one in front that she will share with him, the other behind that she wants to give, pious soul that she is, to the Church. The arrangement works because the parish priest is only interested in the share that belongs to the Church. There is the clueless priest who in a sermon against lewdness (luxuria) describes practices that couples are using to heighten sexual pleasure; many in the congregation take note of the suggestions and go home to try them out for themselves. There are dumb priests who, baffled by the fact that in confession almost all the women say that they have been faithful in matrimony and almost all the men confess to extramarital affairs, cannot for the life of them figure out who the women are with whom the men have sinned. There are many tales about seductive friars and lusty hermits, about Florentine merchants nosing out profits, about female medical woes magically cured by lovemaking, about cunning tricksters, bawling preachers, unfaithful wives, and foolish husbands. There is the fellow humanist—identified by name as Francesco Filelfo — who dreams that he puts his finger into a magic ring that will keep his wife from ever being unfaithful to him and wakes to find that he has his finger in his wife’s vagina. There is the quack doctor who claims that he can produce children of different types — merchants, soldiers, generals — depending on how far he pushes his cock in. A foolish rustic, bargaining for a soldier, hands his wife over to the scoundrel, but then, thinking himself sly, comes out of hiding and hits the quack’s ass to push his cock further in: “Per Sancta Dei Evangelia,” the rustic shouts triumphantly, “hic erit Papa!” “This one is going to be pope!”
The Facetiae was a huge success.
↑ comment by lukeprog · 2013-11-28T12:00:55.783Z · LW(p) · GW(p)
From Aid's The Secret Sentry:
U.S. intelligence first learned about al-Hada and his telephone number from one of the captured al Qaeda planners of the August 1998 East Africa bombings, a Saudi national named Mohamed Rashed Daoud al-’Owhali, who was arrested by Kenyan authorities on August 12, 1998, five days after the bombing of the U.S. embassy in Nairobi. Interrogated by a team of FBI agents, al-’Owhali gave up the key relay number (011-967-1-200-578)—the telephone number of Ahmed al-Hada.
NSA immediately began intercepting al-Hada’s telephone calls. This fortuitous break could not have come at a better time for the U.S. intelligence community, since NSA had just lost its access to bin Laden’s satellite phone traffic. For the next three years, the telephone calls coming in and out of the al-Hada house in Sana’a were the intelligence community’s principal window into what bin Laden and al Qaeda were up to. The importance of the intercepted al-Hada telephone calls remains today a highly classified secret within the intelligence community, which continues to insist that al-Hada be referred to only as a “suspected terrorist facility in the Middle East” in declassified reports regarding the 9/11 intelligence disaster.
In January 1999, NSA intercepted a series of phone calls to the al-Hada house. (The agency later identified Pakistan as their point of origin.) NSA analysts found only one item of intelligence interest in the transcripts of these calls— references to a number of individuals believed to be al Qaeda operatives, one of whom was a man named Nawaf al-Hazmi. NSA did not issue any intelligence reports concerning the contents of these intercepts because al-Hazmi and the other individuals mentioned in the intercept were not known to NSA’s analysts at the time. Almost three years later, al-Hazmi was one of the 9/11 hijackers who helped crash the Boeing airliner into the Pentagon. That al-Hazmi succeeded in getting into the United States using his real name after being prominently mentioned in an intercepted telephone call with a known al Qaeda operative is but one of several huge mistakes made by the U.S. intelligence community that investigators learned about only after 9/11.
↑ comment by lukeprog · 2013-11-28T12:54:05.598Z · LW(p) · GW(p)
More (#6) from The Secret Sentry:
The 1980s saw NSA grow from more than fifty thousand military and civilian personnel to seventy-five thousand in 1989, twenty-five thousand of whom worked at NSA headquarters at Fort Meade. In terms of manpower alone, the agency was the largest component of the U.S. intelligence community by far, with a headquarters staff larger than the entire CIA.
As the agency’s size grew at a staggering pace, so did the importance of its intelligence reporting. The amount of reporting produced by NSA during the 1980s was astronomical. According to former senior American intelligence officials, on some days during the 1980s SIGINT accounted for over 70 percent of the material contained in the CIA’s daily intelligence report to President Reagan. Former CIA director (now Secretary of Defense) Robert Gates stated, “The truth is, until the late 1980s, U.S. signals intelligence was way out in front of the rest of the world.”
But NSA’s SIGINT efforts continued to produce less information because of a dramatic increase in worldwide telecommunications traffic volumes, which NSA had great difficulty coping with. It also had to deal with the growing availability and complexity of new telecommunications technologies, such as cheaper and more sophisticated encryption systems. By the late 1980s, the number of intercepted messages flowing into NSA headquarters at Fort Meade had increased to the point that the agency’s staff and computers were only able to process about 20 percent of the incoming materials. These developments were to come close to making NSA deaf, dumb, and blind in the decade that followed.
And:
The invasion of Kuwait on August 2, 1990, by Iraq’s Saddam Hussein caught the U.S. intelligence community by surprise once again. In a familiar but worrisome pattern, intelligence indicating the possibility of the invasion was not properly analyzed or was discounted by senior Bush administration officials, including then–secretary of defense Dick Cheney, who did not think that Hussein would be foolish enough to do it. General Lee Butler, the commander of the Strategic Air Command, was later quoted as saying, “We had the warning from the intelligence community— we refused to acknowledge it.”
It took five months for the United States to move resources by land and sea to implement Desert Storm’s ground attack by three hundred thousand coalition troops.
And:
The worst threat to NSA’s fragile code-breaking capabilities came not from abroad but from a tiny computer software company in northern California called RSA Data Security, headed by Jim Bidzos. NSA was aware by the late 1980s that new encryption technologies being developed by private companies meant, according to a declassified internal NSA publication, that NSA’s code breakers were falling behind: “The underlying rate of cryptologic development throughout the world is faster than ever before and getting faster. Cryptologic literature in the public domain concerning advanced analytic techniques is proliferating. Inexpensive high-grade cryptographic equipment is readily accessible on the open market.” The agency was still able to break the cipher systems used by a small number of key countries around the world, such as Libya and Iran, but this could change quickly as target nations began using commercially available and rapidly evolving encryption software packages. It would have a catastrophic impact on the agency’s code-breaking efforts.
And:
But what was really killing NSA was the size of the agency’s payroll. Although the number of NSA personnel plummeted during McConnell’s tenure, the cost of paying those who remained skyrocketed as the agency had to reach deep into its pockets to try to keep its best and brightest from jumping ship and joining the dot-com boom. NSA stripped ever-increasing amounts of money from infrastructure improvement programs and its research and development efforts so that it could meet its payroll. It was left with little money to develop and build the new equipment desperately needed to access international communications traffic being carried by new and increasingly important telecommunications technologies, such as the Internet, cellular telephones, and fiber-optic cables. It was a decision that would, according to a former senior NSA official, “come back and bite us in the ass.”
↑ comment by lukeprog · 2013-11-28T12:44:32.505Z · LW(p) · GW(p)
More (#5) from The Secret Sentry:
A former marine SIGINT operator stationed in Lebanon recalled, “We were trained and equipped to intercept conventional Soviet military radio communications, not the walkie-talkies used by the Shi’ites and Druze in the foothills overlooking our base... Initially we couldn’t hear shit.” The Shi’ite and Druze militiamen who were their principal targets did not use fixed radio frequencies or regular call signs, or follow standardized radio procedures, which made monitoring their communications extremely difficult. The differing Arabic dialects spoken by the militiamen were also extremely hard for the school-trained marine intercept operators to understand, as was the West Beirut street slang the militiamen used. Taken together, this meant that the marine radio intercept operators and analysts had to improvise (oftentimes under fire) to do their job. A former marine SIGINT detachment commander recalled, “It was a hell of a way to learn your job, but that’s what Marines are good at. Adapt and improvise. I just wish we didn’t have to. So many lives were lost because we weren’t prepared for the enemy that we faced.”
And:
In July 1979, Pelton was forced to resign from NSA after filing for bankruptcy three months earlier. Desperate for money, on January 15, 1980, Pelton got in touch with the Russian embassy in Washington, and in the months that followed, he sold them, for a paltry thirty-five thousand dollars, a number of Top Secret Codeword documents and anything else he could remember. For the Soviets this was pure gold, and a bargain at that.
The damage that Pelton did was massive. He compromised the joint NSA– U.S. Navy undersea-cable tapping operation in the Sea of Okhotsk called Ivy Bells, which was producing vast amounts of enormously valuable, unencrypted, and incredibly detailed intelligence about the Soviet Pacific Fleet, information that might give the United States a clear, immediate warning of a Soviet attack. In 1981, a Soviet navy salvage ship lifted the Ivy Bells pod off the seafloor and took it to Moscow to be studied by Soviet electronics experts. It now resides in a forlorn corner of the museum of the Russian security service in the Lubyanka, in downtown Moscow.
Even worse, Pelton betrayed virtually every sensitive SIGINT operation that NSA and Britain’s GCHQ were then conducting against the Soviet Union, including the seven most highly classified compartmented intelligence operations that A Group was then engaged in. The programs were so sensitive that Charles Lord, the NSA deputy director of operations at the time, called them the “Holiest of Holies.” He told the Russians about the ability of NSA’s Vortex SIGINT satellites to intercept sensitive communications deep inside the USSR that were being carried by microwave radio-relay systems. Pelton also revealed the full extent of the intelligence being collected by the joint NSA-CIA Broadside listening post in the U.S. embassy in Moscow. Within months of Pelton being debriefed in Vienna, the Soviets intensified their jamming of the frequencies being monitored by the Moscow embassy listening post, and the intelligence “take” coming out of Broadside fell to practically nothing. Pelton also told the Russians about virtually every Russian cipher machine that NSA’s cryptanalysts in A Group had managed to crack in the late 1970s. NSA analysts had wondered why at the height of the Polish crisis in 1981 they had inexplicably lost their ability to exploit key Soviet and Polish communications systems, which had suddenly gone silent without warning. Pelton also told the Russians about a joint CIA-NSA operation wherein CIA operatives placed fake tree stumps containing sophisticated electronic eavesdropping devices near Soviet military installations around Moscow. The data intercepted by these devices was either relayed electronically to the U.S. embassy or sent via burst transmission to the United States via communication satellites.
In December 1985, Pelton was arrested and charged in federal court in Baltimore, with six counts of passing classified information to the Soviet Union. After a brief trial, in June 1986 Pelton was found guilty and sentenced to three concurrent life terms in prison.
↑ comment by lukeprog · 2013-11-28T12:33:45.511Z · LW(p) · GW(p)
More (#4) from The Secret Sentry:
A 1976 study of U.S. intelligence reporting on the Soviet Union, however, found that virtually all of the material contained in the CIA’s National Intelligence Estimates about Soviet strategic and conventional military forces came from SIGINT and satellite imagery. A similar study found that less than 5 percent of the finished intelligence being generated by the U.S. intelligence community came from HUMINT. Moreover, rapid changes in intelligence-gathering and information-processing technology proved to be a godsend for NSA. In 1976, NSA retired its huge IBM Harvest computer system, which had been the mainstay of the agency’s cryptanalysts since February 1962. It was replaced by the first of computer genius Seymour Cray’s new Cray-1 supercomputers. Standing six feet six inches high, the Cray supercomputer was a remarkable piece of machinery, capable of performing 150–200 million calculations a second, giving it ten times the computing power of any other computer in the world. More important, the Cray allowed the agency’s cryptanalysts for the first time to tackle the previously invulnerable Soviet high-level cipher systems.
Shortly after Bobby Inman became the director of NSA in 1977, cryptanalysts working for the agency’s Soviet code-breaking unit, A Group, headed by Ann Caracristi, succeeded in solving a number of Soviet cipher systems that gave NSA access to high-level Soviet communications. Credit for this accomplishment goes to a small and ultra-secretive unit called the Rainfall Program Management Division, headed from 1974 to 1978 by a native New Yorker named Lawrence Castro. Holding bachelor’s and master’s degrees in electrical engineering from the Massachusetts Institute of Technology, Castro got into the SIGINT business in 1965 when he joined ASA as a young second lieutenant. In 1967, he converted to civilian status and joined NSA as an engineer in the agency’s Research and Engineering Organization, where he worked on techniques for solving high-level Russian cipher systems.
By 1976, thanks in part to some mistakes made by Russian cipher operators, NSA cryptanalysts were able to reconstruct some of the inner workings of the Soviet military’s cipher systems. In 1977, NSA suddenly was able to read at least some of the communications traffic passing between Moscow and the Russian embassy in Washington, including one message from Russian ambassador Anatoly Dobrynin to the Soviet Foreign Ministry repeating the advice given him by Henry Kissinger on how to deal with the new Carter administration in the still-ongoing SALT II negotiations.
And:
Since there have been so few success stories in American intelligence history, when one comes along, it is worthwhile to examine it to see what went right. NSA’s performance in the months prior to the Soviet invasion of Afghanistan in December 1979 was one of these rare cases. Not only did all of the new high-tech intelligence-collection sensors that NSA had purchased in the 1970s work as intended, but the raw data that they collected was processed in a timely fashion, which enabled Bobby Ray Inman to boast that his agency had accurately predicted that the Soviets would invade Afghanistan.
As opposition to the Soviet-supported Afghan regime in Kabul headed by President Nur Mohammed Taraki mounted in late 1978 and early 1979, the Soviets continued to increase their military presence in the country, until it had grown to five Russian generals and about a thousand military advisers. A rebellion in the northeastern Afghan city of Herat in mid-March 1979 in which one hundred Russian military and civilian personnel were killed was put down by Afghan troops from Kandahar, but not before an estimated three thousand to five thousand Afghans had died in the fighting.
At this point, satellite imagery and SIGINT detected unusual activity by the two Soviet combat divisions stationed along the border with Afghanistan.
The CIA initially regarded these units as engaged in military exercises, but these “exercises” fit right into a scenario for a Soviet invasion. On March 26–27, SIGINT detected a steady stream of Russian reinforcements and heavy equipment being flown to Bagram airfield, north of Kabul, and by June, the intelligence community estimated that the airlift had brought in a total of twenty-five hundred personnel, which included fifteen hundred airborne troops and additional “advisers” as well as the crews of a squadron of eight AN-12 military transport aircraft now based in-country. SIGINT revealed that the Russians were also secretly setting up a command-and-control communications network inside Afghanistan; it would be used to direct the Soviet intervention in December 1979.
In the last week of August and the first weeks of September, satellite imagery and SIGINT revealed preparations for Soviet operations obviously aimed at Afghanistan, including forward deployment of Soviet IL-76 and AN-12 military transport aircraft that were normally based in the European portion of the USSR.
So clear were all these indications that CIA director Turner sent a Top Secret Umbra memo to the NSC on September 14 warning, “The Soviet leaders may be on the threshold of a decision to commit their own forces to prevent the collapse of the Taraki regime and protect their sizeable stake in Afghanistan. Small Soviet combat units may have already arrived in the country.”
On September 16, President Taraki was deposed in a coup d’état, and his pro-Moscow deputy, Hafizullah Amin, took his place as the leader of Afghanistan.
Over the next two weeks, American reconnaissance satellites and SIGINT picked up increased signs of Soviet mobilization, including three divisions on the border and the movement of many Soviet military transport aircraft from their home bases to air bases near the barracks of two elite airborne divisions, strongly suggesting an invasion was imminent.
On September 28, the CIA concluded that “in the event of a breakdown of control in Kabul, the Soviets would be likely to deploy one or more Soviet airborne divisions to the Kabul vicinity to protect Soviet citizens as well as to ensure the continuance of some pro-Soviet regime in the capital.” Then, in October, SIGINT detected the call-up of thousands of Soviet reservists in the Central Asian republics.
Throughout November and December, NSA monitored and the CIA reported on virtually every move made by Soviet forces. The CIA advised the White House on December 19 that the Russians had perhaps as many as three airborne battalions at Bagram, and NSA predicted on December 22, three full days before the first Soviet troops crossed the Soviet-Afghan border, that the Russians would invade Afghanistan within the next seventy-two hours.
NSA’s prediction was right on the money. The Russians had an ominous Christmas present for Afghanistan, and NSA unwrapped it. Late on Christmas Eve, Russian linguists at the U.S. Air Force listening posts at Royal Air Force Chicksands, north of London, and San Vito dei Normanni Air Station, in southern Italy, detected the takeoff from air bases in the western USSR of the first of 317 Soviet military transport flights carrying elements of two Russian airborne divisions and heading for Afghanistan; on Christmas morning, the CIA issued a final intelligence report saying that the Soviets had prepared for a massive intervention and might “have started to move into that country in force today.” SIGINT indicated that a large force of Soviet paratroopers was headed for Afghanistan—and then, at six p.m. Kabul time, it ascertained that the first of the Soviet IL-76 and AN-22 military transport aircraft had touched down at Bagram Air Base and the Kabul airport carrying the first elements of the 103rd Guards Airborne Division and an independent parachute regiment. Three days later, the first of twenty-five thousand troops of Lieutenant General Yuri Vladimirovich Tukharinov’s Fortieth Army began crossing the Soviet-Afghan border.
The studies done after the Afghan invasion all characterized the performance of the U.S. intelligence community as an “intelligence success story.” NSA’s newfound access to high-level Soviet communications enabled the agency to accurately monitor and report quickly on virtually every key facet of the Soviet military’s activities. As we shall see in the next chapter, Afghanistan may have been the “high water mark” for NSA.
↑ comment by lukeprog · 2013-11-28T12:27:35.657Z · LW(p) · GW(p)
More (#3) from The Secret Sentry:
the Anglo-American code breakers discovered that they now faced two new and seemingly insurmountable obstacles that threatened to keep them deaf, dumb, and blind for years. First, there was far less high-level Soviet government and military radio traffic than prior to Black Friday because the Russians had switched much of their military communication to telegraph lines or buried cables, which was a simple and effective way of keeping this traffic away from the American and British radio intercept operators. Moreover, the high-level Russian radio traffic that could still be intercepted was proving to be nearly impossible to crack because of the new cipher machines and unbreakable cipher systems that were introduced on all key radio circuits. The Russians also implemented tough communications security practices and procedures and draconian rules and regulations governing the encryption of radio communications traffic, and radio security discipline was suddenly rigorously and ruthlessly enforced. Facing potential death sentences for failing to comply with the new regulations, Russian radio operators suddenly began making fewer mistakes in the encoding and decoding of messages, and operator chatter disappeared almost completely from the airwaves. It was also at about this time that the Russian military and key Soviet government ministries began encrypting their telephone calls using a newly developed voice-scrambling device called Vhe Che (“High Frequency”), which further degraded the ability of the Anglo-American SIGINT personnel to access even low-level Soviet communications. It would eventually be discovered that the Russians had made their massive shift because William Weisband, a forty-year-old Russian linguist with ASA, had told the KGB everything that he knew about ASA’s Russian code-breaking efforts at Arlington Hall. (For reasons of security, Weisband was not put on trial for espionage.)
Decades later, at a Central Intelligence Agency conference on Venona, Meredith Gardner, an intensely private and taciturn man, did not vent his feelings about Weisband, even though he had done grave damage to Gardner’s work on Venona. But Gardner’s boss, Frank Rowlett, was not so shy in an interview before his death, calling Weisband “the traitor that got away.”
Unfortunately, internecine warfare within the upper echelons of the U.S. intelligence community at the time got in the way of putting stronger security safeguards into effect— despite the damage that a middle-level employee like Weisband had done to America’s SIGINT effort. Four years later, a 1952 review found that “very little had been done” to implement the 1948 recommendations for strengthening security practices within the U.S. cryptologic community.
And:
The 1964 Gulf of Tonkin Crisis is an important episode in the history of both NSA and the entire U.S. intelligence community because it demonstrated all too clearly two critical points that were to rear their ugly heads again forty years later in the 2003 Iraqi weapons of mass destruction scandal. The first was that under intense political pressure, intelligence collectors and analysts will more often than not choose as a matter of political expediency not to send information to the White House that they know will piss off the president of the United States. The second was that intelligence information, if put in the wrong hands, can all too easily be misused or misinterpreted if a system of analytic checks and balances is not in place and rigidly enforced.
And:
In Saigon, Ambassador Graham Martin refused to believe the SIGINT reporting that detailed the massive North Vietnamese military buildup taking place all around the city. He steadfastly disregarded the portents, even after the South Vietnamese president, Nguyen Van Thieu, and most of his ministers resigned and fled the country. An NSA history notes that Martin “believed that the SIGINT was NVA deception” and repeatedly refused to allow NSA’s station chief, Tom Glenn, to evacuate his forty-three-man staff and their twenty-two dependents from Saigon. Glenn also wanted to evacuate as many of the South Vietnamese SIGINT staff as possible, as they had worked side by side with NSA for so many years, but this request was also refused. NSA director Lieutenant General Lew Allen Jr., who had taken over the position in August 1973, pleaded with CIA director William Colby for permission to evacuate the NSA station from Saigon, but even this plea was to no avail because Martin did not want to show any sign that the U.S. government thought Saigon would fall. So Glenn disobeyed Martin’s direct order and surreptitiously put most of his staff and all of their dependents onto jammed commercial airlines leaving Saigon. There was nothing he could do for the hundreds of South Vietnamese officers and staff members who remained at their posts in Saigon listening to the North Vietnamese close in on the capital.
By April 24, 1975, even the CIA admitted the end was near. Colby delivered the bad news to President Gerald Ford, telling him that “the fate of the Republic of Vietnam is sealed, and Saigon faces imminent military collapse.”
Even when enemy troops and tanks overran the major South Vietnamese military base at Bien Hoa, outside Saigon, on April 26, Martin still refused to accept that Saigon was doomed. On April 28, Glenn met with the ambassador carrying a message from Allen ordering Glenn to pack up his equipment and evacuate his remaining staff immediately. Martin refused to allow this. The following morning, the military airfield at Tan Son Nhut fell, cutting off the last air link to the outside.
A massive evacuation operation to remove the last Americans and their South Vietnamese allies from Saigon began on April 29. Navy helicopters from the aircraft carrier USS Hancock, cruising offshore, began shuttling back and forth, carrying seven thousand Americans and South Vietnamese to safety. U.S. Air Force U-2 and RC-135 reconnaissance aircraft were orbiting off the coast monitoring North Vietnamese radio traffic to detect any threat to the evacuation. In the confusion, Glenn discovered that no one had made any arrangements to evacuate his remaining staff, so the U.S. military attaché arranged for cars to pick up Glenn and his people at their compound outside Saigon and transport them to the embassy. That night, Glenn and his colleagues boarded a U.S. Navy helicopter for the short ride to one of the navy ships off the coast.
But the thousands of South Vietnamese SIGINT officers and intercept operators, including their chief, General Pham Van Nhon, never got out. The North Vietnamese captured the entire twenty-seven-hundred-man organization intact as well as all their equipment. An NSA history notes, “Many of the South Vietnamese SIGINTers undoubtedly perished; others wound up in reeducation camps. In later years a few began trickling into the United States under the orderly departure program. Their story is yet untold.” By any measure, it was an inglorious end to NSA’s fifteen-year involvement in the Vietnam War, one that still haunts agency veterans to this day.
↑ comment by lukeprog · 2013-11-28T12:16:12.360Z · LW(p) · GW(p)
More (#2) from The Secret Sentry:
Unfortunately, despite the best efforts of the SIGINT collectors, the vast majority of the foreign fighters managed to successfully evade the U.S. Army units deployed along the border. An army battalion commander stationed on the border in 2003 recalled that they “weren’t sneaking across; they were just driving across, because in Arab countries it’s easy to get false passports and stuff.” Once inside Iraq, most of them made their way to Ramadi, in rebellious al-Anbar Province, which became the key way station for foreign fighters on their way into the heart of Iraq. In Ramadi, they were trained, equipped, given false identification papers, and sent on their first missions. The few foreign fighters who were captured were dedicated— but not very bright. One day during the summer of 2003, Lieutenant Colonel Henry Arnold, a battalion commander stationed on the Syrian border, was shown the passport of a person seeking to enter Iraq. “I think he was from the Sudan or something like that— and under ‘Reason for Traveling,’ it said, ‘Jihad.’ That’s how dumb these guys were.”
And:
Hayden and his senior managers had hoped that they could keep the massive reengineering of NSA out of the public realm. But these hopes were dashed when, on December 6, reporter Seymour Hersh published an article in the New Yorker magazine that blew the lid off NSA’s secret, revealing that America’s largest intelligence agency was having trouble performing its mission. Hersh’s article set off a furious debate within NSA about the difficulties the agency was facing. The considered judgment of many NSA insiders was in many respects harsher and more critical than anything Hersh had written. Diane Mezzanotte, then a staff officer in NSA’s Office of Corporate Relations, wrote, “NSA is facing a serious survival problem, brought about by the widespread use of emerging communications technologies and public encryption keys, draconian budget cuts, and an increasingly negative public perception of NSA and its SIGINT operations.”
Less than sixty days later, another disaster hit the agency. During the week of January 23, 2000, the main SIGINT processing computer at NSA collapsed and for four days could not be restarted because of a critical software anomaly. The result was an intelligence blackout, with no intelligence reporting coming out of Fort Meade for more than seventy-two hours. A declassified NSA report notes, “As one result, the President’s Daily Briefing—60% of which is normally based on SIGINT— was reduced to a small portion of its typical size.”
And:
During President Truman’s October 1948 nationwide whistle-stop train tour in his uphill battle for reelection against Governor Thomas Dewey, the U.S. government was at a virtual standstill. On the afternoon of Friday, October 29, just as Truman was preparing to deliver a fiery campaign speech at the Brooklyn Academy of Music in New York City, the Russian government and military executed a massive change of virtually all of their cipher systems. On that day, referred to within NSA as Black Friday, and continuing for several months thereafter, all of the cipher systems used on Soviet military and internal-security radio networks, including all mainline Soviet military, naval, and police radio nets, were changed to new, unbreakable systems. The Russians also changed all their radio call signs and operating frequencies and replaced all of the cipher machines that the Americans and British had solved, and even some they hadn’t, with newer and more sophisticated cipher machines that were to defy the ability of American and British cryptanalysts to solve them for almost thirty years, until the tenure of Admiral Bobby Ray Inman in the late 1970s.
Black Friday was an unmitigated disaster, inflicting massive and irreparable damage on the Anglo-American SIGINT organizations’ efforts against the USSR, killing off virtually all of the productive intelligence sources that were then available to them regarding what was going on inside the Soviet Union and rendering useless most of four years’ hard work by thousands of American and British cryptanalysts, linguists, and traffic analysts. The loss of so many critically important high-level intelligence sources in such a short space of time was, as NSA historians have aptly described it, “perhaps the most significant intelligence loss in U.S. history.” And more important, it marked the beginning of an eight-year period when reliable intelligence about what was occurring inside the USSR was practically nonexistent.
↑ comment by lukeprog · 2013-11-28T12:10:46.173Z · LW(p) · GW(p)
More (#1) from The Secret Sentry:
Beginning in May and continuing through early July 2001, NSA intercepted thirty-three separate messages indicating that bin Laden intended to mount one or more terrorist attacks against U.S. targets in the near future. But the intercepts provided no specifics about the impending operation other than that “Zero Hour was near.”
In June, intercepts led to the arrest of two bin Laden operatives who were planning to attack U.S. military installations in Saudi Arabia as well as another one planning an attack on the U.S. embassy in Paris. On June 22, U.S. military forces in the Persian Gulf and the Middle East were once again placed on alert after NSA intercepted a conversation between two al Qaeda operatives in the region, which indicated that “a major attack was imminent.” All U.S. Navy ships docked in Bahrain, homeport of the U.S. Fifth Fleet, were ordered to put to sea immediately.
These NSA intercepts scared the daylights out of both the White House’s “terrorism czar,” Richard Clarke, and CIA director George Tenet. Tenet told Clarke, “It’s my sixth sense, but I feel it coming. This is going to be the big one.” On Thursday, June 28, Clarke warned National Security Advisor Condoleezza Rice that al Qaeda activity had “reached a crescendo,” strongly suggesting that an attack was imminent. That same day, the CIA issued what was called an Alert Memorandum, which stated that the latest intelligence indicated the probability of imminent al Qaeda attacks that would “have dramatic consequences on governments or cause major casualties.”
But many senior officials in the Bush administration did not share Clarke and Tenet’s concerns, notably Secretary of Defense Donald Rumsfeld, who distrusted the material coming out of the U.S. intelligence community. Rumsfeld thought this traffic might well be a “hoax” and asked Tenet and NSA to check the veracity of the al Qaeda intercepts. At NSA director Hayden’s request, Bill Gaches, the head of NSA’s counterterrorism office, reviewed all the intercepts and reported that they were genuine al Qaeda communications.
But unbeknownst to Gaches’s analysts at NSA, most of the 9/11 hijackers were already in the United States busy completing their final preparations. Calls from operatives in the United States were routed through the Ahmed al-Hada “switchboard” in Yemen, but apparently none of these calls were intercepted by NSA. Only after 9/11 did the FBI obtain the telephone billing records of the hijackers during their stay in the United States. These records indicated that the hijackers had made a number of phone calls to numbers known by NSA to have been associated with al Qaeda activities, including that of al-Hada.
Unfortunately, NSA had taken the legal position that intercepting calls from abroad to individuals inside the United States was the responsibility of the FBI. NSA had been badly burned in the past when Congress had blasted it for illegal domestic intercepts, which had led to the 1978 Foreign Intelligence Surveillance Act (FISA). NSA could have gone to the Foreign Intelligence Surveillance Court (FISC) for warrants to monitor communications between terrorist suspects in the United States and abroad but feared this would violate U.S. laws.
The ongoing argument about this responsibility between NSA and the FBI created a yawning intelligence gap, which al Qaeda easily slipped through, since there was no effective coordination between the two agencies. One senior NSA official admitted after the 9/11 attacks, “Our cooperation with our foreign allies is a helluva lot better than with the FBI.”
While NSA and the FBI continued to squabble, the tempo of al Qaeda intercepts mounted during the first week of July 2001. A series of SIGINT intercepts produced by NSA in early July allowed American and allied intelligence services to disrupt a series of planned al Qaeda terrorist attacks in Paris, Rome, and Istanbul. On July 10, Tenet and the head of the CIA’s Counterterrorism Center, J. Cofer Black, met with National Security Advisor Rice to underline how seriously they took the chatter being picked up by NSA. Both Tenet and Black came away from the meeting believing that Rice did not take their warnings seriously.
Clarke and Tenet also encountered continuing skepticism at the Pentagon from Rumsfeld and his deputy, Paul Wolfowitz. Both contended that the spike in traffic was a hoax and a diversion. Steve Cambone, the undersecretary of defense for intelligence, asked Tenet if he had “considered the possibility that al-Qa’ida’s threats were just a grand deception, a clever ploy to tie up our resources and expend our energies on a phantom enemy that lacked both the power and the will to carry the battle to us.”
In August 2001, either NSA or Britain’s GCHQ intercepted a telephone call from one of bin Laden’s chief lieutenants, Abu Zubaida, to an al Qaeda operative believed to have been in Pakistan. The intercept centered on an operation that was to take place in September. At about the same time, bin Laden telephoned an associate inside Afghanistan and discussed the upcoming operation. Bin Laden reportedly praised the other party to the conversation for his role in planning the operation. For some reason, these intercepts were reportedly never forwarded to intelligence consumers, although this contention is strongly denied by NSA officials. Just prior to the September 11, 2001, bombings, several European intelligence services reportedly intercepted a telephone call that bin Laden made to his wife, who was living in Syria, asking her to return to Afghanistan immediately.
In the seventy-two hours before 9/11, four more NSA intercepts suggested that a terrorist attack was imminent. But NSA did not translate or disseminate any of them until the day after 9/11. In one of the two most significant, one of the speakers said, “The big match is about to begin.” In the other, another unknown speaker was overheard saying that tomorrow is “zero hour.”
↑ comment by lukeprog · 2013-11-25T15:12:29.143Z · LW(p) · GW(p)
From Mazzetti's The Way of the Knife:
Replies from: lukeprog, lukeprog, lukeprog, lukeprog, lukeprog
But the Americans still had to find al-Harethi [mastermind behind the 2000 bombing of the U.S.S. Cole], who eluded surveillance by switching between five different cell-phone numbers. The Gray Fox team had identified several of them, but al-Harethi was always careful enough to use the phones sparingly. On November 4, however, the surveillance net got its first big catch.
The cell phone in the back of the Land Cruiser was beaming its signal into the skies, and Gray Fox operatives sent a flash message to analysts at the National Security Agency’s sprawling headquarters, at Fort Meade, Maryland. Separately, the CIA had dispatched an armed Predator from its drone base in Djibouti, just across the Red Sea from Yemen. As the Predator moved into position above the Land Cruiser, an analyst at Fort Meade heard al-Harethi’s voice over the cell phone, barking directions to the driver of the four-by-four. With confirmation that al-Harethi was in the truck, the CIA was now authorized to fire a missile at the vehicle. The missile came off the Predator drone and destroyed the truck, killing everyone inside. Qaed Salim Sinan al-Harethi was eventually identified in the rubble by a distinguishing mark on one of his legs, which was found at the scene, severed from his body.
President Saleh’s government was quick to issue a cover story: The truck had been carrying a canister of gas that triggered an explosion. But inside the Counterterrorist Center, the importance of the moment was not lost. It was the first time since the September 11 attacks that the CIA had carried out a targeted killing outside a declared war zone. Using the sweeping authority President Bush had given to the CIA in September 2001, clandestine officers had methodically gathered information about al-Harethi’s movements and then coolly incinerated his vehicle with an antitank missile.
↑ comment by lukeprog · 2013-11-26T16:43:37.807Z · LW(p) · GW(p)
More (#5) from The Way of the Knife:
[Anwar al-Awlaki's son] went to Shabwa province, the region of Yemen where Anwar al-Awlaki was thought to be hiding and where American jets and drones had narrowly missed him the previous May. What Abdulrahman did not know was that his father had already fled Shabwa for al Jawf. He wandered about, having little idea about what to do next. Then, he heard the news about the missile strike that had killed his father, and he called his family back in Sana’a. He told them he was coming home.
He didn’t return to Sana’a immediately. On October 14, two weeks after CIA drones killed his father, Abdulrahman al-Awlaki was sitting with friends at an open-air restaurant near Azzan, a town in Shabwa province. From a distance, faint at first, came the familiar buzzing sound. Then, missiles tore through the air and hit the restaurant. Within seconds, nearly a dozen dead bodies were strewn in the dirt. One of them was Abdulrahman al-Awlaki. Hours after the news of his death was reported, the teenager’s Facebook page was turned into a memorial.
American officials have never discussed the operation publicly, but they acknowledge in private that Abdulrahman al-Awlaki was killed by mistake. The teenager had not been on any target list. The intended target of the drone strike was Ibrahim al-Banna, an Egyptian leader of AQAP. American officials had gotten information that al-Banna was eating at the restaurant at the time of the strike, but the intelligence turned out to be wrong. Al-Banna was nowhere near the location of the missile strike. Abdulrahman al-Awlaki was in the wrong place at the wrong time.
Although the strike remains classified, several American officials said that the drones that killed the boy were not, like those that killed his father, operated by the CIA. Instead, Abdulrahman al-Awlaki was a victim of the parallel drone program run by the Pentagon’s Joint Special Operations Command, which had continued even after the CIA joined the manhunt in Yemen. The CIA and the Pentagon had converged on the killing grounds of one of the world’s poorest and most desolate countries, running two distinct drone wars. The CIA maintained one target list, and JSOC kept another. Both were in Yemen carrying out nearly the exact same mission. Ten years after Donald Rumsfeld first tried to wrest control of the new war from American spies, the Pentagon and CIA were conducting the same secret missions at the ends of the earth.
And:
The drone strikes remained a secret, at least officially. The Obama administration has gone to court to fend off challenges over the release of documents related to CIA and JSOC drones and the secret legal opinions buttressing the operations. In late September 2012, a panel of three judges sat in front of a wall of green marble in a federal courtroom in Washington and listened to oral arguments in a case brought by the American Civil Liberties Union demanding that the CIA hand over documents about the targeted-killing program. A lawyer representing the CIA refused to acknowledge that the CIA had anything to do with drones, even under cross-examination from skeptical judges who questioned him about public statements by former CIA director Leon Panetta. In one case, Panetta had joked to a group of American troops stationed in Naples, Italy, that, although as secretary of defense he had “a helluva lot more weapons available... than... at CIA,” the “Predators [weren’t] that bad.”
At one point in the court proceeding, an exasperated Judge Merrick Garland pointed out the absurdity of the CIA’s position, in light of the fact that both President Obama and White House counterterrorism adviser John Brennan had spoken publicly about drones. “If the CIA is the emperor,” he told the CIA’s lawyer, “you’re asking us to say that the emperor has clothes even when the emperor’s bosses say he doesn’t.”
And:
For all their policy differences during the 2012 presidential campaign, Obama and Governor Mitt Romney found nothing to disagree about when it came to targeted killings, and Romney said that if elected president he would continue the campaign of drone strikes that Obama had escalated. Fearing such a prospect, Obama officials raced during the final weeks before the election to implement clear rules in the event they were no longer holding the levers in the drone wars. The effort to codify the procedures of targeted killings revealed just how much the secret operations remained something of an ad hoc effort. Fundamental questions about who can be killed, where they can be killed, and when they can be killed still had not been answered. The pressure to answer those questions eased on November 6, 2012, when a decisive election ensured that President Obama would remain in office for another four years. The effort to bring clarity to the secret wars flagged.
↑ comment by lukeprog · 2013-11-26T16:39:14.733Z · LW(p) · GW(p)
More (#4) from The Way of the Knife:
Five months after Petraeus’s meeting with Saleh, American missiles blew up the car of Jaber al-Shabwani, the deputy governor of Ma’rib province and the man President Saleh had tapped to be a liaison between the Yemeni government and the al Qaeda faction. When al-Shabwani and his bodyguards were killed, they were on the way to meet with AQAP operatives to discuss a truce. But al-Shabwani’s political rivals had told American special-operations troops in the country a different story: that the Yemeni politician was in league with al Qaeda. The Americans had just been used to carry out a high-tech hit to settle a tribal grudge.
And:
Just months after President Obama took office, the new administration announced a decision to ship forty tons of weapons and ammunition to Somalia’s embattled Transitional Federal Government, the United Nations–backed government that was considered by Somalis to be as corrupt as it was weak. By 2009 the TFG already controlled little territory beyond several square miles inside Mogadishu, and President Obama’s team was in a panic over the possibility that an al Shabaab offensive in the capital might push the government out of central Mogadishu. With an embargo in place prohibiting foreign weapons from flooding into Somalia, the administration had to get the UN’s approval for the arms shipments. The first weapons delivery arrived in June 2009, but Somali government troops didn’t keep them for long. Instead, they sold the weapons that Washington had purchased for them in Mogadishu weapons bazaars. The arms market collapsed, and a new supply of cheap weapons was made available to al Shabaab fighters. By the end of the summer, American-made M16s could be found at the bazaars for just ninety-five dollars, and a more coveted AK-47 could be purchased for just five dollars more.
And:
Like a desert sandstorm, the popular revolts spreading across the states of North Africa were in the process of burying decades of authoritarian rule. But they had also caught the CIA flat-footed, and White House officials were aware that for all of the billions of dollars that the United States spends each year to collect intelligence and forecast the world’s cataclysmic events, American spy agencies were several steps behind the popular uprisings. “The CIA missed Tunisia. They missed Egypt. They missed Libya. They missed them individually, and they missed them collectively,” said one senior member of the Obama administration. In the frantic weeks after the Arab revolts began, hundreds of intelligence analysts at the CIA and other American spy agencies were reassigned to divine meaning from the turmoil. It was a game of catch-up.
And:
Vaccination campaigns were considered a good front for spying: DNA information could be collected from the needles used on children and analyzed for leads on the whereabouts of al Qaeda operatives for whom the CIA already had DNA information. In that time, Afridi conducted half a dozen vaccination campaigns around Khyber Agency, and the CIA paid him eight million rupees.
And:
American officials admit it is somewhat difficult to judge a person’s age from thousands of feet in the air, and in Pakistan’s tribal areas a “military-aged male” could be as young as fifteen or sixteen. Using such broad definitions to determine who was a “combatant” and therefore a legitimate target allowed Obama administration officials to claim that the drone strikes in Pakistan had not killed any civilians. It was something of a trick of logic: In an area of known militant activity, all military-aged males were considered to be enemy fighters. Therefore, anyone who was killed in a drone strike there was categorized as a combatant, unless there was explicit intelligence that posthumously proved him to be innocent.
And:
... [an] intelligence tip warned that two suspicious fertilizer trucks were navigating the NATO supply routes from Pakistan into Afghanistan. The tip was vague and warned only that the trucks might be used as bombs and driven into Afghanistan for an attack against an American base. U.S. military officials in Afghanistan called General Kayani in Pakistan to alert him, and Kayani promised that the trucks would be stopped before they reached the Afghan border.
But the Pakistanis did not act. The trucks sat in North Waziristan for two months, as operatives from the Haqqani Network turned them into suicide bombs powerful enough to kill hundreds of people. American intelligence about the location of the trucks remained murky, but Admiral Mullen was certain that, given the ISI’s history of contacts with the Haqqanis, Pakistani spies would be able to put a stop to any attack. By September 9, 2011, the trucks were moving toward Afghanistan, and the top American commander in the region, General John Allen, urged General Kayani to stop the trucks during a trip to Islamabad. Kayani told Allen he would “make a phone call” to prevent any imminent assault, an offer that raised eyebrows because it seemed to indicate a particularly close relationship between the Haqqanis and Pakistan’s security apparatus.
Then, on the eve of the tenth anniversary of the attacks on the World Trade Center and the Pentagon, one of the trucks pulled up next to the outer wall of a U.S. military base in Wardak Province, in eastern Afghanistan. The driver detonated the explosives inside the vehicle and the blast ripped open the wall to the base. The explosion wounded more than seventy American Marines inside the base, and spiraling shrapnel killed an eight-year-old Afghan girl standing half a mile away.
The attack infuriated Mullen and convinced him that General Kayani had no sincere interest in curbing his military’s ties to militant groups like the Haqqanis. Other top American officials had been convinced of this years earlier, but Mullen had believed that Kayani was a different breed of Pakistani general, a man who saw the ISI’s ties to the Taliban, the Haqqani Network, and Lashkar-e-Taiba as nothing more than a suicide pact. But the Wardak bombing was, for Mullen, proof that Pakistan was playing a crooked and deadly game.
Days after the bombing—and immediately after the Haqqani Network launched another brazen attack, this time on the American-embassy compound in Kabul—Admiral Mullen went to Capitol Hill to give his final congressional testimony as chairman of the Joint Chiefs of Staff. He came to deliver a blunt message, one that State Department officials had been unsuccessful in trying to soften in the hours before he appeared before the Senate Armed Services Committee.
Pakistani spies were directing the insurgency inside of Afghanistan, Mullen told the congressional panel, and had blood on their hands from the deaths of American troops and Afghan civilians. “The Haqqani Network,” Mullen said, “acts as a veritable arm of Pakistan’s Inter-Services Intelligence agency.”
Even after a tumultuous decade of American relations with Pakistan, no top American official up to that point had made such a direct accusation in public. The statement carried even more power because it came from Admiral Michael Mullen, whom Pakistani officials considered to be one of their few remaining allies in Washington. The generals in Pakistan were stung by Mullen’s comments, no one more than his old friend General Ashfaq Parvez Kayani.
The relationship was dead; the two men didn’t speak again after Mullen’s testimony. Each man felt he had been betrayed by the other.
And:
In the midst of the surge of drone attacks, President Obama ordered a reshuffling of his national-security team. The result was something of a grace note at the end of a decade during which the work of soldiers and spies had become largely indistinguishable. Leon Panetta, who as CIA director had made the spy agency more like the military, was taking over the Pentagon. General Petraeus, the four-star general who had signed secret orders in 2009 to expand military spying operations throughout the Middle East, would run the CIA.
In his fourteen months at Langley, before ignominiously resigning over an extramarital affair with his biographer, Petraeus accelerated the trends that Hayden had warned him about. He pushed the White House for money to expand the CIA’s drone fleet, and he told members of Congress that, under his watch, the CIA was carrying out more covert-action operations than at any point in its history. Within weeks of arriving at Langley, Petraeus even ordered an operation that, up to that point, no CIA director had ever done before: the targeted killing of an American citizen [Anwar al-Awlaki].
↑ comment by lukeprog · 2013-11-25T16:25:36.318Z · LW(p) · GW(p)
More (#3) from The Way of the Knife:
...Pakistani military officers in mid-2006 quietly began discussing a peace deal in North Waziristan, similar to the one already in place in South Waziristan. Keller and his CIA colleagues warned their ISI counterparts that the deal could have disastrous consequences. Their views, though, had little impact. Pakistan’s government brokered a cease-fire agreement in North Waziristan in September 2006. And it came about because of the secret negotiations of a familiar figure to many in Washington, Lt. General Ali Jan Aurakzai, the man President Musharraf had appointed as military commander in the tribal areas after the September 11 attacks and who had long believed that the hunt for al Qaeda in Pakistan and Afghanistan was a fool’s errand.
Aurakzai had since retired from the military, and Musharraf had appointed him as the governor of the North-West Frontier Province, which gave him oversight over the tribal areas. Aurakzai believed that appeasing militant groups in the tribal areas was the only way to halt the spread of militancy into the settled areas of Pakistan. And he used his influence with Musharraf to convince the president on the merits of a peace deal in North Waziristan.
But Washington still needed to be convinced. President Musharraf decided to bring Aurakzai on a trip to sell the Bush White House on the cease-fire. Both men sat in the Oval Office and made a case to President Bush about the benefits of a peace deal, and Aurakzai told Bush that the North Waziristan peace agreement should even be replicated in parts of Afghanistan and would allow American troops to withdraw from the country sooner than expected.
Bush administration officials were divided. Some considered Aurakzai a spineless appeaser—the Neville Chamberlain of the tribal areas. But few saw any hope of trying to stop the North Waziristan peace deal. And Bush, whose style of diplomacy was intensely personal, worried even in 2006 about putting too many demands on President Musharraf. Bush still admired Musharraf for his decision in the early days after the September 11 attacks to assist the United States in the hunt for al Qaeda. Even after White House officials set up regular phone calls between Bush and Musharraf designed to apply pressure on the Pakistani leader to keep up military operations in the tribal areas, they usually were disappointed by the outcome: Bush rarely made specific demands on Musharraf during the calls. He would thank Musharraf for his contributions to the war on terrorism and pledge that American financial support to Pakistan would continue.
The prevailing view among the president’s top advisers in late 2006 was that too much American pressure on Musharraf could bring about a nightmarish scenario: a popular uprising against the Pakistan government that could usher in a radical Islamist government. The frustration of doing business with Musharraf was matched only by the fear of life without him. It was a fear that Musharraf himself stoked, warning American officials frequently about his tenuous grip on power and citing his narrow escape from several assassination attempts. The assassination attempts were quite real, but Musharraf’s strategy was also quite effective in maintaining a steady flow of American aid and keeping at bay demands from Washington for democratic reforms.
The North Waziristan peace deal turned out to be a disaster both for Bush and Musharraf. Miranshah was, in effect, taken over by the Haqqani Network as the group consolidated its criminal empire along the eastern edge of the Afghanistan border. As part of the agreement, the Haqqanis and other militant groups pledged to cease attacks in Afghanistan, but in the months after the deal was signed cross-border incursions from the tribal areas into Afghanistan aimed at Western troops rose by 300 percent. During a press conference in the fall of 2006, President Bush declared that al Qaeda was “on the run.” In fact, the opposite was the case. The group had a safe home, and there was no reason to run anywhere.
↑ comment by lukeprog · 2013-11-25T15:23:58.614Z · LW(p) · GW(p)
More (#2) from The Way of the Knife:
[In Iraq] Lt. General Stanley McChrystal’s task force had been handed the mission of attacking the al Qaeda franchise in the country led by Jordanian terrorist Abu Musab al-Zarqawi. Wave upon wave of deadly violence was washing over the country, and al-Zarqawi’s al Qaeda in Mesopotamia had claimed responsibility for devastating attacks on American troop convoys and Shi‘ite holy sites. Within months of the beginning of the insurgency, it became clear to commanders on the ground that the war would be sucking American troops into the country for years, and Rumsfeld and his senior intelligence adviser, Stephen Cambone, gave JSOC a long leash to try to neutralize what had become the Iraqi insurgency’s most lethal arm.
The mantra of the task force, based inside an old Iraqi air-force hangar at Balad Air Base, north of Baghdad, was “fight for intelligence.” In the beginning, the white dry-erase boards that McChrystal and his team had set up to diagram the terror group were blank. McChrystal realized that much of the problem came from the poor communication between the various American military commands in Iraq, with few procedures in place to share intelligence with one another. “We began a review of the enemy, and of ourselves,” he would later write. “Neither was easy to understand.” Just how little everyone knew was apparent in 2004, amid reports that Iraqi troops had captured al-Zarqawi near Fallujah. Since nobody knew exactly what the Jordanian terrorist looked like, he was released by accident.
And:
The clandestine missions in Somalia in early 2007 had mixed results. American troops and intelligence aided the Ethiopian offensive through southern Somalia and led to a swift retreat by Islamic Courts Union troops. But the JSOC missions had failed to capture or kill any of the most senior Islamist commanders or members of the al Qaeda cell responsible for the 1998 embassy bombings. And, beyond the narrow manhunt, the larger Ethiopian occupation of Somalia could fairly be called a disaster.
The Bush administration had secretly backed the operation, believing that Ethiopian troops could drive the Islamic Courts Union out of Mogadishu and provide military protection for the UN-backed transitional government. The invasion had achieved that first objective, but the impoverished Ethiopian government had little interest in spending money to keep its troops in Somalia to protect the corrupt transitional government. Within weeks of the end of fighting, senior Ethiopian officials declared that they had met their military objectives and began talking publicly about a withdrawal.
The Ethiopian army had waged a bloody and indiscriminate campaign against its most hated enemy. Using lead-footed urban tactics, Ethiopian troops lobbed artillery shells into crowded marketplaces and dense neighborhoods, killing thousands of civilians. Discipline in the Ethiopian ranks broke down, and soldiers went on rampages of looting and gang rape. One young man interviewed by the nonprofit group Human Rights Watch spoke of witnessing Ethiopians kill his father and then rape his mother and sisters.
↑ comment by lukeprog · 2013-11-25T15:19:53.036Z · LW(p) · GW(p)
More (#1) from The Way of the Knife:
Weeks later, when the September 11 attacks killed nearly three thousand Americans, thorny questions about assassination, covert action, and the proper use of the CIA in hunting America’s enemies were quickly swept aside. Within weeks, the CIA began conducting dozens of drone strikes in Afghanistan.
And:
Lucky for both the American and Pakistani spies, Nek Muhammad wasn’t exactly in deep hiding. He gave regular interviews to the Pashto channels of Western news outlets, bragging about humbling the mighty Pakistani military. These interviews, by satellite phone, made him an easy mark for American eavesdroppers, and by mid-June 2004 the Americans were regularly tracking his movements. On June 18, one day after Nek Muhammad spoke to the BBC and wondered aloud about the strange bird that was following him, a Predator fixed on his position and fired a Hellfire missile at the compound where he had been resting. The blast severed Nek Muhammad’s left leg and left hand, and he died almost instantly. Pakistani journalist Zahid Hussain visited the village days later and saw the mud grave at Shakai that was already becoming a pilgrimage site. A sign on the grave read, HE LIVED AND DIED LIKE A TRUE PASHTUN.
After a discussion between CIA and ISI officials about how to handle news of the strike, they decided that Pakistan would take credit for killing the man who had humiliated its military. One day after Nek Muhammad was killed, a charade began that would go on for years. Major General Shaukat Sultan, Pakistan’s top military spokesman, told Voice of America that “al Qaeda facilitator” Nek Muhammad and four other militants had been killed during a rocket attack by Pakistani troops.
And:
General Kayani was consumed with the past, and he understood that Afghanistan’s bloody history was prologue to America’s war in that country. He had been studying Afghanistan for decades and was an expert in the dynamics that helped Afghan insurgents vanquish a superpower in the 1980s. In 1988, as a young Pakistani army major studying at Fort Leavenworth, in Kansas, Kayani wrote a master’s thesis about the Soviet war in Afghanistan titled “Strengths and Weaknesses of the Afghan Resistance Movement.” By then, the Soviet Union had endured nearly a decade of war in Afghanistan, and Soviet premier Mikhail Gorbachev had already begun to pull out his troops. Over ninety-eight pages of clear, straightforward prose, Kayani examined how the Afghan Resistance Movement (ARM) had bled the vaunted Soviet army and increased “the price of Soviet presence in Afghanistan.”
Kayani was, in essence, writing the playbook for how Pakistan could hold the strings in Afghanistan during the occupation of a foreign army. Pakistan, he wrote, could use proxy militias to wreak havoc in the country but also to control the groups effectively so that Islamabad could avoid a direct confrontation with the occupying force.
In a country without national identity, Kayani argued, it was necessary for the Afghan resistance to build support in the tribal system and to gradually weaken Afghanistan’s central government. As for Pakistan, Kayani believed that Islamabad likely didn’t want to be on a “collision course” with the Soviet Union, or at least didn’t want the Afghan resistance to set them on that path. Therefore, it was essential for Pakistan’s security to keep the strength of the Afghan resistance “managed.”
By the time he took over the ISI in 2004, Kayani knew that the Afghan war would be decided not by soldiers in mountain redoubts but by politicians in Washington who had an acute sensitivity to America’s limited tolerance for years more of bloody conflict. He knew because he had studied what had happened to the Soviets. In his thesis, he wrote that “the most striking feature of the Soviet military effort at present is the increasing evidence that it may not be designed to secure a purely military solution through a decisive defeat of the ARM.
“This is likely due to the realization that such a military solution is not obtainable short of entailing massive, and perhaps intolerable, personnel losses and economic and political cost.”
In 2004, Kayani’s thesis sat in the library at Fort Leavenworth, amid stacks of other largely ignored research papers written by foreign officers who went to Kansas to study how the United States Army fights its battles. This was a manual for a different kind of battle, a secret guerrilla campaign. Two decades after the young Pakistani military officer wrote it, he was the country’s spymaster, in the perfect position to put it to use.
↑ comment by lukeprog · 2013-11-24T16:21:32.268Z · LW(p) · GW(p)
From Freese's Coal: A Human History:
Replies from: lukeprog, lukeprog
The real irony of this story, though, is that when the two surviving [boats, out of five] heroically delivered their product [anthracite] to Philadelphia, nobody wanted it; the anthracite was thrown away, except for some that was used to gravel the foot-walks. Philadelphians didn't yet know how to burn the hard-to-kindle anthracite, which requires different stoves than those that burn bituminous coal. Two days of failed attempts to make anthracite burn led one frustrated consumer to conclude that "if the world should take fire, the Lehigh coal mine would be the safest retreat, the last place to burn." (As it happened, this statement was thoroughly disproved in 1859 when a fire started in that very mine and burned, famously, for eighty-two years.)
↑ comment by lukeprog · 2013-11-25T14:57:22.512Z · LW(p) · GW(p)
More (#2) from Coal: A Human History:
The image of a tyrannical King Coal whose power extended far beyond the coal camps was starting to form in the public mind. By encompassing nearly every lump of anthracite in the nation, the cartel reached into the hearth-fires of millions of Americans. In 1875, when Gowen and the coal operators cut wages and the miners went on strike in response, the public sympathized with the miners. Newspapers that normally condemned all strikes now denounced the coal cartel that "with one hand reaches for the pockets of the consumers, and with the other for the throats of the laborers." The strike, lasting five long months, was marked by violence on both sides. Striking miners were beaten and killed, as were strikebreakers and mine bosses. Miners derailed trains, sabotaged machinery, and burned down mine buildings. The newly combined coal operators held firm, though, and ultimately the hungry miners straggled back to work at the lower wages, their union essentially destroyed. The miners blamed Gowen, and for years they did not speak his name without a curse.
It wasn't long before the Pennsylvania legislature began to investigate Gowen's monopolistic strategies. Appearing in person before the investigating committee, Gowen persuasively argued that large mining companies were in the public interest because only they could make the needed investments. Then he quite effectively changed the subject: He read out a long list of threats, beatings, fires, and shootings committed by "a class of agitators" among the anthracite miners. When he was through, the focus of the legislature and the public (for Gowen published his arguments) had shifted from the Reading's growing power to the region's growing wave of organized crime.
Gowen's list of crimes had been compiled by Allan Pinkerton's private detective agency, which Gowen had secretly hired two years earlier to infiltrate the Molly Maguires. Pinkerton had sent an Irish Catholic spy into the region, and after he had gathered evidence of their crimes, and perhaps provoked additional ones, the trap was sprung. In September 1875, scores of suspected Mollies were rounded up by the Coal and Iron Police, a private security force which was controlled by Gowen and was the main law enforcement agency in the region.
The following spring, a spectacular and high-profile murder trial of five of the suspects opened in anthracite country. Not only did Gowen's secret agent testify against the suspects, who had been arrested by Gowen's private police, but the prosecution team was led by none other than Gowen himself, the former district attorney now acting as special prosecutor for the state. It would be hard to find another proceeding in American history where a single corporation, indeed a single man, had so blatantly taken over the powers of the sovereign.
Gowen, ever flamboyant, appeared in the courtroom dressed in formal evening clothes. Before an electrified audience, he presented a case not just against the five suspects but against all the Molly Maguires, and, by strong implication, against the miners' now-defunct union. At issue was not just the murder with which the suspects were charged but a whole array of crimes. Following Gowen's line of reasoning, the press soon blamed the Molly Maguires for all the labor violence by miners during the long strike of 1875. After a series of trials, twenty accused Mollies were hanged, and twenty-six more imprisoned. For bringing down the Mollies, Gowen-so recently the subject of public scorn and suspicion-was lauded in the press for "accomplishing one of the greatest works for public good that has been achieved in this country in this generation."
Two conflicting lines of folklore have emerged around the Molly Maguires, one branding them brutal criminals, the other hailing them as martyrs in the battle against King Coal and corporate tyranny. Modern historians generally agree that the legend of the Mollies was greatly magnified by Gowen's oratory and by the press, and that the wave of crime against coal producers in the area, particularly after the long strike of 1875, was the predictable result of the miners' desperation rather than the work of a structured secret society. Clearly, the miners' union, far from being dominated by the Mollies, had helped prevent violence by the miners while it existed. In the public's mind, though, organized anthracite miners were now seen as terrorists, and support for miners' attempts to unionize withered away. The specter of the Molly Maguires so completely undermined subsequent attempts to unionize that no union would succeed in organizing the anthracite miners until the United Mine Workers did so at the end of the century.
↑ comment by lukeprog · 2013-11-24T16:24:35.363Z · LW(p) · GW(p)
More (#1) from Coal: A Human History:
One problem with the shiny, wood-burning engines proved hard to ignore: They spewed out a continuous shower of sparks and cinders wherever they went, "a storm of fiery snow," as Charles Dickens called it when he visited the United States. It was a beautiful display at night, but it had a predictable downside. Wood-burning trains commonly set nearby fields and forests ablaze; some said the trains burned more wood outside the firebox than inside.
The worst problems were on the train itself, since many early passenger cars were roofless, and all were made of wood. For example, the inaugural trip of the Mohawk Valley line in New York in 1831 (just a year after the opening of the Liverpool and Manchester line) was marred when red-hot cinders rained down upon passengers who, just moments before, had felt privileged to be experiencing this exciting new mode of travel. Those who had brought umbrellas opened them, but tossed them overboard after the first mile once their covers had burned away. According to one witness, "a general melee [then] took place among the deck-passengers, each whipping his neighbor to put out the fire. They presented a very motley appearance on arriving at the first station."
Sparks on another train reportedly consumed $60,000 worth of freshly minted dollar bills that were on board, singeing many passengers in the process; according to one complaint, some of the women, who wore voluminous and flammable dresses, were left "almost denuded." Over a thousand patents were granted for devices that attempted to stop these trains from igniting their surroundings, their cargo, and their passengers; but the real cure would come later in the century, when coal replaced wood as the fuel of choice. In the meantime, some of the more safety conscious railways had their passengers travel with buckets of sand in their laps to pour on each other when they caught fire.
↑ comment by lukeprog · 2013-11-07T18:17:39.483Z · LW(p) · GW(p)
Passages from The Many Worlds of Hugh Everett III:
Bohr declared that although there may be a reality underlying quantum phenomena, we cannot know what the reality is. It is accessible to human understanding only through the mediation of experiment and classical concepts. Consequently, generations of physicists were taught that there is no quantum reality independent of experimental result. And that the Schrödinger equation, while incredibly useful as a predictive tool, should not be interpreted literally as a description of reality.
Everett took the opposite view.
And:
Fifteen years after the thesis was published, Everett penned a letter (found in the basement) to Max Jammer, who was writing his book on the philosophy of quantum mechanics... [saying] "It seemed to me unnatural that there should be a ‘magic’ process in which something quite drastic occurred (collapse of the wave function), while in all other times systems were assumed to obey perfectly natural continuous laws."
[By] 1954, Everett was not alone in his feeling that the collapse postulate was illogical, but he was one of the very few physicists who dared to publicly express deep dissatisfaction with it... Everett had hoped to reinvent quantum mechanics on its own terms and was disappointed that his revolutionary idea was experimentally unproveable, as the only “proof” of it was that quantum mechanics works — a fact which was already known.
(It wasn't until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
↑ comment by lukeprog · 2013-11-03T16:45:22.804Z · LW(p) · GW(p)
A passage from Tim Weiner's Legacy of Ashes: The History of the CIA:
Replies from: lukeprog
[On April 12th, 1945, the day President Roosevelt died,] Colonel Park submitted his top secret report on the [Office of Strategic Services, the precursor to the CIA] to the new president. The report, fully declassified only after the cold war ended, was a political murder weapon, honed by the military and sharpened by J. Edgar Hoover, the FBI director since 1924; Hoover despised [OSS director William] Donovan and harbored his own ambitions to run a worldwide intelligence service. Park’s work destroyed the possibility of the OSS continuing as part of the American government, punctured the romantic myths that Donovan created to protect his spies, and instilled in Harry Truman a deep and abiding distrust of secret intelligence operations. The OSS had done “serious harm to the citizens, business interests, and national interests of the United States,” the report said.
Park admitted no important instance in which the OSS had helped to win the war, only mercilessly listing the ways in which it had failed. The training of its officers had been “crude and loosely organized.” British intelligence commanders regarded American spies as “putty in their hands.” In China, the nationalist leader Chiang Kai-shek had manipulated the OSS to his own ends. Germany’s spies had penetrated OSS operations all over Europe and North Africa. The Japanese embassy in Lisbon had discovered the plans of OSS officers to steal its code books—and as a consequence the Japanese changed their codes, which “resulted in a complete blackout of vital military information” in the summer of 1943. One of Park’s informants said, “How many American lives in the Pacific represent the cost of this stupidity on the part of OSS is unknown.” Faulty intelligence provided by the OSS after the fall of Rome in June 1944 led thousands of French troops into a Nazi trap on the island of Elba, Park wrote, and “as a result of these errors and miscalculations of the enemy forces by OSS, some 1,100 French troops were killed.”
...Colonel Park acknowledged that Donovan’s men had conducted some successful sabotage missions and rescues of downed American pilots. He said the deskbound research and analysis branch of OSS had done “an outstanding job,” and he concluded that the analysts might find a place at the State Department after the war. But the rest of the OSS would have to go. “The almost hopeless compromise of OSS personnel,” he warned, “makes their use as a secret intelligence agency in the postwar world inconceivable.”
↑ comment by lukeprog · 2013-11-03T16:55:58.558Z · LW(p) · GW(p)
More (#1) from Legacy of Ashes:
All over Europe, “a legion of political exiles, former intelligence officers, ex-agents and sundry entrepreneurs were turning themselves into intelligence moguls, brokering the sale of fabricated-to-order information.” The more his spies spent buying intelligence, the less valuable it became. “If there are more graphic illustrations of throwing money at a problem that hasn’t been thought through, none comes to mind,” he wrote. What passed for intelligence on the Soviets and their satellites was a patchwork of frauds produced by talented liars.
Helms later determined that at least half the information on the Soviet Union and Eastern Europe in the CIA’s files was pure falsehood. His stations in Berlin and Vienna had become factories of fake intelligence. Few of his officers or analysts could sift fact from fiction. It was an ever present problem: more than half a century later, the CIA confronted the same sort of fabrication as it sought to uncover Iraq’s weapons of mass destruction.
And:
Forrestal then went to an old chum, John W. Snyder, the secretary of the treasury and one of Harry Truman’s closest allies. He convinced Snyder to tap into the Exchange Stabilization Fund set up in the Depression to shore up the value of the dollar overseas through short-term currency trading, and converted during World War II as a depository for captured Axis loot. The fund held $200 million earmarked for the reconstruction of Europe. It delivered millions into the bank accounts of wealthy American citizens, many of them Italian Americans, who then sent the money to newly formed political fronts created by the CIA. Donors were instructed to place a special code on their income tax forms alongside their “charitable donation.” The millions were delivered to Italian politicians and the priests of Catholic Action, a political arm of the Vatican. Suitcases filled with cash changed hands in the four-star Hassler Hotel. “We would have liked to have done this in a more sophisticated manner,” Wyatt said. “Passing black bags to affect a political election is not really a terribly attractive thing.” But it worked: Italy’s Christian Democrats won by a comfortable margin and formed a government that excluded communists. A long romance between the party and the agency began. The CIA’s practice of purchasing elections and politicians with bags of cash was repeated in Italy—and in many other nations—for the next twenty-five years.
And:
One of the many myths about Operation Success [aka Operation PBSUCCESS; the US-led Guatemalan coup d’état], planted by Allen Dulles in the American press, was that its eventual triumph lay not in violence but in a brilliant piece of espionage. As Dulles told the story, the trick was turned by an American spy in the Polish city Stettin, on the Baltic Sea—the northern terminus of the iron curtain—posing as a bird watcher. He saw through his binoculars that a freighter called the Alfhem was carrying Czech arms to the Arbenz government. He then posted a letter with a microdot message—“My God, my God, why hast thou forsaken me?”—addressed to a CIA officer under deep cover in a Paris auto parts store, who relayed the coded signal by shortwave to Washington. As Dulles told the story, another CIA officer secretly inspected the hold of the ship while it docked at the Kiel Canal connecting the Baltic to the North Sea. The CIA, therefore, knew from the moment that the Alfhem left Europe that she was bound for Guatemala carrying guns.
A wonderful yarn, repeated in many history books, but a bald-faced lie—a cover story that disguised a serious operational mistake. In reality, the CIA missed the boat.
Arbenz was desperate to break the American weapons embargo on Guatemala. He thought he could ensure the loyalty of his officer corps by arming them. Henry Hecksher had reported that the Bank of Guatemala had transferred $4.86 million via a Swiss account to a Czech weapons depot. But the CIA lost the trail. Four weeks of frantic searching ensued before the Alfhem docked successfully at Puerto Barrios, Guatemala. Only after the cargo was uncrated did word reach the U.S. Embassy that a shipment of rifles, machine guns, howitzers, and other weapons had come ashore.
The arrival of the arms—many of them rusted and useless, some bearing a swastika stamp, indicating their age and origin—created a propaganda windfall for the United States. Grossly overstating the size and military significance of the cargo, Foster Dulles and the State Department announced that Guatemala was now part of a Soviet plot to subvert the Western Hemisphere. The Speaker of the House, John McCormack, called the shipment an atomic bomb planted in America’s backyard.
Ambassador Peurifoy said the United States was at war. “Nothing short of direct military intervention will succeed,” he cabled Wisner on May 21. Three days later, U.S. Navy warships and submarines blockaded Guatemala, in violation of international law.
And:
In the East Wing of the White House, in a room darkened for a slide show, the CIA sold Eisenhower a dressed-up version of Operation Success. When the lights went on, the president’s first question went to the paramilitary man Rip Robertson.
“How many men did Castillo Armas lose?” Ike asked.
Only one, Robertson replied.
“Incredible,” said the president.
At least forty-three of Castillo Armas’s men had been killed during the invasion, but no one contradicted Robertson. It was a shameless falsehood.
This was a turning point in the history of the CIA. The cover stories required for covert action overseas were now part of the agency’s political conduct in Washington. Bissell stated it plainly: “Many of us who joined the CIA did not feel bound in the actions we took as staff members to observe all the ethical rules.” He and his colleagues were prepared to lie to the president to protect the agency’s image. And their lies had lasting consequences.
↑ comment by lukeprog · 2013-10-31T22:31:11.374Z · LW(p) · GW(p)
I shared one quote here. More from Life at the Speed of Light:
Replies from: lukeprog
Safety, of course, is paramount. The good news is that, thanks to a debate that dates back to Asilomar in the 1970s, robust and diverse regulations for the safe use of biotechnology and recombinant-DNA technology are already firmly in place. However, we must be vigilant and never drop our guard. In years to come it might be difficult to identify agents of concern if they look like nothing we have encountered before. The political, societal, and scientific backdrop is continually evolving and has shifted a great deal since the days of Asilomar. Synthetic biology also relies on the skills of scientists who have little experience in biology, such as mathematicians and electrical engineers. As shown by the efforts of the budding synthetic biologists at iGEM, the field is no longer the province of highly skilled senior scientists only. The democratization of knowledge and the rise of “open-source biology”; the establishment of a biological design-build facility, BIOFAB in California; and the availability of kitchen-sink versions of key laboratory tools, such as the DNA-copying method PCR, make it easier for anyone — including those outside the usual networks of government, commercial, and university laboratories and the culture of responsible training and biosecurity — to play with the software of life.
There are also “biohackers” who want to experiment freely with the software of life. The theoretical physicist and mathematician Freeman Dyson has already speculated on what would happen if the tools of genetic modification became widely accessible in the form of domesticated biotechnology: “There will be do-it-yourself kits for gardeners who will use genetic engineering to breed new varieties of roses and orchids. Also kits for lovers of pigeons and parrots and lizards and snakes to breed new varieties of pets. Breeders of dogs and cats will have their kits too.”
Many have focused on the risks of this technology’s falling into the “wrong hands.” The events of September 11, 2001, the anthrax attacks that followed, and the H1N1 and H7N9 influenza pandemic threat have all underscored the need to take their concerns seriously. Bioterrorism is becoming ever more likely as the technology matures and becomes ever more available. However, it is not easy to synthesize a virus, let alone one that is virulent or infective, or to create it in a form that can be used in a practical way as a weapon. And, of course, as demonstrated by the remarkable speed with which we can now sequence a pathogen, the same technology makes it easier to counteract with new vaccines.
For me, a concern is “bioerror”: the fallout that could occur as the result of DNA manipulation by a non-scientifically trained biohacker or “biopunk.” As the technology becomes more widespread and the risks increase, our notions of harm are changing, along with our view of what we mean by the “natural environment” as human activities alter the climate and, in turn, change our world.
In a similar vein, creatures that are not “normal” tend to be seen as monsters, as the product of an abuse of power and responsibility, as most vividly illustrated by the story of Frankenstein. Still, it is important to maintain our sense of perspective and of balance. Despite the knee-jerk demands for ever more onerous regulation and control measures consistent with the “precautionary principle” — whatever we mean by that much-abused term — we must not lose sight of the extraordinary power of this technology to bring about positive benefits for the world.
↑ comment by lukeprog · 2013-10-31T22:36:09.589Z · LW(p) · GW(p)
Also from Life at the Speed of Light:
...the Presidential Commission for the Study of Bioethical Issues released a report in December 2010 entitled New Directions: The Ethics of Synthetic Biology and Emerging Technologies...
Among its recommendations to the president, the commission said that the government should undertake a coordinated evaluation of public funding for synthetic-biology research, including studies on techniques for risk assessment and risk reduction and on ethical and social issues, so as to reveal noticeable gaps, if one considered that “public good” should be the main aim. The recommendations were, fortunately, pragmatic: given the embryonic state of the field, innovation should be encouraged, and, rather than creating a traditional system of bureaucracy and red tape, the patchwork quilt of regulation and guidance of the field by existing bodies should be coordinated.
Concerns were, of course, expressed about “low-probability, potentially high-impact events,” such as the creation of a doomsday virus. These rare but catastrophic possibilities should not be ignored, given that we are still reeling from the horrors of September 11. Nor should they be overstated: though one can gain access to “dangerous” viral DNA sequences, obtaining them is a long way from growing them successfully in a laboratory. Still, the report stated that safeguards should be instituted for monitoring, containment, and control of synthetic organisms — for instance, by the incorporation of “suicide genes,” molecular “brakes,” “kill switches,” or “seatbelts” that restrain growth rates or require special diets, such as novel amino acids, to limit their ability to thrive outside the laboratory. As was the case with our “branded” bacterium, we need to find new ways to label and tag synthetic organisms. More broadly, the report called for international dialogue about this emerging technology, as well as adequate training to remind all those engaged in this work of their responsibilities and obligations, not least to biosafety and stewardship of biodiversity, ecosystems, and food supplies. Though it encouraged the government to back a culture of self-regulation, it also urged it to be vigilant about the possibilities of do-it-yourself synthetic biology being carried out in what it called “noninstitutional settings.” One problem facing anyone who casts a critical eye over synthetic biology is that the field is evolving so quickly. For that reason, assessments of the technology should be under rolling review, and we should be ready to introduce new safety and control measures as necessary.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T23:29:30.611Z · LW(p) · GW(p)
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures - of money, pride, possibility of not being the first to publish, etc. - are still local, global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
RSI capabilities could be charted, and are likely to be AI-complete.
This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "AI-complete" is taken in a sense of generality and difficulty rather than "human-equivalent" then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other "visible" approach that will command immediate, intuitive agreement.
Which historical events are analogous to AI risk in some important ways?
Most obviously molecular nanotechnology a la Drexler; the other ones seem too 'straightforward' by comparison. I've always modeled my assumed social response for AI on the case of nanotech, i.e., no funding except for well-connected insiders, the term being broadened to meaninglessness, lots of concerned blither by 'ethicists' unconnected to the practitioners, etc.
Replies from: Benja, lukeprog, None
↑ comment by Benya (Benja) · 2013-06-01T10:08:56.639Z · LW(p) · GW(p)
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures - of money, pride, possibility of not being the first to publish, etc. - are still local, global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
Climate change doesn't have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it".
(Agree with the rest of the comment.)
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T12:47:04.546Z · LW(p) · GW(p)
Climate change doesn't have the aspect that "if this ends up being a problem at all, then chances are that I (or my family/...) will die of it".
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.
Replies from: Benja↑ comment by Benya (Benja) · 2013-06-01T14:09:50.754Z · LW(p) · GW(p)
Many people believe that about climate change (due to global political disruption, economic collapse etcetera, praising the size of the disaster seems virtuous).
Hm! I cannot recall a single instance of this. (Hm, well; I can recall one instance of a TV interview with a politician from a non-first-world island nation taking projections seriously which would put his nation under water, so it would not be much of a stretch to think that he's taking seriously the possibility that people close to him may die from this.) If you have, probably this is because I haven't read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update?
Many others do not believe it about AI.
Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being similarly well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today's situation with climate change if that happened... Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than (climate change as it has looked to me so far).
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-01T17:06:49.461Z · LW(p) · GW(p)
Many people believe that about climate change (due to global political disruption, economic collapse etcetera, praising the size of the disaster seems virtuous).
Hm! I cannot recall a single instance of this.
Will keep an eye out for the next citation.
Still, if the analysis says "you will die of this", and the brain of the person considering the analysis is willing to assign it some credence
This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.
People only avoid certain sorts of death risks under certain circumstances.
Replies from: Benja, Eugine_Nier↑ comment by Benya (Benja) · 2013-06-01T17:27:13.442Z · LW(p) · GW(p)
Will keep an eye out for the next citation.
Thanks!
[...] motorcycles. [...]
Point. Need to think.
↑ comment by Eugine_Nier · 2013-06-01T20:27:03.192Z · LW(p) · GW(p)
We had a similar conversation at MIRI once, in which I was arguing that, no, people don't automatically change their behavior as soon as they are told that something bad might happen to them personally
Being told something is dangerous =/= believing it is =/= alieving it is.
↑ comment by [deleted] · 2013-06-02T22:04:10.142Z · LW(p) · GW(p)
Albeit if "AI-complete" is taken in a sense of generality and difficulty rather than "human-equivalent" then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other "visible" approach that will command immediate, intuitive agreement.
This seems implied by X-complete. X-complete generally means "given a solution to an X-complete problem, we have a solution for X".
e.g. NP-complete: given a polynomial solution to any NP-complete problem, any problem in NP can be solved in polynomial time.
(Of course the technical nuance of the strength of the statement X-complete is such that I expect most people to imagine the wrong thing, like you say.)
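For concreteness, the standard definition being gestured at can be written out as follows (a brief sketch in my own notation, not part of the original comment):

```latex
% Standard definition behind the "X-complete" usage.
X \text{ is NP-complete} \;\iff\; X \in \mathrm{NP} \;\wedge\; \forall Y \in \mathrm{NP}:\ Y \le_p X
% Consequence: a polynomial-time algorithm for any one NP-complete problem
% yields one for every problem in NP.
\text{If } X \in \mathrm{P} \text{ and every } Y \in \mathrm{NP} \text{ satisfies } Y \le_p X, \text{ then } \mathrm{NP} \subseteq \mathrm{P}.
```

Read analogously, calling a problem "AI-complete" asserts that a solution to it would yield a solution to general AI, which is the stronger sense discussed above.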
comment by Benya (Benja) · 2013-05-31T20:34:46.064Z · LW(p) · GW(p)
(I don't have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (but haven't thought about this enough to put numbers on it), though I too am not comforted on these parts because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in the argument for hope that I'm really worried about:
- I worry that even smart (Nobel prize-type) people may end up getting the problem completely wrong, because MIRI's argument tends to conspicuously not be reinvented independently elsewhere (even though I find myself agreeing with all of its major steps).
- I worry that even if they get it right, by the time we have visible signs of AGI we will be even closer to it than we are now, so there will be even less time to do the basic research necessary to solve the problem, making it even less likely that it can be done in time.
Although it's also true that I assign some probability to e.g. AGI without visible signs, I think the above is currently the largest part of why I feel MIRI work is important.
comment by JonahS (JonahSinick) · 2013-05-31T20:25:26.792Z · LW(p) · GW(p)
I personally am optimistic about the world's elites navigating AI risk as well as possible, subject to the inherent human limitations that I would expect everybody to have, and to the inherent risk. Some points:
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanos, climate change tail risk, etc). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I'm blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there's a fair amount of overlap between the two things. e.g. China is pushing hard on nuclear energy and on renewable energies, even though they won't be needed for years.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient.
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
I should clarify that with the exception of my first point, the arguments that I give are arguments that humanity will address AI risk in a near optimal way – not necessarily that AI risk is low.
For example, it could be that people correctly recognize that building an AI will result in human extinction with probability 99%, and so implement policies to prevent it, but that sometime over the next 10,000 years, these policies will fail, and AI will kill everyone.
But the actionable thing is how much we can reduce the probability of AI risk, and if by default people are going to do the best that one could hope, we can't reduce the probability substantially.
Replies from: falenas108, lukeprog, ryjm, hairyfigment, FeepingCreature↑ comment by falenas108 · 2013-06-01T01:02:19.484Z · LW(p) · GW(p)
The people with the most power tend to be the most rational people
What?
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-01T01:22:16.531Z · LW(p) · GW(p)
Rationality is systematized winning. Chance plays a role, but over time it's playing less and less of a role, because of more efficient markets.
Replies from: Decius, elharo, ChrisHallquist↑ comment by Decius · 2013-06-01T17:39:56.522Z · LW(p) · GW(p)
There is lots of evidence that people in power are the most rational, but there is an even larger prior to overcome.
Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power - but I don't think that very rational people are common, and I think that they are less likely to want more power than they have.
Particularly since the previous generation of power-holders used different factors when they selected their successors.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T05:16:20.661Z · LW(p) · GW(p)
I agree with all of this. I think that "people in power are the most rational" was much less true in 1950 than it is today, and that it will be much more true in 2050.
↑ comment by elharo · 2013-06-02T11:28:20.769Z · LW(p) · GW(p)
Actually, that's a badly titled article. At best "Rationality is systematized winning" applies to instrumental, not epistemic, rationality. And even for that, you can't make rationality into systematized winning by defining it so. Either that's a tautology (whatever systematized winning is, we define that as "rationality") or it's an empirical question, i.e., does rationality lead to winning? Looking around the world at "winners", that seems like a very open question.
And now that I think about it, it's also an empirical question whether there even is a system for winning. I suspect there is--that is, I suspect that there are certain instrumental practices one can adopt that are generically useful for achieving a broad variety of life goals--but this too is an empirical question we should not simply assume the answer to.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T16:38:43.980Z · LW(p) · GW(p)
I agree that my claim isn't obvious. I'll try to get back to you with detailed evidence and arguments.
↑ comment by ChrisHallquist · 2014-01-19T05:32:06.852Z · LW(p) · GW(p)
The problem is that politicians have a lot to gain from really believing the stupid things they have to say to gain and hold power.
To quote an old thread:
Every politician I've ever met has in fact been a completely sincere person who considers themselves to do what they do with the aim of good in the world. Even the ones that any outsider would say "haha, leave it out" to the notion. Every politician is completely sincere. I posit that this is a much more frightening notion than the comfort of a conspiracy theory.
Cf. Steven Pinker: historians who've studied Hitler tend to come away convinced he really believed he was a good guy.
To get the fancy explanation of why this is the case, see "Trivers' Theory of Self-Deception."
↑ comment by lukeprog · 2013-06-16T03:09:27.167Z · LW(p) · GW(p)
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time...
It's not much evidence, but the two earliest scientific investigations of existential risk I know of, LA-602 and the RHIC Review, seem to show movement in the opposite direction: "LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations."
Perhaps the trend you describe is accurate, but I also wouldn't be surprised to find out (after further investigation) that scientists are now increasingly likely to avoid serious analysis of real risks posed by their research, since they're more worried than ever before about funding for their field (or for some other reason). The AAAI Presidential Panel on Long-Term AI Futures was pretty disappointing, and like the RHIC Review seems like pure public relations, with a pre-determined conclusion and no serious risk analysis.
↑ comment by ryjm · 2013-06-01T03:23:24.274Z · LW(p) · GW(p)
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
Why would a good AI policy be one which takes as a model a universe where world destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don't care about the far future will be motivated to prevent it.
This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial).
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanos, climate change tail risk, etc). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I'm blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there's a fair amount of overlap between the two things. e.g. China is pushing hard on nuclear energy and on renewable energies, even though they won't be needed for years.
I think you're just blurring "rationality" here. The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say), and especially of the kind needed to properly handle AI - and claiming evidence for future good decisions related to AI risk because of domain expertise in entirely different fields is quite a stretch. Believe it or not, most people are not mathematicians or computer scientists. Most powerful people are not mathematicians or computer scientists. And most mathematicians and computer scientists don't give two shits about AI risk - if they don't think it worthy of attention, why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about? Obviously they aren't thinking about it now - why are you confident this won't be the case in the future? Thinking about AI requires a rather large conceptual leap - "rationality" is necessary but not sufficient, so even if all powerful people were "rational" it doesn't follow that they can deal with these issues properly or even single them out as something to meditate on, unless we have a genius orator I'm not aware of. It's hard enough explaining recursion to people who are actually interested in computers. And it's not like we can drop a UFAI on a country to get people to pay attention.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it's more salient, and in the future it will be still more salient.
In the Manhattan project, the "will bombs ignite the atmosphere?" question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I'm taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, we can just keep chugging along because nice things can be "expected to increase over time", and this somehow will result in the kind of society we need. These statements always confuse me; one usually expects to be in a better position to solve a problem 5 years down the road, but trying to describe that advantage in terms of out of thin air claims about incremental changes in human behavior seems like a waste of space unless there is some substance behind it. They only seem useful when one has reached that 5 year checkpoint and can reflect on the current context in detail - for example, it's not clear to me that the increasing availability of information is always a net positive for AI risk (since it could be the case that potential dangers are more salient as a result of unsafe AI research - the more dangers uncovered could even act as an incentive for more unsafe research depending on the magnitude of positive results and the kind of press received. But of course the researchers will make the right decision, since people are never overconfident...). So it comes off (to me) as a kind of sleight of hand where it feels like a point for optimism, a kind of "Yay Open Access Knowledge is Good!" applause light, but it could really go either way.
Also I really don't know where you got that last idea - I can't imagine that most people would find AI safety more glamorous than, you know, actually building a robot. There's a reason why it's hard to get people to write unit tests, and why software projects get bloated and abandoned. Something like what Haskell is to software would be optimal. I don't think it's a great idea to rely on the conscientiousness of people in this case.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-01T05:08:49.947Z · LW(p) · GW(p)
Thanks for engaging.
Why would a good AI policy be one which takes as a model a universe where world destroying weapons in the hands of incredibly unstable governments controlled by glorified tribal chieftains is not that bad of a situation? Almost but not quite destroying ourselves does not reflect well on our abilities. The Cold War as a good example of averting bad outcomes? Eh.
The point is that I would have expected things to be worse, and that I imagine that a lot of others would have as well.
This is assuming that people understand what makes an AI so dangerous - calling an AI a global catastrophic risk isn't going to motivate anyone who thinks you can just unplug the thing (and even worse if it does motivate them, since then you have someone who is running around thinking the AI problem is trivial).
I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand.
The fact that someone is powerful is evidence that they are good at gaining a reputation in their specific field, but I don't see how this is evidence for rationality as such (and if we are redefining it to include dictators and crony politicians, I don't know what to say),
Broadly, the most powerful countries are the ones with the most rational leadership (where here I mean "rational with respect to being able to run a country," which is relevant), and I expect this trend to continue.
Also, wealth is skewing toward more rational people over time, and wealthy people have political bargaining power.
why would someone who has no experience with these kinds of issues suddenly grab it out of the space of all possible ideas he could possibly be thinking about?
Political leaders have policy advisors, and policy advisors listen to scientists. I expect that AI safety issues will percolate through the scientific community before long.
It seems like you are claiming that AI safety does not require a substantial shift in perspective (I'm taking this as the reason why you are optimistic, since my cynicism tells me that expecting a drastic shift is a rather improbable event) - rather, we can just keep chugging along because nice things can be "expected to increase over time", and this somehow will result in the kind of society we need. [...]
I agree that AI safety requires a substantial shift in perspective — what I'm claiming is that this change in perspective will occur organically substantially before the creation of AI is imminent.
Also I really don't know where you got that last idea - I can't imagine that most people would find AI safety more glamorous than, you know, actually building a robot.
You don't need "most people" to work on AI safety. It might suffice for 10% or fewer of the people who are working on AI to work on safety. There are lots of people who like to be big fish in a small pond, and this will motivate some AI researchers to work on safety even if safety isn't the most prestigious field.
If political leaders are sufficiently rational (as I expect them to be), they'll give research grants and prestige to people who work on AI safety.
Replies from: wubbles, Desrtopa↑ comment by wubbles · 2013-06-02T14:08:51.260Z · LW(p) · GW(p)
Things were a lot worse than everyone knew: Russia almost invaded Yugoslavia in the 1950s, which would have triggered a war according to newly declassified NSA journals. The Cuban Missile Crisis could easily have gone hot, and several times early warning systems were triggered by accident. Of course, estimating what could have happened is quite hard.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T16:55:55.746Z · LW(p) · GW(p)
I agree that there were close calls. Nevertheless, things turned out better than I would have guessed, and indeed, probably better than a large fraction of military and civilian people would have guessed.
Replies from: Baughn↑ comment by Desrtopa · 2013-06-02T14:19:05.856Z · LW(p) · GW(p)
I think that people will understand what makes AI dangerous. The arguments aren't difficult to understand.
We still get people occasionally who argue the point while reading through the Sequences, and that's a heavily filtered audience to begin with.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T16:25:06.283Z · LW(p) · GW(p)
There's a difference between "sufficiently difficult so that a few readers of one person's exposition can't follow it" and "sufficiently difficult so that after being in the public domain for 30 years, the arguments won't have been distilled so as to be accessible to policy makers."
I don't think that the arguments are any more difficult than the arguments for anthropogenic global warming. One could argue that the difficulty of these arguments has been a limiting factor in climate change policy, but I believe that by far the dominant issue has been misaligned incentives, though I'd concede that this is not immediately obvious.
↑ comment by hairyfigment · 2013-06-06T19:38:54.219Z · LW(p) · GW(p)
Only two nuclear weapons have been used since nuclear weapons were developed,
And I have the impression that relatively low-ranking people helped produce this outcome by keeping information from their superiors. Petrov chose not to report a malfunction of the early warning system until he could prove it was a malfunction. People during the Korean war and possibly Vietnam seem not to have passed on the fact that pilots from Russia or America were cursing in their native languages over the radio (and the other side was hearing them).
This in fact is part of why I don't think we 'survived' through the anthropic principle. Someone born after the end of the Cold War could look back at the apparent causes of our survival. And rather than seeing random events, or no causes at all, they would see a pattern that someone might have predicted beforehand, given more information.
This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-06T20:07:58.377Z · LW(p) · GW(p)
As I remark elsewhere in this thread, the point is that I would have expected substantially more nuclear exchange by now than actually happened, and in view of this, I updated in the direction of things being more likely to go well than I would have thought. I'm not saying "the fact that there haven't been nuclear exchanges means that destructive things can't happen."
This pattern seems vanishingly unlikely to save us from unFriendly AI. It would take, at the very least, a much more effective education/propaganda campaign.
I was using the nuclear war thing as one of many outside views, not as direct analogy. The AI situation needs to be analyzed separately — this is only one input.
↑ comment by FeepingCreature · 2013-06-01T02:13:56.838Z · LW(p) · GW(p)
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world.
It may be challenging to estimate the "actual, at the time" probability of a past event that would quite possibly have resulted in you not existing. Survivor bias may play a role here.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-01T04:40:04.411Z · LW(p) · GW(p)
Nuclear war would have to be really, really big to kill a majority of the population, and probably even if all weapons were used the fatality rate would be under 50% (with the uncertainty coming from nuclear winter). Note that most residents of Hiroshima and Nagasaki survived the 1945 bombings, and that fewer than 60% of people live in cities.
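A rough back-of-envelope version of that estimate (the inputs below are illustrative assumptions of mine, not figures from this thread):

```python
# Back-of-envelope sketch: direct fatalities from an all-out exchange, assuming
# strikes concentrate on cities. All inputs are illustrative assumptions.

urban_fraction = 0.55          # assumed share of people living in cities
struck_fraction = 0.8          # assumed share of the urban population in targeted cities
fatality_in_struck_city = 0.4  # assumed average fatality rate in a struck city
                               # (most residents of Hiroshima/Nagasaki survived)

direct_fatality = urban_fraction * struck_fraction * fatality_in_struck_city
print(f"direct fatality fraction ~ {direct_fatality:.0%}")  # ~18% of world population

# Even doubling this for fallout and infrastructure collapse stays under 50%;
# the large remaining uncertainty, as noted above, is nuclear winter.
```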
Replies from: elharo, FeepingCreature↑ comment by elharo · 2013-06-01T23:04:52.902Z · LW(p) · GW(p)
It depends on the nuclear war. An exchange of bombs between India and Pakistan probably wouldn't end human life on the planet. However, an all-out war between the U.S. and the U.S.S.R. in the 1980s most certainly could have. Fortunately, that doesn't seem to be a big risk right now. Thirty years ago it was. I don't feel confident in any predictions one way or the other about whether this might be a threat again 30 years from now.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T00:19:58.179Z · LW(p) · GW(p)
However, an all-out war between the U.S. and the U.S.S.R. in the 1980s most certainly could have.
Why do you think this?
Replies from: elharo↑ comment by elharo · 2013-06-02T11:39:03.988Z · LW(p) · GW(p)
Because all the evidence I've read or heard (most of it back in the 1980s) agreed on this. Specifically, in a likely exchange between the U.S. and the USSR, the northern hemisphere would have been rendered completely uninhabitable within days. Humanity in the southern hemisphere would probably have lasted somewhat longer, but still would have been destroyed by nuclear winter and radiation. Details depend on the exact distribution of targets.
Remember that Hiroshima and Nagasaki were two relatively small fission weapons. By the 1980s the USSR and the US each had enough much larger fusion bombs to destroy the planet individually. The only question was how many each would use in an exchange and where they would be targeted.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-02T16:26:49.710Z · LW(p) · GW(p)
This is mostly out of line with what I've read. Do you have references?
↑ comment by FeepingCreature · 2013-06-01T05:11:55.748Z · LW(p) · GW(p)
I'm not sure what the correct way to approach this would be. I think it may be something like comparing the number of people in your immediate reference class - depending on preference, this could be "yourself precisely" or "everybody who would make or have made the same observation as you" - and then asking "how would nuclear war affect the distribution of such people in that alternate outcome?". But that's only if you give each person uniform weighting, of course, which has problems of its own.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-01T05:16:24.426Z · LW(p) · GW(p)
Sure, these things are subtle — my point was that the number of people who would have perished isn't very large in this case, so that under a broad class of assumptions, one shouldn't take the observed absence of nuclear conflict to be a result of survivorship bias.
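To make this quantitative, here is a minimal sketch of the correction under the uniform-weighting assumption discussed above (the hypotheses, priors, and fatality figures are illustrative assumptions, not estimates from the thread):

```python
# Minimal sketch of the survivorship-bias correction under uniform weighting of
# observers: each world is weighted by how many living observers in it share our
# observation ("no nuclear war"). All numbers are illustrative assumptions.

def p_risky_given_no_war(q_risky=0.8, q_safe=0.2, prior_risky=0.5, fatality=0.5):
    """Posterior probability of the 'risky world' hypothesis after observing no
    nuclear war, when a war would have killed a `fatality` fraction of observers."""
    def likelihood(q):
        # Fraction of living observers, in worlds with war probability q,
        # who observe "no war happened".
        no_war_observers = 1 - q            # everyone in a no-war history
        war_survivors = q * (1 - fatality)  # survivors in a war history saw the war
        return no_war_observers / (no_war_observers + war_survivors)

    numerator = likelihood(q_risky) * prior_risky
    denominator = numerator + likelihood(q_safe) * (1 - prior_risky)
    return numerator / denominator

print(p_risky_given_no_war(fatality=0.5))  # ~0.27; naive Bayes would give 0.20
print(p_risky_given_no_war(fatality=1.0))  # 0.50, i.e. no update from the prior
```

Other anthropic weightings give different numbers, but under a broad class of assumptions the qualitative conclusion is the same: with a sub-50% fatality rate, most counterfactual war-worlds still contain observers who would report the war, so the observed absence of war remains real evidence.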
comment by [deleted] · 2013-06-02T21:59:33.244Z · LW(p) · GW(p)
The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.
Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism for how the world works.
/doom
Replies from: Eugine_Nier↑ comment by Eugine_Nier · 2013-06-04T01:30:51.697Z · LW(p) · GW(p)
Don't trust God (or semi-competent world leaders) to make everything magically turn out all right.
I agree. Of course the article you linked to ultimately attempts to argue for trusting semi-competent world leaders.
Replies from: None↑ comment by [deleted] · 2013-06-04T02:30:51.230Z · LW(p) · GW(p)
It alludes to such an argument and sympathizes with it. Note I also "made the argument" that civilization should be dismantled.
Personally I favor the FAI solution, but I tried to make the post solution-agnostic and mostly demonstrate where those arguments are coming from, rather than argue any particular one. I could have made that clearer, I guess.
Thanks for the feedback.
comment by timtyler · 2013-06-01T19:09:11.083Z · LW(p) · GW(p)
I think there's a >10% chance AI will not be preceded by visible signals.
Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.
Replies from: jsalvatier↑ comment by jsalvatier · 2013-06-03T02:51:43.107Z · LW(p) · GW(p)
I interpreted that as 'visible signals of danger', but I could be wrong.
comment by timtyler · 2013-06-01T12:56:49.047Z · LW(p) · GW(p)
Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chloroflourocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.
Looking at their story, we can plausibly expect governments to attempt to delay the development of "weaponizable" technology by others.
These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.
comment by novalis · 2013-05-31T20:14:29.879Z · LW(p) · GW(p)
One question is whether AI is like CFCs, or like CO2, or like hacking.
With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.
With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because a larger economy can feed more people and produce better technology.
With hacking, we know it's a problem and we are highly motivated to solve it, but we just don't know how. You can take every recommendation that Bruce Schneier makes, and still get hacked. The US military gets hacked. The Australian intelligence agency gets hacked. Swiss banks get hacked. And it doesn't seem to be getting better, even though we keep trying.
Banning AI research (once it becomes clear that recursive self-improvement is possible) would have the same problem as banning CO2. And it might also have the same problems as hacking: how do you stop people from writing code?
comment by Wei Dai (Wei_Dai) · 2013-07-03T01:30:28.579Z · LW(p) · GW(p)
Here are my reasons for pessimism:
There are likely to be effective methods of controlling AIs that are of subhuman or even roughly human-level intelligence which do not scale up to superhuman intelligence. These include, for example, reinforcement by reward/punishment, mutually beneficial trading, and legal institutions. Controlling superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately, the existence of effective but unscalable methods of AI control will probably lull elites into a false sense of security as we deploy increasingly smarter AIs without incident, and both increase investments into AI capability research and reduce research into "higher" forms of AI control.
The only possible approaches I can see for creating scalable methods of AI control require solving difficult philosophical problems, which likely require long lead times. By the time elites take the possibility of superhuman AIs seriously and realize that controlling them requires approaches very different from controlling subhuman and human-level AIs, there won't be enough time to solve these problems even if they decide to embark upon Manhattan-style projects (because there isn't sufficient identifiable philosophical talent in humanity to recruit for such projects to make enough of a difference).
In summary, even in a relatively optimistic scenario, one with steady progress in AI capability along with apparent progress in AI control/safety (and nobody deliberately builds a UFAI for the sake of "maximizing complexity of the universe" or what have you), it's probably only a matter of time until some AI crosses a threshold of intelligence and manages to "throw off its shackles". This may be accompanied by a last-minute scramble by mainstream elites to slow down AI progress and research methods of scalable AI control, which (if it does happen) will likely be too late to make a difference.
comment by lukeprog · 2013-06-28T20:43:16.455Z · LW(p) · GW(p)
Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Replies from: wedrifid↑ comment by wedrifid · 2013-06-28T21:44:23.361Z · LW(p) · GW(p)
Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Perhaps someone could convince Congress that "Terrorists" had developed "geomagnetic weaponry" and that new "geomagnetic defence systems" need to be implemented urgently. (Being seen to be) taking action to defend against the hated enemy tends to be more motivating than worrying about actual significant risks.
comment by hedges · 2013-05-31T20:20:48.204Z · LW(p) · GW(p)
Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?
Unlike nuclear weapons, a single AI might have world-ending consequences, and an AI requires no special resources. Theoretically, a seed AI could be uploaded to The Pirate Bay, from where anyone could download and compile it.
Replies from: Manfred↑ comment by Manfred · 2013-05-31T20:41:50.109Z · LW(p) · GW(p)
If the friendly AI comes first, the goal is for it to always have enough resources to be able to stop unsafe AIs from being a big risk.
Replies from: Benja↑ comment by Benya (Benja) · 2013-06-01T07:37:31.190Z · LW(p) · GW(p)
Upvoted, but "always" is a big word. I think the hope is more for "as long as it takes until humanity starts being capable of handling its shit itself"...
Replies from: Benja↑ comment by Benya (Benja) · 2013-06-07T08:05:36.784Z · LW(p) · GW(p)
Why the downvotes? Do people feel that "the FAI should at some point fold up and vanish out of existence" is so obvious that it's not worth pointing out? Or disagree that the FAI should in fact do that? Or feel that it's wrong to point this out in the context of Manfred's comment? (I didn't mean to suggest that Manfred disagrees with this, but felt that his comment was giving the wrong impression.)
Replies from: Pentashagon↑ comment by Pentashagon · 2013-06-07T21:48:02.008Z · LW(p) · GW(p)
Will sentient, self-interested agents ever be free from the existential risks of UFAI/intelligence amplification without some form of oversight? It's nice to think that humanity will grow up and learn how to get along, but even if that's true for 99.9999999% of humans, that leaves 7 people from today's population who would probably have the power to trigger their own UFAI hard takeoff after an FAI fixes the world and then disappears. Even if such a disaster could be stopped, it is a risk probably worth the cost of keeping some form of FAI around indefinitely. What FAI becomes is anyone's guess, but the need for what FAI does will probably not go away. If we can't trust humans to do FAI's job now, I don't think we can trust humanity's descendants to do FAI's job either, just from Löb's theorem. I think it is unlikely that humans will become enough like FAI to properly do FAI's job. They would essentially give up their humanity in the process.
Replies from: Eliezer_Yudkowsky, Benja↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-07T22:09:40.622Z · LW(p) · GW(p)
A secure operating system for governed matter doesn't need to take the form of a powerful optimization process, nor does verification of transparent agents trusted to run at root level. Benja's hope seems reasonable to me.
Replies from: Wei_Dai, Benja↑ comment by Wei Dai (Wei_Dai) · 2013-06-08T14:28:19.250Z · LW(p) · GW(p)
A secure operating system for governed matter doesn't need to take the form of a powerful optimization process
This seems non-obvious. (So I'm surprised to see you state it as if it was obvious. Unless you already wrote about the idea somewhere else and are expecting people to pick up the reference?) If we want the "secure OS" to stop posthumans from running private hell simulations, it has to determine what constitutes a hell simulation and successfully detect all such attempts despite superintelligent efforts at obscuration. How does it do that without being superintelligent itself?
nor does verification of transparent agents trusted to run at root level
This sounds interesting but I'm not sure what it means. Can you elaborate?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-08T20:38:12.693Z · LW(p) · GW(p)
Hm, that's true. Okay, you do need enough intelligence in the OS to detect certain types of simulations, and/or the intention to build such simulations, however obscured.
If you can verify an agent's goals (and competence at self-modification), you might be able to trust zillions of different such agents to all run at root level, depending on what the tiny failure probability worked out to quantitatively.
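To unpack "depending on what the tiny failure probability worked out to quantitatively", here is the kind of compounding involved (both numbers below are made-up assumptions):

```python
# Illustrative arithmetic for trusting many verified agents at root level:
# even a tiny per-agent failure probability compounds across agents.
# Both inputs are made-up assumptions.

p_fail = 1e-12      # assumed chance a single agent passes verification while unsafe
n_agents = 10**9    # assumed number of agents trusted to run at root level

p_any_failure = 1 - (1 - p_fail) ** n_agents
print(f"chance of at least one bad agent ~ {p_any_failure:.1e}")  # ~1.0e-03
```

Whether an aggregate number like that is acceptable is exactly the quantitative question being gestured at.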
Replies from: Pentashagon↑ comment by Pentashagon · 2013-06-10T19:50:25.385Z · LW(p) · GW(p)
If you can verify an agent's goals (and competence at self-modification), you might be able to trust zillions of different such agents to all run at root level, depending on what the tiny failure probability worked out to quantitatively.
That means each non-trivial agent would become the FAI for its own resources. To see the necessity of this imagine what initial verification would be required to allow an agent to simulate its own agents. Restricted agents may not need a full FAI if they are proven to avoid simulating non-restricted agents, but any agent approaching the complexity of humans would need the full FAI "conscience" running to evaluate its actions and interfere if necessary.
EDIT: "interfere" is probably the wrong word. From the inside the agent would want to satisfy the FAI goals in addition to its own. I'm confused about how to talk about the difference between what an agent would want and what an FAI would want for all agents, and how it would feel from the inside to have both sets of goals.
↑ comment by Benya (Benja) · 2013-06-07T22:57:17.815Z · LW(p) · GW(p)
Benja's hope seems reasonable to me.
I'd hope so, since I think I got the idea from you :-)
This is tangential to what this thread is about, but I'd add that I think it's reasonable to have hope that humanity will grow up enough that we can collectively make reasonable decisions about things affecting our then-still-far-distant future. To put it bluntly, if we had an FAI right now I don't think it should be putting a question like "how high is the priority of sending out seed ships to other galaxies ASAP" to a popular vote, but I do think there's reasonable hope that humanity will be able to make that sort of decision for itself eventually. I suppose this is down to definitions, but I tend to visualize FAI as something that is trying to steer the future of humanity; if humanity eventually takes on the responsibility for this itself, then even if for whatever reason it decides to use a powerful optimization process for the special purpose of preventing people from building uFAI, it seems unhelpful to me to gloss this without more qualification as "the friendly AI [... will always ...] stop unsafe AIs from being a big risk", because the latter just sounds to me like we're keeping around the part where it steers the fate of humanity as well.
↑ comment by Benya (Benja) · 2013-06-07T23:03:59.836Z · LW(p) · GW(p)
Thanks for explaning the reasoning!
I do agree that it seems quite likely that even in the long run, we may not want to modify ourselves so that we are perfectly dependable, because it seems like that would mean getting rid of traits we want to keep around. That said, I agree with Eliezer's reply about why this doesn't mean we need to keep an FAI around forever; see also my comment here.
I don't think Löb's theorem enters into it. For example, though I agree that it's unlikely that we'd want to do so, I don't believe Löb's theorem would be an obstacle to modifying humans in a way making them super-dependable.
comment by Wei Dai (Wei_Dai) · 2013-06-02T03:41:08.011Z · LW(p) · GW(p)
The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?
Replies from: timtyler↑ comment by timtyler · 2013-06-16T10:19:04.807Z · LW(p) · GW(p)
The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
We see pretty big boosts already, IMO - largely by facilitating networking effects. Idea recombination and testing happen faster on the internet.
comment by [deleted] · 2015-10-17T11:30:29.622Z · LW(p) · GW(p)
@Lukeprog, can you
(1) update us, in brief, on your working answers to the posed questions, and (2) give your current confidence (and, if you would like, by proxy MIRI's confidence as an organisation) in each of the following 3:
Elites often fail to take effective action despite plenty of warning.
I think there's a >10% chance AI will not be preceded by visible signals.
I think the elites' safety measures will likely be insufficient.
Thank you for your diligence.
comment by falenas108 · 2013-06-01T01:01:32.046Z · LW(p) · GW(p)
There's another reason for hope here that global warming lacked: the idea of a dangerous AI is already established in the public eye as one of the "things we need to be careful about." A big problem the global warming movement had, and is still having, is convincing the public that it's a threat in the first place.
comment by Eugine_Nier · 2013-06-01T05:56:15.083Z · LW(p) · GW(p)
Who do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the type of social change that shakes up the social hierarchy.
comment by [deleted] · 2013-05-31T20:04:46.713Z · LW(p) · GW(p)
Combining the beginning and the end of your questions reveals an answer.
Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chloroflourocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?
Answer how just fine any of these are, and you have analogous answers.
You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.