A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 1)

post by philosophytorres · 2018-05-12T13:34:21.381Z · LW · GW · 1 comments

This is the first of three posts; the critique has been split into three parts to enhance readability, given the document's length. For the official publication, go here.

Key findings:

—> The first quarter or so of the chapter contains at least two quotes from other scholars that are taken “completely” out of context—that is, their original meaning is either in tension or outright contradictory with respect to the meaning implied by their use in this chapter. In both cases, the quotes play integral rhetorical, and to some extent substantive, roles in the argument that Pinker aims to develop.

—> The chapter expends a great deal of energy attacking a small village of straw men, from the pessimism/optimism dichotomy that frames the entire discussion to the theoretical dangers posed by value-misaligned machine superintelligence. I argue that this tendency to knock down unserious or non-existent positions while ignoring or misrepresenting the most intellectually robust ideas does a disservice to the ongoing public and academic discussions about the various global-scale threats facing humanity this century.

—> Many citations appear to have been poorly vetted. For example, Pinker relies on numerous non-scholarly articles to make what purport to be scholarly assertions about a wide range of topics that fall outside his area of expertise. In some cases, Pinker makes these claims with considerable confidence, thus giving non-expert readers—some of whom may be responsible for shaping domestic and foreign policies—a false sense of their tenability.

—> Along these lines, many of the sources that Pinker cites to support his theses contain some facts, evidence, or ideas that undercut those theses. Rather than acknowledging that alternative views are also compatible with (or supported by) the evidence, though, Pinker preferentially selects the facts, evidence, and ideas that support his narrative while simply ignoring those that don’t. This is part of a larger issue of “cherry-picked” data. Indeed, I argue that, when the facts are more comprehensively considered, the positions that Pinker champions appear far less defensible.

—> Overall, the assessment presented below leads me to conclude that it would be unfortunate if this chapter were to significantly shape the public and academic discussions surrounding “existential risks.” In the harshest terms, the chapter is guilty of misrepresenting ideas, cherry-picking data, misquoting sources, and ignoring contradictory evidence.

PART 1:

“My new favorite book of all time.” That’s how Bill Gates has described Steven Pinker’s most recent book Enlightenment Now. Since I was an admirer of Pinker’s previous book The Better Angels of Our Nature, which I have cited (approvingly) many times in the past, I was eager to get a copy of the new tome. In particular, I was curious about Pinker’s chapter on “existential threats,” since this is a topic that I’ve worked on for years in both a journalistic and academic capacity, publishing numerous articles in popular media outlets and scholarly journals as well as two books on the topic (one of which Pinker mentions in Enlightenment Now). Thus, unlike world history, evolutionary psychology, and economics—all of which Pinker discusses with apparent erudition—this is a subject on which I have expertise and, consequently, can offer a thorough and informed evaluation of Pinker’s various theses.

The present document does precisely this by dissecting individual sentences and paragraphs, and then placing them under a critical microscope for analysis. Why choose this unusual approach? Because, so far as I can tell, almost every paragraph of the chapter contains at least one misleading claim, problematic quote, false assertion, or selective presentation of the evidence. Given (i) the ubiquity of such problems—or so I will try to show in the cooperative spirit of acquiring a better approximation of the truth—along with (ii) the fact that Enlightenment Now will likely become a massively influential, if not canonical, book among a wide range of scholars and the general public, it seems important that someone take the time to comb through the chapter on existential threats (again, my area of expertise) and point out the various problems, ranging from the trivial to the egregious, that it contains.

To be clear, I think Pinker’s overall contribution to culture, including intellectual culture, has been positive: humanity really has made measurable progress in multiple domains of well-being, morality, knowledge, and so on, and people ought to know this—if only to ward off the despair that reading the daily headlines tends to elicit. But I also believe that Pinker suffers from a scotoma in his vision of our collective existential plight: while violence has declined and our circles of moral concern have expanded, large-scale human activity and increasingly powerful “dual-use” technologies have introduced—and continue to introduce—a constellation of historically unique hazards that genuinely threaten our species’ future on spaceship Earth. There is no contradiction here! Indeed, I have often recommended (before Enlightenment Now) that people read Better Angels alongside books like The Future of Violence, Our Final Hour, Global Catastrophic Risks, and Here Be Dragons to acquire a more complete picture of our (rapidly) evolving survival situation. The major problem with Pinker’s Enlightenment progressionism is thus one of incompleteness: he simply ignores (or misinterprets, in my view) a range of phenomena and historical trends that clearly indicate that, as Stephen Hawking soberly put it, “this is the most dangerous time for our planet.” Again, any perceived contradiction is illusory: the moment in history with the lowest rates of violence (etc.) also contains more global risk potential than any other in the past 200,000 years.

This is a general criticism of Pinker’s progressionist project, in contrast to the more specific criticisms below. Another general complaint is that, with respect to the existential threats chapter, Pinker doesn’t appear to be sufficiently conversant with the scholarly literature to put forth a strong, much less trenchant, criticism of (certain aspects of) the topic. Consistent with this are the following two facts: first, Pinker hardly cites any scholars within the field of existential risk studies; and second, the preface of the book suggests that Pinker didn’t consult a single existential risk scholar while preparing the manuscript. If one wishes to present a fair, ideologically neutral account of existential threats—especially if one’s purpose is to knock the topic down to size—then surely it behooves one to seek the advice of actual experts and peruse the relevant body of the most serious scholarship. This may sound harsh as stated but, as we will see, Pinker’s chapter expends considerable energy fallaciously beating to death a small village of straw men.

Not only does Pinker ignore the scholarly literature on existential risks; he often relies upon popular media articles and opinion pieces in Reason, Salon, Wired, Slate, The Guardian, and The New York Times to support his claims. (The Reason citation in particular is deeply problematic, as we will explore below.) Not all of these media platforms are created equal, of course, and in fact the Salon article that Pinker cites (as additional reading) is one that I wrote about superintelligence more than two years ago. But given the very general audience that I had in mind while writing it, it shouldn’t have ended up in an “authoritative” book like Pinker’s, or so I would argue. (I—and plenty of others with even more competence on the topic—have numerous peer-reviewed articles, book chapters, etc. on superintelligence! A serious analysis of this ostensible risk, which Pinker purports to provide, should have cited these, and only these, instead.)

But the problems with Pinker’s chapter are even more significant than this. The chapter also suffers from what I would describe as cherry-picked data, questionable citations, a few out-of-context quotes, and other scholarly infractions. One might argue that this is somewhat unsurprising given similar problems in Better Angels. For example, some investigative digging by Magdi Semrau, a communication sciences and disorders PhD student, finds that a single paragraph in Better Angels: (i) cites a non-scholarly book whose relevant citation (given Pinker’s citation) is of a discredited academic article; (ii) bases a cluster of propositions, which Pinker presents as fact, on two sources: (a) a mere opinion expressed by a well-known anti-feminist whose employer is the American Enterprise Institute, a conservative think tank, and (b) an op-ed piece also written by an anti-feminist crusader, published in the non-scholarly, partisan magazine City Journal; and (iii) references a survey from the Bureau of Justice Statistics but leaves out aspects of the survey that don’t support the narrative being spun. Furthermore, Semrau notes that Pinker includes data in a graph about homicide rates that is deeply flawed—and has been known to be flawed since 1998. Although Semrau has yet to organize these discoveries into a proper paper, they are sufficiently well-supported to warrant concern about the scholarly practices embraced in Better Angels—and thus Enlightenment Now. Indeed, they are precisely the sort of tendentious (if that’s not too loaded of a word) shortcuts that we will encounter many times below.

Given that Pinker’s chapter on existential threats is quite long and the process of responding to each paragraph is tedious (although perhaps the tedium will be even worse for the reader!), I have here only reproduced part of this chapter. If readers find it particularly useful, then I would consider responding to the rest of the chapter as well.

* * *

Pinker begins the chapter with:

But are we flirting with disaster? When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and says “So far so good” as he passes each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.

This gets the entire conversation off to a bad start. First, my reading of this chapter is that it’s targeting, at least in part, the field of “existential risk studies,” which has spawned a number of public discussions about biotechnology, synthetic biology, advanced nanotechnology, geoengineering, artificial intelligence, and so on. In fact, Pinker has elsewhere specifically attacked existential risk studies by calling its central concept (i.e., existential risks) a “useless category.”

If this reading is correct, then Pinker’s reference to “pessimists” is quite misleading. Many of the scholars who are the most concerned about existential risks are also pro-technology “transhumanists” and “techno-progressives”—in some cases, even Kurzweilian “singularitarians”—who explicitly hope, if not positively expect, technological innovation to usher in a techno-utopian future world marked by the elimination of all diseases, indefinite lifespans, “radical” cognitive and moral enhancements, mind-uploading, Dyson swarms, colonization of the galaxy and beyond, “radical abundance” (as Eric Drexler puts it), the creation of a type III (or higher) civilization (on the Kardashev scale), and so on. Indeed, most scholars working on existential risks unhesitatingly endorse the sort of Enlightenment progressionism for which Pinker evangelizes, even identifying such progress as a reason to take existing and emerging existential hazards seriously. I myself begin my book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks (hereafter, “Morality”) with an affirmation of scientific, technological, and moral progress over time, especially since the Enlightenment; and Nick Bostrom, a leading transhumanist who more or less founded the field of existential risk studies (along with John Leslie), has literally written an ebullient article titled “Letter from Utopia” that describes the unfathomably blissful lives of future posthumans, whom we could become if only we promote the values of technological progress and, in Bostrom’s words, have “the opportunity to explore the transhuman and posthuman realms.” One can be hopeful about a better future and still shout, “Oh my lord, there’s a lion running toward us!”

So, this is not an either/or situation—and this is why Pinker’s framing of the issue as an intellectual battle between optimists and pessimists distorts the “debate” from the start. This being said, there no doubt are, as Pinker gestures at below, neo-Luddites, romantics, environmentalists, and people espousing certain moral theories (e.g., antinatalism) who champion pessimistic views about humanity’s past and/or future. But the large majority of individuals who are worried about existential risks don’t fall within any of these categories. Rather, like the technocratic, idealist, neoliberal, space expansionist, visionary entrepreneur Elon Musk—who has repeatedly made anxious noises about the behemoth dangers of superintelligence—they see technology as a Janus-faced, double-edged sword (if readers don’t mind mixed metaphors).

We should also mention that yes, indeed, we are playing Russian roulette to some extent, although the “deadly” odds are not necessarily “bound to catch up to us”! (I don’t know of any prominent thinker in the field who believes this.) No species in our genus has ever before, in our ~2-million-year career on Earth, had to confront global-scale problems like anthropogenic climate change, the Anthropocene extinction, dual-use emerging technologies, and perhaps even computers whose problem-solving capabilities exceed those of the best humans in every cognitive domain. This is a historical fact, of course: we have no track record of surviving such risks. It follows that (i) given the astronomical potential value of the future (literally trillions and trillions and trillions of humans living worthwhile lives throughout the universe), and (ii) humanity’s growing ability to destroy itself through error, terror, global coordination failures, and so on, (iii) it would be extremely imprudent not to have an ongoing public and academic discussion about the number and nature of existential hazards and the various mechanisms by which we could prevent such risks from occurring. That’s not pessimism! It’s realism combined with the virtues of wisdom and deep-future foresight.

For half a century the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the Internet from their bedrooms.

A quick note about epistemology: it’s crucial for readers to recognize that, when it comes to evaluating the legitimacy of a given risk, its “sounds crazy” quality is irrelevant. Consider the statements: “Over geological time, one species can evolve into another” and “if your twin were to board a spaceship and fly to Saturn and back, she would have aged less than you.” Both sound—to naive ears “uncorrupted” by science—utterly absurd. Yet it is epistemically reasonable to accept them because the evidence and arguments upon which they’re founded are strong. Thus, don’t be fooled by the extent to which some emerging or anticipated future risks sound silly. Epistemology doesn’t care about what a proposition says (content); it cares about why one might accept it (reasons).

The sentinels for the familiar horsemen tended to be romantics and Luddites. But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end.

To my ear, the second sentence makes it sound like devising new doomsday scenarios is a hobby: something done for the fun of it, for its own sake. That’s not the case. As mentioned above, the future could contain immense amounts of moral, intellectual, scientific, etc. value; in Morality, I call this the “astronomical value thesis.” It follows that one of the most important tasks that anyone could engage in is to increase, even if by minuscule increments, the probability that humanity avoids an existential catastrophe. This idea is formalized in Nick Bostrom’s “maxipok rule,” which essentially states that “the loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole.” Thus, toward this end, a relatively tiny group of scholars have indeed labored to identify as many existential risk scenarios as possible—not to scare people, declare that “we’re all doomed,” or give existential riskologists one more reason to lie awake at night with sweaty palms and dilated pupils, but to devise a regimen of effective strategies for avoiding an existential catastrophe. Given what’s at stake, even a small reduction in overall existential risk could have an immense payoff.
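To make the arithmetic behind maxipok concrete, here is a minimal back-of-the-envelope sketch; the numbers are illustrative assumptions of mine, not figures from Bostrom or Pinker. Let V be the value of a flourishing long-term future, measured in worthwhile lives, and let p be the total probability of existential catastrophe, so that the expected value of the future is

\[ \mathbb{E}[\text{value}] = (1 - p)\,V. \]

Reducing p by a small increment δ raises this expected value by

\[ \Delta\mathbb{E}[\text{value}] = \bigl[1 - (p - \delta)\bigr]V - (1 - p)V = \delta\,V. \]

With a deliberately conservative placeholder of V = 10^16 worthwhile lives, even a tiny reduction of δ = 10^-6 in total existential risk yields an expected gain of 10^10 lives, which is why the maxipok rule treats risk reduction as a “dominant consideration.”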

In 2003, the eminent astrophysicist Martin Rees published a book entitled Our Final Hour in which he warned that “humankind is potentially the maker of its own demise” and laid out some dozen ways in which we have “endangered the future of the entire universe.” For example, experiments in particle colliders could create a black hole that would annihilate the Earth, or a “strangelet” of compressed quarks that would cause all matter in the cosmos to bind to it and disappear.

Note that these statements are true: particle colliders could, in theory, destroy the Earth, although this appears unlikely—but see this important article by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg for complications.

Rees tapped a rich vein of catastrophism.

This short sentence strikes me as overly dismissive. Again, the entire point of existential risk studies—a nascent field of empirical and philosophical inquiry that receives a relative pittance of funding and has fewer publications than the subfield of entomology dedicated to studying dung beetles—is to better understand the various hazards that could seriously and permanently affect the well-being of our species. That’s it.

The book’s Amazon page notes, “Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War.” Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.

(Note that “Center” in “Center for the Study of Existential Risk” should be “Centre.”)

How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and this chapter contains no such assurance. But I will lay out a way to think about them, and examine the major menaces. Three of the threats—overpopulation, resource depletion, and pollution, including greenhouse gases—were discussed in chapter 10, and I will take the same approach here. Some threats are figments of cultural and historical pessimism. Others are genuine, but we can treat them not as apocalypses in waiting but as problems to be solved.

This last sentence seems to knock down a(nother) straw man. I don’t know of a single scholar in the field—and this is not from lack of familiarity—who believes that there are “apocalypses in waiting.” Even when Bostrom writes that we should recognize the “default outcome” of machine superintelligence as “doom,” he’s saying that unless we solve the control problem, the consequences will, almost by definition, be existential, so let’s allocate the necessary resources to solve the control problem, please? And he provides an entire book of rather nuanced, sophisticated, and philosophically formidable arguments to support this conclusion. (We will return to this issue later.)

The reigning view among existential risk scholars is thus precisely what Pinker advocates: secular apocalypses like nuclear winters, engineered pandemics, and superintelligence takeovers are seen as problems to be solved. Since one can’t solve these problems without doing the relevant research—or communicating with the public so that they vote for political leaders who understand and care about the relevant challenges—the fledgling “interdiscipline” of existential risk studies was born! Yet Pinker writes that:

At first glance one might think that the more thought we give to existential risks, the better. The stakes, quite literally, could not be higher. What harm could there be in getting people to think about these terrible risks? The worst that could happen is that we would take some precautions that turn out in retrospect to have been unnecessary.

Note that the phrases “thought we give to existential risks” and “getting people to think about these terrible risks” are ambiguous. By “we” and “people,” is Pinker referring to (Group A) scientists, philosophers, policymakers, and other specialists, or (Group B) the general public comprised of non-experts? This is important to disambiguate because there could be quite distinct reasons for promoting the concept of existential risks to one group but not the other (or vice versa, or neither). Conflating the two is thus problematic, as the very next paragraph illustrates:

But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic. The nuclear arms race of the 1960s, for example, was set off by fears of a mythical “missile gap” with the Soviet Union. The 2003 invasion of Iraq was justified by the uncertain but catastrophic possibility that Saddam Hussein was developing nuclear weapons and planning to use them against the United States. (As George W. Bush put it, “We cannot wait for the final proof—the smoking gun—that could come in the form of a mushroom cloud.”) And as we shall see, one of the reasons the great powers refuse to take the common-sense pledge that they won’t be the first to use nuclear weapons is that they want to reserve the right to use them against other supposed existential threats such as bioterror and cyberattacks. Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it.

If Pinker means to include the public in this passage, one could argue that what matters isn’t that the public is warned about “hypothetical disasters” but how they are warned. After all, as mentioned above, the public is responsible for deciding who ends up with the political clout to catalyze societal change—indeed, this is one reason that (the now-disgraced) Lawrence Krauss once told me in an interview about the Doomsday Clock:

As responsible citizens, we can vote. We can pose questions to our political representatives. And that’s a major factor. Politicians actually are accountable, and if lots of people phone them with questions or issues, politicians will listen. The second thing is that we all have access to groups, although some of us have bigger soapboxes than others. School groups, church groups, book clubs—we can all work to educate ourselves and our local surroundings, on a personal basis, to address these issues. The last thing anyone should feel is completely hopeless or powerless. We certainly affect our daily lives in how we utilize things, but also we affect our community in various ways. So, we have to start small, and each of us can do that. And, of course, if you’re more interested [in working to reduce the threat of a catastrophe], you can organize a local group and have sessions in which you educate others about such issues. The power of voting and the power of education—those are the two best strategies.

Furthermore, George Bush’s politically motivated and often mendacious exclamations about Saddam—some of which were based on cherry-picked intelligence—are quite unlike the rather “clinical” warnings of scholars like Lord Martin Rees, Nick Bostrom, Stephen Hawking, Anders Sandberg, Jason Matheny, Richard Posner, Max Tegmark, and countless climatologists, ecologists, biotechnologists, synthetic biologists, nanotechnologists, and other experts. One might also wonder why Pinker ignores those instances when dire warnings actually did gesture at some real hazard. For example, many observers made (what critics at the time could have described as) “alarmist” or “hyperbolic” claims about the march of Nazi Germany in the 1930s; yet Neville Chamberlain conceded lands to Hitler on the (false) assumption that this would mollify him, and the US didn’t enter the war until 1941 (after the attack on Pearl Harbor). In other words, if only such warnings had been heeded, World War II might not have left some 80 million people in the muddy grave. Furthermore, concerns about the catastrophic effects of ozone depletion during the 1980s led to the Montreal Protocol of 1987, which effectively averted what most experts agree would have been a disastrous state of affairs for humanity.

So, one could easily retort that “apocalyptic thinking can also have serious upsides” by citing instances in which shouts about death and doom either did obviate major calamities or probably could have, had they been taken seriously. In his Global Catastrophic Risks chapter about millennialist tendencies, James Hughes examines a number of historical cases that lead him to a similar conclusion, namely, that

millennialist energies can overcome social inertia and inspire necessary prophylaxis and force recalcitrant institutions to necessary action and reform. In assessing the prospects for catastrophic risks, and potentially revolutionary social and technological progress, can we embrace millennialism and harness its power without giving in to magical thinking, sectarianism, and overly optimistic or pessimistic cognitive biases? … I believe so: understanding the history and manifestations of the millennial impulse, and scrutinizing even our most purportedly scientific and rational ideas for their signs, should provide some correction for their downsides.

A second hazard of enumerating doomsday scenarios is that humanity has a finite budget of resources, brainpower, and anxiety. You can’t worry about everything. Some of the threats facing us, like climate change and nuclear war, are unmistakable, and will require immense effort and ingenuity to mitigate. Folding them into a list of exotic scenarios with minuscule or unknown probabilities can only dilute the sense of urgency.

The problem with this passage is the word—also used above—“exotic.” The fact is that most serious analyses of “dual-use” emerging technologies, both from intelligence agencies and the academic community, conclude that they could carry far more profound risks to the long-term survival of humanity than climate change or nuclear war (the two biggest existing risks). Why? One reason is that—as we’ll discuss at the end of this document—such technologies are simultaneously becoming more powerful and accessible. The result is that a growing number of lone wolves and terrorist organizations are gaining the technological capacity to wreak ever-more devastating harm on civilization.

Put differently, consider what Leó Szilárd famously wrote after he successfully initiated a chain reaction with uranium in 1939: “We turned the switch and saw the flashes. We watched them for a little while and then we switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.” This captures precisely what many scholars who study anthropogenic existential risks in particular feel: unless humanity seriously examines how malicious agents could misuse and abuse (i) emerging artifacts like CRISPR/Cas9, base editing, digital-to-biological converters, “slaughterbots,” advanced AI systems, SILEX (i.e., “separation of [uranium] isotopes by laser excitation”), and (ii) future anticipated technologies like autonomous nanobots, nanofactories, and “stratospheric sulfate aerosol deposition” techniques (for the purpose of geoengineering), then the world may be headed for grief. These are not “exotic” dangers in the sense that Pinker seems to mean: they concern dual-use technologies currently being developed and some that appear very likely, if not almost certain, to be developed in the foreseeable future.

Perhaps the only risk discussed in the literature that could aptly be described as “exotic” is the possibility that we live in a computer simulation and it gets shut down. Yet even this scenario is based on a serious philosophical argument—the “simulation argument,” of which one aspect is the “simulation hypothesis”—that has not yet been refuted, at least to the satisfaction of many philosophers. To my eye, the word “exotic” is far too facile, and it suggests (to me) that Pinker has not seriously perused the body of scholarly work on existential dangers to humanity. (For a comprehensive list of risk scenarios that are taken seriously by the community, see my book Morality and this report by the Global Challenges Foundation.)

Recall that people are poor at assessing probabilities, especially small ones, and instead play out scenarios in their mind’s eye. If two scenarios are equally imaginable, they may be considered equally probable, and people will worry about the genuine hazard no more than about the science-fiction plotline. And the more ways people can imagine bad things happening, the higher their estimate that something bad will happen.

This is why cognitive biases are so strongly emphasized within the field. Indeed, there’s an entire chapter dedicated to this topic in the seminal Global Catastrophic Risks edited collection, and I begin Morality with a section in Chapter 1 titled “Biases and Distortions,” about the many ways that bad mental software can lead us to incorrect conclusions—including conclusions that the overall risk to human survival is high or low.

And that leads to the greatest danger of all: that people will think, as a recent New York Times article put it, “These grim facts should lead any reasonable person to conclude that humanity is screwed.”

This is a somewhat odd article to cite here. First of all, it’s a short review of the journalist Dan Zak’s book Almighty: Courage, Resistance, and Existential Peril in the Nuclear Age. It’s not an article about “existential threats” in general. Second, by “screwed,” the author of the review, Kai Bird, isn’t saying that humanity is destined to go extinct or civilization is bound to collapse next year; he’s merely referring to the use of one or more nuclear weapons. And third, the larger point that he’s making is simply that a nuclear weapon being detonated appears to be inevitable given that (i) “a quarter-century after the end of the Cold War, nine nations possess some 16,000 nuclear warheads; the United States and Russia each have more than 7,000 warheads” and “four countries—North Korea, Pakistan, India and Israel—have developed nuclear arsenals and refuse to sign the Treaty on Non-Proliferation of Nuclear Weapons,” and (ii) a nuclear bomb could be smuggled into New York City by only “three or four men.” To support the latter claim, Bird quotes Robert Oppenheimer who, in response to a question about whether this is possible, avers “of course it could be done.” As Bird puts it—and this statement is almost certainly true, as simple arithmetic affirms—“the odds are that these weapons will be used again, somewhere and probably in the not-so-distant future.” It’s hard to see how Pinker’s “greatest danger of all” statement follows from this citation, since the book review isn’t about multiple risk scenarios but the specific risk of nuclear conflict (which is indeed serious).
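For what it’s worth, the “simple arithmetic” here is just the compounding of a small annual probability; the rate below is an assumed illustration of mine, not an estimate from Bird or anyone else. If the annual probability of a nuclear detonation is some constant p, then the probability of at least one use within n years is

\[ P(\text{at least one use in } n \text{ years}) = 1 - (1 - p)^{n}. \]

With, say, p = 0.01 per year, the probability of at least one detonation within a century is 1 − 0.99^100 ≈ 0.63, so even modest annual risks make “somewhere and probably in the not-so-distant future” the way to bet.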

Here we should also reiterate that the large majority of “technodoomsters”—Pinker’s coinage—who are nervous about existential risks do not believe that humanity is “screwed,” at least not in the sense that our extinction is certain within the coming decades or centuries (or even before some 10^40 years in the future, at which point all protons in the universe will have decayed). I can’t think of a single notable scholar who holds this view. There are a few conspiratorial, fringe figures like Guy McPherson who’ve made such claims but, as such, these individuals are not at all representative of the far more modest, tentative “mainstream” positions within the field of existential risk studies. Indeed, perhaps the most radical estimate from a respectable scholar comes from Lord Martin Rees, who offers the conditional claim that unless humanity alters the developmental trajectory of civilization in the coming decades, “the odds are no better than fifty-fifty that our present civilisation on Earth will survive to the end of the present century.” This is not a fatalistic declaration that we’re “screwed.” Rather, Rees’s warning is more like a doctor telling a patient: “If you don’t make certain lifestyle changes right away, then there’s a roughly 50 percent chance that you’ll perish”—in contrast to, “No matter what you do at this point, you’re a goner, sucker!” Alerting others that humanity is in great danger is not tantamount to declaring that all hope is lost.
