Intelligence Explosion analysis draft: introduction

post by lukeprog · 2011-11-14T09:50:22.531Z · LW · GW · Legacy · 13 comments

Contents

  Singularity Skepticism
  References for this snippet
13 comments

I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on.

This snippet is a possible introduction to the analysis article. Its purpose is to show readers that we aim to take seriously some common concerns about singularity thinking, to bring readers into Near Mode about the topic, and to explain the purpose and scope of the article.

Note that the target style is serious but still more chatty than a normal journal article.

_____

 

 

The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”

Vernor Vinge

 

 

Humans may create human-level artificial intelligence in this century (Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011). Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010).

How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Yudkowsky 2001, 2008a; Russell and Norvig 2010, sec. 26.3); we will build on their thinking in our review of the subject.

 

Singularity Skepticism

Many are skeptical of Singularity arguments because they associate such arguments with detailed storytelling — the “if and then” fallacy of “speculative ethics” by which an improbable conditional becomes a supposed actual (Nordmann 2007). They are right to be skeptical: hundreds of studies show that humans are overconfident in their beliefs (Moore and Healy 2008), regularly overestimate the probability of detailed visualized scenarios (Tversky and Kahneman 2002), and tend to seek out only information that confirms their current views (Nickerson 1998). AI researchers are not immune to these errors, as evidenced by a history of over-optimistic predictions going back to the 1956 Dartmouth conference on AI (Dreyfus 1972).

Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future, even with little data. When Leo Szilard conceived of the nuclear chain reaction, he realized its destructive potential and filed his patent in a way that kept it secret from the Nazis (Rhodes 1995, 224–225). Svante Arrhenius' (1896) models of climate change lacked modern climate theory and data but, by making reasonable extrapolations from what was known of physics, still managed to predict (within 2°C) how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997). Norman Rasmussen's (1975) analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident that previous experts had not (McGrayne 2011, 180).
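(As a rough illustration of the kind of extrapolation involved, here is a minimal sketch of the logarithmic CO2-warming relationship Arrhenius worked with. The sensitivity figures are rough, commonly cited values chosen purely for illustration; they are not drawn from the sources cited above.)

    import math

    # Warming from a change in CO2 concentration is roughly proportional to the
    # logarithm of the concentration ratio: delta_T ~ S * log2(C / C0), where S
    # is the warming per doubling of CO2 ("climate sensitivity").
    def warming_from_co2(co2_ratio, sensitivity_per_doubling):
        """Equilibrium warming (deg C) for a given ratio of CO2 concentrations."""
        return sensitivity_per_doubling * math.log2(co2_ratio)

    # Illustrative sensitivity values only: ~5-6 deg C per doubling is the figure
    # usually attributed to Arrhenius (1896); ~3 deg C per doubling is a typical
    # modern central estimate.
    for label, sensitivity in [("Arrhenius-style estimate", 5.5),
                               ("typical modern estimate", 3.0)]:
        print(f"{label}: doubling CO2 -> ~{warming_from_co2(2.0, sensitivity):.1f} deg C")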

In planning for the future, how can we be more like Rasmussen and less like the Dartmouth conference? For a start, we can apply the recommendations of cognitive science on how to ameliorate overconfidence and other biases (Larrick 2004; Lilienfeld, Ammirati, and Landfield 2009). In keeping with these recommendations, we acknowledge unknowns and do not build models that depend on detailed storytelling. For example, we will not assume the continuation of Moore’s law, nor that hardware trajectories determine software progress. Avoiding nonsense should not require superhuman reasoning powers; it should only require that we avoid believing we know something when we do not.

One might think such caution would prevent us from concluding anything of interest, but in fact it seems that intelligence explosion may be a convergent outcome of many or most future scenarios. That is, an intelligence explosion may have fair probability, not because it occurs in one particular detailed scenario, but because, like the evolution of eyes or the emergence of markets, it can come about through many different paths and can gather momentum once it gets started. Humans tend to underestimate the likelihood of such “disjunctive” events, anchoring on the improbability of each individual path (Tversky and Kahneman 1974). We suspect the considerations in this paper may convince you, as they did us, that this particular disjunctive event (intelligence explosion) is worthy of consideration.
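To see how a disjunctive event can be fairly probable even when each individual path to it looks unlikely, consider a toy calculation (the per-path probabilities below are arbitrary illustrative numbers, not estimates of anything in particular):

    # If an outcome can arise through any of several independent paths, its overall
    # probability can be substantial even when each path looks unlikely on its own.
    # The per-path probabilities here are arbitrary numbers chosen for illustration.
    path_probabilities = [0.05] * 10  # ten independent paths, each judged only 5% likely

    p_no_path = 1.0
    for p in path_probabilities:
        p_no_path *= (1.0 - p)  # probability that this particular path fails

    print(f"P(at least one path succeeds) = {1.0 - p_no_path:.2f}")  # about 0.40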

First, we provide evidence suggesting that, barring global catastrophe and other disruptions to scientific progress, there is a significant probability we will see the creation of digital intelligence within a century. Second, we suggest that the arrival of digital intelligence is likely to lead rather quickly to intelligence explosion. Finally, we discuss the possible consequences of an intelligence explosion and what actions we can take now to influence those outcomes.

These questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, can only be to provide a quick survey of the issues involved. We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.

 

References for this snippet

13 comments

Comments sorted by top scores.

comment by steven0461 · 2011-11-14T20:20:25.012Z · LW(p) · GW(p)

Svante Arrhenius' (1896) models of climate change lacked modern climate theory and data but, by making reasonable extrapolations from what was known of physics, still managed to predict (within 2°C) how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997).

This makes it sound like we've now observed how much warming resulted from a doubling of CO2, and Arrhenius's estimate was within 2°C of the measured value. That is not the case. Rather, we have models that give a range of estimates that's a few degrees wide (which makes "within 2°C" harder to interpret). So you'll want to say something like, "still managed to be within 2°C of typical modern estimates" (if that's accurate).

Please nobody use this as an excuse to discuss global warming.

ETA an amusing fact from Wikipedia:

Arrhenius expected CO2 doubling to take about 3000 years; it is now estimated in most scenarios to take about a century.

Replies from: Manfred
comment by Manfred · 2011-11-15T00:55:20.048Z · LW(p) · GW(p)

Arrhenius expected CO2 doubling to take about 3000 years; it is now estimated in most scenarios to take about a century.

Good man with atmospheric physics, not so great at predicting the fossil fuel economy :P

Anyhow, Arrhenius' estimate being close involved plenty of luck, with bad spectroscopic data canceling out the effects of simplifications - he could have been a factor of 4 off if it had gone the other way. So if we're to take the lesson of Arrhenius, the recipe for predictive success is not cognitive science; it's "use solid physical simplifications to make estimates that are correct within a factor of 4, and then get remembered for the ones that wind up being close."

Of course, that works better when you have physical simplifications to make.

comment by XiXiDu · 2011-11-14T10:45:51.829Z · LW(p) · GW(p)

Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future...

If you read a thousand classic science fiction books, one of them will have made a somewhat accurate prediction about the future. That Leo Szilard or Svante Arrhenius turned out to be right doesn't mean that we can usefully reason about the future. You have to actually show that, without hindsight bias, there have been people whose success at predicting the future could itself have been predicted, rather than cherry-picking a few successful examples from among millions of unsuccessful ones.

Replies from: Grognor
comment by Grognor · 2011-11-14T11:23:25.770Z · LW(p) · GW(p)

I thought of this as well, and I also thought that this is not a good criticism, because this introduction explicitly poses the question of what those Szilards and Arrheniuses did right, and what others did wrong. It is implicit, though not obvious, that such questions may result in the answer, "They did nothing better than anyone else; but they were lucky," though I do not actually suspect this to be the case. Perhaps it could be made explicit.

comment by Luke_A_Somers · 2011-11-14T16:29:21.445Z · LW(p) · GW(p)

Norman Rasmussen's (1975) analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident that previous experts had not (McGrayne 2011, 180).

There had definitely been nuclear accidents before 1975 - just not major civilian power plant nuclear accidents.

comment by [deleted] · 2011-11-14T13:02:26.010Z · LW(p) · GW(p)

n=1, I liked this a lot.

comment by lessdazed · 2011-11-15T17:37:07.126Z · LW(p) · GW(p)

or most future scenarios

...most plausible future scenarios. (?) I would take out "or most".

Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010)...Finally, we discuss the possible consequences of an intelligence explosion and which actions we can take now to influence those results.

Is the idea of a "technological Singularity" different from a combination of predictions about technology and predictions about its social and political effects? An intelligence explosion could be followed by little changing, if for example all human-created AIs tended to become the equivalent of ascetic monks. That being so, I would start with the technological claims and make them the focus by not emphasizing the "Singularity" aspect, a Singularity being a situation after which the future will be very different from before.

comment by Kaj_Sotala · 2011-11-15T06:22:43.971Z · LW(p) · GW(p)

Do you want to refer to the intelligence explosion as a Singularity at all? It might be clearer if you didn't, to avoid confusing people who have a more Kurzweilian conception of the term.

Then again, the fact that some previous work does use the term might mean that you need to do so, also. But in that case, you should at least briefly make it explicit that e.g. the "intelligence explosion Singularity" and the "accelerating change Singularity" are two different things.

comment by spuckblase · 2011-11-15T09:59:10.789Z · LW(p) · GW(p)

going back to the 1956 Dartmouth conference on AI

maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI

comment by Kaj_Sotala · 2011-11-15T06:19:08.094Z · LW(p) · GW(p)

This is a little defensive, and I'm not entirely sure whether the approach is a good or a bad thing. As a reader, I would prefer the text to get right to the "interesting stuff", instead of spending time on what feels like only a remotely related tangent.

I'm not sure of how likely this introduction is to actually persuade people who are skeptical about Singularity predictions. But as usual, it will probably have a better effect on fence-sitters and people who haven't encountered the debate before.

comment by amcknight · 2011-11-15T01:41:43.249Z · LW(p) · GW(p)

You switch between saying "intelligence explosion" and "an intelligence explosion". I'd stick to just one.

comment by Grognor · 2011-11-14T11:30:37.065Z · LW(p) · GW(p)

Note that the target style is serious but still more chatty than a normal journal article.

It is unclear to me why this is dichotomized at all. Eliezer himself often (usually with the analogy of giving a lecture in a clown suit) discriminates between seriousness and solemnity. It appears that what you are looking for is something that appeals to members of academia, while still being readable to the layman.

I have not read very many journal articles, or written any at all, so I can't speak to how it appeals to academia, but I'd say that the readability goal has very much been accomplished, though as a fairly typical Less Wrong user I might find it more readable than others would; however, if the goal is to make it readable to non-academics of Less Wrong caliber, the style is exactly right. It reads like a less self-referential Selfish Gene (noting also that I haven't read very many books, so this comparison might be worthless).

Replies from: Grognor
comment by Grognor · 2011-11-14T16:16:10.968Z · LW(p) · GW(p)

This comment was at one point at +1, so either the up-voter retracted the vote, or two people down-voted the comment. I would appreciate it if someone could explain why.