"How We're Predicting AI — or Failing to"
post by lukeprog · 2012-11-18T10:52:00.119Z · LW · GW · Legacy · 18 comments
The new paper by Stuart Armstrong (FHI) and Kaj Sotala (SI) has now been published (PDF) as part of the Beyond AI conference proceedings. Some of these results were previously discussed here. The original predictions data are available here.
Abstract:
This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analysing them. It will propose a variety of theoretical tools for analysing, judging and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.
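The clustering claim is easy to check against the prediction database linked above. Below is a minimal sketch in Python, assuming a hypothetical CSV with columns year_made, year_predicted, and expert; the real dataset's layout may differ, so treat the field names as placeholders.

```python
# Minimal sketch of the clustering analysis described in the abstract.
# Assumes a hypothetical CSV with columns: year_made, year_predicted, expert.
# (The actual dataset linked above may use a different layout.)
import csv
from collections import Counter

def years_until_ai(path="ai_timeline_predictions.csv"):
    """Split predicted years-to-AI into expert and non-expert lists."""
    expert, non_expert = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gap = int(row["year_predicted"]) - int(row["year_made"])
            (expert if row["expert"] == "True" else non_expert).append(gap)
    return expert, non_expert

def bucket(gaps, width=5):
    """Count predictions in 5-year buckets: 0-4, 5-9, ..., 15-19, 20-24, ..."""
    return Counter((g // width) * width for g in gaps)

if __name__ == "__main__":
    expert, non_expert = years_until_ai()
    # The paper's claim: the 15-25 year range dominates for both groups.
    print("expert:", sorted(bucket(expert).items()))
    print("non-expert:", sorted(bucket(non_expert).items()))
```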
18 comments
Comments sorted by top scores.
comment by gwern · 2012-11-18T18:47:10.388Z · LW(p) · GW(p)
It's a good paper overall, and I'm glad to see it's been published - especially the Maes-Garreau material! (I wonder what Kevin Kelly made of our results? His reaction would've been neat to mention.)
But reading it all in one place, I think one part seems pretty weak: the criticism of the 'expert' predictions. It seems to me like there ought to be more rigorous forms of assessment, and I wonder about possible explanations for the clumping at 20+ years: the full median-estimate graph seems to show a consistent expert trend post-1970s to put AI at x-2050 (I can't read the dates because the graphs are so illegible, what the heck?) and also shows many recent predictions. Perhaps there really is a forming expert consensus and the clump is due to the topic gaining a great deal of attention recently, and then the non-expert predictions are just taking their cue from the experts (as one would hope!)
Replies from: rompi, Stuart_Armstrong, rompi
↑ comment by rompi · 2012-11-18T19:09:14.042Z · LW(p) · GW(p)
Hi
Re the graph quality: I'm REALLY sorry and I have to apologize to Stuart for the poor quality of images - it's kind of my fault... When I typeset the final version of the proceedings, they were in A5 format, but on an A4 page. We sent it to the printing company and they ran it through some program that cropped the pages to A5. Alas, this program also terribly compressed the images and I didn't check it carefully before letting them print it. So this is it... Once more, sorry about that.
The only thing I can do is to fix it in this electronic version - will be done asap.
Anyway, thanks Stuart for your great talk!
Best wishes
Jan Romportl
Replies from: gwern
↑ comment by gwern · 2012-11-18T19:19:30.402Z · LW(p) · GW(p)
Well, at least it's partially fixed... (Actually this reminds me that, as ElGalambo pointed out earlier, I should update the Wikipedia Maes-Garreau article.)
↑ comment by Stuart_Armstrong · 2012-11-19T12:52:10.058Z · LW(p) · GW(p)
The original data can be found via: http://lesswrong.com/lw/e79/ai_timeline_prediction_data/
(much better to use that than to squint at the pictures!)
My subjective impressions: predictors very rarely quote or reference each other when making predictions. Many predictions seem purely an individual guess. I've seen no sign of an expert consensus, or of much experts critiquing or commending each other's work. I really feel that predicting AI has not been seen as something where anyone should listen to other people's opinions. There are some exceptions - Kurzweil, for instance, seems famous enough that people are willing to quote his estimates, usually to claim he got it wrong - but too few.
Replies from: gwern, Kaj_Sotala
↑ comment by gwern · 2012-11-19T17:20:27.442Z · LW(p) · GW(p)
My subjective impressions: predictors very rarely quote or reference each other when making predictions. Many predictions seem purely an individual guess. I've seen no sign of an expert consensus, or of much experts critiquing or commending each other's work. I really feel that predicting AI has not been seen as something where anyone should listen to other people's opinions.
They may not cite each other, but the influence can still be there as background reading etc. I may not cite Legge when I think there's a good chance of breakthroughs in the 2020s, but the influence is there (well, it was until I mentioned him just now). To give a real-world example, compiling http://www.gwern.net/2012%20election%20predictions I know that the forecasters were all reading each other's blogs or Twitters etc. because in scouring their sites I see enough cross-links or similar topics, but anyone who looked at just the relevant pages of predictions or prediction CSVs would miss that completely and think they were deriving their similar predictions from independent models.
I think there's a lot of shared ideas and reading which is rarely explicitly cited in the same passage as a specific prediction, with the exception of really offensive estimates like Kurzweil's self-promotion (have you been reading the reviews of his latest book? Everyone's dragging out Hofstadter's old dog shit quote, and one can't help but feel he would not have been so explicit and crude if Kurzweil didn't really rub him the wrong way). But I don't know how one would test the consensus idea other than waiting and seeing whether expert predictions continue to cluster around 2040 even as we hit the 2020s and 2030s.
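One way to probe the consensus question with the existing database rather than waiting: group the predictions by the decade in which they were made and compare the median predicted arrival date across decades. A stable calendar date would suggest a forming consensus; a date that drifts forward in lockstep with the year of prediction would suggest the perpetual "20 years away" pattern. A rough sketch, reusing the same hypothetical CSV layout as above:

```python
# Rough sketch: does the median predicted arrival date stay anchored (a forming
# consensus) or drift forward with the year the prediction was made (the
# perpetual "20 years away" pattern)? Same hypothetical CSV layout as above.
import csv
from collections import defaultdict
from statistics import median

def median_prediction_by_decade(path="ai_timeline_predictions.csv"):
    by_decade = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            decade = (int(row["year_made"]) // 10) * 10
            by_decade[decade].append(int(row["year_predicted"]))
    return {d: median(v) for d, v in sorted(by_decade.items())}

if __name__ == "__main__":
    for decade, med in median_prediction_by_decade().items():
        print(f"made in the {decade}s: median predicted date {med}")
```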
↑ comment by Kaj_Sotala · 2012-11-20T05:17:32.017Z · LW(p) · GW(p)
I'm actually thinking that the "non-experts were no better than experts" bit is maybe a little misleading, as I remember seeing a lot of the non-experts base their predictions on what experts had been saying.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-11-20T08:15:27.038Z · LW(p) · GW(p)
Really? That wasn't my recollection. But you probably saw the data more than I did, so I'll bear that in mind in future!
comment by beoShaffer · 2012-11-18T19:57:57.451Z · LW(p) · GW(p)
Overall, a very good paper, both from an AI perspective and in terms of demonstrating how to apply various epistemic techniques that aren't nearly as widespread as they should be. However, I have seen a few typos and other problems. The bottom of page 64 says, "Moore's law could be taken as an ultimate example of grid:" I think that should be grind. Also, I liked
Care must be taken when applying this method: the point is to extract a useful verifiable prediction, not to weaken or strengthen a reviled or favoured argument. The very first stratagems in Schopenhauer's "The Art of Always Being Right" [17] are to extend and over-generalise the consequences of your opponent's argument; conversely, one should reduce and narrow down one's own arguments. There is no lack of rhetorical tricks to uphold one's own position, but if one is truly after the truth, one must simply attempt to find the most reasonable empirical version of the argument; the truth-testing will come later.
But I wish the paper had been slightly more specific about how the authors avoided this failure mode.
comment by noen · 2012-11-21T20:03:24.604Z · LW(p) · GW(p)
I predict that the search for AI will continue to live up to its proud tradition of failing to produce a viable AI for the indefinite future. Since the Chinese Room argument does refute the strong AI hypothesis, no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.
I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract. Corollary: A stable nuclear fusion reactor will be built before a truly conscious artificial mind is. Neither of which will happen in the lifetime of anyone reading this.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-11-21T21:03:38.530Z · LW(p) · GW(p)
Since the Chinese Room argument does refute the strong AI hypothesis, no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.
There are a lot of objections to the Chinese room, but in this context, the primary issue is that the Chinese room doesn't matter: even if the AI isn't conscious in some deep philosophical sense, if it has all the same results, then for humans the dangers and promises of strong AI are identical.
I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract.
"Continue" implies this is currently the case. Do you have evidence for this? My impression is that most AI research is going to practical machine learning, which is currently being used for many real-world applications. Many people in the machine learning world state that any form of general AI is extremely unlikely to happen soon, so what evidence for this claimed proportion is there?
Corollary: A stable nuclear fusion reactor will be built before a truly conscious artificial mind is.
I don't see how this is a corollary. If you mean to state it as an example of a comparison to what sort of technology would be needed, that might make some sense. However, we actually already have stable fusion reactors. Examples include tabletop designs that can be made by hobbyists. Do you mean something like a fusion reactor that produces more useful energy than is put into it?
Replies from: noen
↑ comment by noen · 2012-11-22T19:06:40.810Z · LW(p) · GW(p)
Is there something that it is like to be Siri? Still, Siri is a tool and potentially a powerful one. But I feel no need to be afraid of Siri as Siri, any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them, not the tools themselves. Focusing on the tools, then, does not address the root issue, which is human nature and what social structures we have in place to make sure some clown doesn't build a nuke in his basement.
Did ELIZA present the "dangers and promises" of AI? Weizenbaum's secretary thought so. She thought it passed the Turing test. Did it? Will future AI tools really be indistinguishable from living beings? I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.
If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?
--
"so what evidence for this claimed proportion is there?"
Oh, I was just being flippant. It is a law of the universe that if there is a joke to be made I must at least try for it. ;)
"I don't see how this is a corollary. "
Yeah, also not serious. I meant only to mock the eternal claim of fusion proponents that it is always "just around the corner". I remember as a child reading breathless articles in Popular Science in the '70s about the imminent breakthroughs in nuclear fusion "any day now". Just like AI researchers of that day. And 40 years later little has changed.
I do not mistake Google translate for a conscious entity. Neither does anyone else. I can see no reason to believe that will change in the next 40 years.
"Examples include tabtletop designs that can be made by hobbyists."
Well now, that was cool. But yeah, no net increase in energy. Still, good for him.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-11-23T17:30:12.640Z · LW(p) · GW(p)
Is there something that it is like to be Siri?
I'm not sure what you mean by this question. Is this a variant of what it is like to be a bat? There's a decent argument that such questions don't make sense. But this doesn't matter much: whether some AI has qualia or not doesn't change any of the external behavior, so for most purposes, like existential risk, it doesn't matter.
I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.
This and most of the rest of your post are assertions, not arguments.
If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?
First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn't an attempt to explain consciousness. It doesn't care about consciousness at all.
Replies from: noen
↑ comment by noen · 2012-11-24T15:30:48.274Z · LW(p) · GW(p)
"Is this a variant of what it is like to be a bat?"
Is there something that it is like to be you? There are also decent arguments that qualia do matter. It is hardly a settled matter. If anything, the philosophical consensus is that qualia are important.
"Whether some AI has qualia or not doesn't change any of the external behavior,"
Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.
If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me with a hello? No, it cannot, because it cannot intend to say hello.
"Behaviorism as that word is classically defined isn't an attempt to explain consciousness."
Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, "Behaviorism in philosophy". Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain human behavior was to describe human behavior.
"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.
If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.
Don't be a blockhead. ;)
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-11-24T20:42:02.801Z · LW(p) · GW(p)
Is there something that it is like to be you?
I'm not sure this question is any better formed. "What it is like to be an X" doesn't seem to have any coherent meaning when one presses people about what they actually are talking about.
If anything, the philosophical consensus is that qualia is important.
Taking qualia seriously as a question is a distinct claim from qualia actually having anything substantial to do with consciousness. I'm not sure of specific acceptance levels of qualia, but the fact is that a majority of philosophers either accept physicalism or lean towards it. So I'm not sure how to reconcile that with your claim.
Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.
On the contrary, most people don't care whether it is conscious in some deep philosophical sense. In fact, functional AIs that are completely not conscious have certain advantages, such as being less of an ethical problem when sending them to be destroyed (say, as robot soldiers, or as probes to other planets). Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant singularity. Whether the AI is truly conscious or not has nothing to do with that worry.
Wikipedia? Really?
Yes, for many purposes Wikipedia is quite useful and reasonably reliable as a source. In many fields (math and chemistry for example) articles have been written by actual experts in the fields.
Did you even bother to read the page or are you just pointing to something on wikipedia and believing that constitutes an argument?
My primary intent for the link was for its use in the introduction, where it uses the fairly standard notion that "psychology should concern itself with the observable behavior of people and animals, not with unobservable events that take place in their minds." It is incidentally useful to understand that behaviorism, in most senses of the term, went away not due to arguments about things like qualia, but rather because advances in neuroscience and related areas allowed us to get much more direct access to what was going on inside. At some level, psychology is still controlled by behaviorism if one interprets that to include brain activity as behavior.
And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness. It is essentially an argument that psychology doesn't need to explain consciousness. These aren't the same thing.
"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.
So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion. But I don't see where in the notion of "deduction" consciousness comes in. Are you using some non-standard definition of "use" or of "umbrella"?
If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.
Don't be a blockhead. ;)
So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon.
Replies from: noen
↑ comment by noen · 2012-11-25T04:41:07.127Z · LW(p) · GW(p)
"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."
Do you mean that people don't care if they are philosophical zombies or not? I think they care very much. I also think that you're eliding the point a bit by using "deep" as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And... and this is important: it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.
That's why I think this is so important. You have to get things right, get your basic "vector" right; otherwise you'll get lost, because the problem is so large that once you make a mistake about what it is you are doing, you're done for. The "brain stabbers" are in my opinion headed in the right direction. The "let's throw more parallel processors connected in novel topologies at it" crowd are not.
"Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity."
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."
Yes it is. In every lecture I have heard in which the history of the philosophy of mind is recounted, the behaviorism of the '50s and early '60s and its main arguments for and against it as an explanation of consciousness are given. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.
"So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion."
Are you kidding!??? It was nothing BUT argument. Here, let me make it more explicit.
Premise 1 "If it is raining, Mr. Smith will use his umbrella." Premise 2 "It is raining" Conclusion "therefore Mr. Smith will use his umbrella."
That is a behaviorist explanation for consciousness. It is logically valid but still fails because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior then behavior cannot constitute intentionality.
"So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon."
It's a joke, hun. I thought you would get the reference to Ned Block's counter argument to behaviorism. It shows how an unconscious machine could pass the Turing test. I'm pretty sure that Steven Moffat must have been aware of it and created the Teselecta.
Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: for each individual, if they receive as input state S1, they will then output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.
Is "Blockhead" (the name affectionately given to this robot) conscious?
No it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it.)
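For concreteness, here is a toy sketch of the "database and a set of rules" picture in the spirit of Block's argument: a pure lookup-table responder that produces superficially sensible replies with no model of meaning at all. The table and replies are invented for illustration, not drawn from any actual system.

```python
# Toy illustration of the "database plus access rules" picture: a pure
# lookup-table responder in the spirit of Block's argument. It produces
# superficially sensible replies with no model of meaning at all.
RESPONSES = {
    "hello": "Hello! How are you today?",
    "how are you?": "I'm doing well, thank you for asking.",
    "is it raining?": "I hope you have an umbrella handy.",
}

def blockhead_reply(utterance: str) -> str:
    """Return a canned reply, or a stock deflection for anything unmatched."""
    return RESPONSES.get(utterance.strip().lower(),
                         "That's interesting. Tell me more.")

if __name__ == "__main__":
    for line in ["Hello", "Is it raining?", "What do you intend?"]:
        print(f"> {line}\n{blockhead_reply(line)}")
```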
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-11-25T20:48:19.243Z · LW(p) · GW(p)
"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."
Do you mean that people don't care if they are philosophical zombies or not?
If you look above, you'll note that the statement you've quoted was in response to your claim that "people want is a living conscious artificial mind" and my sentence after the one you are quoting is also about AI. So if it helps, replace "it" with "functional general AI" and reread the above. (Although frankly, I'm confused by how you interpreted the question given that the rest of your paragraph deals with AI.)
But I think it is actually worth touching on your question: Do people care if they are philosophical zombies? I suspect that by and large the answer is "no". While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn't something that's widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn't impact how they think they should be treated.
The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And... and this is important: it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.
If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I'm not sure what you mean by "strong AI proponents" in this context. Very few people actively work towards research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That's how, for example, we now have practical systems with neural nets that are quite helpful.
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn't involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially self-modified itself, albeit at a slow rate, over time.
"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."
Yes it is. In every lecture I have heard in which the history of the philosophy of mind is recounted, the behaviorism of the '50s and early '60s and its main arguments for and against it as an explanation of consciousness are given.
I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness, and not just argue that it doesn't need an explanation?
Premise 1 "If it is raining, Mr. Smith will use his umbrella." Premise 2 "It is raining" Conclusion "therefore Mr. Smith will use his umbrella."
That is a behaviorist explanation for consciousness. It is logically valid but still fails because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior then behavior cannot constitute intentionality.
Ok. I think I'm beginning to see the problem to some extent, and I wonder how much this is due to trying to talk about behaviorism in a non-behaviorist framework. The behaviorist isn't making any claim about "intent" at all. Behaviorism just tries to talk about behavior. Similarly, "decides" isn't a statement that goes into their model. Moreover, the fact that some days Smith does one thing in response to rain and sometimes does other things isn't a criticism of behaviorism: in order to argue that it is, one needs to claim that some sort of free-willed decision is going on, rather than subtle differences in the day or recent experiences. The objection then isn't to behaviorism, but rather rests on asserting a strong notion of free will.
I thought you would get the reference to Ned Block's counter argument to behaviorism. It shows how an unconscious machine could pass the Turing test
It may help to be aware of the illusion of transparency. Oblique references are one of the easiest things to miscommunicate about. But yes, I'm familiar with Block's look-up table argument. It isn't clear how it is relevant here: yes, the argument raises issues with many purely descriptive notions of consciousness, especially functionalism. But it isn't an argument that consciousness needs to involve free will and qualia and who knows what else. If anything, it is a decent argument that the whole notion of consciousness is fatally confused.
Is "Blockhead" (the name affectionately given to this robot) conscious?
No it is not.
So everything here is essentially just smuggling in the conclusion you want, in other words. It might help to ask if you can give a definition of consciousness.
I'm pretty sure that Steven Moffat must have been aware of it and created the Teselecta.
Massive illusion of transparency here- you're presuming that Moffat is thinking about the same things that you are. The idea of miniature people running a person has been around for a long time. Prior examples include a series of Sunday strips of Calvin and Hobbes, as well as a truly awful Eddie Murphy movie.