Journalist's piece about predicting AI

post by Stuart_Armstrong · 2013-04-02T14:49:34.108Z · LW · GW · Legacy · 4 comments

Here's a piece by Mark Piesing in Wired UK about the difficulties and challenges of predicting AI. It covers a lot of our (Stuart Armstrong, Kaj Sotala and Seán Ó hÉigeartaigh) research into AI prediction, along with Robin Hanson's response. It will hopefully prompt people to look more deeply into our work, which is available online, appeared in the Pilsen Beyond AI conference proceedings, and is forthcoming as "The errors, insights and lessons of famous AI predictions and what they mean for the future".

4 comments


comment by gwern · 2013-04-02T15:38:40.623Z · LW(p) · GW(p)

> His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. "We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right".

Is this actually right, or is it just based on your piece praising Searle's pessimism? I don't recall any breakdown favoring philosophers in the original analysis of the dataset.

> And what are Armstrong's predictions about the future of AI?
>
> "My prediction is that [AI is] likely to happen sometime in the next five to 80 years. I would give a 90 percent chance [it will happen] in the next two centuries, although there is always the chance that someone could come up with an AI algorithm tomorrow."

And I guess that's what's wrong with more accurate predictions.

Hehe.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-04-02T17:42:26.893Z · LW(p) · GW(p)

> His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. "We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right".
>
> Is this actually right, or is it just based on your piece praising Searle's pessimism? I don't recall any breakdown favoring philosophers in the original analysis of the dataset.

I extracted the best I could from Searle's "non-predictive" argument; I didn't praise his pessimism ;-)

I'd have phrased it as "there are some pretty good philosophical arguments about AI (e.g. Omohundro), while timeline predictions seem to be uniformly ungrounded". I certainly wouldn't have said that a generic philosophical argument on AI was good (see all the permutations of "Gödel's theorem, hence no AI").

Replies from: gwern
comment by gwern · 2013-04-02T18:44:45.656Z · LW(p) · GW(p)

The way he quoted you certainly makes you sound like you think something along those lines.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-04-02T19:06:55.906Z · LW(p) · GW(p)

Quotes are not always entirely accurate. I'm sure this fact is surprising to people here :-P

Actually, it's not that bad in terms of presenting a complex idea; it's not what I would have written, but it's acceptable for getting people thinking about the issues.