The difficulty in predicting AI, in three lines
post by Stuart_Armstrong · 2012-10-02T15:10:26.749Z · LW · GW · Legacy · 21 comments
An over-simplification, but an evocative one:
- The social sciences are contentious, their predictions questionable.
- And yet social sciences use the scientific method; AI predictions generally don't.
- Hence predictions involving human-level AI should be treated as less certain than any prediction in the social sciences.
21 comments
Comments sorted by top scores.
comment by DuncanS · 2012-10-02T19:41:42.259Z · LW(p) · GW(p)
To summarise the argument further:
"A lot of people talk rubbish about AI. Therefore most existing predictions are not very certain."
That doesn't in itself mean that it's hard to predict AI - merely that there are many existing predictions which aren't that good. Whether we could do better if we (to take the given example) used the scientific method isn't something the argument covers.
↑ comment by Stuart_Armstrong · 2012-10-03T09:45:29.474Z · LW(p) · GW(p)
> Whether we could do better if we (to take the given example) used the scientific method
I don't really see how we could do that. Yes, most predictions are rubbish - but a lot are rubbish because predicting AI is not something we have good ways of doing.
comment by thomblake · 2012-10-02T15:12:53.691Z · LW(p) · GW(p)
I don't see how the third proposition follows from the first two.
↑ comment by Stuart_Armstrong · 2012-10-02T15:16:37.765Z · LW(p) · GW(p)
Clarified the second line.
↑ comment by thomblake · 2012-10-02T15:18:35.624Z · LW(p) · GW(p)
That at least makes sense.
↑ comment by Raemon · 2012-10-02T15:23:03.331Z · LW(p) · GW(p)
What did it originally say?
↑ comment by Stuart_Armstrong · 2012-10-02T15:29:44.795Z · LW(p) · GW(p)
It didn't have the "; AI predictions generally don't."
I've been working with these predictions for such a long time, I forgot not everyone had this at the forefront of their minds.
comment by [deleted] · 2012-10-03T22:53:41.174Z · LW(p) · GW(p)
- The social sciences are contentious, their predictions questionable.
- And yet social sciences use the scientific method; mathematics doesn't.
- Hence statements involving math should be treated as less certain than any prediction in the social sciences.
↑ comment by Stuart_Armstrong · 2012-10-04T04:06:53.105Z · LW(p) · GW(p)
:-)
You're right - there is one area whose methods are even better than science. If only more problems could be solved like math problems!
comment by Kevin · 2012-10-03T03:17:11.778Z · LW(p) · GW(p)
I've started giving AI timelines as between 10 years and 100 years.
↑ comment by Stuart_Armstrong · 2012-10-03T09:46:16.169Z · LW(p) · GW(p)
That seems reasonable. I give the 5-100 year range myself.
comment by A1987dM (army1987) · 2012-10-02T16:13:56.720Z · LW(p) · GW(p)
That implicitly assumes that there aren't reasons why the social sciences are contentious which don't also apply to AI predictions, but I don't think that's terribly unreasonable (EDIT: where by “I don't think that's terribly unreasonable” I mean that the reasons why the social sciences are contentious despite using the scientific method that I can think of off the top of my head would also kind of apply to AI predictions).
comment by Shmi (shminux) · 2012-10-02T15:24:50.296Z · LW(p) · GW(p)
> And yet social sciences use the scientific method; AI predictions generally don't.
Can you please clarify this point?
↑ comment by Stuart_Armstrong · 2012-10-02T15:27:24.866Z · LW(p) · GW(p)
The social sciences are sciences; AI predictions are mainly speculative thinking by people who just put on their thinking caps and think really really hard about the future (see some of the examples in http://lesswrong.com/lw/e79/ai_timeline_prediction_data/).
↑ comment by Shmi (shminux) · 2012-10-02T16:58:02.251Z · LW(p) · GW(p)
Are you saying that these predictions are unscientific because they are based on untestable models? Or because the models are testable for "small" predictions, but the AI predictions based on them are wild extrapolations beyond the models' validity?
↑ comment by Stuart_Armstrong · 2012-10-02T18:50:06.708Z · LW(p) · GW(p)
Most predictions don't use models; most models aren't tested; and AI predictions based on tested models are generally wild extrapolations.
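To illustrate what such a wild extrapolation looks like in practice, here is a minimal sketch; the data points and the "brain-equivalent" threshold are invented for the example, not real measurements:

```python
# Illustrative only: the sample years, the log-FLOPS values and the
# "brain-equivalent" threshold are invented numbers for this sketch.
import numpy as np

years = np.array([1990, 1995, 2000, 2005, 2010])     # assumed sample years
log_flops = np.array([9.0, 10.5, 12.0, 13.5, 15.0])  # assumed log10(FLOPS)

# Fit a straight line in log space (an exponential-growth assumption
# that is at least testable against the data above).
slope, intercept = np.polyfit(years, log_flops, 1)

# The "wild extrapolation": solve for the year the trend crosses an
# assumed brain-equivalent threshold of 10^18 FLOPS, a point far
# outside the data, where nothing validates the model.
threshold = 18.0
predicted_year = (threshold - intercept) / slope
print(f"Trend crosses 10^{threshold:.0f} FLOPS around {predicted_year:.0f}")
```

The fit itself can be checked against the data; the prediction lives entirely outside it, which is the distinction being drawn here.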
↑ comment by Shmi (shminux) · 2012-10-02T19:06:29.758Z · LW(p) · GW(p)
It does sound pretty bad if that's the case. My suspicion is that the models are there, just implicit and poor-quality. Maybe trying to explicate, compare and critique them would be worthwhile.
comment by Mitchell_Porter · 2012-10-02T17:08:08.675Z · LW(p) · GW(p)
Yes, people say all sorts of unjustified stuff about AI as if their musings were true, out of excitement and carelessness. But the line of thought in the post is ultimately destructive because it sets low expectations for no good reason.
To use the scientific method just means to make falsifiable predictions. So any arbitrary hypothesis counts, no matter how outlandish, so long as it's predictive. On the other hand, you don't need to use science in order to reason, and since "human-level AI" is not available for experimental study, we can only reason about it. But it's a pretty sure thing that such an AI will think that 1+1 equals 2...
There are no details here, e.g. about the methodologies used to produce futurological predictions of the "time until X", or about the premises employed in reasoning about AI dispositions and capabilities; and that means there's no argument about the degree of reliability or usefulness that can be obtained when reasoning about AI, just the bare assertion "not even as good as the worst of social science". Also, there's no consideration of the power of intention. A lot of the important statements in LW's AI futurology are about designing an AI to have desired properties.
↑ comment by Stuart_Armstrong · 2012-10-02T18:44:39.271Z · LW(p) · GW(p)
I'm constructing a detailed analysis of all these points for my "How to Predict AI" paper.
And there are few details about methodologies, yes - because the vast majority of predictions have no methodology. The quality of predictions is really, really low, and there are reasons to suspect that even when the methodologies are better, the predictions are still barely better than guesswork.
My stub was an unjustified snark, but the general sentiment behind it - that AI predictions (especially timeline predictions) are less reliable than social science results - is, as far as I can tell, true.
comment by GeraldMonroe · 2012-10-04T01:37:10.528Z · LW(p) · GW(p)
A working AI probably needs to duplicate thousands of individual systems found in the human mind. Whether we get there by scanning a brain for 4 years with 1 million electron beams working in parallel, or by having thousands of programming teams develop each subsystem, this is not going to be cheap.
You don't get there by accident - evolution did it, but it took millions of years, with each subsystem being developed to build upon previous ones.
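As a rough back-of-envelope on the scanning scenario above (a sketch only: the beam count and duration come from the previous paragraph, while the brain volume and voxel size are illustrative assumptions):

```python
# Back-of-envelope: per-beam throughput implied by the quoted scanning
# scenario. Beam count and duration are from the comment above; brain
# volume and voxel size are illustrative assumptions.
brain_volume_m3 = 1.4e-3       # assumed: roughly 1.4 litres
voxel_edge_m = 10e-9           # assumed: 10 nm isotropic voxels
beams = 1_000_000              # from the comment
seconds = 4 * 365 * 24 * 3600  # 4 years, from the comment

voxels = brain_volume_m3 / voxel_edge_m ** 3
per_beam_rate = voxels / (beams * seconds)
print(f"total voxels: {voxels:.1e}")
print(f"per beam: {per_beam_rate:.1e} voxels/second")
```

Under those assumptions each beam would need to image on the order of 10^7 voxels per second, which gives a feel for why the effort would be enormous but not physically absurd.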
Have you heard anything about some massive corporation or government getting ready to drop a few trillion dollars on an all-out effort?
No, and the current discussions are about how there are not enough common resources to pay for current needs. There isn't enough money to fund large militaries, pay all of the expenses of the elderly, fix the roads, and do everything else as it is. Money has to be borrowed from more successful economies, which just makes the fiscal crisis worse in the future.
Also, no corporation can justify spending more money than any company on the planet actually has to develop something that no one has ever done before and thus seems likely to fail.
Having read the brain emulation roadmap, and articles on how modern neural networks can successfully model individual subsystems of the human mind, this does not seem like a problem that we have to wait another 100 years to solve. The human race might be able to do it in 20 years if it started today and put the needed resources into the problem.
But it isn't going to happen, and predictions of success can't really be made until the process is actually started. It could be 10 years from now, it could be 200, before the effort is initiated. On the plus side, as time goes on, the cost of doing this goes down to an extent. The total "bill of materials" for the hardware drops every year with Moore's law. Better software techniques make it more likely that such a huge project could be developed without being so buggy it wouldn't run at all. But even 30 years from now, it will still be a difficult and expensive endeavor needing a lot of resources.
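To make that cost decline concrete, here is a small sketch under the usual assumption that hardware cost halves every two years; the starting cost is an arbitrary placeholder:

```python
# Hypothetical Moore's-law-style cost decline: hardware cost halves
# every 2 years. Both numbers are assumptions, not measured figures.
initial_cost = 1e12   # placeholder: a $1 trillion bill of materials today
halving_years = 2.0   # assumed halving time for hardware cost

for wait in (10, 20, 30):
    cost = initial_cost * 0.5 ** (wait / halving_years)
    print(f"after {wait:2d} years: ${cost:,.0f}")
```

At that assumed rate, a trillion-dollar bill of materials falls below a billion dollars in twenty years, which is the "plus side" above; the software and integration costs need not fall the same way.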