Performance Trends in AI

post by sarahconstantin · 2017-01-28T08:36:59.679Z · 5 comments

This is a link post for https://srconstantin.wordpress.com/2017/01/28/performance-trends-in-ai/

5 comments

Comments sorted by top scores.

comment by Houshalter · 2017-02-06T17:14:53.029Z

There are some problems with this analysis. First of all, translation is natural language processing. What task requires more understanding of natural language than translation? Second, the BLEU score mentioned is only a cheap and imperfect measure of translation quality. The best measure is human evaluation, and neural machine translation excels at that. Look at this graph. For every language, the neural network is closer to human performance than the previous method, and for several languages it is so close that its translations would be almost indistinguishable from a human's. That's incredible! And it shows that NNs can handle symbolic problems, which the author disputes.
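
As a rough illustration of how blunt BLEU is, here is a minimal sketch in Python (assuming the nltk library; the sentences are invented examples, not from any benchmark): a faithful paraphrase can score far below a near-verbatim match, even though a human judge would accept both.

```python
# Minimal sketch: BLEU rewards n-gram overlap, not meaning.
# Assumes nltk is installed; the sentences are invented examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the cat sat on the mat".split()
near_copy = "the cat sat on a mat".split()           # almost word-for-word
paraphrase = "a cat was sitting on the mat".split()  # same meaning, different wording

smooth = SmoothingFunction().method1  # avoids zero scores on short sentences
print(sentence_bleu([reference], near_copy, smoothing_function=smooth))   # higher
print(sentence_bleu([reference], paraphrase, smoothing_function=smooth))  # much lower
# The near-copy scores far higher, even though a human evaluator might
# judge both candidate translations as adequate.
```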

The biggest problem, though, is that all machine learning tasks are expected to show diminishing returns. You can't do better than 0% error on a classification task, for example. An algorithm might be 10x better than another, yet only score 1% higher on some chosen dataset, because there isn't more than 1% of progress left to be made. Just looking at benchmark scores is going to really underestimate the rate of progress.
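
A toy calculation makes this concrete (a sketch in Python; the error rates are made up for illustration): if a system halves its error rate every generation, each step is an equally large multiplicative improvement, yet the absolute benchmark gain shrinks toward zero.

```python
# Sketch: near the ceiling, equal multiplicative reductions in error
# show up as ever-smaller absolute benchmark gains. Numbers are made up.
error_rates = [0.16, 0.08, 0.04, 0.02, 0.01]  # error halves each "generation"

for old, new in zip(error_rates, error_rates[1:]):
    print(f"{old:.0%} -> {new:.0%} error: "
          f"benchmark gain {old - new:.1%}, "
          f"mistakes cut by {old / new:.0f}x")
# Every step cuts mistakes by the same factor of 2, but the raw score
# improvement shrinks from 8 points to 1 point.
```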

comment by The_Jaded_One · 2017-01-28T17:38:00.543Z

I might wonder if there are things humans can do with concepts and symbols and principles, the traditional tools of the “higher intellect”, the skills that show up on highly g-loaded tasks, that deep learning cannot do with current algorithms. ... So far, I think there is no empirical evidence from the world of deep learning to indicate that today’s deep learning algorithms are headed for general AI in the near future.

I strongly agree, and I think people at DeepMind already get this, because they are working on differentiable neural computers.

Another key point here is that hardware gains such as GPUs and Moore's law increase the returns to investing time and effort into software and research.

Replies from: sarahconstantin
comment by sarahconstantin · 2017-01-28T19:00:07.837Z

Yep! I want to distinguish between "deep learning by itself is probably not general intelligence" (which I believe) and "nobody is making progress towards general intelligence" (which I'm uncertain about and definitely don't think is safe to assume.)

Replies from: gjm, The_Jaded_One
comment by gjm · 2017-01-28T22:01:40.718Z

What makes recent "deep learning" progress interesting to me is that traditionally there's been a sort of paradox in AI: things we might naively think of as impressive achievements of the human intellect (e.g., grandmaster-level chess) turned out to be much easier to get computers to do than things we take for granted because even averagely intelligent children do them without much trouble (e.g., looking at a cat and saying "cat") -- and deep neural networks seem (not hugely surprisingly, perhaps) to be a good approach to some of those.

That doesn't, of course, mean that deep NNs + good old-fashioned AI = human-level intelligence. There are still what seem like important gaps that no one has very good ideas how to fill. But it does seem like one gap is getting somewhat filled.

comment by The_Jaded_One · 2017-01-29T21:20:49.170Z

It is definitely true that progress towards AGI is being made, if we count the indirect progress of more money being thrown at the problem. Just as importantly, perceptual challenges being solved means there is now a greater ROI on symbolic AI progress.

A world with lots of stuff that is just waiting for AGI-tech to be plugged into it is a world where more people will try hard to make that AGI-tech. Examples of 'stuff' would include robots, drones, smart cars, better compute hardware, corporate interest in the problem (and money), highly refined perceptual algorithms that are fast and easy to use, lots of datasets, things like OpenAI's Universe, etc.

A lot of stuff created from 1960 to 1990 helped to create the conditions for machine learning: the internet, Moore's law, databases, operating systems, open source software, a computer science education system, etc.