This AI Boom Will Also Bust

post by username2 · 2016-12-03T23:21:05.593Z · LW · GW · Legacy · 9 comments

This is a link post for http://www.overcomingbias.com/2016/12/this-ai-boom-will-also-bust.html

9 comments

Comments sorted by top scores.

comment by Brendan Long (korin43) · 2016-12-04T02:06:00.456Z · LW(p) · GW(p)

I worked on some neural networks this summer (mostly as a learning experience) and this matches my experience. I spent almost all of my time trying to get good training data, and if I had good training data, our previous method would work just as well (and faster).

comment by moridinamael · 2016-12-04T03:59:08.672Z · LW(p) · GW(p)

Will it "bust", or will it be superseded by the next wave of more general AI architectures designed to require less supervision, less data?

Replies from: HungryHobo
comment by HungryHobo · 2016-12-05T16:54:19.764Z · LW(p) · GW(p)

If the typical pattern holds:

Step one: a new trick is discovered that solves some problem X which couldn't be handled before.

Step two: people try to apply it to everything the old methods didn't work on, like problem Y, which is sort of in the same problem class. At this stage overly enthusiastic people may over-promise: "I'm sure it will work amazingly on Y."

Step three: "Bah! These CS types never deliver, Y will always be better done by humans."

Step four: Interest and funding flee as the news stops paying attention. A few people keep chipping away at the problem, eventually slightly outperform humans on Y, and try to get it to work on Z.

Step five: Someone proves mathematically that it can never solve a major set of problems in Z.

Step six: Someone comes up with a new trick... GOTO 1

comment by MrMind · 2016-12-05T13:52:40.543Z · LW(p) · GW(p)

It's interesting to note that AlphaGo's improvement was to use data to train a neural network, then use this network to produce even more data (which, in the case of Go, is easily labelled), and then use this second wave of data to train a second neural network. The output was then the average of the two NNs' outputs.
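
A minimal sketch of that pipeline, with placeholder functions of my own invention (the thread below refines how the averaging actually works):

```python
import random

# Toy sketch of the bootstrapping described above. Every function here is
# a placeholder standing in for real training code, not AlphaGo's own.

def train_network(labelled_positions):
    """Placeholder: fit a network to (position, outcome) pairs."""
    return lambda position: random.uniform(-1, 1)  # stub evaluator

def self_play(network, n_games):
    """Placeholder: the network plays itself. Finished Go games label every
    position for free, since the final winner is known."""
    return [(f"position-{i}", 1 if i % 2 else -1) for i in range(n_games)]

net_1 = train_network([])                 # first wave: existing data
new_data = self_play(net_1, n_games=100)  # the network generates more data,
net_2 = train_network(new_data)           # which trains a second network

def evaluate(position):
    # The averaging step mentioned above (refined later in the thread).
    return 0.5 * (net_1(position) + net_2(position))
```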

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2016-12-05T19:30:52.790Z · LW(p) · GW(p)

What two NNs are you talking about? I thought the two NNs in that system played different roles with different kinds of outputs, so that averaging them wouldn't even make sense.

Replies from: MrMind
comment by MrMind · 2016-12-06T08:27:21.240Z · LW(p) · GW(p)

For example, in the Nature article about AlphaGo, page 485, figure a: it says that a reinforcement-learning rollout network is used to produce another batch of data, which is then used to train the value network.
Page 486, second column, third formula: the evaluations of the two networks (the fast rollout policy and the value network) for the current position are averaged to give a final score for every possible next move, and then the most valuable move is chosen.
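
In code, the averaging that formula describes is roughly this (my paraphrase, with lam standing in for the paper's λ, which the paper sets to 0.5):

```python
# The paper's leaf evaluation, as I read that formula: the value network's
# estimate and the fast-rollout outcome are mixed with weight lam:
#   V(s_L) = (1 - lam) * v_theta(s_L) + lam * z_L

def leaf_evaluation(value_net_estimate: float, rollout_outcome: float,
                    lam: float = 0.5) -> float:
    return (1 - lam) * value_net_estimate + lam * rollout_outcome
```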

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2016-12-07T16:02:13.790Z · LW(p) · GW(p)

Interesting. To expand on that explanation, it seems that there are FOUR networks here - three policy networks (P networks). One is the optimized learner (P-rho), which is trained by playing against itself, starting from a pretty good network made by supervised learning (P-sigma). There's another network that just makes vaguely reasonable moves but evaluates quickly (P-pi), which as far as I can tell isn't used for direct training of P-rho.

Then they train a new neural network to recognize position quality (v-theta), based on this optimized system playing against itself. The averaging you mention mixes that with the result of just using P-pi to finish the game quickly and see which side wins.

That's rather convoluted.
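
To keep the moving parts straight, here is a toy skeleton of the training order (my stub names and placeholder functions, not the paper's code):

```python
# Every function is a stub standing in for real training code.

def supervised_policy(expert_games):    # -> P-sigma, the pretty-good seed
    return "p_sigma"

def self_play_rl(seed_policy):          # P-sigma -> P-rho, the strong player
    return "p_rho"

def fast_rollout_policy(expert_games):  # -> P-pi, weak but quick to evaluate
    return "p_pi"

def train_value_net(games):             # self-play games -> v-theta
    return "v_theta"

expert_games = []                                 # placeholder data
p_sigma = supervised_policy(expert_games)         # 1. supervised seed
p_rho   = self_play_rl(p_sigma)                   # 2. improved by self-play
p_pi    = fast_rollout_policy(expert_games)       # 3. trained separately
v_theta = train_value_net([(p_rho, p_rho)])       # 4. judges position quality

# Search then mixes v_theta's judgement with quick P-pi playouts to the end.
```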

Replies from: MrMind
comment by MrMind · 2016-12-07T16:59:51.179Z · LW(p) · GW(p)

Well, there are actually many, many more. Sigma is just the initial seed from which a population of networks rho is created, each evolved by playing against a random previous iteration of itself. In this sense sigma and rho are just the initial and final points of an entire spectrum of networks, whose only purpose is to create the raw data used to train theta, and then be discarded.
The stroke of genius of AlphaGo, in my opinion, was complementing the rollout network, already used in many other programs, with a network whose purpose is to imitate intuition, and furthermore to create this network from the play of a pool of 'cheap' experts (cheap compared to human experts).
This technique could be adopted in other areas where data can be easily and automatically labelled (as Go's ending positions are).
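
Something like this toy loop (my stubs; integers stand in for network checkpoints):

```python
import random

def improve(policy, opponent):
    """Stub for one reinforcement-learning update from games vs. opponent."""
    return policy + 1

checkpoints = [0]  # 0 plays the role of the supervised seed (sigma)
for _ in range(50):
    opponent = random.choice(checkpoints)  # a random previous iteration
    checkpoints.append(improve(checkpoints[-1], opponent))

# sigma (checkpoints[0]) and rho (checkpoints[-1]) are just the endpoints of
# the ladder; the whole population exists only to generate labelled games
# for theta, after which every policy checkpoint can be thrown away.
```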

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2016-12-09T02:38:25.340Z · LW(p) · GW(p)

But haven't people been having AIs do that - self-play training - for a long time? I think the most remarkable idea is to use the massive population of policy nets only to create a judgement net, and use that in the end instead. That's wild.