Comments

Comment by jkrause on "Human-level control through deep reinforcement learning" - computer learns 49 different games · 2015-02-26T21:09:26.200Z · LW · GW

That was indeed one of the hypotheses about why the networks were difficult to train: the vanishing gradient problem. In retrospect, one of the main reasons this happened was the use of saturating nonlinearities in the network, i.e. nonlinearities like the logistic function (which asymptotes at 0 and 1) or tanh (which asymptotes at ±1). Because these functions flatten out, their derivatives shrink toward zero whenever a unit saturates, and the deeper your network, the more this effect compounds. The first large-scale network to fix this was Krizhevsky et al.'s, which used the Rectified Linear Unit (ReLU), given by f(x) = max(0, x), as its nonlinearity. The earliest reference I can find to using ReLUs is Jarrett et al., but since Krizhevsky's result pretty much everyone uses ReLUs (or some variant thereof). In fact, the first result I've seen showing that logistic/tanh nonlinearities can work at this scale is the batch normalization paper Sean_o_h linked, which gets around the problem by normalizing the input to each nonlinearity, presumably preventing the units from saturating too much (though this is still an open question).
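To make the compounding concrete, here's a minimal numpy sketch (mine, not from any of the papers or the comment above) comparing how a backpropagated gradient scales through many layers of saturated logistic units versus ReLUs. The depth and pre-activation value are made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Derivative of the logistic function: peaks at 0.25 and
    # shrinks toward 0 as the unit saturates.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # Derivative of f(x) = max(0, x): exactly 1 for any x > 0.
    return np.where(x > 0, 1.0, 0.0)

x = 2.0       # a moderately saturated pre-activation (illustrative)
depth = 20    # number of layers the gradient must pass through

# The backpropagated gradient magnitude is (roughly) a product of one
# local derivative per layer, so saturating units compound toward 0.
print(sigmoid_grad(x) ** depth)  # ~3e-20: vanished
print(relu_grad(x) ** depth)     # 1.0: survives
```

The real story is messier (the weight matrices also scale the gradient at each layer), but this product-of-small-derivatives effect is the core of the problem.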

Comment by jkrause on "Human-level control through deep reinforcement learning" - computer learns 49 different games · 2015-02-26T18:42:00.739Z · LW · GW

Can confirm that hardware (and data!) are the two main culprits here. The core learning algorithms haven't changed much since the mid-1980s, but computers have gotten many times faster, GPUs are another 30-100x faster still, and the amount of available data has likewise grown by several orders of magnitude.

Comment by jkrause on "Human-level control through deep reinforcement learning" - computer learns 49 different games · 2015-02-26T17:47:33.662Z · LW · GW

> My layperson's understanding is that this is the first time human accuracy has been exceeded on the Imagenet benchmarking challenge, and represents an advance on Chinese giant Baidu's progress reported last month, which I understood to be significant in its own right. http://arxiv.org/abs/1501.02876

One thing to note about the human-accuracy number for ImageNet that's been going around a lot recently: it comes from a relatively informal experiment done by a couple of members of the Stanford vision lab (see section 6.4 of the paper for details). In particular, the number everyone cites is from a single annotator who, while he trained himself for quite a while to recognize the ImageNet categories, was nonetheless prone to silly mistakes from time to time. A more optimistic estimate of human error is probably closer to 3-4%, but even with that in mind, the recent results people have been posting are still extremely impressive.

It's also worth pointing out another paper, from Microsoft Research, that beat the 5.1% human performance and actually came out a few days before Google's. It's a decent read, and I wouldn't be surprised if people start incorporating elements from both MSR's and Google's papers in the near future.

Comment by jkrause on "Human-level control through deep reinforcement learning" - computer learns 49 different games · 2015-02-26T17:33:23.881Z · LW · GW

Training networks layer by layer was the trend from the mid-to-late 2000s up until early 2012, but that changed in mid-2012, when Alex Krizhevsky and Geoff Hinton finally got neural nets to work for large-scale tasks in computer vision. They simply trained the whole network jointly with stochastic gradient descent, which has remained the standard approach for most neural nets in vision since then.
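For concreteness, here's a minimal numpy sketch (my own; the toy architecture, data, and learning rate are made up) of what joint training means: every layer's weights receive a gradient update on every SGD step, instead of greedily training one layer at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network, trained end to end.
W1 = rng.normal(0.0, 0.1, (4, 8))
W2 = rng.normal(0.0, 0.1, (8, 1))
lr = 0.1

for step in range(1000):
    x = rng.normal(size=(32, 4))        # minibatch of inputs
    y = x.sum(axis=1, keepdims=True)    # toy regression target
    h = np.maximum(0.0, x @ W1)         # ReLU hidden layer
    pred = h @ W2
    err = pred - y                      # d(loss)/d(pred) for 0.5 * MSE
    # Backpropagate through both layers and update them together:
    # this is the "joint" part, as opposed to layer-wise pretraining.
    gW2 = h.T @ err / len(x)
    gh = (err @ W2.T) * (h > 0)         # gradient through the ReLU
    gW1 = x.T @ gh / len(x)
    W2 -= lr * gW2
    W1 -= lr * gW1
```

In the layer-wise regime, by contrast, W1 would be trained first (typically with an unsupervised objective) and then frozen before anything on top of it was fit.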

Comment by jkrause on Open thread, 18-24 March 2014 · 2014-03-19T03:39:40.378Z

Yes, this happens to me in Windows, but not Ubuntu (both Chrome).

Comment by jkrause on Open Thread for February 11 - 17 · 2014-02-12T07:16:37.509Z

Here's one interesting way of viewing it that I once read:

Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).
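A quick simulation (mine, with made-up payoffs, since the original options aren't reproduced here) makes the point: a hypothetical risky gamble with a higher expected value than a safe payout looks wildly variable over one trial, but its average payoff concentrates tightly around the expected value over 1,000 trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gamble: 10% chance of $1000, else $0 (EV = $100),
# versus a guaranteed $90. Both numbers are invented for illustration.
def risky(trials):
    return np.where(rng.random(trials) < 0.10, 1000.0, 0.0)

for n in (1, 1000):
    avg_payoffs = [risky(n).mean() for _ in range(10_000)]
    print(n, np.mean(avg_payoffs), np.std(avg_payoffs))

# n = 1:    per-trial payoff swings between $0 and $1000 (std ~ $300),
#           so risk aversion can reasonably favor the safe $90.
# n = 1000: the average payoff clusters near the $100 EV (std ~ $9.5),
#           so the higher-EV option wins almost surely.
```

The life-as-many-trials argument says your actual situation is closer to the n = 1000 row than the n = 1 row.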