Deepmind Plans for Rat-Level AI

post by moridinamael · 2016-08-18T16:26:05.540Z · LW · GW · Legacy · 9 comments


Demis Hassabis gives a great presentation on the state of DeepMind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "An AI that can do everything a rat can do," in his words. From his tone, it sounds like this is more of a short-term goal than a long-term one.

I don't think Hassabis is prone to making unrealistic plans or stating overly bold predictions. I strongly encourage you to scan through DeepMind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because they seem to add a new paper about twice a month.) The outfit seems to be systematically knocking down the "Holy Grail" milestones on the way to AGI, and this is just DeepMind. The papers they've put out in just the last year or so cover successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and those are only the most striking achievements. There's even a paper on that list, co-authored by Stuart Armstrong, concerning Friendliness concepts.
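For readers to whom those are just names, here is a minimal sketch of one listed technique -- a tabular actor-critic update on a toy chain environment. This is a textbook-style illustration with invented dynamics, not any particular DeepMind system.

```python
# Minimal tabular actor-critic on a toy 5-state chain (illustrative only).
# The actor is a softmax policy over logits; the critic is a state-value
# table updated by TD(0). The TD error scales the policy-gradient step.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 5, 2
theta = np.zeros((n_states, n_actions))  # actor: policy logits per state
V = np.zeros(n_states)                   # critic: state values
alpha, beta, gamma = 0.1, 0.1, 0.9

def policy(s):
    """Softmax over the logits for state s."""
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

state = 0
for _ in range(5000):
    probs = policy(state)
    action = rng.choice(n_actions, p=probs)
    # Toy dynamics: action 1 moves right, action 0 resets; reward at the end.
    next_state = min(state + 1, n_states - 1) if action == 1 else 0
    reward = 1.0 if next_state == n_states - 1 else 0.0
    # Critic: TD error. Actor: policy-gradient step scaled by that error.
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += beta * td_error
    grad_log = -probs                    # d/d(logits) of log pi(action|state)
    grad_log[action] += 1.0
    theta[state] += alpha * td_error * grad_log
    state = 0 if next_state == n_states - 1 else next_state
```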

If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically shortening expected AI development timelines. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient we haven't discovered yet.

9 comments


comment by Daniel_Burfoot · 2016-08-20T02:30:57.438Z · LW(p) · GW(p)

Note that DeepMind's two big successes (Atari and Go) come from scenarios that are perfectly simulable in a computer. That means they can generate an arbitrarily large number of data points to train their massive neural networks. Almost all real-world ML problems have strict limits on the amount of training data available.
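As a concrete illustration of the gap, here is a minimal sketch (toy environment and names invented) of why simulability matters: a perfect simulator is a free, unlimited generator of training transitions.

```python
# A toy, perfectly simulable environment: every (state, action, reward,
# next_state) tuple costs essentially nothing to produce, so the training
# set can be made as large as compute allows. Purely illustrative.
import random

def step(state, action):
    """Toy deterministic dynamics: walk a number line, reward at 10."""
    next_state = min(10, max(0, state + (1 if action == "right" else -1)))
    reward = 1.0 if next_state == 10 else 0.0
    return next_state, reward

def generate_transitions(n):
    """Stream n transitions from the simulator."""
    state = 0
    for _ in range(n):
        action = random.choice(["left", "right"])
        next_state, reward = step(state, action)
        yield state, action, reward, next_state
        state = next_state

# A million training examples, limited only by compute -- a luxury that
# real-world ML problems rarely have:
total_reward = sum(r for _, _, r, _ in generate_transitions(1_000_000))
```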

Replies from: moridinamael, Houshalter, Gurkenglas
comment by moridinamael · 2016-08-20T13:10:55.463Z · LW(p) · GW(p)

That is true. However, since they released those papers, they've published results demonstrating learning from only a handful of samples in certain contexts, using specialized memory networks that seem more analogous to human memory.
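For concreteness, here is a minimal sketch of the episodic-memory idea behind such results: write each example into an external memory once, then classify new inputs by similarity lookup. The random embedding and toy vectors are stand-ins, not DeepMind's actual architecture.

```python
# One-shot classification via an external episodic memory (toy version).
# A single write is enough to recall a novel class -- no gradient steps
# on the new example are needed.
import numpy as np

rng = np.random.default_rng(1)
embed = rng.normal(size=(20, 16))  # stand-in for a learned encoder

memory_keys, memory_labels = [], []

def write(x, label):
    """Store one example: a single exposure suffices for later recall."""
    memory_keys.append(x @ embed)
    memory_labels.append(label)

def read(x):
    """Classify a query by cosine similarity to stored memories."""
    q = x @ embed
    keys = np.array(memory_keys)
    sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q))
    return memory_labels[int(np.argmax(sims))]

# One example per novel class, then immediate recall of a noisy query:
cat, dog = rng.normal(size=20), rng.normal(size=20)
write(cat, "cat")
write(dog, "dog")
print(read(cat + 0.05 * rng.normal(size=20)))  # -> "cat"
```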

comment by Houshalter · 2016-08-27T01:00:53.116Z · LW(p) · GW(p)

I'm not sure this is true. The internet contains billions of hours of video, trillions of images, and libraries' worth of text. If they can use unsupervised, semi-supervised, or weakly supervised learning, they can take advantage of nearly limitless data. And neural networks can do unsupervised learning well, learning features on one task and then transferring them to another.
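A minimal sketch of that pipeline, with synthetic data and a tiny from-scratch autoencoder standing in for a real model: learn features from plentiful unlabeled data, then fit a classifier on a scarce labeled set.

```python
# Unsupervised pretraining + transfer (illustrative; all data synthetic).
import numpy as np

rng = np.random.default_rng(0)

# Plentiful unlabeled data, scarce labeled data (hypothetical setup).
X_unlabeled = rng.normal(size=(10_000, 20))
X_labeled = rng.normal(size=(100, 20))
y_labeled = (X_labeled[:, :5].sum(axis=1) > 0).astype(float)

# Step 1: learn features without labels -- a one-hidden-layer autoencoder
# trained by plain gradient descent on reconstruction error.
d, h, lr = 20, 8, 0.01
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
for _ in range(200):
    batch = X_unlabeled[rng.choice(len(X_unlabeled), 256)]
    Z = np.tanh(batch @ W_enc)           # encode
    err = Z @ W_dec - batch              # reconstruction error
    grad_dec = Z.T @ err / len(batch)
    grad_enc = batch.T @ ((err @ W_dec.T) * (1 - Z**2)) / len(batch)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Step 2: transfer -- reuse the encoder as a fixed feature map and fit a
# simple logistic classifier on the small labeled set.
Z_labeled = np.tanh(X_labeled @ W_enc)
w = np.zeros(h)
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z_labeled @ w)))
    w -= 0.1 * Z_labeled.T @ (p - y_labeled) / len(y_labeled)
```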

DeepMind has also published a paper on approximate Bayesian learning of neural-net parameters. That would make their models much better at learning from limited amounts of data instead of overfitting.
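That presumably refers to weight-uncertainty work in the spirit of "Bayes by Backprop". Here is a minimal sketch of the core mechanism only -- a Gaussian variational posterior over weights sampled via the reparameterization trick -- with all shapes and values invented:

```python
# Variational weight sampling for one linear layer (mechanism only; not
# DeepMind's full training recipe, which also optimizes an ELBO with a
# KL penalty toward a prior).
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out = 4, 1

# Variational parameters: a mean and a softplus-transformed scale per weight.
mu = np.zeros((d_in, d_out))
rho = -3.0 * np.ones((d_in, d_out))

def sample_weights():
    """Reparameterization trick: w = mu + softplus(rho) * eps."""
    sigma = np.log1p(np.exp(rho))
    eps = rng.normal(size=mu.shape)
    return mu + sigma * eps

def predict(x, n_samples=50):
    """Average predictions over posterior samples; the spread across
    samples expresses uncertainty, which is what guards against
    overconfident overfitting on small datasets."""
    outs = np.stack([x @ sample_weights() for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(3, d_in))
mean, std = predict(x)
```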

Anyway, deep nets are not really going to displace traditional ML methods so much as open up a whole new set of problems that traditional methods can't handle, like processing audio and video data, or reinforcement learning.

comment by Gurkenglas · 2016-08-20T22:19:07.256Z · LW(p) · GW(p)

On the other hand, it's simple to construct AI-complete problems for which you can generate training data.
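A minimal sketch of that construction, with arithmetic-expression evaluation standing in for a genuinely AI-complete task (the point is just that instances and ground-truth labels come from the generator for free):

```python
# Mechanically generating a hard learning task with unlimited labeled data.
# Expression evaluation is only a stand-in for "AI-complete"; the pattern
# is what matters: the generator supplies both inputs and ground truth.
import random

def random_expression(depth=3):
    """Build a random arithmetic expression string."""
    if depth == 0:
        return str(random.randint(0, 9))
    op = random.choice(["+", "-", "*"])
    return f"({random_expression(depth - 1)} {op} {random_expression(depth - 1)})"

def make_dataset(n):
    """Unlimited (input, target) pairs: expression text -> its value."""
    pairs = []
    for _ in range(n):
        expr = random_expression()
        pairs.append((expr, eval(expr)))  # ground-truth label comes free
    return pairs

training_data = make_dataset(100_000)
```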

comment by WalterL · 2016-08-19T13:31:35.450Z · LW(p) · GW(p)

I'm vastly skeptical, but let's see where this goes.

comment by MrMind · 2016-08-19T07:25:41.348Z · LW(p) · GW(p)

Aww... Instead of maximizing paperclips, we'll all be enslaved to maximize cheese. Could've been worse, I guess.

comment by turchin · 2016-08-18T21:29:42.797Z · LW(p) · GW(p)

Personally, I feel like nothing great has happened in the field of ML since Lee Sedol's defeat, and the whole field has slowed down in the last 6 months.

Replies from: AlexMennen, RomeoStevens
comment by AlexMennen · 2016-08-18T22:02:29.640Z · LW(p) · GW(p)

Great things tend not to happen on a 6 month time scale. It does not make sense to conclude that ML has slowed down in the last 6 months just because that was the last time machine learning passed a milestone that people who don't specialize in ML were paying attention to.

comment by RomeoStevens · 2016-08-18T23:27:19.796Z · LW(p) · GW(p)

uhh, why would highly visible events necessarily correlate with the developments that matter most for timelines?