[1911.08265] Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | Arxiv

post by DragonGod · 2019-11-21T01:18:25.722Z · LW · GW · 4 comments

This is a link post for https://arxiv.org/abs/1911.08265


comment by Charlie Steiner · 2019-11-21T08:03:16.334Z · LW(p) · GW(p)

Welp, we're doomed (/s), as soon as someone figures out how to get 100 million tries at taking over the world so we can crush the world-taking-over problem with stochastic gradient descent.

Replies from: gwern, FactorialCode
comment by gwern · 2019-11-21T15:12:58.606Z · LW(p) · GW(p)

Meta-learning and transfer learning. You take over 100 million different simulated worlds, and the actual real world is a doddle.

comment by FactorialCode · 2019-11-21T19:40:03.013Z · LW(p) · GW(p)

Yeah, it's interesting that this works so well, but I think the best way to think of this is as a middle ground between full model-based RL and model-free RL. Their data efficiency isn't going to be optimal, because they're effectively throwing away the information carried by the observations. However, by making that choice, they don't need to model irrelevant details, so they end up with a very accurate and effective MCTS. As a result, I'd wager that with smaller neural networks or more experience, completely model-free RL would outperform this agent, because all the modelling power can be focused on representing the policy. Likewise, with larger networks or less experience, I would expect this to fall behind model-based RL that also predicts observations, because the latter would be more data efficient.
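To make the "throwing away observations" point concrete, here is a minimal sketch (not the paper's code; the network internals are just random-weight placeholders) of the MuZero model interface: a representation function h maps an observation to a latent state, a dynamics function g advances that state and predicts a reward, and a prediction function f outputs a policy and value for MCTS. Nothing in this interface ever reconstructs a future observation, so training only supervises rewards, values, and policies.

```python
# Minimal sketch of the MuZero-style model interface, using only numpy.
# The "networks" are fixed random linear maps standing in for the learned
# functions h (representation), g (dynamics) and f (prediction); the point
# here is the data flow, not the learning.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, NUM_ACTIONS = 16, 8, 4

W_repr = rng.normal(size=(OBS_DIM, LATENT_DIM))                  # h: obs -> latent state
W_dyn = rng.normal(size=(LATENT_DIM + NUM_ACTIONS, LATENT_DIM))  # g: (state, action) -> next state
w_reward = rng.normal(size=LATENT_DIM)                           # g also predicts reward
W_policy = rng.normal(size=(LATENT_DIM, NUM_ACTIONS))            # f: state -> policy logits
w_value = rng.normal(size=LATENT_DIM)                            # f: state -> value

def representation(observation):
    """h: embed a raw observation into a latent state."""
    return np.tanh(observation @ W_repr)

def dynamics(state, action):
    """g: advance the latent state and predict the immediate reward.
    No future observation is predicted -- that information is discarded."""
    one_hot = np.eye(NUM_ACTIONS)[action]
    next_state = np.tanh(np.concatenate([state, one_hot]) @ W_dyn)
    reward = float(state @ w_reward)
    return next_state, reward

def prediction(state):
    """f: policy and value for a latent state, which is what MCTS consumes."""
    logits = state @ W_policy
    logits -= logits.max()                       # numerically stable softmax
    policy = np.exp(logits) / np.exp(logits).sum()
    value = float(state @ w_value)
    return policy, value

# Unroll the model a few steps from one observation: the only quantities that
# come out (and hence the only training targets) are policies, values, rewards.
state = representation(rng.normal(size=OBS_DIM))
for step in range(3):
    policy, value = prediction(state)
    action = int(np.argmax(policy))
    state, reward = dynamics(state, action)
    print(f"step {step}: action={action}, reward={reward:+.2f}, value={value:+.2f}")
```

An observation-predicting model-based agent would add a decoder and a reconstruction loss on top of g; that extra modelling burden is what the trade-off above is about.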

I would have liked it if they had done more investigation into why they were able to outperform AlphaZero in Go. At the moment, they seem to have left it at one line of speculation.

comment by [deleted] · 2019-11-28T19:15:09.468Z · LW(p) · GW(p)

Posted a little reaction to this paper here [LW(p) · GW(p)].