AlphaGo variant reaches superhuman play in multiple games
post by scarcegreengrass · 2017-12-26T16:19:35.804Z · LW · GW · 2 comments
This is a link post for https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours
https://arxiv.org/abs/1712.01815
I'm posting this slightly late; the paper is from December 5.
I'd be interested to learn if AlphaZero could be applied to other closed-environment tasks, such as designing hardware in a simulator.
2 comments
comment by ChristianKl · 2017-12-27T01:21:43.683Z · LW(p) · GW(p)
We previously had a link to the actual scientific article. The Guardian article contains no useful additional information and leaves out the fact that Stockfish 8 was severely handicapped: it was given only 1 GB of RAM and no access to opening or endgame libraries. The published Elo number for the AlphaZero chess program was 3000, while the published Elo number for Stockfish 8 is 3600.
It's interesting that AlphaZero did as well as it did, but it's misleading to say it's better than Stockfish 8.
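To put that rating gap in perspective, here is a small sketch using the standard Elo expected-score formula; the 3000 and 3600 figures are the ones quoted above, and a 600-point gap implies the lower-rated engine would be expected to score only about 3% against the higher-rated one:

```python
def expected_score(a: float, b: float) -> float:
    """Expected score of a player rated `a` against a player rated `b`,
    per the standard Elo expected-score formula."""
    return 1.0 / (1.0 + 10.0 ** ((b - a) / 400.0))

# With the quoted ratings (AlphaZero ~3000, Stockfish 8 ~3600):
print(round(expected_score(3000, 3600), 3))  # ~0.031, i.e. about 3%
```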
Replies from: scarcegreengrass
↑ comment by scarcegreengrass · 2018-01-03T19:36:12.448Z · LW(p) · GW(p)
Noted!
Sorry, I couldn't find the previous link here when I searched for it.