AlphaGo Zero comments
post by Kyre · 2017-10-19T01:27:40.306Z · LW · GW · 6 comments
This is a link post for http://www.nature.com/news/self-taught-ai-is-best-yet-at-strategy-game-go-1.22858
DeepMind has published a new paper in Nature detailing AlphaGo Zero, a Go AI trained using only self-play. AlphaGo Zero equalled the ability of a system trained with supervised learning on a corpus of professional Go games within 24 hours (using the same computing resources), and within 48 hours exceeded the ability of the version of AlphaGo that defeated Lee Sedol.
This surprised me. I remember Demis Hassabis saying during the coverage of the Lee Sedol match that he'd like to look at learning Go from scratch, using only self-play and no supervised learning from human games. I thought that sounded much harder, and guessed that if supervised training times were on the order of a month, then training from scratch would take anywhere from three months to years to recapitulate what was learned from human games.
Other things I noted from the paper: AlphaGo Master (which played and won 60 games against professional players in 2016-2017) used a different architecture from the previous versions, and is about 12 times more computationally efficient as well as being a significantly better player. Zero is based on that architecture (I think), and took about 35 days of training (i.e. self-play) to equal, and then exceed, Master's ability. I don't know how long Master took to train, but going by the results of this paper I'm guessing that the supervised training would have provided maybe only a day or so of head start compared to Zero.
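For readers who haven't looked at the paper, here is a very rough, illustrative sketch of the self-play training loop it describes. The function names and stubs are mine, not DeepMind's; the real system uses Monte Carlo tree search guided by a single residual network with policy and value heads.

```python
import random

# Illustrative sketch of the AlphaGo Zero training loop: play games against
# yourself using search guided by the current network, then train the network
# to predict the search probabilities and the eventual game outcome.
# All names and stubs below are placeholders, not DeepMind's code.

def search_move(network, position):
    """Placeholder for MCTS guided by the network's policy and value heads.
    Returns a chosen move and the visit-count distribution over moves."""
    moves = position["legal_moves"]
    visit_probs = {m: 1.0 / len(moves) for m in moves}  # uniform stand-in
    return random.choice(moves), visit_probs

def play_self_play_game(network):
    """Play one game of the network against itself, recording training data."""
    position = {"legal_moves": list(range(361)), "to_play": "black"}
    history = []
    for _ in range(30):  # stand-in for "until the game ends"
        move, visit_probs = search_move(network, position)
        history.append((dict(position), visit_probs))
        position["to_play"] = "white" if position["to_play"] == "black" else "black"
    outcome = random.choice([+1, -1])  # stand-in for the final game result
    # Label each recorded position with the outcome from that player's point of view.
    return [(pos, probs, outcome if pos["to_play"] == "black" else -outcome)
            for pos, probs in history]

def train_step(network, batch):
    """Placeholder for one gradient step: the real loss pushes the policy head
    towards the search probabilities and the value head towards the outcome."""
    return network

network = {"weights": None}  # stand-in for the residual policy/value network
replay_buffer = []
for iteration in range(3):  # the paper runs this over millions of self-play games
    replay_buffer.extend(play_self_play_game(network))
    batch = random.sample(replay_buffer, min(32, len(replay_buffer)))
    network = train_step(network, batch)
```

The key point is that the only training signal is the outcome of games the system plays against itself; no human game records are used at any stage.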
6 comments
Comments sorted by top scores.
comment by slightlyfunny · 2017-10-19T12:22:24.547Z · LW(p) · GW(p)
To quote Eliezer,
"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology".
Anyone else think Hassabis has an ulterior motive here?
↑ comment by scarcegreengrass · 2017-10-19T15:46:05.657Z · LW(p) · GW(p)
Presumably finding profitable new technology is a sufficient motive.
comment by ChristianKl · 2017-10-19T11:25:04.051Z · LW(p) · GW(p)
The announcement that DeepMind is moving into protein folding is interesting. Solving protein folding would really matter.
Given that we can print DNA, solving protein folding means that we could design new proteins to do what we want and have powerful nanotech.
We might also get algorithms that can predict binding between different proteins.
↑ comment by scarcegreengrass · 2017-10-19T15:44:38.291Z · LW(p) · GW(p)
Yes, although it is of course possible that the protein folding search space has a low maximal speedup from software, and could turn out to be hardware-bottlenecked.
↑ comment by ChristianKl · 2017-10-20T11:17:15.302Z · LW(p) · GW(p)
I don't see a reason why this should be the case. It might be that deep learning is not helpful for the problem, but the idea that software can't get better seems very unlikely.
comment by ChristianKl · 2017-10-21T16:15:32.596Z · LW(p) · GW(p)
Reading about AlphaGo Zero, I'm really curious how strong it actually is. I would like the team to change the scoring function in a way that makes it value winning by more points a bit more. Afterwards it could play online and give handicaps until the winning percentage becomes 50-50.
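To make the suggestion concrete, here is a minimal sketch of what such a modified terminal reward might look like. The names and constants are hypothetical; AlphaGo Zero's actual reward is simply +1 for a win and -1 for a loss.

```python
# Hypothetical margin-weighted terminal reward: add a small, capped bonus
# proportional to the winning margin, so the agent prefers larger wins
# without sacrificing win probability.

def margin_weighted_reward(score_difference: float,
                           margin_weight: float = 0.01,
                           max_bonus: float = 0.5) -> float:
    """Return a terminal reward in roughly [-1.5, 1.5].

    score_difference: agent's final score minus the opponent's
        (positive means the agent won).
    margin_weight: value of each point of margin (assumed constant).
    max_bonus: cap on the margin bonus so winning at all still dominates.
    """
    win = 1.0 if score_difference > 0 else -1.0
    bonus = min(abs(score_difference) * margin_weight, max_bonus)
    return win + win * bonus


# Example: a 20.5-point win is worth slightly more than a 0.5-point win.
print(margin_weighted_reward(20.5))  # 1.205
print(margin_weighted_reward(0.5))   # 1.005
print(margin_weighted_reward(-3.5))  # -1.035
```

Capping the bonus keeps a narrow win strictly more valuable than a large loss, so the change would only nudge the system towards bigger margins rather than towards riskier play.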