AI and Efficiency

post by DragonGod · 2020-07-27T20:58:43.625Z · LW · GW · 1 comments

This is a link post for https://openai.com/blog/ai-and-efficiency/


The amount of compute required to train neural networks to state-of-the-art (SOTA) performance on some tasks has been falling exponentially; the efficiency of training AI models appears to be improving faster than Moore's Law.

From June 2012 to May 2019, the amount of compute needed to train a neural network to AlexNet-level performance fell 44x (halving every 16 months). In contrast, Moore's Law would have provided only an 11x increase in available compute over the same period.
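A minimal sanity check on those figures, assuming the June 2012 to May 2019 window is roughly 83 months and that Moore's Law doubles available compute every 24 months:

```python
import math

# Window from June 2012 to May 2019, in months (assumption: ~83 months).
months = 83

# 44x reduction in compute needed to reach AlexNet-level performance.
efficiency_gain = 44
halving_time = months / math.log2(efficiency_gain)
print(f"Efficiency halving time: {halving_time:.1f} months")  # ~15.2, i.e. roughly 16 months

# Moore's Law: available compute doubles roughly every 24 months.
moores_law_gain = 2 ** (months / 24)
print(f"Moore's Law gain over the same window: {moores_law_gain:.1f}x")  # ~11x
```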

There are other examples of exponential efficiency gains. From the article (a rough doubling-time calculation on these figures follows the quoted list):

We saw a similar rate of training efficiency improvement for ResNet-50 level performance on ImageNet (17-month doubling time). We saw faster rates of improvement over shorter timescales in Translation, Go, and Dota 2:

  1. Within translation, the Transformer surpassed seq2seq performance on English to French translation on WMT’14 with 61x less training compute 3 years later.
  2. We estimate AlphaZero took 8x less compute to get to AlphaGoZero level performance 1 year later.
  3. OpenAI Five Rerun required 5x less training compute to surpass OpenAI Five (which beat the world champions, OG) 3 months later.
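The "faster rates of improvement over shorter timescales" claim can be made concrete with the same doubling-time formula. A quick sketch using the figures quoted above, taking the elapsed times as 36, 12, and 3 months respectively (an assumption for the sake of the calculation):

```python
import math

# (efficiency gain, elapsed time in months) for the quoted examples.
examples = {
    "Transformer vs seq2seq (WMT'14 En-Fr)": (61, 36),
    "AlphaZero vs AlphaGoZero": (8, 12),
    "OpenAI Five Rerun vs OpenAI Five": (5, 3),
}

for name, (gain, months) in examples.items():
    doubling_time = months / math.log2(gain)
    print(f"{name}: efficiency doubling time ≈ {doubling_time:.1f} months")
    # ~6.1, 4.0, and 1.3 months respectively, versus ~16 months for AlexNet.
```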
