Comments
Why do we need utility functions and expected utilities at all?
Isn't this more metaphysics than actual AI?
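To make the question concrete, here is roughly what a utility function plus expected utility buys you: a single number for ranking actions under uncertainty. This is a toy sketch of my own, with made-up outcomes, probabilities, and utility values, not anything from the post:

```python
def expected_utility(action, outcome_probs, utility):
    """Sum of utility(outcome) weighted by P(outcome | action)."""
    return sum(p * utility(outcome)
               for outcome, p in outcome_probs[action].items())

# Hypothetical example: two actions whose outcomes are uncertain.
outcome_probs = {
    "take_umbrella":  {"dry": 1.0},
    "leave_umbrella": {"dry": 0.7, "soaked": 0.3},
}
utility = {"dry": 1.0, "soaked": -2.0}.get

best = max(outcome_probs,
           key=lambda a: expected_utility(a, outcome_probs, utility))
print(best)  # "take_umbrella": EU 1.0 vs. 0.7*1.0 + 0.3*(-2.0) = 0.1
```

Whether an agent has to be built this way is exactly the open question, but this is the bookkeeping the formalism provides.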
On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art.
Ummm... if I'm reading this correctly, they had to do extra training for the architecture learner, and then they didn't do that much better than grad-student descent. Interesting, but not necessarily what I would call self-improving AI.
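For reference, the "extra training for the architecture learner" looks roughly like the outer loop below. This is my own schematic, not the paper's code; the search space, the policy update, and all numbers are made up, and the real system uses REINFORCE on an RNN controller with a full CIFAR-10 / Penn Treebank training run per candidate:

```python
import random

# Hypothetical, simplified search space over child-network hyperparameters.
SEARCH_SPACE = {
    "num_layers":  [6, 12, 20],
    "filter_size": [3, 5, 7],
    "num_filters": [32, 64, 128],
}

def sample_architecture(policy):
    """Controller: sample one option per decision, weighted by the policy."""
    return {k: random.choices(opts, weights=policy[k])[0]
            for k, opts in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Stand-in for training a child network and measuring validation accuracy."""
    return random.random()  # placeholder reward; really a full training run

def update_policy(policy, arch, reward, lr=0.1):
    """Crude stand-in for the policy-gradient update: shift probability mass
    toward the choices that produced higher reward."""
    for k, opts in SEARCH_SPACE.items():
        i = opts.index(arch[k])
        policy[k][i] += lr * reward
        total = sum(policy[k])
        policy[k] = [w / total for w in policy[k]]

policy = {k: [1.0 / len(v)] * len(v) for k, v in SEARCH_SPACE.items()}
for step in range(1000):                   # outer loop = the architecture search
    arch = sample_architecture(policy)     # controller proposes a child network
    reward = train_and_evaluate(arch)      # each proposal costs a training run
    update_policy(policy, arch, reward)    # controller is itself being trained
```

The point is that the controller is a second learner sitting on top of ordinary training, which is why the compute bill is so large relative to the modest gain over hand-designed architectures.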
To me the interesting question is: how did the AI acquire enough of an ontology, and enough bridging between ontologies, to build a subagent whose goals are well-grounded? And grounded in what, so to speak? In the subagent's observable data, or in a fully deterministic ontology where all the uncertainty has been packed into the parameters?