From free-energy to embodied valuing: weighted active inference in everyday life 2017-01-01T19:20:09.000Z
Free-energy, reinforcement, and utility 2016-12-26T23:02:12.000Z
The Free-Energy Theory and Precision: can a neural network catch a fly ball? 2016-12-25T01:10:59.000Z
Introduction to the Free-Energy Theory of Mind 2016-12-24T01:15:10.000Z


Comment by IAFF-User-177 (Imported-IAFF-User-177) on Generalizing Foundations of Decision Theory · 2017-02-27T20:37:59.000Z · LW · GW

Why do we need utility functions and expected utilities at all?
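For context, the machinery being questioned can be sketched in a few lines: an agent assigns a utility to each outcome, and expected-utility theory says it should pick the action whose probability-weighted utility is highest. A minimal illustration (the actions, probabilities, and utilities here are made up for the example):

```python
# Minimal sketch of expected-utility maximization (hypothetical numbers).
# Each action is a lottery: a list of (probability, outcome_utility) pairs.
lotteries = {
    "safe":  [(1.0, 1.0)],
    "risky": [(0.5, 3.0), (0.5, -0.5)],
}

def expected_utility(lottery):
    # Probability-weighted average of outcome utilities.
    return sum(p * u for p, u in lottery)

# The expected-utility maximizer picks the action with the highest score.
best = max(lotteries, key=lambda a: expected_utility(lotteries[a]))
print(best)  # "risky": 0.5*3.0 + 0.5*(-0.5) = 1.25 > 1.0
```

The question above is whether this scalar-score-plus-argmax structure is needed at all, or whether it is just one convenient representation of preference.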

Comment by IAFF-User-177 (Imported-IAFF-User-177) on The universal prior is malign · 2017-02-27T19:25:32.000Z · LW · GW

Isn't this more metaphysics than actual AI?

Comment by IAFF-User-177 (Imported-IAFF-User-177) on Neural nets designing neural nets · 2017-01-20T04:20:49.000Z · LW · GW

> On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.84, which is only 0.1 percent worse and 1.2x faster than the current state-of-the-art model. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art.

Ummm... if I'm reading this correctly, they had to do extra training for the architecture learner, and then they didn't do that much better than grad-student descent. Interesting, but not necessarily what I would call self-improving AI.
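The "extra training" in question is the outer search loop: an architecture learner repeatedly proposes a child network, pays to train and evaluate it, and updates toward better proposals. A toy sketch of that loop (random search with a stubbed-out evaluation, not the paper's actual RL-trained RNN controller; the search space and scoring function are invented for illustration):

```python
import random

# Toy sketch of an architecture-search outer loop. Each iteration stands in
# for training a full child network, which is where the extra compute goes.
random.seed(0)

search_space = {"layers": range(2, 33), "width": (16, 32, 64, 128)}

def evaluate(arch):
    # Stand-in for "train the child network and measure test accuracy";
    # here just a made-up score peaking at moderate depth and width.
    return -abs(arch["layers"] - 12) - abs(arch["width"] - 64) / 16

best_arch, best_score = None, float("-inf")
for _ in range(100):
    # Sample a candidate architecture from the search space.
    arch = {k: random.choice(list(v)) for k, v in search_space.items()}
    score = evaluate(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print(best_arch, best_score)
```

The actual method replaces the random sampler with a recurrent controller trained by policy gradient on the child networks' validation accuracy, but the cost structure is the same: many full child-network trainings to train the learner itself.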

Comment by IAFF-User-177 (Imported-IAFF-User-177) on Ontology, lost purposes, and instrumental goals · 2016-12-25T03:16:56.000Z · LW · GW

To me the interesting question is: how did the AI acquire enough ontology and bridging to build a subagent whose goals are well-grounded? And grounded in what, so to speak? In the subagent's observable data, or in a fully deterministic ontology where all the uncertainty has been packed into the parameters?