Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach

post by Scott Garrabrant · 2015-07-22T19:27:51.000Z


This post is part of the Asymptotic Logical Uncertainty series. In this post, I give a concrete example of how the Solomonoff Induction Inspired Approach (SIA) fails to converge to the correct probabilities when predicting the output of a simple Turing machine.

Let $R(n)$ be the time allowed for predicting the $n$th bit, and let $p(n)$ denote the smallest power of 10 greater than $n$. Let $f$ be a function such that $f(n)$ (for large $n$) can be computed in time $R(p(n))$, but there is no algorithm that can predict $f(n)$ in time less than $R(p(n))$ consistently better than a coin which outputs 1 with probability $\frac{1}{3}$.

Construct a Turing machine $M$ as follows:

$M(2n) = f(2n)$, and

$M(2n+1) = f(2n+1)$ XOR $f(p(2n+1))$.
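
As a concrete illustration, here is a minimal sketch of this construction in Python, using a pseudorandom stand-in for $f$ (a genuine $f$ would need the time-complexity properties assumed above, which no simple stand-in can have):

```python
import hashlib

def f(n):
    """Stand-in for f: a deterministic pseudorandom bit that is 1 about
    1/3 of the time.  A real f would be computable in time R(p(n)) but
    unpredictable in less time, which this stand-in does not capture."""
    h = int(hashlib.sha256(str(n).encode()).hexdigest(), 16)
    return 1 if h % 3 == 0 else 0

def p(m):
    """Smallest power of 10 strictly greater than m."""
    power = 10
    while power <= m:
        power *= 10
    return power

def M(m):
    """The bit sequence to be predicted (m >= 1)."""
    if m % 2 == 0:
        return f(m)         # even bits: a fresh hard-to-predict bit
    return f(m) ^ f(p(m))   # odd bits: XOR with the f-value that will
                            # reappear as the bit at the next power of 10
```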

Imagine trying to predict $M$ with an algorithm which takes time at most $R(n)$ on the $n$th bit. If $n$ is even, then $M(n)$ is determined by an event indistinguishable in that time from a coin which is 1 with probability $\frac{1}{3}$, so the predictor should output a 1 with probability $\frac{1}{3}$ and a 0 with probability $\frac{2}{3}$. If $n$ is odd, then we have the XOR of two bits that are indistinguishable from bits which are independently 1 with probability $\frac{1}{3}$, so the predictor should output a 1 with probability $\frac{4}{9}$.
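
Writing $X$ and $Y$ for the two effectively independent coin flips behind an odd bit, the $\frac{4}{9}$ comes from

$$\Pr[X \text{ XOR } Y = 1] = \frac{1}{3}\cdot\frac{2}{3} + \frac{2}{3}\cdot\frac{1}{3} = \frac{4}{9}.$$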

Therefore, if we run SIA on this machine $M$ with the time bound $R$, and SIA worked correctly, the probability that SIA outputs a 1 on the even bits would converge to $\frac{1}{3}$.

However, this is not the case. In particular, the probability that the $10^k$th bit output by SIA is a 1 will not converge to $\frac{1}{3}$, but will instead switch between $\frac{4}{9}$ and $\frac{5}{9}$ (depending on $k$).

This is because by the time a predictor needs to predict $M(10^k)$, it will have already output a 1 with probability $\frac{4}{9}$ for some earlier odd bit $M(j)$ with $p(j) = 10^k$, and it has had enough time to compute $f(j)$. Since $M(10^k) = M(j)$ XOR $f(j)$, if $f(j) = 1$, then any predictor that makes its prediction for $M(10^k)$ match its prediction for $M(j)$ will contradict itself.

The machines that will do well when sampled in SIA will be the ones that output a 1 independently with probability $\frac{1}{3}$ for all even bits and a 1 with probability $\frac{4}{9}$ for all odd bits, EXCEPT when predicting a bit at a location that is a power of 10, in which case they will copy (or negate) their answer for an earlier odd bit, and so output a 1 with probability either $\frac{4}{9}$ or $\frac{5}{9}$, both of which are greater than the desired $\frac{1}{3}$.

Further, a machine that gives a 1 with probability $\frac{1}{3}$ on $M(10^k)$ for all $k$ will have a likelihood going to 0 relative to the machines which just copy or negate their old answers.
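
A rough numerical illustration of this effect, reusing the stand-in $f$, $p$, and $M$ from the sketch above (the specific numbers depend on that stand-in, but the shrinking ratio is the point):

```python
from math import prod

# Assumes f(n), p(m), and M(m) from the sketch above are in scope.

def prob_honest_correct(m):
    """Probability that the 'honest' machine's independent guess at position m
    (1 with probability 1/3 on even bits, 4/9 on odd bits) matches M(m)."""
    q = 1/3 if m % 2 == 0 else 4/9
    return q if M(m) == 1 else 1 - q

# A copy/negate machine guesses exactly like the honest machine, except at
# each power of 10 it outputs (its guess for an earlier odd bit j) XOR f(j).
# Whenever it was right at j it is then automatically right at 10^k, so the
# two machines' likelihoods on the true sequence differ only by the honest
# machine's chance of getting the power-of-10 bits right:
ratio = prod(prob_honest_correct(10**k) for k in range(1, 8))
print(ratio)  # each factor is 1/3 or 2/3, so the ratio shrinks toward 0
```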

Note that the correct probability with which to believe $M(10^k) = 1$ really is $\frac{1}{3}$, not $\frac{4}{9}$ or $\frac{5}{9}$. The prediction given to $M(j)$ was calculated from a state of ignorance about $f(j)$, and that prediction should have no effect on the prediction for $M(10^k)$.
