# Solomonoff’s solipsism

post by Mergimio H. Doefevmil · 2023-05-08T06:55:22.192Z · LW · GW · 9 comments

EDIT: This post applies if deterministic computing is assumed, but not if nondeterministic computing is assumed. Solomonoff induction itself is agnostic in that regard. It doesn’t prescribe any particular paradigm of computation.

———

According to Solomonoff induction [? · GW], the probability that solipsism is not true is zero.

When applying Solomonoff induction, the entirety of the probability mass is distributed among world models that are one-dimensional sequences of states in which every state has precisely one successor - the only possible exception being the last state of the sequence, if the sequence ever ends (a condition which, incidentally, always makes the sequence more complex, making some kind of solipsistic afterlife quite probable).

Does this mean that we should modify Solomonoff induction, or could it perhaps - just perhaps - mean that there is no we after all?

## 9 comments

Comments sorted by top scores.

## comment by JBlack · 2023-05-09T01:22:04.703Z · LW(p) · GW(p)

You need to distinguish between world models - which can include any number of entities, no matter how complex or useless - and the predictions made by those models. The *predictions* are sequences (more correctly, probability distributions over sequences). The *models* are not.

A world model could, for example, include hypothesized general rules for a universe, together with a specification of 13.7 billion years of history, that there exists a particular observer with specific details of some particular sensory apparatus, and that the sequence is based on the signal from that sensory apparatus. The actual distribution of *sequences* predicted by this model at some given time may be {0->0.9, 1->0.1}, corresponding to the observer having just been activated and most likely starting with a 0 bit.

The probability assigned by Solomonoff induction to this model *is not zero*. It is very small, since this is a very complex model requiring a lot of bits to specify, but not zero. It may *never* be zero - that would depend upon the details of the predictions and the observations.

## ↑ comment by Mergimio H. Doefevmil · 2023-05-09T02:15:22.484Z · LW(p) · GW(p)

I suspect that the paradigm of computation one chooses plays an important role here. The paradigm of a deterministic Turing machine leads to what I described in the post - one-dimensional sequences and guaranteed solipsism. The paradigm of a nondeterministic Turing machine allows for multi-dimensional sequences. I will edit the post to reflect this.

Replies from: JBlack

## ↑ comment by JBlack · 2023-05-16T03:47:54.296Z · LW(p) · GW(p)

Solomonoff induction is about computable models that produce conditional probabilities for an input symbol (which can represent anything at all) given a previous sequence of input symbols. The models are initially weighted by representational complexity, and for any given input sequence are further weighted by the probability assigned to the observed sequence.

The distinction between deterministic and non-deterministic Turing machines is not relevant, since the same functions are computable by both. The distinction I'm making is between *models* and *input*. They are not the same thing. This part of your post:

> [...] world models which are one-dimensional sequences of states where every state has precisely one successor [...]

confuses the two. The *input* is a sequence of states. World models are any computable structure at all that provides *predictions* as output. Not even the predictions are sequences of states - they're conditional probabilities for the next input given the previous input, and so can be viewed as a distribution over all finite sequences.
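As a toy illustration of the weighting scheme described above (not part of the original comment - real Solomonoff induction enumerates *all* computable models and is uncomputable, and the hypothesis names and complexity values here are purely illustrative), one can sketch the mixture over a tiny hand-picked hypothesis class, with prior weight 2^-K and the likelihood each model assigns to the observed input:

```python
# Toy sketch of Solomonoff-style weighting over a tiny, hand-picked
# hypothesis class. Each model is a *conditional* predictor: it maps the
# previously seen bits to P(next bit = 1), matching the comment's point
# that models output conditional probabilities, not fixed sequences.

hypotheses = {
    # name: (description length K in bits (assumed), conditional model)
    "always-0":  (2, lambda prev: 0.0),
    "always-1":  (2, lambda prev: 1.0),
    "fair-coin": (3, lambda prev: 0.5),
    "alternate": (4, lambda prev: 0.0 if prev and prev[-1] == "1" else 1.0),
}

def posterior(observed: str):
    """Weight each model by 2^-K times the probability it assigns to the data."""
    weights = {}
    for name, (k, model) in hypotheses.items():
        p = 2.0 ** -k  # complexity prior
        for i, bit in enumerate(observed):
            p1 = model(observed[:i])  # P(next bit = 1 | bits seen so far)
            p *= p1 if bit == "1" else 1.0 - p1
        weights[name] = p
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()} if total else weights

print(posterior("1010"))
```

Models contradicted by the input (here "always-0" and "always-1") drop to zero weight through the likelihood term, not by fiat; every model consistent with the input keeps a nonzero share determined jointly by its complexity and its predictions.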

## comment by the gears to ascension (lahwran) · 2023-05-08T13:26:43.595Z · LW(p) · GW(p)

According to Solomonoff induction, the probability that solipsism is not true is zero.

well that sure is a logical statement that might or might not be true

## comment by Lucius Bushnaq (Lblack) · 2023-05-08T08:47:13.086Z · LW(p) · GW(p)

The sequence a hypothesis predicts the inductor to receive is not the world model that hypothesis implies.

A hypothesis can consist of very simple laws of physics describing time evolution in an eternal universe, yet predict that the sequence will be cut off soon because the camera that is sending the pixel values that are the sequence the inductor is seeing is about to die.

Replies from: Mergimio H. Doefevmil

## ↑ comment by Mergimio H. Doefevmil · 2023-05-08T09:19:47.724Z · LW(p) · GW(p)

Solomonoff induction doesn’t say anything about larger world models that contain the one-dimensional sequences that form the Solomonoff distribution. You appear to be saying that although the predicted sequence is always solipsistic from the point of view of the inductor, there can be a larger reality that contains that sequence - but that is an extra add-on that doesn’t appear anywhere in the original Solomonoff induction.

Replies from: JBlack

## ↑ comment by JBlack · 2023-05-09T01:19:01.996Z · LW(p) · GW(p)

A Solomonoff hypothesis can be *any* computable model that predicts the sequence, including any model that also happens to predict a larger reality if queried in that way. There are always infinitely many such "large world" models that are compatible with the input sequence up to any given point, and all of them are assigned nonzero probability.

It is possible that there is a simpler model that predicts the same sequence and does not model the existence of any other reality in any meaningful sense, but I suspect that a general universe model plus a fixed-size "you are here" pointer will, in a universe with computable rules, remain pretty close to optimal.

## comment by tailcalled · 2023-05-08T07:24:47.884Z · LW(p) · GW(p)

See Thou Art Physics [LW · GW].