What does Solomonoff induction say about brain duplication/consciousness?

post by riceissa · 2020-03-02T23:07:28.604Z · LW · GW · 6 comments

This is a question post.


Back in 2012, in a thread [LW(p) · GW(p)] on LW, Carl Shulman wrote a couple of comments connecting Solomonoff induction to brain duplication, epiphenomenalism, functionalism, David Chalmers's "psychophysical laws", and other ideas about consciousness.

The first comment [LW(p) · GW(p)] says:

It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like "if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?" or "if the camera is about to be duplicated, which copy's inputs will be predicted by Solomonoff induction?"

If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about "free will."

The second comment [LW(p) · GW(p)] says:

The code simulating a physical universe doesn't need to make any reference to which brain or camera in the simulation is being "read off" to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls "psychophysical laws."
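
To make the decomposition in these comments concrete, here is a minimal sketch (the names and structure are my own illustrative assumptions, not anything from the comments): a hypothesis is a world program plus a "read-off" function, and its prior weight falls off exponentially in its total length.

```python
# A minimal sketch, assuming a hypothesis can be modelled as two Python callables.
# The names (Hypothesis, physics, read_off, prior_weight) are illustrative, not
# part of any standard formalism.
from typing import Callable


class Hypothesis:
    def __init__(self,
                 physics: Callable[[int], object],      # time step -> full simulated world state
                 read_off: Callable[[object], bytes]):  # world state -> predicted camera bits
        # physics: "the laws of physics as they are seen by the creatures in the simulation"
        self.physics = physics
        # read_off: the additional code Carl compares to Chalmers's psychophysical laws
        self.read_off = read_off

    def predict(self, t: int) -> bytes:
        return self.read_off(self.physics(t))


def prior_weight(program_length_bits: int) -> float:
    # Solomonoff induction weights a program of length L bits by roughly 2**-L,
    # so how concisely read_off can locate the camera matters directly.
    return 2.0 ** -program_length_bits
```

On this picture, the questions in the quoted comments are questions about what the shortest adequate read-off programs look like.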

Carl's comments pose the questions one can ask and highlight the connection, but they don't answer those questions. I would be interested in references to other places where this idea is discussed, or in answers to these questions.

Here are some of my own confused thoughts (I'm still trying to learn algorithmic information theory, so I would appreciate hearing any corrections):

Answers

answer by Daniel Kokotajlo · 2020-03-03T21:00:50.301Z · LW(p) · GW(p)

To answer your first bullet: Solomonoff induction has many hypotheses. One class of hypotheses would continue predicting bits in accordance with what the first camera sees, and another class of hypotheses would continue predicting bits in accordance with what the second camera sees. (And there would be other hypotheses as well in neither class.) Both classes would get roughly equal probability, unless one of the cameras was somehow easier to specify than the other. For example, if there was a gigantic arrow of solid iron pointing at one camera, then maybe it would be easier to specify that one, and so it would get more probability. Bostrom discusses this a bit in Anthropic Bias, IIRC.
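
As a toy illustration of the "easier to specify" point (a hedged sketch with made-up numbers, not anything from Anthropic Bias): if the shortest program singling out camera B needs k more bits than the one singling out camera A, the A-tracking class gets roughly 2^k times the prior weight.

```python
# Toy illustration with made-up numbers: the weight ratio between two
# hypothesis classes whose shortest programs differ by `extra_bits`.
def weight_ratio(extra_bits: int) -> float:
    return 2.0 ** extra_bits

print(weight_ratio(1))   # 2.0    -- one extra bit roughly halves a class's prior weight
print(weight_ratio(10))  # 1024.0 -- a camera singled out by a giant iron arrow could dominate
```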

To answer your second bullet: Yep. To reason about Solomonoff Induction properly we need to think about what the simplest "psychophysical laws" are, since they are what SI will be using to make predictions given the physics-simulation. And depending on what they are, various transformations of the camera may or may not be supported. Plausibly, when a camera is destroyed and rebuilt with functionally similar materials, the sorts of psychophysical laws which say "you survive the process" will be more complex than the sorts which say you don't. If so, SI would predict the end of its perceptual sequence. (Of course, after the transformation, you'd have a system which continued to use SI. So it would update away from those psychophysical laws that (in its view) just made an erroneous prediction.)

To answer your third question: For SI, there is only one rule: Simpler is better. So, think about how you are not sure how to classify what counts as "drastic." Insofar as it turns out to be hard to specify, it's a distinction SI would not make use of. So it may well be that a rock falling on a camera would be predicted to result in doom, but it may not. It depends on what the overall simplest psychophysical laws are. (Of course, they have to also be consistent with data so far -- so presumably lots of really simple psychophysical laws have already been ruled out by our data, and any real-world SI agent would have an "infancy period" where it is busy ruling out elegant, simple, and wrong hypotheses, hypotheses which are so wrong that they basically make it flail around like a human baby.)

Those are my answers at least, I'd be interested to hear if anyone disagrees.

FWIW I am excited to hear Carl was thinking about this in 2012; I ended up having similar thoughts independently a few years ago. (My version: Solomonoff Induction is solipsistic phenomenal idealism.)

comment by riceissa · 2020-03-06T00:32:32.906Z · LW(p) · GW(p)

My version: Solomonoff Induction is solipsistic phenomenal idealism.

I don't understand what this means (even searching "phenomenal idealism" yields very few results on Google, and none that look especially relevant). Have you written up your version anywhere, or do you have a link explaining what "solipsistic phenomenal idealism" or "phenomenal idealism" means? (I understand solipsism and idealism already; I just don't know how they combine and what work the "phenomenal" part is doing.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-03-06T15:21:53.538Z · LW(p) · GW(p)

Here's an old term paper I wrote defending phenomenal idealism. It explains early on what it is. It's basically Berkeley's idealism but without God. As I characterize it, phenomenal idealism says there are minds/experiences and also physical things, but only the former are fundamental; physical things are constructs out of minds/experiences. Solipsistic phenomenal idealism just means you are the only mind (or at least, the only fundamental one -- all others are constructs out of yours).

"Phenomenal" might not be relevant, it's just the term I was taught for the view. I'd just say "Solipsistic idealism" except that there are so many kinds of idealism that I don't think that would be helpful.

answer by jessicata · 2020-03-03T02:08:53.623Z · LW(p) · GW(p)

I wrote about a closely related issue (more directly about human developmental psychology / cognitive science than Solomonoff induction) here [LW · GW].

comment by riceissa · 2020-03-06T00:31:19.674Z · LW(p) · GW(p)

Thanks, that's definitely related. I had actually read that post when it was first published, but didn't quite understand it. Rereading the post, I feel like I understand it much better now, and I appreciate having the connection pointed out.

answer by Lukas Finnveden (Lanrian) · 2020-03-06T17:12:55.013Z · LW(p) · GW(p)

This is highly related to UDASSA [LW · GW]. See especially, in the linked post, Problem #2 (about splitting conscious computers) and bits of Problem #3 (e.g. "What happens if we apply UDASSA to a quantum universe? For one, the existence of an observer within the universe doesn't say anything about conscious experience. We need to specify an algorithm for extracting a description of that observer from a description of the universe"...).

answer by riceissa · 2020-03-07T05:06:09.719Z · LW(p) · GW(p)

Lanrian's mention of UDASSA made me search for discussions of UDASSA again, and in the process I found Hal Finney's 2005 post "Observer-Moment Measure from Universe Measure", which seems to be describing UDASSA (though it doesn't mention UDASSA by name); it's the clearest discussion I've seen so far, and goes into detail about how the part that "reads off" the camera inputs from the physical world works.

I also found this post by Wei Dai, which seems to be where UDASSA was first proposed.

6 comments


comment by Pattern · 2020-03-04T20:18:48.535Z · LW(p) · GW(p)

So right before the camera is duplicated, Solomonoff induction "knows" that it will be in just one of the cameras soon, but doesn't know which one.

It sounds like it'd "know" that it will be both, separately.

Replies from: riceissa
comment by riceissa · 2020-03-06T00:03:49.772Z · LW(p) · GW(p)

I'm not sure I understand. The bit sequence that Solomonoff induction receives (after the point where the camera is duplicated) will either contain the camera inputs for just one camera, or it will contain camera inputs for both cameras. (There are also other possibilities, like maybe the inputs will just be blank.) I explained why I think it will just be the camera inputs for one camera rather than two (namely, tracking the locations of two cameras requires a longer program). Do you have an explanation of why "both, separately" is more likely? (I'm assuming that "both, separately" is the same thing as the bit sequence containing camera inputs for both cameras. If not, please clarify what you mean by "both, separately".)
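
To sketch the three continuation classes I have in mind (a hedged illustration; the world object and its sensors dict are made up for this sketch):

```python
# Illustrative read-off functions for the post-duplication hypotheses;
# `world` and its `sensors` dict are assumptions made for this sketch.
def read_camera_1(world) -> bytes:
    return world.sensors["camera_1"]        # keep following the original camera

def read_camera_2(world) -> bytes:
    return world.sensors["camera_2"]        # follow the duplicate instead

def read_both(world) -> bytes:
    # Has to locate both cameras *and* fix an interleaving convention,
    # which is why I'd expect the shortest program of this kind to be longer.
    return world.sensors["camera_1"] + world.sensors["camera_2"]
```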

Replies from: Pattern
comment by Pattern · 2020-03-06T20:02:27.459Z · LW(p) · GW(p)

My disagreement was terminological, not conceptual.

There is a teleporter. You step into part A and disappear, and step out of both part B and part C separately. There are now two of you. These two do not possess any special telepathy or connection, but both are you; you may care about the outcomes for both before you step into the teleporter, and this may affect whether you choose to do so.

Duplication is not a process where you will end up as 'one of the two, but unclear which'. Duplication is a process where you become two entities which are not changed by the process. You become not one, but "both, separately." The 'separation' means that the two do not share observations directly with each other (though an object entering the same room as both could be seen by both from different angles).

comment by Donald Hobson (donald-hobson) · 2020-03-03T12:30:00.930Z · LW(p) · GW(p)

I consider this to be a flaw in AIXI-type designs. To actually make sense, these designs need hypercomputation, and so have to guess at what rules allow the hypercomputer to interact with the normal universe. I have a rough idea of some kind of FDT-ish agent that can solve this, but can't formalize it.

Replies from: riceissa
comment by riceissa · 2020-03-06T00:29:03.905Z · LW(p) · GW(p)

I might have misunderstood your comment, but it sounds like you're saying that Solomonoff induction isn't naturalized/embedded, and that this is a problem (sort of like in this post [LW · GW]). If so, I'm fine with that, and the point of my question was more like, "given this flawed-but-interesting model (Solomonoff induction), what does it say about this question that I'm interested in (consciousness)?"

Replies from: donald-hobson
comment by Donald Hobson (donald-hobson) · 2020-03-06T15:07:28.453Z · LW(p) · GW(p)

We can make Solomonoff induction believe all sorts of screwy things about consciousness. Take a few trillion identical computers running similar computations. Put something really special and unique next to one of the cases, say a micro black hole. Run Solomonoff induction on all the computers, each with different input. Each inductor simulates the universe and has to know its own position in order to predict its input. The one next to the black hole can most easily locate itself as the one next to the black hole. If the black hole is moved, it will believe its consciousness resides in "the computer next to the black hole" and predict accordingly.
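
A hedged back-of-the-envelope illustration of why the landmark-based description wins (the numbers are made up for this sketch): indexing one computer out of a trillion costs about 40 bits, so any description like "the computer next to the unique micro black hole" that compresses below that will dominate the read-off.

```python
import math

# Made-up numbers: cost of locating "this" computer by index vs. by landmark.
bits_to_index = math.log2(1e12)   # ~39.9 bits to name one of a trillion computers
print(round(bits_to_index, 1))
# If describing "next to the micro black hole" takes fewer bits than that,
# the landmark-based read-off program dominates -- and it tracks the black hole,
# not the original computer, if the hole is moved.
```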