Demonstrating MWI by interfering human simulations
post by Yair Halberstadt (yair-halberstadt) · 2022-05-08T17:28:27.649Z · LW · GW · 25 comments
TLDR: A demonstration that, given some reasonable assumptions, a quantum superposition of a brain creates an exponential number of independent consciousnesses, and each independent consciousness experiences reality as classical.
The demonstration
I'm not going to explain the Many Worlds Interpretation of Quantum Physics, since much better expositions exist. Feel free to suggest your favourite in the comments.
Imagine we have the ability, at some point in the future, to simulate a conscious human. This is almost certainly theoretically feasible, but it is unknown how difficult it would be in practice.
I take it as obvious that a simulation of a conscious human will be conscious, given that it will give exactly the same answers to any questions about consciousness as an actual human. So you should be as convinced it is conscious as you are about any other human.
Now we create a function that takes an input of length n bits and produces as output whatever a given simulation replies to that input. For example, the input could be an index into a gigantic list of philosophy questions, and the output would be whatever Eliezer Yudkowsky would have replied if you had texted him that question at exactly midnight on the 1st of January 2020.
We want to find the exact number of inputs that produce an output with a given property - e.g. the number of inputs that produce a yes answer, or the number where the answer starts with a letter from the first half of the alphabet, or whatever.
Classically, it's obvious the only way to do this is to run the function 2^n times - once for every possible input - and then count how many outputs have the desired property. Doing this requires creating 2^n separate consciousnesses, each of which lives for the duration of its function call.
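As a purely illustrative sketch, here is what the classical count looks like in Python. The `simulate` stub is hypothetical - it stands in for the whole-brain simulation - and `has_property` is whatever property test we care about:

```python
def simulate(x: int) -> str:
    # Hypothetical stub standing in for the whole-brain simulation:
    # deterministically maps an n-bit input to the simulated reply.
    return f"reply {x % 7}"

def count_classically(n: int, has_property) -> int:
    # One full simulation run per input: 2**n calls in total,
    # and so 2**n separate simulated consciousnesses.
    return sum(1 for x in range(2 ** n) if has_property(simulate(x)))

# Example: count inputs whose reply ends in "3" (146 of the 1024).
print(count_classically(10, lambda reply: reply.endswith("3")))
```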
However, on a quantum computer we only need to run the function O(2^(n/2)) times, using a quantum counting algorithm. So, for example, if the inputs were 20 bits long, a classical computer would have to call the simulation function roughly a million times, whilst the quantum computer would only need on the order of a thousand calls.
The way this works is by creating a superposition of all possible inputs, and then repeatedly applying the function in such a way as to create interference between the outputs, eventually leaving a state that encodes the desired count.
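To see the interference concretely, here is a toy numpy sketch of the linear algebra behind quantum counting. This is not a circuit-level implementation - it just exhibits the mathematical fact the algorithm exploits: the Grover operator built from a sign-flip oracle rotates the uniform superposition by an angle that encodes the count, so the count can be read off an aggregate of the whole superposition rather than input by input. The predicate `has_property` is a hypothetical stand-in for "the simulated reply has the property":

```python
import numpy as np

n = 10                  # input length in bits
N = 2 ** n

def has_property(x: int) -> bool:
    # Hypothetical stand-in for "the simulated reply has the property".
    return bin(x).count("1") % 3 == 0

marked = np.array([has_property(x) for x in range(N)])

# Phase oracle: flips the sign of the amplitude of every marked input.
oracle = np.diag(np.where(marked, -1.0, 1.0))

# Diffusion operator D = 2|s><s| - I about the uniform superposition |s>.
s = np.full(N, 1.0 / np.sqrt(N))
diffusion = 2.0 * np.outer(s, s) - np.eye(N)

# The Grover operator G = D @ oracle rotates |s> by an angle theta with
# sin^2(theta/2) = M/N, where M is the number of marked inputs. So the
# expectation <s|G|s> = cos(theta) recovers M as an aggregate of the
# whole superposition, without inspecting marked inputs one at a time.
G = diffusion @ oracle
cos_theta = s @ (G @ s)
M_estimate = N * (1.0 - cos_theta) / 2.0   # N * sin^2(theta/2)

print(f"true count:      {int(marked.sum())}")
print(f"recovered count: {M_estimate:.1f}")
```

On real hardware the expectation value is not directly accessible; quantum phase estimation recovers the same rotation angle using O(2^(n/2)) applications of the oracle, which is where the quadratic saving comes from.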
So for any given property a simulation can exhibit, we can count how many of the 2^n possible simulations have that property in fewer than 2^n calls. And conceptually, there's no possible way to know that without running every single possible simulation.
Now, if you agree that anything which acts exactly the same as a conscious being from an input-output perspective must in fact be conscious, rather than a philosophical zombie, it seems reasonable to extend that to something which acts exactly the same as an aggregate of conscious beings - it must in fact be an aggregate of conscious beings. So even though we've run the simulation function only a thousand times, we must have simulated at least a million consciousnesses - how else could we know that exactly 254,368 of them, say, output a message which doesn't contain the letter e?
The only way this is possible is if, each time we ran the simulation function with a superposition of all possible inputs, it created a superposition of all possible consciousnesses.
Now each of those consciousnesses produces the same output as it would in a classical universe, so even though they exist in superposition, each one sees only a single possible input. To each of them it will appear as though the superposition collapsed when it looked at the input, leaving only a single random result, but from the outside view we can see that the whole thing exists in a giant superposition.
This strongly mirrors how the MWI says the entire universe is in a giant superposition, while each individual consciousness sees a collapsed state because decoherence prevents it from interfering with the other consciousnesses.
25 comments
Comments sorted by top scores.
comment by Leo P. · 2022-05-09T02:32:15.293Z · LW(p) · GW(p)
So even though we've run the simulation function only a thousand times, we must have simulated at least a million consciousnesses - how else could we know that exactly 254,368 of them, say, output a message which doesn't contain the letter e?
Isn't that exactly what Scott Aaronson explains a quantum computer/algorithm doesn't do? (See https://www.scottaaronson.com/papers/philos.pdf, page 34 for the full explanation)
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T02:46:59.018Z · LW(p) · GW(p)
Sort of. It kind of is how it works, at least for some algorithms, but getting the output is tricky and requires interference. For our purposes the details don't matter.
↑ comment by Leo P. · 2022-05-09T11:00:46.291Z · LW(p) · GW(p)
I'm not sure I understand your answer. I'm saying that you did not simulate at least million consciousnesses, just as in Shor's algorithm you do not try all the divisors.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T12:06:46.346Z · LW(p) · GW(p)
You created a superposition of a million consciousnesses and then outputted an aggregate value about all those consciousnesses. Either a million entities experienced a conscious experience, or you can find out the output of a conscious being without ever actually creating a conscious being - i.e. p-zombies exist (or at least aggregated p-zombies).
↑ comment by JBlack · 2022-05-09T23:39:27.661Z · LW(p) · GW(p)
You can find out some aggregate property of the output of functions that have equivalent output to a million conscious beings. It is far from obvious that this is equivalent to there actually being a million conscious beings, or even one conscious being.
Also, equivalence of one output is definitely not the same thing as equivalence of consciousness, as even a moment's thought will show.
↑ comment by Leo P. · 2022-05-09T12:41:38.684Z · LW(p) · GW(p)
You created a superposition of a million consciousnesses and then outputted an aggregate value about all those consciousnesses.
This I agree with.
Either a million entities experienced a conscious experience, or you can find out the output of a conscious being without ever actually creating a conscious being - i.e. p-zombies exist (or at least aggregated p-zombies).
This I do not. You do not get access to a million entities, by the argument I laid out previously. You did not simulate all of them. And you did not create something that behaves like a million entities aggregated either, just like you cannot store 2^n classical bits on a quantum computer consisting of n qubits. You get a function which outputs an aggregated value of your superposition, but you can't recover from it each of the consciousnesses you claim to have simulated. Therefore this is what I believe is flawed in your position:
it seems reasonable to extend that to something which acts exactly the same as an aggregate of conscious beings - it must in fact be an aggregate of conscious beings
If I understand your arguments correctly (which I may not, in which case I'll be happy to stand corrected), this sentence should mean to you that for something to act the same as an aggregate of n conscious beings, it must be an aggregate of at least n conscious beings? But then doesn't this view mean that a function of d variables can never be reduced to a function of k variables, k < d?
comment by Algon · 2022-05-08T21:28:11.808Z · LW(p) · GW(p)
I don't think the quantum counting part is doing anything here.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T02:53:18.195Z · LW(p) · GW(p)
It's allowing us to get the results out of the simulation. Without it you could dismiss the question of whether the consciousnesses actually exist as so much philosophical nonsense, but with it I can actually show you the results.
↑ comment by Algon · 2022-05-09T10:02:34.719Z · LW(p) · GW(p)
Sorry, I meant that I think the argument doesn't depend on the quantum speed-up.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T12:03:14.037Z · LW(p) · GW(p)
Maybe? It makes it more obvious something is going on here than if you had to call the function 2^n times.
comment by TAG · 2022-05-08T21:44:15.002Z · LW(p) · GW(p)
Now each of those consciousnesses produces the same output as it would in a classical universe, so even though they exist in superposition, each one sees only a single possible input. To each of them it will appear as though the superposition collapsed when it looked at the input, leaving only a single random result, but from the outside view we can see that the whole thing exists in a giant superposition.
This strongly mirrors how the MWI says the entire universe is in a giant superposition, while each individual consciousness sees a collapsed state because decoherence prevents it from interfering with the other consciousnesses.
In a decoherently superposed multiverse, the decoherence itself ensures that each mind is objectively non-interacting with the others, and therefore has no reason to detect itself as being in superposition.
But what happens inside a quantum computer is a coherent superposition. And it isn't a coherent superposition of classical states, because there is no fact of the matter about the basis of a superposed state.
I take it as obvious that a simulation of a conscious human will be conscious, given that it will give exactly the same answers to any questions about consciousness as an actual human
So would a zombie. A computational duplicate is a functional duplicate, and a functional duplicate would answer the same questions in the same way, conscious or not.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T02:50:15.141Z · LW(p) · GW(p)
I'm taking it as obvious that zombies can't exist. See https://www.lesswrong.com/tag/zombies for lots of arguments as to why that is indeed obvious.
↑ comment by TAG · 2022-05-09T11:12:05.429Z · LW(p) · GW(p)
It's not strictly a p-zombie, because it's not a physical duplicate; it's a functional duplicate. A functional duplicate of a person will answer questions about consciousness in the same way whether it's conscious or not. If it isn't conscious, that would not violate physicalism about consciousness, because the physics is different.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T12:03:41.675Z · LW(p) · GW(p)
That's not making any sense to me, sorry.
↑ comment by TAG · 2022-05-09T12:29:07.719Z · LW(p) · GW(p)
From the wiki "Physicalists typically deny the possibility of zombies: if a p-zombie is atom-by-atom identical to a human being in our universe, then our speech can be explained by the same mechanisms as the zombie’s — and yet it would seem awfully peculiar that our words and actions would have an entirely materialistic explanation, but also, furthermore, our universe happens to contain exactly the right bridging law such that our utterances about consciousness are true and our consciousness syncs up with what our merely physical bodies do. It’s too much of a stretch: Occam’s razor dictates that we favor a monistic universe with one uniform set of laws."
The computational simulation you are talking about is not an atom by atom duplicate, so the above does not apply.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T14:35:32.127Z · LW(p) · GW(p)
Almost everyone I know who thinks p-zombies can't exist agrees that a perfect simulation of a human can't be a zombie either, for identical reasons. I've never seen someone claim that p-zombies can't exist but functional zombies can.
↑ comment by green_leaf · 2022-05-09T15:41:21.384Z · LW(p) · GW(p)
A functional duplicate of a consciousness necessarily has consciousness.
That can be shown by many thought experiments.
Replies from: TAG↑ comment by TAG · 2022-05-09T15:54:47.743Z · LW(p) · GW(p)
That would mean that the dependence of consciousness on physics is impossible.
Thought experiments only demonstrate plausibility and implausibility.
There are also thought experiments, such as the Blockhead, showing the implausibility of functional realisability.
↑ comment by green_leaf · 2022-05-10T18:40:46.497Z · LW(p) · GW(p)
That would mean that the dependence of consciousness on physics is impossible.
No, it wouldn't. Consciousness depends on the pattern, which, in turn, depends on physics. "Depending on physics" is a very vague phrase. It's possible to define it in a way that will make it false for what I said, in which case consciousness doesn't "depend on physics." There is no point in using very vague phrases to check if they apply to a particular ontology of consciousness. That's worthless - it gives us no information.
Instead, let's concentrate on what's actually the case. That's the important thing. Not the semantics used (like "depending on physics").
The Blockhead experiment (the way Wikipedia describes it) can't pass the Turing test. The response to a sentence depends not only on the last sentence, but also on all the sentences before it (and on all the responses, though those might be deterministic). We can't build anything that has all possible responses preprogrammed (like a giant lookup table) and takes only the last sentence in the conversation as its input (well, we can, but it wouldn't pass the Turing test).
↑ comment by TAG · 2022-05-10T19:20:16.674Z · LW(p) · GW(p)
Consciousness depends on the pattern, which, in turn, depends on physics
But obviously not in the sense I meant. If I'd meant it in that sense, I wouldn't be disagreeing with you.
↑ comment by green_leaf · 2022-05-13T19:10:28.141Z · LW(p) · GW(p)
If you meant it in the sense that's not true for what I wrote, then it's not true. That's why I recommended not relying on ill-defined phrases, and instead concentrating on the topic itself.
↑ comment by Yair Halberstadt (yair-halberstadt) · 2022-05-09T02:52:02.792Z · LW(p) · GW(p)
But what happens inside a quantum computer is a coherent superposition. And it isn't a coherent superposition of classical states, because there is no fact of the matter about the basis of a superposed state.
And yet each consciousness will still only see a classical state, as otherwise it would not answer exactly the same as the original consciousness. Hence this is an example of Many Worlds (even though it doesn't rely on decoherence to work).
↑ comment by TAG · 2022-05-09T11:05:56.031Z · LW(p) · GW(p)
You're starting at the end. There's no fact of the matter about how many consciousnesses there are, including whether there are any. The output of the whole system can be explained by computation. It might be "as if" a bunch of classical computations were occurring, but that doesn't mean they are. A Giant Look Up Table can behave as though it is computing, but isn't.
↑ comment by Leo P. · 2022-05-09T23:31:44.280Z · LW(p) · GW(p)
Talking about the fact that each consciousness will only see a classical state doesn't make sense, because they are in a quantum superposition state. Just like it does not make sense to say that the photon went either right or left in the double-slit experiment.