Isomorphisms don't preserve subjective experience... right?
post by notfnofn · 2024-07-03T14:22:59.679Z · LW · GW · 3 comments
This is a question post.
I've seen a couple of discussions about brain/CNS uploads, and there seems to be a common assumption that an upload would be conscious in the way that we are conscious. I've even seen some anthropic-principle-esque arguments that we are likely in a simulation because this seems theoretically feasible.
When I think of a simulation, I think of a mathematical isomorphism from the relevant pieces of the brain/CNS onto some model of computation, combined with an isomorphism from possible environments to inputs to this model of computation.
But this model of computation could be anything. It could be a quantum supercomputer. It could be a slow classical computer with a huge memory. Heck, it could be written on a huge input file and given to a human who slowly works out the computations by hand.
And so this feels to me like it suggests the mathematical structure itself is conscious, which feels absurd (not to mention that the implications are downright terrifying). So there should be some sort of hardware-dependence to obtain subjective experience. Is this a generally accepted conclusion?
Answers
I seriously don't know whether subjective experience is mostly independent of hardware implementation or not. I don't think we can know for sure.
However: If we take the position that conscious experience is strongly connected with behaviour such as writing about those conscious experiences, then it has to be largely hardware-independent since the behaviour of an isomorphic system is identical.
So my expectation is that it probably is hardware-independent, and that any system that internally implements isomorphic behaviour probably is at least very similarly conscious.
In any event, we should probably treat them as being conscious even if we can't be certain. After all, none of us can be truly certain that any other humans are conscious. They certainly behave as if they are, but that leads back to "... and so does any other isomorphic system".
No, it isn't.
the mathematical structure itself is conscious
If I understand it correctly, that's the position of, e.g. Max Tegmark (more generally, he thinks that "to exist" equals "to have a corresponding math structure").
So there should be some sort of hardware-dependence to obtain subjective experience
My (and, I think, a lot of other people's) intuition says something like "there is no hardware-dependence, but the process of computation must exist somewhere".
↑ comment by notfnofn · 2024-07-03T15:30:55.277Z · LW(p) · GW(p)
Would your intuition suggest that a computation by hand produces the same kind of experience as your brain? Your intuition reminds me of the strange mathematical philosophy of ultrafinitism, where even mathematical statements that require a finite amount of computation to verify do not have a truth value until they are computed.
↑ comment by JBlack · 2024-07-04T02:06:55.845Z · LW(p) · GW(p)
Yes, my default expectation is that in theory a sufficiently faithful computation performed "by hand" would be in itself conscious. The scale of those computations is likely staggeringly immense though, far beyond the lifespan of any known being capable of carrying them out. It would not be surprising if a second of conscious experience required 10^20 years of "by hand" computation.
I doubt that any practical computation by hand can emulate even the (likely total lack of) consciousness of a virus, so the intuition that any actual computation by hand cannot support consciousness is preserved.
↑ comment by Ape in the coat · 2024-07-03T17:42:06.107Z · LW(p) · GW(p)
Consider some less mysterious algorithm, for instance, image recognition. Does a system implementing this algorithm recognize images, regardless of what substance the system is made from? Does the mathematical description of this algorithm itself recognize images, even when the algorithm is not executed?
↑ comment by notfnofn · 2024-07-03T23:14:04.894Z · LW(p) · GW(p)
I'm having a little trouble understanding how to extend this toy example. You meant for these questions to all be answered "yes", correct?
↑ comment by Ape in the coat · 2024-07-04T05:37:38.290Z · LW(p) · GW(p)
If that were true and math itself were enough to perform image recognition, why would implementation of this or any other algorithm be necessary? Why would engineers and programmers even exist in such a world?
↑ comment by notfnofn · 2024-07-04T11:53:45.199Z · LW(p) · GW(p)
It feels like this is a semantic issue. For instance, if you asked me whether Euclid's algorithm produces the gcd, I wouldn't think the answer is "no, not until it runs". Mathematically, we often view functions as the set of all pairs (input, output), even when the set of possible inputs is infinite. Can you clarify?
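To make the example concrete, here's a minimal Python sketch (just my illustration of the definition-vs-execution distinction, not anything from the thread): the function definition already fixes every (input, output) pair of Euclid's algorithm, and running it is a separate event.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

# The definition above already determines, e.g., that (48, 18) maps to 6;
# that pair is only *computed* when the function is actually called.
print(euclid_gcd(48, 18))  # 6
```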
↑ comment by Ape in the coat · 2024-07-04T12:28:22.924Z · LW(p) · GW(p)
Exactly! Well done! Indeed this is a semantic issue.
So, to resolve this confusion we simply need to explicitly distinguish between the potential to do something and actually doing something. An algorithm for image recognition has the potential to recognize images, but only when it is executed in matter does image recognition actually happen.
Likewise, we can say that some class of mathematical algorithms has the potential to be conscious. Which means that when executed these algorithms will actually be conscious.
Don't think there is a conclusion, just more puzzling situations the deeper you go:
"Scott Aaronson: To my mind, one of the central things that any account of consciousness needs to do, is to explain where your consciousness “is” in space, which physical objects are the locus of it. I mean, not just in ordinary life (where presumably we can all agree that your consciousness resides in your brain, and especially in your cerebral cortex—though which parts of your cerebral cortex?), but in all sorts of hypothetical situations that we can devise. What if we made a backup copy of all the information in your brain and ran it on a server somewhere? Knowing that, should you then expect there’s a 50% chance that “you’re” the backup copy? Or are you and your backup copy somehow tethered together as a single consciousness, no matter how far apart in space you might be? Or are you tethered together for a while, but then become untethered when your experiences start to diverge? Does it matter if your backup copy is actually “run,” and what counts as running it? Would a simulation on pen and paper (a huge amount of pen and paper, but no matter) suffice? What if the simulation of you was encrypted, and the only decryption key was stored in some other galaxy? Or, if the universe is infinite, should you assume that “your” consciousness is spread across infinitely many physical entities, namely all the brains physically indistinguishable from yours—including “Boltzmann brains” that arise purely by chance fluctuations?"
Link
The point here is that you could have a system that to an outside observer looked random or encrypted but with the key would be revealed to be a conscious creature. But what if the key was forever destroyed? Does the universe then somehow know to assign it consciousness?
You also need to decide whether replaying, as opposed to computing, apparently conscious behavior counts. If you compute a digital sim once, then save the states and replay it a second time, what does that mean? What about playing it backwards?
Boltzmann brains really mess things up further.
It seems to lead to the position that it's all just arbitrary and there is no objective truth, or to uncountable infinities of consciousnesses in acausal, timeless situations. Embracing this view doesn't lead anywhere useful from what I can see, and of course I don't want it to be the logical conclusion.
Unless I misunderstand the confusion, a useful line of thought which might resolve some things:
Instead of analyzing whether you yourself are conscious or not, analyze what is causally upstream of your mind thinking that you are conscious, or your body uttering the words "I am conscious".
Similarly you could analyze whether an upload would think similar thoughts, or say similar things. What about a human doing manual computations? What about a pure mathematical object?
A couple of examples of where to go from there:
- If they have the same behavior, perhaps they are the same?
- If they have the same behavior, but you still think there is a difference, try to find out why you think there is a difference, what is causally upstream of this thought/belief?
If you think consciousness is a real existent thing then it has to either be in:
The Software, or
The Hardware.
Assuming it to be in the hardware causes some weird problems. Like "which atom in my brain is the conscious one?". Or "No, Jimmy can't be conscious because he had a hip replacement, so his hardware now contains non-biological components".
Most people therefore assume it is in the software. Hence a simulation of you, even one done by thousands of apes working it out on paper, is imagined to be as conscious as you are. If it helps, that simulation (assuming it works) will say the same things you would. So, if you think it's not conscious, then you also think that everything you do and say, in some sense, does not depend on your being conscious, because a simulation can do and say the same without the consciousness.
There is an important technicality here. If I am simulating a projectile then the real projectile has mass, and my simulation software doesn't have any mass. But that doesn't imply that the projectile's mass does not matter. My simulation software has a parameter for the mass, which has some kind of mapping onto the real mass. A really detailed simulation of every neuron in your brain will have some kind of emergent combination of parameters that has some kind of mapping onto the real consciousness I assume you possess. If the consciousness is assumed to be software, then you have two programs that do the same thing. I don't think there is any super solid argument that forces you to accept that this thing that maps 1:1 to your consciousness is itself conscious. But there also isn't any super solid argument that forces you to accept that other people are conscious. So at some point I think it's best to shrug and say "if it quacks like consciousness".
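As an entirely hypothetical sketch of that technicality, in Python: the simulation only contains a number that represents the mass; nothing in the running program is itself massive.

```python
import math

def simulate_projectile(mass_kg: float, v0: float, angle_rad: float,
                        drag_coeff: float = 0.1, dt: float = 0.001) -> float:
    """Toy projectile simulation with linear air drag.

    `mass_kg` is just a float that maps onto the real projectile's mass;
    the program manipulating it has no mass of its own."""
    g = 9.81
    x, y = 0.0, 0.0
    vx = v0 * math.cos(angle_rad)
    vy = v0 * math.sin(angle_rad)
    while y >= 0.0:
        # Drag force is -drag_coeff * velocity; acceleration = force / mass,
        # so the mass parameter genuinely matters to the trajectory.
        ax = -drag_coeff * vx / mass_kg
        ay = -g - drag_coeff * vy / mass_kg
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
    return x  # approximate horizontal range in metres
```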
You don't have to be a substance dualist to believe a sim (something computationally or functionally isomorphic to a person) could be a zombie. It's a common error to think that, because dualism is one reason to reject something as being genuinely conscious, it is the only reason; there is also an argument based on physicalism.
There are three things that can defeat the multiple realisability of consciousness:
- Computationalism is true, and the physical basis makes a difference to the kinds of computations that are possible.
- Physicalism is true, but computationalism isn't. Having the right computation without the right physics only gives a semblance of consciousness.
- Dualism is true. Consciousness depends on something that is neither physics nor computation.
So there are two issues: what explains claims of consciousness? What explains absence of consciousness?
Computationalism is a theory of multiple realisability: the hardware on which the computation runs doesn't matter, so long as it is adequate to run the computation, so grey matter and silicon can run the same computations...and a lot of physical details are therefore irrelevant to consciousness.
Computationalism isn't a direct consequence of physicalism.
Physicalism has it that an exact atom-by-atom duplicate of a person will be a person and not a zombie, because there is no nonphysical element to go missing. That's the argument against p-zombies. But if it actually takes an atom-by-atom duplication to achieve human functioning, then the computational theory of mind will be false, because the CTM implies that the same algorithm running on different hardware will be sufficient. Physicalism doesn't imply computationalism, and arguments against p-zombies don't imply the non-existence of c-zombies: duplicates that are identical computationally, but not physically.
So it is possible, given physicalism, for qualia to depend on the real physics, the physical level of granularity, not on the higher level of granularity that is computation.
A computational duplicate of a believer in consciousness and qualia will continue to state that it has them, whether it does or not, because it's a computational duplicate, so it produces the same output in response to the same input. Likewise, a duplicate of a non-believer will deny them. (This point is clearer if you think in terms of duplicates of specific individuals with consistent views, like Dennett and Chalmers, rather than a generic human.)
@JuliaHP
Instead of analyzing whether you yourself are conscious or not, analyze what is causally upstream of your mind thinking that you are conscious, or your body uttering the words “I am conscious”.
Since an effect can have more than one cause, that isn't going to tell you much.
Sorta? Usually the idea is that the presence or absence of hardware determines the anthropic probability of being that conscious process; otherwise you would expect to be some random, arbitrary Boltzmann-brain-like consciousness.
Also this is an immediate corollary of the mathematical universe hypothesis, which says our universe is a mathematical structure.
I suspect there's a fair amount of handwaving involved, but generally as I understand it, the common belief is that "true" isomorphism does include everything necessary for subjective experience. There's a TON of weight put on the idea of "relevant pieces" of the brain/CNS, and IMO there's a serious disconnect among those who claim it's anywhere near feasible today.
You'll have to define "conscious" and "subjective experience" more operationally before anyone can even guess what parts of the computation are relevant to those things. It does seem very likely that everything is physics, and a sufficiently accurate simulation will have all the properties of the original. Note the circular definition of "sufficient" there.
Keep in mind that intuition is a bad guide for what a subjective experience even is. We have zero positive or negative experiences outside of the very limited singleton that is ourselves. There's nothing to have trained or evolved an intuition on.
↑ comment by notfnofn · 2024-07-03T17:32:25.684Z · LW(p) · GW(p)
I think any operational definition of subjective experience would vacuously be preserved by an isomorphism, by definition of an isomorphism. But if your mind ever gets uploaded, you see/remember this conversation, and you feel that you are self-aware in any capacity, that would be a falsification of the claim that mind uploads don't have subjective experience.
↑ comment by Dagon · 2024-07-03T18:08:29.615Z · LW(p) · GW(p)
Right, that vacuousness is what I was trying to point out. If there is no consciousness, then "the relevant pieces of the brain/CNS" have not been copied/simulated. It's a definition of "relevant pieces" and isomorphism, not an empirical question.
And really, until you can prove to yourself that I have (or do not have) subjective experience, and until you can answer the question of whether a goldfish or an LLM has subjective experiences, it's either meaningless or unknowable. And that's just whether a simulation HAS subjective experiences. Whether they're the SAME type of subjective experiences as a given biological source clump of matter is a whole new level of measurements you have to invent.
An isomorphism isn't enough. Stealing from Robert (Lastname?), you could make an isomorphism from a rock to your brain, but you likely wouldn't consider it "conscious". You have to factor out the Kolmogorov complexity of the isomorphism.
↑ comment by notfnofn · 2024-07-04T00:03:09.039Z · LW(p) · GW(p)
While I sort of get what you're going for (easy interpretability of the isomorphism?), I don't really see a way to make this precise.
↑ comment by James Camacho (james-camacho) · 2024-07-04T00:40:27.768Z · LW(p) · GW(p)
Consider all programs encoding isomorphisms from a rock to something else (e.g. my brain, or your brain). If the program takes n bits to encode, we add 2^-n times the other entity to the rock (times some partition number so all the weights add up to one). Since some programs are automorphisms, we repeatedly do this until convergence.
The rock will now possess a tiny bit of consciousness, or really any other property. However, where do we get the original "sources" of consciousness? If you're a solipsist, you might say, "I am the source of consciousness." I think a better definition is your discounted entropy.
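In case it helps, here is how I would sketch that weighting scheme in Python (my paraphrase, with made-up names; actually enumerating all isomorphism-encoding programs is uncomputable, so this is purely illustrative):

```python
def blended_property(isomorphism_programs) -> float:
    """Sketch of the proposed blending: each program that encodes an
    isomorphism from the rock to some other entity contributes a
    2**(-bits) share of that entity's property (e.g. "consciousness")
    to the rock, normalized so the weights sum to one.

    `isomorphism_programs` is a hypothetical iterable of
    (program_length_in_bits, property_value_of_target) pairs.
    """
    total, normalizer = 0.0, 0.0
    for bits, value in isomorphism_programs:
        weight = 2.0 ** (-bits)
        total += weight * value
        normalizer += weight
    return total / normalizer if normalizer else 0.0

# e.g. a million-bit program mapping the rock to a brain contributes
# almost nothing to the rock's "consciousness" -- but not exactly zero.
```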
↑ comment by notfnofn · 2024-07-05T11:16:33.671Z · LW(p) · GW(p)
Not trying to split hairs here, but here's what was throwing me off (and still is):
Let's say I have an isomorphism: sequential states of a brain ↔ molecules of a rock.
I now create an encoding procedure: physical things → txt file.
Now, via your procedure, I consider all programs which map txt files to txt files such that, under the encoding, they implement the isomorphism,
and obtain some discounted entropy. But isn't the encoding procedure doing a lot of work here? Is there a way to avoid infinite regress?
↑ comment by James Camacho (james-camacho) · 2024-07-05T23:46:51.899Z · LW(p) · GW(p)
Can't you choose an arbitrary encoding procedure? Choosing a different one only adds a constant number of bits. Also, my comment on discounted entropy was a little too flippant. What I mean is closer to entropy rate with a discount factor, like in soft-actor critic. Maximizing your ability to have options in the future requires a lot of "agency".
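If I'm reading "entropy rate with a discount factor" right, the quantity meant would be something like this (my guess at a formula; it isn't stated anywhere in this thread):

$$H_\gamma(\pi) = \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{E}\!\left[\mathcal{H}\!\left(\pi(\cdot \mid s_t)\right)\right], \qquad 0 < \gamma < 1,$$

i.e. the policy-entropy term that soft actor-critic adds, alongside reward, to its discounted objective.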
Maybe consciousness should be more than just agency, e.g. if a chess bot were trained to maximize entropy, games wouldn't be as strategic as if it were trying to get a high*-energy payoff. However, I'm not convinced energy even exists? Humans learn strategy because their genes are more likely to survive, thrive, and have choices in the future when they win. You could even say elementary particles are just the ones still around since the Big Bang.
*Note: The physicists should reverse the sign on energy. While they're at it, switch to inverse-temperature.
conscious in the way that we are conscious
Whether it's the same way is an ethical question, so you can decide however you want.
So there should be some sort of hardware-dependence to obtain subjective experience.
I certainly don't believe in subjective experience without any hardware, but no, there is no such dependence except for your preferences for hardware.
As for generally accepted conclusions... I think it's generally accepted that some preferences for hardware are useful in epistemic contexts, so you can be persuaded to say "rock is not conscious" for the same reason you say "rock is not calculator".
3 comments
comment by Gunnar_Zarncke · 2024-07-03T20:53:18.271Z · LW(p) · GW(p)
The phrases "feels absurd" and "there should be" are an indication that you are reasoning from intuition - and that is something that doesn't work very well when dealing with physics.
Also compare: Thou Art Physics [LW · GW]
comment by Zach Stein-Perlman · 2024-07-06T00:06:25.817Z · LW(p) · GW(p)
Chalmers defends a principle of organizational invariance.
comment by green_leaf · 2024-07-06T00:43:37.124Z · LW(p) · GW(p)
There is no dependency on any specific hardware.
What's conscious isn't the mathematical structure itself but its implementation.