Is Consciousness Simulated?
post by Daniele De Nuntiis (daniele-de-nuntiis) · 2024-04-10T09:02:02.228Z · LW · GW · 2 comments
Crossposted from Substack.
Intro
In this post I go through: “How did I come up with the question?”, “What does it mean exactly?”, “Is it worth asking/investigating, or is the answer ‘no’ in some obvious way?”, “What would it mean if it were true?”, and a conclusion.
Does consciousness need to be simulated? Is a recursive simulation (at least one level deep) necessary, or can you have consciousness in “base reality”?
Is it easier to create something that “fakes” consciousness in a simulation than in the real world?
Perception
Let’s take a step back to “Why would someone even think of something like that?”. I first came up with the question while thinking about perception. What is perception, and how do you go from physical interactions to feeling something?
Perception is “the ability to see, hear, or become aware of something through the senses”. What would perceiving something directly look like? Is it even possible?
Intuitively, no; maybe, but not really, and it’s also wrong by definition. Is there something I think I’m perceiving directly? Myself, maybe. Well, I can’t perceive my foot directly even though in a way I am it, so I’m not really sure.
What am I perceiving exactly? My senses send some input into the brain, which creates a world model, and that’s what I perceive. So, if I perceive myself, does that mean that I am part of the world model? Well, that’s just a model of me, it’s not really me, but yes.
Consciousness
The thing that bridges the gap between “I feel things” and the world model is consciousness. Consciousness is said to be the thing that perceives, and what is perceived is the content of consciousness.
Let’s try going from the bottom up: if I had to make an agent that perceives something, how would I do it?
First, you need sensors that take in data. A processor then takes the data and computes a response that gets output to the world; to compute that response, you might want to preprocess the raw data into a world model you can use to choose the next move. And the processor, being itself part of the world, will be part of the model too and will be “processing itself” inside the model.
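Roughly, as a toy sketch (all the names here, WorldModel, SelfModel and so on, are made up for illustration; this isn’t a claim about how brains actually implement any of this):

```python
class SelfModel:
    """The agent's model of itself: the processor showing up inside its own world model."""
    def __init__(self):
        self.percepts = []

class WorldModel:
    def __init__(self):
        self.state = {}
        self.self_model = SelfModel()  # the model contains a model of the modeler

    def update(self, raw_data):
        # Preprocess raw sensor data into model contents.
        self.state["percept"] = raw_data
        # The self-model gets updated too: "this is what I just perceived".
        self.self_model.percepts.append(raw_data)

def sense():
    # Stand-in for sensors reading the outside world.
    return {"light": 0.7, "sound": 0.2}

def choose_action(model):
    # Stand-in policy: the next move is computed from the world model, not the raw data.
    return "move_toward_light" if model.state["percept"]["light"] > 0.5 else "wait"

model = WorldModel()
raw = sense()                  # sensors take in data
model.update(raw)              # processor turns data into a world model
action = choose_action(model)  # response computed from the model, output to the world
```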
I have a model of reality, and there’s also a model of myself in it. So what about him? What about the guy inside my head who is a “simulation” of me? Is he conscious? How does perception work for him? It’s simulated perception: he doesn’t have to sense the world, and to the extent that he does, the brain can just simulate him doing so.
That’s pretty much where the question started to make sense for me: he can “skip” the sensory part and go straight to feeling/perceiving.
And of course, to the extent that he does it at all, he thinks he’s me; so what if he actually were?
Who’s conscious?
To recap, the brain (processor) takes sensory data as input, computes a model of reality that it can use to choose the next move, and adds itself to the model.
There’s a model of the brain inside the brain. And the model could perceive “directly“ what on the outside is perceived through the senses.
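The asymmetry I’m gesturing at, as another toy sketch under the same caveat (hypothetical names, nothing here is a claim about actual neuroscience): the outer loop has to decode raw signals into meaning, while the self-model can be handed the meaning directly.

```python
# Outer perception: raw physical signals have to be decoded into meaning.
def perceive_from_outside(raw_signal: bytes) -> str:
    # Stand-in for the whole expensive sensing/decoding pipeline.
    return raw_signal.decode("utf-8", errors="replace")

# Inner "perception": the brain already knows what the self-model is supposed to
# perceive, so it can skip the sensing step and hand over interpreted content directly.
def perceive_inside_model(interpreted_content: str) -> str:
    return interpreted_content

outer = perceive_from_outside(b"a red apple")  # world -> senses -> decoding -> meaning
inner = perceive_inside_model("a red apple")   # meaning injected directly
```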
So the main questions now are: “WHAT?”, “Does that make any sense at all?”, “Am I just moving the problem into the simulation/model without actually solving anything?”
The problem
The main concern I have at this point is: “Yes, I am actually moving the problem one step just to complicate things without solving anything.” I’m moving the question down a level just to make it hard enough that I can make some mistakes and assume whatever I want is “not obviously wrong”.
So, is simulating a conscious agent easier than computing it directly? And is there even a difference?
The difference, in my head, comes from the gap between taking something from the outside and making it feel like it’s been perceived directly, versus writing the code for something that is just supposed to feel like that. But at this point I can feel my intuitions starting to break down when I try to look too closely. I could maybe frame it with “where is the territory?”: in the first case the territory (the real) is outside the brain; in the second, the territory (in this case the model of the world) is already inside, which could make the whole perception thing a lot easier. How exactly that would work, though, I have no idea, and it might well turn out to be wrong.
(It feels intuitively right to me that the brain having consciousness is one thing, and the brain pretending to have it, in a way that feels “real” from the inside, is another. Again, it’s extremely fuzzy: how would “pretending” work exactly? And how would it differ from the real thing?)
Assumptions - (Interlude)
I think it might be worth it to point out a couple of assumptions/priors I have:
- We live in a lawful, fully reductionistic universe;
- Physics is the best religion (kinda).
Why Consciousness?
Why are we conscious at all? Why would nature make something like that?
I’d guess one of two reasons: consciousness is either a necessary part of, or a good heuristic for, achieving some goal.
A wild scenario where consciousness emerges while it isn’t that important could involve empathy. We want to be able to simulate other people to predict their behavior, and we use the same “software” that we use to simulate ourselves. So we might start out with some basic way to simulate ourselves and other people; then nature selects for more in-depth simulation of other people, and as a result we also get more consciousness for ourselves without consciousness actually being necessary. That would also kind of imply we might be conscious in each other’s brains, but…
(That’s probably almost impossible. It could be true as a matter of degree, “how conscious”, maybe our simulations of others have weak forms of qualia, rather than a binary “conscious or not”, but I’d expect neuroscience to have found something like that by now if it had a similar “strength”.)
Coherence
Why is our (read: my) conscious experience coherent? If I had to simulate the world, I would probably want to iterate on it to find the best next move, so why do we experience consciousness as mostly just “now”? Why aren’t we aware of more branches? Are there more branches?
It could be that there are two (or more) types of brain sims. The present is simulated in depth and generates qualia, while the iteration for planning uses simpler heuristics that don’t (or not as much).
I guess it could also be true that the branches are conscious too and we just aren’t aware of their awareness, but given that we’re aware of our dreams, I’d guess it’s more likely a matter of compute: simulating the now is more important than simulating the future in great detail.
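A toy version of that split (entirely hypothetical; the “detail” flag just stands in for however much compute each sim gets):

```python
import random

def simulate_present(state):
    # The "now": simulated in depth (a stand-in for whatever generates rich experience).
    return {"state": state, "detail": "high"}

def cheap_rollout(state, depth):
    # A future branch, explored with a crude heuristic and no rich detail.
    return state + sum(random.choice([-1, 1]) for _ in range(depth))

def plan(state, n_branches=5, depth=3):
    # Iterate over many shallow, cheap branches to pick the next move...
    candidates = [cheap_rollout(state, depth) for _ in range(n_branches)]
    best = max(candidates)
    # ...but only the present gets the expensive, detailed simulation.
    now = simulate_present(state)
    return best, now

best_branch, present = plan(state=0)
```

The point of the design being that the expensive call only ever runs on the current state, while everything that branches stays cheap.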
AIs
Assuming consciousness is simulated, will AIs be conscious? If consciousness does something, which it probably does, then unless we can specifically train a model not to be conscious, I would guess yes. (I’m not really knowledgeable about LLMs, but I’d assume the process of “finding” abstract reasoning functions in a neural net would eventually have “similar” results to the processes in brains, and as far as I can tell we can’t really decide how our models will work internally at all.)
It could also be the case that consciousness is a kind of heuristic: it’s cheaper than other non-conscious but strictly better algorithms, and AIs will be conscious just up to a point.
If we define consciousness as awareness of oneself, without any particular qualia, then I guess pretty much any intelligent agent has to have it: you want to be able to simulate at least your future, current, and maybe past self. As with “AGI”, I think a big part of the question “Will AIs be (or: are AIs) conscious?” is just definitional; I don’t think we have a precise enough definition to apply, and we’ll keep moving the goalposts till we’re way past them.
Conclusion
What do I think is actually true?
Writing the post I kept going back and forth between “Yes, of course you need a level of simulation” and “Dude, it’s obvious that the brain computes everything; is that the simulation you’re talking about?”, and I think in the end it probably is. The idea was that from the outside you have a causal arrow that goes from the world to the brain, while on the “inside” (what I considered a level deep) the causal link is from the “real” brain to the “sim” brain. (The “computing” in the first case comes from physics, which doesn’t know what is real; in the second it comes from the brain, which already knows what is supposed to be perceived.) I guess in reality whatever computation I think needs to be done a level deep can just be done on the first level, and the question starts to break down into “Yes, you need a simulation of the outside world” and “Duh, of course you need a simulation of the outside world”.
2 comments
comment by Dagon · 2024-04-10T18:23:13.760Z · LW(p) · GW(p)
I think this would benefit from a crisp definition of "consciousness" and of "simulation". THEN you can clarify your question to one of:
- MUST consciousness be simulated, because it can't exist in a base-level reality.
- DOES my consciousness happen to be simulated, even though it's feasible to exist in a base-level reality.
- CAN consciousness exist in a simulation, or is it only conscious in the base-level reality, with the simulation being some sort of interference layer.
I haven't seen good enough definitions of either thing for these questions to make sense. Most conceptions of 'simulation' are complete enough that it's impossible to determine from inside whether or not it's a simulation, so that would lead to "with that conception of simulation, with the consciousness that I'm experiencing, it is untestable and unimportant whether it's in a simulation".
comment by Daniele De Nuntiis (daniele-de-nuntiis) · 2024-04-11T10:33:57.118Z · LW(p) · GW(p)
Yeah I probably should have, thanks for the comment.
What I meant by simulation was whatever model the brain has of itself, and the question was whether that is necessary for consciousness to arise. (For consciousness I don't have a really precise definition, but I meant what my experience feels like: being me feels like something, while I'd assume a basic computer program or an object does not feel anything.) The distinction between that and base reality was about where the computing happens (in an abstract way): the brain is computing me and what I'm feeling, and the computed thing is what I mean by simulation. The way it might be testable is that it predicts that if an agent is not modeling itself internally, we can rule out that it's conscious.