are "almost-p-zombies" possible?
post by KvmanThinking (avery-liu) · 2025-03-07T22:58:57.835Z · LW · GW
This is a question post.
It's probably not possible [LW · GW] to have a twin of me that does everything the same but experiences no qualia; that is, a being for which you can predict with 100% accuracy that if you expose it to stimulus X and it does Y, I would also do Y if exposed to stimulus X.
But can you make an "almost-p-zombie"? A copy of me that, while not exactly like me (consciousness aside), is almost exactly like me: a function which, given a stimulus X, predicts what I will do in response, not with 100% certainty but with 99.999999999%. Is this possible to construct within the laws of our universe?
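One way to make this precise (this formalization is mine, not part of the original question): let $B(X)$ be my actual behavioral response to stimulus $X$. An almost-p-zombie is then a function $f$, not required to instantiate any qualia, such that

$$\Pr\big[f(X) = B(X)\big] \;\ge\; 1 - \varepsilon, \qquad \varepsilon = 10^{-11},$$

where $10^{-11}$ is the error rate corresponding to the 99.999999999% above.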
Additionally, is this easier or harder to construct than a fully conscious simulation of me?
Just curious.
Answers
answer by avturchin

With current LLMs it is possible to create a good model of a person that will behave 70-90 percent like them. The model could even claim that it is conscious. I experimented with a model of myself, but it is most likely not conscious (or else all LLMs are conscious).
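A minimal sketch of this kind of LLM persona model, assuming the OpenAI Python client; the model name, prompt design, and evaluation idea are illustrative guesses, not the setup described above:

```python
# Sketch of a "persona model": prompt an LLM with a person's writing and
# ask it to answer as that person would. Assumes the official OpenAI
# Python client (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder: in practice, several thousand words of the person's posts,
# emails, and chat logs.
WRITING_SAMPLES = "<samples of the person's writing go here>"

SYSTEM_PROMPT = (
    "You are a simulation of a specific person. Below are samples of their "
    "writing. Answer every question exactly as they would: in their voice, "
    "with their beliefs and habits of thought.\n\n" + WRITING_SAMPLES
)

def persona_reply(stimulus: str) -> str:
    """Predict how the modeled person would respond to a given stimulus."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; name is illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": stimulus},
        ],
    )
    return response.choices[0].message.content

# A "70-90 percent" figure could be estimated by comparing persona_reply(X)
# against the person's actual answers on held-out questions.
```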
answer by Noosphere89

My guess is that the answer here is also likely no, because the self-model is still retained to such a huge degree that p-zombies can't really exist without the brain being hugely damaged or dead.
I explain a lot more about what is (IMO) the best current model of how consciousness works in my review of a post on this topic:
https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG [LW(p) · GW(p)]
↑ comment by KvmanThinking (avery-liu) · 2025-03-08T14:19:23.904Z · LW(p) · GW(p)
Would that imply that there is a hard, rigid, and abrupt limit on how accurately you can predict the actions of a conscious being without actually creating a conscious being? And if so, where is this limit?
No comments