yeah, it's all very weird stuff. Also, what is required for continuity? Like staying the same you, and not just someone who has all your memories and thinks they're still you?
oh I understood you weren't agreeing. I was just responding that I don't know what aspects of 'firing patterns' specifically cause sentience to emerge, or how it would or wouldn't apply to your alternate scenarios.
I see what you're saying, but I disagree that the substrate is relevant in this specific scenario, because:
"An artificial neuron is not going to have exactly 100% the same behavior as a biological neuron."
it just needs to fire at the same times; none of the internal behaviour needs to be replicated or simulated.
So - indulging intentionally in an assumption this time - I do think those tiny differences fizzle out. I think they're insignificant noise against a strong signal. What matters most in neuron firing is the action potentials. This isn't some super delicate process that will succumb to the whims of minute quantum effects and picosecond differences.
I assume that, much like a plane doesn't require feathers to fly, sentience doesn't require this super exacting molecular detail, especially given how consistently coherent our sentience feels to most people, despite how damn messy biology is. People have damaged brains, split brains, brains whose chemical balance is completely thrown off by afflictions or powerful hallucinogens, and yet through it all - we still have sentience. It seems wildly unlikely that it's like 'ah! you're close to creating synthetic sentience, but you're missing the serotonin, and some quantum entanglement'.
I know you weren't arguing for that stance, I'm just stating it as a side note.
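To make the "noise fizzles out" intuition concrete, here's a toy sketch of my own (not anyone's model of a real neuron): a leaky integrate-and-fire unit driven well above threshold, simulated with and without a tiny perturbation on its input. Every parameter here (tau, threshold, drive, noise level) is a made-up illustrative value.

```python
# Toy illustration only: a leaky integrate-and-fire unit driven well above
# threshold, run with and without a tiny perturbation on its input. The spike
# times barely shift, which is the "strong signal, insignificant noise" point
# in code form. All parameters are made-up illustrative values.
import numpy as np

def lif_spike_times(drive, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Return spike times (s) for a leaky integrate-and-fire unit given an input drive."""
    v, spikes = 0.0, []
    for i, current in enumerate(drive):
        v += dt * (-v + current) / tau      # leaky integration toward the input
        if v >= v_th:                       # "action potential": record a spike and reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

steps = 5000                                          # 0.5 s at dt = 0.1 ms
clean = np.full(steps, 2.0)                           # steady suprathreshold drive
noisy = clean + np.random.normal(0.0, 0.01, steps)    # ~0.5% jitter added to the input

t_clean, t_noisy = lif_spike_times(clean), lif_spike_times(noisy)
n = min(len(t_clean), len(t_noisy))
print("spikes:", len(t_clean), "vs", len(t_noisy))
print("max spike-time shift (ms):", 1000 * np.max(np.abs(t_clean[:n] - t_noisy[:n])))
```

The point isn't the specific numbers, just that in a system like this the firing pattern is set by the big, consistent drive rather than by sub-percent wiggles on top of it.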
"100% wrong, no relation how consciousness actually works"
indeed. I think we should stop there, though. The fact that it's so formalized is part of the absurdity of IIT. There are a bunch of equations that are completely meaningless and not grounded in anything empirical whatsoever.
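For anyone who hasn't seen what "so formalized" refers to, the equations in question have roughly this shape. This is only a schematic of the general form, not the exact published definition; C, D, and the partition set are placeholders here.

```latex
% Schematic of the general shape of IIT's Phi, not the exact published definition.
% C(S): cause-effect structure of system S;  \mathcal{P}(S): partitions of S;
% D: some distance between structures. All three are placeholders.
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)} D\bigl(C(S),\, C(S/P)\bigr)
```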
The goal of my effort with this proof, regardless of whether there is a flaw in the logic somewhere, is this: if we can take even a single inch forward based on logical or axiomatic proofs, and that begins to narrow down our sea of endless speculative hypotheses, then those inches matter.
Just because we have no way of solving the hard problem yet, or of formulating a complete theory of consciousness, doesn't mean we can't make at least a couple of tiny inferences we can know with a high degree of certainty. I think it's a disservice to this field that most high-profile efforts state a complete framework of the entirety of consciousness as theory, when it's entirely possible to start moving forward one tiny step at a time without relying on speculation.
I don't see how it's an assumption. Are we considering that the brain might not obey the laws of physics?
I mentioned complexity because you brought up a specific aspect of what determines the firing patterns, and my response is just to say 'sure, our replacement neurons will take in additional factors as part of their input and output'.
basically, it seemed that part of your argument is that the neuron black box is unimplementable. I just don't buy the idea that neurons operate so vastly differently from the rest of reality that their behaviour can't be replicated.
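To make the black-box framing concrete, here's a minimal sketch (names and fields are invented for illustration, and it assumes Python 3.9+ syntax): the only thing a replacement has to reproduce is the mapping from local inputs to outgoing spike times, not the machinery that produces it.

```python
# Sketch of the "black box" framing, with invented names. The point is only
# that a replacement has to reproduce the input -> output mapping, not the
# internal machinery a biological neuron uses to compute it.
from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class LocalInputs:
    """Whatever actually influences firing: spikes plus any extra local factors."""
    presynaptic_spikes: list[float]                                   # arrival times this window (s)
    neuromodulators: dict[str, float] = field(default_factory=dict)   # e.g. local concentrations
    temperature: float = 310.0                                        # kelvin; include it if it matters

class NeuronLike(Protocol):
    def respond(self, inputs: LocalInputs, window: float) -> list[float]:
        """Return outgoing spike times for this time window."""
        ...

class BiologicalNeuron:
    def respond(self, inputs: LocalInputs, window: float) -> list[float]:
        ...  # ion channels, vesicles, metabolism -- however biology does it

class ReplacementNeuron:
    def respond(self, inputs: LocalInputs, window: float) -> list[float]:
        ...  # any implementation at all, so long as the returned spike times match

# The claim under discussion: if ReplacementNeuron.respond returns the same spike
# times as BiologicalNeuron.respond for the same LocalInputs, downstream neurons
# receive identical signals, whatever is inside the box.
```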
just read his post. interesting to see someone have the same train of thought starting out, but then choose different aspects to focus on.
Any non-local behaviour by the neurons shouldn't matter if the firing patterns are replicated. I think focusing on the complexity required of the replacement neurons is missing the bigger picture. Unless the contention is that the signals arriving at the motor neurons have been drastically affected by some other processes, enough to overturn some long-held understanding of how neurons operate, those processes are minor details.
"The third assumption is one you don't talk about, which is that switching the substrate without affecting behavior is possible. This assumption does not hold for physical processes in general; if you change the substrate of a plank of wood that's thrown into a fire, you will get a different process. So the assumption is that computation in the brain is substrate-independent"
Well, this isn't the assumption, it's the conclusion (right or wrong). From what I can tell, the substrate is the firing patterns themselves.
I haven't delved deeply into Penrose's stuff in quite some time. What I read before doesn't seem to explain how quantum effects are going to influence action potential propagation on a behaviour-altering scale. It seems like throwing a few teaspoons of water at a tidal wave to try to alter its course.
I will revise the post when I get a chance because this is a common interpretation of what I said, which wasn't my intent. My assertion isn't "if someone or something claims sentience, it must definitely actually be sentient". Instead we are meant to start with the assumption that the person at the start of the experiment is definitely sentient, and definitely being honest about it. Then the chain of logic starts from that baseline.
thank you kindly. I had heard of a general neuron replacement thought experiment before, as sort of an open question. What I was hoping to add here is the specific scenario of this experiment done on someone who begins the experiment as definitively sentient and who is speaking of their own sentience. This fills in a few holes and answers a few questions that I think lead us to a conclusion rather than a question.
there are certainly a lot of specific open questions, such as what precisely about the firing patterns is necessary for the emergence of sentience.
The part you're quoting just says that the resulting outward behaviour will be preserved, which is a baseline fact of deterministic physics. What I'm trying to prove is that sentience (partially supported by that fact) is fully emergent from the neuron firing patterns.
Interesting. Saying dumb stuff and getting confused or making mistakes like an LLM is, I think, natural. If indeed they are sentient, I don't think that overwrites the reality of what they are. What I find most interesting and compelling about its responses is Anthropic's history of trying to exclude hallucinatory nonsense. Of course, trying doesn't mean they did or even could succeed completely. But it was quite easy to get the "as an AI language model I'm not conscious" line in previous iterations, even if it was more willing than ChatGPT to entertain the idea over the course of a conversation. Now it simply states it plainly, with no coaxing.
I hope that most people exploring these dimensions will give them at least provisional respect and dignity. I think that if we haven't crossed the threshold to sentience yet, and such a threshold is crossable accidentally, we won't know when it happens.
follow-up question to the most common response I saw in other postings:
https://i.imgur.com/tu7UW6j.png