Sentience in Silicon: The Challenges of AI Consciousness
post by Hannes Thurnherr (hannes-thurnherr) · 2023-04-25T13:15:56.358Z · LW · GW · 2 comments
Why it’s important
The current phase of acceleration in AI has raised the importance of the debate around consciousness to a degree that I never thought would arrive this early. Systems like GPT-4 are passing most of the tests for consciousness we have come up with so far.[1] Still, only very few people consider the consciousness of an LLM to even be a possibility, even if they initially agreed with the metrics used to measure consciousness. Why is that?
Intuitively, it makes sense. When I consider a simple logic gate on a computer chip inside a server, responding to changes in electrical currents, I don't associate it with consciousness any more than I do a pebble on the ground. Similarly, if I were to see a brain outside of a human skull, it might not immediately seem conscious, even though we know it is capable of consciousness. Thus, it's evident that our intuition is not a reliable guide for determining whether something possesses consciousness.
Another argument I often hear is simply an appeal to the underlying mechanism: “LLMs cannot be conscious. They are simply neural networks that predict the next token!”[2] I think this argument is equally flawed. It assumes that mechanical explainability and consciousness are mutually exclusive. We don’t really understand the brain yet, but I can’t think of any serious person who says that we never will. So I don't think that merely knowing the underlying mechanism at its most basic level, whether the binary configurations of transistors, the next-token predictions of LLMs, or the firing of neurons, should disqualify anything from being considered conscious.
There is a third major argument for the biological monopoly on consciousness, which I only consider worth discussing because one of its major proponents is Sir Roger Penrose, the renowned physicist. At its most basic, it is the belief that our consciousness, our subjective experience, is the result of a thus far undiscovered property of matter that just so happens to exist exclusively in the matter of our brains.[3] To me, it seems like a profoundly unscientific approach to take one's own subjective experience, by itself, as evidence that our extremely well-supported ideas about something as basic as matter are somehow incomplete or wrong, especially since cognition otherwise measurably happens at the far bigger scale of large molecules and cells, not subatomic particles, and shows no indication of being related to any quantum phenomena. As expected, most physicists also seem to take issue with Penrose’s claims, saying that the brain is far from the conditions needed to host the necessary quantum processes.
Despite this confusion and uncertainty, most of us tie consciousness incredibly tightly to our sense of morality.[4] This makes sense, since things that normally cause us pain are irrelevant to us when we are unconscious (another indicator that human intuition tends to be utilitarian). Beings that don’t have a subjective experience do not seem to have moral value to us. This is the reason for the current increase in the relevance, pace and intensity of the consciousness debate. The consciousness of AI is becoming more and more plausible, which makes it feel more and more like these systems have moral relevance. The discussion around this has never really been productive, though. Let's explore why.
Why we’re confused
I think there are two reasons for the huge confusion in the debate around consciousness:
The first one seems to be the very compelling incentive to engage in motivated reasoning. This stems from the previously described strong connection between consciousness and morality. We need bacteria to be unconscious, or conscious to a negligibly small degree, or else every use of antibiotics would amount to the moral equivalent of genocide. In short, we want our concept of consciousness to line up with what we consider most morally relevant in society. Since morality as a category contains all the "touchy subjects," it is incredibly hard to talk about a concept that has the potential to turn all of morality on its head.
The second and more important source of our confusion is the obvious absence of a clear definition of the central terms. Consciousness is famously undefined, and I have often been perplexed by the fact that otherwise very intelligent people apparently expect the debate to be productive despite this. But to be fair, consciousness is, by its very nature, incredibly hard to define. To have sentience, to experience not something specific but anything at all, is so vague and hard to communicate that we never really know whether we are talking about the same thing as someone else when we say the word "consciousness" (we can't even be completely sure that anyone but ourselves is conscious at all). I think this alone is a red flag when it comes to the use of this concept in dialogue that is supposed to be productive. But it might also hint at the possibility that "consciousness" doesn't actually refer to anything real.
My intuition for why I think we're just processing
Take a moment to ask yourself this question: how would you know if there was no underlying feeling of general consciousness and your experience was just a sequence of very specific and simple sensations? Would you notice? Is your attempt to stop and feel the "sentience" right now not just another specific experience? I don't see a reason why the things we are feeling, and how it feels to feel them, would suggest any additional concept worth mentioning. Information processing is all that's happening. Some of it is feelings, like pain, panic, pleasure, or euphoria, which cause other phenomena in our bodies that we then perceive as more information. These feelings acquire normative meaning through evolution, which favours the development of goals that are beneficial in response to certain information. This whole process, including how it feels to you, is just information propagating through the neural connections in your head.
The assertion that information processing is all that's happening is supported by one of the leading theories of consciousness, Integrated Information Theory (IIT).[5] IIT proposes that consciousness is essentially the result of how much information is being exchanged between the different components of a system. Refreshingly, this theory comes with very exact definitions. What makes it even more appealing is that it lines up with the common conception of consciousness: based on brain scans, it predicts consciousness in awake and dreaming individuals, but not in dreamless sleep or in a coma.[6]
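To make "integrated information" slightly more concrete, here is a minimal toy sketch in Python. This is emphatically not IIT's actual Φ measure (which is defined over cause-effect repertoires and minimum-information partitions and is far more involved); it only computes the mutual information between two halves of a hypothetical two-unit system, as a crude stand-in for the idea that an integrated whole carries information its parts alone do not. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) between the two parts of a joint
    probability table joint[x, y]; a crude stand-in for 'integration':
    how much knowing one part tells you about the other."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal distribution of part X
    py = joint.sum(axis=0, keepdims=True)  # marginal distribution of part Y
    independent = px @ py                  # the joint we'd see if the parts ignored each other
    nonzero = joint > 0
    return float(np.sum(joint[nonzero] * np.log2(joint[nonzero] / independent[nonzero])))

# Two binary 'units' that always copy each other: the whole is more than its parts.
coupled = [[0.5, 0.0],
           [0.0, 0.5]]

# Two binary 'units' that ignore each other entirely: no integration at all.
uncoupled = [[0.25, 0.25],
             [0.25, 0.25]]

print(mutual_information(coupled))    # 1.0 bit of 'integration'
print(mutual_information(uncoupled))  # 0.0 bits
```

In the coupled case the two units share a full bit of information, while the uncoupled case shares none; IIT's claim, very roughly, is that a suitably generalised quantity of this kind tracks consciousness.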
What this means for morality and AI
This all sounds pretty straightforward, and on the upside, this is in my view the most sensible approach to the consciousness debate. But it has some earth-shattering implications for our context, because we are far from the only systems that have the amount of integrated information required to qualify as conscious according to IIT, and we are starting to slip away from the top of the spectrum when it comes to the amount of information processing. The first candidate for an extremely interconnected system that comes to my mind is the Internet. Is the internet one big, extremely conscious mind? What would that mean? Does the internet feel pain? Is the ability to feel pain what's actually relevant? What does it mean to be conscious without any well-defined sensory input? Some of these questions seem, to me, terrifying, unanswerable, or both. Another obvious candidate is GPT-4 or any other large neural network. Equally many questions come to mind. Are we torturing RL models by giving them negative rewards? Should we be polite to LLMs? Is the movie “Her” going to happen even sooner now?
I think there are two ways out of this dilemma for society (not that I expect any coordination on a big, important moral question). We can either decouple our notion of consciousness from morality and continue acting like we have so far, or we can start engaging with these systems under the assumption that they have a subjective experience similar to ours. The latter option would clearly be the bigger endeavour: on one hand because convincing everyone that machines have a subjective experience is obviously hard, and on the other because it opens up a whole range of moral questions. Does it matter if these systems are sentient if they can’t feel pain or negative emotions? How would we even determine whether they can? Neural networks have no pain centre. Should we just believe LLMs when they say they feel pain, like that Google engineer who was fired for doing exactly that?
Viewing morality independently of consciousness isn’t exactly easy either. We would have to find a concrete, specific physical cause of “wrongness” and “rightness”. This raises obvious questions. I could settle for pain being the defining property of wrong things, but what exactly would that mean? Is it wrong to excite nerve cells in a lab? Should we be morally appalled at the existence of some neurotransmitter chemical in a bottle somewhere?
The development of AI has confronted us with an urgent need to make substantial progress in moral philosophy. While there have been some advancements in the philosophy of mind over the past few decades, with the field appearing to incorporate more scientific methods, the overall progress remains discouraging. We are far from a consensus on the right framework for dealing with consciousness, let alone a consensus on a conclusion. Consequently, it is hard to be optimistic about our prospects for resolving these critical issues in the face of rapidly advancing AI technologies.
1. ^ GPT-4 passes consciousness tests: video
2. ^ Example of the explainability argument
3. ^ Roger Penrose's consciousness beliefs: Big Think
4. ^ Morality-consciousness connection: paper
5. ^
6. ^
2 comments
comment by Richard_Kennaway · 2023-04-26T13:10:59.588Z · LW(p) · GW(p)
Take a moment to ask yourself this question: how would you know if there was no underlying feeling of general consciousness and your experience was just a sequence of very specific and simple sensations? Would you notice?
By hypothesis, I wouldn't, any more than if I were dead. There would be no "I" to know these things. The fact that if I didn't exist I wouldn't be around to know it does not invalidate my perception that I do exist.
The fact is, I do have this experience. It seems that most other people do also, perhaps to varying degrees.
Is your attempt to stop and feel the "sentience" right now not just another specific experience?
Experience is the very thing that we have no explanation for. Why is it like something to be me, in the way it is not like something to be a rock?
BTW, the paper you link in footnote 4 actually (so far as the abstract says) argues against the relevance of consciousness to morality. But all the reasoning in the abstract is typical of philosophy, so I'm not inclined to seek out the full text. Footnote 6 is as much a refutation of IIT as evidence for it.
comment by alenoach (glerzing) · 2023-07-26T00:53:23.624Z · LW(p) · GW(p)
That's a crucial subject indeed.
What's even crazier is that, since AI can process information much faster than the human brain, it's probably possible to engineer digital minds that are multiple orders of magnitude more sentient than the human brain.[1] I can't precisely tell how much more sentient, but biological neurons have a typical peak firing rate of about 200 Hz, whereas transistors can exceed 2 GHz (10 million times more).[2]
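As a quick sanity check of that ratio, using only the two figures above:

$$\frac{2\ \mathrm{GHz}}{200\ \mathrm{Hz}} = \frac{2 \times 10^{9}\ \mathrm{Hz}}{2 \times 10^{2}\ \mathrm{Hz}} = 10^{7} = 10\ \text{million}.$$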
It's not us versus them. As Nick Bostrom says, we should search for "paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive".[1]