Consciousness, Intelligence, and AI – Some Quick Notes [call it a mini-ramble]

post by Bill Benzon (bill-benzon) · 2024-12-12T15:04:57.398Z

Contents

  An extra stage?
  Reorganization
  Turing and non-Turing computing
  Artificial neural nets
  Another item – What Chatbots can’t seem to do
  And Super-intelligence?
  During the Day [addendum]

Cross-posted from New Savanna.

Epistemic status? Are you kidding me? I just made this up. How would I know its epistemic status? Sheesh!

The subject of consciousness keeps turning up in current discussions of AI and LLMs. Can AIs be conscious? Are current AIs conscious? Maybe a little?

What do consciousness and intelligence have to do with one another? I see no reason to think that dogs, rats, and cats are not conscious, though I have no idea how far down the phylogenetic chain consciousness exists. No one would argue that dogs, rats, and cats are as intelligent as we are. Intelligence is something different from consciousness, no?

And yet the issue gets raised. One line of (implicit) reasoning seems to go like this: It converses with me in an intelligent way, things that converse with me in an intelligent way (or even at all!) are conscious, therefore it must be conscious. And then there’s the fact that you can ask a chatbot about itself and it will say something, though just what it says depends on what it has been RLHFed to say. But, still, these LLMs have been “trained” on tons of text using the word “consciousness” and all its cognates, so sure, it can use the word in human-seeming ways. That doesn’t make it conscious.

An extra stage?

Fact is, philosophers often argue about consciousness as though it were a further or extra stage in...in what? Human information processing? Thinking? Whatever. It adds something extra, something beyond what was before. Let’s say it adds an extra bit of intelligence. Yeah, let’s say that.

So, a conscious being is more intelligent than its non-conscious simulacrum, to which it is otherwise identical. But then we have those philosophical zombies and they, presumably, are as intelligent as non-zombies.

Reorganization

This strikes me as being wrong-headed. I take my conception of consciousness from Wm Powers (Behavior: The Control of Perception). Consciousness enables reorganization. I explain this in a post from 2022: Consciousness, reorganization and polyviscosity, Part 1: The link to Powers. (It’s complicated, so I’m not going to try to summarize it here.) In that conception, consciousness really isn’t a further step in reasoning, though it may facilitate “moving around” in one’s mind (in particular, think default mode). Reorganization isn’t a further step in reasoning, though it may seem like it. (Should say more about this, but later.)

Turing and non-Turing computing

And then we have the difference between Turing computation and, shall we say, neural computation. Turing computation requires a strict separation of processing from memory (e.g. see the article linked in this recent post). That’s not how the brain works. As I recall, von Neumann wondered about that in his little book on the brain. If neurons are memory, as they surely are, then where’s the processing, he wondered. In a Turing device, learning means adding new blocks of memory. In a neural device, where memory and computation are not separate, learning means, well, it means reorganization, to use Powers’s term. Things have to change all over the place, more in some places than others, perhaps a lot in some few places and very damn little in most places.
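
Purely as an illustration of that contrast (not as a model of the brain, and with names and numbers I’ve made up for the purpose), here’s a minimal Python sketch: a lookup-table learner, Turing-style, for which learning is just adding a new entry to a memory that sits apart from the processing, and a one-neuron network for which learning means nudging the weights, which are at once the memory and the processing.

```python
# Toy contrast, not a model of anything biological. In the "Turing-style"
# learner, learning appends to a memory that is separate from the processing.
# In the tiny network, learning adjusts the weights, and the weights are
# simultaneously the memory and the processing.

class TableLearner:
    """Memory (a dict) is separate from processing (a lookup)."""
    def __init__(self):
        self.memory = {}

    def learn(self, x, y):
        self.memory[x] = y          # add a new block of memory; nothing else changes

    def predict(self, x):
        return self.memory.get(x)


class TinyNet:
    """One linear neuron: y = w*x + b. The weights ARE the memory."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0

    def predict(self, x):
        return self.w * x + self.b

    def learn(self, x, y, lr=0.1):
        err = self.predict(x) - y
        self.w -= lr * err * x      # every weight shifts a little...
        self.b -= lr * err          # ...so whatever was "stored" before shifts too


table, net = TableLearner(), TinyNet()
table.learn(2, 4)
net.learn(2, 4)
print(table.predict(2))             # exactly 4, retrieved from memory
print(round(net.predict(2), 2))     # only partway toward 4 after one adjustment
```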

Well, consciousness is what allows that to happen. Consciousness mediates reorganization. Consciousness modulates and “distributes” change throughout the system. A Turing system doesn’t need consciousness in order to learn, to change. A network (or neural) system does. Consciousness is the mechanism that solves learning for neural systems. I further hypothesize that the glial cells are crucial here.

I’m not sure of how to formulate that, but that seems to be my key thought for the morning. It’s why I got out of bed at 5:41 AM.

Artificial neural nets

So, what of artificial neural nets and consciousness? Well, the machines themselves are Turing-type machines. No need for consciousness. Does that also imply no possibility for consciousness? Skip that for now. The neural net, however, is not a Turing-type machine. It’s, well, it is a network and, as such, does not distinguish between memory and processing. But it’s a network that’s stored in the memory of a Turing-type machine. And we don’t yet know how to reorganize such a network, that is, to add new information to it. But we may well solve that problem one of these days. I don’t see why not.
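
To make that “reorganization” point a bit more concrete, here’s another small sketch, again with made-up numbers: a toy network (just two weights held in ordinary variables, i.e., in the memory of a conventional machine) is fit to a handful of old facts, then a single new, conflicting fact is pushed in by further weight adjustment. The new information isn’t appended anywhere; it’s absorbed by changing the same weights that encode the old facts, so the old answers move too.

```python
# Sketch only: "adding new information" to a network held in ordinary memory
# is not a matter of appending a record. The new fact is absorbed by adjusting
# the same weights that encode the old facts, so the old answers shift.

def predict(w, b, x):
    return w * x + b

def sgd_step(w, b, x, y, lr=0.05):
    err = predict(w, b, x) - y
    return w - lr * err * x, b - lr * err

# "Old knowledge": fit y = 2x on a few points.
w, b = 0.0, 0.0
for _ in range(200):
    for x, y in [(1, 2), (2, 4), (3, 6)]:
        w, b = sgd_step(w, b, x, y)
print("x=3 before the new fact:", round(predict(w, b, 3), 2))   # close to 6

# "New information": a single conflicting fact, (x=2, y=10), hammered in.
for _ in range(200):
    w, b = sgd_step(w, b, 2, 10)
print("x=3 after the new fact: ", round(predict(w, b, 3), 2))   # no longer close to 6
```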

Would that make the artificial neural net conscious? Or will it only be simulating consciousness? Remember, it’s running on a non-conscious Turing-type machine. I think it’s only a simulation, not the real thing.

Another item – What Chatbots can’t seem to do

For examples of what I have in mind, see these posts:

If that’s the case – and here I’m taking a leap – then I do believe it’s consciousness that connects us to the world. Not a surprising thought on the face of it, but in THIS context, some explaining is required. LATER.

And Super-intelligence?

Well, if super-intelligence is a species of intelligence, then it doesn’t imply consciousness. And if consciousness is our connection to the world, then a superintelligence is a zombie (yeah, I know, another leap). I mean, these SOTA LLMs already “know,” in some meaningful sense, more than any individual human. That’s some kind of superintelligence. Whatever kind of superintelligence that is, it doesn’t bother us.

Nor does the super-intelligence of AlphaZero bother us. Super-intelligence? But that’s narrow intelligence. Does the fact that it’s narrow mean it can’t be super? Does the emergence of the concept of AGI, its differentiation from AI, mark the breakdown of the classical concept of intelligence and its pursuit by artificial means?

What about the AIs that do protein folding and now 15-day weather prediction? We can’t do either of those things. Provisionally, why not?

At the moment I’m thinking: Super-intelligence? Bring it on. 

Let them be super. 

Consciousness is ours, but it also belongs to dogs, cats, rats, and other animals.

During the Day [addendum]

We are awake, mostly. When we’re awake, we’re conscious, mostly. Once we’re out of infancy, and perhaps toddlerhood, we are conscious most of the time. As I said, it is consciousness that connects us to the world.

Note, however, that consciousness is very mobile. Our attention can flit from one thing to another, quite freely. [Hence the literary technique, stream of consciousness.] I suppose we could analogize it to time-sharing in computers, but...

Consciousness HAS to be flexible and mobile. We live in an unpredictable world. We may get around by predicting the next thing or three, but sometimes the world intervenes, drastically. We’ve got to be able to disconnect from the prediction and attend to the real.

[I recall an email exchange with Walter Freeman about this. I asked him whether or not a state of global coherence was necessary for us to be able to make a quick change from one thing to another. He said yes.]

More later.
