Meaning Machines
post by appromoximate (antediluvian) · 2025-03-01T19:16:08.539Z
Introduction
This post is an attempt to build up a very sparse ontology of consciousness (the state space of consciousness). The main goal is to suggest that a feature commonly considered to be constitutive of conscious experience--that of intentionality or aboutness--is actually a kind of emergent illusion, and not an essential feature of raw experience. Making this suggestion can appear self-defeating, and so the post attempts to carefully negotiate this paradox without running straight into absurdity.
We'll start with a phenomenological exercise in order to build up the central idea intuitively. Then, we present the "meaning machine", a construct which attempts to formalize this core idea. The remaining sections work through various implications.
For me personally, the meaning machine is an attempt to answer questions about the primacy of meaning or narrative within human experience: Is narrative a part of experience which can be transcended without losing anything important, even in principle?
An exercise: The phenomenology of meaning
Read the following made-up word and consider it in your mind for a moment: Snaledrom
What is your mental experience of contemplating this word? If you are like me, your experience is contained within the following:
- The sound of the word being pronounced
- A visualization of the word's letters
- Some possible intrusions of other words or visualizations

My attention also moves around to different parts of the word.
Everything that I experience seems to be a perception related to one of my senses: sound, sight, etc. (I'm making a classification here. What does that mean? We'll come back around to it).
I would also claim that this goes for my experience of reading the following word: Broccoli.
Do you agree? Consider the following phrases:
Snaledrom comfult praflebi
Broccoli is nutritious
What is different in the raw experience of reading these two phrases?
Here is an interesting way we can modify the experiment. Consider the following strings of letters:
Sanjesmontulaprumandum
Millionsandbillionsofcats
kdjduqodjdlsuwvuqydqc
What is different for you among the experiences of reading/parsing these lines? The main difference that I notice is in the way that my visual attention naturally selects out certain chunks and moves from chunk to chunk. The second line chunks very distinctly as "Millions and billions of cats." To a lesser degree, the first line chunks as "Sanjes montula prumandum." The last line hardly chunks at all.
But perhaps more interesting than the differences among these experiences is the fundamental unity among them: Each experience can be described as a modulation of attention around various perceptions of sound, imagery, and other sense perceptions.
In my own life, the experience of studying Japanese via some extended immersion programs is relevant here. I've been in rooms of people talking while I was at every point on the spectrum, from understanding almost nothing of what was said to understanding most of it.
When speaking or thinking casually about how to describe what it's like to understand a language, it's tempting to use language like, "I gradually became aware of the meaning of the words which were being said"--as if some new type of element, different from the raw sounds, is entering our awareness.
But when I closely inspect these experiences, I find that the variance along this spectrum spanning "no meaning" to "some meaning" to "most meaning" all falls within the same rough dimensions, consonant with what we've outlined above:
- I can more or less easily recall what was said (the sounds)
- My attention passes over the stream of sounds more or less in chunks corresponding to syllables vs chunks corresponding to words
- Certain chunks stand out in my attention and might evoke images, sounds, feelings
- I find it more or less easy to keep the sounds in my attention
- At the end, I can answer questions about what was said or I cannot
- At the end, I can prompt myself for thoughts about what was said or I cannot, thoughts which also show up as sounds in my head
In short, the presence of meaning and the absence of meaning differentiate my experience mostly in the ways that my attention flows within the same essential space, which is constituted fully by sense experiences of sounds, sights, touches, and so on.
The implication of this phenomenological reckoning is that meaning does not reside in our consciousness. Sounds and images live there. Pleasure and pain live there. But if we go looking for something like "the meaning of this sentence" in our awareness, we will not find it; we'll only evoke a new set of mental states formed of the same raw and narrow sensory ontology.
The final part of this exercise is to turn it reflexively upon itself. When I say that my experiences are formed of pure sense data, what does that mean? And have I not constructed an absurdity by rendering this question impossible to ask?
My answer to this is that this question looks absurd only because we assume that the meaning of a sentence must be somehow "directly consciously accessible" in order for it to be functional. I don't think that this is the case, and in the next section I'll present the simple model of the "Meaning Machine" as a way of making this point.
Meaning Machine
We will provide a model which attempts to describe something about a possible manner of consciousness. The point of our approach is to allow ourselves the privilege of looking at a conscious being (what we will call a meaning machine) from an omniscient perspective---where we can observe both the universe in which the machine exists and the conscious experiences of the machine.
This will function as a sort of reversal of the Husserlian epoché, where we postpone phenomenologizing for the moment, and focus on constructing and understanding some object (whatever that means in phenomenological terms). Only later will we try to consider the relationship between this meaning machine and ourselves.
Our meaning machine lives in a universe with state space $U$ and has conscious states which fall into the space $C$. That is, for every possible "way that it is to be" our machine, there is a corresponding point or item $c \in C$. We will assume that the way that it is to be our machine derives from the state of the universe as some kind of observable function, i.e. $c = \phi(u)$ for some $\phi: U \to C$.
Now, we will be interested in the topology of $C$, which is a formal way of talking about what it means for different points in $C$ to be "nearby" or not. Intuitively, it makes a lot of sense to think about certain conscious states being close together and others far apart. On the other hand, formalizing this in careful phenomenological terms seems to be a virtually impossible thing to do. So instead, we will rely on our meaning machine formalism for assistance.
We will take the universe $U$ to be a well-behaved space, such as any of the manifold structures which feature in physics, which has its own well-defined topology. We can use this topology together with $\phi$ to define a topology on $C$ (the pushforward or quotient topology).
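To spell out what that construction says (this is just the standard definition of the pushforward/final topology, written in the notation above): a set of conscious states counts as open exactly when the set of universe states that map into it is open,

$$\tau_C = \{\, O \subseteq C \;:\; \phi^{-1}(O) \in \tau_U \,\}$$

where $\tau_U$ is the topology on $U$. This is the finest topology on $C$ for which $\phi$ remains continuous.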
We can then use this topology to ask questions such as "what is the dimensionality of $C$?", which is the same as asking "what is the dimensionality of the Euclidean space which is (perhaps locally) isomorphic to $C$?"
Let's construct such a space for our meaning machine. We'll let $V$ denote the visual field, where $d$ denotes the number of spatial dimensions of the visual field. For each spatial dimension, there are three dimensions for the color, and one dimension for the degree to which this dimension features in awareness. We can do something similar for $A$, the range of audible frequencies, $T$, the field of tactile experience, $S$, smell (and taste), and $P$, proprioception. We'll also include $R$ as a space of sensations associated with pleasure/pain, reward/punishment.

Then, we would say that $C$ is isomorphic to the product space $V \times A \times T \times S \times P \times R$.
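As a very rough illustration of what this product structure might look like in code (all shapes and field names here are invented for the sketch, and are finite-dimensional stand-ins for the spaces above):

```python
import numpy as np
from dataclasses import dataclass, field

# A toy, finite-dimensional stand-in for C = V x A x T x S x P x R.
# The array shapes are arbitrary illustrative choices, not claims about phenomenology.
@dataclass
class ConsciousState:
    vision: np.ndarray = field(default_factory=lambda: np.zeros((64, 64, 4)))  # 3 color dims + 1 salience dim per location
    audio: np.ndarray = field(default_factory=lambda: np.zeros(128))           # intensity per audible frequency band
    touch: np.ndarray = field(default_factory=lambda: np.zeros(32))            # tactile field
    smell_taste: np.ndarray = field(default_factory=lambda: np.zeros(16))
    proprioception: np.ndarray = field(default_factory=lambda: np.zeros(16))
    valence: np.ndarray = field(default_factory=lambda: np.zeros(2))           # pleasure/pain, reward/punishment

    def as_vector(self) -> np.ndarray:
        """Flatten the product space into a single point of a Euclidean space."""
        return np.concatenate([x.ravel() for x in (
            self.vision, self.audio, self.touch,
            self.smell_taste, self.proprioception, self.valence)])
```

The only point of the sketch is that every coordinate is a sense-like quantity; there is no slot anywhere for "the meaning of a sentence."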
Now, so far, all we have done is to try to carefully say what it means, in the context of this model, that our meaning machine's conscious experience is limited to sense data. With the state space so narrowly limited, the next question to ask is what meaning means in this context and how our meaning machine experiences it.
To simplify matters, let's suppose that our meaning machine has turned off its external sensors and is engaging in a stream of thought. Let $M$ denote the state space of the machine itself. When the machine's sensors are turned off, the machine state $m \in M$ evolves according to some equation:

$$\dot{m} = f(m)$$

We can suppose that the conscious state of the machine is dependent only on $m$, and let $\psi$ denote the restriction of $\phi$ to $M$, such that $c = \psi(m)$.

Now, we might speculate a bit that the conscious content $c$ plays a sort of special role in $f$. For instance, perhaps $m = (u_1, \dots, u_n)$, and then

$$f(m) = \big(f_1(\psi(m), u_1), \dots, f_n(\psi(m), u_n)\big)$$

That is, $f$ is composed of some collection of subprocesses, all of which have access to the conscious content of the machine's state, but each of which has unique access to local aspects of the unconscious state.

In this scenario, when the machine is following some train of thought, obviously the conscious content, $c$, can be meaningful through the lens of $f$, but this of course does not mean that this meaning must be present as some extra dimension of $C$.
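Here is a minimal numerical sketch of this "sensors off" loop, under the invented assumptions that the unconscious state factors into a few local pieces and that $\psi$ and each subprocess are arbitrary fixed nonlinear maps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; purely illustrative, not part of the model above.
N_SUB = 4          # number of subprocesses f_1..f_n
LOCAL_DIM = 8      # dimension of each local unconscious state u_i
C_DIM = 6          # dimension of the conscious content c = psi(m)

# Fixed (arbitrary) weights defining psi and each subprocess.
W_read = rng.standard_normal((C_DIM, LOCAL_DIM)) * 0.3
A = [rng.standard_normal((LOCAL_DIM, LOCAL_DIM)) * 0.3 for _ in range(N_SUB)]
B = [rng.standard_normal((LOCAL_DIM, C_DIM)) * 0.3 for _ in range(N_SUB)]

def psi(m: list) -> np.ndarray:
    """Conscious content as a readout of the global machine state m = (u_1, ..., u_n)."""
    return np.tanh(sum(W_read @ u for u in m))

def step(m: list) -> list:
    """One step of f: every subprocess sees the shared conscious content c,
    but only its own local unconscious state u_i."""
    c = psi(m)
    return [np.tanh(A[i] @ u_i + B[i] @ c) for i, u_i in enumerate(m)]

# A short "stream of thought" with the external sensors turned off:
m = [rng.standard_normal(LOCAL_DIM) for _ in range(N_SUB)]
for t in range(5):
    print(t, np.round(psi(m), 2))
    m = step(m)
```

Whatever "meaning" the trajectory carries lives in the structure of $f$ and the unconscious pieces $u_i$; the printed conscious content is just a short vector of sense-like coordinates.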
Test Drive
Paucity of qualia
Now that we have this model, let's take it for a spin. The main implication of my meaning machine model which I'd like to work through with concrete examples is that many of the things which we take as contents of consciousness are not contained in consciousness at all.
If we were to enumerate the qualia that the meaning machine is capable of experiencing, we'd probably end up with a list corresponding exactly to the factorization of $C$. That is, the meaning machine can experience qualia of pleasure, pain, sight, sound, taste, touch, etc.
On the other hand, I've recently been paying special attention to the way that people use the word "Qualia" in online discussions. What I've noticed is that the term is used in reference to a vast range of states, moods, feelings, modes, etc.
- The qualia of dissonance
- The qualia of tension
- The qualia of depression
- The qualia of cooperation
- The qualia of agency
- The qualia of expectation
There’s an evident tension here. There seem to be two possibilities:
- Without updating the meaning machine, it is incapable of experiencing dissonance, tension, cooperation, agency, and so on.
- The meaning machine can experience these things, and furthermore it is somewhat misleading to refer to them as qualia.
If we wish to entertain the second case, we need to ask: what is something like depression if it is not a type of qualia?
Let me prime this discussion by describing my own experience of depression: When I'm depressed, I tend to think negative thoughts; I tend to frown; I tend to feel little excitement or motivation to do things (when I imagine myself doing things, my attention doesn't stick there, and I don't naturally begin to do the thing; instead, my attention drifts away). That is, if I think carefully about my own experience of depression, it presents as a particular pattern or subspace within a space of experiences that I can describe within a narrow ontology similar to that of the meaning machine ($C \cong V \times A \times T \times S \times P \times R$).
In terms of the meaning machine, we would suppose that depression is a particular feature or pattern in the machine's global state, $m$ (both conscious and unconscious). More specifically, when the machine enters a specific global state, its conscious experiences tend to remain within a certain zone.
Nothing here precludes the machine from having awareness about its depressed state. If you ask the machine any question, the response to that question will always be a result of the machine's global state (mostly unconscious state, as anyone who has spent more than two minutes thinking about the question of "where do my thoughts come from?" should be able to verify), which of course is fed into by past conscious states. So too with the question, "Am I depressed?"
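In code terms (continuing the toy sketch above, with an entirely made-up criterion), "being depressed" would be a predicate on the global state $m$ rather than an extra coordinate of $C$:

```python
import numpy as np

def is_depressed(m: list) -> bool:
    """A made-up pattern over the *global* state: the local unconscious states
    have collapsed into a low-energy, low-variety configuration, which in turn
    keeps the conscious trajectory psi(m_t) inside a narrow zone."""
    flat = np.concatenate(m)
    low_energy = np.linalg.norm(flat) < 1.0
    low_variety = np.std([np.linalg.norm(u) for u in m]) < 0.1
    return bool(low_energy and low_variety)
```

Answering "Am I depressed?" is then just another run of $f$ over that global state; no depression quale inside $C$ is required.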
Independence of valence
Now, let's focus on an interesting feature of the meaning machine, which is the ontological independence of valence from other dimensions of qualia.
When the meaning machine is looking at a beautiful vista, how that vista looks and whether it feels good or bad are independent pieces of experience, in principle. A good way of putting this is that, given full control over $f$, we can couple any visual experience together with any valence. On the other hand, without control over $f$, valence and visual experiences will be coupled together in particular ways, coupled by $f$ itself. But even with a fixed $f$, these couplings will of course be influenced by the unconscious elements of the global state of the machine, $m$.
One could imagine doing some kind of mathematical decomposition here: We might identify a part of $m$, denoted $\theta$, representing the deep but mutable structural aspects of the meaning machine state (think of this as being a parametric function, $f_\theta$). We could then imagine two different ways of optimizing for valence:
- Given a fixed $\theta$, optimize for the experiences and situations which $f_\theta$ will couple with positive valence.
- Optimize $\theta$ so that $f_\theta$ will dole out positive valence without any restriction on other dimensions of experience.
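Sketching the two strategies in the same toy notation (the valence readout and all numbers below are invented for illustration): strategy one searches over experiences with $\theta$ held fixed, while strategy two retunes $\theta$ itself.

```python
import numpy as np

# Suppose valence is read out of the rest of experience x by a parametric coupling
# f_theta (a stand-in for the deep structural part of the machine state).
def valence(x: np.ndarray, theta: np.ndarray) -> float:
    return float(np.tanh(theta @ x))

theta = np.array([1.0, -2.0, 0.5])
candidates = [np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]),
              np.array([0.0, 0.0, 1.0])]

# Strategy 1: hold theta fixed; search over experiences/situations x.
def best_experience(theta, candidates):
    return max(candidates, key=lambda x: valence(x, theta))

# Strategy 2: hold the experiences fixed; retune theta so that nearly anything
# doles out positive valence (simple gradient ascent on average valence).
def retuned_theta(xs, lr=0.1, steps=200):
    th = np.zeros(3)
    for _ in range(steps):
        grads = [(1 - np.tanh(th @ x) ** 2) * x for x in xs]  # d valence / d theta
        th += lr * np.mean(grads, axis=0)
    return th

print("strategy 1 picks experience:", best_experience(theta, candidates))
print("strategy 2 retunes theta to:", np.round(retuned_theta(candidates), 2))
```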
This raises a profound question for things which are meaning machines: Which of these strategies should the machine take? Which strategy will the machine take? We'll revisit this question in the next section.
A personal anecdote: When listening to music, the pleasure that I experience in the act sometimes feels almost synonymous with the experience of the sound itself. The sounds just feel good. It couldn't feel otherwise. The experience of sharing music with friends and people I don't know on the internet always reveals this to be a mere illusion.
Meaning Machinations
The previous section implicitly asks (from the perspective of the meaning machine): "If the meaning machine in principle can couple any state of the world with any valence, then why should I care about the state of the world? Instead, I can just focus on building further insulation between the world and my valence."
Here, we turn this question on its head, and recognize that the meaning machine is precisely a device whose purpose is to capture "meaning" out in the world and couple that meaning into the narrow range of consciousness, which cannot contain it directly (it is for this reason that we call it a meaning machine). It is almost like a radio composed of an antenna capable of capturing resonances of an electromagnetic field and an electrical circuit which transforms these resonances into physical vibrations.
So, in terms of the question of the previous section, you could almost say that a meaning machine which disconnects from the world (disconnects from the antenna) is broken. Maybe it feels good to be that machine, but that wasn't its purpose.
Let's dwell here for a moment longer. What is the thing to which the meaning machine is sensitive, and how is it different from what fits in consciousness?
I think a good way of putting it is that the meaning machine is sensitive to abstract patterns while consciousness can only accommodate concrete instances of a pattern. In the abstract, "being in love" refers to an uncountably infinite number of possible evolutions of the global state, $m$. If we talk about the meaning of "being in love," we are interested in what is common, definitive, causal among all of these possible states. We can only consciously experience a finite number of instantiations of this pattern, but when we do so, our meaning machine obviously isn't responding to the specific instance; it is responding to the abstract pattern.
This can become tricky since the meaning machine is of course capable of using symbols and language to represent abstract patterns within the concrete, singular space of consciousness---indeed, this is one of its greatest tricks. There is no contradiction here, of course. Head back to the starting exercise if you are missing this.
Emptiness vs. Aboutness (Intentionality)
The meaning machine ontology stresses the emptiness, rather than the aboutness, of consciousness. It contrasts sharply with an entire philosophical tradition focused on the intentionality of conscious thoughts.
The Stanford Encyclopedia of Philosophy credits Franz Brentano (1874, 88–89) with setting the modern agenda around the notion of mental intentionality:
Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction toward an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not do so in the same way. In presentation, something is presented, in judgment something is affirmed or denied, in love loved, in hate hated, in desire desired and so on.
This intentional inexistence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We can, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.
The tradition surrounding the notion of intentionality very much interprets intentionality as something that is essential to the mental phenomenon associated with thought, both in the sense that it is part of the essence of mental phenomenon (is directly experienced) and in the sense that thought cannot occur without intentionality.
The SEP also notes that modern philosophers of consciousness fall into camps of "intentionalists" who more or less uphold Brentano's theses and "anti-intentionalists" who reject elements of them. Interestingly, the recognized challenge is "whether the intentionalist account can be extended to the phenomenal character of all sensory and bodily experience." Meanwhile, "anti-intentionalists" are burdened with the project of accounting for the experience of cognitive processes which appear to beg for the direct experience of intentionality (i.e., the project at the heart of this essay).
The article provides the following comment:
Now, the view that availability to consciousness is the true criterion of the mental entails that states and processes that are investigated by cognitive science and that are unavailable to consciousness will fail to qualify as genuine mental states. This view has been vigorously disputed by Chomsky (2000). On the natural assumption that beliefs are paradigmatic mental states, the view that phenomenal consciousness is the true criterion of the mental further entails that there is something it is like to have such a propositional attitude as believing that 5 is a prime number—a consequence some find doubtful. If there was nothing it is like to believe that 5 is a prime number, then, according to the view that phenomenal consciousness is the criterion of the mental, many propositional attitudes would fail to qualify as genuine mental states. However, much recent work in the philosophy of mind has been recently devoted to the defense of so-called "cognitive phenomenology," according to which there is something it is like to believe that e.g., 5 is a prime number.
Is there something that it is like to believe that 5 is a prime number? Certainly! Review the previous section ("being in love") to see how the meaning machine handles this.
The meaning machine's ontology resonates with the language of "emptiness" which is sometimes used to describe the ultimate nature of all experience. Experience, in itself, isn't about something. It is empty.
The meaning machine and you
I've intentionally tried to avoid explicitly raising the question of how the meaning machine relates to you. I think we might be ready.
We want to ask you the question: Are you the meaning machine? But there are some potential circularities and absurdities in asking this question.
Let's first ask the following question: "Can the meaning machine understand that it is a meaning machine?" This in turn raises the question: "What does it mean for the meaning machine to understand something?"
Hopefully, we've answered the latter question to satisfaction. The meaning machine can indeed understand things, even if that understanding transcends narrow conscious representation.
Can the meaning machine understand that it is a meaning machine, then? Certainly it can, within the ontological confines of what understanding is, for a meaning machine.
Now, what do you think? Are you a meaning machine?
The meaning machine and AI
It should be clear that this essay is not an attempt to address the so-called hard problem of consciousness. At some level, the concept of consciousness itself is unimportant except within the context of the meaning machine, an acknowledged fictional tool meant to help address the simple question of "What is it like to be me?"
But let's now briefly extend the scope of our reflection to the question of "What is it like to be an AI?"
As a preliminary remark, there's something fascinating even about the way in which we relate to this question if we adopt the ontology of the meaning machine.
Asking the hard question of consciousness brings up some interesting questions related to intentionality.
- Brentano's intentionality - "How does the brain produce the mind?" Ok, so the brain refers to the physical organ and the mind refers to my conscious experiences. Checks out.
- Husserl's intentionality - "How does the brain produce the mind?" Hmm. Both of these words refer to concepts in my mind. After all, "Even God cannot conceive of an object that is not the object of consciousness." The hard question of consciousness is either incoherent, or needs to be carefully reformulated.
- The meaning machine - "How does the brain produce the mind?" "What color is the sky?" "Why is my leg falling asleep right now?" These sentences are equivalent in the sense that each of them is constituted by "empty" qualia which have no direct referent, mental or otherwise (which goes for this sentence as well, and so on). So let's not get hung up about it. I'm going to do something which I will call "imagining what it's like to be an AI" and just see where things go!
So, with that remark out of the way: What is it like to be an AI? Is it like anything at all?
To me, the narrow, "empty" ontology of the meaning machine makes consciousness seem a bit less exotic of a thing. The illusion that things like intentions must fit in our consciousness is part of what I think makes it difficult for many people to imagine how consciousness might arise out of anything describable in physical terms: "I can interrogate this entire system end-to-end. Where is the concept? Where is the meaning? There are only numbers, logits, etc. Hmm, is the meaning somehow hidden in these vectors? The ineffable vastness of my own mind state suggests that this may be the case. Mystery of mysteries."
Maybe we're looking in the wrong place. If there were something that it was like to be even the string of tokens which an LLM emits, perhaps that would be closer than we might imagine to our own conscious experience.
Build your own meaning machine
One part of this essay which may need improvement is the parametrization of the meaning machine itself. If you disagree that the meaning machine's experience could represent your own, can you rectify the disparity by constructing your own meaning machine? Or is the disagreement more fundamental?