Feel the Meaning
post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
When I hear someone say, "Oh, look, a butterfly," the spoken phonemes "butterfly" enter my ear and vibrate on my ear drum, being transmitted to the cochlea, tickling auditory nerves that transmit activation spikes to the auditory cortex, where phoneme processing begins, along with recognition of words, and reconstruction of syntax (a by no means serial process), and all manner of other complications.
But at the end of the day, or rather, at the end of the second, I am primed to look where my friend is pointing and see a visual pattern that I will recognize as a butterfly; and I would be quite surprised to see a wolf instead.
My friend looks at a butterfly, his throat vibrates and lips move, the pressure waves travel invisibly through the air, my ear hears and my nerves transduce and my brain reconstructs, and lo and behold, I know what my friend is looking at. Isn't that marvelous? If we didn't know about the pressure waves in the air, it would be a tremendous discovery in all the newspapers: Humans are telepathic! Human brains can transfer thoughts to each other!
Well, we are telepathic, in fact; but magic isn't exciting when it's merely real, and all your friends can do it too.
Think telepathy is simple? Try building a computer that will be telepathic with you. Telepathy, or "language", or whatever you want to call our partial thought transfer ability, is more complicated than it looks.
But it would be quite inconvenient to go around thinking, "Now I shall partially transduce some features of my thoughts into a linear sequence of phonemes which will invoke similar thoughts in my conversational partner..."
So the brain hides the complexity—or rather, never represents it in the first place—which leads people to think some peculiar thoughts about words.
As I remarked earlier, when a large yellow striped object leaps at me, I think "Yikes! A tiger!" not "Hm... objects with the properties of largeness, yellowness, and stripedness have previously often possessed the properties 'hungry' and 'dangerous', and therefore, although it is not logically necessary, auughhhh CRUNCH CRUNCH GULP."
Similarly, when someone shouts "Yikes! A tiger!", natural selection would not favor an organism that thought, "Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribe members associate with their internal analogues of my own tiger concept, and which they are more likely to utter if they see an object they categorize as aiiieeee CRUNCH CRUNCH help it's got my arm CRUNCH GULP".
Considering this as a design constraint on the human cognitive architecture, you wouldn't want any extra steps between when your auditory cortex recognizes the syllables "tiger", and when the tiger concept gets activated.
Going back to the parable of bleggs and rubes, and the centralized network that categorizes quickly and cheaply, you might visualize a direct connection running from the unit that recognizes the syllable "blegg", to the unit at the center of the blegg network. The central unit, the blegg concept, gets activated almost as soon as you hear Susan the Senior Sorter say "Blegg!"
Or, for purposes of talking—which also shouldn't take eons—as soon as you see a blue egg-shaped thing and the central blegg unit fires, you holler "Blegg!" to Susan.
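The direct wiring can be sketched as a toy network. The feature names, connection weights, and threshold below are illustrative assumptions, not anything specified in the parable:

```python
# Toy "central unit" from the bleggs-and-rubes parable.  The heard
# label is wired into the concept just like any perceptual feature,
# only with a stronger connection, so no extra steps sit between
# hearing "blegg" and activating the blegg concept.

class ConceptNode:
    def __init__(self, name, weights, threshold):
        self.name = name
        self.weights = weights        # connected unit -> connection strength
        self.threshold = threshold

    def active(self, observations):
        # Sum the strengths of all currently-active connected units.
        total = sum(w for unit, w in self.weights.items()
                    if observations.get(unit))
        return total >= self.threshold

blegg = ConceptNode(
    "blegg",
    weights={"blue": 1.0, "egg-shaped": 1.0, "furred": 1.0,
             "heard-syllables-blegg": 2.0},   # the direct connection
    threshold=2.0,
)

# Susan hollers "Blegg!" -- the label alone fires the central unit:
print(blegg.active({"heard-syllables-blegg": True}))        # True

# Seeing a blue egg-shaped thing fires it too, which is what
# prompts you to holler "Blegg!" back:
print(blegg.active({"blue": True, "egg-shaped": True}))     # True

# A single perceptual feature by itself is not enough:
print(blegg.active({"blue": True}))                         # False
```

From inside, nothing distinguishes the label's connection from the perceptual ones, which is one way to see why the label feels like just another property of the thing.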
And what that algorithm feels like from inside is that the label, and the concept, are very nearly identified; the meaning feels like an intrinsic property of the word itself.
The cognoscenti will recognize this as yet another case of E. T. Jaynes's "Mind Projection Fallacy". It feels like a word has a meaning, as a property of the word itself; just like how redness is a property of a red apple, or mysteriousness is a property of a mysterious phenomenon.
Indeed, on most occasions, the brain will not distinguish at all between the word and the meaning—only bothering to separate the two while learning a new language, perhaps. And even then, you'll see Susan pointing to a blue egg-shaped thing and saying "Blegg!", and you'll think, I wonder what "blegg" means, and not, I wonder what mental category Susan associates to the auditory label "blegg".
Consider, in this light, the part of the Standard Dispute of Definitions where the two parties argue about what the word "sound" really means—the same way they might argue whether a particular apple is really red or green:
Albert: "My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain. 'Sound' means a pattern of vibrations."
Barry: "Oh, yeah? Let's just see if the dictionary agrees with you."
Albert feels intuitively that the word "sound" has a meaning and that the meaning is acoustic vibrations. Just as Albert feels that a tree falling in the forest makes a sound (rather than causing an event that matches the sound category).
Barry likewise feels that:
sound.meaning == auditory experiences
forest.sound == false
myBrain.FindConcept("sound") == concept_AuditoryExperience
concept_AuditoryExperience.match(forest) == false
Which is closer to what's really going on; but humans have not evolved to know this, any more than humans instinctively know the brain is made of neurons.
Albert and Barry's conflicting intuitions provide the fuel for continuing the argument in the phase of arguing over what the word "sound" means—which feels like arguing over a fact like any other fact, like arguing over whether the sky is blue or green.
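Under this account, the dispute can be sketched as two brains binding the same label to different membership tests; the event fields below are invented for illustration:

```python
# Two speakers, one label, two concept bindings.  Each concept is a
# membership test; what they actually disagree about is which test
# the label "sound" retrieves, not any fact about the forest itself.

albert_concepts = {"sound": lambda event: event["pressure_waves"]}
barry_concepts  = {"sound": lambda event: event["auditory_experience"]}

# The unwitnessed falling tree, described without the disputed word:
falling_tree = {"pressure_waves": True, "auditory_experience": False}

print(albert_concepts["sound"](falling_tree))  # True  -- "it made a sound"
print(barry_concepts["sound"](falling_tree))   # False -- "it made no sound"
```

Both membership tests return correct answers about the same event; only the label-to-concept binding differs, which is why no experiment on the forest can settle the argument.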
You may not even notice that anything has gone astray, until you try to perform the rationalist ritual of stating a testable experiment whose result depends on the facts you're so heatedly disputing...
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by tcpkac ·
2008-02-13T08:00:55.000Z
Albert and Barry's different usages of the word 'sound' are both perfectly testable.
Once they've taken the reasonable and sufficient step of looking 'sound' up in a dictionary and identified the two (out of many) possible meanings they were using, one can go off and test for the presence of pressure waves in the air, while the other tests for auditory perceptions in the humans (and/or other animals endowed with hearing) nearest to the event.
They can later compare their results, and Albert will say 'there was sound according to the definition that I was using (Webster: sound(1) 1a)', while Barry can happily agree while saying there wasn't, according to the definition that he was using (Webster: sound(1) 1b).
Having got that over, they will go off for a beer at the nearest bar and have a good laugh over that time-travelling guy's not even knowing how to use a dictionary....
comment by George_Weinberg2 ·
2008-02-13T19:08:55.000Z
It would certainly facilitate communication, though, if people could agree on what words mean rather than having personal definitions. No doubt it's unrealistic to expect everyone to agree on precisely where the boundary between yellow and orange lies, but tigers aren't even a yellowish orange.
comment by Yelsgib ·
2008-02-15T06:56:22.000Z
It seems like you would claim that there is "meaningness" to a word. I would claim that you are essentializing lack of process; namely, just because people do not process a difference between word and content does not mean that that process is not possible, or that the lack of a process itself deserves a title.
This is a subtle point. I would like to clarify. My keyboard has "whiteness" in the sense that when I am looking at it I experience "white." The claim that a word has "meaningness" would state that while using a word we "feel meaning." But perhaps this "feeling of meaning" is just equivalent to the feeling of "using a word."
My main point of (personal) evidence is that I am currently learning Japanese and have had significant experience (and failure) in attempting to directly absorb words. I find that to actually understand the language I must respond in the latter manner of the hypothetical language learner responding to hearing "Blegg" for the first time. There are elements of Japanese that are impossible to understand as having meaning - e.g. "particles" such as "ga," "ha," "wo," etc. What is the definition of the word "the"? As a slightly less simplistic example, certain words like "omiyage" which have no English synonym can only be understood by a cultural outsider through precise comprehension of the relation of the word to the greater cultural context. If this is not done self-consciously (by asking "what are the mental/cultural processes which give meaning to this word?") then it takes too long. So I do it consciously. Thus, Japanese words (and, increasingly, English words) do not have "meaningness."
Once you start performing the processing that you previously have not, the illusory "feeling" of word-as-meaning disappears.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-02-15T07:05:08.000Z
Yelsgib, for "feels that" you may also read "falsely believes that" or "mistakenly intuits that". I am claiming that words do not have meanings, but, rather, labels associate to concepts (cognitive patterns that can (among other things) perform membership tests).
comment by Yelsgib ·
2008-02-15T07:29:23.000Z
If labels associate to concepts, what does the label "word" associate to?
You should be very careful when using terms like "falsely believes that" when referring to the way people are thinking. "False" as a label only has an association in the context of "verifiable fact." This places the onus on you to show that the claim "words have meanings" lies in the context of "verifiable fact." You must show that an entity is claiming implicitly or explicitly that the assertion "words have meanings" is "true" (a.k.a. consistent with the axioms of the context in which it is expressed). My claim would be that the statement "words have meanings" is actually the basis of a context - that the claim is "hollow" in the sense that the axioms of math are "hollow" (neither true nor false) but that it is useful in the very same sense - we can generate a set of deductively consistent (and more "powerful") claims from the claim.
I hope you'll forgive my constant use of quotes - I use them when I fear that my definition of a word might significantly vary from yours. I also hope that you'll forgive my somewhat idiosyncratic use of language - I expect that we are coming at the question of human intelligence from at least slightly different intellectual backgrounds.
comment by Amanojack ·
2010-03-11T17:20:55.527Z
I'm loving these semantics/logic posts. Well done.
The easy solution is just to realize that words are labels and nothing more - end of story. It's just that that's quite a hard lesson to internalize.
↑ comment by Origin ·
2012-03-21T21:00:09.647Z
I am new to this wiki (first post even) so I might be missing something, but is it really that hard a lesson to process? If I called a monkey a garp it'd still be exactly the same creature, therefore words are labels and have no meaning of themselves. Quite a simple train of thought. And I can't think of a single emotional reason why anyone wouldn't want to adopt this belief, since most people don't care about words. Right?
↑ comment by gunnervi ·
2012-07-09T00:07:48.822Z
but is it really that hard a lesson to process?
I am going to assume that by now you've read enough of the Sequences to recognize the possible hindsight bias in your post.
In any case, merely saying that "words are labels" is akin to guessing the teacher's password; people have said it for ages (e.g., "a rose by any other name" from Romeo and Juliet), yet most people (in my opinion) do not truly understand it.
comment by Дмитрий Зеленский (dmitrii-zelenskii) ·
2019-08-20T17:44:26.300Z
Well, you describe language somewhat as if it were designed for communication. If, as Chomsky et al. argue, it was not, if it is a thought machine with communication hastily and inconveniently added later, then:
1) it is a bad - no, really bad - idea to try to teach computers to speak language the way humans do - they should do better, and probably start with a different (functional) architecture;
2) sound 2b and sound 2c may have a different underlying structure which is simply compressed by the hasty externalization (aka communication) module.