Extensions and Intensions
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-04T21:34:52.000Z · LW · GW · Legacy · 32 comments
"What is red?"
"Red is a color."
"What's a color?"
"A color is a property of a thing."
But what is a thing? And what's a property? Soon the two are lost in a maze of words defined in other words, the problem that Steven Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.
Alternatively, if you asked me "What is red?" I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient, though if you know what the word "No" means, the truly strict would insist that I point to the sky and say "No."
I think I stole this example from S. I. Hayakawa—though I'm really not sure, because I heard this way back in the indistinct blur of my childhood. (When I was 12, my father accidentally deleted all my computer files. I have no memory of anything before that.)
But that's how I remember first learning about the difference between intensional and extensional definition. To give an "intensional definition" is to define a word or phrase in terms of other words, as a dictionary does. To give an "extensional definition" is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of "extensional definition", which makes it an extensional example of "intensional definition".
In Hollywood Rationality and popular culture generally, "rationalists" are depicted as word-obsessed, floating in endless verbal space disconnected from reality.
But the actual Traditional Rationalists have long insisted on maintaining a tight connection to experience:
"If you look into a textbook of chemistry for a definition of lithium, you may be told that it is that element whose atomic weight is 7 very nearly. But if the author has a more logical mind he will tell you that if you search among minerals that are vitreous, translucent, grey or white, very hard, brittle, and insoluble, for one which imparts a crimson tinge to an unluminous flame, this mineral being triturated with lime or witherite rats-bane, and then fused, can be partly dissolved in muriatic acid; and if this solution be evaporated, and the residue be extracted with sulphuric acid, and duly purified, it can be converted by ordinary methods into a chloride, which being obtained in the solid state, fused, and electrolyzed with half a dozen powerful cells, will yield a globule of a pinkish silvery metal that will float on gasolene; and the material of that is a specimen of lithium."
— Charles Sanders Peirce
That's an example of "logical mind" as described by a genuine Traditional Rationalist, rather than a Hollywood scriptwriter.
But note: Peirce isn't actually showing you a piece of lithium. He didn't have pieces of lithium stapled to his book. Rather he's giving you a treasure map—an intensionally defined procedure which, when executed, will lead you to an extensional example of lithium. This is not the same as just tossing you a hunk of lithium, but it's not the same as saying "atomic weight 7" either. (Though if you had sufficiently sharp eyes, saying "3 protons" might let you pick out lithium at a glance...)
So that is intensional and extensional definition: a way of telling someone else what you mean by a concept. When I talked about "definitions" above, I talked about a way of communicating concepts—telling someone else what you mean by "red", "tiger", "human", or "lithium". Now let's talk about the actual concepts themselves.
The actual intension of my "tiger" concept would be the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger.
The actual extension of my "tiger" concept is everything I call a tiger.
Intensional definitions don't capture entire intensions; extensional definitions don't capture entire extensions. If I point to just one tiger and say the word "tiger", the communication may fail if the listener thinks I mean "dangerous animal" or "male tiger" or "yellow thing". Similarly, if I say "dangerous yellow-black striped animal", without pointing to anything, the listener may visualize giant hornets.
You can't capture in words all the details of the cognitive concept—as it exists in your mind—that lets you recognize things as tigers or nontigers. It's too large. And you can't point to all the tigers you've ever seen, let alone everything you would call a tiger.
The strongest definitions use a crossfire of intensional and extensional communication to nail down a concept. Even so, you only communicate maps to concepts, or instructions for building concepts—you don't communicate the actual categories as they exist in your mind or in the world.
(Yes, with enough creativity you can construct exceptions to this rule, like "Sentences Eliezer Yudkowsky has published containing the term 'huragaloni' as of Feb 4, 2008". I've just shown you this concept's entire extension. But except in mathematics, definitions are usually treasure maps, not treasure.)
So that's another reason you can't "define a word any way you like": You can't directly program concepts into someone else's brain.
Even within the Aristotelian paradigm, where we pretend that the definitions are the actual concepts, you don't have simultaneous freedom of intension and extension. Suppose I define Mars as "A huge red rocky sphere, around a tenth of Earth's mass and 50% further away from the Sun". It's then a separate matter to show that this intensional definition matches some particular extensional thing in my experience, or indeed, that it matches any real thing whatsoever. If instead I say "That's Mars" and point to a red light in the night sky, it becomes a separate matter to show that this extensional light matches any particular intensional definition I may propose—or any intensional beliefs I may have—such as "Mars is the God of War".
But most of the brain's work of applying intensions happens sub-deliberately. We aren't consciously aware that our identification of a red light as "Mars" is a separate matter from our verbal definition "Mars is the God of War". No matter what kind of intensional definition I make up to describe Mars, my mind believes that "Mars" refers to this thingy, and that it is the fourth planet in the Solar System.
When you take into account the way the human mind actually, pragmatically works, the notion "I can define a word any way I like" soon becomes "I can believe anything I want about a fixed set of objects" or "I can move any object I want in or out of a fixed membership test". Just as you can't usually convey a concept's whole intension in words because it's a big complicated neural membership test, you can't control the concept's entire intension because it's applied sub-deliberately. This is why arguing that XYZ is true "by definition" is so popular. If definition changes behaved like the empirical nullops they're supposed to be, no one would bother arguing them. But abuse definitions just a little, and they turn into magic wands—in arguments, of course; not in reality.
32 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-02-04T22:35:12.000Z · LW(p) · GW(p)
Arg. I know that, but my fingers don't obey.
comment by Silas · 2008-02-04T23:31:45.000Z · LW(p) · GW(p)
Alternatively, if you asked me "What is red?" I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient,
Ah, so that's what "red" is! Man, that has stumped me for SO long. It all makes sense now! Red is the set: {some stop sign, some guy, some traffic light, some blood on Eliezer_Yudkowsky's body, a business card, and a cursor on a portion of Eliezer_Yudkowsky's screen}
But when would I ever need to use that?
comment by Paul_Gowder · 2008-02-04T23:57:38.000Z · LW(p) · GW(p)
Silas, that's actually a pretty good way to capture some of the major theories about color -- ostensive definition for a given color solves a lot of problems.
But I wish Eliezer had pointed out that intensional definitions allow us to use kinds of reasoning that extensional definitions don't ... how do you do deduction on an extensional definition?
Also, extensional definitions are harder to interpersonally communicate using. I can wear two shirts, both of which I would call "purple," and someone else would call one "mauve" and the other "taupe" (or something like that -- I'm not even sure what those last two colors are). Whereas if we'd defined the colors on wavelengths of light, well, we know what we're talking about. It's harder to get more overlap between people on extensional rather than intensional definitions.
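Gowder's wavelength point can be made concrete: an intensional definition is a rule you can compute with and apply to never-before-seen cases, while a purely extensional definition is just a finite list that stays silent on anything outside it. A toy sketch (the 620–750 nm cutoffs are the usual rough figures for red light, not a precise standard, and the example strings are invented):

```python
# Extensional: membership in a finite set of pointed-at examples.
red_examples = {'this stop sign', 'that shirt', 'this traffic light'}

# Intensional: a membership test you can apply to cases nobody pointed at.
def is_red(wavelength_nm):
    # approximate visible-red band; boundaries are fuzzy in real usage
    return 620 <= wavelength_nm <= 750

print(is_red(650))                        # covers a case nobody pointed at
print('the next stop sign' in red_examples)  # the extension stays silent
```

The intensional form supports deduction (anything at 650 nm is red, whoever observes it); the extensional form only supports lookup.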
↑ comment by NancyLebovitz · 2010-04-18T19:24:22.017Z · LW(p) · GW(p)
Mauve is a light grayish purple, reasonably likely to appear in the sky soon after sunset. Taupe is some sort of brown. I was bewildered by the top example at the wikipedia article-- it's much darker than what I think of as taupe. It turns out (page down a ways) that what I had in mind was sandy taupe-- the Crayola version.
comment by tc · 2008-02-05T05:22:57.000Z · LW(p) · GW(p)
Silas: red is not the set, but what all of those things have in common. The set would be most effective if you presented a sequence of examples that was different in every way except in color. To be extra sure of getting the point across, you could present examples that are exactly the same, except in color, and then say one was "red" and the other was "not red" - a whole educational philosophy has been built up out of this (look up Siegfried Engelmann and Direct Instruction). Of course this method of communication assumes that the audience is sighted, not colorblind, understands the concept of "same" and "different", etc.
comment by Keith_Elis · 2008-02-05T14:28:26.000Z · LW(p) · GW(p)
These last couple of posts on definitions have been very good.
Another definitional strategy prone to abuse is coinage or creation of neologisms, sometimes used to sneak assumptions into a debate that would require significant support otherwise.
For one example, I have noticed the use of the term 'technoscience' or 'technoscientific' in rhetoric concerning science and technology. The use of this term is striking given the pretty obvious differences between science and technology as domains and activities in the real world. One must be making a very imprecise point for it to apply equally well to both science and technology in one breath. Use of this term might be nothing more than a symptom of this imprecision, but can also be thought of as stipulating an unsupported conclusion in itself. That is, anyone meeting the argument on its terms implicitly agrees that technology and science are identical for purposes of reasoning about them.
There are many other examples, I'm sure.
comment by Silas · 2008-02-05T14:33:48.000Z · LW(p) · GW(p)
Wow, this is the most response I've ever gotten to an Overcoming_Bias comment O.o
My point was just, as Benquo noted, that defining a word that way (extensionally) competes with every other conceivable interpretation. The success of such definitions in conveying the meaning suggests sufficient common understanding between the people to rule out the infinity of ("obviously" ridiculous) solutions, and therefore that the describer hasn't actually excluded all the wrong answers. But, that was close enough to Eliezer_Yudkowsky's point in the rest of the post, so, go fig.
I just mentioned it because his post reminded me of a passage I recently read in Steven Pinker's The Stuff of Thought, where he mentions an exchange between a child and his father, showing that parental corrections do not suffice to define (rule out all wrong-sounding syntax) all of the rules we naturally use when speaking languages.
child: I turned the raining off.
father: You mean, you turned the sprinkler off?
child: I turned the raining off of the sprinkler.
comment by StuartBuck · 2008-02-05T15:44:01.000Z · LW(p) · GW(p)
For some reason, I'm reminded of the passage from the opening of Augustine's Confessions -- in the true spirit of autobiography, he describes how he learned words and ideas as an infant by being shown extensional definitions:
13. Did I not, then, as I grew out of infancy, come next to boyhood, or rather did it not come to me and succeed my infancy? My infancy did not go away (for where would it go?). It was simply no longer present; and I was no longer an infant who could not speak, but now a chattering boy. I remember this, and I have since observed how I learned to speak. My elders did not teach me words by rote, as they taught me my letters afterward. But I myself, when I was unable to communicate all I wished to say to whomever I wished by means of whimperings and grunts and various gestures of my limbs (which I used to reinforce my demands), I myself repeated the sounds already stored in my memory by the mind which thou, O my God, hadst given me. When they called some thing by name and pointed it out while they spoke, I saw it and realized that the thing they wished to indicate was called by the name they then uttered. And what they meant was made plain by the gestures of their bodies, by a kind of natural language, common to all nations, which expresses itself through changes of countenance, glances of the eye, gestures and intonations which indicate a disposition and attitude--either to seek or to possess, to reject or to avoid. So it was that by frequently hearing words, in different phrases, I gradually identified the objects which the words stood for and, having formed my mouth to repeat these signs, I was thereby able to express my will. Thus I exchanged with those about me the verbal signs by which we express our wishes and advanced deeper into the stormy fellowship of human life, depending all the while upon the authority of my parents and the behest of my elders.
comment by Chinese · 2008-02-05T16:29:04.000Z · LW(p) · GW(p)
By the way, two nice Chinese dictionaries:
- http://www.chinese-tools.com/tools/dictionary.html (with audio + examples + calligraphy)
- http://www.chinese-dictionary.org (multilingual, chinese vs english, french, spanish...)
comment by Ron_Hardin · 2008-02-05T22:29:50.000Z · LW(p) · GW(p)
It's easy to teach a dog what words mean, provided the dog has some interest you can quickly show in the thing meant.
I wrote out on a napkin, one day when she was two, all the words and phrases that my Doberman Susie definitely knew in context, and came up with 200.
All of them were for things that involved her somehow. The most direct naming of things was for toys; but commands and so forth, and the ever-versatile "fetch the ..." where ... is something fetchable, provided a link to lots of items you could name. Her interest was then in fetching, and indirectly in the name of the thing.
People are no different. To teach what red is, you need some interest in red.
comment by Brian_Jaress2 · 2008-02-06T19:21:33.000Z · LW(p) · GW(p)
I once saw a person from Korea discover, much to her surprise, that pennies are not red. She had been able to speak English for a while and could correctly identify a stop sign or blood as red, and she had seen plenty of pennies before discovering this.
In Korea they put the color of pennies and the color of blood in the same category and give that category a Korean name.
↑ comment by A1987dM (army1987) · 2013-09-05T13:29:57.494Z · LW(p) · GW(p)
And in Hungarian they put the colour of stop signs and the colour of blood in different categories.
comment by fmailhot · 2009-12-06T03:02:24.638Z · LW(p) · GW(p)
Pretty late on this, but just in case, a few points:
someone's already sort of mentioned this, but your first example (defining "red") is by ostension, not by extension. Defining something by extension, especially something like "red", would require pointing out an infinite number of things.
You were probably just careless in your choice of words, but "the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger" is a good example of Betty Crocker's Theory of Microwave Cooking (cf. http://books.google.ca/books?id=9JGOmd66jGsC&pg=PA121&lpg=PA121&dq=churchland+betty+crocker+microwave&source=bl&ots=5JDuIkAIe6&sig=6KK3pE7xUE12U5O5C5DKaP2gz_c&hl=en&ei=HRsbS46YDsWKlQf7_rm6BA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CA0Q6AEwAA#v=onepage&q=&f=false)
your final point about redefining words "any way I like" being the same as changing/reassigning beliefs is exactly right...our words can only "mean" our intensions of them, and since we can't/don't communicate intensions, the chain ends there.
comment by robertzk (Technoguyrob) · 2010-12-23T01:09:17.815Z · LW(p) · GW(p)
An important algorithm for attempting to translate extensional definitions into intensional definitions is Mitchell's version spaces.
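The version-space idea can be sketched as candidate elimination over conjunctive attribute hypotheses: a handful of extensional examples is compressed into an intensional rule. This is a simplified fragment (it tracks only the specific boundary S, and the attribute values are invented for illustration):

```python
def matches(h, x):
    """Does hypothesis h (with '?' wildcards) cover example x?"""
    return all(a == '?' or a == v for a, v in zip(h, x))

def generalize(h, x):
    """Minimally generalize h so it covers positive example x."""
    if h is None:            # first positive example: adopt it verbatim
        return tuple(x)
    return tuple(a if a == v else '?' for a, v in zip(h, x))

def specific_boundary(examples):
    """examples: iterable of (attribute_tuple, is_positive).
    Returns S, the most specific hypothesis covering all positives.
    (Full candidate elimination also maintains the general boundary G,
    specialized against negative examples; omitted here for brevity.)"""
    s = None
    for x, positive in examples:
        if positive:
            s = generalize(s, x)
    return s

# Pointing at two extensional examples of "tiger"...
s = specific_boundary([
    (('striped', 'orange', 'large'), True),
    (('striped', 'orange', 'small'), True),
])
print(s)                                          # ('striped', 'orange', '?')
print(matches(s, ('striped', 'orange', 'huge')))  # True
```

The learned hypothesis then classifies objects nobody pointed at, which is exactly the extension-to-intension move the post describes.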
comment by buybuydandavis · 2011-09-22T22:21:52.495Z · LW(p) · GW(p)
S.I. Hayakawa is mentioned in this article instead of Alfred Korzybski, but the Intensional vs. Extensional distinction was one of the fundamental distinctions of AK, along with The Map is Not the Territory, The Word is not the Thing, etc.
comment by [deleted] · 2015-07-22T08:29:21.674Z · LW(p) · GW(p)
What is the benefit of knowing this?
It's a trivial distinction.
comment by jeronimo196 · 2020-04-21T08:10:42.325Z · LW(p) · GW(p)
To give an "extensional definition" is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of "extensional definition", which makes it an extensional example of "intensional definition".
Why do you feel the need to do this?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2021-02-02T04:06:33.830Z · LW(p) · GW(p)
Correction for future note: The extensional definition is the complete set of objects obeying a definition. To define a thing by pointing out some examples (without pointing out all possible examples) has the name "ostensive definition". H/t @clonusmini on Twitter. Original discussion in "Language in Thought and Action" here.
comment by tmercer · 2022-06-28T22:25:16.473Z · LW(p) · GW(p)
"you only communicate maps to concepts, or instructions for building concepts—you don't communicate the actual categories as they exist in your mind or in the world."
Really? I've always defined definition as either the sorting algorithm between, or description of the boundary between, two categories, yes-thing and not-thing.
Of course you're not giving me your neuronal arrangement, but you ought to be giving me one of two things that any agent, on any substrate, can use to sort every possible thing into yes-thing and not-thing, the same way you do.
If, after receiving a "definition", I (and any/every rational agent) am not able to apply the algorithm or description of the boundary between, and sort everything into yes-thing and not-thing the same way as you do, then what you've given me isn't really a definition (by my definition).
Using this definition of definition cuts down lots of useless (or anti-useful) definitions people try to give. I find that bad definitions are at the root of most stupid disagreements, both on the internet and IRL.
comment by Adam Zerner (adamzerner) · 2023-10-26T02:13:46.910Z · LW(p) · GW(p)
I find myself always struggling with these concepts, coming back to this post, kinda-sorta understanding it, but still rather confused. Some questions and comments:
- I've never heard this before. Where does it come from? Like, the idea of extensions and intensions, does it come from linguistics? Philosophy? English? Is it widely agreed on/applied?
- Why use the word "extension"? How does it relate to the typical use of the word? Like, you can extend a 10 chapter book by writing an 11th chapter. More generally, when you extend something, you add more of a thing to that thing. But here, with extensional definitions, you're not making a thing bigger. You're just... giving examples of it. So I have a sense that something like "example-oriented definition" would be more appropriate than "extensional definition".
- Same with "intensional definition". I think "intensional" relates to "intense" rather than "intent". But I don't see what it has to do with intensity or intentionality.
- The idea of distinguishing between symbols and referents here makes sense to me. Like, yes, definitions are symbols not referents. So in some sense they are maps not territory. But I feel like an actual definition is supposed to be complete. Yes, especially for extensional definitions, the set of examples that fit is usually going to be too large to fit on a piece of paper, but I feel like that just means that the definition is incomplete. Not that definitions themselves are supposed to be incomplete.
- "you can't control the concept's entire intension because it's applied sub-deliberately". I really like this. I'm thinking about it as opposed to a computer program. In a computer program, you could have one statement saying `let name = 'alice'` and then, later on, have another statement saying `name = 'bob'`. And boom: you just changed the "definition" of `name`. But with humans, it's not so simple. It's more wishy-washy. Using this analogy, if computers behaved like humans, it'd be something like "`name`? That's gotta be `'alice'`. I've accessed `name` so many times and it's always been `'alice'`." Or other times, "`name`? Hmmm. I know it used to be `'alice'`, but I remember it being reassigned to something else. What was it reassigned to? Oh yeah, `'bob'`." In other words, the computer would have a hard time finding the correct value. Sometimes it'd mistakenly use an old, incorrect value. Other times it'd find the correct value, but only after some time and effort. Sorta like a busted cache. So yeah, when you redefine an English word, I guess you gotta keep in mind that people already use caches, you'd need to invalidate all of these caches, but in practice that won't happen, so you're gonna get people who use the old and now incorrect value a lot, and even when you avoid this, it's going to mean that people can't read from the cache anymore, so reads are going to take longer, and it's especially going to take longer to populate the cache with the new value.
↑ comment by Carl Feynman (carl-feynman) · 2023-10-26T02:26:04.930Z · LW(p) · GW(p)
The concept of "extensional" and "intensional" definitions is a traditional distinction in philosophy and logic.
comment by orthonormal · 2024-10-14T00:55:49.293Z · LW(p) · GW(p)
Soon the two are lost in a maze of words defined in other words, the problem that Steven Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.
Of course, it turned out that LLMs do this just fine, thank you.
↑ comment by Adele Lopez (adele-lopez-1) · 2024-10-14T02:04:00.826Z · LW(p) · GW(p)
I don't doubt that LLMs could do this, but has this exact thing actually been done somewhere?
↑ comment by Martin Randall (martin-randall) · 2024-10-14T21:15:21.901Z · LW(p) · GW(p)
I've not read the paper but something like https://arxiv.org/html/2402.19167v1 seems like the appropriate experiment.
↑ comment by gwern · 2024-10-14T20:48:51.786Z · LW(p) · GW(p)
I don't think LLMs do the equivalent of that. It's more like, learning Chinese from a Chinese/Chinese dictionary stapled to a Chinese encyclopedia.
It is not obvious to me that using a Chinese/Chinese dictionary, purged of example sentences, would let you learn, even in theory, even things a simple n-gram or word2vec model trained on a non-dictionary corpus does and encodes into embeddings. For example, would a Chinese/Chinese dictionary let you plot cities by longitude & latitude? (Most dictionaries do not try to list all names, leaving that to things like atlases or gazetteers, because they are about the language, and not a specific place like China, after all.)
Note that the various examples from machine translation you might think of, such as learning translation while having zero parallel sentences/translations, are usually using corpuses much richer than just an intra-language dictionary.